{"abstract": " We define the random magnetic Laplacien with spatial white noise as magnetic\nfield on the two-dimensional torus using paracontrolled calculus. It yields a\nrandom self-adjoint operator with pure point spectrum and domain a random\nsubspace of nonsmooth functions in L 2. We give sharp bounds on the eigenvalues\nwhich imply an almost sure Weyl-type law.\n"} {"abstract": " Searches for gravitational-wave counterparts have been going in earnest since\nGW170817 and the discovery of AT2017gfo. Since then, the lack of detection of\nother optical counterparts connected to binary neutron star or black hole -\nneutron star candidates has highlighted the need for a better discrimination\ncriterion to support this effort. At the moment, the low-latency\ngravitational-wave alerts contain preliminary information about the binary\nproperties and, hence, on whether a detected binary might have an\nelectromagnetic counterpart. The current alert method is a classifier that\nestimates the probability that there is a debris disc outside the black hole\ncreated during the merger as well as the probability of a signal being a binary\nneutron star, a black hole - neutron star, a binary black hole or of\nterrestrial origin. In this work, we expand upon this approach to predict both\nthe ejecta properties and provide contours of potential lightcurves for these\nevents in order to improve follow-up observation strategy. The various sources\nof uncertainty are discussed, and we conclude that our ignorance about the\nejecta composition and the insufficient constraint of the binary parameters, by\nthe low-latency pipelines, represent the main limitations. To validate the\nmethod, we test our approach on real events from the second and third Advanced\nLIGO-Virgo observing runs.\n"} {"abstract": " Fully implicit Runge-Kutta (IRK) methods have many desirable properties as\ntime integration schemes in terms of accuracy and stability, but high-order IRK\nmethods are not commonly used in practice with numerical PDEs due to the\ndifficulty of solving the stage equations. This paper introduces a theoretical\nand algorithmic preconditioning framework for solving the systems of equations\nthat arise from IRK methods applied to linear numerical PDEs (without algebraic\nconstraints). This framework also naturally applies to discontinuous Galerkin\ndiscretizations in time. Under quite general assumptions on the spatial\ndiscretization that yield stable time integration, the preconditioned operator\nis proven to have condition number bounded by a small, order-one constant,\nindependent of the spatial mesh and time-step size, and with only weak\ndependence on number of stages/polynomial order; for example, the\npreconditioned operator for 10th-order Gauss IRK has condition number less than\ntwo, independent of the spatial discretization and time step. The new method\ncan be used with arbitrary existing preconditioners for backward Euler-type\ntime stepping schemes, and is amenable to the use of three-term recursion\nKrylov methods when the underlying spatial discretization is symmetric. The new\nmethod is demonstrated to be effective on various high-order finite-difference\nand finite-element discretizations of linear parabolic and hyperbolic problems,\ndemonstrating fast, scalable solution of up to 10th order accuracy. 
The new\nmethod consistently outperforms existing block preconditioning approaches, and\nin several cases, the new method can achieve 4th-order accuracy using Gauss\nintegration with roughly half the number of preconditioner applications and\nwallclock time required by standard diagonally implicit RK methods.\n"} {"abstract": " To successfully negotiate a deal, it is not enough to communicate fluently:\npragmatic planning of persuasive negotiation strategies is essential. While\nmodern dialogue agents excel at generating fluent sentences, they still lack\npragmatic grounding and cannot reason strategically. We present DialoGraph, a\nnegotiation system that incorporates pragmatic strategies in a negotiation\ndialogue using graph neural networks. DialoGraph explicitly incorporates\ndependencies between sequences of strategies to enable improved and\ninterpretable prediction of next optimal strategies, given the dialogue\ncontext. Our graph-based method outperforms prior state-of-the-art negotiation\nmodels both in the accuracy of strategy/dialogue act prediction and in the\nquality of downstream dialogue response generation. We qualitatively show\nfurther benefits of learned strategy-graphs in providing explicit associations\nbetween effective negotiation strategies over the course of the dialogue,\nleading to interpretable and strategic dialogues.\n"} {"abstract": " We propose a systematic analysis method for identifying essential parameters\nin various linear and nonlinear response tensors, without which the responses\nvanish. By using the Keldysh formalism and the Chebyshev polynomial expansion\nmethod, the response tensors are decomposed into model-independent and\nmodel-dependent parts, of which the latter is utilized to extract the essential\nparameters. An application of the method is demonstrated by analyzing the\nnonlinear Hall effect in the ferroelectric SnTe monolayer as an example. It is\nshown that in this case the second-neighbor hopping is essential for the\nnonlinear Hall effect whereas the spin-orbit coupling is unnecessary. Moreover,\nby analyzing terms contributing to the essential parameters in the lowest\norder, the appearance of the nonlinear Hall effect can be interpreted as two\nsuccessive processes: the orbital magneto-current effect and the linear\nanomalous Hall effect caused by the induced orbital magnetization. In this way,\nthe present method provides a microscopic picture of responses. Combined with\ncomputational analysis, it stimulates further discoveries of anomalous\nresponses by filling in a missing link among hidden degrees of freedom in a\nwide variety of materials.\n"} {"abstract": " It is a long-standing objective to ease the computation burden incurred by\nthe decision making process. Identification of this mechanism's sensitivity to\nsimplification has tremendous ramifications. Yet, algorithms for decision\nmaking under uncertainty usually lean on approximations or heuristics without\nquantifying their effect. Therefore, challenging scenarios could severely\nimpair the performance of such methods. In this paper, we extend the decision\nmaking mechanism to the whole by removing standard approximations and\nconsidering all previously suppressed stochastic sources of variability. On top\nof this extension, our key contribution is a novel framework to simplify\ndecision making while assessing and controlling online the simplification's\nimpact.
Furthermore, we present novel stochastic bounds on the return and, using this\nframework, characterize online the effect of a particular simplification\ntechnique - reducing the number of samples in the belief representation for\nplanning. Finally, we verify the advantages of our approach through extensive\nsimulations.\n"} {"abstract": " For the development of successful share trading strategies, forecasting the\ncourse of the stock market index is important. Effective prediction\nof closing stock prices could guarantee investors attractive benefits. Machine\nlearning algorithms have the ability to process historical stock patterns and\nforecast fairly reliable closing prices. In this article, we intensively\nstudied the NASDAQ stock market and chose a portfolio of ten\ndifferent companies belonging to different sectors. The objective is to compute\nthe next-day opening price of a stock using historical data. To fulfill this\ntask, nine different Machine Learning regressors were applied to this data and\nevaluated using MSE and R2 as performance metrics.\n"} {"abstract": " Ensuring the privacy of research participants is vital, even more so in\nhealthcare environments. Deep learning approaches to neuroimaging require large\ndatasets, and this often necessitates sharing data between multiple sites,\nwhich is antithetical to the privacy objectives. Federated learning is a\ncommonly proposed solution to this problem. It circumvents the need for data\nsharing by sharing parameters during the training process. However, we\ndemonstrate that allowing access to parameters may leak private information\neven if data is never directly shared. In particular, we show that it is\npossible to infer if a sample was used to train the model given only access to\nthe model prediction (black-box) or access to the model itself (white-box) and\nsome leaked samples from the training data distribution. Such attacks are\ncommonly referred to as Membership Inference attacks. We show realistic\nMembership Inference attacks on deep learning models trained for 3D\nneuroimaging tasks in a centralized as well as decentralized setup. We\ndemonstrate feasible attacks on brain age prediction models (deep learning\nmodels that predict a person's age from their brain MRI scan). We correctly\nidentified whether an MRI scan was used in model training with a 60% to over\n80% success rate depending on model complexity and security assumptions.\n"} {"abstract": " An intrinsic antiferromagnetic topological insulator $\\mathrm{MnBi_2Te_4}$\ncan be realized by intercalating a Mn-Te bilayer chain in a topological\ninsulator, $\\mathrm{Bi_2Te_3}$. $\\mathrm{MnBi_2Te_4}$ provides not only a\nstable platform to demonstrate exotic physical phenomena, but also easy\ntunability of the physical properties. For example, inserting more\n$\\mathrm{Bi_2Te_3}$ layers in between two adjacent $\\mathrm{MnBi_2Te_4}$ layers\nweakens the interlayer magnetic interactions between the $\\mathrm{MnBi_2Te_4}$\nlayers. Here we present the first observations of the inter- and intra-layer\nphonon modes of $\\mathrm{MnBi_{2n}Te_{3n+1}}$ (n=1,2,3,4) using cryogenic\nlow-frequency Raman spectroscopy. We experimentally and theoretically\ndistinguish the Raman vibrational modes using various polarization\nconfigurations. The two peaks at 66 cm$^{-1}$ and 112 cm$^{-1}$ show an\nabnormal perturbation in the Raman linewidths below the magnetic transition\ntemperature due to spin-phonon coupling.
In $\\mathrm{MnBi_4Te_7}$, the\n$\\mathrm{Bi_2Te_3}$ layers induce Davydov splitting of the A$_{1g}$ mode around\n137 cm$^{-1}$ at 5 K. Using the linear chain model, we estimate the\nout-of-plane interlayer force constant to be $(3.98 \\pm 0.14) \\times 10^{19}$\nN/m$^3$ at 5 K, three times weaker than that of $\\mathrm{Bi_2Te_3}$. Our work\nreveals the dynamics of the phonon modes of $\\mathrm{MnBi_2Te_4}$ and the\neffect of the additional $\\mathrm{Bi_2Te_3}$ layers, providing\nfirst-principles guidance to tailor the physical properties of layered\nheterostructures.\n"} {"abstract": " We propose a strategy for optimizing a sensor trajectory in order to estimate\nthe time dependence of a localized scalar source in turbulent channel flow. The\napproach leverages the view of the adjoint scalar field as the sensitivity of\nmeasurement to a possible source. A cost functional is constructed so that the\noptimal sensor trajectory maintains a high sensitivity and low temporal\nvariation in the measured signal, for a given source location. This naturally\nleads to the adjoint-of-adjoint equation based on which the sensor trajectory\nis iteratively optimized. It is shown that the estimation performance based on\nthe measurement obtained by a sensor moving along the optimal trajectory is\ndrastically improved over that achieved with a stationary sensor. It is also\nshown that the ratio of the fluctuation and the mean of the sensitivity for a\ngiven sensor trajectory can be used as a diagnostic tool to evaluate the\nresultant performance. Based on this finding, we propose a new cost functional\nwhich only includes the ratio without any adjustable parameters, and\ndemonstrate its effectiveness in predicting the time dependence of scalar\nrelease from the source.\n"} {"abstract": " Gamma distributed delay differential equations (DDEs) arise naturally in many\nmodelling applications. However, appropriate numerical methods for generic\ngamma distributed DDEs are not currently available. Accordingly, modellers\noften resort to approximating the gamma distribution with an Erlang\ndistribution and using the linear chain technique to derive an equivalent\nsystem of ordinary differential equations. In this work, we develop a\nfunctionally continuous Runge-Kutta method to numerically integrate the gamma\ndistributed DDE and perform numerical tests to confirm the accuracy of the\nnumerical method. As the functionally continuous Runge-Kutta method is not\navailable in most scientific software packages, we then derive hypoexponential\napproximations of the gamma distributed DDE. Using our numerical method, we\nshow that while the common Erlang approximation can produce solutions\nthat are qualitatively different from the underlying gamma distributed DDE, our\nhypoexponential approximations do not have this limitation. Finally, we\nimplement our hypoexponential approximations to perform statistical inference\non synthetic epidemiological data.\n"} {"abstract": " Markov state models (MSMs) have been broadly adopted for analyzing molecular\ndynamics trajectories, but the approximate nature of the models that results\nfrom coarse-graining into discrete states is a long-known limitation. We show\ntheoretically that, despite the coarse graining, in principle MSM-like analysis\ncan yield unbiased estimation of key observables.
We describe unbiased\nestimators for equilibrium state populations, for the mean first-passage time\n(MFPT) of an arbitrary process, and for state committors - i.e., splitting\nprobabilities. Generically, the estimators are only asymptotically unbiased but\nwe describe how an extension of a recently proposed reweighting scheme can\naccelerate relaxation to unbiased values. Exactly accounting for 'sliding\nwindow' averaging over finite-length trajectories is a key, novel element of\nour analysis. In general, our analysis indicates that coarse-grained MSMs are\nasymptotically unbiased for steady-state properties only when appropriate\nboundary conditions (e.g., source-sink for MFPT estimation) are applied\ndirectly to trajectories, prior to calculation of the appropriate transition\nmatrix.\n"} {"abstract": " The phase field paradigm, in combination with a suitable variational\nstructure, has opened a path for using Griffith's energy balance to predict the\nfracture of solids. These so-called phase field fracture methods have gained\nsignificant popularity over the past decade, and are now part of commercial\nfinite element packages and engineering fitness-for-service assessments. Crack\npaths can be predicted, in arbitrary geometries and dimensions, based on a\nglobal energy minimisation - without the need for \\textit{ad hoc} criteria. In\nthis work, we review the fundamentals of phase field fracture methods and\nexamine their capabilities in delivering predictions in agreement with the\nclassical fracture mechanics theory pioneered by Griffith. The two most widely\nused phase field fracture models are implemented in the context of the finite\nelement method, and several paradigmatic boundary value problems are addressed\nto gain insight into their predictive abilities across all cracking stages;\nboth the initiation of growth and stable crack propagation are investigated. In\naddition, we examine the effectiveness of phase field models with an internal\nmaterial length scale in capturing size effects and the transition flaw size\nconcept. Our results show that phase field fracture methods satisfactorily\napproximate classical fracture mechanics predictions and can also reconcile\nstress and toughness criteria for fracture. The accuracy of the approximation\nis, however, dependent on modelling and constitutive choices; we provide a\nrationale for these differences and identify suitable approaches for delivering\nphase field fracture predictions that are in good agreement with\nwell-established fracture mechanics paradigms.\n"} {"abstract": " The convergence property of a stochastic algorithm for self-consistent\nfield (SCF) calculations of electronic structures is studied. The algorithm is\nformulated by rewriting the electron charges as a trace/diagonal of a matrix\nfunction, which is subsequently expressed as a statistical average. The\nfunction is further approximated by using a Krylov subspace approximation. As a\nresult, each SCF iteration only samples one random vector without having to\ncompute all the orbitals. We consider the common practice of SCF iterations\nwith damping and mixing. We prove with appropriate assumptions that the\niterations converge in the mean-square sense, when the stochastic error has an\nalmost sure bound.
We also consider the scenario where such an assumption is\nweakened to a second moment condition, and prove the convergence in\nprobability.\n"} {"abstract": " Multiple organ failure (MOF) is a severe syndrome with a high mortality rate\namong Intensive Care Unit (ICU) patients. Early and precise detection is\ncritical for clinicians to make timely decisions. An essential challenge in\napplying machine learning models to electronic health records (EHRs) is the\npervasiveness of missing values. Most existing imputation methods are involved\nin the data preprocessing phase, failing to capture the relationship between\ndata and outcome for downstream predictions. In this paper, we propose\nclassifier-guided generative adversarial imputation networks (Classifier-GAIN)\nfor MOF prediction to bridge this gap, by incorporating both observed data and\nlabel information. Specifically, the classifier takes imputed values from the\ngenerator (imputer) to predict task outcomes and provides additional supervision\nsignals to the generator by joint training. The classifier-guided generator\nimputes missing values with label-awareness during training, improving the\nclassifier's performance during inference. We conduct extensive experiments\nshowing that our approach consistently outperforms classical and state-of-the-art\nneural baselines across a range of missing data scenarios and evaluation\nmetrics.\n"} {"abstract": " Instrumental systematics need to be controlled to high precision for upcoming\nCosmic Microwave Background (CMB) experiments. The level of contamination\ncaused by these systematics is often linked to the scan strategy, and scan\nstrategies for satellite experiments can significantly mitigate these\nsystematics. However, no detailed study has been performed for ground-based\nexperiments. Here we show that under the assumption of constant elevation scans\n(CESs), the ability of the scan strategy to mitigate these systematics is\nstrongly limited, irrespective of the detailed structure of the scan strategy.\nWe calculate typical values and maps of the quantities coupling the scan to the\nsystematics, and show how these quantities vary with the choice of observing\nelevations. These values and maps can be used to calculate and forecast the\nmagnitude of different instrumental systematics without requiring detailed scan\nstrategy simulations. As a reference point, we show that inclusion of even a\nsingle boresight rotation angle significantly improves over sky rotation alone\nfor mitigating these systematics. A standard metric for evaluating\ncross-linking is related to one of the parameters studied in this work, so a\ncorollary of our work is that the cross-linking will suffer from the same CES\nlimitations and therefore upcoming CMB surveys will unavoidably have poorly\ncross-linked regions if they use CESs, regardless of detailed scheduling\nchoices. Our results are also relevant for non-CMB surveys that perform\nconstant elevation scans and may have scan-coupled systematics, such as\nintensity mapping surveys.\n"} {"abstract": " Integer quantization of neural networks can be defined as the approximation\nof the high precision computation of the canonical neural network formulation,\nusing reduced integer precision. It plays a significant role in the efficient\ndeployment and execution of machine learning (ML) systems, reducing memory\nconsumption and leveraging typically faster computations.
In this work, we\npresent an integer-only quantization strategy for Long Short-Term Memory (LSTM)\nneural network topologies, which themselves are the foundation of many\nproduction ML systems. Our quantization strategy is accurate (e.g., it works\nwell with post-training quantization), efficient and fast to execute (utilizing\n8-bit integer weights and mostly 8-bit activations), and is able to target a\nvariety of hardware (by leveraging instruction sets available in common CPU\narchitectures, as well as available neural accelerators).\n"} {"abstract": " Let ${\\mathfrak M}=({\\mathcal M},\\rho)$ be a metric space and let $X$ be a\nBanach space. Let $F$ be a set-valued mapping from ${\\mathcal M}$ into the\nfamily ${\\mathcal K}_m(X)$ of all compact convex subsets of $X$ of dimension at\nmost $m$. The main result in our recent joint paper with Charles Fefferman\n(which is referred to as a ``Finiteness Principle for Lipschitz selections'')\nprovides efficient conditions for the existence of a Lipschitz selection of\n$F$, i.e., a Lipschitz mapping $f:{\\mathcal M}\\to X$ such that $f(x)\\in F(x)$\nfor every $x\\in{\\mathcal M}$. We give new alternative proofs of this result in\ntwo special cases. When $m=2$ we prove it for $X={\\bf R}^{2}$, and when $m=1$\nwe prove it for all choices of $X$. Both of these proofs make use of a simple\nreiteration formula for the ``core'' of a set-valued mapping $F$, i.e., for a\nmapping $G:{\\mathcal M}\\to{\\mathcal K}_m(X)$ which is Lipschitz with respect to\nthe Hausdorff distance, and such that $G(x)\\subset F(x)$ for all $x\\in{\\mathcal\nM}$.\n"} {"abstract": " Purpose: Quantitative magnetization transfer (qMT) imaging can be used to\nquantify the proportion of protons in a voxel attached to macromolecules. Here,\nwe show that the original qMT balanced steady-state free precession (bSSFP)\nmodel is biased due to over-simplistic assumptions made in its derivation.\nTheory and Methods: We present an improved model for qMT bSSFP, which\nincorporates finite radio-frequency (RF) pulse effects as well as simultaneous\nexchange and relaxation. Further, a correction to finite RF pulse effects for\nsinc-shaped excitations is derived. The new model is compared to the original\none in numerical simulations of the Bloch-McConnell equations and in previously\nacquired in-vivo data. Results: Our numerical simulations show that the\noriginal signal equation is significantly biased in typical brain tissue\nstructures (by 7-20 %) whereas the new signal equation outperforms the original\none with minimal bias (< 1%). It is further shown that the bias of the original\nmodel strongly affects the acquired qMT parameters in human brain structures,\nwith differences in the clinically relevant parameter of pool-size-ratio of up\nto 31 %. Particularly high biases of the original signal equation are expected\nin an MS lesion within diseased brain tissue (due to a low T2/T1-ratio),\ndemanding a more accurate model for clinical applications. Conclusion: The\nimproved model for qMT bSSFP is recommended for accurate qMT parameter mapping\nin healthy and diseased brain tissue structures.\n"} {"abstract": " We construct a categorical framework for nonlinear postquantum inference,\nwith embeddings of convex closed sets of suitable reflexive Banach spaces as\nobjects and pullbacks of Br\\`egman quasi-nonexpansive mappings (in particular,\nconstrained maximisations of Br\\`egman relative entropies) as morphisms.
It\nprovides a nonlinear convex analytic analogue of Chencov's programme of\ngeometric study of categories of linear positive maps between spaces of states,\na working model of Mielnik's nonlinear transmitters, and a setting for\nnonlinear resource theories (with monoids of Br\\`egman quasi-nonexpansive maps\nas free operations, their asymptotic fixed point sets as free sets, and\nBr\\`egman relative entropies as resource monotones). We construct a range of\nconcrete examples for semi-finite JBW-algebras and any W*-algebras. Due to\nrelative entropy's asymmetry, all constructions have left and right versions,\nwith Legendre duality inducing categorical equivalence between their\nwell-defined restrictions. Inner groupoids of these categories implement the\nnotion of statistical equivalence. The hom-sets of a subcategory of morphisms\ngiven by entropic projections have the structure of partially ordered\ncommutative monoids (so, they are resource theories in Fritz's sense). Further\nrestriction of objects to affine sets turns Br\\`egman relative entropy into a\nfunctor. Finally, following Lawvere's adjointness paradigm for deductive logic,\nbut with a semantic twist representing Jaynes' and Chencov's views on\nstatistical inference, we introduce a category-theoretic multi-(co)agent\nsetting for inductive inference theories, implemented by families of monads and\ncomonads. We show that the br\\`egmanian approach provides some special cases of\nthis setting.\n"} {"abstract": " Garc\\'ia-Aguilar et al. [Phys. Rev. Lett. 126, 038001 (2021)] have shown that\nthe deformations of \"shape-shifting droplets\" are consistent with an elastic\nmodel that, unlike previous models, includes the intrinsic curvature of the\nfrozen surfactant layer. In this Comment, we show that the interplay between\nsurface tension and intrinsic curvature in their model is in fact\nmathematically equivalent to a physically very different phase-transition\nmechanism of the same process that we developed previously [Phys. Rev. Lett.\n118, 088001 (2017); Phys. Rev. Res. 1, 023017 (2019)]. The mathematical models\ncannot therefore distinguish between the two mechanisms, and hence it is not\npossible to claim that one mechanism underlies all observed shape-shifting\nphenomena without a much more detailed comparison of experiment and theory.\n"} {"abstract": " We propose a novel scheme for the exact renormalisation group motivated by\nthe desire to reduce the complexity of practical computations. The key idea\nis to specify renormalisation conditions for all inessential couplings, leaving\nus with the task of computing only the flow of the essential ones. To achieve\nthis aim, we utilise a renormalisation group equation for the effective average\naction which incorporates general non-linear field reparameterisations. A\nprominent feature of the scheme is that, apart from the renormalisation of the\nmass, the propagator evaluated at any constant value of the field maintains its\nunrenormalised form. Conceptually, the scheme provides a clearer picture of\nrenormalisation itself since the redundant, non-physical content is\nautomatically disregarded in favour of a description based only on quantities\nthat enter expressions for physical observables. To exemplify the scheme's\nutility, we investigate the Wilson-Fisher fixed point in three dimensions at\norder two in the derivative expansion. In this case, the scheme removes all\norder $\\partial^2$ operators apart from the canonical term.
Further\nsimplifications occur at higher orders in the derivative expansion. Although we\nconcentrate on a minimal scheme that reduces the complexity of computations, we\npropose more general schemes where inessential couplings can be tuned to\noptimise a given approximation. We further discuss the applicability of the\nscheme to a broad range of physical theories.\n"} {"abstract": " Amorphous dielectric materials have been known to host two-level systems\n(TLSs) for more than four decades. Recent developments in superconducting\nresonators and qubits enable detailed studies on the physics of TLSs. In\nparticular, measuring the loss of a device over long time periods (a few days)\nallows us to investigate stochastic fluctuations due to the interaction between\nTLSs. We measure the energy relaxation time of a frequency-tunable planar\nsuperconducting qubit over time and frequency. The experiments show a variety\nof stochastic patterns that we are able to explain by means of extensive\nsimulations. The model used in our simulations assumes a qubit interacting with\nhigh-frequency TLSs, which, in turn, interact with thermally activated\nlow-frequency TLSs. Our simulations match the experiments and suggest the\ndensity of low-frequency TLSs is about three orders of magnitude larger than\nthat of high-frequency ones.\n"} {"abstract": " Learning from implicit user feedback is challenging as we can only observe\npositive samples but never access negative ones. Most conventional methods cope\nwith this issue by adopting a pairwise ranking approach with negative sampling.\nHowever, the pairwise ranking approach has a severe disadvantage in the\nconvergence time owing to the quadratically increasing computational cost with\nrespect to the sample size; this is problematic, particularly for large-scale\ndatasets and complex models such as neural networks. By contrast, a pointwise\napproach does not directly solve a ranking problem, and is therefore inferior\nto a pairwise counterpart in top-K ranking tasks; however, it is generally\nadvantageous with regard to the convergence time. This study aims to establish\nan approach to learn personalised ranking from implicit feedback, which\nreconciles the training efficiency of the pointwise approach and the ranking\neffectiveness of the pairwise counterpart. The key idea is to estimate the\nranking of items in a pointwise manner; we first reformulate the conventional\npointwise approach based on density ratio estimation and then incorporate the\nessence of ranking-oriented approaches (e.g. the pairwise approach) into our\nformulation. Through experiments on three real-world datasets, we demonstrate\nthat our approach not only dramatically reduces the convergence time (one to\ntwo orders of magnitude faster) but also significantly improves the ranking\nperformance.\n"} {"abstract": " Full Duplex (FD) radio has emerged as a promising solution to increase the\ndata rates by up to a factor of two via simultaneous transmission and reception\nin the same frequency band. This paper studies a novel hybrid beamforming\n(HYBF) design to maximize the weighted sum-rate (WSR) in a single-cell\nmillimeter wave (mmWave) massive multiple-input-multiple-output (mMIMO) FD\nsystem. Motivated by practical considerations, we assume that the multi-antenna\nusers and hybrid FD base station (BS) suffer from limited dynamic range (LDR)\nnoise due to non-ideal hardware, and an impairment-aware HYBF approach is\nadopted by integrating the traditional LDR noise model in the mmWave band.
In\ncontrast to the conventional HYBF schemes, our design also considers the joint\nsum-power and the practical per-antenna power constraints. A novel\ninterference, self-interference (SI) and LDR noise aware optimal power\nallocation scheme for the uplink (UL) users and FD BS is also presented to\nsatisfy the joint constraints. The maximum achievable gain of a multi-user\nmmWave FD system over a fully digital half duplex (HD) system with different\nLDR noise levels and numbers of radio-frequency (RF) chains is\ninvestigated. Simulation results show that our design outperforms the HD system\nwith only a few RF chains at any LDR noise level. The advantage of having\namplitude control at the analog stage is also examined, and additional gain for\nthe mmWave FD system becomes evident when the number of RF chains at the hybrid\nFD BS is small.\n"} {"abstract": " We study the influence of running vacuum on the baryon-to-photon ratio in\nrunning vacuum models (RVMs). When there exists a non-minimal coupling between\nphotons and other matter in the expanding universe, the energy-momentum tensor\nof photons is no longer conserved, but the energy of photons could remain\nconserved. We discuss the conditions for the energy conservation of photons in\nRVMs. The photon number density and baryon number density, from the epoch of\nphoton decoupling to the present day, are obtained in the context of RVMs by\nassuming that photons and baryons can be coupled to running vacuum,\nrespectively. Both cases lead to a time-evolving baryon-to-photon ratio.\nHowever, the evolution of the baryon-to-photon ratio is strictly constrained by\nobservations. It is found that if the dynamic term of running vacuum is indeed\ncoupled to photons or baryons, the coefficient of the dynamic term must be\nextremely small, which is unnatural. Therefore, our study basically rules out\nthe possibility that running vacuum is coupled to photons or baryons in RVMs.\n"} {"abstract": " Extracting the liver from medical images is a challenging task due to the\nsimilar intensity values of the liver and adjacent organs, various contrast\nlevels, various noise associated with medical images and the irregular shape of\nthe liver. To address these issues, it is important to preprocess the medical\nimages, i.e., computerized tomography (CT) and magnetic resonance imaging (MRI)\ndata, prior to liver analysis and quantification. This paper investigates the\nimpact of permutations of various preprocessing techniques for CT images on\nautomated liver segmentation using deep learning, i.e., the U-Net architecture.\nThe study focuses on Hounsfield Unit (HU) windowing, contrast limited adaptive\nhistogram equalization (CLAHE), z-score normalization, median filtering and\nBlock-Matching and 3D filtering (BM3D). The segmentation results show that the\ncombination of three techniques (HU windowing, median filtering and z-score\nnormalization) achieves optimal performance, with Dice coefficients of 96.93%,\n90.77% and 90.84% for training, validation and testing respectively.\n"} {"abstract": " The Robot Operating System 2 (ROS2) targets distributed real-time systems and\nis widely used in the robotics community. Especially in these systems, latency\nin data processing and communication can lead to instabilities.
Though\nhighly configurable with respect to latency, ROS2 is often used with its\ndefault settings.\n In this paper, we investigate the end-to-end latency of ROS2 for distributed\nsystems with default settings and different Data Distribution Service (DDS)\nmiddlewares. In addition, we profile the ROS2 stack and point out latency\nbottlenecks. Our findings indicate that end-to-end latency strongly depends on\nthe DDS middleware used. Moreover, we show that ROS2 can lead to 50% latency\noverhead compared to using low-level DDS communications. Our results imply\nguidelines for designing distributed ROS2 architectures and indicate\npossibilities for reducing the ROS2 overhead.\n"} {"abstract": " We study a long-recognised but under-appreciated symmetry called \"dynamical\nsimilarity\" and illustrate its relevance to many important conceptual problems\nin fundamental physics. Dynamical similarities are general transformations of a\nsystem where the unit of Hamilton's principal function is rescaled, and\ntherefore represent a kind of dynamical scaling symmetry with formal properties\nthat differ from many standard symmetries. To study this symmetry, we develop a\ngeneral framework for symmetries that distinguishes the observable and surplus\nstructures of a theory by using the minimal freely specifiable initial data for\nthe theory that is necessary to achieve empirical adequacy. This framework is\nthen applied to well-studied examples including Galilean invariance and the\nsymmetries of the Kepler problem. We find that our framework gives a precise\ndynamical criterion for identifying the observables of those systems, and that\nthose observables agree with epistemic expectations. We then apply our\nframework to dynamical similarity. First we give a general definition of\ndynamical similarity. Then we show, with the help of some previous results, how\nthe dynamics of our observables leads to singularity resolution and the\nemergence of an arrow of time in cosmology.\n"} {"abstract": " State-space models (SSM) with Markov switching offer a powerful framework for\ndetecting multiple regimes in time series, analyzing mutual dependence and\ndynamics within regimes, and asserting transitions between regimes. These\nmodels however present considerable computational challenges due to the\nexponential number of possible regime sequences to account for. In addition,\nthe high dimensionality of time series can hinder likelihood-based inference. This\npaper proposes novel statistical methods for Markov-switching SSMs using\nmaximum likelihood estimation, Expectation-Maximization (EM), and parametric\nbootstrap. We develop solutions for initializing the EM algorithm, accelerating\nconvergence, and conducting inference that are ideally suited to massive\nspatio-temporal data such as brain signals. We evaluate these methods in\nsimulations and present applications to EEG studies of epilepsy and of motor\nimagery. All proposed methods are implemented in a MATLAB toolbox available at\nhttps://github.com/ddegras/switch-ssm.\n"} {"abstract": " An increasing amount of location-based service (LBS) data is being\naccumulated and helps to study urban dynamics and human mobility. GPS\ncoordinates and other location indicators are normally low dimensional and only\nrepresent spatial proximity, and are thus difficult for machine learning models\nin Geo-aware applications to utilize effectively. Existing location embedding\nmethods are mostly tailored for specific problems that take place within\nareas of interest.
When it comes to the scale of a city or even a country,\nexisting approaches always suffer from extensive computational cost and\nsignificant data sparsity. Different from existing studies, we propose to learn\nrepresentations through a GCN-aided skip-gram model named GCN-L2V by\nconsidering both spatial connection and human mobility. With a flow graph and a\nspatial graph, it embeds context information into vector representations.\nGCN-L2V is able to capture relationships among locations and provide a better\nnotion of similarity in a spatial environment. Across quantitative experiments\nand case studies, we empirically demonstrate that representations learned by\nGCN-L2V are effective. As far as we know, this is the first study that provides\na fine-grained location embedding at the city level using only LBS records.\nGCN-L2V is a general-purpose embedding model with high flexibility and can be\napplied in downstream Geo-aware applications.\n"} {"abstract": " The Earth's magnetotail is characterized by stretched magnetic field lines.\nEnergetic particles are effectively scattered due to the field-line curvature,\nwhich then leads to isotropization of energetic particle distributions and\nparticle precipitation to the Earth's atmosphere. Measurements of this\nprecipitation at low-altitude spacecraft are thus often used to remotely probe\nthe magnetotail current sheet configuration. This configuration may include\nspatially localized maxima of the curvature radius at the equator (due to\nlocalized humps of the equatorial magnetic field magnitude) that reduce the\nenergetic particle scattering and precipitation. Therefore, the measured\nprecipitation patterns are related to the spatial distribution of the\nequatorial curvature radius that is determined by the magnetotail current sheet\nconfiguration. In this study, we show that, contrary to previous thinking, a\nmagnetic field line configuration with a localized curvature radius maximum can\nactually enhance the scattering and subsequent precipitation. The spatially\nlocalized magnetic field dipolarization (magnetic field humps) can\nsignificantly curve magnetic field lines far from the equator and create\noff-equatorial minima in the curvature radius. Scattering of energetic\nparticles in these off-equatorial regions alters the scattering (and\nprecipitation) patterns, which has not been studied yet. We discuss our results\nin the context of remote-sensing the magnetotail current sheet configuration\nwith low-altitude spacecraft measurements.\n"} {"abstract": " Since the heavy neutrinos of the inverse seesaw mechanism mix largely with\nthe standard ones, the charged currents formed with them and the muons have the\npotential of generating a robust and positive contribution to the anomalous\nmagnetic moment of the muon. However, bounds from the non-unitarity of the\nleptonic mixing matrix may restrict the parameters of the mechanism so severely\nthat, depending on the framework under which the mechanism is implemented, it\nmay be unable to explain the recent muon g-2 result. In this paper we show\nthat this happens when we implement the mechanism into the standard model and\ninto two versions of the 3-3-1 models.\n"} {"abstract": " Using an argument of Baldwin--Hu--Sivek, we prove that if $K$ is a hyperbolic\nfibered knot with fiber $F$ in a closed, oriented $3$--manifold $Y$, and\n$\\widehat{HFK}(Y,K,[F], g(F)-1)$ has rank $1$, then the monodromy of $K$ is\nfreely isotopic to a pseudo-Anosov map with no fixed points.
In particular,\nthis shows that the monodromy of a hyperbolic L-space knot is freely isotopic\nto a map with no fixed points.\n"} {"abstract": " The objective of Weakly-supervised Temporal Action Localization (WS-TAL) is to\nlocalize all action instances in an untrimmed video with only video-level\nsupervision. Due to the lack of frame-level annotations during training,\ncurrent WS-TAL methods rely on attention mechanisms to localize the foreground\nsnippets or frames that contribute to the video-level classification task. This\nstrategy frequently confuses context with the actual action in the localization\nresult. Separating action and context is a core problem for precise WS-TAL, but\nit is very challenging and has been largely ignored in the literature. In this\npaper, we introduce an Action-Context Separation Network (ACSNet) that\nexplicitly takes into account context for accurate action localization. It\nconsists of two branches (i.e., the Foreground-Background branch and the\nAction-Context branch). The Foreground-Background branch first distinguishes\nforeground from background within the entire video while the Action-Context\nbranch further separates the foreground into action and context. We associate\nvideo snippets with two latent components (i.e., a positive component and a\nnegative component), and their different combinations can effectively\ncharacterize foreground, action and context. Furthermore, we introduce extended\nlabels with auxiliary context categories to facilitate the learning of\naction-context separation. Experiments on THUMOS14 and ActivityNet v1.2/v1.3\ndatasets demonstrate that ACSNet outperforms existing state-of-the-art WS-TAL\nmethods by a large margin.\n"} {"abstract": " Intrinsic Image Decomposition is an open problem of generating the\nconstituents of an image. Generating reflectance and shading from a single\nimage is a challenging task, especially when there is no ground truth. There\nis a lack of unsupervised learning approaches for decomposing an image into\nreflectance and shading using a single image. We propose a neural network\narchitecture capable of this decomposition using physics-based parameters\nderived from the image. Through experimental results, we show that (a) the\nproposed methodology outperforms the existing deep learning-based IID\ntechniques and (b) the derived parameters improve the efficacy significantly.\nWe conclude with a closer analysis of the results (numerical and example\nimages) showing several avenues for improvement.\n"} {"abstract": " We consider the barotropic Navier-Stokes system describing the motion of a\ncompressible viscous fluid confined to a bounded domain driven by time periodic\ninflow/outflow boundary conditions. We show that the problem admits a time\nperiodic solution in the class of weak solutions satisfying the energy\ninequality.\n"} {"abstract": " Understanding the features learned by deep models is important from a model\ntrust perspective, especially as deep systems are deployed in the real world.\nMost recent approaches for deep feature understanding or model explanation\nfocus on highlighting input data features that are relevant for classification\ndecisions. In this work, we instead take the perspective of relating deep\nfeatures to well-studied, hand-crafted features that are meaningful for the\napplication of interest.
We propose a methodology and set of systematic\nexperiments for exploring deep features in this setting, where input feature\nimportance approaches for deep feature understanding do not apply. Our\nexperiments focus on understanding which hand-crafted and deep features are\nuseful for the classification task of interest, how robust these features are\nfor related tasks and how similar the deep features are to the meaningful\nhand-crafted features. Our proposed method is general to many application areas,\nand we demonstrate its utility on orchestral music audio data.\n"} {"abstract": " An automorphism of a rooted spherically homogeneous tree is settled if it\nsatisfies certain conditions on the growth of cycles at finite levels of the\ntree. In this paper, we consider a conjecture by Boston and Jones that the\nimage of an arboreal representation of the absolute Galois group of a number\nfield in the automorphism group of a tree has a dense subset of settled\nelements. Inspired by analogous notions in the theory of compact Lie groups, we\nintroduce the concepts of a maximal torus and a Weyl group for actions of\nprofinite groups on rooted trees, and we show that the Weyl group contains\nimportant information about settled elements. We study maximal tori and their\nWeyl groups in the images of arboreal representations associated to quadratic\npolynomials over algebraic number fields, and in branch groups.\n"} {"abstract": " Open quantum systems exhibit a rich phenomenology, in comparison to closed\nquantum systems that evolve unitarily according to the Schr\\\"odinger equation.\nThe dynamics of an open quantum system are typically classified into Markovian\nand non-Markovian, depending on whether the dynamics can be decomposed into\nvalid quantum operations at any time scale. Since Markovian evolutions are\neasier to simulate, compared to non-Markovian dynamics, it is reasonable to\nassume that non-Markovianity can be employed for useful quantum-technological\napplications. Here, we demonstrate the usefulness of non-Markovianity for\npreserving correlations and coherence in quantum systems. For this, we consider\na broad class of qubit evolutions, having a decoherence matrix separated from\nzero for large times. While any such Markovian evolution leads to an\nexponential loss of correlations, non-Markovianity can help to preserve\ncorrelations even in the limit $t \\rightarrow \\infty$. For covariant qubit\nevolutions, we also show that non-Markovianity can be used to preserve quantum\ncoherence at all times, which is an important resource for quantum metrology.\nWe explicitly demonstrate this effect experimentally with linear optics, by\nimplementing the required evolution that is non-Markovian at all times.\n"} {"abstract": " People hope automated driving technology is always in a stable and\ncontrollable state; specifically, it can be divided into controllable planning,\ncontrollable responsibility, and controllable information. When this\ncontrollability is undermined, it brings about problems, e.g., the trolley\ndilemma, responsibility attribution, information leakage, and security. This\narticle discusses these three types of issues separately and clarifies the\nmisunderstandings.\n"} {"abstract": " In a recent paper with J.-P. Nicolas [J.-P. Nicolas and P.T. Xuan, Annales\nHenri Poincare 2019], we studied the peeling for scalar fields on Kerr metrics.\nThe present work extends these results to Dirac fields on the same geometrical\nbackground. We follow the approach initiated by L.J.
Mason and J.-P. Nicolas\n[L. Mason and J.-P. Nicolas, J.Inst.Math.Jussieu 2009; L. Mason and J.-P.\nNicolas, J.Geom.Phys 2012] on the Schwarzschild spacetime and extended to Kerr\nmetrics for scalar fields. The method combines the Penrose conformal\ncompactification and geometric energy estimates in order to work out a\ndefinition of the peeling at all orders in terms of Sobolev regularity near\n$\\mathscr{I}$, instead of ${\\mathcal C}^k$ regularity at $\\mathscr{I}$, then\nprovides the optimal spaces of initial data such that the associated solution\nsatisfies the peeling at a given order. The results confirm that the analogous\ndecay and regularity assumptions on initial data in Minkowski and in Kerr\nproduce the same regularity across null infinity. Our results are local near\nspacelike infinity and are valid for all values of the angular momentum of the\nspacetime, including for fast Kerr metrics.\n"} {"abstract": " Two-dimensional (2D) hybrid organic-inorganic perovskites (HOIPs) are\nintroducing new directions in the 2D materials landscape. The coexistence of\nferroelectricity and spin-orbit interactions plays a key role in their\noptoelectronic properties. We perform a detailed study on a recently\nsynthesized ferroelectric 2D-HOIP, (AMP)PbI$_4$ (AMP =\n4-aminomethyl-piperidinium). The calculated polarization and Rashba parameter\nare in excellent agreement with experimental values. We report a striking new\neffect, i.e., an extraordinarily large Rashba anisotropy that is tunable by\nferroelectric polarization: as polarization is reversed, not only the spin\ntexture chirality is inverted, but also the major and minor axes of the Rashba\nanisotropy ellipse in k-space are interchanged - a pseudo rotation. A $k \\cdot\np$ model Hamiltonian and symmetry-mode analysis reveal a quadrilinear coupling\nbetween the cation-rotation modes responsible for the Rashba ellipse\npseudo-rotation, the framework rotation, and the polarization. These findings\nmay provide new avenues for spin-optoelectronic devices such as spin valves or\nspin FETs.\n"} {"abstract": " This paper presents a supervised learning method to generate continuous\ncost-to-go functions of non-holonomic systems directly from the workspace\ndescription. Supervision from informative examples reduces training time and\nimproves network performance. The manifold representing the optimal\ntrajectories of a non-holonomic system has high-curvature regions which cannot\nbe efficiently captured with uniform sampling. To address this challenge, we\npresent an adaptive sampling method which makes use of sampling-based planners\nalong with local, closed-form solutions to generate training samples. The\ncost-to-go function over a specific workspace is represented as a neural\nnetwork whose weights are generated by a second, higher order network. The\nnetworks are trained in an end-to-end fashion. In our previous work, this\narchitecture was shown to successfully learn to generate the cost-to-go\nfunctions of holonomic systems using uniform sampling. In this work, we show\nthat uniform sampling fails for non-holonomic systems. However, with the\nproposed adaptive sampling methodology, our network can generate near-optimal\ntrajectories for non-holonomic systems while avoiding obstacles.
Experiments\nshow that our method is two orders of magnitude faster compared to traditional\napproaches in cluttered environments.\n"} {"abstract": " We investigate the structure of the meson Regge trajectories based on the\nquadratic form of the spinless Salpeter-type equation. It is found that the\nforms of the Regge trajectories depend on the energy region. When the employed\nRegge trajectory formula does not match the energy region, the fitted\nparameters neither have explicit physical meanings nor obey the constraints,\nalthough the fitted Regge trajectory can give satisfactory predictions if\nthe employed formula is mathematically appropriate. Moreover, the consistency\nof the Regge trajectories obtained from different approaches is discussed, and\nthe Regge trajectories for different mesons are presented. Finally, we show\nthat the masses of the constituents enter the slope and explain why\nthe slopes of the fitted linear Regge trajectories vary with different kinds of\nmesons.\n"} {"abstract": " Higgs-portal effective field theories are widely used as benchmarks in order\nto interpret collider and astroparticle searches for dark matter (DM)\nparticles. To assess the validity of these effective models, it is important to\nconfront them with concrete realizations that are complete in the ultraviolet\nregime. In this paper, we compare effective Higgs-portal models with scalar,\nfermionic and vector DM with a series of increasingly complex realistic models,\ntaking into account all existing constraints from collider and astroparticle\nphysics. These complete realizations include the inert doublet with scalar DM,\nthe singlet-doublet model for fermionic DM and models based on spontaneously\nbroken dark SU(2) and SU(3) gauge symmetries for vector boson DM. We also\ndiscuss the simpler scenarios in which a new scalar singlet field that mixes\nwith the standard Higgs field is introduced with minimal couplings to\nisosinglet spin--$0, \\frac12$ and 1 DM states. We show that in large regions of\nthe parameter space of these models, the effective Higgs-portal approach\nprovides a consistent limit and thus can be safely adopted, in particular for\nthe interpretation of searches for invisible Higgs boson decays at the LHC. The\nphenomenological implications of assuming or not that the DM states generate\nthe correct cosmological relic density are also discussed.\n"} {"abstract": " We present geometric Bayesian active learning by disagreements (GBALD), a\nframework that performs BALD on its core-set construction interacting with\nmodel uncertainty estimation. Technically, GBALD constructs its core-set on an\nellipsoid, not the typical sphere, preventing low-representative elements from\nthe spherical boundaries. The improvements are twofold: 1) relieving the\nuninformative prior and 2) reducing redundant estimations. Theoretically,\ngeodesic search with an ellipsoid can derive a tighter lower bound on the error\nand achieve zero error more easily than with a sphere. Experiments show that\nGBALD is only slightly perturbed by noisy and repeated samples, and outperforms\nBALD, BatchBALD and other existing deep active learning approaches.\n"} {"abstract": " Facial Expression Recognition (FER) is one of the most important topics in\nHuman-Computer Interaction (HCI). In this work we report details and\nexperimental results about a facial expression recognition method based on\nstate-of-the-art methods.
We fine-tuned a SeNet deep learning architecture\npre-trained on the well-known VGGFace2 dataset, on the AffWild2 facial\nexpression recognition dataset. The main goal of this work is to define a\nbaseline for a novel method we are going to propose in the near future. This\npaper is also required by the Affective Behavior Analysis in-the-wild (ABAW)\ncompetition in order to evaluate this approach on the test set. The results\nreported here are on the validation set and relate to the Expression\nChallenge part (seven basic emotion recognition) of the competition. We will\nupdate them as soon as the actual results on the test set are published on\nthe leaderboard.\n"} {"abstract": " In this paper we compute the Newton polytope $\\mathcal M_A$ of the Morse\ndiscriminant in the space of univariate polynomials with the given support set\n$A.$ Namely, we establish a surjection between the set of all combinatorial\ntypes of Morse univariate tropical polynomials and the vertices of $\\mathcal\nM_A.$\n"} {"abstract": " Breast cancer is the most common invasive cancer in women, and the second\nmain cause of death. Breast cancer screening is an efficient method to detect\nindeterminate breast lesions early. The common approaches to screening for\nwomen are tomosynthesis and mammography images. However, the traditional manual\ndiagnosis requires an intense workload by pathologists, who are prone to\ndiagnostic errors. Thus, the aim of this study is to build a deep convolutional\nneural network method for automatic detection, segmentation, and classification\nof breast lesions in mammography images. Based on deep learning, the Mask-CNN\n(RoIAlign) method was developed for feature selection and extraction, and the\nclassification was carried out by a DenseNet architecture. Finally, the\nprecision and accuracy of the model are evaluated by a cross-validation matrix\nand the AUC curve. To summarize, the findings of this study may be helpful for\nimproving the diagnosis and efficiency of automatic tumor localization through\nmedical image classification.\n"} {"abstract": " We have studied experimentally the generation of vortex flow by gravity waves\nwith a frequency of 2.34 Hz excited on the water surface at an angle\n$2\\theta = \\arctan(3/4) \\approx 36^\\circ$ to each other. The resulting\nhorizontal surface flow has a stripe-like spatial structure. The width of the\nstripes $L = \\pi/(2k\\sin\\theta)$ is determined by the wave vector $k$ of the\nsurface waves and the angle between them, and the length of the stripes is\nlimited by the system size. It was found that the vertical vorticity $\\Omega$\nof the current on the fluid surface is proportional to the product of wave\namplitudes, but its value is much higher than the value corresponding to the\nStokes drift and it continues to grow with time even after the wave motion\nreaches a stationary regime. We demonstrate that the measured dependence\n$\\Omega(t)$ can be described within the recently developed model that takes\ninto account the Eulerian contribution to the generated vortex flow and the\neffect of surface contamination. This model contains a free parameter that\ndescribes the elastic properties of the contaminated surface, and we also show\nthat the value found for this parameter is in reasonable agreement with the\nmeasured decay rate of surface waves.\n"} {"abstract": " We investigate the impact of photochemical hazes and disequilibrium gases on\nthe thermal structure of hot-Jupiters, using a detailed 1-D\nradiative-convective model.
We find that the inclusion of photochemical hazes\nresults in major heating of the upper and cooling of the lower atmosphere.\nSulphur-containing species, such as SH, S$_2$ and S$_3$, provide significant\nopacity in the middle atmosphere and lead to local heating near 1 mbar, while\nOH, CH, NH, and CN radicals produced by the photochemistry affect the thermal\nstructure near 1 $\mu$bar. Furthermore, we show that the modifications of the\nthermal structure from photochemical gases and hazes can have important\nramifications for the interpretation of transit observations. Specifically, our\nstudy for the hazy HD 189733 b shows that the hotter upper atmosphere resulting\nfrom the inclusion of photochemical haze opacity causes an expansion of the\natmosphere, and thus a steeper transit signature in the UV-visible part of the\nspectrum. In addition, the temperature changes in the photosphere also affect\nthe secondary eclipse spectrum. For HD 209458 b we find that a small haze\nopacity could be present in this atmosphere, at pressures below 1 mbar, which\ncould be a result of both photochemical hazes and condensates. Our results\nmotivate the inclusion of radiative feedback from photochemical hazes in\ngeneral circulation models for a proper evaluation of atmospheric dynamics.\n"} {"abstract": " We compare the macroscopic and the local plastic behavior of a model\namorphous solid based on two radically different numerical descriptions. On the\none hand, we simulate glass samples by atomistic simulations. On the other, we\nimplement a mesoscale elasto-plastic model based on a solid-mechanics\ndescription. The latter is extended to consider the anisotropy of the yield\nsurface via statistically distributed local and discrete weak planes on which\nshear transformations can be activated. To make the comparison as quantitative\nas possible, we consider the simple case of a quasistatically driven\ntwo-dimensional system in the stationary flow state and compare mechanical\nobservables measured on both models over the same length scales. We show that\nthe macroscale response, including its fluctuations, can be quantitatively\nrecovered for a range of elasto-plastic mesoscale parameters. Using a newly\ndeveloped method that makes it possible to probe the local yield stresses in\natomistic simulations, we calibrate the local mechanical response of the\nelasto-plastic model at different coarse-graining scales. In this case, the\ncalibration shows a qualitative agreement only for an optimized subset of\nmesoscale parameters and for sufficiently coarse probing length scales. This\ncalibration allows us to establish a length scale for the mesoscopic elements\nthat corresponds to an upper bound of the shear transformation size, a key\nphysical parameter in elasto-plastic models. We find that certain properties\nnaturally emerge from the elasto-plastic model. In particular, we show that the\nelasto-plastic model reproduces the Bauschinger effect, namely the\nplasticity-induced anisotropy in the stress-strain response. We discuss the\nsuccesses and failures of our approach, the impact of different model\ningredients, and propose future research directions for quantitative multi-scale\nmodels of amorphous plasticity.\n"} {"abstract": " This paper expounds innovative results achieved between the mid-14th\ncentury and the beginning of the 16th century by Indian astronomers belonging\nto the so-called \"M\=adhava school\".
These results were in keeping with\nresearch in trigonometry: they concern the calculation of an eighth of the\ncircumference of a circle. They not only present an analog of the series\nexpansion of arctan(1) usually known as the \"Leibniz series\", but also other\nanalogs of series expansions, the convergence of which is much faster. These\nseries expansions are derived from evaluations of the remainders of the partial sums\nof the primordial series, by means of some convergents of generalized continued\nfractions. A justification of these results in modern terms is provided, which\naims at restoring their full mathematical interest.\n"} {"abstract": " Radio relics are the manifestation of electrons presumably being shock\n(re-)accelerated to high energies in the outskirts of galaxy clusters. However,\nestimates of the shocks' strength yield different results when measured with\nradio or X-ray observations. In general, Mach numbers obtained from radio\nobservations are larger than the corresponding X-ray measurements. In this\nwork, we investigate this Mach number discrepancy. For this purpose, we used\nthe cosmological code ENZO to simulate a sample of galaxy clusters that host\nbright radio relics. For each relic, we computed the radio Mach number from the\nintegrated radio spectrum and the X-ray Mach number from the X-ray surface\nbrightness and temperature jumps. Our analysis suggests that the differences in\nthe Mach number estimates follow from the way in which different observables\nare related to different parts of the underlying Mach number distribution:\nradio observations are more sensitive to the high Mach numbers present only in\na small fraction of a shock's surface, while X-ray measurements reflect the\naverage of the Mach number distribution. Moreover, X-ray measurements are very\nsensitive to the relic's orientation. If the same relic is observed from\ndifferent sides, the measured X-ray Mach number varies significantly. On the\nother hand, the radio measurements are more robust, as they are unaffected by\nthe relic's orientation.\n"} {"abstract": " In this paper, we present a sharp analysis for an alternating gradient\ndescent algorithm which is used to solve the covariate adjusted precision\nmatrix estimation problem in the high dimensional setting. Without the\nresampling assumption, we demonstrate that this algorithm not only enjoys a\nlinear rate of convergence, but also attains the optimal statistical rate\n(i.e., the minimax rate). Moreover, our analysis also characterizes the time-data\ntradeoffs in the covariate adjusted precision matrix estimation problem.\nNumerical experiments are provided to verify our theoretical results.\n"} {"abstract": " FISTA is a popular convex optimisation algorithm which is known to converge\nat an optimal rate whenever the optimisation domain is contained in a suitable\nHilbert space. We propose a modified algorithm where each iteration is\nperformed in a subspace, and that subspace is allowed to change at every\niteration. Analytically, this allows us to guarantee convergence in a Banach\nspace setting, although at a reduced rate depending on the conditioning of the\nspecific problem.
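For reference, here is a minimal sketch of the classical FISTA iteration for the Lasso in a fixed, full coordinate space; the modified algorithm described above additionally restricts each iteration to a changing subspace, which is not reproduced here.

    import numpy as np

    def fista_lasso(A, b, lam, n_iter=200):
        """Plain FISTA for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = z = np.zeros(A.shape[1])
        t = 1.0
        for _ in range(n_iter):
            g = z - A.T @ (A @ z - b) / L        # gradient step at extrapolated point
            x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
            x, t = x_new, t_new
        return x

The soft-thresholding step is the proximal map of the l1 term; the subspace variant would apply the same update restricted to the active discretisation.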
Numerically, we show that a greedy adaptive choice of\ndiscretisation can greatly increase the time and memory efficiency in\ninfinite-dimensional Lasso optimisation problems.\n"} {"abstract": " When two identical two-dimensional (2D) periodic lattices are stacked in\nparallel after rotating one layer by a certain angle relative to the other\nlayer, the resulting bilayer system can lose lattice periodicity completely and\nbecome a 2D quasicrystal. Twisted bilayer graphene with 30-degree rotation is a\nrepresentative example. We show that such quasicrystalline bilayer systems\ngenerally develop macroscopically degenerate localized zero-energy states\n(ZESs) in the strong coupling limit, where the interlayer couplings are\noverwhelmingly larger than the intralayer couplings. The emergent chiral\nsymmetry in the strong coupling limit and the aperiodicity of bilayer quasicrystals\nguarantee the existence of the ZESs. The macroscopically degenerate ZESs are\nanalogous to the flat bands of periodic systems, in that both are composed of\nlocalized eigenstates, which give a divergent density of states. For monolayers,\nwe consider the triangular, square, and honeycomb lattices, composed of\nhomogeneous tilings of the three possible planar regular polygons: the equilateral\ntriangle, square, and regular hexagon. We construct a compact theoretical\nframework, which we call the quasiband model, that describes the low-energy\nproperties of bilayer quasicrystals and counts the number of ZESs using a\nsubset of Bloch states of the monolayers. We also propose a simple geometric scheme\nin real space which can show the spatial localization of ZESs and count their\nnumber. Our work clearly demonstrates that bilayer quasicrystals in the strong\ncoupling limit are an ideal playground to study the intriguing interplay of\nflat band physics and the aperiodicity of quasicrystals.\n"} {"abstract": " In this paper, we delve into semi-supervised object detection, where unlabeled\nimages are leveraged to break through the upper bound of fully-supervised\nobject detection models. Previous semi-supervised methods based on pseudo\nlabels are severely degraded by noise and prone to overfitting noisy labels,\nand are thus deficient in learning from diverse unlabeled data. To address\nthis issue, we propose a data-uncertainty guided multi-phase learning method\nfor semi-supervised object detection. We comprehensively consider divergent\ntypes of unlabeled images according to their difficulty levels, utilize them in\ndifferent phases, and ensemble models from different phases together to generate\nthe final results. Image-uncertainty-guided easy data selection and\nregion-uncertainty-guided RoI re-weighting are involved in multi-phase learning and\nenable the detector to concentrate on more certain knowledge. Through extensive\nexperiments on PASCAL VOC and MS COCO, we demonstrate that our method performs\nremarkably well compared to baseline approaches, outperforming them by a large\nmargin: more than 3% on VOC and 2% on COCO.\n"} {"abstract": " This paper theoretically investigates the following empirical phenomenon:\ngiven a high-complexity network with poor generalization bounds, one can\ndistill it into a network with nearly identical predictions but low complexity\nand vastly smaller generalization bounds. The main contribution is an analysis\nshowing that the original network inherits this good generalization bound from\nits distillation, assuming the use of well-behaved data augmentation.
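As a concrete, hedged illustration of distillation itself (the training procedure whose generalization the abstract above analyzes), here is the standard Hinton-style objective; the temperature and mixing weight are conventional choices, not values taken from the paper.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        """Blend of soft teacher targets (temperature T) and hard labels."""
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)   # T^2 keeps gradient scale
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

The student trained with this loss is the "distillation" whose (smaller) complexity the cited analysis transfers back to the original network.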
This\nbound is presented both in an abstract and in a concrete form, the latter\ncomplemented by a reduction technique to handle modern computation graphs\nfeaturing convolutional layers, fully-connected layers, and skip connections,\nto name a few. To round out the story, a (looser) classical uniform convergence\nanalysis of compression is also presented, as well as a variety of experiments\non CIFAR and MNIST demonstrating similar generalization performance between the\noriginal network and its distillation.\n"} {"abstract": " Distributed networks and real-time systems are becoming the most important\ncomponents for the new computer age, the Internet of Things (IoT), with huge\ndata streams or data sets generated from sensors and data generated from\nexisting legacy systems. The data generated offers the ability to measure,\ninfer and understand environmental indicators, from delicate ecologies and\nnatural resources to urban environments. This can be achieved through the\nanalysis of heterogeneous data sources (structured and unstructured). In\nthis paper, we propose a distributed framework, the Event STream Processing Engine\nfor the Environmental Monitoring Domain (ESTemd), for the application of stream\nprocessing to heterogeneous environmental data. Our work in this area\ndemonstrates the useful role big data techniques can play in environmental\ndecision support, early warning, and forecasting systems. The proposed\nframework addresses the challenges of data heterogeneity from heterogeneous\nsystems and real-time processing of huge environmental datasets through a\npublish/subscribe method via a unified data pipeline with the application of\nApache Kafka for real-time analytics.\n"} {"abstract": " Morphological segmentation involves decomposing words into morphemes, the\nsmallest meaning-bearing units of language. This is an important NLP task for\nmorphologically rich agglutinative languages such as the Southern African Nguni\nlanguage group. In this paper, we investigate supervised and unsupervised\nmodels for two variants of morphological segmentation: canonical and surface\nsegmentation. We train sequence-to-sequence models for canonical segmentation,\nwhere the underlying morphemes may not be equal to the surface form of the\nword, and Conditional Random Fields (CRF) for surface segmentation.\nTransformers outperform LSTMs with attention on canonical segmentation,\nobtaining an average F1 score of 72.5% across 4 languages. Feature-based CRFs\noutperform bidirectional LSTM-CRFs to obtain an average of 97.1% F1 on surface\nsegmentation. In the unsupervised setting, an entropy-based approach using a\ncharacter-level LSTM language model fails to outperform a Morfessor baseline,\nwhile on some of the languages neither approach performs much better than a\nrandom baseline. We hope that the high performance of the supervised\nsegmentation models will help to facilitate the development of better NLP tools\nfor Nguni languages.\n"} {"abstract": " We propose and study a new mathematical model of the human immunodeficiency\nvirus (HIV). The main novelty is to consider that the antibody growth depends\nnot only on the virus and antibody concentrations but also on the\nconcentration of uninfected cells. The model consists of five nonlinear\ndifferential equations describing the evolution of the uninfected cells, the\ninfected ones, the free viruses, and the adaptive immunity.
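A hedged sketch of a five-compartment model of this general shape, integrated with SciPy: uninfected cells x, infected cells y, free virus v, CTLs z and antibodies w, with a trilinear antibody growth term a*x*v*w as described above. All rate constants are illustrative placeholders rather than the paper's values, and the two treatment controls are omitted.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, u, lam=10, d=0.01, beta=2e-4, delta=0.3, k=50, c=3,
            p=0.1, b=0.05, a=1e-4, h=0.1, q=0.2, g=0.02):
        x, y, v, z, w = u
        dx = lam - d * x - beta * x * v            # production, death, infection
        dy = beta * x * v - delta * y - p * y * z  # infection minus CTL killing
        dv = k * y - c * v - q * v * w             # burst, clearance, neutralization
        dz = b * y * z - h * z                     # CTL expansion and decay
        dw = a * x * v * w - g * w                 # trilinear antibody growth
        return [dx, dy, dv, dz, dw]

    sol = solve_ivp(rhs, (0, 300), [1000, 0, 1e-3, 1, 1])
    print(sol.y[:, -1])                            # final state of the five compartments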
The adaptive immune\nresponse is represented by the cytotoxic T-lymphocyte (CTL) cells and the\nantibodies, with the growth function assumed to be trilinear. The model\nincludes two kinds of treatments. The objective of the first one is to reduce\nthe number of infected cells, while the aim of the second is to block free\nviruses. Firstly, the positivity and the boundedness of solutions are\nestablished. After that, the local stability of the disease-free steady state\nand the infection steady states is characterized. Next, an optimal control\nproblem is posed and investigated. Finally, numerical simulations are performed\nin order to show the behavior of solutions and the effectiveness of the two\nincorporated treatments via an efficient optimal control strategy.\n"} {"abstract": " We use high-quality VLT/MUSE data to study the kinematics and the ionized gas\nproperties of Haro 11, a well-known starburst merger system and the closest\nconfirmed Lyman continuum leaking galaxy. We present results from integrated\nline maps, and from maps in three velocity bins comprising the blueshifted,\nsystemic and redshifted emission. The kinematic analysis reveals complex\nvelocities resulting from the interplay of virial motions and momentum\nfeedback. Star formation happens intensively in three compact knots (knots A, B\nand C), but one, knot C, dominates the energy released in supernovae. The halo\nis characterised by low gas density and extinction, but with large temperature\nvariations, coincident with fast shock regions. Moreover, we find large\ntemperature discrepancies in knot C when using different temperature-sensitive\nlines. The relative impact of the knots on the metal enrichment differs. While\nknot B is strongly enriching its closest surroundings, knot C is likely the main\ndistributor of metals in the halo. In knot A, part of the metal-enriched gas\nseems to escape through low-density channels towards the south. We compare the\nmetallicities from two methods and find large discrepancies in knot C, a\nshocked area, and the highly ionized zones, which we partially attribute to the\neffect of shocks. This work shows that traditional relations developed from\naveraged measurements or simplified methods fail to probe the diverse\nconditions of the gas in extreme environments. We need robust relations that\ninclude realistic models where several physical processes are simultaneously at\nwork.\n"} {"abstract": " We demonstrate a method that merges the quantum filter diagonalization (QFD)\napproach for the hybrid quantum/classical solution of the time-independent\nelectronic Schr\\\"odinger equation with a low-rank double factorization (DF)\napproach for the representation of the electronic Hamiltonian. In particular,\nwe explore the use of sparse \"compressed\" double factorization (C-DF)\ntruncation of the Hamiltonian within the time-propagation elements of QFD,\nwhile retaining a similarly compressed but numerically converged\ndouble-factorized representation of the Hamiltonian for the operator\nexpectation values needed in the QFD quantum matrix elements. Together with\nsignificant circuit reduction optimizations and number-preserving\npost-selection/echo-sequencing error mitigation strategies, the method is found\nto provide accurate predictions for low-lying eigenspectra in a number of\nrepresentative molecular systems, while requiring reasonably short circuit\ndepths and modest measurement costs.
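A minimal numpy sketch of the compression step that "compressed" double factorization relies on: a truncated eigendecomposition of the two-electron integral tensor viewed as a matrix, keeping only factors above a threshold. The random symmetric matrix below is a stand-in for a real (pq|rs) integral block, and the threshold is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                      # number of orbitals (toy size)
    M = rng.normal(size=(n * n, n * n))
    M = 0.5 * (M + M.T)                        # stand-in for (pq|rs) as a symmetric matrix

    w, V = np.linalg.eigh(M)                   # exact factorization M = V diag(w) V^T
    keep = np.abs(w) > 1.0                     # "compression": drop small-weight factors
    M_cdf = V[:, keep] @ np.diag(w[keep]) @ V[:, keep].T
    print(np.linalg.norm(M - M_cdf))           # truncation error introduced by the cut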
The method is demonstrated by experiments\non noise-free simulators, simulators including decoherence and shot noise, and\nreal quantum hardware.\n"} {"abstract": " The Android operating system is the most widespread mobile platform in the world.\nTherefore, attackers are producing an incredible number of malware applications\nfor Android. Our aim is to detect Android malware in order to protect the\nuser. Very good results are obtained by dynamic analysis of\nsoftware, but it requires complex environments. In order to achieve the same\nlevel of precision, we analyze the machine code and investigate the frequencies\nof n-grams of opcodes in order to detect singular code blocks. This allows us to\nconstruct a database of infected code blocks. Then, because attackers may modify\nand reorganize the injected code in their new malware, we\nperform not only a semantic comparison of the tested software with the database\nof infected code blocks but also a structural comparison. To do such a comparison,\nwe compute subgraph isomorphisms. This allows us to characterize precisely whether the\ntested software is malware and, if so, which family it belongs to. Our method\nis tested both on a laboratory database and on a set of real data. It achieves an\nalmost perfect detection rate.\n"} {"abstract": " This thesis is concerned with continuous, static, and single-objective\noptimization problems subject to inequality constraints. Nevertheless, some\nmethods to handle other kinds of problems are briefly reviewed. The particle\nswarm optimization paradigm was inspired by previous simulations of the\ncooperative behaviour observed in social beings. It is a bottom-up, randomly\nweighted, population-based method whose ability to optimize emerges from local,\nindividual-to-individual interactions. As opposed to traditional methods, it\ncan deal with different problems with little or no adaptation due to the fact that\nit does not profit from problem-specific features of the problem at issue but\nperforms a parallel, cooperative exploration of the search-space by means of a\npopulation of individuals. The main goal of this thesis consists of developing\nan optimizer that can perform reasonably well on most problems. Hence, the\ninfluence of the settings of the algorithm's parameters on the behaviour of the\nsystem is studied, some general-purpose settings are sought, and some\nvariations to the canonical version are proposed aiming to turn it into a more\ngeneral-purpose optimizer. Since no termination condition is included in the\ncanonical version, this thesis is also concerned with the design of some\nstopping criteria which allow the iterative search to be terminated if further\nsignificant improvement is unlikely, or if a certain number of time-steps are\nreached. In addition, some constraint-handling techniques are incorporated into\nthe canonical algorithm to handle inequality constraints. Finally, the\ncapabilities of the proposed general-purpose optimizers are illustrated by\noptimizing a few benchmark problems.\n"} {"abstract": " In recent years, human activity recognition has garnered considerable\nattention both in industrial and academic research because of the wide\ndeployment of sensors, such as accelerometers and gyroscopes, in products such\nas smartphones and smartwatches. Activity recognition is currently applied in\nvarious fields where valuable information about an individual's functional\nability and lifestyle is needed.
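As a hedged companion to the particle swarm thesis abstract above, here is a compact global-best PSO with conventional inertia and acceleration settings (w=0.72, c1=c2=1.49; illustrative defaults, not the thesis' tuned general-purpose values), minimizing a toy sphere function.

    import numpy as np

    def pso_minimize(f, dim, n_particles=30, n_iter=200, w=0.72, c1=1.49, c2=1.49):
        rng = np.random.default_rng(0)
        x = rng.uniform(-5, 5, (n_particles, dim))    # positions
        v = np.zeros_like(x)                          # velocities
        pbest = x.copy()                              # personal bests
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pval)]                    # global best
        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            improved = fx < pval
            pbest[improved], pval[improved] = x[improved], fx[improved]
            g = pbest[np.argmin(pval)]
        return g, pval.min()

    best_x, best_f = pso_minimize(lambda z: np.sum(z**2), dim=5)

A fixed iteration budget stands in for the stopping criteria discussed in the thesis; constraint handling (e.g., penalty terms on f) would be layered on top of this loop.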
In this study, we used the popular WISDM\ndataset for activity recognition. Using multivariate analysis of covariance\n(MANCOVA), we established a statistically significant difference (p<0.05)\nbetween the data generated from the sensors embedded in smartphones and\nsmartwatches. By doing this, we show that smartphones and smartwatches do not\ncapture data in the same way due to the location where they are worn. We\ndeployed several neural network architectures to classify 15 different hand- and\nnon-hand-oriented activities. These models include Long Short-Term Memory\n(LSTM), Bi-directional Long Short-Term Memory (BiLSTM), Convolutional Neural\nNetwork (CNN), and Convolutional LSTM (ConvLSTM). The developed models\nperformed best with watch accelerometer data. Also, we saw that the\nclassification precision obtained with the convolutional input classifiers (CNN\nand ConvLSTM) was higher than that of the end-to-end LSTM classifier in 12 of the 15\nactivities. Additionally, the CNN model for the watch accelerometer was better\nable to classify non-hand-oriented activities when compared to hand-oriented\nactivities.\n"} {"abstract": " A space-time Trefftz discontinuous Galerkin method for the Schr\\\"odinger\nequation with piecewise-constant potential is proposed and analyzed. Following\nthe spirit of Trefftz methods, trial and test spaces are spanned by\nnon-polynomial complex wave functions that satisfy the Schr\\\"odinger equation\nlocally on each element of the space-time mesh. This allows for a significant\nreduction in the number of degrees of freedom in comparison with full\npolynomial spaces. We prove well-posedness and stability of the method, and,\nfor the one- and two-dimensional cases, optimal, high-order, h-convergence\nerror estimates in a skeleton norm. Some numerical experiments validate the\ntheoretical results presented.\n"} {"abstract": " We demonstrate the use of multiple atomic-level Rydberg-atom schemes for\ncontinuous frequency detection of radio frequency (RF) fields. Resonant\ndetection of RF fields by electromagnetically-induced transparency and\nAutler-Townes (AT) splitting in Rydberg atoms is typically limited to frequencies within\nthe narrow bandwidth of a Rydberg transition. By applying a second field\nresonant with an adjacent Rydberg transition, far-detuned fields can be\ndetected through a two-photon resonance AT splitting. This two-photon AT\nsplitting method is several orders of magnitude more sensitive than\noff-resonant detection using the Stark shift. We present the results of various\nexperimental configurations and a theoretical analysis to illustrate the\neffectiveness of this multiple-level scheme. These results show that this\napproach allows for the detection of frequencies in a continuous band between\nresonances with adjacent Rydberg states.\n"} {"abstract": " We study the space of $C^1$ isogeometric spline functions defined on\ntrilinearly parameterized multi-patch volumes. Amongst others, we present a\ngeneral framework for the design of the $C^1$ isogeometric spline space and of\nan associated basis, which is based on the two-patch construction [7], and\nwhich works uniformly for any possible multi-patch configuration. The presented\nmethod is demonstrated in more detail on the basis of a particular subclass of\ntrilinear multi-patch volumes, namely the class of trilinearly\nparameterized multi-patch volumes with exactly one inner edge.
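Referring back to the activity-recognition abstract above, here is a minimal sketch of a convolutional classifier over fixed-length tri-axial accelerometer windows; the window length, channel counts, and 15-class output are placeholders for the kind of CNN compared there, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(3, 32, kernel_size=5), nn.ReLU(),   # 3 input channels: x, y, z axes
        nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, 15),                            # 15 activity classes
    )
    windows = torch.randn(16, 3, 200)                 # batch of 16 windows, 200 samples
    logits = model(windows)                           # -> shape (16, 15)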
For this\nspecific subclass of trivariate multi-patch parameterizations, we further\nnumerically compute the dimension of the resulting $C^1$ isogeometric spline\nspace and use the constructed $C^1$ isogeometric basis functions to numerically\nexplore the approximation properties of the $C^1$ spline space by performing\n$L^2$ approximation.\n"} {"abstract": " Recently, higher-order topological matter and 3D quantum Hall effects have\nattracted great attention. The Fermi-arc mechanism of the 3D quantum Hall\neffect proposed in Weyl semimetals is characterized by one-sided hinge\nstates, which do not exist in any of the previous quantum Hall systems and, more\nimportantly, provide a realistic example of higher-order topological matter.\nThe experimental effort so far has been in the Dirac semimetal Cd$_3$As$_2$, where,\nhowever, time-reversal symmetry leads to hinge states on both sides of the\ntop/bottom surfaces, instead of the desired one-sided hinge states. We propose\nthat under a tilted magnetic field, the hinge states in Cd$_3$As$_2$-like Dirac\nsemimetals can be one-sided, highly tunable by field direction and Fermi\nenergy, and robust against weak disorder. Furthermore, we propose a scanning\ntunneling Hall measurement to detect the one-sided hinge states. Our results\nwill be insightful for exploring not only quantum Hall effects beyond two\ndimensions, but also other higher-order topological insulators in the future.\n"} {"abstract": " When two spherical particles submerged in a viscous fluid are subjected to an\noscillatory flow, they align themselves perpendicular to the direction of the\nflow, leaving a small gap between them. The formation of this compact structure\nis attributed to a non-zero residual flow known as steady streaming. We have\nperformed direct numerical simulations of a fully resolved, oscillating flow in\nwhich the pair of particles is modeled using an immersed boundary method. Our\nsimulations show that the particles oscillate both parallel and perpendicular\nto the oscillating flow in elongated figure-8 trajectories. In the absence of\nbottom friction, the mean gap between the particles depends only on the\nnormalized Stokes boundary layer thickness $\\delta^*$, and on the normalized,\nstreamwise excursion length of the particles relative to the fluid $A_r^*$\n(equivalent to the Keulegan-Carpenter number). For $A_r^*\\lesssim 1$, viscous\neffects dominate and the mean particle separation depends only on $\\delta^*$.\nFor larger $A_r^*$-values, advection becomes important and the gap widens.\nOverall, the normalized mean gap between the particles scales as\n$L^*\\approx3.0{\\delta^*}^{1.5}+0.03{A_r^*}^3$, which also agrees well with\nprevious experimental results. The two regimes are also observed in the\nmagnitude of the oscillations of the gap perpendicular to the flow, which\nincreases in the viscous regime and decreases in the advective regime. When\nbottom friction is considered, particle rotation increases and the gap widens.\nOur results stress the importance of simulating the particle motion with all\nits degrees of freedom to accurately model the system and reproduce\nexperimental results. The new insights into particle pairs provide an\nimportant step towards understanding denser and more complex systems.\n"} {"abstract": " We present a new model to describe the star formation process in galaxies,\nwhich includes the description of the different gas phases -- molecular,\natomic, and ionized -- together with their metal content.
The model, which will\nbe coupled to cosmological simulations of galaxy formation, will be used to\ninvestigate the relation between the star formation rate (SFR) and the\nformation of molecular hydrogen. The model follows the time evolution of the\nmolecular, atomic and ionized phases in a gas cloud and estimates the amount of\nstellar mass formed by solving a set of five coupled differential equations.\nAs expected, we find a strong positive correlation between the molecular\nfraction and the initial gas density, which manifests itself in a positive correlation\nbetween the initial gas density and the SFR of the cloud.\n"} {"abstract": " The development of lightweight object detectors is essential due to\nlimited computation resources. To reduce the computation cost, how to generate\nredundant features plays a significant role. This paper proposes a new\nlightweight convolution method, the Cross-Stage Lightweight (CSL) Module, to\ngenerate redundant features from cheap operations. In the intermediate\nexpansion stage, we replace Pointwise Convolution with Depthwise Convolution\nto produce candidate features. The proposed CSL-Module can reduce the\ncomputation cost significantly. Experiments conducted on MS-COCO show that the\nproposed CSL-Module can approximate the fitting ability of a standard 3x3 Convolution.\nFinally, we use the module to construct a lightweight detector, CSL-YOLO,\nachieving better detection performance than Tiny-YOLOv4 with only 43% of the\nFLOPs and 52% of the parameters.\n"} {"abstract": " The dispersion of a tracer in a fluid flow is influenced by the Lagrangian\nmotion of fluid elements. Even in laminar regimes, the irregular chaotic\nbehavior of a fluid flow can lead to effective stirring that rapidly\nredistributes a tracer throughout the domain. When the advected particles\npossess a finite size and nontrivial shape, however, their dynamics can differ\nmarkedly from passive tracers, thus affecting the dispersion phenomena. Here we\ninvestigate the behavior of neutrally buoyant particles in 2-dimensional\nchaotic flows, combining numerical simulations and laboratory experiments. We\nshow that depending on the particles' shape and size, the underlying Lagrangian\ncoherent structures can be altered, resulting in distinct dispersion phenomena\nwithin the same flow field. Experiments performed in a two-dimensional cellular\nflow exhibited a focusing effect in vortex cores of particles with anisotropic\nshape. In agreement with our numerical model, neutrally buoyant ellipsoidal\nparticles display markedly different trajectories and overall organization than\nspherical particles, with a clustering in vortices that changes according\nto the aspect ratio of the particles.\n"} {"abstract": " We explore the ability of overparameterized shallow neural networks to learn\nLipschitz regression functions with and without label noise when trained by\nGradient Descent (GD). To avoid the problem that, in the presence of noisy\nlabels, neural networks trained to nearly zero training error are inconsistent\non this class, we propose an early stopping rule that allows us to show optimal\nrates. This provides an alternative to the result of Hu et al. (2021), who\nstudied the performance of $\\ell_2$-regularized GD for training shallow\nnetworks in nonparametric regression, which fully relied on the infinite-width\nnetwork (Neural Tangent Kernel (NTK)) approximation.
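A minimal sketch of the generic ingredient the shallow-network abstract above revolves around: gradient descent on a shallow network stopped early, here with a simple patience rule on held-out error. The paper's actual stopping rule is theory-driven and different; this is only the standard practical analogue.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 512), nn.ReLU(), nn.Linear(512, 1))
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    x = torch.rand(256, 1); y = torch.sin(6 * x) + 0.1 * torch.randn(256, 1)
    xv = torch.rand(64, 1); yv = torch.sin(6 * xv) + 0.1 * torch.randn(64, 1)

    best, patience = float("inf"), 0
    for step in range(5000):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward(); opt.step()
        with torch.no_grad():
            val = ((net(xv) - yv) ** 2).mean().item()
        if val < best - 1e-5:
            best, patience = val, 0
        else:
            patience += 1
            if patience > 200:        # stop before the network interpolates the noise
                break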
Here we present a simpler\nanalysis which is based on a partitioning argument for the input space (as in\nthe case of the 1-nearest-neighbor rule), coupled with the fact that trained neural\nnetworks are smooth with respect to their inputs when trained by GD. In the\nnoise-free case, the proof does not rely on any kernelization and can be\nregarded as a finite-width result. In the case of label noise, by slightly\nmodifying the proof, the noise is controlled using a technique of Yao, Rosasco,\nand Caponnetto (2007).\n"} {"abstract": " Pretrained language models have significantly improved the performance of\ndownstream language understanding tasks, including extractive question\nanswering, by providing high-quality contextualized word embeddings. However,\nlearning question answering models still needs large-scale data annotation in\nspecific domains. In this work, we propose a cooperative, self-play learning\nframework, REGEX, for question generation and answering. REGEX is built upon a\nmasked answer extraction task with an interactive learning environment\ncontaining an answer entity REcognizer, a question Generator, and an answer\nEXtractor. Given a passage with a masked entity, the generator generates a\nquestion around the entity, and the extractor is trained to extract the masked\nentity with the generated question and raw texts. The framework allows the\ntraining of question generation and answering models on any text corpora\nwithout annotation. We further leverage a reinforcement learning technique to\nreward generating high-quality questions and to improve the answer extraction\nmodel's performance. Experiment results show that REGEX outperforms the\nstate-of-the-art (SOTA) pretrained language models and zero-shot approaches on\nstandard question-answering benchmarks, and yields the new SOTA performance\nunder the zero-shot setting.\n"} {"abstract": " Enterprise knowledge is a key asset in the competitive and fast-changing\ncorporate landscape. The ability to learn, store and distribute implicit and\nexplicit knowledge can be the difference between success and failure. While\nenterprise knowledge management is a well-defined research domain, current\nimplementations lack orientation towards small and medium enterprises. We\npropose a semantic search engine for relevant documents in an enterprise, based\non automatically generated domain ontologies. In this paper we focus on the\ncomponent for ontology learning and population.\n"} {"abstract": " Recently, Doroudiani and Karimipour [Phys. Rev. A \\textbf{102} 012427 (2020)]\nproposed the notion of planar maximally entangled (PME) states, which are a\nwider class of multipartite entangled states than absolutely maximally\nentangled (AME) states. There they presented their constructions in\nmultipartite systems, but with the number of particles restricted to be even. Here\nwe first solve the remaining cases, i.e., constructions of planar maximally\nentangled states on systems with an odd number of particles. In addition, we\ngeneralize PME states to planar $k$-uniform states, whose reductions to any\nadjacent $k$ parties along a circle of $N$ parties are maximally mixed. We\npresent a method to construct sets of planar $k$-uniform states which have\nminimal support.\n"} {"abstract": " We construct exact solutions to Einstein-Maxwell theory by uplifting\nthe four-dimensional Fubini-Study K\\\"ahler manifold. We find that the solutions can be\nexpressed exactly as integrals of two special functions.
The solutions are\nregular almost everywhere, except for a bolt structure at a single point in any\ndimension. We also show that the solutions are unique and cannot be\nnon-trivially extended to include the cosmological constant in any dimension.\n"} {"abstract": " The introduction of an optical resonator can enable efficient and precise\ninteraction between a photon and a solid-state emitter. It facilitates the\nstudy of strong light-matter interaction, polaritonic physics, and presents a\npowerful interface for quantum communication and computing. A pivotal aspect in\nthe progress of light-matter interaction with solid-state systems is the\nchallenge of combining the requirements of cryogenic temperature and high\nmechanical stability against vibrations while maintaining sufficient degrees of\nfreedom for in-situ tunability. Here, we present a fiber-based open\nFabry-P\\'{e}rot cavity in a closed-cycle cryostat exhibiting ultra-high\nmechanical stability while providing wide-range tunability in all three spatial\ndirections. We characterize the setup and demonstrate operation with a\nroot-mean-square cavity length fluctuation of less than $90$ pm at a temperature\nof $6.5$ K and an integration bandwidth of $100$ kHz. Finally, we benchmark the\ncavity performance by demonstrating the strong-coupling formation of\nexciton-polaritons in monolayer WSe$_2$ with a cooperativity of $1.6$. This set\nof results establishes the open cavity in a closed-cycle cryostat as a versatile\nand powerful platform for low-temperature cavity QED experiments.\n"} {"abstract": " We determine the dark matter pair-wise relative velocity distribution in a\nset of Milky Way-like halos in the Auriga and APOSTLE simulations. Focusing on\nthe smooth halo component, the relative velocity distribution is well-described\nby a Maxwell-Boltzmann distribution over nearly all radii in the halo. We\nexplore the implications for velocity-dependent dark matter annihilation,\nfocusing on four models which scale as different powers of the relative\nvelocity: Sommerfeld, s-wave, p-wave, and d-wave models. We show that the\nJ-factors scale as the moments of the relative velocity distribution, and that\nthe halo-to-halo scatter is largest for d-wave, and smallest for Sommerfeld\nmodels. The J-factor is strongly correlated with the dark matter density in the\nhalo, and is very weakly correlated with the velocity dispersion. This implies\nthat if the dark matter density in the Milky Way can be robustly determined,\none can accurately predict the dark matter annihilation signal, without the\nneed to identify the dark matter velocity distribution in the Galaxy.\n"} {"abstract": " In this essay, we qualitatively demonstrate how small non-perturbative\ncorrections are a necessary addition to semiclassical gravity's path integral.\nWe use this to discuss implications for Hawking's information paradox and the\nbags of gold paradox.\n"} {"abstract": " We report Keck-NIRSPEC observations of the Brackett $\\alpha$ 4.05 $\\mu$m\nrecombination line across the two candidate embedded super star clusters (SSCs)\nin NGC 1569. These SSCs power a bright HII region and have been previously\ndetected as radio and mid-infrared sources. Supplemented with high-resolution\nVLA mapping of the radio continuum along with IRTF-TEXES spectroscopy of the\n[SIV] 10.5 $\\mu$m line, the Brackett $\\alpha$ spectral data provide new insight\ninto the dynamical state of the gas ionized by these forming massive clusters.
NIR\nsources detected in 2 $\\mu$m images from the Slit-viewing Camera are matched\nwith GAIA sources to obtain accurate celestial coordinates and slit positions\nto within $\\sim 0.1''$. Br$\\alpha$ is detected as a strong emission peak\npowered by the less luminous infrared source, MIR1 ($L_{\\rm IR}\\sim\n2\\times10^7~L_\\odot$). The second candidate SSC, MIR2, is more luminous ($L_{\\rm\nIR}\\gtrsim 4\\times10^8~L_\\odot$) but exhibits weak radio continuum and\nBr$\\alpha$ emission, suggesting the ionized gas is extremely dense ($n_e\\gtrsim\n10^5$ cm$^{-3}$), corresponding to hypercompact HII regions around newborn\nmassive stars. The Br$\\alpha$ and [SIV] lines across the region are both\nremarkably symmetric and extremely narrow, with observed line widths $\\Delta v\n\\simeq 40$ km s$^{-1}$ FWHM. This result is the first clear evidence that\nfeedback from NGC 1569's youngest giant clusters is currently incapable of\nrapid gas dispersal, consistent with the emerging theoretical paradigm in the\nformation of giant star clusters.\n"} {"abstract": " Machine learning has brought striking advances in multilingual natural\nlanguage processing capabilities over the past year. For example, the latest\ntechniques have improved the state-of-the-art performance on the XTREME\nmultilingual benchmark by more than 13 points. While a sizeable gap to\nhuman-level performance remains, improvements have been easier to achieve in\nsome tasks than in others. This paper analyzes the current state of\ncross-lingual transfer learning and summarizes some lessons learned. In order\nto catalyze meaningful progress, we extend XTREME to XTREME-R, which consists\nof an improved set of ten natural language understanding tasks, including\nchallenging language-agnostic retrieval tasks, and covers 50 typologically\ndiverse languages. In addition, we provide a massively multilingual diagnostic\nsuite (MultiCheckList) and fine-grained multi-dataset evaluation capabilities\nthrough an interactive public leaderboard to gain a better understanding of\nsuch models. The leaderboard and code for XTREME-R will be made available at\nhttps://sites.research.google/xtreme and\nhttps://github.com/google-research/xtreme respectively.\n"} {"abstract": " This paper presents a state-of-the-art LiDAR-based autonomous navigation\nsystem for under-canopy agricultural robots. Under-canopy agricultural\nnavigation has been a challenging problem because GNSS and other positioning\nsensors are prone to significant errors due to attenuation and multi-path effects\ncaused by crop leaves and stems. Reactive navigation by detecting crop rows\nusing LiDAR measurements is a better alternative to GPS but suffers from\nchallenges due to occlusion from leaves under the canopy. Our system addresses\nthis challenge by fusing IMU and LiDAR measurements using an Extended Kalman\nFilter framework on low-cost hardware. In addition, a local goal generator is\nintroduced to provide locally optimal reference trajectories to the onboard\ncontroller. Our system is validated extensively in real-world field\nenvironments over a distance of 50.88~km on multiple robots in different field\nconditions across different locations.
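A generic extended Kalman filter predict/update skeleton of the kind used above to fuse IMU propagation with LiDAR row detections; the motion model f, measurement model h, and their Jacobians F, H are placeholders to be supplied, not the paper's actual models.

    import numpy as np

    def ekf_step(x, P, u, z, f, F, h, H, Q, R):
        # Predict with the IMU-driven motion model.
        x_pred = f(x, u)
        P_pred = F @ P @ F.T + Q
        # Update with the LiDAR-derived observation.
        y = z - h(x_pred)                      # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
        return x_new, P_new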
We report state-of-the-art results for the distance\nbetween interventions, showing that our system is able to safely\nnavigate without intervention for 386.9~m on average in fields without\nsignificant gaps in the crop rows, 56.1~m in production fields, and 47.5~m in\nfields with gaps (spaces of 1~m without plants on both sides of the row).\n"} {"abstract": " Drone imagery is increasingly used in automated inspection for infrastructure\nsurface defects, especially in hazardous or unreachable environments. In\nmachine vision, the key to crack detection rests with robust and accurate\nalgorithms for image processing. To this end, this paper proposes a deep\nlearning approach using hierarchical convolutional neural networks with feature\npreservation (HCNNFP) and an intercontrast iterative thresholding algorithm for\nimage binarization. First, a set of branch networks is proposed, wherein the\noutput of previous convolutional blocks is concatenated at half size to the\ncurrent ones to reduce the obscuration in the down-sampling stage, taking into\naccount the overall information loss. Next, to extract the feature map\ngenerated from the enhanced HCNN, a binary contrast-based autotuned\nthresholding (CBAT) approach is developed at the post-processing step, where\npatterns of interest are clustered within the probability map of the identified\nfeatures. The proposed technique is then applied to identify cracks on\nthe surface of roads, bridges, or pavements. An extensive comparison with\nexisting techniques is conducted on various datasets and subject to a number of\nevaluation criteria, including the average F-measure (AF\\b{eta}) introduced here\nfor dynamic quantification of the performance. Experiments were conducted on crack images,\nincluding those captured by unmanned aerial vehicles inspecting a monorail\nbridge. The proposed technique outperforms the existing methods on the various\ntested datasets, especially the GAPs dataset, with an increase of about 1.4% in\nterms of AF\\b{eta}, while the mean percentage error drops by 2.2%. Such\nperformance demonstrates the merits of the proposed HCNNFP architecture for\nsurface defect inspection.\n"} {"abstract": " We present a framework for simulating realistic inverse synthetic aperture\nradar images of automotive targets at millimeter wave frequencies. The model\nincorporates radar scattering phenomenology of commonly found vehicles along\nwith range-Doppler-based clutter and receiver noise. These images provide\ninsights into the physical dimensions of the target, the number of wheels and\nthe trajectory undertaken by the target. The model is experimentally validated\nwith measurement data gathered from an automotive radar. The images from the\nsimulation database are subsequently classified using both traditional machine\nlearning techniques as well as deep neural networks based on transfer learning.\nWe show that the ISAR images offer a classification accuracy above 90% and are\nrobust to both noise and clutter.\n"} {"abstract": " There are two cases when the nonlinear Schr\\\"odinger equation (NLSE) with an\nexternal complex potential is well known to support continuous families of\nlocalized stationary modes: the ${\\cal PT}$-symmetric potentials and the Wadati\npotentials. Recently Y.
Kominis and coauthors [Chaos, Solitons and Fractals,\n118, 222-233 (2019)] have suggested that continuous families can also be\nfound in complex potentials of the form $W(x)=W_{1}(x)+iCW_{1,x}(x)$, where $C$\nis an arbitrary real constant and $W_1(x)$ is a real-valued and bounded differentiable\nfunction. Here we study in detail the nonlinear stationary modes that emerge in\ncomplex potentials of this type (for brevity, we call them W-dW potentials).\nFirst, we assume that the potential is small and employ asymptotic methods to\nconstruct a family of nonlinear modes. Our asymptotic procedure stops at the\nterms of order $\\varepsilon^2$, where the small parameter $\\varepsilon$ characterizes the\namplitude of the potential. We therefore conjecture that no continuous families\nof authentic nonlinear modes exist in this case, but \"pseudo-modes\" that\nsatisfy the equation up to an $\\varepsilon^2$ error can indeed be found in W-dW\npotentials. Second, we consider the particular case of a W-dW potential well of\nfinite depth and support our hypothesis with qualitative and numerical\narguments. Third, we simulate the nonlinear dynamics of the found pseudo-modes and\nobserve that, if the amplitude of the W-dW potential is small, then the\npseudo-modes are robust and display persistent oscillations around a certain\nposition predicted by the asymptotic expansion. Finally, we study the authentic\nstationary modes which do not form a continuous family, but exist as isolated\npoints. Numerical simulations reveal the dynamical instability of these solutions.\n"} {"abstract": " Many recent experimental ultrafast spectroscopy studies have hinted at\nnon-adiabatic dynamics indicating the existence of conical intersections, but\ntheir direct observation remains a challenge. The rapid change of the energy\ngap between the electronic states complicates their observation by requiring\nbandwidths of several electron volts. In this manuscript, we propose to use the\ncombined information of different X-ray pump-probe techniques to identify the\nconical intersection. We theoretically study the conical intersection in\npyrrole using transient X-ray absorption, time-resolved X-ray spontaneous\nemission, and linear off-resonant Raman spectroscopy to gather evidence of the\ncurve crossing.\n"} {"abstract": " Robots performing tasks in warehouses provide the first example of\nwidespread adoption of autonomous vehicles in transportation and logistics.\nThe efficiency of these operations, which can vary widely in practice, is a\nkey factor in the success of supply chains. In this work we consider the\nproblem of coordinating a fleet of robots performing picking operations in a\nwarehouse so as to maximize the net profit achieved within a time period while\nrespecting problem- and robot-specific constraints. We formulate the problem as\na weighted set packing problem where the elements in consideration are items on\nthe warehouse floor that can be picked up and delivered within specified time\nwindows. We enforce the constraint that robots must not collide, that each item\nis picked up and delivered by at most one robot, and that the number of robots\nactive at any time does not exceed the total number available. Since the set of\nroutes is exponential in the size of the input, we attack optimization of the\nresulting integer linear program using column generation, where pricing amounts\nto solving an elementary resource-constrained shortest-path problem.
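A compact, exhaustive labeling sketch for an elementary resource-constrained shortest path of the kind named above as the pricing subproblem (one additive resource with a cap). This is a worst-case-exponential enumeration for illustration; practical pricing algorithms add dominance rules rather than enumerating all labels.

    import heapq

    def rcsp(adj, src, dst, cap):
        """adj: {u: [(v, cost, resource), ...]}. Returns the cheapest
        elementary path cost from src to dst with total resource <= cap,
        assuming nonnegative arc costs, or None if infeasible."""
        heap = [(0.0, 0.0, src, frozenset([src]))]
        while heap:
            cost, res, u, seen = heapq.heappop(heap)
            if u == dst:
                return cost            # first pop of dst is optimal for costs >= 0
            for v, c, r in adj.get(u, []):
                if v not in seen and res + r <= cap:   # elementarity + resource cap
                    heapq.heappush(heap, (cost + c, res + r, v, seen | {v}))
        return None

    toy = {"s": [("a", 1, 2), ("b", 2, 1)], "a": [("t", 1, 2)], "b": [("t", 1, 1)]}
    print(rcsp(toy, "s", "t", cap=3))  # -> 3.0 via b, since s-a-t exceeds the cap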
We propose\nan efficient optimization scheme that avoids consideration of every increment\nwithin the time windows. We also propose a heuristic pricing algorithm that can\nefficiently solve the pricing subproblem. While this is itself an important\nproblem, the insights gained from solving these problems effectively can lead\nto new advances in other time-window-constrained vehicle routing problems.\n"} {"abstract": " Following the tremendous success of transformers in natural language\nprocessing and image understanding tasks, in this paper, we present a novel\npoint cloud representation learning architecture, named Dual Transformer\nNetwork (DTNet), which mainly consists of a Dual Point Cloud Transformer (DPCT)\nmodule. Specifically, by aggregating the well-designed point-wise and\nchannel-wise multi-head self-attention models simultaneously, the DPCT module can\ncapture much richer contextual dependencies semantically from the perspective\nof position and channel. With the DPCT module as a fundamental component, we\nconstruct the DTNet for performing point cloud analysis in an end-to-end\nmanner. Extensive quantitative and qualitative experiments on publicly\navailable benchmarks demonstrate the effectiveness of our proposed transformer\nframework for the tasks of 3D point cloud classification and segmentation,\nachieving highly competitive performance in comparison with the\nstate-of-the-art approaches.\n"} {"abstract": " We give two proofs of an old result of E. Salehi, showing that the Weyl\nsubalgebra $\\mathcal{W}$ of $\\ell^\\infty(\\mathbb{Z})$ is a proper subalgebra of\n$\\mathcal{D}$, the algebra of distal functions. We also show that the family\n$\\mathcal{S}^d$ of strictly ergodic functions in $\\mathcal{D}$ does not form an\nalgebra and hence in particular does not coincide with $\\mathcal{W}$. We then\nuse similar constructions to show that a function which is a multiplier for\nstrict ergodicity, either within $\\mathcal{D}$ or in general, is necessarily a\nconstant. An example of a metric, strictly ergodic, distal flow is constructed\nwhich admits a non-strictly ergodic $2$-fold minimal self-joining. It then\nfollows that the enveloping group of this flow is not strictly ergodic (as a\n$T$-flow). Finally we show that the distal, strictly ergodic Heisenberg\nnil-flow is relatively disjoint over its largest equicontinuous factor from\n$|\\mathcal{W}|$.\n"} {"abstract": " We study extensions of the Election Isomorphism problem, focused on the\nexistence of isomorphic subelections. Specifically, we propose the Subelection\nIsomorphism and the Maximum Common Subelection problems and study their\ncomputational complexity and approximability. Using our problems in\nexperiments, we provide some insights into the nature of several statistical\nmodels of elections.\n"} {"abstract": " The High Altitude Water Cherenkov (HAWC) observatory and the High Energy\nStereoscopic System (H.E.S.S.) are two leading instruments in the ground-based\nvery-high-energy gamma-ray domain. HAWC employs the water Cherenkov detection\n(WCD) technique, while H.E.S.S. is an array of Imaging Atmospheric Cherenkov\nTelescopes (IACTs). The two facilities therefore differ in multiple aspects,\nincluding their observation strategy, the size of their field of view, and their\nangular resolution, leading to different analysis approaches.
Until now, it has\nbeen unclear whether the results of observations by both types of instruments are\nconsistent: several of the recently discovered HAWC sources have been followed\nup by IACTs, resulting in a confirmed detection in only a minority of cases.\nWith this paper, we go further and try to resolve the tensions between previous\nresults by performing a new analysis of the H.E.S.S. Galactic plane survey\ndata, applying an analysis technique comparable between H.E.S.S. and HAWC.\nEvents above 1 TeV are selected for both datasets, the point spread function of\nH.E.S.S. is broadened to approach that of HAWC, and a similar background\nestimation method is used. This is the first detailed comparison of the\nGalactic plane observed by both instruments. H.E.S.S. can confirm the gamma-ray\nemission of four HAWC sources among seven previously undetected by IACTs, while\nthe three others have measured fluxes below the sensitivity of the H.E.S.S.\ndataset. Remaining differences in the overall gamma-ray flux can be explained\nby the systematic uncertainties. Therefore, we confirm a consistent view of the\ngamma-ray sky between the WCD and IACT techniques.\n"} {"abstract": " Cactus networks were introduced by Lam as a generalization of planar\nelectrical networks. He defined a map from these networks to the Grassmannian\nGr($n+1,2n$) and showed that the image of this map, $\\mathcal X_n$, lies inside\nthe totally nonnegative part of this Grassmannian. In this paper, we show that\n$\\mathcal X_n$ is exactly the set of elements of Gr($n+1,2n$) that are both totally\nnonnegative and isotropic for a particular skew-symmetric bilinear form. For\ncertain classes of cactus networks, we also explicitly describe how to turn\nresponse matrices and effective resistance matrices into points of Gr($n+1,2n$)\ngiven by Lam's map. Finally, we discuss how our work relates to earlier studies\nof total positivity for Lagrangian Grassmannians.\n"} {"abstract": " We propose a new method for accelerating the computation of a concurrency\nrelation, that is, the set of all pairs of places in a Petri net that can be marked\ntogether. Our approach relies on a state space abstraction that involves a mix\nof structural reductions and linear algebra, and a new data structure that\nis specifically designed for our task. Our algorithms are implemented in a\ntool, called Kong, that we test on a large collection of models used during the\n2020 edition of the Model Checking Contest. Our experiments show that the\napproach works well, even when only a moderate amount of reduction applies.\n"} {"abstract": " Ethics is sometimes considered to be too abstract to be meaningfully\nimplemented in artificial intelligence (AI). In this paper, we reflect on other\naspects of computing that were previously considered to be very abstract. Yet,\nthese are now accepted as being done very well by computers. These tasks have\nranged from multiple aspects of software engineering to mathematics to\nconversation in natural language with humans. This was done by automating the\nsimplest possible step and then building on it to perform more complex tasks.\nWe wonder if ethical AI might be similarly achieved and advocate the process of\nautomation as a key step in making AI take ethical decisions.
The key\ncontribution of this paper is to reflect on how automation was introduced into\ndomains previously considered too abstract for computers.\n"} {"abstract": " In this paper, we investigate the decentralized statistical inference\nproblem, where a network of agents cooperatively recovers a (structured) vector\nfrom private noisy samples without centralized coordination. Existing\noptimization-based algorithms suffer from issues of model mismatch and poor\nconvergence speed, and thus their performance would be degraded, provided that\nthe number of communication rounds is limited. This motivates us to propose a\nlearning-based framework, which unrolls well-known decentralized optimization\nalgorithms (e.g., Prox-DGD and PG-EXTRA) into graph neural networks (GNNs). By\nminimizing the recovery error via end-to-end training, this learning-based\nframework resolves the model mismatch issue. Our convergence analysis (with\nPG-EXTRA as the base algorithm) reveals that the learned model parameters may\naccelerate the convergence and reduce the recovery error to a large extent. The\nsimulation results demonstrate that the proposed GNN-based learning methods\nprominently outperform several state-of-the-art optimization-based algorithms\nin convergence speed and recovery error.\n"} {"abstract": " A prototype version of the Q & U bolometric interferometer for cosmology\n(QUBIC) underwent a campaign of testing at the Astroparticle\nPhysics and Cosmology (APC) laboratory in Paris. The detection chain is\ncurrently made of 256 NbSi transition edge sensors (TES) cooled to 320 mK. The\nreadout system is a 128:1 time-domain multiplexing scheme based on 128 SQUIDs\ncooled to 1 K that are controlled and amplified by an SiGe application-specific\nintegrated circuit at 40 K. We report the performance of this readout chain and\nthe characterization of the TES. The readout system has been functionally\ntested and characterized in the lab and in QUBIC. The low-noise amplifier\ndemonstrated a white noise level of 0.3 nV.Hz^-0.5. Characterization of the\nQUBIC detectors and readout electronics includes the measurement of I-V curves,\ntime constants, and the noise equivalent power. The QUBIC TES bolometer array has\napproximately 80% of detectors within operational parameters. It demonstrated a\nthermal decoupling compatible with a phonon noise of about 5x10^-17 W.Hz^-0.5\nat a critical temperature of 410 mK. While still limited by microphonics from the\npulse tubes and noise aliasing from the readout system, the instrument noise\nequivalent power is about 2x10^-16 W.Hz^-0.5, low enough for the demonstration of\nbolometric interferometry.\n"} {"abstract": " We investigate the viability of producing galaxy mock catalogues with\nCOmoving Lagrangian Acceleration (COLA) simulations in Modified Gravity (MG)\nmodels employing the Halo Occupation Distribution (HOD) formalism. In this\nwork, we focus on two theories of MG: $f(R)$ gravity with the chameleon\nmechanism, and a braneworld model (nDGP) that incorporates the Vainshtein\nmechanism. We use a suite of full $N$-body simulations in MG as a benchmark to\ntest the accuracy of COLA simulations. At the level of Dark Matter (DM), we\nshow that COLA accurately reproduces the matter power spectrum up to $k \\sim 1\nh {\\rm Mpc}^{-1}$, while it is less accurate in reproducing the velocity field.\nTo produce halo catalogues, we find that the ROCKSTAR halo-finder does not\nperform well with COLA simulations.
On the other hand, using a simple\nFriends-of-Friends (FoF) finder and an empirical mass conversion from FoF to\nspherical over-density masses, we are able to produce halo catalogues in COLA\nthat are in good agreement with those in $N$-body simulations. To consider the\neffects of the MG fifth force on the halo profile, we derive simple fitting\nformulae for the concentration-mass and the velocity dispersion-mass relations\nthat we calibrate using ROCKSTAR halo catalogues in $N$-body simulations. We\nthen use these results to extend the HOD formalism to modified gravity\nsimulations in COLA. We use an HOD model with five parameters that we tune to\nobtain galaxy catalogues in redshift space. We find that despite the great\nfreedom of the HOD model, MG leaves characteristic imprints in the redshift\nspace power spectrum multipoles and these features are well captured by the\nCOLA galaxy catalogues.\n"} {"abstract": " Combinatorial design theory studies set systems with certain balance and\nsymmetry properties and has applications to computer science and elsewhere.\nThis paper presents a modular approach to formalising designs for the first\ntime using Isabelle and assesses the usability of a locale-centric approach to\nformalisations of mathematical structures. We demonstrate how locales can be\nused to specify numerous types of designs and their hierarchy. The resulting\nlibrary, which is concise and adaptable, includes formal definitions and proofs\nfor many key properties, operations, and theorems on the construction and\nexistence of designs.\n"} {"abstract": " Motivated by the study of high energy Steklov eigenfunctions, we examine the\nsemi-classical Robin Laplacian. In the two dimensional situation, we determine\nan effective operator describing the asymptotic distribution of the negative\neigenvalues, and we prove that the corresponding eigenfunctions decay away from\nthe boundary, for all dimensions.\n"} {"abstract": " Given a finite abelian group $G$ and a natural number $t$, there are two\nnatural substructures of the Cartesian power $G^t$; namely, $S^t$, where $S$ is\na subset of $G$, and $x+H$, a coset of a subgroup $H$ of $G^t$. A natural\nquestion is whether two such different structures have non-empty intersection.\nThis turns out to be an NP-complete problem. If we fix $G$ and $S$, then the\nproblem is in $P$ if $S$ is a coset in $G$ or if $S$ is empty, and NP-complete\notherwise; if we restrict to intersecting powers of $S$ with subgroups, the\nproblem is in $P$ if $\\bigcap_{n\\in\\mathbb{Z} \\mid nS \\subset S} nS$ is a coset\nor empty, and NP-complete otherwise. These theorems have applications in the\narticle [Spe21], where they are used as a stepping stone between a purely\ncombinatorial and a purely algebraic problem.\n"} {"abstract": " Cycles, which can be found in many different kinds of networks, make\nproblems more intractable, especially when dealing with dynamical processes on\nnetworks. By contrast, tree networks, in which no cycle exists, are\nsimplifications that usually allow for analytic treatment. However, a quantity\nhas been lacking that captures the prevalence of cycles, which determines how\nclose a network is to a tree. We therefore introduce the Cycle Nodes Ratio\n(CNR), defined as the ratio of the number of nodes belonging to cycles to the\ntotal number of nodes, and provide an algorithm to calculate it. CNR is\nstudied in both network models and real networks.
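As a concrete illustration of the CNR just defined, the sketch below computes it for a simple undirected graph, using the fact that a node of a simple graph lies on a cycle exactly when it belongs to a biconnected component with at least three vertices. This is one straightforward way to obtain the quantity, not necessarily the paper's algorithm; the networkx calls and ER parameters are illustrative only.

```python
# Sketch: Cycle Nodes Ratio (CNR) for a simple undirected graph.
import networkx as nx

def cycle_nodes_ratio(G: nx.Graph) -> float:
    cycle_nodes = set()
    for comp in nx.biconnected_components(G):
        if len(comp) >= 3:  # size-2 components are single edges, not cycles
            cycle_nodes |= comp
    return len(cycle_nodes) / G.number_of_nodes()

# Example: an ER network with average degree ~4
G = nx.erdos_renyi_graph(n=1000, p=4 / 999, seed=0)
print(cycle_nodes_ratio(G))
```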
The CNR remains unchanged in\ndifferent-sized Erd\\\"os R\\'enyi (ER) networks with the same average degree, and\nincreases with the average degree, which yields a critical turning point. The\napproximate analytical solutions of CNR in ER networks are given, which fit\nthe simulations well. Furthermore, the difference between CNR and the two-core\nratio (TCR) is analyzed. The critical phenomenon is explored by analysing the\ngiant component of networks. We compare the CNR in network models and real\nnetworks, and find the latter is generally smaller. Combined with a\ncoarse-graining method, the CNR can distinguish the structure of networks with\nhigh average degree. The CNR is also applied to four different kinds of\ntransportation networks and to fungal networks, which give rise to different\nzones of effect. Interestingly, the CNR proves very useful for\nmachine-learning-based network recognition.\n"} {"abstract": " We develop a theory of T-duality for transitive Courant algebroids. We show\nthat T-duality between transitive Courant algebroids E\\rightarrow M and\n\\tilde{E}\\rightarrow \\tilde{M} induces a map between the spaces of sections of\nthe corresponding canonical weighted spinor bundles \\mathbb{S}_{E} and\n\\mathbb{S}_{\\tilde{E}} intertwining the canonical Dirac generating operators.\nThe map is shown to induce an isomorphism between the spaces of invariant\nspinors, compatible with an isomorphism between the spaces of invariant\nsections of the Courant algebroids. The notion of invariance is defined after\nlifting the vertical parallelisms of the underlying torus bundles M\\rightarrow\nB and \\tilde{M} \\rightarrow B to the Courant algebroids and their spinor\nbundles. We prove a general existence result for T-duals under assumptions\ngeneralizing the cohomological integrality conditions for T-duality in the\nexact case. Specializing our construction, we find that the T-dual of an exact\nor a heterotic Courant algebroid is again exact or heterotic, respectively.\n"} {"abstract": " Reggeon field theory (RFT), originally developed in the context of high\nenergy diffraction scattering, has a much wider applicability, describing, for\nexample, the universal critical behavior of stochastic population models as\nwell as probabilistic geometric problems such as directed percolation. In 1975\nSuranyi and others developed cut RFT, which can incorporate the cutting rules\nof Abramovskii, Gribov and Kancheli for how each diagram contributes to\ninclusive cross-sections. In this note we describe the corresponding\nprobabilistic interpretations of cut RFT: as a population model of two\ngenotypes, which can reproduce both asexually and sexually; and as a kind of\nbicolor directed percolation problem. In both cases the AGK rules correspond to\nsimple limiting cases of these problems.\n"} {"abstract": " Utilizing the Atacama Large Millimeter/submillimeter Array (ALMA), we present\nCS line maps in five rotational lines ($J_{\\rm u}=7, 5, 4, 3, 2$) toward the\ncircumnuclear disk (CND) and streamers of the Galactic Center. Our primary goal\nis to resolve the compact structures within the CND and the streamers, in order\nto understand the stability conditions of molecular cores in the vicinity of\nthe supermassive black hole (SMBH) Sgr A*. Our data provide the first\nhomogeneous high-resolution ($1.3'' = 0.05$ pc) observations aimed at\nresolving density and temperature structures.
The CS clouds have sizes of\n$0.05-0.2$ pc with a broad range of velocity dispersion ($\\sigma_{\\rm\nFWHM}=5-40$ km s$^{-1}$). The CS clouds are a mixture of warm ($T_{\\rm k}\\ge\n50-500$ K, n$_{\\rm H_2}$=$10^{3-5}$ cm$^{-3}$) and cold gas ($T_{\\rm k}\\le 50$\nK, n$_{\\rm H_2}$=$10^{6-8}$ cm$^{-3}$). A stability analysis based on the\nunmagnetized virial theorem including the tidal force shows that $84^{+16}_{-37}$%\nof the total gas mass is tidally stable, which accounts for the majority of the\ngas mass. Turbulence dominates the internal energy and thereby sets the threshold\ndensities $10-100$ times higher than the tidal limit at distances $\\ge 1.5$ pc\nfrom Sgr A*, and therefore inhibits the clouds from collapsing to form stars\nnear the SMBH. However, within the central $1.5$ pc, the tidal force overrides\nturbulence and the threshold densities for a gravitational collapse quickly\ngrow to $\\ge 10^{8}$ cm$^{-3}$.\n"} {"abstract": " Cross-modal recipe retrieval has recently gained substantial attention due to\nthe importance of food in people's lives, as well as the availability of vast\namounts of digital cooking recipes and food images to train machine learning\nmodels. In this work, we revisit existing approaches for cross-modal recipe\nretrieval and propose a simplified end-to-end model based on well-established\nand high-performing encoders for text and images. We introduce a hierarchical\nrecipe Transformer which attentively encodes individual recipe components\n(titles, ingredients and instructions). Further, we propose a self-supervised\nloss function computed on top of pairs of individual recipe components, which\nis able to leverage semantic relationships within recipes, and enables training\nusing both image-recipe and recipe-only samples. We conduct a thorough analysis\nand ablation studies to validate our design choices. As a result, our proposed\nmethod achieves state-of-the-art performance in the cross-modal recipe\nretrieval task on the Recipe1M dataset. We make code and models publicly\navailable.\n"} {"abstract": " Quantization is one of the core components in lossy image compression. For\nneural image compression, end-to-end optimization requires differentiable\napproximations of quantization, which can generally be grouped into three\ncategories: additive uniform noise, straight-through estimator and soft-to-hard\nannealing. Training with additive uniform noise approximates the quantization\nerror variationally but suffers from the train-test mismatch. The other two\nmethods do not encounter this mismatch but, as shown in this paper, hurt the\nrate-distortion performance since the latent representation ability is\nweakened. We thus propose a novel soft-then-hard quantization strategy for\nneural image compression that first learns an expressive latent space softly,\nthen closes the train-test mismatch with hard quantization. In addition, beyond\nthe fixed integer quantization, we apply scaled additive uniform noise to\nadaptively control the quantization granularity by deriving a new variational\nupper bound on the actual rate. Experiments demonstrate that our proposed\nmethods are easy to adopt, stable to train, and highly effective especially on\ncomplex compression models.\n"} {"abstract": " In this paper, we introduce PASSAT, a practical system to boost the security\nassurance delivered by the current cloud architecture without requiring any\nchanges or cooperation from the cloud service providers.
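The first two differentiable quantization surrogates named in the neural-compression abstract above are easy to sketch. The snippet below shows the additive-uniform-noise proxy and the straight-through estimator, assuming PyTorch; under a soft-then-hard strategy one would train with the soft proxy first and later switch to hard quantization. This is a schematic illustration, not the authors' implementation.

```python
# Sketch: two common differentiable stand-ins for rounding.
import torch

def noise_quantize(y: torch.Tensor) -> torch.Tensor:
    # additive uniform noise on [-0.5, 0.5): variational proxy for rounding
    return y + torch.empty_like(y).uniform_(-0.5, 0.5)

def ste_quantize(y: torch.Tensor) -> torch.Tensor:
    # hard rounding in the forward pass, identity gradient in the backward pass
    return y + (torch.round(y) - y).detach()

y = torch.randn(4, requires_grad=True)
noise_quantize(y).sum().backward()  # gradients flow through the noisy proxy
```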
PASSAT is an\napplication, transparent to the cloud servers, that allows users to securely and\nefficiently store and access their files on public cloud storage based\non a single master password. Using a fast and light-weight XOR secret sharing\nscheme, PASSAT secret-shares users' files and distributes them among n publicly\navailable cloud platforms. To access the files, PASSAT communicates with any k\nout of n cloud platforms to receive the shares and runs a secret-sharing\nreconstruction algorithm to recover the files. An attacker (insider or\noutsider) who compromises or colludes with fewer than k platforms cannot learn\nthe user's files or modify the files stealthily. To authenticate the user to\nmultiple cloud platforms, PASSAT stores the authentication credentials,\nspecific to each platform, in a password manager protected under\nthe user's master password. Upon requesting access to files, the user enters\nthe password to unlock the vault and fetches the authentication tokens with\nwhich PASSAT can interact with the cloud storage. Our instantiation of PASSAT,\nbased on the (2, 3)-XOR secret sharing scheme of Kurihara et al. and implemented\nwith three popular storage providers, namely Google Drive, Box, and Dropbox,\nconfirms that our approach can efficiently enhance the confidentiality,\nintegrity, and availability of the stored files with no changes on the servers.\n"} {"abstract": " We describe a time lens to expand the dynamic range of photon Doppler\nvelocimetry (PDV) systems. The principle and preliminary design of a time-lens\nPDV (TL-PDV) are explained and shown to be feasible through simulations. In a\nPDV system, an interferometer is used for measuring frequency shifts due to the\nDoppler effect from the target motion. However, the sampling rate of the\nelectronics could limit the velocity range of a PDV system. A four-wave-mixing\n(FWM) time lens applies a quadratic temporal phase to an optical signal within\na nonlinear FWM medium (such as an integrated photonic waveguide or highly\nnonlinear optical fiber). By spectrally isolating the mixing product, termed\nthe idler, and with appropriate lengths of dispersion prior to and after this\nFWM time lens, a temporally magnified version of the input signal is generated.\nTherefore, the frequency shifts of PDV can be \"slowed down\" by the\nmagnification factor $M$ of the time lens. $M=1$ corresponds to a regular PDV\nwithout a TL. $M=10$ has been shown to be feasible for a TL-PDV. Use of this\neffect for PDV can expand the velocity measurement range and allow the use of\nlower bandwidth electronics. TL-PDV will open up new avenues for various\ndynamic materials experiments.\n"} {"abstract": " 3D perception using sensors that meet vehicle industrial standards is a rigid\ndemand in autonomous driving. MEMS LiDAR is emerging as an irresistible trend\ndue to its lower cost, greater robustness, and compliance with mass-production\nstandards. However, it suffers from a small field of view (FoV), which slows\ndown its adoption. In this paper, we propose LEAD, i.e., LiDAR Extender for\nAutonomous Driving, to extend the MEMS LiDAR with a coupled image w.r.t. both\nFoV and range. We propose a multi-stage propagation strategy based on depth\ndistributions and an uncertainty map, which shows effective propagation ability.\nMoreover, our depth outpainting/propagation network follows a teacher-student\ntraining fashion, transferring depth estimation ability to the depth completion\nnetwork without passing on any scale error.
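To make the k-out-of-n XOR sharing used by PASSAT concrete, here is a toy 2-out-of-3 XOR scheme. It is not Kurihara et al.'s storage-efficient construction (each share below is twice the file size); it only illustrates the stated property that any two shares recover the file while any single share reveals nothing about it.

```python
# Toy 2-out-of-3 XOR secret sharing (illustrative, NOT Kurihara et al.'s scheme).
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(data: bytes):
    r1, r2 = secrets.token_bytes(len(data)), secrets.token_bytes(len(data))
    x = xor(xor(data, r1), r2)  # r1 ^ r2 ^ x == data
    # each cloud platform stores two of the three labeled pieces
    return [{0: r1, 1: r2}, {1: r2, 2: x}, {2: x, 0: r1}]

def reconstruct(share_a: dict, share_b: dict) -> bytes:
    p = {**share_a, **share_b}  # any two shares cover all three labels
    return xor(xor(p[0], p[1]), p[2])

s1, s2, s3 = share(b"secret file contents")
assert reconstruct(s1, s3) == b"secret file contents"
```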
To validate the LiDAR extension quality, we utilize a\nhigh-precision laser scanner to generate a ground-truth dataset. Quantitative\nand qualitative evaluations show that our scheme outperforms SOTAs by a large\nmargin. We believe the proposed LEAD, along with the dataset, will benefit the\ncommunity w.r.t. depth research.\n"} {"abstract": " This paper proposes VARA-TTS, a non-autoregressive (non-AR) text-to-speech\n(TTS) model using a very deep Variational Autoencoder (VDVAE) with a Residual\nAttention mechanism, which refines the textual-to-acoustic alignment\nlayer-wise. Hierarchical latent variables with different temporal resolutions\nfrom the VDVAE are used as queries for the residual attention module. By\nleveraging the coarse global alignment from the previous attention layer as an\nextra input, the following attention layer can produce a refined version of the\nalignment. This amortizes the burden of learning the textual-to-acoustic\nalignment among multiple attention layers and outperforms the use of only a\nsingle attention layer in robustness. An utterance-level speaking speed factor\nis computed by a jointly-trained speaking speed predictor, which takes the\nmean-pooled latent variables of the coarsest layer as input, to determine the\nnumber of acoustic frames at inference. Experimental results show that VARA-TTS\nachieves slightly inferior speech quality to its AR counterpart, Tacotron 2, but\nan order-of-magnitude speed-up at inference, and outperforms an analogous non-AR\nmodel, BVAE-TTS, in terms of speech quality.\n"} {"abstract": " This work introduces a methodology for studying synchronization in adaptive\nnetworks with heterogeneous plasticity (adaptation) rules. As a paradigmatic\nmodel, we consider a network of adaptively coupled phase oscillators with\ndistance-dependent adaptations. For this system, we extend the master stability\nfunction approach to adaptive networks with heterogeneous adaptation. Our\nmethod allows for separating the contributions of network structure, local node\ndynamics, and heterogeneous adaptation in determining synchronization.\nUtilizing our proposed methodology, we explain mechanisms leading to\nsynchronization or desynchronization by enhanced long-range connections in\nnonlocally coupled ring networks and networks with Gaussian distance-dependent\ncoupling weights equipped with a biologically motivated plasticity rule.\n"} {"abstract": " We consider the problem of budget allocation for competitive influence\nmaximization over social networks. In this problem, multiple competing parties\n(players) want to distribute their limited advertising resources over a set of\nsocial individuals to maximize their long-run cumulative payoffs. It is assumed\nthat the individuals are connected via a social network and update their\nopinions based on the classical DeGroot model. The players must decide the\nbudget distribution among the individuals at a finite number of campaign times\nto maximize their overall payoff, given as a function of the individuals'\nopinions. We show that i) the optimal investment strategy for the single-player\ncase can be found in polynomial time by solving a concave program, and ii) the\nopen-loop equilibrium strategies for the multiplayer dynamic game can be\ncomputed efficiently by following natural regret minimization dynamics.
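The DeGroot dynamics assumed in the influence-maximization abstract above amount to repeated averaging of opinions through a row-stochastic trust matrix; a minimal sketch follows, with randomly generated weights as a stand-in for a real social network.

```python
# Sketch: classical DeGroot opinion update x <- W x.
import numpy as np

def degroot_step(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    return W @ x  # each individual averages its neighbors' opinions

n = 4
W = np.random.rand(n, n)
W /= W.sum(axis=1, keepdims=True)  # make W row-stochastic
x = np.random.rand(n)              # initial opinions
for _ in range(50):
    x = degroot_step(W, x)
print(x)  # opinions approach a consensus for ergodic W
```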
Our\nresults extend the earlier work on the static version of the problem to a\ndynamic multistage game.\n"} {"abstract": " There is mounting evidence that ultra-energetic neutrinos of astrophysical\norigin may be associated with blazars. Here we investigate a unique sample of\n47 blazars, $\\sim 20$ of which could be new neutrino sources. In particular, we\nfocus on 17 objects of yet unknown redshift, for which we present optical\nspectroscopy secured at the Gran Telescopio Canarias and the ESO Very Large\nTelescope. We find all sources but one (a quasar) to be BL Lac objects. For\nnine targets we are able to determine the redshift (0.09~$<$~z~$<$~1.6), while\nfor the others we set a lower limit on it, based on either the robust detection\nof intervening absorption systems or on an estimation derived from the absence\nof spectral signatures of the host galaxy. In some spectra we detect forbidden\nand semi-forbidden emission lines with luminosities in the range $10^{40} -\n10^{41}$ erg s$^{-1}$. We also report on the spectroscopy of seven blazars\npossibly associated with energetic neutrinos that partially meet the criteria\nof our sample and are discussed in the Appendix. These results represent the\nstarting point of our investigation into the real nature of these objects and\ntheir likelihood of being neutrino emitters.\n"} {"abstract": " Given C$^*$-algebras $A$ and $B$ and a $^*$-homomorphism $\\phi:A\\rightarrow\nB$, we adopt the portrait of the relative $K$-theory $K_*(\\phi)$ due to Karoubi\nusing Banach categories and Banach functors. We show that the elements of the\nrelative groups may be represented in a simple form. We prove the existence of\ntwo six-term exact sequences, and we use these sequences to deduce the fact\nthat the relative theory is isomorphic, in a natural way, to the $K$-theory of\nthe mapping cone of $\\phi$ as an excision result.\n"} {"abstract": " We experimentally demonstrate, for the first time, noise diagnostics by\nrepeated quantum measurements. Specifically, we establish the ability of a\nsingle photon, subjected to random polarisation noise, to diagnose\nnon-Markovian temporal correlations of such a noise process. In the frequency\ndomain, these noise correlations correspond to colored noise spectra, as\nopposed to the ones related to Markovian, white noise. Both the noise spectrum\nand its corresponding temporal correlations are diagnosed by probing the photon\nby means of frequent, (partially-)selective polarisation measurements. Our main\nresult is the experimental demonstration that noise with positive temporal\ncorrelations corresponds to our single photon undergoing a dynamical regime\nenabled by the quantum Zeno effect (QZE), while noise characterized by negative\n(anti-) correlations corresponds to regimes associated with the anti-Zeno\neffect (AZE). This demonstration opens the way to a new kind of noise\nspectroscopy based on QZE and AZE in photon (or other single-particle) state\nprobing.\n"} {"abstract": " Known force terms arising in the Ehrenfest dynamics of quantum electrons and\nclassical nuclei, due to a moving basis set for the former, can be understood\nin terms of the curvature of the manifold hosting the quantum states of the\nelectronic subsystem. 
Namely, the velocity-dependent terms appearing in the\nEhrenfest forces on the nuclei acquire a geometrical meaning in terms of the\nintrinsic curvature of the manifold, while Pulay terms relate to its extrinsic\ncurvature.\n"} {"abstract": " The production of dileptons with an invariant mass in the range 1 GeV < M < 5\nGeV provides unique insight into the approach to thermal equilibrium in\nultrarelativistic nucleus-nucleus collisions. In this mass range, they are\nproduced through the annihilation of quark-antiquark pairs in the early stages\nof the collision. They are sensitive to the anisotropy of the quark momentum\ndistribution, and also to the quark abundance, which is expected to be\nunderpopulated relative to thermal equilibrium. We take into account both\neffects based on recent theoretical developments in QCD kinetic theory, and\nstudy how the dilepton mass spectrum depends on the shear viscosity to entropy\nratio that controls the equilibration time. We evaluate the background from the\nDrell-Yan process and argue that future detector developments can suppress the\nadditional background from semileptonic decays of heavy flavors.\n"} {"abstract": " In extreme learning machines (ELM) the hidden-layer coefficients are randomly\nset and fixed, while the output-layer coefficients of the neural network are\ncomputed by a least squares method. The randomly-assigned coefficients in ELM\nare known to influence its performance and accuracy significantly. In this\npaper we present a modified batch intrinsic plasticity (modBIP) method for\npre-training the random coefficients in the ELM neural networks. The current\nmethod is devised based on the same principle as the batch intrinsic plasticity\n(BIP) method, namely, by enhancing the information transmission in every node\nof the neural network. It differs from BIP in two prominent aspects. First,\nmodBIP does not involve the activation function in its algorithm, and it can be\napplied with any activation function in the neural network. In contrast, BIP\nemploys the inverse of the activation function in its construction, and\nrequires the activation function to be invertible (or monotonic). The modBIP\nmethod can work with the often-used non-monotonic activation functions (e.g.\nGaussian, swish, Gaussian error linear unit, and radial-basis type functions),\nwith which BIP breaks down. Second, modBIP generates target samples on random\nintervals with a minimum size, which leads to highly accurate computation\nresults when combined with ELM. The combined ELM/modBIP method is markedly more\naccurate than ELM/BIP in numerical simulations. Ample numerical experiments are\npresented with shallow and deep neural networks for function approximation and\nboundary/initial value problems with partial differential equations. They\ndemonstrate that the combined ELM/modBIP method produces highly accurate\nsimulation results, and that its accuracy is insensitive to the\nrandom-coefficient initializations in the neural network. This is in sharp\ncontrast with the ELM results without pre-training of the random coefficients.\n"} {"abstract": " A key task in design work is grasping the client's implicit tastes. Designers\noften do this based on a set of examples from the client. However, recognizing\na common pattern among many intertwining variables such as color, texture, and\nlayout and synthesizing them into a composite preference can be challenging. 
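The extreme learning machine setup described in the ELM abstract above (random, fixed hidden-layer coefficients; output weights computed by least squares) fits in a few lines. The sketch below uses an illustrative tanh feature map and target function, and omits the modBIP pre-training step.

```python
# Sketch: a basic ELM fit; hidden weights are random and fixed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))   # training inputs
y = np.sin(np.pi * X)              # target function (illustrative)

W = rng.normal(size=(1, 50))       # random hidden weights (never trained)
b = rng.normal(size=50)            # random hidden biases (never trained)
H = np.tanh(X @ W + b)             # hidden-layer feature matrix
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, least squares

y_pred = np.tanh(X @ W + b) @ beta
print(np.abs(y_pred - y).max())    # training error of the fit
```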
In\nthis paper, we leverage the pattern recognition capability of computational\nmodels to aid in this task. We offer a set of principles for computationally\nlearning personal style. The principles are manifested in PseudoClient, a deep\nlearning framework that learns a computational model for personal graphic\ndesign style from only a handful of examples. In several experiments, we found\nthat PseudoClient achieves 79.40% accuracy with only five positive and\nnegative examples, outperforming several alternative methods. Finally, we\ndiscuss how PseudoClient can be utilized as a building block to support the\ndevelopment of future design applications.\n"} {"abstract": " Let $\\Omega $ be an open subset of $\\mathbb{R}^{N}$, and let $p,\\, q:\\Omega\n\\rightarrow \\left[ 1,\\infty \\right] $ be measurable functions. We give a\nnecessary and sufficient condition for the embedding of the variable exponent\nspace $L^{p(\\cdot )}\\left( \\Omega \\right) $ in $L^{q(\\cdot )}\\left( \\Omega\n\\right) $ to be almost compact. This leads to a condition on $\\Omega, \\, p$ and\n$q$ sufficient to ensure that the Sobolev space $W^{1,p(\\cdot )}\\left( \\Omega\n\\right) $ based on $L^{p(\\cdot )}\\left( \\Omega \\right) $ is compactly embedded\nin $L^{q(\\cdot )}\\left( \\Omega \\right) ;$ compact embedding results of this\ntype already in the literature are included as special cases.\n"} {"abstract": " As widely recognized, a vortex represents flow rotation. A vortex should have\na local rotation axis as its direction and an angular speed as its strength.\nIn classical fluid kinematics, the vorticity vector has long been considered\nthe rotation axis, and the vorticity magnitude the rotational strength. However,\nthis concept does not hold in viscous flow. This study demonstrates by rigorous\nmathematical proof that the vorticity vector is not the fluid rotation axis,\nand vorticity is not the rotation strength. On the other hand, the Liutex\nvector is mathematically proved to be the fluid rotation axis, and the Liutex\nmagnitude is twice the fluid angular speed.\n"} {"abstract": " Most countries have started vaccinating people against COVID-19. However, due\nto limited production capacities and logistical challenges, it will take\nmonths/years until herd immunity is achieved. Therefore, vaccination and social\ndistancing have to be coordinated. In this paper, we provide some insight into\nthis topic using optimization-based control on an age-differentiated\ncompartmental model. For real-life decision making, we investigate the impact\nof the planning horizon on the optimal vaccination/social distancing strategy.\nWe find that in order to reduce social distancing in the long run, without\noverburdening the healthcare system, it is essential to vaccinate the people\nwith the highest contact rates first. That is also the case if the objective is\nto minimize fatalities, provided that the social distancing measures are\nsufficiently strict. However, for short-term planning it is optimal to focus on\nthe high-risk group.\n"} {"abstract": " We present a model of exclusive $\\phi$-meson lepto-production $ep \\to\ne'p'\\phi$ near threshold which features the strangeness gravitational form\nfactors of the proton. We argue that the shape of the differential cross\nsection $d\\sigma/dt$ is a sensitive probe of the strangeness D-term of the\nproton.\n"} {"abstract": " With the promise of reliability in the cloud, more enterprises are migrating\nto the cloud.
The process of continuous integration/deployment (CICD) in the cloud\nconnects developers, who need to deliver value faster and more transparently,\nwith site reliability engineers (SREs), who need to manage applications\nreliably. SREs feed back development issues to developers, and developers\ncommit fixes and trigger CICD to redeploy. The release cycle is more continuous\nthan ever; thus the path from code to production is faster and more automated.\nTo provide this higher level of agility, cloud platforms become more complex in\nthe face of flexibility, with deeper layers of virtualization. However,\nreliability does not come for free with all these complexities. Software\nengineers and SREs need to deal with a wider information spectrum from the\nvirtualized layers. Therefore, providing correlated information with true\npositive evidence is critical for identifying the root cause of issues quickly\nin order to reduce the mean time to recover (MTTR), a key performance metric for\nSREs. Similarity-, knowledge-, or statistics-driven approaches have been\neffective, but with increasing data volume and types, any individual approach is\nlimited in correlating semantic relations across different data sources. In this\npaper, we introduce FIXME to enhance software reliability with hybrid diagnosis\napproaches for enterprises. Our evaluation results show that using a hybrid\ndiagnosis approach improves precision by about 17%. The results are helpful for\nboth practitioners and researchers in developing hybrid diagnosis in the highly\ndynamic cloud environment.\n"} {"abstract": " In multistage manufacturing systems, modeling multiple quality indices based\non the process sensing variables is important. However, the classic modeling\ntechnique predicts each quality variable one at a time, which fails to consider\nthe correlation within or between stages. We propose a deep multistage\nmulti-task learning framework to jointly predict all output sensing variables\nin a unified end-to-end learning framework according to the sequential system\narchitecture in the MMS. Our numerical studies and real case study have shown\nthat the new model has superior performance compared to many benchmark\nmethods as well as great interpretability through the developed variable\nselection techniques.\n"} {"abstract": " Phase noise of the frequency synthesizer is one of the main limitations to\nthe short-term stability of microwave atomic clocks. In this work, we\ndemonstrate a low-noise, simple-architecture microwave frequency synthesizer\nfor a coherent population trapping (CPT) clock. The synthesizer is mainly\ncomposed of a 100 MHz oven controlled crystal oscillator (OCXO), a microwave\ncomb generator and a direct digital synthesizer (DDS). The absolute phase\nnoises of the 3.417 GHz signal are measured to be -55 dBc/Hz, -81 dBc/Hz, -111\ndBc/Hz and -134 dBc/Hz, respectively, for 1 Hz, 10 Hz, 100 Hz and 1 kHz offset\nfrequencies, which shows only 1 dB deterioration at the second harmonic of the\nmodulation frequency of the atomic clock. The estimated frequency stability due\nto the intermodulation effect is 4.7×10^{-14} at 1 s averaging time, which is\nabout half an order of magnitude lower than that of the state-of-the-art CPT Rb\nclock. Our work offers an alternative microwave synthesizer for high-performance\nCPT Rb atomic clocks.\n"} {"abstract": " The Schr\\\"odinger equation is solved numerically for charmonium using the\ndiscrete variable representation (DVR) method.
The Hamiltonian matrix is\nconstructed and diagonalized to obtain the eigenvalues and eigenfunctions.\nUsing these eigenvalues and eigenfunctions, spectra and various decay widths\nare calculated. The obtained results are in good agreement with those of other\nnumerical methods and with experiments.\n"} {"abstract": " We are motivated by the problem of providing strong generalization guarantees\nin the context of meta-learning. Existing generalization bounds are either\nchallenging to evaluate or provide vacuous guarantees in even relatively simple\nsettings. We derive a probably approximately correct (PAC) bound for\ngradient-based meta-learning using two different generalization frameworks in\norder to deal with the qualitatively different challenges of generalization at\nthe \"base\" and \"meta\" levels. We employ bounds for uniformly stable algorithms\nat the base level and bounds from the PAC-Bayes framework at the meta level.\nThe result of this approach is a novel PAC bound that is tighter when the base\nlearner adapts quickly, which is precisely the goal of meta-learning. We show\nthat our bound provides a tighter guarantee than other bounds on a toy\nnon-convex problem on the unit sphere and a text-based classification example.\nWe also present a practical regularization scheme motivated by the bound in\nsettings where the bound is loose and demonstrate improved performance over\nbaseline techniques.\n"} {"abstract": " Designing intelligent microrobots that can autonomously navigate and perform\ninstructed routines in blood vessels, a complex and crowded environment with\nobstacles including dense cells, different flow patterns and diverse vascular\ngeometries, can offer enormous possibilities in biomedical applications. Here\nwe report a hierarchical control scheme that enables a microrobot to\nefficiently navigate and execute customizable routines in blood vessels. The\ncontrol scheme consists of two highly decoupled components: a high-level\ncontroller setting short-ranged dynamic targets to guide the microrobot to\nfollow a preset path and a low-level deep reinforcement learning (DRL)\ncontroller responsible for maneuvering microrobots towards these dynamic\nguiding targets. The proposed DRL controller utilizes three-dimensional (3D)\nconvolutional neural networks and is capable of learning control policy\ndirectly from a coarse raw 3D sensory input. In blood vessels with rich\nconfigurations of red blood cells and vessel geometry, the control scheme\nenables efficient navigation and faithful execution of instructed routines. The\ncontrol scheme is also robust to adversarial perturbations including blood\nflows. This study provides a proof-of-principle for designing data-driven\ncontrol systems for autonomous navigation in vascular networks; it illustrates\nthe great potential of artificial intelligence for broad biomedical\napplications such as targeted drug delivery, blood clot clearing, precision\nsurgery, disease diagnosis, and more.\n"} {"abstract": " Communication overhead is the key challenge for distributed training.\nGradient compression is a widely used approach to reduce communication traffic.\nWhen combined with parallel communication mechanisms such as pipelining,\ngradient compression can greatly alleviate the impact of communication\noverhead. However, two problems of gradient compression remain to be solved.\nFirst, gradient compression brings in extra computation cost, which will delay\nthe next training iteration.
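As a schematic stand-in for the charmonium calculation described above, the snippet below builds and diagonalizes a Hamiltonian matrix for the radial Schrödinger equation on a uniform grid with a Cornell-type potential. The grid discretization and all constants are illustrative assumptions, not the paper's DVR details.

```python
# Sketch: build H on a radial grid and diagonalize (illustrative constants).
import numpy as np

N, L = 1000, 20.0                  # grid points, box size (GeV^-1)
r = np.linspace(L / N, L, N)       # radial grid, avoiding r = 0
dr = r[1] - r[0]
mu = 1.27 / 2                      # reduced mass of c-cbar (GeV, illustrative)

V = -4 * 0.3 / (3 * r) + 0.18 * r  # Cornell-type potential (illustrative)
# kinetic term -u''/(2 mu) by central finite differences
main = 1.0 / (mu * dr**2) + V
off = -1.0 / (2 * mu * dr**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, U = np.linalg.eigh(H)           # eigenvalues and eigenfunctions
print(E[:3])                       # lowest S-wave levels (GeV)
```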
Second,\ngradient compression usually leads to a decrease in convergence accuracy.\n"} {"abstract": " Recently, much attention has been paid to the societal impact of AI,\nespecially concerns regarding its fairness. A growing body of research has\nidentified unfair AI systems and proposed methods to debias them, yet many\nchallenges remain. Representation learning for Heterogeneous Information\nNetworks (HINs), a fundamental building block used in complex network mining,\nhas socially consequential applications such as automated career counseling,\nbut there have been few attempts to ensure that it will not encode or amplify\nharmful biases, e.g. sexism in the job market. To address this gap, in this\npaper we propose a comprehensive set of de-biasing methods for fair HINs\nrepresentation learning, including sampling-based, projection-based, and graph\nneural networks (GNNs)-based techniques. We systematically study the behavior\nof these algorithms, especially their capability in balancing the trade-off\nbetween fairness and prediction accuracy. We evaluate the performance of the\nproposed methods in an automated career counseling application where we\nmitigate gender bias in career recommendation. Based on the evaluation results\non two datasets, we identify the most effective fair HINs representation\nlearning techniques under different conditions.\n"} {"abstract": " This research recasts the network attack dataset from UNSW-NB15 as an\nintrusion detection problem in image space. Using one-hot encodings, the\nresulting grayscale thumbnails provide a quarter-million examples for deep\nlearning algorithms. Applying MobileNetV2's convolutional neural network\narchitecture, the work demonstrates 97% accuracy in distinguishing normal and\nattack traffic. Further class refinements to 9 individual attack families\n(exploits, worms, shellcodes) show an overall 56% accuracy. Using feature\nimportance rank, a random forest solution on subsets identifies the most\nimportant source-destination factors and shows that the least important ones\nare mainly obscure protocols. The dataset is available on Kaggle.\n"} {"abstract": " Tamil is a Dravidian language that is commonly used and spoken in the\nsouthern part of Asia. In the era of social media, memes have become a fun part\nof people's day-to-day lives. Here, we try to analyze the true meaning of\nTamil memes by categorizing them as troll and non-troll. We propose an\ningenious model comprising a transformer-transformer architecture that tries\nto attain state-of-the-art performance by using attention as its main\ncomponent. The dataset consists of troll and non-troll images with their\ncaptions as text. The task is a binary classification task. The objective of\nthe model is to pay more attention to the extracted features and to ignore the\nnoise in both images and text.\n"} {"abstract": " Self-organized spatial patterns of vegetation are frequent in water-limited\nregions and have been suggested as important indicators of ecosystem health.\nHowever, the mechanisms underlying their emergence remain unclear. Some\ntheories hypothesize that patterns could result from a scale-dependent feedback\n(SDF), whereby interactions favoring plant growth dominate at short distances\nand growth-inhibitory interactions dominate in the long range.
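The two costs of gradient compression noted in the distributed-training abstract above (extra computation, possible accuracy loss) are visible even in the simplest compressor. Below is a generic top-k sparsification operator with error feedback; the abstract does not commit to a specific compressor, so this is purely illustrative.

```python
# Sketch: top-k gradient sparsification with error feedback.
import torch

def topk_compress(grad: torch.Tensor, residual: torch.Tensor, k: int):
    acc = grad + residual                      # add back previously dropped error
    flat = acc.abs().flatten()
    idx = torch.topk(flat, k).indices          # selection is the extra compute
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[idx] = True
    mask = mask.view_as(acc)
    sent = torch.where(mask, acc, torch.zeros_like(acc))
    return sent, acc - sent                    # transmitted part, new residual

g = torch.randn(1024)
sent, res = topk_compress(g, torch.zeros_like(g), k=32)
```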
However, we know\nlittle about how net plant-to-plant interactions may change sign with\ninter-individual distance, and in the absence of strong empirical support, the\nrelevance of this SDF for vegetation pattern formation remains disputed. These\ntheories predict a sequential change in pattern shape from gapped to\nlabyrinthine to spotted spatial patterns as precipitation declines.\nNonetheless, alternative theories show that the same sequence of patterns could\nemerge even if net interactions between plants were always inhibitory (purely\ncompetitive feedbacks, PCF). Although these alternative hypotheses lead to\nvisually indistinguishable patterns, they predict very different desertification\ndynamics following the spotted pattern. Moreover, vegetation interaction with\nother ecosystem components can introduce additional spatio-temporal scales that\nreshape both the patterns and the desertification dynamics. Therefore, to make\nreliable ecological predictions for a focal ecosystem, it is crucial that\nmodels accurately capture the mechanisms at play in the system of interest.\nHere, we review existing theories for vegetation self-organization and their\nconflicting predictions about desertification dynamics. We further discuss\npossible ways for reconciling these predictions and potential empirical tests\nvia manipulative experiments to improve our understanding of how vegetation\nself-organizes and to better predict the fate of the ecosystems where these\npatterns form.\n"} {"abstract": " Deep learning techniques have achieved great success in remote sensing image\nchange detection. Most of them are supervised techniques, which usually require\nlarge amounts of training data and are limited to a particular application.\nSelf-supervised methods, as an unsupervised approach, are a popular way to\naddress this problem and are widely used in unsupervised binary change\ndetection tasks. However, the existing self-supervised methods in change\ndetection are based on pre-tasks or operate at the patch level, which may be\nsub-optimal for pixel-wise change detection tasks. Therefore, in this work, a\npixel-wise contrastive approach is proposed to overcome this limitation. This is\nachieved by using a contrastive loss on pixel-level features in an unlabeled\nmulti-view setting. In this approach, a Siamese ResUnet is trained to obtain\npixel-wise representations and to align features from shifted positive pairs.\nMeanwhile, vector quantization is used to augment the learned features in the\ntwo branches. The final binary change map is obtained by subtracting the\nfeatures of one branch from the features of the other branch and using the Rosin\nthresholding method. To overcome the effects of regular seasonal changes in\nbinary change maps, we also use an uncertainty method to enhance the temporal\nrobustness of the proposed approach. Two homogeneous (OSCD and MUDS) datasets\nand one heterogeneous (California Flood) dataset are used to evaluate the\nperformance of the proposed approach. The results demonstrate improvements in\nboth efficiency and accuracy over the patch-wise multi-view contrastive method.\n"} {"abstract": " Convolutional Neural Networks (CNNs) have been rigorously studied for\nHyperspectral Image Classification (HSIC) and are known to be effective in\nexploiting joint spatial-spectral information, at the expense of lower\ngeneralization performance and learning speed due to the hard labels and\nnon-uniform distribution over labels. Several regularization techniques have\nbeen used to overcome the aforesaid issues.
However, models sometimes learn to\npredict the samples extremely confidently, which is not good from a\ngeneralization point of view. Therefore, this paper proposes an approach to\nenhance the generalization performance of a hybrid CNN for HSIC using soft\nlabels that are a weighted average of the hard labels and a uniform distribution\nover ground labels. The proposed method helps to prevent the CNN from becoming\nover-confident. We empirically show that, in improving generalization\nperformance, label smoothing also improves model calibration, which\nsignificantly improves beam-search. Several publicly available hyperspectral\ndatasets are used to validate the experimental evaluation, which reveals\nimproved generalization performance, statistical significance, and\ncomputational complexity compared to state-of-the-art models. The code will be\nmade available at\nhttps://github.com/mahmad00.\n"} {"abstract": " In recent years, the growing interest in the integration of Blockchains (BC)\nand the Internet-of-Things (IoT) -- termed BIoT -- for more trust via\ndecentralization has led to great potential in various use cases such as\nhealth care, supply chain tracking, and smart cities. A key element of BIoT\necosystems is the data transactions (TX) that include the data collected by IoT\ndevices. BIoT applications face many challenges in complying with the European\nGeneral Data Protection Regulation (GDPR), i.e., enabling users to hold on to\ntheir rights to delete or modify their data stored on publicly accessible and\nimmutable BCs. In this regard, this paper identifies the requirements of\nBCs for being GDPR compliant in BIoT use cases. Accordingly, an on-chain\nsolution is proposed that allows fine-grained modification (update and erasure)\noperations on TXs' data fields within a BC. The proposed solution is based on a\ncryptographic primitive called Chameleon Hashing. The novelty of this approach\nis manifold. BC users have the authority to update their data, which are\naddressed at the TX level with no side-effects on the block or chain. By\nperforming and storing the data updates, all on-chain, traceability and\nverifiability of the BC are preserved. Moreover, the compatibility with TX\naggregation mechanisms that allow the compression of the BC size is maintained.\n"} {"abstract": " Mobile health applications (mHealth apps for short) are being increasingly\nadopted in the healthcare sector, enabling stakeholders such as governments,\nhealth units, medics, and patients to utilize health services in a pervasive\nmanner. Despite having several known benefits, mHealth apps entail significant\nsecurity and privacy challenges that can lead to data breaches with serious\nsocial, legal, and financial consequences. This research presents an empirical\ninvestigation of the security awareness of end-users of mHealth apps that are\navailable on major mobile platforms, including Android and iOS. We collaborated\nwith two mHealth providers in Saudi Arabia to survey 101 end-users,\ninvestigating their security awareness about (i) existing and desired security\nfeatures, (ii) security related issues, and (iii) methods to improve security\nknowledge. Findings indicate that the majority of end-users are aware of the\nexisting security features provided by the apps (e.g., restricted app\npermissions); however, they desire usable security (e.g., biometric\nauthentication) and are concerned about the privacy of their health information\n(e.g., data anonymization).
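The soft labels described in the HSIC abstract above are a weighted average of one-hot labels and a uniform distribution over classes; a minimal sketch, with an assumed smoothing weight eps:

```python
# Sketch: label smoothing as a hard/uniform mixture.
import numpy as np

def smooth_labels(hard: np.ndarray, num_classes: int, eps: float = 0.1):
    one_hot = np.eye(num_classes)[hard]
    uniform = np.full((len(hard), num_classes), 1.0 / num_classes)
    return (1.0 - eps) * one_hot + eps * uniform

y = np.array([0, 2, 1])
print(smooth_labels(y, num_classes=3))  # rows like [0.9333, 0.0333, 0.0333]
```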
End-users suggested that protocols such as session\ntimeout or Two-factor authentication (2FA) positively impact security but\ncompromise the usability of the app. Security awareness via social media, peer\nguidance, or training from app providers can increase end-users' trust in\nmHealth apps. This research investigates human-centric knowledge based on\nempirical evidence and provides a set of guidelines to develop secure and\nusable mHealth apps.\n"} {"abstract": " We theoretically study subradiant states in an array of atoms coupled to\nphotons propagating in a one-dimensional waveguide, focusing on the strongly\ninteracting many-body regime with a large excitation fill factor $f$. We\nintroduce a generalized many-body entropy of entanglement based on exact\nnumerical diagonalization followed by a high-order singular value\ndecomposition. This approach has allowed us to visualize and understand the\nstructure of a many-body quantum state. We reveal the breakdown of fermionized\nsubradiant states with increasing $f$, with the emergence of short-ranged\ndimerized antiferromagnetic correlations at the critical point $f=1/2$ and the\ncomplete disappearance of subradiant states at $f>1/2$.\n"} {"abstract": " A graph $G$ is a prime distance graph (respectively, a 2-odd graph) if its\nvertices can be labeled with distinct integers such that for any two adjacent\nvertices, the difference of their labels is prime (either 2 or odd). We prove\nthat trees, cycles, and bipartite graphs are prime distance graphs, and that\nDutch windmill graphs and paper mill graphs are prime distance graphs if and\nonly if the Twin Prime Conjecture and dePolignac's Conjecture are true,\nrespectively. We give a characterization of 2-odd graphs in terms of edge\ncolorings, and we use this characterization to determine which circulant graphs\nof the form $Circ(n, \\{1,k\\})$ are 2-odd and to prove results on circulant\nprime distance graphs.\n"} {"abstract": " We consider the stochastic shortest path planning problem in MDPs, i.e., the\nproblem of designing policies that ensure reaching a goal state from a given\ninitial state with minimum accrued cost. In order to account for rare but\nimportant realizations of the system, we consider a nested dynamic coherent\nrisk total cost functional rather than the conventional risk-neutral total\nexpected cost. Under some assumptions, we show that optimal, stationary,\nMarkovian policies exist and can be found via a special Bellman's equation. We\npropose a computational technique based on difference of convex programs (DCPs)\nto find the associated value functions and therefore the risk-averse policies. A\nrover navigation MDP is used to illustrate the proposed methodology with\nconditional-value-at-risk (CVaR) and entropic-value-at-risk (EVaR) coherent\nrisk measures.\n"} {"abstract": " Using a martingale concentration inequality, concentration bounds `from time\n$n_0$ on' are derived for stochastic approximation algorithms with contractive\nmaps and both martingale difference and Markov noises. These are applied to\nreinforcement learning algorithms, in particular to asynchronous Q-learning and\nTD(0).\n"} {"abstract": " The properties of quasar-host galaxies might be determined by the growth and\nfeedback of their supermassive black holes (SMBHs, $10^{8-10}$ M$_{\\odot}$).
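For the risk measures named in the planning abstract above, a minimal sample-based sketch of CVaR is shown below; the paper evaluates coherent risk inside an MDP via DCPs, which this snippet does not attempt.

```python
# Sketch: sample-based conditional value-at-risk (CVaR) of a cost distribution.
import numpy as np

def cvar(costs: np.ndarray, alpha: float) -> float:
    var = np.quantile(costs, alpha)   # value-at-risk at level alpha
    tail = costs[costs >= var]        # worst (1 - alpha) tail of the costs
    return tail.mean()                # expected cost within that tail

costs = np.random.default_rng(0).exponential(size=10_000)
print(cvar(costs, alpha=0.95))        # mean of the worst 5% of sampled costs
```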
We\ninvestigate this connection with a suite of cosmological simulations of massive\n(halo mass $\\approx 10^{12}$ M$_{\\odot}$) galaxies at $z\\simeq 6$ which include\na detailed sub-grid multiphase gas and accretion model. BH seeds of initial\nmass $10^5$ M$_{\\odot}$ grow mostly by gas accretion, and become SMBHs by $z=6$,\nsettling onto the observed $M_{\\rm BH} - M_{\\star}$ relation without the need for\na boost factor. Although quasar feedback crucially controls the SMBH growth,\nits impact on the properties of the host galaxy at $z=6$ is negligible. In our\nmodel, quasar activity can either quench (via gas heating) or enhance (by ISM\nover-pressurization) star formation. However, we find that the star formation\nhistory is insensitive to such modulation as it is largely dominated, at least\nat $z>6$, by cold gas accretion from the environment that cannot be hindered by\nthe quasar energy deposition. Although quasar-driven outflows can achieve\nvelocities $> 1000~\\rm km~s^{-1}$, only $\\approx 4$% of the outflowing gas mass\ncan actually escape from the host galaxy. These findings are only loosely\nconstrained by available data, but can guide observational campaigns searching\nfor signatures of quasar feedback in early galaxies.\n"} {"abstract": " The existence of $1$-factorizations of an infinite complete equipartite graph\n$K_m[n]$ (with $m$ parts of size $n$) admitting a vertex-regular automorphism\ngroup $G$ is known only when $n=1$ and $m$ is countable (that is, for countable\ncomplete graphs) and, in addition, $G$ is a finitely generated abelian group\nof order $m$.\n In this paper, we show that a vertex-regular $1$-factorization of $K_m[n]$\nunder the group $G$ exists if and only if $G$ has a subgroup $H$ of order $n$\nwhose index in $G$ is $m$. Furthermore, we provide a sufficient condition for\nan infinite Cayley graph to have a regular $1$-factorization. Finally, we\nconstruct 1-factorizations that contain a given subfactorization, both having a\nvertex-regular automorphism group.\n"} {"abstract": " This paper aims at numerically solving an optimal control problem governed by\nan anisotropic Allen-Cahn equation. Therefore we first prove the\nFr\\'echet differentiability of a time-discretized parabolic control problem\nunder certain assumptions on the involved quasilinearity and formulate the\nfirst order necessary conditions. As a next step, since the anisotropies are in\ngeneral not smooth enough, the convergence behavior of the optimal controls is\nstudied for a sequence of (smooth) approximations of the former quasilinear\nterm. In addition, the simultaneous limit in the approximation and the time step\nsize is considered. For a class covering a large variety of anisotropies we\nintroduce a certain regularization and show the previously formulated\nrequirements. Finally, a trust region Newton solver is applied to various\nanisotropies and configurations, and numerical evidence for mesh independent\nbehavior and convergence with respect to regularization is presented.\n"} {"abstract": " We answer in the negative the following question of Boris Mitjagin: Is it\ntrue that a product of two nuclear operators in Banach spaces can be factored\nthrough a trace class operator in a Hilbert space?\n"} {"abstract": " We propose a new architecture for diacritics restoration based on\ncontextualized embeddings, namely BERT, and we evaluate it on 12 languages with\ndiacritics.
Furthermore, we conduct a detailed error analysis on Czech, a\nmorphologically rich language with a high level of diacritization. Notably, we\nmanually annotate all mispredictions, showing that roughly 44% of them are\nactually not errors, but either plausible variants (19%) or corrections of\nerroneous data by the system (25%). Finally, we categorize the real errors in\ndetail. We release the code at\nhttps://github.com/ufal/bert-diacritics-restoration.\n"} {"abstract": " Knowledge distillation transfers knowledge from the teacher network to the\nstudent one, with the goal of greatly improving the performance of the student\nnetwork. Previous methods mostly focus on proposing feature transformations and\nloss functions between features at the same level to improve the effectiveness.\nIn contrast, we study the connection paths across levels between the teacher\nand student networks, and reveal their great importance. For the first time in\nknowledge distillation, cross-stage connection paths are proposed. Our new\nreview mechanism is effective and structurally simple. Our final design, a\nnested and compact framework, requires negligible computation overhead and\noutperforms other methods on a variety of tasks. We apply our method to\nclassification, object detection, and instance segmentation tasks. All of them\nwitness significant improvements in student network performance. Code is\navailable at https://github.com/Jia-Research-Lab/ReviewKD\n"} {"abstract": " Cell-free massive MIMO systems consist of many distributed access points with\nsimple components that jointly serve the users. In millimeter wave bands, only\na limited set of predetermined beams can be supported. In a network that\nconsolidates these technologies, downlink analog beam selection stands as a\nchallenging task for network sum-rate maximization. Low-cost digital\nfilters can improve the network sum-rate further. In this work, we propose\nlow-cost joint designs of analog beam selection and digital filters. The\nproposed joint designs achieve significantly higher sum-rates than the disjoint\ndesign benchmark. Supervised machine learning (ML) algorithms can efficiently\napproximate the input-output mapping functions of the beam selection decisions\nof the joint designs with low computational complexity. Since the training of\nML algorithms is performed off-line, we propose a well-constructed joint design\nthat combines multiple initializations, iterations, and selection features, as\nwell as beam conflict control, i.e., the same beam cannot be used for multiple\nusers. The numerical results indicate that ML algorithms can retain 99-100% of\nthe original sum-rate results achieved by the proposed well-constructed\ndesigns.\n"} {"abstract": " Classifying the sub-categories of an object from the same super-category\n(e.g., bird) in a fine-grained visual classification (FGVC) task highly relies\non mining multiple discriminative features. Existing approaches mainly tackle\nthis problem by introducing attention mechanisms to locate the discriminative\nparts or feature encoding approaches to extract highly parameterized features\nin a weakly-supervised fashion. In this work, we propose a lightweight\nyet effective regularization method named Channel DropBlock (CDB), in\ncombination with two alternative correlation metrics, to address this problem.\nThe key idea is to randomly mask out a group of correlated channels during\ntraining to disrupt feature co-adaptation and thus enhance feature\nrepresentations.
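The Channel DropBlock idea just stated can be sketched in a few lines; here the dropped group is a contiguous, randomly chosen block of channels, whereas the paper groups channels by a correlation metric.

```python
# Sketch: drop a block of channels during training and rescale the rest.
import torch

def channel_dropblock(x: torch.Tensor, group_size: int) -> torch.Tensor:
    n, c, h, w = x.shape
    start = torch.randint(0, c - group_size + 1, (1,)).item()
    mask = torch.ones(c, device=x.device)
    mask[start:start + group_size] = 0          # zero out one channel group
    keep = mask.sum() / c
    return x * mask.view(1, c, 1, 1) / keep     # rescale kept channels

x = torch.randn(8, 64, 14, 14)
y = channel_dropblock(x, group_size=16)
```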
Extensive experiments on three benchmark FGVC datasets show\nthat CDB effectively improves the performance.\n"} {"abstract": " We show that for a model complete strongly minimal theory whose pregeometry\nis flat, the recursive spectrum (SRM($T$)) is either of the form $[0,\\alpha)$\nfor $\\alpha\\in \\omega+2$ or $[0,n]\\cup\\{\\omega\\}$ for $n\\in \\omega$, or\n$\\{\\omega\\}$, or contained in $\\{0,1,2\\}$.\n Combined with previous results, this leaves precisely 4 sets for which it is\nnot yet determined whether each is the spectrum of a model complete strongly\nminimal theory with a flat pregeometry.\n"} {"abstract": " Data-driven methods for battery lifetime prediction are attracting increasing\nattention for applications in which the degradation mechanisms are poorly\nunderstood and suitable training sets are available. However, while advanced\nmachine learning and deep learning methods promise high performance with\nminimal data preprocessing, simpler linear models with engineered features\noften achieve comparable performance, especially for small training sets, while\nalso providing physical and statistical interpretability. In this work, we use\na previously published dataset to develop simple, accurate, and interpretable\ndata-driven models for battery lifetime prediction. We first present the\n\"capacity matrix\" concept as a compact representation of battery\nelectrochemical cycling data, along with a series of feature representations.\nWe then create a number of univariate and multivariate models, many of which\nachieve comparable performance to the highest-performing models previously\npublished for this dataset. These models also provide insights into the\ndegradation of these cells. Our approaches can be used both to quickly train\nmodels for a new dataset and to benchmark the performance of more advanced\nmachine learning methods.\n"} {"abstract": " Willems' fundamental lemma asserts that all trajectories of a linear\ntime-invariant system can be obtained from a finite number of measured ones,\nassuming that controllability and a persistency of excitation condition hold.\nWe show that these two conditions can be relaxed. First, we prove that the\ncontrollability condition can be replaced by a condition on the controllable\nsubspace, unobservable subspace, and a certain subspace associated with the\nmeasured trajectories. Second, we prove that the persistency of excitation\nrequirement can be relaxed if the degree of a certain minimal polynomial is\ntightly bounded. Our results show that data-driven predictive control using\nonline data is equivalent to model predictive control, even for uncontrollable\nsystems. Moreover, our results significantly reduce the amount of data needed\nin identifying homogeneous multi-agent systems.\n"} {"abstract": " This article describes globular weak $(n,\\infty)$-transformations\n($n\\in\\mathbb{N}$) in the sense of Grothendieck, i.e., for each $n\\in\\mathbb{N}$\nwe build a coherator $\\Theta^{\\infty}_{\\mathbb{M}^n}$ whose models in sets are\nglobular weak $(n,\\infty)$-transformations. A natural globular filtration\nemerges from these coherators.\n"} {"abstract": " In this article, logical concepts are defined using the internal syntactic\nand semantic structure of language. For a first-order language, it has been\nshown that its logical constants are connectives and a certain type of\nquantifiers for which the universal and existential quantifiers form a\nfunctionally complete set of quantifiers.
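The persistency-of-excitation condition in Willems' fundamental lemma, discussed in the abstract above, is usually checked as a rank condition on a block-Hankel matrix built from the measured input; a minimal sketch for scalar inputs, with illustrative sizes:

```python
# Sketch: Hankel matrix of an input sequence and the usual rank test.
import numpy as np

def hankel(u: np.ndarray, depth: int) -> np.ndarray:
    cols = len(u) - depth + 1
    return np.column_stack([u[i:i + depth] for i in range(cols)])

rng = np.random.default_rng(0)
u = rng.normal(size=100)               # measured scalar input sequence
H = hankel(u, depth=10)
print(np.linalg.matrix_rank(H) == 10)  # full row rank: persistently exciting
```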
Neither equality nor cardinal\nquantifiers belong to the logical constants of a first-order language.\n"} {"abstract": " To safely deploy autonomous vehicles, onboard perception systems must work\nreliably at high accuracy across a diverse set of environments and geographies.\nOne of the most common techniques to improve the efficacy of such systems in\nnew domains involves collecting large labeled datasets, but such datasets can\nbe extremely costly to obtain, especially if each new deployment geography\nrequires additional data with expensive 3D bounding box annotations. We\ndemonstrate that pseudo-labeling for 3D object detection is an effective way to\nexploit less expensive and more widely available unlabeled data, and can lead\nto performance gains across various architectures, data augmentation\nstrategies, and sizes of the labeled dataset. Overall, we show that better\nteacher models lead to better student models, and that we can distill expensive\nteachers into efficient, simple students.\n Specifically, we demonstrate that pseudo-label-trained student models can\noutperform supervised models trained on 3-10 times the amount of labeled\nexamples. Using PointPillars [24], a two-year-old architecture, as our student\nmodel, we are able to achieve state-of-the-art accuracy simply by leveraging\nlarge quantities of pseudo-labeled data. Lastly, we show that these student\nmodels generalize better than supervised models to a new domain in which we\nonly have unlabeled data, making pseudo-label training an effective form of\nunsupervised domain adaptation.\n"} {"abstract": " In this work, we propose a Bayesian statistical model to simultaneously\ncharacterize two or more social networks defined over a common set of actors.\nThe key feature of the model is a hierarchical prior distribution that allows\nus to represent the entire system jointly, achieving a compromise between\ndependent and independent networks. Among other things, such a specification\neasily allows us to visualize multilayer network data in a low-dimensional\nEuclidean space, generate a weighted network that reflects the consensus\naffinity between actors, establish a measure of correlation between networks,\nassess cognitive judgements that subjects form about the relationships among\nactors, and perform clustering tasks at different social instances. Our model's\ncapabilities are illustrated using several real-world data sets, taking into\naccount different types of actors, sizes, and relations.\n"} {"abstract": " We define partial quasi-morphisms on the group of Hamiltonian diffeomorphisms\nof the cotangent bundle using the spectral invariants in Lagrangian Floer\nhomology with conormal boundary conditions, where the product compatible with\nthe PSS isomorphism and the homological intersection product is lacking.\n"} {"abstract": " In this paper, we propose a data-driven method to discover multiscale\nchemical reactions governed by the law of mass action. First, we use a single\nmatrix to represent the stoichiometric coefficients for both the reactants and\nproducts in a system without catalysis reactions. The negative entries in the\nmatrix denote the stoichiometric coefficients for the reactants and the\npositive ones for the products. Second, we find that conventional\noptimization methods usually get stuck in local minima and cannot find\nthe true solution in learning the multiscale chemical reactions. 
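The single-matrix encoding of stoichiometric coefficients just described can be made concrete with a small example; the two-reaction network below and the rate constants are purely illustrative.

```python
import numpy as np

# Columns are species (A, B, C); rows are reactions. Negative entries are
# stoichiometric coefficients of reactants, positive entries of products.
# Illustrative network:  A + B -> C   and   2C -> A
species = ["A", "B", "C"]
V = np.array([
    [-1, -1,  1],   # A + B -> C
    [ 1,  0, -2],   # 2C -> A
])

def mass_action_rates(conc, k):
    """Law-of-mass-action rates: r_j = k_j * prod_i conc_i^(reactant order)."""
    reactant_orders = np.clip(-V, 0, None)   # magnitudes of the negative entries
    return k * np.prod(conc ** reactant_orders, axis=1)

def dcdt(conc, k):
    """ODE right-hand side d[conc]/dt = V^T r(conc)."""
    return V.T @ mass_action_rates(conc, k)

print(dcdt(np.array([1.0, 2.0, 0.5]), k=np.array([0.3, 0.1])))
```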
To overcome\nthis difficulty, we propose a partial-parameters-freezing (PPF) technique to\nprogressively determine the network parameters by using the fact that the\nstoichiometric coefficients are integers. With such a technique, the dimension\nof the searching space is gradually reduced in the training process and the\nglobal minima can eventually be obtained. Several numerical experiments\nincluding the classical Michaelis-Menten kinetics, the hydrogen oxidation\nreactions, and the simplified GRI-3.0 mechanism verify the good performance of\nour algorithm in learning the multiscale chemical reactions. The code is\navailable at \\url{https://github.com/JuntaoHuang/multiscale-chemical-reaction}.\n"} {"abstract": " Approximate nearest-neighbor search is a fundamental algorithmic problem that\ncontinues to inspire study due to its essential role in numerous contexts. In\ncontrast to most prior work, which has focused on point sets, we consider\nnearest-neighbor queries against a set of line segments in $\\mathbb{R}^d$, for\nconstant dimension $d$. Given a set $S$ of $n$ disjoint line segments in\n$\\mathbb{R}^d$ and an error parameter $\\varepsilon > 0$, the objective is to\nbuild a data structure such that for any query point $q$, it is possible to\nreturn a line segment whose Euclidean distance from $q$ is at most\n$(1+\\varepsilon)$ times the distance from $q$ to its nearest line segment. We\npresent a data structure for this problem with storage $O((n^2/\\varepsilon^{d})\n\\log (\\Delta/\\varepsilon))$ and query time $O(\\log\n(\\max(n,\\Delta)/\\varepsilon))$, where $\\Delta$ is the spread of the set of\nsegments $S$. Our approach is based on a covering of space by anisotropic\nelements, which align themselves according to the orientations of nearby\nsegments.\n"} {"abstract": " We construct a large class of projective threefolds with one node (aka\nnon-degenerate quadratic singularity) such that their small resolutions are not\nprojective.\n"} {"abstract": " Data heterogeneity has been identified as one of the key features in\nfederated learning but is often overlooked through the lens of robustness to\nadversarial attacks. This paper focuses on characterizing and understanding its\nimpact on backdooring attacks in federated learning through comprehensive\nexperiments using synthetic and the LEAF benchmarks. The initial impression\ndriven by our experimental results suggests that data heterogeneity is the\ndominant factor in the effectiveness of attacks and that it may be a redemption\nfor defending against backdooring, as it makes the attack less efficient, makes\neffective attack strategies more challenging to design, and renders the attack\nresult less predictable. However, with further investigations, we found data\nheterogeneity is more of a curse than a redemption as the attack effectiveness\ncan be significantly boosted by simply adjusting the client-side backdooring\ntiming. More importantly, data heterogeneity may result in overfitting in the\nlocal training of benign clients, which can be utilized by attackers to\ndisguise themselves and fool skewed-feature-based defenses. In addition,\neffective attack strategies can be made by adjusting attack data distribution.\nFinally, we discuss the potential directions of defending against the curses\nbrought by data heterogeneity. 
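Returning to the partial-parameters-freezing (PPF) technique in the multiscale-reaction abstract above, a hedged sketch of the round-and-freeze step is shown below; the nearness threshold and the surrounding training loop are assumptions not specified in the abstract.

```python
import numpy as np

def ppf_step(params, frozen, tol=0.05):
    """Freeze parameters that are close to an integer, shrinking the search space.

    params: current estimates of the stoichiometric coefficients
    frozen: boolean mask of already-frozen entries
    tol:    hypothetical nearness threshold
    """
    params = params.copy()
    near = np.abs(params - np.round(params)) < tol
    newly_frozen = near & ~frozen
    params[newly_frozen] = np.round(params[newly_frozen])  # snap to the integer
    return params, frozen | newly_frozen
```

A caller would then restrict subsequent gradient updates to entries where the returned mask is still False, so the effective dimension of the search space shrinks as training proceeds.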
The results and lessons learned from our extensive\nexperiments and analysis offer new insights for designing robust federated\nlearning methods and systems.\n"} {"abstract": " Unsupervised domain adaptive classification intends to improve the\nclassification performance on an unlabeled target domain. To alleviate the adverse\neffect of domain shift, many approaches align the source and target domains in\nthe feature space. However, a feature is usually taken as a whole for alignment\nwithout explicitly making domain alignment proactively serve the classification\ntask, leading to a sub-optimal solution. In this paper, we propose an effective\nTask-oriented Alignment (ToAlign) for unsupervised domain adaptation (UDA). We\nstudy what features should be aligned across domains and propose to make the\ndomain alignment proactively serve classification by performing feature\ndecomposition and alignment under the guidance of the prior knowledge induced\nfrom the classification task itself. Particularly, we explicitly decompose a\nfeature in the source domain into a task-related/discriminative feature that\nshould be aligned, and a task-irrelevant feature that should be\navoided/ignored, based on the classification meta-knowledge. Extensive\nexperimental results on various benchmarks (e.g., Office-Home, Visda-2017, and\nDomainNet) under different domain adaptation settings demonstrate the\neffectiveness of ToAlign, which helps achieve state-of-the-art performance.\nThe code is publicly available at https://github.com/microsoft/UDA\n"} {"abstract": " The design space for inertial confinement fusion (ICF) experiments is vast\nand experiments are extremely expensive. Researchers rely heavily on computer\nsimulations to explore the design space in search of high-performing\nimplosions. However, ICF multiphysics codes must make simplifying assumptions,\nand thus deviate from experimental measurements for complex implosions. For\nmore effective design and investigation, simulations require input from past\nexperimental data to better predict future performance. In this work, we\ndescribe a cognitive simulation method for combining simulation and\nexperimental data into a common, predictive model. This method leverages a\nmachine learning technique called transfer learning, the process of taking a\nmodel trained to solve one task, and partially retraining it on a sparse\ndataset to solve a different, but related task. In the context of ICF design,\nneural network models are trained on large simulation databases and partially\nretrained on experimental data, producing models that are far more accurate\nthan simulations alone. We demonstrate improved model performance for a range\nof ICF experiments at the National Ignition Facility, and predict the outcome\nof recent experiments with less than ten percent error for several key\nobservables. We discuss how the methods might be used to carry out a\ndata-driven experimental campaign to optimize performance, illustrating the key\nproduct -- models that become increasingly accurate as data is acquired.\n"} {"abstract": " This paper develops simple feed-forward neural networks that achieve the\nuniversal approximation property for all continuous functions with a fixed\nfinite number of neurons. These neural networks are simple because they are\ndesigned with a simple and computable continuous activation function $\\sigma$\nleveraging a triangular-wave function and a softsign function. 
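The abstract does not pin down how the triangular wave and the softsign function are combined, so the composition below is only one plausible reading, sketched under that assumption.

```python
import numpy as np

def triangular_wave(x, period=2.0):
    """Periodic triangle wave with values in [0, 1]."""
    t = np.mod(x, period) / period
    return 1.0 - 2.0 * np.abs(t - 0.5)

def softsign(x):
    return x / (1.0 + np.abs(x))

def sigma(x):
    # Hypothetical composition: triangle wave on x >= 0, softsign on x < 0.
    # Both pieces vanish at 0, so the activation stays continuous.
    return np.where(x >= 0, triangular_wave(x), softsign(x))
```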
We prove that\n$\\sigma$-activated networks with width $36d(2d+1)$ and depth $11$ can\napproximate any continuous function on a $d$-dimensional hypercube within an\narbitrarily small error. Hence, for supervised learning and its related\nregression problems, the hypothesis space generated by these networks with a\nsize not smaller than $36d(2d+1)\\times 11$ is dense in the space of continuous\nfunctions. Furthermore, classification functions arising from image and signal\nclassification are in the hypothesis space generated by $\\sigma$-activated\nnetworks with width $36d(2d+1)$ and depth $12$, when there exist pairwise\ndisjoint closed bounded subsets of $\\mathbb{R}^d$ such that the samples of the\nsame class are located in the same subset.\n"} {"abstract": " Adversarial algorithms have been shown to be effective against neural networks\nfor a variety of tasks. Some adversarial algorithms minimally perturb all the\npixels in the image for the image classification task. In\ncontrast, some algorithms perturb a few pixels strongly. However, very little\ninformation is available regarding why such diverse adversarial samples exist.\nRecently, Vargas et al. showed that the existence of\nthese adversarial samples might be due to conflicting saliency within the\nneural network. We test this hypothesis of conflicting saliency by analysing\nthe Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM)\nof original samples and a few different types of adversarial samples. We also\nanalyse how different adversarial samples distort the attention of the neural\nnetwork compared to original samples. We show that in the case of Pixel Attack,\nperturbed pixels either call the network's attention to themselves or divert\nattention away from them. Simultaneously, the Projected Gradient Descent Attack\nperturbs pixels so that intermediate layers inside the neural network lose\nattention for the correct class. We also show that both attacks affect the\nsaliency map and activation maps differently. This sheds light on why some\ndefences that succeed against some attacks remain vulnerable to other\nattacks. We hope that this analysis will improve understanding of the existence\nand the effect of adversarial samples and enable the community to develop more\nrobust neural networks.\n"} {"abstract": " Understanding and mitigating loss channels due to two-level systems (TLS) is\none of the main cornerstones in the quest of realizing long photon lifetimes in\nsuperconducting quantum circuits. Typically, the TLS to which a circuit couples\nare modeled as a large bath without any coherence. Here we demonstrate that the\ncoherence of TLS has to be considered to accurately describe the ring-down\ndynamics of a coaxial quarter-wave resonator with an internal quality factor of\n$0.5\\times10^9$ at the single-photon level. The transient analysis reveals\neffective non-Markovian dynamics of the combined TLS and cavity system, which\nwe can accurately fit by introducing a comprehensive TLS model. The fit returns\nan average coherence time of around $T_2^*\\approx0.3\\,\\mathrm{\\mu s}$ for a\ntotal of $N\\approx10^{9}$ TLS with power-law distributed coupling strengths.\nDespite the shortly coherent TLS excitations, we observe long-term effects on\nthe cavity decay due to coherent elastic scattering between the resonator field\nand the TLS. 
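Referring back to the adversarial-sample abstract above, a minimal sketch of the vanilla gradient saliency map it analyzes is given below; the assumption that the model outputs raw class logits and the max-normalization at the end are illustrative choices.

```python
import torch

def saliency_map(model, image, target_class):
    """Vanilla saliency: |d score / d pixel|, maximized over color channels.

    image: tensor of shape (1, 3, H, W); model: any differentiable classifier
    returning (1, n_classes) logits.
    """
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    sal = image.grad.detach().abs().max(dim=1)[0]   # (1, H, W)
    return sal / (sal.max() + 1e-12)                # assumed normalization
```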
Moreover, this model provides an accurate prediction of the\ninternal quality factor's temperature dependence.\n"} {"abstract": " Weakly nonlinear internal wave-wave interaction is a key mechanism that\ncascades energy from large to small scales, leading to ocean turbulence and\nmixing. Oceans typically have a non-uniform density stratification profile;\nmoreover, submarine topography leads to a spatially varying ocean depth ($h$).\nUnder these conditions and assuming mild-slope bathymetry, we employ\nmultiple-scale analysis to derive the wave amplitude equations for triadic- and\nself-interactions. The waves are assumed to have a slowly (rapidly) varying\namplitude (phase) in space and time. For uniform stratifications, the\nhorizontal wavenumber ($k$) condition for waves ($1$,$2$,$3$), given by\n${k}_{(1,a)}+{k}_{(2,b)}+{k}_{(3,c)}=0$, is unaffected as $h$ is varied, where\n$(a,b,c)$ denote the mode numbers. Moreover, the nonlinear coupling coefficients\n(NLC) are proportional to $1/h^2$, implying that triadic waves grow faster\nwhile travelling up a seamount. For non-uniform stratifications, triads that do\nnot satisfy the condition $a=b=c$ may not satisfy the horizontal wavenumber\ncondition as $h$ is varied, and unlike uniform stratification, the NLC may not\ndecrease (increase) monotonically with increasing (decreasing) $h$. NLC, and\nhence wave growth rates for both triads and self-interactions, can also vary\nrapidly with $h$. The most unstable daughter wave combination of a triad with a\nmode-1 parent wave can also change for relatively small changes in $h$. We also\ninvestigate higher-order self-interactions in the presence of a monochromatic,\nsmall amplitude bathymetry; here the bathymetry behaves as a zero frequency\nwave. We derive the amplitude evolution equations and show that higher-order\nself-interactions might be a viable mechanism of energy cascade.\n"} {"abstract": " We give a $0.5368$-competitive algorithm for edge-weighted online bipartite\nmatching. Prior to our work, the best competitive ratio was $0.5086$ due to\nFahrbach, Huang, Tao, and Zadimoghaddam (FOCS 2020). They achieved their\nbreakthrough result by developing a subroutine called \\emph{online correlated\nselection} (OCS) which takes as input a sequence of pairs and selects one item\nfrom each pair. Importantly, the selections the OCS makes are negatively\ncorrelated.\n We achieve our result by defining \\emph{multiway} OCSes which receive\narbitrarily many elements at each step, rather than just two. In addition to\nbetter competitive ratios, our formulation allows for a simpler reduction from\nedge-weighted online bipartite matching to OCSes. While Fahrbach et al. used a\nfactor-revealing linear program to optimize the competitive ratio, our analysis\ndirectly connects the competitive ratio to the parameters of the multiway OCS.\nFinally, we show that the formulation of Fahrbach et al. can achieve a\ncompetitive ratio of at most $0.5239$, confirming that multiway OCSes are\nstrictly more powerful.\n"} {"abstract": " Let $\\Omega\\subset \\mathbb{C}^n$ be a smooth bounded pseudoconvex domain and\n$A^2 (\\Omega)$ denote its Bergman space. Let $P:L^2(\\Omega)\\longrightarrow\nA^2(\\Omega)$ be the Bergman projection. 
For a measurable\n$\\varphi:\\Omega\\longrightarrow \\Omega$, the projected composition operator is\ndefined by $(K_\\varphi f)(z) = P(f \\circ \\varphi)(z), z \\in\\Omega, f\\in A^2\n(\\Omega).$ In 1994, Rochberg studied boundedness of $K_\\varphi$ on the Hardy\nspace of the unit disk and obtained different necessary or sufficient\nconditions for boundedness of $K_\\varphi$. In this paper we are interested in\nprojected composition operators on Bergman spaces on pseudoconvex domains. We\nstudy boundedness of this operator under smoothness assumptions on the\nsymbol $\\varphi$ on $\\overline\\Omega$.\n"} {"abstract": " A new mechanism for the internal heating of ultra-short-period planets is\nproposed based on the gravitational perturbation by a non-axisymmetric\nquadrupole moment of their host stars. Such a quadrupole is due to the magnetic\nflux tubes in the stellar convection zone, unevenly distributed in longitude\nand persisting for many stellar rotations as observed in young late-type stars.\nThe rotation period of the host star evolves from its shortest value on the\nzero-age main sequence to longer periods due to the loss of angular momentum\nthrough a magnetized wind. If the stellar rotation period comes close to twice\nthe orbital period of the planet, the quadrupole leads to a spin-orbit\nresonance that excites oscillations of the star-planet separation. As a\nconsequence, a strong tidal dissipation is produced inside the planet. We\nillustrate the operation of the mechanism by modeling the evolution of the\nstellar rotation and of the innermost planetary orbit in the cases of CoRoT-7,\nKepler-78, and K2-141 whose present orbital periods range between 0.28 and 0.85\ndays. If the spin-orbit resonance occurs, the maximum power dissipated inside\nthe planets ranges between $10^{18}$ and $10^{19}$ W, while the total\ndissipated energy is of the order of $10^{30}-10^{32}$ J over a time interval\nas short as $(1-4.5) \\times 10^{4}$ yr. Such huge heating over so short a\ntime interval produces a complete melting of the planetary interiors and may\nshut off their hydromagnetic dynamos. These may initiate a successive phase of\nintense internal heating owing to unipolar magnetic star-planet interactions\nand affect the composition and the escape of their atmospheres, producing\neffects that could be observable during the entire lifetime of the planets\n[abridged abstract].\n"} {"abstract": " This paper presents a fully convolutional scene graph generation (FCSGG)\nmodel that detects objects and relations simultaneously. Most of the scene\ngraph generation frameworks use a pre-trained two-stage object detector, like\nFaster R-CNN, and build scene graphs using bounding box features. Such a pipeline\nusually has a large number of parameters and low inference speed. Unlike these\napproaches, FCSGG is a conceptually elegant and efficient bottom-up approach\nthat encodes objects as bounding box center points, and relationships as 2D\nvector fields which are named Relation Affinity Fields (RAFs). RAFs encode\nboth semantic and spatial features, and explicitly represent the relationship\nbetween a pair of objects by the integral on a sub-region that points from\nsubject to object. FCSGG only utilizes visual features and still generates\nstrong results for scene graph generation. Comprehensive experiments on the\nVisual Genome dataset demonstrate the efficacy, efficiency, and\ngeneralizability of the proposed method. 
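A rough numpy sketch of the RAF scoring rule just described, integrating the relation field along the subject-to-object segment, is given below; nearest-pixel sampling and the sample count are simplifying assumptions, not the paper's exact discretization.

```python
import numpy as np

def raf_score(raf, subj_center, obj_center, n_samples=32):
    """Approximate the integral of a 2D relation affinity field along the
    subject->object segment by the mean projection onto its direction.

    raf: array of shape (2, H, W) holding the vector field for one predicate;
    centers are (x, y) pixel coordinates.
    """
    p0, p1 = np.asarray(subj_center, float), np.asarray(obj_center, float)
    d = p1 - p0
    norm = np.linalg.norm(d)
    if norm < 1e-6:
        return 0.0
    u = d / norm
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None, :] + ts[:, None] * d[None, :]          # points on the segment
    xs = np.clip(pts[:, 0].astype(int), 0, raf.shape[2] - 1)
    ys = np.clip(pts[:, 1].astype(int), 0, raf.shape[1] - 1)
    vecs = raf[:, ys, xs].T                               # (n_samples, 2)
    return float(np.mean(vecs @ u))
```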
FCSGG achieves highly competitive\nresults on recall and zero-shot recall with significantly reduced inference\ntime.\n"} {"abstract": " With extreme weather events becoming more common, the risk posed by surface\nwater flooding is ever increasing. In this work we propose a model, and\nassociated Bayesian inference scheme, for generating probabilistic\n(high-resolution short-term) forecasts of localised precipitation. The\nparametrisation of our underlying hierarchical dynamic spatio-temporal model is\nmotivated by a forward-time, centred-space finite difference solution to a\ncollection of stochastic partial differential equations, where the main driving\nforces are advection and diffusion. Observations from both weather radar and\nground-based rain gauges provide information from which we can learn about the\nlikely values of the (latent) precipitation field in addition to other unknown\nmodel parameters. Working in the Bayesian paradigm provides a coherent\nframework for capturing uncertainty both in the underlying model parameters and\nalso in our forecasts. Further, appealing to simulation-based (MCMC) sampling\nyields a straightforward solution to handling zeros, treated as censored\nobservations, via data augmentation. Both the underlying state and the\nobservations are of moderately large dimension ($\\mathcal{O}(10^4)$ and\n$\\mathcal{O}(10^3)$ respectively) and this renders standard inference\napproaches computationally infeasible. Our solution is to embed the ensemble\nKalman smoother within a Gibbs sampling scheme to facilitate approximate\nBayesian inference in reasonable time. Both the methodology and the\neffectiveness of our posterior sampling scheme are demonstrated via simulation\nstudies and also by a case study of real data from the Urban Observatory\nproject based in Newcastle upon Tyne, UK.\n"} {"abstract": " Continuous variable measurement-based quantum computation on cluster states\nhas in recent years shown great potential for scalable, universal, and\nfault-tolerant quantum computation when combined with the\nGottesman-Kitaev-Preskill (GKP) code and quantum error correction. However, no\ncomplete fault-tolerant architecture exists that includes everything from\ncluster state generation with finite squeezing to gate implementations with\nrealistic noise and error correction. In this work, we propose a simple\narchitecture for the preparation of a cluster state in three dimensions in\nwhich gates by gate teleportation can be efficiently implemented. To\naccommodate scalability, we propose architectures that allow for both spatial\nand temporal multiplexing, with the temporally encoded version requiring as\nfew as two squeezed light sources. Due to its three-dimensional structure,\nthe architecture supports topological qubit error correction, while GKP error\ncorrection is efficiently realized within the architecture by teleportation. To\nvalidate fault-tolerance, the architecture is simulated using surface-GKP\ncodes, including noise from GKP-states as well as gate noise caused by finite\nsqueezing in the cluster state. We find a fault-tolerant squeezing threshold of\n12.7 dB with room for further improvement.\n"} {"abstract": " In high energy exclusive processes involving leptons, QED corrections can be\nsensitive to infrared scales like the lepton mass and the soft photon energy\ncut, resulting in large logarithms that need to be resummed to all orders in\n$\\alpha$. 
When considering the ratio of the exclusive processes between two\ndifferent lepton flavors, the ratio $R$ can be expressed in terms of factorized\nfunctions in the decoupled leptonic sectors. While some of the functional terms\ncancel, there remain the large logarithms due to the lepton mass difference and\nthe energy cut. This factorization process can be universally applied to the\nexclusive processes such as $Z\\to l^+l^-$ and $B^-\\to l^-\\bar{\\nu}_l$, where\nthe resummed result in the ratio gives significant deviations from the naive\nexpectation from lepton universality.\n"} {"abstract": " A hyperbolic group acts by homeomorphisms on its Gromov boundary. We show\nthat if this boundary is a topological n-sphere the action is topologically\nstable in the dynamical sense: any nearby action is semi-conjugate to the\nstandard boundary action.\n"} {"abstract": " In this paper we consider the problem of pointwise determining the fibres of\nthe flat unitary subbundle of a PVHS of weight one. Starting from the\nassociated Higgs field, and assuming the base has dimension $1$, we construct a\nfamily of (smooth but possibly non-holomorphic) morphisms of vector bundles\nwith the property that the intersection of their kernels at a general point is\nthe fibre of the flat subbundle. We explore the first one of these morphisms in\nthe case of a geometric PVHS arising from a family of smooth projective curves,\nshowing that it acts as the cup-product with some sort of \"second-order\nKodaira-Spencer class\" which we introduce, and check in the case of a family of\nsmooth plane curves that this additional condition is non-trivial.\n"} {"abstract": " Multi-segment reconstruction (MSR) is the problem of estimating a signal\ngiven noisy partial observations. Here each observation corresponds to a\nrandomly located segment of the signal. While previous works address this\nproblem using template or moment-matching, in this paper we address MSR from an\nunsupervised adversarial learning standpoint, named MSR-GAN. We formulate MSR\nas a distribution matching problem where the goal is to recover the signal and\nthe probability distribution of the segments such that the distribution of the\ngenerated measurements following a known forward model is close to the real\nobservations. This is achieved once a min-max optimization involving a\ngenerator-discriminator pair is solved. MSR-GAN is mainly inspired by CryoGAN\n[1]. However, in MSR-GAN we no longer assume the probability distribution of\nthe latent variables, i.e. segment locations, is given and seek to recover it\nalongside the unknown signal. For this purpose, we show that the loss at the\ngenerator side is originally non-differentiable with respect to the segment\ndistribution. Thus, we propose to approximate it using the Gumbel-Softmax\nreparametrization trick. Our proposed solution is generalizable to a wide range\nof inverse problems. Our simulation results and comparison with various\nbaselines verify the potential of our approach in different settings.\n"} {"abstract": " We compare the radial profiles of the specific star formation rate (sSFR) in\na sample of 169 star-forming galaxies in close pairs with those of mass-matched\ncontrol galaxies in the SDSS-IV MaNGA survey. We find that the sSFR is\ncentrally enhanced (within one effective radius) in interacting galaxies by\n~0.3 dex and that there is a weak sSFR suppression in the outskirts of the\ngalaxies of ~0.1 dex. 
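Referring back to the MSR-GAN abstract above, a minimal sketch of the Gumbel-Softmax reparametrization it invokes for drawing discrete segment locations differentiably is shown below; the temperature and tensor sizes are illustrative.

```python
import torch
import torch.nn.functional as F

def sample_segment_locations(logits, tau=0.5, hard=True):
    """Differentiable draw from a categorical over segment locations.

    logits: (batch, n_locations) unnormalized log-probabilities (learned).
    With hard=True the forward pass is one-hot while gradients flow through
    the soft sample (straight-through estimator).
    """
    return F.gumbel_softmax(logits, tau=tau, hard=hard)

# Illustrative usage: pick one of 16 possible segment offsets for 8 samples.
logits = torch.zeros(8, 16, requires_grad=True)
one_hot = sample_segment_locations(logits)
print(one_hot.shape, one_hot.sum(dim=1))   # each row sums to 1
```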
We stack the difference profiles for galaxies in five\nstellar mass bins between log(M/Mstar) = 9.0 and 11.5 and find that the sSFR\nenhancement has no dependence on the stellar mass. The same result is obtained\nwhen the comparison galaxies are matched to each paired galaxy in both stellar\nmass and redshift. In addition, we find that the sSFR enhancement is\nelevated in pairs with nearly equal masses and closer projected separations, in\nagreement with previous work based on single-fiber spectroscopy. We also find\nthat the sSFR offsets in the outskirts of the paired galaxies are dependent on\nwhether the galaxy is the more massive or less massive companion in the pair.\nThe more massive companion experiences zero to a positive sSFR enhancement\nwhile the less massive companion experiences sSFR suppression in their\noutskirts. Our results illustrate the complex tidal effects on star formation\nin closely paired galaxies.\n"} {"abstract": " Solving Partially Observable Markov Decision Processes (POMDPs) is hard.\nLearning optimal controllers for POMDPs when the model is unknown is harder.\nOnline learning of optimal controllers for unknown POMDPs, which requires\nefficient learning using regret-minimizing algorithms that effectively trade off\nexploration and exploitation, is even harder, and no solution exists currently.\nIn this paper, we consider infinite-horizon average-cost POMDPs with an unknown\ntransition model but a known observation model. We propose a natural\nposterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show\nthat it achieves a regret bound of $O(\\log T)$, where $T$ is the time horizon,\nwhen the parameter set is finite. In the general case (continuous parameter\nset), we show that the algorithm achieves $O (T^{2/3})$ regret under two\ntechnical assumptions. To the best of our knowledge, this is the first online\nRL algorithm for POMDPs and has sub-linear regret.\n"} {"abstract": " Let $D$ be a $k$-regular bipartite tournament on $n$ vertices. We show that,\nfor every $p$ with $2 \\le p \\le n/2-2$, $D$ has a cycle $C$ of length $2p$ such\nthat $D \\setminus C$ is hamiltonian unless $D$ is isomorphic to the special\ndigraph $F_{4k}$. This statement was conjectured by Manoussakis, Song and Zhang\n[K. Zhang, Y. Manoussakis, and Z. Song. Complementary cycles containing a fixed\narc in diregular bipartite tournaments. Discrete Mathematics,\n133(1-3):325--328, 1994]. In the same paper, the conjecture was proved for $p=2$\nand more recently Bai, Li and He gave a proof for $p=3$ [Y. Bai, H. Li, and W.\nHe. Complementary cycles in regular bipartite tournaments. Discrete\nMathematics, 333:14--27, 2014].\n"} {"abstract": " Missing time-series data is a prevalent practical problem. Imputation methods\nin time-series data are often applied to the full panel data with the purpose\nof training a model for a downstream out-of-sample task. For example, in\nfinance, imputation of missing returns may be applied prior to training a\nportfolio optimization model. Unfortunately, this practice may result in a\nlook-ahead-bias in the future performance on the downstream task. There is an\ninherent trade-off between the look-ahead-bias of using the full data set for\nimputation and the larger variance in the imputation from using only the\ntraining data. By connecting layers of information revealed in time, we propose\na Bayesian posterior consensus distribution which optimally controls the\nvariance and look-ahead-bias trade-off in the imputation. 
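A heavily simplified illustration of the trade-off just described: blend an estimate computed from training data only (unbiased with respect to the future, higher variance) with one computed from the full panel (lower variance, look-ahead-biased). The Gaussian-mean setting and the blending weight are assumptions; the paper's posterior consensus distribution is more general.

```python
import numpy as np

def consensus_impute(train_returns, full_returns, w=0.5):
    """Convex combination of a training-only and a full-sample mean estimate.

    w is a hypothetical trade-off weight in [0, 1]; w = 0 uses training data
    only (no look-ahead-bias), w = 1 uses the full panel (lowest variance).
    """
    mu_train = np.nanmean(train_returns)
    mu_full = np.nanmean(full_returns)
    return (1 - w) * mu_train + w * mu_full
```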
We demonstrate the\nbenefit of our methodology on both synthetic and real financial data.\n"} {"abstract": " Recent studies have shown that self-supervised methods based on view\nsynthesis achieve clear progress on multi-view stereo (MVS). However, existing\nmethods rely on the assumption that the corresponding points among different\nviews share the same color, which may not always be true in practice. This may\nlead to an unreliable self-supervised signal and harm the final reconstruction\nperformance. To address the issue, we propose a framework integrated with more\nreliable supervision guided by semantic co-segmentation and data-augmentation.\nSpecifically, we excavate mutual semantics from multi-view images to guide\nsemantic consistency. We also devise an effective data-augmentation mechanism\nwhich ensures transformation robustness by treating the prediction of regular\nsamples as pseudo ground truth to regularize the prediction of augmented\nsamples. Experimental results on the DTU dataset show that our proposed methods\nachieve state-of-the-art performance among unsupervised methods, and even\ncompete on par with supervised methods. Furthermore, extensive experiments on\nthe Tanks&Temples dataset demonstrate the effective generalization ability of the\nproposed method.\n"} {"abstract": " Nearly a decade ago, Azrieli and Shmaya introduced the class of\n$\\lambda$-Lipschitz games in which every player's payoff function is\n$\\lambda$-Lipschitz with respect to the actions of the other players. They\nshowed that such games admit $\\epsilon$-approximate pure Nash equilibria for\ncertain settings of $\\epsilon$ and $\\lambda$. They left open, however, the\nquestion of how hard it is to find such an equilibrium. In this work, we\ndevelop a query-efficient reduction from more general games to Lipschitz games.\nWe use this reduction to show a query lower bound for any randomized algorithm\nfinding $\\epsilon$-approximate pure Nash equilibria of $n$-player,\nbinary-action, $\\lambda$-Lipschitz games that is exponential in\n$\\frac{n\\lambda}{\\epsilon}$. In addition, we introduce ``Multi-Lipschitz\ngames,'' a generalization involving player-specific Lipschitz values, and\nprovide a reduction from finding equilibria of these games to finding\nequilibria of Lipschitz games, showing that the value of interest is the sum of\nthe individual Lipschitz parameters. Finally, we provide an exponential lower\nbound on the deterministic query complexity of finding $\\epsilon$-approximate\ncorrelated equilibria of $n$-player, $m$-action, $\\lambda$-Lipschitz games for\nstrong values of $\\epsilon$, motivating the consideration of explicitly\nrandomized algorithms in the above results. Our proof is arguably simpler than\nthose previously used to show similar results.\n"} {"abstract": " The WIMP proposed here yields the observed abundance of dark matter, and is\nconsistent with the current limits from direct detection, indirect detection,\nand collider experiments, if its mass is $\\sim 72$ GeV/$c^2$. It is also\nconsistent with analyses of the gamma rays observed by Fermi-LAT from the\nGalactic center (and other sources), and of the antiprotons observed by AMS-02,\nin which the excesses are attributed to dark matter annihilation. These\nsuccesses are shared by the inert doublet model (IDM), but the phenomenology is\nvery different: The dark matter candidate of the IDM has first-order gauge\ncouplings to other new particles, whereas the present candidate does not. 
In\naddition to indirect detection through annihilation products, it appears that\nthe present particle can be observed in the most sensitive direct-detection and\ncollider experiments currently being planned.\n"} {"abstract": " It has been shown that the performance of neural machine translation (NMT)\ndrops starkly in low-resource conditions, often requiring large amounts of\nauxiliary data to achieve competitive results. An effective method of\ngenerating auxiliary data is back-translation of target language sentences. In\nthis work, we present a case study of Tigrinya where we investigate several\nback-translation methods to generate synthetic source sentences. We find that\nin low-resource conditions, back-translation by pivoting through a\nhigher-resource language related to the target language proves most effective,\nresulting in substantial improvements over baselines.\n"} {"abstract": " This paper proposes architectures that facilitate the extrapolation of\nemotional expressions in deep neural network (DNN)-based text-to-speech (TTS).\nIn this study, the meaning of \"extrapolate emotional expressions\" is to borrow\nemotional expressions from others, and the collection of emotional speech\nuttered by target speakers is unnecessary. Although a DNN has potential power\nto construct DNN-based TTS with emotional expressions and some DNN-based TTS\nsystems have demonstrated satisfactory performances in the expression of the\ndiversity of human speech, it is necessary and troublesome to collect emotional\nspeech uttered by target speakers. To solve this issue, we propose\narchitectures to separately train the speaker feature and the emotional feature\nand to synthesize speech with any combined quality of speakers and emotions.\nThe architectures are parallel model (PM), serial model (SM), auxiliary input\nmodel (AIM), and hybrid models (PM&AIM and SM&AIM). These models are trained\nthrough emotional speech uttered by a few speakers and neutral speech uttered by\nmany speakers. Objective evaluations demonstrate that the performances in the\nopen-emotion test provide insufficient information; they can be compared with\nthose in the closed-emotion test, but each speaker has their own manner of\nexpressing emotion. However, subjective evaluation results indicate that the\nproposed models could convey emotional information to some extent. Notably, the\nPM can correctly convey sad and joyful emotions at a rate of >60%.\n"} {"abstract": " The baroclinic annular mode (BAM) is a leading-order mode of the eddy-kinetic\nenergy in the Southern Hemisphere exhibiting oscillatory behavior at\nintra-seasonal time scales. The oscillation mechanism has been linked to\ntransient eddy-mean flow interactions that remain poorly understood. Here we\ndemonstrate that the finite memory effect in eddy-heat flux dependence on the\nlarge-scale flow can explain the origin of the BAM's oscillatory behavior. We\nrepresent the eddy memory effect by a delayed integral kernel that leads to a\ngeneralized Langevin equation for the planetary-scale heat equation. Using a\nmathematical framework for the interactions between planetary and\nsynoptic-scale motions, we derive a reduced dynamical model of the BAM - a\nstochastically-forced oscillator with a period proportional to the geometric\nmean between the eddy-memory time scale and the diffusive eddy equilibration\ntimescale. 
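The reduced BAM model just described is straightforward to simulate as a stochastically forced oscillator whose period scales with the geometric mean of the two time scales; the damping, forcing amplitude, and time step in this sketch are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_bam(tau_memory=8.0, tau_diffusive=32.0, damping=0.05,
                 dt=0.1, n_steps=20000, seed=0):
    """Euler-Maruyama integration of a stochastically forced linear oscillator.

    The natural period is set to sqrt(tau_memory * tau_diffusive), i.e. the
    geometric mean of the eddy-memory and eddy-equilibration time scales.
    """
    rng = np.random.default_rng(seed)
    omega = 2.0 * np.pi / np.sqrt(tau_memory * tau_diffusive)
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        accel = -omega**2 * x - damping * v
        v += accel * dt + rng.normal(0.0, np.sqrt(dt))   # stochastic forcing
        x += v * dt
        xs[i] = x
    return xs
```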
Our model provides a formal justification for the previously\nproposed phenomenological model of the BAM and could be used to explicitly\ndiagnose the memory kernel and improve our understanding of transient eddy-mean\nflow interactions in the atmosphere.\n"} {"abstract": " Spelling irregularities, now known as spelling mistakes, have existed for\nseveral centuries. As humans, we are able to understand most of the misspelled\nwords based on their location in the sentence, perceived pronunciation, and\ncontext. Unlike humans, computer systems do not possess the convenient\nauto-complete functionality of which human brains are capable. While many\nprograms provide spelling correction functionality, many systems do not take\ncontext into account. Moreover, Artificial Intelligence systems function\naccording to the data they are trained on. With many current Natural Language\nProcessing (NLP) systems trained on grammatically correct text data, many are\nvulnerable to adversarial examples, yet correctly spelled text processing is\ncrucial for learning. In this paper, we investigate how spelling errors can be\ncorrected in context, with the pre-trained language model BERT. We present two\nexperiments, based on BERT and the edit distance algorithm, for ranking and\nselecting candidate corrections. The results of our experiments demonstrated\nthat when combined properly, contextual word embeddings of BERT and edit\ndistance are capable of effectively correcting spelling errors.\n"} {"abstract": " In game theory, mechanism design is concerned with the design of incentives\nso that a desired outcome of the game can be achieved. In this paper, we study\nthe design of incentives so that a desirable equilibrium is obtained, for\ninstance, an equilibrium satisfying a given temporal logic property -- a\nproblem that we call equilibrium design. We base our study on a framework where\nsystem specifications are represented as temporal logic formulae, games as\nquantitative concurrent game structures, and players' goals as mean-payoff\nobjectives. In particular, we consider system specifications given by LTL and\nGR(1) formulae, and show that implementing a mechanism to ensure that a given\ntemporal logic property is satisfied on some/every Nash equilibrium of the\ngame, whenever such a mechanism exists, can be done in PSPACE for LTL\nproperties and in NP/$\\Sigma^{P}_{2}$ for GR(1) specifications. We also study\nthe complexity of various related decision and optimisation problems, such as\noptimality and uniqueness of solutions, and show that the complexities of all\nsuch problems lie within the polynomial hierarchy. As an application,\nequilibrium design can be used as an alternative solution to the rational\nsynthesis and verification problems for concurrent games with mean-payoff\nobjectives whenever no solution exists, or as a technique to repair, whenever\npossible, concurrent games with undesirable rational outcomes (Nash equilibria)\nin an optimal way.\n"} {"abstract": " Fonts have had trends throughout their history, not only in when they were\ninvented but also in their usage and popularity. In this paper, we attempt to\nspecifically find the trends in font usage using robust regression on a large\ncollection of text images. We utilize movie posters as the source of fonts for\nthis task because movie posters can represent time periods by using their\nrelease date. In addition, movie posters are documents that are carefully\ndesigned and represent a wide range of fonts. 
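Referring back to the spelling-correction abstract above, a hedged sketch of ranking candidate corrections by combining a masked-language-model score with edit distance is given below; the combination weight and the `mlm_log_prob` callback are assumptions, with the callback standing in for a real BERT masked-LM call.

```python
from typing import Callable, List, Tuple

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def rank_corrections(misspelled: str, candidates: List[str],
                     mlm_log_prob: Callable[[str], float],
                     alpha: float = 1.0) -> List[Tuple[str, float]]:
    """Rank candidates by masked-LM fit penalized by edit distance.

    mlm_log_prob(candidate) is a hypothetical stand-in for BERT's
    log-probability of the candidate in the masked sentence context;
    alpha is a hypothetical trade-off weight.
    """
    scored = [(c, mlm_log_prob(c) - alpha * edit_distance(misspelled, c))
              for c in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```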
To understand the relationship\nbetween the fonts of movie posters and time, we use a regression Convolutional\nNeural Network (CNN) to estimate the release year of a movie using an isolated\ntitle text image. Due to the difficulty of the task, we propose the use of a\nhybrid training regimen that uses a combination of Mean Squared Error (MSE) and\nTukey's biweight loss. Furthermore, we perform a thorough analysis on the\ntrends of fonts through time.\n"} {"abstract": " Using characteristics to treat advection terms in time-dependent PDEs leads\nto a class of schemes, e.g., semi-Lagrangian and Lagrange-Galerkin schemes,\nwhich preserve stability under large Courant numbers, and may therefore be\nappealing in many practical situations. Unfortunately, the need to locate the\nfeet of characteristics may cause a serious drop of efficiency in the case of\nunstructured space grids, and thus prevent the use of large time-step schemes\non complex geometries. In this paper, we perform an in-depth analysis of the\nmain recipes available for characteristic location, and propose a technique to\nimprove the efficiency of this phase, using additional information related to\nthe advecting vector field. This results in a clear improvement of execution\ntimes in the unstructured case, thus extending the range of applicability of\nlarge time-step schemes.\n"} {"abstract": " In this paper, we construct a sequence $(c_k)_{k\\in\\mathbb{Z}_{\\geq 1}}$ of\nsymplectic capacities based on the Chiu-Tamarkin complex $C_{T,\\ell}$, a\n$\\mathbb{Z}/\\ell\\mathbb{Z}$-equivariant invariant coming from the microlocal\ntheory of sheaves. We compute $(c_k)_{k\\in\\mathbb{Z}_{\\geq 1}}$ for convex\ntoric domains, which are the same as the Gutt-Hutchings capacities. On the\nother hand, our method works for the contact embedding problem. We define a\nsequence of \"contact capacities\" $([c]_k)_{k\\in\\mathbb{Z}_{\\geq 1}}$ on the\nprequantized contact manifold $\\mathbb{R}^{2d}\\times S^1$, from which one can\nderive some embedding obstructions of prequantized convex toric domains.\n"} {"abstract": " We prove that all homology 3-spheres are $J_4$-equivalent, i.e. that any\nhomology 3-sphere can be obtained from one another by twisting one of its\nHeegaard splittings by an element of the mapping class group acting trivially\non the fourth nilpotent quotient of the fundamental group of the gluing\nsurface. We do so by exhibiting an element of $J_4$, the fourth term of the\nJohnson filtration of the mapping class group, on which (the core of) the\nCasson invariant takes the value $1$. In particular, this provides an explicit\nexample of an element of $J_4$ that is not a commutator of length $2$ in the\nTorelli group.\n"} {"abstract": " This paper presents an approach to deal with safety of dynamical systems in\nthe presence of multiple non-convex unsafe sets. While optimal control and model\npredictive control strategies can be employed in these scenarios, they suffer\nfrom high computational complexity in the case of general nonlinear systems.\nLeveraging control barrier functions, on the other hand, results in\ncomputationally efficient control algorithms. Nevertheless, when safety\nguarantees have to be enforced alongside stability objectives, undesired\nasymptotically stable equilibrium points have been shown to arise. We propose a\ncomputationally efficient optimization-based approach which allows us to ensure\nsafety of dynamical systems without introducing undesired equilibria even in\nthe presence of multiple non-convex unsafe sets. 
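Returning to the font-regression abstract above, the hybrid MSE/Tukey objective it mentions might be sketched as follows; the tuning constant c = 4.685 is the conventional choice for the biweight, and the mixing weight is an assumption.

```python
import numpy as np

def tukey_biweight_loss(residual, c=4.685):
    """Tukey's biweight (bisquare) loss: quadratic-like near zero, constant
    beyond |residual| = c, which caps the influence of outliers."""
    r = np.abs(residual)
    inside = (c**2 / 6.0) * (1.0 - (1.0 - (r / c) ** 2) ** 3)
    return np.where(r <= c, inside, c**2 / 6.0)

def hybrid_loss(pred_year, true_year, lam=0.5):
    """Hypothetical blend of MSE and Tukey's biweight for year regression."""
    res = pred_year - true_year
    return np.mean((1 - lam) * res**2 + lam * tukey_biweight_loss(res))
```

The design intuition is that the MSE term keeps gradients informative for moderate errors while the biweight term prevents badly mispredicted posters from dominating training.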
The developed control algorithm is\nshowcased in simulation and in a real robot navigation application.\n"} {"abstract": " Advances in synthetic biology and nanotechnology have contributed to the\ndesign of tools that can be used to control, reuse, modify, and re-engineer\ncells' structure, as well as enabling engineers to effectively use biological\ncells as programmable substrates to realize Bio-Nano Things (biological\nembedded computing devices). Bio-NanoThings are generally tiny, non-intrusive,\nand concealable devices that can be used for in-vivo applications such as\nintra-body sensing and actuation networks, where the use of artificial devices\ncan be detrimental. Such (nano-scale) devices can be used in various healthcare\nsettings such as continuous health monitoring, targeted drug delivery, and\nnano-surgeries. These services can also be grouped to form a collaborative\nnetwork (i.e., nanonetwork), whose performance can potentially be improved when\nconnected to higher bandwidth external networks such as the Internet, say via\n5G. However, to realize the IoBNT paradigm, it is also important to seamlessly\nconnect the biological environment with the technological landscape by having a\ndynamic interface design to convert biochemical signals from the human body\ninto an equivalent electromagnetic signal (and vice versa). This,\nunfortunately, risks the exposure of internal biological mechanisms to\ncyber-based sensing and medical actuation, with potential security and privacy\nimplications. This paper comprehensively reviews the bio-cyber interface for the\nIoBNT architecture, focusing on bio-cyber interfacing options for IoBNT like\nbiologically inspired bio-electronic devices, RFID-enabled implantable chips,\nand electronic tattoos. This study also identifies known and potential security\nand privacy vulnerabilities and mitigation strategies for consideration in\nfuture IoBNT designs and implementations.\n"} {"abstract": " In this paper, we introduce a novel deep neural network suitable for\nmulti-scale analysis and propose efficient model-agnostic methods that help the\nnetwork extract information from high-frequency domains to reconstruct clearer\nimages. Our model can be applied to multi-scale image enhancement problems\nincluding denoising, deblurring and single image super-resolution. Experiments\non SIDD, Flickr2K, DIV2K, and REDS datasets show that our method achieves\nstate-of-the-art performance on each task. Furthermore, we show that our model\ncan overcome the over-smoothing problem commonly observed in existing\nPSNR-oriented methods and generate more natural high-resolution images by\napplying adversarial training.\n"} {"abstract": " Topological phases exhibit unconventional order that cannot be detected by\nany local order parameter. In the framework of Projected Entangled Pair\nStates (PEPS), topological order is characterized by an entanglement symmetry of\nthe local tensor which describes the model. This symmetry can take the form of\na tensor product of group representations, or in the more general case a\ncorrelated symmetry action in the form of a Matrix Product Operator (MPO), which\nencompasses all string-net models. 
Among other things, these entanglement\nsymmetries allow for the description of ground states and anyon excitations.\nRecently, the idea has been put forward to use those symmetries and the anyonic\nobjects they describe as order parameters for probing topological phase\ntransitions, and the applicability of this idea has been demonstrated for\nAbelian groups. In this paper, we extend this construction to the domain of\nnon-Abelian models with MPO symmetries, and use it to study the breakdown of\ntopological order in the double Fibonacci (DFib) string-net and its Galois\nconjugate, the non-hermitian double Yang-Lee (DYL) string-net. We start by\nshowing how to construct topological order parameters for condensation and\ndeconfinement of anyons using the MPO symmetries. Subsequently, we set up\ninterpolations from the DFib and the DYL model to the trivial phase, and show\nthat these can be mapped to certain restricted solid-on-solid (RSOS) models,\nwhich are equivalent to the $((5\\pm\\sqrt{5})/2)$-state Potts model,\nrespectively. The known exact solutions of the statistical models allow us to\nlocate the critical points, and to predict the critical exponents for the order\nparameters. We complement this by a numerical study of the phase transitions,\nwhich fully confirms our theoretical predictions; remarkably, we find that both\nmodels exhibit a duality between the order parameters for condensation and\ndeconfinement.\n"} {"abstract": " In this paper we propose a novel class of methods for high-order accurate\nintegration of multirate systems of ordinary differential equation\ninitial-value problems. The proposed methods construct multirate schemes by\napproximating the action of matrix $\\varphi$-functions within explicit\nexponential Rosenbrock (ExpRB) methods, hence called Multirate Exponential\nRosenbrock (MERB) methods. They consist of the solution to a sequence of\nmodified \"fast\" initial-value problems, which may themselves be approximated\nby subcycling any desired IVP solver. In addition to proving how to\nconstruct MERB methods from certain classes of ExpRB methods, we provide\nrigorous convergence analysis of these methods and derive efficient MERB\nschemes of orders two through six (the highest-order infinitesimal multirate\nmethods constructed to date). We then present numerical simulations to\nconfirm these theoretical convergence rates, and to compare the efficiency of\nMERB methods against other recently-introduced high-order multirate methods.\n"} {"abstract": " When the inflaton couples to photons and amplifies electric fields, charged\nparticles produced via the Schwinger effect can dominate the universe after\ninflation, which is dubbed Schwinger preheating. Using the hydrodynamic\napproach for the Boltzmann equation, we numerically study two cases, the\nStarobinsky inflation model with the kinetic coupling and the anisotropic\ninflation model. The Schwinger preheating is not observed in the latter model\nbut occurs for a sufficiently large inflaton-photon coupling in the first\nmodel. We analytically address its condition and derive a general attractor\nsolution of the electric fields. 
The occurrence of the Schwinger preheating in\nthe first model is determined by whether the electric fields enter the\nattractor solution during inflation or not.\n"} {"abstract": " In this paper we have presented the mechanism of the barrier crossing\ndynamics of a Brownian particle which is coupled to a thermal bath in the\npresence of both time-independent and fluctuating magnetic fields. Here the\nfollowing three aspects are important in addition to the role of the thermal\nbath on the barrier crossing dynamics. Magnetic field induced coupling may\nintroduce a resonance-like effect. Another role of the field is that\nenhancement of its strength reduces the frequency factor of the barrier\ncrossing rate constant. Finally, the fluctuating magnetic field introduces an\ninduced electric field which activates the Brownian particle to cross the\nenergy barrier. As a result of the interplay among these aspects, versatile\nnon-monotonic behavior may appear in the variation of the rate constant as a\nfunction of the strength of the time-independent magnetic field.\n"} {"abstract": " The space radiation environment is a complex combination of fast-moving ions\nderived from all atomic species found in the periodic table. The energy\nspectrum of each ion species varies widely but is predominantly in the range of\n400 - 600 MeV/n. The large dynamic range in ion energy is difficult to simulate\nin ground-based radiobiology experiments. Most ground-based irradiations with\nmono-energetic beams of a single ion species are delivered at comparatively\nhigh dose rates. In some cases, sequences of such beams are delivered with\nvarious ion species and energies to crudely approximate the complex space\nradiation environment. This approximation may cause profound experimental bias\nin processes such as biologic repair of radiation damage, which are known to\nhave strong temporal dependencies. It is possible that this experimental bias\nleads to an overprediction of risks of radiation effects that have not been\nobserved in the astronaut cohort. None of the primary health risks presumably\nattributed to space radiation exposure, such as radiation carcinogenesis,\ncardiovascular disease, cognitive deficits, etc., have been observed in\nastronaut or cosmonaut crews. This fundamentally and profoundly limits our\nunderstanding of the effects of GCR on humans and limits the development of\neffective radiation countermeasures.\n"} {"abstract": " Time series forecasting methods play a critical role in estimating the spread\nof an epidemic. The coronavirus outbreak of December 2019 has already infected\nmillions all over the world and continues to spread. Just when the curve of\nthe outbreak had started to flatten, many countries have again started to\nwitness a rise in cases, which is now being referred to as the 2nd wave of the\npandemic. A thorough analysis of time-series forecasting models is therefore\nrequired to equip state authorities and health officials with immediate\nstrategies for future times. 
The aims of the study are three-fold: (a) To\nmodel the overall trend of the spread; (b) To generate a short-term forecast of\n10 days in countries with the highest incidence of confirmed cases (USA, India\nand Brazil); (c) To quantitatively determine the algorithm that is best suited\nfor precise modelling of the linear and non-linear features of the time series.\nThe comparison of forecasting models for the total cumulative cases of each\ncountry is carried out by comparing the reported data and the predicted value,\nand then ranking the algorithms (Prophet, Holt-Winters, LSTM, ARIMA, and\nARIMA-NARNN) based on their RMSE, MAE and MAPE values. The hybrid combination\nof ARIMA and NARNN (Nonlinear Auto-Regression Neural Network) gave the best\nresult among the selected models with a reduced RMSE, which proved to be almost\n35.3% better than one of the most prevalent methods of time-series prediction\n(ARIMA). The results demonstrated the efficacy of the hybrid implementation of\nthe ARIMA-NARNN model over other forecasting methods such as Prophet, Holt\nWinters, LSTM, and the ARIMA model in encapsulating the linear as well as\nnon-linear patterns of the epidemical datasets.\n"} {"abstract": " We present mid-infrared imaging of two young clusters, the Coronet in the CrA\ncloud core and B59 in the Pipe Nebula, using the FORCAST camera on the\nStratospheric Observatory for Infrared Astronomy. We also analyze Herschel\nSpace Observatory PACS and SPIRE images of the associated clouds. The two\nclusters are at similar, and very close, distances. Star formation is ongoing\nin the Coronet, which hosts at least one Class 0 source and several pre-stellar\ncores, which may collapse and form stars. The B59 cluster is older, although it\nstill has a few Class I sources, and is less compact. The CrA cloud has a\ndiameter of about 0.16 pc, and we determine a dust temperature of 15.7 K and a\nstar formation efficiency of about 27 %, while the B59 core is approximately\ntwice as large, has a dust temperature of about 11.4 K and a star formation\nefficiency of about 14 %. We infer that the gas densities are much higher in\nthe Coronet, which has also formed intermediate mass stars, while B59 has only\nformed low-mass stars.\n"} {"abstract": " This paper presents approximation methods for time-dependent thermal\nradiative transfer problems in high energy density physics. It is based on the\nmultilevel quasidiffusion method defined by the high-order radiative transfer\nequation (RTE) and the low-order quasidiffusion (aka VEF) equations for the\nmoments of the specific intensity. A large part of data storage in TRT problems\nbetween time steps is determined by the dimensionality of grid functions of the\nradiation intensity. The approximate implicit methods with reduced memory for\nthe time-dependent Boltzmann equation are applied to the high-order RTE,\ndiscretized in time with the backward Euler (BE) scheme. The high-dimensional\nintensity from the previous time level in the BE scheme is approximated by\nmeans of the low-rank proper orthogonal decomposition (POD). Another version of\nthe presented method applies the POD to the remainder term of the P2 expansion\nof the intensity. The accuracy of the solution of the approximate implicit\nmethods depends on the rank of the POD. The proposed methods enable one to\nreduce storage requirements in time-dependent problems. 
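The low-rank POD compression just described amounts, in its simplest form, to a truncated SVD of a snapshot matrix; the rank and snapshot layout in the sketch below are illustrative, not the paper's discretization.

```python
import numpy as np

def pod_compress(snapshots, rank):
    """Keep the leading `rank` POD modes of a snapshot matrix.

    snapshots: (n_dofs, n_snapshots) array, e.g. intensity samples over the
    phase-space grid. Returns factors whose product approximates the data,
    stored as n_dofs*rank + rank*n_snapshots values instead of the full matrix.
    """
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    U_r = U[:, :rank]
    coeffs = s[:rank, None] * Vt[:rank, :]
    return U_r, coeffs

# Illustrative check on random low-rank-plus-noise data:
X = np.random.rand(2000, 3) @ np.random.rand(3, 50) + 1e-6 * np.random.rand(2000, 50)
U_r, C = pod_compress(X, rank=3)
print(np.linalg.norm(X - U_r @ C) / np.linalg.norm(X))   # small relative error
```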
Numerical results of a\nFleck-Cummings TRT test problem are presented.\n"} {"abstract": " In this paper we study nonlinear interpolation problems for interpolation and\npeak-interpolation sets of function algebras. The subject goes back to the\nclassical Rudin-Carleson interpolation theorem. In particular, we prove the\nfollowing nonlinear version of this theorem:\n Let $\\bar{\\mathbb D}\\subset \\mathbb C$ be the closed unit disk, $\\mathbb\nT\\subset\\bar{\\mathbb D}$ the unit circle, $S\\subset\\mathbb T$ a closed subset\nof Lebesgue measure zero and $M$ a connected complex manifold.\n Then for every continuous $M$-valued map $f$ on $S$ there exists a continuous\n$M$-valued map $g$ on $\\bar{\\mathbb D}$ holomorphic on its interior such that\n$g|_S=f$. We also consider similar interpolation problems for continuous maps\n $f: S\\rightarrow\\bar M$, where $\\bar M$ is a complex manifold with boundary\n$\\partial M$ and interior $M$. Assuming that $f(S)\\cap\\partial M\\ne\\emptyset$\nwe are looking for holomorphic extensions $g$ of $f$ such that $g(\\bar{\\mathbb\nD}\\setminus S)\\subset M$.\n"} {"abstract": " In this paper, we investigate Riesz energy problems on unbounded conductors\nin $\\R^d$ in the presence of general external fields $Q$, not necessarily\nsatisfying the growth condition $Q(x)\\to\\infty$ as $x\\to\\infty$ assumed in\nseveral previous studies. We provide sufficient conditions on $Q$ for the\nexistence of an equilibrium measure and the compactness of its support.\nParticular attention is paid to the case of the hyperplanar conductor $\\R^{d}$,\nembedded in $\\R^{d+1}$, when the external field is created by the potential of\na signed measure $\\nu$ outside of $\\R^{d}$. Simple cases where $\\nu$ is a\ndiscrete measure are analyzed in detail. New theoretic results for Riesz\npotentials, in particular an extension of a classical theorem by de La\nVall\\'ee-Poussin, are established. These results are of independent interest.\n"} {"abstract": " DNA sequencing is becoming increasingly commonplace, both in medical and\ndirect-to-consumer settings. To promote discovery, collected genomic data is\noften de-identified and shared, either in public repositories, such as OpenSNP,\nor with researchers through access-controlled repositories. However, recent\nstudies have suggested that genomic data can be effectively matched to\nhigh-resolution three-dimensional face images, which raises a concern that the\nincreasingly ubiquitous public face images can be linked to shared genomic\ndata, thereby re-identifying individuals in the genomic data. While these\ninvestigations illustrate the possibility of such an attack, they assume that\nthose performing the linkage have access to extremely well-curated data. Given\nthat this is unlikely to be the case in practice, it calls into question the\npragmatic nature of the attack. As such, we systematically study this\nre-identification risk from two perspectives: first, we investigate how\nsuccessful such linkage attacks can be when real face images are used, and\nsecond, we consider how we can empower individuals to have better control over\nthe associated re-identification risk. We observe that the true risk of\nre-identification is likely substantially smaller for most individuals than\nprior literature suggests. 
In addition, we demonstrate that adding a\nsmall amount of carefully crafted noise to images can enable a controlled\ntrade-off between re-identification success and the quality of shared images,\nwith risk typically significantly lowered even with noise that is imperceptible\nto humans.\n"} {"abstract": " Recent works on Binary Neural Networks (BNNs) have made promising progress in\nnarrowing the accuracy gap of BNNs to their 32-bit counterparts. However, the\naccuracy gains are often based on specialized model designs using additional\n32-bit components. Furthermore, almost all previous BNNs use 32-bit for feature\nmaps and the shortcuts enclosing the corresponding binary convolution blocks,\nwhich helps to effectively maintain the accuracy, but is not friendly to\nhardware accelerators with limited memory, energy, and computing resources.\nThus, we raise the following question: How can accuracy and energy consumption\nbe balanced in a BNN network design? We extensively study this fundamental\nproblem in this work and propose a novel BNN architecture without most commonly\nused 32-bit components: \textit{BoolNet}. Experimental results on ImageNet\ndemonstrate that BoolNet can achieve 4.6x energy reduction coupled with 1.2\%\nhigher accuracy than the commonly used BNN architecture Bi-RealNet. Code and\ntrained models are available at: https://github.com/hpi-xnor/BoolNet.\n"} {"abstract": " Many task-oriented dialogue systems use deep reinforcement learning (DRL) to\nlearn policies that respond to the user appropriately and complete the tasks\nsuccessfully. Training DRL agents with diverse dialogue trajectories prepares\nthem well for rare user requests and unseen situations. One effective\ndiversification method is to let the agent interact with a diverse set of\nlearned user models. However, trajectories created by these artificial user\nmodels may contain generation errors, which can quickly propagate into the\nagent's policy. It is thus important to control the quality of the\ndiversification and resist the noise. In this paper, we propose a novel\ndialogue diversification method for task-oriented dialogue systems trained in\nsimulators. Our method, Intermittent Short Extension Ensemble (I-SEE),\nconstrains the intensity of interaction with an ensemble of diverse user models\nand effectively controls the quality of the diversification. Evaluations on the\nMultiwoz dataset show that I-SEE successfully boosts the performance of several\nstate-of-the-art DRL dialogue agents.\n"} {"abstract": " When manipulating three-dimensional data, it is possible to ensure that\nrotational and translational symmetries are respected by applying so-called\nSE(3)-equivariant models. Protein structure prediction is a prominent example\nof a task which displays these symmetries. Recent work in this area has\nsuccessfully made use of an SE(3)-equivariant model, applying an iterative\nSE(3)-equivariant attention mechanism. Motivated by this application, we\nimplement an iterative version of the SE(3)-Transformer, an SE(3)-equivariant\nattention-based model for graph data. We address the additional complications\nwhich arise when applying the SE(3)-Transformer in an iterative fashion,\ncompare the iterative and single-pass versions on a toy problem, and consider\nwhy an iterative model may be beneficial in some problem settings.
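For readers unfamiliar with iterated equivariant updates, the pattern can be illustrated with a deliberately simple EGNN-style rule standing in for the much richer SE(3)-equivariant attention mechanism; this is our sketch, not the authors' implementation, and all names are hypothetical:

```python
# Toy iterative equivariant refinement: rotating/translating the input
# coordinates rotates/translates the output identically, because positions are
# only moved along relative vectors weighted by rotation-invariant quantities.
import numpy as np

def equivariant_step(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """One update of coordinates x (n, 3) driven by scalar features h (n,)."""
    diff = x[:, None, :] - x[None, :, :]       # (n, n, 3) relative positions
    d2 = (diff ** 2).sum(-1, keepdims=True)    # (n, n, 1) invariant distances
    w = np.exp(-d2) * (h[:, None] * h[None, :])[..., None]  # invariant weights
    return x + (diff * w).sum(axis=1)

def iterative_refine(x: np.ndarray, h: np.ndarray, n_iter: int = 4):
    # Weight-shared repeated application: the core idea of an iterative model.
    for _ in range(n_iter):
        x = equivariant_step(x, h)
    return x
```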
We make the\ncode for our implementation available to the community.\n"} {"abstract": " In the framework of the Standard Model (SM) a theoretical description of the\nneutron beta decay is given at the level of 10^{-5}. The neutron lifetime and\ncorrelation coefficients of the neutron beta decay for a polarized neutron, a\npolarized electron and an unpolarized proton are calculated taking into account\ni) the radiative corrections of order O(\alpha E_e/m_N) ~ 10^{-5} to Sirlin's\nouter and inner radiative corrections of order O(\alpha/\pi), ii) the\ncorrections of order O(E^2_e/m^2_N) ~ 10^{-5}, caused by weak magnetism and\nproton recoil, and iii) Wilkinson's corrections of order 10^{-5} (Wilkinson,\nNucl. Phys. A377, 474 (1982)). These corrections define the SM background of\nthe theoretical description of the neutron beta decay at the level of 10^{-5},\nwhich is required by experimental searches for interactions beyond the SM with\nexperimental uncertainties of a few parts of 10^{-5}.\n"} {"abstract": " Face swapping has both positive applications such as entertainment,\nhuman-computer interaction, etc., and negative applications such as DeepFake\nthreats to politics, economics, etc. Nevertheless, it is necessary to\nunderstand the scheme of advanced methods for high-quality face swapping and\ngenerate sufficient and representative face swapping images to train DeepFake\ndetection algorithms. This paper proposes the first megapixel-level method for\none-shot Face Swapping (or MegaFS for short). Firstly, MegaFS organizes face\nrepresentation hierarchically by the proposed Hierarchical Representation Face\nEncoder (HieRFE) in an extended latent space to maintain more facial details,\nrather than the compressed representation used in previous face swapping\nmethods. Secondly, a carefully designed Face Transfer Module (FTM) is proposed\nto transfer the identity from a source image to the target by a non-linear\ntrajectory without explicit feature disentanglement. Finally, the swapped faces\ncan be synthesized by StyleGAN2 with the benefits of its training stability and\npowerful generative capability. Each part of MegaFS can be trained separately,\nso that the GPU memory requirements of our model for megapixel face swapping\ncan be satisfied. In summary, complete face representation, stable training,\nand limited memory usage are the three novel contributions to the success of\nour method. Extensive experiments demonstrate the superiority of MegaFS, and\nthe first megapixel-level face swapping database is released for research on\nDeepFake detection and face image editing in the public domain. The dataset is\nat this link.\n"} {"abstract": " In this paper, we study the problem of mobile user profiling, which is a\ncritical component for quantifying users' characteristics in the human mobility\nmodeling pipeline. Human mobility is a sequential decision-making process\ndependent on the users' dynamic interests. With accurate user profiles, the\npredictive model can perfectly reproduce users' mobility trajectories. In the\nreverse direction, once the predictive model can imitate users' mobility\npatterns, the learned user profiles are also optimal. Such intuition motivates\nus to propose an imitation-based mobile user profiling framework by exploiting\nreinforcement learning, in which the agent is trained to precisely imitate\nusers' mobility patterns for optimal user profiles.
Specifically, the proposed\nframework includes two modules: (1) a representation module, which produces a\nstate combining user profiles and spatio-temporal context in real-time; (2) an\nimitation module, where a Deep Q-network (DQN) imitates the user behavior\n(action) based on the state that is produced by the representation module.\nHowever, there are two challenges in running the framework effectively. First,\nthe epsilon-greedy strategy in DQN handles the exploration-exploitation\ntrade-off by randomly picking actions with probability epsilon. Such randomness\nfeeds back to the representation module, rendering the learned user profiles\nunstable. To solve the problem, we propose an adversarial training strategy to\nguarantee the robustness of the representation module. Second, the\nrepresentation module updates users' profiles in an incremental manner,\nrequiring integration of the temporal effects of user profiles. Inspired by\nLong Short-Term Memory (LSTM), we introduce a gated mechanism to incorporate\nnew and old user characteristics into the user profile.\n"} {"abstract": " We study the polarization dynamics of ultrafast solitons in mode-locked fiber\nlasers. We find that when a stable soliton is generated, its\nstate-of-polarization shifts toward a stable state, and when the soliton is\ngenerated with excess power levels it experiences relaxation oscillations in\nits intensity and timing. On the other hand, when a soliton is generated in an\nunstable state-of-polarization, it either decays in intensity until it\ndisappears, or its temporal width decreases until it explodes into several\nsolitons and then it disappears. We also found that when two solitons are\nsimultaneously generated close to each other, they attract each other until\nthey collide and merge into a single soliton. Although these two solitons are\ngenerated with different states-of-polarization, they shift their\nstate-of-polarization closer to each other until the polarization coincides\nwhen they collide. We support our findings with numerical calculations based on\na non-Lagrangian approach, simulating the Ginzburg-Landau equation governing\nthe dynamics of solitons in a laser cavity. Our model also predicts the\nrelaxation oscillations of stable solitons and the two types of unstable\nsolitons observed in the experimental measurements.\n"} {"abstract": " This paper presents Favalon, a functional programming language built on the\npremise of a lambda calculus for use as an interactive shell replacement.\nFavalon seamlessly integrates with typed versions of existing libraries and\ncommands using type inference, flexible runtime type metadata, and the same\ntechniques employed by shells to link commands together. Much of Favalon's\nsyntax is customizable via user-defined functions, allowing it to be extended\nby anyone who is familiar with a command-line shell. Furthermore, Favalon's\ntype inference engine can be separated from its runtime library and easily\nrepurposed for other applications.\n"} {"abstract": " Recently, asymmetric plasmonic nanojunctions [Karnetzky et al., Nature\nCommun. 9, 2471 (2018)] have shown promise as on-chip electronic devices to\nconvert femtosecond optical pulses to current bursts, with a bandwidth of\nmulti-terahertz scale, although so far only at low temperatures and pressures.\nSuch nanoscale devices are of great interest for novel ultrafast electronics\nand opto-electronic applications.
Here, we operate the device in air and at room\ntemperature, revealing the mechanisms of photoemission from plasmonic\nnanojunctions, and the fundamental limitations on the speed of\noptical-to-electronic conversion. Inter-cycle interference of coherent\nelectronic wavepackets results in a complex electron energy distribution and\nthe emergence of multiphoton effects. This energy structure, as well as\nreshaping of the wavepackets during their propagation from one tip to the\nother, determines the ultrafast dynamics of the current. We show that, up to\nsome level of approximation, the electron flight time is well determined by the\nmean ponderomotive velocity in the driving field.\n"} {"abstract": " The necessary and sufficient conditions are given for a sequence of complex\nnumbers to be the periodic (or antiperiodic) spectrum of a non-self-adjoint\nDirac operator.\n"} {"abstract": " In this paper, we consider a type of image quality assessment as a\ntask-specific measurement, which can be used to select images that are more\namenable to a given target task, such as image classification or segmentation.\nWe propose to train simultaneously two neural networks for image selection and\na target task using reinforcement learning. A controller network learns an\nimage selection policy by maximising an accumulated reward based on the target\ntask performance on the controller-selected validation set, whilst the target\ntask predictor is optimised using the training set. The trained controller is\ntherefore able to reject those images that lead to poor accuracy in the target\ntask. In this work, we show that the controller-predicted image quality can be\nsignificantly different from the task-specific image quality labels that are\nmanually defined by humans. Furthermore, we demonstrate that it is possible to\nlearn effective image quality assessment without using a ``clean'' validation\nset, thereby avoiding the requirement for human labelling of images with\nrespect to their amenability for the task. Using $6712$ labelled and segmented\nclinical ultrasound images from $259$ patients, experimental results on holdout\ndata show that the proposed image quality assessment achieved a mean\nclassification accuracy of $0.94\pm0.01$ and a mean segmentation Dice of\n$0.89\pm0.02$, by discarding $5\%$ and $15\%$ of the acquired images,\nrespectively. Significantly improved performance was observed for both tested\ntasks, compared with the respective $0.90\pm0.01$ and $0.82\pm0.02$ from\nnetworks without considering task amenability. This enables image quality\nfeedback during real-time ultrasound acquisition among many other medical\nimaging applications.\n"} {"abstract": " We show that a novel, general phase space mapping Hamiltonian for\nnonadiabatic systems, which is reminiscent of the renowned Meyer-Miller mapping\nHamiltonian, involves a commutator variable matrix rather than the conventional\nzero-point-energy parameter. In the exact mapping formulation on constraint\nspace for phase space approaches for nonadiabatic dynamics, the general mapping\nHamiltonian with commutator variables can be employed to generate approximate\ntrajectory-based dynamics. Various benchmark model tests, which range from gas\nphase to condensed phase systems, suggest that the overall performance of the\ngeneral mapping Hamiltonian is better than that of the conventional\nMeyer-Miller Hamiltonian.\n"} {"abstract": " Many man-made objects are characterised by a shape that is symmetric along\none or more planar directions.
Estimating the location and orientation of such\nsymmetry planes can aid many tasks such as estimating the overall orientation\nof an object of interest or performing shape completion, where a partial scan\nof an object is reflected across the estimated symmetry plane in order to\nobtain a more detailed shape. Many methods processing 3D data rely on expensive\n3D convolutions. In this paper we present an alternative novel encoding that\ninstead slices the data along the height dimension and passes it sequentially\nto a 2D convolutional recurrent regression scheme. The method also comprises a\ndifferentiable least squares step, allowing for end-to-end accurate and fast\nprocessing of both full and partial scans of symmetric objects. We use this\napproach to efficiently handle 3D inputs to design a method to estimate planar\nreflective symmetries. We show that our approach has an accuracy comparable to\nstate-of-the-art techniques on the task of planar reflective symmetry\nestimation on full synthetic objects. Additionally, we show that it can be\ndeployed on partial scans of objects in a real-world pipeline to improve the\noutputs of a 3D object detector.\n"} {"abstract": " In this paper we present a novel mechanism for producing the observed Dark\nMatter (DM) relic abundance during the First Order Phase Transition (FOPT) in\nthe early universe. We show that the bubble expansion with ultra-relativistic\nvelocities can lead to an abundance of DM particles with masses much larger\nthan the scale of the transition. We study this non-thermal production\nmechanism in the context of a generic phase transition and the electroweak\nphase transition. The application of the mechanism to the Higgs portal DM as\nwell as the signal in the Stochastic Gravitational Background are discussed.\n"} {"abstract": " Introduction. Can the infection due to the human immunodeficiency virus type\n1 induce a change in the differentiation status or process in T cells?\nMethods. We will consider two stochastic Markov chain models, one describing\nthe T-helper cell differentiation process, and another describing the process\nof infection of the T-helper cell by the virus; in these Markov chains, we will\nconsider a set of states $\{X_t \}$ comprising the proteins involved in each of\nthe processes and their interactions (either differentiation or infection of\nthe cell), from which we obtain two stochastic transition matrices ($A,B$), one\nfor each process; afterwards, their eigenvalues are computed and, should the\neigenvalue $\lambda_i=1$ exist, the equilibrium distribution $\pi^n$ is\nobtained for each of the matrices, which informs us about the long-term trends\nof interactions amongst the proteins. Results. The stochastic processes\nconsidered possess an equilibrium distribution; upon reaching it, their\ninformational entropy increases, and their log-rank distributions can be\nmodeled as discrete generalized beta distributions (DGBD). Discussion. The\nequilibrium distributions of both processes can be regarded as states in which\nthe cell is well-differentiated; ergo, a novel HIV-dependent differentiated\nstate is induced in the T-cell. Owing to their DGBD distributions, these\nprocesses can be considered complex; due to the increasing entropy, the\nequilibrium states are stable ones.
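The Methods step above (checking for the eigenvalue $\lambda_i=1$ and extracting the equilibrium distribution) amounts to a standard stationary-distribution computation; here is a minimal sketch under our own conventions (row-stochastic matrix, numpy assumed, hypothetical values):

```python
import numpy as np

def equilibrium_distribution(A: np.ndarray) -> np.ndarray:
    """Stationary distribution pi satisfying pi @ A = pi for row-stochastic A.

    A row-stochastic matrix always has eigenvalue 1; we take the left
    eigenvector for the eigenvalue closest to 1 and normalise it.
    """
    evals, evecs = np.linalg.eig(A.T)   # left eigenvectors of A
    k = np.argmin(np.abs(evals - 1.0))
    pi = np.real(evecs[:, k])
    return pi / pi.sum()

# Hypothetical 3-state transition matrix standing in for A or B in the text.
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
pi = equilibrium_distribution(A)
assert np.allclose(pi @ A, pi)          # long-term interaction trends
```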
Conclusion. HIV can\npromote a novel differentiated state in the T-cell, which can account for\nclinical features seen in patients; this model, however, does not account for\nthe YES/NO logical switches involved in the regulatory networks.\n"} {"abstract": " We propose an optimal MMSE precoding technique using quantized signals with\nconstant envelope. Unlike the existing MMSE design that relies on 1-bit\nresolution, the proposed approach employs uniform phase quantization, and the\nbounding step in the branch-and-bound method differs in that it considers the\nmost restrictive relaxation of the nonconvex problem, which is then also\nutilized for a suboptimal design. Moreover, unlike prior studies, we propose\nthree different soft detection methods and an iterative detection and decoding\nscheme that allow the utilization of channel coding in conjunction with\nlow-resolution precoding. Besides an exact approach for computing the extrinsic\ninformation, we propose two approximations with reduced computational\ncomplexity. Numerical simulations show that utilizing the MMSE criterion\ninstead of the established maximum-minimum distance to the decision threshold\nyields a lower bit-error-rate in many scenarios. Furthermore, when using the\nMMSE criterion, a smaller number of bound evaluations in the branch-and-bound\nmethod is required for low and medium SNR. Finally, results based on an LDPC\nblock code indicate that the receive processing schemes yield a lower\nbit-error-rate compared to the conventional design.\n"} {"abstract": " We consider the geodesic of the directed last passage percolation with iid\nexponential weights. We find the explicit one-point distribution of the\ngeodesic location jointly with the last passage times, and its limit when the\nsize goes to infinity.\n"} {"abstract": " We consider the problem of minimizing the age of information in general\nsingle-hop and multihop wireless networks. First, we formulate a way to convert\nAoI optimization problems into equivalent network stability problems. Then, we\npropose a heuristic low-complexity approach for achieving stability that can\nhandle general network topologies; unicast, multicast and broadcast flows;\ninterference constraints; link reliabilities; and AoI cost functions. We\nprovide numerical results to show that our proposed algorithms behave as well\nas the best known scheduling and routing schemes available in the literature\nfor a wide variety of network settings.\n"} {"abstract": " We develop a theory for the non-equilibrium screening of a charged impurity\nin a two-dimensional electron system under a strong time-periodic drive. Our\nanalysis of the time-averaged polarization function and dielectric function\nreveals that Floquet driving modifies the screened impurity potential in two\nmain regimes. In the weak drive regime, the time-averaged screened potential\nexhibits unconventional Friedel oscillations with multiple spatial periods\ncontributed by a principal period modulated by higher-order periods, which are\ndue to the emergence of additional Kohn anomalies in the polarization function.\nIn the strong drive regime, the time-averaged impurity potential becomes almost\nunscreened and does not exhibit Friedel oscillations.
This tunability of the Friedel\noscillations is a result of the dynamic gating effect of the time-dependent\ndriving field on the two-dimensional electron system.\n"} {"abstract": " In this paper, based on the idea of self-adjusting steepness based\nschemes [5], a two-dimensional method for calculating the steepness parameter\nis proposed, and thus a two-dimensional self-adjusting steepness based limiter\nis constructed. With the application of such a limiter to the over-intersection\nbased remapping framework, a low-dissipation remapping method is proposed that\ncan be applied to the existing ALE method.\n"} {"abstract": " We derive the Laws of Cosines and Sines in the super hyperbolic plane using\nMinkowski supergeometry and find formulae identical to the classical case,\nbut remarkably involving different expressions for the cosines and sines of\nangles, which include substantial fermionic corrections. In further analogy to\nthe classical case, we apply these results to show that two parallel\nsupergeodesics which are not ultraparallel admit a unique common orthogonal\nsupergeodesic, and we briefly describe aspects of elementary supernumber\ntheory, leading to a prospective analogue of the Gauss product of quadratic\nforms.\n"} {"abstract": " We present an analysis of the galaxy environment and physical properties of a\npartial Lyman limit system at z = 0.83718 with HI and metal line components\nclosely separated in redshift space ($|\Delta v| \approx 400$ km/s) towards the\nbackground quasar HE1003+0149. The HST/COS far-ultraviolet spectrum provides\ncoverage of lines of oxygen ions from OI to OV. Comparison of observed spectral\nlines with synthetic profiles generated from Bayesian ionization modeling\nreveals the presence of two distinct gas phases in the absorbing medium. The\nlow-ionization phase of the absorber has sub-solar metallicities (1/10-th\nsolar) with indications of [C/O] < 0 in each of the components. The OIV and OV\ntrace a more diffuse higher-ionization medium with predicted HI column\ndensities that are $\approx 2$ dex lower. The quasar field observed with\nVLT/MUSE reveals three dwarf galaxies with stellar masses of $M^* \sim 10^{8} -\n10^{9}$ M$_\odot$, and with star formation rates of $\approx 0.5 - 1$ M$_\odot$\nyr$^{-1}$, at projected separations of $\rho/R_{\mathrm{vir}} \approx 1.8 -\n3.0$ from the absorber. Over a wider field with projected proper separation of\n$\leq 5$ Mpc and radial velocity offset of $|\Delta v| \leq 1000$ km/s from the\nabsorber, 21 more galaxies are identified in the $VLT$/VIMOS and Magellan deep\ngalaxy redshift surveys, with 8 of them within $1$ Mpc and $500$ km/s,\nconsistent with the line of sight penetrating a group of galaxies. The absorber\npresumably traces multiple phases of cool ($T \sim 10^4$ K) photoionized\nintragroup medium. The inferred [C/O] < 0 hints at preferential enrichment from\ncore-collapse supernovae, with such gas displaced from one or more of the\nnearby galaxies, and confined to the group medium.\n"} {"abstract": " Transition metal dichalcogenides (TMDs) combine interesting optical and\nspintronic properties in an atomically-thin material, where the light\npolarization can be used to control the spin and valley degrees-of-freedom for\nthe development of novel opto-spintronic devices. These promising properties\nemerge due to their large spin-orbit coupling in combination with their crystal\nsymmetries.
Here, we provide simple symmetry\narguments in a group-theory approach to unveil the symmetry-allowed spin\nscattering mechanisms, and indicate how one can use these concepts towards an\nexternal control of the spin lifetime. We perform this analysis for both\nmonolayer (inversion asymmetric) and bilayer (inversion symmetric) crystals,\nindicating the different mechanisms that play a role in these systems. We show\nthat, in monolayer TMDs, electrons and holes transform fundamentally\ndifferently -- leading to distinct spin-scattering processes. We find that one\nof the electronic states in the conduction band is partially protected by\ntime-reversal symmetry, indicating a longer spin lifetime for that state. In\nbilayer and bulk TMDs, a hidden spin-polarization can exist within each layer\ndespite the presence of global inversion symmetry. We show that this feature\nenables control of the interlayer spin-flipping scattering processes via an\nout-of-plane electric field, providing a mechanism for electrical control of\nthe spin lifetime.\n"} {"abstract": " We study the dynamics of a one-dimensional Rydberg lattice gas under\nfacilitation (anti-blockade) conditions, which implements a so-called\nkinetically constrained spin system. Here an atom can only be excited to a\nRydberg state when one of its neighbors is already excited. Once two or more\natoms are simultaneously excited, mechanical forces emerge, which couple the\ninternal electronic dynamics of this many-body system to external vibrational\ndegrees of freedom in the lattice. This electron-phonon coupling results in a\nso-called phonon dressing of many-body states which in turn impacts on the\nfacilitation dynamics. In our theoretical study we focus on a scenario in which\nall energy scales are sufficiently separated such that a perturbative treatment\nof the coupling between electronic and vibrational states is possible. This\nallows us to analytically derive an effective Hamiltonian for the evolution of\nconsecutive clusters of Rydberg excitations in the presence of phonon dressing.\nWe analyze the spectrum of this Hamiltonian and show -- by employing Fano\nresonance theory -- that the interaction between Rydberg excitations and\nlattice vibrations leads to the emergence of slowly decaying bound states that\ninhibit fast relaxation of certain initial states.\n"} {"abstract": " Drafting as a process to reduce drag and to benefit from the presence of\nother competitors is applied in various sports, with several recent examples of\ncompetitive running in formations. In this study, the aerodynamics of a\nrealistic model of a female runner is calculated by computational fluid\ndynamics (CFD) simulations at four running speeds of 15 km/h, 18 km/h, 21 km/h,\nand 36 km/h. Aerodynamic power fractions of the total energy expenditure are\nfound to be in the range of 2.6-8.5%. Additionally, four exemplary formations\nare analysed with respect to their drafting potential, and the resulting drag\nvalues are compared for the main runner and her pacers. The best of the\nformations achieves a total drag reduction on the main runner of 75.6%.\nMoreover, there are large variations in the drag reduction between the\nconsidered formations of up to 42% with respect to the baseline single-runner\ncase. We conclude that major drag reduction of more than 70% can already be\nachieved with fairly simple formations, while certain factors, such as runners\non the sides, can have a detrimental effect on drag reduction due to local\nacceleration of the passing flow.
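For context on the quoted 2.6-8.5% aerodynamic power fractions, their order of magnitude can be checked with a back-of-envelope estimate; every parameter value below is our assumption for illustration, not a number taken from the study:

```python
# Rough scale check of aerodynamic power fractions at the four study speeds.
RHO = 1.2    # air density, kg/m^3 (assumed)
CDA = 0.24   # drag area Cd*A of a runner, m^2 (assumed)
MASS = 55.0  # runner mass, kg (assumed)
COST = 4.2   # mechanical-equivalent cost of running, J/(kg*m) (assumed)

for v_kmh in (15, 18, 21, 36):
    v = v_kmh / 3.6                    # speed in m/s
    p_aero = 0.5 * RHO * CDA * v ** 3  # power spent against aerodynamic drag
    p_total = COST * MASS * v          # assumed total power at this pace
    print(f"{v_kmh:2d} km/h: {100 * p_aero / p_total:.1f}% aerodynamic")
```

With these assumptions the fraction grows roughly as v^2, from about 1% at 15 km/h to about 6% at 36 km/h, the same order as the CFD results quoted above.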
Using an empirical model for mechanical power output during\nrunning, gains of metabolic power and performance predictions are evaluated for\nall considered formations. Improvements in running economy are up to 3.5% for\nthe best formation, leading to velocity gains of 2.3%. This translates to 154 s\n(~2.6 min) saved over a marathon distance. Consequently, direct conclusions are\ndrawn from the obtained data for ideal drafting of long-distance running in\nhighly packed formations.\n"} {"abstract": " Turbulence in the upper ocean in the submesoscale range (scales smaller than\nthe deformation radius) plays an important role in the heat exchange with the\natmosphere and in oceanic biogeochemistry. Its dynamics should strongly depend\non the seasonal cycle and the associated mixed-layer instabilities. The latter\nare particularly relevant in winter and are responsible for the formation of\nenergetic small scales that extend over the whole depth of the mixed layer.\nHowever, knowledge of the transport properties of oceanic flows at depth, which\nis essential to understand the coupling between surface and interior dynamics,\nis still limited. By means of numerical simulations, we explore the Lagrangian\ndispersion properties of turbulent flows in a quasi-geostrophic model system\nallowing for both thermocline and mixed-layer instabilities. The results\nindicate that, when mixed-layer instabilities are present, the dispersion\nregime is local from the surface down to depths comparable with that of the\ninterface with the thermocline, while in their absence dispersion quickly\nbecomes nonlocal versus depth. We then identify the origin of such behavior in\nthe existence of fine-scale energetic structures due to mixed-layer\ninstabilities. We further discuss the effect of vertical shear on the\nLagrangian particle spreading and address the correlation between the\ndispersion properties at the surface and at depth, which is relevant to assess\nthe possibility of inferring the dynamical features of deeper flows from the\nmore accessible surface ones.\n"} {"abstract": " Electrochemically mediated selective adsorption is an emerging\nelectrosorption technique that utilizes Faradaically enhanced redox active\nelectrodes, which can adsorb ions not only electrostatically, but also\nelectrochemically. The superb selectivity (>100) of this technique enables\nselective removal of toxic or high-value target ions under low energy\nconsumption. Here, we develop a general theoretical framework to describe the\ncompetitive electrosorption phenomena involving multiple ions and surface-bound\nredox species. The model couples diffusion, convection and electromigration\nwith competitive surface adsorption reaction kinetics, consistently derived\nfrom non-equilibrium thermodynamics. To optimize the selective removal of the\ntarget ions, design criteria were derived analytically from physically relevant\ndimensionless groups and time scales, where the propagation of the target anion\nconcentration front is the limiting step. Detailed computational studies are\nreported for three case studies that cover a wide range of inlet concentration\nratios between the competing ions. In all three cases, the target anions in the\nelectrosorption cell form a self-sharpening reaction-diffusion wave front.
Based on the model, a three-step stop-flow operation scheme with a\npure stripping solution of target anions is proposed that optimizes the ion\nadsorption performance and increases the purity of the regeneration stream to\nalmost 100%, which is beneficial for downstream processing.\n"} {"abstract": " The quality of website design is one of the influential factors in website\nsuccess: how well the design helps users use the website effectively and\nefficiently and leaves them satisfied at the end of use. However, websites are\ncommonly designed from the developer's perspective, with little consideration\nof user needs. Thus, the degree of website usability tends to be low according\nto user perceptions. This study aimed to understand users' experiences of using\nan institutional repository (IR) website at a public university in Indonesia.\nThe research was performed within a usability testing framework. Twelve\nparticipants were purposively selected according to their key informant\ncharacteristics. Following three empirical data collection techniques (i.e.,\nquery technique, formal experiment, and thinking aloud), both descriptive\nanalysis using a usability scale metric and content analysis using qualitative\ndata analysis (QDA) Miner Lite software were used in the data analysis stage.\nLastly, several visual design recommendations were proposed at the end of the\nstudy. In terms of a case study, besides the practical recommendations, which\nmay be contextually useful for subsequent website development, the clarity of\nthe research design may also help scholars combine more than one usability\ntesting technique within a multi-technique study design.\n"} {"abstract": " The SIMT execution model is commonly used for general GPU development. CUDA\nand OpenCL developers write scalar code that is implicitly parallelized by\ncompiler and hardware. On Intel GPUs, however, this abstraction has profound\nperformance implications as the underlying ISA is SIMD and important hardware\ncapabilities cannot be fully utilized. To close this performance gap we\nintroduce C-For-Metal (CM), an explicit SIMD programming framework designed to\ndeliver close-to-the-metal performance on Intel GPUs. The CM programming\nlanguage and its vector/matrix types provide an intuitive interface to exploit\nthe underlying hardware features, allowing fine-grained register management,\nSIMD size control and cross-lane data sharing. Experimental results show that\nCM applications from different domains outperform the best-known SIMT-based\nOpenCL implementations, achieving up to 2.7x speedup on the latest Intel GPU.\n"} {"abstract": " We systematically investigate axisymmetric extremal isolated horizons (EIHs)\ndefined by vanishing surface gravity, corresponding to zero temperature. In the\nfirst part, using the Newman-Penrose and GHP formalism we derive the most\ngeneral metric function for such EIHs in the Einstein-Maxwell theory, which\ncomplements the previous result of Lewandowski and Pawlowski. We prove that it\ndepends on 5 independent parameters, namely deficit angles on the north and\nsouth poles of a spherical-like section of the horizon, its radius (area), and\ntotal electric and magnetic charges of the black hole. The deficit angles and\nboth charges can be separately set to zero.
In the second part of our paper, we\nidentify this general axially symmetric solution for EIH with extremal horizons\nin exact electrovacuum Plebanski-Demianski spacetimes, using the convenient\nparametrization of this family by Griffiths and Podolsky. They represent all\n(double aligned) black holes of algebraic type D without a cosmological\nconstant. Apart from a conicity, they depend on 6 physical parameters (mass,\nKerr-like rotation, NUT parameter, acceleration, electric and magnetic charges)\nconstrained by the extremality condition. We were able to determine their\nrelation to the EIH geometrical parameters. This explicit identification of\ntype D extremal black holes with a unique form of EIH includes several\ninteresting subclasses, such as accelerating extremely charged\nReissner-Nordstrom black hole (C-metric), extremal accelerating Kerr-Newman,\naccelerating Kerr-NUT, or non-accelerating Kerr-Newman-NUT black holes.\n"} {"abstract": " Multimode nonlinear optics offers a way to overcome a long-standing\nlimitation of fiber optics, by tightly phase-locking several spatial modes and\nenabling the coherent transport of a wavepacket through a multimode fiber. A\nsimilar problem is encountered in the temporal compression of multi-mJ pulses\nto few-cycle duration in hollow gas-filled fibers. Scaling the fiber length to\nup to six meters, hollow fibers have recently reached 1 TW of peak power.\nDespite the remarkable utility of the hollow fiber compressor and its\nwidespread application, however, no analytical model exists to enable insight\ninto the scaling behavior of maximum compressibility and peak power. Here we\nextend a recently introduced formalism for describing mode-locking to the\nspatially analogous scenario of locking spatial fiber modes together. Our\nformalism unveils the coexistence of two soliton branches for anomalous modal\ndispersion and indicates the formation of stable spatio-temporal light bullets\nthat would be unstable in free space, similar to the temporal cage solitons in\nmode-locking theory. Our model enables deeper understanding of the physical\nprocesses behind the formation of such light bullets and predicts the existence\nof multimode solitons in a much wider range of fiber types than previously\nconsidered possible.\n"} {"abstract": " Let $f : X \to S$ be a family of smooth projective algebraic varieties over a\nsmooth connected base $S$, with everything defined over\n$\overline{\mathbb{Q}}$. Denote by $\mathbb{V} = R^{2i} f_{*} \mathbb{Z}(i)$\nthe associated integral variation of Hodge structure on the degree $2i$\ncohomology. We consider the following question: when can a fibre\n$\mathbb{V}_{s}$ above an algebraic point $s \in S(\overline{\mathbb{Q}})$ be\nisomorphic to a transcendental fibre $\mathbb{V}_{s'}$ with $s' \in\nS(\mathbb{C}) \setminus S(\overline{\mathbb{Q}})$? When $\mathbb{V}$ induces a\nquasi-finite period map $\varphi : S \to \Gamma \backslash D$, conjectures in\nHodge theory predict that such isomorphisms cannot exist. We introduce new\ndifferential-algebraic techniques to show this is true for all points $s \in\nS(\overline{\mathbb{Q}})$ outside of an explicit proper closed algebraic subset\nof $S$. As a corollary we establish the existence of a canonical\n$\overline{\mathbb{Q}}$-algebraic model for normalizations of period images.\n"} {"abstract": " We study co-dimension two monodromy defects in theories of conformally\ncoupled scalars and free Dirac fermions in arbitrary $d$ dimensions.
We\ncharacterise this family of conformal defects by computing the one-point\nfunctions of the stress-tensor and conserved current for Abelian flavour\nsymmetries as well as two-point functions of the displacement operator. In the\ncase of $d=4$, the normalisations of these correlation functions are related to\ndefect Weyl anomaly coefficients, and thus provide crucial information about\nthe defect conformal field theory. We provide explicit checks on the values of\nthe defect central charges by calculating the universal part of the defect\ncontribution to entanglement entropy, and further, we use our results to\nextract the universal part of the vacuum R\'enyi entropy. Moreover, we leverage\nthe non-supersymmetric free field results to compute a novel defect Weyl\nanomaly coefficient in a $d=4$ theory of free $\mathcal{N}=2$ hypermultiplets.\nIncluding singular modes in the defect operator product expansion of\nfundamental fields, we identify notable relevant deformations in the singular\ndefect theories and show that they trigger a renormalisation group flow towards\nan IR fixed point with the most regular defect OPE. We also study Gukov-Witten\ndefects in free $d=4$ Maxwell theory and show that their central charges\nvanish.\n"} {"abstract": " Loneliness (i.e., the distressing feeling that often accompanies the\nsubjective sense of social disconnection) is detrimental to mental and physical\nhealth, and deficits in self-reported feelings of being understood by others\nare a risk factor for loneliness. What contributes to these deficits in lonely\npeople? We used functional magnetic resonance imaging (fMRI) to unobtrusively\nmeasure the relative alignment of various aspects of people's mental processing\nof naturalistic stimuli (specifically, videos) as they unfold over time. We\nthereby tested whether lonely people actually process the world in\nidiosyncratic ways, rather than only exaggerating or misperceiving how\ndissimilar others' views are to their own (which could lead them to feel\nmisunderstood, even if they actually see the world similarly to those around\nthem). We found evidence for such idiosyncrasy: lonely individuals' neural\nresponses during free viewing of the videos were dissimilar to peers in their\ncommunities, particularly in brain regions (e.g., regions of the default-mode\nnetwork) in which similar responses have been associated with shared\npsychological perspectives and subjective understanding. Our findings were\nrobust even after controlling for demographic similarities, participants'\noverall levels of objective social isolation, and their friendships with each\nother. These results suggest that being surrounded predominantly by people who\nsee the world differently from oneself may be a risk factor for loneliness,\neven if one is friends with them.\n"} {"abstract": " The Traditional Approximation of Rotation (TAR) is a treatment of the\nhydrodynamic equations of rotating and stably stratified fluids in which the\naction of the Coriolis acceleration along the direction of the entropy and\nchemical stratifications is neglected because it is weak in comparison with the\nbuoyancy force. The dependent variables in the equations for the dynamics of\ngravito-inertial waves (GIWs) then become separable into radial and horizontal\nparts as in the non-rotating case. The TAR is built on the assumptions that the\nstar is spherical (i.e. its centrifugal deformation is neglected) and uniformly\nrotating.
We study the feasibility of carrying out a generalisation of the TAR\nto account for the centrifugal acceleration in the case of strongly deformed\nuniformly and rapidly rotating stars (and planets), and to identify the\nvalidity domain of this approximation. We analytically built a complete\nformalism that allows the study of the dynamics of GIWs in spheroidal\ncoordinates which take into account the flattening of rapidly rotating stars,\nby assuming the hierarchies of frequencies adopted within the TAR in the\nspherical case and by deriving a generalised Laplace tidal equation for the\nhorizontal eigenfunctions of the GIWs and their asymptotic wave periods, which\ncan be used to probe the structure and dynamics of rotating deformed stars with\nasteroseismology. Using 2D ESTER stellar models, we determine the validity\ndomain of the generalised TAR as a function of the rotation rate of the star\nnormalised by its critical angular velocity and its pseudo-radius. This\ngeneralisation allows us to study the signature of the centrifugal effects on\nGIWs in rapidly rotating deformed stars. We found that the effects of the\ncentrifugal acceleration in rapidly rotating early-type stars on GIWs are\ntheoretically detectable in modern space photometry using observations from\nKepler.\n"} {"abstract": " Artificial intelligence is applied in a range of sectors, and is relied upon\nfor decisions requiring a high level of trust. For regression methods, trust is\nincreased if they approximate the true input-output relationships and perform\naccurately outside the bounds of the training data. However, off-test-set\nperformance is often poor, especially when data is sparse. This is because the\nconditional average, which in many scenarios is a good approximation of the\n`ground truth', is only modelled with conventional Minkowski-r error measures\nwhen the data set adheres to restrictive assumptions, with many real data sets\nviolating these. To combat this, there are several methods that use prior\nknowledge to approximate the `ground truth'. However, prior knowledge is not\nalways available, and this paper investigates how error measures affect the\nability of a regression method to model the `ground truth' in these scenarios.\nCurrent error measures are shown to create an unhelpful bias and a new error\nmeasure is derived which does not exhibit this behaviour. This is tested on 36\nrepresentative data sets with different characteristics, showing that it is\nmore consistent in determining the `ground truth' and in giving improved\npredictions in regions beyond the range of the training data.\n"} {"abstract": " This paper addresses the task of (complex) conversational question answering\nover a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic\nparSing with trAnsformer and Graph atteNtion nEtworks). It is the first\napproach that employs a transformer architecture extended with Graph Attention\nNetworks for multi-task neural semantic parsing. LASAGNE uses a transformer\nmodel for generating the base logical forms, while the Graph Attention model is\nused to exploit correlations between (entity) types and predicates to produce\nnode representations. LASAGNE also includes a novel entity recognition module\nwhich detects, links, and ranks all relevant entities in the question context.\nWe evaluate LASAGNE on a standard dataset for complex sequential question\nanswering, on which it outperforms existing baseline averages on all question\ntypes.
Specifically, we show that LASAGNE improves the\nF1-score on eight out of ten question types; in some cases, the increase in\nF1-score is more than 20% compared to the state of the art.\n"} {"abstract": " Calculation of conductivity in the Hubbard model is a challenging task.\nRecent years have seen much progress in this respect and numerically exact\nsolutions are now possible in certain regimes. In this paper we discuss the\ncalculation of conductivity for the square lattice Hubbard model in the\npresence of a perpendicular magnetic field, focusing on orbital effects. We\npresent the relevant formalism in all detail and in full generality, and then\ndiscuss the simplifications that arise at the level of the dynamical mean field\ntheory (DMFT). We prove that the Kubo bubble preserves gauge and translational\ninvariance, and that in the DMFT the vertex corrections cancel regardless of\nthe magnetic field. We present the DMFT results for the spectral function and\nboth the longitudinal and Hall conductivity in several regimes of parameters.\nWe analyze thoroughly the quantum oscillations of the longitudinal conductivity\nand identify a high-frequency oscillation component, arising as a combined\neffect of scattering and temperature, in line with recent experimental\nobservations in moir\'e systems.\n"} {"abstract": " Millions of people use platforms such as YouTube, Facebook, Twitter, and\nother mass media. Due to the accessibility of these platforms, they are often\nused to establish a narrative, conduct propaganda, and disseminate\nmisinformation. This work proposes an approach that uses state-of-the-art NLP\ntechniques to extract features from video captions (subtitles). To evaluate our\napproach, we utilize a publicly accessible and labeled dataset for classifying\nvideos as misinformation or not. The motivation behind exploring video captions\nstems from our analysis of video metadata. Attributes such as the number of\nviews, likes, dislikes, and comments are ineffective as videos are hard to\ndifferentiate using this information. Using the caption dataset, the proposed\nmodels can classify videos among three classes (Misinformation, Debunking\nMisinformation, and Neutral) with 0.85 to 0.90 F1-score. To emphasize the\nrelevance of the misinformation class, we re-formulate our classification\nproblem as a two-class classification - Misinformation vs. others (Debunking\nMisinformation and Neutral). In our experiments, the proposed models can\nclassify videos with 0.92 to 0.95 F1-score and 0.78 to 0.90 AUC ROC.\n"} {"abstract": " The 2D TI edge states are considered within the Volkov-Pankratov (VP)\nHamiltonian. A smooth transition between TI and OI is assumed. The edge states\nare formed in the total gap of the homogeneous 2D material. A pair of these\nstates has linear dispersion, while the others have gapped Dirac spectra. The\noptical selection rules are found. The optical transitions between the\nneighboring edge states appear in the global 2D gap for the in-plane light\nelectric field directed across the edge.\n The electrons in the linear edge states have no backscattering, which is\nindicative of topological protection. However, when the linear edge states\nenter the energy domain of the Dirac edge states, backscattering becomes\npermitted. The elastic backscattering rate is found. The Drude-like\nconductivity is found when the Fermi level lies in the energy domain where\nlinear and Dirac edge states coexist.
The localization edge conductance\nof a finite sample at zero temperature is determined.\n"} {"abstract": " This paper compares the advantages, limitations, and computational\nconsiderations of using Finite-Time Lyapunov Exponents (FTLEs) and Lagrangian\nDescriptors (LDs) as tools for identifying barriers and mechanisms of fluid\ntransport in two-dimensional time-periodic flows. These barriers and mechanisms\nof transport are often referred to as "Lagrangian Coherent Structures," though\nthis term often changes meaning depending on the author or context. This paper\nwill specifically focus on using FTLEs and LDs to identify stable and unstable\nmanifolds of hyperbolic stagnation points, and the Kolmogorov-Arnold-Moser\n(KAM) tori associated with elliptic stagnation points. The background and\ntheory behind both methods and their associated phase space structures will be\npresented, and then examples of FTLEs and LDs will be shown based on a simple,\nperiodic, time-dependent double-gyre toy model with varying parameters.\n"} {"abstract": " Railway systems provide pivotal support to modern societies, making their\nefficiency and robustness important to ensure. However, these systems are\nsusceptible to disruptions and delays, leading to accumulating economic damage.\nThe large spatial scale of delay spreading typically makes it difficult to\ndistinguish which regions will ultimately be affected by an initial disruption,\ncreating uncertainty for risk assessment. In this paper, we identify\ngeographical structures that reflect how delay spreads through railway\nnetworks. We do so by proposing a graph-based, hybrid schedule and\nempirical-based model for delay propagation and applying spectral clustering.\nWe apply the model to four European railway systems: the Netherlands, Germany,\nSwitzerland and Italy. We characterize geographical structures in the railway\nsystems of these countries and interpret these regions in terms of delay\nseverity and how dynamically disconnected they are from the rest. The method\nalso allows us to point out important differences between these countries'\nrailway systems. For practitioners, this geographical characterization of\nrailways provides natural boundaries for local decision-making structures and a\nfirst-order prioritization of which regions are at risk, given an initial\ndisruption.\n"} {"abstract": " Brain parcellations play a ubiquitous role in the analysis of magnetic\nresonance imaging (MRI) datasets. Over 100 years of research has been conducted\nin pursuit of an ideal brain parcellation. Different methods have been\ndeveloped and studied for constructing brain parcellations using different\nimaging modalities. More recently, several data-driven parcellation methods\nhave been adopted from the data mining, machine learning, and statistics\ncommunities. With contributions from different scientific fields, there is a\nrich body of literature that needs to be examined to appreciate the breadth of\nexisting research and the gaps that need to be investigated. In this work, we\nreview the large body of in vivo brain parcellation research spanning different\nneuroimaging modalities and methods. A key contribution of this work is a\nsemantic organization of this large body of work into different taxonomies,\nmaking it easy to understand the breadth and depth of the brain parcellation\nliterature.
Specifically, we categorized the existing parcellations into three\ngroups: anatomical parcellations, functional parcellations, and structural\nparcellations, which are constructed using T1-weighted MRI, functional MRI\n(fMRI), and diffusion-weighted imaging (DWI) datasets, respectively. We provide\na multi-level taxonomy of different methods studied in each of these\ncategories, compare their relative strengths and weaknesses, and highlight the\nchallenges currently faced for the development of brain parcellations.\n"} {"abstract": " Effective molecular representation learning is of great importance to\nfacilitate molecular property prediction, which is a fundamental task for the\ndrug and material industry. Recent advances in graph neural networks (GNNs)\nhave shown great promise in applying GNNs for molecular representation\nlearning. Moreover, a few recent studies have also demonstrated successful\napplications of self-supervised learning methods to pre-train the GNNs to\novercome the problem of insufficient labeled molecules. However, existing GNNs\nand pre-training strategies usually treat molecules as topological graph data\nwithout fully utilizing the molecular geometry information. Yet the\nthree-dimensional (3D) spatial structure of a molecule, a.k.a. its molecular\ngeometry, is one of the most critical factors determining molecular physical,\nchemical, and biological properties. To this end, we propose a novel Geometry\nEnhanced Molecular representation learning method (GEM) for Chemical\nRepresentation Learning (ChemRL). First, we design a geometry-based GNN\narchitecture that simultaneously models atoms, bonds, and bond angles in a\nmolecule. To be specific, we devised two graphs for a molecule: the first one\nencodes the atom-bond relations; the second one encodes the bond-angle\nrelations. Moreover, on top of the devised GNN architecture, we propose several\nnovel geometry-level self-supervised learning strategies to learn spatial\nknowledge by utilizing the local and global molecular 3D structures. We compare\nChemRL-GEM with various state-of-the-art (SOTA) baselines on different\nmolecular benchmarks and show that ChemRL-GEM significantly outperforms all\nbaselines in both regression and classification tasks. For example, the\nexperimental results show an overall improvement of 8.8% on average compared to\nSOTA baselines on the regression tasks, demonstrating the superiority of the\nproposed method.\n"} {"abstract": " We investigate program equivalence for linear higher-order (sequential)\nlanguages endowed with primitives for computational effects. More specifically,\nwe study operationally-based notions of program equivalence for a linear\n$\lambda$-calculus with explicit copying and algebraic effects \emph{\`a la}\nPlotkin and Power. Such a calculus makes explicit the interaction between\ncopying and linearity, which are intensional aspects of computation, with\neffects, which are, instead, \emph{extensional}. We review some of the notions\nof equivalences for linear calculi proposed in the literature and show their\nlimitations when applied to effectful calculi where copying is a first-class\ncitizen. We then introduce resource transition systems, namely transition\nsystems whose states are built over tuples of programs representing the\navailable resources, as an operational semantics accounting for both\nintensional and extensional interactive behaviors of programs.
Our main result\nis a sound and complete characterization of contextual equivalence as trace\nequivalence defined on top of resource transition systems.\n"} {"abstract": " One of the exciting recent developments in decentralized finance (DeFi) has\nbeen the development of decentralized cryptocurrency exchanges that can\nautonomously handle conversion between different cryptocurrencies.\nDecentralized exchange protocols such as Uniswap, Curve and other types of\nAutomated Market Makers (AMMs) maintain a liquidity pool (LP) of two or more\nassets constrained to maintain at all times a mathematical relation to each\nother, defined by a given function or curve. Examples of such functions are the\nconstant-sum and constant-product AMMs. Existing systems, however, suffer from\nseveral challenges. They require external arbitrageurs to restore the price of\ntokens in the pool to match the market price. Such activities can potentially\ndrain resources from the liquidity pool. In particular, dramatic market price\nchanges can result in low liquidity with respect to one or more of the assets\nand reduce the total value of the LP. We propose in this work a new approach to\nconstructing the AMM based on the idea of dynamic curves. It utilizes input\nfrom a market price oracle to modify the mathematical relationship between the\nassets so that the pool price continuously and automatically adjusts to be\nidentical to the market price. This approach eliminates arbitrage opportunities\nand, as we show through simulations, maintains liquidity in the LP for all\nassets and the total value of the LP over a wide range of market prices.\n"} {"abstract": " We develop a model of interacting zwitterionic membranes with rotating\nsurface dipoles immersed in a monovalent salt, and implement it in a field\ntheoretic formalism. In the mean-field regime of monovalent salt, the\nelectrostatic forces between the membranes are characterized by a non-uniform\ntrend: at large membrane separations, the interfacial dipoles on the opposing\nsides behave as like-charge cations and give rise to repulsive membrane\ninteractions; at short membrane separations, the anionic field induced by the\ndipolar phosphate groups sets the behavior in the intermembrane region. The\nattraction of the cationic nitrogens in the dipolar lipid headgroups leads to\nthe adhesion of the membrane surfaces via dipolar bridging. The underlying\ncompetition between the opposing field components of the individual dipolar\ncharges leads to the non-uniform salt ion affinity of the zwitterionic membrane\nwith respect to the separation distance; large inter-membrane separations imply\nanionic excess while small, nanometer-size separations favor cationic excess.\nThis complex ionic selectivity of zwitterionic membranes may have relevant\nrepercussions on nanofiltration and nanofluidic transport techniques.\n"} {"abstract": " Recently, the leading order of the correlation energy of a Fermi gas in a\ncoupled mean-field and semiclassical scaling regime has been derived, under the\nassumption of an interaction potential with a small norm and with compact\nsupport in Fourier space. We generalize this result to large interaction\npotentials, requiring only $|\\cdot| \\hat{V} \\in \\ell^1 (\\mathbb{Z}^3)$. 
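Written out, the summability condition assumed here reads as follows (this is the standard interpretation of the weighted $\ell^1$ norm, added for clarity):

\[
  \big\| \, |\cdot| \, \hat{V} \, \big\|_{\ell^1(\mathbb{Z}^3)}
  \;=\; \sum_{k \in \mathbb{Z}^3} |k| \, \big| \hat{V}(k) \big| \;<\; \infty .
\]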
Our\nproof is based on approximate, collective bosonization in three dimensions.\nSignificant improvements compared to recent work include stronger bounds on\nnon-bosonizable terms and more efficient control on the bosonization of the\nkinetic energy.\n"} {"abstract": " The localization spread gives a criterion to decide between metallic and\ninsulating behaviour of a material. It is defined as the second moment cumulant\nof the many-body position operator, divided by the number of electrons.\nDifferent operators are used for systems treated with Open or Periodic Boundary\nConditions. In particular, in the case of periodic systems, we use the\ncomplex-position definition, which was already used in similar contexts for the\ntreatment of both classical and quantum situations. In this study, we show that\nthe localization spread evaluated on a finite ring system of radius $R$ with\nOpen Boundary Conditions leads, in the large $R$ limit, to the same formula\nderived by Resta et al. for 1D systems with periodic Born-von K\\'arm\\'an\nboundary conditions. A second formula, alternative to Resta's one, is also\ngiven, based on the sum-over-state formalism, allowing for an interesting\ngeneralization to polarizability and other similar quantities.\n"} {"abstract": " In recent years, there has been a shift in facial behavior analysis from\nlaboratory-controlled conditions to challenging in-the-wild conditions, driven\nby the superior performance of deep learning based approaches for many\nreal-world applications. However, the performance of deep learning approaches\nrelies on the amount of training data. One of the major problems with data\nacquisition is the requirement of annotations for large amounts of training\ndata. The labeling of huge training datasets demands a lot of human support\nwith strong domain expertise for facial expressions or action units, which is\ndifficult to obtain in real-time environments. Moreover, the labeling process\nis highly vulnerable to ambiguity of expressions or action units, especially\nfor intensities, due to the bias induced by the domain experts. Therefore,\nthere is an imperative need to address the problem of facial behavior analysis\nwith weak annotations. In this paper, we provide a comprehensive review of\nweakly supervised learning (WSL) approaches for facial behavior analysis with\nboth categorical as well as dimensional labels, along with the challenges and\npotential research directions associated with it. First, we introduce various\ntypes of weak annotations in the context of facial behavior analysis and the\ncorresponding challenges associated with them. We then systematically review\nthe existing state-of-the-art approaches and provide a taxonomy of these\napproaches along with their insights and limitations. In addition, widely used\ndatasets in the reviewed literature and the performance of these approaches\nalong with evaluation principles are summarized. 
Finally, we discuss the remaining challenges and opportunities\nalong with the potential research directions for applying facial behavior\nanalysis with weak labels in real-life situations.\n"} {"abstract": " We show that the action of the mapping class group on the space of closed\ncurves of a closed surface effectively tracks the corresponding action on\nTeichm\\"uller space in the following sense: for all but quantitatively few\nmapping classes, the information of how a mapping class moves a given point of\nTeichm\\"uller space determines, up to a power saving error term, how it changes\nthe geometric intersection numbers of a given closed curve with respect to\narbitrary geodesic currents. Applications include an effective estimate\ndescribing the speed of convergence of Teichm\\"uller geodesic rays to the\nboundary at infinity of Teichm\\"uller space, an effective estimate comparing\nthe Teichm\\"uller and Thurston metrics along mapping class group orbits of\nTeichm\\"uller space, and, in the sequel, effective estimates for countings of\nfilling closed geodesics on closed, negatively curved surfaces.\n"} {"abstract": " We discuss a model based on a dark sector described by a non-Abelian\n$SU(2)_D$ gauge symmetry, where we introduce $SU(2)_L \\times SU(2)_D$\nbi-doublet vector-like leptons to generate active neutrino masses and kinetic\nmixing between the $SU(2)_D$ and $U(1)_Y$ gauge fields at the one-loop level.\nAfter spontaneous symmetry breaking of $SU(2)_D$, we have a remnant $Z_4$\nsymmetry guaranteeing the stability of dark matter candidates. We formulate the\nneutrino mass matrix and related lepton flavor violating processes and discuss\ndark matter physics, estimating the relic density. It is found that our model\nrealizes a multicomponent dark matter scenario due to the $Z_4$ symmetry, and\nthe relic density can be explained by gauge interactions with the kinetic\nmixing effect.\n"} {"abstract": " In this article we continue the research initiated in our previous work\non singular Liouville equations with quantized singularity. The main goal of\nthis article is to prove that as long as the bubbling solutions violate the\nspherical Harnack inequality near a singular source, the first derivatives of\nthe coefficient functions must tend to zero.\n"} {"abstract": " We compute explicit solutions $\\Lambda^\\pm_m$ of the Painleve VI (PVI)\ndifferential equation from equivariant instanton bundles $E_m$ corresponding to\nYang-Mills instantons with \"quadrupole symmetry.\" This is based on a\ngeneralization of Hitchin's logarithmic connection to vector bundles with an\n$SL_2({\\mathbb C})$ action. We then identify explicit Okamoto transformations\nwhich play the role of \"creation operators\" for constructing $\\Lambda^\\pm_m$\nfrom the \"ground state\" $\\Lambda^\\pm_0$, suggesting that the equivariant\ninstanton bundles $E_m$ might similarly be related to the trivial \"ground\nstate\" $E_0$.\n"} {"abstract": " Machine unlearning has great significance in guaranteeing model security and\nprotecting user privacy. Additionally, many legal provisions clearly stipulate\nthat users have the right to demand that model providers delete their own data\nfrom the training set; that is, the right to be forgotten. The naive way of\nunlearning data is to retrain the model without it from scratch, which becomes\nextremely time- and resource-consuming at the modern scale of deep neural\nnetworks. 
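For concreteness, this exact-unlearning baseline can be sketched in a few lines; the train routine and dataset interface below are hypothetical placeholders, not part of the paper:

    # Minimal sketch of exact unlearning by retraining from scratch.
    # `train` and `dataset` are hypothetical placeholders; the point is that
    # every deletion request costs one full training run over the retained data.
    def naive_unlearn(dataset, forget_indices, train):
        forget = set(forget_indices)
        retained = [example for i, example in enumerate(dataset) if i not in forget]
        return train(retained)  # exact, but prohibitively expensive at scale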
Other unlearning approaches, which refactor the model or the training data,\nstruggle to strike a balance between overhead and model usability.\n In this paper, we propose an approach, dubbed DeepObliviate, to implement\nmachine unlearning efficiently, without modifying the normal training mode. Our\napproach improves the original training process by storing intermediate models\non the hard disk. Given a data point to unlearn, we first quantify its temporal\nresidual memory left in the stored models. The influenced models are retrained,\nand we decide when to terminate the retraining based on the trend of the\nresidual memory on-the-fly. Finally, we stitch together an unlearned model by\ncombining the retrained models and the uninfluenced models. We extensively\nevaluate our approach on five datasets and deep learning models. Compared to\nthe method of retraining from scratch, our approach can achieve 99.0%, 95.0%,\n91.9%, 96.7%, 74.1% accuracy rates and 66.7$\\times$, 75.0$\\times$,\n33.3$\\times$, 29.4$\\times$, 13.7$\\times$ speedups on the MNIST, SVHN, CIFAR-10,\nPurchase, and ImageNet datasets, respectively. Compared to the state-of-the-art\nunlearning approach, we improve accuracy by 5.8%, achieve a 32.5$\\times$\nprediction speedup, and reach a comparable retrain speedup under identical\nsettings, on average, on these datasets. Additionally, DeepObliviate can also\npass backdoor-based unlearning verification.\n"} {"abstract": " Theoretical studies of superradiant lasing on optical clock transitions\npredict a superb frequency accuracy and precision closely tied to the bare\natomic linewidth. Such a superradiant laser is also robust against cavity\nfluctuations when the spectral width of the lasing mode is much larger than\nthat of the atomic medium. Recent predictions suggest that this unique feature\npersists even for a hot and thus strongly broadened ensemble, provided the\neffective atom number is large enough. Here we use a second-order cumulant\nexpansion approach to study the power, linewidth and lineshifts of such a\nsuperradiant laser as a function of the inhomogeneous width of the ensemble,\nincluding variations of the spatial atom-field coupling within the resonator.\nWe present conditions on the atom numbers and the pump and coupling strengths\nrequired to reach the buildup of collective atomic coherence, as well as\nscaling and limitations for the achievable laser linewidth.\n"} {"abstract": " Many sequence-to-sequence tasks in natural language processing are roughly\nmonotonic in the alignment between source and target sequence, and previous\nwork has facilitated or enforced learning of monotonic attention behavior via\nspecialized attention functions or pretraining. In this work, we introduce a\nmonotonicity loss function that is compatible with standard attention\nmechanisms and test it on several sequence-to-sequence tasks:\ngrapheme-to-phoneme conversion, morphological inflection, transliteration, and\ndialect normalization. Experiments show that we can achieve largely monotonic\nbehavior. Performance is mixed, with larger gains on top of RNN baselines.\nGeneral monotonicity does not benefit transformer multihead attention; however,\nwe see isolated improvements when only a subset of heads is biased towards\nmonotonic behavior.\n"} {"abstract": " For hidden Markov models one of the most popular estimates of the hidden\nchain is the Viterbi path -- the path maximising the posterior probability. 
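For reference, standard Viterbi decoding is a short dynamic program; the log-domain sketch below is generic (illustrative variable names, and a plain HMM rather than the PMM generalization discussed next):

    import numpy as np

    # Log-domain Viterbi decoding for a finite-state HMM (illustrative sketch).
    def viterbi(log_init, log_trans, log_emit, obs):
        # log_init: (S,); log_trans: (S, S); log_emit: (S, num_symbols); obs: ints
        S, T = log_init.shape[0], len(obs)
        back = np.zeros((T, S), dtype=int)             # backpointers
        score = log_init + log_emit[:, obs[0]]         # best log-prob per state
        for t in range(1, T):
            cand = score[:, None] + log_trans          # cand[i, j]: via i to j
            back[t] = np.argmax(cand, axis=0)
            score = cand[back[t], np.arange(S)] + log_emit[:, obs[t]]
        path = [int(np.argmax(score))]
        for t in range(T - 1, 0, -1):                  # backtrack
            path.append(int(back[t, path[-1]]))
        return path[::-1]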
We\nconsider a more general setting, called the pairwise Markov model (PMM), where\nthe joint process consisting of a finite-state hidden process and an\nobservation process is assumed to be a Markov chain. It has recently been\nproven that under some conditions the Viterbi path of the PMM can almost surely\nbe extended to infinity, thereby defining the infinite Viterbi decoding of the\nobservation sequence, called the Viterbi process. This was done by constructing\na block of observations, called a barrier, which ensures that the Viterbi path\ngoes through a given state whenever this block occurs in the observation\nsequence. In this paper we prove that the joint process consisting of the\nViterbi process and the PMM is regenerative. The proof involves a delicate\nconstruction of regeneration times which coincide with the occurrences of\nbarriers. As one possible application of our theory, some results on the\nasymptotics of the Viterbi training algorithm are derived.\n"} {"abstract": " A generalized Kummer surface $X=Km_{3}(A,G_{A})$ is the minimal resolution of\nthe quotient of a $2$-dimensional complex torus by an order 3 symplectic\nautomorphism group $G_{A}$. A Kummer structure on $X$ is an isomorphism class\nof pairs $(B,G_{B})$ such that $X\\simeq Km_{3}(B,G_{B})$. When the surface is\nalgebraic, we obtain that the number of Kummer structures is linked with the\nnumber of order $3$ elliptic points on some Shimura curve naturally related to\n$A$. For each $n\\in\\mathbb{N}$, we obtain generalized Kummer surfaces $X_{n}$\nfor which the number of Kummer structures is $2^{n}$. We then give a\nclassification of the moduli spaces of generalized Kummer surfaces. When the\nsurface is non-algebraic, there is only one Kummer structure, but the number of\nirreducible components of the moduli spaces of such surfaces is large compared\nto the algebraic case. The endomorphism rings of the complex $2$-tori we study\nare mainly quaternion orders; these orders contain the ring of Eisenstein\nintegers. One can also see this paper as a study of quaternion orders\n$\\mathcal{O}$ over $\\mathbb{Q}$ that contain the ring of Eisenstein integers.\nWe obtain that such an order is determined up to isomorphism by its\ndiscriminant, and when the quaternion algebra is indefinite, the order\n$\\mathcal{O}$ is principal.\n"} {"abstract": " Neural networks and quantum computing are both significant and appealing\nfields, and their intersection promises large-scale computing tasks that are\nuntackled by conventional computers. However, both developments are restricted\nby the scope of hardware development. Nevertheless, many neural network\nalgorithms were proposed before GPUs became powerful enough for running very\ndeep models. Similarly, quantum algorithms can also be proposed as knowledge\nreserves before real quantum computers are easily accessible. Specifically,\ntaking advantage of both neural networks and quantum computation, and designing\nquantum deep neural networks (QDNNs) for acceleration on Noisy\nIntermediate-Scale Quantum (NISQ) processors, is also an important research\nproblem. As one of the most widely used neural network architectures, the\nconvolutional neural network (CNN) remains to be accelerated by quantum\nmechanisms, with only a few attempts demonstrated so far. In this paper, we\npropose a new hybrid quantum-classical circuit, namely the Quantum Fourier\nConvolutional Network (QFCN). 
Theoretically, our model achieves an exponential speed-up compared with a\nclassical CNN and improves over the existing best result for quantum CNNs. We\ndemonstrate the potential of this architecture by applying it to different deep\nlearning tasks, including traffic prediction and image classification.\n"} {"abstract": " Our goal, in the context of open-domain textual question-answering (QA), is\nto explain answers by showing the line of reasoning from what is known to the\nanswer, rather than simply showing a fragment of textual evidence (a\n\"rationale\"). If this could be done, new opportunities for understanding and\ndebugging the system's reasoning become possible. Our approach is to generate\nexplanations in the form of entailment trees, namely a tree of multipremise\nentailment steps from facts that are known, through intermediate conclusions,\nto the hypothesis of interest (namely the question + answer). To train a model\nwith this skill, we created ENTAILMENTBANK, the first dataset to contain\nmultistep entailment trees. Given a hypothesis (question + answer), we define\nthree increasingly difficult explanation tasks: generate a valid entailment\ntree given (a) all relevant sentences, (b) all relevant and some irrelevant\nsentences, or (c) a corpus. We show that a strong language model can partially\nsolve these tasks, in particular when the relevant sentences are included in\nthe input (e.g., 35% of trees for (a) are perfect), and with indications of\ngeneralization to other domains. This work is significant as it provides a new\ntype of dataset (multistep entailments) and baselines, offering a new avenue\nfor the community to generate richer, more systematic explanations.\n"} {"abstract": " We study the problem of list-decodable linear regression, where an adversary\ncan corrupt a majority of the examples. Specifically, we are given a set $T$ of\nlabeled examples $(x, y) \\in \\mathbb{R}^d \\times \\mathbb{R}$ and a parameter\n$0< \\alpha <1/2$ such that an $\\alpha$-fraction of the points in $T$ are i.i.d.\nsamples from a linear regression model with Gaussian covariates, and the\nremaining $(1-\\alpha)$-fraction of the points are drawn from an arbitrary noise\ndistribution. The goal is to output a small list of hypothesis vectors such\nthat at least one of them is close to the target regression vector. Our main\nresult is a Statistical Query (SQ) lower bound of $d^{\\mathrm{poly}(1/\\alpha)}$\nfor this problem. Our SQ lower bound qualitatively matches the performance of\npreviously developed algorithms, providing evidence that current upper bounds\nfor this task are nearly best possible.\n"} {"abstract": " In this work, we aim to address the 3D scene stylization problem - generating\nstylized images of the scene at arbitrary novel view angles. A straightforward\nsolution is to combine existing novel view synthesis and image/video style\ntransfer approaches, which often leads to blurry results or inconsistent\nappearance. Inspired by the high-quality results of the neural radiance fields\n(NeRF) method, we propose a joint framework to directly render novel views with\nthe desired style. 
Our framework consists of two components: an implicit\nrepresentation of the 3D scene with the neural radiance field model, and a\nhypernetwork to transfer the style information into the scene representation.\nIn particular, our implicit representation model disentangles the scene into\nthe geometry and appearance branches, and the hypernetwork learns to predict\nthe parameters of the appearance branch from the reference style image. To\nalleviate the training difficulties and memory burden, we propose a two-stage\ntraining procedure and a patch sub-sampling approach to optimize the style and\ncontent losses with the neural radiance field model. After optimization, our\nmodel is able to render consistent novel views at arbitrary view angles with\narbitrary styles. Both quantitative evaluation and a human subject study have\ndemonstrated that the proposed method generates faithful stylization results\nwith consistent appearance across different views.\n"} {"abstract": " In this paper, we propose a simple yet effective method to deal with the\nviolation of the Closed-World Assumption for a classifier. Previous works tend\nto apply a threshold either on the classification scores or on the loss\nfunction to reject inputs that violate the assumption. However, these methods\ncannot achieve the low False Positive Ratio (FPR) required in safety\napplications. The proposed method is a rejection option based on hypothesis\ntesting with probabilistic networks. With probabilistic networks, it is\npossible to estimate the distribution of outcomes instead of a single output.\nBy utilizing a Z-test over the mean and standard deviation for each class, the\nproposed method can estimate the statistical significance of the network\ncertainty and reject uncertain outputs. The proposed method was evaluated on\ndifferent configurations of the COCO and CIFAR datasets. The performance of the\nproposed method is compared with the Softmax Response, which is a known\ntop-performing method. It is shown that the proposed method can achieve a\nbroader range of operation and cover a lower FPR than the alternative.\n"} {"abstract": " I describe the rationale for, and design of, an agent-based simulation model\nof a contemporary online sports-betting exchange: such exchanges, closely\nrelated to the exchange mechanisms at the heart of major financial markets,\nhave revolutionized the gambling industry in the past 20 years, but gathering\nsufficiently large quantities of rich and temporally high-resolution data from\nreal exchanges - i.e., the sort of data that is needed in large quantities for\nDeep Learning - is often very expensive, and sometimes simply impossible; this\ncreates a need for a plausibly realistic synthetic data generator, which is\nwhat this simulation now provides. 
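To give a flavor of what such a generator involves, here is a minimal, self-contained sketch of an in-play betting loop with heterogeneous simulated bettors; the class names, odds model, and race dynamics are illustrative assumptions, not BBE's actual design:

    import random

    # Toy in-play exchange: each bettor holds a noisy private estimate of the
    # win probability and quotes decimal odds around it (purely illustrative).
    class Bettor:
        def __init__(self):
            self.noise = random.uniform(0.01, 0.10)   # private estimation error

        def quote_odds(self, win_prob):
            belief = min(max(win_prob + random.gauss(0.0, self.noise), 0.01), 0.99)
            return 1.0 / belief

    def simulate_race(n_bettors=100, n_ticks=200):
        bettors = [Bettor() for _ in range(n_bettors)]
        win_prob, ticks = 0.5, []
        for t in range(n_ticks):                      # race unfolds tick by tick
            win_prob = min(max(win_prob + random.gauss(0.0, 0.02), 0.01), 0.99)
            quotes = sorted(b.quote_odds(win_prob) for b in bettors)
            ticks.append((t, quotes[len(quotes) // 2]))  # median quoted odds
        return ticks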
The simulator, named the \"Bristol Betting\nExchange\" (BBE), is intended as a common platform, a data-source and\nexperimental test-bed, for researchers studying the application of AI and\nmachine learning (ML) techniques to issues arising in betting exchanges; and,\nas far as I have been able to determine, BBE is the first of its kind: a free\nopen-source agent-based simulation model consisting not only of a\nsports-betting exchange, but also a minimal simulation model of racetrack\nsporting events (e.g., horse-races or car-races) about which bets may be made,\nand a population of simulated bettors who each form their own private\nevaluation of odds and place bets on the exchange before and - crucially -\nduring the race itself (i.e., so-called \"in-play\" betting) and whose betting\nopinions change second-by-second as each race event unfolds. BBE is offered as\na proof-of-concept system that enables the generation of large high-resolution\ndata-sets for automated discovery or improvement of profitable strategies for\nbetting on sporting events via the application of AI/ML and advanced data\nanalytics techniques. This paper offers an extensive survey of relevant\nliterature and explains the motivation and design of BBE, and presents brief\nillustrative results.\n"} {"abstract": " Pinch-off and satellite droplet formation during the breakup of a\nnear-inviscid liquid bridge sandwiched between two equal and coaxial circular\nplates have been investigated. The breakup always results in the formation of a\nspindle shape which is the precursor of the satellite droplet at the moment of\npinch-off. Interestingly, the slenderness of this spindle is always larger than\n$2\\pi$, and the breakup always results in the formation of only one satellite\ndroplet regardless of the surface tension and the slenderness of the liquid\nbridge. We predict that the cone angle of this spindle formed during the\npinch-off of inviscid fluids should be $18.086122158...^{\\circ}$. After\npinch-off, the satellite droplets will drift out of the pinch-off regions in\nthe case of a symmetrical short bridge, and merge again with the sessile drop\nin the case of an unsymmetrical long bridge. We demonstrate that the velocity\nof the satellite droplet is consistent with a scaling model based on a balance\nbetween capillary forces and the inertia at the pinch-off region.\n"} {"abstract": " In this paper, we generalize fractional $q$-integrals by the method of\n$q$-difference equations. In addition, we deduce the fractional Askey--Wilson\nintegral, a reversal-type fractional Askey--Wilson integral and a\nRamanujan-type fractional Askey--Wilson integral.\n"} {"abstract": " Person Re-Identification (Re-ID) is of great importance to many video\nsurveillance systems. Learning discriminative features for Re-ID remains a\nchallenge due to the large variations in the image space, e.g., continuously\nchanging human poses, illuminations and points of view. In this paper, we\npropose HAVANA, a novel extensible, light-weight HierArchical and\nVAriation-Normalized Autoencoder that learns features robust to intra-class\nvariations. In contrast to existing generative approaches that prune the\nvariations with heavy extra supervised signals, HAVANA suppresses the\nintra-class variations with a Variation-Normalized Autoencoder trained with no\nadditional supervision. We also introduce a novel Jensen-Shannon triplet loss\nfor contrastive distribution learning in Re-ID. 
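As an illustration of what a loss of this kind can look like, here is a minimal sketch of a triplet loss built on the Jensen-Shannon divergence between softmax-normalized embeddings; this is a generic reconstruction under stated assumptions, not necessarily HAVANA's exact formulation:

    import torch
    import torch.nn.functional as F

    def js_divergence(p, q, eps=1e-8):
        # Jensen-Shannon divergence between two batches of distributions.
        m = 0.5 * (p + q)
        kl = lambda a, b: (a * ((a + eps).log() - (b + eps).log())).sum(dim=-1)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    def js_triplet_loss(anchor, positive, negative, margin=0.1):
        # anchor/positive share an identity; negative has a different one.
        # Raw embeddings are normalized into distributions via softmax.
        a, p, n = (F.softmax(x, dim=-1) for x in (anchor, positive, negative))
        return F.relu(js_divergence(a, p) - js_divergence(a, n) + margin).mean()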
In addition, we present\nHierarchical Variation Distiller, a hierarchical VAE to factorize the latent\nrepresentation and explicitly model the variations. To the best of our\nknowledge, HAVANA is the first VAE-based framework for person Re-ID.\n"} {"abstract": " We classify Frobenius forms, a special class of homogeneous polynomials in\ncharacteristic $p>0$, in up to five variables over an algebraically closed\nfield. We also point out some of the similarities with quadratic forms.\n"} {"abstract": " We present optical spectroscopy for 18 halo white dwarfs identified using\nphotometry from the Canada-France Imaging Survey and Pan-STARRS1 DR1 3$\\pi$\nsurvey combined with astrometry from Gaia DR2. The sample contains 13 DA, 1 DZ,\n2 DC, and two potentially exotic types of white dwarf. We fit both the spectrum\nand the spectral energy distribution in order to obtain the temperature and\nsurface gravity, which we then convert into a mass, and then an age, using\nstellar isochrones and the initial-to-final mass relation. We find a large\nspread in ages that is not consistent with expected formation scenarios for the\nGalactic halo. We find a mean age of 9.03$^{+2.13}_{-2.03}$ Gyr and a\ndispersion of 4.21$^{+2.33}_{-1.58}$ Gyr for the inner halo using a maximum\nlikelihood method. This result suggests an extended star formation history\nwithin the local halo population.\n"} {"abstract": " According to their strength, the tracing properties of a code can be\ncategorized as frameproof, separating, IPP and TA. It is known that if the\nminimum distance of the code is larger than a certain threshold then the TA\nproperty implies the rest. Silverberg et al. ask if there is some kind of\ntracing capability left when the minimum distance falls below the threshold.\nUnder different assumptions, several papers have given a negative answer to the\nquestion. In this paper further progress is made. We establish values of the\nminimum distance for which Reed-Solomon codes do not possess the separating\nproperty.\n"} {"abstract": " By using ab-initio-accurate force fields and molecular dynamics simulations\nwe demonstrate that the layer stiffness has profound effects on the\nsuperlubricant state of two-dimensional van der Waals heterostructures. These\nare engineered to have identical inter-layer sliding energy surfaces, but\nlayers of different rigidity, so that the effects of the stiffness on the\nmicroscopic friction in the superlubricant state can be isolated. A twofold\nincrease in the intra-layer stiffness reduces the friction by approximately a\nfactor of six. Most importantly, we find two sliding regimes as a function of\nthe sliding velocity. At low velocity the heat generated by the motion is\nefficiently exchanged between the layers and the friction is independent of\nwhether the sliding layer is softer or harder than the substrate. In contrast,\nat high velocity the frictional heat flux cannot be exchanged fast enough, and\nthe build-up of significant temperature gradients between the layers is\nobserved. In this situation the temperature profile depends on whether the\nslider is softer than the substrate.\n"} {"abstract": " Previous studies have predicted the failure of Fourier's law of thermal\nconduction due to the existence of wave-like propagation of heat with finite\npropagation speed. This non-Fourier thermal transport phenomenon can appear in\nboth the hydrodynamic and (quasi) ballistic regimes. Hence, it is not easy to\nclearly distinguish these two non-Fourier regimes only by this phenomenon. 
In\nthis work, the transient heat propagation in a homogeneous thermal system is\nstudied based on the phonon Boltzmann transport equation (BTE) under the\nCallaway model. Given a quasi-one- or quasi-two- (three-) dimensional\nsimulation with a homogeneous environment temperature, a high-temperature heat\nsource is suddenly added at the center at the initial moment, and the heat then\npropagates from the center outward. Numerical results show that in quasi-two-\n(three-) dimensional simulations, the transient temperature will be lower than\nthe lowest value of the initial temperature in the hydrodynamic regime within a\ncertain range of time and space. This phenomenon appears only when normal\nscattering dominates heat conduction. Besides, it disappears in quasi-one-\ndimensional simulations. A similar phenomenon is also observed in thermal\nsystems with a time-varying heat source. This novel transient heat propagation\nphenomenon of hydrodynamic phonon transport distinguishes it well from (quasi)\nballistic phonon transport.\n"} {"abstract": " Despite the advances in the autonomous driving domain, autonomous vehicles\n(AVs) are still inefficient and limited in terms of cooperating with each other\nor coordinating with vehicles operated by humans. A group of autonomous and\nhuman-driven vehicles (HVs) which work together to optimize an altruistic\nsocial utility -- as opposed to the egoistic individual utility -- can co-exist\nseamlessly and assure safety and efficiency on the road. Achieving this mission\nwithout explicit coordination among agents is challenging, mainly due to the\ndifficulty of predicting the behavior of humans with heterogeneous preferences\nin mixed-autonomy environments. Formally, we model an AV's maneuver planning in\nmixed-autonomy traffic as a partially-observable stochastic game and attempt to\nderive optimal policies that lead to socially-desirable outcomes using a\nmulti-agent reinforcement learning framework. We introduce a quantitative\nrepresentation of the AVs' social preferences and design a distributed reward\nstructure that induces altruism into their decision making process. Our\naltruistic AVs are able to form alliances, guide the traffic, and affect the\nbehavior of the HVs to handle competitive driving scenarios. As a case study,\nwe compare egoistic AVs to our altruistic autonomous agents in a highway\nmerging setting and demonstrate the emerging behaviors that lead to a\nnoticeable improvement in the number of successful merges as well as the\noverall traffic flow and safety.\n"} {"abstract": " Improving existing widely-adopted prediction models is often a more efficient\nand robust way towards progress than training new models from scratch. Existing\nmodels may (a) incorporate complex mechanistic knowledge, (b) leverage\nproprietary information, and (c) have surmounted barriers to adoption. Compared\nto model training, model improvement and modification receive little attention.\nIn this paper we propose a general approach to model improvement: we combine\ngradient boosting with any previously developed model to improve model\nperformance while retaining important existing characteristics. To exemplify,\nwe consider the context of Mendelian models, which estimate the probability of\ncarrying genetic mutations that confer susceptibility to disease by using\nfamily pedigrees and health histories of family members. 
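One plausible instantiation of this combination, sketched under stated assumptions (a frozen existing model exposed as a prediction callable, with gradient-boosted trees fit to its residuals), is:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def improve_model(existing_predict, X, y):
        # existing_predict: callable mapping features X to predictions of the
        # previously developed (frozen) model, e.g. a Mendelian risk model.
        base = existing_predict(X)
        booster = GradientBoostingRegressor(n_estimators=200, max_depth=3)
        booster.fit(X, y - base)              # learn what the base model misses
        return lambda X_new: existing_predict(X_new) + booster.predict(X_new)

    # Usage sketch: improved = improve_model(mendelian_model.predict, X, y)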
Via simulations we\nshow that the integration of gradient boosting with an existing Mendelian model\ncan produce an improved model that outperforms both that model and the model\nbuilt using gradient boosting alone. We illustrate the approach on genetic\ntesting data from the USC-Stanford Cancer Genetics Hereditary Cancer Panel\n(HCP) study.\n"} {"abstract": " The traditional on-die, three-level cache hierarchy design is very commonly\nused but is also prone to latency, especially at the Level 2 (L2) cache. We\ndiscuss three distinct ways of improving this design in order to achieve better\nperformance. Performance is especially important for systems with high\nworkloads. The first method proposes to eliminate L2 altogether in favor of a\nnew prefetching technique; the second method suggests increasing the size of\nL2; and the last method advocates the implementation of optical caches. After\ncarefully weighing the performance gains and the advantages and disadvantages\nof each method, we found the last method to be the best of the three.\n"} {"abstract": " It is known that general relativity (GR) theory is not consistent with the\nlatest observations. The modified theory of gravity known as $\\mathrm{f(R)}$,\nwhere $\\mathrm{R}$ is the Ricci scalar, is considered to be a good candidate\nfor dealing with the anomalies present in classical GR. In this context, we\nstudy static rotating uncharged anti-de Sitter and de Sitter (AdS and dS) black\nholes (BHs) using $\\mathrm{f(R)}$ theory without assuming any constraints on\nthe Ricci scalar or on $\\mathrm{f(R)}$. We derive BH solutions that depend on\nthe convolution function and deviate from the AdS/dS Schwarzschild BH solution\nof GR. Although the field equations have no dependence on the cosmological\nconstant, the BHs are characterized by an effective cosmological constant that\ndepends on the convolution function. The asymptotic form of this BH solution\ndepends on the gravitational mass of the system and on extra terms that lead to\nBHs different from GR BHs but corresponding to GR BHs under certain conditions.\nWe also investigate how these extra terms are responsible for making the\nsingularities of the invariants milder than those of the GR BHs. We study some\nphysical properties of the BHs from the point of view of thermodynamics and\nshow that there is an outer event horizon in addition to the inner Cauchy\nhorizons. Among other things, we show that our BH solutions satisfy the first\nlaw of thermodynamics. To check the stability of these BHs we use geodesic\ndeviation and derive the stability conditions. Finally, using the odd-type mode\nit is shown that all the derived BHs are stable and have a radial speed equal\nto one.\n"} {"abstract": " In physical experiments, reference frames are standardly modelled through a\nspecific choice of coordinates used to describe the physical systems, but they\nthemselves are not treated as physical systems. However, any reference frame is\na physical system that ultimately behaves according to quantum mechanics. We\ndevelop a framework for rotational (i.e. spin) quantum reference frames, with\nrespect to which quantum systems with spin degrees of freedom are described. 
We\ngive an explicit model for such frames as systems composed of three spin\ncoherent states of angular momentum $j$ and introduce the transformations\nbetween them by upgrading the Euler angles occurring in classical\n$\\textrm{SO}(3)$ spin transformations to quantum mechanical operators acting on\nthe states of the reference frames. To ensure that an arbitrary rotation can be\napplied to the spin we take the limit of infinitely large $j$, in which case\nthe angle operator possesses a continuous spectrum. We prove that rotationally\ninvariant Hamiltonians (such as that of the Heisenberg model) are invariant\nunder a larger group of quantum reference frame transformations. Our result is\nthe first development of the quantum reference frame formalism for a\nnon-Abelian group.\n"} {"abstract": " We consider a simple scalar dark matter model within the framework of a\ngauged $L_{\\mu}-L_{\\tau}$ symmetry. A gauge boson $Z'$ as well as two scalar\nfields $S$ and $\\Phi$ are introduced to the Standard Model (SM). $S$ and $\\Phi$\nare SM singlets but both carry $U(1)_{L_{\\mu}-L_{\\tau}}$ charge. The real and\nimaginary components of $S$ can acquire different masses after spontaneous\nsymmetry breaking, and the lighter one can play the role of dark matter, which\nis stabilized by the residual $Z_2$ symmetry. A viable parameter space is\nconsidered to discuss the possibility of light dark matter as well as the\nco-annihilation case, and we present constraints on the parameter space from\nthe current $(g-2)_{\\mu}$ anomaly, Higgs invisible decay, the dark matter relic\ndensity and direct detection.\n"} {"abstract": " Decentralized financial (DeFi) applications on the Ethereum blockchain are\nhighly interoperable because they share a single state in a deterministic\ncomputational environment. Stakeholders can deposit claims on assets, referred\nto as 'liquidity shares', across applications, producing effects equivalent to\nrehypothecation in traditional financial systems. We seek to understand the\ndegree to which this practice may contribute to financial integration on\nEthereum by examining transactions in 'composed' derivatives for the assets\nDAI, USDC, USDT, ETH and tokenized BTC for the full set of 344.8 million\nEthereum transactions computed in 2020. We identify a salient trend for\n'composing' assets in multiple sequential generations of derivatives and\ncomment on potential systemic implications for the Ethereum network.\n"} {"abstract": " The paper presents the submission of the team indicnlp@kgp to the EACL 2021\nshared task \"Offensive Language Identification in Dravidian Languages.\" The\ntask aimed to classify different offensive content types in 3 code-mixed\nDravidian language datasets. The work leverages existing state-of-the-art\napproaches in text classification by incorporating additional data and transfer\nlearning on pre-trained models. Our final submission is an ensemble of an\nAWD-LSTM-based model along with 2 different transformer model architectures\nbased on BERT and RoBERTa. We achieved weighted-average F1 scores of 0.97,\n0.77, and 0.72 in the Malayalam-English, Tamil-English, and Kannada-English\ndatasets, ranking 1st, 2nd, and 3rd on the respective tasks.\n"} {"abstract": " The goal of this study was to improve the post-processing of precipitation\nforecasts using convolutional neural networks (CNNs). 
Instead of\npost-processing forecasts on a per-pixel basis, as is usually done when\nemploying machine learning in meteorological post-processing, input forecast\nimages were combined and transformed into probabilistic output forecast images\nusing fully convolutional neural networks. CNNs did not outperform regularized\nlogistic regression. Additionally, an ablation analysis was performed.\nCombining input forecasts from a global low-resolution weather model and a\nregional high-resolution weather model improved performance over either one.\n"} {"abstract": " Deep learning can promote mammography-based computer-aided diagnosis\n(CAD) for breast cancers, but it generally suffers from the small sample size\nproblem. Self-supervised learning (SSL) has shown its effectiveness in medical\nimage analysis with limited training samples. However, the network model\nsometimes cannot be well pre-trained in the conventional SSL framework due to\nthe limitations of the pretext task and fine-tuning mechanism. In this work, a\nTask-driven Self-supervised Bi-channel Networks (TSBN) framework is proposed to\nimprove the performance of the classification model in mammography-based CAD.\nIn particular, a new gray-scale image mapping (GSIM) is designed as the pretext\ntask, which embeds the class label information of mammograms into the image\nrestoration task to improve discriminative feature representation. The proposed\nTSBN then innovatively integrates different network architectures, including\nthe image restoration network and the classification network, into a unified\nSSL framework. It jointly trains the bi-channel network models and\ncollaboratively transfers the knowledge from the pretext task network to the\ndownstream task network with improved diagnostic accuracy. The proposed TSBN is\nevaluated on the public INbreast mammogram dataset. The experimental results\nindicate that it outperforms conventional SSL and multi-task learning\nalgorithms for the diagnosis of breast cancers with limited samples.\n"} {"abstract": " The true topological nature of the Kondo insulator SmB$_6$ remains to be\nunveiled. Our previous tunneling study not only found evidence for the\nexistence of surface Dirac fermions, but it also uncovered that they inherently\ninteract with the spin excitons, collective excitations in the bulk. We have\nextended such a spectroscopic investigation into crystals containing a Sm\ndeficiency. The bulk hybridization gap is found to be insensitive to the\ndeficiency up to the 1% studied here, but the surface states in Sm-deficient\ncrystals exhibit quite different temperature evolutions from those in\nstoichiometric ones. We attribute this to the topological surface states\nremaining incoherent down to the lowest measurement temperature due to their\ncontinued interaction with the spin excitons that remain uncondensed. This\nresult shows that the detailed topological nature of SmB$_6$ could vary\ndrastically in the presence of disorder in the lattice. This sensitivity to\ndisorder seemingly contradicts the celebrated topological protection, but it\ncan be understood as being due to the intimate interplay between strong\ncorrelations and topological effects.\n"} {"abstract": " Effective environmental planning and management to address climate change\ncould be achieved through extensive environmental modeling with machine\nlearning and conventional physical models. 
In order to develop and improve\nthese models, practitioners and researchers need comprehensive benchmark\ndatasets, prepared and processed with environmental expertise, that they can\nrely on. This study presents an extensive dataset of rainfall events for\nthe state of Iowa (2016-2019) acquired from the National Weather Service Next\nGeneration Weather Radar (NEXRAD) system and processed by a quantitative\nprecipitation estimation system. The dataset presented in this study could be\nused for better disaster monitoring, response and recovery by paving the way\nfor both predictive and prescriptive modeling.\n"} {"abstract": " This paper is concerned with polynomially generated multiplier invariant\nsubspaces of the weighted Bergman space $A_{\\boldsymbol{\\beta}}^2$ in\ninfinitely many variables. We completely classify these invariant subspaces\nunder unitary equivalence. Our results not only cover the cases of both the\nHardy space $H^{2}(\\mathbb{D}_{2}^{\\infty})$ and the Bergman space\n$A^{2}(\\mathbb{D}_{2}^{\\infty})$ in infinitely many variables, but also apply\nin the finite-variable setting.\n"} {"abstract": " The capability of generalization to unseen domains is crucial for deep\nlearning models when considering real-world scenarios. However, currently\navailable medical image datasets, such as those for COVID-19 CT images, exhibit\nlarge variations in infections and domain shift problems. To address this\nissue, we propose a prior-knowledge-driven domain adaptation and a dual-domain\nenhanced self-correction learning scheme. Based on the novel learning schemes,\na domain adaptation based self-correction model (DASC-Net) is proposed for\nCOVID-19 infection segmentation on CT images. DASC-Net consists of a novel\nattention and feature domain enhanced domain adaptation model (AFD-DA) to solve\nthe domain shifts and a self-correction learning process to refine segmentation\nresults. The innovations in AFD-DA include an image-level activation feature\nextractor with attention to lung abnormalities and a multi-level discrimination\nmodule for hierarchical feature domain alignment. The proposed self-correction\nlearning process adaptively aggregates the learned model and corresponding\npseudo labels for the propagation of aligned source and target domain\ninformation to alleviate the overfitting to noise caused by pseudo labels.\nExtensive experiments over three publicly available COVID-19 CT datasets\ndemonstrate that DASC-Net consistently outperforms state-of-the-art\nsegmentation, domain shift, and coronavirus infection segmentation methods.\nAblation analysis further shows the effectiveness of the major components in\nour model. DASC-Net enriches the theory of domain adaptation and\nself-correction learning in medical imaging and can be generalized to\nmulti-site COVID-19 infection segmentation on CT images for clinical\ndeployment.\n"} {"abstract": " We revisit the foundational Moment Formula proved by Roger Lee fifteen years\nago. We show that when the underlying stock price martingale admits finite\nlog-moments $E[|\\log(S)|^q]$ for some positive $q$, the arbitrage-free growth\nin the left wing of the implied volatility smile is less constrained than Lee's\nbound. The result is rationalised by a market trading discretely monitored\nvariance swaps wherein the payoff is a function of squared log-returns, and\nrequires no assumption for the underlying martingale to admit any negative\nmoment. In this respect, the result can be derived from a model-independent\nsetup. 
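For context, Lee's left-wing moment formula, recalled here as it is standardly stated in the literature (the notation is the usual one, not necessarily the paper's), links the smile slope to the critical negative moment:

\[
  \beta_L \;=\; \limsup_{k \to \infty} \frac{\sigma_{\mathrm{BS}}^2(-k)\,T}{k} \;\in\; [0,2],
  \qquad
  \sup\bigl\{\, q \ge 0 \;:\; E\bigl[S_T^{-q}\bigr] < \infty \,\bigr\}
  \;=\; \frac{1}{2\beta_L} + \frac{\beta_L}{8} - \frac{1}{2},
\]

with the convention $1/0 = \infty$; the abstract above shows how finite log-moments relax this left-wing constraint.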
As a byproduct,\nwe relax the moment assumptions on the stock price to provide a new proof of\nthe notorious Gatheral-Fukasawa formula expressing variance swaps in terms of\nthe implied volatility.\n"} {"abstract": " We present a combined angle-resolved photoemission spectroscopy and\nlow-energy electron diffraction (LEED) study of the prominent transition metal\ndichalcogenide IrTe$_2$ upon potassium (K) deposition on its surface. Pristine\nIrTe$_2$ undergoes a series of charge-ordered phase transitions below room\ntemperature that are characterized by the formation of stripes of Ir dimers of\ndifferent periodicities. Supported by density functional theory calculations,\nwe first show that the K atoms dope the topmost IrTe$_2$ layer with electrons,\ntherefore strongly decreasing the work function and shifting only the\nelectronic surface states towards higher binding energy. We then follow the\nevolution of its electronic structure as a function of temperature across the\ncharge-ordered phase transitions and observe that their critical temperatures\nare unchanged for K coverages of $0.13$ and $0.21$~monolayer (ML). Using LEED,\nwe also confirm that the periodicity of the related stripe phases is unaffected\nby the K doping. We surmise that the charge-ordered phase transitions of\nIrTe$_2$ are robust against electron surface doping, because of its metallic\nnature at all temperatures, and due to the importance of structural effects in\nstabilizing charge order in IrTe$_2$.\n"} {"abstract": " We theoretically show that two distinctive spin textures manifest themselves\naround saddle points of energy bands in monolayer NbSe$_2$ under external gate\npotentials. While the density of states diverges logarithmically at all saddle\npoints, those at the zone boundaries display a windmill-shaped spin texture\nwhile the others display unidirectional spin orientations. The disparate\nspin-resolved states are demonstrated to contribute significantly to an\nintrinsic spin Hall conductivity, while their characteristics differ from each\nother. Based on a minimal but essential tight-binding approximation reproducing\nfirst-principles computation results, we establish distinct effective Rashba\nHamiltonians for each saddle point, realizing the unique spin textures\ndepending on their momentum. The energetic positions of the saddle points in\nsingle-layer NbSe$_2$ are shown to be well controlled by a gate potential, so\nthat it could be a prototypical system to test the competition between various\ncollective phenomena triggered by diverging densities of states and their spin\ntextures in low dimensions.\n"} {"abstract": " The time of the first occurrence of a threshold crossing event in a\nstochastic process, known as the first passage time, is of interest in many\nareas of science and engineering. Conventionally, there is an implicit\nassumption that the notional 'sensor' monitoring the threshold crossing event\nis always active. In many realistic scenarios, the sensor monitoring the\nstochastic process works intermittently. Then, the relevant quantity of\ninterest is the $\\textit{first detection time}$, which denotes the time when\nthe sensor detects the threshold crossing event for the first time. In this\nwork, a birth-death process monitored by a random intermittent sensor is\nstudied, for which the first detection time distribution is obtained. In\ngeneral, it is shown that the first detection time is related to, and is\nobtainable from, the first passage time distribution. 
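As a toy illustration of the two quantities, here is a minimal Monte Carlo sketch for a linear birth-death chain whose state is checked at Poisson-distributed observation times; all rates and the exponential observation scheme are illustrative assumptions:

    import random

    def first_passage_and_detection(threshold=10, birth=1.0, death=0.5,
                                    sense_rate=0.2, t_max=1e4):
        # One trajectory of a linear birth-death chain started at n = 1.
        # Returns (first passage time, first detection time) for the threshold;
        # the sensor inspects the state at Poisson(sense_rate) moments.
        t, n, t_pass = 0.0, 1, None
        t_obs = random.expovariate(sense_rate)
        while 0 < n and t < t_max:
            t_jump = t + random.expovariate((birth + death) * n)
            while t_obs < t_jump:                 # observations before next jump
                if n >= threshold:
                    return t_pass, t_obs          # event detected at this check
                t_obs += random.expovariate(sense_rate)
            t = t_jump
            n += 1 if random.random() < birth / (birth + death) else -1
            if t_pass is None and n >= threshold:
                t_pass = t                        # first passage, maybe unobserved
        return t_pass, None                       # extinction or horizon reached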
Our analytical results\ndisplay an excellent agreement with simulations. Further, this framework is\ndemonstrated in several applications -- the SIS compartmental and logistic\nmodels, and birth-death processes with resetting. Finally, we solve the\npractically relevant problem of inferring the first passage time distribution\nfrom the first detection time.\n"} {"abstract": " Millimeter-wave (mmWave) and sub-Terahertz (THz) frequencies are expected to\nplay a vital role in 6G wireless systems and beyond due to the vast available\nbandwidth of many tens of GHz. This paper presents an indoor 3-D spatial\nstatistical channel model for mmWave and sub-THz frequencies based on extensive\nradio propagation measurements at 28 and 140 GHz conducted in an indoor office\nenvironment from 2014 to 2020. Omnidirectional and directional path loss models\nand channel statistics such as the number of time clusters, cluster delays, and\ncluster powers were derived from over 15,000 measured power delay profiles. The\nresulting channel statistics show that the number of time clusters follows a\nPoisson distribution and the number of subpaths within each cluster follows a\ncomposite exponential distribution for both LOS and NLOS environments at 28 and\n140 GHz. This paper proposes a unified indoor statistical channel model for\nmmWave and sub-Terahertz frequencies following the mathematical framework of\nthe previous outdoor NYUSIM channel models. A corresponding indoor channel\nsimulator is developed, which can recreate 3-D omnidirectional, directional,\nand multiple input multiple output (MIMO) channels for arbitrary mmWave and\nsub-THz carrier frequencies up to 150 GHz, signal bandwidths, and antenna\nbeamwidths. The presented statistical channel model and simulator will guide\nfuture air-interface, beamforming, and transceiver designs for 6G and beyond.\n"} {"abstract": " We aim at measuring the influence of the nondeterministic choices of a part\nof a system on its ability to satisfy a specification. For this purpose, we\napply the concept of Shapley values to verification as a means to evaluate how\nimportant a part of a system is. The importance of a component is measured by\ngiving its control to an adversary, alone or along with other components, and\ntesting whether the system can still fulfill the specification. We study this\nidea in the framework of model-checking with various classical types of\nlinear-time specification, and propose several ways to transpose it to\nbranching ones. We also provide tight complexity bounds in almost every case.\n"} {"abstract": " The associations between emergent physical phenomena (e.g.,\nsuperconductivity) and the orbital, charge, and spin degrees of freedom of $3d$\nelectrons are intriguing in transition metal compounds. Here, we successfully\nmanipulate the superconductivity of the spinel oxide Li$_{1\\pm\nx}$Ti$_2$O$_{4-\\delta}$ (LTO) by ionic liquid gating. A dome-shaped\nsuperconducting phase diagram is established, where two insulating phases are\nrevealed in both the heavily electron-doped and hole-doped regions. The\nsuperconductor-insulator transition (SIT) in the hole-doped region can be\nattributed to the loss of Ti valence electrons. In the electron-doped region,\nLTO exhibits an unexpected SIT instead of metallic behavior despite an\nincrease in carrier density. 
Furthermore, a thermal hysteresis is observed in\nthe normal-state resistance curve, suggesting a first-order phase transition.\nBy comparing the transport and structural results of LTO with those of the\nother spinel oxide superconductor MgTi$_2$O$_4$, as well as analysing the\nelectronic structure by first-principles calculations, we speculate that the\nSIT and the thermal hysteresis stem from enhanced $3d$ electron correlations\nand the formation of orbital ordering. Further comprehension of the detailed\ninterplay between superconductivity and orbital ordering would contribute to\nrevealing the unconventional superconducting pairing mechanism.\n"} {"abstract": " This paper explores the options available to the anti-realist to defend a\nQuinean empirical under-determination thesis using examples of dualities. I\nfirst explicate a version of the empirical under-determination thesis that can\nbe brought to bear on theories of contemporary physics. Then I identify a class\nof examples of dualities that lead to empirical under-determination. But I\nargue that the resulting under-determination is benign, and is not a threat to\na cautious scientific realism. Thus dualities are not new ammunition for the\nanti-realist. The paper also shows how the number of possible interpretative\noptions about dualities that have been considered in the literature can be\nreduced, and suggests a general approach to scientific realism that one may\ntake dualities to favour.\n"} {"abstract": " Two-dimensional (2D) magnets have broad application prospects in\nspintronics, but how to effectively control them with a small electric field is\nstill an issue. Here we propose that 2D magnets can be efficiently controlled\nin a multiferroic heterostructure composed of a 2D magnetic material and a\nperovskite oxide ferroelectric (POF) whose dielectric polarization is easily\nflipped under a small electric field. We illustrate the feasibility of such a\nstrategy in the bilayer CrI3/BiFeO3(001) heterostructure by using\nfirst-principles calculations. Different from traditional POF multiferroic\nheterostructures, which have strong interface interactions, we find that the\ninterface interaction between CrI3 and BiFeO3(001) is of van der Waals type.\nNevertheless, the heterostructure has particularly strong magnetoelectric\ncoupling, where the bilayer CrI3 can be efficiently switched between\nferromagnetic and antiferromagnetic types by the polarized states of\nBiFeO3(001). We also discover a competing effect between electron doping and\nthe additional electric field on the interlayer exchange coupling interaction\nof CrI3, which is responsible for the magnetic phase transition. Our results\nprovide a new avenue for the tuning of 2D magnets with a small electric field.\n"} {"abstract": " Previous studies have shown that the ground state of systems of nucleons\ncomposed of an equal number of protons and neutrons interacting via\nproton-neutron pairing forces can be described accurately by a condensate of\n$\\alpha$-like quartets. Here we extend these studies to the low-lying excited\nstates of these systems and show that these states can be accurately described\nby breaking a quartet from the ground state condensate and replacing it with an\n\"excited\" quartet. 
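Schematically, the two kinds of states just described can be written as follows (the notation is illustrative, not the paper's):

\[
  |\mathrm{gs}\rangle \;\propto\; \bigl(Q^{+}\bigr)^{n_q} |0\rangle,
  \qquad
  |\mathrm{exc}\rangle \;\propto\; \tilde{Q}^{+} \bigl(Q^{+}\bigr)^{n_q-1} |0\rangle,
\]

where $Q^{+}$ creates the collective $\alpha$-like quartet of the condensate and $\tilde{Q}^{+}$ creates the "excited" quartet that replaces one of them.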
This approach, which is analogous to the one-broken-pair\napproximation employed for like-particle pairing, is analysed for various\nisovector and isovector-isoscalar pairing forces.\n"} {"abstract": " This paper focuses on regularisation methods using models up to the third\norder to search for up to second-order critical points of a finite-sum\nminimisation problem. The variant presented belongs to the framework of [3]: it\nemploys random models with accuracy guaranteed with a sufficiently large\nprefixed probability and deterministic inexact function evaluations within a\nprescribed level of accuracy. Without assuming unbiased estimators, the\nexpected number of iterations is $\\mathcal{O}\\bigl(\\epsilon_1^{-2}\\bigr)$ or\n$\\mathcal{O}\\bigl(\\epsilon_1^{-{3/2}}\\bigr)$ when searching for a first-order\ncritical point using a second- or third-order model, respectively, and\n$\\mathcal{O}\\bigl(\\max[\\epsilon_1^{-{3/2}},\\epsilon_2^{-3}]\\bigr)$ when seeking\nsecond-order critical points with a third-order model, in which $\\epsilon_j$,\n$j\\in\\{1,2\\}$, is the $j$th-order tolerance. These results match\nthe worst-case optimal complexity for the deterministic counterpart of the\nmethod. Preliminary numerical tests for first-order optimality in the context\nof nonconvex binary classification in imaging, with and without Artificial\nNeural Networks (ANNs), are presented and discussed.\n"} {"abstract": " Organic-inorganic metal halide perovskites have recently attracted increasing\nattention as highly efficient light-harvesting materials for photovoltaic\napplications. However, the precise control of the crystallization and\nmorphology of organometallic perovskites deposited from solution, considered\ncrucial for enhancing the final photovoltaic performance, remains challenging.\nIn this context, here we report on growing microcrystalline deposits of\nCH3NH3PbI3 (MAPbI3) by one-step solution casting on cylinder-shaped quartz\nsubstrates (rods). We show that the substrate curvature has a strong influence\non the morphology of the obtained polycrystalline deposits of MAPbI3. Although\nthe crystalline width and length markedly decreased for substrates with higher\ncurvatures, the photoluminescence (PL) spectral peak positions did not\nsignificantly evolve for MAPbI3 deposits on substrates with different\ndiameters. The crystalline size reduction and denser coverage of\nmicrocrystalline MAPbI3 deposits on cylinder-shaped substrates with higher\ncurvatures were attributed to two major contributions, both related to the\nannealing step of the MAPbI3 deposits. In particular, the diameter-dependent\nvariability of the heat capacities and the substrate curvature-enhanced solvent\nevaporation rate seemed to contribute the most to the crystallization process\nand the resulting morphology changes of MAPbI3 deposits on cylinder-shaped\nquartz substrates with various diameters. The longitudinal geometry of\ncylinder-shaped substrates also provided a facile solution for checking the PL\nresponse of the MAPbI3 deposits exposed to the flow of various gaseous media,\nsuch as oxygen, nitrogen and argon. Overall, the approach reported herein\ninspires novel, cylinder-shaped geometries of MAPbI3 deposits, which can find\napplications in low-cost photo-optical devices, including gas sensors.\n"} {"abstract": " This paper proposes a deep learning framework for classification of BBC\ntelevision programmes using audio. 
The audio is first transformed into\nspectrograms, which are fed into a pre-trained convolutional neural network\n(CNN), obtaining predicted probabilities of sound events occurring in the audio\nrecording. Statistics for the predicted probabilities and detected sound events\nare then calculated to extract discriminative features representing the\ntelevision programmes. Finally, the embedded features extracted are fed into a\nclassifier for classifying the programmes into different genres. Our\nexperiments are conducted over a dataset of 6,160 programmes belonging to nine\ngenres labelled by the BBC. We achieve an average classification accuracy of\n93.7% over 14-fold cross-validation. This demonstrates the efficacy of the\nproposed framework for the task of audio-based classification of television\nprogrammes.\n"} {"abstract": " We prove that two Enriques surfaces defined over an algebraically closed\nfield of characteristic different from $2$ are isomorphic if their Kuznetsov\ncomponents are equivalent. This improves and completes our previous result, joint\nwith Nuer, where the same statement is proved for generic Enriques surfaces.\n"} {"abstract": " Let $\\{(A_i,B_i)\\}_{i=1}^m$ be a set pair system. F\\\"{u}redi, Gy\\'{a}rf\\'{a}s\nand Kir\\'{a}ly called it {\\em $1$-cross intersecting} if $|A_i\\cap B_j|$ is $1$\nwhen $i\\neq j$ and $0$ if $i=j$. They studied such systems and their\ngeneralizations, and in particular considered $m(a,b,1)$ -- the maximum size of\na $1$-cross intersecting set pair system in which $|A_i|\\leq a$ and $|B_i|\\leq\nb$ for all $i$. F\\\"{u}redi, Gy\\'{a}rf\\'{a}s and Kir\\'{a}ly proved that\n$m(n,n,1)\\geq 5^{(n-1)/2}$ and asked whether there are upper bounds on\n$m(n,n,1)$ significantly better than the classical bound ${2n\\choose n}$ of\nBollob\\'as for cross intersecting set pair systems.\n Answering one of their questions, Holzman recently proved that if $a,b\\geq\n2$, then $m(a,b,1)\\leq \\frac{29}{30}\\binom{a+b}{a}$. He also conjectured that\nthe factor $\\frac{29}{30}$ in his bound can be replaced by $\\frac{5}{6}$. The\ngoal of this paper is to prove this bound.\n"} {"abstract": " The structure and stability of ternary systems prepared with polysorbate 60\nand various combinations of cetyl (C16) and stearyl (C18) alcohols (fatty\nalcohol 16 g, polysorbate 4 g, water 180 g) were examined as they aged over 3\nmonths at 25$^{\\circ}$C. Rheological results showed that the consistency of these\nsystems increased initially during roughly the first week of aging, which was\nfollowed by either little change in consistency (systems containing from 30% to 70%\nC18, with the 50% C18 system showing the highest consistencies in viscosity and\nelasticity) or significant breakdown of structure (remaining systems). The\nformation and/or disintegration of all ternary systems were also detected by\nmicroscopy and differential scanning calorimetry experiments. This study\nemphasizes the fact that the structure and consistency of ternary systems are\ndominantly controlled by the swelling capacity of the lamellar\n$\\alpha$-crystalline gel phase. When the conversion of this gel phase into\nnon-swollen $\\beta$- or $\\gamma$-crystals occurs, systems change from\nsemisolids to fluids.
Molecular dynamics simulations were performed to provide\nimportant details on the molecular mechanism of our ternary systems.\nComputational results supported the experimentally proposed hypothesis that the\nstability of the mixed system is due to an increase in the flexibility, and\nhence in the configurational entropy, of the chain tip of the\nalcohol with the longer hydrocarbon chain (with the highest flexibility observed\nin the 50:50 C18:C16 system). This finding is in excellent agreement with\nexperimental conclusions. Additionally, simulation data show that in the mixed\nsystem, the alcohol with the shorter hydrocarbon chain becomes more rigid. These\nmolecular details are not accessible in experimental measurements.\n"} {"abstract": " The slow revolution of the Earth and Moon around their barycenter does not\ninduce Coriolis accelerations. On the other hand, the motion of Sun and Earth\nis a rotation with Coriolis forces which appear not to have been calculated\nyet, nor have the inertial accelerations within the system of motion of all\nthree celestial bodies. It is the purpose of this contribution to evaluate the\nrelated Coriolis and centrifugal terms and to compare them to the available\natmospheric standard terms. It is a main result that the revolution is of\ncentral importance in the combined dynamics of Earth, Moon and Sun. Covariant\nflow equations are well-known tools for dealing with such complicated flow\nsettings. They are used here to quantify the effects of the Earth's revolution\naround the Earth-Moon barycenter and its rotation around the Sun on the\natmospheric circulation. It is found that the motion around the Sun adds\ntime-dependent terms to the standard Coriolis forces. The related centrifugal\naccelerations are presented. A major part of these accelerations is balanced by\nthe gravitational attraction by Moon and Sun, but important unbalanced\ncontributions remain. New light on the consequences of the Earth's revolution\nis shed by repeating the calculations for a rotating Earth-Moon pair. It is\nfound that the revolution complicates the atmospheric dynamics.\n"} {"abstract": " It is no secret amongst deep learning researchers that finding the optimal\ndata augmentation strategy during training can mean the difference between\nstate-of-the-art performance and a run-of-the-mill result. To that end, the\ncommunity has seen many efforts to automate the process of finding the perfect\naugmentation procedure for any task at hand. Unfortunately, even recent\ncutting-edge methods bring massive computational overhead, requiring as many as\n100 full model trainings to settle on an ideal configuration. We show how to\nachieve equivalent performance in just 6 trainings, with Random Unidimensional\nAugmentation. Source code is available at https://github.com/fastestimator/RUA\n"} {"abstract": " By processing in the frequency domain (FD), massive MIMO systems can approach\nthe theoretical per-user capacity using a single carrier modulation (SCM)\nwaveform with a cyclic prefix. Minimum mean squared error (MMSE) detection and\nzero forcing (ZF) precoding have been shown to effectively cancel multi-user\ninterference while compensating for inter-symbol interference. In this paper,\nwe present a modified downlink precoding approach in the FD based on\nregularized zero forcing (RZF), which reuses the matrix inverses calculated as\npart of the FD MMSE uplink detection.
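To make the reuse concrete, here is a minimal per-subcarrier sketch in NumPy; the shapes, names, and regularization choice are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 32, 4                      # BS antennas, single-antenna users (toy sizes)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
alpha = 0.1                       # regularization, on the order of the noise variance

# Uplink FD MMSE detection uses a K x K inverse: x_hat = A_inv @ H^H @ y.
A_inv = np.linalg.inv(H.conj().T @ H + alpha * np.eye(K))

def detect(y):                    # y: received uplink vector, shape (M,)
    return A_inv @ H.conj().T @ y

# Downlink RZF precoding can reuse the same K x K inverse (no new factorization),
# since H (H^H H + aI)^{-1} = (H H^H + aI)^{-1} H.
def precode(d):                   # d: user data symbols, shape (K,)
    return H @ A_inv @ d
```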
By reusing these calculations, the\ncomputational complexity of the RZF precoder is drastically lowered, compared\nto the ZF precoder. The introduction of regularization in RZF leads to a bias\nin the detected data symbols at the user terminals. We show this bias can be\nremoved by incorporating a scaling factor at the receiver. Furthermore, it is\nnoted that user powers have to be optimized to strike a balance between noise\nand interference seen at each user terminal. The resulting performance of the\nRZF precoder exceeds that of the ZF precoder for low and moderate input\nsignal-to-noise ratio (SNR) conditions, and performance is equal for high input\nSNR. These results are established and confirmed by analysis and simulation.\n"} {"abstract": " We analyze the orthogonal greedy algorithm when applied to dictionaries\n$\\mathbb{D}$ whose convex hull has small entropy. We show that if the metric\nentropy of the convex hull of $\\mathbb{D}$ decays at a rate of\n$O(n^{-\\frac{1}{2}-\\alpha})$ for $\\alpha > 0$, then the orthogonal greedy\nalgorithm converges at the same rate on the variation space of $\\mathbb{D}$.\nThis improves upon the well-known $O(n^{-\\frac{1}{2}})$ convergence rate of the\northogonal greedy algorithm in many cases, most notably for dictionaries\ncorresponding to shallow neural networks. These results hold under no\nadditional assumptions on the dictionary beyond the decay rate of the entropy\nof its convex hull. In addition, they are robust to noise in the target\nfunction and can be extended to convergence rates on the interpolation spaces\nof the variation norm. Finally, we show that these improved rates are sharp and\nprove a negative result showing that the iterates generated by the orthogonal\ngreedy algorithm cannot in general be bounded in the variation norm of\n$\\mathbb{D}$.\n"} {"abstract": " Network flows are one of the most studied combinatorial optimization problems\nwith innumerable applications. Any flow on a directed acyclic graph (DAG) $G$\nhaving $n$ vertices and $m$ edges can be decomposed into a set of $O(m)$ paths,\nwith applications from network routing to assembly of biological sequences. In\nsome applications, the flow decomposition corresponds to some particular data\nthat need to be reconstructed from the flow, which requires finding paths (or\nsubpaths) appearing in all possible flow decompositions, referred to as safe\npaths.\n Recently, Ma et al. [WABI 2020] addressed a related problem in a\nprobabilistic framework. Later, they gave a quadratic-time algorithm based on a\nglobal criterion, for a generalized version (AND-Quant) of the corresponding\nproblem, i.e., reporting whether a given flow path is safe. Our contributions are as\nfollows:\n 1- A simple characterization for the safety of a given path based on a local\ncriterion, which can be directly adapted to give an optimal linear time\nverification algorithm.\n 2- A simple enumeration algorithm that reports all maximal safe paths on a\nflow network in $O(mn)$ time. The algorithm reports all safe paths using a\ncompact representation of the solution (called ${\\cal P}_c$), which is\n$\\Omega(mn)$ in the worst case, but merely $O(m+n)$ in the best case.\n 3- An improved enumeration algorithm where all safe paths ending at every\nvertex are represented as funnels using $O(n^2+|{\\cal P}_c|)$ space.
These can\nbe computed and used to report all maximal safe paths, using time linear in the\ntotal space required by funnels, with an extra logarithmic factor.\n Overall, we present a simple characterization for the problem leading to an\noptimal verification algorithm and a simple enumeration algorithm. The\nenumeration algorithm is improved using the funnel structures for safe paths,\nwhich may be of independent interest.\n"} {"abstract": " The most advanced D-Wave Advantage quantum annealer has 5000+ qubits;\nhowever, every qubit is connected to a small number of neighbors. As such,\nimplementation of a fully-connected graph results in an order of magnitude\nreduction in qubit count. To compensate for the reduced number of qubits, one\nhas to rely on special heuristic software such as qbsolv, the purpose of which\nis to decompose a large problem into smaller pieces that fit onto a quantum\nannealer. In this work, we compare the performance of two implementations of\nsuch software: the original open-source qbsolv, which is a part of the D-Wave\nOcean tools, and the new Mukai QUBO solver from Quantum Computing Inc. (QCI). The\ncomparison is done for solving the electronic structure problem and is\nimplemented in a classical mode (Tabu search techniques). The Quantum Annealer\nEigensolver is used to map the electronic structure eigenvalue-eigenvector\nequation to a type of problem solvable on modern quantum annealers. We find\nthat the Mukai QUBO solver outperforms the Ocean qbsolv for all calculations\ndone in the present work, both the ground and excited state calculations. This\nwork stimulates the development of software to assist in the utilization of\nmodern quantum annealers.\n"} {"abstract": " We derive the Thouless-Anderson-Palmer (TAP) equations for the Ghatak and\nSherrington model. Our derivation, based on the cavity method, holds at high\ntemperature and at all values of the crystal field. It confirms the prediction\nof Yokota.\n"} {"abstract": " One of the most complex and devastating disaster scenarios that the\nU.S.~Pacific Northwest region and the state of Oregon face is a large-magnitude\nCascadia Subduction Zone earthquake event. The region's electrical\ngrid lacks resilience against the destruction of a megathrust earthquake, a\npowerful tsunami, hundreds of aftershocks and increased volcanic activity, all\nof which are highly probable components of this hazard. This research seeks to\ncatalyze further understanding and improvement of resilience. By systematizing\npower-system-related experiences of historical earthquakes, and collecting\npractical and innovative ideas from other regions on how to enhance network\ndesign, construction, and operation, important steps are being taken toward a\nmore resilient, earthquake-resistant grid. This paper presents relevant\nfindings as an overview and a useful guideline for those who\nare also working towards greater electrical grid resilience.\n"} {"abstract": " The representation of data and its relationships using networks is prevalent\nin many research fields such as computational biology, medical informatics and\nsocial networks. Recently, complex network models have been introduced to\nbetter capture the insights of the modelled scenarios. Among others, dual\nnetwork-based models have been introduced, which consist in mapping\ninformation as a pair of networks containing the same nodes but different edges.\n We focus on the use of a novel approach to visualise and analyse dual\nnetworks.
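A minimal sketch of this kind of dual-network community extraction, assuming networkx's Louvain implementation and a toy combination rule of our own choosing (the tool itself, described next, differs in its details):

```python
import networkx as nx

# Two toy networks over the same node set ("dual networks").
G1 = nx.karate_club_graph()
G2 = nx.erdos_renyi_graph(G1.number_of_nodes(), 0.1, seed=1)

# One simple notion of "common" structure: keep only shared edges,
# then run Louvain community detection on that shared graph.
norm = lambda edges: {tuple(sorted(e)) for e in edges}
common = nx.Graph()
common.add_nodes_from(G1.nodes)
common.add_edges_from(norm(G1.edges) & norm(G2.edges))
communities = nx.community.louvain_communities(common, seed=0)
print([sorted(c) for c in communities if len(c) > 1])
```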
The method uses two algorithms for community discovery, and it is\nprovided as a Python-based tool with a graphical user interface. The tool is\nable to load dual networks and to extract both the densest connected subgraph\nand the common modular communities. The latter are obtained by using an\nadapted implementation of the Louvain algorithm.\n The proposed algorithm and graphical tool have been tested by using social,\nbiological, and co-authorship networks. Results demonstrate that the proposed\napproach is efficient and is able to extract meaningful information from dual\nnetworks. Finally, as a further contribution, the proposed graphical user interface can\nbe considered a valuable innovation in this context.\n"} {"abstract": " In this article, we develop an algebraic framework of axioms which abstracts\nvarious high-level properties of multi-qudit representations of generalized\nClifford algebras. We further construct an explicit model and prove that it\nsatisfies these axioms. Strengths of our algebraic framework include the\nminimality of its assumptions, and the ease with which one may give an\nexplicit construction satisfying these assumptions. In terms of applications,\nthis algebraic framework provides a solid foundation which opens the way for\ndeveloping a graphical calculus for multi-qudit representations of generalized\nClifford algebras using purely algebraic methods, which is addressed in a\nfollow-up paper.\n"} {"abstract": " Assuming the Riemann hypothesis, we establish explicit bounds for the modulus\nof the log-derivative of Riemann's zeta-function in the critical strip.\n"} {"abstract": " Abstract symbolic reasoning, as required in domains such as mathematics and\nlogic, is a key component of human intelligence. Solvers for these domains have\nimportant applications, especially to computer-assisted education. But learning\nto solve symbolic problems is challenging for machine learning algorithms.\nExisting models either learn from human solutions or use hand-engineered\nfeatures, making them expensive to apply in new domains. In this paper, we\ninstead consider symbolic domains as simple environments where states and\nactions are given as unstructured text, and binary rewards indicate whether a\nproblem is solved. This flexible setup makes it easy to specify new domains,\nbut search and planning become challenging. We introduce four environments\ninspired by the Mathematics Common Core Curriculum, and observe that existing\nReinforcement Learning baselines perform poorly. We then present a novel\nlearning algorithm, Contrastive Policy Learning (ConPoLe) that explicitly\noptimizes the InfoNCE loss, which lower bounds the mutual information between\nthe current state and next states that continue on a path to the solution.\nConPoLe successfully solves all four domains. Moreover, problem representations\nlearned by ConPoLe enable accurate prediction of the categories of problems in\na real mathematics curriculum. Our results suggest new directions for\nreinforcement learning in symbolic domains, as well as applications to\nmathematics education.\n"} {"abstract": " With the ongoing penetration of conversational user interfaces, a better\nunderstanding of social and emotional characteristics inherent to dialogue is\nrequired. Chatbots in particular face the challenge of conveying human-like\nbehaviour while being restricted to one channel of interaction, i.e., text.
The\ngoal of the presented work is thus to investigate whether characteristics of\nsocial intelligence embedded in human-chatbot interactions are perceivable by\nhuman interlocutors and, if so, whether this influences the experienced\ninteraction quality. Focusing on the social intelligence dimensions\nAuthenticity, Clarity and Empathy, we first used a questionnaire survey\nevaluating the level of perception in text utterances, and then conducted a\nWizard of Oz study to investigate the effects of these utterances in a more\ninteractive setting. Results show that people have great difficulties\nperceiving elements of social intelligence in text. While on the one hand they\nfind anthropomorphic behaviour pleasant and positive for the naturalness of a\ndialogue, they may also perceive it as frightening and unsuitable when\nexpressed by an artificial agent in the wrong way or at the wrong time.\n"} {"abstract": " Learning representations for graphs plays a critical role in a wide spectrum\nof downstream applications. In this paper, we summarize the limitations of\nprior works in three respects: representation space, modeling dynamics and\nmodeling uncertainty. To bridge this gap, we propose, for the first time, to learn\ndynamic graph representations in hyperbolic space, aiming to infer\nstochastic node representations. Working with hyperbolic space, we present a\nnovel Hyperbolic Variational Graph Neural Network, referred to as HVGNN. In\nparticular, to model the dynamics, we introduce a Temporal GNN (TGNN) based on\na theoretically grounded time encoding approach. To model the uncertainty, we\ndevise a hyperbolic graph variational autoencoder built upon the proposed TGNN\nto generate stochastic node representations of hyperbolic normal distributions.\nFurthermore, we introduce a reparameterisable sampling algorithm for the\nhyperbolic normal distribution to enable the gradient-based learning of HVGNN.\nExtensive experiments show that HVGNN outperforms state-of-the-art baselines on\nreal-world datasets.\n"} {"abstract": " We propose SinIR, an efficient reconstruction-based framework trained on a\nsingle natural image for general image manipulation, including\nsuper-resolution, editing, harmonization, paint-to-image, photo-realistic style\ntransfer, and artistic style transfer. We train our model on a single image\nwith cascaded multi-scale learning, where each network at each scale is\nresponsible for image reconstruction. This reconstruction objective greatly\nreduces the complexity and running time of training, compared to the GAN\nobjective. However, the reconstruction objective also degrades the output\nquality. Therefore, to solve this problem, we further utilize simple random\npixel shuffling, inspired by the Denoising Autoencoder, which also gives control\nover the manipulation. With quantitative evaluation, we show that SinIR has\ncompetitive performance on various image manipulation tasks. Moreover, with a\nmuch simpler training objective (i.e., reconstruction), SinIR is trained 33.5\ntimes faster than SinGAN (for 500 x 500 images), which solves similar tasks. Our\ncode is publicly available at github.com/YooJiHyeong/SinIR.\n"} {"abstract": " This paper studies the model compression problem of vision transformers.\nBenefiting from the self-attention module, transformer architectures have shown\nextraordinary performance on many computer vision tasks.
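As a concrete illustration of the random pixel shuffling used by SinIR above, here is a sketch under our own choices of shuffle ratio and swap scheme (not the released code):

```python
import numpy as np

def random_pixel_shuffle(img, ratio=0.05, seed=0):
    """Swap a random subset of pixel positions in an (H, W, C) image."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    flat = out.reshape(-1, out.shape[-1])        # view of shape (H*W, C)
    n = int(ratio * flat.shape[0])               # number of pixels to displace
    src = rng.integers(0, flat.shape[0], n)
    dst = rng.integers(0, flat.shape[0], n)
    flat[dst], flat[src] = flat[src].copy(), flat[dst].copy()  # swap pixels
    return out
```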
Although the network\nperformance is boosted, transformers often require more computational\nresources, including memory usage and inference complexity. Compared with\nthe existing knowledge distillation approaches, we propose to excavate useful\ninformation from the teacher transformer through the relationship between\nimages and the divided patches. We then explore an efficient fine-grained\nmanifold distillation approach that simultaneously calculates cross-image,\ncross-patch, and randomly selected manifolds in teacher and student models.\nExperimental results conducted on several benchmarks demonstrate the\nsuperiority of the proposed algorithm for distilling portable transformer\nmodels with higher performance. For example, our approach achieves 75.06% Top-1\naccuracy on the ImageNet-1k dataset for training a DeiT-Tiny model, which\noutperforms other ViT distillation methods.\n"} {"abstract": " Adversarial examples mainly exploit changes to input pixels to which humans\nare not sensitive, and arise from the fact that models make decisions based\non uninterpretable features. Interestingly, cognitive science reports that the\nprocess of interpretation behind human classification decisions relies\npredominantly on low spatial frequency components. In this paper, we\ninvestigate the robustness to adversarial perturbations of models enforced\nduring training to leverage information corresponding to different spatial\nfrequency ranges. We show that it is tightly linked to the spatial frequency\ncharacteristics of the data at stake. Indeed, depending on the data set, the\nsame constraint may result in very different levels of robustness (up to 0.41\nadversarial accuracy difference). To explain this phenomenon, we conduct\nseveral experiments to illuminate influential factors such as the level of\nsensitivity to high frequencies, and the transferability of adversarial\nperturbations between original and low-pass filtered inputs.\n"} {"abstract": " In this paper, we propose a simple yet effective crowd counting and\nlocalization network named SCALNet. Unlike most existing works that separate\nthe counting and localization tasks, we consider those tasks as a pixel-wise\ndense prediction problem and integrate them into an end-to-end framework.\nSpecifically, for crowd counting, we adopt a counting head supervised by the\nMean Square Error (MSE) loss. For crowd localization, the key insight is to\nrecognize the keypoint of people, i.e., the center point of heads. We propose a\nlocalization head to distinguish dense crowds, trained with two loss functions,\ni.e., the Negative-Suppressed Focal (NSF) loss and the False-Positive (FP) loss, which\nbalance the positive/negative examples and handle the false-positive\npredictions. Experiments on the recent and large-scale benchmark, NWPU-Crowd,\nshow that our approach outperforms the state-of-the-art methods by more than 5%\nand 10% in the crowd localization and counting tasks, respectively. The\ncode is publicly available at https://github.com/WangyiNTU/SCALNet.\n"} {"abstract": " Harmonic generation in atoms and molecules has reshaped our understanding of\nultrafast phenomena beyond traditional nonlinear optics and has launched\nattosecond physics. Harmonics from solids represent a new frontier, where both\nmajority and minority spin channels contribute to harmonics. This is true even\nin a ferromagnet whose electronic states are equally available to optical\nexcitation.
Here, we demonstrate that harmonics can be generated mostly from\na single spin channel in half-metallic chromium dioxide. An energy gap in the\nminority channel greatly reduces the harmonic generation, so harmonics are\npredominantly emitted from the majority channel, with a small contribution from\nthe minority channel. However, this is only possible when the incident photon\nenergy is well below the energy gap in the minority channel, so all the\ntransitions in the minority channel are virtual. The photon-energy onset\nis determined by the energy of the dipole-allowed transition\nbetween the O-$2p$ and Cr-$3d$ states. Harmonics mainly from a single spin\nchannel can be detected, regardless of laser field strength, as long as the\nphoton energy is below the minority band energy gap. This prediction should be\ntested experimentally.\n"} {"abstract": " Dynamic Time Warping (DTW) is widely used for temporal data processing.\nHowever, existing methods can neither learn the discriminative prototypes of\ndifferent classes nor exploit such prototypes for further analysis. We propose\nDiscriminative Prototype DTW (DP-DTW), a novel method to learn class-specific\ndiscriminative prototypes for temporal recognition tasks. DP-DTW shows superior\nperformance compared to conventional DTWs on time series classification\nbenchmarks. Combined with end-to-end deep learning, DP-DTW can handle\nchallenging weakly supervised action segmentation problems and achieves\nstate-of-the-art results on standard benchmarks. Moreover, detailed reasoning on the\ninput video is enabled by the learned action prototypes. Specifically, an\naction-based video summarization can be obtained by aligning the input sequence\nwith action prototypes.\n"} {"abstract": " Hadronic matrix elements of local four-quark operators play a central role in\nnon-leptonic kaon decays, while vacuum matrix elements involving the same kind\nof operators appear in inclusive dispersion relations, such as those relevant\nin $\\tau$-decay analyses. Using an $SU(3)_L\\otimes SU(3)_R$ decomposition of\nthe operators, we derive generic relations between these matrix elements,\nextending well-known results that link observables in the two different\nsectors. Two relevant phenomenological applications are presented. First, we\ndetermine the electroweak-penguin contribution to the kaon CP-violating ratio\n$\\varepsilon'/\\varepsilon$, using the measured hadronic spectral functions in\n$\\tau$ decay. Second, we fit our $SU(3)$ dynamical parameters to the most\nrecent lattice data on $K\\to\\pi\\pi$ matrix elements. The comparison of this\nnumerical fit with results from previous analytical approaches provides an\ninteresting anatomy of the $\\Delta I = \\frac{1}{2}$ enhancement, confirming old\nsuggestions about its underlying dynamical origin.\n"} {"abstract": " We present a machine learning method to predict extreme hydrologic events\nfrom spatially and temporally varying hydrological and meteorological data. We\nused a timestep reduction technique to reduce the computational and memory\nrequirements and trained a bidirectional LSTM network to predict soil water and\nstream flow from time series data observed and simulated over eighty years in\nthe Wabash River Watershed. We show that our simple model can be trained much\nfaster than complex attention networks such as GeoMAN without sacrificing\naccuracy.
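A minimal sketch of such a bidirectional LSTM regressor in PyTorch (the feature and target counts here are hypothetical placeholders, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Per-timestep regression of hydrologic targets from input time series."""
    def __init__(self, n_features=8, hidden=64, n_targets=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_targets)  # 2x for both directions

    def forward(self, x):            # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out)        # (batch, time, n_targets)

model = BiLSTMRegressor()
pred = model(torch.randn(4, 365, 8))   # e.g. one year of daily inputs
```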
Based on the predicted values of soil water and stream flow, we\npredict the occurrence and severity of extreme hydrologic events such as\ndroughts. We also demonstrate that extreme events can be predicted in\ngeographical locations separate from locations observed during the training\nprocess. This spatially inductive setting enables us to predict extreme events\nin other areas in the US and other parts of the world using our model trained\nwith the Wabash Basin data.\n"} {"abstract": " Electronic and optical properties of doped organic semiconductors are\ndominated by local interactions between donor and acceptor molecules. However,\nwhen such systems are in crystalline form, long-range order competes against\nshort-range couplings. In a first-principles study on three experimentally\nresolved bulk structures of quaterthiophene doped by (fluorinated)\ntetracyanoquinodimethane, we demonstrate the crucial role of long-range\ninteractions in donor/acceptor co-crystals. The band structures of the\ninvestigated materials exhibit direct band-gaps decreasing in size with an\nincreasing amount of F atoms in the acceptors. The valence-band maximum and\nconduction-band minimum are found at the Brillouin zone boundary and the\ncorresponding wave-functions are segregated on donor and acceptor molecules,\nrespectively. With the aid of a tight-binding model, we rationalize that the\nmechanisms responsible for these behaviors, which are ubiquitous in\ndonor/acceptor co-crystals, are driven by long-range interactions. The optical\nresponse of the analyzed co-crystals is highly anisotropic. The absorption\nonset is dominated by an intense resonance corresponding to a charge-transfer\nexcitation. Long-range interactions are again responsible for this behavior,\nwhich enhances the efficiency of the co-crystals for photo-induced charge\nseparation and transport. In addition to this result, which has important\nimplications in the rational design of organic materials for opto-electronics,\nour study clarifies that cluster models, accounting only for local\ninteractions, cannot capture the relevant impact of long-range order in\ndonor/acceptor co-crystals.\n"} {"abstract": " Recently, deep learning-based approaches have achieved impressive performance\nfor autonomous driving. However, end-to-end vision-based methods typically have\nlimited interpretability, making the behaviors of the deep networks difficult\nto explain. Hence, their potential applications could be limited in practice.\nTo address this problem, we propose an interpretable end-to-end vision-based\nmotion planning approach for autonomous driving, referred to as IVMP. Given a\nset of past surrounding-view images, our IVMP first predicts future egocentric\nsemantic maps in bird's-eye-view space, which are then employed to plan\ntrajectories for self-driving vehicles. The predicted future semantic maps not\nonly provide useful interpretable information, but also allow our motion\nplanning module to handle objects with low probability, thus improving the\nsafety of autonomous driving. Moreover, we also develop an optical flow\ndistillation paradigm, which can effectively enhance the network while still\nmaintaining its real-time performance. Extensive experiments on the nuScenes\ndataset and closed-loop simulation show that our IVMP significantly outperforms\nthe state-of-the-art approaches in imitating human drivers with a much higher\nsuccess rate.
Our project page is available at\nhttps://sites.google.com/view/ivmp.\n"} {"abstract": " We develop methods for forming prediction sets in an online setting where the\ndata-generating distribution is allowed to vary over time in an unknown\nfashion. Our framework builds on ideas from conformal inference to provide a\ngeneral wrapper that can be combined with any black-box method that produces\npoint predictions of the unseen label or estimated quantiles of its\ndistribution. While previous conformal inference methods rely on the assumption\nthat the data points are exchangeable, our adaptive approach provably achieves\nthe desired coverage frequency over long time intervals irrespective of the\ntrue data-generating process. We accomplish this by modelling the distribution\nshift as a learning problem in a single parameter whose optimal value is\nvarying over time and must be continuously re-estimated. We test our method,\nadaptive conformal inference, on two real-world datasets and find that its\npredictions are robust to visible and significant distribution shifts.\n"} {"abstract": " We analyze a fully discrete finite element numerical scheme for the\nCahn-Hilliard-Stokes-Darcy system that models two-phase flows in coupled free\nflow and porous media. To avoid a well-known difficulty associated with the\ncoupling between the Cahn-Hilliard equation and the fluid motion, we make use\nof the operator-splitting in the numerical scheme, so that these two solvers\nare decoupled, which in turn would greatly improve the computational\nefficiency. The unique solvability and the energy stability have been proved\nin~\\cite{CHW2017}. In this work, we carry out a detailed convergence analysis\nand error estimate for the fully discrete finite element scheme, so that the\noptimal rate convergence order is established in the energy norm, i.e., in the\n$\\ell^\\infty (0, T; H^1) \\cap \\ell^2 (0, T; H^2)$ norm for the phase variables,\nas well as in the $\\ell^\\infty (0, T; H^1) \\cap \\ell^2 (0, T; H^2)$ norm for\nthe velocity variable. Such an energy norm error estimate leads to a\ncancellation of a nonlinear error term associated with the convection part,\nwhich turns out to be a key step to pass through the analysis. In addition, a\ndiscrete $\\ell^2 (0;T; H^3)$ bound of the numerical solution for the phase\nvariables plays an important role in the error estimate, which is accomplished\nvia a discrete version of the Gagliardo-Nirenberg inequality in the finite element\nsetting.\n"} {"abstract": " In this paper, the Hankel transform of the generalized q-exponential\npolynomial of the first form (q, r)-Whitney numbers of the second kind is\nestablished using the method of Cigler. Consequently, the Hankel transform of\nthe first form (q, r)-Dowling numbers is obtained as a special case.\n"} {"abstract": " In this article, we investigate the diquark-diquark-antiquark type fully-heavy\npentaquark states with the spin-parity $J^P={\\frac{1}{2}}^-$ via the QCD sum\nrules, and obtain the masses $M_{cccc\\bar{c}}=7.93\\pm 0.15\\,\\rm{GeV}$ and\n$M_{bbbb\\bar{b}}=23.91\\pm0.15\\,\\rm{GeV}$. We can search for the fully-heavy\npentaquark states in the $J/\\psi \\Omega_{ccc}$ and $\\Upsilon \\Omega_{bbb}$\ninvariant mass spectrum in the future.\n"} {"abstract": " Optical phenomena associated with extremely localized fields should be\nunderstood with considerations of nonlocal and quantum effects, which pose a\nhurdle to conceptualizing the physics in terms of eigenmodes.
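The adaptive conformal inference recursion described above can be sketched in a few lines; the single-parameter update follows the description in the abstract, while the interval oracle `quantile_interval` is a placeholder we assume the user supplies:

```python
def aci_stream(stream, quantile_interval, target_alpha=0.1, gamma=0.01):
    """Online prediction intervals with the adaptive conformal update.

    stream: iterable of (x, y) pairs arriving over time.
    quantile_interval(x, alpha): black-box returning a (lo, hi) interval.
    """
    alpha_t = target_alpha
    for x, y in stream:
        lo, hi = quantile_interval(x, min(max(alpha_t, 0.0), 1.0))
        err = 0.0 if lo <= y <= hi else 1.0        # miscoverage indicator
        alpha_t += gamma * (target_alpha - err)     # re-estimate the parameter
        yield lo, hi, alpha_t
```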
Here we first\npropose a generalized Lorentz model to describe general nonlocal media under\nthe linear mean-field approximation and formulate source-free Maxwell's equations\nas a linear eigenvalue problem to define the quasinormal modes. Then we\nintroduce an orthonormalization scheme for the modes and establish a canonical\nquasinormal mode framework for general nonlocal media. Explicit formalisms for\nmetals described by the quantum hydrodynamic model and polar dielectrics with\nnonlocal response are exemplified. The framework enables, for the first time,\ndirect modal analysis of the mode transition in the quantum tunneling regime and\nprovides physical insights beyond usual far-field spectroscopic analysis.\nApplied to nonlocal polar dielectrics, the framework also unveils the important\nroles of longitudinal phonon polaritons in optical response.\n"} {"abstract": " Smart contracts are distributed, self-enforcing programs executing on top of\nblockchain networks. They have the potential to revolutionize many industries\nsuch as financial institutes and supply chains. However, smart contracts are\nsubject to code-based vulnerabilities, which casts a shadow on their\napplications. As smart contracts are unpatchable (due to the immutability of\nblockchain), it is essential that smart contracts are guaranteed to be free of\nvulnerabilities. Unfortunately, smart contract languages such as Solidity are\nTuring-complete, which implies that verifying them statically is infeasible.\nThus, alternative approaches must be developed to provide the guarantee. In\nthis work, we develop an approach which automatically transforms smart\ncontracts so that they are provably free of 4 common kinds of vulnerabilities.\nThe key idea is to apply runtime verification in an efficient and provably\ncorrect manner. Experimental results with 5000 smart contracts show that our\napproach incurs minor run-time overhead in terms of time (i.e., 14.79%) and gas\n(i.e., 0.79%).\n"} {"abstract": " We present a structure-preserving discretization of the fundamental spacetime\ngeometric structures of fluid mechanics in the Lagrangian description in 2D and\n3D. Based on this, multisymplectic variational integrators are developed for\nbarotropic and incompressible fluid models, which satisfy a discrete version of\nNoether's theorem. We show how the geometric integrator can handle regular fluid\nmotion in vacuum with free boundaries and constraints such as the impact\nagainst an obstacle of a fluid flowing on a surface. Our approach is applicable\nto a wide range of models including the Boussinesq and shallow water models, by\nappropriate choice of the Lagrangian.\n"} {"abstract": " Modularity of neural networks -- both biological and artificial -- can be\nthought of either structurally or functionally, and the relationship between\nthese is an open question. We show that enforcing structural modularity via\nsparse connectivity between two dense sub-networks which need to communicate to\nsolve the task leads to functional specialization of the sub-networks, but only\nat extreme levels of sparsity. With even a moderate number of interconnections,\nthe sub-networks become functionally entangled. Defining functional\nspecialization is in itself a challenging problem without a universally agreed\nsolution. To address this, we designed three different measures of\nspecialization (based on weight masks, retraining and correlation) and found\nthem to qualitatively agree. Our results have implications in both neuroscience\nand machine learning.
For neuroscience, it shows that we cannot conclude that\nthere is functional modularity simply by observing moderate levels of\nstructural modularity: knowing the brain's connectome is not sufficient for\nunderstanding how it breaks down into functional modules. For machine learning,\nusing structure to promote functional modularity -- which may be important for\nrobustness and generalization -- may require extremely narrow bottlenecks\nbetween modules.\n"} {"abstract": " Being able to spot defective parts is a critical component in large-scale\nindustrial manufacturing. A particular challenge that we address in this work\nis the cold-start problem: fit a model using nominal (non-defective) example\nimages only. While handcrafted solutions per class are possible, the goal is to\nbuild systems that work well simultaneously on many different tasks\nautomatically. The best performing approaches combine embeddings from ImageNet\nmodels with an outlier detection model. In this paper, we extend this line\nof work and propose PatchCore, which uses a maximally representative memory\nbank of nominal patch-features. PatchCore offers competitive inference times\nwhile achieving state-of-the-art performance for both detection and\nlocalization. On the standard dataset MVTec AD, PatchCore achieves an\nimage-level anomaly detection AUROC score of $99.1\\%$, more than halving the\nerror compared to the next best competitor. We further report competitive\nresults on two additional datasets and also find competitive results in the\nfew-samples regime.\n"} {"abstract": " Novel many-body and topological electronic phases can be created in\nassemblies of interacting spins coupled to a superconductor, such as\none-dimensional topological superconductors with Majorana zero modes (MZMs) at\ntheir ends. Understanding and controlling interactions between spins and the\nemergent band structure of the in-gap Yu-Shiba-Rusinov (YSR) states they induce\nin a superconductor are fundamental for engineering such phases. Here, by\nprecisely positioning magnetic adatoms with a scanning tunneling microscope\n(STM), we demonstrate both the tunability of the exchange interaction between spins\nand precise control of the hybridization of YSR states they induce on the\nsurface of a bismuth (Bi) thin film that is made superconducting with the\nproximity effect. In this platform, depending on the separation of spins, the\ninterplay between Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, spin-orbit\ncoupling, and surface magnetic anisotropy stabilizes different types of spin\nalignments. Using high-resolution STM spectroscopy at millikelvin temperatures,\nwe probe these spin alignments through monitoring the spin-induced YSR states\nand their energy splitting. Such measurements also reveal a quantum phase\ntransition between the ground states with different electron number parity for\na pair of spins in a superconductor tuned by their separation. Experiments on\nlarger assemblies show that spin-spin interactions can be mediated in a\nsuperconductor over long distances. Our results show that controlling\nhybridization of the YSR states in this platform provides the possibility of\nengineering the band structure of such states for creating topological phases.\n"} {"abstract": " We describe a numerical method that simulates the interaction of the helium\natom with sequences of femtosecond and attosecond light pulses.
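A stripped-down sketch of PatchCore-style memory-bank scoring described above (feature extraction and coreset subsampling are omitted; the shapes and the max-over-patches rule are our simplification of the method):

```python
import numpy as np

def image_anomaly_score(test_patches, memory_bank):
    """Score one test image against a bank of nominal patch features.

    test_patches: (P, d) patch features of the test image.
    memory_bank:  (N, d) patch features collected from nominal images.
    """
    # Squared distances from every test patch to every bank entry.
    d2 = ((test_patches[:, None, :] - memory_bank[None, :, :]) ** 2).sum(-1)
    nn_dist = np.sqrt(d2.min(axis=1))   # each patch's nearest-neighbour distance
    return nn_dist.max()                # most anomalous patch drives the score

rng = np.random.default_rng(0)
bank = rng.standard_normal((1000, 64))          # toy nominal features
print(image_anomaly_score(rng.standard_normal((49, 64)), bank))
```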
The method,\nwhich is based on the close-coupling expansion of the electronic configuration\nspace in a B-spline bipolar spherical harmonic basis, can accurately reproduce\nthe excitation and single ionization of the atom, within the electrostatic\napproximation. The time-dependent Schr\\\"odinger equation is integrated with a\nsequence of second-order split-exponential unitary propagators. The asymptotic\nchannel-, energy- and angularly-resolved photoelectron distributions are\ncomputed by projecting the wavepacket at the end of the simulation on the\nmultichannel scattering states of the atom, which are separately computed\nwithin the same close-coupling basis. This method is applied to simulate the\npump-probe ionization of helium in the vicinity of the $2s/2p$ excitation\nthreshold of the He$^+$ ion. This work confirms the qualitative conclusions of\none of our earlier publications [L Argenti and E Lindroth, Phys. Rev. Lett.\n{\\bf 105}, 53002 (2010)], in which we demonstrated the control of the $2s/2p$\nionization branching-ratio. Here, we take those calculations to convergence and\nshow how correlation brings the periodic modulation of the branching ratios in\nalmost phase opposition. The residual total ionization probability to the\n$2s+2p$ channels is dominated by the beating between the $sp_{2,3}^+$ and the\n$sp_{2,4}^+$ doubly excited states, which is consistent with the modulation of\nthe complementary signal in the $1s$ channel, measured in 2010 by Chang and\nco-workers~[S Gilbertson~\\emph{et al.}, Phys. Rev. Lett. {\\bf 105}, 263003\n(2010)].\n"} {"abstract": " Over the past decade, unprecedented progress in the development of neural\nnetworks has influenced dozens of different industries, including weed recognition\nin the agro-industrial sector. The use of neural networks in agro-industrial\nactivity for the task of recognizing cultivated crops is a new direction. The\nabsence of any standards significantly complicates the understanding of the\nreal situation of the use of neural networks in the agricultural sector. The\nmanuscript presents a complete analysis of research over the past 10 years\non the use of neural networks for the classification and tracking of weeds. In\nparticular, an analysis of the results of using various neural network\nalgorithms for the classification and tracking task is presented. As a result, we\npresent recommendations for the use of neural networks in the tasks of\nrecognizing a cultivated object and weeds. Using this standard can significantly\nimprove the quality of research on this topic and simplify the analysis and\nunderstanding of any paper.\n"} {"abstract": " The formation of $\\alpha$ particles on the nuclear surface has been a fundamental\nproblem since the early age of nuclear physics. It strongly affects the\n$\\alpha$-decay lifetimes of heavy and superheavy elements, the level schemes of light\nnuclei, and the synthesis of the elements in stars. However, the\n$\\alpha$-particle formation in medium-mass nuclei has been poorly known despite\nits importance. Here, based on the $^{48}{\\rm Ti}(p,p\\alpha)^{44}{\\rm Ca}$\nreaction analysis, we report that the $\\alpha$-particle formation in the\nmedium-mass nucleus $^{48}{\\rm Ti}$ is much stronger than that expected from a\nmean-field approximation, and the estimated average distance between the $\\alpha$\nparticle and the residue is as large as 4.5 fm.
This new result poses a\nchallenge for the description of four-nucleon correlations by microscopic nuclear\nmodels.\n"} {"abstract": " Giant spin-splitting was recently predicted in collinear antiferromagnetic\nmaterials with a specific class of magnetic space group. In this work, we have\npredicted a two-dimensional (2D) antiferromagnetic Weyl semimetal (WS), CrO,\nwith a large spin-split band structure, spin-momentum-locked transport properties\nand a high N\\'eel temperature. It has two pairs of spin-polarized Weyl points at\nthe Fermi level. By manipulating the position of the Weyl points with strain,\nfour different antiferromagnetic spintronic states can be achieved: WSs with\ntwo spin-polarized transport channels (STCs), WSs with a single STC,\nsemiconductors with two STCs, and semiconductors with a single STC. Based on\nthese properties, a new avenue in spintronics with 2D collinear\nantiferromagnets is proposed.\n"} {"abstract": " Bangla being the seventh most spoken language in the world, the use of the Bangla\nlanguage online has increased in recent times. Hence, it has become very\nimportant to analyze Bangla text data to maintain a safe and harassment-free\nonline place. The data that has been made accessible in this article has been\ngathered and marked from the comments of people in public posts by celebrities,\ngovernment officials, and athletes on Facebook. The total number of collected\ncomments is 44,001. The dataset is compiled with the aim of developing the\nability of machines to differentiate whether a comment is a bully expression or\nnot with the help of Natural Language Processing and to what extent it is\nimproper if it is an inappropriate comment. The comments are labeled with\ndifferent categories of harassment. Exploratory analysis from different\nperspectives is also included in this paper to provide a detailed overview. Due to\nthe scarcity of categorized Bengali-language comment data, this\ndataset can play a significant role in research on detecting bully words,\nidentifying inappropriate comments, detecting different categories of Bengali\nbullies, etc. The dataset is publicly available at\nhttps://data.mendeley.com/datasets/9xjx8twk8p.\n"} {"abstract": " Let $\\Omega \\Subset \\mathbb R^n$, $f \\in C^1(\\mathbb R^{N\\times n})$ and\n$g\\in C^1(\\mathbb R^N)$, where $N,n \\in \\mathbb N$. We study the minimisation\nproblem of finding $u \\in W^{1,\\infty}_0(\\Omega;\\mathbb R^N)$ that satisfies \\[\n\\big\\| f(\\mathrm D u) \\big\\|_{L^\\infty(\\Omega)} \\! = \\inf \\Big\\{\\big\\|\nf(\\mathrm D v) \\big\\|_{L^\\infty(\\Omega)} \\! : \\ v \\! \\in\nW^{1,\\infty}_0(\\Omega;\\mathbb R^N), \\, \\| g(v) \\|_{L^\\infty(\\Omega)}\\!\n=1\\Big\\}, \\] under natural assumptions on $f,g$. This includes the\n$\\infty$-eigenvalue problem as a special case. Herein we prove existence of a\nminimiser $u_\\infty$ with extra properties, derived as the limit of minimisers\nof approximating constrained $L^p$ problems as $p\\to \\infty$. A central\ncontribution and novelty of this work is that $u_\\infty$ is shown to solve a\ndivergence PDE with measure coefficients, whose leading term is a divergence\ncounterpart equation of the non-divergence $\\infty$-Laplacian.
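One natural reading of the approximating problems (our guess at the precise normalisation, which the abstract does not spell out) is the following:

```latex
% Sketch (our notation): constrained L^p minimisers approximating the
% L^infty problem; the exact constraint used in the paper may differ.
\[
u_p \in \arg\min\Big\{ \| f(\mathrm{D} v) \|_{L^p(\Omega)} :\;
v \in W^{1,\infty}_0(\Omega;\mathbb{R}^N),\; \| g(v) \|_{L^p(\Omega)} = 1 \Big\},
\qquad u_p \to u_\infty \ \text{ as } p \to \infty .
\]
```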
Our results are\nnew even in the scalar case of the $\\infty$-eigenvalue problem.\n"} {"abstract": " This paper presents the detailed simulation of a double-pixel structure for\ncharged particle detection based on the 3D-trench silicon sensor developed for\nthe TIMESPOT project and a comparison of the simulation results with\nmeasurements performed at the $\\pi$-M1 beam at the PSI laboratory. The simulation is\nbased on the combined use of several software tools (TCAD, GEANT4, TCoDe and\nTFBoost) which allow one to fully design and simulate the device physics response\nin very short computational time, O(1-100 s) per simulated signal, by\nexploiting parallel computation using single- or multi-thread processors. This\nallowed us to produce large samples of simulated signals, perform detailed studies\nof the sensor characteristics and make precise comparisons with experimental\nresults.\n"} {"abstract": " We investigate the possibility that radio-bright active galactic nuclei (AGN)\nare responsible for the TeV--PeV neutrinos detected by IceCube. We use an\nunbinned maximum-likelihood-ratio method, 10 years of IceCube muon-track data,\nand 3388 radio-bright AGN selected from the Radio Fundamental Catalog. None of\nthe AGN in the catalog have a large global significance. The two most\nsignificant sources have global significance of $\\simeq$ 1.5$\\sigma$ and\n0.8$\\sigma$, though 4.1$\\sigma$ and 3.8$\\sigma$ local significance. Our\nstacking analyses show no significant correlation between the whole catalog and\nIceCube neutrinos. We infer from the null search that this catalog can account\nfor at most 30\\% (95\\% CL) of the diffuse astrophysical neutrino flux measured\nby IceCube. Moreover, our results disagree with recent work that claimed a\n4.1$\\sigma$ detection of neutrinos from the sources in this catalog, and we\ndiscuss the reasons for the difference.\n"} {"abstract": " We propose a generalization of the coherent anomaly method to extract the\ncritical exponents of a phase transition occurring in the steady state of an\nopen quantum many-body system. The method, originally developed by Suzuki [J.\nPhys. Soc. Jpn. {\\bf 55}, 4205 (1986)] for equilibrium systems, is based on the\nscaling properties of the singularity in the response functions determined\nthrough cluster mean-field calculations. We apply this method to the\ndissipative transverse-field Ising model and the dissipative XYZ model in two\ndimensions, obtaining convergent results already with small clusters.\n"} {"abstract": " As a step towards quantization of Higher Spin Gravities, we construct the\npresymplectic AKSZ sigma-model for $4d$ Higher Spin Gravity which is the AdS/CFT\ndual of Chern-Simons vector models. It is shown that the presymplectic\nstructure leads to the correct quantum commutator of higher spin fields and to\nthe correct algebra of the global higher spin symmetry currents. The\npresymplectic AKSZ model is proved to be unique, it depends on two coupling\nconstants in accordance with the AdS/CFT duality, and it passes some simple\nchecks of interactions.\n"} {"abstract": " Electrification, intelligence, and networking are the most important future\ndevelopment directions for automobiles. Intelligent electric vehicles have shown\ngreat potential to improve traffic mobility and reduce emissions, especially at\nunsignalized intersections. Previous research has shown that vehicle passing\norder is the key factor in traffic mobility improvement.
In this paper, we\npropose a graph-based cooperation method to formalize the conflict-free\nscheduling problem at unsignalized intersections. Firstly, conflict directed\ngraphs and coexisting undirected graphs are built to describe the conflict\nrelationship of the vehicles. Then, two graph-based methods are introduced to\nsolve for the vehicle passing order. One method is an optimized depth-first\nspanning tree method which aims to find the locally optimal passing order for\neach vehicle. The other method is a maximum matching algorithm that solves for the\nglobally optimal passing order. The computational complexity of both methods is also\nderived. Numerical simulation results demonstrate the effectiveness of the\nproposed algorithms.\n"} {"abstract": " As an indispensable part of modern human-computer interaction systems, speech\nsynthesis technology helps users obtain the output of intelligent machines more\neasily and intuitively, and has thus attracted more and more attention. Due to the\nlimitations of high complexity and low efficiency of traditional speech\nsynthesis technology, the current research focus is the deep learning-based\nend-to-end speech synthesis technology, which has more powerful modeling\nability and a simpler pipeline. It mainly consists of three modules: text\nfront-end, acoustic model, and vocoder. This paper reviews the research status\nof these three parts, and classifies and compares various methods according to\ntheir emphasis. Moreover, this paper also summarizes the open-source speech\ncorpora of English, Chinese and other languages that can be used for speech\nsynthesis tasks, and introduces some commonly used subjective and objective\nspeech quality evaluation methods. Finally, some attractive future research\ndirections are pointed out.\n"} {"abstract": " The internet advertising market is a multi-billion dollar industry, in which\nadvertisers buy thousands of ad placements every day by repeatedly\nparticipating in auctions. In recent years, the industry has shifted to\nfirst-price auctions as the preferred paradigm for selling advertising slots.\nAnother important and ubiquitous feature of these auctions is the presence of\ncampaign budgets, which specify the maximum amount the advertisers are willing\nto pay over a specified time period. In this paper, we present a new model to\nstudy the equilibrium bidding strategies in first-price auctions for\nadvertisers who satisfy budget constraints on average. Our model dispenses with\nthe common, yet unrealistic assumption that advertisers' values are independent\nand instead assumes a contextual model in which advertisers determine their\nvalues using a common feature vector. We show the existence of a natural\nvalue-pacing-based Bayes-Nash equilibrium under very mild assumptions, and\nstudy its structural properties. Furthermore, we generalize the existence\nresult to standard auctions and prove a revenue equivalence showing that all\nstandard auctions yield the same revenue even in the presence of budget\nconstraints.\n"} {"abstract": " We study the dynamics of a ferrofluid thin film confined in a Hele-Shaw cell,\nand subjected to a tilted nonuniform magnetic field. It is shown that the\ninterface between the ferrofluid and an inviscid outer fluid (air) supports\ntraveling waves, governed by a novel modified Kuramoto--Sivashinsky-type\nequation derived under the long-wave approximation. The balance between energy\nproduction and dissipation in this long-wave equation allows for the existence\nof dissipative solitons.
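Returning to the intersection-scheduling method above: its global step can be sketched with networkx's maximum matching on a toy "coexisting" undirected graph, whose edges join vehicles that may cross simultaneously (the compatibility table here is our invention, not the paper's intersection model):

```python
import networkx as nx

# Toy coexisting graph: an edge means the two vehicles' paths do not conflict.
coexist = nx.Graph()
coexist.add_edges_from([("v1", "v3"), ("v2", "v4"), ("v1", "v4")])

# A maximum matching groups as many compatible vehicle pairs as possible,
# echoing the globally optimal passing-order computation described above.
pairs = nx.max_weight_matching(coexist, maxcardinality=True)
print(pairs)  # e.g. {('v1', 'v3'), ('v2', 'v4')}
```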
These permanent traveling waves' propagation velocity\nand profile shape are shown to be tunable via the external magnetic field. A\nmultiple-scale analysis is performed to obtain the correction to the linear\nprediction of the propagation velocity, and to reveal how the nonlinearity\narrests the linear instability. The traveling periodic interfacial waves\ndiscovered are identified as fixed points in an energy phase plane. It is shown\nthat transitions between states (wave profiles) occur. These transitions are\nexplained via the spectral stability of the traveling waves. Interestingly,\nmultiperiodic waves, which are a non-integrable analog of the double cnoidal\nwave, are also found to propagate under the model long-wave equation. These\nmultiperiodic solutions are investigated numerically, and they are found to be\nlong-lived transients, but ultimately transition abruptly to one of the stable\nperiodic states identified.\n"} {"abstract": " The origin of the gamma-ray emission of the blazar Mrk 421 is still a matter\nof debate. We used 5.5 years of unbiased observing campaign data, obtained\nusing the FACT telescope and the Fermi LAT detector at TeV and GeV energies,\nthe longest and densest so far, together with contemporaneous multi-wavelength\nobservations, to characterise the variability of Mrk 421 and to constrain the\nunderlying physical mechanisms. We studied and correlated light curves obtained\nby ten different instruments and found two significant results. The TeV and\nX-ray light curves are very well correlated with a lag of <0.6 days. The GeV\nand radio (15 GHz band) light curves are widely and strongly correlated.\nVariations of the GeV light curve lead those in the radio. Lepto-hadronic and\npurely hadronic models in the frame of shock acceleration predict proton\nacceleration or cooling timescales that are ruled out by the short variability\ntimescales and delays observed in Mrk 421. Instead, the observations match the\npredictions of leptonic models.\n"} {"abstract": " A pair-density wave state has been suggested to exist in underdoped cuprate\nsuperconductors, with some supporting experimental evidence emerging over the\npast few years from scanning tunneling spectroscopy. Several studies have also\nlinked the observed quantum oscillations in these systems to a reconstruction\nof the Fermi surface by a pair-density wave. Here, we show, using semiclassical\nanalysis and numerical calculations, that a Fermi pocket created by first-order\nscattering from a pair-density wave cannot induce such oscillations. In\ncontrast, pockets resulting from second-order scattering can cause\noscillations. We consider the effects of a finite pair-density wave correlation\nlength on the signal, and demonstrate that it is only weakly sensitive to\ndisorder in the form of $\\pi$-phase slips. Finally, we discuss our results in\nthe context of the cuprates and show that a bidirectional pair-density wave may\nproduce the observed oscillation frequencies.\n"} {"abstract": " Federated Learning (FL) is emerging as a promising paradigm of\nprivacy-preserving machine learning, which trains an algorithm across multiple\nclients without exchanging their data samples. Recent works highlighted several\nprivacy and robustness weaknesses in FL and addressed these concerns using\nlocal differential privacy (LDP) and some well-studied methods used in\nconventional ML, separately. However, it is still not clear how LDP affects\nadversarial robustness in FL.
To fill this gap, this work attempts to develop a comprehensive understanding of the effects of LDP on adversarial robustness in FL. Clarifying the interplay is significant, since this is the first step towards a principled design of private and robust FL systems. We show, using theoretical analysis and empirical verification, that local differential privacy has both positive and negative effects on adversarial robustness.\n"} {"abstract": " We define the flow group of any component of any stratum of rooted abelian or quadratic differentials (those marked with a horizontal separatrix) to be the group generated by almost-flow loops. We prove that the flow group is equal to the fundamental group of the component. As a corollary, we show that the plus and minus modular Rauzy--Veech groups are finite-index subgroups of their ambient modular monodromy groups. This partially answers a question of Yoccoz.\n Using this, and recent advances in algebraic hulls and Zariski closures of monodromy groups, we prove that the Rauzy--Veech groups are Zariski dense in their ambient symplectic groups. Density, in turn, implies the simplicity of the plus and minus Lyapunov spectra of any component of any stratum of quadratic differentials. Thus, we establish the Kontsevich--Zorich conjecture.\n"} {"abstract": " By using the Boole summation formula, we obtain asymptotic expansions for the first and higher order derivatives of the alternating Hurwitz zeta function $$\zeta_{E}(z,q)=\sum_{n=0}^\infty\frac{(-1)^{n}}{(n+q)^z}$$ with respect to its first argument $$\zeta_{E}^{(m)}(z,q)\equiv\frac{\partial^m}{\partial z^m}\zeta_E(z,q).$$\n"} {"abstract": " We show that every finite semilattice can be represented as an atomized semilattice, an algebraic structure with additional elements (atoms) that extend the semilattice's partial order. Each atom maps to one subdirectly irreducible component, and the set of atoms forms a hypergraph that fully defines the semilattice. An atomization always exists and is unique up to "redundant atoms". Atomized semilattices are representations that can be used as computational tools for building semilattice models from sentences, as well as for building their subalgebras and products. Atomized semilattices can be applied to machine learning and to the study of semantic embeddings into algebras with idempotent operators.\n"} {"abstract": " The recent emergence of machine-learning-based generative models for speech suggests that a significant reduction in bit rate for speech codecs is possible. However, the performance of generative models deteriorates significantly with the distortions present in real-world input signals. We argue that this deterioration is due to the sensitivity of the maximum likelihood criterion to outliers and the ineffectiveness of modeling a sum of independent signals with a single autoregressive model. We introduce predictive-variance regularization to reduce the sensitivity to outliers, resulting in a significant increase in performance. We show that noise reduction to remove unwanted signals can significantly increase performance. We provide extensive subjective performance evaluations that show that our system based on generative modeling provides state-of-the-art coding performance at 3 kb/s for real-world speech signals at reasonable computational complexity.\n"} {"abstract": " We consider the problem of estimating an object's physical properties, such as mass, friction, and elasticity, directly from video sequences.
Such a system identification problem is fundamentally ill-posed due to the loss of information during image formation. Current solutions require precise 3D labels, which are labor-intensive to gather and infeasible to create for many systems such as deformable solids or cloth. We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering to jointly model the evolution of scene dynamics and image formation. This novel combination enables backpropagation from pixels in a video sequence through to the underlying physical attributes that generated them. Moreover, our unified computation graph -- spanning from the dynamics and through the rendering process -- enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision, while obtaining performance competitive to or better than techniques that rely on precise 3D labels.\n"} {"abstract": " In this paper, we study quasi post-critically finite degenerations for rational maps. We construct limits for such degenerations as geometrically finite rational maps on a finite tree of Riemann spheres. We prove the boundedness for such degenerations of hyperbolic rational maps with Sierpinski carpet Julia set and give criteria for the convergence for quasi-Blaschke products $\mathcal{QB}_d$, making progress towards the analogues of Thurston's compactness theorem for acylindrical $3$-manifolds and the double limit theorem for quasi-Fuchsian groups in complex dynamics. In the appendix, we apply such convergence results to show the existence of certain polynomial matings.\n"} {"abstract": " Optical wireless communications (OWCs) have been recognized as a candidate enabler of next-generation in-body nano-scale networks and implants. The development of an accurate channel model capable of accommodating the particularities of different types of tissues is expected to boost the design of optimized communication protocols for such applications. Motivated by this, this paper focuses on presenting a general pathloss model for in-body OWCs. In particular, we use experimental measurements in order to extract analytical expressions for the absorption coefficients of the five main tissue constituents, namely oxygenated and de-oxygenated blood, water, fat, and melanin. Building upon these expressions, we derive a general formula for the absorption coefficient evaluation of any biological tissue. To verify the validity of this formula, we compute the absorption coefficients of complex tissues and compare them against respective experimental results reported by independent research works. Interestingly, we observe that the analytical formula has high accuracy and is capable of modeling the pathloss and, therefore, the penetration depth in complex tissues.\n"} {"abstract": " We theoretically analyze the possibility of electromagnetic wave emission due to electron transitions between spin subbands in a ferromagnet. Different mechanisms of such spin-flip transitions are considered. One mechanism is the electron transitions caused by the magnetic field of the wave. Another mechanism is due to the Rashba spin-orbit interaction. While the two aforementioned mechanisms exist in a homogeneously magnetized ferromagnet, two other mechanisms exist in a non-collinearly magnetized medium.
The first mechanism is known and is due to the dependence of the exchange interaction constant on the quasimomentum of the conduction electrons. The second exists in any non-collinearly magnetized medium. We study these mechanisms in a non-collinear ferromagnet with a helicoidal magnetization distribution. Estimates of the electron transition probabilities due to the different mechanisms are made for realistic parameters, and we compare the mechanisms. We also estimate the radiation power and threshold current in a simple model in which spin is injected into the ferromagnet by a spin-polarized electric current through a tunnel barrier.\n"} {"abstract": " We consider the sharp interface limit for the scalar-valued and vector-valued Allen-Cahn equation with homogeneous Neumann boundary condition in a bounded smooth domain $\Omega$ of arbitrary dimension $N\geq 2$ in the situation when a two-phase diffuse interface has developed and intersects the boundary $\partial\Omega$. The limit problem is mean curvature flow with $90^\circ$-contact angle, and we show convergence in strong norms for well-prepared initial data as long as a smooth solution to the limit problem exists. To this end we assume that the limit problem has a smooth solution on $[0,T]$ for some time $T>0$. Based on the latter we construct suitable curvilinear coordinates and set up an asymptotic expansion for the scalar-valued and the vector-valued Allen-Cahn equation. Finally, we prove a spectral estimate for the linearized Allen-Cahn operator in both cases in order to estimate the difference of the exact and approximate solutions with a Gronwall-type argument.\n"} {"abstract": " We prove that the unique possible flow in an Alexandroff $T_{0}$-space is the trivial one. By way of motivation, we relate Alexandroff spaces with topological hyperspaces.\n"} {"abstract": " Transformer models have demonstrated superior performance in natural language processing. The dot-product self-attention in Transformer allows us to model interactions between words. However, this modeling comes with significant computational overhead. In this work, we revisit the memory-compute trade-off associated with Transformer, particularly multi-head attention, and show a memory-heavy but significantly more compute-efficient alternative to Transformer. Our proposal, denoted as PairConnect, a multilayer perceptron (MLP), models the pairwise interaction between words by explicit pairwise word embeddings. As a result, PairConnect replaces the self-attention dot product with a simple embedding lookup. We show mathematically that despite being an MLP, our compute-efficient PairConnect is strictly more expressive than Transformer. Our experiment on language modeling tasks suggests that PairConnect could achieve comparable results with Transformer while reducing the computational cost associated with inference significantly.\n"} {"abstract": " Multi-agent collision-free trajectory planning and control subject to different goal requirements and system dynamics has been extensively studied, and is gaining recent attention in the realm of machine and reinforcement learning. However, in particular when using a large number of agents, constructing a least-restrictive collision avoidance policy is of utmost importance for both classical and learning-based methods.
In this paper, we propose a Least-Restrictive Collision Avoidance Module (LR-CAM) that evaluates the safety of multi-agent systems and takes over control only when needed to prevent collisions. The LR-CAM is a single policy that can be wrapped around the policies of all agents in a multi-agent system. It allows each agent to pursue any objective as long as it is safe to do so. The benefit of the proposed least-restrictive policy is that it only interrupts and overrules the default controller in case of an upcoming inevitable danger. We use a Long Short-Term Memory (LSTM) based Variational Auto-Encoder (VAE) to enable the LR-CAM to account for a varying number of agents in the environment. Moreover, we propose an off-policy meta-reinforcement learning framework with a novel reward function based on a Hamilton-Jacobi value function to train the LR-CAM. The proposed method is fully meta-trained through a ROS based simulation and tested on a real multi-agent system. Our results show that LR-CAM outperforms the classical least-restrictive baseline by 30 percent. In addition, we show that even if only a subset of agents in a multi-agent system use LR-CAM, the success rate of all agents will increase significantly.\n"} {"abstract": " In this article we give two different proofs of the fact that, for every positive integer $n$, there exist relatively prime positive integers $a,b$ such that $a^2+ab+b^2=7^n$.\n"} {"abstract": " We theoretically analyze the typical learning performance of $\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of $\ell_1$-LinR is obtained. Remarkably, despite the model misspecification, $\ell_1$-LinR is model selection consistent with the same order of sample complexity as $\ell_{1}$-regularized logistic regression ($\ell_1$-LogR), i.e., $M=\mathcal{O}\left(\log N\right)$, where $N$ is the number of variables of the Ising model. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of $\ell_1$-LinR for moderate $M, N$, such as precision and recall. Simulations show a fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable for precisely characterizing the typical learning performances of a wide class of $\ell_{1}$-regularized $M$-estimators including $\ell_1$-LogR and interaction screening.\n"} {"abstract": " We consider the recent surge of information on the potential benefits of acid-suppression drugs in the context of COVID-19, with an eye on the variability (and confusion) across the reported findings--at least as regards the popular antacid famotidine. The inconsistencies reflect contradictory conclusions from independent clinical-based studies that took roughly similar approaches, in terms of experimental design (retrospective, cohort-based, etc.) and statistical analyses (propensity-score matching and stratification, etc.). The confusion has significant ramifications in choosing therapeutic interventions: e.g., do potential benefits of famotidine indicate its use in a particular COVID-19 case?
Beyond this pressing therapeutic issue, conflicting information on famotidine must be resolved before its integration in ontological and knowledge graph-based frameworks, which in turn are useful in drug repurposing efforts. To begin systematically structuring the rapidly accumulating information, in the hopes of clarifying and reconciling the discrepancies, we consider the contradictory information along three proposed 'axes': (1) a context-of-disease axis, (2) a degree-of-[therapeutic]-benefit axis, and (3) a mechanism-of-action axis. We suspect that incongruencies in how these axes have been (implicitly) treated in past studies have led to the contradictory indications for famotidine and COVID-19. We also trace the evolution of information on acid-suppression agents as regards the transmission, severity, and mortality of COVID-19, given the many literature reports that have accumulated. By grouping the studies conceptually and thematically, we identify three eras in the progression of our understanding of famotidine and COVID-19. Harmonizing these findings is a key goal for both clinical standards-of-care (COVID and beyond) as well as ontological and knowledge graph-based approaches.\n"} {"abstract": " We consider the asymmetric simple exclusion process (ASEP) with forward hopping rate 1, backward hopping rate q and periodic boundary conditions. We show that the Bethe equations of ASEP can be decoupled, at all orders in perturbation in the variable q, by introducing a formal Laurent series mapping the Bethe roots of the totally asymmetric case q=0 (TASEP) to the Bethe roots of ASEP. The probability of the height for ASEP is then written as a single contour integral on the Riemann surface on which symmetric functions of TASEP Bethe roots live.\n"} {"abstract": " In this study, we investigated a method allowing the determination of the femur bone surface as well as its mechanical axis from some easy-to-identify bony landmarks. The reconstruction of the whole femur is performed from these landmarks using a Statistical Shape Model (SSM). The aim of this research is therefore to assess the impact of the number, the position, and the accuracy of the landmarks on the reconstruction of the femur and the determination of its related mechanical axis, an important clinical parameter to consider for lower limb analysis. Two statistical femur models were created from our in-house dataset and a publicly available dataset. Both were evaluated in terms of average point-to-point surface distance error and through the mechanical axis of the femur. Furthermore, the clinical impact of using landmarks on the skin in place of bony landmarks is investigated. The proximal femurs predicted from bony landmarks were more accurate compared to on-skin landmarks, while both had less than 3.5 degrees of mechanical axis angle deviation error. The results regarding the non-invasive determination of the mechanical axis are very encouraging and could open interesting clinical perspectives for the analysis of the lower limb, either for orthopedics or functional rehabilitation.\n"} {"abstract": " Several novel statistical methods have been developed to estimate large integrated volatility matrices based on high-frequency financial data. To investigate their asymptotic behaviors, these methods require a sub-Gaussian or finite high-order moment assumption for observed log-returns, which cannot account for the heavy tail phenomenon of stock returns.
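The failure of such moment assumptions can be illustrated with a quick simulation (illustrative only; Student-t returns here are a stand-in for real log-returns):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200_000
    samples = {
        "gaussian": rng.standard_normal(n),        # thin tails
        "t(5)": rng.standard_t(df=5, size=n),      # finite 4th moment, excess kurtosis 6
        "t(3)": rng.standard_t(df=3, size=n),      # infinite 4th moment
    }
    for name, x in samples.items():
        # Sample excess kurtosis: stable for gaussian/t(5), large and erratic for t(3).
        print(name, stats.kurtosis(x))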
Recently, a robust estimator was developed to handle heavy-tailed distributions under some bounded fourth-moment assumption. However, we often observe that log-returns have heavier tail distributions than implied by a finite fourth moment and that the degrees of heaviness of the tails are heterogeneous across assets and time periods. In this paper, to deal with heterogeneous heavy-tailed distributions, we develop an adaptive robust integrated volatility estimator that employs pre-averaging and truncation schemes based on jump-diffusion processes. We call this an adaptive robust pre-averaging realized volatility (ARP) estimator. We show that the ARP estimator has a sub-Weibull tail concentration with only finite 2$\alpha$-th moments for any $\alpha>1$. In addition, we establish matching upper and lower bounds to show that the ARP estimation procedure is optimal. To estimate large integrated volatility matrices using the approximate factor model, the ARP estimator is further regularized using the principal orthogonal complement thresholding (POET) method. A numerical study is conducted to check the finite sample performance of the ARP estimator.\n"} {"abstract": " Context. Turbulent transport in stellar radiative zones is a key ingredient of stellar evolution theory, but the anisotropy of the transport due to the stable stratification and the rotation of these regions is poorly understood. The assumption of shellular rotation, which is a cornerstone of the so-called rotational mixing, relies on an efficient horizontal transport. However, this transport is included in many stellar evolution codes through phenomenological models that have never been tested.\n Aims. We investigate the impact of horizontal shear on the anisotropy of turbulent transport.\n Methods. We used a relaxation approximation (also known as the $\tau$ approximation) to describe the anisotropising effect of stratification, rotation, and shear on a background turbulent flow by computing velocity correlations.\n Results. We obtain new theoretical scalings for velocity correlations that include the effect of horizontal shear. These scalings show an enhancement of turbulent motions, which would lead to a more efficient transport of chemicals and angular momentum, in better agreement with helio- and asteroseismic observations of rotation in the whole Hertzsprung-Russell diagram. Moreover, we propose a new choice for the non-linear time used in the relaxation approximation, which characterises the source of the turbulence.\n Conclusions. For the first time, we describe the effect of stratification, rotation, and vertical and horizontal shear on the anisotropy of turbulent transport in stellar radiative zones. The new prescriptions need to be implemented in stellar evolution calculations. To do so, it may be necessary to implement non-diffusive transport.\n"} {"abstract": " Industrial robots can solve very complex tasks in controlled environments, but modern applications require robots able to operate in unpredictable surroundings as well. An increasingly popular reactive policy architecture in robotics is Behavior Trees, but as with other architectures, programming time still drives cost and limits flexibility. There are two main branches of algorithms to generate policies automatically: automated planning and machine learning, both with their own drawbacks.
We propose a method for generating Behavior Trees using a Genetic Programming algorithm, combining the two branches by taking the result of an automated planner and inserting it into the population. Experimental results confirm that the proposed method of combining planning and learning performs well on a variety of robotic assembly problems and outperforms both of the base methods used separately. We also show that this type of high-level learning of Behavior Trees can be transferred to a real system without further training.\n"} {"abstract": " Kernel segmentation aims at partitioning a data sequence into several non-overlapping segments that may have nonlinear and complex structures. In general, it is formulated as a discrete optimization problem with combinatorial constraints. A popular algorithm for optimally solving this problem is dynamic programming (DP), which has quadratic computation and memory requirements. Given that sequences in practice can be very long, this algorithm is not a practical approach. Although many heuristic algorithms have been proposed to approximate the optimal segmentation, they have no guarantee on the quality of their solutions. In this paper, we take a differentiable approach to alleviate the aforementioned issues. First, we introduce a novel sigmoid-based regularization to smoothly approximate the combinatorial constraints. Combining it with the objective of balanced kernel clustering, we formulate a differentiable model termed Kernel clustering with sigmoid-based regularization (KCSR), where a gradient-based algorithm can be exploited to obtain the optimal segmentation. Second, we develop a stochastic variant of the proposed model. By using the stochastic gradient descent algorithm, which has much lower time and space complexities, for optimization, the second model can perform segmentation on overlong data sequences. Finally, for simultaneously segmenting multiple data sequences, we slightly modify the sigmoid-based regularization to introduce an extended variant of the proposed model. Through extensive experiments on various types of data sequences, the performances of our models are evaluated and compared with those of existing methods. The experimental results validate the advantages of the proposed models. Our Matlab source code is available on GitHub.\n"} {"abstract": " Phishing is the number one threat in the world of the internet. Phishing attacks have existed for decades, and with each passing year they become a greater problem for internet users as attackers come up with unique and creative ideas to breach security. In this paper, different types of phishing and anti-phishing techniques are presented. For this purpose, the Systematic Literature Review (SLR) approach is followed to critically define the proposed research questions. At first, 80 articles were extracted from different repositories. These articles were then filtered using the Tollgate approach to identify different types of phishing and anti-phishing techniques. The study found that spear phishing, email spoofing, email manipulation, and phone phishing are the most commonly used phishing techniques.
On the other hand, according to the SLR, machine learning approaches have the highest accuracy in preventing and detecting phishing attacks among all anti-phishing approaches.\n"} {"abstract": " This paper proposes a model-free nonparametric estimator of the conditional quantile of a time series regression model where the covariate vector is repeated many times for different values of the response. This type of data abounds in climate studies. To tackle such problems, our proposed method exploits the replicated nature of the data and improves on the restrictive linear model structure of conventional quantile regression. Relevant asymptotic theory for the nonparametric estimators of the mean and variance function of the model is derived under a very general framework. We provide a detailed simulation study which clearly demonstrates the gain in efficiency of the proposed method over other benchmark models, especially when the true data generating process entails a nonlinear mean function and a heteroskedastic pattern with time-dependent covariates. The predictive accuracy of the nonparametric method is remarkably high compared to other methods when attention is on the higher quantiles of the variable of interest. Usefulness of the proposed method is then illustrated with two climatological applications, one with well-known tropical cyclone wind-speed data and the other with air pollution data.\n"} {"abstract": " Polarimetric imaging is one of the most effective techniques for high-contrast imaging and characterization of circumstellar environments. These environments can be characterized through direct-imaging polarimetry at near-infrared wavelengths. The SPHERE/IRDIS instrument installed on the Very Large Telescope in its dual-beam polarimetric imaging (DPI) mode offers the capability to acquire polarimetric images at high contrast and high angular resolution. However, dedicated image processing is needed to remove the contamination by stellar light, instrumental polarization effects, and blurring by the instrumental point spread function. We aim to reconstruct and deconvolve the near-infrared polarization signal from circumstellar environments. We use observations of these environments obtained with the high-contrast imaging infrared polarimeter SPHERE-IRDIS at the VLT. We developed a new method to extract the polarimetric signal using an inverse-problem approach that benefits from the added knowledge of the detected signal formation process. The method includes a weighted data-fidelity term and a smoothness penalty, and takes instrumental polarization into account. The method enables accurate measurement of the polarized intensity and angle of linear polarization of circumstellar disks by taking into account the noise statistics and the convolution of the observed objects by the instrumental point spread function. It has the capability to use incomplete polarimetry cycles, which enhances the sensitivity of the observations. The method improves the overall performance compared to standard methods, in particular for low SNR/small polarized flux.\n"} {"abstract": " Traditional and deep learning-based fusion methods generate an intermediate decision map and obtain the fusion image through a series of post-processing procedures. However, the fusion results generated by these methods tend to lose some source image details or result in artifacts.
Inspired by image reconstruction techniques based on deep learning, we propose a multi-focus image fusion network framework without any post-processing to solve these problems in an end-to-end, supervised learning manner. To sufficiently train the fusion model, we have generated a large-scale multi-focus image dataset with ground-truth fusion images. Moreover, to obtain a more informative fusion image, we further designed a novel fusion strategy based on unity fusion attention, which is composed of a channel attention module and a spatial attention module. Specifically, the proposed fusion approach mainly comprises three key components: feature extraction, feature fusion, and image reconstruction. We first use seven convolutional blocks to extract image features from the source images. Then, the extracted convolutional features are fused by the proposed fusion strategy in the feature fusion layer. Finally, the fused image features are reconstructed by four convolutional blocks. Experimental results demonstrate that the proposed approach for multi-focus image fusion achieves remarkable fusion performance compared to 19 state-of-the-art fusion methods.\n"} {"abstract": " In this work, the aim is to study the spread of a contagious disease and information on a multilayer social system. The main idea is to find a criterion under which the adoption of the spreading information blocks or suppresses the epidemic spread. A two-layer network is the base of the model. The first layer describes the direct contact interactions, while the second layer is the information propagation layer. Both layers consist of the same nodes. The society consists of five different categories of individuals: susceptible, infective, recovered, vaccinated, and precautioned. Initially, only one infected individual starts transmitting the infection. Direct contact interactions spread the infection to the susceptibles. The information spreads through the second layer. The SIR model is employed for the infection spread, while the Bass equation models the adoption of information. The control parameters of the competition between the spread of information and the spread of disease are the topology and the density of connectivity. The topology of the information layer is a scale-free network with increasing density of edges. In the contact layer, regular and scale-free networks with the same average degree per node are used interchangeably. The observation is that increasing complexity of the contact network reduces the role of individual awareness. If the contact layer consists of networks with limited-range connections, or if its edges are sparser than those of the information network, the spread of information plays a significant role in controlling the epidemic.\n"} {"abstract": " This paper first formalises a new observability concept, called weak regular observability, that is adapted to Fast Moving Horizon Estimation, where one aims to estimate the state of a nonlinear system efficiently on rolling time windows in the case of small initial error. Additionally, sufficient conditions for weak regular observability are provided in a problem of Simultaneous Localisation and Mapping (SLAM) for different measurement models.
In particular, it is shown that following circular trajectories leads to weak regular observability in a second-order 2D SLAM problem with several possible types of sensors.\n"} {"abstract": " The dark matter halo surface density, given by the product of the dark matter core radius ($r_c$) and core density ($\rho_c$), has been shown to be a constant for a wide range of isolated galaxy systems. Here, we carry out a test of this {\em ansatz} using a sample of 17 relaxed galaxy groups observed using Chandra and XMM-Newton, as an extension of our previous analysis with galaxy clusters. We find that $\rho_c \propto r_c^{-1.35^{+0.16}_{-0.17}}$, with an intrinsic scatter of about 27.3%, which is about 1.5 times larger than that seen for galaxy clusters. Our results thereby indicate that the surface density is discrepant with respect to scale invariance by about 2$\sigma$, and its value is about four times greater than that for galaxies. Therefore, the elevated values of the halo surface density for groups and clusters indicate that the surface density cannot be a universal constant for all dark matter dominated systems. Furthermore, we also implement a test of the radial acceleration relation for this group sample. We find that the residual scatter in the radial acceleration relation is about 0.32 dex, a factor of three larger than that obtained using galaxy clusters. The acceleration scale which we obtain is in between those seen for galaxies and clusters.\n"} {"abstract": " The nature of unconventional superconductivity is intimately linked to the microscopic nature of the pairing interactions. In this work, motivated by cubic heavy fermion compounds with embedded multipolar moments, we theoretically investigate superconducting instabilities instigated by multipolar Kondo interactions. Employing multipolar fluctuations (mediated by the RKKY interaction) coupled to conduction electrons via two-channel Kondo and novel multipolar Kondo interactions, we uncover a variety of superconducting states characterized by higher-angular-momentum Cooper pairs, $J=0,1,2,3$. We demonstrate that both odd- and even-parity pairing functions are possible, regardless of the total angular momentum of the Cooper pairs, which can be traced back to the atypical nature of the multipolar Kondo interaction that intertwines conduction electron spin and orbital degrees of freedom. We determine that different (point-group) irrep-classified pairing functions may coexist with each other, with some of them characterized by gapped and point-node structures in their corresponding quasiparticle spectra. This work lays the foundation for discovery and classification of superconducting states in rare-earth metallic compounds with multipolar local moments.\n"} {"abstract": " Satellites are both crucial and, despite common misbelief, very fragile parts of our civilian and military critical infrastructure. While many efforts are focused on securing the ground and space segments, especially when national security or large business interests are affected, the small-sat, newspace revolution democratizes access to, and exploitation of, near-Earth orbits. This brings new players to the market, typically in the form of small- to medium-sized companies, offering new or more affordable services. Despite the necessity and inevitability of this process, it also opens potential new avenues for targeted attacks against space-related infrastructure.
Since sources of satellite ephemerides are very often centralized, they are subject to classical Man-in-the-Middle attacks, which open avenues for TLE spoofing attacks that may result in unnecessary collision-avoidance maneuvers in the best case and orchestrated crashes in the worst case. In this work, we propose a countermeasure to this problem in the form of a distributed solution with no central authority responsible for storing and disseminating TLE information. Instead, each peer participating in the system has full access to all of the records stored in the system and distributes the data in a consensual manner, ensuring information replication at each peer node. This way, the single-point-of-failure syndromes of classic systems, which currently exist due to the direct ephemerides distribution mechanism, are removed. Our proposed solution is to build data dissemination systems using permissioned, private ledgers where peers have strong and verifiable identities, which also allows for redundancy in SST data sourcing.\n"} {"abstract": " Opioid Use Disorder (OUD) is a public health crisis costing the US billions of dollars annually in healthcare, lost workplace productivity, and crime. Analyzing longitudinal healthcare data is critical in addressing many real-world problems in healthcare. Leveraging real-world longitudinal healthcare data, we propose a novel multi-stream transformer model called MUPOD for OUD identification. MUPOD is designed to simultaneously analyze multiple types of healthcare data streams, such as medications and diagnoses, by attending to segments within and across these data streams. Our model, tested on data from 392,492 patients with long-term back pain problems, showed significantly better performance than traditional models and recently developed deep learning models.\n"} {"abstract": " This paper presents an analytical study of the metric properties of the paraboloidal double projection, i.e., the central and orthogonal projections used in catadioptric camera systems. Metric properties have not been sufficiently studied in previous treatments of such systems. These properties involve the determination of the true lengths of projected lines and the areas bounded by projected lines. The main gain of determining the metric elements of the paraboloidal double projection is in distortion analysis and camera calibration, which is considered an essential tool in testing camera accuracy. It may also serve as a significant utility in comparative analyses between different camera projection systems.\n"} {"abstract": " Anti-ferromagnetic materials offer the possibility of ultrafast, high-data-density spintronic devices. A significant challenge is the reliable detection of the state of the antiferromagnet, which can be achieved using exchange bias. Here we develop an atomistic spin model of the athermal training effect, a well-known phenomenon in exchange-biased systems where the bias is significantly reduced after the first hysteresis cycle. We find that the setting process in granular thin films relies on the presence of interfacial mixing between the ferromagnetic and antiferromagnetic layers. We systematically investigate the effect of the intermixing and find that the exchange bias, switching field, and coercivity all increase with increased intermixing.
The interfacial spin state is highly frustrated, leading to a systematic decrease in the interfacial ordering of the ferromagnet. This metastable spin structure of initially irreversible spins leads to a large effective exchange coupling and thus a large increase in the switching field. After the first hysteresis cycle these metastable spins drop into a reversible ground state that is repeatable for all subsequent hysteresis cycles, demonstrating that the effect is truly athermal. Our simulations provide new insights into the role of interface mixing and the importance of metastable spin structures in exchange-biased systems, which could help with the design and optimisation of antiferromagnetic spintronic devices.\n"} {"abstract": " The optical matrix formalism is applied to find parameters such as the focal distance, back and front focal points, principal planes, and the equation relating object and image distances for a thick spherical lens immersed in air. Then, the formalism is applied to systems composed of two, three, and N thick lenses in cascade. It is found that a simple Gaussian equation is enough to relate object and image distances no matter the number of lenses.\n"} {"abstract": " Recent works have shown that learned models can achieve significant performance gains, especially in terms of perceptual quality measures, over traditional methods. Hence, the state of the art in image restoration and compression is getting redefined. This special issue covers the state of the art in learned image/video restoration and compression to promote further progress in innovative architectures and training methods for effective and efficient networks for image/video restoration and compression.\n"} {"abstract": " The Laser Interferometer Space Antenna, LISA, will detect gravitational wave signals from Extreme Mass Ratio Inspirals, where a stellar mass compact object orbits a supermassive black hole and eventually plunges into it. Here we report on LISA's capability to detect whether the smaller compact object in an Extreme Mass Ratio Inspiral is endowed with a scalar field, and to measure its scalar charge -- a dimensionless quantity that acts as a measure of how much scalar field the object carries. By direct comparison of signals, we show that LISA will be able to detect and measure the scalar charge with an accuracy of the order of a percent, which is an unprecedented level of precision. This result is independent of the origin of the scalar field and of the structure and other properties of the small compact object, so it can be seen as a generic assessment of LISA's capabilities to detect new fundamental fields.\n"} {"abstract": " Communication between workers and the master node to collect local stochastic gradients is a key bottleneck in a large-scale federated learning system. Various recent works have proposed to compress the local stochastic gradients to mitigate the communication overhead. However, robustness to malicious attacks is rarely considered in such a setting. In this work, we investigate the problem of Byzantine-robust federated learning with compression, where the attacks from Byzantine workers can be arbitrarily malicious. We point out that a vanilla combination of compressed stochastic gradient descent (SGD) and geometric median-based robust aggregation suffers from both stochastic and compression noise in the presence of Byzantine attacks.
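For concreteness, a minimal sketch of the geometric-median aggregation baseline just mentioned is given below (the classical Weiszfeld iteration; tolerances and dimensions are illustrative assumptions):

    import numpy as np

    def geometric_median(points, tol=1e-6, max_iter=100):
        # Weiszfeld iteration for the geometric median of row vectors.
        y = points.mean(axis=0)                       # start from the coordinate-wise mean
        for _ in range(max_iter):
            d = np.maximum(np.linalg.norm(points - y, axis=1), 1e-12)
            w = 1.0 / d                               # weights ~ inverse distance
            y_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(y_new - y) < tol:
                break
            y = y_new
        return y_new

    grads = np.random.randn(10, 5)                    # stand-ins for 10 workers' gradients
    grads[0] += 100.0                                 # one Byzantine outlier
    robust_update = geometric_median(grads)           # the outlier has bounded influence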
In light of this observation, we propose to jointly reduce the stochastic and compression noise so as to improve the Byzantine-robustness. For the stochastic noise, we adopt the stochastic average gradient algorithm (SAGA) to gradually eliminate the inner variations of regular workers. For the compression noise, we apply gradient difference compression and achieve compression for free. We theoretically prove that the proposed algorithm reaches a neighborhood of the optimal solution at a linear convergence rate, and the asymptotic learning error is of the same order as that of the state-of-the-art uncompressed method. Finally, numerical experiments demonstrate the effectiveness of the proposed method.\n"} {"abstract": " Many emerging cyber-physical systems, such as autonomous vehicles and robots, rely heavily on artificial intelligence and machine learning algorithms to perform important system operations. Since these highly parallel applications are computationally intensive, they need to be accelerated by graphics processing units (GPUs) to meet stringent timing constraints. However, despite the wide adoption of GPUs, efficiently scheduling multiple GPU applications while providing rigorous real-time guarantees remains a challenge. In this paper, we propose RTGPU, which can schedule the execution of multiple GPU applications in real time to meet hard deadlines. Each GPU application can have multiple CPU execution and memory copy segments, as well as GPU kernels. We start with a model to explicitly account for the CPU and memory copy segments of these applications. We then consider the GPU architecture in the development of a precise timing model for the GPU kernels and leverage a technique known as persistent threads to implement fine-grained kernel scheduling with improved performance through interleaved execution. Next, we propose a general method for scheduling parallel GPU applications in real time. Finally, to schedule multiple parallel GPU applications, we propose a practical real-time scheduling algorithm based on federated scheduling and grid search (for GPU kernel segments) with uniprocessor fixed-priority scheduling (for multiple CPU and memory copy segments). Our approach provides superior schedulability compared with previous work, and gives real-time guarantees to meet hard deadlines for multiple GPU applications according to comprehensive validation and evaluation on a real NVIDIA GTX1080Ti GPU system.\n"} {"abstract": " Supervisory control and data acquisition (SCADA) systems have continuously leveraged the evolution of network architectures, communication protocols, next-generation communication techniques (5G, 6G, Wi-Fi 6), and the internet of things (IoT). However, SCADA systems have become the most profitable and alluring targets for ransomware attackers. This paper proposes a novel deep learning-based ransomware detection framework for the SCADA-controlled electric vehicle charging station (EVCS), with a performance analysis of three deep learning algorithms, namely a deep neural network (DNN), a 1D convolutional neural network (CNN), and a long short-term memory (LSTM) recurrent neural network. All three deep learning-based simulated frameworks achieve around 97% average accuracy (ACC), more than 98% average area under the curve (AUC), and an average F1-score under 10-fold stratified cross-validation, with an average false alarm rate (FAR) of less than 1.88%.
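A minimal sketch of the kind of 10-fold stratified cross-validation protocol behind such ACC/AUC/F1 figures might look as follows (a logistic-regression stand-in replaces the DNN/CNN/LSTM models, and the data are synthetic placeholders):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import accuracy_score, roc_auc_score, f1_score
    from sklearn.linear_model import LogisticRegression  # stand-in classifier

    X, y = np.random.randn(1000, 20), np.random.randint(0, 2, 1000)
    accs, aucs, f1s = [], [], []
    for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        p = clf.predict_proba(X[test])[:, 1]        # class-1 probabilities
        yhat = (p >= 0.5).astype(int)
        accs.append(accuracy_score(y[test], yhat))
        aucs.append(roc_auc_score(y[test], p))
        f1s.append(f1_score(y[test], yhat))
    print(np.mean(accs), np.mean(aucs), np.mean(f1s))  # fold-averaged ACC, AUC, F1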
A ransomware-driven distributed denial of service (DDoS) attack tends to shift the SOC profile by exceeding the SOC control thresholds. The severity has been found to increase as the attack progress and penetration increase. Also, a ransomware-driven false data injection (FDI) attack has the potential to damage the entire BES or physical system by manipulating the SOC control thresholds. Which deep learning algorithm to deploy is a design choice and optimization issue, based on the tradeoffs between performance metrics.\n"} {"abstract": " Spectral lines from formaldehyde (H2CO) molecules at cm wavelengths are typically detected in absorption and trace a broad range of environments, from diffuse gas to giant molecular clouds. In contrast, thermal emission of formaldehyde lines at cm wavelengths is rare. In previous observations with the 100m Robert C. Byrd Green Bank Telescope (GBT), we detected 2 cm formaldehyde emission toward NGC7538 IRS1 - a high-mass protostellar object in a prominent star-forming region of our Galaxy. We present further GBT observations of the 2 cm and 1 cm H2CO lines to investigate the nature of the 2 cm H2CO emission. We conducted observations to constrain the angular size of the 2 cm emission region based on an East-West and North-South cross-scan map. Gaussian fits of the spatial distribution in the East-West direction show a deconvolved size (at half maximum) of the 2 cm emission of 50" +/- 8". The 1 cm H2CO observations revealed emission superimposed on a weak absorption feature. A non-LTE radiative transfer analysis shows that the H2CO emission is consistent with quasi-thermal radiation from dense gas (~10^5 to 10^6 cm^-3). We also report detection of four transitions of CH3OH (12.2, 26.8, 28.3, 28.9 GHz), the (8,8) transition of NH3 (26.5 GHz), and a cross-scan map of the 13 GHz SO line that shows extended emission (> 50").\n"} {"abstract": " Dynamically switchable half-/quarter-wave plates have recently been a focus of attention in the terahertz regime. Conventional design philosophy leads to multilayer metamaterials or narrowband metasurfaces. Here we propose a novel design philosophy and a VO2-metal hybrid metasurface for achieving a broadband dynamically switchable half-/quarter-wave plate (HWP/QWP) based on the transition from the overdamped to the underdamped resonance. Results show that, by varying the VO2 conductivity by three orders of magnitude, the proposed metasurface's function can be switched between an HWP with polarization conversion ratio larger than 96% and a QWP with ellipticity close to -1 over the broad working band of 0.8-1.2 THz. We expect that the proposed design philosophy will advance the engineering of metasurfaces for dynamically switchable functionalities beyond the terahertz regime.\n"} {"abstract": " We develop a new family of orthogonal polynomials, the modified discrete Laguerre (MDL) polynomials, designed to accelerate the computation of bosonic Matsubara sums in statistical physics. The MDL polynomials lead to a rapidly convergent Gaussian "quadrature" scheme for Matsubara sums, and more generally for any sum $F(0)/2 + F(h) + F(2h) + \cdots$ of exponentially decaying summands $F(nh) = f(nh)e^{-nhs}$ where $hs>0$. We demonstrate this technique for computation of finite-temperature Casimir forces arising from quantum field theory, where evaluation of the summand $F$ requires expensive electromagnetic simulations.
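For orientation, the brute-force baseline that such a quadrature is designed to replace simply truncates the sum term by term. The sketch below is illustrative only (it is not the MDL scheme, whose nodes and weights require the paper's construction); it also includes a geometric-series special case with a closed form for checking:

    import numpy as np

    def matsubara_sum_naive(f, h, s, tol=1e-12, max_terms=10**7):
        # Direct evaluation of F(0)/2 + F(h) + F(2h) + ... with F(x) = f(x)*exp(-x*s).
        # Each term costs one evaluation of f, which is the expensive part in the
        # Casimir setting described above.
        total = 0.5 * f(0.0)
        n = 1
        while n < max_terms:
            term = f(n * h) * np.exp(-n * h * s)
            total += term
            if abs(term) < tol * max(1.0, abs(total)):
                break
            n += 1
        return total

    # Check against a geometric series: f(x) = 1 gives
    # 1/2 + e^{-h s}/(1 - e^{-h s}) exactly.
    h, s = 0.1, 1.0
    approx = matsubara_sum_naive(lambda x: 1.0, h, s)
    exact = 0.5 + np.exp(-h * s) / (1.0 - np.exp(-h * s))
    print(approx, exact)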
A key advantage of our scheme, compared to previous methods, is that the convergence rate is nearly independent of the spacing $h$ (proportional to the thermodynamic temperature). We also prove convergence for any polynomially decaying $F$.\n"} {"abstract": " The growth rate of the number of scientific publications is constantly increasing, creating important challenges in the identification of valuable research and in various scholarly data management applications in general. In this context, measures which can effectively quantify the scientific impact could be invaluable. In this work, we present BIP! DB, an open dataset that contains a variety of impact measures calculated for a large collection of more than 100 million scientific publications from various disciplines.\n"} {"abstract": " Context. The high-energy emission regions of rotation-powered pulsars are studied using folded light curves (FLCs) and phase-resolved spectra (PRS).\n Aims. This work uses the NICER observatory to obtain the highest resolution FLC and PRS of the Crab pulsar at soft X-ray energies.\n Methods. NICER has accumulated about 347 ksec of data on the Crab pulsar. The data are processed using the standard analysis pipeline. Stringent filtering is done for spectral analysis. The individual detectors are calibrated in terms of the long time light curve (LTLC), raw spectrum, and deadtime. The arrival times of the photons are referred to the solar system's barycenter, and the rotation frequency $\nu$ and its time derivative $\dot \nu$ are used to derive the rotation phase of each photon.\n Results. The LTLCs, raw spectra, and deadtimes of the individual detectors are statistically similar; the latter two show no evolution with epoch; detector deadtime is independent of photon energy. The deadtime for the Crab pulsar, taking into account the two types of deadtime, is only approximately 7% to 8% larger than that obtained using the cleaned events. Detector 00 behaves slightly differently from the rest, but can be used for spectral work. The PRS of the two peaks of the Crab pulsar are obtained at a resolution of better than 1/512 in rotation phase. The FLC very close to the first peak rises slowly and falls faster. The spectral index of the PRS is almost constant very close to the first peak.\n Conclusions. The high-resolution FLC and PRS of the peaks of the Crab pulsar provide important constraints for the formation of caustics in the emission zone.\n"} {"abstract": " The many-body-theory approach to positronium-atom interactions developed in [Phys. Rev. Lett. \textbf{120}, 183402 (2018)] is applied to the sequence of noble-gas atoms He-Xe. The Dyson equation is solved separately for an electron and positron moving in the field of the atom, with the entire system enclosed in a hard-wall spherical cavity. The two-particle Dyson equation is solved to give the energies and wave functions of the Ps eigenstates in the cavity. From these, we determine the scattering phase shifts and cross sections, and values of the pickoff annihilation parameter $^1Z_\text{eff}$ including short-range electron-positron correlations via vertex enhancement factors. Comparisons are made with available experimental data for elastic and momentum-transfer cross sections and $^1Z_\text{eff}$. Values of $^1Z_\text{eff}$ for He and Ne, previously reported in [Phys. Rev. Lett.
\textbf{120}, 183402 (2018)], are found to be in near-perfect agreement with experiment, and for Ar, Kr, and Xe within a factor of 1.2.\n"} {"abstract": " Search-based test generation is guided by feedback from one or more fitness functions - scoring functions that judge solution optimality. Choosing informative fitness functions is crucial to meeting the goals of a tester. Unfortunately, many goals - such as forcing the class-under-test to throw exceptions, increasing test suite diversity, and attaining Strong Mutation Coverage - do not have effective fitness function formulations. We propose that meeting such goals requires treating fitness function identification as a secondary optimization step. An adaptive algorithm that can vary the selection of fitness functions could adjust its selection throughout the generation process to maximize goal attainment, based on the current population of test suites. To test this hypothesis, we have implemented two reinforcement learning algorithms in the EvoSuite unit test generation framework, and used these algorithms to dynamically set the fitness functions used during generation for the three goals identified above.\n We have evaluated our framework, EvoSuiteFIT, on a set of Java case examples. EvoSuiteFIT techniques attain significant improvements for two of the three goals, and show limited improvements on the third when the number of generations of evolution is fixed. Additionally, for two of the three goals, EvoSuiteFIT detects faults missed by the other techniques. The ability to adjust fitness functions allows strategic choices that efficiently produce more effective test suites, and examining these choices offers insight into how to attain our testing goals. We find that adaptive fitness function selection is a powerful technique to apply when an effective fitness function does not already exist for achieving a testing goal.\n"} {"abstract": " While object semantic understanding is essential for most service robotic tasks, 3D object classification is still an open problem. Learning from artificial 3D models alleviates the cost of annotation necessary to approach this problem, but most methods still struggle with the differences existing between artificial and real 3D data. We conjecture that the cause of those issues is that many methods learn directly from point coordinates instead of the shape, as the former are hard to center and scale reliably under variable occlusions. We introduce spherical kernel point convolutions that directly exploit the object surface, represented as a graph, and a voting scheme to limit the impact of poor segmentation on the classification results. Our proposed approach improves upon state-of-the-art methods by up to 36% when transferring from artificial objects to real objects.\n"} {"abstract": " After the observation of the Higgs boson by the ATLAS and CMS experiments at the LHC, accurate measurements of its properties, which allow us to study the electroweak symmetry breaking mechanism, have become a high priority for particle physics. The most promising way of extracting the Higgs self-coupling at hadron colliders is by examining double Higgs production, especially in the $b \bar{b} \gamma \gamma$ channel. In this work, we present a full loop calculation of both SM and New Physics effects in Higgs pair production to next-to-leading order (NLO), including the loop-induced processes $gg\to HH$, $gg\to HHg$, and $qg \to qHH$.
We also included the calculation of the corrections from diagrams with only one QCD coupling in $qg \to qHH$, which were neglected in previous studies. With the latest observed limit on the HH production cross-section, we studied the constraints on the effective Higgs couplings for the LHC at center-of-mass energies of 14 TeV and a provisional 100 TeV proton collider within the Future-Circular-Collider (FCC) project. To obtain results better than using the total cross-section alone, we focused on the $b \bar{b} \gamma \gamma$ channel and divided the differential cross-section into low and high bins based on the total invariant mass and $p_{T}$ spectra. The new physics effects are further constrained by including the extra kinematic information. However, some degeneracy persists, as shown in previous studies, especially in determining the Higgs trilinear coupling. Our analysis shows that the degeneracy is reduced by including the full NLO corrections.\n"} {"abstract": " In this paper, we propose an energy-efficient optimal altitude for an aerial access point (AAP), which acts as a flying base station to serve a set of ground user equipment (UE). Since the ratio of the total energy consumed by the aerial vehicle to the communication energy is very large, we include the aerial vehicle's energy consumption in the problem formulation. After considering the energy consumption model of the aerial vehicle, our objective is translated into a non-convex optimization problem of maximizing the global energy efficiency (GEE) of the aerial communication system, subject to altitude and minimum individual data rate constraints. At first, the non-convex fractional objective function is solved by using the sequential convex programming (SCP) optimization technique. To compare the result of SCP with the global optimum of the problem, we reformulate the initial problem as a monotonic fractional optimization problem (MFP) and solve it using the polyblock outer approximation (PA) algorithm. Numerical results show that the candidate solution obtained from SCP is the same as the global optimum found using the monotonic fractional programming technique. Furthermore, the impact of the aerial vehicle's energy consumption on the optimal altitude determination is also studied.\n"} {"abstract": " Generative Adversarial Networks (GANs) have demonstrated unprecedented success in various image generation tasks. The encouraging results, however, come at the price of a cumbersome training process, during which the generator and discriminator are alternately updated in two stages. In this paper, we investigate a general training scheme that enables training GANs efficiently in only one stage. Based on the adversarial losses of the generator and discriminator, we categorize GANs into two classes, Symmetric GANs and Asymmetric GANs, and introduce a novel gradient decomposition method to unify the two, allowing us to train both classes in one stage and hence alleviate the training effort. We also computationally analyze the efficiency of the proposed method, and empirically demonstrate that the proposed method yields a solid $1.5\times$ acceleration across various datasets and network architectures. Furthermore, we show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
The\ncode is available at https://github.com/zju-vipa/OSGAN.\n"} {"abstract": " In order to satisfy timing constraints, modern real-time applications require\nmassively parallel accelerators such as General Purpose Graphic Processing\nUnits (GPGPUs). Generation after generation, the number of computing clusters\nmade available in novel GPU architectures is steadily increasing; hence,\ninvestigating suitable scheduling approaches is now mandatory. Such scheduling\napproaches are related to mapping different and concurrent compute kernels\nwithin the GPU computing clusters, hence grouping GPU computing clusters into\nschedulable partitions. In this paper we propose novel techniques to define GPU\npartitions; this allows us to define suitable task-to-partition allocation\nmechanisms in which tasks are GPU compute kernels featuring different timing\nrequirements. Such mechanisms will take into account the interference that GPU\nkernels experience when running in overlapping time windows. Hence, an\neffective and simple way to quantify the magnitude of such interference is also\npresented. We demonstrate the efficiency of the proposed approaches against the\nclassical techniques that considered the GPU as a single, non-partitionable\nresource.\n"} {"abstract": " Electrons in low-temperature solids are governed by the non-relativistic\nSchr\\"{o}dinger equation, since the electron velocities are much slower\nthan the speed of light. Remarkably, the low-energy quasi-particles given by\nelectrons in various materials can behave as relativistic Dirac/Weyl fermions\nthat obey the relativistic Dirac/Weyl equation. We refer to these materials as\n\"Dirac/Weyl materials\", which provide a tunable platform to test relativistic\nquantum phenomena in table-top experiments. More interestingly, different types\nof physical fields in these Weyl/Dirac materials, such as magnetic\nfluctuations, lattice vibration, strain, and material inhomogeneity, can couple\nto the \"relativistic\" quasi-particles in a similar way as the $U(1)$ gauge\ncoupling. As these fields do not have gauge-invariant dynamics in general, we\nrefer to them as \"pseudo-gauge fields\". In this chapter, we overview the\nconcept and physical consequences of pseudo-gauge fields in Weyl/Dirac\nmaterials. In particular, we will demonstrate that pseudo-gauge fields can\nprovide a unified understanding of a variety of physical phenomena, including\nchiral zero modes inside a magnetic vortex core of magnetic Weyl semimetals, a\ngiant current response at magnetic resonance in magnetic topological\ninsulators, and piezo-electromagnetic response in time-reversal invariant\nsystems. These phenomena are deeply related to various concepts in high-energy\nphysics, such as chiral anomaly and axion electrodynamics.\n"} {"abstract": " The bug triaging process, an essential process of assigning bug reports to\nthe most appropriate developers, is closely related to the quality and costs of\nsoftware development. As manual bug assignment is a labor-intensive task,\nespecially for large-scale software projects, many machine-learning-based\napproaches have been proposed to automatically triage bug reports. Although\ndeveloper collaboration networks (DCNs) are dynamic and evolving in the\nreal world, most automated bug triaging approaches focus on static tossing\ngraphs at a single time slice. Also, none of the previous studies consider\nperiodic interactions among developers. 
To address the problems mentioned\nabove, in this article, we propose a novel spatial-temporal dynamic graph\nneural network (ST-DGNN) framework, including a joint random walk (JRWalk)\nmechanism and a graph recurrent convolutional neural network (GRCNN) model. In\nparticular, JRWalk aims to sample local topological structures in a graph with\ntwo sampling strategies by considering both node importance and edge\nimportance. GRCNN has three components with the same structure, i.e.,\nhourly-periodic, daily-periodic, and weekly-periodic components, to learn the\nspatial-temporal features of dynamic DCNs. We evaluated our approach's\neffectiveness by comparing it with several state-of-the-art graph\nrepresentation learning methods in two domain-specific tasks that belong to\nnode classification. In the two tasks, experiments on two real-world,\nlarge-scale developer collaboration networks collected from the Eclipse and\nMozilla projects indicate that the proposed approach outperforms all the\nbaseline methods.\n"} {"abstract": " We propose an algorithm that uses linear function approximation (LFA) for\nstochastic shortest path (SSP). Under minimal assumptions, it obtains sublinear\nregret, is computationally efficient, and uses stationary policies. To our\nknowledge, this is the first such algorithm in the LFA literature (for SSP or\nother formulations). Our algorithm is a special case of a more general one,\nwhich achieves regret that scales as the square root of the number of episodes,\ngiven access to a certain computation oracle.\n"} {"abstract": " This paper develops a new empirical Bayesian inference algorithm for solving\na linear inverse problem given multiple measurement vectors (MMV) of\nunder-sampled and noisy observable data. Specifically, by exploiting the joint\nsparsity across the multiple measurements in the sparse domain of the\nunderlying signal or image, we construct a new support informed sparsity\npromoting prior. Several applications can be modeled using this framework, and\nas a prototypical example we consider reconstructing an image from synthetic\naperture radar (SAR) observations using nearby azimuth angles. Our numerical\nexperiments demonstrate that using this new prior not only improves accuracy of\nthe recovery, but also reduces the uncertainty in the posterior when compared\nto standard sparsity producing priors.\n"} {"abstract": " The current interests in the universe motivate us to go beyond Einstein's\nGeneral theory of relativity. One of the interesting proposals comes from a new\nclass of teleparallel gravity named symmetric teleparallel gravity, i.e.,\n$f(Q)$ gravity, where the non-metricity term $Q$ is accountable for fundamental\ninteraction. The vital role of these alternative modified theories of gravity is\nto address the recent interests and to present a realistic cosmological model.\nThis manuscript's main objective is to study the traversable wormhole\ngeometries in $f(Q)$ gravity. We construct the wormhole geometries for three\ncases: (i) by assuming a relation between the radial and lateral pressure, (ii)\nconsidering phantom energy equation of state (EoS), and (iii) for a specific\nshape function in the fundamental interaction of gravity (i.e., for a linear form\nof $f(Q)$). Besides, we discuss two wormhole geometries for a general case of\n$f(Q)$ with two specific shape functions. Then, we discuss the viability of\nshape functions and the stability analysis of the wormhole solutions for each\ncase. 
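For concreteness, shape functions in such wormhole studies are conventionally screened with the throat condition $b(r_0) = r_0$, the flaring-out condition $(b - rb')/b^2 > 0$ near the throat, and asymptotic flatness $b(r)/r \to 0$. The sketch below runs these standard checks on a hypothetical shape function, not one of the paper's choices, assuming sympy is available.

```python
# Viability checks for a hypothetical shape function b(r) = sqrt(r0*r)
# (not one of the paper's choices); sympy assumed.
import sympy as sp

r, r0 = sp.symbols("r r0", positive=True)
b = sp.sqrt(r0 * r)

throat = sp.simplify(b.subs(r, r0) - r0)                 # b(r0) - r0 = 0
flare = sp.simplify((b - r * sp.diff(b, r)) / b ** 2)    # flaring-out > 0
asymptotic = sp.limit(b / r, r, sp.oo)                   # b(r)/r -> 0

print("throat condition:", throat)         # 0
print("flaring-out term:", flare)          # 1/(2*sqrt(r*r0)), positive
print("asymptotic flatness:", asymptotic)  # 0
```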
We found that the null energy condition (NEC) is violated by each wormhole\nmodel, which leads us to conclude that our outcomes are realistic and stable. Finally, we\ndiscuss the embedding diagrams and volume integral quantifier to have a\ncomplete view of wormhole geometries.\n"} {"abstract": " We study temporally localized structures in doubly resonant degenerate\noptical parametric oscillators in the absence of temporal walk-off. We focus on\nstates formed through the locking of domain walls between the zero and a\nnon-zero continuous wave solution. We show that these states undergo collapsed\nsnaking and we characterize their dynamics in the parameter space.\n"} {"abstract": " Experience replay enables off-policy reinforcement learning (RL) agents to\nutilize past experiences to maximize the cumulative reward. Prioritized\nexperience replay that weighs experiences by the magnitude of their\ntemporal-difference error ($|\\text{TD}|$) significantly improves the learning\nefficiency. But how $|\\text{TD}|$ is related to the importance of experience is\nnot well understood. We address this problem from an economic perspective, by\nlinking $|\\text{TD}|$ to value of experience, which is defined as the value\nadded to the cumulative reward by accessing the experience. We theoretically\nshow the value metrics of experience are upper-bounded by $|\\text{TD}|$ for\nQ-learning. Furthermore, we successfully extend our theoretical framework to\nmaximum-entropy RL by deriving the lower and upper bounds of these value\nmetrics for soft Q-learning, which turn out to be the product of $|\\text{TD}|$\nand \"on-policyness\" of the experiences. Our framework links two important\nquantities in RL: $|\\text{TD}|$ and value of experience. We empirically show\nthat the bounds hold in practice, and experience replay using the upper bound\nas priority improves maximum-entropy RL in Atari games.\n"} {"abstract": " We tackle the problem of unsupervised synthetic-to-real domain adaptation for\nsingle image depth estimation. An essential building block of single image\ndepth estimation is an encoder-decoder task network that takes RGB images as\ninput and produces depth maps as output. In this paper, we propose a novel\ntraining strategy to force the task network to learn domain invariant\nrepresentations in a self-supervised manner. Specifically, we extend\nself-supervised learning from traditional representation learning, which works\non images from a single domain, to domain invariant representation learning,\nwhich works on images from two different domains by utilizing an image-to-image\ntranslation network. Firstly, we use an image-to-image translation network to\ntransfer domain-specific styles between synthetic and real domains. This style\ntransfer operation allows us to obtain similar images from the different\ndomains. Secondly, we jointly train our task network and Siamese network with\nthe same images from the different domains to obtain domain invariance for the\ntask network. Finally, we fine-tune the task network using labeled synthetic\nand unlabeled real-world data. Our training strategy yields improved\ngeneralization capability in the real-world domain. We carry out an extensive\nevaluation on two popular datasets for depth estimation, KITTI and Make3D. The\nresults demonstrate that our proposed method outperforms the state-of-the-art\non all metrics, e.g. by 14.7% on Sq Rel on KITTI. 
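Tying back to the experience-replay result above: using $|\text{TD}|$ (or an upper bound on the value of experience) as a sampling priority can be sketched with a toy proportional buffer, assuming numpy; the class and the small priority floor below are illustrative, not the paper's implementation.

```python
# Toy proportional prioritization with priority = |TD error|.
import numpy as np

rng = np.random.default_rng(0)

class PrioritizedReplay:
    def __init__(self, capacity):
        self.capacity, self.data, self.prio = capacity, [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:        # drop oldest when full
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        self.prio.append(abs(td_error) + 1e-6)     # floor keeps everything sampleable

    def sample(self, batch_size):
        p = np.asarray(self.prio)
        idx = rng.choice(len(self.data), size=batch_size, p=p / p.sum())
        return [self.data[i] for i in idx], idx    # idx allows priority updates

buf = PrioritizedReplay(capacity=1000)
for _ in range(100):
    buf.add(("s", "a", 0.0, "s_next"), td_error=rng.normal())
batch, idx = buf.sample(8)
print(len(batch), "transitions sampled, indices", idx)
```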
The source code and model\nweights will be made available.\n"} {"abstract": " We propose an algorithm for automatic, targetless, extrinsic calibration of a\nLiDAR and camera system using semantic information. We achieve this goal by\nmaximizing mutual information (MI) of semantic information between sensors,\nleveraging a neural network to estimate semantic mutual information, and matrix\nexponential for calibration computation. Using kernel-based sampling to sample\ndata from camera measurement based on LiDAR projected points, we formulate the\nproblem as a novel differentiable objective function which supports the use of\ngradient-based optimization methods. We also introduce an initial calibration\nmethod using 2D MI-based image registration. Finally, we demonstrate the\nrobustness of our method and quantitatively analyze the accuracy on a synthetic\ndataset and also evaluate our algorithm qualitatively on KITTI360 and RELLIS-3D\nbenchmark datasets, showing improvement over recent comparable approaches.\n"} {"abstract": " Providing multi-connectivity services is an important goal for next\ngeneration wireless networks, where multiple access networks are available and\nneed to be integrated into a coherent solution that efficiently supports both\nreliable and unreliable traffic. Based on virtual network interfaces and per\npath congestion controlled tunnels, the MP-DCCP based multiaccess aggregation\nframework is a novel solution that flexibly supports different path\nschedulers and congestion control algorithms as well as reordering modules. The\nframework has been implemented within the Linux kernel space and has been\ntested over different prototypes. Experimental results have shown that the\noverall performance strongly depends upon the congestion control algorithm used\non the individual DCCP tunnels, denoted as CCID. In this paper, we present an\nimplementation of the BBR (Bottleneck Bandwidth Round Trip propagation time)\ncongestion control algorithm for DCCP in the Linux kernel. We show how BBR is\nintegrated into the MP-DCCP multi-access framework and evaluate its performance\nover both single and multi-path environments. Our evaluation results show that\nBBR improves the performance compared to CCID2 for multi-path scenarios due to\nthe faster response to changes in the available bandwidth, which reduces\nlatency and increases performance, especially for unreliable traffic. The\nMP-DCCP framework code, including the new CCID5, is available as open source.\n"} {"abstract": " Recent developments in representational learning for information retrieval\ncan be organized in a conceptual framework that establishes two pairs of\ncontrasts: sparse vs. dense representations and unsupervised vs. learned\nrepresentations. Sparse learned representations can further be decomposed into\nexpansion and term weighting components. This framework allows us to understand\nthe relationship between recently proposed techniques such as DPR, ANCE,\nDeepCT, DeepImpact, and COIL, and furthermore, gaps revealed by our analysis\npoint to \"low hanging fruit\" in terms of techniques that have yet to be\nexplored. We present a novel technique dubbed \"uniCOIL\", a simple extension of\nCOIL that achieves, to our knowledge, the current state of the art in sparse\nretrieval on the popular MS MARCO passage ranking dataset. 
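To make the term-weighting view above concrete: scoring in learned-sparse retrieval reduces to weighted term matching over an inverted index. A toy in-memory sketch with made-up weights follows; production systems, such as the Anserini/Lucene stack mentioned next, do this at scale over learned impact weights.

```python
# Toy learned-sparse scoring: documents and queries are bags of
# (term, weight); ranking is a dot product over shared terms, served from an
# inverted index. All weights here are invented for illustration.
from collections import defaultdict

docs = {
    "d1": {"neural": 1.4, "retrieval": 2.0},
    "d2": {"lucene": 1.1, "retrieval": 0.7, "index": 1.3},
}

index = defaultdict(list)                  # term -> [(doc_id, weight)]
for doc_id, terms in docs.items():
    for term, w in terms.items():
        index[term].append((doc_id, w))

def search(query):                         # query: term -> weight
    scores = defaultdict(float)
    for term, qw in query.items():
        for doc_id, dw in index.get(term, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search({"retrieval": 1.0, "index": 0.5}))   # [('d1', 2.0), ('d2', 1.35)]
```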
Our implementation\nusing the Anserini IR toolkit is built on the Lucene search library and thus\nfully compatible with standard inverted indexes.\n"} {"abstract": " We prove a Gannon-Lee theorem for non-globally hyperbolic Lorentzian\nmetrics of regularity $C^1$, the most general regularity class currently\navailable in the context of the classical singularity theorems. Along the way\nwe also prove that any maximizing causal curve in a $C^1$-spacetime is a\ngeodesic and hence of $C^2$-regularity.\n"} {"abstract": " Alphas are stock prediction models capturing trading signals in a stock\nmarket. A set of effective alphas can generate weakly correlated high returns\nto diversify the risk. Existing alphas can be categorized into two classes:\nFormulaic alphas are simple algebraic expressions of scalar features, and thus\ncan generalize well and be mined into a weakly correlated set. Machine learning\nalphas are data-driven models over vector and matrix features. They are more\npredictive than formulaic alphas, but are too complex to mine into a weakly\ncorrelated set. In this paper, we introduce a new class of alphas to model\nscalar, vector, and matrix features which possess the strengths of these two\nexisting classes. The new alphas predict returns with high accuracy and can be\nmined into a weakly correlated set. In addition, we propose a novel alpha\nmining framework based on AutoML, called AlphaEvolve, to generate the new\nalphas. To this end, we first propose operators for generating the new alphas\nand selectively injecting relational domain knowledge to model the relations\nbetween stocks. We then accelerate the alpha mining by proposing a pruning\ntechnique for redundant alphas. Experiments show that AlphaEvolve can evolve\ninitial alphas into the new alphas with high returns and weak correlations.\n"} {"abstract": " In this article we generalize the concepts that were used in the PhD thesis\nof Drudge to classify Cameron-Liebler line classes in PG$(n,q), n\\geq 3$, to\nCameron-Liebler sets of $k$-spaces in PG$(n,q)$ and AG$(n,q)$. In his PhD\nthesis, Drudge proved that every Cameron-Liebler line class in PG$(n,q)$\nintersects every $3$-dimensional subspace in a Cameron-Liebler line class in\nthat subspace. We are using the generalization of this result for sets of\n$k$-spaces in PG$(n,q)$ and AG$(n,q)$. Together with a basic counting argument\nthis gives a very strong non-existence condition, $n\\geq 3k+3$. This condition\ncan also be improved for $k$-sets in AG$(n,q)$, with $n\\geq 2k+2$.\n"} {"abstract": " We consider the problem of communicating a general bivariate function of two\nclassical sources observed at the encoders of a classical-quantum multiple\naccess channel. Building on the techniques developed for the case of a\nclassical channel, we propose and analyze a coding scheme based on coset codes.\nThe proposed technique enables the decoder to recover the desired function\nwithout recovering the sources themselves. We derive a new set of sufficient\nconditions that are weaker than those currently known for the identified\nexamples. 
This work is\nbased on a new ensemble of coset codes that are proven to achieve the capacity\nof a classical-quantum point-to-point channel.\n"} {"abstract": " Motivated by recent observations of ergodicity breaking due to Hilbert space\nfragmentation in 1D Fermi-Hubbard chains with a tilted potential [Scherg et\nal., arXiv:2010.12965], we show that the same system also hosts quantum\nmany-body scars in a regime $U\\approx \\Delta \\gg J$ at electronic filling\nfactor $\\nu=1$. We numerically demonstrate that the scarring phenomenology in\nthis model is similar to other known realisations such as Rydberg atom chains,\nincluding persistent dynamical revivals and ergodicity-breaking many-body\neigenstates. At the same time, we show that the mechanism of scarring in the\nFermi-Hubbard model is different from other examples in the literature: the\nscars originate from a subgraph, representing a free spin-1 paramagnet, which\nis weakly connected to the rest of the Hamiltonian's adjacency graph. Our work\ndemonstrates that correlated fermions in tilted optical lattices provide a\nplatform for understanding the interplay of many-body scarring and other forms\nof ergodicity breaking, such as localisation and Hilbert space fragmentation.\n"} {"abstract": " We map the likelihood of GW190521, the heaviest detected binary black hole\n(BBH) merger, by sampling under different mass and spin priors designed to be\nuninformative. We find that a source-frame total mass of $\\sim$$150 M_{\\odot}$\nis consistently supported, but posteriors in mass ratio and spin depend\ncritically on the choice of priors. We confirm that the likelihood has a\nmulti-modal structure with peaks in regions of mass ratio representing very\ndifferent astrophysical scenarios. The unequal-mass region ($m_2 / m_1 < 0.3$)\nhas an average likelihood $\\sim$$e^6$ times larger than the equal-mass region\n($m_2 / m_1 > 0.3$) and a maximum likelihood $\\sim$$e^2$ larger. Using\nensembles of samples across priors, we examine the implications of\nqualitatively different BBH sources that fit the data. We find that the\nequal-mass solution has poorly constrained spins and at least one black hole\nmass that is difficult to form via stellar collapse due to pair instability.\nThe unequal-mass solution can avoid this mass gap entirely but requires a\nnegative effective spin and a precessing primary. Either of these scenarios is\nmore easily produced by dynamical formation channels than field binary\nco-evolution. The sensitive comoving volume-time of the mass gap solution is\n$\\mathcal{O}(10)$ times larger than the gap-avoiding solution. After accounting\nfor this distance effect, the likelihood still reverses the advantage to favor\nthe gap-avoiding scenario by a factor of $\\mathcal{O}(100)$ before considering\nmass and spin priors. Posteriors are easily driven away from this\nhigh-likelihood region by common prior choices meant to be uninformative,\nmaking GW190521 parameter inference sensitive to the assumed mass and spin\ndistributions of mergers in the source's astrophysical channel. This may be a\ngeneric issue for similarly heavy events given current detector sensitivity and\nwaveform degeneracies.\n"} {"abstract": " The ESO workshop \"Ground-based thermal infrared astronomy\" was held on-line\nOctober 12-16, 2020. Originally planned as a traditional in-person meeting at\nESO in Garching in April 2020, it was rescheduled and transformed into a fully\non-line event due to the COVID-19 pandemic. 
With 337 participants from 36\ncountries, the workshop was a resounding success, demonstrating the wide\ninterest of the astronomical community in the science goals and the toolkit of\nground-based thermal infrared astronomy.\n"} {"abstract": " Federated Learning (FL) is an emerging decentralized learning framework\nthrough which multiple clients can collaboratively train a learning model.\nHowever, a major obstacle that impedes the wide deployment of FL lies in\nmassive communication traffic. To train high dimensional machine learning\nmodels (such as CNN models), heavy communication traffic can be incurred by\nexchanging model updates via the Internet between clients and the parameter\nserver (PS), implying that the network resource can be easily exhausted.\nCompressing model updates is an effective way to reduce the traffic amount.\nHowever, a flexible unbiased compression algorithm applicable to both uplink\nand downlink compression in FL is still absent from existing works. In this\nwork, we devise the Model Update Compression by Soft Clustering (MUCSC)\nalgorithm to compress model updates transmitted between clients and the PS. In\nMUCSC, it is only necessary to transmit cluster centroids and the cluster ID of\neach model update. Moreover, we prove that: 1) The compressed model updates are\nunbiased estimation of their original values so that the convergence rate by\ntransmitting compressed model updates is unchanged; 2) MUCSC can guarantee that\nthe influence of the compression error on the model accuracy is minimized.\nThen, we further propose the boosted MUCSC (B-MUCSC) algorithm, a biased\ncompression algorithm that can achieve an extremely high compression rate by\ngrouping insignificant model updates into a super cluster. B-MUCSC is suitable\nfor scenarios with very scarce network resources. Ultimately, we conduct\nextensive experiments with the CIFAR-10 and FEMNIST datasets to demonstrate\nthat our algorithms can not only substantially reduce the volume of\ncommunication traffic in FL, but also improve the training efficiency in\npractical networks.\n"} {"abstract": " We consider string theory vacua with tadpoles for dynamical fields and\nuncover universal features of the resulting spacetime-dependent solutions. We\nargue that the solutions can extend only a finite distance $\\Delta$ away in the\nspacetime dimensions over which the fields vary, scaling as $\\Delta^n\\sim {\\cal\nT}$ with the strength of the tadpole ${\\cal T}$. We show that naive\nsingularities arising at this distance scale are physically replaced by ends of\nspacetime, related to the cobordism defects of the swampland cobordism\nconjecture and involving stringy ingredients like orientifold planes and\nbranes, or exotic variants thereof. We illustrate these phenomena in large\nclasses of examples, including AdS$_5\\times T^{1,1}$ with 3-form fluxes, 10d\nmassive IIA, M-theory on K3, the 10d non-supersymmetric $USp(32)$ strings, and\ntype IIB compactifications with 3-form fluxes and/or magnetized D-branes. We\nalso describe a 6d string model whose tadpole triggers spontaneous\ncompactification to a semirealistic 3-family MSSM-like particle physics model.\n"} {"abstract": " For a bounded domain $D \\subset \\mathbb{C}^n$, let $K_D = K_D(z) > 0$ denote\nthe Bergman kernel on the diagonal and consider the reproducing kernel Hilbert\nspace of holomorphic functions on $D$ that are square integrable with respect\nto the weight $K_D^{-d}$, where $d \\geq 0$ is an integer. 
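Concretely, with $F : D_1 \to D_2$ biholomorphic and $J_F$ its complex Jacobian (our notation), the change of variables suggests the following transformation rule for the weighted kernels; this is a hedged sketch of the computation, not a statement taken from the paper, and it recovers the classical Bergman law at $d = 0$.

```latex
% Hedged sketch: the substitution dV(F(z)) = |det J_F(z)|^2 dV(z) and the
% classical law K_{D_1}(z) = |det J_F(z)|^2 K_{D_2}(F(z)) make
%   f \mapsto (f \circ F)\,(\det J_F)^{d+1}
% an isometry of the weighted spaces, so the weighted kernels should satisfy
\[
  K_{D_1,d}(z) = \lvert \det J_F(z) \rvert^{2(d+1)} \, K_{D_2,d}(F(z)).
\]
```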
The corresponding\nweighted kernel $K_{D, d}$ transforms appropriately under biholomorphisms and\nhence produces an invariant K\\"{a}hler metric on $D$. Thus, there is a\nhierarchy of such metrics starting with the classical Bergman metric that\ncorresponds to the case $d=0$. This note is an attempt to study this class of\nmetrics in much the same way as the Bergman metric has been studied, with a view\ntowards identifying properties that are common to this family. When $D$ is strongly\npseudoconvex, the scaling principle is used to obtain the boundary asymptotics\nof these metrics and several invariants associated to them. It turns out that\nall these metrics are complete on strongly pseudoconvex domains.\n"} {"abstract": " We present evolutionary models for solar-like stars with an improved\ntreatment of convection that results in a more accurate estimate of the radius\nand effective temperature. This is achieved by improving the calibration of the\nmixing-length parameter, which sets the length scale in the 1D convection model\nimplemented in the stellar evolution code. Our calibration relies on the\nresults of 2D and 3D radiation hydrodynamics simulations of convection to\nspecify the value of the adiabatic specific entropy at the bottom of the\nconvective envelope in stars as a function of their effective temperature,\nsurface gravity and metallicity. For the first time, this calibration is fully\nintegrated within the flow of a stellar evolution code, with the mixing-length\nparameter being continuously updated at run-time. This approach replaces the\nmore common, but questionable, procedure of calibrating the length scale\nparameter on the Sun, and then applying the solar-calibrated value in modeling\nother stars, regardless of their mass, composition and evolutionary status. The\ninternal consistency of our current implementation makes it suitable for\napplication to evolved stars, in particular to red giants. We show that the\nentropy calibrated models yield a revised position of the red giant branch that\nis in better agreement with observational constraints than that of standard\nmodels.\n"} {"abstract": " In this chapter we describe the history and evolution of the iCub humanoid\nplatform. We start by describing the first version as it was designed during\nthe RobotCub EU project and illustrate how it evolved to become the platform\nthat is adopted by more than 30 laboratories worldwide. We complete the\nchapter by illustrating some of the research activities that are currently\ncarried out on the iCub robot, i.e. visual perception, event driven sensing and\ndynamic control. We conclude the chapter with a discussion of the lessons we\nlearned and a preview of the upcoming next release of the robot, iCub 3.0.\n"} {"abstract": " Unfitted finite element methods have emerged as a popular alternative to\nclassical finite element methods for the solution of partial differential\nequations and allow modeling arbitrary geometries without the need for a\nboundary-conforming mesh. On the other hand, the efficient solution of the\nresultant system is a challenging task because of the numerical\nill-conditioning that typically arises from the formulation of such methods.\nWe use an adaptive geometric multigrid solver for the solution of the mixed\nfinite cell formulation of saddle-point problems and investigate its\nconvergence in the context of the Stokes and Navier-Stokes equations. 
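For readers who want the multigrid mechanics spelled out: below is a generic V-cycle for the 1D Poisson problem with weighted Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation, assuming numpy. It is a minimal sketch and contains none of the paper's specifics (mixed finite cell discretization, cut cells, saddle-point smoothers).

```python
# Generic geometric multigrid V-cycle for -u'' = f on [0, 1], zero Dirichlet BCs.
import numpy as np

def smooth(u, f, h, sweeps, omega=2/3):        # weighted Jacobi
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (h * h * f[1:-1] + u[:-2] + u[2:] - 2 * u[1:-1])
    return u

def v_cycle(u, f, h):
    n = len(u) - 1                             # n must be a power of two
    if n <= 2:
        return smooth(u, f, h, sweeps=50)      # coarsest level: "solve"
    u = smooth(u, f, h, sweeps=3)              # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
    rc = np.zeros(n // 2 + 1)                  # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h) # coarse-grid correction
    e = np.zeros_like(u)                       # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h, sweeps=3)       # post-smoothing

n = 2 ** 7
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)             # exact solution: sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))  # ~ discretization level
```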
We\npresent two smoothers for the treatment of cut cells in the finite cell method\nand analyze their effectiveness for the model problems using a numerical\nbenchmark. Results indicate that the presented multigrid method is capable of\nsolving the model problems independently of the problem size and is robust with\nrespect to the depth of the grid hierarchy.\n"} {"abstract": " For finite samples with binary outcomes, penalized logistic regression, such as\nridge logistic regression (RR), has the potential of achieving smaller mean\nsquared errors (MSE) of coefficients and predictions than maximum likelihood\nestimation. There is evidence, however, that RR is sensitive to small or sparse\ndata situations, yielding poor performance in individual datasets. In this\npaper, we elaborate this issue further by performing a comprehensive simulation\nstudy, investigating the performance of RR in comparison to Firth's correction\nthat has been shown to perform well in low-dimensional settings. Performance of\nRR strongly depends on the choice of the complexity parameter, which is usually tuned\nby minimizing some measure of the out-of-sample prediction error or information\ncriterion. Alternatively, it may be determined according to prior assumptions\nabout true effects. As shown in our simulation and illustrated by a data\nexample, values optimized in small or sparse datasets are negatively correlated\nwith optimal values and suffer from substantial variability which translates\ninto large MSE of coefficients and large variability of calibration slopes. In\ncontrast, if the degree of shrinkage is pre-specified, accurate coefficients\nand predictions can be obtained even in non-ideal settings such as encountered\nin the context of rare outcomes or sparse predictors.\n"} {"abstract": " In this paper, for a given compact 3-manifold with an initial Riemannian\nmetric and a symmetric tensor, we establish the short-time existence and\nuniqueness theorem for an extension of the cross curvature flow. We give an\nexample of this flow on manifolds.\n"} {"abstract": " We describe Substitutional Neural Image Compression (SNIC), a general\napproach for enhancing any neural image compression model, that requires no\ndata or additional tuning of the trained model. It boosts compression\nperformance toward a flexible distortion metric and enables bit-rate control\nusing a single model instance. The key idea is to replace the image to be\ncompressed with a substitutional one that outperforms the original one in a\ndesired way. Finding such a substitute is inherently difficult for conventional\ncodecs, yet surprisingly favorable for neural compression models thanks to\ntheir fully differentiable structures. With gradients of a particular loss\nbackpropagated to the input, a desired substitute can be efficiently crafted\niteratively. We demonstrate the effectiveness of SNIC, when combined with\nvarious neural compression models and target metrics, in improving compression\nquality and performing bit-rate control measured by rate-distortion curves.\nEmpirical results of control precision and generation speed are also discussed.\n"} {"abstract": " When two solids at different temperatures are separated by a vacuum gap, they\nrelax toward their equilibrium state by exchanging heat either by radiation,\nphonon or electron tunneling, depending on their separation distance and on the\nnature of materials. The interplay between this exchange of energy and its\nspreading through each solid entirely drives the relaxation dynamics. 
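Returning to the ridge-regression abstract above, the tuned-versus-pre-specified contrast is easy to illustrate synthetically, assuming scikit-learn; the dataset, seed, and penalty values below are invented, and in scikit-learn's parameterization a smaller C means stronger shrinkage.

```python
# Synthetic contrast between CV-tuned and pre-specified ridge penalties in
# logistic regression on a small sample with a rare outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

rng = np.random.default_rng(1)
n, p = 80, 5                                   # small sample, rare outcome
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5, 0.0, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 2.0))))

tuned = LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000).fit(X, y)
fixed = LogisticRegression(C=0.1, penalty="l2", max_iter=5000).fit(X, y)

print("tuned C:", tuned.C_[0], "coefficients:", tuned.coef_.round(2))
print("fixed C=0.1 coefficients:", fixed.coef_.round(2))
```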
Here we\nhighlight a significant slowing down of this process in the extreme near-field\nregime at distances where the heat flux exchanged between the two solids is\ncomparable or even dominates over the flux carried by conduction inside each\nsolid. This mechanism, leading to a strong effective increase of the system\nthermal inertia, should play an important role in the temporal evolution of\nthermal state of interacting solids systems at nanometric and subnanometric\nscales.\n"} {"abstract": " We investigate the modification of gravitational fields generated by\ntopological defects on a generalized Duffin-Kemmer-Petiau (DKP) oscillator for\na spin-0 particle in a spinning cosmic string background. The generalized DKP\noscillator equation in the spinning cosmic string background is established, and\nthe impact of the Cornell potential on the generalized DKP oscillator is\npresented. We present the influence of the space-time and potential parameters on\nthe energy levels.\n"} {"abstract": " Two non-intrusive uncertainty propagation approaches are proposed for the\nperformance analysis of engineering systems described by expensive-to-evaluate\ndeterministic computer models with parameters defined as interval variables.\nThese approaches employ a machine learning based optimization strategy, the\nso-called Bayesian optimization, for evaluating the upper and lower bounds of a\ngeneric response variable over the set of possible responses obtained when each\ninterval variable varies independently over its range. The lack of knowledge\ncaused by not evaluating the response function for all the possible\ncombinations of the interval variables is accounted for by developing a\nprobabilistic description of the response variable itself by using a Gaussian\nProcess regression model. An iterative procedure is developed for selecting a\nsmall number of simulations to be evaluated for updating this statistical model\nby using well-established acquisition functions and to assess the response\nbounds. In both approaches, an initial training dataset is defined. While one\napproach builds iteratively two distinct training datasets for evaluating\nseparately the upper and lower bounds of the response variable, the other\nbuilds iteratively a single training dataset. Consequently, the two approaches\nwill produce different bound estimates at each iteration. The upper and lower\nbound responses are expressed as point estimates obtained from the mean\nfunction of the posterior distribution. Moreover, a confidence interval on each\nestimate is provided for effectively communicating to engineers when these\nestimates are obtained for a combination of the interval variables for which no\ndeterministic simulation has been run. Finally, two metrics are proposed to\ndefine conditions for assessing if the predicted bound estimates can be\nconsidered satisfactory.\n"} {"abstract": " We report constraints on light dark matter through its interactions with\nshell electrons in the PandaX-II liquid xenon detector with a total 46.9\ntonne$\\cdot$day exposure. To effectively search for these very low energy\nelectron recoils, ionization-only signals are selected from the data. 1821\ncandidates are identified within the ionization signal range between 50 and 75\nphotoelectrons, corresponding to a mean electronic recoil energy from 0.08 to\n0.15 keV. The 90% C.L. exclusion limit on the scattering cross section between\nthe dark matter and electron is calculated based on Poisson statistics. 
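The Poisson construction behind such limits can be sketched in a few lines, assuming scipy and negligible background; the real analysis also folds in efficiencies, background modeling, and the detector response.

```python
# Textbook frequentist Poisson upper limit on a signal mean.
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, cl=0.90):
    # smallest signal mean s with P(N <= n_obs | s) = 1 - cl
    return brentq(lambda s: poisson.cdf(n_obs, s) - (1 - cl), 1e-9, 1e6)

print(round(upper_limit(0), 2))   # 2.3  : zero observed events
print(round(upper_limit(5), 2))   # 9.27 : five observed events
```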
Under\nthe assumption of point interaction, we provide the world's most stringent\nlimit within the dark matter mass range from 15 to 30 $\\rm MeV/c^2$, with the\ncorresponding cross section from $2.5\\times10^{-37}$ to $3.1\\times10^{-38}$\ncm$^2$.\n"} {"abstract": " As of 2020, the international workshop on Procedural Content Generation\nenters its second decade. The annual workshop, hosted by the international\nconference on the Foundations of Digital Games, has collected a corpus of 95\npapers published in its first 10 years. This paper provides an overview of the\nworkshop's activities and surveys the prevalent research topics emerging over\nthe years.\n"} {"abstract": " Ma-Ma-Yeh made a beautiful observation that a transformation of the grammar\nof Dumont instantly leads to the $\\gamma$-positivity of the Eulerian\npolynomials. We notice that the transformed grammar bears a striking\nresemblance to the grammar for 0-1-2 increasing trees also due to Dumont. The\nappearance of the factor of two fits perfectly in a grammatical labeling of\n0-1-2 increasing plane trees. Furthermore, the grammatical calculus is\ninstrumental to the computation of the generating functions. This approach can\nbe adapted to study the $e$-positivity of the trivariate second-order Eulerian\npolynomials first introduced by Dumont in the contexts of ternary trees and\nStirling permutations, and independently defined by Janson, in connection with\nthe joint distribution of the numbers of ascents, descents and plateaux over\nStirling permutations.\n"} {"abstract": " High-Performance Big Data Analytics (HPDA) applications are characterized by\nhuge volumes of distributed and heterogeneous data that require efficient\ncomputation for knowledge extraction and decision making. Designers are moving\ntowards a tight integration of computing systems combining HPC, Cloud, and IoT\nsolutions with artificial intelligence (AI). Matching the application and data\nrequirements with the characteristics of the underlying hardware is a key\nelement to improve the predictions thanks to high performance and better use of\nresources.\n We present EVEREST, a novel H2020 project started on October 1st, 2020 that\naims at developing a holistic environment for the co-design of HPDA\napplications on heterogeneous, distributed, and secure platforms. EVEREST\nfocuses on programmability issues through a data-driven design approach, the\nuse of hardware-accelerated AI, and an efficient runtime monitoring with\nvirtualization support. In the different stages, EVEREST combines\nstate-of-the-art programming models, emerging communication standards, and\nnovel domain-specific extensions. We describe the EVEREST approach and the use\ncases that drive our research.\n"} {"abstract": " This article summarises the current status of classical communication\nnetworks and identifies some critical open research challenges that can only be\nsolved by leveraging quantum technologies. By now, the main goal of quantum\ncommunication networks has been security. However, quantum networks can do more\nthan just exchange secure keys or serve the needs of quantum computers. In\nfact, the scientific community is still investigating on the possible use\ncases/benefits that quantum communication networks can bring. 
Thus, this\narticle aims at pointing out and clearly describing how quantum communication\nnetworks can enhance in-network distributed computing and reduce the overall\nend-to-end latency, beyond the intrinsic limits of classical technologies.\nFurthermore, we also explain how entanglement can reduce the communication\ncomplexity (overhead) that future classical virtualised networks will\nexperience.\n"} {"abstract": " We present a new multiscale method to study the N-Methyl-D-Aspartate (NMDA)\nneuroreceptor starting from the reconstruction of its crystallographic\nstructure. Thanks to the combination of homology modelling, Molecular Dynamics\nand Lattice Boltzmann simulations, we analyse the allosteric transition of NMDA\nupon ligand binding and compute the receptor response to ionic passage across\nthe membrane.\n"} {"abstract": " This paper presents an analytical model to quantify noise in a bolometer\nreadout circuit. A frequency domain analysis of the noise model is presented\nwhich includes the effect of noise from the bias resistor, sensor resistor,\nvoltage and current noise of amplifier and cable capacitance. The analytical\nmodel is initially verified by using several standard SMD resistors as a sensor\nin the range of 0.1 - 100 Mohm and measuring the RMS noise of the bolometer\nreadout circuit. Noise measurements on several indigenously developed neutron\ntransmutation doped Ge temperature sensors have been carried out over a\ntemperature range of 20 - 70 mK and the measured data is compared with the\nnoise calculated using the analytical model. The effect of different sensor\nresistances on the noise of the bolometer readout circuit, in line with the\nanalytical model and measured data, is presented in this paper.\n"} {"abstract": " This paper presents an AI system applied to waste location and robotic grasping.\nThe experimental setup is based on a parameter study to train a deep-learning\nnetwork based on Mask-RCNN to perform waste location in indoor and outdoor\nenvironments, using five different classes and generating a new waste dataset.\nInitially, the AI system obtains the RGBD data of the environment, followed by\nthe detection of objects using the neural network. Later, the 3D object shape\nis computed using the network result and the depth channel. Finally, the shape\nis used to compute grasping for a robot arm with a two-finger gripper. The\nobjective is to classify the waste in groups to improve a recycling strategy.\n"} {"abstract": " The unscented Kalman inversion (UKI) method presented in [1] is a general\nderivative-free approach for the inverse problem. UKI is particularly suitable\nfor inverse problems where the forward model is given as a black box and may\nnot be differentiable. The regularization strategies, convergence property, and\nspeed-up strategies [1,2] of the UKI are thoroughly studied, and the method is\ncapable of handling noisy observation data and solving chaotic inverse\nproblems. In this paper, we study the uncertainty quantification capability of\nthe UKI. We propose a modified UKI, which allows one to approximate well the mean\nand covariance of the posterior distribution for well-posed inverse problems\nwith large observation data. Theoretical guarantees for both linear and\nnonlinear inverse problems are presented. Numerical results, including learning\nof permeability parameters in subsurface flow and of the Navier-Stokes initial\ncondition from solution data at positive times, are presented. 
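The flavor of such derivative-free Kalman-type iterations can be conveyed with a basic ensemble Kalman inversion loop, a related method rather than the authors' UKI; the toy forward map, prior, and all settings below are invented for illustration, assuming numpy.

```python
# Basic ensemble Kalman inversion (NOT the authors' UKI): recover theta from
# y = G(theta) + noise using only black-box evaluations of G.
import numpy as np

rng = np.random.default_rng(0)

def G(theta):                       # stand-in black-box forward model
    return np.array([theta[0] + theta[1] ** 3, theta[0] * theta[1]])

theta_true = np.array([1.0, 2.0])
gamma = 0.01 * np.eye(2)            # observation noise covariance
y = G(theta_true) + rng.multivariate_normal(np.zeros(2), gamma)

ens = rng.normal(1.0, 1.0, size=(200, 2))      # prior ensemble
for _ in range(20):
    preds = np.array([G(t) for t in ens])
    dt = ens - ens.mean(axis=0)
    dp = preds - preds.mean(axis=0)
    c_tp = dt.T @ dp / (len(ens) - 1)          # cross-covariance
    c_pp = dp.T @ dp / (len(ens) - 1)          # prediction covariance
    gain = c_tp @ np.linalg.inv(c_pp + gamma)  # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(2), gamma, size=len(ens))
    ens = ens + (y_pert - preds) @ gain.T

print("ensemble mean:", ens.mean(axis=0))  # typically close to [1, 2] here
```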
The results\nobtained by the UKI require only $O(10)$ iterations, and match well with the\nexpected results obtained by the Markov Chain Monte Carlo method.\n"} {"abstract": " Following G.~Gr\\\"atzer and E.~Knapp, 2009, a planar semimodular lattice $L$\nis \\emph{rectangular}, if~the left boundary chain has exactly one\ndoubly-irreducible element, $c_l$, and the right boundary chain has exactly one\ndoubly-irreducible element, $c_r$, and these elements are complementary.\n The Cz\\'edli-Schmidt Sequences, introduced in 2012, construct rectangular\nlattices. We use them to prove some structure theorems. In particular, we prove\nthat for a slim (no $\\mathsf{M}_3$ sublattice) rectangular lattice~$L$, the\ncongruence lattice $\\Con L$ has exactly $\\length[c_l,1] + \\length[c_r,1]$ dual\natoms and a dual atom in $\\Con L$ is a congruence with exactly two classes. We\nalso describe the prime ideals in a slim rectangular lattice.\n"} {"abstract": " We explore the tail of various waiting time datasets of processes that follow\na nonstationary Poisson distribution with a sinusoidal driver. Analytically, we\nfind that the distribution of large waiting times of such processes can be\ndescribed using a power law slope of -2.5. We show that this result applies\nmore broadly to any nonstationary Poisson process driven periodically. Examples\nof such processes include solar flares, coronal mass ejections, geomagnetic\nstorms, and substorms. We also discuss how the power law specifically relates\nto the behavior of the driver near its minima.\n"} {"abstract": " The present work looks at semiautomatic rings with automatic addition and\ncomparisons which are dense subrings of the real numbers and asks how these can\nbe used to represent geometric objects such that certain operations and\ntransformations are automatic. The underlying ring has always to be a countable\ndense subring of the real numbers and additions and comparisons and\nmultiplications with constants need to be automatic. It is shown that the ring\ncan be selected such that equilateral triangles can be represented and\nrotations by 30 degrees are possible, while the standard representation of the\nb-adic rationals does not allow this.\n"} {"abstract": " To further improve the learning efficiency and performance of reinforcement\nlearning (RL), in this paper we propose a novel uncertainty-aware model-based\nRL (UA-MBRL) framework, and then implement and validate it in autonomous\ndriving under various task scenarios. First, an action-conditioned ensemble\nmodel with the ability of uncertainty assessment is established as the virtual\nenvironment model. Then, a novel uncertainty-aware model-based RL framework is\ndeveloped based on the adaptive truncation approach, providing virtual\ninteractions between the agent and environment model, and improving RL's\ntraining efficiency and performance. The developed algorithms are then\nimplemented in end-to-end autonomous vehicle control tasks, validated and\ncompared with state-of-the-art methods under various driving scenarios. The\nvalidation results suggest that the proposed UA-MBRL method surpasses the\nexisting model-based and model-free RL approaches, in terms of learning\nefficiency and achieved performance. The results also demonstrate the\nadaptiveness and robustness of the proposed method under various autonomous\ndriving scenarios.\n"} {"abstract": " Electronic health records represent a holistic overview of patients'\ntrajectories. 
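The waiting-time claim above is straightforward to probe numerically: simulate a sinusoidally driven Poisson process by thinning and fit the tail of the waiting-time histogram. A sketch assuming numpy, with arbitrary rates and binning; for near-total modulation the fitted slope should come out close to the analytic -2.5.

```python
# Sinusoidally driven Poisson process via thinning, plus a crude tail fit.
import numpy as np

rng = np.random.default_rng(0)
lam0, a, omega = 1.0, 0.99, 2 * np.pi / 100.0   # rate lam0 * (1 + a*sin(omega*t))
lam_max = lam0 * (1 + a)

t_cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2_000_000))
keep = rng.random(t_cand.size) < (1 + a * np.sin(omega * t_cand)) / (1 + a)
waits = np.diff(t_cand[keep])

hist, edges = np.histogram(waits, bins=np.logspace(0.5, 1.8, 20), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print("fitted tail slope:", round(slope, 2))     # close to -2.5 for a -> 1
```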
Their increasing availability has fueled new hopes to leverage\nthem and develop accurate risk prediction models for a wide range of diseases.\nGiven the complex interrelationships of medical records and patient outcomes,\ndeep learning models have shown clear merits in achieving this goal. However, a\nkey limitation of these models remains their capacity in processing long\nsequences. Capturing the whole history of medical encounters is expected to\nlead to more accurate predictions, but the inclusion of records collected for\ndecades and from multiple resources can inevitably exceed the receptive field\nof the existing deep learning architectures. This can result in missing\ncrucial, long-term dependencies. To address this gap, we present Hi-BEHRT, a\nhierarchical Transformer-based model that can significantly expand the\nreceptive field of Transformers and extract associations from much longer\nsequences. Using multimodal large-scale linked longitudinal electronic health\nrecords, the Hi-BEHRT exceeds the state-of-the-art BEHRT by 1% to 5% for area\nunder the receiver operating characteristic (AUROC) curve and 3% to 6% for area\nunder the precision recall (AUPRC) curve on average, and 3% to 6% (AUROC) and\n3% to 11% (AUPRC) for patients with long medical history for 5-year heart\nfailure, diabetes, chronic kidney disease, and stroke risk prediction.\nAdditionally, because pretraining for hierarchical Transformers is not\nwell established, we provide an effective end-to-end contrastive pre-training\nstrategy for Hi-BEHRT using EHR, improving its transferability on predicting\nclinical events with a relatively small training dataset.\n"} {"abstract": " Microscopic organisms, such as bacteria, have the ability of colonizing\nsurfaces and developing biofilms that can cause diseases and infections.\nMost bacteria secrete a significant amount of extracellular polymer substances\nthat are relevant for biofilm stabilization and growth. In this work, we apply\ncomputer simulation and perform experiments to investigate the impact of\npolymer size and concentration on early biofilm formation and growth. We\nobserve that bacterial cells formed loose, disorganized clusters whenever the\neffect of diffusion exceeded that of cell growth and division. Addition of\nmodel polymeric molecules induced particle self-assembly and aggregation to\nform compact clusters in a polymer size- and concentration-dependent fashion.\nWe also find that a large polymer size or concentration leads to the development\nof intriguing stripe-like and dendritic colonies. The results obtained by\nBrownian dynamic simulation closely resemble the morphologies that we\nexperimentally observe in biofilms of a Pseudomonas Putida strain with added\npolymers. The analysis of the Brownian dynamic simulation results suggests the\nexistence of a threshold polymer concentration that distinguishes between two\ngrowth regimes. Below this threshold, the main force driving polymer-induced\ncompaction is hindrance of bacterial cell diffusion, while collective effects\nplay a minor role. Above this threshold, especially for large polymers,\npolymer-induced compaction is a collective phenomenon driven by depletion\nforces. Well above this concentration threshold, severely limited diffusion\ndrives the formation of filaments and dendritic colonies.\n"} {"abstract": " In this paper, the interfacial motion between two immiscible viscous fluids\nin the confined geometry of a Hele-Shaw cell is studied. 
We consider the\ninfluence of a thin wetting film trailing behind the displaced fluid, which\ndynamically affects the pressure drop at the fluid-fluid interface by\nintroducing a nonlinear dependence on the interfacial velocity. In this\nframework, two cases of interest are analyzed: The injection-driven flow\n(expanding evolution), and the lifting plate flow (shrinking evolution). In\nparticular, we investigate the possibility of controlling the development of\nfingering instabilities in these two different Hele-Shaw setups when wetting\neffects are taken into account. By employing linear stability theory, we find\nthe proper time-dependent injection rate $Q(t)$ and the time-dependent lifting\nspeed ${\\dot b}(t)$ required to control the number of emerging fingers during\nthe expanding and shrinking evolution, respectively. Our results indicate that\nthe consideration of wetting leads to an increase in the magnitude of $Q(t)$\n[and ${\\dot b}(t)$] in comparison to the non-wetting strategy. Moreover, a\nspectrally accurate boundary integral approach is utilized to examine the\nvalidity and effectiveness of the controlling protocols at the fully nonlinear\nregime of the dynamics and confirms that the proposed injection and lifting\nschemes are feasible strategies to prescribe the morphologies of the resulting\npatterns in the presence of the wetting film.\n"} {"abstract": " We revisit the theoretical properties of Hamiltonian stochastic differential\nequations (SDEs) for Bayesian posterior sampling, and we study the two types of\nerrors that arise from numerical SDE simulation: the discretization error and\nthe error due to noisy gradient estimates in the context of data subsampling.\nOur main result is a novel analysis of the effect of mini-batches through the\nlens of differential operator splitting, revising previous literature results.\nThe stochastic component of a Hamiltonian SDE is decoupled from the gradient\nnoise, for which we make no normality assumptions. This leads to the\nidentification of a convergence bottleneck: when considering mini-batches, the\nbest achievable error rate is $\\mathcal{O}(\\eta^2)$, with $\\eta$ being the\nintegrator step size. Our theoretical results are supported by an empirical\nstudy on a variety of regression and classification tasks for Bayesian neural\nnetworks.\n"} {"abstract": " We present the results that are necessary in the ongoing lattice calculations\nof the gluon parton distribution functions (PDFs) within the pseudo-PDF\napproach. We identify the two-gluon correlator functions that contain the\ninvariant amplitude determining the gluon PDF in the light-cone $z^2 \\to 0$\nlimit, and perform one-loop calculations in the coordinate representation in an\nexplicitly gauge-invariant form. Ultraviolet (UV) terms, which contain $\\ln\n(-z^2)$-dependence, cancel in the reduced Ioffe-time distribution (ITD), and we\nobtain the matching relation between the reduced ITD and the light-cone ITD.\nUsing a kernel form, we get a direct connection between lattice data for the\nreduced ITD and the normalized gluon PDF.\n"} {"abstract": " Considerable research effort has been directed towards algorithmic fairness, but\nreal-world adoption of bias reduction techniques is still scarce. 
Existing\nmethods are either metric- or model-specific, require access to sensitive\nattributes at inference time, or carry high development or deployment costs.\nThis work explores the unfairness that emerges when optimizing ML models solely\nfor predictive performance, and how to mitigate it with a simple and easily\ndeployed intervention: fairness-aware hyperparameter optimization (HO). We\npropose and evaluate fairness-aware variants of three popular HO algorithms:\nFair Random Search, Fair TPE, and Fairband. We validate our approach on a\nreal-world bank account opening fraud case-study, as well as on three datasets\nfrom the fairness literature. Results show that, without extra training cost,\nit is feasible to find models with 111% mean fairness increase and just 6%\ndecrease in performance when compared with fairness-blind HO.\n"} {"abstract": " LIDAR sensors are usually used to provide autonomous vehicles with 3D\nrepresentations of their environment. In ideal conditions, geometrical models\ncould detect the road in LIDAR scans, at the cost of a manual tuning of\nnumerical constraints, and a lack of flexibility. We instead propose an\nevidential pipeline, to accumulate road detection results obtained from neural\nnetworks. First, we introduce RoadSeg, a new convolutional architecture that is\noptimized for road detection in LIDAR scans. RoadSeg is used to classify\nindividual LIDAR points as either belonging to the road, or not. Yet, such\npoint-level classification results need to be converted into a dense\nrepresentation, that can be used by an autonomous vehicle. We thus secondly\npresent an evidential road mapping algorithm, that fuses consecutive road\ndetection results. We benefitted from a reinterpretation of logistic\nclassifiers, which can be seen as generating a collection of simple evidential\nmass functions. An evidential grid map that depicts the road can then be\nobtained, by projecting the classification results from RoadSeg into grid\ncells, and by handling moving objects via conflict analysis. The system was\ntrained and evaluated on real-life data. A Python implementation maintains a 10\nHz framerate. Since road labels were needed for training, a soft labelling\nprocedure, relying on lane-level HD maps, was used to generate coarse training and\nvalidation sets. An additional test set was manually labelled for evaluation\npurposes. So as to reach satisfactory results, the system fuses road detection\nresults obtained from three variants of RoadSeg, processing different LIDAR\nfeatures.\n"} {"abstract": " Spiral structure is ubiquitous in the Universe, and the pitch angle of arms\nin spiral galaxies provides an important observable in efforts to discriminate\nbetween different mechanisms of spiral arm formation and evolution. In this\npaper, we present a hierarchical Bayesian approach to galaxy pitch angle\ndetermination, using spiral arm data obtained through the Galaxy Builder\ncitizen science project. We present a new approach to deal with the large\nvariations in pitch angle between different arms in a single galaxy, which\nobtains full posterior distributions on parameters. We make use of our pitch\nangles to examine previously reported links between bulge and bar strength and\npitch angle, finding no correlation in our data (with a caveat that we use\nobservational proxies for both bulge size and bar strength which differ from\nother work). 
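Going back to the fairness-aware hyperparameter optimization work above: its simplest variant, Fair Random Search, reduces to sampling configurations and scoring them on both axes. A generic sketch using only the standard library; the toy objective, search space, and scalarized trade-off are all invented stand-ins.

```python
# Generic fair random search: sample configurations, score on performance and
# fairness, keep the best scalarized trade-off.
import random

random.seed(0)

def evaluate(params):
    # stand-in for training a model and measuring both metrics in [0, 1]
    perf = 0.9 - 3.0 * abs(params["lr"] - 0.01) + random.gauss(0, 0.01)
    fair = min(1.0, 0.7 + 5.0 * params["l2"] + random.gauss(0, 0.01))
    return perf, fair

def fair_random_search(n_trials, alpha=0.5):
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"lr": 10 ** random.uniform(-4, -1),
                  "l2": random.uniform(0.0, 0.05)}
        perf, fair = evaluate(params)
        score = alpha * perf + (1 - alpha) * fair   # scalarized trade-off
        if score > best_score:
            best, best_score = (params, perf, fair), score
    return best

print(fair_random_search(50))
```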
We test a recent model for spiral arm winding, which predicts\nuniformity of the cotangent of pitch angle between some unknown upper and lower\nlimits, finding our observations are consistent with this model of transient\nand recurrent spiral pitch angle as long as the pitch angle at which most\nwinding spirals dissipate or disappear is larger than 10 degrees.\n"} {"abstract": " Source code spends most of its time in a broken or incomplete state during\nsoftware development. This presents a challenge to machine learning for code,\nsince high-performing models typically rely on graph structured representations\nof programs derived from traditional program analyses. Such analyses may be\nundefined for broken or incomplete code. We extend the notion of program graphs\nto work-in-progress code by learning to predict edge relations between tokens,\ntraining on well-formed code before transferring to work-in-progress code. We\nconsider the tasks of code completion and localizing and repairing variable\nmisuse in a work-in-progress scenario. We demonstrate that training\nrelation-aware models with fine-tuned edges consistently leads to improved\nperformance on both tasks.\n"} {"abstract": " The objective of Federated Learning (FL) is to perform statistical inference\nfor data which are decentralised and stored locally on networked clients. FL\nraises many constraints which include privacy and data ownership, communication\noverhead, statistical heterogeneity, and partial client participation. In this\npaper, we address these problems in the framework of the Bayesian paradigm. To\nthis end, we propose a novel federated Markov Chain Monte Carlo algorithm,\nreferred to as Quantised Langevin Stochastic Dynamics, which may be seen as an\nextension to the FL setting of Stochastic Gradient Langevin Dynamics, which\nhandles the communication bottleneck using gradient compression. To improve\nperformance, we then introduce variance reduction techniques, which lead to two\nimproved versions coined \\texttt{QLSD}$^{\\star}$ and \\texttt{QLSD}$^{++}$. We\ngive both non-asymptotic and asymptotic convergence guarantees for the proposed\nalgorithms. We illustrate their performances using various Bayesian Federated\nLearning benchmarks.\n"} {"abstract": " The complex-step derivative approximation is a numerical differentiation\ntechnique that can achieve analytical accuracy, to machine precision, with a\nsingle function evaluation. In this letter, the complex-step derivative\napproximation is extended to be compatible with elements of matrix Lie groups.\nAs with the standard complex-step derivative, the method is still able to\nachieve analytical accuracy, up to machine precision, with a single function\nevaluation. Compared to a central-difference scheme, the proposed complex-step\napproach is shown to have superior accuracy. The approach is applied to two\ndifferent pose estimation problems, and is able to recover the same results as\nan analytical method when available.\n"} {"abstract": " The problem of simultaneous rigid alignment of multiple unordered point sets,\nwhich is unbiased towards any of the inputs, has recently attracted increasing\ninterest, and several reliable methods have been newly proposed. While being\nremarkably robust towards noise and clustered outliers, current approaches\nrequire sophisticated initialisation schemes and do not scale well to large\npoint sets. 
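The scalar version of the complex-step rule just described is compact enough to show directly: $f'(x) \approx \mathrm{Im}\, f(x + ih)/h$, with no subtractive cancellation. A sketch assuming numpy; the matrix Lie group extension in the letter requires more machinery.

```python
# Scalar complex-step derivative versus central differences.
import numpy as np

def complex_step(f, x, h=1e-200):
    return np.imag(f(x + 1j * h)) / h        # no subtractive cancellation

f = lambda x: np.exp(x) * np.sin(x)
x = 0.7
exact = np.exp(x) * (np.sin(x) + np.cos(x))  # analytic derivative
central = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6

print("complex-step error:", abs(complex_step(f, x) - exact))  # ~ machine eps
print("central-diff error:", abs(central - exact))             # ~ 1e-10
```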
This paper proposes a new resilient technique for simultaneous\nregistration of multiple point sets by interpreting the latter as particle\nswarms rigidly moving in the mutually induced force fields. Thanks to the\nimproved simulation with altered physical laws and the acceleration of globally\nmultiply-linked point interactions with a 2^D-tree (D is the space\ndimensionality), our Multi-Body Gravitational Approach (MBGA) is robust to\nnoise and missing data while supporting more massive point sets than previous\nmethods (with 10^5 points and more). In various experimental settings, MBGA is\nshown to outperform several baseline point set alignment approaches in terms of\naccuracy and runtime. We make our source code available for the community to\nfacilitate the reproducibility of the results.\n"} {"abstract": " It is nontrivial to store rapidly growing big data nowadays, which demands\nhigh-performance lossless compression techniques. Likelihood-based generative\nmodels have witnessed their success on lossless compression, where flow-based\nmodels are desirable in allowing exact data likelihood optimisation with\nbijective mappings. However, common continuous flows are in contradiction with\nthe discreteness of coding schemes, which requires either 1) imposing strict\nconstraints on flow models that degrade the performance or 2) coding numerous\nbijective mapping errors, which reduces the efficiency. In this paper, we\ninvestigate volume preserving flows for lossless compression and show that a\nbijective mapping without error is possible. We propose the Numerical Invertible\nVolume Preserving Flow (iVPF), which is derived from the general volume\npreserving flows. By introducing novel computation algorithms on flow models,\nan exact bijective mapping is achieved without any numerical error. We also\npropose a lossless compression algorithm based on iVPF. Experiments on various\ndatasets show that the algorithm based on iVPF achieves a state-of-the-art\ncompression ratio over lightweight compression algorithms.\n"} {"abstract": " This paper presents a model addressing welfare optimal policies of a\ndemand-responsive transportation service, where passengers cause external\ntravel time costs for other passengers due to route changes. Optimal pricing\nand trip production policies are modelled both on the aggregate level and on\nthe network level. The aggregate model is an extension of Jokinen (2016) with a\nflat pricing model, but the occupancy rate is now modelled as an endogenous\nvariable depending on demand and capacity levels. The network model makes it\npossible to describe differences between routes from the viewpoint of occupancy\nrate and efficient trip combining. Moreover, the model defines the optimal\ndifferentiated pricing for routes.\n"} {"abstract": " In this article, we introduce a notion of relative mean metric dimension with\npotential for a factor map $\pi: (X,d, T)\to (Y, S)$ between two topological\ndynamical systems. To link it with ergodic theory, we establish four\nvariational principles in terms of metric entropy of partitions, Shapira's\nentropy, Katok's entropy and Brin-Katok local entropy respectively. Some\nresults on local entropy with respect to a fixed open cover are obtained in the\nrelative case.
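A toy illustration of the volume-preserving property that the iVPF abstract above builds on: an additive coupling layer is a bijection with unit Jacobian determinant and an exact closed-form inverse. This sketch only conveys the property itself; the paper's numerically exact integer computations are not reproduced.

import numpy as np

def coupling_forward(x, shift_net):
    x1, x2 = np.split(x, 2)
    return np.concatenate([x1, x2 + shift_net(x1)])   # det(Jacobian) = 1

def coupling_inverse(y, shift_net):
    y1, y2 = np.split(y, 2)
    return np.concatenate([y1, y2 - shift_net(y1)])   # exact inverse

shift = np.tanh                                        # toy "network"
x = np.random.default_rng(3).standard_normal(8)
y = coupling_forward(x, shift)
print(np.allclose(coupling_inverse(y, shift), x))      # True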
We also answer an open question raised by Shi \cite{Shi}\npartially for a very well-partitionable compact metric space, and in general we\nobtain a variational inequality involving the box dimension of the space.\nCorresponding inner variational principles given an invariant measure of\n$(Y,S)$ are also investigated.\n"} {"abstract": " We introduce a quantum interferometric scheme that uses states that are sharp\nin frequency and delocalized in position. The states are frequency modes of a\nquantum field that is trapped at all times in a finite volume potential, such\nas a small box potential. This allows for significant miniaturization of\ninterferometric devices. Since the modes are in contact at all times, it is\npossible to estimate physical parameters of global multi-mode channels. As an\nexample, we introduce a three-mode scheme and calculate precision bounds in the\nestimation of parameters of two-mode Gaussian channels. This scheme can be\nimplemented in several systems, including superconducting circuits, cavity-QED\nand cold atoms. We consider a concrete implementation using the ground state\nand two phononic modes of a trapped Bose-Einstein condensate. We apply this to\nshow that frequency interferometry can improve the sensitivity of phononic\ngravitational wave detectors by several orders of magnitude, even in the case\nthat squeezing is much smaller than assumed previously and that the system\nsuffers from short phononic lifetimes. Other applications range from\nmagnetometry, gravimetry and gradiometry to dark matter/energy searches.\n"} {"abstract": " An analytic solution is presented for the nonlinear semiclassical dynamics of\nthe superradiant photonic condensate that arises in the Dicke model of\ntwo-level atoms dipolar coupled to the electromagnetic field in a microwave\ncavity. In the adiabatic limit with respect to the photon degree of freedom,\nthe system is approximately integrable and its evolution is expressed via\nJacobi elliptic functions of real time. Periodic trajectories of the\nsemiclassical coordinate of the photonic condensate either localise around two\ndegenerate minima of the condensate ground state energy or traverse between\nthem over the saddle point. An exact mapping of the semiclassical dynamics of\nthe photonic condensate onto the motion of the unstable Lagrange 'sleeping top'\nis found. An analytic expression is presented for the frequency dependence of\nthe transmission coefficient along a transmission line inductively coupled to\nthe resonant cavity with the superradiant condensate. Sharp transmission drops\nreflect the Fourier spectrum of the semiclassical motion of the photonic\ncondensate and of the 'sleeping top' nodding.\n"} {"abstract": " Molecular ferroelectrics have captured immense attention due to their\nsuperiority over inorganic oxide ferroelectrics, such as being environmentally\nfriendly, low-cost, flexible and foldable. However, the mechanisms of\nferroelectric switching and phase transition for molecular ferroelectrics are\nstill missing, leaving the development of novel molecular ferroelectrics less\nefficient. In this work, we have provided a methodology combining molecular\ndynamics (MD) simulation with a polarized force field, named polarized crystal\ncharge (PCC), and an enhanced sampling technique, replica-exchange molecular\ndynamics (REMD), to simulate such mechanisms. With this procedure, we have\ninvestigated a promising molecular ferroelectric material, the\n(R)/(S)-3-quinuclidinol crystal.
We have simulated the ferroelectric hysteresis\nloops of both enantiomers and obtained a spontaneous polarization of 7/8 \mu C\ncm^-2 and a corresponding coercive electric field of 15 kV cm^-1. We also find\na Curie temperature of 380/385 K for the ferro-/para-electric phase transition\nof both enantiomers. All of the simulated results are highly compatible with\nexperimental values. Besides that, we predict a novel Curie temperature of\nabout 600 K. This finding is further validated by principal component analysis\n(PCA). Our work would significantly promote the future exploration of\nmultifunctional molecular ferroelectrics for the next generation of intelligent\ndevices.\n"} {"abstract": " We report on new stability conditions for evolutionary dynamics in the\ncontext of population games. We adhere to the prevailing framework consisting\nof many agents, grouped into populations, that interact noncooperatively by\nselecting strategies with a favorable payoff. Each agent is repeatedly allowed\nto revise its strategy at a rate referred to as the revision rate. Previous\nstability results considered either that the payoff mechanism was a memoryless\npotential game, or allowed for dynamics (in the payoff mechanism) at the\nexpense of precluding any explicit dependence of the agents' revision rates on\ntheir current strategies. Allowing the dependence of revision rates on\nstrategies is relevant because the agents' strategies at any point in time are\ngenerally unequal. To allow for strategy-dependent revision rates and payoff\nmechanisms that are dynamic (or memoryless games that are not potential), we\nfocus on an evolutionary dynamics class obtained from a straightforward\nmodification of one that stems from the so-called impartial pairwise comparison\nstrategy revision protocol. Revision protocols consistent with the modified\nclass retain from those in the original one the advantage that the agents\noperate in a fully decentralized manner and with minimal information\nrequirements - they need to access only the payoff values (not the mechanism)\nof the available strategies. Our main results determine conditions under which\nsystem-theoretic passivity properties are assured, which we leverage for\nstability analysis.\n"} {"abstract": " Collective action demands that individuals efficiently coordinate how much,\nwhere, and when to cooperate. Laboratory experiments have extensively explored\nthe first part of this process, demonstrating that a variety of\nsocial-cognitive mechanisms influence how much individuals choose to invest in\ngroup efforts. However, experimental research has been unable to shed light on\nhow social-cognitive mechanisms contribute to the where and when of collective\naction. We leverage multi-agent deep reinforcement learning to model how a\nsocial-cognitive mechanism--specifically, the intrinsic motivation to achieve a\ngood reputation--steers group behavior toward specific spatial and temporal\nstrategies for collective action in a social dilemma. We also collect\nbehavioral data from groups of human participants challenged with the same\ndilemma.
The model accurately predicts spatial and temporal patterns of group\nbehavior: in this public goods dilemma, the intrinsic motivation for reputation\ncatalyzes the development of a non-territorial, turn-taking strategy to\ncoordinate collective action.\n"} {"abstract": " In this paper we study zero-noise limits of $\alpha$-stable noise perturbed\nODEs which are driven by an irregular vector field $A$ with asymptotics\n$A(x)\sim \overline{a}(\frac{x}{\left\vert x\right\vert })\left\vert x\right\vert^{\beta-1}x$ at zero, where $\overline{a}>0$ is a continuous\nfunction and $\beta \in (0,1)$. The results established in this article can be\nconsidered a generalization of those in the seminal works of Bafico \cite{Ba}\nand Bafico, Baldi \cite{BB} to the multi-dimensional case. Our approach for\nproving these results is inspired by techniques in \cite{PP_self_similar} and\nbased on the analysis of an SDE for $t\longrightarrow \infty$, which is\nobtained through a transformation of the perturbed ODE.\n"} {"abstract": " A Private Set Operation (PSO) protocol involves at least two parties with\ntheir private input sets. The goal of the protocol is for the parties to learn\nthe output of a set operation, e.g. set intersection, on their input sets,\nwithout revealing any information about the items that are not in the output\nset. Commonly, the outcome of the set operation is revealed to the parties and\nno-one else. However, in many application areas of PSO the result of the set\noperation should be learned by an external participant who does not have an\ninput set. We call this participant the decider. In this paper, we present new\nvariants of multi-party PSO, where there is a decider who gets the result. All\nparties except the decider have a private set. The other parties learn neither\nthis result nor anything else in this protocol. Moreover, we present a generic\nsolution to the problem of PSO.\n"} {"abstract": " One of the biggest challenges in multi-agent reinforcement learning is\ncoordination; a typical application scenario is traffic signal control.\nRecently, it has attracted a rising number of researchers and has become a hot\nresearch field with great practical significance. In this paper, we propose a\nnovel method called MetaVRS (Meta Variational Reward Shaping) for traffic\nsignal coordination control. By heuristically applying the intrinsic reward to\nthe environmental reward, MetaVRS can wisely capture the agent-to-agent\ninterplay. Besides, latent variables generated by a VAE are brought into the\npolicy for an automatic tradeoff between exploration and exploitation to\noptimize the policy. In addition, meta learning is used in the decoder for\nfaster adaptation and better approximation. Empirically, we demonstrate that\nMetaVRS substantially outperforms existing methods and shows superior\nadaptability, which predictably has a far-reaching significance for multi-agent\ntraffic signal coordination control.\n"} {"abstract": " We present the first measurement of the homogeneity index, $\mathcal{H}$, a\nfractal or Hausdorff dimension of the early Universe from the Planck CMB\ntemperature variations $\delta T$ in the sky. This characterization of the\nisotropy scale is model-free and purely geometrical, independent of the\namplitude of $\delta T$.
We find evidence of homogeneity ($\mathcal{H}=0$) for\nscales larger than $\theta_{\mathcal{H}} = 65.9 \pm 9.2 \deg$ on the CMB sky.\nThis finding is at odds with the $\Lambda$CDM prediction, which assumes a\nscale-invariant infinite universe. Such an anomaly is consistent with the\nwell-known low quadrupole amplitude in the angular $\delta T$ spectrum, but\nquantified in a direct and model-independent way. We estimate the significance\nof our finding for $\mathcal{H}=0$ using a principal component analysis from\nthe sampling variations of the observed sky. This analysis is validated with an\nindependent theoretical prediction of the covariance matrix based purely on\ndata. Assuming translation invariance (and flat geometry $k=0$) we can convert\nthe isotropy scale $\theta_\mathcal{H}$ into a (comoving) homogeneity scale of\n$\chi_\mathcal{H} \simeq 3.3 c/H_0$, which is very close to the trapped surface\ngenerated by the observed cosmological constant $\Lambda$.\n"} {"abstract": " XRISM is an X-ray astronomical mission by JAXA, NASA, ESA and other\ninternational participants that is planned for launch in 2022 (Japanese fiscal\nyear), to quickly restore high-resolution X-ray spectroscopy of astrophysical\nobjects. To enhance the scientific outputs of the mission, the Science\nOperations Team (SOT) is structured independently from the instrument teams and\nthe Mission Operations Team. The responsibilities of the SOT are divided into\nfour categories: 1) guest observer program and data distributions, 2)\ndistribution of analysis software and the calibration database, 3) guest\nobserver support activities, and 4) performance verification and optimization\nactivities. As the first step, lessons on the science operations learned from\npast Japanese X-ray missions are reviewed, and 15 kinds of lessons are\nidentified. Among them, a) the importance of early preparation of the\noperations from the ground stage, b) construction of an independent team for\nscience operations separate from the instrument development, and c) operations\nwith well-defined duties by appointed members are recognized as key lessons.\nThen, the team structure and the task division between the mission and science\noperations are defined; the tasks are shared among Japan, US, and Europe and\nare performed by three centers, the SOC, SDC, and ESAC, respectively. The SOC\nis designed to perform tasks close to the spacecraft operations, such as\nspacecraft planning, quick-look health checks, pre-pipeline processing, etc.,\nand the SDC covers tasks regarding data calibration processing, maintenance of\nanalysis tools, etc. The data-archive and user-support activities are covered\nboth by the SOC and SDC. Finally, the science-operations tasks and tools are\ndefined and prepared before launch.\n"} {"abstract": " We study an expanding two-fluid model of non-relativistic dark matter and\nradiation which are allowed to interact during a certain time span and to\nestablish an approximate thermal equilibrium. Such an interaction, which\ngenerates an effective bulk viscous pressure at the background level, is\nexpected to be relevant for times around the transition from radiation to\nmatter dominance.
We\nquantify the magnitude of this pressure for dark matter particle masses within\nthe range $1 {\rm eV} \lesssim m_{\chi} \lesssim 10 {\rm eV}$ around the\nmatter-radiation equality epoch (i.e., redshift $z_{\rm eq}\sim 3400$) and\ndemonstrate that the existence of a transient bulk viscosity has consequences\nwhich may be relevant for addressing current tensions of the standard\ncosmological model: i) the additional (negative) pressure contribution modifies\nthe expansion rate around $z_{\rm eq}$, yielding a larger $H_0$ value, and ii)\nlarge-scale structure formation is impacted by suppressing the amplitude of\nmatter overdensity growth via a new viscous friction-term contribution to the\nM\'esz\'aros effect. As a result, both the $H_0$ and the $S_8$ tensions of the\ncurrent standard cosmological model are significantly alleviated.\n"} {"abstract": " For an autonomous robotic system, monitoring surgeon actions and assisting\nthe main surgeon during a procedure can be very challenging. The challenges\ncome from the peculiar structure of the surgical scene, the greater similarity\nin appearance of actions performed via tools in a cavity compared to, say,\nhuman actions in unconstrained environments, as well as from the motion of the\nendoscopic camera. This paper presents ESAD, the first large-scale dataset\ndesigned to tackle the problem of surgeon action detection in endoscopic\nminimally invasive surgery. ESAD aims at increasing the effectiveness and\nreliability of surgical assistant robots by realistically testing their\nawareness of the actions performed by a surgeon. The dataset provides bounding\nbox annotations for 21 action classes on real endoscopic video frames captured\nduring prostatectomy, and was used as the basis of a recent MIDL 2020\nchallenge. We also present an analysis of the dataset conducted using the\nbaseline model which was released as part of the challenge, and a description\nof the top performing models submitted to the challenge together with the\nresults they obtained. This study provides significant insight into what\napproaches can be effective and can be extended further. We believe that ESAD\nwill serve in the future as a useful benchmark for all researchers active in\nsurgeon action detection and assistive robotics at large.\n"} {"abstract": " Traditionally, genetic algorithms have been used for the optimization of\nunimodal and multimodal functions. Earlier researchers worked with constant\nprobabilities of GA control operators like crossover and mutation for tuning\nthe optimization in specific domains. Recent advancements in this field\nwitnessed an adaptive approach to probability determination. In adaptive\nmutation, primarily poor individuals are utilized to explore the state space,\nso the mutation probability is usually generated proportionally to the\ndifference between the fitness of the best chromosome and that of the\nindividual (fMAX - f). However, this approach is susceptible to the nature of\nthe fitness distribution during optimization. This paper presents an alternate\napproach of generating the mutation probability from chromosome rank, which\navoids any susceptibility to the fitness distribution. Experiments are done to\ncompare the results of a simple genetic algorithm (SGA) with constant mutation\nprobability and the adaptive approaches within a limited resource constraint\nfor unimodal and multimodal functions and the Travelling Salesman Problem\n(TSP).
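A minimal sketch of the rank-based adaptive mutation idea described in the genetic-algorithm abstract above. Assigning probabilities by linear interpolation between p_min and p_max over ranks is an illustrative assumption, not necessarily the paper's exact formula; fitness maximization is assumed.

def rank_based_mutation_probs(fitnesses, p_min=0.001, p_max=0.05):
    # Rank 0 = best individual, rank n-1 = worst; poorer individuals get a
    # larger mutation probability, independently of the fitness *values*,
    # which makes the schedule insensitive to the fitness distribution.
    n = len(fitnesses)
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    probs = [0.0] * n
    for rank, i in enumerate(order):
        probs[i] = p_min + (p_max - p_min) * rank / max(n - 1, 1)
    return probs

print(rank_based_mutation_probs([0.9, 0.2, 0.5, 0.7]))
# -> the best chromosome (0.9) gets p_min, the worst (0.2) gets p_max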
Measurements are done for average best fitness, the number of generations\nevolved, and the percentage of global optimum achievements out of several\ntrials. The results demonstrate that the rank-based adaptive mutation approach\nis superior to the fitness-based adaptive approach as well as to SGA in a\nmultimodal problem space.\n"} {"abstract": " A new class of test functions for black box optimization is introduced.\nAffine OneMax (AOM) functions are defined as compositions of OneMax and\ninvertible affine maps on bit vectors. The black box complexity of the class is\nupper bounded by a polynomial of large degree in the dimension. The proof\nrelies on discrete Fourier analysis and the Kushilevitz-Mansour algorithm.\nTunable complexity is achieved by expressing invertible linear maps as finite\nproducts of transvections. The black box complexity of sub-classes of AOM\nfunctions is studied. Finally, experimental results are given to illustrate the\nperformance of search algorithms on AOM functions.\n"} {"abstract": " The aberrations in an optical microscope are commonly measured and corrected\nat one location in the field of view, within the so-called isoplanatic patch.\nFull-field correction is desirable for high-resolution imaging of large\nspecimens. Here we present a novel wavefront detector, based on pupil sampling\nwith sub-apertures, which measures the aberrated wavefront phase at each\nposition of the specimen. Based on this measurement, we propose a region-wise\ndeconvolution that provides an anisoplanatic reconstruction of the sample\nimage. Our results indicate that the measurement and correction of the\naberrations can be performed in a wide-field fluorescence microscope over its\nentire field of view.\n"} {"abstract": " Rapid declines in mortality and fertility have become major issues in many\ndeveloped countries over the past few decades. A precise model for forecasting\ndemographic movements is important for decision making in social welfare\npolicies and resource budgeting among the government and many industry sectors.\nThis article introduces a novel non-parametric approach using Gaussian process\nregression with a natural cubic spline mean function and a spectral mixture\ncovariance function for mortality and fertility modelling and forecasting.\nUnlike most of the existing approaches in the demographic modelling literature,\nwhich rely on time parameters to decide the movements of the whole mortality or\nfertility curve shifting from one year to another over time, we consider the\nmortality and fertility curves from their components of all age-specific\nmortality and fertility rates and assume each of them follows a Gaussian\nprocess over time to fit the whole curves in a discrete but intensive style.\nThe proposed Gaussian process regression approach shows significant\nimprovements in terms of preciseness and robustness compared to other\nmainstream demographic modelling approaches in short-, mid- and long-term\nforecasting using the mortality and fertility data of several developed\ncountries in our numerical experiments.\n"} {"abstract": " Driving is a routine activity for many, but it is far from simple. Drivers\ndeal with multiple concurrent tasks, such as keeping the vehicle in the lane,\nobserving and anticipating the actions of other road users, reacting to\nhazards, and dealing with distractions inside and outside the vehicle. Failure\nto notice and respond to the surrounding objects and events can cause\naccidents.
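The Affine OneMax construction in the black-box-optimization abstract above is easy to instantiate: evaluate OneMax after an invertible affine map over GF(2). The rejection sampling of the matrix below is merely a convenient way to obtain an invertible map and is not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def invertible_gf2(M):
    # Gaussian elimination over GF(2) to test invertibility.
    M = M.copy() % 2
    n = len(M)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            return False
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return True

n = 16
while True:                          # rejection-sample an invertible A
    A = rng.integers(0, 2, size=(n, n))
    if invertible_gf2(A):
        break
b = rng.integers(0, 2, size=n)

def aom(x):
    # OneMax composed with the invertible affine map x -> Ax + b over GF(2).
    return int(((A @ x + b) % 2).sum())

print(aom(rng.integers(0, 2, size=n)))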
The ongoing improvements of the road infrastructure and vehicle\nmechanical design have made driving safer overall. Nevertheless, the problem of\ndriver inattention has remained one of the primary causes of accidents.\nTherefore, understanding where drivers look and why they do so can help\neliminate sources of distraction and identify unsafe attention patterns.\nResearch on driver attention has implications for many practical applications\nsuch as policy-making, improving driver education, enhancing road\ninfrastructure and in-vehicle infotainment systems, as well as designing\nsystems for driver monitoring, driver assistance, and automated driving. This\nreport covers the literature on changes in drivers' visual attention\ndistribution due to factors internal and external to the driver. Aspects of\nattention during driving have been explored across multiple disciplines,\nincluding psychology, human factors, human-computer interaction, intelligent\ntransportation, and computer vision, each offering different perspectives,\ngoals, and explanations for the observed phenomena. We link cross-disciplinary\ntheoretical and behavioral research on driver's attention to practical\nsolutions. Furthermore, limitations and directions for future research are\ndiscussed. This report is based on over 175 behavioral studies, nearly 100\npractical papers, 20 datasets, and over 70 surveys published since 2010. A\ncurated list of papers used for this report is available at\n\url{https://github.com/ykotseruba/attention_and_driving}.\n"} {"abstract": " Magnetic anisotropies play a key role in tailoring the magnetic behavior of\nferromagnetic systems. Further, they are also essential elements to manipulate\nthe thermoelectric response in Anomalous Nernst (ANE) and Longitudinal Spin\nSeebeck (LSSE) systems. We propose here a theoretical approach and explore the\nrole of magnetic anisotropies in the magnetization and thermoelectric response\nof noninteracting multidomain ferromagnetic systems. The magnetic behavior and\nthe thermoelectric curves are calculated from a modified Stoner-Wohlfarth model\nfor an isotropic system, a uniaxial magnetic one, as well as for a system\nhaving a mixture of uniaxial and cubic magnetocrystalline magnetic\nanisotropies. Remarkable modifications of the magnetic behavior with the\nanisotropy are verified, and it is shown that the thermoelectric response is\nstrongly affected by these changes. Further, the fingerprints of the energy\ncontributions to the thermoelectric response are disclosed. To test the\nrobustness of our theoretical approach, we engineer films having the specific\nmagnetic properties and compare experimental data directly with theoretical\nresults. Thus, experimental evidence is provided to confirm the validity of our\ntheoretical approach. The results go beyond the traditional reports focusing on\nmagnetically saturated films and show how the thermoelectric effect behaves\nover the whole magnetization curve.
Our findings reveal a promising way to\nexplore the ANE and LSSE effects as a powerful tool to study magnetic\nanisotropies, as well as to employ systems with magnetic anisotropy as sensing\nelements in technological applications.\n"} {"abstract": " IW And stars are a recently recognized subgroup of dwarf novae which are\ncharacterized by (often repetitive) slowly rising standstills terminated by\nbrightening, but the exact mechanism for this variation is not yet identified.\nWe have identified BO Cet, which had been considered a novalike cataclysmic\nvariable, as a new member of the IW And stars based on its behavior in\n2019-2020. In addition to this, the object showed dwarf nova-type outbursts in\n2020-2021, and superhumps having a period 7.8% longer than the orbital one\ndeveloped at least during one long outburst. This object has been confirmed as\nan SU UMa-type dwarf nova with an exceptionally long orbital period (0.1398 d).\nBO Cet is thus the first cataclysmic variable showing both SU UMa-type and IW\nAnd-type features. We obtained a mass ratio (q) of 0.31-0.34 from the\nsuperhumps in the growing phase (stage A superhumps). At this q, the radius of\nthe 3:1 resonance, responsible for tidal instability and superhumps, and the\ntidal truncation radius are very similar. We interpret that on some occasions\nthis object showed IW And-type variation when the disk size was not large\nenough, but that the radius of the 3:1 resonance could be reached as the result\nof thermal instability. We also discuss that there are SU UMa-type dwarf novae\nabove q=0.30, which is above the previously considered limit (q~0.25) derived\nfrom numerical simulations, and that this is possible since the radius of the\n3:1 resonance is inside the tidal truncation radius. We constrained the mass of\nthe white dwarf to be larger than 1.0 Msol, which may be responsible for the IW\nAnd-type behavior and the observed strength of the He II emission. The exact\nreason why this object is unique in that it shows both SU UMa-type and IW\nAnd-type features, however, is still unsolved.\n"} {"abstract": " We develop time-splitting finite difference methods, using implicit\nBackward-Euler and semi-implicit Crank-Nicolson discretization schemes, to\nstudy spin-orbit-coupled spinor Bose-Einstein condensates with coherent\ncoupling in quasi-one- and quasi-two-dimensional traps. The split equations\ninvolving the kinetic energy and spin-orbit coupling operators are solved using\neither time-implicit Backward-Euler or semi-implicit Crank-Nicolson methods. We\nexplicitly develop the method for pseudospin-1/2, spin-1, and spin-2\ncondensates. The results for ground states obtained with the time-splitting\nBackward-Euler and Crank-Nicolson methods are in excellent agreement with the\ntime-splitting Fourier spectral method, which is one of the popular methods to\nsolve the mean-field models for spin-orbit-coupled spinor condensates. We\nconfirm the emergence of different phases in spin-orbit-coupled pseudospin-1/2,\nspin-1, and spin-2 condensates with coherent coupling.\n"} {"abstract": " We consider two models of random cones together with their duals. Let\n$Y_1,\dots,Y_n$ be independent and identically distributed random vectors in\n$\mathbb R^d$ whose distribution satisfies some mild condition.
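The time-splitting idea in the spinor-condensate abstract above can be conveyed on a scalar toy problem: Strang splitting with a Crank-Nicolson kinetic step for i u_t = -(1/2) u_xx + V(x) u. The actual methods treat coupled spinor components with spin-orbit terms; the scalar equation and all parameter values below are illustrative.

import numpy as np

L, N, dt, steps = 16.0, 256, 1e-3, 200
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x ** 2                           # harmonic trap
u = np.exp(-(x - 1.0) ** 2).astype(complex)
u /= np.sqrt(np.sum(np.abs(u) ** 2) * dx)  # normalize the wave packet

lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx ** 2
tau = dt / 2                               # kinetic half-step (Strang)
# Crank-Nicolson for u_t = (i/2) lap u:  (I - i tau/4 lap) u+ = (I + i tau/4 lap) u-
A = np.eye(N) - 0.25j * tau * lap
B = np.eye(N) + 0.25j * tau * lap

for _ in range(steps):
    u = np.linalg.solve(A, B @ u)          # kinetic half-step
    u *= np.exp(-1j * dt * V)              # full potential step (exact phase)
    u = np.linalg.solve(A, B @ u)          # kinetic half-step

print("norm after evolution:", np.sum(np.abs(u) ** 2) * dx)  # ~1, CN is unitary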
The random\ncones $G_{n,d}^A$ and $G_{n,d}^B$ are defined as the positive hulls\n$\text{pos}\{Y_1-Y_2,\dots,Y_{n-1}-Y_n\}$ and\n$\text{pos}\{Y_1-Y_2,\dots,Y_{n-1}-Y_n,Y_n\}$, respectively, conditioned on the\nevent that the respective positive hull is not equal to $\mathbb R^d$. We prove\nlimit theorems for various expected geometric functionals of these random\ncones, as $n$ and $d$ tend to infinity in a coordinated way. This includes\nlimit theorems for the expected number of $k$-faces and the $k$-th conic\nquermassintegrals, as $n$, $d$ and sometimes also $k$ tend to infinity\nsimultaneously. Moreover, we uncover a phase transition in high dimensions for\nthe expected statistical dimension for both models of random cones.\n"} {"abstract": " A proper orthogonal decomposition-based B-splines B\'ezier elements method\n(POD-BSBEM) is proposed as a non-intrusive reduced-order model for uncertainty\npropagation analysis of stochastic time-dependent problems. The method uses a\ntwo-step proper orthogonal decomposition (POD) technique to extract the reduced\nbasis from a collection of high-fidelity solutions called snapshots. A third\nPOD level is then applied to the data of the projection coefficients associated\nwith the reduced basis to separate the time-dependent modes from the stochastic\nparametrized coefficients. These are approximated in the stochastic parameter\nspace using B-spline basis functions defined in the corresponding B\'ezier\nelement. The accuracy and the efficiency of the proposed method are assessed\nusing benchmark steady-state and time-dependent problems and compared to the\nreduced-order model-based artificial neural network (POD-ANN) and to the\nfull-order model-based polynomial chaos expansion (Full-PCE). The POD-BSBEM is\nthen applied to analyze the uncertainty propagation through a flood wave flow\nstemming from a hypothetical dam-break in a river with a complex bathymetry.\nThe results confirm the ability of the POD-BSBEM to accurately predict the\nstatistical moments of the output quantities of interest with a substantial\nspeed-up for both offline and online stages compared to other techniques.\n"} {"abstract": " Nowadays, datacenters lean on a computer-centric approach based on monolithic\nservers which include all necessary hardware resources (mainly CPU, RAM,\nnetwork and disks) to run applications. Such an architecture comes with two\nmain limitations: (1) difficulty in achieving full resource utilization and (2)\ncoarse granularity for hardware maintenance. Recently, many works investigated\na resource-centric approach called disaggregated architecture, where the\ndatacenter is composed of self-contained resource boards interconnected using\nfast interconnection technologies, each resource board including instances of\none resource type. The resource-centric architecture allows each resource to be\nmanaged (maintenance, allocation) independently. LegoOS is the first work that\nstudied the implications of disaggregation on the operating system, proposing\nto disaggregate the operating system itself. It demonstrated the suitability of\nthis approach, considering mainly CPU and RAM resources. However, it did not\nstudy the implications of disaggregation for network resources. We reproduced a\nLegoOS infrastructure and extended it to support disaggregated networking.
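The first step of the POD pipeline described in the POD-BSBEM abstract above (extracting a reduced basis from snapshots) amounts to a truncated SVD, sketched below on a synthetic snapshot matrix; the further POD levels and the B-spline/Bezier regression of the stochastic coefficients are not shown.

import numpy as np

rng = np.random.default_rng(1)
n_space, n_snap = 400, 60
t = np.linspace(0, 1, n_space)[:, None]
mu = rng.uniform(1.0, 3.0, n_snap)[None, :]        # stochastic parameter samples
snapshots = np.sin(np.pi * t * mu) + 0.01 * rng.standard_normal((n_space, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1       # retain 99.99% of the energy
basis = U[:, :r]                                   # reduced basis
coeffs = basis.T @ snapshots                       # projection coefficients
print(f"rank {r} captures {energy[r - 1]:.6f} of the snapshot energy")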
We show that networking can be disaggregated following the same\nprinciples, and that classical networking optimizations such as DMA, DDIO or\nloopback can be reproduced in such an environment. Our evaluations show the\nviability of the approach and the potential of future disaggregated\ninfrastructures.\n"} {"abstract": " Previous methods decompose the blind super-resolution (SR) problem into two\nsequential steps: \textit{i}) estimating the blur kernel from a given\nlow-resolution (LR) image and \textit{ii}) restoring the SR image based on the\nestimated kernel. This two-step solution involves two independently trained\nmodels, which may not be compatible with each other. A small estimation error\nin the first step could cause a severe performance drop in the second one. On\nthe other hand, the first step can only utilize limited information from the LR\nimage, which makes it difficult to predict a highly accurate blur kernel. To\naddress these issues, instead of considering these two steps separately, we\nadopt an alternating optimization algorithm, which can estimate the blur kernel\nand restore the SR image in a single model. Specifically, we design two\nconvolutional neural modules, namely \textit{Restorer} and \textit{Estimator}.\n\textit{Restorer} restores the SR image based on the predicted kernel, and\n\textit{Estimator} estimates the blur kernel with the help of the restored SR\nimage. We alternate these two modules repeatedly and unfold this process to\nform an end-to-end trainable network. In this way, \textit{Estimator} utilizes\ninformation from both the LR and SR images, which makes the estimation of the\nblur kernel easier. More importantly, \textit{Restorer} is trained with the\nkernel estimated by \textit{Estimator}, instead of the ground-truth kernel, and\nthus \textit{Restorer} can be more tolerant of the estimation error of\n\textit{Estimator}. Extensive experiments on synthetic datasets and real-world\nimages show that our model can largely outperform state-of-the-art methods and\nproduce more visually favorable results at a much higher speed. The source code\nis available at \url{https://github.com/greatlog/DAN.git}.\n"} {"abstract": " We consider a certain lattice branching random walk with on-site competition\nand in an environment which is heterogeneous at a macroscopic scale\n$1/\varepsilon$ in space and time. This can be seen as a model for the spatial\ndynamics of a biological population in a habitat which is heterogeneous at a\nlarge scale (mountains, temperature or precipitation gradient...). The model\nincorporates another parameter, $K$, which is a measure of the local population\ndensity. We study the model in the limit when first $\varepsilon\to 0$ and then\n$K\to\infty$. In this asymptotic regime, we show that the rescaled position of\nthe front as a function of time converges to the solution of an explicit ODE.\nWe further discuss the relation with another popular model of population\ndynamics, the Fisher-KPP equation, which arises in the limit $K\to\infty$.\nCombined with known results on the Fisher-KPP equation, our results show in\nparticular that the limits $\varepsilon\to0$ and $K\to\infty$ do not commute in\ngeneral.
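A schematic of the unfolded Restorer/Estimator alternation described in the blind-SR abstract above, with both modules reduced to toy CNNs that operate at a single resolution. The module architectures, the scalar kernel code and the unfolding depth are illustrative assumptions; in practice the unfolded network would be trained end-to-end.

import torch
import torch.nn as nn

class Restorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, lr, kernel_code):
        # Condition the restoration on the current kernel estimate.
        code_map = kernel_code.expand(-1, -1, *lr.shape[-2:])
        return self.net(torch.cat([lr, code_map], dim=1))

class Estimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Conv2d(16, 1, 1))
    def forward(self, lr, sr):
        # Estimate a (here 1-dimensional) kernel code from LR and current SR.
        return self.net(torch.cat([lr, sr], dim=1))

restorer, estimator = Restorer(), Estimator()
lr = torch.randn(4, 1, 32, 32)
sr, code = lr.clone(), torch.zeros(4, 1, 1, 1)
for _ in range(4):                   # unfolded alternating optimization
    sr = restorer(lr, code)
    code = estimator(lr, sr)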
We conjecture that an interpolating regime appears when $\log K$ and\n$1/\varepsilon$ are of the same order.\n"} {"abstract": " A long-held belief is that shock energy induces initiation of an energetic\nmaterial through an indirect energy up-pumping mechanism involving phonon\nscattering through doorway modes. In this paper, a 3-phonon theoretical\nanalysis of energy up-pumping in RDX is presented that involves both direct and\nindirect pathways, where the direct energy transfer dominates. The calculation\nconsiders individual phonon modes which are then analyzed in bands. Scattering\nis handled up to the third-order term in the Hamiltonian based on Fermi's\nGolden Rule. On average, modes with frequencies up to 90 cm-1 scatter quickly\nand redistribute the energy to all the modes. This direct stimulation occurs\nrapidly, within 0.16 ps, and involves distortions to NN bonds. Modes from 90 to\n1839 cm-1 further up-pump the energy to NN bond distortion modes through an\nindirect route within 5.6 ps. The highest-frequency modes have the lowest\ncontribution to energy transfer due to their lower participation in\nphonon-phonon scattering. The modes stimulated directly by the shock with\nfrequencies up to 90 cm-1 are estimated to account for 52 to 89\% of the total\nenergy transfer to various NN bond distorting modes.\n"} {"abstract": " A new paradigm called physical reservoir computing has recently emerged,\nwhere the nonlinear dynamics of high-dimensional and fixed physical systems are\nharnessed as a computational resource to achieve complex tasks. Via extensive\nsimulations based on a dynamic truss-frame model, this study shows that an\norigami structure can perform as a dynamic reservoir with sufficient computing\npower to emulate high-order nonlinear systems, generate stable limit cycles,\nand modulate outputs according to dynamic inputs. This study also uncovers the\nlinkages between the origami reservoir's physical designs and its computing\npower, offering a guideline to optimize the computing performance.\nComprehensive parametric studies show that selecting an optimal feedback crease\ndistribution and fine-tuning the underlying origami folding designs are the\nmost effective approaches to improving computing performance. Furthermore, this\nstudy shows how origami's physical reservoir computing power can apply to soft\nrobotic control problems through a case study of earthworm-like peristaltic\ncrawling without traditional controllers. These results can pave the way for\norigami-based robots with embodied mechanical intelligence.\n"} {"abstract": " We study the algebraic conditions leading to the chain property of complexes\nfor vertex operator algebra $n$-point functions with the differential defined\nthrough reduction formulas. The notion of the reduction cohomology of Riemann\nsurfaces is introduced. The algebraic, geometrical, and cohomological meanings\nof reduction formulas are clarified. A counterpart of the Bott-Segal theorem\nfor Riemann surfaces in terms of the reduction cohomology is proven. It is\nshown that the reduction cohomology is given by the cohomology of $n$-point\nconnections over the vertex operator algebra bundle defined on a genus $g$\nRiemann surface $\Sigma^{(g)}$. The reduction cohomology for a vertex operator\nalgebra with formal parameters identified with local coordinates around marked\npoints on $\Sigma^{(g)}$ is found in terms of the space of analytical\ncontinuations of solutions to the Knizhnik-Zamolodchikov equations.
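Setting the origami mechanics in the abstract above aside, the computational skeleton of physical reservoir computing is compact: a fixed nonlinear dynamical system is driven by the input, and only a linear readout is trained. A generic echo-state sketch, with a random network standing in for the truss-frame reservoir; the sizes, spectral radius and the smoothing target are illustrative.

import numpy as np

rng = np.random.default_rng(2)
n_res, T = 200, 2000
Win = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius 0.9

u = rng.uniform(0, 0.5, T)
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + Win[:, 0] * u[t])           # fixed reservoir dynamics
    states[t] = x

y = np.convolve(u, np.ones(10) / 10, mode="same")   # toy memory-demanding target
A = states[100:]                                    # drop the initial transient
ridge = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y[100:])
print("train MSE:", np.mean((A @ ridge - y[100:]) ** 2))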
For the\nreduction cohomology, the Euler-Poincare formula is derived. Examples for\nvarious genera and vertex operator cluster algebras are provided.\n"} {"abstract": " For a commutative ring $R$, we define the notions of deformed Picard\nalgebroids and deformed twisted differential operators on a smooth, separated,\nlocally of finite type $R$-scheme and prove these are in a natural bijection.\nWe then define the pullback of a sheaf of twisted differential operators that\nreduces to the classical definition when $R=\\mathbb{C}$. Finally, for modules\nover twisted differential operators, we prove a theorem for the descent under a\nlocally trivial torsor.\n"} {"abstract": " Motivated by questions in number theory, Myerson asked how small the sum of 5\ncomplex nth roots of unity can be. We obtain a uniform bound of O(n^{-4/3}) by\nperturbing the vertices of a regular pentagon, improving to O(n^{-7/3})\ninfinitely often.\n The corresponding configurations were suggested by examining exact minimum\nvalues computed for n <= 221000. These minima can be explained at least in part\nby selection of the best example from multiple families of competing\nconfigurations related to close rational approximations.\n"} {"abstract": " The orientation completion problem for a class of oriented graphs asks\nwhether a given partially oriented graph can be completed to an oriented graph\nin the class by orienting the unoriented edges of the partially oriented graph.\nOrientation completion problems have been studied recently for several classes\nof oriented graphs, yielding both polynomial time solutions as well as\nNP-completeness results. Local tournaments are a well-structured class of\noriented graphs that generalize tournaments and their underlying graphs are\nintimately related to proper circular-arc graphs. According to Skrien, a\nconnected graph can be oriented as a local tournament if and only if it is a\nproper circular-arc graph. Proper interval graphs are precisely the graphs\nwhich can be oriented as acyclic local tournaments. It has been proved that the\norientation completion problems for the classes of local tournaments and\nacyclic local tournaments are both polynomial time solvable. In this paper we\ncharacterize the partially oriented graphs that can be completed to local\ntournaments by determining the complete list of obstructions. These are in a\nsense minimal partially oriented graphs that cannot be completed to local\ntournaments. The result may be viewed as an extension of the well-known\nforbidden subgraph characterization of proper circular-arc graphs obtained by\nTucker. The complete list of obstructions for acyclic local tournament\norientation completions has been given in a companion paper.\n"} {"abstract": " We derive a thermodynamic uncertainty relation (TUR) for first-passage times\n(FPTs) on continuous time Markov chains. The TUR utilizes the entropy\nproduction coming from bidirectional transitions, and the net flux coming from\nunidirectional transitions, to provide a lower bound on FPT fluctuations. As\nevery bidirectional transition can also be seen as a pair of separate\nunidirectional ones, our approach typically yields an ensemble of TURs. The\ntightest bound on FPT fluctuations can then be obtained from this ensemble by a\nsimple and physically motivated optimization procedure. The results presented\nherein are valid for arbitrary initial conditions, out-of-equilibrium dynamics,\nand are therefore well suited to describe the inherently irreversible\nfirst-passage event. 
They can thus be readily applied to a myriad of\nfirst-passage problems that arise across a wide range of disciplines.\n"} {"abstract": " A hyperlink is a finite set of non-intersecting simple closed curves in\n$\mathbb{R}^4 \equiv \mathbb{R} \times \mathbb{R}^3$, where each curve is\neither a matter or a geometric loop. We consider an equivalence class of such\nhyperlinks, up to time-like isotopy, preserving time-ordering. Using an\nequivalence class and after coloring each matter component loop with an\nirreducible representation of $\mathfrak{su}(2) \times \mathfrak{su}(2)$, we\ncan define its Wilson Loop observable using an Einstein-Hilbert action, which\nis now thought of as a functional acting on the set containing equivalence\nclasses of hyperlinks. We construct a vector space using these functionals,\nwhose elements we now term quantum states. To make it into a Hilbert space, we\nneed to define a counting probability measure on the space containing\nequivalence classes of hyperlinks. In our previous work, we defined area,\nvolume and curvature operators, corresponding to given geometric objects like a\nsurface and a compact solid spatial region. These operators act on the quantum\nstates and, by deliberate construction of the Hilbert space, are self-adjoint\nand possibly unbounded operators. Using these operators and Einstein's field\nequations, we can proceed to construct a quantized stress operator and also a\nHamiltonian constraint operator for the quantum system. We will also use the\narea operator to derive the Bekenstein entropy of a black hole. In the\nconcluding section, we will explain how Loop Quantum Gravity predicts the\nexistence of gravitons, implies causality and locality in quantum gravity, and\nformulates the principle of equivalence mathematically in its framework.\n"} {"abstract": " We use a replica trick construction to propose a definition of branch-point\ntwist operators in two-dimensional momentum space and compute their two-point\nfunction. The result is then tentatively interpreted as a pseudo R\'enyi\nentropy for momentum modes.\n"} {"abstract": " Deep learning semantic segmentation algorithms can localise abnormalities or\nopacities from chest radiographs. However, the task of collecting and\nannotating training data is expensive and requires expertise, which remains a\nbottleneck for algorithm performance. We investigate the effect of image\naugmentations on reducing the requirement of labelled data in the semantic\nsegmentation of chest X-rays for pneumonia detection. We train fully\nconvolutional network models on subsets of different sizes from the total\ntraining data. We apply a different image augmentation while training each\nmodel and compare it to the baseline trained on the entire dataset without\naugmentations. We find that rotate and mixup are the best augmentations amongst\nrotate, mixup, translate, gamma and horizontal flip; they reduce the labelled\ndata requirement by 70% while performing comparably to the baseline in terms of\nAUC and mean IoU in our experiments.\n"} {"abstract": " We investigate the behavior of vortex bound states in the quantum limit by\nself-consistently solving the Bogoliubov-de Gennes equation. We find that the\nenergies of the vortex bound states deviate from the analytical result\n$E_\mu=\mu\Delta^2/E_F$ with the half-integer angular momentum $\mu$ in the\nextreme quantum limit. Specifically, the energy ratio for the first three\norders is closer to $1:2:3$ than to $1:3:5$ at extremely low temperature.
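Of the augmentations compared in the chest X-ray abstract above, mixup is the least standard for segmentation; a minimal sketch that blends both images and masks with the same Beta-distributed weight (alpha = 0.4 is an illustrative choice, not the paper's stated setting):

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    # Convex combination of two samples; for segmentation, the masks are
    # blended with the same lambda as the images.
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

img1, img2 = np.zeros((256, 256)), np.ones((256, 256))
msk1, msk2 = np.zeros((256, 256)), np.ones((256, 256))   # pneumonia masks
x, y = mixup(img1, msk1, img2, msk2)
print(x.mean() == y.mean())   # both equal the same lambda-weighted blend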
The local density of states reveals a Friedel-like behavior\nassociated with that of the pair potential in the extreme quantum limit, which\nwill be smoothed out by the thermal effect above a certain temperature even\nwhen the quantum limit condition, namely $T/T_c<\Delta/E_F$, is still\nsatisfied. Our studies show that the vortex bound states can exhibit very\ndistinct features in different temperature regimes, which provides a\ncomprehensive understanding and should stimulate more experimental efforts for\nverification.\n"} {"abstract": " Predicting molecular conformations (or 3D structures) from molecular graphs\nis a fundamental problem in many applications. Most existing approaches proceed\nin two steps, first predicting the distances between atoms and then generating\na 3D structure through optimizing a distance geometry problem. However, the\ndistances predicted with such two-stage approaches may not be able to\nconsistently preserve the geometry of local atomic neighborhoods, making the\ngenerated structures unsatisfactory. In this paper, we propose an end-to-end\nsolution for molecular conformation prediction called ConfVAE based on the\nconditional variational autoencoder framework. Specifically, the molecular\ngraph is first encoded in a latent space, and then the 3D structures are\ngenerated by solving a principled bilevel optimization program. Extensive\nexperiments on several benchmark data sets prove the effectiveness of our\nproposed approach over existing state-of-the-art approaches. Code is available\nat \url{https://github.com/MinkaiXu/ConfVAE-ICML21}.\n"} {"abstract": " Search strategies for third-generation leptoquarks (LQs) are distinct from\nother LQ searches, especially when they decay to a top quark and a $\tau$\nlepton. We investigate the cases of all TeV-scale scalar and vector LQs that\ndecay to either a top-tau pair (charge-$1/3$ and $5/3$ LQs) or a top-neutrino\npair (charge-$2/3$ LQs). One can then use the boosted top (which can be tagged\nefficiently using jet-substructure techniques) and high-$p_{\rm T}$ $\tau$\nleptons to search for these LQs. We consider two search channels with either\none or two taus along with at least one hadronically decaying boosted top\nquark. We estimate the high-luminosity LHC (HL-LHC) search prospects of these\nLQs by considering both symmetric and asymmetric pair and single production\nprocesses. Our selection criteria are optimised to retain events from both pair\nand single production processes. The combined signal has better prospects than\nthe traditional searches. We include new three-body single production processes\nto enhance the single production contributions to the combined signal. We\nidentify the interference effect that appears in the dominant single production\nchannel of the charge-$1/3$ scalar LQ ($S^{1/3}$). This interference is\nconstructive if $S^{1/3}$ is a weak triplet and destructive for a singlet one.\nAs a result, their LHC prospects differ appreciably.\n"} {"abstract": " We present a detailed analysis to clarify what determines the growth of the\nlow-$T/|W|$ instability in the context of the rapidly rotating core collapse of\nmassive stars. To this end, we perform three-dimensional core-collapse\nsupernova (CCSN) simulations of a $27 M_{\odot}$ star including several updates\nin the general relativistic correction to gravity, the multi-energy treatment\nof heavy-lepton neutrinos, and the nuclear equation of state.
Non-axisymmetric\ndeformations are analyzed from the point of view of the time evolution of the\npattern frequency and the corotation radius. The corotation radius is found to\ncoincide with the convective layer in the proto-neutron star (PNS). We propose\na new mechanism to account for the growth of the low-$T/|W|$ instability in the\nCCSN environment. Near the convective boundary, where a small\nBrunt-V\"ais\"al\"a frequency is expected, Rossby waves propagating in the\nazimuthal direction at mid-latitudes induce non-axisymmetric unstable modes in\nboth hemispheres. They merge with each other and finally become the spiral arm\nin the equatorial plane. We also investigate how the growth of the low-$T/|W|$\ninstability impacts the neutrino and gravitational-wave signatures.\n"} {"abstract": " An optical neural network is proposed and demonstrated with programmable\nmatrix transformation and a nonlinear activation function provided by\nphotodetection (square-law detection). Based on discrete phase-coherent spatial\nmodes, the dimensionality of programmable optical matrix operations is 30~37,\nwhich is implemented by spatial light modulators. With this architecture,\nall-optical classification tasks of handwritten digits, objects and depth\nimages are performed on the same platform with high accuracy. Due to the\nparallel nature of matrix multiplication, the processing speed of our proposed\narchitecture is potentially as high as 7.4T~74T FLOPs per second (with a\n10~100 GHz detector).\n"} {"abstract": " In this note, we give a characterisation in terms of identities of the join\nof $\mathbf{V}$ with the variety of finite locally trivial semigroups\n$\mathbf{LI}$ for several well-known varieties of finite monoids $\mathbf{V}$\nby using classical algebraic-automata-theoretic techniques. To achieve this, we\nuse the new notion of essentially-$\mathbf{V}$ stamps defined by Grosshans,\nMcKenzie and Segoufin and show that it actually coincides with the join of\n$\mathbf{V}$ and $\mathbf{LI}$ precisely when some natural condition on the\nvariety of languages corresponding to $\mathbf{V}$ is verified. This work is a\nkind of rediscovery of the work of J. C. Costa from around 20 years ago from a\nrather different angle, since Costa's work relies on the use of advanced\ndevelopments in profinite topology, whereas what is presented here essentially\nuses an algebraic, language-based approach.\n"} {"abstract": " The Transiting Exoplanet Survey Satellite (\textit{TESS}) mission was\ndesigned to perform an all-sky search for planets around bright and nearby\nstars. Here we report the discovery of two sub-Neptunes orbiting TOI 1062 (TIC\n299799658), a V=10.25 G9V star observed in TESS Sectors 1, 13, 27 & 28. We use\nprecise radial velocity observations from HARPS to confirm and characterize\nthese two planets. TOI 1062b has a radius of 2.265^{+0.095}_{-0.091} Re, a mass\nof 11.8 +/- 1.4 Me, and an orbital period of 4.115050 +/- 0.000007 days. The\nsecond planet is not transiting, has a minimum mass of 7.4 +/- 1.6 Me and is\nnear the 2:1 mean motion resonance with the innermost planet with an orbital\nperiod of 8.13^{+0.02}_{-0.01} days. We performed a dynamical analysis to\nexplore the proximity of the system to this resonance, and to attempt to\nfurther constrain the orbital parameters.
The\ntransiting planet has a mean density of 5.58^{+1.00}_{-0.89} g cm^-3, and an\nanalysis of its internal structure reveals that it is expected to have a small\nvolatile envelope accounting for 0.35% of the mass at maximum. The star's\nbrightness and the proximity of the inner planet to the "radius gap" make it an\ninteresting candidate for transmission spectroscopy, which could further\nconstrain the composition and internal structure of TOI 1062b.\n"} {"abstract": " This paper describes the design, implementation, and verification of a\ntest-bed for determining the noise temperature of radio antennas operating\nbetween 400 and 800 MHz. The requirements for this test-bed were driven by the\nHIRAX experiment, which uses antennas with embedded amplification, making\nsystem noise characterization difficult in the laboratory. The test-bed\nconsists of two large cylindrical cavities, each containing radio-frequency\n(RF) absorber held at different temperatures (300 K and 77 K), allowing a\nmeasurement of system noise temperature through the well-known 'Y-factor'\nmethod. The apparatus has been constructed at Yale, and over the course of the\npast year has undergone detailed verification measurements. To date, three\npreliminary noise temperature measurement sets have been conducted using the\nsystem, putting us on track to make the first noise temperature measurements of\nthe HIRAX feed and perform the first analysis of feed repeatability.\n"} {"abstract": " We establish concentration inequalities in the class of ultra log-concave\ndistributions. In particular, we show that ultra log-concave distributions\nsatisfy Poisson concentration bounds. As an application, we derive\nconcentration bounds for the intrinsic volumes of a convex body, which\ngeneralizes and improves a result of Lotz, McCoy, Nourdin, Peccati, and Tropp\n(2019).\n"} {"abstract": " What does bumping into things in a scene tell you about scene geometry? In\nthis paper, we investigate the idea of learning from collisions. At the heart\nof our approach is the idea of collision replay, where we use examples of a\ncollision to provide supervision for observations at a past frame. We use\ncollision replay to train convolutional neural networks to predict a\ndistribution over collision time from new images. This distribution conveys\ninformation about the navigational affordances (e.g., corridors vs. open\nspaces) and, as we show, can be converted into the distance function for the\nscene geometry. We analyze this approach with an agent that has noisy actuation\nin a photorealistic simulator.\n"} {"abstract": " We propose a leptoquark model with two scalar leptoquarks $S^{}_1 \left(\n\bar{3},1,\frac{1}{3} \right)$ and $\widetilde{R}^{}_2 \left(3,2,\frac{1}{6}\n\right)$ to give a combined explanation of neutrino masses, lepton flavor\nmixing and the muon $g-2$ anomaly, satisfying the constraints from the\nradiative decays of charged leptons. The neutrino masses are generated via\none-loop corrections resulting from a mixing between $S^{}_1$ and\n$\widetilde{R}^{}_2$. With a set of specific textures for the leptoquark Yukawa\ncoupling matrices, the neutrino mass matrix possesses an approximate\n$\mu$-$\tau$ reflection symmetry with $\left( M^{}_\nu \right)^{}_{ee} = 0$\nonly in favor of the normal neutrino mass ordering.
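The 'Y-factor' arithmetic behind the two-temperature test-bed described in the HIRAX abstract above is worth spelling out: with measured powers proportional to T_load + T_sys, the power ratio Y = P_hot/P_cold gives T_sys = (T_hot - Y*T_cold)/(Y - 1). A toy self-consistency check:

def y_factor_noise_temperature(p_hot, p_cold, t_hot=300.0, t_cold=77.0):
    # Receiver noise temperature from the Y-factor method: each measured
    # power satisfies P = G*k*B*(T_load + T_sys), so the gain-bandwidth
    # factor cancels in the ratio Y = p_hot / p_cold.
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

# A receiver with T_sys = 50 K reproduces itself exactly.
t_sys = 50.0
p_hot, p_cold = 300.0 + t_sys, 77.0 + t_sys   # proportional to total power
print(y_factor_noise_temperature(p_hot, p_cold))  # -> 50.0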
We show that this model can\nsuccessfully explain the anomaly of muon $g-2$ and current experimental\nneutrino oscillation data under the constraints from the radiative decays of\ncharged leptons.\n"} {"abstract": " Face detection is a crucial first step in many facial recognition and face\nanalysis systems. Early approaches for face detection were mainly based on\nclassifiers built on top of hand-crafted features extracted from local image\nregions, such as Haar Cascades and Histogram of Oriented Gradients. However,\nthese approaches were not powerful enough to achieve a high accuracy on images\nfrom uncontrolled environments. With the breakthrough work in image\nclassification using deep neural networks in 2012, there has been a huge\nparadigm shift in face detection. Inspired by the rapid progress of deep\nlearning in computer vision, many deep learning based frameworks have been\nproposed for face detection over the past few years, achieving significant\nimprovements in accuracy. In this work, we provide a detailed overview of some\nof the most representative deep learning based face detection methods by\ngrouping them into a few major categories, and present their core architectural\ndesigns and accuracies on popular benchmarks. We also describe some of the most\npopular face detection datasets. Finally, we discuss some current challenges in\nthe field, and suggest potential future research directions.\n"} {"abstract": " A Cayley (di)hypergraph is a hypergraph whose automorphism group contains\na subgroup acting regularly on (hyper)vertices. In this paper, we study Cayley\n(di)hypergraphs and their automorphism groups.\n"} {"abstract": " Purpose: Develop a processing scheme for Gradient Echo (GRE) phase to enable\nrestoration of susceptibility-related (SuR) features in regions affected by\nimperfect phase unwrapping, background suppression and low signal-to-noise\nratio (SNR) due to phase dispersion. Theory and Methods: The predictable\ncomponents sampled across the echo dimension in a multi-echo GRE sequence are\nrecovered by rank minimizing a Hankel matrix formed using the complex\nexponential of the background suppressed phase. To estimate the single\nfrequency component that relates to the susceptibility induced field, it is\nrequired to maintain consistency with the measured phase after background\nsuppression, penalized by a unity rank approximation (URA) prior. This is\nformulated as an optimization problem, implemented using the alternating\ndirection method of multipliers (ADMM). Results: With in vivo multi-echo GRE\ndata, the magnitude susceptibility weighted image (SWI) reconstructed using the\nURA prior shows additional venous structures that are obscured due to phase\ndispersion and noise in regions subject to remnant non-local field variations.\nThe performance is compared with susceptibility map weighted imaging (SMWI) and\nstandard SWI. It is also shown using numerical simulation that the quantitative\nsusceptibility map (QSM) computed from the reconstructed phase exhibits reduced\nartifacts and quantification error. In vivo experiments reveal iron depositions\nin the insula, motor cortex and superior frontal gyrus that are not identified\nin standard QSM.
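The rank-minimization step in the GRE-phase abstract above can be illustrated on a toy harmonic-retrieval problem: noiseless samples of a single complex exponential yield a rank-one Hankel matrix, so a rank-one fit estimates the underlying frequency even in noise. A minimal sketch of that idea only (not the authors' ADMM formulation; all parameter values are illustrative):

```python
import numpy as np

# Toy analogue: phase evolving linearly across echoes gives complex
# exponentials z[t] = exp(i*2*pi*f*t); a Hankel matrix built from them is
# rank one, so a rank-1 (URA-like) fit estimates f despite noise.
rng = np.random.default_rng(0)
f_true, n = 0.081, 64
t = np.arange(n)
z = (np.exp(2j * np.pi * f_true * t)
     + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

L = n // 2
H = np.array([[z[i + j] for j in range(n - L)] for i in range(L)])  # Hankel
U, s, Vh = np.linalg.svd(H, full_matrices=False)
u = U[:, 0]                                      # dominant left singular vector
f_est = np.angle(np.vdot(u[:-1], u[1:])) / (2 * np.pi)  # shift-invariance trick
print(f"true f = {f_true}, estimated f = {f_est:.4f}")
```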
Conclusion: URA processed GRE phase is less\nsensitive to imperfections in the phase pre-processing techniques, and thereby\nenables robust estimation of SWI and QSM.\n"} {"abstract": " We provide a comprehensive analysis of the two-parameter Beta distributions\nseen from the perspective of second-order stochastic dominance. By changing its\nparameters through a bijective mapping, we work with a bounded subset D instead\nof an unbounded plane. We show that a mean-preserving spread is equivalent to\nan increase of the variance, which means that higher moments are irrelevant for\ncomparing the riskiness of Beta distributions. We then derive the lattice\nstructure induced by second-order stochastic dominance, which is feasible\nthanks to the topological closure of D. Finally, we consider a standard\n(expected-utility based) portfolio optimization problem in which its inputs are\nthe parameters of the Beta distribution. We explicitly characterize the subset\nof D for which the optimal solution consists of investing 100% of the wealth in\nthe risky asset and we provide an exhaustive numerical analysis of this optimal\nsolution through (color-coded) graphs.\n"} {"abstract": " We report the growth, structural and magnetic properties of the less studied\nEu-oxide phase, Eu$_3$O$_4$, thin films grown on a Si/SiO$_2$ substrate and\nSi/SiO$_2$/graphene using molecular beam epitaxy. The X-ray diffraction scans\nshow that highly-textured crystalline Eu$_3$O$_4$(001) films are grown on both\nsubstrates, whereas the film deposited on graphene has a better crystallinity\nthan that grown on the Si/SiO$_2$ substrate. The SQUID measurements show that\nboth films have a Curie temperature of about 5.5 K, with a magnetic moment of\n0.0032 emu/g at 2 K. The mixed valency of the Eu cations has been confirmed by\nthe qualitative analysis of the depth-profile X-ray photoelectron spectroscopy\nmeasurements, with an Eu$^{2+}$ : Eu$^{3+}$ ratio of 28 : 72. However,\nsurprisingly, our films show no metamagnetic behaviour, in contrast to reports\nfor the bulk and powder forms. Furthermore, the Raman spectroscopy scans show\nthat the growth of the Eu$_3$O$_4$ thin films has no damaging effect on the\nunderlying graphene sheet. Therefore, the graphene layer is expected to retain\nits properties.\n"} {"abstract": " We study the Choquard equation with a local perturbation \\begin{equation*}\n-\\Delta u=\\lambda u+(I_\\alpha\\ast|u|^p)|u|^{p-2}u+\\mu|u|^{q-2}u,\\ x\\in\n\\mathbb{R}^{N} \\end{equation*} having prescribed mass \\begin{equation*}\n\\int_{\\mathbb{R}^N}|u|^2dx=a^2. \\end{equation*} For an $L^2$-critical or\n$L^2$-supercritical perturbation $\\mu|u|^{q-2}u$, we prove nonexistence,\nexistence and symmetry of normalized ground states, by using the mountain pass\nlemma, the Poho\\v{z}aev constraint method, the Schwartz symmetrization\nrearrangements and some theories of polarizations. In particular, our results\ncover the Hardy-Littlewood-Sobolev upper critical exponent case\n$p=(N+\\alpha)/(N-2)$. Our results are a nonlocal counterpart of the results in\n\\cite{{Li 2021-4},{Soave JFA},{Wei-Wu 2021}}.\n"} {"abstract": " We study an invariant of compact metric spaces which combines the notion of\ncurvature sets introduced by Gromov in the 1980s together with the notion of\nVietoris-Rips persistent homology.
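The variance equivalence in the Beta-distribution abstract above is easy to probe numerically: hold the mean alpha/(alpha+beta) fixed and shrink the concentration alpha+beta. A minimal sketch using scipy (the parameter values are illustrative):

```python
from scipy import stats

# Two Beta laws with the same mean 0.4 but different concentration alpha+beta:
# shrinking alpha+beta at fixed mean raises the variance, i.e. produces a
# mean-preserving spread within this family.
for a, b in [(4.0, 6.0), (2.0, 3.0)]:
    X = stats.beta(a, b)
    print(f"alpha={a}, beta={b}: mean={X.mean():.3f}, var={X.var():.5f}")
```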
For given integers $k\\geq 0$ and $n\\geq 1$,\nthese invariants arise by considering the degree $k$ Vietoris-Rips persistence\ndiagrams of all subsets of a given metric space with cardinality at most $n$.\nWe call these invariants \\emph{persistence sets} and denote them as\n$D_{n,k}^\\mathrm{VR}$. We argue that computing these invariants could be\nsignificantly easier than computing the usual Vietoris-Rips persistence\ndiagrams. We establish stability results for these invariants and we also\nprecisely characterize some of them in the case of spheres with geodesic and\nEuclidean distances. We identify a rich family of metric graphs for which\n$D_{4,1}^{\\mathrm{VR}}$ fully recovers their homotopy type. Along the way we\nprove some useful properties of Vietoris-Rips persistence diagrams.\n"} {"abstract": " Let $K$ be the connected sum of knots $K_1,\\ldots,K_n$. It is known that the\n$\\mathrm{SL}_2(\\mathbb{C})$-character variety of the knot exterior of $K$ has a\ncomponent of dimension $\\geq 2$ as the connected sum admits a so-called\nbending. We show that there is a natural way to define the adjoint Reidemeister\ntorsion for such a high-dimensional component and prove that it is locally\nconstant on a subset of the character variety where the trace of a meridian is\nconstant. We also prove that the adjoint Reidemeister torsion of $K$ satisfies\nthe vanishing identity if each $K_i$ does so.\n"} {"abstract": " Grammatical error correction (GEC) suffers from a lack of sufficient parallel\ndata. Therefore, GEC studies have developed various methods to generate pseudo\ndata, which comprise pairs of grammatical and artificially produced\nungrammatical sentences. Currently, a mainstream approach to generate pseudo\ndata is back-translation (BT). Most previous GEC studies using BT have employed\nthe same architecture for both the GEC and BT models. However, GEC models have\ndifferent correction tendencies depending on their architectures. Thus, in this\nstudy, we compare the correction tendencies of GEC models trained on pseudo\ndata generated by different BT models, namely, Transformer, CNN, and LSTM. The\nresults confirm that the correction tendencies for each error type are\ndifferent for every BT model. Additionally, we examine the correction\ntendencies when using a combination of pseudo data generated by different BT\nmodels. As a result, we find that the combination of different BT models\nimproves or interpolates the F_0.5 scores of each error type compared with those\nof single BT models with different seeds.\n"} {"abstract": " Deep learning recommendation models (DLRMs) are used across many\nbusiness-critical services at Facebook and are the single largest AI\napplication in terms of infrastructure demand in its data-centers. In this\npaper we discuss the SW/HW co-designed solution for high-performance\ndistributed training of large-scale DLRMs. We introduce a high-performance\nscalable software stack based on PyTorch and pair it with the new evolution of\nthe Zion platform, namely ZionEX. We demonstrate the capability to train very\nlarge DLRMs with up to 12 trillion parameters and show that we can attain 40X\nspeedup in terms of time to solution over previous systems.
We achieve this by (i)\ndesigning the ZionEX platform with a dedicated scale-out network, provisioned\nwith high bandwidth, optimal topology and efficient transport; (ii) implementing\nan optimized PyTorch-based training stack supporting both model and data\nparallelism; (iii) developing sharding algorithms capable of hierarchical\npartitioning of the embedding tables along row and column dimensions and load\nbalancing them across multiple workers; (iv) adding high-performance core\noperators while retaining flexibility to support optimizers with fully\ndeterministic updates; and (v) leveraging reduced precision communications,\nmulti-level memory hierarchy (HBM+DDR+SSD) and pipelining. Furthermore, we\ndevelop and briefly comment on distributed data ingestion and other supporting\nservices that are required for the robust and efficient end-to-end training in\nproduction environments.\n"} {"abstract": " Bitcoin and Ethereum transactions present one of the largest real-world\ncomplex networks that are publicly available for study, including a detailed\npicture of their time evolution. As such, they have received a considerable\namount of attention from the network science community, besides analyses from\neconomic or cryptography perspectives. Among these studies, in an analysis of\nan early instance of the Bitcoin network, we have shown the clear presence of\nthe preferential attachment, or \"rich-get-richer\" phenomenon. Now, we revisit\nthis question, using a recent version of the Bitcoin network that has grown\nalmost 100-fold since our original analysis. Furthermore, we additionally carry\nout a comparison with Ethereum, the second most important cryptocurrency. Our\nresults show that preferential attachment continues to be a key factor in the\nevolution of both the Bitcoin and Ethereum transaction networks. To facilitate\nfurther analysis, we publish a recent version of both transaction networks, and\nan efficient software implementation that is able to evaluate the linking\nstatistics necessary to learn about preferential attachment on networks with\nseveral hundred million edges.\n"} {"abstract": " The distributed hardware of acoustic sensor networks suffers from\ninconsistency of local sampling frequencies, which is detrimental to signal\nprocessing. Fundamentally, sampling rate offset (SRO) nonlinearly relates the\ndiscrete-time signals acquired by different sensor nodes. As such, retrieval of\nthe SRO from the available signals requires nonlinear estimation, like\ndouble-cross-correlation processing (DXCP), and frequently results in biased\nestimation. SRO compensation by asynchronous sampling rate conversion (ASRC) on\nthe signals then leaves an unacceptable residual. As a remedy to this problem,\nmulti-stage procedures have been devised to diminish the SRO residual with\nmultiple iterations of SRO estimation and ASRC over the entire signal. This\npaper converts the mechanism of offline multi-stage processing into a\ncontinuous feedback-control loop comprising a controlled ASRC unit followed by\nan online implementation of DXCP-based SRO estimation. To support the design of\nan optimum internal model control unit for this closed-loop system, the paper\ndeploys an analytical dynamical model of the proposed online DXCP. The\nresulting control architecture then merely applies a single treatment of each\nsignal frame, while efficiently diminishing SRO bias with time.
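Before the evaluations reported next in the acoustic-sensor-network abstract, the effect being controlled can be illustrated in a few lines: an SRO makes the relative delay between two recordings of the same signal drift linearly with time, and the slope of the per-frame cross-correlation peak recovers the offset. A deliberately simplified open-loop sketch (not the authors' DXCP estimator or closed-loop controller; all values are illustrative):

```python
import numpy as np
from scipy import signal

# A 100 ppm sampling-rate offset between two nodes recording the same signal.
rng = np.random.default_rng(1)
fs = 16000
x = rng.standard_normal(10 * fs)             # signal at the reference node
y = signal.resample_poly(x, 10000, 10001)    # same signal with 100 ppm SRO

frame = 4096
n_frames = min(len(x), len(y)) // frame
lags = []
for k in range(n_frames):
    a = x[k * frame:(k + 1) * frame]
    b = y[k * frame:(k + 1) * frame]
    xc = signal.correlate(b, a, mode="full")
    lags.append(np.argmax(xc) - (frame - 1))  # integer delay of frame k

# Delay drifts linearly with time; the slope is the (signed) SRO.
slope = np.polyfit(np.arange(n_frames) * frame, lags, 1)[0]
print(f"estimated |SRO| ~ {abs(slope) * 1e6:.0f} ppm (true 100 ppm)")
```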
Evaluations\nwith both speech and Gaussian input demonstrate that the high accuracy of\nmulti-stage processing is maintained at the low complexity of single-stage\n(open-loop) processing.\n"} {"abstract": " We examine a class of random walks in random environments on $\\mathbb{Z}$\nwith bounded jumps, a generalization of the classic one-dimensional model. The\nenvironments we study have i.i.d. transition probability vectors drawn from\nDirichlet distributions. For this model, we characterize recurrence and\ntransience, and in the transient case we characterize ballisticity. For\nballisticity, we give two parameters, $\\kappa_0$ and $\\kappa_1$. The parameter\n$\\kappa_0$ governs finite trapping effects, and $\\kappa_1$ governs repeated\ntraversals of arbitrarily large regions of the graph. We show that the walk is\nright-transient if and only if $\\kappa_1>0$, and in that case it is ballistic\nif and only if $\\min(\\kappa_0,\\kappa_1)>1$.\n"} {"abstract": " G0.253+0.016, aka 'the Brick', is one of the most massive (> 10^5 Msun) and\ndense (> 10^4 cm-3) molecular clouds in the Milky Way's Central Molecular Zone.\nPrevious observations have detected tentative signs of active star formation,\nmost notably a water maser that is associated with a dust continuum source. We\npresent ALMA Band 6 observations with an angular resolution of 0.13\" (1000 AU)\ntowards this 'maser core', and report unambiguous evidence of active star\nformation within G0.253+0.016. We detect a population of eighteen continuum\nsources (median mass ~ 2 Msun), nine of which are driving bi-polar molecular\noutflows as seen via SiO (5-4) emission. At the location of the water maser, we\nfind evidence for a protostellar binary/multiple with multi-directional outflow\nemission. Despite the high density of G0.253+0.016, we find no evidence for\nhigh-mass protostars in our ALMA field. The observed sources are instead\nconsistent with a cluster of low-to-intermediate-mass protostars. However, the\nmeasured outflow properties are consistent with those expected for\nintermediate-to-high-mass star formation. We conclude that the sources are\nyoung and rapidly accreting, and may potentially form intermediate and\nhigh-mass stars in the future. The masses and projected spatial distribution of\nthe cores are generally consistent with thermal fragmentation, suggesting that\nthe large-scale turbulence and strong magnetic field in the cloud do not\ndominate on these scales, and that star formation on the scale of individual\nprotostars is similar to that in Galactic disc environments.\n"} {"abstract": " Authenticated Append-Only Skiplists (AAOSLs) enable maintenance and querying\nof an authenticated log (such as a blockchain) without requiring any single\nparty to store or verify the entire log, or to trust another party regarding\nits contents. AAOSLs can help to enable efficient dynamic participation (e.g.,\nin consensus) and reduce storage overhead.\n In this paper, we formalize an AAOSL originally described by Maniatis and\nBaker, and prove its key correctness properties. Our model and proofs are\nmachine checked in Agda. Our proofs apply to a generalization of the original\nconstruction and provide confidence that instances of this generalization can\nbe used in practice. 
Our formalization effort has also yielded some\nsimplifications and optimizations.\n"} {"abstract": " We define a spectral flow for paths of selfadjoint Fredholm operators that\nare equivariant under the orthogonal action of a compact Lie group as an\nelement of the representation ring of the latter. This $G$-equivariant spectral\nflow shares all common properties of the integer-valued classical spectral\nflow, and it can be non-trivial even if the classical spectral flow vanishes.\nOur main theorem uses the $G$-equivariant spectral flow to study bifurcation of\nperiodic solutions for autonomous Hamiltonian systems with symmetries.\n"} {"abstract": " Deterioration of the operation parameters of an Al/SiO2/p-type Si surface\nbarrier detector upon irradiation with alpha-particles at room temperature was\ninvestigated. As a result of 40 days of irradiation with a total fluence of 8*10^9\n{\\alpha}-particles, an increase of the {\\alpha}-peak FWHM from 70 keV to 100 keV\nwas observed and explained by an increase of the detector reverse current due to\nthe formation of a high concentration of near mid-gap defect levels. CV\nmeasurements revealed the appearance of at least 6*10^12 cm-3 radiation-induced\nacceptors at the depths where, according to the TRIM simulations, the highest\nconcentration of vacancy-interstitial pairs was created by the incoming\n{\\alpha}-particles. Studies carried out with the current-DLTS technique allowed\nus to associate the observed increase of the acceptor concentration with the near\nmid-gap acceptor level at EV+0.56 eV. This level can apparently be associated\nwith V2O defects, previously recognized as responsible for the space charge\nsign inversion in irradiated n-type Si detectors.\n"} {"abstract": " For a function $f\\colon [0,1]\\to\\mathbb R$, we consider the set $E(f)$ of\npoints at which $f$ cuts the real axis. Given $f\\colon [0,1]\\to\\mathbb R$ and a\nCantor set $D\\subset [0,1]$ with $\\{0,1\\}\\subset D$, we obtain conditions\nequivalent to the conjunction $f\\in C[0,1]$ (or $f\\in C^\\infty [0,1]$) and\n$D\\subset E(f)$. This generalizes some ideas of Zabeti. We observe that, if $f$\nis continuous, then $E(f)$ is a closed nowhere dense subset of $f^{-1}[\\{ 0\\}]$\nwhere each $x\\in \\{0,1\\}\\cap E(f)$ is an accumulation point of $E(f)$. Our main\nresult states that, for a closed nowhere dense set $F\\subset [0,1]$ with each\n$x\\in \\{0,1\\}\\cap F$ being an accumulation point of $F$, there exists $f\\in\nC^\\infty [0,1]$ such that $F=E(f)$.\n"} {"abstract": " Graph-based causal discovery methods aim to capture conditional\nindependencies consistent with the observed data and differentiate causal\nrelationships from indirect or induced ones. Successful construction of\ngraphical models of data depends on the assumption of causal sufficiency: that\nis, that all confounding variables are measured. When this assumption is not\nmet, learned graphical structures may become arbitrarily incorrect and effects\nimplied by such models may be wrongly attributed, carry the wrong magnitude, or\nmis-represent direction of correlation. Wide application of graphical models to\nincreasingly less curated \"big data\" draws renewed attention to the unobserved\nconfounder problem.\n We present a novel method that aims to control for the latent space when\nestimating a DAG by iteratively deriving proxies for the latent space from the\nresiduals of the inferred model.
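A toy rendering of the residual-proxy idea just described (a simplified sketch under strong linear-Gaussian assumptions, not the authors' algorithm): a hidden confounder drives every observed variable, and the first principal component of held-out model residuals serves as its proxy, reducing (though not removing) the confounding bias in the fitted edge coefficients.

```python
import numpy as np

# Hidden confounder U drives a chain X1 -> X2 -> ... -> X6 with true edge
# weight beta; naive OLS per edge is biased because U is unobserved.
rng = np.random.default_rng(0)
n, p, beta = 5000, 6, 0.5
U = rng.standard_normal(n)
X = np.empty((n, p))
X[:, 0] = U + rng.standard_normal(n)
for j in range(1, p):
    X[:, j] = beta * X[:, j - 1] + U + rng.standard_normal(n)

def fit(y, Z):
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef

# Stage 1: fit each edge and keep the residuals of every equation.
res = [X[:, j] - X[:, j - 1:j] @ fit(X[:, j], X[:, j - 1:j]) for j in range(1, p)]
R = np.column_stack(res)

# Stage 2: PCA proxy for U from the residuals of the *other* equations,
# then refit each edge with the proxy as an extra covariate.
naive, fixed = [], []
for j in range(1, p):
    held_out = np.delete(R, j - 1, axis=1)
    held_out = held_out - held_out.mean(axis=0)
    proxy = np.linalg.svd(held_out, full_matrices=False)[0][:, 0]
    naive.append(fit(X[:, j], X[:, j - 1:j])[0])
    fixed.append(fit(X[:, j], np.column_stack([X[:, j - 1], proxy]))[0])

print("true beta :", beta)
print("naive OLS :", np.round(naive, 2))   # biased upward by U
print("with proxy:", np.round(fixed, 2))   # closer to the true beta
```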
Under mild assumptions, our method improves\nstructural inference of Gaussian graphical models and enhances identifiability\nof the causal effect. In addition, when the model is being used to predict\noutcomes, it un-confounds the coefficients on the parents of the outcomes and\nleads to improved predictive performance when the out-of-sample regime is very\ndifferent from the training data. We show that any improvement in the prediction\nof an outcome is intrinsically capped and cannot rise beyond a certain limit as\ncompared to the confounded model. We extend our methodology beyond GGMs to\nordinal variables and nonlinear cases. Our R package provides both PCA and\nautoencoder implementations of the methodology: the former is suitable for GGMs\nand comes with some guarantees, while the latter offers better performance in\ngeneral cases, but without such guarantees.\n"} {"abstract": " The Multichannel Subtractive Double Pass (MSDP) is an imaging spectroscopy\ntechnique, which allows observations of spectral line profiles over a 2D field\nof view with high spatial and temporal resolution. It has been intensively used\nsince 1977 on various spectrographs (Meudon, Pic du Midi, the German Vacuum\nTower Telescope, THEMIS, Wroc{\\l}aw). We summarize previous developments and\ndescribe the capabilities of a new design that has been developed at Meudon and\nthat has higher spectral resolution and an increased channel number: Spectral\nSampling with Slicer for Solar Instrumentation (S4I), which can be combined\nwith a new and fast polarimetry analysis. This new generation MSDP technique is\nwell adapted to large telescopes. Also presented are the goals of a derived\ncompact version of the instrument, the Solar Line Emission Dopplerometer\n(SLED), dedicated to dynamic studies of coronal loops observed in the forbidden\niron lines, and of prominences. It is designed for observing total solar\neclipses, and for deployment on the Wroc{\\l}aw and Lomnicky peak coronagraphs\nfor prominence and coronal observations, respectively.\n"} {"abstract": " We exhibit a non-hyperelliptic curve C of genus 3 such that the class of the\nCeresa cycle [C]-[(-1)*C] in JC modulo algebraic equivalence is torsion.\n"} {"abstract": " This work presents the results of project CONECT4, which addresses the\nresearch and development of new non-intrusive communication methods for the\ngeneration of a human-machine learning ecosystem oriented to predictive\nmaintenance in the automotive industry. Through the use of innovative\ntechnologies such as Augmented Reality, Virtual Reality, Digital Twin and\nexpert knowledge, CONECT4 implements methodologies that allow improving the\nefficiency of training techniques and knowledge management in industrial\ncompanies. The research has been supported by the development of content and\nsystems with a low level of technological maturity that address solutions for\nthe industrial sector applied in training and assistance to the operator. The\nresults have been analyzed in companies in the automotive sector; however, they\nare exportable to any other type of industrial sector.\n"}
 {"abstract": " This paper reports 209 O-type stars found with LAMOST. All 135 new O-type\nstars discovered with LAMOST so far are given. Among them, 94 stars are\npresented here for the first time. There are 1 Iafpe star, 5 Onfp stars, 12 Oe\nstars, 1 Ofc star, 3 ON stars, 16 double-lined spectroscopic binaries, and 33\nsingle-lined spectroscopic binaries. All O-type stars are determined based on\nLAMOST low-resolution spectra (R ~ 1800), with their LAMOST median-resolution\nspectra (R~7500) as supplements.\n"} {"abstract": " The world-wide COVID-19 pandemic has strongly intensified the studies of\nmolecular mechanisms related to the coronaviruses. The origin of coronaviruses\nand the risks of human-to-human, animal-to-human, and human-to-animal\ntransmission of coronaviral infections can be understood only on a broader\nevolutionary level by detailed comparative studies. In this paper, we studied\nribonucleocapsid assembly-packaging signals (RNAPS) in the genomes of all seven\nknown pathogenic human coronaviruses, SARS-CoV, SARS-CoV-2, MERS-CoV,\nHCoV-OC43, HCoV-HKU1, HCoV-229E, and HCoV-NL63, and compared them with RNAPS in\nthe genomes of the related animal coronaviruses including SARS-Bat-CoV,\nMERS-Camel-CoV, MHV, Bat-CoV MOP1, TGEV, and one of the camel\nalphacoronaviruses. RNAPS in the genomes of coronaviruses evolved due to weakly\nspecific interactions between genomic RNA and N proteins in helical\nnucleocapsids. Combining transitional genome mapping and Jaccard correlation\ncoefficients allows us to perform the analysis directly in terms of underlying\nmotifs distributed over the genome. In all coronaviruses, RNAPS were\ndistributed quasi-periodically over the genome, with a period of about 54 nt,\nbiased to 57 nt and to 51 nt for the genomes longer and shorter than that of\nSARS-CoV, respectively. The comparison with the experimentally verified\npackaging signals for MERS-CoV, MHV, and TGEV proved that the distribution of\nparticular motifs is strongly correlated with the packaging signals. We also\nfound that many motifs were highly conserved in both character and positioning\non the genomes throughout the lineages, which makes them promising therapeutic\ntargets. The mechanisms of encapsidation can affect the recombination and\nco-infection as well.\n"} {"abstract": " This paper establishes new connections between many-body quantum systems,\nOne-body Reduced Density Matrices Functional Theory (1RDMFT) and Optimal\nTransport (OT), by interpreting the problem of computing the ground-state\nenergy of a finite-dimensional composite quantum system at positive temperature\nas a non-commutative entropy regularized Optimal Transport problem. We develop\na new approach to fully characterize the dual-primal solutions in such a\nnon-commutative setting.
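For background on the optimal-transport abstract above: in the commutative, discrete, two-marginal case the entropy-regularized problem is solved by the classical Sinkhorn iteration, which the paper lifts to the non-commutative setting. A minimal sketch of the classical algorithm (sizes and data are illustrative):

```python
import numpy as np

# Classical Sinkhorn iteration for entropy-regularized OT between discrete
# marginals mu and nu with cost matrix C; the non-commutative version
# replaces these vectors and the kernel by density matrices.
rng = np.random.default_rng(0)
m = 5
mu = np.full(m, 1.0 / m)
nu = rng.random(m); nu /= nu.sum()
C = rng.random((m, m))
eps = 0.05                        # regularization strength

K = np.exp(-C / eps)              # Gibbs kernel
u, v = np.ones(m), np.ones(m)
for _ in range(2000):             # alternating diagonal scaling updates
    u = mu / (K @ v)
    v = nu / (K.T @ u)

P = u[:, None] * K * v[None, :]   # optimal entropic coupling
print(np.allclose(P.sum(1), mu), np.allclose(P.sum(0), nu))
```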
The mathematical formalism is particularly relevant in\nquantum chemistry: numerical realizations of the many-electron ground state\nenergy can be computed via a non-commutative version of the Sinkhorn algorithm.\nOur approach allows us to prove convergence and robustness of this algorithm,\nwhich, to the best of our knowledge, were unknown even in the two-marginal\ncase. Our methods are based on careful a priori estimates in the dual problem,\nwhich we believe to be of independent interest. Finally, the above results are\nextended to the 1RDMFT setting, where bosonic or fermionic symmetry conditions\nare enforced on the problem.\n"} {"abstract": " Stochastic gradient Markov chain Monte Carlo (SGMCMC) is a popular class of\nalgorithms for scalable Bayesian inference. However, these algorithms include\nhyperparameters such as step size or batch size that influence the accuracy of\nestimators based on the obtained posterior samples. As a result, these\nhyperparameters must be tuned by the practitioner and currently no principled\nand automated way to tune them exists. Standard MCMC tuning methods based on\nacceptance rates cannot be used for SGMCMC, thus requiring alternative tools\nand diagnostics. We propose a novel bandit-based algorithm that tunes the\nSGMCMC hyperparameters by minimizing the Stein discrepancy between the true\nposterior and its Monte Carlo approximation. We provide theoretical results\nsupporting this approach and assess various Stein-based discrepancies. We\nsupport our results with experiments on both simulated and real datasets, and\nfind that this method is practical for a wide range of applications.\n"} {"abstract": " Observations of the redshifted 21-cm line of neutral hydrogen (HI) are a new\nand powerful window of observation that offers us the possibility to map the\nspatial distribution of cosmic HI and learn about cosmology. BINGO (Baryon\nAcoustic Oscillations [BAO] from Integrated Neutral Gas Observations) is a new,\nunique radio telescope designed to be one of the first to probe BAO at radio\nfrequencies. BINGO has two science goals: cosmology and astrophysics. Cosmology\nis the main science goal and the driver for BINGO's design and strategy. The\nkey aim of BINGO is to detect the low-redshift BAO and thereby put strong\nconstraints on dark sector models. Given the versatility of the BINGO\ntelescope, a secondary goal is astrophysics, where BINGO can help discover and\nstudy Fast Radio Bursts (FRB) and other transients, as well as Galactic and\nextragalactic science. In this paper, we introduce the latest progress of the\nBINGO project, its science goals, the scientific potential of the project in\neach area, and the new developments obtained by the collaboration. We introduce\nthe BINGO project and its science goals and give a general summary of recent\ndevelopments in construction, science potential and pipeline development\nobtained by the BINGO collaboration in the past few years. We show that BINGO\nwill be able to obtain competitive constraints for the dark sector, and also\nthat it will allow for the discovery of several FRBs in the southern\nhemisphere. The capacity of BINGO to obtain information from the 21-cm signal\nis also tested in the pipeline introduced here. There is still no measurement\nof the BAO at radio frequencies, and studying cosmology in this new window of\nobservations is one of the most promising advances in the field.
The BINGO project is a radio telescope whose goal is to be one of\nthe first to perform this measurement; it is currently being built in the\nnortheast of Brazil. (Abridged)\n"} {"abstract": " We explore cross-lingual transfer of register classification for web\ndocuments. Registers, that is, text varieties such as blogs or news, are one of\nthe primary predictors of linguistic variation and thus affect the automatic\nprocessing of language. We introduce two new register annotated corpora,\nFreCORE and SweCORE, for French and Swedish. We demonstrate that deep\npre-trained language models perform strongly in these languages and outperform\nthe previous state of the art in English and Finnish. Specifically, we show 1)\nthat zero-shot cross-lingual transfer from the large English CORE corpus can\nmatch or surpass previously published monolingual models, and 2) that\nlightweight monolingual classification requiring very little training data can\nreach or surpass our zero-shot performance. We further analyse classification\nresults, finding that certain registers continue to pose challenges, in\nparticular for cross-lingual transfer.\n"} {"abstract": " Compact binary systems emit gravitational radiation which is potentially\ndetectable by current Earth-bound detectors. Extracting these signals from the\ninstruments' background noise is a complex problem and the computational cost\nof most current searches depends on the complexity of the source model. Deep\nlearning may be capable of finding signals where current algorithms hit\ncomputational limits. Here we restrict our analysis to signals from\nnon-spinning binary black holes and systematically test different strategies by\nwhich training data is presented to the networks. To assess the impact of the\ntraining strategies, we re-analyze the first published networks and directly\ncompare them to an equivalent matched-filter search. We find that the deep\nlearning algorithms can generalize low signal-to-noise ratio (SNR) signals to\nhigh SNR ones but not vice versa. As such, it is not beneficial to provide high\nSNR signals during training, and the fastest convergence is achieved when low\nSNR samples are provided early on. During testing we found that the networks\nare sometimes unable to recover any signals when a false alarm probability\n$<10^{-3}$ is required. We resolve this restriction by applying a modification\nwe call unbounded Softmax replacement (USR) after training. With this\nalteration we find that the machine learning search retains $\\geq 97.5\\%$ of\nthe sensitivity of the matched-filter search down to a false-alarm rate of 1\nper month.\n"} {"abstract": " The position of the Sun inside the Milky Way's disc hampers the study of the\nspiral arm structure. We aim to analyse the spiral arms along the line-of-sight\ntowards the Galactic centre (GC) to determine their distance, extinction, and\nstellar population. We use the GALACTICNUCLEUS survey, a JHKs high angular\nresolution photometric catalogue (0.2\") for the innermost regions of the\nGalaxy. We fitted simple synthetic colour-magnitude models to our data via\n$\\chi^2$ minimisation. We computed the distance and extinction to the detected\nspiral arms. We also analysed the extinction curve and the relative extinction\nbetween the detected features. Finally, we built extinction-corrected Ks\nluminosity functions (KLFs) to study the stellar populations present in the\nsecond and third spiral arm features.
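Returning to the unbounded Softmax replacement (USR) mentioned in the gravitational-wave search abstract above: one simple way to realize an unbounded ranking statistic of that kind is to drop the final two-class Softmax and rank candidates by the logit difference, which equals the log odds and does not saturate at 1. A schematic numpy sketch (our illustrative reading, not necessarily the authors' exact construction):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two-class network outputs (logits) for a few hypothetical test samples.
logits = np.array([[12.0, -3.0], [35.0, -9.0], [1.0, 0.5]])

p_signal = softmax(logits)[:, 0]        # saturates at 1.0 after the Softmax
ranking = logits[:, 0] - logits[:, 1]   # unbounded log-odds statistic

print(np.round(p_signal, 6))  # the first two are both ~1.0, indistinguishable
print(ranking)                # 15.0 vs 44.0: loud candidates stay separated
```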
We determined the mean distances to the\nspiral arms: $d1=1.6\\pm0.2$, $d2=2.6\\pm0.2$, $d3=3.9\\pm0.3$, and $d4=4.5\\pm0.2$\nkpc, and the mean extinctions: $A_{H1}=0.35\\pm0.08$, $A_{H2}=0.77\\pm0.08$,\n$A_{H3}=1.68\\pm0.08$, and $A_{H4}=2.30\\pm0.08$ mag. We analysed the extinction\ncurve in the near infrared for the stars in the spiral arms and found mean\nvalues of $A_J/A_{H}=1.89\\pm0.11$ and $A_H/A_{K_s}=1.86\\pm0.11$, in agreement\nwith the results obtained for the GC. This implies that the shape of the\nextinction curve does not depend on distance or absolute extinction. We also\nbuilt extinction maps for each spiral arm and found that they are homogeneous\nand might correspond to independent extinction layers. Finally, analysing the\nKLFs from the second and the third spiral arms, we found that they have similar\nstellar populations. We obtained two main episodes of star formation: $>6$ Gyr\n($\\sim60-70\\%$ of the stellar mass), and $1.5-4$ Gyr ($\\sim20-30\\%$ of the\nstellar mass), compatible with previous work. We also detected recent star\nformation at a lower level ($\\sim10\\%$) for the third spiral arm.\n"} {"abstract": " We present a comprehensive comparison of spin and energy dynamics in quantum\nand classical spin models on different geometries, ranging from one-dimensional\nchains, over quasi-one-dimensional ladders, to two-dimensional square lattices.\nFocusing on dynamics at formally infinite temperature, we particularly consider\nthe autocorrelation functions of local densities, where the time evolution is\ngoverned either by the linear Schr\\\"odinger equation in the quantum case, or\nthe nonlinear Hamiltonian equations of motion in the case of classical\nmechanics. While, in full generality, a quantitative agreement between quantum\nand classical dynamics can therefore not be expected, our large-scale numerical\nresults for spin-$1/2$ systems with up to $N = 36$ lattice sites in fact defy\nthis expectation. Specifically, we observe a remarkably good agreement for all\ngeometries, which is best for the nonintegrable quantum models in quasi-one or\ntwo dimensions, but still satisfactory in the case of integrable chains, at\nleast if transport properties are not dominated by the extensive number of\nconservation laws. Our findings indicate that classical or semi-classical\nsimulations provide a meaningful strategy to analyze the dynamics of quantum\nmany-body models, even in cases where the spin quantum number $S = 1/2$ is\nsmall and far away from the classical limit $S \\to \\infty$.\n"} {"abstract": " Event coreference continues to be a challenging problem in information\nextraction. In the absence of any external knowledge bases for events,\ncoreference becomes a clustering task that relies on effective representations\nof the context in which event mentions appear. Recent advances in\ncontextualized language representations have proven successful in many tasks;\nhowever, their use in event linking has been limited. Here we present a\nthree-part approach that (1) uses representations derived from a pretrained\nBERT model to (2) train a neural classifier to (3) drive a simple clustering\nalgorithm to create coreference chains. We achieve state-of-the-art results\nwith this model on two standard datasets for the within-document event\ncoreference task and establish a new standard on a third, newer dataset.\n"} {"abstract": " Safety is a fundamental requirement in any human-robot collaboration\nscenario.
To ensure the safety of users in such scenarios, we propose a novel\nVirtual Barrier system facilitated by an augmented reality interface. Our\nsystem provides two kinds of Virtual Barriers to ensure safety: 1) a Virtual\nPerson Barrier which encapsulates and follows the user to protect them from\ncolliding with the robot, and 2) Virtual Obstacle Barriers which users can\nspawn to protect objects or regions that the robot should not enter. To enable\neffective human-robot collaboration, our system includes an intuitive robot\nprogramming interface utilizing speech commands and hand gestures, and features\nthe capability of automatic path re-planning when potential collisions are\ndetected as a result of a barrier intersecting the robot's planned path. We\ncompared our novel system with a standard 2D display interface through a user\nstudy, where participants performed a task mimicking an industrial\nmanufacturing procedure. Results show that our system increases the user's\nsense of safety and task efficiency, and makes the interaction more intuitive.\n"} {"abstract": " We show that the change of basis matrices of a set of $m$ bases of a finite\nvector space form a connected groupoid of order $m^2$. We define a general\nmethod to express the elements of change of basis matrices as algebraic\nexpressions by optimizing the evaluations of vector dot products. Examples are\ngiven with orthogonal polynomials.\n"} {"abstract": " We experimentally observe the dipole scattering of a nanoparticle using a\nhigh numerical aperture (NA) imaging system. The optically levitated\nnanoparticle provides an environment free of particle-substrate interaction. We\nilluminate the silica nanoparticle in vacuum with a 532 nm laser beam\northogonally to the propagation direction of the 1064 nm trapping laser beam\nstrongly focused by the same high NA objective used to collect the scattering,\nwhich results in a dark background and a high signal-to-noise ratio. The dipole\norientations of the nanoparticle induced by the linear polarization of the\nincident laser are studied by measuring the scattered light distribution in\nthe image and the Fourier space (k-space) as we rotate the illuminating light\npolarization. The polarization vortex (vector beam) is observed for the special\ncase when the dipole orientation of the nanoparticle is aligned along the\noptical axis of the microscope objective. Our work offers an important platform\nfor studying scattering anisotropy under Kerker conditions.\n"} {"abstract": " We prove that the Feynman Path Integral is equivalent to a novel stringy\ndescription of elementary particles characterized by a single compact (cyclic)\nworld-line parameter playing the role of the particle's internal clock. Such a\npossible description of elementary particles as characterized by intrinsic\nperiodicity in time has been indirectly confirmed, even experimentally, by\nrecent developments in Time Crystals. We obtain an exact unified formulation of\nquantum and relativistic physics, potentially deterministic and fully\nfalsifiable, having no fine-tunable parameters, which was also proven in\nprevious papers to be completely consistent with all known physics, from\ntheoretical physics to condensed matter.
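The groupoid statement in the change-of-basis abstract above can be checked numerically in a few lines: with m = 3 bases of R^2 there are m^2 = 9 change-of-basis matrices, closed under composition whenever the middle basis matches. A minimal sketch (the bases are arbitrary examples):

```python
import numpy as np

# Three bases of R^2; T[a][b] maps coordinates w.r.t. basis a to basis b.
bases = [np.eye(2),
         np.array([[1.0, 1.0], [0.0, 1.0]]),
         np.array([[2.0, 0.0], [1.0, 1.0]])]   # columns are basis vectors

T = [[np.linalg.solve(B, A) for B in bases] for A in bases]  # T[a][b] = B^-1 A

# Groupoid law: composing a->b with b->c gives a->c (9 = m^2 elements).
ok = all(np.allclose(T[b][c] @ T[a][b], T[a][c])
         for a in range(3) for b in range(3) for c in range(3))
print(ok)  # True
```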
New physics will be discovered by probing quantum\nphenomena with an experimental time accuracy of the order of $10^{-21}$ sec.\n"} {"abstract": " Recent papers on the theory of representation learning have shown the\nimportance of a quantity called diversity when generalizing from a set of\nsource tasks to a target task. Most of these papers assume that the function\nmapping shared representations to predictions is linear, for both source and\ntarget tasks. In practice, researchers in deep learning use different numbers\nof extra layers following the pretrained model based on the difficulty of the\nnew task. This motivates us to ask whether diversity can be achieved when\nsource tasks and the target task use different prediction function spaces\nbeyond linear functions. We show that diversity holds even if the target task\nuses a neural network with multiple layers, as long as source tasks use linear\nfunctions. If source tasks use nonlinear prediction functions, we provide a\nnegative result by showing that depth-1 neural networks with ReLU activation\nfunctions need exponentially many source tasks to achieve diversity. For a\ngeneral function class, we find that the eluder dimension gives a lower bound\non the number of tasks required for diversity. Our theoretical results imply\nthat simpler tasks generalize better. Though our theoretical results are shown\nfor the global minimizer of empirical risks, their qualitative predictions\nstill hold true for gradient-based optimization algorithms, as verified by our\nsimulations on deep neural networks.\n"} {"abstract": " In [Kim05], Kim gave a new proof of Siegel's Theorem that there are only\nfinitely many $S$-integral points on $\\mathbb P^1_{\\mathbb\nZ}\\setminus\\{0,1,\\infty\\}$. One advantage of Kim's method is that it in\nprinciple allows one to actually find these points, but the calculations grow\nvastly more complicated as the size of $S$ increases. In this paper, we\nimplement a refinement of Kim's method, introduced in [BD19], to explicitly\ncompute various examples where $S$ has size $2$. In so doing, we exhibit new\nexamples of a natural generalisation of a conjecture of Kim.\n"} {"abstract": " This tool paper presents the High-Assurance ROS (HAROS) framework. HAROS is a\nframework for the analysis and quality improvement of robotics software\ndeveloped using the popular Robot Operating System (ROS). It builds on a static\nanalysis foundation to automatically extract models from the source code. Such\nmodels are later used to enable other sorts of analyses, such as Model\nChecking, Runtime Verification, and Property-based Testing. It has been applied\nto multiple real-world examples, helping developers find and correct various\nissues.\n"} {"abstract": " The latest conjunction of Jupiter and Saturn occurred at an optical distance\nof 6 arc minutes on 21 December 2020. We re-analysed all encounters of these\ntwo planets between -1000 and +3000 CE, as the extraordinary ones\n(<10$^{\\prime}$) take place near the line of nodes every 400 years. An\noccultation of their discs did not and will not happen within the historical\ntime span of $\\pm$5,000 years around now. When viewed from Neptune though,\nthere will be an occultation in 2046.\n"} {"abstract": " Filters (such as Bloom Filters) are data structures that speed up network\nrouting and measurement operations by storing a compressed representation of a\nset.
Filters are space efficient, but can make bounded one-sided errors: with\ntunable probability epsilon, they may report that a query element is stored in\nthe filter when it is not. This is called a false positive. Recent research has\nfocused on designing methods for dynamically adapting filters to false\npositives, reducing the number of false positives when some elements are\nqueried repeatedly.\n Ideally, an adaptive filter would incur a false positive with bounded\nprobability epsilon for each new query element, and would incur o(epsilon)\ntotal false positives over all repeated queries to that element. We call such a\nfilter support optimal.\n In this paper we design a new Adaptive Cuckoo Filter and show that it is\nsupport optimal (up to additive logarithmic terms) over any n queries when\nstoring a set of size n. Our filter is simple: fixing previous false positives\nrequires a simple cuckoo operation, and the filter does not need to store any\nadditional metadata. It is the first practical data structure that is support\noptimal, and the first filter that does not require additional space to fix\nfalse positives.\n We complement these bounds with experiments showing that our data structure\nis effective at fixing false positives on network traces, outperforming\nprevious Adaptive Cuckoo Filters.\n Finally, we investigate adversarial adaptivity, a stronger notion of\nadaptivity in which an adaptive adversary repeatedly queries the filter, using\nthe results of previous queries to drive the false positive rate as high as\npossible. We prove a lower bound showing that a broad family of filters,\nincluding all known Adaptive Cuckoo Filters, can be forced by such an adversary\nto incur a large number of false positives.\n"} {"abstract": " Let $(G,K)$ be a Gelfand pair, with $G$ a Lie group of polynomial growth, and\nlet $\\Sigma\\subset{\\mathbb R}^\\ell$ be a homeomorphic image of the Gelfand\nspectrum, obtained by choosing a generating system $D_1,\\dots,D_\\ell$ of\n$G$-invariant differential operators on $G/K$ and associating to a bounded\nspherical function $\\varphi$ the $\\ell$-tuple of its eigenvalues under the\naction of the $D_j$'s.\n We say that property (S) holds for $(G,K)$ if the spherical transform maps\nthe bi-$K$-invariant Schwartz space ${\\mathcal S}(K\\backslash G/K)$\nisomorphically onto ${\\mathcal S}(\\Sigma)$, the space of restrictions to\n$\\Sigma$ of the Schwartz functions on ${\\mathbb R}^\\ell$. This property is\nknown to hold for many nilpotent pairs, i.e., Gelfand pairs where $G=K\\ltimes\nN$, with $N$ nilpotent.\n In this paper we enlarge the scope of this analysis outside the range of\nnilpotent pairs, stating the basic setting for general pairs of polynomial\ngrowth and then focussing on strong Gelfand pairs.\n"} {"abstract": " Recent photometric surveys of Trans-Neptunian Objects (TNOs) have revealed\nthat the cold classical TNOs have distinct z-band color characteristics and\noccupy their own distinct surface class. This suggested the presence of an\nabsorption band in the reflectance spectra of cold classicals at wavelengths\nabove 0.8 micron. Here we present reflectance spectra spanning 0.55-1.0 micron\nfor six TNOs occupying dynamically cold orbits at semimajor axes close to 44\nau. Five of our spectra show a clear and broadly consistent reduction in\nspectral gradient above 0.8 micron that diverges from their linear red optical\ncontinuum and agrees with their reported photometric colour data.
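Referring back to the adaptive-filter abstract above: the flavor of "fixing a false positive with a cuckoo operation" can be sketched with a simplified filter in which the fingerprint function depends on which of the two candidate buckets an entry occupies, so that moving an entry changes its stored fingerprint and removes the collision. This is an illustration of the general idea, not the authors' construction (in particular, we keep the original keys alongside for brevity, where a real deployment would recover them from the backing store):

```python
import random

NB = 1 << 10  # number of buckets (power of two)

def bucket(x, side):
    return hash((side, "b", x)) % NB

def fingerprint(x, side):
    return hash((side, "f", x)) & 0xFF

class AdaptiveCuckooFilter:
    def __init__(self):
        self.slots = [[] for _ in range(NB)]  # entries: (fingerprint, side, key)

    def insert(self, x):
        side = random.randint(0, 1)
        self.slots[bucket(x, side)].append((fingerprint(x, side), side, x))

    def query(self, x):
        return any(fp == fingerprint(x, side)
                   for side in (0, 1)
                   for fp, s, _ in self.slots[bucket(x, side)]
                   if s == side)

    def adapt(self, x):
        # x was a false positive: kick each colliding entry to its other
        # bucket, where a different fingerprint function is used.
        for side in (0, 1):
            b = self.slots[bucket(x, side)]
            for entry in [e for e in b
                          if e[1] == side and e[0] == fingerprint(x, side)]:
                b.remove(entry)
                _, _, key = entry
                new = 1 - side
                self.slots[bucket(key, new)].append(
                    (fingerprint(key, new), new, key))

# Demo: adapting removes (almost all) repeated false positives.
f = AdaptiveCuckooFilter()
for k in range(5000):
    f.insert(("stored", k))
probes = [("absent", k) for k in range(5000)]
fp_before = [p for p in probes if f.query(p)]
for p in fp_before:
    f.adapt(p)
fp_after = [p for p in fp_before if f.query(p)]
print(len(fp_before), "->", len(fp_after))  # counts vary per run
```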
Despite\npredictions, we find no evidence that the spectral flattening is caused by an\nabsorption band centered near 1.0 micron. We predict that the overall\nconsistent shape of these five spectra is related to the presence of similar\nrefractory organics on each of their surfaces, and/or to their similar physical\nsurface properties such as porosity or grain size distribution. The observed\nconsistency of the reflectance spectra of these five targets aligns with\npredictions that the cold classicals share a common history in terms of\nformation and surface evolution. Our sixth target, which has been ambiguously\nclassified as either a hot or cold classical at various points in the past, has\na spectrum which remains nearly linear across the full range observed. This\nsuggests that this TNO is a hot classical interloper in the cold classical\ndynamical range, and supports the idea that other such interlopers may be\nidentifiable by their linear reflectance spectra in the range 0.8-1.0 micron.\n"} {"abstract": " We study the relations between the positive frequency mode functions of the\nDirac field in 4-dimensional Minkowski spacetime covered with Rindler and\nKasner coordinates. We describe the explicit form of the Minkowski vacuum\nstate in terms of the quantum states in the Kasner and Rindler regions, and\nanalytically continue the solutions. As a result, we obtain the correspondence\nof the positive frequency mode functions in the Kasner region and the Rindler\nregion in a unified manner, which yields the vacuum entanglement.\n"} {"abstract": " Based on a progressively type-II censored sample from the exponential\ndistribution with unknown location and scale parameter, confidence bands are\nproposed for the underlying distribution function by using confidence regions\nfor the parameters and Kolmogorov-Smirnov type statistics. Simple explicit\nrepresentations for the boundaries and for the coverage probabilities of the\nconfidence bands are analytically derived, and the performance of the bands is\ncompared in terms of band width and area by means of a data example. As a\nby-product, a novel confidence region for the location-scale parameter is\nobtained. Extensions of the results to related models for ordered data, such as\nsequential order statistics, as well as to other underlying location-scale\nfamilies of distributions are discussed.\n"} {"abstract": " The famous Yang-Yau inequality provides an upper bound for the first\neigenvalue of the Laplacian on an orientable Riemannian surface solely in terms\nof its genus $\\gamma$ and the area. Its proof relies on the existence of\nholomorphic maps to $\\mathbb{CP}^1$ of low degree. Very recently, A.~Ros was\nable to use certain holomorphic maps to $\\mathbb{CP}^2$ in order to give a\nquantitative improvement of the Yang-Yau inequality for $\\gamma=3$. In the\npresent paper, we generalize Ros' argument to make use of holomorphic maps to\n$\\mathbb{CP}^n$ for any $n>0$. As an application, we obtain a quantitative\nimprovement of the Yang-Yau inequality for all genera $\\gamma>3$ except for\n$\\gamma = 4,6,8,10,14$.\n"} {"abstract": " All yield criteria that determine the onset of plastic deformation in\ncrystalline materials must be invariant under the inversion symmetry associated\nwith a simultaneous change of sign of the slip direction and the slip plane\nnormal. We demonstrate the consequences of this symmetry on the functional form\nof the effective stress, where only the lowest order terms that obey this\nsymmetry are retained.
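To make the symmetry constraint in the last abstract concrete: under $(\mathbf{s},\mathbf{n})\to(-\mathbf{s},-\mathbf{n})$ any stress projection onto a dyad whose two vectors both change sign is invariant, so the lowest-order (linear-in-stress) effective stress is a combination of such resolved terms. A sketch of the generic form (the coefficients $a_i$ and auxiliary directions are placeholders in the spirit of the non-Schmid literature, not quantities taken from the abstract):

```latex
% Inversion symmetry: (s, n) -> (-s, -n) leaves dyadic projections invariant,
%   \sigma : ((-u) \otimes (-v)) = \sigma : (u \otimes v),
% so the lowest-order effective stress is a sum of resolved stress terms:
\tau^{*} \;=\;
\underbrace{\mathbf{s}\cdot\boldsymbol{\sigma}\,\mathbf{n}}_{\text{Schmid term}}
\;+\; \sum_{i} a_{i}\,\mathbf{u}_{i}\cdot\boldsymbol{\sigma}\,\mathbf{v}_{i},
\qquad \tau^{*} \;\geq\; \tau_{c},
```

where each auxiliary pair $(\mathbf{u}_i,\mathbf{v}_i)$ is built from the slip geometry so that it also flips sign pairwise under the inversion; setting all $a_i=0$ recovers the Schmid law.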
A particular form of yield criterion is obtained for\nmaterials that do not obey the Schmid law, hereafter called non-Schmid\nmaterials. Application of this model to body-centered cubic and hexagonal\nclose-packed metals shows under which conditions the non-Schmid stress terms\nbecome significant in predicting the onset of yielding. In the special case\nwhere the contributions of all non-Schmid stresses vanish, this model reduces\nto the maximum shear stress theory of Tresca.\n"} {"abstract": " We explore recent progress and open questions concerning local minima and\nsaddle points of the Cahn--Hilliard energy in $d\\geq 2$ and the critical\nparameter regime of large system size and mean value close to $-1$. We employ\nthe String Method of E, Ren, and Vanden-Eijnden -- a numerical algorithm for\ncomputing transition pathways in complex systems -- in $d=2$ to gain additional\ninsight into the properties of the minima and saddle point. Motivated by the\nnumerical observations, we adapt a method of Caffarelli and Spruck to study\nconvexity of level sets in $d\\geq 2$.\n"} {"abstract": " Federated Learning is an emerging privacy-preserving distributed machine\nlearning approach to building a shared model by performing distributed training\nlocally on participating devices (clients) and aggregating the local models\ninto a global one. As this approach prevents data collection and aggregation,\nit helps in reducing associated privacy risks to a great extent. However, the\ndata samples across all participating clients are usually not independent and\nidentically distributed (non-iid), and Out of Distribution (OOD) generalization\nfor the learned models can be poor. Besides this challenge, federated learning\nalso remains vulnerable to various security attacks, wherein a few malicious\nparticipating entities work towards inserting backdoors, degrading the\naggregated model, or inferring the data owned by participating entities. In\nthis paper, we propose an approach for learning invariant (causal) features\ncommon to all participating clients in a federated learning setup and analyze\nempirically how it enhances the Out of Distribution (OOD) accuracy as well as\nthe privacy of the final learned model.\n"} {"abstract": " Ostrovsky's equation with time- and space-dependent forcing is studied. This\nequation is a model for long waves in a rotating fluid with a non-constant\ndepth (topography). A classification of Lie point symmetries and low-order\nconservation laws is presented. Generalized travelling wave solutions are\nobtained through symmetry reduction. These solutions exhibit a wave profile\nthat is stationary in a moving reference frame whose speed can be constant,\naccelerating, or decelerating.\n"} {"abstract": " A subalgebra $\\mathcal{A}$ of a $C^*$-algebra $\\mathcal{M}$ is logmodular\n(resp. has factorization) if the set $\\{a^*a; a\\text{ is invertible with\n}a,a^{-1}\\in\\mathcal{A}\\}$ is dense in (resp. equal to) the set of all positive\nand invertible elements of $\\mathcal{M}$. There are large classes of\nwell-studied algebras, both in commutative and non-commutative settings, which\nare known to be logmodular. In this paper, we show that the lattice of\nprojections in a von Neumann algebra $\\mathcal{M}$ whose ranges are invariant\nunder a logmodular algebra in $\\mathcal{M}$, is a commutative subspace lattice.\nFurther, if $\\mathcal{M}$ is a factor then this lattice is a nest.
As a special\ncase, it follows that all reflexive (in particular, completely distributive\nCSL) logmodular subalgebras of type I factors are nest algebras, thus answering\na question of Paulsen and Raghupathi [Trans. Amer. Math. Soc., 363 (2011)\n2627-2640]. We also discuss some sufficient criteria under which an algebra\nhaving factorization is automatically reflexive and is a nest algebra.\n"} {"abstract": " Quantum computing, an innovative computing paradigm with a prominent\nprocessing rate, is meant to provide solutions to problems in many fields.\nAmong these realms, the most intuitive application is to help chemical\nresearchers correctly describe strong correlation and complex systems, which\nare the great challenges in current chemistry simulation. In this paper, we\npresent a standalone quantum simulation tool for chemistry, ChemiQ, which is\ndesigned to assist people in carrying out chemical research or molecular\ncalculations on real or virtual quantum computers. Following the idea of\nmodular programming in the C++ language, the software is designed as a\nfull-stack tool without third-party physics or chemistry application packages.\nIt provides services as follows: visually constructing molecular structures,\nquickly simulating ground-state energies, scanning molecular potential energy\ncurves by distance or angle, studying chemical reactions, and returning\ncalculation results graphically after analysis.\n"} {"abstract": " Microwave circulators play an important role in quantum technology based on\nsuperconducting circuits. The conventional circulator design, which employs\nferrite materials, is bulky and involves strong magnetic fields, rendering it\nunsuitable for integration on superconducting chips. One promising design for\nan on-chip superconducting circulator is based on a passive Josephson-junction\nring. In this paper, we consider two operational issues for such a device:\ncircuit tuning and the effects of quasiparticle tunneling. We compute the\nscattering matrix using adiabatic elimination and derive the parameter\nconstraints to achieve optimal circulation. We then numerically optimize the\ncirculator performance over the full set of external control parameters,\nincluding gate voltages and flux bias, to demonstrate that this\nmulti-dimensional optimization converges quickly to find optimal working\npoints. We also consider the possibility of quasiparticle tunneling in the\ncirculator ring and how it affects signal circulation. Our results form the\nbasis for practical operation of a passive on-chip superconducting circulator\nmade from a ring of Josephson junctions.\n"} {"abstract": " A Robinson similarity matrix is a symmetric matrix where the entry values on\nall rows and columns increase toward the diagonal. If we decompose a Robinson\nmatrix into the sum of k {0, 1}-matrices, then these k {0, 1}-matrices are the\nadjacency matrices of a set of nested unit interval graphs. Previous studies\nshow that unit interval graphs coincide with indifference graphs. An\nindifference graph has an embedding that maps each vertex to a real number,\nwhere two vertices are adjacent if their embeddings are within a fixed\nthreshold distance. In this thesis, considering k different threshold\ndistances, we study the problem of finding an embedding that, simultaneously\nand with respect to each threshold distance, embeds the k indifference graphs\ncorresponding to the k adjacency matrices. This is called a uniform embedding\nof a Robinson matrix with respect to the k threshold distances.
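The decomposition stated at the start of the Robinson-matrix abstract above is level-set thresholding, which is quick to verify numerically; a minimal sketch (the example matrix is ours):

```python
import numpy as np

# A Robinson similarity matrix with integer entries 0..k decomposes as the
# sum of k {0,1}-matrices A_t = [A >= t]; the corresponding graphs are
# nested (each layer's edges contain the next layer's).
A = np.array([[3, 2, 1, 0],
              [2, 3, 2, 1],
              [1, 2, 3, 2],
              [0, 1, 2, 3]])   # entries increase toward the diagonal

k = A.max()
layers = [(A >= t).astype(int) for t in range(1, k + 1)]
print(np.array_equal(sum(layers), A))   # True: A = A_1 + ... + A_k
for t, L in enumerate(layers, 1):
    print(f"threshold {t}:\n{L}")
```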
We give a necessary and sufficient\ncondition for a Robinson matrix to have a uniform embedding, which is derived\nfrom paths in an associated graph. We also give an efficient combinatorial\nalgorithm to find a uniform embedding, or a proof that one does not exist, for\nthe case where k = 2.\n"} {"abstract": " Stationary memoryless sources produce two correlated random sequences $X^n$\nand $Y^n$. A guesser seeks to recover $X^n$ in two stages, by first guessing\n$Y^n$ and then $X^n$. The contributions of this work are twofold: (1) We\ncharacterize the least achievable exponential growth rate (in $n$) of any\npositive $\rho$-th moment of the total number of guesses when $Y^n$ is obtained\nby applying a deterministic function $f$ component-wise to $X^n$. We prove\nthat, depending on $f$, the least exponential growth rate in the two-stage\nsetup is lower than when guessing $X^n$ directly. We further propose a simple\nHuffman code-based construction of a function $f$ that is a viable candidate\nfor the minimization of the least exponential growth rate in the two-stage\nguessing setup. (2) We characterize the least achievable exponential growth\nrate of the $\rho$-th moment of the total number of guesses required to recover\n$X^n$ when Stage 1 need not end with a correct guess of $Y^n$ and without\nassumptions on the stationary memoryless sources producing $X^n$ and $Y^n$.\n"} {"abstract": " Network Traffic Classification (NTC) has become an important feature in\nvarious network management operations, e.g., Quality of Service (QoS)\nprovisioning and security services. Machine Learning (ML) algorithms as a\npopular approach for NTC can promise reasonable accuracy in classification and\ndeal with encrypted traffic. However, ML-based NTC techniques suffer from the\nshortage of labeled traffic data which is the case in many real-world\napplications. This study investigates the applicability of an active form of\nML, called Active Learning (AL), in NTC. AL reduces the need for a large number\nof labeled examples by actively choosing the instances that should be labeled.\nThe study first provides an overview of NTC and its fundamental challenges\nalong with surveying the literature on ML-based NTC methods. Then, it\nintroduces the concepts of AL, discusses it in the context of NTC, and reviews\nthe literature in this field. Further, challenges and open issues in AL-based\nclassification of network traffic are discussed. Moreover, as a technical\nsurvey, some experiments are conducted to show the broad applicability of AL in\nNTC. The simulation results show that AL can achieve high accuracy with a small\namount of data.\n"} {"abstract": " On Titan, methane (CH4) and ethane (C2H6) are the dominant species found in\nthe lakes and seas. In this study, we have combined laboratory work and\nmodeling to refine the methane-ethane binary phase diagram at low temperatures\nand probe how the molecules interact under these conditions. We used visual\ninspection for the liquidus and Raman spectroscopy for the solidus. Through\nthese methods we determined a eutectic point of 71.15$\pm$0.5 K at a\ncomposition of 0.644$\pm$0.018 methane - 0.356$\pm$0.018 ethane mole fraction\nfrom the liquidus data. Using the solidus data, we found a eutectic isotherm\ntemperature of 72.2 K with a standard deviation of 0.4 K. In addition to\nmapping the binary system, we looked at the solid-solid transitions of pure\nethane and found that, when cooling, the transition of solid I-III occurred at\n89.45$\pm$0.2 K.
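To make the decomposition described in the Robinson-matrix abstract above concrete, the following is a minimal sketch (ours, not from the thesis): a small integer Robinson similarity matrix is split by thresholding into a sum of {0, 1}-matrices, which are the nested adjacency matrices of unit interval graphs. The example matrix and the integer thresholds are illustrative assumptions.

```python
import numpy as np

# Illustrative example (ours): a 4x4 Robinson similarity matrix whose
# entries increase toward the diagonal on every row and column.
A = np.array([
    [3, 2, 1, 0],
    [2, 3, 2, 1],
    [1, 2, 3, 2],
    [0, 1, 2, 3],
])

k = int(A.max())
# Thresholding at t = 1..k decomposes A into a sum of k {0,1}-matrices;
# each one is the adjacency matrix of a unit interval (indifference) graph.
levels = [(A >= t).astype(int) for t in range(1, k + 1)]

assert (sum(levels) == A).all()          # the k matrices sum back to A
for finer, coarser in zip(levels[1:], levels[:-1]):
    assert (finer <= coarser).all()      # the level graphs are nested
```

A uniform embedding, in these terms, is a single assignment of vertices to real numbers that realizes all k graphs at once with their respective threshold distances.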
The warming sequence showed transitions of solid III-II\noccurring at 89.85$\\pm$0.2 K and solid II-I at 89.65$\\pm$0.2 K. Ideal\npredictions were compared to molecular dynamics simulations to reveal that the\nmethane-ethane system behaves almost ideally, and the largest deviations occur\nas the mixing ratio approaches the eutectic composition.\n"} {"abstract": " Heavy-ion collisions at the LHC provide the conditions to investigate regions\nof quark-gluon plasma that reach higher temperatures and that persist for\nlonger periods of time compared to collisions at the Relativistic Heavy Ion\nCollider. This extended duration allows correlations from charge conservation\nto better separate during the quark-gluon plasma phase, and thus be better\ndistinguished from correlations that develop during the hadron phase or during\nhadronization. In this study charge balance functions binned by relative\nrapidity and azimuthal angle and indexed by species are considered. A detailed\ntheoretical model that evolves charge correlations throughout the entirety of\nan event is compared to preliminary results from the ALICE Collaboration. The\ncomparison with experiment provides insight into the evolution of the chemistry\nand diffusivity during the collision. A ratio of balance functions is proposed\nto better isolate the effects of diffusion and thus better constrain the\ndiffusivity.\n"} {"abstract": " A new scaling is derived that yields a Reynolds number independent profile\nfor all components of the Reynolds stress in the near-wall region of wall\nbounded flows, including channel, pipe and boundary layer flows. The scaling\ndemonstrates the important role played by the wall shear stress fluctuations\nand how the large eddies determine the Reynolds number dependence of the\nnear-wall turbulence behavior.\n"} {"abstract": " We train convolutional neural networks to predict whether or not a set of\nmeasurements is informationally complete to uniquely reconstruct any given\nquantum state with no prior information. In addition, we perform fidelity\nbenchmarking based on this measurement set without explicitly carrying out\nstate tomography. The networks are trained to recognize the fidelity and a\nreliable measure for informational completeness. By gradually accumulating\nmeasurements and data, these trained convolutional networks can efficiently\nestablish a compressive quantum-state characterization scheme by accelerating\nruntime computation and greatly reducing systematic drifts in experiments. We\nconfirm the potential of this machine-learning approach by presenting\nexperimental results for both spatial-mode and multiphoton systems of large\ndimensions. These predictions are further shown to improve when the networks\nare trained with additional bootstrapped training sets from real experimental\ndata. Using a realistic beam-profile displacement error model for\nHermite-Gaussian sources, we further demonstrate numerically that the\norders-of-magnitude reduction in certification time with trained networks\ngreatly increases the computation yield of a large-scale quantum processor\nusing these sources, before state fidelity deteriorates significantly.\n"} {"abstract": " When an approximant is accurate on the interval, it is only natural to try to\nextend it to several-dimensional domains. 
In the present article, we make use\nof the fact that linear rational barycentric interpolants converge rapidly\ntoward analytic and several times differentiable functions to interpolate on\ntwo-dimensional starlike domains parametrized in polar coordinates. In radial\ndirection, we engage interpolants at conformally shifted Chebyshev nodes, which\nconverge exponentially toward analytic functions. In circular direction, we\ndeploy linear rational trigonometric barycentric interpolants, which converge\nsimilarly rapidly for periodic functions, but now for conformally shifted\nequispaced nodes. We introduce a variant of a tensor-product interpolant of the\nabove two schemes and prove that it converges exponentially for two-dimensional\nanalytic functions up to a logarithmic factor and with an order limited only by\nthe order of differentiability for real functions, if the boundary is as\nsmooth. Numerical examples confirm that the shifts permit one to reach a much\nhigher accuracy with significantly fewer nodes, a property which is especially\nimportant in several dimensions.\n"} {"abstract": " A novel approach to reduced-order modeling of high-dimensional time-varying\nsystems is proposed. It leverages the formalism of the Dynamic Mode\nDecomposition technique together with the concept of balanced realization. It\nis assumed that the only information available on the system comes from input,\nstate, and output trajectories generated by numerical simulations or recorded\nand estimated during experiments, thus the approach is fully data-driven. The\ngoal is to obtain an input-output low dimensional linear model which\napproximates the system across its operating range. Since the dynamics of\naeroservoelastic systems markedly changes in operation (e.g. due to change in\nflight speed or altitude), time-varying features are retained in the\nconstructed models. This is achieved by generating a Linear Parameter-Varying\nrepresentation made of a collection of state-consistent linear time-invariant\nreduced-order models. The algorithm formulation hinges on the idea of replacing\nthe orthogonal projection onto the Proper Orthogonal Decomposition modes, used\nin Dynamic Mode Decomposition-based approaches, with a balancing oblique\nprojection constructed entirely from data. As a consequence, the input-output\ninformation captured in the lower-dimensional representation is increased\ncompared to other projections onto subspaces of the same or lower size. Moreover, a\nparameter-varying projection is possible while also achieving\nstate-consistency. The validity of the proposed approach is demonstrated on a\nmorphing wing for airborne wind energy applications by comparing the\nperformance against two algorithms recently proposed in the literature.\nComparisons cover both prediction accuracy and performance in model predictive\ncontrol applications.\n"} {"abstract": " We consider a machine learning algorithm to detect and identify strong\ngravitational lenses in sky images. First, we simulate artificial but\nvery realistic images of galaxies, stars and strong lenses, using six\ndifferent methods, i.e., two for each class. Then we deploy a convolutional\nneural network architecture to classify these simulated images. We show that\nafter the training process the neural network achieves about 93 percent accuracy.\nAs a simple test of the efficiency of the convolutional neural network, we\napply it to a real Einstein Cross image.
The deployed neural network classifies it\nas a gravitational lens, thus opening the way for a variety of lens-search\napplications of the machine learning scheme.\n"} {"abstract": " This article describes the regularization of the generally relativistic gauge\nfield representation of gravity on a piecewise linear lattice. It is a part of\nthe program concerning the classical relativistic theory of fundamental\ninteractions, represented by minimally coupled gauge vector field densities and\nhalf-densities. The correspondence between the local Darboux coordinates on\nphase space and the local structure of the links of the lattice, embedded in\nthe spatial manifold, is demonstrated. Thus, the canonical coordinates are\nreplaceable by links-related quantities. This idea and the significant part of\nformalism are directly based on the model of canonical loop quantum gravity\n(CLQG).\n The first stage of this program is formulated regarding the gauge field,\nwhose dynamics is independent of the other fundamental fields but contributes to\ntheir dynamics. This gauge field, which determines the equivalence of systems\nin the actions defining all fundamental interactions, represents Einsteinian gravity.\nThe related links-defined quantities depend on holonomies of gravitational\nconnections and fluxes of densitized dreibeins. This article demonstrates how\nto determine these quantities, which lead to a nonperturbative formalism that\npreserves the general postulate of relativity. From this perspective, the\nformalism presented in this article is analogous to the Ashtekar-Barbero-Holst\nformulation on which CLQG is based. However, in this project, it is\nadditionally required that the fields' coordinates are quantizable in the\nstandard canonical procedure for a gauge theory and that any approximation in\nthe construction of the model is at least as precisely demonstrated as the\ngauge invariance. These requirements lead to new relations between holonomies\nand connections, and a representation of the densitized dreibein determinant\nthat is more precise than the volume representation in CLQG.\n"} {"abstract": " We present the Stromlo Stellar Tracks, a set of stellar evolutionary tracks,\ncomputed by modifying the Modules for Experiments in Stellar Astrophysics\n(MESA) 1D stellar evolution package, to fit the Galactic Concordance abundances\nfor hot ($T > 8000$ K) massive ($\geq 10M_\odot$) Main-Sequence (MS) stars.\nUntil now, all stellar evolution tracks have been computed at solar, scaled-solar, or\nalpha-element-enhanced abundances, and none of these models correctly represent\nthe Galactic Concordance abundances at different metallicities. This paper is\nthe first implementation of Galactic Concordance abundances in stellar\nevolution models. The Stromlo tracks cover massive stars ($10\leq M/M_\odot\n\leq 300$) with varying rotations ($v/v_{\rm crit} = 0.0, 0.2, 0.4$) and a\nfinely sampled grid of metallicities ($-2.0 \leq {\rm [Z/H]} \leq +0.5$;\n$\Delta {\rm [Z/H]} = 0.1$) evolved from the pre-main sequence to the end of\n$^{12}$Carbon burning. We find that the implementation of Galactic Concordance\nabundances is critical for the evolution of main-sequence, massive hot stars in\norder to estimate accurate stellar outputs (L, T$_{\rm eff}$, $g$), which, in\nturn, have a significant impact on determining the ionizing photon luminosity\nbudgets. We additionally support prior findings of the importance that rotation\nplays on the evolution of massive stars and their ionizing budget.
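As a rough illustration of the classifier described in the lensing abstract above, here is a minimal three-class CNN sketch. The framework (Keras), the input size, and the layer widths are our assumptions, not the authors' architecture.

```python
import tensorflow as tf

# Hypothetical three-class CNN (galaxy / star / lens) along the lines of the
# lens-detection abstract; the 64x64 single-channel input is an assumption.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # galaxy, star, lens
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(simulated_images, labels, epochs=10)  # trained on simulated data
```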
The\nevolutionary tracks for our Galactic Concordance abundance scaling provide a\nmore empirically motivated approach than simple uniform abundance scaling with\nmetallicity for the analysis of HII regions and have considerable implications\nin determining nebular emission lines and metallicity. Therefore, it is\nimportant to refine the existing stellar evolutionary models for comprehensive\nhigh-redshift extragalactic studies. The Stromlo tracks are publicly available\nto the astronomical community online.\n"} {"abstract": " Let $G=\\operatorname{O}(1,n+1)$ with maximal compact subgroup $K$ and let\n$\\Pi$ be a unitary irreducible representation of $G$ with non-trivial\n$(\\mathfrak{g},K)$-cohomology. Then $\\Pi$ occurs inside a principal series\nrepresentation of $G$, induced from the $\\operatorname{O}(n)$-representation\n$\\bigwedge\\nolimits^p(\\mathbb{C}^n)$ and characters of a minimal parabolic\nsubgroup of $G$ at the limit of the complementary series. Considering the\nsubgroup $G'=\\operatorname{O}(1,n)$ of $G$ with maximal compact subgroup $K'$,\nwe prove branching laws and explicit Plancherel formulas for the restrictions\nto $G'$ of all unitary representations occurring in such principal series,\nincluding the complementary series, all unitary $G$-representations with\nnon-trivial $(\\mathfrak{g},K)$-cohomology and further relative discrete series\nrepresentations in the cases $p=0,n$. Discrete spectra are constructed\nexplicitly as residues of $G'$-intertwining operators which resemble the\nFourier transforms on vector bundles over the Riemannian symmetric space\n$G'/K'$.\n"} {"abstract": " Given coprime positive integers $d',d''$, B\\'ezout's Lemma tells us that\nthere are integers $u,v$ so that $d'u-d''v=1$. We show that, interchanging $d'$\nand $d''$ if necessary, we may choose $u$ and $v$ to be Loeschian numbers,\ni.e., of the form $|\\alpha|^2$, where $\\alpha\\in\\mathbb{Z}[j]$, the ring of\nintegers of the number field $\\mathbb{Q}(j)$, where $j^2+j+1=0$. We do this by\nusing Atkin-Lehner elements in some quaternion algebras $\\mathcal{H}$. We use\nthis fact to count the number of conjugacy classes of elements of order 3 in an\norder $\\mathcal{O}\\subset\\mathcal{H}$.\n"} {"abstract": " Ear recognition can be described as a revived scientific field. Ear\nbiometrics were long believed to not be accurate enough and held a secondary\nplace in scientific research, being seen as only complementary to other types\nof biometrics, due to difficulties in measuring correctly the ear\ncharacteristics and the potential occlusion of the ear by hair, clothes and ear\njewellery. However, recent research has reinstated them as a vivid research\nfield, after having addressed these problems and proven that ear biometrics can\nprovide really accurate identification and verification results. Several 2D and\n3D imaging techniques, as well as acoustical techniques using sound emission\nand reflection, have been developed and studied for ear recognition, while\nthere have also been significant advances towards a fully automated recognition\nof the ear. Furthermore, ear biometrics have been proven to be mostly\nnon-invasive, adequately permanent and accurate, and hard to spoof and\ncounterfeit. 
Moreover, different ear recognition techniques have proven to be\nas effective as face recognition ones, thus providing the opportunity for ear\nrecognition to be used in identification and verification applications.\nFinally, even though some issues still remain open and require further\nresearch, the scientific field of ear biometrics has proven to be not only\nviable, but really thriving.\n"} {"abstract": " A Polarimetric Synthetic Aperture Radar (PolSAR) sensor is able to collect\nimages in different polarization states, making it a rich source of information\nfor target characterization. PolSAR images are inherently affected by speckle.\nTherefore, before deriving ad hoc products from the data, the polarimetric\ncovariance matrix needs to be estimated by reducing speckle. In recent years,\ndeep learning based despeckling methods have started to evolve from single\nchannel SAR images to PolSAR images. To this aim, deep learning based\napproaches separate the real and imaginary components of the complex-valued\ncovariance matrix and use them as independent channels in a standard\nconvolutional neural network. However, this approach neglects the mathematical\nrelationship that exists between the real and imaginary components, resulting\nin sub-optimal output. Here, we propose CV-deSpeckNet, a multi-stream\ncomplex-valued fully convolutional network, to reduce speckle and effectively\nestimate the PolSAR covariance matrix. To evaluate the performance of CV-deSpeckNet, we used\nSentinel-1 dual polarimetric SAR images to compare against its real-valued\ncounterpart, which separates the real and imaginary parts of the complex\ncovariance matrix. CV-deSpeckNet was also compared against state-of-the-art\nPolSAR despeckling methods. The results show CV-deSpeckNet was able to be\ntrained with fewer samples, has a higher generalization capability\nand resulted in higher accuracy than its real-valued counterpart and\nstate-of-the-art PolSAR despeckling methods. These results showcase the\npotential of complex-valued deep learning for PolSAR despeckling.\n"} {"abstract": " Urban areas are not only one of the biggest contributors to climate change,\nbut also one of the most vulnerable areas, with high populations who would\ntogether experience the negative impacts. In this paper, I address some\nof the opportunities brought by satellite remote sensing imaging and artificial\nintelligence (AI) in order to measure climate adaptation of cities\nautomatically. I propose an AI-based framework which might be useful for\nextracting indicators from remote sensing images and might help with predictive\nestimation of future states of these climate adaptation related indicators.\nWhen such models become more robust and used in real-life applications, they\nmight help decision makers and early responders to choose the best actions to\nsustain the wellbeing of society, natural resources and biodiversity. I\nunderline that this is an open field and ongoing research for many\nscientists, therefore I offer an in-depth discussion on the challenges and\nlimitations of AI-based methods and predictive estimation models in\ngeneral.\n"} {"abstract": " Nowadays, Graph Neural Networks (GNNs) following the Message Passing paradigm\nhave become the dominant way to learn on graph data. Models in this paradigm have\nto spend extra space to look up adjacent nodes with adjacency matrices and\nextra time to aggregate multiple messages from adjacent nodes.
To address this\nissue, we develop a method called LinkDist that distils self-knowledge from\nconnected node pairs into a Multi-Layer Perceptron (MLP) without the need to\naggregate messages. Experiments with 8 real-world datasets show that the MLP derived\nfrom LinkDist can predict the label of a node without knowing its adjacencies\nyet achieve accuracy comparable to GNNs in the contexts of semi- and\nfully-supervised node classification. Moreover, LinkDist benefits from its\nNon-Message-Passing paradigm in that we can also distil self-knowledge from\narbitrarily sampled node pairs in a contrastive way to further boost the\nperformance of LinkDist.\n"} {"abstract": " This work addresses whether a human-in-the-loop cyber-physical system (HCPS)\ncan be effective in improving the longitudinal control of an individual vehicle\nin a traffic flow. We introduce the CAN Coach, which is a system that gives\nfeedback to the human-in-the-loop using radar data (relative speed and position\ninformation to objects ahead) that is available on the controller area network\n(CAN). Using a cohort of six human subjects driving an instrumented vehicle, we\ncompare the ability of the human-in-the-loop driver to achieve a constant\ntime-gap control policy using only human-based visual perception to the car\nahead, and by augmenting human perception with audible feedback from CAN sensor\ndata. The addition of CAN-based feedback reduces the mean time-gap error by an\naverage of 73%, and also improves the consistency of the human by reducing the\nstandard deviation of the time-gap error by 53%. We remove human perception\nfrom the loop using a ghost mode in which the human-in-the-loop is coached to\ntrack a virtual vehicle on the road, rather than a physical one. The loss of\nvisual perception of the vehicle ahead degrades the performance for most\ndrivers, but by varying amounts. We show that human subjects can match the\nvelocity of the lead vehicle ahead with and without CAN-based feedback, but\nvelocity matching does not offer regulation of vehicle spacing. The viability\nof dynamic time-gap control is also demonstrated. We conclude that (1) it is\npossible to coach drivers to improve performance on driving tasks using CAN\ndata, and (2) it is a true HCPS, since removing human perception from the\ncontrol loop reduces performance at the given control objective.\n"} {"abstract": " Quantitative phase imaging (QPI) is a valuable label-free modality that has\ngained significant interest due to its wide potential, from basic biology to\nclinical applications. Most existing QPI systems measure microscopic objects\nvia interferometry or nonlinear iterative phase reconstructions from intensity\nmeasurements. However, all imaging systems compromise spatial resolution for\nfield of view and vice versa, i.e., suffer from a limited space-bandwidth\nproduct. Current solutions to this problem involve computational phase\nretrieval algorithms, which are time-consuming and often suffer from\nconvergence problems. In this article, we present synthetic aperture\ninterference light (SAIL) microscopy as a novel modality for high-resolution,\nwide field of view QPI. The proposed approach employs low-coherence\ninterferometry to directly measure the optical phase delay under different\nillumination angles and produces large space-bandwidth product (SBP) label-free\nimaging.
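A minimal PyTorch sketch of our reading of the LinkDist idea above: an MLP sees only single-node features, but the training loss adds a cross-edge term that asks a node to predict its neighbour's label, so edges are exploited at training time and ignored at inference time. The names and the unweighted loss combination are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Sketch (our reading of the LinkDist abstract): an MLP on node features,
# with a "link distillation" term defined over connected node pairs.
class MLP(nn.Module):
    def __init__(self, d_in, d_hid, n_cls):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, n_cls))

    def forward(self, x):
        return self.net(x)

def train_step(model, opt, x, y, edges, labelled):
    """x: (N, d) features; y: (N,) labels; edges: (2, E) index pairs;
    labelled: (N,) bool mask of nodes with known labels."""
    ce = nn.CrossEntropyLoss()
    u, v = edges
    logits = model(x)
    loss = ce(logits[labelled], y[labelled])           # plain supervision
    keep = labelled[v]                                 # edges with a labelled target
    if keep.any():                                     # cross-edge distillation:
        loss = loss + ce(logits[u[keep]], y[v[keep]])  # predict neighbour's label
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Inference needs no adjacency at all: model(x).argmax(1)
```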
We validate the performance of SAIL on standard samples and illustrate\nthe biomedical applications on various specimens: pathology slides, entire\ninsects, and dynamic live cells in large cultures. The reconstructed images\nhave a synthetic numerical aperture of 0.45 and a field of view of 2.6 x 2.6\nmm^2. Due to its direct measurement of the phase information, SAIL microscopy\ndoes not require long computational time, eliminates data redundancy, and\nalways converges.\n"} {"abstract": " We train neural models for morphological analysis, generation and\nlemmatization for morphologically rich languages. We present a method for\nautomatically extracting a substantially large amount of training data from FSTs\nfor 22 languages, out of which 17 are endangered. The neural models follow the\nsame tagset as the FSTs in order to make it possible to use them as fallback\nsystems together with the FSTs. The source code, models and datasets have been\nreleased on Zenodo.\n"} {"abstract": " Quantized nano-objects offer a myriad of exciting possibilities for\nmanipulating electrons and light that impact photonics, nanoelectronics, and\nquantum information. In this context, ultrashort laser pulses combined with\nnanotips and field emission have made it possible to renew nano-characterization\nand to control electron dynamics with unprecedented space and time resolution,\nreaching femtosecond and even attosecond regimes. A crucial missing step in these\nexperiments is that no signature of quantized energy levels has yet been\nobserved. We combine in situ nanostructuration of nanotips and ultrashort laser\npulse excitation to induce multiphoton excitation and electron emission from a\nsingle quantized nano-object attached at the apex of a metal nanotip.\nFemtosecond induced tunneling through well-defined localized confinement states\nthat are tunable in energy is demonstrated. This paves the way for the\ndevelopment of ultrafast manipulation of electron emission from isolated\nnano-objects including stereographically fixed individual molecules and high\nbrightness, ultrafast, coherent single electron sources for quantum optics\nexperiments.\n"} {"abstract": " We present a fast and feature-complete differentiable physics engine, Nimble\n(nimblephysics.org), that supports Lagrangian dynamics and hard contact\nconstraints for articulated rigid body simulation. Our differentiable physics\nengine offers a complete set of features that are typically only available in\nnon-differentiable physics simulators commonly used by robotics applications.\nWe solve contact constraints precisely using linear complementarity problems\n(LCPs). We present efficient and novel analytical gradients through the LCP\nformulation of inelastic contact that exploit the sparsity of the LCP solution.\nWe support complex contact geometry, and gradients approximating\ncontinuous-time elastic collision. We also introduce a novel method to compute\ncomplementarity-aware gradients that help downstream optimization tasks avoid\nstalling in saddle points.
We show that an implementation of this combination\nin an existing physics engine (DART) is capable of an 87x single-core speedup\nover finite-differencing in computing analytical Jacobians for a single\ntimestep, while preserving all the expressiveness of original DART.\n"} {"abstract": " We study the Gram determinant and construct bases of hom spaces for the\none-dimensional topological theory of decorated unoriented one-dimensional\ncobordisms, as recently defined by Khovanov, when the pair of generating\nfunctions is linear.\n"} {"abstract": " Existing near-eye display designs struggle to balance multiple\ntrade-offs such as form factor, weight, computational requirements, and battery\nlife. These design trade-offs are major obstacles on the path towards an\nall-day usable near-eye display. In this work, we address these trade-offs by,\nparadoxically, \textit{removing the display} from near-eye displays. We present\nthe beaming displays, a new type of near-eye display system that uses a\nprojector and an all-passive wearable headset. We modify an off-the-shelf\nprojector with additional lenses. We install such a projector in the\nenvironment to beam images from a distance to a passive wearable headset. The\nbeaming projection system tracks the current position of a wearable headset to\nproject distortion-free images with correct perspectives. In our system, a\nwearable headset guides the beamed images to a user's retina, which are then\nperceived as an augmented scene within a user's field of view. In addition to\nproviding the system design of the beaming display, we provide a physical\nprototype and show that the beaming display can provide resolutions as high as\nconsumer-level near-eye displays. We also discuss the different aspects of the\ndesign space for our proposal.\n"} {"abstract": " Models of front propagation like the famous FKPP equation have extensive\napplications across scientific disciplines, e.g., in the spread of infectious\ndiseases. A common feature of such models is the existence of a static state\ninto which to propagate, e.g., the uninfected host population. Here, we instead\nmodel an infectious front propagating into a growing host population. The\ninfectious agent spreads via self-similar waves whereas the amplitude of the\nwave of infected organisms increases exponentially. Depending on the population\nunder consideration, wave speeds are either advanced or retarded compared to\nthe non-growing case. We identify a novel selection mechanism in which the\nshape of the infectious wave controls the speeds of the various waves and we\npropose experiments with bacteria and bacterial viruses to test our\npredictions. Our work reveals the complex interplay between population growth\nand front propagation.\n"} {"abstract": " The spin Hall effect and its inverse are important spin-charge conversion\nmechanisms. The direct spin Hall effect induces a surface spin accumulation\nfrom a transverse charge current due to spin-orbit coupling even in\nnon-magnetic conductors. However, most detection schemes involve additional\ninterfaces, leading to large scattering in reported data. Here we perform\ninterface-free x-ray spectroscopy measurements at the Cu L_{3,2} absorption\nedges of highly Bi-doped Cu (Cu_{95}Bi_{5}).
The detected X-ray magnetic\ncircular dichroism (XMCD) signal corresponds to an induced magnetic moment of\n(2.7 +/- 0.5) x 10^{-12} {\mu}_{B} A^{-1} cm^{2} per Cu atom averaged over the\nprobing depth, which is of the same order as for Pt measured by magneto-optics.\nThe results highlight the importance of interface-free measurements to assess\nmaterial parameters and the potential of CuBi for spin-charge conversion\napplications.\n"} {"abstract": " Ensuring that a predictor is not biased against a sensitive feature is the key\nto Fairness learning. Conversely, Global Sensitivity Analysis is used in\nnumerous contexts to monitor the influence of any feature on an output\nvariable. We reconcile these two domains by showing how Fairness can be seen as\na special framework of Global Sensitivity Analysis and how various usual\nindicators are common between these two fields. We also present new Global\nSensitivity Analysis indices, as well as rates of convergence, that are useful\nas fairness proxies.\n"} {"abstract": " Virtualization of distributed real-time systems enables the consolidation of\nmixed-criticality functions on a shared hardware platform, thus easing system\nintegration. Time-triggered communication and computation can act as an enabler\nof safe hard real-time systems. A time-triggered hypervisor that activates\nvirtual CPUs according to a global schedule can provide the means to allow for\na resource efficient implementation of the time-triggered paradigm in\nvirtualized distributed real-time systems. A prerequisite of time-triggered\nvirtualization for hard real-time systems is providing access to a global time\nbase to VMs as well as to the hypervisor. A global time base is the result of\nclock synchronization with an upper bound on the clock synchronization\nprecision.\n We present a formalization of the notion of time in virtualized real-time\nsystems. We use this formalization to propose a virtual clock condition that\nenables us to test the suitability of a virtual clock for the design of\nvirtualized time-triggered real-time systems. We discuss and model how\nvirtualization, in particular resource consolidation versus resource\npartitioning, degrades clock synchronization precision. Finally, we apply our\ninsights to model the IEEE~802.1AS clock synchronization protocol and derive an\nupper bound on the clock synchronization precision of IEEE 802.1AS. We present\nour implementation of a dependent clock for ACRN that can be synchronized to a\ngrandmaster clock. The results of our experiments illustrate that a type-1\nhypervisor implementing a dependent clock yields native clock synchronization\nprecision. Furthermore, we show that the upper bound derived from our model\nholds for a series of experiments featuring native as well as virtualized\nsetups.\n"} {"abstract": " We study several parameters of a random Bienaym\'e-Galton-Watson tree $T_n$\nof size $n$ defined in terms of an offspring distribution $\xi$ with mean $1$\nand nonzero finite variance $\sigma^2$. Let $f(s)={\bf E}\{s^\xi\}$ be the\ngenerating function of the random variable $\xi$. We show that the independence\nnumber is in probability asymptotic to $qn$, where $q$ is the unique solution\nto $q = f(1-q)$. One of the many algorithms for finding the largest independent\nset of nodes uses a notion of repeated peeling away of all leaves and their\nparents. The number of rounds of peeling is shown to be in probability\nasymptotic to $\log n / \log\bigl(1/f'(1-q)\bigr)$.
Finally, we study a related\nparameter which we call the leaf-height. Also sometimes called the protection\nnumber, this is the maximal shortest path length between any node and a leaf in\nits subtree. If $p_1 = {\\bf P}\\{\\xi=1\\}>0$, then we show that the maximum\nleaf-height over all nodes in $T_n$ is in probability asymptotic to $\\log\nn/\\log(1/p_1)$. If $p_1 = 0$ and $\\kappa$ is the first integer $i>1$ with ${\\bf\nP}\\{\\xi=i\\}>0$, then the leaf-height is in probability asymptotic to\n$\\log_\\kappa\\log n$.\n"} {"abstract": " In this work we ask how an Unruh-DeWitt (UD) detector with harmonic\noscillator internal degrees of freedom $Q$ measuring an evolving quantum matter\nfield $\\Phi(\\bm{x}, t)$ in an expanding universe with scale factor $a(t)$\nresponds. We investigate the detector's response which contains non-Markovian\ninformation about the quantum field squeezed by the dynamical spacetime. The\nchallenge is in the memory effects accumulated over the evolutionary history.\nWe first consider a detector $W$, the `\\textsl{Witness}', which co-existed and\nevolved with the quantum field from the beginning. We derive a nonMarkovian\nquantum Langevin equation for the detector's $Q$ by integrating over the\nsqueezed quantum field. The solution of this integro-differential equation\nwould answer our question, in principle, but very challenging, in practice.\nStriking a compromise, we then ask, to what extent can a detector $D$\nintroduced at late times, called the `\\textsl{Detective}', decipher past\nmemories. This situation corresponds to many cosmological experiments today\nprobing specific stages in the past, such as COBE targeting activities at the\nsurface of last scattering. Somewhat surprisingly we show that it is possible\nto retrieve to some degree certain global physical quantities, such as the\nresultant squeezing, particles created, quantum coherence and correlations. The\nreason is because the quantum field has all the fine-grained information from\nthe beginning in how it was driven by the cosmic dynamics $a(t)$. How long the\ndetails of past history can persist in the quantum field depends on the memory\ntime. The fact that a squeezed field cannot come to complete equilibrium under\nconstant driving, as in an evolving spacetime, actually helps to retain the\nmemory. We discuss interesting features and potentials of this\n`\\textit{archaeological}' perspective toward cosmological issues.\n"} {"abstract": " We present updated measurements of the Crab pulsar glitch of 2019 July 23\nusing a dataset of pulse arrival times spanning $\\sim$5 months. On MJD 58687,\nthe pulsar underwent its seventh largest glitch observed to date, characterised\nby an instantaneous spin-up of $\\sim$1 $\\mu$Hz. Following the glitch the\npulsar's rotation frequency relaxed exponentially towards pre-glitch values\nover a timescale of approximately one week, resulting in a permanent frequency\nincrement of $\\sim$0.5 $\\mu$Hz. 
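To make the fixed-point characterization in the Bienaymé-Galton-Watson abstract above concrete, here is a small worked example of our own: for Poisson(1) offspring the generating function is $f(s) = e^{s-1}$, so the constant $q$ solves $q = f(1-q) = e^{-q}$.

```python
import math
from scipy.optimize import brentq

# Example (ours): Poisson(1) offspring, f(s) = exp(s - 1).
# The independence-number constant q solves q = f(1 - q) = exp(-q).
q = brentq(lambda q: q - math.exp(-q), 0.0, 1.0)
print(q)  # ~0.5671, i.e. the independence number is ~0.567 n

# Peeling rounds grow like log n / log(1/f'(1-q)); here f'(s) = exp(s - 1),
# so f'(1 - q) = exp(-q) = q and the prefactor is 1 / log(1/q).
print(1 / math.log(1 / q))  # ~1.76, i.e. about 1.76 log n rounds
```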
Due to our semi-continuous monitoring of the\nCrab pulsar, we were able to partially resolve a fraction of the total spin-up.\nThis delayed spin-up occurred exponentially over a timescale of $\sim$18 hours.\nThis is the sixth Crab pulsar glitch for which part of the initial rise was\nresolved in time, and this phenomenon has not been observed in any other\nglitching pulsars, offering a unique opportunity to study the microphysical\nprocesses governing interactions between the neutron star interior and the\ncrust.\n"} {"abstract": " In this work we shall study $k$-inflation theories with non-minimal coupling\nof the scalar field to gravity, in the presence of only a higher order kinetic\nterm of the form $\sim \mathrm{const}\times X^{\mu}$, with\n$X=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi$. The study will be focused\non the cases where a scalar potential is included or is absent, and the\nevolution of the scalar field will be assumed to satisfy the slow-roll or the\nconstant-roll condition. In the case of the slow-roll models with scalar\npotential, we shall calculate the slow-roll indices, and the corresponding\nobservational indices of the theory, and we demonstrate that the resulting\ntheory is compatible with the latest Planck data. The same results are obtained\nin the constant-roll case, at least in the presence of a scalar potential. In\nthe case that models without potential are considered, the results are less\nappealing since these are strongly model dependent, and at least for a\npower-law choice of the non-minimal coupling, the theory is non-viable.\nFinally, due to the fact that the scalar and tensor power spectra are\nconformally invariant quantities, we argue that the Einstein frame counterpart\nof the non-minimal $k$-inflation models with scalar potential can be a viable\ntheory, due to the conformal invariance of the observational indices. The\nEinstein frame theory is more involved and thus more difficult to work with\nanalytically, so one implication of our work is that we provide evidence for\nthe viability of another class of $k$-inflation models.\n"} {"abstract": " Some known fixed point theorems for nonexpansive mappings in metric spaces\nare extended here to the case of primitive uniform spaces. The reasoning\npresented in the proofs seems to be a natural way to obtain other general\nresults.\n"} {"abstract": " Carroll's group is presented as a group of transformations in a 5-dimensional\nspace ($\mathcal{C}$) obtained by embedding the Euclidean space into a (4,\n1)-de Sitter space. Three of the five dimensions of $\mathcal{C}$ are related\nto $\mathcal{R}^3$, and the other two to mass and time. A covariant formulation\nof Carroll's group, analogous to the one introduced by Takahashi for Galilei's\ngroup, is deduced. Unitary representations are studied.\n"} {"abstract": " In this paper, we consider the spectral dependences of transverse\nelectromagnetic waves generated in solar plasma at the coalescence of Langmuir\nwaves. It is shown that different spectra of Langmuir waves lead to\ncharacteristic types of transverse electromagnetic wave spectra, which makes it\npossible to diagnose the features of the spectra of Langmuir waves generated in\nsolar plasma.\n"} {"abstract": " A duality between an electrostatic problem in a three-dimensional world and a\nquantum mechanical problem in a one-dimensional world, which allows one to\nobtain the ground state solution of the Schr\"odinger equation by using\nelectrostatic results, is generalized to three dimensions.
Here, it is\ndemonstrated that the same transformation technique is also applicable to the\ns-wave Schr\"odinger equation in three dimensions for central potentials. This\napproach leads to a general relationship between the electrostatic potential\nand the s-wave function, and between the electric energy density and the quantum\nmechanical energy.\n"} {"abstract": " The past year has witnessed the rapid development of applying the Transformer\nmodule to vision problems. While some researchers have demonstrated that\nTransformer-based models enjoy a favorable ability to fit data, there is still\na growing body of evidence showing that these models suffer from over-fitting,\nespecially when the training data is limited. This paper offers an empirical\nstudy by performing step-by-step operations to gradually transition a\nTransformer-based model to a convolution-based model. The results we obtain\nduring the transition process deliver useful messages for improving visual\nrecognition. Based on these observations, we propose a new architecture named\nVisformer, which is abbreviated from the `Vision-friendly Transformer'. With\nthe same computational complexity, Visformer outperforms both the\nTransformer-based and convolution-based models in terms of ImageNet\nclassification accuracy, and the advantage becomes more significant when the\nmodel complexity is lower or the training set is smaller. The code is available\nat https://github.com/danczs/Visformer.\n"} {"abstract": " We employ various quantum-mechanical approaches for studying the impact of\nelectric fields on both nonretarded and retarded noncovalent interactions\nbetween atoms or molecules. To this end, we apply perturbative and\nnon-perturbative methods within the frameworks of quantum mechanics (QM) as\nwell as quantum electrodynamics (QED). In addition, to provide a transparent\nphysical picture of the different types of resulting interactions, we employ a\nstochastic electrodynamic approach based on the zero-point fluctuating field.\nAtomic response properties are described via harmonic Drude oscillators - an\nefficient model system that permits an analytical solution and has been\nconvincingly shown to yield accurate results when modeling non-retarded\nintermolecular interactions. The obtained intermolecular energy contributions\nare classified as field-induced (FI) electrostatics, FI polarization, and\ndispersion interactions. The interplay between these three types of\ninteractions enables the manipulation of molecular dimer conformations by\napplying transversal or longitudinal electric fields along the intermolecular\naxis. Our framework combining four complementary theoretical approaches paves\nthe way toward a systematic description and improved understanding of molecular\ninteractions when molecules are subject to both external and vacuum fields.\n"} {"abstract": " In this short paper we recall the (Garfield) Impact Factor of a journal,\nimprove and extend it, and eventually present the Total Impact Factor, which\nreflects the impact factor most accurately.\n"} {"abstract": " This Letter capitalizes on a unique set of total solar eclipse observations,\nacquired between 2006 and 2020, in white light, Fe XI 789.2 nm ($\rm T_{fexi}$\n= $1.2 \pm 0.1$ MK) and Fe XIV 530.3 nm ($\rm T_{fexiv}$ = $ 1.8 \pm 0.1$ MK)\nemission, complemented by in situ Fe charge state and proton speed measurements\nfrom ACE/SWEPAM-SWICS, to identify the source regions of different solar wind\nstreams.
The eclipse observations reveal the ubiquity of open structures,\ninvariably associated with Fe XI emission from $\rm Fe^{10+}$, hence a constant\nelectron temperature, $\rm T_{c}$ = $\rm T_{fexi}$, in the expanding corona.\nThe in situ Fe charge states are found to cluster around $\rm Fe^{10+}$,\nindependently of the 300 to 700 km $\rm s^{-1}$ stream speeds, referred to as\nthe continual solar wind. $\rm Fe^{10+}$ thus yields the fiducial link between\nthe continual solar wind and its $\rm T_{fexi}$ sources at the Sun. While the\nspatial distribution of Fe XIV emission, from $\rm Fe^{13+}$, associated with\nstreamers, changes throughout the solar cycle, the sporadic appearance of\ncharge states $> \rm Fe^{11+}$, in situ, exhibits no cycle dependence\nregardless of speed. These latter streams are conjectured to be released from\nhot coronal plasmas at temperatures $\ge \rm T_{fexiv}$ within the bulge of\nstreamers and from active regions, driven by the dynamic behavior of\nprominences magnetically linked to them. The discovery of continual streams of\nslow, intermediate and fast solar wind, characterized by the same $\rm\nT_{fexi}$ in the expanding corona, places new constraints on the physical\nprocesses shaping the solar wind.\n"} {"abstract": " Our goal is to estimate the star formation main sequence (SFMS) and the star\nformation rate density (SFRD) at z <= 0.017 (d < 75 Mpc) using the Javalambre\nPhotometric Local Universe Survey (J-PLUS) first data release, which probes\n897.4 deg^2 with twelve optical bands. We extract the Halpha emission flux of\n805 local galaxies from the J-PLUS filter J0660, with the continuum level\nestimated from the other eleven J-PLUS bands, and the dust attenuation and\nnitrogen contamination corrected with empirical relations. Stellar masses (M),\nHalpha luminosities (L), and star formation rates (SFRs) were estimated by\naccounting for parameter covariances. Our sample comprises 689 blue galaxies\nand 67 red galaxies, classified in the (u-g) vs (g-z) color-color diagram, plus\n49 AGN. The SFMS is explored at log M > 8 and it is clearly defined by the blue\ngalaxies, with the red galaxies located below them. The SFMS is described as\nlog SFR = 0.83 log M - 8.44. We find a good agreement with previous estimations\nof the SFMS, especially those based on integral field spectroscopy. The Halpha\nluminosity function of the AGN-free sample is well described by a Schechter\nfunction with log L* = 41.34, log phi* = -2.43, and alpha = -1.25. Our\nmeasurements provide a lower characteristic luminosity than several previous\nstudies in the literature. The derived star formation rate density at d < 75\nMpc is log rho_SFR = -2.10 +- 0.11, with red galaxies accounting for 15% of the\nSFRD. Our value is lower than previous estimations at similar redshift, and\nprovides a local reference for evolutionary studies regarding the star\nformation history of the Universe.\n"} {"abstract": " The capacitated vehicle routing problem (CVRP) is one of the most common\noptimization problems nowadays, considering the wide usage of routing\nalgorithms in multiple fields such as transportation, food delivery, network\nrouting, etc. The CVRP is classified as an NP-hard problem; hence standard\noptimization algorithms can't solve it exactly.
In our\npaper, we discuss a new way to solve the mentioned problem, using a recursive\nvariant of the well-known clustering algorithm K-Means, the well-known\nshortest-path algorithm Dijkstra, and some mathematical operations. In this\npaper, we show how to implement these methods together in order to approach the\noptimal route. Since research and development are still ongoing, this paper may\nbe extended by another one presenting the implementation results of this\ntheoretical work.\n"} {"abstract": " This paper explores Google's Edge TPU for implementing a practical network\nintrusion detection system (NIDS) at the edge of IoT, based on a deep learning\napproach. While there are a significant number of related works that explore\nmachine learning based NIDS for the IoT edge, they generally do not consider\nthe issue of the required computational and energy resources. The focus of this\npaper is the exploration of deep learning-based NIDS at the edge of IoT, and in\nparticular the computational and energy efficiency. In particular, the paper\nstudies Google's Edge TPU as a hardware platform, and considers the following\nthree key metrics: computation (inference) time, energy efficiency and the\ntraffic classification performance. Various scaled model sizes of two major\ndeep neural network architectures are used to investigate these three metrics.\nThe performance of the Edge TPU-based implementation is compared with that of\nan energy efficient embedded CPU (ARM Cortex A53). Our experimental evaluation\nshows some unexpected results, such as the fact that the CPU significantly\noutperforms the Edge TPU for small model sizes.\n"} {"abstract": " We study the nonlinear stability of plane Couette and Poiseuille flows with\nLyapunov's second method by using the classical L2-energy. We prove that the\nstreamwise perturbations are L2-energy stable for any Reynolds number. This\ncontradicts the results of Joseph [10], Joseph and Carmi [12] and Busse [4],\nand allows us to prove that the critical nonlinear Reynolds numbers are\nobtained along two-dimensional perturbations, the spanwise perturbations, as\nOrr [16] had supposed. This conclusion, combined with recent results by\nFalsaperla et al. [8] on the stability with respect to tilted rolls, provides a\npossible solution to the Couette-Sommerfeld paradox.\n"} {"abstract": " We investigate the production of photons from coherently oscillating,\nspatially localized clumps of axionic fields (oscillons and axion stars) in the\npresence of external electromagnetic fields. We delineate different qualitative\nbehaviour of the photon luminosity in terms of an effective dimensionless\ncoupling parameter constructed out of the axion-photon coupling, and field\namplitude, oscillation frequency and radius of the axion star. For small values\nof this dimensionless coupling, we provide a general analytic formula for the\ndipole radiation field and the photon luminosity per solid angle, including a\nstrong dependence on the radius of the configuration. For moderate to large\ncoupling, we report on a non-monotonic behavior of the luminosity with the\ncoupling strength in the presence of external magnetic fields. After an initial\nrise in luminosity with the coupling strength, we see a suppression (by an\norder of magnitude or more compared to the dipole radiation approximation) at\nmoderately large coupling.
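A compact sketch of the recursive cluster-then-route idea in the CVRP abstract above. The capacity-based recursion, the use of scikit-learn's KMeans, and a nearest-neighbour ordering inside each cluster (standing in for Dijkstra on a road graph) are our assumptions about the intended pipeline, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_routes(points, demands, capacity, k=2, seed=0):
    """Recursively split customers with K-Means until each cluster fits the
    vehicle capacity, then order each cluster greedily (nearest neighbour
    stands in for a road-graph shortest-path step such as Dijkstra)."""
    if demands.sum() <= capacity:          # cluster fits one vehicle: route it
        route, todo = [], list(range(len(points)))
        pos = np.zeros(2)                  # assume the depot is at the origin
        while todo:
            nxt = min(todo, key=lambda i: np.linalg.norm(points[i] - pos))
            route.append(nxt); pos = points[nxt]; todo.remove(nxt)
        return [points[route]]
    labels = KMeans(n_clusters=k, random_state=seed).fit_predict(points)
    routes = []
    for c in range(k):                     # recurse on oversized clusters
        m = labels == c
        routes += cluster_routes(points[m], demands[m], capacity, k, seed)
    return routes

rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(40, 2))
dem = rng.integers(1, 5, size=40)
print(len(cluster_routes(pts, dem, capacity=25)), "routes")
```

On a real road network, the greedy step would instead query Dijkstra distances between stops rather than straight-line distances.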
At sufficiently large coupling, we find a transition\nto a regime of exponential growth of the luminosity due to parametric\nresonance. We carry out 3+1 dimensional lattice simulations of axion\nelectrodynamics, at small and large coupling, including non-perturbative\neffects of parametric resonance as well as backreaction effects when necessary.\nWe also discuss medium (plasma) effects that lead to resonant axion to photon\nconversion, the relevance of the coherence of the soliton, and implications of our\nresults in astrophysical and cosmological settings.\n"} {"abstract": " Recent studies in three-dimensional spintronics propose that the \OE rsted\nfield plays a significant role in cylindrical nanowires. However, there is no\ndirect report of its impact on magnetic textures. Here, we use time-resolved\nscanning transmission X-ray microscopy to image the dynamic response of\nmagnetization in cylindrical Co$_{30}$Ni$_{70}$ nanowires subjected to\nnanosecond \OE rsted field pulses. We observe the tilting of longitudinally\nmagnetized domains towards the azimuthal \OE rsted field direction and create a\nrobust model to reproduce the differential magnetic contrasts and extract the\ntilt angle. Further, we report the compression and expansion, or breathing,\nof a Bloch-point domain wall that occurs when weak pulses with opposite sign\nare applied. We expect that this work lays the foundation for and provides an\nincentive to further studying complex and fascinating magnetization dynamics in\nnanowires, especially the predicted ultra-fast domain wall motion and\nassociated spin wave emissions.\n"} {"abstract": " Using an unquenched quark model, we predict a charmed-strange baryon state,\nnamely, the $\Omega_{c0}^d(1P,1/2^-)$. Its mass is predicted to be 2945 MeV,\nwhich is below the $\Xi_c\bar{K}$ threshold due to the nontrivial\ncoupled-channel effect. So the $\Omega_{c0}^d(1P,1/2^-)$ state could be\nregarded as the analog of the charmed-strange meson $D_{s0}^*(2317)$. It is a\ngood opportunity for the running Belle II experiment to search for this state\nin the $\Omega_c^{(*)}\gamma$ mass spectrum in the future.\n"} {"abstract": " We present a flexible discretization technique for computational models of\nthin tubular networks embedded in a bulk domain, for example a porous medium.\nThese systems occur in the simulation of fluid flow in vascularized biological\ntissue, root water and nutrient uptake in soil, hydrological or petroleum wells\nin rock formations, or heat transport in micro-cooling devices. The key\nprocesses, such as heat and mass transfer, are usually dominated by the\nexchange between the network system and the embedding domain. By explicitly\nresolving the interface between these domains with the computational mesh, we\ncan accurately describe these processes. The network is efficiently described\nby a network of line segments. Coupling terms are evaluated by projection of\nthe interface variables. The new method is naturally applicable to nonlinear\nand time-dependent problems and can therefore be used as a reference method in\nthe development of novel implicit interface 1D-3D methods and in the design of\nverification benchmarks for embedded tubular network methods. Implicit\ninterface methods, which do not resolve the bulk-network interface explicitly,\nhave proven to be very efficient but have so far only been mathematically\nanalyzed for linear elliptic problems.
Using two application scenarios, fluid perfusion of\nvascularized tissue and root water uptake from soil, we investigate the effect\nof some common modeling assumptions of implicit interface methods numerically.\n"} {"abstract": " In this paper a family of non-autonomous scalar parabolic PDEs over a general\ncompact and connected flow is considered. The existence or not of a\nneighbourhood of zero where the problems are linear has an influence on the\nmethods used and on the dynamics of the induced skew-product semiflow. That is\nwhy two cases are distinguished: linear-dissipative and purely dissipative\nproblems. In both cases, the structure of the global and pullback attractors is\nstudied using principal spectral theory. Besides, in the purely dissipative\nsetting, a simple condition is given, involving both the underlying linear\ndynamics and some properties of the nonlinear term, to determine the nontrivial\nsections of the attractor.\n"} {"abstract": " In this paper we study the variety of one dimensional representations of a\nfinite $W$-algebra attached to a classical Lie algebra, giving a precise\ndescription of the dimensions of the irreducible components. We apply this to\nprove a conjecture of Losev describing the image of his orbit method map. In\norder to do so we first establish new Yangian-type presentations of\nsemiclassical limits of the $W$-algebras attached to distinguished nilpotent\nelements in classical Lie algebras, using Dirac reduction.\n"} {"abstract": " We examine the possibility that dark matter (DM) consists of a gapped\ncontinuum, rather than ordinary particles. A Weakly-Interacting Continuum (WIC)\nmodel, coupled to the Standard Model via a Z-portal, provides an explicit\nrealization of this idea. The thermal DM relic density in this model is\nnaturally consistent with observations, providing a continuum counterpart of\nthe \"WIMP miracle\". Direct detection cross sections are strongly suppressed\ncompared to ordinary Z-portal WIMP, thanks to a unique effect of the continuum\nkinematics. Continuum DM states decay throughout the history of the universe,\nand observations of cosmic microwave background place constraints on potential\nlate decays. Production of WICs at colliders can provide a striking\ncascade-decay signature. We show that a simple Z-portal WIC model with the gap\nscale between 40 and 110 GeV provides a fully viable DM candidate consistent\nwith all current experimental constraints.\n"} {"abstract": " We propose a new method with $\\mathcal{L}_2$ distance that maps one\n$N$-dimensional distribution to another, taking into account available\ninformation about correspondences. We solve the high-dimensional problem in 1D\nspace using an iterative projection approach. To show the potentials of this\nmapping, we apply it to colour transfer between two images that exhibit\noverlapped scenes. Experiments show quantitative and qualitative competitive\nresults as compared with the state of the art colour transfer methods.\n"} {"abstract": " We propose an optimization scheme for ground-state cooling of a mechanical\nmode by coupling to a general three-level system. We formulate the optimization\nscheme, using the master equation approach, over a broad range of system\nparameters including detunings, decay rates, coupling strengths, and pumping\nrate. 
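One standard way to realize the "solve the high-dimensional problem in 1D" idea from the distribution-mapping abstract above is sliced, iterative 1D quantile matching. This sketch is our assumption about the general scheme: it ignores the paper's correspondence information and $\mathcal{L}_2$-specific formulation, and it assumes equal sample counts in the two distributions.

```python
import numpy as np

def iterative_projection_transfer(src, tgt, n_iter=50, seed=0):
    """Move samples `src` (n, d) toward the distribution of `tgt` (n, d) by
    repeatedly matching 1D quantiles along random directions."""
    rng = np.random.default_rng(seed)
    out = src.astype(float).copy()
    for _ in range(n_iter):
        d = rng.normal(size=out.shape[1])
        d /= np.linalg.norm(d)                       # random unit direction
        order = np.argsort(out @ d)                  # ranks of projected samples
        shift = np.sort(tgt @ d) - (out @ d)[order]  # match target quantiles
        out[order] += np.outer(shift, d)             # move along the direction
    return out

# Colour transfer use: rows are RGB pixels of the source and target images,
# e.g. new_pixels = iterative_projection_transfer(src_pixels, tgt_pixels)
```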
We implement the optimization scheme on three physical systems: a\ncolloidal quantum dot coupled to its confined phonon mode, a polariton coupled\nto a mechanical resonator mode, and a coupled-cavity system coupled to a\nmechanical resonator mode. These three physical systems span a broad range of\nmechanical mode frequencies, coupling rates, and decay rates. Our optimization\nscheme lowers the steady-state phonon number in all three cases by orders of\nmagnitude. We also calculate the net cooling rate by estimating the phonon\ndecay rate and show that the optimized system parameters also result in\nefficient cooling. The proposed optimization scheme can be readily extended to\nany generic driven three-level system coupled to a mechanical mode.\n"} {"abstract": " Experimental studies of high-purity kagome-lattice antiferromagnets (KAFM)\nare of great importance in attempting to better understand the predicted\nenigmatic quantum spin-liquid ground state of the KAFM model. However,\nrealizations of this model can rarely evade magnetic ordering at low\ntemperatures due to various perturbations to its dominant isotropic exchange\ninteractions. Such a situation is for example encountered due to sizable\nDzyaloshinskii-Moriya magnetic anisotropy in YCu$_3$(OH)$_6$Cl$_3$, which\nstands out from other KAFM materials by its perfect crystal structure. We find\nevidence of magnetic ordering also in the distorted sibling compound\nY$_3$Cu$_9$(OH)$_{18}$[Cl$_8$(OH)], which has recently been proposed to feature\na spin-liquid ground state arising from a spatially anisotropic kagome lattice.\nOur findings are based on a combination of bulk susceptibility, specific heat,\nand magnetic torque measurements that disclose a N\\'eel transition temperature\nof $T_N=11$~K in this material, which might feature a coexistence of magnetic\norder and persistent spin dynamics as previously found in\nYCu$_3$(OH)$_6$Cl$_3$. Contrary to previous studies of single crystals and\npowders containing impurity inclusions, we use high-purity single crystals of\nY$_3$Cu$_9$(OH)$_{18}$[Cl$_8$(OH)] grown via an optimized hydrothermal\nsynthesis route that minimizes such inclusions. This study thus demonstrates\nthat the lack of magnetic ordering in less pure samples of the investigated\ncompound does not originate from the reduced symmetry of the spin lattice but is\ninstead of extrinsic origin.\n"} {"abstract": " In the present work, we explore analytically and numerically the co-existence\nand interactions of ring dark solitons (RDSs) with other RDSs, as well as with\nvortices. The azimuthal instabilities of the rings are explored via the\nso-called filament method. As a result of their nonlinear interaction, the\nvortices are found to play a stabilizing role on the rings, yet their effect is\nnot sufficient to offer complete stabilization of RDSs. Nevertheless, complete\nstabilization of the relevant configuration can be achieved by the presence of\nexternal ring-shaped barrier potentials. Interactions of multiple rings are\nalso explored, and their equilibrium positions (as a result of their own\ncurvature and their tail-tail interactions) are identified. In this case too,\nstabilization is achieved via multi-ring external barrier potentials.\n"} {"abstract": " To explain X-ray spectra of active galactic nuclei (AGN), non-thermal\nactivity in AGN coronae such as pair cascade models has been extensively\ndiscussed in the past literature.
Although X-ray and gamma-ray observations in\nthe 1990s disfavored such pair cascade models, recent millimeter-wave\nobservations of nearby Seyferts establish the existence of weak non-thermal\ncoronal activity. Besides, the IceCube collaboration reported NGC 1068, a\nnearby Seyfert, as the hottest spot in their 10-yr survey. These pieces of\nevidence are enough to investigate the non-thermal perspective of AGN coronae\nin depth again. This article summarizes our current observational\nunderstanding of AGN coronae and describes how AGN coronae generate\nhigh-energy particles. We also provide ways to test the AGN corona model with\nradio, X-ray, MeV gamma-ray, and high-energy neutrino observations.\n"} {"abstract": " We present a method for contraction-based feedback motion planning of locally\nincrementally exponentially stabilizable systems with unknown dynamics that\nprovides probabilistic safety and reachability guarantees. Given a dynamics\ndataset, our method learns a deep control-affine approximation of the dynamics.\nTo find a trusted domain where this model can be used for planning, we obtain\nan estimate of the Lipschitz constant of the model error, which is valid with a\ngiven probability, in a region around the training data, providing a local,\nspatially-varying model error bound. We derive a trajectory tracking error\nbound for a contraction-based controller that is subjected to this model error,\nand then learn a controller that optimizes this tracking bound. With a given\nprobability, we verify the correctness of the controller and tracking error\nbound in the trusted domain. We then use the trajectory error bound together\nwith the trusted domain to guide a sampling-based planner to return\ntrajectories that can be robustly tracked in execution. We show results on a 4D\ncar, a 6D quadrotor, and a 22D deformable object manipulation task, showing our\nmethod plans safely with learned models of high-dimensional underactuated\nsystems, while baselines that plan without considering the tracking error bound\nor the trusted domain can fail to stabilize the system and become unsafe.\n"} {"abstract": " Fairness-aware machine learning for multiple protected attributes (referred\nto as multi-fairness hereafter) is receiving increasing attention as\ntraditional single-protected attribute approaches cannot ensure fairness\nw.r.t. other protected attributes. Existing methods, however, still ignore the\nfact that datasets in this domain are often imbalanced, leading to unfair\ndecisions towards the minority class. Thus, solutions are needed that achieve\nmulti-fairness, accurate overall predictive performance, and balanced\nperformance across the different classes. To this end, we introduce a new\nfairness notion, Multi-Max Mistreatment (MMM), which measures unfairness while\nconsidering both (multi-attribute) protected group and class membership of\ninstances. To learn an MMM-fair classifier, we propose a multi-objective\nproblem formulation. We solve the problem using a boosting approach that,\nin training, incorporates multi-fairness treatment in the distribution update\nand, post training, finds multiple Pareto-optimal solutions; then uses\npseudo-weight-based decision making to select optimal solution(s) among\naccurate, balanced, and multi-attribute fair solutions.\n"} {"abstract": " This paper proposes a set of techniques to investigate eye gaze and fixation\npatterns while users interact with electronic user interfaces.
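As a rough illustration of the Lipschitz-constant step in the motion-planning abstract above, the following sketch estimates a local Lipschitz constant of a model error from sampled pairs; the max-slope estimator, the synthetic data, and the neighbourhood radius are illustrative assumptions, not the paper's probabilistic estimator.

```python
# Illustrative sketch (not the paper's estimator): bound the local Lipschitz
# constant of a learned-model error e(x) from samples via pairwise slopes.
import numpy as np

def local_lipschitz_estimate(X, E, center, radius):
    """X: (n, d) states; E: (n, m) model errors; returns max ||dE||/||dX||
    over sample pairs inside a ball around `center`."""
    mask = np.linalg.norm(X - center, axis=1) <= radius
    Xr, Er = X[mask], E[mask]
    best = 0.0
    for i in range(len(Xr)):
        dX = np.linalg.norm(Xr[i + 1:] - Xr[i], axis=1)
        dE = np.linalg.norm(Er[i + 1:] - Er[i], axis=1)
        ok = dX > 1e-9
        if ok.any():
            best = max(best, float(np.max(dE[ok] / dX[ok])))
    return best

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))      # sampled states
E = 0.1 * np.sin(X[:, :2])                 # synthetic model error
print(local_lipschitz_estimate(X, E, center=np.zeros(4), radius=0.5))
```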
In particular,\ntwo case studies are presented - one on analysing eye gaze while interacting\nwith deceptive materials in web pages and another on analysing graphs on\nstandard computer monitors and virtual reality displays. We analysed spatial and\ntemporal distributions of eye gaze fixations and sequences of eye gaze\nmovements. We used this information to propose new design guidelines for avoiding\ndeceptive materials on the web and for user-friendly representation of data in 2D\ngraphs. In the 2D graph study, we identified that the area graph has the lowest number of\nclusters for users' gaze fixations and the lowest average response time. The\nresults of the 2D graph study were implemented in virtual and mixed reality\nenvironments. Along with this, it was observed that the duration while\ninteracting with deceptive materials in web pages is independent of the number\nof fixations. Furthermore, a web-based data visualization tool for analysing eye\ntracking data from single and multiple users was developed.\n"} {"abstract": " We analyze the top Lyapunov exponent of the product of sequences of two by\ntwo matrices that appears in the analysis of several statistical mechanics\nmodels with disorder: for example these matrices are the transfer matrices for\nthe nearest neighbor Ising chain with random external field, and the free\nenergy density of this Ising chain is the Lyapunov exponent we consider. We\nobtain the sharp behavior of this exponent in the large interaction limit when\nthe external field is centered: this balanced case turns out to be critical in\nmany respects. From a mathematical standpoint we precisely identify the\nbehavior of the top Lyapunov exponent of a product of two dimensional random\nmatrices close to a diagonal random matrix for which top and bottom Lyapunov\nexponents coincide. In particular, the Lyapunov exponent is only\n$\\log$-H\\\"older continuous.\n"} {"abstract": " The favourable properties of tungsten borides for shielding the central High\nTemperature Superconductor (HTS) core of a spherical tokamak fusion power plant\nare modelled using the MCNP code. The objectives are to minimize the power\ndeposition into the cooled HTS core, and to keep HTS radiation damage to\nacceptable levels by limiting the neutron and gamma fluxes. The shield\nmaterials compared are W2B, WB, W2B5 and WB4 along with a reactively sintered\nboride B0.329C0.074Cr0.024Fe0.274W0.299, monolithic W and WC. Of all these W2B5\ngave the most favourable results with a factor of ~10 or greater reduction in\nneutron flux and gamma energy deposition as compared to monolithic W. These\nresults are compared with layered water-cooled shields, giving the result that\nthe monolithic shields, with moderating boron, gave comparable neutron flux and\npower deposition, and (in the case of W2B5) even better performance. Good\nperformance without water-coolant has advantages from a reactor safety\nperspective due to the risks associated with radio-activation of oxygen. 10B\nisotope concentrations between 0 and 100% are considered for the boride\nshields. The naturally occurring 20% fraction gave much lower energy\ndepositions than the 0% fraction, but the improvement largely saturated beyond\n40%. Thermophysical properties of the candidate materials are discussed, in\nparticular the thermal strain. To our knowledge, the performance of W2B5 is\nunrivalled by other monolithic shielding materials. This is partly as its\ntrigonal crystal structure gives it higher atomic density compared with other\nborides.
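The top Lyapunov exponent discussed in the abstract above can be approximated numerically by multiplying random Ising transfer matrices with periodic renormalization; a minimal sketch follows, with illustrative parameters and a uniform centered field distribution assumed for concreteness.

```python
# Sketch: top Lyapunov exponent of a product of random 2x2 Ising transfer
# matrices, approximated by normalized matrix products (parameters illustrative).
import numpy as np

def transfer_matrix(J, h, beta):
    s = np.array([1.0, -1.0])
    # T[s, s'] = exp(beta * (J*s*s' + h*s)), the random-field Ising transfer matrix
    return np.exp(beta * (J * np.outer(s, s) + h * s[:, None]))

def top_lyapunov(J=1.0, beta=1.0, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 1.0])
    log_norm = 0.0
    for h in rng.uniform(-1.0, 1.0, size=n):   # i.i.d. centered random fields
        v = transfer_matrix(J, h, beta) @ v
        nv = np.linalg.norm(v)
        log_norm += np.log(nv)                 # renormalize to avoid overflow
        v /= nv
    return log_norm / n

print(top_lyapunov())   # proportional to the chain's free energy density
```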
It is also suggested that its high performance depends on it having\njust high enough 10B content to maintain a constant neutron energy spectrum\nacross the shield.\n"} {"abstract": " We show that the action of the Kauffman bracket skein algebra of a surface\n$\\Sigma$ on the skein module of the handlebody bounded by $\\Sigma$ is faithful\nif and only if the quantum parameter is not a root of 1.\n"} {"abstract": " Shared Memory is a mechanism that allows several processes to communicate\nwith each other by accessing -- writing or reading -- a set of variables that\nthey have in common. A Consistency Model defines how each process observes the\nstate of the Memory, according to the accesses performed by it and by the rest\nof the processes in the system. Therefore, it determines what value a read\nreturns when a given process issues it. This implies that there must be an\nagreement among all, or among processes in different subsets, on the order in\nwhich all or a subset of the accesses happened. It is clear that a higher\nquantity of accesses or processes taking part in the agreement makes the agreement\npossibly harder or slower to achieve. This is the main reason for which a number of\nConsistency Models for Shared Memory have been introduced. This paper is a\nhandy summary of [2] and [3] where consistency models (Sequential, Causal,\nPRAM, Cache, Processors, Slow), including synchronized ones (Weak, Release,\nEntry), were formally defined. This provides a better understanding of those\nmodels and a way to reason and compare them through a concise notation. There\nare many papers on this subject in the literature such as [11] with which this\nwork shares some concepts.\n"} {"abstract": " Applying an operator product expansion approach we update the Standard Model\nprediction of the $B_c$ lifetime from over 20 years ago. The non-perturbative\nvelocity expansion is carried out up to third order in the relative velocity of\nthe heavy quarks. The scheme dependence is studied using three different mass\nschemes for the $\\bar b$ and $c$ quarks, resulting in three different values\nconsistent with each other and with experiment. Special focus has been laid on\nrenormalon cancellation in the computation. Uncertainties resulting from scale\ndependence, neglecting the strange quark mass, non-perturbative matrix elements\nand parametric uncertainties are discussed in detail. The resulting\nuncertainties are still rather large compared to the experimental ones, and\ntherefore do not allow for clear-cut conclusions concerning New Physics effects\nin the $B_c$ decay.\n"} {"abstract": " The Helmholtz equation in one dimension, which describes the propagation of\nelectromagnetic waves in effectively one-dimensional systems, is equivalent to\nthe time-independent Schr\\\"odinger equation. The fact that the potential term\nentering the latter is energy-dependent obstructs the application of the\nresults on low-energy quantum scattering in the study of the low-frequency\nwaves satisfying the Helmholtz equation. We use a recently developed dynamical\nformulation of stationary scattering to offer a comprehensive treatment of the\nlow-frequency scattering of these waves for a general finite-range scatterer.\nIn particular, we give explicit formulas for the coefficients of the\nlow-frequency series expansion of the transfer matrix of the system which in\nturn allow for determining the low-frequency expansions of its reflection,\ntransmission, and absorption coefficients.
Our general results reveal a number\nof interesting physical aspects of low-frequency scattering particularly in\nrelation to permittivity profiles having balanced gain and loss.\n"} {"abstract": " We establish the dual equivalence of the category of (potentially nonunital)\noperator systems and the category of pointed compact nc (noncommutative) convex\nsets, extending a result of Davidson and the first author. We then apply this\ndual equivalence to establish a number of results about operator systems, some\nof which are new even in the unital setting.\n For example, we show that the maximal and minimal C*-covers of an operator\nsystem can be realized in terms of the C*-algebra of continuous nc functions on\nits nc quasistate space, clarifying recent results of Connes and van Suijlekom.\nWe also characterize \"C*-simple\" operator systems, i.e. operator systems with\nsimple minimal C*-cover, in terms of their nc quasistate spaces.\n We develop a theory of quotients of operator systems that extends the theory\nof quotients of unital operator algebras. In addition, we extend results of the\nfirst author and Shamovich relating to nc Choquet simplices. We show that an\noperator system is a C*-algebra if and only if its nc quasistate space is an nc\nBauer simplex with zero as an extreme point, and we show that a second\ncountable locally compact group has Kazhdan's property (T) if and only if for\nevery action of the group on a C*-algebra, the set of invariant quasistates is\nthe quasistate space of a C*-algebra.\n"} {"abstract": " With the growing use of camera devices, the industry has many image datasets\nthat provide more opportunities for collaboration between the machine learning\ncommunity and industry. However, the sensitive information in the datasets\ndiscourages data owners from releasing these datasets. Despite recent research\ndevoted to removing sensitive information from images, existing methods provide neither\na meaningful privacy-utility trade-off nor provable privacy guarantees. In this\nstudy, taking perceptual similarity into consideration, we propose\nperceptual indistinguishability (PI) as a formal privacy notion particularly\nfor images. We also propose PI-Net, a privacy-preserving mechanism that\nachieves image obfuscation with PI guarantee. Our study shows that PI-Net\nachieves a significantly better privacy-utility trade-off on public image\ndata.\n"} {"abstract": " The backup control barrier function (CBF) was recently proposed as a\ntractable formulation that guarantees the feasibility of the CBF quadratic\nprogramming (QP) via an implicitly defined control invariant set. The control\ninvariant set is based on a fixed backup policy and evaluated online by forward\nintegrating the dynamics under the backup policy. This paper is intended as a\ntutorial of the backup CBF approach and a comparative study against some benchmarks.\nFirst, the backup CBF approach is presented step by step with the underlying\nmath explained in detail. Second, we prove that the backup CBF always has a\nrelative degree 1 under mild assumptions. Third, the backup CBF approach is\ncompared with benchmarks such as Hamilton-Jacobi PDE and Sum-of-Squares on the\ncomputation of control invariant sets, which shows that one can obtain a\ncontrol invariant set close to the maximum control invariant set under a good\nbackup policy for many practical problems.\n"} {"abstract": " Complex fluids flow in complex ways in complex structures.
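The online evaluation underlying the backup CBF described above, forward-integrating a fixed backup policy and taking the minimum of the safety function along the trajectory, can be sketched in a few lines; the double-integrator dynamics, braking policy, and safety function below are illustrative choices, not the paper's benchmarks.

```python
# Sketch of a backup-CBF value evaluation: integrate the backup policy forward
# and take min h along the trajectory (toy double-integrator braking example).
import numpy as np

def f(x, u):                       # double integrator: state (position, velocity)
    return np.array([x[1], u])

def backup_policy(x, u_max=1.0):   # brake toward zero velocity
    return -u_max * np.sign(x[1])

def h(x, wall=10.0):               # safe iff position stays below the wall
    return wall - x[0]

def backup_cbf_value(x0, T=5.0, dt=0.01):
    x = np.array(x0, dtype=float)
    hmin = h(x)
    for _ in range(int(T / dt)):   # explicit Euler integration of the backup flow
        x = x + dt * f(x, backup_policy(x))
        hmin = min(hmin, h(x))
    return hmin                    # >= 0 certifies x0 w.r.t. this backup set

print(backup_cbf_value([5.0, 3.0]), backup_cbf_value([9.5, 3.0]))
```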
Transport of water\nand various organic and inorganic molecules in the central nervous system is\nimportant in a wide range of biological and medical processes [C. Nicholson,\nand S. Hrab\\v{e}tov\\'a, Biophysical Journal, 113(10), 2133(2017)]. However, the\nexact driving mechanisms are often not known. In this paper, we investigate\nflows induced by action potentials in an optic nerve as a prototype of the\ncentral nervous system (CNS). Different from traditional fluid dynamics\nproblems, flows in biological tissues such as the CNS are coupled with ion\ntransport. Flow is driven by osmosis created by the concentration gradient of ionic\nsolutions, which in turn influences the transport of ions. Our mathematical\nmodel is based on the known structural and biophysical properties of the\nexperimental system used by the Harvard group Orkand et al. [R.K. Orkand, J.G.\nNicholls, S.W. Kuffler, Journal of Neurophysiology, 29(4), 788(1966)].\nAsymptotic analysis and numerical computation show the significant role of\nwater in convective ion transport. The full model (including water) and the\nelectrodiffusion model (excluding water) are compared in detail to reveal an\ninteresting interplay between water and ion transport. In the full model,\nconvection due to water flow dominates inside the glial domain. This water flow\nin the glia contributes significantly to the spatial buffering of potassium in\nthe extracellular space. Convection in the extracellular domain does not\ncontribute significantly to spatial buffering. Electrodiffusion is the dominant\nmechanism for flows confined to the extracellular domain.\n"} {"abstract": " A (charged) rotating black hole may be unstable against a (charged) massive\nscalar field perturbation due to the existence of superradiance modes. The\nstability property depends on the parameters of the system. In this paper, the\nsuperradiant stable parameter space is studied for the four-dimensional\nextremal Kerr and Kerr-Newman black holes under massive and charged massive\nscalar perturbation. For the extremal Kerr case, it is found that when the\nangular frequency and proper mass of the scalar perturbation satisfy the\ninequality $\\omega<\\mu/\\sqrt{3}$, the extremal Kerr black hole and scalar\nperturbation system is superradiantly stable. For the Kerr-Newman black hole\ncase, when the angular frequency and proper mass of the scalar perturbation satisfy\n$\\mu/\\omega>\\frac{\\sqrt{3k^2+2}}{\\sqrt{k^2+2}},~k=\\frac{a}{M}$, the extremal Kerr-Newman black hole\nis superradiantly stable under charged massive scalar perturbation.\n"} {"abstract": " Multi-domain image-to-image translation with conditional Generative\nAdversarial Networks (GANs) can generate highly photo realistic images with\ndesired target classes, yet these synthetic images have not always been helpful\nto improve downstream supervised tasks such as image classification. Improving\ndownstream tasks with synthetic examples requires generating images with high\nfidelity to the unknown conditional distribution of the target class, which\nmany labeled conditional GANs attempt to achieve by adding a soft-max\ncross-entropy-loss-based auxiliary classifier in the discriminator.
As recent\nstudies suggest that the soft-max loss in the Euclidean space of deep features does\nnot leverage their intrinsic angular distribution, we propose to replace this\nloss in the auxiliary classifier with an additive angular margin (AAM) loss that\ntakes advantage of the intrinsic angular distribution, and promotes intra-class\ncompactness and inter-class separation to help the generator synthesize high\nfidelity images.\n We validate our method on RaFD and CIFAR-100, two challenging face expression\nand natural image classification data sets. Our method outperforms\nstate-of-the-art methods in several different evaluation criteria including\nrecently proposed GAN-train and GAN-test metrics designed to assess the impact\nof synthetic data on downstream classification task, assessing the usefulness\nin data augmentation for supervised tasks with prediction accuracy score and\naverage confidence score, and the well-known FID metric.\n"} {"abstract": " We consider the problem of minimizing the supplied energy of\ninfinite-dimensional linear port-Hamiltonian systems and prove that optimal\ntrajectories exhibit the turnpike phenomenon towards certain subspaces induced\nby the dissipation of the dynamics.\n"} {"abstract": " We investigate the complexity and performance of recurrent neural network\n(RNN) models as post-processing units for the compensation of fibre\nnonlinearities in digital coherent systems carrying polarization multiplexed\n16-QAM and 32-QAM signals. We evaluate three bi-directional RNN models, namely\nthe bi-LSTM, bi-GRU and bi-Vanilla-RNN and show that all of them are promising\nnonlinearity compensators especially in dispersion unmanaged systems. Our\nsimulations show that during inference the three models provide similar\ncompensation performance, therefore in real-life systems the simplest scheme\nbased on Vanilla-RNN units should be preferred. We compare bi-Vanilla-RNN with\nVolterra nonlinear equalizers and exhibit its superiority both in terms of\nperformance and complexity, thus highlighting that RNN processing is a very\npromising pathway for the upgrade of long-haul optical communication systems\nutilizing coherent detection.\n"} {"abstract": " The aim of this paper is to show that almost greedy bases induce tighter\nembeddings in superreflexive Banach spaces than in general Banach spaces. More\nspecifically, we show that an almost greedy basis in a superreflexive Banach\nspace $\\mathbb{X}$ induces embeddings that allow squeezing $\\mathbb{X}$ between\ntwo superreflexive Lorentz sequence spaces that are close to each other in the\nsense that they have the same fundamental function.\n"} {"abstract": " We provide a new degree bound on the weighted sum-of-squares (SOS)\npolynomials for Putinar-Vasilescu's Positivstellensatz.
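A minimal sketch of an additive angular margin head of the kind described above (normalize features and class weights, add a margin to the true-class angle, scale, then apply cross-entropy) follows; the margin and scale values are conventional choices, not necessarily those used in the paper.

```python
# Minimal additive-angular-margin (AAM) classification head; margin m and
# scale s are conventional illustrative values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMHead(nn.Module):
    def __init__(self, feat_dim, n_classes, s=30.0, m=0.50):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        # cosine similarity between L2-normalized features and class weights
        cos = F.linear(F.normalize(feats), F.normalize(self.W)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = torch.zeros_like(cos).scatter_(1, labels.view(-1, 1), 1.0)
        # add the angular margin m only to the true-class angle
        logits = self.s * torch.cos(theta + self.m * target)
        return F.cross_entropy(logits, labels)

head = AAMHead(feat_dim=128, n_classes=10)
loss = head(torch.randn(8, 128), torch.randint(0, 10, (8,)))
loss.backward()
print(float(loss))
```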
This leads to another\nPositivstellensatz saying that if $f$ is a polynomial of degree at most $2 d_f$\nnonnegative on a semialgebraic set having nonempty interior defined by finitely\nmany polynomial inequalities $g_j(x)\\ge 0$, $j=1,\\dots,m$ with\n$g_1:=L-\\|x\\|_2^2$ for some $L>0$, then there exist positive constants $\\bar c$\nand $c$ depending on $f,g_j$ such that for any $\\varepsilon>0$, for all $k\\ge\n\\bar c \\varepsilon^{-c}$, $f$ has the decomposition \\begin{equation}\n\\begin{array}{l} (1+\\|x\\|_2^2)^k(f+\\varepsilon)=\\sigma_0+\\sum_{j=1}^m\n\\sigma_jg_j \\,, \\end{array} \\end{equation} for some SOS polynomials $\\sigma_j$\nbeing such that the degrees of $\\sigma_0,\\sigma_jg_j$ are at most $2(d_f+k)$.\nHere $\\|\\cdot\\|_2$ denotes the $\\ell_2$ vector norm. As a consequence, we\nobtain a converging hierarchy of semidefinite relaxations for lower bounds in\npolynomial optimization on basic compact semialgebraic sets. The complexity of\nthis hierarchy is $\\mathcal{O}(\\varepsilon^{-c})$ for prescribed accuracy\n$\\varepsilon>0$. In particular, if $m=L=1$ then $c=65$, yielding the complexity\n$\\mathcal{O}(\\varepsilon^{-65})$ for the minimization of a polynomial on the\nunit ball. Our result improves the complexity bound\n$\\mathcal{O}(\\exp(\\varepsilon^{-c}))$ due to Nie and Schweighofer in [Journal\nof Complexity 23.1 (2007): 135-150].\n"} {"abstract": " Let $\\mathcal{F}\\subset 2^{[n]}$ be a set family such that the intersection\nof any two members of $\\mathcal{F}$ has size divisible by $\\ell$. The famous\nEventown theorem states that if $\\ell=2$ then $|\\mathcal{F}|\\leq 2^{\\lfloor\nn/2\\rfloor}$, and this bound can be achieved by, e.g., an `atomic'\nconstruction, i.e. splitting the ground set into disjoint pairs and taking\ntheir arbitrary unions. Similarly, splitting the ground set into disjoint sets\nof size $\\ell$ gives a family with pairwise intersections divisible by $\\ell$\nand size $2^{\\lfloor n/\\ell\\rfloor}$. Yet, as was shown by Frankl and Odlyzko,\nthese families are far from maximal. For infinitely many $\\ell$, they\nconstructed families $\\mathcal{F}$ as above of size $2^{\\Omega(n\\log\n\\ell/\\ell)}$. On the other hand, if the intersection of any number of sets in\n$\\mathcal{F}\\subset 2^{[n]}$ has size divisible by $\\ell$, then it is easy to\nshow that $|\\mathcal{F}|\\leq 2^{\\lfloor n/\\ell\\rfloor}$. In 1983 Frankl and\nOdlyzko conjectured that $|\\mathcal{F}|\\leq 2^{(1+o(1)) n/\\ell}$ holds already\nif one only requires that for some $k=k(\\ell)$ any $k$ distinct members of\n$\\mathcal{F}$ have an intersection of size divisible by $\\ell$. We completely\nresolve this old conjecture in a strong form, showing that $|\\mathcal{F}|\\leq\n2^{\\lfloor n/\\ell\\rfloor}+O(1)$ if $k$ is chosen appropriately, and the $O(1)$\nerror term is not needed if (and only if) $\\ell \\, | \\, n$, and $n$ is\nsufficiently large. Moreover the only extremal configurations have `atomic'\nstructure as above. Our main tool, which might be of independent interest, is a\nstructure theorem for set systems with small 'doubling'.\n"} {"abstract": " Visual Object Tracking (VOT) can be seen as an extended task of Few-Shot\nLearning (FSL). While the concept of FSL is not new in tracking and has been\npreviously applied by prior works, most of them are tailored to fit specific\ntypes of FSL algorithms and may sacrifice running speed. 
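The 'atomic' Eventown construction mentioned in the abstract above is easy to verify directly; the following small script builds the family of unions of disjoint pairs and checks that it has size 2^(n/2) with all pairwise intersections even.

```python
# Quick check of the 'atomic' Eventown construction: unions of disjoint pairs
# of [n] give 2^(n/2) sets whose pairwise intersections all have even size.
from itertools import combinations

n = 8
pairs = [frozenset({2 * i + 1, 2 * i + 2}) for i in range(n // 2)]  # split [n] into pairs
family = []
for r in range(len(pairs) + 1):
    for choice in combinations(pairs, r):
        family.append(frozenset().union(*choice))

assert len(family) == 2 ** (n // 2)
assert all(len(A & B) % 2 == 0 for A, B in combinations(family, 2))
print(f"{len(family)} sets, all pairwise intersections even")
```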
In this work, we\npropose a generalized two-stage framework that is capable of employing a large\nvariety of FSL algorithms while presenting faster adaptation speed. The first\nstage uses a Siamese Regional Proposal Network to efficiently propose the\npotential candidates and the second stage reformulates the task of classifying\nthese candidates to a few-shot classification problem. Following such a\ncoarse-to-fine pipeline, the first stage proposes informative sparse samples\nfor the second stage, where a large variety of FSL algorithms can be conducted\nmore conveniently and efficiently. As substantiation of the second stage, we\nsystematically investigate several forms of optimization-based few-shot\nlearners from previous works with different objective functions, optimization\nmethods, or solution space. Beyond that, our framework also entails a direct\napplication of the majority of other FSL algorithms to visual tracking,\nenabling mutual communication between researchers on these two topics.\nExtensive experiments on the major benchmarks, VOT2018, OTB2015, NFS, UAV123,\nTrackingNet, and GOT-10k are conducted, demonstrating a desirable performance\ngain and a real-time speed.\n"} {"abstract": " The fundamental processes by which nuclear energy is generated in the Sun\nhave been known for many years. However, continuous progress in areas such as\nneutrino experiments, stellar spectroscopy and helioseismic data and techniques\nrequires ever more accurate and precise determination of nuclear reaction cross\nsections, a fundamental physical input for solar models. In this work, we\nreview the current status of (standard) solar models and present a detailed\ndiscussion on the relevance of nuclear reactions for detailed predictions of\nsolar properties. In addition, we also provide an analytical model that helps\nunderstanding the relation between nuclear cross sections, neutrino fluxes and\nthe possibility they offer for determining physical characteristics of the\nsolar interior. The latter is of particular relevance in the context of the\nconundrum posed by the solar composition, the solar abundance problem, and in\nthe light of the first ever direct detection of solar CN neutrinos recently\nobtained by the Borexino collaboration. Finally, we present a short list of\nwishes about the precision with which nuclear reaction rates should be\ndetermined to allow for further progress in our understanding of the Sun.\n"} {"abstract": " We construct non-exact operator spaces satisfying the Weak Expectation\nProperty (WEP) and the Operator space version of the Local Lifting Property\n(OLLP). These examples should be compared with the example we recently gave of\na $C^*$-algebra with WEP and LLP. The construction produces several new\nanalogues among operator spaces of the Gurarii space, extending Oikhberg's\nprevious work. Each of our \"Gurarii operator spaces\" is associated to a class\nof finite dimensional operator spaces (with suitable properties). In each case\nwe show the space exists and is unique up to completely isometric isomorphism.\n"} {"abstract": " In this paper, we propose a novel graph convolutional network architecture,\nGraph Stacked Hourglass Networks, for 2D-to-3D human pose estimation tasks. The\nproposed architecture consists of repeated encoder-decoder, in which\ngraph-structured features are processed across three different scales of human\nskeletal representations. 
This multi-scale architecture enables the model to\nlearn both local and global feature representations, which are critical for 3D\nhuman pose estimation. We also introduce a multi-level feature learning\napproach using different-depth intermediate features and show the performance\nimprovements that result from exploiting multi-scale, multi-level feature\nrepresentations. Extensive experiments are conducted to validate our approach,\nand the results show that our model outperforms the state-of-the-art.\n"} {"abstract": " The electronic bandstructure of a solid is a collection of allowed bands\nseparated by forbidden bands, revealing the geometric symmetry of the crystal\nstructures. Comprehensive knowledge of the bandstructure with band parameters\nexplains intrinsic physical, chemical and mechanical properties of the solid.\nHere we report the artificial polaritonic bandstructures of two-dimensional\nhoneycomb lattices for microcavity exciton-polaritons using GaAs semiconductors\nover a wide range of detuning values, from cavity-photon-like (red-detuned) to\nexciton-like (blue-detuned) regimes. In order to understand the experimental\nbandstructures and their band parameters, such as gap energies, bandwidths,\nhopping integrals and density of states, we originally establish a polariton\nband theory within an augmented plane wave method with two-kind-bosons, cavity\nphotons trapped at the lattice sites and freely moving excitons. In particular,\nthis two-kind-band theory is absolutely essential to elucidate the exciton\neffect in the bandstructures of blue-detuned exciton-polaritons, where the\nflattened exciton-like dispersion appears at larger in-plane momentum values\ncaptured in our experimental access window. We reach an excellent agreement\nbetween theory and experiments at all detuning values.\n"} {"abstract": " We analyze the observed spatial, chemical and dynamical distributions of\nlocal metal-poor stars, based on photometrically derived metallicity and\ndistance estimates along with proper motions from the Gaia mission. Along the\nGalactic prime meridian, we identify stellar populations with distinct\nproperties in the metallicity versus rotational velocity space, including Gaia\nSausage/Enceladus (GSE), the metal-weak thick disk (MWTD), and the Splash\n(sometimes referred to as the \"in situ\" halo). We model the observed\nphase-space distributions using Gaussian mixtures and refine their positions\nand fractional contributions as a function of distances from the Galactic plane\n($|Z|$) and the Galactic center ($R_{\\rm GC}$), providing a global perspective\nof the major stellar populations in the local halo. Within the sample volume\n($|Z|<6$ kpc), stars associated with GSE exhibit a larger proportion of\nmetal-poor stars at greater $R_{\\rm GC}$ ($\\Delta \\langle{\\rm[Fe/H]}\\rangle\n/\\Delta R_{\\rm GC} =-0.05\\pm0.02$ dex kpc$^{-1}$). This observed trend, along\nwith a mild anticorrelation of the mean rotational velocity with metallicity\n($\\Delta \\langle v_\\phi \\rangle / \\Delta \\langle{\\rm[Fe/H]} \\rangle \\sim -10$\nkm s$^{-1}$ dex$^{-1}$), implies that more metal-rich stars in the inner region\nof the GSE progenitor were gradually stripped away, while the prograde orbit of\nthe merger at infall became radialized by dynamical friction. The metal-rich\nGSE stars are causally disconnected from the Splash structure, whose stars are\nmostly found on prograde orbits ($>94\\%$) and exhibit a more centrally\nconcentrated distribution than GSE.
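The Gaussian-mixture modelling step described in the abstract above can be sketched as follows on synthetic data; the component means, spreads, and sample sizes are invented for illustration and are not the paper's fitted values.

```python
# Sketch of a Gaussian-mixture decomposition in ([Fe/H], v_phi) space on
# synthetic data; all population parameters below are made up for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Mock populations: (mean [Fe/H], mean v_phi [km/s]) with crude spreads.
gse    = rng.normal([-1.3,   0.0], [0.3, 60.0], size=(2000, 2))
splash = rng.normal([-0.7,  60.0], [0.2, 40.0], size=(1000, 2))
mwtd   = rng.normal([-1.0, 150.0], [0.2, 30.0], size=(1000, 2))
data = np.vstack([gse, splash, mwtd])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(data)
for w, mu in zip(gmm.weights_, gmm.means_):
    print(f"weight {w:.2f}  [Fe/H] {mu[0]:+.2f}  v_phi {mu[1]:+6.1f} km/s")
```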
The MWTD exhibits a similar spatial\ndistribution to the Splash, suggesting earlier dynamical heating of stars in\nthe primordial disk of the Milky Way, possibly before the GSE merger.\n"} {"abstract": " Integrating external language models (LMs) into end-to-end (E2E) models\nremains a challenging task for domain-adaptive speech recognition. Recently,\ninternal language model estimation (ILME)-based LM fusion has shown significant\nword error rate (WER) reduction from Shallow Fusion by subtracting a weighted\ninternal LM score from an interpolation of E2E model and external LM scores\nduring beam search. However, on different test sets, the optimal LM\ninterpolation weights vary over a wide range and have to be tuned extensively\non well-matched validation sets. In this work, we perform LM fusion in the\nminimum WER (MWER) training of an E2E model to obviate the need for LM weights\ntuning during inference. Besides MWER training with Shallow Fusion (MWER-SF),\nwe propose a novel MWER training with ILME (MWER-ILME) where the ILME-based\nfusion is conducted to generate N-best hypotheses and their posteriors.\nAdditional gradient is induced when the internal LM is engaged in MWER-ILME loss\ncomputation. During inference, LM weights pre-determined in MWER training\nenable robust LM integrations on test sets from different domains. In experiments\nwith 30K-hour trained transformer transducers, MWER-ILME achieves on average\n8.8% and 5.8% relative WER reductions from MWER and MWER-SF training,\nrespectively, on 6 different test sets.\n"} {"abstract": " Aircraft manufacturing relies on pre-order bookings. The configuration of the\nto-be-assembled aircraft is fixed by design-assisted market surveys. The\nsensitivity of the supply chain to market conditions makes the\nrelationship between the product (aircraft) and the associated service\n(aviation) precarious. Traditional models to mitigate this risk to\nprofitability rely on increasing the scale of operations. However, the\nemergence of new standards for air quality monitoring, and the insistence on their\nimplementation, demands additional corrective measures. In the quest for a\nsolution, this research commentary establishes a link between airport\ntaxes and the nature of the transporting unit. It warns that merely\nincreasing the number of mid-haulage-range aircraft (MHA) in the fleet may\nnot be enough to overcome this challenge. In a two-pronged approach, the\ncommentary proposes the use of mostly-electric-assisted air planes (MEAP) and\nsmall-sized airports as the key to solving this complex problem. As a side note,\nthe appropriateness of the South Asian region as a test-bed for MEAP-based\naircraft is also investigated. The success of this idea can potentially be\nextended to any other aviation-friendly region of the world.\n"} {"abstract": " We present high-angular-resolution radio observations of the Arches cluster\nin the Galactic centre, one of the most massive young clusters in the Milky\nWay. The data were acquired in two epochs and at 6 and 10 GHz with the Karl G.\nJansky Very Large Array (JVLA). The rms noise reached is three to four times\nbetter than during previous observations and we have almost doubled the number\nof known radio stars in the cluster. Nine of them have spectral indices\nconsistent with thermal emission from ionised stellar winds, one is a confirmed\ncolliding wind binary (CWB), and two sources are ambiguous cases.
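The fusion score at the heart of the ILME approach described above subtracts a weighted internal-LM score from the interpolation of E2E and external-LM scores; a small helper makes this explicit, with illustrative weights standing in for the values fixed by MWER training.

```python
# Helper showing ILME-based fusion scoring during beam search; the weights
# lam_ext and lam_int below are illustrative placeholders.
def ilme_fusion_score(log_p_e2e: float,
                      log_p_ext_lm: float,
                      log_p_int_lm: float,
                      lam_ext: float = 0.5,
                      lam_int: float = 0.3) -> float:
    return log_p_e2e + lam_ext * log_p_ext_lm - lam_int * log_p_int_lm

# Example: rescore two hypotheses (numbers purely illustrative).
hyps = {"hello world": (-3.2, -4.0, -2.5), "hollow word": (-3.0, -7.5, -2.4)}
best = max(hyps, key=lambda h: ilme_fusion_score(*hyps[h]))
print(best)
```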
Regarding\nvariability, the radio emission appears to be stable on timescales of a few to\nten years. Finally, we show that the number of radio stars can be used as a\ntool for constraining the age and/or mass of a cluster and also its mass\nfunction.\n"} {"abstract": " We consider an elliptic problem with a nonlinear boundary condition involving\na nonlinearity with superlinear and subcritical growth at infinity and a\nbifurcation parameter as a factor. We use a re-scaling method, degree theory and\na continuation theorem to prove that there exists a connected branch of positive\nsolutions bifurcating from infinity when the parameter goes to zero. Moreover,\nif the nonlinearity satisfies additional conditions near zero, we establish a\nglobal bifurcation result, and discuss the number of positive solution(s) with\nrespect to the parameter using bifurcation theory and degree theory.\n"} {"abstract": " Real-world machine learning systems need to analyze test data that may differ\nfrom training data. In K-way classification, this is crisply formulated as\nopen-set recognition, core to which is the ability to discriminate open-set\ndata outside the K closed-set classes. Two conceptually elegant ideas for\nopen-set discrimination are: 1) discriminatively learning an open-vs-closed\nbinary discriminator by exploiting some outlier data as the open-set, and 2)\nunsupervised learning the closed-set data distribution with a GAN, using its\ndiscriminator as the open-set likelihood function. However, the former\ngeneralizes poorly to diverse open test data due to overfitting to the training\noutliers, which are unlikely to exhaustively span the open-world. The latter\ndoes not work well, presumably due to the unstable training of GANs. Motivated\nby the above, we propose OpenGAN, which addresses the limitation of each\napproach by combining them with several technical insights. First, we show that\na carefully selected GAN-discriminator on some real outlier data already\nachieves the state-of-the-art. Second, we augment the available set of real\nopen training examples with adversarially synthesized \"fake\" data. Third and\nmost importantly, we build the discriminator over the features computed by the\nclosed-world K-way networks. This allows OpenGAN to be implemented via a\nlightweight discriminator head built on top of an existing K-way network.\nExtensive experiments show that OpenGAN significantly outperforms prior\nopen-set methods.\n"} {"abstract": " Climate change presents an existential threat to human societies and the\nEarth's ecosystems more generally. Mitigation strategies naturally require\nsolving a wide range of challenging problems in science, engineering, and\neconomics. In this context, rapidly developing quantum technologies in\ncomputing, sensing, and communication could become useful tools to diagnose and\nhelp mitigate the effects of climate change. However, the intersection between\nclimate and quantum sciences remains largely unexplored. This preliminary\nreport aims to identify potential high-impact use-cases of quantum technologies\nfor climate change with a focus on four main areas: simulating physical\nsystems, combinatorial optimization, sensing, and energy efficiency.
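The lightweight discriminator head described in the OpenGAN abstract above can be sketched as a small binary classifier over the features of a frozen K-way network; the architecture, feature dimension, and synthetic features below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of a lightweight open-vs-closed discriminator head over features from
# a frozen K-way network (architecture and synthetic features illustrative).
import torch
import torch.nn as nn

class OpenSetHead(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))          # closed-set likelihood logit

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

head = OpenSetHead()
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
closed = torch.randn(32, 512)           # features of closed-set images
outliers = torch.randn(32, 512) + 2.0   # real or synthesized open-set features
for _ in range(100):
    logits = head(torch.cat([closed, outliers]))
    labels = torch.cat([torch.ones(32), torch.zeros(32)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.3f}")
```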
We hope\nthis report provides a useful resource towards connecting the climate and\nquantum science communities, and to this end we identify relevant research\nquestions and next steps.\n"} {"abstract": " Information theoretic sensor management approaches are an ideal solution to\nstate estimation problems when considering the optimal control of multi-agent\nsystems; however, they are too computationally intensive for large state spaces,\nespecially when considering the limited computational resources typical of\nlarge-scale distributed multi-agent systems. Reinforcement learning (RL) is a\npromising alternative which can find approximate solutions to distributed\noptimal control problems that take into account the resource constraints\ninherent in many systems of distributed agents. However, the RL training can be\nprohibitively inefficient, especially in low-information environments where\nagents receive little to no feedback in large portions of the state space. We\npropose a hybrid information-driven multi-agent reinforcement learning (MARL)\napproach that utilizes information theoretic models as heuristics to help the\nagents navigate large sparse state spaces, coupled with information-based\nrewards in an RL framework to learn higher-level policies. This paper presents\nour ongoing work towards this objective. Our preliminary findings show that\nsuch an approach can result in a system of agents that are approximately three\norders of magnitude more efficient at exploring a sparse state space than naive\nbaselines. While the work is still in its early stages, it provides a\npromising direction for future research.\n"} {"abstract": " Harnessing the quantum computation power of present\nnoisy intermediate-scale quantum (NISQ) devices has received tremendous interest in the\nlast few years. Here we study the learning power of a one-dimensional\nlong-range randomly-coupled quantum spin chain, within the framework of\nreservoir computing. In time sequence learning tasks, we find the system in the\nquantum many-body localized (MBL) phase holds long-term memory, which can be\nattributed to the emergent local integrals of motion. On the other hand, the MBL\nphase does not provide sufficient nonlinearity in learning highly-nonlinear\ntime sequences, which we show in a parity check task. This is reversed in the\nquantum ergodic phase, which provides sufficient nonlinearity but compromises\nmemory capacity. In the complex learning task of Mackey-Glass prediction that\nrequires both sufficient memory capacity and nonlinearity, we find optimal\nlearning performance near the MBL-to-ergodic transition. This leads to a\nguiding principle of quantum reservoir engineering at the edge of quantum\nergodicity reaching optimal learning power for generic complex reservoir\nlearning tasks. Our theoretical finding can be readily tested with present\nexperiments.\n"} {"abstract": " We present a study of the influence of magnetic field strength and morphology\non Type Ia Supernovae and their late-time light curves and spectra. In order to\nboth capture self-consistent magnetic field topologies as well as evolve our\nmodels to late times, a two-stage approach is taken. We study the early\ndeflagration phase (1s) using a variety of magnetic field strengths, and find\nthat the topology of the field is set by the burning, independent of the\ninitial strength. We study late time (~1000 days) light curves and spectra with\na variety of magnetic field topologies, and infer magnetic field strengths from\nobserved supernovae.
Lower limits are found to be $10^6$~G. This is determined by\nthe escape, or lack thereof, of positrons that are tied to the magnetic field.\nThe first stage employs 3d MHD and a local burning approximation, and uses the\ncode Enzo. The second stage employs a hybrid approach, with 3D radiation and\npositron transport, and spherical hydrodynamics. The second stage uses the code\nHYDRA. In our models, magnetic field amplification remains small during the\nearly deflagration phase. Late-time spectra bear the imprint of both magnetic\nfield strength and morphology. Implications for alternative explosion scenarios\nare discussed.\n"} {"abstract": " Computational Fluid Dynamics (CFD) is a major sub-field of engineering.\nCorresponding flow simulations are typically characterized by heavy\ncomputational resource requirements. Often, very fine and complex meshes are\nrequired to resolve physical effects in an appropriate manner. Since all CFD\nalgorithms scale at least linearly with the size of the underlying mesh\ndiscretization, finding an optimal mesh is key for computational efficiency.\n One methodology used to find optimal meshes is goal-oriented adaptive mesh\nrefinement. However, this is typically computationally demanding and only\navailable in a limited number of tools. Within this contribution, we adopt a\nmachine learning approach to identify optimal mesh densities. We generate\noptimized meshes using classical methodologies and propose to train a\nconvolutional network predicting optimal mesh densities given arbitrary\ngeometries. The proposed concept is validated on 2d wind tunnel simulations\nwith more than 60,000 simulations. Using a training set of 20,000 simulations\nwe achieve accuracies of more than 98.7%.\n Corresponding predictions of optimal meshes can be used as input for any mesh\ngeneration and CFD tool. Thus, without complex computations, any CFD engineer\ncan start their predictions from a high-quality mesh.\n"} {"abstract": " Context. The Sun's complex corona is the source of the solar wind and\ninterplanetary magnetic field. While the large scale morphology is well\nunderstood, the impact of variations in coronal properties on the scale of a\nfew degrees on properties of the interplanetary medium is not known. Solar\nOrbiter, carrying both remote sensing and in situ instruments into the inner\nsolar system, is intended to make these connections better than ever before.\nAims. We combine remote sensing and in situ measurements from Solar Orbiter's\nfirst perihelion at 0.5 AU to study the fine scale structure of the solar wind\nfrom the equatorward edge of a polar coronal hole with the aim of identifying\ncharacteristics of the corona which can explain the in situ variations.\nMethods. We use in situ measurements of the magnetic field, density and solar\nwind speed to identify structures on scales of hours at the spacecraft. Using\nPotential Field Source Surface mapping we estimate the source locations of the\nmeasured solar wind as a function of time and use EUI images to characterise\nthese solar sources. Results. We identify small scale stream interactions in\nthe solar wind with compressed magnetic field and density along with speed\nvariations which are associated with corrugations in the edge of the coronal\nhole on scales of several degrees, demonstrating that fine scale coronal\nstructure can directly influence solar wind properties and drive variations\nwithin individual streams. Conclusions.
This early analysis already\ndemonstrates the power of Solar Orbiter's combined remote sensing and in situ\npayload and shows that with future, closer perihelia it will be possible\ndramatically to improve our knowledge of the coronal sources of fine scale\nsolar wind structure, which is important both for understanding the phenomena\ndriving the solar wind and predicting its impacts at the Earth and elsewhere.\n"} {"abstract": " It is crucial for policymakers to understand the community prevalence of\nCOVID-19 so combative resources can be effectively allocated and prioritized\nduring the COVID-19 pandemic. Traditionally, community prevalence has been\nassessed through diagnostic and antibody testing data. However, despite the\nincreasing availability of COVID-19 testing, the required level has not been\nmet in most parts of the globe, introducing a need for an alternative method\nfor communities to determine disease prevalence. This is further complicated by\nthe observation that COVID-19 prevalence and spread varies across different\nspatial, temporal, and demographic dimensions. In this study, we examine trends in the\nspread of COVID-19 by utilizing the results of self-reported COVID-19 symptoms\nsurveys as an alternative to COVID-19 testing reports. This allows us to assess\ncommunity disease prevalence, even in areas with low COVID-19 testing ability.\nUsing individually reported symptom data from various populations, our method\npredicts the likely percentage of the population that tested positive for\nCOVID-19. We do so with a Mean Absolute Error (MAE) of 1.14 and Mean Relative\nError (MRE) of 60.40\\% with a 95\\% confidence interval of (60.12, 60.67). This\nimplies that our model's predictions deviate by about +/- 1140 cases from the true\ncount in a population of 1 million. In addition, we forecast the location-wise percentage\nof the population testing positive for the next 30 days using self-reported\nsymptoms data from previous days. The MAE for this method is as low as 0.15\n(MRE of 23.61\\% with a 95\\% confidence interval of (23.6, 13.7)) for New York. We\npresent an analysis of these results, exposing various clinical attributes of\ninterest across different demographics. Lastly, we qualitatively analyze how\nvarious policy enactments (testing, curfew) affect the prevalence of COVID-19\nin a community.\n"} {"abstract": " Precise quantitative delineation of tumor hypoxia is essential in radiation\ntherapy treatment planning to improve the treatment efficacy by targeting\nhypoxic sub-volumes. We developed a combined imaging system of positron\nemission tomography (PET) and electron paramagnetic resonance imaging (EPRI)\nof molecular oxygen to investigate the accuracy of PET imaging in assessing\ntumor hypoxia. The PET/EPRI combined imaging system aims to use EPRI to\nprecisely measure the oxygen partial pressure in tissues. This will evaluate\nthe validity of PET hypoxic tumor imaging by (near) simultaneously acquired\nEPRI as ground truth. The combined imaging system was constructed by\nintegrating a small animal PET scanner (inner ring diameter 62 mm and axial\nfield of view 25.6 mm) and an EPRI subsystem (field strength 25 mT and resonant\nfrequency 700 MHz). The compatibility between the PET and EPRI subsystems was\ntested with both phantom and animal imaging. Hypoxic imaging on a tumor mouse\nmodel using $^{18}$F-fluoromisonidazole radio-tracer was conducted with the\ndeveloped PET/EPRI system.
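Metrics of the kind quoted in the COVID-19 abstract above (MAE, MRE, and a 95% confidence interval) can be computed with a simple bootstrap; the following sketch uses synthetic numbers, not the study's data.

```python
# Sketch of computing MAE, MRE and a 95% bootstrap confidence interval
# (synthetic data, purely illustrative).
import numpy as np

def mae(y, yhat):  return float(np.mean(np.abs(y - yhat)))
def mre(y, yhat):  return float(np.mean(np.abs(y - yhat) / np.abs(y)) * 100)

rng = np.random.default_rng(0)
y = rng.uniform(1, 10, size=500)          # "true" positivity percentages
yhat = y + rng.normal(0, 1.2, size=500)   # model predictions

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))  # resample with replacement
    boot.append(mre(y[idx], yhat[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MAE={mae(y, yhat):.2f}  MRE={mre(y, yhat):.2f}%  95% CI=({lo:.2f}, {hi:.2f})")
```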
We report the development and initial imaging\nresults obtained from the PET/EPRI combined imaging system.\n"} {"abstract": " The effects of the evolution force are observable in nature at all structural\nlevels ranging from small molecular systems to conversely enormous biospheric\nsystems. However, the evolution force and work associated with formation of\nbiological structures have yet to be described mathematically or theoretically.\nIn addressing the conundrum, we consider evolution from a unique perspective\nand in doing so introduce the Fundamental Theory of the Evolution Force, FTEF.\nHerein, we prove FTEF by proof of concept using a synthetic evolution\nartificial intelligence to engineer 14-3-3 {\\zeta} docking proteins. Synthetic\ngenes were engineered by transforming 14-3-3 {\\zeta} sequences into time-based\nDNA codes that served as templates for random DNA hybridizations and genetic\nassembly. Application of time-based DNA codes allowed us to fast forward\nevolution, while damping the effect of point mutations. Notably, SYN-AI\nengineered a set of three architecturally conserved docking proteins that\nretained motion and vibrational dynamics of native Bos taurus 14-3-3 {\\zeta}.\n"} {"abstract": " Quantum cascade lasers (QCLs) facilitate compact optical frequency comb\nsources that operate in the mid-infrared and terahertz spectral regions, where\nmany molecules have their fundamental absorption lines. Enhancing the optical\nbandwidth of these chip-sized lasers is of paramount importance to address\ntheir application in broadband high-precision spectroscopy. In this work, we\nprovide a numerical and experimental investigation of the comb spectral width\nand show how it can be optimized to obtain its maximum value defined by the\nlaser gain bandwidth. The interplay of nonoptimal values of the resonant Kerr\nnonlinearity and the cavity dispersion can lead to significant narrowing of the\ncomb spectrum and reveals the best approach for dispersion compensation. The\nimplementation of high mirror losses is shown to be favourable and results in\nproliferation of the comb sidemodes. Ultimately, injection locking of QCLs by\nmodulating the laser bias around the roundtrip frequency provides a stable\nexternal knob to control the FM comb state and recover the maximum spectral\nwidth of the unlocked laser state.\n"} {"abstract": " The lack of an easily realizable complementary circuit technology offering\nlow static power consumption has been limiting the utilization of\nsemiconductor materials other than silicon. In this publication, a novel depletion\nmode JFET based complementary circuit technology is presented and hereinafter\nreferred to as Complementary Semiconductor (CS) circuit technology. The fact\nthat JFETs are pure semiconductor devices, i.e. a carefully optimized Metal\nOxide Semiconductor (MOS) gate stack is not required, facilitates the\nimplementation of CS circuit technology in many semiconductor materials, like\ne.g. germanium and silicon carbide. Furthermore, when the CS circuit technology\nis idle there are neither conductive paths between nodes that are biased at\ndifferent potentials nor forward biased p-n junctions and thus it enables low\nstatic power consumption. Moreover, the fact that the operation of depletion\nmode JFETs does not necessitate the incorporation of forward biased p-n\njunctions means that CS circuit technology is not limited to wide band-gap\nsemiconductor materials, low temperatures, and/or low voltage spans.
In this\npaper, the operation of CS logic is described and demonstrated via simulations.\n"} {"abstract": " Many robot manipulation skills can be represented with deterministic\ncharacteristics and there exist efficient techniques for learning parameterized\nmotor plans for those skills. However, one of the active research challenges that\nstill remains is to sustain manipulation capabilities in the situation of a mechanical\nfailure. Ideally, like biological creatures, a robotic agent should be able to\nreconfigure its control policy by adapting to dynamic adversaries. In this\npaper, we propose a method that allows an agent to survive in a situation of\nmechanical loss, and adaptively learn manipulation with compromised degrees of\nfreedom -- we call our method Survivable Robotic Learning (SRL). Our key idea\nis to leverage Bayesian policy gradient by encoding knowledge bias in posterior\nestimation, which in turn alleviates future policy-search exploration in\nterms of sample efficiency when compared to random-exploration-based policy\nsearch methods. SRL represents policy priors as a Gaussian process, which allows\ntractable computation of an approximate posterior (when the true gradient is\nintractable), by incorporating guided bias as proxy from prior replays. We\nevaluate our proposed method against an off-the-shelf model-free learning\nalgorithm (DDPG), testing on a hexapod robot platform which encounters\nincremental failure emulation, and our experiments show that our method\nimproves largely in terms of sample requirements and quantitative success ratio\nin all failure modes. A demonstration video of our experiments can be viewed\nat: https://sites.google.com/view/survivalrl\n"} {"abstract": " Following ideas introduced by Beardon-Minda and by Baribeau-Rivard-Wegert in\nthe context of the Schwarz-Pick lemma, we use the iterated hyperbolic\ndifference quotients to prove a multipoint Julia lemma. As applications, we\ngive a sharp estimate from below of the angular derivative at a boundary point,\ngeneralizing results due to Osserman, Mercer and others; and we prove a\ngeneralization to multiple fixed points of an interesting estimate due to Cowen\nand Pommerenke. These applications show that iterated hyperbolic difference\nquotients and multipoint Julia lemmas can be useful tools for exploring in a\nsystematic way the influence of higher order derivatives on the boundary\nbehaviour of holomorphic self-maps of the unit disk.\n"} {"abstract": " We develop a mesoscopic lattice model to study the morphology formation in\ninteracting ternary mixtures with evaporation of one component. As a concrete\napplication of our model, we wish to capture morphologies as they typically\narise during the fabrication of organic solar cells. In this context, we consider\nan evaporating solvent into which two other components are dissolved, as a\nmodel for a 2-component coating solution that is drying on a substrate. We\npropose a 3-spin dynamics to describe the evolution of the three interacting\nspecies. As our main tool, we use a Monte Carlo Metropolis-based algorithm, with\nthe possibility of varying the system's temperature, mixture composition,\ninteraction strengths, and evaporation kinetics. The main novelty is the\nstructure of the mesoscopic model -- a bi-dimensional lattice with periodic\nboundary conditions and divided into square cells to encode a mesoscopic range\ninteraction among the units. We investigate the effect of the model parameters\non the structure of the resulting morphologies.
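A bare-bones Metropolis skeleton for a three-species lattice mixture with solvent evaporation, in the spirit of the model described above, follows; it uses only nearest-neighbour couplings (the mesoscopic cell-based interaction range is omitted), and the couplings, temperature, and evaporation rule are all illustrative assumptions.

```python
# Toy Metropolis skeleton (illustrative parameters; nearest-neighbour couplings
# only, so this omits the paper's mesoscopic cell-based interaction range).
import numpy as np

L, T = 32, 0.8
J = np.array([[0.0, 1.0, 1.0],    # pairwise interaction energies between
              [1.0, 0.0, 1.0],    # species 0, 1 (solutes) and 2 (solvent)
              [1.0, 1.0, 0.0]])
rng = np.random.default_rng(0)
grid = rng.integers(0, 3, size=(L, L))    # random initial ternary mixture

def site_energy(g, i, j, s):
    """Interaction energy of species s at (i, j) with its four neighbours."""
    nbrs = [g[(i + 1) % L, j], g[(i - 1) % L, j], g[i, (j + 1) % L], g[i, (j - 1) % L]]
    return sum(J[s, n] for n in nbrs)

for sweep in range(100):
    for _ in range(L * L):
        # Kawasaki-like swap of two neighbouring sites conserves composition
        i, j = rng.integers(0, L, size=2)
        di, dj = rng.choice(np.array([[0, 1], [1, 0]]))
        k, l = (i + di) % L, (j + dj) % L
        s1, s2 = grid[i, j], grid[k, l]
        if s1 == s2:
            continue
        dE = (site_energy(grid, i, j, s2) + site_energy(grid, k, l, s1)
              - site_energy(grid, i, j, s1) - site_energy(grid, k, l, s2))
        dE += 2.0 * J[s1, s2]   # restore the swapped-pair bond, unchanged by the swap
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            grid[i, j], grid[k, l] = s2, s1
    # crude evaporation: occasionally remove solvent (species 2) from the top row
    solvent_cols = np.where(grid[0] == 2)[0]
    if solvent_cols.size:
        grid[0, rng.choice(solvent_cols)] = rng.integers(0, 2)  # replace by a solute
print("remaining solvent fraction:", float((grid == 2).mean()))
```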
Finally, we compare the results\nobtained with the mesoscopic model with corresponding ones based on an\nanalogous lattice model with a short-range interaction among the units, i.e.\nwhen the mesoscopic length scale coincides with the microscopic length scale of\nthe lattice.\n"} {"abstract": " We consider the direct $s$-channel gravitational production of dark matter\nduring the reheating process. Independent of the identity of the dark matter\ncandidate or its non-gravitational interactions, the gravitational process is\nalways present and provides a minimal production mechanism. During reheating, a\nthermal bath is quickly generated with a maximum temperature $T_{\\rm max}$, and\nthe temperature decreases as the inflaton continues to decay until the energy\ndensities of radiation and inflaton oscillations are equal, at $T_{\\rm RH}$.\nDuring these oscillations, $s$-channel gravitational production of dark matter\noccurs. We show that the abundance of dark matter (fermionic or scalar) depends\nprimarily on the combination $T_{\\rm max}^4/T_{\\rm RH} M_P^3$. We find that a\nsufficient density of dark matter can be produced over a wide range of dark\nmatter masses: from a GeV to a ZeV.\n"} {"abstract": " Magnetic and crystallographic transitions in the Cairo pentagonal magnet\nBi2Fe4O9 are investigated by means of infrared synchrotron-based spectroscopy\nas a function of temperature (20 - 300 K) and pressure (0 - 15.5 GPa). One of\nthe phonon modes is shown to exhibit an anomalous softening as a function of\ntemperature in the antiferromagnetic phase below 240 K, highlighting\nspin-lattice coupling. Moreover, under applied pressure at 40 K, an even larger\nsoftening is observed through the pressure-induced structural transition.\nLattice dynamical calculations reveal that this mode is indeed very peculiar as\nit involves a minimal bending of the strongest superexchange path in the\npentagonal planes, as well as a decrease of the distances between\nsecond-neighbor irons. The latter confirms the hypothesis made by Friedrich et\nal. [1] about an increase in the oxygen coordination of irons being at the\norigin of the pressure-induced structural transition. As a consequence, one\nexpects a new magnetic superexchange path that may alter the magnetic structure\nunder pressure.\n"} {"abstract": " Real-time detection of transients and rapid multi-wavelength follow-up are\nat the core of modern multi-messenger astrophysics. MeerTRAP is one such\ninstrument that has been deployed on the MeerKAT radio telescope in South\nAfrica to search for fast radio transients in real time. This, coupled with the\nability to rapidly localize the transient in combination with optical\nco-pointing by the MeerLICHT telescope, gives the instrument the edge in\nfinding and identifying the nature of the transient on short timescales. The\ncommensal nature of the project means that MeerTRAP will keep looking for\ntransients even if the telescope is not being used specifically for that\npurpose. Here, we present a brief overview of the MeerTRAP project. We describe\nthe overall design, specifications, and the software stack required to\nimplement such an undertaking. We conclude with some science highlights that\nhave been enabled by this venture over the last 10 months of operation.\n"} {"abstract": " Zonotopes are widely used for over-approximating forward reachable sets of\nuncertain linear systems for verification purposes. 
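Since this abstract's method turns on zonotope set arithmetic, a minimal sketch of the representation may help; the Minkowski sum below is exact for zonotopes, whereas the Minkowski difference needed for backward reachability is not, which is precisely the gap the paper's LP-based under-approximation fills (that algorithm is not reproduced here).

```python
import numpy as np

# A zonotope Z = {c + G @ xi : ||xi||_inf <= 1} with center c and generator
# matrix G. The Minkowski sum has a closed form; the Minkowski difference of
# two zonotopes is, in general, not a zonotope.
def minkowski_sum(c1, G1, c2, G2):
    return c1 + c2, np.hstack([G1, G2])

def support(c, G, d):
    """Support function h_Z(d); useful for containment and reduction checks."""
    return float(d @ c + np.abs(d @ G).sum())

c1, G1 = np.zeros(2), np.array([[1.0, 0.5], [0.0, 1.0]])
c2, G2 = np.ones(2), 0.1 * np.eye(2)
c, G = minkowski_sum(c1, G1, c2, G2)
print(support(c, G, np.array([1.0, 0.0])))   # 2.6
```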
In this paper, we use\nzonotopes to achieve more scalable algorithms that under-approximate backward\nreachable sets of uncertain linear systems for control design. The main\ndifference is that the backward reachability analysis is a two-player game and\ninvolves Minkowski difference operations, but zonotopes are not closed under\nsuch operations. We under-approximate this Minkowski difference with a\nzonotope, which can be obtained by solving a linear optimization problem. We\nfurther develop an efficient zonotope order reduction technique to bound the\ncomplexity of the obtained zonotopic under-approximations. The proposed\napproach is evaluated against existing approaches using randomly generated\ninstances and illustrated with several examples.\n"} {"abstract": " Identifying a low-dimensional informed parameter subspace offers a viable\npath to alleviating the dimensionality challenge in the sample-based solution\nto large-scale Bayesian inverse problems. This paper introduces a novel\ngradient-based dimension reduction method in which the informed subspace does\nnot depend on the data. This permits an online-offline computational strategy\nwhere the expensive detection of the low-dimensional structure of the problem\nis carried out in an offline phase, that is, before observing the data. This\nstrategy is particularly relevant for multiple inversion problems as the same\ninformed subspace can be reused. The proposed approach allows controlling the\napproximation error (in expectation over the data) of the posterior\ndistribution. We also present sampling strategies that exploit the informed\nsubspace to efficiently draw samples from the exact posterior distribution. The\nmethod is successfully illustrated on two numerical examples: a PDE-based\ninverse problem with a Gaussian process prior and a tomography problem with\nPoisson data and a Besov-$\\mathcal{B}^2_{11}$ prior.\n"} {"abstract": " Random Reshuffling (RR), also known as Stochastic Gradient Descent (SGD)\nwithout replacement, is a popular and theoretically grounded method for\nfinite-sum minimization. We propose two new algorithms: Proximal and Federated\nRandom Reshuffling (ProxRR and FedRR). The first algorithm, ProxRR, solves\ncomposite convex finite-sum minimization problems in which the objective is the\nsum of a (potentially non-smooth) convex regularizer and an average of $n$\nsmooth objectives. We obtain the second algorithm, FedRR, as a special case of\nProxRR applied to a reformulation of distributed problems with either\nhomogeneous or heterogeneous data. We study the algorithms' convergence\nproperties with constant and decreasing stepsizes, and show that they have\nconsiderable advantages over Proximal and Local SGD. In particular, our methods\nhave superior complexities and ProxRR evaluates the proximal operator once per\nepoch only. When the proximal operator is expensive to compute, this small\ndifference makes ProxRR up to $n$ times faster than algorithms that evaluate\nthe proximal operator in every iteration. We give examples of practical\noptimization tasks where the proximal operator is difficult to compute and\nProxRR has a clear advantage. Finally, we corroborate our results with\nexperiments on real data sets.\n"} {"abstract": " Conditional on the extended Riemann hypothesis, we show that with high\nprobability, the characteristic polynomial of a random symmetric $\\{\\pm\n1\\}$-matrix is irreducible. This addresses a question raised by Eberhard in\nrecent work. 
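As a small empirical companion to the rank distribution over $\mathbb{F}_p$ that this abstract studies (detailed next), one can sample symmetric {±1} matrices and compute their ranks mod p directly; the parameters below are toy values chosen for illustration.

```python
import numpy as np

# Empirical look at ranks of random symmetric {+-1} matrices over F_p,
# computed by Gaussian elimination mod p. Toy sizes; not the paper's proof.
def rank_mod_p(A, p):
    A = A.copy() % p
    n, r = A.shape[0], 0
    for col in range(n):
        piv = next((i for i in range(r, n) if A[i, col]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        A[r] = (A[r] * pow(int(A[r, col]), p - 2, p)) % p  # normalize pivot row
        for i in range(n):
            if i != r and A[i, col]:
                A[i] = (A[i] - A[i, col] * A[r]) % p
        r += 1
    return r

rng = np.random.default_rng(1)
p, n = 3, 40
M = rng.choice([-1, 1], size=(n, n))
M = np.triu(M) + np.triu(M, 1).T          # symmetrize
print(rank_mod_p(M.astype(object), p))    # object dtype keeps exact integers
```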
The main innovation in our work is establishing sharp estimates\nregarding the rank distribution of symmetric random $\\{\\pm 1\\}$-matrices over\n$\\mathbb{F}_p$ for primes $2 < p \\leq \\exp(O(n^{1/4}))$. Previously, such\nestimates were available only for $p = o(n^{1/8})$. At the heart of our proof\nis a way to combine multiple inverse Littlewood--Offord-type results to control\nthe contribution to singularity-type events of vectors in $\\mathbb{F}_p^{n}$\nwith anticoncentration at least $1/p + \\Omega(1/p^2)$. Previously, inverse\nLittlewood--Offord-type results only allowed control over vectors with\nanticoncentration at least $C/p$ for some large constant $C > 1$.\n"} {"abstract": " In recent years, the immiscible polymer blend system has attracted much\nattention as the matrix of nanocomposites. Herein, from the perspective of\ndynamics, the control of carbon nanotube (CNT) migration, aided by the\ninterface of polystyrene (PS) and poly(methyl methacrylate) (PMMA) blends, was\nachieved through a facile melt mixing method. Thus, we revealed a comprehensive\nrelationship between several typical CNT migration scenarios and the microwave\ndielectric properties of their nanocomposites. Based on the unique morphologies\nand phase domain structures of the immiscible matrix, we further investigated\nthe multiple microwave dielectric relaxation processes and shed new light on\nthe relation between relaxation peak position and the phase domain size\ndistribution. Moreover, by integrating the CNT interface-localization control\nwith the construction of a co-continuous matrix structure, we found that the\ninterface promotes a double percolation effect to achieve conductive\npercolation at low CNT loading (~1.06 vol%). Overall, the present study\nprovides a unique nanocomposite material design harmonizing functional filler\ndispersion and location as well as matrix architecture optimization for\nmicrowave applications.\n"} {"abstract": " Despite the acclaimed success of the magnetic field (H) formulation for\nmodeling the electromagnetic behavior of superconductors with the finite\nelement method, the use of vector-dependent variables in non-conducting domains\nleads to unnecessarily long computation times. In order to solve this issue, we\nhave recently shown how to use a magnetic scalar potential together with the\nH-formulation in the COMSOL Multiphysics environment to efficiently and\naccurately solve for the magnetic field surrounding superconducting domains.\nHowever, from the definition of the magnetic scalar potential, the\nnon-conducting domains must be made simply connected in order to obey Ampere's\nlaw. In this work, we use thin cuts to apply a discontinuity in $\\phi$ and make\nthe non-conducting domains simply connected. This approach is shown to be\neasily implementable in the COMSOL Multiphysics finite element program, already\nwidely used by the applied superconductivity community. We simulate three\ndifferent models in 2-D and 3-D using superconducting filaments and tapes, and\nshow that the results are in very good agreement with the H-A and\nH-formulations. Finally, we compare the computation times between the\nformulations, showing that the H-$\\phi$-formulation can be up to seven times\nfaster than the standard H-formulation in certain applications of interest.\n"} {"abstract": " Calcium scoring, a process in which arterial calcifications are detected and\nquantified in CT, is valuable in estimating the risk of cardiovascular disease\nevents. 
Especially when used to quantify the extent of calcification in the\ncoronary arteries, it is a strong and independent predictor of coronary heart\ndisease events. Advances in artificial intelligence (AI)-based image analysis\nhave produced a multitude of automatic calcium scoring methods. While most\nearly methods closely follow standard calcium scoring as accepted in the\nclinic, recent approaches extend this procedure to enable faster or more\nreproducible calcium scoring. This chapter provides an introduction to AI for\ncalcium scoring, and an overview of the developed methods and their\napplications. We conclude with a discussion on AI methods in calcium scoring\nand propose potential directions for future research.\n"} {"abstract": " Researchers have developed numerous debugging approaches to help programmers\nin the debugging process, but these approaches are rarely used in practice. In\nthis paper, we investigate how programmers debug their code and what\nresearchers should consider when developing debugging approaches. We conducted\nan online questionnaire where 102 programmers provided information about\nrecently fixed bugs. We found that the majority of bugs (69.6 %) are semantic\nbugs. Memory and concurrency bugs do not occur as frequently (6.9 % and 8.8 %),\nbut they consume more debugging time. Locating a bug is more difficult than\nreproducing and fixing it. Programmers often use only the IDE's built-in tools\nfor debugging. Furthermore, programmers frequently use a\nreplication-observation-deduction pattern when debugging. These results suggest\nthat debugging support is particularly valuable for memory and concurrency\nbugs. Furthermore, researchers should focus on the fault localization phase and\nintegrate their tools into commonly used IDEs.\n"} {"abstract": " Zero-shot learning (ZSL) refers to the problem of learning to classify\ninstances from novel (unseen) classes that are absent from the training (seen)\nset. Most ZSL methods infer the correlation between visual features and\nattributes to train the classifier for unseen classes. However, such models may\nhave a strong bias towards seen classes during training. Meta-learning has been\nintroduced to mitigate the bias, but meta-ZSL methods are inapplicable when\ntasks used for training are sampled from diverse distributions. In this regard,\nwe propose a novel Task-aligned Generative Meta-learning model for Zero-shot\nlearning (TGMZ). TGMZ mitigates the potentially biased training and enables\nmeta-ZSL to accommodate real-world datasets containing diverse distributions.\nTGMZ incorporates an attribute-conditioned task-wise distribution alignment\nnetwork that projects tasks into a unified distribution to deliver an unbiased\nmodel. Our comparisons with state-of-the-art algorithms show the improvements\nof 2.1%, 3.0%, 2.5%, and 7.6% achieved by TGMZ on the AWA1, AWA2, CUB, and aPY\ndatasets, respectively. TGMZ also outperforms competitors by 3.6% in the\ngeneralized zero-shot learning (GZSL) setting and 7.9% in our proposed\nfusion-ZSL setting.\n"} {"abstract": " We consider the Brenier-Schr{\\\"o}dinger problem on compact manifolds with\nboundary. In the spirit of a work by Arnaudon, Cruzeiro, L{\\'e}onard and\nZambrini, we study the kinetic property of regular solutions and obtain a link\nto the Navier-Stokes equations with an impermeability condition. 
We also\nenhance the class of models for which the problem admits a unique solution.\nThis involves a method of taking quotients by reflection groups, for which we\ngive several examples.\n"} {"abstract": " The fast protection of meshed HVDC grids requires the modeling of the\ntransient phenomena affecting the grid after a fault. In the case of hybrid\nlines comprising both overhead and underground parts, the numerous generated\ntraveling waves may be difficult to describe and evaluate. This paper proposes\na representation of the grid as a graph, allowing any waves traveling through\nthe grid to be taken into account. A relatively compact description of the\nwaves is then derived, based on a combined physical and behavioral modeling\napproach. The obtained model depends explicitly on the characteristics of the\ngrid as well as on the fault parameters. An application of the model to the\nidentification of the faulty portion of a hybrid line is proposed. Knowledge\nof the faulty portion is valuable because faults on overhead lines are\ngenerally temporary and allow the line to be reclosed.\n"} {"abstract": " We investigate the feasibility of using deep learning techniques, in the form\nof a one-dimensional convolutional neural network (1D-CNN), for the extraction\nof signals from the raw waveforms produced by the individual channels of liquid\nargon time projection chamber (LArTPC) detectors. A minimal generic LArTPC\ndetector model is developed to generate realistic noise and signal waveforms\nused to train and test the 1D-CNN, and evaluate its performance on low-level\nsignals. We demonstrate that our approach overcomes the inherent shortcomings\nof traditional cut-based methods by extending sensitivity to signals with ADC\nvalues below their imposed thresholds. This approach exhibits great promise in\nenhancing the capabilities of future-generation neutrino experiments like DUNE\nto carry out their low-energy neutrino physics programs.\n"} {"abstract": " Sensing and metrology play an important role in fundamental science and\napplications, by fulfilling the ever-present need for more precise data sets,\nand by allowing more reliable conclusions to be drawn on the validity of\ntheoretical models. Sensors are ubiquitous; they are used in applications\nacross a diverse range of fields including gravity imaging, geology,\nnavigation, security, timekeeping, spectroscopy, chemistry, magnetometry,\nhealthcare, and medicine. Current progress in quantum technologies inevitably\ntriggers the exploration of quantum systems to be used as sensors with new and\nimproved capabilities. This perspective initially provides a brief review of\nexisting and tested quantum sensing systems, before discussing future possible\ndirections for the use of superconducting quantum circuits in sensing and\nmetrology: superconducting sensors including many entangled qubits and schemes\nemploying Quantum Error Correction. The perspective also lists future research\ndirections that could be of great value beyond quantum sensing, e.g. for\napplications in quantum computation and simulation.\n"} {"abstract": " Over the past two decades, open systems that are described by a non-Hermitian\nHamiltonian have become a subject of intense research. These systems encompass\nclassical wave systems with balanced gain and loss, semiclassical models with\nmode-selective losses, and minimal quantum systems, and the meteoric research\non them has mainly focused on the wide range of novel functionalities they\ndemonstrate. 
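A minimal numerical illustration of the PT-symmetric setting discussed in this abstract: the standard two-mode gain-loss dimer (our example, not necessarily one of the paper's models) shows the transition from real eigenvalues to a complex-conjugate pair at the exceptional point.

```python
import numpy as np

# Textbook 2x2 PT-symmetric Hamiltonian with balanced gain/loss +-gamma and
# coupling kappa; eigenvalues are +-sqrt(kappa^2 - gamma^2): real below the
# exceptional point (kappa > gamma), purely imaginary above it.
def eigvals_pt(gamma, kappa):
    H = np.array([[1j * gamma, kappa],
                  [kappa, -1j * gamma]])
    return np.linalg.eigvals(H)

print(eigvals_pt(0.5, 1.0))   # PT-unbroken phase: two real eigenvalues
print(eigvals_pt(1.0, 0.5))   # PT-broken phase: complex-conjugate pair
```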
Here, we address the following questions: Does anything remain\nconstant in the dynamics of such open systems? What are the consequences of\nsuch conserved quantities? Through a spectral-decomposition method and an\nexplicit, recursive procedure, we obtain all conserved observables for general\n$\\mathcal{PT}$-symmetric systems. We then generalize the analysis to\nHamiltonians with other antilinear symmetries, and discuss the consequences of\nconservation laws for open systems. We illustrate our findings with several\nphysically motivated examples.\n"} {"abstract": " The study of rectified currents induced by active particles has received\ngreat attention due to their possible application to microscopic motors in\nbiological environments. Insertion of an {\\em asymmetric} passive object amid\nmany active particles has been regarded as an essential ingredient for\ngenerating such a rectified motion. Here, we report that the reverse situation\nis also possible, where the motion of an active object can be rectified by its\ngeometric asymmetry amid many passive particles. This may describe a\nunidirectional motion of polar biological agents with asymmetric shape. We also\nfind a weak but less diffusive rectified motion in a {\\em passive} mode without\nenergy pump-in. This \"moving by dissipation\" mechanism could be used as a\ndesign principle for developing more reliable microscopic motors.\n"} {"abstract": " It has been argued that supergravity models of inflation with vanishing sound\nspeeds, $c_s$, lead to an unbounded growth in the production rate of\ngravitinos. We consider several models of inflation to delineate the conditions\nfor which $c_s = 0$. In models with unconstrained superfields, we argue that\nthe mixing of the goldstino and inflatino in a time-varying background prevents\nthe uncontrolled production of the longitudinal modes. This conclusion is\nunchanged if there is a nilpotent field associated with supersymmetry breaking\nwith constraint ${\\bf S^2} =0$, i.e. sgoldstino-less models. Models with a\nsecond orthogonal constraint, ${\\bf S(\\Phi-\\bar{\\Phi})} =0$, where $\\bf{\\Phi}$\nis the inflaton superfield, which eliminates the inflatino, may suffer from the\nover-production of gravitinos. However, we point out that these models may be\nproblematic if this constraint originates from a UV Lagrangian, as this may\nrequire using higher derivative operators. These models may also exhibit other\npathologies such as $c_s > 1$, which are absent in theories with the single\nconstraint or unconstrained fields.\n"} {"abstract": " Power law size distributions are the hallmarks of nonlinear energy\ndissipation processes governed by self-organized criticality. Here we analyze\n75 data sets of stellar flare size distributions, mostly obtained from the {\\sl\nExtreme Ultra-Violet Explorer (EUVE)} and the {\\sl Kepler} mission. We aim to\nanswer the following questions for size distributions of stellar flares: (i)\nWhat are the values and uncertainties of power law slopes? (ii) Do power law\nslopes vary with time? (iii) Do power law slopes depend on the stellar\nspectral type? (iv) Are they compatible with solar flares? (v) Are they\nconsistent with self-organized criticality (SOC) models? We find that the\nobserved size distributions of stellar flare fluences (or energies) exhibit\npower law slopes of $\\alpha_E=2.09\\pm0.24$ for optical data sets observed with\nKepler. 
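A hedged sketch of how such power-law slopes are typically estimated: the maximum-likelihood estimator for a continuous power law above a threshold E_min, applied here to synthetic flare energies with a known slope (the automated inertial-range detection and background subtraction described in this abstract are not included).

```python
import numpy as np

# MLE for the slope alpha in N(E) ~ E^-alpha above E_min, on synthetic data
# drawn with alpha = 2.0 via inverse-CDF sampling. Toy values throughout.
rng = np.random.default_rng(2)
alpha_true, E_min, n = 2.0, 1.0, 5000
E = E_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

alpha_hat = 1 + n / np.log(E / E_min).sum()
err = (alpha_hat - 1) / np.sqrt(n)        # standard error of the estimator
print(f"alpha = {alpha_hat:.2f} +/- {err:.2f}")
```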
The observed power law slopes do not show much time variability and do\nnot depend on the stellar spectral type (M, K, G, F, A, Giants). In solar\nflares we find that background subtraction lowers the uncorrected value of\n$\\alpha_E=2.20\\pm0.22$ to $\\alpha_E=1.57\\pm0.19$. Furthermore, most of the\nstellar flares are not temporally resolved in low-cadence (30 min) Kepler data,\nwhich causes an additional bias. Taking these two biases into account, the\nstellar flare data sets are consistent with the theoretical prediction $N(x)\n\\propto x^{-\\alpha_x}$ of self-organized criticality models, i.e.,\n$\\alpha_E=1.5$. Thus, accurate power law fits require automated detection of\nthe inertial range and background subtraction, which can be modeled with the\ngeneralized Pareto distribution, finite-system size effects, and extreme event\noutliers.\n"} {"abstract": " If every vertex in a map has one out of two face-cycle types, then the map is\nsaid to be $2$-semiequivelar. A 2-uniform tiling is an edge-to-edge tiling of\nregular polygons having $2$ distinct transitivity classes of vertices. Clearly,\na $2$-uniform map is $2$-semiequivelar. The converse of this is not true in\ngeneral. There are 20 distinct 2-uniform tilings (these are of $14$ different\ntypes) on the plane. In this article, we prove that a $2$-semiequivelar\ntoroidal map $K$ has a finite $2$-uniform cover if the universal cover of $K$\nis $2$-uniform, except for two types.\n"} {"abstract": " To keep up with demand, servers will scale up to handle hundreds of thousands\nof clients simultaneously. Much of the focus of the community has been on\nscaling servers in terms of aggregate traffic intensity (packets transmitted\nper second). However, bottlenecks caused by the increasing number of concurrent\nclients, resulting in a large number of concurrent flows, have received little\nattention. In this work, we focus on identifying such bottlenecks. In\nparticular, we define two broad categories of problems: namely, admitting more\npackets into the network stack than can be handled efficiently, and increasing\nper-packet overhead within the stack. We show that these problems contribute to\nhigh CPU usage and network performance degradation in terms of aggregate\nthroughput and RTT. Our measurement and analysis are performed in the context\nof the Linux networking stack, the most widely used publicly available\nnetworking stack. Further, we discuss the relevance of our findings to other\nnetwork stacks. The goal of our work is to highlight considerations required in\nthe design of future networking stacks to enable efficient handling of large\nnumbers of clients and flows.\n"} {"abstract": " This paper presents a new technique for disturbing the algebraic structure of\nlinear codes in code-based cryptography. Specifically, we introduce the\nso-called semilinear transformations in coding theory and then creatively apply\nthem to the construction of code-based cryptosystems. Note that\n$\\mathbb{F}_{q^m}$ can be viewed as an $\\mathbb{F}_q$-linear space of dimension\n$m$; a semilinear transformation $\\varphi$ is therefore defined as an\n$\\mathbb{F}_q$-linear automorphism of $\\mathbb{F}_{q^m}$. Then we impose this\ntransformation on a linear code $\\mathcal{C}$ over $\\mathbb{F}_{q^m}$. 
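A standard concrete instance of such a semilinear map (our illustration, not necessarily the transformation chosen in the paper) is the Frobenius automorphism, which is F_q-linear on F_{q^m} but not F_{q^m}-linear:

```latex
% Frobenius map on F_{q^m}: additive, and it twists scalars by q-th powers.
\[
  \varphi(x) = x^{q}, \qquad
  \varphi(\lambda x + \mu y) = \lambda^{q}\,\varphi(x) + \mu^{q}\,\varphi(y).
\]
% For \lambda \in F_q one has \lambda^q = \lambda, so \varphi is F_q-linear;
% applied entrywise to a code C over F_{q^m}, it yields an F_q-linear space
% that in general is no longer F_{q^m}-linear.
```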
It is\nclear that $\\varphi(\\mathcal{C})$ forms an $\\mathbb{F}_q$-linear space, but\ngenerally no longer preserves the $\\mathbb{F}_{q^m}$-linearity.\nInspired by this observation, a new technique for masking the structure of\nlinear codes is developed in this paper. Meanwhile, we endow the underlying\nGabidulin code with the so-called partial cyclic structure to reduce the\npublic-key size. Compared to some other code-based cryptosystems, our proposal\nadmits a much more compact representation of public keys. For instance, 2592\nbytes are enough to achieve the security of 256 bits, almost 403 times smaller\nthan that of Classic McEliece entering the third round of the NIST PQC project.\n"} {"abstract": " Mel-filterbanks are fixed, engineered audio features which emulate human\nperception and have been used throughout the history of audio understanding up\nto today. However, their undeniable qualities are counterbalanced by the\nfundamental limitations of handmade representations. In this work we show that\nwe can train a single learnable frontend that outperforms mel-filterbanks on a\nwide range of audio signals, including speech, music, audio events and animal\nsounds, providing a general-purpose learned frontend for audio classification.\nTo do so, we introduce a new principled, lightweight, fully learnable\narchitecture that can be used as a drop-in replacement for mel-filterbanks. Our\nsystem learns all operations of audio feature extraction, from filtering to\npooling, compression and normalization, and can be integrated into any neural\nnetwork at a negligible parameter cost. We perform multi-task training on eight\ndiverse audio classification tasks, and show consistent improvements of our\nmodel over mel-filterbanks and previous learnable alternatives. Moreover, our\nsystem outperforms the current state-of-the-art learnable frontend on Audioset,\nwith orders of magnitude fewer parameters.\n"} {"abstract": " With the dramatic rise in high-quality galaxy data expected from Euclid and\nthe Vera C. Rubin Observatory, there will be increasing demand for fast\nhigh-precision methods for measuring galaxy fluxes. These will be essential for\ninferring the redshifts of the galaxies. In this paper, we introduce Lumos, a\ndeep learning method to measure photometry from galaxy images. Lumos builds on\nBKGnet, an algorithm to predict the background and its associated error, and\npredicts the background-subtracted flux probability density function. We have\ndeveloped Lumos for data from the Physics of the Accelerating Universe Survey\n(PAUS), an imaging survey using a 40 narrow-band filter camera (PAUCam). PAUCam\nimages are affected by scattered light, displaying a background noise pattern\nthat can be predicted and corrected for. On average, Lumos increases the SNR of\nthe observations by a factor of 2 compared to an aperture photometry algorithm.\nIt also incorporates other advantages such as robustness to distorting\nartifacts (e.g. cosmic rays or scattered light), the ability to deblend, and\nlower sensitivity to uncertainties in the galaxy profile parameters used to\ninfer the photometry. 
Indeed, the number of flagged photometry outlier\nobservations is reduced from 10% to 2%, compared to aperture photometry.\nFurthermore, with Lumos photometry, the photo-z scatter is reduced by ~10% with\nthe Deepz machine learning photo-z code and the photo-z outlier rate by 20%.\nThe photo-z improvement is lower than expected from the SNR increase; however,\ncurrently the photometric calibration and outliers in the photometry seem to be\nthe limiting factors.\n"} {"abstract": " It is well known that quantum effects may remove the intrinsic\nsingularity of black holes. Also, the quintessence scalar field is a\ncandidate model for describing late-time accelerated expansion. Accordingly,\nKazakov and Solodukhin considered the back-reaction of the spacetime due to the\nquantum fluctuations of the background metric to deform the Schwarzschild black\nhole, changing the intrinsic singularity of the black hole to a 2-sphere with a\nradius of the order of the Planck length. Also, Kiselev rewrote the\nSchwarzschild metric by taking into account the quintessence field in the\nbackground. In this study, we consider the quantum-corrected Schwarzschild\nblack hole inspired by Kazakov-Solodukhin's work, and the Schwarzschild black\nhole surrounded by quintessence deduced by Kiselev, to study the mutual effects\nof quantum fluctuations and quintessence on the accretion onto the black hole.\nConsequently, the radial component of the 4-velocity and the proper energy\ndensity of the accreting fluid have a finite value on the surface of its\ncentral 2-sphere due to the presence of quantum corrections. Also, by comparing\nthe accretion parameters in different kinds of black holes, we infer that the\npresence of a point-like electric charge in the spacetime is somewhat similar\nto some quantum fluctuations in the background metric.\n"} {"abstract": " Batched network coding is a variation of random linear network coding which\nhas low computational and storage costs. In order to adapt to random\nfluctuations in the number of erasures in individual batches, it is not optimal\nto recode and transmit the same number of packets for all batches. Different\ndistributed optimization models, which are called adaptive recoding schemes,\nwere formulated for this purpose. The key component of these optimization\nproblems is the expected value of the rank distribution of a batch at the next\nnetwork node, which is also known as the expected rank. In this paper, we put\nforth a unified adaptive recoding framework with an arbitrary recoding field\nsize. We show that the expected rank functions are concave when the packet loss\npattern is a stationary stochastic process, which covers, but is not limited\nto, independent packet loss and the Gilbert-Elliott packet loss model. Under\nthis concavity assumption, we show that there always exists a solution which\nnot only can minimize the randomness in the number of recoded packets but also\ncan tolerate rank distribution errors due to inaccurate measurements or limited\nprecision of the machine. We provide an algorithm to obtain such an optimal\nsolution, and propose tuning schemes that can turn any feasible solution into a\ndesired optimal solution.\n"} {"abstract": " Based on direct numerical simulations with point-like inertial particles\ntransported by homogeneous and isotropic turbulent flows, we present evidence\nfor the existence of the Markov property in Lagrangian turbulence. 
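As a toy illustration of the kind of check involved (not the paper's analysis): for a Markov process, conditioning on an additional earlier value should not change the conditional statistics. The sketch below verifies this on a simulated AR(1) surrogate; for Lagrangian tracks one would apply it to velocity increments at varying step sizes to locate the Einstein-Markov length.

```python
import numpy as np

# Necessary Markov condition: p(u3 | u2, u1) = p(u3 | u2). We compare
# conditional means on coarse bins of a simulated AR(1) track (toy surrogate).
rng = np.random.default_rng(3)
n, phi = 200_000, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

u1, u2, u3 = x[:-2], x[1:-1], x[2:]
mid = np.abs(u2) < 0.2               # condition on u2 near zero
double = mid & (u1 > 1.0)            # additionally condition on an earlier value
print(u3[mid].mean(), u3[double].mean())   # similar means support Markovianity
```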
We show that the\nMarkov property is valid for a finite step size larger than a Stokes\nnumber-dependent Einstein-Markov memory length. This enables the description of\nmulti-scale statistics of Lagrangian particles by Fokker-Planck equations,\nwhich can be embedded in an interdisciplinary approach linking the statistical\ndescription of turbulence with fluctuation theorems of non-equilibrium\nstochastic thermodynamics and with local flow structures.\n"} {"abstract": " A dedicated in situ heating setup in a scanning electron microscope (SEM),\nfollowed by ex situ atomic force microscopy (AFM) and electron backscatter\ndiffraction (EBSD), is used to characterize the nucleation and early growth\nstages of Fe-Al intermetallics (IMs) at 596 {\\deg}C. Location tracking is\nused to interpret the further characterization. Ex situ AFM observations reveal\na slight shrinkage and an out-of-plane protrusion of the IM at the onset of IM\nnucleation, followed by directional growth. The formed interfacial IM compounds\nwere identified by ex situ EBSD. It is now clearly demonstrated that the\n{\\theta}-phase nucleates first, prior to the diffusion-controlled growth of the\n{\\eta}-phase. The {\\theta}-phase prevails in the intermetallic layer.\n"} {"abstract": " We present a conceptual study of a large format imaging spectrograph for the\nLarge Submillimeter Telescope (LST) and the Atacama Large Aperture\nSubmillimeter Telescope (AtLAST). Recent observations of high-redshift galaxies\nindicate the onset of the earliest star formation just a few hundred million\nyears after the Big Bang (i.e., z = 12--15), and LST/AtLAST will provide a\nunique pathway to uncover spectroscopically identified first-forming galaxies\nin the pre-reionization era, once it is equipped with a large format imaging\nspectrograph. We propose a 3-band (200, 255, and 350 GHz), medium resolution (R\n= 2,000) imaging spectrograph with 1.5 M detectors in total based on the KATANA\nconcept (Karatsu et al. 2019), which exploits technologies of the integrated\nsuperconducting spectrometer (ISS) and a large-format imaging array. A 1-deg2\ndrilling survey (3,500 hr) will capture a large number of [O III] 88 um (and [C\nII] 158 um) emitters at z = 8--9, and constrain [O III] luminosity functions at\nz > 12.\n"} {"abstract": " Score-based diffusion models synthesize samples by reversing a stochastic\nprocess that diffuses data to noise, and are trained by minimizing a weighted\ncombination of score matching losses. The log-likelihood of score-based\ndiffusion models can be tractably computed through a connection to continuous\nnormalizing flows, but log-likelihood is not directly optimized by the weighted\ncombination of score matching losses. We show that for a specific weighting\nscheme, the objective upper bounds the negative log-likelihood, thus enabling\napproximate maximum likelihood training of score-based diffusion models. We\nempirically observe that maximum likelihood training consistently improves the\nlikelihood of score-based diffusion models across multiple datasets, stochastic\nprocesses, and model architectures. Our best models achieve negative\nlog-likelihoods of 2.83 and 3.76 bits/dim on CIFAR-10 and ImageNet 32x32\nwithout any data augmentation, on a par with state-of-the-art autoregressive\nmodels on these tasks.\n"} {"abstract": " We study a model system with nematic and magnetic orders, within a channel\ngeometry modelled by an interval, $[-D, D]$. 
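Returning briefly to the score-based diffusion abstract above: a hedged PyTorch sketch of the likelihood-weighted denoising score matching objective it describes, for a variance-preserving SDE with the common linear beta schedule; `score_net` is a placeholder model and the schedule constants are illustrative, not the paper's exact setup.

```python
import torch

# Denoising score matching with weighting lambda(t) = g(t)^2 for the VP SDE
# dx = -0.5*beta(t)*x dt + sqrt(beta(t)) dW, beta(t) = 0.1 + 19.9 t.
# Assumes x0 has shape (batch, dim); score_net(x_t, t) is any score model.
def likelihood_weighted_dsm(score_net, x0):
    t = torch.rand(x0.shape[0], device=x0.device) * (1 - 1e-5) + 1e-5
    int_beta = 0.1 * t + 0.5 * 19.9 * t**2           # integral of beta over [0, t]
    mean = torch.exp(-0.5 * int_beta)[:, None] * x0  # E[x_t | x0]
    std = torch.sqrt(1 - torch.exp(-int_beta))[:, None]
    z = torch.randn_like(x0)
    x_t = mean + std * z
    target = -z / std                                # score of the Gaussian kernel
    g2 = 0.1 + 19.9 * t                              # g(t)^2 = beta(t)
    err = (score_net(x_t, t) - target) ** 2
    # with this weighting, the loss upper-bounds the negative log-likelihood
    # (up to constants), per the abstract's claim
    return (g2[:, None] * err).sum(dim=1).mean()
```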
The system is characterised by a\ntensor-valued nematic order parameter $\\mathbf{Q}$ and a vector-valued\nmagnetisation $\\mathbf{M}$, and the observable states are modelled as stable\ncritical points of an appropriately defined free energy. In particular, the\nfull energy includes a nemato-magnetic coupling term characterised by a\nparameter $c$. We (i) derive $L^\\infty$ bounds for $\\mathbf{Q}$ and\n$\\mathbf{M}$; (ii) prove a uniqueness result in parameter regimes defined by\n$c$, $D$ and material- and temperature-dependent correlation lengths; (iii)\nanalyse order reconstruction solutions, possessing domain walls, and their\nstabilities as a function of $D$ and $c$; and (iv) perform numerical studies\nthat elucidate the interplay of $c$ and $D$ for multistability.\n"} {"abstract": " We propose a mechanism to substantially rectify radiative heat flow by\nmatching thin films of metal-to-insulator transition materials and polar\ndielectrics in the electromagnetic near field. By leveraging the distinct\nscaling behaviors of the local density of states with film thickness for metals\nand insulators, we theoretically achieve rectification ratios over 140 (a\n10-fold improvement over the state of the art) with nanofilms of vanadium\ndioxide and cubic boron nitride in the parallel-plane geometry at\nexperimentally feasible gap sizes (~100 nm). Our rational design offers\nrelative ease of fabrication, flexible choice of materials, and robustness\nagainst deviations from optimal film thicknesses. We expect this work to\nfacilitate the application of thermal diodes in solid-state thermal circuits\nand energy conversion devices.\n"} {"abstract": " We propose a predictive model of the turbulent burning velocity over a wide\nrange of conditions. The model consists of sub-models of the stretch factor and\nthe turbulent flame area. The stretch factor characterizes the flame response\nto turbulence stretch and incorporates effects of detailed chemistry and\ntransport with a lookup table of laminar counterflow flames. The flame area\nmodel captures the area growth based on Lagrangian statistics of propagating\nsurfaces, and considers effects of turbulence length scales and fuel\ncharacteristics. The present model predicts the turbulent burning velocity via\nan algebraic expression without free parameters. It is validated against 285\ndirect numerical simulation or experimental cases reported by various research\ngroups on planar and Bunsen flames over a wide range of conditions, covering\nfuels from hydrogen to n-dodecane, pressures from 1 to 20 atm, lean and rich\nmixtures, turbulence intensity ratios from 0.35 to 110, and turbulence length\nratios from 0.5 to 80. The comprehensive comparison shows that the proposed\nturbulent burning velocity model has an overall good agreement over the wide\nrange of conditions, with an average modeling error of 25.3%. Furthermore, the\nmodel prediction includes uncertainty quantification for model parameters and\nchemical kinetics to extend the model's applicability.\n"} {"abstract": " We propose a distributed approach to train deep convolutional generative\nadversarial network (DC-CGAN) models. Our method reduces the imbalance between\ngenerator and discriminator by partitioning the training data according to data\nlabels, and enhances scalability by performing parallel training in which\nmultiple generators are trained concurrently, each focusing on a single data\nlabel. 
Performance is assessed in terms of inception score and\nimage quality on the MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing\na significant improvement in comparison to state-of-the-art techniques for\ntraining DC-CGANs. Weak scaling is attained on all four datasets using up to\n1,000 processes and 2,000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.\n"} {"abstract": " In this paper we show how inter-cellular molecular communication may change\nthe overall levels of photosynthesis in plants. Individual plant cells respond\nto external stimuli, such as illumination levels, to regulate their\nphotosynthetic output. Here, we present a mathematical model which shows that\nby sharing information internally using molecular communication, plants may\nincrease overall photosynthate production. Numerical results show that higher\nmutual information between cells corresponds to an increase in overall\nphotosynthesis by as much as 25 per cent. This suggests that molecular\ncommunication plays a vital role in maximising photosynthesis in plants and\ntherefore suggests new routes to influence plant development in agriculture and\nelsewhere.\n"} {"abstract": " In this paper, the split common null point problem in two Banach spaces is\nconsidered. Then, using the generalized resolvents of maximal monotone\noperators, the generalized projections, and an infinite family of nonexpansive\nmappings, we prove a strong convergence theorem for finding a solution of the\nsplit common null point problem in two Banach spaces in the presence of a\nsequence of errors.\n"} {"abstract": " Spectrally-efficient secure non-orthogonal multiple access (NOMA) has\nrecently attracted substantial research interest for fifth-generation (5G)\ndevelopment. This work explores a crucial security issue in NOMA that stems\nfrom the use of successive interference cancellation in decoding. Considering\nuntrusted users, we design a novel secure NOMA transmission protocol to\nmaximize secrecy fairness among users. A new decoding order for two-user NOMA\nis proposed that provides a positive secrecy rate to both users. With the\nobjective of maximizing secrecy fairness between users under a given power\nbudget constraint, the problem is formulated as minimizing the maximum secrecy\noutage probability (SOP) between users. In particular, closed-form expressions\nof the SOP for both users are derived to analyze secrecy performance. The SOP\nminimization problems are solved using the pseudoconvexity concept, and an\noptimized power allocation (PA) for each user is obtained. Asymptotic\nexpressions of the SOPs, and the optimal PAs minimizing these approximations,\nare obtained to gain deeper insight. Further, a globally optimized power\ncontrol solution from the secrecy fairness perspective is obtained at low\ncomputational complexity, and an asymptotic approximation is derived to gain\nanalytical insight. Numerical results validate the correctness of the analysis\nand present insights on the optimal solutions. Finally, we present insights on\nthe globally optimal PA, by which fairness is ensured and gains of about\n55.12%, 69.30%, and 19.11% are achieved compared to fixed PA and the individual\nusers' optimal PAs.\n"} {"abstract": " Variational inference enables approximate posterior inference of the highly\nover-parameterized neural networks that are popular in modern machine learning.\nUnfortunately, such posteriors are known to exhibit various pathological\nbehaviors. 
We prove that as the number of hidden units in a single-layer\nBayesian neural network tends to infinity, the function-space posterior mean\nunder mean-field variational inference actually converges to zero, completely\nignoring the data. This is in contrast to the true posterior, which converges\nto a Gaussian process. Our work provides insight into the over-regularization\nof the KL divergence in variational inference.\n"} {"abstract": " We present TransitFit, an open-source Python~3 package designed to fit\nexoplanetary transit light-curves for transmission spectroscopy studies\n(Available at https://github.com/joshjchayes/TransitFit and\nhttps://github.com/spearnet/TransitFit, with documentation at\nhttps://transitfit.readthedocs.io/). TransitFit employs nested sampling to\noffer efficient and robust multi-epoch, multi-wavelength fitting of transit\ndata obtained from one or more telescopes. TransitFit allows per-telescope\ndetrending to be performed simultaneously with parameter fitting, including the\nuse of user-supplied detrending algorithms. Host limb darkening can be fitted\neither independently (\"uncoupled\") for each filter or combined (\"coupled\")\nusing prior conditioning from the PHOENIX stellar atmosphere models. For this,\nTransitFit uses the Limb Darkening Toolkit (LDTk) together with filter\nprofiles, including user-supplied filter profiles. We demonstrate the\napplication of TransitFit in three different contexts. First, we model SPEARNET\nbroadband optical data of the low-density hot-Neptune WASP-127~b. The data were\nobtained from a globally-distributed network of 0.5m--2.4m telescopes. We find\nclear improvement in our broadband results using the coupled mode over the\nuncoupled mode, when compared against the higher spectral resolution GTC/OSIRIS\ntransmission spectrum obtained by Chen et al. (2018). Using TransitFit, we fit\n26 transit observations by TESS to recover improved ephemerides of the\nhot-Jupiter WASP-91~b and a transit depth determined to a precision of 170~ppm.\nFinally, we use TransitFit to conduct an investigation into the contested\npresence of TTV signatures in WASP-126~b using 126 transits observed by TESS,\nconcluding that there is no statistically significant evidence for such\nsignatures from observations spanning 31 TESS sectors.\n"} {"abstract": " The ongoing Coronavirus disease 2019 (COVID-19) is a major crisis that has\nsignificantly affected the healthcare sector and global economies, making it a\nmain subject of research across various scientific and technical fields. To\nproperly understand and control this new epidemic, mathematical modelling is\npresented as a very effective tool that can illustrate the mechanisms of its\npropagation. In this regard, the use of compartmental models is the most\nprominent approach adopted in the literature to describe the dynamics of\nCOVID-19. Along the same lines, in this study we aim to generalize and improve\nupon many existing works devoted to analysing the behaviour of this epidemic.\nSpecifically, we propose an SQEAIHR epidemic system for Coronavirus. Our\nconstructed model is enriched by taking into account media intervention and\nvital dynamics. By the use of the next-generation matrix method, the\ntheoretical basic reproductive number $R_0$ is obtained for COVID-19. Based on\nsome nonstandard and generalized analytical techniques, the local and global\nstability of the disease-free equilibrium are proven when $R_0 < 1$. 
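For readers unfamiliar with the next-generation matrix method invoked here, a generic sketch on a toy SEIR-type model (the F and V matrices below are illustrative and are not the paper's SQEAIHR model):

```python
import numpy as np

# Next-generation matrix computation of R0: F holds new-infection rates and V
# holds transfer rates between infected compartments, both linearized at the
# disease-free equilibrium. Toy SEIR values: beta (transmission), sigma
# (E -> I progression), gamma (removal).
beta, sigma, gamma = 0.4, 0.2, 0.1
F = np.array([[0.0, beta],
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],
              [-sigma, gamma]])
K = F @ np.linalg.inv(V)              # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))   # spectral radius
print(R0)                             # beta/gamma = 4.0 for this toy model
```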
Moreover, in the case of $R_0 > 1$, the uniform persistence of the COVID-19\nmodel is also shown. In order to better adapt our epidemic model to reality,\nthe randomness factor is taken into account by considering proportional white\nnoises, which leads to a well-posed stochastic model. Under appropriate\nconditions, interesting asymptotic properties are proved, namely: extinction\nand persistence in the mean. The theoretical results show that the dynamics of\nthe perturbed COVID-19 model are determined by parameters that are closely\nrelated to the magnitude of the stochastic noise. Finally, we present some\nnumerical illustrations to confirm our theoretical results and to show the\nimpact of media intervention and quarantine strategies.\n"} {"abstract": " In this work, we generalize the reaction-diffusion equation in statistical\nphysics, the Schr\\\"odinger equation in quantum mechanics, and the Helmholtz\nequation in paraxial optics into neural partial differential equations (NPDEs),\nwhich can be considered as the fundamental equations in the field of artificial\nintelligence research. We use the finite difference method to discretize the\nNPDEs and find numerical solutions, from which the basic building blocks of\ndeep neural network architectures, including multi-layer perceptrons,\nconvolutional neural networks and recurrent neural networks, are generated.\nLearning strategies, such as adaptive moment estimation, L-BFGS, pseudoinverse\nlearning algorithms and partial-differential-equation-constrained optimization,\nare also presented. We believe the clear physical picture of interpretable deep\nneural networks presented here is significant: it opens the possibility of\napplication to analog computing device design, and paves the road to physical\nartificial intelligence.\n"} {"abstract": " Generative modeling has recently shown great promise in computer vision, but\nit has mostly focused on synthesizing visually realistic images. In this paper,\nmotivated by multi-task learning of shareable feature representations, we\nconsider a novel problem of learning a shared generative model that is useful\nacross various visual perception tasks. Correspondingly, we propose a general\nmulti-task oriented generative modeling (MGM) framework, by coupling a\ndiscriminative multi-task network with a generative network. While it is\nchallenging to synthesize both RGB images and pixel-level annotations in\nmulti-task scenarios, our framework enables us to use synthesized images paired\nwith only weak annotations (i.e., image-level scene labels) to facilitate\nmultiple visual tasks. Experimental evaluation on challenging multi-task\nbenchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework\nimproves the performance of all the tasks by large margins, consistently\noutperforming state-of-the-art multi-task approaches.\n"} {"abstract": " Understanding the physics of strongly correlated electronic systems has been\na central issue in condensed matter physics for decades. In transition metal\noxides, strong correlations characteristic of narrow $d$ bands are at the\norigin of such remarkable properties as the Mott gap opening, enhanced\neffective mass, and anomalous vibronic coupling, to mention a few. SrVO$_3$,\nwith V$^{4+}$ in a $3d^1$ electronic configuration, is the simplest example of\na 3D correlated metallic electronic system. 
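The discretization-to-architecture correspondence claimed in the NPDE abstract above can be made concrete in a few lines: one explicit finite-difference step of a diffusion equation is exactly a convolution with a fixed 3x3 kernel, i.e. the basic operation of a CNN layer (our minimal illustration, using scipy).

```python
import numpy as np
from scipy.signal import convolve2d

# Explicit step of u_t = D * laplacian(u): u_new = u + r * (discrete Laplacian),
# which is a single 3x3 convolution. r <= 0.25 keeps the scheme stable.
D, dt, dx = 1.0, 0.1, 1.0
r = D * dt / dx**2
kernel = np.array([[0, r, 0],
                   [r, 1 - 4 * r, r],
                   [0, r, 0]])        # identity plus dt * discrete Laplacian

u = np.zeros((32, 32)); u[16, 16] = 1.0
for _ in range(50):
    u = convolve2d(u, kernel, mode="same", boundary="wrap")
print(u.sum())   # the kernel sums to 1, so total mass is conserved
```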
Here, we focus on the observation of a (roughly)\nquadratic temperature dependence of the inverse electron mobility of this\nseemingly simple system, which is an intriguing property shared by other\nmetallic oxides. The systematic analysis of electronic transport in SrVO$_3$\nthin films discloses the limitations of the simplest picture of e-e\ncorrelations in a Fermi liquid; instead, we show that the quasi-2D topology of\nthe Fermi surface and a strong electron-phonon coupling, contributing to dress\ncarriers with a phonon cloud, play a pivotal role in the reported electron\nspectroscopic, optical, thermodynamic and transport data. The picture that\nemerges is not restricted to SrVO$_3$ but can be shared with other $3d$ and\n$4d$ metallic oxides.\n"} {"abstract": " Let $ E \\subset \\mathbb{R}^2 $ be a finite set, and let $ f : E \\to\n[0,\\infty) $. In this paper, we address the algorithmic aspects of nonnegative\n$C^2$ interpolation in the plane. Specifically, we provide an efficient\nalgorithm to compute a nonnegative $C^2(\\mathbb{R}^2)$ extension of $ f $ with\nnorm within a universal constant factor of the least possible. We also provide\nan efficient algorithm to approximate the trace norm.\n"} {"abstract": " In the past decades, the revolutionary advances of Machine Learning (ML)\nhave led to a rapid adoption of ML models into software systems of diverse\ntypes. Such Machine Learning Software Applications (MLSAs) are gaining\nimportance in our daily lives. As such, the Quality Assurance (QA) of MLSAs is\nof paramount importance. Several research efforts are dedicated to determining\nthe specific challenges we can face while adopting ML models into software\nsystems. However, we are aware of no research that offered a holistic view of\nthe distribution of those ML quality assurance challenges across the various\nphases of the software development life cycle (SDLC). This paper conducts an\nin-depth literature review of a large volume of research papers that focused on\nthe quality assurance of ML models. We developed a taxonomy of MLSA quality\nassurance issues by mapping the various ML adoption challenges across different\nphases of the SDLC. We provide recommendations and research opportunities to\nimprove SDLC practices based on the taxonomy. This mapping can help prioritize\nquality assurance efforts of MLSAs where the adoption of ML models can be\nconsidered crucial.\n"} {"abstract": " Giant Radio Galaxies (GRGs) are the largest single structures in the\nUniverse. Exhibiting extended radio morphology, their projected sizes range\nfrom 0.7 Mpc up to 4.9 Mpc. LOFAR has opened a new window on the discovery and\ninvestigation of GRGs and, despite the hundreds that are today known, their\nmain growth catalyst is still debated. One natural explanation for the\nexceptional size of GRGs is their old age. In this context, hard X-ray selected\nGRGs show evidence of restarting activity, with the giant radio lobes being\nmostly disconnected from the nuclear source, if any. In this paper, we present\nthe serendipitous discovery of a distant ($z=0.629$), medium X-ray selected GRG\nin the Bo\\\"otes field. High-quality, deep Chandra and LOFAR data allow a robust\nstudy of the connection between the nucleus and the lobes, at a larger redshift\nso far inaccessible to coded-mask hard X-ray instruments. 
The radio morphology\nof the GRG presented in this work does not show evidence for restarted\nactivity, and the nuclear radio core spectrum does not appear to be GPS-like.\nOn the other hand, the X-ray properties of the new GRG are perfectly consistent\nwith those previously studied with Swift/BAT and INTEGRAL at lower redshift.\nIn particular, the bolometric luminosity measured from the X-ray spectrum is a\nfactor of six larger than the one derived from the radio lobes, although the\nlarge uncertainties make them formally consistent at $1\\sigma$. Finally, the\nmoderately dense environment around the GRG, traced by the spatial distribution\nof galaxies, supports recent findings that the growth of GRGs is not primarily\ndriven by underdense environments.\n"} {"abstract": " By employing a pseudo-orthonormal coordinate-free approach, the Dirac\nequation for particles in the Kerr--Newman spacetime is separated into its\nradial and angular parts. In the massless case, to which special attention is\ngiven, the general Heun-type equations turn into their confluent form. We show\nhow one recovers some results previously obtained in the literature by other\nmeans.\n"} {"abstract": " Although many techniques have been applied to matrix factorization (MF), they\nmay not fully exploit the feature structure. In this paper, we incorporate the\ngrouping effect into MF and propose a novel method called Robust Matrix\nFactorization with Grouping effect (GRMF). The grouping effect is a\ngeneralization of the sparsity effect, which conducts denoising by clustering\nsimilar values around multiple centers instead of just around 0. Compared with\nexisting algorithms, the proposed GRMF can automatically learn the grouping\nstructure and sparsity in MF without prior knowledge, by introducing a\nnaturally adjustable non-convex regularization to achieve simultaneous sparsity\nand grouping effect. Specifically, GRMF uses an efficient alternating\nminimization framework to perform MF, in which the original non-convex problem\nis first converted into a convex problem through Difference-of-Convex (DC)\nprogramming, and then solved by the Alternating Direction Method of Multipliers\n(ADMM). In addition, GRMF can be easily extended to the Non-negative Matrix\nFactorization (NMF) settings. Extensive experiments have been conducted using\nreal-world data sets with outliers and contaminated noise, where the\nexperimental results show that GRMF has improved performance and robustness\ncompared to five benchmark algorithms.\n"} {"abstract": " The increasing market penetration of electric vehicles (EVs) may pose\nsignificant electricity demand on power systems. This electricity demand is\naffected by the inherent uncertainties of EVs' travel behavior, which make\nforecasting the daily charging demand (CD) very challenging. In this project,\nwe use the National Household Travel Survey (NHTS) data to form sequences of\ntrips, and develop machine learning models to predict the parameters of the\nnext trip of the drivers, including trip start time, end time, and distance.\nThese parameters are later used to model the temporal charging behavior of EVs. 
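An illustrative stand-in for the trip-parameter models described here, with synthetic data in place of the NHTS trip sequences; the feature and target definitions below are assumptions made for the sketch, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Toy version of next-trip prediction: regress the next trip's distance on
# features of the current trip (start hour, end hour, distance). Synthetic
# data stands in for NHTS trip sequences.
rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([rng.uniform(0, 24, n),        # current trip start hour
                     rng.uniform(0, 24, n),        # current trip end hour
                     rng.lognormal(2.0, 0.8, n)])  # current trip distance
y = 0.6 * X[:, 2] + rng.normal(0, 2, n)            # next-trip distance (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(model.score(X_te, y_te))                     # R^2 on held-out trips
```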
The\nsimulation results show that the proposed modeling can effectively estimate the\ndaily CD pattern based on the travel behavior of EVs, and that simple machine\nlearning techniques can forecast the travel parameters with acceptable\naccuracy.\n"} {"abstract": " We argue that neutrino oscillations at JUNO offer a unique opportunity to\nstudy Sorkin's triple-path interference, which is predicted to be zero in\ncanonical quantum mechanics by virtue of the Born rule. In particular, we\ncompute the expected bounds on triple-path interference at JUNO and demonstrate\nthat they are comparable to those already available from electromagnetic\nprobes. Furthermore, the neutrino probe of the Born rule is much more direct\ndue to an intrinsic independence from any boundary conditions, whereas such\ndependence on boundary conditions is always present in the case of\nelectromagnetic probes. Thus, neutrino oscillations present an ideal probe of\nthis aspect of the foundations of quantum mechanics.\n"} {"abstract": " This paper is concerned with the asymptotic analysis of sojourn times of\nrandom fields with continuous sample paths. Under a very general framework we\nshow that there is an interesting relationship between tail asymptotics of\nsojourn times and that of the supremum. Moreover, we establish the uniform\ndouble-sum method to derive the tail asymptotics of sojourn times. In the\nliterature, based on the pioneering research of S. Berman, sojourn times have\nbeen utilised to derive the tail asymptotics of the supremum of Gaussian\nprocesses. In this paper we show that the opposite direction is even more\nfruitful, namely, knowing the asymptotics of the supremum of random processes\nand fields (in particular Gaussian), it is possible to establish the\nasymptotics of their sojourn times. We illustrate our findings considering i)\ntwo-dimensional Gaussian random fields, ii) chi-processes generated by\nstationary Gaussian processes, and iii) stationary Gaussian queueing processes.\n"} {"abstract": " We consider the setting where the nodes of an undirected, connected network\ncollaborate to solve a shared objective modeled as the sum of smooth functions.\nWe assume that each summand is privately known by a unique node. NEAR-DGD is a\ndistributed first-order method which permits adjusting the amount of\ncommunication between nodes relative to the amount of computation performed\nlocally in order to balance convergence accuracy and total application cost. In\nthis work, we generalize the convergence properties of a variant of NEAR-DGD\nfrom the strongly convex to the nonconvex case. Under mild assumptions, we show\nconvergence to minimizers of a custom Lyapunov function. Moreover, we\ndemonstrate that the gap between those minimizers and the second-order\nstationary solutions of the original problem can become arbitrarily small\ndepending on the choice of algorithm parameters. Finally, we accompany our\ntheoretical analysis with a numerical experiment to evaluate the empirical\nperformance of NEAR-DGD in the nonconvex setting.\n"} {"abstract": " We calculate models of stellar evolution for very massive stars and include\nthe effects of modified gravity to investigate the influence on the physical\nproperties of blue supergiant stars and their use as extragalactic distance\nindicators. With shielding and fifth-force parameters in a similar range as in\nprevious studies of Cepheid and tip of the red giant branch (TRGB) stars, we\nfind clear effects on stellar luminosity and flux-weighted gravity. 
The\nrelationship between flux-weighted gravity, g_F = g/Teff^4, and bolometric\nmagnitude M_bol (FGLR), which has been used successfully for accurate distance\ndeterminations, is systematically affected. While the stellar evolution FGLRs\nshow a systematic offset from the observed relation, we can use the\ndifferential shifts between models with Newtonian and modified gravity to\nestimate the influence on FGLR distance determinations. Modified gravity leads\nto a distance increase of 0.05 to 0.15 magnitudes in distance modulus. These\nchanges are comparable to the ones found for Cepheid stars. We compare observed\nFGLR and TRGB distances of nine galaxies to constrain the free parameters of\nmodified gravity. Not accounting for systematic differences between TRGB and\nFGLR distances, shielding parameters of 5*10^-7 and 10^-6 and fifth force\nparameters of 1/3 and 1 can be ruled out with about 90% confidence. Allowing\nfor potential systematic offsets between TRGB and FGLR distances, no\ndetermination is possible for a shielding parameter of 10^-6. For 5*10^-7, a\nfifth force parameter of 1 can be ruled out at 92% confidence, but 1/3 is\ndisfavoured only at the 60% level.\n"} {"abstract": " The superior performance of CNN on medical image analysis heavily depends on\nthe annotation quality, such as the number of labeled images, the source of the\nimages, and the expert experience. The annotation requires great expertise and\nlabour. To deal with the high inter-rater variability, the study of imperfect\nlabels has great significance in medical image segmentation tasks. In this\npaper, we present a novel cascaded robust learning framework for chest X-ray\nsegmentation with imperfect annotation. Our model consists of three independent\nnetworks, which can effectively learn useful information from the peer\nnetworks. The framework includes two stages. In the first stage, we select the\ncleanly annotated samples via a model committee setting, and the networks are\ntrained by minimizing a segmentation loss using the selected clean samples. In\nthe second stage, we design a joint optimization framework with label\ncorrection to gradually correct the wrong annotations and improve the network\nperformance. We conduct experiments on the public chest X-ray image datasets\ncollected by Shenzhen Hospital. The results show that our method achieves a\nsignificant improvement in accuracy on segmentation tasks compared to previous\nmethods.\n"} {"abstract": " The huge amount of data produced in the fifth-generation (5G) networks not\nonly brings new challenges to the reliability and efficiency of mobile devices\nbut also drives rapid development of new storage techniques. With the benefits\nof fast access speed and high reliability, NAND flash memory has become a\npromising storage solution for the 5G networks. In this paper, we investigate a\nprotograph-coded bit-interleaved coded modulation with iterative detection and\ndecoding (BICM-ID) utilizing irregular mapping (IM) in multi-level-cell (MLC)\nNAND flash-memory systems. First, we propose an enhanced protograph-based\nextrinsic information transfer (EPEXIT) algorithm to facilitate the analysis of\nprotograph codes in the IM-BICM-ID systems. With the use of the EPEXIT\nalgorithm, a simple design method is conceived for the construction of a family\nof high-rate protograph codes, called irregular-mapped\naccumulate-repeat-accumulate (IMARA) codes, which possess both excellent\ndecoding thresholds and the linear-minimum-distance-growth property.
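To make the FGLR shift above concrete, here is a small worked example (the convention g_F = g/(Teff/10^4 K)^4 follows the FGLR literature; the stellar parameters are hypothetical):

```python
import numpy as np

# Flux-weighted gravity for an illustrative blue supergiant.
def flux_weighted_gravity(log_g_cgs, teff_k):
    return 10.0**log_g_cgs / (teff_k / 1.0e4)**4

print("log g_F =", np.log10(flux_weighted_gravity(1.7, 12000.0)))

# A 0.05-0.15 mag offset in distance modulus mu maps to a relative
# distance change of 10^(delta_mu / 5) - 1, since d ~ 10^(mu/5).
for d_mu in (0.05, 0.10, 0.15):
    print(f"delta mu = {d_mu:.2f} mag -> distance larger by "
          f"{100.0 * (10.0**(d_mu / 5.0) - 1.0):.1f}%")
```

So the quoted 0.05 to 0.15 mag shift corresponds to distances larger by roughly 2-7%.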
Furthermore, motivated by the\nvoltage-region iterative gain characteristics of IM-BICM-ID systems, a novel\nread-voltage optimization scheme is developed to acquire accurate read-voltage\nlevels, thus minimizing the decoding thresholds of protograph codes.\nTheoretical analyses and error-rate simulations indicate that the proposed\nIMARA-aided IM-BICM-ID scheme and the proposed read-voltage optimization scheme\nremarkably improve the convergence and decoding performance of flash-memory\nsystems. Thus, the proposed protograph-coded IM-BICM-ID flash-memory systems\ncan be viewed as a reliable and efficient storage solution for the\nnew-generation mobile networks with massive data-storage requirements.\n"} {"abstract": " We propose and experimentally demonstrate a novel interference fading\nsuppression method for phase-sensitive optical time domain reflectometry\n(Phi-OTDR) using space-division multiplexed (SDM) pulse probes in few-mode\nfiber. The SDM probes consist of multiple different modes, and three spatial\nmodes (LP01, LP11a and LP11b) are used in this work for proof of concept.\nFirstly, the Rayleigh backscattering light of different modes is experimentally\ncharacterized, and it turns out that the waveforms of Phi-OTDR traces of\ndistinct modes are all different from each other. Thanks to the spatial\ndifference of fading positions of distinct modes, multiple probes from\nspatially multiplexed modes can be used to suppress the interference fading in\nPhi-OTDR. Then, the performance of Phi-OTDR systems using a single probe and\nmultiple probes is evaluated and compared. Specifically, statistical analysis\nshows that both fading probabilities over fiber length and time are reduced\nsignificantly by using multiple SDM probes, which verifies the significant\nperformance improvement in fading suppression. The proposed interference\nfading suppression method does not require complicated frequency or phase\nmodulation, and thus has the advantages of simplicity, good effectiveness and\nhigh reliability.\n"} {"abstract": " Nonuniform fast Fourier transforms dominate the computational cost in many\napplications including image reconstruction and signal processing. We thus\npresent a general-purpose GPU-based CUDA library for type 1 (nonuniform to\nuniform) and type 2 (uniform to nonuniform) transforms in dimensions 2 and 3,\nin single or double precision. It achieves high performance for a given\nuser-requested accuracy, regardless of the distribution of nonuniform points,\nvia cache-aware point reordering and load-balanced blocked spreading in shared\nmemory. At low accuracies, this gives on-GPU throughputs around $10^9$\nnonuniform points per second, and (even including host-device transfer) is\ntypically 4-10$\\times$ faster than the latest parallel CPU code FINUFFT (at 28\nthreads). It is competitive with two established GPU codes, being up to\n90$\\times$ faster at high accuracy and/or type 1 clustered point distributions.\nFinally we demonstrate a 5-12$\\times$ speedup versus CPU in an X-ray\ndiffraction 3D iterative reconstruction task at $10^{-12}$ accuracy, observing\nexcellent multi-GPU weak scaling up to one rank per GPU.\n"} {"abstract": " In this paper, we discuss the properties of the generating functions of spin\nHurwitz numbers. In particular, for spin Hurwitz numbers with arbitrary\nramification profiles, we construct the weighted sums which are given by\nOrlov's hypergeometric solutions of the 2-component BKP hierarchy.
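For reference, the type 1 transform computed by such GPU libraries reduces, in one dimension, to f_k = sum_j c_j exp(i k x_j). A naive direct evaluation is useful for validating a fast implementation on small problems (this is only a reference sketch, not the library's spreading-plus-FFT algorithm):

```python
import numpy as np

def nufft_type1_direct(x, c, n_modes):
    """O(M*N) direct type-1 NUFFT: f_k = sum_j c_j exp(i k x_j)."""
    k = np.arange(-(n_modes // 2), (n_modes - 1) // 2 + 1)
    return np.exp(1j * np.outer(k, x)) @ c

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, 50)                  # nonuniform points
c = rng.normal(size=50) + 1j * rng.normal(size=50)  # source strengths
print(nufft_type1_direct(x, c, 16).shape)           # (16,)
```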
We derive the closed\nalgebraic formulas for the correlation functions associated with these\ntau-functions, and under reasonable analytical assumptions we prove the loop\nequations (the blobbed topological recursion). Finally, we prove a version of\ntopological recursion for the spin Hurwitz numbers with the spin completed\ncycles (a generalized version of the Giacchetto--Kramer--Lewa\'nski\nconjecture).\n"} {"abstract": " We study inverse problems for the nonlinear wave equation $\\square_g u +\nw(x,u, \\nabla_g u) = 0$ in a Lorentzian manifold $(M,g)$ with boundary, where\n$\\nabla_g u$ denotes the gradient and $w(x,u, \\xi)$ is smooth and quadratic in\n$\\xi$. Under appropriate assumptions, we show that the conformal class of the\nLorentzian metric $g$ can be recovered up to diffeomorphisms, from the\nknowledge of the Neumann-to-Dirichlet map. With some additional conditions, we\ncan recover the metric itself up to diffeomorphisms. Moreover, we can recover\nthe second and third quadratic forms in the Taylor expansion of $w(x,u, \\xi)$\nwith respect to $u$ up to null forms.\n"} {"abstract": " This article discusses the physical and kinematical characteristics of\nplanetary nebulae accompanying PG1159 central stars. The study is based on the\nparallax and proper motion measurements recently offered by the Gaia space\nmission. Two approaches were used to investigate the kinematical properties of\nthe sample. The results revealed that most of the studied nebulae arise from\nprogenitor stars in the mass range $0.9-1.75$\\,M$_{\\odot}$. Furthermore, they\ntend to live within the Galactic thick disk and to move with an average\npeculiar velocity of $61.7\\pm19.2$\\,km\\,s$^{-1}$ at a mean vertical height of\n$469\\pm79$ pc. The locations of the PG1159 stars on the H-R diagram indicate\nthat they have an average final stellar mass and evolutionary age of\n$0.58\\pm0.08$\\,M$_{\\odot}$ and $25.5\\pm5.3 \\times 10^3$ yr, respectively. We\nfound a good agreement between the mean evolutionary age of the PG1159 stars\nand the mean dynamical age of their companion planetary nebulae ($28.0\\pm6.4\n\\times 10^3$ yr).\n"} {"abstract": " The pentakis dodecahedron, the dual of the truncated icosahedron, consists of\n60 edge-sharing triangles. It has 20 six- and 12 five-fold coordinated\nvertices, with the former forming a dodecahedron, and each of the latter\nconnected to the vertices of one of the 12 pentagons of the dodecahedron. When\nspins mounted on the vertices of the pentakis dodecahedron interact according\nto the nearest-neighbor antiferromagnetic Heisenberg model, the two different\nvertex types necessitate the introduction of two exchange constants. As the\nrelative strength of the two constants is varied the molecule interpolates\nbetween the dodecahedron and a molecule consisting only of quadrangles. The\ncompetition between the two exchange constants, frustration, and an external\nmagnetic field results in a multitude of ground-state magnetization and\nsusceptibility discontinuities. At the classical level the maximum is ten\nmagnetization discontinuities and one susceptibility discontinuity, found when\nthe interaction of the 12 five-fold vertices with the dodecahedron spins is\napproximately one-half as strong as the interaction among the dodecahedron\nspins. When the two interactions are approximately equal in strength the number\nof discontinuities is also maximized, with three magnetization and eight\nsusceptibility discontinuities.
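The kinematic quantities quoted in the PG1159 study above follow from the standard parallax and proper-motion relations; a minimal sketch (the input values below are hypothetical, for illustration only):

```python
# Distance from parallax and tangential velocity from proper motion:
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
def distance_pc(parallax_mas):
    return 1000.0 / parallax_mas

def v_tangential_kms(pm_mas_yr, parallax_mas):
    return 4.74 * (pm_mas_yr / 1000.0) * distance_pc(parallax_mas)

plx, pm = 0.8, 10.0   # parallax [mas], total proper motion [mas/yr]
print(distance_pc(plx), "pc")             # 1250.0 pc
print(v_tangential_kms(pm, plx), "km/s")  # ~59.2 km/s
```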
At the full quantum\nlimit, where the magnitude of the spins equals 1/2, there can be up to three\nground-state magnetization jumps that have the total z spin component changing\nby \Delta S^z=2, even though quantum fluctuations rarely allow discontinuities\nof the magnetization. The full quantum case also supports a \Delta S^z=3\ndiscontinuity. Frustration also results in nonmagnetic states inside the\nsinglet-triplet gap. These results make the pentakis dodecahedron the molecule\nwith the most discontinuous magnetic response from the quantum to the classical\nlevel.\n"} {"abstract": " Dynamical scaling is an asymptotic property typical for the dynamics of\nfirst-order phase transitions in physical systems and related to\nself-similarity. Based on the integral representation for the marginal\nprobabilities of a fractional non-homogeneous Poisson process introduced by\nLeonenko et al. (2017) and generalising the standard fractional Poisson\nprocess, we prove the dynamical scaling under fairly mild conditions. Our\nresult also includes the special case of the standard fractional Poisson\nprocess.\n"} {"abstract": " The performance of superconducting radio-frequency (SRF) cavities depends on\nthe niobium surface condition. Recently, various heat-treatment methods have\nbeen investigated to achieve unprecedentedly high quality factors (Q) and high\naccelerating fields (E). We report the influence of a new baking process called\nfurnace baking on the Q-E behavior of 1.3 GHz SRF cavities. Furnace baking is\nperformed as the final step of the cavity surface treatment; the cavities are\nheated in a vacuum furnace for 3 h, followed by high-pressure rinsing and\nradio-frequency measurement. This method is simpler and potentially more\nreliable than previously reported heat-treatment methods, and it is therefore\neasier to apply to SRF cavities. We find that the quality factor is increased\nafter furnace baking at temperatures ranging from 300C to 400C, while a strong\ndecrease of the quality factor at high accelerating fields is observed after\nfurnace baking at temperatures ranging from 600C to 800C. We find significant\ndifferences in the surface resistance for various processing temperatures.\n"} {"abstract": " Sound Event Detection and Audio Classification tasks are traditionally\naddressed through time-frequency representations of audio signals such as\nspectrograms. However, the emergence of deep neural networks as efficient\nfeature extractors has enabled the direct use of audio signals for\nclassification purposes. In this paper, we attempt to recognize musical\ninstruments in polyphonic audio by only feeding their raw waveforms into deep\nlearning models. Various recurrent and convolutional architectures\nincorporating residual connections are examined and parameterized in order to\nbuild end-to-end classifiers with low computational cost and only minimal\npreprocessing. We obtain competitive classification scores and useful\ninstrument-wise insight through the IRMAS test set, utilizing a parallel\nCNN-BiGRU model with multiple residual connections, while maintaining a\nsignificantly reduced number of trainable parameters.\n"} {"abstract": " We study the topological properties of a spin-orbit coupled Hofstadter model\non the Kagome lattice. The model is time-reversal invariant and realizes a\n$\\mathbb{Z}_2$ topological insulator as a result of artificial gauge fields.
We\ndevelop topological arguments to describe this system, which shows three\ninequivalent sites in a unit cell and a flat band in its energy spectrum in\naddition to the topological dispersive energy bands. We show the stability of\nthe topological phase towards spin-flip processes and different types of\non-site potentials. In particular, we also address the situation where on-site\nenergies may differ inside a unit cell. Moreover, a staggered potential on the\nlattice may realize topological phases for the half-filled situation. Another\ninteresting result is the occurrence of a topological phase for large on-site\nenergies. To describe the topological properties of the system we use a\nnumerical approach based on twisted boundary conditions, and we develop a\nmathematical approach related to smooth fields.\n"} {"abstract": " Cross-flow turbines convert kinetic power in wind or water currents to\nmechanical power. Unlike axial-flow turbines, the influence of geometric\nparameters on turbine performance is not well understood, in part because there\nare neither generalized analytical formulations nor inexpensive, accurate\nnumerical models that describe their fluid dynamics. Here, we experimentally\ninvestigate the effect of aspect ratio - the ratio of the blade span to rotor\ndiameter - on the performance of a straight-bladed cross-flow turbine in a\nwater channel. To isolate the effect of aspect ratio, all other non-dimensional\nparameters are held constant, including the relative confinement, Froude\nnumber, and Reynolds number. The coefficient of performance is found to be\ninvariant for the range of aspect ratios tested (0.95 - 1.63), which we ascribe\nto minimal blade-support interactions for this turbine design. Finally, a\nsubset of experiments is repeated without controlling for the Froude number and\nthe coefficient of performance is found to increase, a consequence of Froude\nnumber variation that could mistakenly be ascribed to aspect ratio. This\nhighlights the importance of rigorous experimental design when exploring the\neffect of geometric parameters on cross-flow turbine performance.\n"} {"abstract": " After more than a decade of intense focus on automated vehicles, we are still\nfacing huge challenges for the vision of fully autonomous driving to become a\nreality. The same \"disillusionment\" is true in many other domains, in which\nautonomous Cyber-Physical Systems (CPS) could considerably help to overcome\nsocietal challenges and be highly beneficial to society and individuals. Taking\nthe automotive domain, i.e. highly automated vehicles (HAV), as an example,\nthis paper sets out to summarize the major challenges that still have to be\novercome to achieve safe, secure, reliable and trustworthy highly automated or\nautonomous CPS. We restrict ourselves to technical challenges, acknowledging\nthe importance of (legal) regulations, certification, standardization, ethics,\nand societal acceptance, to name but a few, without delving deeper into them as\nthis is beyond the scope of this paper. Four challenges have been identified as\nbeing the main obstacles to realizing HAV: realization of continuous,\npost-deployment system improvement, handling of uncertainties and incomplete\ninformation, verification of HAV with machine learning components, and\nprediction. Each of these challenges is described in detail, including\nsub-challenges and, where appropriate, possible approaches to overcome them.
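The non-dimensional quantities controlled in the cross-flow turbine experiments above have standard definitions; a short sketch with hypothetical numbers (these are illustrative values, not the paper's measurements):

```python
import numpy as np

rho, g = 1000.0, 9.81          # water density [kg/m^3], gravity [m/s^2]
U, h = 1.0, 0.6                # inflow speed [m/s], channel depth [m]
span, diameter = 0.24, 0.16    # blade span and rotor diameter [m]
P_mech = 5.0                   # mechanical power at the rotor [W]

aspect_ratio = span / diameter                         # the parameter varied
froude = U / np.sqrt(g * h)                            # depth-based Froude number
C_P = P_mech / (0.5 * rho * U**3 * (span * diameter))  # coefficient of performance
print(f"H/D = {aspect_ratio:.2f}, Fr = {froude:.2f}, C_P = {C_P:.2f}")
```

Holding Fr fixed while varying the aspect ratio is what isolates the geometric effect; letting Fr drift instead changes C_P, which is exactly the confound the authors highlight.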
By working together in a common effort between industry and\nacademia and focusing on these challenges, the authors hope to contribute to\novercoming the \"disillusionment\" and to realizing HAV.\n"} {"abstract": " We present an integrated design to precisely measure optical frequency using\nweak value amplification with a multi-mode interferometer. The technique\ninvolves introducing a weak perturbation to the system and then post-selecting\nthe data in such a way that the signal is amplified without amplifying the\ntechnical noise, as has previously been demonstrated in a free-space setup. We\ndemonstrate the advantages of a Bragg grating with two band gaps for obtaining\nsimultaneous, stable high transmission and high dispersion. We numerically\nmodel the interferometer in order to demonstrate the amplification effect. The\ndevice is shown to have advantages over both the free-space implementation and\nother methods of measuring optical frequency on a chip, such as an integrated\nMach-Zehnder interferometer.\n"} {"abstract": " In this paper, we introduce a single acceptance sampling inspection plan\n(SASIP) for the transmuted Rayleigh (TR) distribution when the lifetime\nexperiment is truncated at a prefixed time. We establish the proposed plan for\ndifferent choices of the confidence level, acceptance number and ratio of true\nmean lifetime to specified mean lifetime. The minimum sample size necessary to\nensure a certain specified lifetime is obtained. Operating characteristic (OC)\nvalues and the producer's risk of the proposed plan are presented. Two\nreal-life examples are presented to show the applicability of the proposed\nSASIP.\n"} {"abstract": " In this paper, we study the periodicity structure of finite field linear\nrecurring sequences whose period is not necessarily maximal and determine\nnecessary and sufficient conditions for the characteristic polynomial~\(f\) to\nhave exactly two periods in the sense that the period of any sequence generated\nby~\(f\) is either one or a unique integer greater than one.\n"} {"abstract": " We present a comprehensive analysis of all XMM-Newton spectra of OJ 287\nspanning 15 years of X-ray spectroscopy of this bright blazar. We also report\nthe latest results from our dedicated Swift UVOT and XRT monitoring of OJ 287\nwhich started in 2015, along with all earlier public Swift data since 2005.\nDuring this time interval, OJ 287 was caught in extreme minima and outburst\nstates. Its X-ray spectrum is highly variable and encompasses all states seen\nin blazars from very flat to exceptionally steep. The spectrum can be\ndecomposed into three spectral components: Inverse Compton (IC) emission\ndominant at low states, super-soft synchrotron emission which becomes\nincreasingly dominant as OJ 287 brightens, and an intermediately-soft\n(Gamma_x=2.2) additional component seen at outburst. This last component\nextends beyond 10 keV and plausibly represents either a second synchrotron/IC\ncomponent and/or a temporary disk corona of the primary supermassive black hole\n(SMBH). Our 2018 XMM-Newton observation, quasi-simultaneous with the Event\nHorizon Telescope observation of OJ 287, is well described by a two-component\nmodel with a hard IC component of Gamma_x=1.5 and a soft synchrotron component.\nLow-state spectra limit any long-lived accretion disk/corona contribution in\nX-rays to a very low value of L_x/L_Edd < 5.6 times 10^(-4) (for M_(BH,\nprimary) = 1.8 times 10^10 M_sun).
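The operating-characteristic values mentioned in the SASIP abstract are, for a single sampling plan, binomial acceptance probabilities; a minimal sketch (the plan parameters below are hypothetical):

```python
from math import comb

def oc_value(n, c, p):
    """P(accept lot) = sum_{i=0}^{c} C(n,i) p^i (1-p)^(n-i),
    where p is the chance an item fails the truncated life test."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(c + 1))

# e.g., sample size n = 20, acceptance number c = 2
for p in (0.05, 0.10, 0.25):
    print(f"p = {p:.2f} -> P_accept = {oc_value(20, 2, p):.3f}")
```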
Some implications for the binary SMBH model\nof OJ 287 are discussed.\n"} {"abstract": " Several recent applications of optimal transport (OT) theory to machine\nlearning have relied on regularization, notably entropy and the Sinkhorn\nalgorithm. Because matrix-vector products are pervasive in the Sinkhorn\nalgorithm, several works have proposed to \\textit{approximate} kernel matrices\nappearing in its iterations using low-rank factors. Another route lies instead\nin imposing low-rank constraints on the feasible set of couplings considered in\nOT problems, with no approximations on cost nor kernel matrices. This route was\nfirst explored by Forrow et al., 2018, who proposed an algorithm tailored for\nthe squared Euclidean ground cost, using a proxy objective that can be solved\nthrough the machinery of regularized 2-Wasserstein barycenters. Building on\nthis, we introduce in this work a generic approach that aims at solving, in\nfull generality, the OT problem under low-rank constraints with arbitrary\ncosts. Our algorithm relies on an explicit factorization of low-rank couplings\nas a product of \\textit{sub-coupling} factors linked by a common marginal;\nsimilar to an NMF approach, we alternately update these factors. We prove\nthe non-asymptotic stationary convergence of this algorithm and illustrate its\nefficiency on benchmark experiments.\n"} {"abstract": " The hierarchy of nonlocality and entanglement in multipartite systems is one\nof the fundamental problems in quantum physics. Existing studies on this topic\nhave so far been limited to entanglement classification according to the\nnumber of particles involved. Equivalence under stochastic local operations\nand classical communication provides a more detailed classification, e.g.\ngenuine three-qubit entanglement being divided into W and GHZ classes. We\nconstruct two families of local models for the three-qubit\nGreenberger-Horne-Zeilinger (GHZ)-symmetric states, whose entanglement classes\nhave a complete description. The key technique for constructing the local\nmodels in this work is GHZ symmetrization of tripartite extensions of the\noptimal local-hidden-state models for Bell diagonal states. Our models show\nthat entanglement and nonlocality are inequivalent for all the entanglement\nclasses (biseparable, W, and GHZ) in three-qubit systems.\n"} {"abstract": " We show that both the classical as well as the quantum definitions of the\nFisher information faithfully identify resourceful quantum states in general\nquantum resource theories, in the sense that they can always distinguish\nbetween states with and without a given resource. This shows that all quantum\nresources confer an advantage in metrology, and establishes the Fisher\ninformation as a universal tool to probe the resourcefulness of quantum states.\nWe provide bounds on the extent of this advantage, as well as a simple\ncriterion to test whether different resources are useful for the estimation of\nunitarily encoded parameters. Finally, we extend the results to show that the\nFisher information is also able to identify the dynamical resourcefulness of\nquantum operations.\n"} {"abstract": " The relation of period spacing ($\\Delta P$) versus period ($P$) of dipole\nprograde g modes is known to be useful to measure rotation rates in the g-mode\ncavity of rapidly rotating $\\gamma$ Dor and slowly pulsating B (SPB) stars.
In\na rapidly rotating star, an inertial mode in the convective core can resonantly\ncouple with g modes propagating in the surrounding radiative region. The\nresonant coupling causes a dip in the $P$-$\Delta P$ relation, distinct from\nthe modulations due to the chemical composition gradient. Such a resonance dip\nin $\Delta P$ of prograde dipole g modes appears around a frequency\ncorresponding to a spin parameter $2f_{\rm rot}{\rm(cc)}/\nu_{\rm co-rot} \sim\n8-11$ with $f_{\rm rot}$(cc) being the rotation frequency of the convective\ncore and $\nu_{\rm co-rot}$ the pulsation frequency in the co-rotating frame.\nThe spin parameter at the resonance depends somewhat on the extent of core\novershooting, central hydrogen abundance, and other stellar parameters. We can\nfit the period at the observed dip with the prediction from prograde dipole g\nmodes of a main-sequence model, allowing the convective core to rotate\ndifferentially from the surrounding g-mode cavity. We have performed such\nfittings for 16 selected $\gamma$ Dor stars having well-defined dips, and found\nthat the majority of the $\gamma$ Dor stars we studied rotate nearly uniformly,\nwhile convective cores tend to rotate slightly faster than the g-mode cavity in\nless evolved stars.\n"} {"abstract": " Magnetic reconnection is explored on the Terrestrial Reconnection Experiment\n(TREX) for asymmetric inflow conditions and in a configuration where the\nabsolute rate of reconnection is set by an external drive. Magnetic pileup\nenhances the upstream magnetic field of the high-density inflow, leading to an\nincreased upstream Alfven speed and helping to lower the normalized\nreconnection rate to values expected from theoretical considerations. In\naddition, a shock interface between the far upstream supersonic plasma inflow\nand the region of magnetic flux pileup is observed, important to the overall\nforce balance of the system, thereby demonstrating the role of shock formation\nfor configurations including a supersonically driven inflow. Despite the\nspecialised geometry where a strong reconnection drive is applied from only one\nside of the reconnection layer, previous numerical and theoretical results\nremain robust and are shown to accurately predict the normalized rate of\nreconnection for the range of system sizes considered. This experimental rate\nof reconnection is dependent on system size, reaching values as high as 0.8 at\nthe smallest normalized system size applied.\n"} {"abstract": " In the de Sitter gauge theory (DGT), the fundamental variables are the de\nSitter (dS) connection and the gravitational Higgs/Goldstone field $\\xi^A$.\nPreviously, a model for DGT was analyzed, which generalizes the\nMacDowell--Mansouri gravity to have a variable cosmological constant\n$\\Lambda=3/l^2$, where $l$ is related to $\\xi^A$ by $\\xi^A\\xi_A=l^2$. It was\nshown that the model sourced by a perfect fluid does not support a radiation\nepoch and the accelerated expansion of the parity invariant universe. In this\nwork, I consider a similar model, namely, the Stelle--West gravity, and couple\nit to a modified perfect fluid, such that the total Lagrangian 4-form is\npolynomial in the gravitational variables. The Lagrangian of the modified fluid\nhas a nontrivial variational derivative with respect to $l$, and as a result,\nthe problems encountered in the previous work no longer appear.
Moreover, to\nexplore the elegance of the general theory, as well as to write down the basic\nframework, I perform the Lagrange--Noether analysis for DGT sourced by a matter\nfield, yielding the field equations and the identities with respect to the\nsymmetries of the system. The resulting formulae are dS covariant and do not\nrely on the existence of the metric field.\n"} {"abstract": " Privacy and energy are primary concerns for sensor devices that offload\ncompute to a potentially untrusted edge server or cloud. Homomorphic Encryption\n(HE) enables offload processing of encrypted data. HE offload processing\nretains data privacy, but is limited by the need for frequent communication\nbetween the client device and the offload server. Existing client-aided\nencrypted computing systems are optimized for performance on the offload\nserver, failing to sufficiently address client costs, and precluding HE offload\nfor low-resource (e.g., IoT) devices. We introduce Client-aided HE for Opaque\nCompute Offloading (CHOCO), a client-optimized system for encrypted offload\nprocessing. CHOCO introduces rotational redundancy, an algorithmic optimization\nto minimize computing and communication costs. We design Client-Aided HE for\nOpaque Compute Offloading Through Accelerated Cryptographic Operations\n(CHOCO-TACO), a comprehensive architectural accelerator for client-side\ncryptographic operations that eliminates most of their time and energy costs.\nOur evaluation shows that CHOCO makes client-aided HE offloading feasible for\nresource-constrained clients. Compared to existing encrypted computing\nsolutions, CHOCO reduces communication cost by up to 2948x. With hardware\nsupport, client-side encryption/decryption is faster by 1094x and uses 648x\nless energy. In our end-to-end implementation of a large-scale DNN (VGG16),\nCHOCO uses 37% less energy than local (unencrypted) computation.\n"} {"abstract": " Primordial perturbations in our universe are believed to have a quantum\norigin, and can be described by the wavefunction of the universe (or\nequivalently, cosmological correlators). It follows that these observables must\ncarry the imprint of the founding principle of quantum mechanics: unitary time\nevolution. Indeed, it was recently discovered that unitarity implies an\ninfinite set of relations among tree-level wavefunction coefficients, dubbed\nthe Cosmological Optical Theorem. Here, we show that unitarity leads to a\nsystematic set of \"Cosmological Cutting Rules\" which constrain wavefunction\ncoefficients for any number of fields and to any loop order. These rules fix\nthe discontinuity of an n-loop diagram in terms of lower-loop diagrams and the\ndiscontinuity of tree-level diagrams in terms of tree-level diagrams with fewer\nexternal fields. Our results apply with remarkable generality, namely for\narbitrary interactions of fields of any mass and any spin with a Bunch-Davies\nvacuum around a very general class of FLRW spacetimes. As an application, we\nshow how one-loop corrections in the Effective Field Theory of inflation are\nfixed by tree-level calculations and discuss related perturbative unitarity\nbounds.
These findings greatly extend the potential of using unitarity to\nbootstrap cosmological observables and to restrict the space of consistent\neffective field theories on curved spacetimes.\n"} {"abstract": " In this paper, we show how the absorption and re-radiation of energy by\nmolecules in the air can influence the Multiple Input Multiple Output (MIMO)\nperformance in high-frequency bands, e.g., millimeter wave (mmWave) and\nterahertz. In more detail, some common atmospheric molecules, such as oxygen\nand water, can absorb and re-radiate energy at their natural resonance\nfrequencies, such as 60 GHz, 180 GHz and 320 GHz. Hence, when hit by\nelectromagnetic waves, molecules will get excited and absorb energy, which\nleads to an extra path loss and is known as molecular attenuation. Meanwhile,\nthe absorbed energy will be re-radiated in a random direction with a random\nphase. These re-radiated waves also interfere with the signal transmission.\nAlthough molecular re-radiation was mostly considered as noise in the\nliterature, recent works show that it is correlated with the main signal and\ncan be viewed as a composition of multiple delayed or scattered signals. Such a\nphenomenon can provide non-line-of-sight (NLoS) paths in an environment that\nlacks scatterers, which increases spatial multiplexing and thus greatly\nenhances the performance of MIMO systems. Therefore, in this paper, we explore\nthe scattering model and noise models of molecular re-radiation to characterize\nthe channel transfer function of the NLoS channels created by atmospheric\nmolecules. Our simulation results show that the re-radiation can increase MIMO\ncapacity by up to 3-fold in mmWave and 6-fold in terahertz for a set of\nrealistic transmit powers, distances, and antenna numbers. We also show that in\nthe high-SNR regime, the re-radiation makes open-loop precoding viable, which\nis an alternative to beamforming to avoid beam alignment sensitivity in\nhigh-mobility applications.\n"} {"abstract": " We present a Hubble Space Telescope/Wide-Field Camera 3 near infrared\nspectrum of the archetype Y dwarf WISEP 182831.08+265037.8. The spectrum covers\nthe 0.9-1.7 um wavelength range at a resolving power of lambda/Delta lambda\n~180 and is a significant improvement over the previously published spectrum\nbecause it covers a broader wavelength range and is uncontaminated by light\nfrom a background star. The spectrum is unique for a cool brown dwarf in that\nthe flux peaks in the Y, J, and H bands are of near equal intensity in units of\nf_lambda. We fail to detect any absorption bands of NH_3 in the spectrum, in\ncontrast to the predictions of chemical equilibrium models, but tentatively\nidentify CH_4 as the carrier of an unknown absorption feature centered at 1.015\num. Using previously published ground- and space-based photometry, and using a\nRayleigh-Jeans tail to account for flux emerging longward of 4.5 um, we compute\na bolometric luminosity of log (L_bol/L_sun)=-6.50+-0.02, which is\nsignificantly lower than previously published results. Finally, we compare the\nspectrum and photometry to two sets of atmospheric models and find that the\nbest overall match to the observed properties of WISEP 182831.08+265037.8 is a\n~1 Gyr old binary composed of two T_eff~325 K, ~5 M_Jup brown dwarfs with\nsubsolar [C/O] ratios.\n"} {"abstract": " Consider traffic data (i.e., triplets in the form of\nsource-destination-timestamp) that grow over time.
Tensors (i.e.,\nmulti-dimensional arrays) with a time mode are widely used for modeling and\nanalyzing such multi-aspect data streams. In such tensors, however, new entries\nare added only once per period, which is often an hour, a day, or even a year.\nThis discreteness of tensors has limited their usage for real-time\napplications, where new data should be analyzed instantly as it arrives. How\ncan we analyze time-evolving multi-aspect sparse data 'continuously' using\ntensors where time is 'discrete'? We propose SLICENSTITCH for continuous\nCANDECOMP/PARAFAC (CP) decomposition, which has numerous time-critical\napplications, including anomaly detection, recommender systems, and stock\nmarket prediction. SLICENSTITCH changes the starting point of each period\nadaptively, based on the current time, and updates factor matrices (i.e.,\noutputs of CP decomposition) instantly as new data arrives. We show,\ntheoretically and experimentally, that SLICENSTITCH is (1) 'Any time': updating\nfactor matrices immediately without having to wait until the current time\nperiod ends, (2) Fast: with constant-time updates up to 464x faster than online\nmethods, and (3) Accurate: with fitness comparable (specifically, 72 ~ 100%) to\noffline methods.\n"} {"abstract": " We develop a first-principles-based generalized mode-coupling theory (GMCT)\nfor the tagged-particle motion of glassy systems. This theory establishes a\nhierarchy of coupled integro-differential equations for self-multi-point\ndensity correlation functions, which can formally be extended up to infinite\norder. We use our GMCT framework to calculate the self-nonergodicity parameters\nand the self-intermediate scattering function for the Percus-Yevick hard sphere\nsystem, based on the first few levels of the GMCT hierarchy. We also test the\nscaling laws in the $\\alpha$- and $\\beta$-relaxation regimes near the\nglass-transition singularity. Furthermore, we study the mean-square\ndisplacement and the Stokes-Einstein relation in the supercooled regime. We\nfind that qualitatively our GMCT results share many similarities with the\nwell-established predictions from standard mode-coupling theory, but the\nquantitative results change, and typically improve, by increasing the GMCT\nclosure level. However, we also demonstrate on general theoretical grounds that\nthe current GMCT framework is unable to account for violation of the\nStokes-Einstein relation, underlining the need for further improvements in the\nfirst-principles description of glassy dynamics.\n"} {"abstract": " In this article, we define amorphic complexity for actions of locally compact\n$\\sigma$-compact amenable groups on compact metric spaces. Amorphic complexity,\noriginally introduced for $\\mathbb Z$-actions, is a topological invariant which\nmeasures the complexity of dynamical systems in the regime of zero entropy. We\nshow that it is tailor-made to study strictly ergodic group actions with\ndiscrete spectrum and continuous eigenfunctions. This class of actions\nincludes, in particular, Delone dynamical systems related to regular model sets\nobtained via Meyer's cut and project method. We provide sharp upper bounds on\nthe amorphic complexity of such systems.
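The 'continuous' tensor view used by SLICENSTITCH can be illustrated with a toy version of its sliding time slices (a sketch only, not the authors' implementation): each arriving (source, destination, timestamp) triplet is mapped immediately into one of W period-length slices that end at the current time.

```python
from collections import defaultdict

def slice_index(event_time, now, T, W):
    """Map an event into one of W sliding slices of length T; None if expired."""
    age = now - event_time
    if not (0.0 <= age < T * W):
        return None
    return W - 1 - int(age // T)   # the most recent slice has index W-1

tensor = defaultdict(float)        # sparse (src, dst, slice) -> count
T, W, now = 3600.0, 24, 1_700_000_000.0
for src, dst, t in [(1, 2, now - 10), (1, 2, now - 7200), (3, 4, now - 90000)]:
    s = slice_index(t, now, T, W)
    if s is not None:
        tensor[(src, dst, s)] += 1.0   # updated instantly on arrival
print(dict(tensor))
```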
In doing so, we observe an intimate\nrelationship between amorphic complexity and fractal geometry.\n"} {"abstract": " In this paper we prove regularity results for a class of nonlinear degenerate\nelliptic equations of the form $\\displaystyle -\\operatorname{div}(A(|\\nabla\nu|)\\nabla u)+B\\left( |\\nabla u|\\right) =f(u)$; in particular, we investigate\nthe second-order regularity of the solutions. As a consequence of these\nresults, we obtain symmetry and monotonicity properties of positive solutions\nfor this class of degenerate problems in convex symmetric domains via a\nsuitable adaptation of the celebrated moving plane method of Alexandrov-Serrin.\n"} {"abstract": " In the centre of mass frame, we have studied theoretically the $Z$-boson\nresonant production in the presence of an intense laser field via the weak\nprocess $e^+e^- \\to \\mu^+\\mu^-$. Dressing the incident particles by a\nCircularly Polarized laser field (CP-laser field), we first show that, for\ngiven laser-field parameters, the $Z$-boson cross section decreases by several\norders of magnitude. We have compared the Total Cross Section (TCS) obtained by\nusing the scattering matrix method with that given by the Breit-Wigner approach\nin the presence of a CP-laser field, and the results are found to be very\nconsistent. This result indicates that the Breit-Wigner formula is valid not\nonly for the laser-free process but also in the presence of a CP-laser field.\nThe dependence of the laser-assisted differential cross section on the Centre\nof Mass Energy (CME) for different scattering angles proves that it reaches its\nmaximum for small and high scattering angles. In the next step, by dressing\nboth incident and scattered particles, we have shown that the CP-laser field\nlargely affects the TCS, especially when its strength reaches\n$10^{9}\\,V.cm^{-1}$. This result confirms the one obtained for elastic\nelectron-proton scattering in the presence of a CP-laser field [I. Dahiri et\nal., arXiv:2102.00722v1]. It is interpreted by the fact that heavy interacting\nparticles require high laser-field intensities for the collision's cross\nsection to be affected.\n"} {"abstract": " In this work, we give a new technique for constructing self-dual codes over\ncommutative Frobenius rings using $\\lambda$-circulant matrices. The new\nconstruction was derived as a modification of the well-known four circulant\nconstruction of self-dual codes. Applying this technique together with the\nbuilding-up construction, we construct singly-even binary self-dual codes of\nlengths 56, 58, 64, 80 and 92 that were not known in the literature before.\nSingly-even self-dual codes of length 80 with $\\beta\\in\\{2,4,5,6,8\\}$ in their\nweight enumerators are constructed for the first time in the literature.\n"} {"abstract": " The need for a comprehensive study to explore various aspects of online\nsocial media has been emphasized by many researchers. This paper gives an\ninsight into the social platform Twitter. In the present work, we illustrate a\nstepwise procedure for crawling the data and discuss the key issues related to\nextracting associated features that can be useful in Twitter-related research\nwhile crawling these data from Application Programming Interfaces (APIs).\nFurther, the data, comprising over 86 million tweets, have been analysed from\nvarious perspectives, including the most used languages, most frequent words,\nmost frequent users, and countries with the most and least tweets and\nre-tweets.
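Once tweets have been crawled, descriptive statistics of the kind reported above reduce to simple counting; a minimal sketch (the field names are hypothetical and depend on the API response format):

```python
from collections import Counter

tweets = [
    {"lang": "en", "user": "alice", "text": "data science is fun"},
    {"lang": "en", "user": "bob",   "text": "science of twitter data"},
    {"lang": "es", "user": "alice", "text": "hola mundo"},
]

langs = Counter(t["lang"] for t in tweets)   # most used languages
users = Counter(t["user"] for t in tweets)   # most frequent users
words = Counter(w for t in tweets for w in t["text"].lower().split())
print(langs.most_common(2), users.most_common(1), words.most_common(3))
```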
The analysis reveals that the user data associated with\nTwitter are highly relevant for research in various domains, including\npolitics, social science, economics, and linguistics. In addition, the\nrelation between the Twitter users of a country and its human development index\nhas been identified. It is observed that countries with very high human\ndevelopment indices have a relatively higher number of tweets compared to\ncountries with low human development indices. It is envisaged that the present\nstudy will open many doors for research in information processing and data\nscience.\n"} {"abstract": " We study the structural evolution of isolated star-forming galaxies in the\nIllustris TNG100-1 hydrodynamical simulation, with a focus on investigating the\ngrowth of the central core density within 2 kpc ($\\Sigma_{*,2kpc}$) in relation\nto total stellar mass ($M_*$) at z < 0.5. First, we show that several\nobservational trends in the $\\Sigma_{*,2kpc}$-$M_*$ plane are qualitatively\nreproduced in IllustrisTNG, including the distributions of AGN, star forming\ngalaxies, quiescent galaxies, and radial profiles of stellar age, sSFR, and\nmetallicity. We find that galaxies with dense cores evolve parallel to the\n$\\Sigma_{*,2kpc}$-$M_*$ relation, while galaxies with diffuse cores evolve\nalong shallower trajectories. We investigate possible drivers of rapid growth\nin $\\Sigma_{*,2kpc}$ compared to $M_*$. Both the current sSFR gradient and the\nBH accretion rate are indicators of past core growth, but are not predictors of\nfuture core growth. Major mergers (although rare in our sample; $\\sim$10%)\ncause steeper core growth, except for high mass ($M_*$ >$\\sim$ $10^{10}\nM_{\\odot}$) mergers, which are mostly dry. Disc instabilities, as measured by\nthe fraction of mass with Toomre Q < 2, are not predictive of rapid core\ngrowth. Instead, rapid core growth results in more stable discs. The cumulative\nblack hole feedback history sets the maximum rate of core growth, preventing\nrapid growth in high-mass galaxies ($M_*$ >$\\sim$ $10^{9.5} M_{\\odot}$). For\nmassive galaxies the total specific angular momentum of accreting gas is the\nmost important predictor of future core growth. Our results suggest that the\nangular momentum of accreting gas controls the slope, width and zero-point\nevolution of the $\\Sigma_{*,2kpc}$-$M_*$ relation.\n"} {"abstract": " This article introduces a neural network-based signal processing framework\nfor intelligent reflecting surface (IRS) aided wireless communications systems.\nBy modeling radio-frequency (RF) impairments inside the \"meta-atoms\" of IRS\n(including nonlinearity and memory effects), we present an approach that\ngeneralizes the entire IRS-aided system as a reservoir computing (RC) system,\nan efficient recurrent neural network (RNN) operating in a state near the \"edge\nof chaos\". This framework enables us to take advantage of the nonlinearity of\nthis \"fabricated\" wireless environment to overcome link degradation due to\nmodel mismatch. Accordingly, the randomness of the wireless channel and RF\nimperfections are naturally embedded into the RC framework, enabling the\ninternal RC dynamics lying on the edge of chaos. Furthermore, several practical\nissues, such as channel state information acquisition, passive beamforming\ndesign, and physical layer reference signal design, are discussed.\n"} {"abstract": " Is critical input information encoded in specific sparse pathways within the\nneural network?
In this work, we discuss the problem of identifying these\ncritical pathways and subsequently leverage them for interpreting the network's\nresponse to an input. The pruning objective -- selecting the smallest group of\nneurons for which the response remains equivalent to the original network --\nhas been previously proposed for identifying critical pathways. We demonstrate\nthat sparse pathways derived from pruning do not necessarily encode critical\ninput information. To ensure sparse pathways include critical fragments of the\nencoded input information, we propose pathway selection via neurons'\ncontribution to the response. We proceed to explain how critical pathways can\nreveal critical input features. We prove that pathways selected via neuron\ncontribution are locally linear (in an L2-ball), a property that we use for\nproposing a feature attribution method: \"pathway gradient\". We validate our\ninterpretation method using mainstream evaluation experiments. The validation\nof the pathway gradient interpretation method further confirms that pathways\nselected via neuron contributions correspond to critical input features. The\ncode is publicly available.\n"} {"abstract": " The paper investigates the problem of finding communities in complex network\nsystems, whose detection allows a better understanding of the laws governing\ntheir functioning. To solve this problem, two approaches are proposed, based on\nthe flow characteristics of the complex network. The first of these approaches\nconsists in calculating the parameters of influence of separate subsystems of\nthe network system, distinguished by the principles of ordering or\nsubordination, and the second, in using the concept of its flow core. Based on\nthe proposed approaches, reliable criteria for finding communities have been\nformulated and efficient algorithms for their detection in complex network\nsystems have been developed. It is shown that the proposed approaches make it\npossible to single out communities in cases in which existing numerical and\nvisual methods prove ineffective.\n"} {"abstract": " The novel concept of simultaneously transmitting and reflecting (STAR)\nreconfigurable intelligent surfaces (RISs) is investigated, where the incident\nwireless signal is divided into transmitted and reflected signals passing into\nboth sides of the space surrounding the surface, thus facilitating a full-space\nmanipulation of signal propagation. Based on the introduced basic signal model\nof `STAR', three practical operating protocols for STAR-RISs are proposed,\nnamely energy splitting (ES), mode switching (MS), and time switching (TS).\nMoreover, a STAR-RIS aided downlink communication system is considered for both\nunicast and multicast transmission, where a multi-antenna base station (BS)\nsends information to two users, i.e., one on each side of the STAR-RIS. A power\nconsumption minimization problem for the joint optimization of the active\nbeamforming at the BS and the passive transmission and reflection beamforming\nat the STAR-RIS is formulated for each of the proposed operating protocols,\nsubject to communication rate constraints of the users. For ES, the resulting\nhighly-coupled non-convex optimization problem is solved by an iterative\nalgorithm, which exploits the penalty method and successive convex\napproximation. Then, the proposed penalty-based iterative algorithm is extended\nto solve the mixed-integer non-convex optimization problem for MS.
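The contribution-based pathway selection described above can be illustrated on a toy one-hidden-layer ReLU network (a simplified sketch, not the paper's implementation): hidden units are ranked by |activation x outgoing weight| for a given input, and the smallest set whose partial sum reproduces the response is kept.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
w2 = rng.normal(size=16)
x = rng.normal(size=8)

a = np.maximum(W1 @ x + b1, 0.0)        # hidden activations
contrib = a * w2                         # per-neuron contribution to output
order = np.argsort(-np.abs(contrib))     # most important neurons first

full, partial, kept = contrib.sum(), 0.0, 0
for i in order:
    partial += contrib[i]
    kept += 1
    if abs(partial - full) <= 0.05 * abs(full):   # within 5% of the response
        break
print(f"{kept}/16 neurons reproduce the response within 5%")
```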
For TS, the\noptimization problem is decomposed into two subproblems, which can be\nconsecutively solved using state-of-the-art algorithms and convex optimization\ntechniques. Finally, our numerical results reveal that: 1) the TS and ES\noperating protocols are generally preferable for unicast and multicast\ntransmission, respectively; and 2) the required power consumption for both\nscenarios is significantly reduced by employing the proposed STAR-RIS instead\nof conventional reflecting/transmitting-only RISs.\n"} {"abstract": " In this work we compute the first integral cohomology of the pure mapping\nclass group of a non-orientable surface of infinite topological type and genus\nat least 3. For this purpose, we also prove several other results already known\nfor orientable surfaces, such as the existence of an Alexander method, the fact\nthat the mapping class group is isomorphic to the automorphism group of the\ncurve graph along with the topological rigidity of the curve graph, and the\nstructure of the pure mapping class group as both a Polish group and a\nsemi-direct product.\n"} {"abstract": " Programmable data planes allow users to define their own data plane\nalgorithms for network devices, including appropriate data plane application\nprogramming interfaces (APIs) which may be leveraged by user-defined\nsoftware-defined networking (SDN) control. This offers great flexibility for\nnetwork customization, be it for specialized, commercial appliances, e.g., in\n5G or data center networks, or for rapid prototyping in industrial and academic\nresearch. Programming protocol-independent packet processors (P4) has emerged\nas the currently most widespread abstraction, programming language, and concept\nfor data plane programming. It is developed and standardized by an open\ncommunity, and it is supported by various software and hardware platforms. In\nthe first part of this paper we give a tutorial on data plane programming\nmodels, the P4 programming language, architectures, compilers, targets, and\ndata plane APIs. We also consider research efforts to advance P4 technology. In\nthe second part, we categorize a large body of literature of P4-based applied\nresearch into different research domains, summarize the contributions of these\npapers, and extract prototypes, target platforms, and source code availability.\nFor each research domain, we analyze how the reviewed works benefit from P4's\ncore features. Finally, we discuss potential next steps based on our findings.\n"} {"abstract": " Metasurfaces enable manipulation of light propagation at an unprecedented\nlevel, benefitting from a number of merits unavailable to conventional optical\nelements, such as ultracompactness, precise phase and polarization control at\ndeep subwavelength scale, and multifunctionalities. Recent progress in this\nfield has witnessed a plethora of functional metasurfaces, ranging from lenses\nand vortex beam generation to holography. However, research endeavors have been\nmainly devoted to static devices, exploiting only a glimpse of opportunities\nthat metasurfaces can offer. We demonstrate a dynamic metasurface platform,\nwhich allows independent manipulation of addressable subwavelength pixels at\nvisible frequencies through controlled chemical reactions. In particular, we\ncreate dynamic metasurface holograms for advanced optical information\nprocessing and encryption.
Plasmonic nanorods tailored to exhibit hierarchical\nreaction kinetics upon hydrogenation/dehydrogenation constitute addressable\npixels in multiplexed metasurfaces. The helicity of light, hydrogen, oxygen,\nand reaction duration serve as multiple keys to encrypt the metasurfaces. One\nsingle metasurface can be deciphered into manifold messages with customized\nkeys, featuring a compact data storage scheme as well as a high level of\ninformation security. Our work suggests a novel route to protect and transmit\nclassified data, where highly restricted access to information is imposed.\n"} {"abstract": " High-energy heavy-ion collisions generate an extremely strong magnetic field\nwhich plays a key role in a number of novel quantum phenomena in quark-gluon\nplasma (QGP), such as the chiral magnetic effect (CME). However, due to the\ncomplexity of theoretical modelling of the coupled electromagnetic fields and\nthe QGP system, especially in the pre-equilibrium stages, the lifetime of the\nmagnetic field in the QGP medium remains undetermined. We establish, for the\nfirst time, a kinetic framework to study the dynamical decay of the magnetic\nfield in the early stages of a weakly coupled QGP by solving the coupled\nBoltzmann and Maxwell equations. We find that at late times a\nmagnetohydrodynamical description of the coupled system emerges. With respect\nto realistic collisions at RHIC and the LHC, we estimate the residual strength\nof the magnetic field in the QGP when the system starts to evolve\nhydrodynamically.\n"} {"abstract": " Nowadays, the confidentiality of data and information is of great importance\nfor many companies and organizations. For this reason, they may prefer not to\nrelease exact data, but instead to grant researchers access to approximate\ndata. For example, rather than providing the exact measurements of their\nclients, they may only provide researchers with grouped data, that is, the\nnumber of clients falling in each of a set of non-overlapping measurement\nintervals. The challenge is to estimate the mean and variance structure of the\nhidden ungrouped data based on the observed grouped data. To tackle this\nproblem, this work considers the exact observed data likelihood and applies the\nExpectation-Maximization (EM) and Monte-Carlo EM (MCEM) algorithms for cases\nwhere the hidden data follow a univariate, bivariate, or multivariate normal\ndistribution. Simulation studies are conducted to evaluate the performance of\nthe proposed EM and MCEM algorithms. The well-known Galton data set is\nconsidered as an application example.\n"} {"abstract": " For the first time, the dielectric response of a BaTiO3 thin film under an AC\nelectric field is investigated using time-resolved X-ray absorption\nspectroscopy at the Ti K-edge to clarify the correlated contributions of each\nconstituent atom to the electronic states. Intensities of the pre-edge eg peak\nand shoulder structure just below the main edge increase with an increase in\nthe amplitude of the applied electric field, whereas that of the main peak\ndecreases in an opposite manner. Based on the multiple scattering theory, the\nincrease and decrease of the eg and main peaks are simulated for different Ti\noff-center displacements. Our results indicate that these spectral features\nreflect the inter- and intra-atomic hybridization of Ti 3d with O 2p and Ti 4p,\nrespectively.
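For the univariate case, one EM iteration for the grouped-data problem above has a closed form: the E-step uses the moments of a normal distribution truncated to each bin, and the M-step re-estimates mu and sigma from the expected sufficient statistics. A compact sketch (the bin edges and counts are hypothetical; open-ended outer bins could use +/-inf edges):

```python
import numpy as np
from scipy.stats import norm

def em_grouped_normal(edges, counts, mu=0.0, sigma=1.0, n_iter=200):
    a, b = edges[:-1], edges[1:]
    n, N = np.asarray(counts, float), float(np.sum(counts))
    for _ in range(n_iter):
        al, be = (a - mu) / sigma, (b - mu) / sigma
        Z = norm.cdf(be) - norm.cdf(al)
        # E-step: mean/variance of X | X in [a, b] (truncated normal moments)
        m = mu + sigma * (norm.pdf(al) - norm.pdf(be)) / Z
        v = sigma**2 * (1.0 + (al * norm.pdf(al) - be * norm.pdf(be)) / Z
                        - ((norm.pdf(al) - norm.pdf(be)) / Z)**2)
        # M-step: weighted updates of mu and sigma
        mu = np.sum(n * m) / N
        sigma = np.sqrt(np.sum(n * (v + (m - mu)**2)) / N)
    return mu, sigma

edges = np.array([60.0, 63.0, 66.0, 69.0, 72.0, 75.0])
print(em_grouped_normal(edges, counts=[8, 30, 45, 25, 6]))
```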
In contrast, the shoulder structure is not affected by changes in\nthe Ti off-center displacement but is susceptible to the effect of the corner\nsite Ba ions. This is the first experimental verification of the dynamic\nelectronic contribution of Ba to polarization reversal.\n"} {"abstract": " Magnetic multilayers are promising tuneable systems for hosting magnetic\nskyrmions at/above room temperature. Revealing their intriguing switching\nmechanisms and associated inherent electrical responses is a prerequisite for\ndeveloping skyrmionic devices. In this work, we theoretically demonstrate the\nannihilation of single skyrmions occurring through a multilayer structure,\nwhich is mediated by hopping dynamics of topological hedgehog singularities\nknown as Bloch points. The emerging intralayer dynamics of Bloch points are\ndominated by the Dzyaloshinskii-Moriya interaction, and their propagation can\ngive rise to solenoidal emergent electric fields in the vicinity. Moreover, as\nthe topology of spin textures can dominate their emergent magnetic properties,\nwe show that the Bloch-point hopping through the multilayer will modulate the\nassociated topological Hall response, with the magnitude proportional to the\neffective topological charge. We also investigate the thermodynamic stability\nof these states regarding the layer-dependent magnetic properties. This study\ncasts light on the emergent electromagnetic signatures of skyrmion-based\nspintronics, rooted in magnetic-multilayer systems.\n"} {"abstract": " In this paper a comparative structural, dielectric and magnetic study of two\nlangasite compounds Ba$_3$TeCo$_3$P$_2$O$_{14}$ (absence of lone pair) and\nPb$_3$TeCo$_3$P$_2$O$_{14}$ (Pb$^{2+}$ 6$s^2$ lone pair) has been carried out\nto precisely explore the development of room-temperature spontaneous\npolarization in the presence of a stereochemically active lone pair. In the\ncase of Pb$_3$TeCo$_3$P$_2$O$_{14}$, mixing of Pb 6$s$ with both Pb 6$p$ and O\n2$p$ helps the lone pair to be stereochemically active. This stereochemically\nactive lone pair brings a large structural distortion within the unit cell and\ncreates a polar geometry, while the Ba$_3$TeCo$_3$P$_2$O$_{14}$ compound\nremains in a nonpolar structure due to the absence of any such effect.\nConsequently, polarization measurements under varying electric field confirm\nroom-temperature ferroelectricity for Pb$_3$TeCo$_3$P$_2$O$_{14}$, which is not\nthe case for Ba$_3$TeCo$_3$P$_2$O$_{14}$. A detailed study was carried out to\nunderstand the microscopic mechanism of the ferroelectricity, which revealed\nthe intriguing underlying activity of the polar TeO$_6$ octahedral unit as well\nas of the Pb hexagon.\n"} {"abstract": " We study cosmological inflation and its dynamics in the framework of the\nRandall-Sundrum II brane model. In particular, we analyze in detail four\nrepresentative small-field inflationary potentials, namely Natural inflation,\nHilltop inflation, Higgs-like inflation, and Exponential SUSY inflation, each\ncharacterized by two mass scales. We constrain the parameters for which a\nviable inflationary Universe emerges using the latest PLANCK results.\nFurthermore, we investigate whether or not those models in brane cosmology are\nconsistent with the recently proposed Swampland Criteria, and give predictions\nfor the duration of reheating as well as for the reheating temperature after\ninflation.
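For orientation, the standard 4D slow-roll observables for one of the potentials above (natural inflation, V = Lambda^4 [1 + cos(phi/f)]) can be computed in a few lines. This sketch uses the textbook formulas in M_pl = 1 units, not the modified RS-II brane expressions the paper derives, and the sample point is hypothetical:

```python
import numpy as np

def natural_inflation_observables(phi, f):
    V = 1.0 + np.cos(phi / f)                 # V / Lambda^4
    dV = -np.sin(phi / f) / f
    d2V = -np.cos(phi / f) / f**2
    eps = 0.5 * (dV / V)**2                   # slow-roll epsilon
    eta = d2V / V                             # slow-roll eta
    return 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps   # (n_s, r)

ns, r = natural_inflation_observables(phi=5.0, f=7.0)
print(f"n_s = {ns:.3f}, r = {r:.3f}")
```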
Our results show that (i) the distance conjecture is satisfied, (ii)\nthe de Sitter conjecture and its refined version may be avoided, and (iii) the\nallowed range for the five-dimensional Planck mass, $M_5$, is found to be\nbetween $10^5~\textrm{TeV}$ and $10^{12}~\textrm{TeV}$. Our main findings\nindicate that non-thermal leptogenesis cannot work within the framework of\nRS-II brane cosmology, at least for the inflationary potentials considered\nhere.\n"} {"abstract": " We first propose a general method to construct the complete set of on-shell\noperator bases involving massive particles with any spins. To incorporate the\nnon-abelian little groups of massive particles, the on-shell scattering\namplitude basis should be factorized into two parts: one is charged, and the\nother one is neutral under little groups of massive particles. The complete set\nof these two parts can be systematically constructed by choosing some specific\nYoung diagrams of Lorentz subgroup and global symmetry $U(N)$ respectively ($N$\nis the number of external particles), without equation-of-motion and\nintegration-by-parts redundancy. Thus the complete massive amplitude bases\nwithout any redundancies can be obtained by combining these two complete sets.\nSome examples are presented to explicitly demonstrate this method. This method\nis applicable to constructing amplitude bases involving identical particles,\nand all the bases can be constructed automatically by computer programs based\non it.\n"} {"abstract": " We study the variety ZG of monoids where the elements that belong to a group\nare central, i.e., commute with all other elements. We show that ZG is local,\nthat is, the semidirect product ZG * D of ZG by definite semigroups is equal to\nLZG, the variety of semigroups where all local monoids are in ZG. Our main\nresult is thus: ZG * D = LZG. We prove this result using Straubing's delay\ntheorem, by considering paths in the category of idempotents. In the process,\nwe obtain the characterization ZG = MNil \vee Com, and also characterize the ZG\nlanguages, i.e., the languages whose syntactic monoid is in ZG: they are\nprecisely the languages that are finite unions of disjoint shuffles of\nsingleton languages and regular commutative languages.\n"} {"abstract": " The noise-enhanced trapping is a surprising phenomenon that has already been\nstudied in chaotic scattering problems where the noise affects the physical\nvariables but not the parameters of the system. Following this research, in\nthis work we provide strong numerical evidence to show that an additional\nmechanism that enhances the trapping arises when the noise influences the\nenergy of the system. For this purpose, we have included a source of Gaussian\nwhite noise in the H\'enon-Heiles system, which is a paradigmatic example of an\nopen Hamiltonian system. For a particular value of the noise intensity, some\ntrajectories decrease their energy due to the stochastic fluctuations. This\ndrop in energy allows the particles to spend very long transients in the\nscattering region, increasing their average escape times. This result, together\nwith the previously studied mechanisms, points out the generality of the\nnoise-enhanced trapping in chaotic scattering problems.\n"} {"abstract": " This study investigates the correlation of self-report accuracy with academic\nperformance. The sample was composed of 289 undergraduate students (96 senior\nand 193 junior) enrolled in two engineering classes.
Age ranged between 22 and\n24 years, with a slight overrepresentation of male students (53%). Academic\nperformance was calculated based on students' final grades in each class. The\ntendency to report inaccurate information was measured at the end of the Raven\nProgressive Matrices Test, by asking students to report their exact finishing\ntimes. We controlled for gender, age, personality traits, intelligence, and\npast academic performance. We also included measures of centrality in their\nfriendship, advice and trust networks. Correlation and multiple regression\nanalyses results indicate that lower-achieving students were significantly less\naccurate in self-reporting data. We also found that being more central in the\nadvice network was correlated with higher performance (r = .20, p < .001). The\nresults are aligned with existing literature emphasizing the individual and\nrelational factors associated with academic performance and, pending future\nstudies, may be utilized to include a new metric of self-report accuracy that\nis not dependent on academic records.\n"} {"abstract": " We investigate the 3D spin alignment of galaxies with respect to the\nlarge-scale filaments using the MaNGA survey. The cosmic web is reconstructed\nfrom the Sloan Digital Sky Survey using Disperse and the 3D spins of MaNGA\ngalaxies are estimated using the thin disk approximation with integral field\nspectroscopy kinematics. Late-type spiral galaxies are found to have their\nspins parallel to the closest filament's axis. The alignment signal is found to\nbe dominated by low-mass spirals. Spins of S0-type galaxies tend to be oriented\npreferentially in a direction perpendicular to the filament's axis.\nThis orthogonal orientation is found to be dominated by S0s that show a notable\nmisalignment between their kinematic components of stellar and ionised gas\nvelocity fields and/or by low-mass S0s with lower rotation support compared to\ntheir high-mass counterparts. Qualitatively similar results are obtained when\nsplitting galaxies based on the degree of ordered stellar rotation, such that\ngalaxies with high spin magnitude have their spin aligned, and those with low\nspin magnitude in a direction perpendicular to the filaments. In the context of\nconditional tidal torque theory, these findings suggest that galaxies' spins\nretain memory of their larger-scale environment. In agreement with measurements\nfrom hydrodynamical cosmological simulations, the measured signal at low\nredshift is weak, yet statistically significant. The dependence of the\nspin-filament orientation of galaxies on their stellar mass, morphology and\nkinematics highlights the importance of sample selection to detect the signal.\n"} {"abstract": " Pretrained Masked Language Models (MLMs) have revolutionised NLP in recent\nyears. However, previous work has indicated that off-the-shelf MLMs are not\neffective as universal lexical or sentence encoders without further\ntask-specific fine-tuning on NLI, sentence similarity, or paraphrasing tasks\nusing annotated task data. In this work, we demonstrate that it is possible to\nturn MLMs into effective universal lexical and sentence encoders even without\nany additional data and without any supervision. We propose an extremely\nsimple, fast and effective contrastive learning technique, termed Mirror-BERT,\nwhich converts MLMs (e.g., BERT and RoBERTa) into such encoders in 20-30\nseconds without any additional external knowledge.
Mirror-BERT relies on fully\nidentical or slightly modified string pairs as positive (i.e., synonymous)\nfine-tuning examples, and aims to maximise their similarity during identity\nfine-tuning. We report huge gains over off-the-shelf MLMs with Mirror-BERT in\nboth lexical-level and sentence-level tasks, across different domains and\ndifferent languages. Notably, in the standard sentence semantic similarity\n(STS) tasks, our self-supervised Mirror-BERT model even matches the performance\nof the task-tuned Sentence-BERT models from prior work. Finally, we delve\ndeeper into the inner workings of MLMs, and suggest some evidence on why this\nsimple approach can yield effective universal lexical and sentence encoders.\n"} {"abstract": " Soft or weakly-consolidated sand refers to porous materials composed of\nparticles (or grains) weakly held together to form a solid but that can be\neasily broken when subjected to stress. These materials do not behave as\nconventional brittle, linear elastic materials and the transition between these\ntwo regimes cannot usually be described using poro-elastic models. Furthermore,\nconventional geotechnical sampling techniques often result in the destruction\nof the cementation, and recovery of sufficient intact core is, therefore,\ndifficult. This paper studies a numerical model that allows us to introduce\nweak consolidation in granular packs. The model, based on the LIGGGHTS open\nsource project, simply adds an attractive contribution to particles in contact.\nThis simple model allows us to reproduce key elements of the behaviour of the\nstress observed in compacted sands and clay, as well as in poorly consolidated\nsandstones. The paper finishes by inspecting the effect of different\nconsolidation levels on fluid-driven fracture behaviour. Numerical results are\ncompared against experimental results on bio-cemented sandstones.\n"} {"abstract": " In this paper, a new implicit-explicit local method with an arbitrary order\nis developed for stiff initial value problems. Here, a general method for\none-step time integrations has been created, considering a direction-free\napproach for integrations leading to a numerical method with parameter-based\nstability preservation. Adaptive procedures depending on the problem types for\nthe current method are explained with the help of local error estimates to\nminimize the computational cost. An a priori error analysis of the current method\nis performed, and order conditions are presented in terms of direction parameters.\nStability analysis of the method is performed for both scalar equations and\nsystems of differential equations. The parameter-based method has been proven\nto provide A-stability, for 0.5<\theta<1, at various orders. The present method\nhas been shown to be a very good option for addressing a wide range of initial\nvalue problems through numerical experiments. As a significant contribution, the\nSusceptible-Exposed-Infected-Recovered equation system parameterized for the\nCOVID-19 pandemic has been integrated with the present method; the stability\nproperties of the method have been tested on this stiff model, and significant\nresults are produced.
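The A-stability property quoted above can be illustrated with a classical one-step theta-scheme on a stiff linear test equation; the sketch below is a deliberate simplification under that assumption, not the authors' direction-free method:

```python
# theta-method on the stiff test equation y' = lam*y with lam = -1000:
# y_{n+1} = y_n + dt*((1-theta)*lam*y_n + theta*lam*y_{n+1}), solved for y_{n+1}.
# The scheme is A-stable for theta >= 1/2, so dt may far exceed the explicit limit.
lam, dt, steps = -1000.0, 0.01, 100  # dt is 5x the explicit Euler limit 2/|lam|

for theta in (0.5, 0.75, 1.0):
    y = 1.0
    for _ in range(steps):
        y = (1.0 + dt * (1.0 - theta) * lam) * y / (1.0 - dt * theta * lam)
    print(f"theta={theta}: y(1) = {y:.3e}")  # stays bounded and decays, no blow-up
```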
Some challenging stiff behaviours represented by the\nnonlinear Duffing equation, Robertson chemical system, and van der Pol equation\nhave also been integrated, and the results revealed that the current algorithm\nproduces much more reliable results than numerical techniques in the\nliterature.\n"} {"abstract": " In this paper, a new method of training pipeline is discussed to achieve\nsignificant performance on the task of anti-spoofing with RGB images. We explore\nand highlight the impact of using pseudo-depth to pre-train a network that will\nbe used as the backbone to the final classifier. While the usage of\npseudo-depth for the anti-spoofing task is not a new idea on its own, previous\nendeavours utilize pseudo-depth simply as another medium for extracting features\nfor prediction, or as part of many auxiliary losses aiding the\ntraining of the main classifier, normalizing the importance of pseudo-depth to\njust another piece of semantic information. Through this work, we argue that a\nsignificant advantage in training the final classifier can be gained by having\nthe pre-trained generator learn to predict the corresponding pseudo-depth\nof a given facial image within a Generative Adversarial Network framework. Our\nexperimental results indicate that our method results in a much more adaptable\nsystem that generalizes not only to intra-dataset samples but also to inter-dataset\nsamples it has never seen during training. Quantitatively, our\nmethod approaches the baseline performance of current state-of-the-art\nanti-spoofing models with 15.8x fewer parameters. Moreover, experiments\nshowed that the introduced methodology performs well using only basic binary\nlabels without additional semantic information, which indicates potential\nbenefits of this work in industrial, application-based environments where the\ntrade-off between additional labelling and resources is considered.\n"} {"abstract": " A transient two-dimensional acoustic boundary element solver is coupled to a\npotential flow boundary element solver via Powell's acoustic analogy to\ndetermine the acoustic emission of isolated hydrofoils performing\nbiologically-inspired motions. The flow-acoustic boundary element framework is\nvalidated against experimental and asymptotic solutions for the noise produced\nby canonical vortex-body interactions. The numerical framework then\ncharacterizes the noise production of an oscillating foil, which is a simple\nrepresentation of a fish caudal fin. A rigid NACA 0012 hydrofoil is subjected\nto combined heaving and pitching motions for Strouhal numbers ($0.03 < St < 1$)\nbased on peak-to-peak amplitudes and chord-based reduced frequencies ($0.125 <\nf^* < 1$) that span the parameter space of many swimming fish species. A\ndipolar acoustic directivity is found for all motions, frequencies, and\namplitudes considered, and the peak noise level increases with both the reduced\nfrequency and the Strouhal number. A combined heaving and pitching motion\nproduces less noise than either a purely pitching or purely heaving foil at a\nfixed reduced frequency and amplitude of motion.
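For concreteness, the two dimensionless groups quoted above can be evaluated as follows; the numerical values are hypothetical, chosen only to fall inside the quoted ranges, and the common conventions $St = fA/U$ (peak-to-peak amplitude $A$) and $f^* = fc/U$ (chord $c$) are assumed:

```python
# Worked example of the nondimensional parameters above; the numbers are
# hypothetical, chosen only to land inside the quoted parameter ranges.
f = 2.0    # flapping frequency [Hz]
A = 0.05   # peak-to-peak trailing-edge amplitude [m]
c = 0.10   # chord length [m]
U = 1.0    # free-stream speed [m/s]

St = f * A / U      # Strouhal number based on peak-to-peak amplitude
f_star = f * c / U  # chord-based reduced frequency

print(f"St = {St:.2f}, f* = {f_star:.2f}")  # St = 0.10, f* = 0.20
```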
Correlations of the lift and\npower coefficients with the peak root-mean-square acoustic pressure levels are\ndetermined, which could be utilized to develop long-range, quiet swimmers.\n"} {"abstract": " In power system dynamic simulation, up to 90% of the computational time is\ndevoted to solving the network equations, i.e., a set of linear equations.\nTraditional approaches are based on sparse LU factorization, which is\ninherently sequential. In this paper, an inverse-based network solution is\nproposed by a hierarchical method for computing and storing the approximate\ninverse of the conductance matrix in electromagnetic transient (EMT)\nsimulations. The proposed method can also efficiently update the inverse by\nmodifying only local sub-matrices to reflect changes in the network, e.g., loss\nof a line. Experiments on a series of simplified 179-bus Western\nInterconnection models demonstrate the advantages of the proposed methods.\n"} {"abstract": " We present the results of long-term photometric monitoring of two active\ngalactic nuclei, 2MASX J08535955+7700543 (z $\sim$ 0.106) and VII Zw 244 (z\n$\sim$ 0.131), being investigated by the reverberation mapping method in\nmedium-band filters. To estimate the size of the broad line region, we have\nanalyzed the light curves with the JAVELIN code. The emission line widths have\nbeen measured using the spectroscopic data obtained at the 6-m BTA telescope of\nSAO RAS. We give our estimates of the supermassive black hole masses $\lg\n(M/M_{\odot})$, $7.398_{-0.171}^{+0.153}$, and $7.049_{-0.075}^{+0.068}$,\nrespectively.\n"} {"abstract": " Perpendicularly magnetized films showing small saturation magnetization,\n$M_\mathrm{s}$, are essential for spin-transfer-torque writing type\nmagnetoresistive random access memories, STT-MRAMs. An intermetallic compound,\n(Mn-Cr)AlGe, of the Cu$_2$Sb-type crystal structure was investigated in this\nstudy as a material showing low $M_\mathrm{s}$ ($\sim 300$ kA/m) and\nhigh perpendicular magnetic anisotropy, $K_\mathrm{u}$. The layer thickness\ndependence of $K_\mathrm{u}$ and effects of Mg-insertion layers at top and\nbottom (Mn-Cr)AlGe$|$MgO interfaces were studied in film samples fabricated\nonto thermally oxidized silicon substrates to realize high $K_\mathrm{u}$ in\nthe thickness range of a few nanometers. Optimum Mg-insertion thicknesses were\n1.4 and 3.0 nm for the bottom and the top interfaces, respectively, which were\nrelatively thick compared to results in similar insertion effect investigations\non magnetic tunnel junctions reported in previous studies. The cross-sectional\ntransmission electron microscope images revealed that the Mg-insertion layers\nacted as barriers to interdiffusion of Al-atoms as well as oxidization from the\nMgO layers. The values of $K_\mathrm{u}$ were about $7 \times 10^5$ and $2\n\times 10^5$ J/m$^3$ at room temperature for 5 and 3 nm-thick (Mn-Cr)AlGe\nfilms, respectively, with the optimum Mg-insertion thicknesses. The\n$K_\mathrm{u}$ at a few nanometer thicknesses is comparable to or higher than\nthose reported in perpendicularly magnetized CoFeB films, which are\nconventionally used in MRAMs, while the $M_\mathrm{s}$ value is one third or\nless of that of the CoFeB films.
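To connect the quoted $K_\mathrm{u}$ values to STT-MRAM retention, one can estimate the thermal stability factor $\Delta = K_\mathrm{u} V / (k_B T)$ of a memory cell patterned from such a film; the cell geometry below is an illustrative assumption, not a number from the study:

```python
# Illustrative estimate of the thermal stability factor Delta = Ku*V/(kB*T)
# for a hypothetical cell patterned from the 5 nm film quoted above.
import math

kB = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0           # room temperature [K]
Ku = 7e5            # anisotropy energy density [J/m^3], value quoted above
d = 20e-9           # hypothetical cell diameter [m] (assumption)
t = 5e-9            # film thickness [m]

V = math.pi * (d / 2) ** 2 * t   # cylindrical cell volume [m^3]
delta = Ku * V / (kB * T)
print(f"Delta = {delta:.0f}")    # ~265; Delta of ~60+ is a common retention target
```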
The developed (Mn-Cr)AlGe films are\npromising from the viewpoint of not only the magnetic properties, but also the\ncompatibility with the silicon process in the film fabrication.\n"} {"abstract": " Blockchain (BC) technology can revolutionize future networks by providing a\ndistributed, secure, and unalterable way to boost collaboration among\noperators, users, and other stakeholders. Its implementations have\ntraditionally been supported by wired communications, with performance\nindicators like the high latency introduced by the BC being one of the key\ntechnology drawbacks. However, when applied to wireless communications, the\nperformance of BC remains unknown, especially if running over contention-based\nnetworks. In this paper, we evaluate the latency performance of BC technology\nwhen the supporting communication platform is wireless; specifically, we focus\non IEEE 802.11ax, for the use case of users' radio resource provisioning. For\nthat purpose, we propose a discrete-time Markov model to capture the expected\ndelay incurred by the BC. Unlike other models in the literature, we consider\nthe effect that timers and forks have on the end-to-end latency.\n"} {"abstract": " Recently, Martinez-Penas and Kschischang (IEEE Trans. Inf. Theory, 2019)\nshowed that lifted linearized Reed-Solomon codes are suitable codes for error\ncontrol in multishot network coding. We show how to construct and decode lifted\ninterleaved linearized Reed-Solomon codes. Compared to the construction by\nMartinez-Penas-Kschischang, interleaving makes it possible to increase the decoding\nregion significantly (especially w.r.t. the number of insertions) and to decrease the\noverhead due to the lifting (i.e., increase the code rate), at the cost of an\nincreased packet size. The proposed decoder is a list decoder that can also be\ninterpreted as a probabilistic unique decoder. Although our best upper bound on\nthe list size is exponential, we present a heuristic argument and simulation\nresults that indicate that the list size is in fact one for most channel\nrealizations up to the maximal decoding radius.\n"} {"abstract": " This paper presents a detailed investigation of FeCr-based quaternary Heusler\nalloys. By using ultrasoft pseudopotentials, the electronic and magnetic properties\nof the compounds are studied within the framework of Density Functional Theory\n(DFT) by using the Quantum Espresso package. The thermodynamic, mechanical, and\ndynamical stability of the compounds is established through the comprehensive\nstudy of different mechanical parameters and phonon dispersion curves. A\nmeticulous study of elastic parameters such as the bulk, Young's, and shear moduli\nis carried out to understand the different mechanical properties. The FeCr-based\ncompounds also containing yttrium are studied to redress the contradictory\nelectronic and magnetic properties observed in the literature. Interesting\nproperties like half-metallicity and spin-gapless semiconducting (SGS) behavior\nare realized in the compounds under study.\n"} {"abstract": " We study several variants of the problem of moving a convex polytope $K$,\nwith $n$ edges, in three dimensions through a flat rectangular (and sometimes\nmore general) window.
Specifically:\n $\bullet$ We study variants where the motion is restricted to translations\nonly, discuss situations where such a motion can be reduced to sliding\n(translation in a fixed direction), and present efficient algorithms for those\nvariants, which run in time close to $O(n^{8/3})$.\n $\bullet$ We consider the case of a `gate' (an unbounded window with two\nparallel infinite edges), and show that $K$ can pass through such a window, by\nany collision-free rigid motion, if and only if it can slide through it.\n $\bullet$ We consider arbitrary compact convex windows, and show that if $K$\ncan pass through such a window $W$ (by any motion) then $K$ can slide through a\ngate of width equal to the diameter of $W$.\n $\bullet$ We study the case of a circular window $W$, and show that, for the\nregular tetrahedron $K$ of edge length $1$, there are two thresholds $1 >\n\delta_1\approx 0.901388 > \delta_2\approx 0.895611$, such that (a) $K$ can\nslide through $W$ if the diameter $d$ of $W$ is $\ge 1$, (b) $K$ cannot slide\nthrough $W$ but can pass through it by a purely translational motion when\n$\delta_1\le d < 1$, (c) $K$ cannot pass through $W$ by a purely translational\nmotion but can do so when rotations are allowed, for $\delta_2 \le d <\n\delta_1$, and (d) $K$ cannot pass through $W$ at all when $d < \delta_2$.\n $\bullet$ Finally, we explore the general setup, where we want to plan a\ngeneral motion (with all six degrees of freedom) for $K$ through a rectangular\nwindow $W$, and present an efficient algorithm for this problem, with running\ntime close to $O(n^4)$.\n"} {"abstract": " The paper is devoted to the study of Gromov-Hausdorff convergence and\nstability of irreversible metric-measure spaces, both in the compact and\nnoncompact cases. While the compact setting is mostly similar to the reversible\ncase developed by J. Lott, K.-T. Sturm and C. Villani, the noncompact case\nprovides various surprising phenomena. Since the reversibility of noncompact\nirreversible spaces might be infinite, it is natural to introduce a suitable\nnondecreasing function that bounds the reversibility of larger and larger\nballs. By this approach, we are able to prove satisfactory\nconvergence/stability results in a suitable -- reversibility-dependent --\nGromov-Hausdorff topology. A wide class of irreversible spaces is provided by\nFinsler manifolds, which serve to construct various model examples by pointing\nout genuine differences between the reversible and irreversible settings. We\nconclude the paper by proving various geometric and functional inequalities (such\nas the Brunn-Minkowski, Bishop-Gromov, log-Sobolev and Lichnerowicz inequalities)\non irreversible structures.\n"} {"abstract": " We introduce an evolutionary game on hypergraphs in which decisions between a\nrisky alternative and a safe one are taken in social groups of different sizes.\nThe model naturally reproduces choice shifts, namely the differences between\nthe preference of individual decision makers and the consensual choice of a\ngroup, that have been empirically observed in choice dilemmas. In particular, a\ndeviation from the Nash equilibrium towards the risky strategy occurs when the\ndynamics takes place on heterogeneous hypergraphs.
These results can explain\nthe emergence of irrational herding and radical behaviours in social groups.\n"} {"abstract": " Photos of faces captured in unconstrained environments, such as large crowds,\nstill constitute challenges for current face recognition approaches, as faces are\noften occluded by objects or people in the foreground. However, few studies\nhave addressed the task of recognizing partial faces. In this paper, we propose\na novel approach to partial face recognition capable of recognizing faces with\ndifferent occluded areas. We achieve this by combining attentional pooling of a\nResNet's intermediate feature maps with a separate aggregation module. We\nfurther adapt common losses to partial faces in order to ensure that the\nattention maps are diverse and handle occluded parts. Our thorough analysis\ndemonstrates that we outperform all baselines under multiple benchmark\nprotocols, including naturally and synthetically occluded partial faces. This\nsuggests that our method successfully focuses on the relevant parts of the\noccluded face.\n"} {"abstract": " Edge computing has emerged as a popular paradigm for running\nlatency-sensitive applications due to its ability to offer lower network\nlatencies to end-users. In this paper, we argue that despite its lower network\nlatency, the resource-constrained nature of the edge can result in higher\nend-to-end latency, especially at higher utilizations, when compared to cloud\ndata centers. We study this edge performance inversion problem through an\nanalytic comparison of edge and cloud latencies and analyze conditions under\nwhich the edge can yield worse performance than the cloud. To verify our\nanalytic results, we conduct a detailed experimental comparison of the edge and\nthe cloud latencies using a realistic application and real cloud workloads.\nBoth our analytical and experimental results show that even at moderate\nutilizations, the edge queuing delays can offset the benefits of lower network\nlatencies, and even result in performance inversion where running in the cloud\nwould provide superior latencies. We finally discuss practical implications of\nour results and provide insights into how application designers and service\nproviders should design edge applications and systems to avoid these pitfalls.\n"} {"abstract": " Adaptive optimization methods have been widely used in deep learning. They\nscale the learning rates adaptively according to the past gradient, which has\nbeen shown to be effective in accelerating the convergence. However, they suffer\nfrom poor generalization performance compared with SGD. Recent studies point out\nthat smoothing the exponential gradient noise leads to a generalization\ndegeneration phenomenon. Inspired by this, we propose AdaL, with a\ntransformation on the original gradient. AdaL accelerates the convergence by\namplifying the gradient in the early stage, as well as dampens the oscillation\nand stabilizes the optimization by shrinking the gradient later. Such\nmodification alleviates the smoothness of gradient noise, which produces better\ngeneralization performance. We have theoretically proved the convergence of AdaL\nand demonstrated its effectiveness on several benchmarks.\n"} {"abstract": " In the year 2011, S. Basha \cite{BS} introduced the notion of proximal\ncontraction in a metric space $X$ and studied the existence and uniqueness of\nbest proximity points for this class of mappings. Also, the author gave an\nalgorithm to achieve this best proximity point.
In this paper, we show that the\nbest proximity point theorem can be proved by the Banach contraction principle.\n"} {"abstract": " In this paper we undertake the task of text-based video moment retrieval from\na corpus of videos. To train the model, text-moment paired datasets were used\nto learn the correct correspondences. In typical training methods, ground-truth\ntext-moment pairs are used as positive pairs, whereas other pairs are regarded\nas negative pairs. However, aside from the ground-truth pairs, some text-moment\npairs should be regarded as positive. In this case, one text annotation can be\npositive for many video moments. Conversely, one video moment can correspond to\nmany text annotations. Thus, there are many-to-many\ncorrespondences between the text annotations and video moments. Based on these\ncorrespondences, we can form potentially relevant pairs, which are not given as\nground truth yet are not negative; effectively incorporating such relevant\npairs into training can improve the retrieval performance. The text query\nshould describe what is happening in a video moment. Hence, different video\nmoments annotated with similar texts describing a similar action are likely to\ncontain that action; thus these pairs can be considered potentially relevant.\nIn this paper, we propose a novel training method\nthat takes advantage of potentially relevant pairs, which are detected based on\nlinguistic analysis of the text annotations. Experiments on two benchmark\ndatasets revealed that our method improves the retrieval performance both\nquantitatively and qualitatively.\n"} {"abstract": " The s, d bosons of the Interacting Boson Model are derived from the\nShell Model space, through the Elliott or the proxy SU(3) symmetry. A novel\ninterpretation of the s, d bosons is therefore introduced: each boson results\nfrom the coupling of two harmonic oscillator quanta. Such quanta obey boson\nstatistics, and as a result their coupled states satisfy the bosonic\ncommutators too, without using any approximation. Thus, the mapping of two\nharmonic oscillator quanta onto an s or a d boson is accurate. Furthermore,\nthrough this interpretation, it emerges naturally that the nuclear states of\neven-even nuclei can be described solely by spherical tensors of degree L=0, 2,\nnamely by the s, d bosons of the Interacting Boson Model. Beginning from the\noccupancy of the cartesian Shell Model states by protons or neutrons, one\narrives at certain U(6) irreps, which inherit some of the fermionic\ncharacteristics from the initial Shell Model space. Practically, this means\nthat the bosons of the Interacting Boson Model span the whole 6-dimensional\nspace of the U(6) symmetry. As a result a microscopic justification of the\ncoherent states emerges from first principles.\n"} {"abstract": " Double JPEG compression detection has received much attention in recent\nyears due to its applicability as a forensic tool for the most widely used JPEG\nfile format. Existing state-of-the-art CNN-based methods either use histograms\nof all the frequencies or rely on heuristics to select histograms of specific\nlow frequencies to classify single and double compressed images. However, even\namidst lower frequencies of double compressed images/patches, histograms of all\nthe frequencies do not have distinguishable features to separate them from\nsingle compressed images.
This paper directly extracts the quantized DCT\ncoefficients from the JPEG images without decompressing them in the pixel\ndomain, obtains all AC frequencies' histograms, uses a module based on $1\times\n1$ depth-wise convolutions to learn the inherent relation between each\nhistogram and the corresponding q-factor, and utilizes a tailor-made BiLSTM network\nfor selectively encoding these feature vector sequences. The proposed system\noutperforms several baseline methods on a relatively large and diverse publicly\navailable dataset of single and double compressed patches. Another essential\naspect of any single vs. double JPEG compression detection system is handling\nthe scenario where test patches are compressed with entirely different\nquantization matrices (Q-matrices) than those used while training; different\ncamera manufacturers and image processing software generally utilize their\ncustomized quantization matrices. A set of extensive experiments shows that the\nproposed system trained on a single dataset generalizes well to other datasets\ncompressed with completely unseen quantization matrices and outperforms the\nstate-of-the-art methods in both seen and unseen quantization matrices\nscenarios.\n"} {"abstract": " We provide non-isomorphic finite 2-groups which have isomorphic group\nalgebras over any field of characteristic 2, thus settling the Modular\nIsomorphism Problem.\n"} {"abstract": " The performance of reinforcement learning depends upon designing an\nappropriate action space, where the effect of each action is measurable, yet\ngranular enough to permit flexible behavior. So far, this process has involved\nnon-trivial user choices in terms of the available actions and their execution\nfrequency. We propose a novel framework for reinforcement learning that\neffectively lifts such constraints. Within our framework, agents learn\neffective behavior over a routine space: a new, higher-level action space,\nwhere each routine represents a set of 'equivalent' sequences of granular\nactions with arbitrary length. Our routine space is learned end-to-end to\nfacilitate the accomplishment of underlying off-policy reinforcement learning\nobjectives. We apply our framework to two state-of-the-art off-policy\nalgorithms and show that the resulting agents obtain relevant performance\nimprovements while requiring fewer interactions with the environment per\nepisode, improving computational efficiency.\n"} {"abstract": " Connectivity is a fundamental structural feature of a network that determines\nthe outcome of any dynamics that happens on top of it. However, an analytical\napproach to obtain connection probabilities between nodes associated with paths\nof different lengths is still missing. Here, we derive exact expressions for\nrandom-walk connectivity probabilities across any range of numbers of steps in\na generic temporal, directed and weighted network. This allows characterizing\nexplicit connectivity realized by causal paths as well as implicit connectivity\nrelated to motifs of three nodes and two links, called here pitchforks. We\ndirectly link such probabilities to the processes of tagging and sampling any\nquantity exchanged across the network, hence providing a natural framework to\nassess transport dynamics. Finally, we apply our theoretical framework to study\nocean transport features in the Mediterranean Sea.
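As a minimal illustration of step-resolved random-walk connectivity (restricted to the static, memoryless special case; the exact expressions referred to above cover generic temporal, directed, weighted networks), the k-step connection probabilities on a static weighted network are simply entries of powers of the row-normalized transition matrix:

```python
import numpy as np

# Weighted, directed adjacency matrix of a toy 3-node network (an assumption
# for illustration only; the framework above also covers temporal networks).
W = np.array([[0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

P = W / W.sum(axis=1, keepdims=True)  # row-normalized transition matrix

# Entry (i, j) of P^k is the probability that a walker starting at node i
# is found at node j after exactly k steps.
for k in (1, 2, 5):
    Pk = np.linalg.matrix_power(P, k)
    print(f"k={k}, P(0 -> 2) = {Pk[0, 2]:.3f}")
```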
We find that relevant\ntransport structures, such as fluid barriers and corridors, can generate\ncontrasting and counter-intuitive connectivity patterns, bringing novel insights\ninto how ocean currents drive seascape connectivity.\n"} {"abstract": " We consider anomalous diffusion for molecular communication with a passive\nreceiver. We first consider the probability density function of molecules'\nlocation at a given time in a space of arbitrary dimension. The expected number\nof observed molecules inside a receptor space of the receiver at a certain time\nis derived, taking into account the life expectancy of the molecules. In\naddition, an implicit solution for the time that maximizes the expected number\nof observed molecules is obtained in terms of Fox's H-function. Closed-form\nexpressions for the bit error rate of a single-bit interval transmission and a\nmulti-bit interval transmission are derived. It is shown that lifetime-limited\nmolecules can reduce the inter-symbol interference while also enhancing the\nreliability of MC systems at a suitable observation time.\n"} {"abstract": " Boundary value problems (BVPs) play a central role in the mathematical\nanalysis of constrained physical systems subjected to external forces.\nConsequently, BVPs frequently emerge in nearly every engineering discipline and\nspan problem domains including fluid mechanics, electromagnetics, quantum\nmechanics, and elasticity. The fundamental solution, or Green's function, is a\nleading method for solving linear BVPs that enables facile computation of new\nsolutions to systems under any external forcing. However, fundamental Green's\nfunction solutions for nonlinear BVPs are not feasible since linear\nsuperposition no longer holds. In this work, we propose a flexible deep\nlearning approach to solve nonlinear BVPs using a dual-autoencoder\narchitecture. The autoencoders discover an invertible coordinate transform that\nlinearizes the nonlinear BVP and identifies both a linear operator $L$ and\nGreen's function $G$ which can be used to solve new nonlinear BVPs. We find\nthat the method succeeds on a variety of nonlinear systems including nonlinear\nHelmholtz and Sturm--Liouville problems, nonlinear elasticity, and a 2D\nnonlinear Poisson equation. The method merges the strengths of the universal\napproximation capabilities of deep learning with the physics knowledge of\nGreen's functions to yield a flexible tool for identifying fundamental\nsolutions to a variety of nonlinear systems.\n"} {"abstract": " We present mesoscale numerical simulations of Rayleigh-B\'enard (RB)\nconvection in a two-dimensional model emulsion. The systems under study consist\nof finite-size droplets, whose concentration Phi_0 is\nsystematically varied from small (Newtonian emulsions) to large values\n(non-Newtonian emulsions). We focus on the characterisation of the heat\ntransfer properties close to the transition from conductive to convective\nstates, where it is known that a homogeneous Newtonian system exhibits a steady\nflow and a time-independent heat flux. In marked contrast, emulsions exhibit a\nnon-steady dynamics with fluctuations in the heat flux. In this paper, we aim\nat the characterisation of such non-steady dynamics via detailed studies on the\ntime-averaged heat flux and its fluctuations. To understand the time-averaged\nheat flux, we propose a side-by-side comparison between the emulsion system and\na single-phase (SP) system, whose viscosity is constructed from the shear\nrheology of the emulsion.
We show that such local closure works well only when\na suitable degree of coarse-graining (at the droplet scale) is introduced in\nthe local viscosity. To delve deeper into the fluctuations in the heat flux, we\npropose a side-by-side comparison between a Newtonian emulsion and a\nnon-Newtonian emulsion, at fixed time-averaged heat flux. This comparison\nelucidates that finite-size droplets and the non-Newtonian rheology cooperate\nto trigger enhanced heat-flux fluctuations at the droplet scales. These\nenhanced fluctuations are rooted in the emergence of space correlations among\ndistant droplets, which we highlight via direct measurements of the droplets'\ndisplacement and the characterisation of the associated correlation function.\nThe observed findings offer insights on heat transfer properties for confined\nsystems possessing finite-size constituents.\n"} {"abstract": " We study the performance power of software combining in designing persistent\nalgorithms and data structures. We present Bcomb, a new, highly efficient blocking\ncombining protocol, and build upon it to obtain PBcomb, a\npersistent version of it that performs a small number of persistence\ninstructions and exhibits low synchronization cost. We build fundamental\nrecoverable data structures, such as stacks and queues, based on PBcomb, as well\nas on PWFcomb, a wait-free universal construction we present. Our experiments\nshow that PBcomb and PWFcomb outperform by far state-of-the-art recoverable\nuniversal constructions and transactional memory systems, many of which ensure\nweaker consistency properties than our algorithms. We build recoverable queues\nand stacks, based on PBcomb and PWFcomb, and present experiments to show that\nthey have much better performance than previous recoverable implementations of\nstacks and queues. We build the first recoverable implementation of a\nconcurrent heap and present experiments to show that it has good performance\nwhen the size of the heap is not very large.\n"} {"abstract": " In this note we characterize 1+n doubly twisted spacetimes in terms of\n`doubly torqued' vector fields. They extend Bang-Yen Chen's characterization of\ntwisted and generalized Robertson-Walker spacetimes with torqued and\nconcircular vector fields. The result is a simple classification of 1+n\ndoubly-twisted, doubly-warped, twisted and generalized Robertson-Walker\nspacetimes.\n"} {"abstract": " We consider the de Rham complex over scales of weighted isotropic and\nanisotropic H\"older spaces with prescribed asymptotic behaviour at\ninfinity. Starting from theorems on the solvability of the system of operator\nequations generated by the de Rham differential $d$ and the operator $d^*$\nformally adjoint to it, a description of the cohomology groups of the de Rham\ncomplex over these scales was obtained. It was also proved that in the\nisotropic case the cohomology space is finite-dimensional; in the\nanisotropic case, the general form of an element of the cohomology space is\npresented.\n"} {"abstract": " While annotated images for change detection using satellite imagery are\nscarce and costly to obtain, there is a wealth of unlabeled images being\ngenerated every day. In order to leverage these data to learn an image\nrepresentation more adequate for change detection, we explore methods that\nexploit the temporal consistency of Sentinel-2 time series to obtain a usable\nself-supervised learning signal.
For this, we build and make publicly available\n(https://zenodo.org/record/4280482) the Sentinel-2 Multitemporal Cities Pairs\n(S2MTCP) dataset, containing multitemporal image pairs from 1520 urban areas\nworldwide. We test the results of multiple self-supervised learning methods for\npre-training models for change detection and apply them on a public change\ndetection dataset made of Sentinel-2 image pairs (OSCD).\n"} {"abstract": " We consider the Schr\"odinger equation on Riemannian symmetric spaces of\nnoncompact type. Previous studies in rank one included sharp-in-time pointwise\nestimates for the Schr\"odinger kernel, dispersive properties, Strichartz\ninequalities for a large family of admissible pairs, and global well-posedness\nand scattering, both for small initial data. In this paper we establish\nanalogous results in the higher rank case. The kernel estimates, which are our\nmain result, are obtained by combining a subordination formula, an improved\nHadamard parametrix for the wave equation, and a barycentric decomposition\ninitially developed for the wave equation, which allows us to overcome a\nwell-known problem, namely the fact that the Plancherel density is not always a\ndifferential symbol.\n"} {"abstract": " Deep learning applications are progressing rapidly in seismic processing\nand interpretation tasks. However, the majority of approaches subsample data\nvolumes and restrict model sizes to minimise computational requirements.\nSubsampling the data risks losing vital spatio-temporal information which could\naid training, whilst restricting model sizes can impact model performance or, in\nsome extreme cases, render more complicated tasks such as segmentation\nimpossible. This paper illustrates how to tackle the two main issues of\ntraining large neural networks: memory limitations and impracticably large\ntraining times. Typically, training data is preloaded into memory prior to\ntraining, a particular challenge for seismic applications where data is\ntypically four times larger than that used for standard image processing tasks\n(float32 vs. uint8). Using a microseismic use case, we illustrate how over\n750GB of data can be used to train a model by using a data generator approach\nwhich only stores in memory the data required for that training batch.\nFurthermore, efficient training over large models is illustrated through the\ntraining of a 7-layer UNet with input data dimensions of 4096x4096. Through a\nbatch-splitting distributed training approach, training times are reduced by a\nfactor of four. The combination of data generators and distributed training\nremoves any necessity of data subsampling or restriction of neural network\nsizes, offering the opportunity to utilise larger networks,\nhigher-resolution input data, or to move from 2D to 3D problem spaces.\n"} {"abstract": " New ultra-hard rhombohedral B2N2 and BC2N - or hexagonal B6N6 and B3C6N3 -\nare derived from 3R graphite based on a crystal chemistry rationale schematizing\na mechanism for the 2D => 3D transformation. Full unconstrained geometry\noptimizations leading to ground state energy structures and energy-derived\nquantities such as energy-volume equations of state (EOS) were based on computations\nwithin density functional theory (DFT) with the generalized gradient\napproximation (GGA) for exchange-correlation (XC) effects.
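For readers unfamiliar with energy-volume equations of state, here is a minimal sketch of fitting total energies to the third-order Birch-Murnaghan form on synthetic data; the abstract does not state which EOS form was fitted, so this is only a representative, commonly used choice:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume equation of state."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

# Synthetic E(V) points standing in for DFT total energies (illustration only).
V = np.linspace(8.0, 12.0, 9)                          # volumes [A^3/atom]
E = birch_murnaghan(V, -7.5, 10.0, 0.5, 4.0)           # energies [eV/atom]
E += np.random.default_rng(0).normal(0, 1e-4, V.size)  # small numerical noise

popt, _ = curve_fit(birch_murnaghan, V, E, p0=(E.min(), V[np.argmin(E)], 1.0, 4.0))
E0, V0, B0, Bp = popt
print(f"V0 = {V0:.2f} A^3, B0 = {B0 * 160.2177:.0f} GPa")  # 1 eV/A^3 = 160.2177 GPa
```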
The new binary and\nternary phases are characterized by tetrahedral stacking akin to diamond,\nvisualized with charge density representations illustrating ionic\ncharacters. Atom-averaged total energies are similar between cubic BN and\nrh-B2N2 on the one hand, and show a larger stabilization of rhombohedral BC2N\nversus the cubic and orthorhombic forms (assessed in the literature from favored\nC-C and B-N bonding), on the other hand. The electronic band structures are\ncharacteristic of insulators with Egap ~ 5 eV. Both phases are characterized by\nlarge bulk and shear moduli and very high hardness values, i.e.,\nHV(rh-B2N2) = 74 GPa and HV(rh-BC2N) = 87 GPa.\n"} {"abstract": " We consider a multi-view learning problem known as group independent\ncomponent analysis (group ICA), where the goal is to recover shared independent\nsources from many views. The statistical modeling of this problem requires\ntaking noise into account. When the model includes additive noise on the\nobservations, the likelihood is intractable. By contrast, we propose Adaptive\nmultiView ICA (AVICA), a noisy ICA model where each view is a linear mixture of\nshared independent sources with additive noise on the sources. In this setting,\nthe likelihood has a tractable expression, which enables either direct\noptimization of the log-likelihood using a quasi-Newton method, or generalized\nEM. Importantly, we consider that the noise levels are also parameters that are\nlearned from the data. This enables source estimation with a closed-form\nMinimum Mean Squared Error (MMSE) estimator which weights each view according\nto its relative noise level. On synthetic data, AVICA yields better source\nestimates than other group ICA methods thanks to its explicit MMSE estimator.\nOn real magnetoencephalography (MEG) data, we provide evidence that the\ndecomposition is less sensitive to sampling noise and that the noise variance\nestimates are biologically plausible. Lastly, on functional magnetic resonance\nimaging (fMRI) data, AVICA exhibits the best performance in transferring\ninformation across views.\n"} {"abstract": " The temperature in most parts of a protoplanetary disk is determined by\nirradiation from the central star. Numerical experiments of Watanabe \& Lin\n(2008) suggested that such disks, also called `passive disks', suffer from a\nthermal instability. Here, we use analytical and numerical tools to elucidate\nthe nature of this instability. We find that it is related to the flaring of\nthe optical surface, the layer at which starlight is intercepted by the disk.\nWhenever a disk annulus is perturbed thermally and acquires a larger scale\nheight, disk flaring becomes steeper in the inner part, and flatter in the\nouter part. Starlight now shines more overhead for the inner part and so can\npenetrate into deeper layers; conversely, it is absorbed more shallowly in the\nouter part. These geometric changes allow the annulus to intercept more\nstarlight, and the perturbation grows. We call this the irradiation\ninstability. It requires only ingredients that are known to exist in realistic\ndisks, and operates best in parts that are both optically thick and\ngeometrically thin (inside 30AU, but can extend to further reaches when, e.g.,\ndust settling is considered). An unstable disk develops travelling thermal\nwaves that reach order-unity in amplitude. In thermal radiation, such a disk\nshould appear as a series of bright rings interleaved with dark shadowed gaps,\nwhile in scattered light it resembles a moving staircase.
Depending on the gas\nand dust responses, this instability could lead to a wide range of\nconsequences, such as ALMA rings and gaps, dust traps, vertical\ncirculation, vortices and turbulence.\n"} {"abstract": " Much recent research is devoted to exploring tradeoffs between computational\naccuracy and energy efficiency at different levels of the system stack.\nApproximation at the floating point unit (FPU) allows saving energy by simply\nreducing the number of computed floating point bits in return for accuracy\nloss. However, finding the most energy-efficient approximation for various\napplications with minimal effort remains the main challenge. To address this issue,\nwe propose NEAT: a Pin tool that helps users automatically explore the\naccuracy-energy tradeoff space induced by various floating point\nimplementations. NEAT helps programmers explore the effects of simultaneously\nusing multiple floating point implementations to achieve the lowest energy\nconsumption for an accuracy constraint or vice versa. NEAT accepts one or more\nuser-defined floating point implementations and programmable placement rules\nfor where/when to apply them. NEAT then automatically replaces floating point\noperations with different implementations based on the user-specified rules\nduring runtime and explores the resulting tradeoff space to find the best\nuse of approximate floating point implementations for the precision tuning\nthroughout the program. We evaluate NEAT by enforcing combinations of 24/53\ndifferent floating point implementations with three sets of placement rules on\na wide range of benchmarks. We find that heuristic precision tuning at the\nfunction level provides up to 22% and 48% energy savings at 1% and 10% accuracy\nloss compared to applying a single implementation for the whole application.\nAlso, NEAT is applicable to neural networks where it finds the optimal\nprecision level for each layer considering an accuracy target for the model.\n"} {"abstract": " This paper deals with the time differential dual-phase-lag heat transfer\nmodels aiming, at first, to identify the eventual restrictions that make them\nthermodynamically consistent. At first glance it can be observed that the\ncapability of a time differential dual-phase-lag model of heat conduction to\ndescribe real phenomena depends on the properties of the differential operators\ninvolved in the related constitutive equation. In fact, the constitutive\nequation is viewed as an ordinary differential equation in terms of the heat\nflux components (or in terms of the temperature gradient) and it turns out that,\nfor approximation orders greater than or equal to five, the corresponding\ncharacteristic equation has at least one complex root with a positive real\npart. That leads to a heat flux component (or temperature gradient) that grows\nto infinity as time tends to infinity, and so instabilities occur.\nInstead, when the approximation orders are lower than or equal to\nfour, this is not the case, and there remains the need to study the compatibility\nwith the Second Law of Thermodynamics.
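The order-dependent instability described above can be probed numerically if one assumes, as the Taylor-expansion construction suggests, that the order-$n$ characteristic polynomial is the degree-$n$ truncation of the exponential series $\sum_{k=0}^{n} (\tau\lambda)^k/k!$; this reading is our assumption for illustration, not a formula quoted from the paper:

```python
import numpy as np
from math import factorial

# Roots of the degree-n truncation of the exponential series (tau scaled out):
# sum_{k=0}^{n} x^k / k!. A positive max real part signals a growing mode.
def max_real_part(n):
    coeffs = [1.0 / factorial(k) for k in range(n, -1, -1)]  # descending powers
    return np.roots(coeffs).real.max()

for n in range(1, 8):
    print(f"n={n}: max Re(root) = {max_real_part(n):+.4f}")
# For n <= 4 all roots have negative real parts; from n = 5 onward a complex
# pair with positive real part appears, matching the instability stated above.
```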
To this aim, the related constitutive\nequation is reformulated within the framework of fading memory theory; the heat\nflux vector is thus written in terms of the history of the temperature\ngradient, and on this basis the compatibility of the model with the\nthermodynamic principles is analyzed.\n"} {"abstract": " Black hole low-mass X-ray binaries in their hard spectral state are found to\ndisplay two different correlations between the radio emission from the compact\njets and the X-ray emission from the inner accretion flow. Here, we present a\nlarge data set of quasi-simultaneous radio and X-ray observations of the\nrecently discovered accreting black hole MAXI J1348-630 during its 2019/2020\noutburst. Our results span almost six orders of magnitude in X-ray luminosity,\nallowing us to probe the accretion-ejection coupling from the brightest to the\nfaintest phases of the outburst. We find that MAXI J1348-630 belongs to the\ngrowing population of outliers at the highest observed luminosities.\nInterestingly, MAXI J1348-630 deviates from the outlier track at $L_{\rm X}\n\lesssim 7 \times 10^{35} (D / 2.2 \ {\rm kpc})^2$ erg s$^{-1}$ and ultimately\nrejoins the standard track at $L_{\rm X} \simeq 10^{33} (D / 2.2 \ {\rm\nkpc})^2$ erg s$^{-1}$, displaying a hybrid radio/X-ray correlation, observed\nonly in a handful of sources. However, for MAXI J1348-630 these transitions\nhappen at luminosities much lower (at least an order of magnitude) than those\nobserved for similar sources. We discuss the behaviour of MAXI J1348-630 in\nlight of the currently proposed scenarios and we highlight the importance of\nfuture deep monitoring of hybrid-correlation sources, especially close to the\ntransitions and in the low-luminosity regime.\n"} {"abstract": " We present a geometric construction of a lattice that emulates the action of\na gauge field on a fermion. The construction consists of a square lattice made\nof polymeric sites, where all clustered atoms are identical and represented by\npotential wells or resonators supporting one bound state. The emulation covers\nboth abelian and non-abelian gauge fields. In the former case, Hofstadter's\nbutterfly is reproduced by means of a chain made of rotating dimers, subject to\nperiodic boundary conditions parallel to the chain. A rigorous map between this\nmodel and Harper's Hamiltonian is derived. In the non-abelian case, band mixing\nand wave confinement are obtained by interband coupling using SU(2) as an\ninternal group, i.e., the effects are due to the non-commutativity of field\ncomponents. A colored model with SU(3) made of trimers is also studied, thereby\nfinding the appearance of flat bands in special configurations. This work\nconstitutes the first all-geometric emulation of the Peierls substitution, and\nis valid for many types of waves.\n"} {"abstract": " Distributed cloud networking builds on network functions virtualization (NFV)\nand software defined networking (SDN) to enable the deployment of network\nservices in the form of elastic virtual network functions (VNFs) instantiated\nover general purpose servers at distributed cloud locations. We address the\ndesign of fast approximation algorithms for the NFV service distribution\nproblem (NSDP), whose goal is to determine the placement of VNFs, the routing\nof service flows, and the associated allocation of cloud and network resources\nthat satisfy client demands with minimum cost.
We show that in the case of\nload-proportional costs, the resulting fractional NSDP can be formulated as a\nmulti-commodity-chain flow problem on a cloud-augmented graph, and design a\nqueue-length-based algorithm, named QNSD, that provides an O(\epsilon)\napproximation in time O(1/\epsilon). We then address the case in which resource\ncosts are a function of the integer number of allocated resources and design a\nvariation of QNSD that effectively pushes for flow consolidation into a limited\nnumber of active resources to minimize overall cloud network cost.\n"} {"abstract": " We establish a Galois connection between sub-monads of an augmented monad and\nsub-functors of the forgetful functor from its Eilenberg-Moore category. This\nconnection is given in terms of invariants and stabilizers defined through\nuniversal properties. An explicit procedure for the computation of invariants\nis given assuming the existence of suitable right adjoints. Additionally, in\nthe context of monoidal closed categories, a characterization of stabilizers is\nmade in terms of Tannakian reconstruction.\n"} {"abstract": " This paper is concerned with the inverse elastic scattering problem for a\nrandom potential in three dimensions. Interpreted as a distribution, the\npotential is assumed to be a microlocally isotropic Gaussian random field whose\ncovariance operator is a classical pseudo-differential operator. Given the\npotential, the direct scattering problem is shown to be well-posed in the\ndistribution sense by studying the equivalent Lippmann--Schwinger integral\nequation. For the inverse scattering problem, we demonstrate that the\nmicrolocal strength of the random potential can be uniquely determined with\nprobability one by a single realization of the high frequency limit of the\naveraged compressional or shear backscattered far-field pattern of the\nscattered wave. The analysis employs the integral operator theory, the Born\napproximation in the high frequency regime, the microlocal analysis for the\nFourier integral operators, and the ergodicity of the wave field.\n"} {"abstract": " Stateflow models are complex software models, often used as part of\nsafety-critical software solutions designed with Matlab Simulink. They\nincorporate design principles that are typically very hard to verify formally.\nIn particular, the standard exhaustive formal verification techniques are\nunlikely to scale well for the complex designs that are developed in industry.\nFurthermore, the Stateflow language lacks a formal semantics, which\nadditionally hinders the formal analysis.\n To address these challenges, we lay here the foundations of a scalable\ntechnique for provably correct formal analysis of Stateflow models, with\nrespect to invariant properties, based on bounded model checking (BMC) over\nsymbolic executions. The crux of our technique is: i) a representation of the\nstate space of Stateflow models as a symbolic transition system (STS) over the\nsymbolic configurations of the model, as the basis for BMC, and ii) application\nof incremental BMC, to generate verification results after each unrolling of\nthe next-state relation of the transition system. To this end, we develop a\nsymbolic structural operational semantics (SSOS) for Stateflow, starting from\nan existing structural operational semantics (SOS), and show the preservation\nof invariant properties between the two. Next, we define bounded invariant\nchecking for STS over symbolic configurations as a satisfiability problem.
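To make the satisfiability encoding concrete, here is a minimal bounded-invariant-checking sketch over a toy transition system using the Z3 SMT solver; it stands in for, and is far simpler than, the Stateflow STS encoding described above (the counter system is purely illustrative):

```python
# Minimal BMC sketch with Z3 (pip install z3-solver): unroll the transition
# relation K times and ask whether Init AND T^K AND (NOT Inv somewhere) is SAT.
from z3 import Ints, Solver, Or, Not, sat

K = 5  # unrolling bound
x = Ints(" ".join(f"x{i}" for i in range(K + 1)))

init = x[0] == 0                                    # initial-state predicate
trans = [Or(x[i + 1] == x[i] + 2, x[i + 1] == x[i])  # next-state relation
         for i in range(K)]
inv = lambda v: v % 2 == 0                          # invariant: x stays even

s = Solver()
s.add(init, *trans)
s.add(Or(*[Not(inv(x[i])) for i in range(K + 1)]))  # violation at some step <= K

print("counterexample found" if s.check() == sat
      else f"invariant holds up to bound {K}")
```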
We\ndevelop an automated procedure for generating the initial and next-state\npredicates of the STS, and propose an encoding scheme of the bounded invariant\nchecking problem as a set of constraints, ready for automated analysis with\nstandard, off-the-shelf satisfiability solvers. Finally, we present preliminary\nperformance results by applying our tool on an illustrative example.\n"} {"abstract": " In this paper, we address the space-time video super-resolution, which aims\nat generating a high-resolution (HR) slow-motion video from a low-resolution\n(LR) and low frame rate (LFR) video sequence. A na\\"ive method is to decompose\nit into two sub-tasks: video frame interpolation (VFI) and video\nsuper-resolution (VSR). Nevertheless, temporal interpolation and spatial\nupscaling are intra-related in this problem. Two-stage approaches cannot fully\nmake use of this natural property. Besides, state-of-the-art VFI or VSR deep\nnetworks usually have a large frame reconstruction module in order to obtain\nhigh-quality photo-realistic video frames, so the two-stage approaches have\nlarge models and are thus relatively time-consuming. To overcome the\nissues, we present a one-stage space-time video super-resolution framework,\nwhich can directly reconstruct an HR slow-motion video sequence from an input\nLR and LFR video. Instead of reconstructing missing LR intermediate frames as\nVFI models do, we temporally interpolate LR frame features of the missing LR\nframes capturing local temporal contexts by a feature temporal interpolation\nmodule. Extensive experiments on widely used benchmarks demonstrate that the\nproposed framework not only achieves better qualitative and quantitative\nperformance on both clean and noisy LR frames but also is several times faster\nthan recent state-of-the-art two-stage networks. The source code is released at\nhttps://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020 .\n"} {"abstract": " Understanding the biomechanics of the heart in health and disease plays an\nimportant role in the diagnosis and treatment of heart failure. The use of\ncomputational biomechanical models for therapy assessment is paving the way for\npersonalized treatment, and relies on accurate constitutive equations mapping\nstrain to stress. Current state-of-the-art constitutive equations account for\nthe nonlinear anisotropic stress-strain response of cardiac muscle using\nhyperelasticity theory. While providing a solid foundation for understanding\nthe biomechanics of heart tissue, most current laws neglect viscoelastic\nphenomena observed experimentally. Utilizing experimental data from human\nmyocardium and knowledge of the hierarchical structure of heart muscle, we\npresent a fractional nonlinear anisotropic viscoelastic constitutive model. The\nmodel is shown to replicate biaxial stretch, triaxial cyclic shear and triaxial\nstress relaxation experiments (mean error ~7.65%), showing improvements\ncompared to its hyperelastic (mean error ~25%) counterparts. Model sensitivity,\nfidelity and parameter uniqueness are demonstrated. The model is also compared\nto rate-dependent biaxial stretch as well as different modes of biaxial\nstretch, illustrating extensibility of the model to a range of loading\nphenomena.\n"} {"abstract": " Robotic motion generation methods using machine learning have been studied in\nrecent years. Bilateral control-based imitation learning can imitate human\nmotions using force information.
By means of this method, variable speed motion\ngeneration that considers physical phenomena such as the inertial force and\nfriction can be achieved. However, the previous study only focused on a simple\nreciprocating motion. To learn the complex relationship between the force and\nspeed more accurately, it is necessary to learn multiple actions using many\njoints. In this paper, we propose a variable speed motion generation method for\nmultiple motions. We considered four types of neural network models for the\nmotion generation and determined the best model for multiple motions at\nvariable speeds. Subsequently, we used the best model to evaluate the\nreproducibility of the task completion time for the input completion time\ncommand. The results revealed that the proposed method could change the task\ncompletion time according to the specified completion time command in multiple\nmotions.\n"} {"abstract": " Higher-order singular value decomposition (HOSVD) is one of the most\nefficient tensor decomposition techniques. It has the salient ability to\nrepresent high-dimensional data and extract features. In more recent years, the\nquaternion has proven to be a very suitable tool for color pixel representation\nas it can well preserve cross-channel correlation of color channels. Motivated\nby the advantages of the HOSVD and the quaternion tool, in this paper, we\ngeneralize the HOSVD to the quaternion domain and define quaternion-based HOSVD\n(QHOSVD). Due to the non-commutativity of quaternion multiplication, QHOSVD is\nnot a trivial extension of the HOSVD. They have similar but different\ncalculation procedures. The defined QHOSVD can be widely used in various visual\ndata processing with color pixels. In this paper, we present two applications\nof the defined QHOSVD in color image processing: multi-focus color image fusion\nand color image denoising. The experimental results on the two applications\nrespectively demonstrate the competitive performance of the proposed methods\nover some existing ones.\n"} {"abstract": " Weak decays in superheavy nuclei with proton numbers Z = 118 - 120 and\nneutron numbers N = 175 - 184 are studied within a microscopic formalism based\non deformed self-consistent Skyrme Hartree-Fock mean-field calculations with\npairing correlations. The half-lives of beta+ decay and electron capture are\ncompared with alpha-decay half-lives obtained from phenomenological formulas.\nThe sensitivity of the half-lives to the unknown Q-energies is studied by\ncomparing the results obtained from different approaches for the masses. It is\nshown that alpha-decay is always dominant in this mass region. The competition\nbetween alpha and beta+/EC decay modes is studied in seven alpha-decay chains\nstarting at different isotopes of Z=118, 119, and 120.\n"} {"abstract": " With a better understanding of the loss surfaces for multilayer networks, we\ncan build more robust and accurate training procedures. Recently it was\ndiscovered that independently trained SGD solutions can be connected along\none-dimensional paths of near-constant training loss. In this paper, we show\nthat there are mode-connecting simplicial complexes that form multi-dimensional\nmanifolds of low loss, connecting many independently trained models. Inspired\nby this discovery, we show how to efficiently build simplicial complexes for\nfast ensembling, outperforming independently trained deep ensembles in\naccuracy, calibration, and robustness to dataset shift.
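A minimal sketch of the ensembling idea behind such mode-connecting simplexes: sample parameter vectors from the convex hull of a few trained solutions. The weight vectors and Dirichlet sampling below are illustrative stand-ins for the paper's training procedure.

    # Sketch of simplex ensembling: draw parameter vectors from the convex
    # hull of several trained solutions and use them as ensemble members.
    # The "trained" weight vectors here are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    vertices = [rng.normal(size=100) for _ in range(3)]   # 3 trained weight vectors

    def sample_from_simplex(vertices):
        lam = rng.dirichlet(np.ones(len(vertices)))       # convex weights
        return sum(l * v for l, v in zip(lam, vertices))  # point inside the simplex

    ensemble = [sample_from_simplex(vertices) for _ in range(10)]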
Notably, our approach\nonly requires a few training epochs to discover a low-loss simplex, starting\nfrom a pre-trained solution. Code is available at\nhttps://github.com/g-benton/loss-surface-simplexes.\n"} {"abstract": " Learning robust speaker embeddings is a crucial step in speaker diarization.\nDeep neural networks can accurately capture speaker discriminative\ncharacteristics and popular deep embeddings such as x-vectors are nowadays a\nfundamental component of modern diarization systems. Recently, some\nimprovements over the standard TDNN architecture used for x-vectors have been\nproposed. The ECAPA-TDNN model, for instance, has shown impressive performance\nin the speaker verification domain, thanks to a carefully designed neural\nmodel.\n In this work, we extend, for the first time, the use of the ECAPA-TDNN model\nto speaker diarization. Moreover, we improve its robustness with a powerful\naugmentation scheme that concatenates several contaminated versions of the same\nsignal within the same training batch. The ECAPA-TDNN model turned out to\nprovide robust speaker embeddings under both close-talking and distant-talking\nconditions. Our results on the popular AMI meeting corpus show that our system\nsignificantly outperforms recently proposed approaches.\n"} {"abstract": " We propose a method for enantio-detection of chiral molecules based on a\ncavity-molecule system, where the left- and right-handed molecules are coupled\nwith a cavity and two classical light fields to form cyclic three-level models.\nVia the cavity-assisted three-photon processes based on the cyclic three-level\nmodel, photons are generated continuously in the cavity even in the absence of\nexternal driving to the cavity. However, the photonic fields generated from the\nthree-photon processes of left- and right-handed molecules differ by a phase\ndifference of {\\pi}, according to the inherent properties of electric-dipole\ntransition moments of enantiomers. This provides a potential way to detect the\nenantiomeric excess of a chiral mixture by monitoring the output field of the\ncavity.\n"} {"abstract": " This article develops the multiple-input multiple-output (MIMO) technology\nfor weather radar sensing. MIMO offers ample advantages, highlighted here, that\ncan improve the spatial resolution of the observations and\nalso the accuracy of the radar variables. These concepts have been introduced\nhere pertaining to weather radar observations with supporting simulations\ndemonstrating improvements to existing phased array technology. MIMO is already\nbeing used extensively for hard target detection and tracking, as well as in the\nautomotive radar industry, and it offers similar improvements for weather radar\nobservations. Some of the benefits are discussed here with a phased array\nplatform in mind which offers quadrant outputs.\n"} {"abstract": " We demonstrate an efficient procedure for terahertz-wave radiation generation\nby applying a nonlinear wavelength conversion approach using a mid-infrared\npump source. We used a 3.3 {\\mu}m dual-chirped optical parametric amplification\nsource and a 1.5 {\\mu}m fiber laser for a comparison of energy conversion\nefficiencies.
Nonlinear inorganic and organic crystals were used, and for the\nperiodically-poled lithium niobate crystal, the efficiencies were 1.3*10^-4 and\n5.6*10^-12 for the 3.3 {\\mu}m source and the 1.5 {\\mu}m source, respectively.\nWe confirmed that nonlinear crystals can be pumped at the terawatt/cm^2 level\nby using a mid-infrared source, which reduces several undesirable\nnonlinear optical effects.\n"} {"abstract": " Measurement-based quantum computing (MBQC) promises natural compatibility\nwith quantum error correcting codes at the cost of a polynomial increase in\nphysical qubits. MBQC proposals have largely focused on photonic systems, where\n2-qubit gates are difficult. Semiconductor spin qubits in quantum dots, on the\nother hand, offer fast 2-qubit gates via the exchange interaction. In\nexchange-based quantum computing, as with other solid-state qubits, leakage to\nhigher states is a serious problem that must be mitigated. Here, two hybrid\nmeasurement-exchange schemes are proposed which quantify the benefits of MBQC\nfor quantum dot-based quantum computing. Measurement of double quantum dot\nencoded qubits in the singlet-triplet basis, along with inter- and intra-qubit\nexchange interaction, are used to perform one and two qubit operations. Both\nschemes suppress individual qubit spin-state leakage errors, offer fast gate\ntimes, and require only controllable exchange couplings, up to known phase and\nPauli corrections.\n"} {"abstract": " Different from static images, videos contain additional temporal and spatial\ninformation for better object detection. However, it is costly to obtain a\nlarge number of videos with bounding box annotations that are required for\nsupervised deep learning. Although humans can easily learn to recognize new\nobjects by watching only a few video clips, deep learning usually suffers from\noverfitting. This leads to an important question: how to effectively learn a\nvideo object detector from only a few labeled video clips? In this paper, we\nstudy the new problem of few-shot learning for video object detection. We first\ndefine the few-shot setting and create a new benchmark dataset for few-shot\nvideo object detection derived from the widely used ImageNet VID dataset. We\nemploy a transfer-learning framework to effectively train the video object\ndetector on a large number of base-class objects and a few video clips of\nnovel-class objects. By analyzing the results of two methods under this\nframework (Joint and Freeze) on our designed weak and strong base datasets, we\nreveal insufficiency and overfitting problems. A simple but effective method,\ncalled Thaw, is naturally developed to trade off the two problems and validate\nour analysis.\n Extensive experiments on our proposed benchmark datasets with different\nscenarios demonstrate the effectiveness of our novel analysis in this new\nfew-shot video object detection problem.\n"} {"abstract": " We conduct a novel coverage probability analysis of downlink transmission in a\nthree-dimensional (3D) terahertz (THz) communication (THzCom) system.
In this\nsystem, we address the unique propagation properties in the THz band, e.g.,\nabsorption loss, super-narrow directional beams, and high vulnerability towards\nblockage, which are fundamentally different from those at lower frequencies.\nDifferent from existing studies, we characterize the performance while\nconsidering the effect of 3D directional antennas at both access points (APs)\nand user equipments (UEs), and the joint impact of the blockage caused by the\nuser itself, moving humans, and wall blockers in a 3D environment. Under such\nconsideration, we develop a tractable analytical framework to derive a new\nexpression for the coverage probability by examining the regions where dominant\ninterferers (i.e., those that can cause outage by themselves) can exist, and the\naverage number of interferers existing in these regions. Aided by numerical\nresults, we validate our analysis and reveal that ignoring the impact of the\nvertical heights of THz devices in the analysis leads to a substantial\nunderestimation of the coverage probability. We also show that it is more\nworthwhile to increase the antenna directivity at the APs than at the UEs, to\nproduce a more reliable THzCom system.\n"} {"abstract": " We investigate Rindler's frame measurements. From its perspective, we found a\ngeometric/gravitational interpretation of the speed of light, mass and the\nuncertainty principle. This can be interpreted as measurements of a black hole\nuniversal clock. This leads to the emergence of a timeless state of gravity in a\nmathematically consistent way. In other words, space may be a frozen time.\n"} {"abstract": " The present work is devoted to an extension of the well-known Ehrling\ninequalities, which quantitatively characterize compact embeddings of function\nspaces, to more general operators. Firstly, a modified notion of continuity for\nlinear operators, named \\emph{Ehrling continuity} and inspired by the classical\nEhrling inequality, is introduced, and then, a necessary and sufficient\ncondition for Ehrling continuity is provided via arguments based on general\ntopology. Secondly, general completely continuous operators between normed\nspaces are characterized in terms of (generalized) Ehrling type inequalities.\nTo this end, the well-known local metrization of the weak topology (so to\nspeak, a \\emph{very weak norm}) plays a crucial role. Thanks to these results,\na universal relation is observed among complete continuity, the very weak norm\nand the generalized Ehrling type inequality.\n"} {"abstract": " We present order reduction results for linear time invariant descriptor\nsystems. Results are given for both forced and unforced systems, as well as\nmethods for constructing the reduced-order systems. Our results establish a precise\nconnection between classical and new results on this topic, and lead to an\nelementary construction of quasi-Weierstrass forms for a descriptor system.\nExamples are given to illustrate the usefulness of our results.\n"} {"abstract": " In the field of nuclear reactor physics, transient phenomena are usually\nstudied using deterministic or hybrid methods. These methods require many\napproximations, such as: geometry, time and energy discretizations, material\nhomogenization and assumption of diffusion conditions, among others. In this\ncontext, Monte Carlo simulations are especially well suited to study these\nproblems.
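One ingredient of such time-dependent Monte Carlo — sampling beta-delayed neutron emission times from individual precursor decay constants, as discussed next — admits a minimal sketch; the decay constants and yields below are illustrative placeholders, not JEFF-3.1.1 or ENDF-B/VIII.0 data.

    # Sketch: sample beta-delayed neutron emission times from precursors.
    # The decay constants (1/s) and relative yields are illustrative
    # placeholders, not evaluated nuclear data.
    import numpy as np

    rng = np.random.default_rng(42)
    lam = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])  # 6-group-like constants
    yields = np.array([0.04, 0.21, 0.19, 0.40, 0.12, 0.04])     # relative yields

    def sample_emission_time():
        i = rng.choice(len(lam), p=yields / yields.sum())  # pick a precursor
        return rng.exponential(1.0 / lam[i])               # decay time ~ Exp(lambda_i)

    times = [sample_emission_time() for _ in range(100_000)]
    print(f"mean delayed-emission time: {np.mean(times):.2f} s")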
Challenges presented when using Monte Carlo simulations in space-time\nkinetics in fissile systems are the different time-scales involved in prompt\nand delayed neutron emission, which implies that the results obtained have a large\nassociated variance if an analog Monte Carlo simulation is utilized.\nFurthermore, in both deterministic and Monte Carlo simulations delayed neutron\nprecursors are grouped in a $6$- or $8$-group structure, but nowadays there is\nno solid reason to keep this aggregation. In this work, and for the first\ntime, individual precursor data is implemented in a Monte Carlo simulation,\nexplicitly including the time dependence related to the $\\beta$-delayed neutron\nemission. This was accomplished by modifying the open source Monte Carlo code\nOpenMC. In the modified code -- Time Dependent OpenMC or OpenMC(TD) -- time\ndependency related to delayed neutron emission originating from $\\beta$-decay\nwas addressed. The continuous-energy neutron cross-section data used come from\nthe JEFF-$3$.$1$.$1$ library. Individual precursor data was taken from\nJEFF-$3$.$1$.$1$ (cumulative yields) and ENDF-B/VIII.$0$ (delayed neutron\nemission probabilities and delayed neutron energy spectra). OpenMC(TD) was\ntested in: i) a monoenergetic system; ii) an energy-dependent unmoderated\nsystem where the precursors were taken individually or in a group structure;\nand finally iii) a light-water moderated energy-dependent system, using a\n$6$-group structure, and $50$ and $40$ individual precursors.\n"} {"abstract": " Let $X$ be an infinite linearly ordered set and let $Y$ be a nonempty subset\nof $X$. We calculate the relative rank of the semigroup $OP(X,Y)$ of all\norientation-preserving transformations on $X$ with restricted range $Y$ modulo\nthe semigroup $O(X,Y)$ of all order-preserving transformations on $X$ with\nrestricted range $Y$. For $Y = X$, we characterize the relative generating sets\nof minimal size.\n"} {"abstract": " For an image query, unsupervised contrastive learning labels crops of the\nsame image as positives, and other image crops as negatives. Although\nintuitive, such a naive label assignment strategy cannot reveal the underlying\nsemantic similarity between a query and its positives and negatives, and\nimpairs performance, since some negatives are semantically similar to the query\nor even share the same semantic class as the query. In this work, we first\nprove that for contrastive learning, inaccurate label assignment heavily\nimpairs its generalization for semantic instance discrimination, while accurate\nlabels benefit its generalization. Inspired by this theory, we propose a novel\nself-labeling refinement approach for contrastive learning. It improves the\nlabel quality via two complementary modules: (i) self-labeling refinery (SLR)\nto generate accurate labels and (ii) momentum mixup (MM) to enhance similarity\nbetween a query and its positive. SLR uses a positive of a query to estimate\nsemantic similarity between a query and its positive and negatives, and\ncombines estimated similarity with vanilla label assignment in contrastive\nlearning to iteratively generate more accurate and informative soft labels. We\ntheoretically show that our SLR can exactly recover the true semantic labels of\nlabel-corrupted data, and supervises networks to achieve zero prediction error\non classification tasks. MM randomly combines queries and positives to increase\nsemantic similarity between the generated virtual queries and their positives\nso as to improve label accuracy.
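A bare-bones reading of the momentum mixup step just described: convexly combine each query with its positive so that the generated virtual query lies closer to its assigned label. Tensor shapes and the Beta mixing distribution are assumptions, not the paper's exact recipe.

    # Sketch of momentum mixup (MM): mix queries with their positives to raise
    # the semantic similarity between virtual queries and their labels.
    # Shapes and Beta parameters are illustrative assumptions.
    import torch

    def momentum_mixup(queries, positives, alpha=1.0):
        lam = torch.distributions.Beta(alpha, alpha).sample()
        virtual = lam * queries + (1.0 - lam) * positives  # convex combination
        return virtual, lam  # lam would also reweight the soft label

    q = torch.randn(256, 3, 32, 32)   # batch of query images
    p = torch.randn(256, 3, 32, 32)   # their positives (augmented views)
    vq, lam = momentum_mixup(q, p)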
Experimental results on CIFAR10, ImageNet,\nVOC and COCO show the effectiveness of our method. PyTorch code and model will\nbe released online.\n"} {"abstract": " The East Asian very-long-baseline interferometry (VLBI) Network (EAVN) is a\nrapidly evolving international VLBI array that is currently promoted under\njoint efforts among China, Japan, and Korea. EAVN aims at forming a joint VLBI\nNetwork by combining a large number of radio telescopes distributed over East\nAsian regions. After the combination of the Korean VLBI Network (KVN) and the\nVLBI Exploration of Radio Astrometry (VERA) into KaVA, further expansion with\nthe joint array in East Asia is actively promoted. Here we report the first\nimaging results (at 22 and 43 GHz) of bright radio sources obtained with KaVA\nconnected to the Tianma 65-m and Nanshan 26-m Radio Telescopes in China. To test\nthe EAVN imaging performance for different sources, we observed four active\ngalactic nuclei (AGN) with different brightness and morphology. As a result,\nwe confirmed that the Tianma 65-m Radio Telescope (TMRT) significantly enhances the\noverall array sensitivity, providing a factor of 4 improvement in baseline sensitivity\nand a factor of 2 in image dynamic range compared to the case of KaVA only. The addition of\nthe Nanshan 26-m Radio Telescope (NSRT) further doubled the east-west angular\nresolution. With the resulting high-dynamic-range, high-resolution images with\nEAVN (KaVA+TMRT+NSRT), various fine-scale structures in our targets, such as\nthe counter-jet in M87, a kink-like morphology of the 3C273 jet and the weak\nemission in other sources, are successfully detected. This demonstrates the\npowerful capability of EAVN to study AGN jets and to achieve other science\ngoals in general. Ongoing expansion of EAVN will further enhance the angular\nresolution, detection sensitivity and frequency coverage of the network.\n"} {"abstract": " For an appropriate choice of a $\\mathbb{Z}$-grading structure, we prove that\nthe wrapped Fukaya category of the symmetric square of a $(k+3)$-punctured\nsphere, i.e. the Weinstein manifold given as the complement of $(k+3)$ generic\nlines in $\\mathbb{C}P^2$, is quasi-equivalent to the derived category of\ncoherent sheaves on a singular surface $\\mathcal{Z}_{2,k}$ constructed as the\nboundary of a toric Landau-Ginzburg model $(\\mathcal{X}_{2,k},\n\\mathbf{w}_{2,k})$. We do this by first constructing a quasi-equivalence\nbetween certain categorical resolutions of both sides and then localising. We\nalso provide a general homological mirror symmetry conjecture concerning all\nthe higher symmetric powers of punctured spheres. The corresponding toric\nLG-models $(\\mathcal{X}_{n,k},\\mathbf{w}_{n,k})$ are constructed from the\ncombinatorics of curves on the punctured surface and are related to small toric\nresolutions of the singularity $x_1\\ldots x_{n+1}= v_1\\ldots v_k$.\n"} {"abstract": " Monolayers of transition metal dichalcogenides are ideal materials to control\nboth spin and valley degrees of freedom either electrically or optically.\nNevertheless, optical excitation mostly generates exciton species with\ninherently short lifetimes and spin/valley relaxation times. Here we demonstrate\na very efficient spin/valley optical pumping of resident electrons in n-doped\nWSe2 and WS2 monolayers.
We observe that, using a continuous wave laser and\nappropriate doping and excitation densities, negative trion doublet lines\nexhibit circular polarization of opposite sign and the photoluminescence\nintensity of the triplet trion is more than four times larger with circular\nexcitation than with linear excitation. We interpret our results as a\nconsequence of a large dynamic polarization of resident electrons using\ncircular light.\n"} {"abstract": " The nature of Wireless Sensor Networks (WSN) and their widespread use\nintroduce many security threats and attacks. An effective Intrusion Detection\nSystem (IDS) should be used to detect attacks. Detecting such an attack is\nchallenging, especially the detection of Denial of Service (DoS) attacks.\nMachine learning classification techniques have been used as an approach for\nDoS detection. This paper conducted an experiment using the Waikato Environment for\nKnowledge Analysis (WEKA) to evaluate the efficiency of five machine learning\nalgorithms for detecting flooding, grayhole, blackhole, and scheduling DoS\nattacks in WSNs. The evaluation is based on a dataset, called WSN-DS. The\nresults showed that the random forest classifier outperforms the other\nclassifiers with an accuracy of 99.72%.\n"} {"abstract": " This paper discusses capabilities that are essential to models applied in\npolicy analysis settings and the limitations of direct applications of\noff-the-shelf machine learning methodologies to such settings. Traditional\neconometric methodologies for building discrete choice models for policy\nanalysis involve combining data with modeling assumptions guided by\nsubject-matter considerations. Such considerations are typically most useful in\nspecifying the systematic component of random utility discrete choice models\nbut are generally of limited aid in determining the form of the random\ncomponent. We identify an area where machine learning paradigms can be\nleveraged, namely in specifying and systematically selecting the best\nspecification of the random component of the utility equations. We review two\nrecent novel applications where mixed-integer optimization and cross-validation\nare used to algorithmically select optimal specifications for the random\nutility components of nested logit and logit mixture models subject to\ninterpretability constraints.\n"} {"abstract": " The generation of non-Abelian geometric phases from a system of evanescently\ncoupled waveguides is extended towards the framework of nonorthogonal\ncoupled-mode theory. Here, we study an experimentally feasible tripod\narrangement of waveguides that contain dark states from which a nontrivial\nU(2)-mixing can be obtained by means of an adiabatic parameter variation. We\ninvestigate the influence of higher-order contributions beyond\nnearest-neighbour coupling as well as self-coupling on the stability of a\nU(3)-phase generated from an optical tetrapod setup. Our results indicate that,\ndespite the mode nonorthogonality, the symmetry of dark states protects the\ngeometric evolution of light from distortion.\n"} {"abstract": " The canonical ensemble plays a crucial role in statistical mechanics in and\nout of equilibrium. For example, the standard derivation of the fluctuation\ntheorem relies on the assumption that the initial state of the heat bath is the\ncanonical ensemble.
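For reference, the fluctuation theorem at issue can be stated in its standard integral and detailed forms, with $\sigma$ the total entropy production; these are textbook statements, not the specific eigenstate formulation developed below:

    \langle e^{-\sigma} \rangle = 1,
    \qquad
    \frac{P(\sigma)}{P(-\sigma)} = e^{\sigma}.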
On the other hand, the recent progress in the foundation of\nstatistical mechanics has revealed that a thermal equilibrium state is not\nnecessarily described by the canonical ensemble but can be a quantum pure state\nor even a single energy eigenstate, as formulated by the eigenstate\nthermalization hypothesis (ETH). Then, a question that arises is how these two\npictures, the canonical ensemble and a single energy eigenstate as a thermal\nequilibrium state, are compatible in the fluctuation theorem. In this paper, we\ntheoretically and numerically show that the fluctuation theorem holds in both\nthe long- and short-time regimes, even when the initial state of the bath is\na single energy eigenstate of a many-body system. Our proof of the fluctuation\ntheorem in the long-time regime is based on the ETH, while it was previously\nshown in the short-time regime on the basis of the Lieb-Robinson bound and the\nETH [Phys. Rev. Lett. 119, 100601 (2017)]. The proofs for these time regimes\nare theoretically independent and complementary, implying the fluctuation\ntheorem in the entire time domain. We also perform a systematic numerical\nsimulation of hard-core bosons by exact diagonalization and verify the\nfluctuation theorem in both time regimes by focusing on the finite-size\nscaling. Our results contribute to the understanding of the mechanism by which the\nfluctuation theorem emerges from unitary dynamics of quantum many-body systems,\nand can be tested by experiments with, e.g., ultracold atoms.\n"} {"abstract": " Reinforcement learning in complex environments may require supervision to\nprevent the agent from attempting dangerous actions. As a result of supervisor\nintervention, the executed action may differ from the action specified by the\npolicy. How does this affect learning? We present the Modified-Action Markov\nDecision Process, an extension of the MDP model that allows actions to differ\nfrom the policy. We analyze the asymptotic behaviours of common reinforcement\nlearning algorithms in this setting and show that they adapt in different ways:\nsome completely ignore modifications while others go to various lengths in\ntrying to avoid action modifications that decrease reward. By choosing the\nright algorithm, developers can prevent their agents from learning to\ncircumvent interruptions or constraints, and better control agent responses to\nother kinds of action modification, like self-damage.\n"} {"abstract": " We present the long-term X-ray spectral and temporal analysis of the 'bare-type'\nAGN Ark 120. We consider the observations from XMM-Newton, Suzaku, Swift, and\nNuSTAR from 2003 to 2018. The spectral properties of this source are studied\nusing various phenomenological and physical models present in the literature.\nWe report (a) the variations of several physical parameters, such as the\ntemperature and optical depth of the electron cloud, the size of the Compton\ncloud, and the accretion rate over the last fifteen years. The spectral variations\nare explained by the change in the accretion dynamics; (b) the X-ray time\ndelay between the 0.2-2 keV and 3-10 keV light-curves exhibited zero delay in 2003,\na positive delay of 4.71 \\pm 2.1 ks in 2013, and a negative delay of 4.15 \\pm 1.5\nks in 2014. The delays are explained considering Comptonization, reflection,\nand light-crossing time; (c) the long-term intrinsic luminosities of the\nsoft-excess and the primary continuum, obtained using nthcomp, show a correlation\nwith a Pearson Correlation Coefficient of 0.922.
This indicates that the\nsoft-excess and the primary continuum originate from the same physical\nprocess. From a physical model fitting, we infer that the soft excess for Ark\n120 could be due to a small number of scatterings in the Compton cloud. Using\nMonte-Carlo simulations, we show that indeed the spectra corresponding to fewer\nscatterings could provide a steeper soft-excess power-law in the 0.2-3 keV\nrange. Simulated luminosities are found to be in agreement with the observed\nvalues.\n"} {"abstract": " We study the parameter dependence of complex geodesics with prescribed\nboundary value and direction on bounded strongly linearly convex domains. As an\nimportant application, we present a quantitative relationship between the\nregularity of the pluricomplex Poisson kernel of such a domain, which is a\nsolution to a homogeneous complex Monge-Amp\\`{e}re equation with boundary\nsingularity, and that of the boundary of the domain. Our results considerably\nimprove previous ones in this direction due to Chang-Hu-Lee and\nBracci-Patrizio.\n"} {"abstract": " The LIGO-Virgo-KAGRA collaboration (LVC) recently discovered GW190521, a\ngravitational wave (GW) source associated with the merger between two black\nholes (BHs) with masses $66$ M$_\\odot$ and $>85$ M$_\\odot$. GW190521 represents\nthe first BH binary (BBH) merger with a primary mass falling in the \"upper\nmass-gap\" and the first leaving behind a $\\sim 150$ M$_\\odot$ remnant. So far,\nthe LVC reported the discovery of four further mergers having a total mass\n$>100$ M$_\\odot$, i.e. in the intermediate-mass black holes (IMBH) mass range.\nHere, we discuss results from a series of 80 $N$-body simulations of young\nmassive clusters (YMCs) that implement relativistic corrections to follow\ncompact object mergers. We discover the development of a GW190521-like system\nas the result of a 3rd-generation merger, and four IMBH-BH mergers with total\nmass $\\sim(300-350)$ M$_\\odot$. We show that these IMBH-BH mergers are\nlow-frequency GW sources detectable with LISA and DECIGO out to redshift\n$z=0.01-0.1$ and $z>100$, and we discuss how their detection could help\nunravel IMBH natal spins. For the GW190521 test case, we show that the\n3rd-generation merger remnant has a spin and effective spin parameter that\nmatches the $90\\%$ credible interval measured for GW190521 better than a\nsimpler double merger and comparably to a single merger. Due to GW recoil\nkicks, we show that retaining the products of these mergers requires birth sites\nwith escape velocities $\\gtrsim 50-100$ km s$^{-1}$, values typically attained\nin galactic nuclei and massive clusters with steep density profiles.\n"} {"abstract": " Label noise is frequently observed in real-world large-scale datasets. The\nnoise is introduced due to a variety of reasons; it is heterogeneous and\nfeature-dependent. Most existing approaches to handling noisy labels fall into\ntwo categories: they either assume an ideal feature-independent noise, or\nremain heuristic without theoretical guarantees. In this paper, we propose to\ntarget a new family of feature-dependent label noise, which is much more\ngeneral than commonly used i.i.d. label noise and encompasses a broad spectrum\nof noise patterns. Focusing on this general noise family, we propose a\nprogressive label correction algorithm that iteratively corrects labels and\nrefines the model.
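A hedged sketch of such a progressive correction loop — relabel only where the current model is confident, with a threshold that loosens over rounds — might look as follows; the scikit-learn-style model interface and the confidence schedule are assumptions, not the paper's algorithm.

    # Sketch of progressive label correction: iteratively flip labels on which
    # the current classifier is confident, then retrain. The confidence
    # schedule and model interface are illustrative assumptions.
    import numpy as np

    def progressive_label_correction(model, X, y, rounds=5, theta0=0.95, step=0.02):
        labels = y.copy()
        for t in range(rounds):
            model.fit(X, labels)
            proba = model.predict_proba(X)            # (n_samples, n_classes)
            conf, pred = proba.max(axis=1), proba.argmax(axis=1)
            threshold = max(theta0 - step * t, 0.5)   # gradually admit more corrections
            flip = conf > threshold
            labels[flip] = pred[flip]                 # correct confident labels
        return model, labels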
We provide theoretical guarantees showing that for a wide\nvariety of (unknown) noise patterns, a classifier trained with this strategy\nconverges to be consistent with the Bayes classifier. In experiments, our\nmethod outperforms SOTA baselines and is robust to various noise types and\nlevels.\n"} {"abstract": " A nearly autonomous management and control (NAMAC) system is designed to\nfurnish recommendations to operators for achieving particular goals based on\nNAMAC's knowledge base. As a critical component in a NAMAC system, digital\ntwins (DTs) are used to extract information from the knowledge base to support\ndecision-making in reactor control and management during all modes of plant\noperations. With the advancement of artificial intelligence and data-driven\nmethods, machine learning algorithms are used to build DTs of various functions\nin the NAMAC system. To evaluate the uncertainty of DTs and its impacts on the\nreactor digital instrumentation and control systems, uncertainty quantification\n(UQ) and software risk analysis are needed. As a comprehensive overview of prior\nresearch and a starting point for new investigations, this study selects and\nreviews relevant UQ techniques and software hazard and software risk analysis\nmethods that may be suitable for DTs in the NAMAC system.\n"} {"abstract": " In this note, we prove that the problem of computing the linear discrepancy\nof a given matrix is $\\Pi_2$-hard, even to approximate within a $9/8 - \\epsilon$\nfactor for any $\\epsilon > 0$. This strengthens the NP-hardness result of Li\nand Nikolov [ESA 2020] for the exact version of the problem, and answers a\nquestion posed by them. Furthermore, since Li and Nikolov showed that the\nproblem is contained in $\\Pi_2$, our result makes linear discrepancy another\nnatural problem that is $\\Pi_2$-complete (to approximate).\n"} {"abstract": " Chemical space is routinely explored by machine learning methods to discover\ninteresting molecules, before time-consuming experimental synthesis is\nattempted. However, these methods often rely on a graph representation,\nignoring 3D information necessary for determining the stability of the\nmolecules. We propose a reinforcement learning approach for generating\nmolecules in Cartesian coordinates allowing for quantum chemical prediction of\nthe stability. To improve sample-efficiency we learn basic chemical rules from\nimitation learning on the GDB-11 database to create an initial model applicable\nfor all stoichiometries. We then deploy multiple copies of the model\nconditioned on a specific stoichiometry in a reinforcement learning setting.\nThe models correctly identify low energy molecules in the database and produce\nnovel isomers not found in the training set. Finally, we apply the model to\nlarger molecules to show how reinforcement learning further refines the\nimitation learning model in domains far from the training data.\n"} {"abstract": " We report the first Vivaldi arrays monolithically fabricated exclusively\nusing commercial, low-cost, 3D metal printing (direct metal laser sintering).\nFurthermore, we develop one of the first dual-polarized Vivaldi arrays on a\ntriangular lattice, and compare it to a square lattice array. The triangular\nlattice is attractive because it has a 15.5% larger cell size compared to the\nsquare lattice and can be more naturally truncated into a wide range of\naperture shapes such as a rectangle, hexagon, or triangle. Both arrays operate\nat 3-20 GHz and scan out to 60 degrees from normal.
The fabrication\nprocess is significantly simplified compared to previously published Vivaldi\narrays since the antenna is ready for use directly after the standard printing\nprocess is complete. This rapid manufacturing is further expedited by printing\nthe 'Sub-Miniature Push-on, Micro' (SMPM) connectors directly onto the\nradiating elements, which simplifies assembly and reduces cost compared to\nutilizing discrete RF connectors. The arrays have a modular design that allows\nfor combining multiple sub-arrays together for arbitrarily increasing the\naperture size. Simulations and measurements show that our arrays have similar\nperformance to previously published Vivaldi arrays, but with simpler\nfabrication.\n"} {"abstract": " We are not very good at measuring -- rigorously and quantitatively -- the\ncyber security of systems. Our ability to measure cyber resilience is even\nworse. And without measuring cyber resilience, we can neither improve it nor\ntrust its efficacy. It is difficult to know if we are improving or degrading\ncyber resilience when we add another control, or a mix of controls, to harden\nthe system. The only way to know is to specifically measure cyber resilience\nwith and without a particular set of controls. What needs to be measured are\ntemporal patterns of recovery and adaptation, and not time-independent failure\nprobabilities. In this paper, we offer a set of criteria that would ensure\ndecision-maker confidence in the reliability of the methodology used in\nobtaining a meaningful measurement.\n"} {"abstract": " Recently, Gross, Mansour and Tucker introduced the partial-dual genus\npolynomial of a ribbon graph as a generating function that enumerates the\npartial duals of the ribbon graph by genus. It is analogous to the\nextensively-studied polynomial in topological graph theory that enumerates by\ngenus all embeddings of a given graph. To investigate the partial-dual genus\npolynomial one only needs to focus on bouquets, i.e. ribbon graphs with only\none vertex. In this paper, we shall further show that the partial-dual genus\npolynomial of a bouquet essentially depends on the signed intersection graph of\nthe bouquet rather than on the bouquet itself. That is to say the bouquets with\nthe same signed intersection graph will have the same partial-dual genus\npolynomial. We then prove that the partial-dual genus polynomial of a bouquet\ncontains a non-zero constant term if and only if its signed intersection graph is\npositive and bipartite. Finally, we consider a conjecture posed by Gross,\nMansour and Tucker that there is no orientable ribbon graph whose partial-dual\ngenus polynomial has only one non-constant term; we give a characterization of\nnon-empty bouquets whose partial-dual genus polynomials have only one term by\nconsidering the non-orientable and orientable cases separately.\n"} {"abstract": " We obtain global, non-asymptotic convergence guarantees for independent\nlearning algorithms in competitive reinforcement learning settings with two\nagents (i.e., zero-sum stochastic games). We consider an episodic setting where\nin each episode, each player independently selects a policy and observes only\ntheir own actions and rewards, along with the state. We show that if both\nplayers run policy gradient methods in tandem, their policies will converge to\na min-max equilibrium of the game, as long as their learning rates follow a\ntwo-timescale rule (which is necessary).
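For a zero-sum matrix game, the two-timescale rule can be sketched directly: both players run softmax policy gradient independently, with player 1 on the faster timescale. The payoff matrix, step-size exponents, and iteration count below are illustrative assumptions.

    # Sketch: independent policy gradient in a zero-sum matrix game with a
    # two-timescale step-size rule (player 1 learns faster than player 2).
    # Payoffs and learning rates are illustrative assumptions.
    import numpy as np

    A = np.array([[1.0, -1.0], [-1.0, 1.0]])    # payoff to player 1
    theta1 = np.array([0.5, -0.5])              # non-uniform starting logits
    theta2 = np.array([-0.3, 0.3])

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    for t in range(1, 50001):
        p1, p2 = softmax(theta1), softmax(theta2)
        u = A @ p2                              # player 1's action values
        v = -A.T @ p1                           # player 2's action values (zero-sum)
        eta1, eta2 = 0.5 / t ** 0.6, 0.5 / t    # two-timescale learning rates
        theta1 += eta1 * p1 * (u - p1 @ u)      # exact softmax policy gradient
        theta2 += eta2 * p2 * (v - p2 @ v)

    print(softmax(theta1), softmax(theta2))     # should drift toward (0.5, 0.5)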
To the best of our knowledge, this\nconstitutes the first finite-sample convergence result for independent policy\ngradient methods in competitive RL; prior work has largely focused on\ncentralized, coordinated procedures for equilibrium computation.\n"} {"abstract": " Recently, many enterprises have been facing difficulties brought about by\nlimited warehouse land and increasing loan costs. As a promising\napproach to improve the space utilization rate, puzzle-based storage systems\n(PBSSs) are drawing more attention from logistics researchers. In previous\nresearch on PBSS, attention has mainly been paid to single-target-item\nproblems. However, there are no consensus algorithms to solve load retrieval\nroute programming in PBSSs with multiple target items. In this paper, a\nheuristic algorithm is proposed to solve load retrieval route programming in\nPBSSs with multiple target items, multiple escorts and multiple IOs. First,\nnew concepts used by the proposed algorithm are defined, including target\nIOs, the target positions of escorts, and the number of escorts required, among others. Then, the\ndecision procedures are designed according to these concepts. Based on a Markov\ndecision process, the proposed algorithm decides the next action\nby analyzing the current status, until all the target items arrive at the IOs.\nThe case study results have shown the effectiveness and efficiency of the\nproposed heuristic algorithm.\n"} {"abstract": " We discuss the propagation of surface waves in an isotropic half space\nmodelled with the linear Cosserat theory of isotropic elastic materials. To\nthis aim we use a method based on the algebraic analysis of the surface\nimpedance matrix and on the algebraic Riccati equation, and which is\nindependent of the common Stroh formalism. Based on this method, a new algorithm\nthat determines the amplitudes and the wave speed in the theory of isotropic\nelastic Cosserat materials is described. Moreover, the method allows us to prove\nthe existence and uniqueness of a subsonic solution of the secular equation, a\nproblem which remains unsolved in almost all generalised linear theories of\nelastic materials. Since the results are suitable to be used for numerical\nimplementations, we propose two numerical algorithms which are viable for any\nelastic material. Explicit numerical calculations are made for aluminium-epoxy\nin the context of the Cosserat model. Since the novel form of the secular\nequation for isotropic elastic material has not been explicitly derived\nelsewhere, we establish it in this paper, too.\n"} {"abstract": " The deep neural network (DNN) is a popular model implemented in many systems to\nhandle complex tasks such as image classification, object recognition, natural\nlanguage processing, etc. Consequently, DNN structural vulnerabilities become\npart of the security vulnerabilities in those systems. In this paper we study\nthe root cause of DNN adversarial examples. We examine the DNN response surface\nto understand its classification boundary. Our study reveals the structural\nproblem of the DNN classification boundary that leads to the adversarial examples.\nExisting attack algorithms can generate from a handful to a few hundred\nadversarial examples given one clean image. We show there are infinitely many\nadversarial images given one clean sample, all within a small neighborhood of\nthe clean sample. We then define DNN uncertainty regions and show that the\ntransferability of adversarial examples is not universal.
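As a concrete instance of the attack algorithms mentioned above, the standard fast gradient sign method produces one adversarial neighbor of a clean image; perturbing with other small sign patterns yields the many others. The model and epsilon below are placeholders, not the systems studied in the abstract.

    # Fast gradient sign method (FGSM) sketch: one of the many adversarial
    # examples in a small neighborhood of a clean input. Model and epsilon
    # are illustrative placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x_adv = fgsm(net, torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,)))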
We also argue that\ngeneralization error, the large-sample theoretical guarantee established for\nDNNs, cannot adequately capture the phenomenon of adversarial examples. We need\nnew theory to measure DNN robustness.\n"} {"abstract": " A new solution for the Dutch national flag problem is proposed, requiring no\nthree-way comparisons, which gives quicksort a proper worst-case runtime of\n$O(nk)$ for inputs with $k$ distinct elements. This is used together with other\nknown and novel techniques to construct a hybrid sort that is never\nsignificantly slower than regular quicksort while speeding up drastically for\nmany input distributions.\n"} {"abstract": " It is established that in the tensionless limit the chiral superstring\nintegrand is reduced to the chiral integrand of the ambitwistor string.\n"} {"abstract": " Binarized Neural Networks (BNNs) have the potential to revolutionize the way\nthat deep learning is carried out in edge computing platforms. However, the\neffectiveness of interpretability methods on these networks has not been\nassessed.\n In this paper, we compare the performance of several widely used saliency\nmap-based interpretability techniques (Gradient, SmoothGrad and GradCAM), when\napplied to Binarized or Full Precision Neural Networks (FPNNs). We found that\nthe basic Gradient method produces very similar-looking maps for both types of\nnetwork. However, SmoothGrad produces significantly noisier maps for BNNs.\nGradCAM also produces saliency maps which differ between network types, with\nsome of the BNNs having seemingly nonsensical explanations. We comment on\npossible reasons for these differences in explanations and present this as an\nexample of why interpretability techniques should be tested on a wider range of\nnetwork types.\n"} {"abstract": " In this paper we establish the existence and multiplicity of nontrivial\nsolutions to the following problem \\begin{align*} \\begin{split}\n(-\\Delta)^{\\frac{1}{2}}u+u+(\\ln|\\cdot|*|u|^2)&=f(u)+\\mu|u|^{-\\gamma-1}u,~\\text{in}~\\mathbb{R},\n\\end{split} \\end{align*} where $\\mu>0$, $(*)$ is the convolution operation\nbetween two functions, $0<\\gamma<1$, and $f$ is a function with a certain type of\ngrowth. We prove the existence of a nontrivial solution at a certain mountain\npass level and another ground state solution when the nonlinearity $f$ is of\nexponential critical growth.\n"} {"abstract": " We discuss the possible role of the Tietze extension theorem in providing a\nrigorous topological basis for the expanding space-time in cosmology. A simple\ntoy model has been introduced to show the analogy between the topological\nextension from a circle $S$ to the whole space $M$ and the cosmic expansion\nfrom a non-zero volume to the whole space-time in non-singular cosmological\nmodels. A topological analogy to the cosmic scale factor function has been\nsuggested, and the paper refers to the possible applications of the topological\nextension in mathematical physics.\n"} {"abstract": " The attention-based encoder-decoder framework is widely used in the scene text\nrecognition task. However, for the current state-of-the-art (SOTA) methods,\nthere is room for improvement in terms of the efficient usage of local visual\nand global context information of the input text image, as well as the robust\ncorrelation between the scene processing module (encoder) and the text\nprocessing module (decoder).
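The glimpse mechanism at the heart of such attention-based decoders reduces to a few lines: score encoder features against the decoder state and pool them. The dimensions below are assumptions, not the RCEED architecture introduced next.

    # Generic attention "glimpse" sketch for an encoder-decoder recognizer:
    # score encoder features against the decoder state and pool them.
    # Dimensions are illustrative, not the RCEED architecture.
    import torch
    import torch.nn.functional as F

    def glimpse(decoder_state, encoder_feats):
        # decoder_state: (B, D); encoder_feats: (B, T, D)
        scores = torch.einsum("bd,btd->bt", decoder_state, encoder_feats)
        alpha = F.softmax(scores / decoder_state.shape[-1] ** 0.5, dim=-1)
        return torch.einsum("bt,btd->bd", alpha, encoder_feats)  # weighted sum

    g = glimpse(torch.randn(8, 256), torch.randn(8, 64, 256))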
In this paper, we propose a Representation and\nCorrelation Enhanced Encoder-Decoder Framework (RCEED) to address these\ndeficiencies and break the performance bottleneck. In the encoder module, local\nvisual features, global context features, and position information are aligned\nand fused to generate a small-size comprehensive feature map. In the decoder\nmodule, two methods are utilized to enhance the correlation between the scene and\ntext feature spaces. 1) The decoder initialization is guided by the holistic\nfeature and global glimpse vector exported from the encoder. 2) The feature-enriched\nglimpse vector produced by the Multi-Head General Attention is used to\nassist the RNN iteration and the character prediction at each time step.\nMeanwhile, we also design a Layernorm-Dropout LSTM cell to improve the model's\ngeneralization towards changeable texts. Extensive experiments on the\nbenchmarks demonstrate the advantageous performance of RCEED in scene text\nrecognition tasks, especially the irregular ones.\n"} {"abstract": " The presence of smart objects is increasingly widespread and their ecosystem,\nalso known as the Internet of Things, is relevant in many different application\nscenarios. The huge amount of temporally annotated data produced by these smart\ndevices demands efficient techniques for the transfer and storage of time series\ndata. Compression techniques play an important role toward this goal and,\nalthough standard compression methods could be used with some benefit, several\ntechniques specifically address the case of time series by exploiting their\npeculiarities to achieve more effective compression and, in the lossy case, more\naccurate decompression. This paper provides a state-of-the-art survey of the principal time\nseries compression techniques, proposing a taxonomy to classify them\nconsidering their overall approach and their characteristics. Furthermore, we\nanalyze the performances of the selected algorithms by discussing and comparing\nthe experimental results that were provided in the original articles. The goal\nof this paper is to provide a comprehensive and homogeneous reconstruction of\nthe state-of-the-art, which is currently fragmented across many papers that use\ndifferent notations and where the proposed methods are not organized according\nto a classification.\n"} {"abstract": " Long-term neutrino-radiation hydrodynamics simulations in full general\nrelativity are performed for the collapse of rotating massive stars that are\nevolved from He-stars with initial masses of $20$ and $32M_\\odot$. It is\nshown that if the collapsing stellar core has sufficient angular momentum, the\nrotationally-supported proto-neutron star (PNS) survives for seconds,\naccompanied by the formation of a massive torus of mass larger than $1\\,M_\\odot$.\nSubsequent mass accretion onto the central region produces a massive and\ncompact central object, and eventually enhances the neutrino luminosity beyond\n$10^{53}$\\,erg/s, resulting in a very delayed neutrino-driven explosion, in\nparticular toward the polar direction. The kinetic energy of the explosion can\nbe appreciably higher than $10^{52}$ erg for a massive progenitor star and\ncompatible with that of energetic supernovae like broad-line type-Ic\nsupernovae.
Through the subsequent accretion, the massive PNS eventually collapses\ninto a rapidly spinning black hole, which could be a central engine for\ngamma-ray bursts if a massive torus surrounds it.\n"} {"abstract": " From tiny pacemaker chips to aircraft collision avoidance systems, the\nstate-of-the-art Cyber-Physical Systems (CPS) have increasingly started to rely\non Deep Neural Networks (DNNs). However, as concluded in various studies, DNNs\nare highly susceptible to security threats, including adversarial attacks. In\nthis paper, we first discuss different vulnerabilities that can be exploited\nfor generating security attacks for neural network-based systems. We then\nprovide an overview of existing adversarial and fault-injection-based attacks\non DNNs. We also present a brief analysis to highlight different challenges in\nthe practical implementation of adversarial attacks. Finally, we discuss\nvarious prospective ways to develop robust DNN-based systems that are resilient\nto adversarial and fault-injection attacks.\n"} {"abstract": " This paper studies the robust satisfiability check and online control\nsynthesis problems for uncertain discrete-time systems subject to signal\ntemporal logic (STL) specifications. Different from existing techniques, this\nwork proposes an approach based on STL, reachability analysis, and temporal\nlogic trees. Firstly, a real-time version of STL semantics and a tube-based\ntemporal logic tree are proposed. We show that such a tree can be constructed\nfrom every STL formula. Secondly, using the tube-based temporal logic tree, a\nsufficient condition is obtained for the robust satisfiability check of the\nuncertain system. When the underlying system is deterministic, a necessary and\nsufficient condition for satisfiability is obtained. Thirdly, an online control\nsynthesis algorithm is designed. It is shown that when the STL formula is\nrobustly satisfiable and the initial state of the system belongs to the initial\nroot node of the tube-based temporal logic tree, it is guaranteed that the\ntrajectory generated by the controller satisfies the STL formula. The\neffectiveness of the proposed approach is verified by an automated car\novertaking example.\n"} {"abstract": " Recent exploration methods have proven to be a recipe for improving\nsample-efficiency in deep reinforcement learning (RL). However, efficient\nexploration in high-dimensional observation spaces still remains a challenge.\nThis paper presents Random Encoders for Efficient Exploration (RE3), an\nexploration method that utilizes state entropy as an intrinsic reward. In order\nto estimate state entropy in environments with high-dimensional observations,\nwe utilize a k-nearest neighbor entropy estimator in the low-dimensional\nrepresentation space of a convolutional encoder. In particular, we find that\nthe state entropy can be estimated in a stable and compute-efficient manner by\nutilizing a randomly initialized encoder, which is fixed throughout training.\nOur experiments show that RE3 significantly improves the sample-efficiency of\nboth model-free and model-based RL methods on locomotion and navigation tasks\nfrom DeepMind Control Suite and MiniGrid benchmarks. We also show that RE3\nallows learning diverse behaviors without extrinsic rewards, effectively\nimproving sample-efficiency in downstream tasks.
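A hedged sketch of the k-nearest-neighbor state-entropy reward described above: embed observations with a frozen, randomly initialized encoder and reward the distance to the k-th neighbor. The encoder architecture, k, and the log form of the reward are illustrative assumptions, not RE3's exact configuration.

    # Sketch of a k-NN state-entropy intrinsic reward with a fixed, randomly
    # initialized encoder (RE3-style). Architecture and constants are
    # illustrative assumptions.
    import torch

    encoder = torch.nn.Sequential(           # random encoder, frozen at init
        torch.nn.Flatten(), torch.nn.Linear(3 * 84 * 84, 50)
    )
    for p in encoder.parameters():
        p.requires_grad_(False)

    def intrinsic_reward(obs_batch, k=3):
        with torch.no_grad():
            y = encoder(obs_batch)                                # (B, 50) codes
            dists = torch.cdist(y, y)                             # pairwise distances
            knn = dists.topk(k + 1, largest=False).values[:, -1]  # k-th neighbor
            return torch.log(knn + 1.0)                           # entropy-style reward

    r = intrinsic_reward(torch.rand(128, 3, 84, 84))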
Source code and videos are\navailable at https://sites.google.com/view/re3-rl.\n"} {"abstract": " Human action recognition is an active research area in computer vision.\nAlthough great progress has been made, previous methods mostly recognize actions\nbased on depth data at only one scale, and thus they often neglect multi-scale\nfeatures that provide additional information for action recognition in practical\napplication scenarios. In this paper, we present a novel framework focusing on\nmulti-scale motion information to recognize human actions from depth video\nsequences. We propose a multi-scale feature map called Laplacian pyramid depth\nmotion images (LP-DMI). We employ depth motion images (DMI) as the templates to\ngenerate the multi-scale static representation of actions. Then, we calculate\nLP-DMI to enhance multi-scale dynamic information of motions and reduce\nredundant static information in human bodies. We further extract the\nmulti-granularity descriptor called LP-DMI-HOG to provide more discriminative\nfeatures. Finally, we utilize an extreme learning machine (ELM) for action\nclassification. The proposed method yields recognition accuracies of 93.41%,\n85.12%, and 91.94% on the public MSRAction3D, UTD-MHAD and DHA datasets. Through\nextensive experiments, we prove that our method outperforms state-of-the-art\nbenchmarks.\n"} {"abstract": " Selective laser melting is receiving increasing interest as an additive\nmanufacturing technique. Residual stresses induced by the large temperature\ngradients and inhomogeneous cooling process can favour the generation of\ncracks. In this work, a crystal plasticity finite element model is developed to\nsimulate the formation of residual stresses and to understand the correlation\nbetween plastic deformation, grain orientation and residual stresses in the\nadditive manufacturing process. The temperature profile and grain structure\nfrom thermal-fluid flow and grain growth simulations are implemented into the\ncrystal plasticity model. An element elimination and reactivation method is\nproposed to model the melting and solidification and to reinitialise state\nvariables, such as the plastic deformation, in the reactivated elements. The\naccuracy of this method is judged against a previous method based on the\nstiffness degradation of liquid regions by comparing the plastic deformation\ninduced by thermal stresses as a function of time. The method is used to\ninvestigate residual stresses parallel and perpendicular to the laser scan\ndirection, and the correlation with the maximum Schmid factor of the grains\nalong those directions. The magnitude of the residual stress can be predicted\nas a function of the depth, grain orientation and position with respect to the\nmolten pool.\n"} {"abstract": " The exceptional combination of strength and ductility in multi-component\nalloys is often attributed to the interaction of dislocations with the various\nsolute atoms in the alloy. To study these effects on the mechanical properties\nof such alloys there is a need to develop a modeling framework capable of\nquantifying the effect of these solutes on the evolution of dislocation\nnetworks. Large-scale three-dimensional (3D) discrete dislocation dynamics\n(DDD) simulations can provide access to such studies but to date no relevant\napproaches are available that aim for a complete representation of real alloys\nwith arbitrary chemical compositions.
Here, we introduce a formulation of\ndislocation interaction with substitutional solute atoms in fcc alloys in 3D\nDDD simulations that accounts for solute strengthening induced by atomic misfit\nas well as fluctuations in the cross-slip activation energy. Using this model,\nwe show that local fluctuations in the chemical composition of various\nCrFeCoNi-based multi-principal element alloys (MPEA) lead to sluggish\ndislocation motion, frequent cross-slip and alignment of dislocations with\nsolute aggregation features, explaining experimental observations related to\nmechanical behavior and dislocation activity. It is also demonstrated that\nthis behavior observed for certain MPEAs cannot be reproduced by assuming a\nperfect solid solution. The developed method also provides a basis for further\ninvestigations of dislocation plasticity in any real fcc alloy with\nsubstitutional solutes.\n"} {"abstract": " The spacetime in the interior of a black hole can be described by a\nhomogeneous line element, for which the Einstein--Hilbert action reduces to a\none-dimensional mechanical model. We have shown in [SciPost Phys. 10, 022\n(2021), [2010.07059]] that this model exhibits a symmetry under the\n$(2+1)$-dimensional Poincar\\'e group. Here we explain how this can be\nunderstood as a broken infinite-dimensional BMS$_3$ symmetry. This is done by\nreinterpreting the action for the model as a geometric action for BMS$_3$,\nwhere the configuration space variables are elements of the algebra\n$\\mathfrak{bms}_3$ and the equations of motion transform as coadjoint vectors.\nThe Poincar\\'e subgroup then arises as the stabilizer of the vacuum orbit. This\nsymmetry breaking is analogous to what happens with the Schwarzian action in\nAdS$_2$ JT gravity, although in the present case there is no direct\ninterpretation in terms of boundary symmetries. This observation, together with\nthe fact that other lower-dimensional gravitational models (such as the BTZ\nblack hole) possess the same broken BMS$_3$ symmetries, provides yet another\nillustration of the ubiquitous role played by this group.\n"} {"abstract": " Embedding nonlinear dynamical systems into artificial neural networks is a\npowerful new formalism for machine learning. By parameterizing ordinary\ndifferential equations (ODEs) as neural network layers, these Neural ODEs are\nmemory-efficient to train, process time-series naturally and incorporate\nknowledge of physical systems into deep learning models. However, the practical\napplications of Neural ODEs are limited due to long inference times, because\nthe outputs of the embedded ODE layers are computed numerically with\ndifferential equation solvers that can be computationally demanding. Here we\nshow that mathematical model order reduction methods can be used for\ncompressing and accelerating Neural ODEs by accurately simulating the\ncontinuous nonlinear dynamics in low-dimensional subspaces. We implement our\nnovel compression method by developing Neural ODEs that integrate the necessary\nsubspace-projection and interpolation operations as layers of the neural\nnetwork. We validate our model reduction approach by comparing it to two\nestablished acceleration methods from the literature in two classification\ntasks. In compressing convolutional and recurrent Neural ODE architectures, we\nachieve the best balance between speed and accuracy when compared to the other\ntwo acceleration methods.
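The subspace-projection idea behind this compression can be illustrated with a toy linear stand-in (a sketch under simplifying assumptions, not the paper's method): collect trajectory snapshots, build a proper-orthogonal-decomposition (POD) basis via the SVD, and integrate the dynamics in the reduced coordinates before lifting back to full dimension. All sizes below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, steps, dt = 100, 10, 200, 0.01

A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # full-order dynamics

# Collect snapshots of a full-order trajectory (training data for POD).
x, snaps = rng.normal(size=n), []
for _ in range(steps):
    x = x + dt * (A @ x)                         # forward Euler step
    snaps.append(x)

# POD basis: leading left singular vectors of the snapshot matrix.
U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
V = U[:, :r]                                     # n x r projection

A_r = V.T @ A @ V                                # r x r reduced dynamics

# Integrate in the subspace, then interpolate (lift) back to full space.
z = V.T @ rng.normal(size=n)
for _ in range(steps):
    z = z + dt * (A_r @ z)
x_approx = V @ z
print(x_approx.shape)                            # (100,)
```

In the paper the projection and interpolation enter as layers wrapped around a nonlinear ODE block; the linear system here only makes the subspace mechanics explicit.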
Based on our results, our integration of model order\nreduction with Neural ODEs can facilitate efficient, dynamical system-driven\ndeep learning in resource-constrained applications.\n"} {"abstract": " Superconducting crystals lacking inversion symmetry can potentially host\nunconventional pairing. However, to date, no direct, conclusive experimental\nevidence of such unconventional order parameters in non-centrosymmetric\nsuperconductors has been reported. In this paper, through direct measurement of\nthe superconducting energy gap by scanning tunnelling spectroscopy, we report\nthe existence of both $s$-wave (singlet) and $p$-wave (triplet) pairing\nsymmetries in non-centrosymmetric Ru$_7$B$_3$. Our temperature and magnetic\nfield dependent studies also indicate that the relative amplitudes of the\nsinglet and triplet components of the order parameter change differently with\ntemperature.\n"} {"abstract": " In this paper, we introduce a concept of norm-attainment in the projective\nsymmetric tensor product $\\widehat{\\otimes}_{\\pi,s,N} X$ of a Banach space $X$,\nwhich turns out to be naturally related to the classical norm-attainment of\n$N$-homogeneous polynomials on $X$. Due to this relation, we can prove that\nthere exist symmetric tensors that do not attain their norms, which allows us\nto study the problem of when the set of norm-attaining elements in\n$\\widehat{\\otimes}_{\\pi,s,N} X$ is dense. We show that the set of all\nnorm-attaining symmetric tensors is dense in $\\widehat{\\otimes}_{\\pi,s,N} X$\nfor a large class of Banach spaces, such as $L_p$-spaces, isometric $L_1$-predual\nspaces or Banach spaces with a monotone Schauder basis, among others. Next, we\nprove that if $X^*$ satisfies the Radon-Nikod\\'ym property and the approximation\nproperty, then the set of all norm-attaining symmetric tensors in\n$\\widehat{\\otimes}_{\\pi,s,N} X^*$ is dense. From these techniques, we can\npresent new examples of Banach spaces $X$ and $Y$ such that the set of all\nnorm-attaining tensors in the projective tensor product $X\n\\widehat{\\otimes}_\\pi Y$ is dense, answering positively an open question from\nthe paper by S. Dantas, M. Jung, \\'O. Rold\\'an and A. Rueda Zoca.\n"} {"abstract": " We give a simple unified proof for all existing rational hypergeometric\nRamanujan identities for $1/\\pi$, and give a complete survey (without proof) of\nseveral generalizations: rational hypergeometric identities for $1/\\pi^c$,\nTaylor expansions, upside-down formulas, and supercongruences.\n"} {"abstract": " Large-scale Web-based services present opportunities for improving UI\npolicies based on observed user interactions. We address challenges of learning\nsuch policies through model-free offline Reinforcement Learning (RL) with\noff-policy training. Deployed in a production system for user authentication in\na major social network, it significantly improves long-term objectives. We\narticulate practical challenges, compare several ML techniques, provide\ninsights on training and evaluation of RL models, and discuss generalizations.\n"} {"abstract": " We introduce tools from discrete convexity theory and polyhedral geometry\ninto the theory of West's stack-sorting map $s$. Associated to each permutation\n$\\pi$ is a particular set $\\mathcal V(\\pi)$ of integer compositions that\nappears in a formula for the fertility of $\\pi$, which is defined to be\n$|s^{-1}(\\pi)|$.
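For readers unfamiliar with the objects in this last abstract, the brute-force sketch below computes West's stack-sorting map $s$ and the fertility $|s^{-1}(\pi)|$ directly from the definitions; it is for illustration only, since enumerating $S_n$ is factorial-time.

```python
from itertools import permutations

def stack_sort(pi):
    """West's stack-sorting map s, via the classical single-stack pass."""
    stack, out = [], []
    for x in pi:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:
        out.append(stack.pop())
    return tuple(out)

def fertility(pi):
    """|s^{-1}(pi)|: the number of preimages of pi under s (brute force)."""
    n = len(pi)
    return sum(1 for q in permutations(range(1, n + 1)) if stack_sort(q) == pi)

print(stack_sort((3, 1, 4, 2)))   # -> (1, 3, 2, 4)
print(fertility((1, 3, 2, 4)))    # number of permutations mapping to 1324
```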
These compositions also feature prominently in more general\nformulas involving families of colored binary plane trees called troupes and in\na formula that converts from free to classical cumulants in noncommutative\nprobability theory. We show that $\\mathcal V(\\pi)$ is a transversal discrete\npolymatroid when it is nonempty. We define the fertilitope of $\\pi$ to be the\nconvex hull of $\\mathcal V(\\pi)$, and we prove a surprisingly simple\ncharacterization of fertilitopes as nestohedra arising from full binary plane\ntrees. Using known facts about nestohedra, we provide a procedure for\ndescribing the structure of the fertilitope of $\\pi$ directly from $\\pi$ using\nBousquet-M\\'elou's notion of the canonical tree of $\\pi$. As a byproduct, we\nobtain a new combinatorial cumulant conversion formula in terms of\ngeneralizations of canonical trees that we call quasicanonical trees. We also\napply our results on fertilitopes to study combinatorial properties of the\nstack-sorting map. In particular, we show that the set of fertility numbers has\ndensity $1$, and we determine all infertility numbers of size at most $126$.\nFinally, we reformulate the conjecture that $\\sum_{\\sigma\\in s^{-1}(\\pi)}x^{\\text{des}(\\sigma)+1}$ is always real-rooted in terms of\nnestohedra, and we propose natural ways in which this new version of the\nconjecture could be extended.\n"} {"abstract": " In this article, the form of the basis set for solving the helium\nSchr\\\"{o}dinger equation is reinvestigated from the perspective of geometry.\nWith the help of a theorem proved by Gu $et~al.$, we construct a convenient\nvariational basis set, which emphasizes the geometric characteristics of trial\nwavefunctions. The main advantage of this basis is that the angular part is\ncomplete for natural $L$ states with $L + 1$ terms and for unnatural $L$ states\nwith $L$ terms, where $L$ is the total angular quantum number. Compared with\nbasis sets which contain three Euler angles, this basis is very simple to use.\nMore importantly, this basis is easily generalized to systems with more\nparticles.\n"} {"abstract": " It is well-known that when translating terms into graphical formalisms many\ninessential details are abstracted away. In the case of $\\lambda$-calculus and\nproof-nets, these inessential details are captured by a notion of equivalence\non $\\lambda$-terms known as $\\simeq_\\sigma$-equivalence in both the\nintuitionistic (due to Regnier) and classical (due to Laurent) cases. The\npurpose of this paper is to uncover a strong bisimulation behind\n$\\simeq_\\sigma$-equivalence, as formulated by Laurent for Parigot's\n$\\lambda\\mu$-calculus. This is achieved by introducing a relation $\\simeq$,\ndefined over a revised presentation of $\\lambda\\mu$-calculus we dub $\\Lambda M$.\n More precisely, we first identify the reasons behind Laurent's\n$\\simeq_\\sigma$-equivalence failing to be a strong bisimulation. Inspired by\nLaurent's Polarized Proof-Nets, this leads us to distinguish multiplicative and\nexponential reduction steps on terms. Second, we provide an enriched syntax\nthat allows one to keep track of the renaming operation. These technical\ningredients are crucial to pave the way towards a strong bisimulation for the\nclassical case. We thus introduce a calculus $\\Lambda M$ and a relation\n$\\simeq$ that we show to be a strong bisimulation with respect to reduction in\n$\\Lambda M$, i.e.
two $\simeq$-equivalent terms have the exact same reduction\nsemantics, a result which fails for Regnier's $\simeq_\sigma$-equivalence in\n$\lambda$-calculus as well as for Laurent's $\simeq_\sigma$-equivalence in\n$\lambda\mu$. We also show that two $\simeq$-equivalent terms translate to\nequivalent Polarized Proof-Nets. Although $\simeq$ is not strictly included in\nLaurent's $\simeq_\sigma$, it can be seen as a restriction of it.\n"} {"abstract": " Learning musical instruments using online instructional videos has become\nincreasingly prevalent. However, pre-recorded videos lack the instantaneous\nfeedback and personal tailoring that human tutors provide. In addition,\nexisting video navigation is not optimized for instrument learning, encumbering\nthe learning experience. Guided by our formative interviews with\nguitar players and prior literature, we designed Soloist, a mixed-initiative\nlearning framework that automatically generates customizable curriculums from\noff-the-shelf guitar video lessons. Soloist takes raw videos as input and\nleverages deep-learning based audio processing to extract musical information.\nThis back-end processing is used to provide an interactive visualization to\nsupport effective video navigation and real-time feedback on the user's\nperformance, creating a guided learning experience. We demonstrate the\ncapabilities and specific use-cases of Soloist within the domain of learning\nelectric guitar solos using instructional YouTube videos. A remote user study,\nconducted to gather feedback from guitar players, shows encouraging results as\nthe users unanimously preferred learning with Soloist over unconverted\ninstructional videos.\n"} {"abstract": " We explore the nonunitary dynamics of $(2+1)$-dimensional free fermions and\nshow that the obtained steady state is critical regardless of the strength of\nthe nonunitary evolution. Numerical results indicate that the entanglement\nentropy has a logarithmic violation of the area-law and the mutual information\nbetween two distant regions decays as a power-law function. In particular, we\nprovide an interpretation of these scaling behaviors in terms of a simple\nquasiparticle pair picture. In addition, we study the dynamics of the\ncorrelation function and demonstrate that this system has dynamical exponent\n$z=1$. We further demonstrate that the dynamics of the correlation function can\nbe well captured by a classical nonlinear master equation. Our method opens the\ndoor to a vast number of nonunitary random dynamics in free fermions and can be\ngeneralized to any dimension.\n"} {"abstract": " Many cultural traits characterizing intelligent behaviors are now thought to\nbe transmitted through statistical learning, motivating us to study its effects\non cultural evolution. We conduct a large-scale music data analysis and observe\nthat various statistical parameters of musical products approximately follow\nthe beta distribution and other conjugate distributions. We construct a simple\nmodel of cultural evolution incorporating statistical learning and analytically\nshow that conjugate distributions emerge at equilibrium in the presence of\noblique transmission.
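The emergence of a conjugate family under statistical learning can be illustrated with a minimal simulation in the spirit of that statement (a sketch, not the authors' model): each learner holds a Beta prior over a Bernoulli production parameter, observes samples from many cultural parents (oblique transmission), and adopts a draw from the conjugate posterior. The population size, prior, and sample counts below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
POP, GENS, SAMPLES = 500, 100, 20
A0, B0 = 2.0, 2.0                       # Beta prior (the conjugate family)

theta = rng.uniform(size=POP)           # each individual's production parameter

for _ in range(GENS):
    new_theta = np.empty(POP)
    for i in range(POP):
        parents = rng.choice(POP, size=SAMPLES)        # oblique transmission
        data = rng.random(SAMPLES) < theta[parents]    # observe parents' output
        k = data.sum()
        # Conjugate Beta posterior; the learner adopts a posterior sample.
        new_theta[i] = rng.beta(A0 + k, B0 + SAMPLES - k)
    theta = new_theta

# After many generations, the population distribution of the trait is well
# approximated by a member of the conjugate (Beta) family.
print(theta.mean(), theta.var())
```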
The results demonstrate that the distribution of a\ncultural trait within a population depends on the individual's model for\ncultural production (the conjugate distribution law), and reveal interesting\npossibilities for theoretical and experimental studies on cultural evolution\nand social learning.\n"} {"abstract": " We investigate the balance of power between stars and AGN across cosmic\nhistory, based on the comparison between the infrared (IR) galaxy luminosity\nfunction (LF) and the IR AGN LF. The former corresponds to emission from dust\nheated by stars and AGN, whereas the latter includes emission from AGN-heated\ndust only. We find that at all redshifts (at least up to z~2.5), the high\nluminosity tails of the two LFs converge, indicating that the most\ninfrared-luminous galaxies are AGN-powered. Our results shed light on the\ndecades-old conundrum regarding the flatter high-luminosity slope seen in the\nIR galaxy LF compared to that in the UV and optical. We attribute this\ndifference to the increasing fraction of AGN-dominated galaxies with increasing\ntotal infrared luminosity (L_IR). We partition the L_IR-z parameter space into\na star-formation and an AGN-dominated region, finding that the most luminous\ngalaxies at all epochs lie in the AGN-dominated region. This sets a potential\n`limit' to attainable star formation rates, casting doubt on the abundance of\n`extreme starbursts': if AGN did not exist, L_IR>10^13 Lsun galaxies would be\nsignificantly rarer than they currently are in our observable Universe. We also\nfind that AGN affect the average dust temperatures (T_dust) of galaxies and\nhence the shape of the well-known L_IR-T_dust relation. We propose that local\nULIRGs are hotter than their high-redshift counterparts because of a higher\nfraction of AGN-dominated galaxies amongst the former group.\n"} {"abstract": " The properties of a supersolid state (SS) in a quasi-one-dimensional dipolar\nBose-Einstein condensate are studied, considering two possible mechanisms of\nrealization - due to repulsive three-body atomic interactions and quantum\nfluctuations in the framework of the Lee-Huang-Yang (LHY) theory. The proposed\ntheoretical model, based on minimization of the energy functional, allows\nevaluating the amplitude of the SS for an arbitrary set of parameters in the\ngoverning Gross-Pitaevskii equation (GPE). To explore the dynamics of the SS,\nwe first numerically construct its ground state in different settings,\nincluding periodic boundary conditions, a box-like trap and a parabolic\npotential, and then impose a perturbation. In oscillations of the perturbed\nsupersolid we observe the key manifestation of SS, namely the free flow of the\nsuperfluid fraction through the crystalline component of the system. Two\ndistinct oscillation frequencies of the supersolid associated with the\nsuperfluid fraction and crystalline components of the wave function are\nidentified from numerical simulations of the GPE.\n"} {"abstract": " Arguably the most common and salient object in daily video communications is\nthe talking head, as encountered in social media, virtual classrooms,\nteleconferences, news broadcasting, talk shows, etc. When communication\nbandwidth is limited by network congestion or cost considerations, compression\nartifacts in talking head videos are inevitable. The resulting video quality\ndegradation is highly visible and objectionable due to the high acuity of the\nhuman visual system to faces.
To solve this problem, we develop a multi-modality deep\nconvolutional neural network method for restoring face videos that are\naggressively compressed. The main innovation is a new DCNN architecture that\nincorporates known priors of multiple modalities: the video-synchronized speech\nsignal and semantic elements of the compression code stream, including motion\nvectors, code partition map and quantization parameters. These priors strongly\ncorrelate with the latent video and hence they are able to enhance the\ncapability of deep learning to remove compression artifacts. Ample empirical\nevidences are presented to validate the superior performance of the proposed\nDCNN method on face videos over the existing state-of-the-art methods.\n"} {"abstract": " In this paper we study the spontaneous development of symmetries in the early\nlayers of a Convolutional Neural Network (CNN) during learning on natural\nimages. Our architecture is built in such a way to mimic the early stages of\nbiological visual systems. In particular, it contains a pre-filtering step\n$\\ell^0$ defined in analogy with the Lateral Geniculate Nucleus (LGN).\nMoreover, the first convolutional layer is equipped with lateral connections\ndefined as a propagation driven by a learned connectivity kernel, in analogy\nwith the horizontal connectivity of the primary visual cortex (V1). The layer\n$\\ell^0$ shows a rotational symmetric pattern well approximated by a Laplacian\nof Gaussian (LoG), which is a well-known model of the receptive profiles of LGN\ncells. The convolutional filters in the first layer can be approximated by\nGabor functions, in agreement with well-established models for the profiles of\nsimple cells in V1. We study the learned lateral connectivity kernel of this\nlayer, showing the emergence of orientation selectivity w.r.t. the learned\nfilters. We also examine the association fields induced by the learned kernel,\nand show qualitative and quantitative comparisons with known group-based models\nof V1 horizontal connectivity. These geometric properties arise spontaneously\nduring the training of the CNN architecture, analogously to the emergence of\nsymmetries in visual systems thanks to brain plasticity driven by external\nstimuli.\n"} {"abstract": " Phenomics is concerned with detailed description of all aspects of organisms,\nfrom their physical foundations at genetic, molecular and cellular level, to\nbehavioural and psychological traits. Neuropsychiatric phenomics, endorsed by\nNIMH, provides such broad perspective to understand mental disorders. It is\nclear that learning sciences also need similar approach that will integrate\nefforts to understand cognitive processes from the perspective of the brain\ndevelopment, in temporal, spatial, psychological and social aspects. The brain\nis a substrate shaped by genetic, epigenetic, cellular and environmental\nfactors including education, individual experiences and personal history,\nculture, social milieu. Learning sciences should thus be based on the\nfoundation of neurocognitive phenomics. A brief review of selected aspects of\nsuch approach is presented, outlining new research directions. Central,\nperipheral and motor processes in the brain are linked to the inventory of the\nlearning styles.\n"} {"abstract": " This article describes novel approaches to quickly estimate planar surfaces\nfrom RGBD sensor data. 
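The sentences that follow reorganize a standard algebraic surface fit; for context, here is the textbook least-squares plane fit to points back-projected from a depth image that such a reformulation starts from. The pinhole intrinsics are hypothetical, and the paper's actual contribution (folding calibration terms into precomputed integral images) is not reproduced here.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def backproject(depth):
    """Convert a depth image (meters) to an N x 3 point cloud."""
    v, u = np.indices(depth.shape)
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.column_stack([x, y, z])

def fit_plane(points):
    """Algebraic least-squares plane: minimize ||[X 1] @ (a, b, c, d)||
    under a unit-norm constraint, via the smallest singular vector."""
    X = np.column_stack([points, np.ones(len(points))])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[-1]                # plane coefficients (a, b, c, d)

depth = np.full((480, 640), 1.5)          # synthetic flat wall at z = 1.5 m
plane = fit_plane(backproject(depth))
print(plane / np.linalg.norm(plane[:3]))  # ~ (0, 0, 1, -1.5) up to sign
```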
The approach manipulates the standard algebraic fitting\nequations into a form that allows many of the needed regression variables to be\ncomputed directly from the camera calibration information. As such, much of the\ncomputational burden required by a standard algebraic surface fit can be\npre-computed. This provides a significant time and resource savings, especially\nwhen many surface fits are being performed which is often the case when RGBD\npoint-cloud data is being analyzed for normal estimation, curvature estimation,\npolygonization or 3D segmentation applications. Using an integral image\nimplementation, the proposed approaches show a significant increase in\nperformance compared to the standard algebraic fitting approaches.\n"} {"abstract": " Compressing Deep Neural Network (DNN) models to alleviate the storage and\ncomputation requirements is essential for practical applications, especially\nfor resource limited devices. Although capable of reducing a reasonable amount\nof model parameters, previous unstructured or structured weight pruning methods\ncan hardly truly accelerate inference, either due to the poor hardware\ncompatibility of the unstructured sparsity or due to the low sparse rate of the\nstructurally pruned network. Aiming at reducing both storage and computation,\nas well as preserving the original task performance, we propose a generalized\nweight unification framework at a hardware compatible micro-structured level to\nachieve high amount of compression and acceleration. Weight coefficients of a\nselected micro-structured block are unified to reduce the storage and\ncomputation of the block without changing the neuron connections, which turns\nto a micro-structured pruning special case when all unified coefficients are\nset to zero, where neuron connections (hence storage and computation) are\ncompletely removed. In addition, we developed an effective training framework\nbased on the alternating direction method of multipliers (ADMM), which converts\nour complex constrained optimization into separately solvable subproblems.\nThrough iteratively optimizing the subproblems, the desired micro-structure can\nbe ensured with high compression ratio and low performance degradation. We\nextensively evaluated our method using a variety of benchmark models and\ndatasets for different applications. Experimental results demonstrate\nstate-of-the-art performance.\n"} {"abstract": " The asymptotic phase is a fundamental quantity for the analysis of\ndeterministic limit-cycle oscillators, and generalized definitions of the\nasymptotic phase for stochastic oscillators have also been proposed. In this\narticle, we show that the asymptotic phase and also amplitude can be defined\nfor classical and semiclassical stochastic oscillators in a natural and unified\nmanner by using the eigenfunctions of the Koopman operator of the system. We\nshow that the proposed definition gives appropriate values of the phase and\namplitude for strongly stochastic limit-cycle oscillators, excitable systems\nundergoing noise-induced oscillations, and also for quantum limit-cycle\noscillators in the semiclassical regime.\n"} {"abstract": " The relationship between galaxy characteristics and the reionization of the\nuniverse remains elusive, mainly due to the observational difficulty in\naccessing the Lyman continuum (LyC) at these redshifts. 
It is thus important to\nidentify low-redshift LyC-leaking galaxies that can be used as laboratories to\ninvestigate the physical processes that allow LyC photons to escape. The\nweakness of the [S II] nebular emission lines relative to typical star-forming\ngalaxies has been proposed as a LyC predictor. In this paper, we show that the\n[S II]-deficiency is an effective method to select LyC-leaking candidates using\ndata from the Low-redshift LyC Survey, which has detected flux below the Lyman\nedge in 35 out of 66 star-forming galaxies with the Cosmic Origins Spectrograph\nonboard the Hubble Space Telescope. We show that LyC leakers tend to be more [S\nII]-deficient and that the fraction of their detections increases as [S\nII]-deficiency becomes more prominent. Correlational studies suggest that [S\nII]-deficiency complements other LyC diagnostics (such as strong Lyman-$\\alpha$\nemission and high [O III]/[O II]). Our results verify an additional technique\nby which reionization-era galaxies could be studied.\n"} {"abstract": " We propose a novel analytical model for anisotropic multi-layer cylindrical\nstructures containing graphene layers. The general structure is formed by an\naperiodic repetition of a three-layer sub-structure, where a graphene layer,\nwith an isotropic surface conductivity, has been sandwiched between two\nadjacent magnetic materials. An external magnetic bias has been applied in the\naxial direction. General matrix representation is obtained in our proposed\nanalytical model to find the dispersion relation. The relation will be used to\nfind the effective index of the structure and its other propagation parameters.\nTwo special exemplary structures have been introduced and studied to show the\nrichness of the proposed general structure regarding the related specific\nplasmonic wave phenomena and effects. A series of simulations have been\nconducted to demonstrate the noticeable wave-guiding properties of the\nstructure in the 10-40 THz band. A very good agreement between the analytical\nand simulation results is observed. The proposed structure can be utilized to\ndesign novel plasmonic devices such as absorbers, modulators, plasmonic sensors\nand tunable antennas in the THz frequencies.\n"} {"abstract": " In this paper, meant as a companion to arXiv:2006.04458, we consider a class\nof non-integrable $2D$ Ising models in cylindrical domains, and we discuss two\nkey aspects of the multiscale construction of their scaling limit. In\nparticular, we provide a detailed derivation of the Grassmann representation of\nthe model, including a self-contained presentation of the exact solution of the\nnearest neighbor model in the cylinder. Moreover, we prove precise asymptotic\nestimates of the fermionic Green's function in the cylinder, required for the\nmultiscale analysis of the model. We also review the multiscale construction of\nthe effective potentials in the infinite volume limit, in a form suitable for\nthe generalization to finite cylinders. Compared to previous works, we\nintroduce a few important simplifications in the localization procedure and in\nthe iterative bounds on the kernels of the effective potentials, which are\ncrucial for the adaptation of the construction to domains with boundaries.\n"} {"abstract": " Many channel decoders rely on parallel decoding attempts to achieve good\nperformance with acceptable latency. 
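The abstract continuing below gives an input-distribution-aware answer to this latency problem. As a rough, invented illustration of the general idea (not the paper's algorithm), one can map a statistic of the observed channel log-likelihood ratios to a number of decoding attempts through calibrated thresholds:

```python
import numpy as np

# Hypothetical calibration: cleaner channels (larger mean |LLR|) need fewer
# parallel decoding attempts. Thresholds would be fit offline in practice.
THRESHOLDS = [(3.0, 1), (2.0, 2), (1.0, 8)]   # (min mean |LLR|, attempts)
MAX_ATTEMPTS = 32

def choose_parallelism(llrs):
    """Pick the number of parallel decoding attempts from the observed
    distribution of channel log-likelihood ratios."""
    stat = np.mean(np.abs(llrs))
    for threshold, attempts in THRESHOLDS:
        if stat >= threshold:
            return attempts
    return MAX_ATTEMPTS                        # noisy channel: full parallelism

rng = np.random.default_rng(3)
noisy_llrs = rng.normal(loc=1.2, scale=1.0, size=1024)
print(choose_parallelism(noisy_llrs))
```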
However, most of the time fewer attempts\nthan the foreseen maximum are sufficient for successful decoding.\nInput-distribution-aware (IDA) decoding allows one to determine the parallelism\nof polar code list decoders by observing the distribution of channel\ninformation. In this work, IDA decoding is proven to be effective with\ndifferent codes and decoding algorithms as well. Two techniques, M-IDA and\nMD-IDA, are proposed: they exploit the sampling of the input distribution\ninherent to particular decoding algorithms to perform low-cost IDA decoding.\nSimulation results on the decoding of BCH codes via the Chase and ORBGRAND\nalgorithms show that they perform at least as well as the original IDA\ndecoding, allowing run-time complexity to be reduced to 17% and 67% with\nminimal error-correction degradation.\n"} {"abstract": " It was shown that the particle distribution detected by a uniformly\naccelerated observer in the inertial vacuum (Unruh effect) deviates from the\npure Planckian spectrum when considering the superposition of fields with\ndifferent masses. Here we elaborate on the statistical origin of this\nphenomenon. In a suitable regime, we provide an effective description of the\nemergent distribution in terms of the nonextensive q-generalized statistics\nbased on Tsallis entropy. This picture allows us to establish a nontrivial\nrelation between the q-entropic index and the characteristic mixing parameters\n$\\sin\\theta$ and $\\Delta m$. In particular, we infer that q < 1, indicating the\nsuperadditive feature of Tsallis entropy in this framework. We discuss our\nresult in connection with the entangled condensate structure acquired by the\nquantum vacuum for mixed fields.\n"} {"abstract": " In this paper we present an early prototype of the Digger Finger that is\ndesigned to easily penetrate granular media and is equipped with the GelSight\nsensor. Identifying objects buried in granular media using tactile sensors is a\nchallenging task. First, particle jamming in granular media prevents downward\nmovement. Second, the granular media particles tend to get stuck between the\nsensing surface and the object of interest, distorting the actual shape of the\nobject. To tackle these challenges we present a Digger Finger prototype. It is\ncapable of fluidizing granular media during penetration using mechanical\nvibrations. It is equipped with high-resolution vision-based tactile sensing to\nidentify objects buried inside granular media. We describe the experimental\nprocedures we use to evaluate these fluidizing and buried shape recognition\ncapabilities. A robot with such fingers can perform explosive ordnance disposal\nand Improvised Explosive Device (IED) detection tasks at a much finer\nresolution compared to techniques like Ground Penetrating Radars (GPRs).\nSensors like the Digger Finger will allow robotic manipulation research to move\nbeyond only manipulating rigid objects.\n"} {"abstract": " Domain adaptation aims to leverage a label-rich domain (the source domain) to\nhelp model learning in a label-scarce domain (the target domain). Most domain\nadaptation methods require the co-existence of source and target domain samples\nto reduce the distribution mismatch; however, access to the source domain\nsamples may not always be feasible in real-world applications due to various\nproblems (e.g., storage, transmission, and privacy issues).
In this\nwork, we deal with the source data-free unsupervised domain adaptation problem,\nand propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).\nThe virtual domain acts as a bridge between the source and target domains. On\nthe one hand, we generate virtual domain samples based on an approximated\nGaussian Mixture Model (GMM) in the feature space with the pre-trained source\nmodel, such that the virtual domain maintains a distribution similar to that of\nthe source domain without access to the original source data. On the other\nhand, we also design an effective distribution alignment method to reduce the\ndistribution divergence between the virtual domain and the target domain by\ngradually improving the compactness of the target domain distribution through\nmodel learning. In this way, we successfully achieve the goal of distribution\nalignment between the source and target domains by training deep networks\nwithout access to the source domain data. We conduct extensive experiments\non benchmark datasets for both 2D image-based and 3D point cloud-based\ncross-domain object recognition tasks, where the proposed method, referred to\nas Domain Adaptation with Virtual Domain Modeling (VDM-DA), achieves\nstate-of-the-art performance on all datasets.\n"} {"abstract": " We consider the learning and prediction of nonlinear time series generated by\na latent symplectic map. A special case is (not necessarily separable)\nHamiltonian systems, whose solution flows give such symplectic maps. For this\nspecial case, both generic approaches based on learning the vector field of the\nlatent ODE and specialized approaches based on learning the Hamiltonian that\ngenerates the vector field exist. Our method, however, is different as it does\nnot rely on the vector field nor assume its existence; instead, it directly\nlearns the symplectic evolution map in discrete time. Moreover, we do so by\nrepresenting the symplectic map via a generating function, which we approximate\nby a neural network (hence the name GFNN). This way, our approximation of the\nevolution map is always \\emph{exactly} symplectic. This additional geometric\nstructure allows the local prediction error at each step to accumulate in a\ncontrolled fashion, and we will prove, under reasonable assumptions, that the\nglobal prediction error grows at most \\emph{linearly} with long prediction\ntime, which significantly improves an otherwise exponential growth. In\naddition, as a map-based and thus purely data-driven method, GFNN avoids two\nadditional sources of inaccuracies common in vector-field based approaches,\nnamely the error in approximating the vector field by finite difference of the\ndata, and the error in numerical integration of the vector field for making\npredictions. Numerical experiments further demonstrate our claims.\n"} {"abstract": " Early wildfire detection is of paramount importance to avoid as much damage\nas possible to the environment, properties, and lives. Deep Learning (DL)\nmodels that can leverage both visible and infrared information have the\npotential to display state-of-the-art performance, with lower false-positive\nrates than existing techniques. However, most DL-based image fusion methods\nhave not been evaluated in the domain of fire imagery. Additionally, to the\nbest of our knowledge, no publicly available dataset contains visible-infrared\nfused fire images. There is a growing interest in DL-based image fusion\ntechniques due to their reduced complexity.
For this reason, we select three\nstate-of-the-art DL-based image fusion techniques and evaluate them for the\nspecific task of fire image fusion. We compare the performance of these methods\non selected metrics. Finally, we also present an extension to one of these\nmethods, which we call FIRe-GAN, that improves the generation of artificial\ninfrared images and fused ones on selected metrics.\n"} {"abstract": " We consider linear systems $Ax = b$ where $A \\in \\mathbb{R}^{m \\times n}$\nconsists of normalized rows, $\\|a_i\\|_{\\ell^2} = 1$, and where up to $\\beta m$\nentries of $b$ have been corrupted (possibly by arbitrarily large numbers).\nHaddock, Needell, Rebrova and Swartworth propose a quantile-based Random\nKaczmarz method and show that for certain random matrices $A$ it converges with\nhigh likelihood to the true solution. We prove a deterministic version by\nconstructing, for any matrix $A$, a number $\\beta_A$ such that there is\nconvergence for all perturbations with $\\beta < \\beta_A$. Assuming a random\nmatrix heuristic, this proves convergence for tall Gaussian matrices with up to\n$\\sim 0.5\\%$ corruption (a number that can likely be improved).\n"} {"abstract": " In the framework of photonics with all-dielectric nanoantennas,\nsub-micrometric spheres can be exploited for a plethora of applications\nincluding vanishing back-scattering, enhanced directivity of a light emitter,\nbeam steering, and large Purcell factors. Here, the potential of a\nhigh-throughput fabrication method based on aerosol-spray is shown to form\nquasi-perfect sub-micrometric spheres of polycrystalline TiO$_2$. Spectroscopic\ninvestigation of light scattering from individual particles reveals sharp\nresonances in agreement with Mie theory, neat structural colors, and a high\ndirectivity. Owing to the high permittivity and lossless material in use, this\nmethod opens the way toward the implementation of isotropic meta-materials and\nforward-directional sources with magnetic responses at visible and near-UV\nfrequencies, not accessible with conventional Si- and Ge-based Mie resonators.\n"} {"abstract": " We exhibit the analog of the entropy map for multivariate Gaussian\ndistributions on local fields. As in the real case, the image of this map lies\nin the supermodular cone and it determines the distribution of the valuation\nvector. In general, this map can be defined for non-archimedean valued fields\nwhose valuation group is an additive subgroup of the real line, and it remains\nsupermodular. We also explicitly compute the image of this map in dimension 3.\n"} {"abstract": " We investigate the degradation of quantum entanglement in the\nSchwarzschild-de Sitter black hole spacetime, by studying the mutual\ninformation and the logarithmic negativity for maximally entangled, bipartite\ninitial states for massless minimal scalar fields. This spacetime is endowed\nwith a black hole as well as a cosmological event horizon, giving rise to\nparticle creation at two different temperatures. We consider two independent\ndescriptions of thermodynamics and particle creation in this background. The\nfirst involves thermal equilibrium of an observer with the individual Hawking\ntemperature of either of the horizons. We show that, as for asymptotically\nflat/anti-de Sitter black holes, the entanglement or correlation degrades with\nincreasing Hawking temperature.
The second treats both horizons\ntogether to define a total entropy and an effective equilibrium temperature.\nWe present a field theoretic derivation of this effective temperature and argue\nthat, unlike the usual cases, the particle creation here is not occurring in\ncausally disconnected spacetime wedges but in a single region. Using these\nstates, we then show that in this scenario the entanglement never degrades but\nincreases with increasing black hole temperature; this holds no matter how\nhot the black hole becomes or how small the cosmological constant is. We argue\nthat this phenomenon can have no analogue in the asymptotically flat/anti-de\nSitter black hole spacetimes.\n"} {"abstract": " Dynamic graphs arise in a plethora of practical scenarios such as social\nnetworks, communication networks, and financial transaction networks. Given a\ndynamic graph, it is fundamental and essential to learn a graph representation\nthat not only preserves structural proximity but also jointly captures the\ntime-evolving patterns. Recently, graph convolutional network (GCN)\nhas been widely explored and used in non-Euclidean application domains. The\nmain success of GCN, especially in handling dependencies and passing messages\nbetween nodes, lies in its approximation to Laplacian smoothing. As a matter of\nfact, this smoothing technique can not only encourage must-link node pairs to\nget closer but also push cannot-link pairs to shrink together, which\npotentially causes a serious feature-shrinking or oversmoothing problem,\nespecially when stacking graph convolutions over multiple layers or steps. For\nlearning time-evolving patterns, a natural solution is to preserve historical\nstate and combine it with the current interactions to obtain the most recent\nrepresentation. The serious feature-shrinking or oversmoothing problem can then\narise when graph convolutions are stacked explicitly or implicitly, as in\ncurrently prevalent methods, which would make nodes too similar to distinguish\nfrom each other. To solve this problem in dynamic graph embedding, we first\nanalyze the shrinking properties of the node embedding space, and then design a\nsimple yet versatile method that exploits an L2 feature normalization\nconstraint to rescale all nodes to the hypersphere of a unit ball, so that\nnodes do not shrink together and yet similar nodes can still get closer.\nExtensive experiments on four real-world dynamic graph datasets compared with\ncompetitive baseline models demonstrate the effectiveness of the proposed\nmethod.\n"} {"abstract": " Epitaxial orthorhombic Hf0.5Zr0.5O2 (HZO) films on La0.67Sr0.33MnO3 (LSMO)\nelectrodes show robust ferroelectricity, with high polarization, endurance and\nretention. However, no similar results have been achieved using other\nperovskite electrodes so far. Here, LSMO and other perovskite electrodes are\ncompared. A small amount of orthorhombic phase and low polarization is found in\nHZO films grown on La-doped BaSnO3 and Nb-doped SrTiO3, while no orthorhombic\nphase or polarization is detected in films on LaNiO3 and SrRuO3. The critical\neffect of the electrode on the stabilized phases is not a consequence of\ndifferences in the electrode lattice parameter. The interface is critical, and\nengineering the HZO bottom interface on just a few monolayers of LSMO permits\nthe stabilization of the orthorhombic phase.
Furthermore, while\nthe specific divalent ion (Sr or Ca) in the manganite is not relevant, reducing\nthe La content causes a severe reduction of the amount of orthorhombic phase\nand the ferroelectric polarization in the HZO film.\n"} {"abstract": " We show that for every positive integer $k$, there exist $k$ consecutive\nprimes having the property that if any digit of any one of the primes,\nincluding any of the infinitely many leading zero digits, is changed, then that\nprime becomes composite.\n"} {"abstract": " Classical and quantum correlation functions are derived for a system of\nnon-interacting particles moving on a circle. It is shown that the decaying\nbehaviour of the classical expression for the correlation function can be\nrecovered from the strictly periodic quantum mechanical expression by taking\nthe limit that the Planck's constant goes to zero, after an appropriate\ntransformation.\n"} {"abstract": " We propose a scheme to create an electronic Floquet vortex state by\nirradiating a two-dimensional semiconductor with the laser light carrying\nnon-zero orbital angular momentum. We analytically and numerically study the\nproperties of the Floquet vortex states, with the methods analogous to the ones\npreviously applied to the analysis of superconducting vortex states. We show\nthat such Floquet vortex states are similar to the superconducting vortex\nstates, and they exhibit a wide range of tunability. To illustrate the\npotential utility of such tunability, we show how such states could be used for\nquantum state engineering.\n"} {"abstract": " The redshift distribution of galactic-scale lensing systems provides a\nlaboratory to probe the velocity dispersion function (VDF) of early-type\ngalaxies (ETGs) and measure the evolution of early-type galaxies at redshift z\n~ 1. Through the statistical analysis of the currently largest sample of\nearly-type galaxy gravitational lenses, we conclude that the VDF inferred\nsolely from strong lensing systems is well consistent with the measurements of\nSDSS DR5 data in the local universe. In particular, our results strongly\nindicate a decline in the number density of lenses by a factor of two and a 20%\nincrease in the characteristic velocity dispersion for the early-type galaxy\npopulation at z ~ 1. Such VDF evolution is in perfect agreement with the\n$\\Lambda$CDM paradigm (i.e., the hierarchical build-up of mass structures over\ncosmic time) and different from \"stellar mass-downsizing\" evolutions obtained\nby many galaxy surveys. Meanwhile, we also quantitatively discuss the evolution\nof the VDF shape in a more complex evolution model, which reveals its strong\ncorrelation with that of the number density and velocity dispersion of\nearly-type galaxies. Finally, we evaluate if future missions such as LSST can\nbe sensitive enough to place the most stringent constraints on the redshift\nevolution of early-type galaxies, based on the redshift distribution of\navailable gravitational lenses.\n"} {"abstract": " A promising approach to the practical application of the Quantum Approximate\nOptimization Algorithm (QAOA) is finding QAOA parameters classically in\nsimulation and sampling the solutions from QAOA with optimized parameters on a\nquantum computer. Doing so requires repeated evaluations of QAOA energy in\nsimulation. We propose a novel approach for accelerating the evaluation of QAOA\nenergy by leveraging the symmetry of the problem. 
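The term-grouping that the QAOA abstract goes on to describe is easy to state in code: if a known symmetry permutation maps cost terms onto each other, only one representative per orbit needs a (costly) expectation evaluation. The sketch below does this for MaxCut edges with a brute-force orbit computation; in practice a graph-automorphism solver would supply the permutations, and the per-edge expectation would come from a simulator.

```python
def edge_orbits(edges, perm):
    """Group edges into orbits under a known symmetry permutation."""
    canon = lambda e: tuple(sorted(e))
    remaining = set(map(canon, edges))
    orbits = []
    while remaining:
        e = remaining.pop()
        orbit = {e}
        f = canon((perm[e[0]], perm[e[1]]))
        while f not in orbit:
            orbit.add(f)
            remaining.discard(f)
            f = canon((perm[f[0]], perm[f[1]]))
        orbits.append(orbit)
    return orbits

# 4-cycle graph; the rotation i -> i+1 (mod 4) is an automorphism.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
perm = {0: 1, 1: 2, 2: 3, 3: 0}
orbits = edge_orbits(edges, perm)

# The QAOA MaxCut energy is a sum of per-edge expectations; edges in the
# same orbit contribute equally, so evaluate one representative per orbit
# and weight it by the orbit size.
def energy(edge_expectation):
    return sum(len(o) * edge_expectation(next(iter(o))) for o in orbits)

print([sorted(o) for o in orbits])   # one orbit covering all four edges
```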
We show a connection between\nclassical symmetries of the objective function and the symmetries of the terms\nof the cost Hamiltonian with respect to the QAOA energy. We show how by\nconsidering only the terms that are not connected by symmetry, we can\nsignificantly reduce the cost of evaluating the QAOA energy. Our approach is\ngeneral and applies to any known subgroup of symmetries and is not limited to\ngraph problems. Our results are directly applicable to nonlocal QAOA\ngeneralization RQAOA. We outline how available fast graph automorphism solvers\ncan be leveraged for computing the symmetries of the problem in practice. We\nimplement the proposed approach on the MaxCut problem using a state-of-the-art\ntensor network simulator and a graph automorphism solver on a benchmark of 48\ngraphs with up to 10,000 nodes. Our approach provides an improvement for $p=1$\non $71.7\\%$ of the graphs considered, with a median speedup of $4.06$, on a\nbenchmark where $62.5\\%$ of the graphs are known to be hard for automorphism\nsolvers.\n"} {"abstract": " Methods inspired from machine learning have recently attracted great interest\nin the computational study of quantum many-particle systems. So far, however,\nit has proven challenging to deal with microscopic models in which the total\nnumber of particles is not conserved. To address this issue, we propose a new\nvariant of neural network states, which we term neural coherent states. Taking\nthe Fr\\\"ohlich impurity model as a case study, we show that neural coherent\nstates can learn the ground state of non-additive systems very well. In\nparticular, we observe substantial improvement over the standard coherent state\nestimates in the most challenging intermediate coupling regime. Our approach is\ngeneric and does not assume specific details of the system, suggesting wide\napplications.\n"} {"abstract": " This paper presents a fundamental analysis connecting phase noise and\nlong-term frequency accuracy of oscillators and explores the possibilities and\nlimitations in crystal-less frequency calibration for wireless edge nodes from\na noise-impact perspective. N-period-average jitter (NPAJ) is introduced as a\nlink between the spectral characterization of phase noise and long-term\nfrequency accuracy. It is found that flicker noise or other colored noise\nprofiles coming from the reference in a frequency synthesizer is the dominant\nnoise source affecting long-term frequency accuracy. An average processing unit\nembedded in an ADPLL is proposed based on the N-period-average jitter concept\nto enhance frequency accuracy in a Calibrate and Open-loop scenario commonly\nused in low power radios. With this low-cost block, the frequency calibration\naccuracy can be directly associated with the reference noise performance. Thus,\nthe feasibility of XO-less design with certain communication standards can be\neasily evaluated with the proposed theory.\n"} {"abstract": " Using a differential equation approach asymptotic expansions are rigorously\nobtained for Lommel, Weber, Anger-Weber and Struve functions, as well as\nNeumann polynomials, each of which is a solution of an inhomogeneous Bessel\nequation. 
The approximations involve Airy and Scorer functions, and are\nuniformly valid for large real order $\nu$ and unbounded complex argument $z$.\nAn interesting complication is the identification of the Lommel functions with\nthe new asymptotic solutions, and in order to do so it is necessary to consider\ncertain sectors of the complex plane, as well as introduce new forms of Lommel\nand Struve functions.\n"} {"abstract": " Context: Expert judgement is a common method for software effort estimation\nin practice today. Estimators are often shown extra obsolete requirements\ntogether with the real ones to be implemented. Only one previous study has been\nconducted on whether such practices bias the estimations. Objective: We\nconducted six experiments with both students and practitioners to study, and\nquantify, the effects of obsolete requirements on software estimation. Method:\nBy conducting a family of six experiments using both students and practitioners\nas research subjects (N = 461), and by using a Bayesian Data Analysis approach,\nwe investigated different aspects of this effect. We also argue for, and show\nan example of, how we by using a Bayesian approach can be more confident in our\nresults and enable further studies with small sample sizes. Results: We found\nthat the presence of obsolete requirements triggered an overestimation in\neffort across all experiments. The effect, however, was smaller in a field\nsetting compared to using students as subjects. Still, the over-estimations\ntriggered by the obsolete requirements were systematically around twice the\npercentage of the included obsolete ones, but with a large 95% credible\ninterval. Conclusions: The results have implications for both research and\npractice in that the observed systematic error should be accounted for in both\nstudies on software estimation and, maybe more importantly, in estimation\npractices to avoid over-estimation due to this systematic error. We partly\nexplain this error as stemming from the cognitive bias of\nanchoring-and-adjustment, i.e. the obsolete requirements anchored a much larger\nsoftware system. However, further studies are needed in order to accurately\npredict this effect.\n"} {"abstract": " In this review I will discuss the comparison between model results and\nobservational data for the Milky Way, the predictive power of such models as\nwell as their limits. Such a comparison, known as Galactic archaeology, allows\nus to impose constraints on stellar nucleosynthesis and timescales of formation\nof the various Galactic components (halo, bulge, thick disk and thin disk).\n"} {"abstract": " We focus on studying the opacity of iron, chromium, and nickel plasmas at\nconditions relevant to experiments carried out at Sandia National Laboratories\n[J. E. Bailey et al., Nature 517, 56 (2015)]. We calculate the photo-absorption\ncross-sections and subsequent opacity for plasmas using linear response\ntime-dependent density functional theory (TD-DFT). Our results indicate that\nthe physics of channel mixing accounted for in linear response TD-DFT leads to\nan increase in the opacity in the bound-free quasi-continuum, where the Sandia\nexperiments indicate that models under-predict iron opacity. However, the\nincrease seen in our calculations is only in the range of 5-10%. Further, we do\nnot see any change in this trend for chromium and nickel.
This behavior\nindicates that channel mixing effects do not explain the trends in opacity\nobserved in the Sandia experiments.\n"} {"abstract": " In this note, we prove some results related to small perturbations of a frame\nfor a Hilbert space $\\mathcal{H}$ in order to have a woven pair for\n$\\mathcal{H}$. Our results complete those known in the literature. In addition\nwe study a necessary condition for a woven pair, that resembles a\ncharacterization for Riesz frames.\n"} {"abstract": " Solving the Multi-Agent Path Finding (MAPF) problem optimally is known to be\nNP-Hard for both make-span and total arrival time minimization. While many\nalgorithms have been developed to solve MAPF problems, there is no dominating\noptimal MAPF algorithm that works well in all types of problems and no standard\nguidelines for when to use which algorithm. In this work, we develop the deep\nconvolutional network MAPFAST (Multi-Agent Path Finding Algorithm SelecTor),\nwhich takes a MAPF problem instance and attempts to select the fastest\nalgorithm to use from a portfolio of algorithms. We improve the performance of\nour model by including single-agent shortest paths in the instance embedding\ngiven to our model and by utilizing supplemental loss functions in addition to\na classification loss. We evaluate our model on a large and diverse dataset of\nMAPF instances, showing that it outperforms all individual algorithms in its\nportfolio as well as the state-of-the-art optimal MAPF algorithm selector. We\nalso provide an analysis of algorithm behavior in our dataset to gain a deeper\nunderstanding of optimal MAPF algorithms' strengths and weaknesses to help\nother researchers leverage different heuristics in algorithm designs.\n"} {"abstract": " We studied the accretion disc structure in the doubly imaged lensed quasar\nSDSS J1339+1310 using $r$-band light curves and UV-visible to near-IR (NIR)\nspectra from the first 11 observational seasons after its discovery. The\n2009$-$2019 light curves displayed pronounced microlensing variations on\ndifferent timescales, and this microlensing signal permitted us to constrain\nthe half-light radius of the 1930 \\r{A} continuum-emitting region. Assuming an\naccretion disc with an axis inclined at 60 deg to the line of sight, we\nobtained log$_{10}$($r_{1/2}$/cm) = 15.4$^{+0.3}_{-0.4}$. We also estimated the\ncentral black hole mass from spectroscopic data. The width of the Civ, Mgii,\nand H$\\beta$ emission lines, and the continuum luminosity at 1350, 3000, and\n5100 \\r{A}, led to log$_{10}$($M_{BH}$/M$_{\\odot}$) = 8.6 $\\pm$ 0.4. Thus, hot\ngas responsible for the 1930 \\r{A} continuum emission is likely orbiting a 4.0\n$\\times$ 10$^8$ M$_{\\odot}$ black hole at an $r_{1/2}$ of only a few tens of\nSchwarzschild radii.\n"} {"abstract": " Let pi = pi_1 pi_2 ... pi_n be a permutation in the symmetric group S_n\nwritten in one-line notation. The pinnacle set of pi, denoted Pin pi, is the\nset of all pi_i such that pi_{i-1} < pi_i > pi_{i+1}. This is an analogue of\nthe well-studied peak set of pi where one considers values rather than\npositions. The pinnacle set was introduced by Davis, Nelson, Petersen, and\nTenner who showed that it has many interesting properties. In particular, they\nproved that the number of subsets of [n] = {1, 2, ..., n} which can be the\npinnacle set of some permutation is a binomial coefficient. Their proof\ninvolved a bijection with lattice paths and was somewhat involved. 
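To fix the definitions the pinnacle-set abstract relies on, here is a tiny brute-force sketch that computes the pinnacle set of a permutation and enumerates all admissible pinnacle sets for small n (factorial-time, for illustration only).

```python
from itertools import permutations

def pinnacle_set(pi):
    """Pin(pi): values pi[i] with pi[i-1] < pi[i] > pi[i+1]."""
    return frozenset(
        pi[i] for i in range(1, len(pi) - 1) if pi[i - 1] < pi[i] > pi[i + 1]
    )

def admissible_pinnacle_sets(n):
    """All sets arising as Pin(pi) for some pi in S_n."""
    return {pinnacle_set(p) for p in permutations(range(1, n + 1))}

print(pinnacle_set((2, 4, 1, 5, 3)))        # frozenset({4, 5})
print(len(admissible_pinnacle_sets(5)))     # 6 admissible sets for n = 5
```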
We give a\nsimpler demonstration of this result which does not need lattice paths.\nMoreover, we show that our map and theirs are different descriptions of the\nsame function. Davis et al. also studied the number of pinnacle sets with\nmaximum m and cardinality d which they denoted by p(m,d). We show that these\nintegers are ballot numbers and give two proofs of this fact: one using finite\ndifferences and one bijective. Diaz-Lopez, Harris, Huang, Insko, and Nilsen\nfound a summation formula for calculating the number of permutations in S_n\nhaving a given pinnacle set. We derive a new expression for this number which\nis faster to calculate in many cases. We also show how this method can be\nadapted to find the number of orderings of a pinnacle set which can be realized\nby some pi in S_n.\n"} {"abstract": " This article discusses the effects of the spiral-arm corotation on the\nstellar dynamics in the Solar Neighborhood (SN). All our results presented here\nrely on: 1) observational evidence that the Sun lies near the corotation\ncircle, where stars rotate with the same angular velocity as the spiral-arm\npattern; the corotation circle establishes domains of the corotation resonance\n(CR) in the Galactic disk; 2) dynamical constraints that put the spiral-arm\npotential as the dominant perturbation in the SN, comparing with the effects of\nthe central bar in the SN; 3) a long-lived nature of the spiral structure,\npromoting a state of dynamical relaxing and phase-mixing of the stellar orbits\nin response to the spiral perturbation. With an analytical model for the\nGalactic potential, composed of an axisymmetric background deduced from the\nobserved rotation curve, and perturbed by a four-armed spiral pattern,\nnumerical simulations of stellar orbits are performed to delineate the domains\nof regular and chaotic motions shaped by the resonances. Such studies show that\nstars can be trapped inside the stable zones of the spiral CR, and this orbital\ntrapping mechanism could explain the dynamical origin of the Local arm of the\nMilky Way (MW). The spiral CR and the near high-order epicyclic resonances\ninfluence the velocity distribution in the SN, creating the observable\nstructures such as moving groups and their radially extended counterpart known\nas diagonal ridges. The Sun and most of the SN stars evolve inside a stable\nzone of the spiral CR, never crossing the main spiral-arm structure, but\noscillating in the region between the Sagittarius-Carina and Perseus arms. This\norbital behavior of the Sun brings insights to our understanding of questions\nconcerning the solar system evolution, the Earth environment changes, and the\npreservation of life on Earth.\n"} {"abstract": " Context. The tropospheric wind pattern in Jupiter consists of alternating\nprograde and retrograde zonal jets with typical velocities of up to 100 m/s\naround the equator. At much higher altitudes, in the ionosphere, strong auroral\njets have been discovered with velocities of 1-2 km/s. There is no such direct\nmeasurement in the stratosphere of the planet. Aims. In this paper, we bridge\nthe altitude gap between these measurements by directly measuring the wind\nspeeds in Jupiter's stratosphere. Methods. We use the Atacama Large\nMillimeter/submillimeter Array's very high spectral and angular resolution\nimaging of the stratosphere of Jupiter to retrieve the wind speeds as a\nfunction of latitude by fitting the Doppler shifts induced by the winds on the\nspectral lines. Results. 
We detect for the first time equatorial zonal jets\nthat reside at 1 mbar, i.e. above the altitudes where Jupiter's\nQuasi-Quadrennial Oscillation occurs. Most noticeably, we find 300-400 m/s\nnon-zonal winds at 0.1 mbar over the polar regions underneath the main auroral\novals. They are in counter-rotation and lie several hundreds of kilometers\nbelow the ionospheric auroral winds. We suspect them to be the lower tail of\nthe ionospheric auroral winds. Conclusions. We detect directly and for the\nfirst time strong winds in Jupiter's stratosphere. They are zonal at low-to-mid\nlatitudes and non-zonal at polar latitudes. The wind system found at polar\nlatitudes may help increase the efficiency of chemical complexification by\nconfining the photochemical products in a region of large energetic electron\nprecipitation.\n"} {"abstract": " Developers of computer vision algorithms outsource some of the labor involved\nin annotating training data through business process outsourcing companies and\ncrowdsourcing platforms. Many data annotators are situated in the Global South\nand are considered independent contractors. This paper focuses on the\nexperiences of Argentinian and Venezuelan annotation workers. Through\nqualitative methods, we explore the discourses encoded in the task instructions\nthat these workers follow to annotate computer vision datasets. Our preliminary\nfindings indicate that annotation instructions reflect worldviews imposed on\nworkers and, through their labor, on datasets. Moreover, we observe that\nfor-profit goals drive task instructions and that managers and algorithms make\nsure annotations are done according to requesters' commands. This configuration\npresents a form of commodified labor that perpetuates power asymmetries while\nreinforcing social inequalities and is compelled to reproduce them into\ndatasets and, subsequently, in computer vision systems.\n"} {"abstract": " The increasing concerns about data privacy and security drive an emerging\nfield of studying privacy-preserving machine learning from isolated data\nsources, i.e., federated learning. A class of federated learning, vertical\nfederated learning, where different parties hold different features for common\nusers, has great potential for driving a wide variety of business cooperation\namong enterprises in many fields. In machine learning, decision tree ensembles\nsuch as gradient boosting decision trees (GBDT) and random forest are widely\napplied powerful models with high interpretability and modeling efficiency.\nHowever, state-of-the-art vertical federated learning frameworks adopt anonymous\nfeatures to avoid possible data breaches, which compromises the interpretability\nof the model. To address this issue in the inference process, in this paper, we\nfirst analyze the necessity of disclosing the meanings of features to the Guest\nParty in vertical federated learning. Then we show that the prediction result of\na tree can be expressed as the intersection of results of sub-models of the tree\nheld by all parties. With this key observation, we protect data privacy and\nallow the disclosure of feature meaning by concealing decision paths and adopt a\ncommunication-efficient secure computation method for inference outputs.
The advantages of Fed-EINI will be\ndemonstrated through both theoretical analysis and extensive numerical results.\nWe improve the interpretability of the model by disclosing the meaning of\nfeatures while ensuring efficiency and accuracy.\n"} {"abstract": " The detection and characterization of young planetary systems offers a direct\npath to study the processes that shape planet evolution. We report on the\ndiscovery of a sub-Neptune-size planet orbiting the young star HD 110082\n(TOI-1098). Transit events we initially detected during TESS Cycle 1 are\nvalidated with time-series photometry from Spitzer. High-contrast imaging and\nhigh-resolution, optical spectra are also obtained to characterize the stellar\nhost and confirm the planetary nature of the transits. The host star is a late\nF dwarf (M=1.2 Msun) with a low-mass, M dwarf binary companion (M=0.26 Msun)\nseparated by nearly one arcminute (~6200 AU). Based on its rapid rotation and\nLithium absorption, HD 110082 is young, but is not a member of any known group\nof young stars (despite proximity to the Octans association). To measure the\nage of the system, we search for coeval, phase-space neighbors and compile a\nsample of candidate siblings to compare with the empirical sequences of young\nclusters and to apply quantitative age-dating techniques. In doing so, we find\nthat HD 110082 resides in a new young stellar association we designate\nMELANGE-1, with an age of 250(+50/-70) Myr. Jointly modeling the TESS and\nSpitzer light curves, we measure a planetary orbital period of 10.1827 days and\nradius of Rp = 3.2(+/-0.1) Earth radii. HD 110082 b's radius falls in the\nlargest 12% of field-age systems with similar host star mass and orbital\nperiod. This finding supports previous studies indicating that young planets\nhave larger radii than their field-age counterparts.\n"} {"abstract": " Infinitesimal symmetries of a classical mechanical system are usually\ndescribed by a Lie algebra acting on the phase space, preserving the Poisson\nbrackets. We propose that a quantum analogue is the action of a Lie bi-algebra\non the associative $*$-algebra of observables. The latter can be thought of as\nfunctions on some underlying non-commutative manifold. We illustrate this for\nthe non-commutative torus $\\mathbb{T}^2_\\theta$. The canonical trace defines a\nManin triple from which a Lie bi-algebra can be constructed. In the special\ncase of rational $\\theta=\\frac{M}{N}$ this Lie bi-algebra is\n$\\underline{GL}(N)=\\underline{U}(N)\\oplus \\underline{B}(N)$, corresponding to\nunitary and upper triangular matrices. The Lie bi-algebra has a remnant in the\nclassical limit $N\\to\\infty$: the elements of $\\underline{U}(N)$ tend to real\nfunctions while $\\underline{B}(N)$ tends to a space of complex analytic\nfunctions.\n"} {"abstract": " We study the problem of determining the best intervention in a Causal\nBayesian Network (CBN) specified only by its causal graph. We model this as a\nstochastic multi-armed bandit (MAB) problem with side-information, where the\ninterventions correspond to the arms of the bandit instance. First, we propose\na simple regret minimization algorithm that takes as input a semi-Markovian\ncausal graph with atomic interventions and possibly unobservable variables, and\nachieves $\\tilde{O}(\\sqrt{M/T})$ expected simple regret, where $M$ is dependent\non the input CBN and could be very small compared to the number of arms. 
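The vertical-federated-learning abstract above rests on one key observation: a tree's prediction is the intersection of the candidate leaf sets each party derives from the splits it can evaluate. A toy sketch of that intersection step (the tree, feature names, and thresholds are hypothetical; this is not the Fed-EINI code):

```python
# Each party returns the set of leaves its own evidence cannot rule out;
# intersecting these sets across parties pins down the leaf the full tree reaches.
TREE_LEAVES = {"L1", "L2", "L3", "L4"}
SPLITS = [
    ("age", 40, {"L1", "L2"}, {"L3", "L4"}),      # split held by party A
    ("income", 5e4, {"L1", "L3"}, {"L2", "L4"}),  # split held by party B
]

def candidate_leaves(party_splits, sample):
    """Leaves consistent with the splits a single party can evaluate."""
    cands = set(TREE_LEAVES)
    for feat, thr, left, right in party_splits:
        cands &= left if sample[feat] <= thr else right
    return cands

leaves_a = candidate_leaves([SPLITS[0]], {"age": 35})
leaves_b = candidate_leaves([SPLITS[1]], {"income": 60000})
print(leaves_a & leaves_b)  # {'L2'}: the leaf the full tree would reach
```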
We\nalso show that this is almost optimal for CBNs described by causal graphs\nhaving an $n$-ary tree structure. Our simple regret minimization results, both\nupper and lower bound, subsume previous results in the literature, which\nassumed additional structural restrictions on the input causal graph. In\nparticular, our results indicate that the simple regret guarantee of our\nproposed algorithm can only be improved by considering more nuanced structural\nrestrictions on the causal graph. Next, we propose a cumulative regret\nminimization algorithm that takes as input a general causal graph with all\nobservable nodes and atomic interventions and performs better than the optimal\nMAB algorithm that does not take causal side-information into account. We also\nexperimentally compare both our algorithms with the best known algorithms in\nthe literature. To the best of our knowledge, this work gives the first simple\nand cumulative regret minimization algorithms for CBNs with general causal\ngraphs under atomic interventions and having unobserved confounders.\n"} {"abstract": " In its standard formulation, quantum backflow is a classically impossible\nphenomenon in which a free quantum particle in a positive-momentum state\nexhibits a negative probability current. Recently, Miller et al. [Quantum 5,\n379 (2021)] have put forward a new, \"experiment-friendly\" formulation of\nquantum backflow that aims at extending the notion of quantum backflow to\nsituations in which the particle's state may have both positive and negative\nmomenta. Here, we investigate how the experiment-friendly formulation of\nquantum backflow compares to the standard one when applied to a free particle\nin a positive-momentum state. We show that the two formulations are not always\ncompatible. We further identify a parametric regime in which the two\nformulations appear to be in qualitative agreement with one another.\n"} {"abstract": " This paper studies the generation and transmission expansion co-optimization\nproblem with a high wind power penetration rate in large-scale power grids. Here,\ngeneration and transmission expansion co-optimization is modeled as a\nmixed-integer programming (MIP) problem. A scenario creation method is\nproposed to capture the variation and correlation of both load and wind power\nacross regions for large-scale power grids. Obtained scenarios that represent\nload and wind uncertainties can be easily introduced into the MIP problem and\nthen solved to obtain the co-optimized generation and transmission expansion\nplan. Simulation results show that the proposed planning model and the scenario\ncreation method significantly improve the expansion result by modeling more\ndetailed information of wind and load variation among regions in the US EI\nsystem. The improved expansion plan that combines generation and transmission\nwill aid system planners and policy makers in maximizing the social welfare in\nlarge-scale power grids.\n"} {"abstract": " We provide a systematic method to deduce the global form of flavor symmetry\ngroups in 4d N=2 theories obtained by compactifying 6d N=(2,0) superconformal\nfield theories (SCFTs) on a Riemann surface carrying regular punctures and\npossibly outer-automorphism twist lines. A priori, this method only determines\nthe group associated to the manifest part of the flavor symmetry algebra, but\noften this information is enough to determine the group associated to the full\nenhanced flavor symmetry algebra.
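The causal-bandit abstract above targets simple regret: after a budget of T interventions, recommend one arm and pay the gap to the best. A baseline sketch without any causal side-information, useful as the reference point such algorithms improve on (toy Bernoulli arms; all numbers are our own assumptions):

```python
import random

def uniform_explore(arm_means, T, rng=random.Random(0)):
    """Pull arms round-robin, then recommend the empirical best."""
    n = len(arm_means)
    pulls, sums = [0] * n, [0.0] * n
    for t in range(T):
        a = t % n
        pulls[a] += 1
        sums[a] += float(rng.random() < arm_means[a])  # Bernoulli reward
    return max(range(n), key=lambda a: sums[a] / pulls[a])

arm_means = [0.3, 0.5, 0.7, 0.45]            # hypothetical interventions
rec = uniform_explore(arm_means, T=4000)
print("recommended:", rec, "simple regret:", max(arm_means) - arm_means[rec])
```

The paper's point is that exploiting the causal graph shrinks the effective number of arms M, improving on this uniform baseline.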
Such cases include some interesting and\nwell-studied 4d N=2 SCFTs like the Minahan-Nemeschansky theories. The symmetry\ngroups obtained via this method match with the symmetry groups obtained using a\nLagrangian description if such a description arises in some duality frame.\nMoreover, we check that the proposed symmetry groups are consistent with the\nsuperconformal indices available in the literature. As another application, our\nmethod finds distinct global forms of flavor symmetry group for pairs of\ninteracting 4d N=2 SCFTs (recently pointed out in the literature) whose Coulomb\nbranch dimensions, flavor algebras and levels coincide (along with other\ninvariants), but nonetheless are distinct SCFTs.\n"} {"abstract": " Two years ago, we alerted the scientific community to the large number of\nbad papers in the literature on {\\it zero difference balanced functions}, where\ndirect proofs of seemingly new results are presented in an unnecessarily\nlengthy and convoluted way. Indeed, these results had been proved long before\nand very easily in terms of difference families.\n In spite of our report, papers of the same kind continue to proliferate.\nRegrettably, a further attempt to put the topic in order seems unavoidable.\nWhile some authors now follow our recommendation of using the terminology of\n{\\it partitioned difference families}, their methods are still the same and\ntheir results are often trivial or even wrong. In this note, we show how a very\nrecent paper of this type can be easily dealt with.\n"} {"abstract": " A neighborhood restricted Mixed Gibbs Sampling (MGS) based approach is\nproposed for low-complexity high-order modulation large-scale Multiple-Input\nMultiple-Output (LS-MIMO) detection. The proposed LS-MIMO detector applies a\nneighborhood limitation (NL) on the noisy solution from the MGS at a distance d\n- thus, named d-simplified MGS (d-sMGS) - in order to mitigate its impact,\nwhich can be harmful when a high order modulation is considered. Numerical\nsimulation results considering 64-QAM demonstrated that the proposed detection\nmethod can substantially improve the MGS algorithm convergence, whereas no\nextra computational complexity per iteration is required. The proposed\nd-sMGS-based detector suitable for high-order modulation LS-MIMO further\nexhibits an improved performance vs. complexity tradeoff when the system loading\nis high, i.e., when K >= 0.75 N. Also, as the number of dimensions increases,\ni.e., with more antennas and/or a higher modulation order, the milder\nrestriction of 2-sMGS was shown to be a more interesting choice than 1-sMGS.\n"} {"abstract": " Darwinian evolution tends to produce energy-efficient outcomes. On the other\nhand, energy limits computation, be it neural and probabilistic or digital and\nlogical. Taking a particular energy-efficient viewpoint, we define neural\ncomputation and make use of an energy-constrained, computational function. This\nfunction can be optimized over a variable that is proportional to the number of\nsynapses per neuron. This function also implies a specific distinction between\nATP-consuming processes, especially computation \\textit{per se} vs the\ncommunication processes including action potentials and transmitter release.\nThus, applying this mathematical function requires an energy audit with a\npartitioning of energy consumption that differs from earlier work.
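One plausible reading of the neighborhood limitation in the d-sMGS abstract above, sketched for a single real dimension of a QAM constellation (the indexing convention is our assumption, not the paper's):

```python
def neighborhood(prev_idx, d, n_levels=8):
    """Candidate 1D constellation indices within d steps of the current estimate."""
    return [i for i in range(n_levels) if abs(i - prev_idx) <= d]

# 64-QAM has 8 PAM levels per real dimension; a 2-sMGS update only
# considers symbols within two levels of the current estimate.
print(neighborhood(prev_idx=3, d=2))  # [1, 2, 3, 4, 5]
```

Restricting each Gibbs update to this candidate set is what keeps the per-iteration complexity unchanged while damping harmful noisy jumps at high modulation orders.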
The audit\npoints out that, rather than the oft-quoted 20 watts of glucose available to\nthe brain \\cite{sokoloff1960metabolism,sawada2013synapse}, the fraction\npartitioned to cortical computation is only 0.1 watts of ATP. On the other hand,\nat 3.5 watts, long-distance communication costs are 35-fold greater. Other\nnovel quantifications include (i) a finding that the biological vs ideal values\nof neural computational efficiency differ by a factor of $10^8$ and (ii) two\npredictions of $N$, the number of synaptic transmissions needed to fire a\nneuron (2500 vs 2000).\n"} {"abstract": " Every physical system is characterized by its action. The standard measure of\nintegration is the square root of minus the determinant of the metric. It is\nchosen on the basis of a single requirement that it must be a density under\ndiffeomorphic transformations. Therefore, it may not be a unique choice. In\nthis thesis, we develop the two-measure and the Galileon measure string and\nsuperstring actions, apply one of them to the string model of hadrons and\npresent the modified measure extension to higher dimensional extended objects.\n"} {"abstract": " Traditional image classification techniques often produce unsatisfactory\nresults when applied to high spatial resolution data because classes in high\nresolution images are not spectrally homogeneous. Texture offers an alternative\nsource of information for classifying these images. This paper evaluates a\nrecently developed, computationally simple texture metric called Weber Local\nDescriptor (WLD) for use in classifying high resolution QuickBird panchromatic\ndata. We compared WLD with state-of-the-art texture descriptors (TD) including\nLocal Binary Pattern (LBP) and its rotation-invariant version LBPRIU. We also\ninvestigated whether incorporating VAR, a TD that captures brightness\nvariation, would improve the accuracy of LBPRIU and WLD. We found that WLD\ngenerally produces more accurate classification results than the other TD we\nexamined, and is also more robust to varying parameters. We have implemented an\noptimised algorithm for calculating WLD which makes the technique practical in\nterms of computation time. Overall, our results indicate that WLD is a\npromising approach for classifying high resolution remote sensing data.\n"} {"abstract": " Image completion has made tremendous progress with convolutional neural\nnetworks (CNNs), because of their powerful texture modeling capacity. However,\ndue to some inherent properties (e.g., local inductive prior, spatial-invariant\nkernels), CNNs do not perform well in understanding global structures or\nnaturally support pluralistic completion. Recently, transformers have\ndemonstrated their power in modeling long-term relationships and generating\ndiverse results, but their computation complexity is quadratic to input length,\nthus hampering their application to high-resolution images. This paper\nbrings the best of both worlds to pluralistic image completion: appearance\nprior reconstruction with transformer and texture replenishment with CNN. The\nformer transformer recovers pluralistic coherent structures together with some\ncoarse textures, while the latter CNN enhances the local texture details of\ncoarse priors guided by the high-resolution masked images.
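The WLD abstract above relies on the Weber Local Descriptor's differential excitation, a cheap per-pixel statistic. A sketch of that component (the full WLD also histograms a gradient orientation, omitted here; image size and bin count are illustrative):

```python
import numpy as np

def differential_excitation(img):
    """WLD excitation: arctan(sum of neighbor differences / center intensity)."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    diff_sum = -8.0 * img                       # accumulates sum_i (x_i - x_c)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                diff_sum += pad[1 + dy : 1 + dy + img.shape[0],
                                1 + dx : 1 + dx + img.shape[1]]
    return np.arctan(diff_sum / np.maximum(img, 1e-6))

xi = differential_excitation(np.random.randint(0, 256, (64, 64)))
hist, _ = np.histogram(xi, bins=16, range=(-np.pi / 2, np.pi / 2))
print(hist)  # a 16-bin excitation histogram, one building block of WLD
```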
The proposed method\nvastly outperforms state-of-the-art methods in terms of three aspects: 1) large\nperformance boost on image fidelity even compared to deterministic completion\nmethods; 2) better diversity and higher fidelity for pluralistic completion; 3)\nexceptional generalization ability on large masks and generic datasets, like\nImageNet.\n"} {"abstract": " Recently, anchor-based methods have achieved great progress in face\ndetection. Once the anchor design and anchor matching strategy are determined,\nplenty of positive anchors will be sampled. However, faces with extreme aspect\nratios always fail to be sampled under the standard anchor matching strategy. In\nfact, the max IoUs between anchors and extreme aspect ratio faces are still\nlower than the fixed sampling threshold. In this paper, we first explore, in\ntheory, the factors that affect the max IoU of each face. Then, anchor matching\nsimulation is performed to evaluate the sampling range of face aspect ratio.\nBesides, we propose a Wide Aspect Ratio Matching (WARM) strategy to collect\nmore representative positive anchors from ground-truth faces across a wide\nrange of aspect ratios. Finally, we present a novel feature enhancement module,\nnamed Receptive Field Diversity (RFD) module, to provide diverse receptive\nfields corresponding to different aspect ratios. Extensive experiments show that\nour method can help detectors better capture extreme aspect ratio faces and\nachieve promising detection performance on challenging face detection\nbenchmarks, including the WIDER FACE and FDDB datasets.\n"} {"abstract": " Spike-based neuromorphic hardware holds the promise to provide more energy\nefficient implementations of Deep Neural Networks (DNNs) than standard hardware\nsuch as GPUs. But this requires understanding how DNNs can be emulated in an\nevent-based sparse firing regime, since otherwise the energy advantage gets\nlost. In particular, DNNs that solve sequence processing tasks typically employ\nLong Short-Term Memory (LSTM) units that are hard to emulate with few spikes.\nWe show that a facet of many biological neurons, slow after-hyperpolarizing\n(AHP) currents after each spike, provides an efficient solution. AHP-currents\ncan easily be implemented in neuromorphic hardware that supports\nmulti-compartment neuron models, such as Intel's Loihi chip. Filter\napproximation theory explains why AHP-neurons can emulate the function of LSTM\nunits. This yields a highly energy-efficient approach to time series\nclassification. Furthermore, it provides the basis for implementing with very\nsparse firing an important class of large DNNs that extract relations between\nwords and sentences in a text in order to answer questions about the text.\n"} {"abstract": " Let ${\\cal G}$ be a minor-closed graph class.
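The claim in the face-detection abstract above that extreme-aspect-ratio faces miss the sampling threshold can be made concrete with a back-of-the-envelope IoU bound (our own toy geometry: a face centred on a square anchor of equal area):

```python
import math

def best_case_iou(r):
    """IoU of a box with aspect ratio r against a same-area, same-centre square."""
    r = max(r, 1 / r)
    inter = 1 / math.sqrt(r)      # areas normalised to 1
    return inter / (2 - inter)

for r in (1, 2, 4, 8):
    print(r, round(best_case_iou(r), 3))
# r=4 already gives IoU = 1/3, below a typical 0.35 sampling threshold,
# so such faces collect no positive anchors under standard matching.
```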
We say that a graph $G$ is a\n$k$-apex of ${\\cal G}$ if $G$ contains a set $S$ of at most $k$ vertices such\nthat $G\\setminus S$ belongs to ${\\cal G}.$ We denote by ${\\cal A}_k ({\\cal G})$\nthe set of all graphs that are $k$-apices of ${\\cal G}.$ We prove that every\ngraph in the obstruction set of ${\\cal A}_k ({\\cal G}),$ i.e., the\nminor-minimal set of graphs not belonging to ${\\cal A}_k ({\\cal G}),$ has size\nat most $2^{2^{2^{2^{{\\sf poly}(k)}}}},$ where ${\\sf poly}$ is a polynomial\nfunction whose degree depends on the size of the minor-obstructions of ${\\cal\nG}.$ This bound drops to $2^{2^{{\\sf poly}(k)}}$ when ${\\cal G}$ excludes some\napex graph as a minor.\n"} {"abstract": " This paper presents an online evolving neural network-based inverse dynamics\nlearning controller for an autonomous vehicle's longitudinal and lateral\ncontrol under model uncertainties and disturbances. The inverse dynamics of the\nvehicle are approximated using a feedback error learning mechanism that\nutilizes a dynamic Radial Basis Function neural network, referred to as the\nExtended Minimal Resource Allocating Network (EMRAN). EMRAN uses an extended\nKalman filter approach for learning, and a growing/pruning condition helps keep\nthe number of hidden neurons to a minimum. The online learning algorithm\nhelps in handling the uncertainties and dynamic variations and also the unknown\ndisturbances on the road. The proposed control architecture employs two coupled\nconventional controllers aided by the EMRAN inverse dynamics controller. The\ncontrol architecture has a conventional PID controller for longitudinal cruise\ncontrol and a Stanley controller for lateral path-tracking. The performance of\nboth the longitudinal and lateral controllers is compared with existing\ncontrol methods, and the simulation results clearly indicate that the proposed\ncontrol scheme handles the disturbances and parametric uncertainties better,\nand also provides better tracking performance in autonomous vehicles.\n"} {"abstract": " Quantum resources, such as entanglement, steering, and Bell nonlocality, are\nevaluated for three coupled qubits in the steady-state configuration. We employ\nthe phenomenological master equation and the microscopic master equation to\nprobe such quantum resources, which provide very different results depending on\nthe system configuration. In particular, steering and Bell nonlocality are null\nwithin the phenomenological model, while they reach considerable values within\nthe microscopic model. These results show that the phenomenological approach is\nnot able to capture all quantum resources of the system. We also provide an\nanalytical expression for the steady state and quantum resources of the system\ncomposed of three coupled qubits in the zero temperature limit. Such results\ndemonstrate that quantum resources between two qubits are strongly affected by\nthe third qubit in a nontrivial way.\n"} {"abstract": " As an instance-level recognition problem, re-identification (re-ID) requires\nmodels to capture diverse features. However, with continuous training, re-ID\nmodels pay more and more attention to the salient areas. As a result, the model\nmay only focus on a few small regions with salient representations and ignore\nother important information. This phenomenon leads to inferior performance,\nespecially when models are evaluated on small inter-identity variation data.
In\nthis paper, we propose a novel network, Erasing-Salient Net (ES-Net), to learn\ncomprehensive features by erasing the salient areas in an image. ES-Net\nintroduces a novel method to locate the salient areas by the confidence of\nobjects and erases them efficiently in a training batch. Meanwhile, to mitigate\nthe over-erasing problem, this paper uses a trainable pooling layer P-pooling\nthat generalizes global max and global average pooling. Experiments are\nconducted on two specific re-identification tasks (i.e., Person re-ID, Vehicle\nre-ID). Our ES-Net outperforms state-of-the-art methods on three Person re-ID\nbenchmarks and two Vehicle re-ID benchmarks. Specifically, mAP / Rank-1 rate:\n88.6% / 95.7% on Market1501, 78.8% / 89.2% on DukeMTMC-reID, 57.3% / 80.9% on\nMSMT17, 81.9% / 97.0% on Veri-776, respectively. Rank-1 / Rank-5 rate: 83.6% /\n96.9% on VehicleID (Small), 79.9% / 93.5% on VehicleID (Medium), 76.9% / 90.7%\non VehicleID (Large), respectively. Moreover, the visualized salient areas show\nhuman-interpretable visual explanations for the ranking results.\n"} {"abstract": " In this work, we give a class of examples of hyperbolic potentials (including\nthe null one) for continuous non-uniformly expanding maps. It implies the\nexistence and uniqueness of the equilibrium state (in particular, of the maximal\nentropy measure). Among the maps considered is the important class known as\nViana maps.\n"} {"abstract": " Projective two-weight linear codes are closely related to finite projective\nspaces and strongly regular graphs. In this paper, a family of $q$-ary\nprojective two-weight linear codes is presented, where $q$ is a power of 2. The\nparameters of both the codes and their duals are excellent. As applications,\nthe codes are used to derive strongly regular graphs with new parameters and\nsecret sharing schemes with interesting access structures.\n"} {"abstract": " We demonstrate unprecedented accuracy for rapid gravitational-wave parameter\nestimation with deep learning. Using neural networks as surrogates for Bayesian\nposterior distributions, we analyze eight gravitational-wave events from the\nfirst LIGO-Virgo Gravitational-Wave Transient Catalog and find very close\nquantitative agreement with standard inference codes, but with inference times\nreduced from O(day) to a minute per event. Our networks are trained using\nsimulated data, including an estimate of the detector-noise characteristics\nnear the event. This encodes the signal and noise models within millions of\nneural-network parameters, and enables inference for any observed data\nconsistent with the training distribution, accounting for noise nonstationarity\nfrom event to event. Our algorithm -- called \"DINGO\" -- sets a new standard in\nfast-and-accurate inference of physical parameters of detected\ngravitational-wave events, which should enable real-time data analysis without\nsacrificing accuracy.\n"} {"abstract": " In the article [PT] a general procedure to study solutions of the equations\n$x^4-dy^2=z^p$ was presented for negative values of $d$. The purpose of the\npresent article is to extend our previous results to positive values of $d$. On\ndoing so, we give a description of the extension ${\\mathbb\nQ}(\\sqrt{d},\\sqrt{\\epsilon})/{\\mathbb Q}(\\sqrt{d})$ (where $\\epsilon$ is a\nfundamental unit) needed to prove the existence of a Hecke character over\n${\\mathbb Q}(\\sqrt{d})$ with fixed local conditions.
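The P-pooling layer in the re-ID abstract above is described as generalizing global max and average pooling. A common way to realize such an interpolation is generalized-mean pooling, sketched here as an assumption about the flavor of the layer (the paper's exact form may differ):

```python
import numpy as np

def p_pool(x, p):
    """(mean(x**p))**(1/p) for positive activations:
    p=1 gives average pooling, p -> inf approaches max pooling."""
    return np.mean(np.power(x, p)) ** (1.0 / p)

x = np.array([0.1, 0.2, 0.9])
for p in (1, 4, 32):
    print(p, round(p_pool(x, p), 3))  # drifts from the mean toward 0.9
```

Making p trainable lets the network choose where it sits between the two extremes, which is exactly what softens the over-erasing problem.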
We also extend some \"large\nimage\" results regarding images of Galois representations coming from ${\\mathbb\nQ}$-curves (due to Ellenberg in \\cite{MR2075481}) from imaginary to real\nquadratic fields.\n"} {"abstract": " In today's digital society, the Tor network has become an indispensable tool\nfor individuals to protect their privacy on the Internet. Operated by\nvolunteers, relay servers constitute the core component of Tor and are used to\ngeographically escape surveillance. It is therefore essential to have a large,\nyet diverse set of relays. In this work, we analyze the contribution of\neducational institutions to the Tor network and report on our experience of\noperating exit relays at a university. Taking Germany as an example (but\narguing that the global situation is similar), we carry out a quantitative\nstudy and find that universities contribute negligible amounts of relays and\nbandwidth. Since many universities all over the world have excellent conditions\nthat render them perfect places to host Tor (exit) relays, we encourage other\ninterested people and institutions to join. To this end, we discuss and resolve\ncommon concerns and provide lessons learned.\n"} {"abstract": " Given a pair of graphs $\\textbf{A}$ and $\\textbf{B}$, the problems of\ndeciding whether there exists either a homomorphism or an isomorphism from\n$\\textbf{A}$ to $\\textbf{B}$ have received a lot of attention. While graph\nhomomorphism is known to be NP-complete, the complexity of the graph\nisomorphism problem is not fully understood. A well-known combinatorial\nheuristic for graph isomorphism is the Weisfeiler-Leman test together with its\nhigher order variants. On the other hand, both problems can be reformulated as\ninteger programs and various LP methods can be applied to obtain high-quality\nrelaxations that can still be solved efficiently. We study so-called fractional\nrelaxations of these programs in the more general context where $\\textbf{A}$\nand $\\textbf{B}$ are not graphs but arbitrary relational structures. We give a\ncombinatorial characterization of the Sherali-Adams hierarchy applied to the\nhomomorphism problem in terms of fractional isomorphism. Collaterally, we also\nextend a number of known results from graph theory to give a characterization\nof the notion of fractional isomorphism for relational structures in terms of\nthe Weisfeiler-Leman test, equitable partitions, and counting homomorphisms\nfrom trees. As a result, we obtain a description of the families of CSPs that\nare closed under Weisfeiler-Leman invariance in terms of their polymorphisms as\nwell as decidability by the first level of the Sherali-Adams hierarchy.\n"} {"abstract": " When considered as orthogonal bases in distinct vector spaces, the unit\nvectors of polarization directions and the Laguerre-Gaussian modes of\npolarization amplitude are inseparable, constituting a so-called classical\nentangled light beam. We apply this classical entanglement to demonstrate\ntheoretically the execution of Shor's factoring algorithm on a classical light\nbeam. The demonstration comprises light-path designs for the key algorithmic\nsteps of modular exponentiation and Fourier transform on the target integer 15.\nThe computed multiplicative order that eventually leads to the integer factors\nis identified through a four-hole diffraction interference from sources\nobtained from the entangled beam profile. 
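The graph-isomorphism abstract above leans on the Weisfeiler-Leman test. A minimal sketch of 1-dimensional WL colour refinement, together with the classic pair of non-isomorphic graphs it cannot distinguish:

```python
from collections import Counter

def wl_refine(adj):
    """Iteratively refine vertex colours by neighbour colour multisets."""
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):
        new = {v: hash((colors[v],
                        tuple(sorted(Counter(colors[u] for u in adj[v]).items()))))
               for v in adj}
        relabel = {c: i for i, c in enumerate(sorted(set(new.values())))}
        new = {v: relabel[c] for v, c in new.items()}
        if new == colors:
            break
        colors = new
    return Counter(colors.values())

c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}          # one 6-cycle
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1],
          3: [4, 5], 4: [3, 5], 5: [3, 4]}                       # two triangles
print(wl_refine(c6) == wl_refine(two_c3))  # True: a classic 1-WL failure
```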
We show that the fringe patterns\nresulting from the interference are uniquely mapped to the sought-after order,\nthereby emulating the factoring process originally rooted in the quantum\nregime.\n"} {"abstract": " We report the discovery of TOI-1444b, a 1.4-$R_\\oplus$ super-Earth on a\n0.47-day orbit around a Sun-like star discovered by {\\it TESS}. Precise radial\nvelocities from Keck/HIRES confirmed the planet and constrained the mass to be\n$3.87 \\pm 0.71 M_\\oplus$. The RV dataset also indicates a possible\nnon-transiting, 16-day planet ($11.8\\pm2.9M_\\oplus$). We report a tentative\ndetection of phase curve variation and secondary eclipse of TOI-1444b in the\n{\\it TESS} bandpass. TOI-1444b joins the growing sample of 17\nultra-short-period planets with well-measured masses and sizes, most of which\nare compatible with an Earth-like composition. We take this opportunity to\nexamine the expanding sample of ultra-short-period planets ($<2R_\\oplus$) and\ncontrast them with the newly discovered sub-day ultra-hot Neptunes\n($>3R_\\oplus$, $>2000F_\\oplus$: TOI-849 b, LTT9779 b and K2-100). We find that\n1) USPs have predominantly Earth-like compositions with inferred iron core mass\nfractions of 0.32$\\pm$0.04, and have masses below the threshold of runaway\naccretion ($\\sim 10M_\\oplus$), while ultra-hot Neptunes are above the threshold\nand have H/He or other volatile envelopes. 2) USPs are almost always found in\nmulti-planet systems, consistent with a secular interaction formation scenario;\nultra-hot Neptunes ($P_{\\rm orb} \\lesssim$1 day) tend to be ``lonely'', similar\nto longer-period hot Neptunes ($P_{\\rm orb}$ 1-10 days) and hot Jupiters. 3) USPs\noccur around solar-metallicity stars while hot Neptunes prefer higher\nmetallicity hosts. 4) In all these respects, the ultra-hot Neptunes show more\nresemblance to hot Jupiters than to the smaller USP planets, although ultra-hot\nNeptunes are rarer than both USPs and hot Jupiters by 1-2 orders of magnitude.\n"} {"abstract": " A wave function exposed to measurements undergoes pure state dynamics, with\ndeterministic unitary and probabilistic measurement-induced state updates,\ndefining a quantum trajectory. For many-particle systems, the competition of\nthese different elements of dynamics can give rise to a scenario similar to\nquantum phase transitions. To access it despite the randomness of single\nquantum trajectories, we construct an $n$-replica Keldysh field theory for the\nensemble average of the $n$-th moment of the trajectory projector. A key\nfinding is that this field theory decouples into one set of degrees of freedom\nthat heats up indefinitely, while $n-1$ others can be cast into the form of\npure state evolutions generated by an effective non-Hermitian Hamiltonian. This\ndecoupling is exact for free theories, and useful for interacting ones. In\nparticular, we study locally measured Dirac fermions in $(1+1)$ dimensions,\nwhich can be bosonized to a monitored interacting Luttinger liquid at long\nwavelengths. For this model, the non-Hermitian Hamiltonian corresponds to a\nquantum Sine-Gordon model with complex coefficients. A renormalization group\nanalysis reveals a gapless critical phase with logarithmic entanglement entropy\ngrowth, and a gapped area law phase, separated by a\nBerezinskii-Kosterlitz-Thouless transition.
The physical picture emerging here\nis a pinning of the trajectory wave function into eigenstates of the\nmeasurement operators upon increasing the monitoring rate.\n"} {"abstract": " The rise of social media has led to an increasing number of comments on online\nforums. However, there still exist invalid comments that are not informative\nfor users. Moreover, those comments are also quite toxic and harmful to people.\nIn this paper, we create a dataset for constructive and toxic speech detection,\nnamed UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection dataset)\nwith 10,000 human-annotated comments. For these tasks, we propose a system for\nconstructive and toxic speech detection with PhoBERT, the state-of-the-art\ntransfer learning model in Vietnamese NLP. With this system, we obtain\nF1-scores of 78.59% and 59.40% for classifying constructive and toxic comments,\nrespectively. Besides, we implement various baselines, including traditional\nMachine Learning and Deep Neural Network-based models, to evaluate the dataset.\nThese results allow us to address several tasks on online discussions and to\ndevelop a framework for automatically identifying the constructiveness and\ntoxicity of Vietnamese social media comments.\n"} {"abstract": " We extend the scattering result for the radial defocusing-focusing\nmass-energy double critical nonlinear Schr\\\"odinger equation in $d\\leq 4$ given\nby Cheng et al. to the case $d\\geq 5$. The main ingredient is a suitable long\ntime perturbation theory which is applicable for $d\\geq 5$. The paper will\ntherefore give a full characterization of the scattering threshold for the\nradial defocusing-focusing mass-energy double critical nonlinear Schr\\\"odinger\nequation in all dimensions $d\\geq 3$.\n"} {"abstract": " Accurate description of finite-temperature vibrational dynamics is\nindispensable in the computation of two-dimensional electronic spectra. Such\nsimulations are often based on the density matrix evolution, statistical\naveraging of initial vibrational states, or approximate classical or\nsemiclassical limits. While many practical approaches exist, they are often of\nlimited accuracy and difficult to interpret. Here, we use the concept of\nthermo-field dynamics to derive an exact finite-temperature expression that\nlends itself to an intuitive wavepacket-based interpretation. Furthermore, an\nefficient method for computing finite-temperature two-dimensional spectra is\nobtained by combining the exact thermo-field dynamics approach with the thawed\nGaussian approximation for the wavepacket dynamics, which is exact for any\ndisplaced, distorted, and Duschinsky-rotated harmonic potential but also\naccounts partially for anharmonicity effects in general potentials. Using this\nnew method, we directly relate a symmetry breaking of the two-dimensional\nsignal to the deviation from the conventional Brownian oscillator picture.\n"} {"abstract": " We provide sufficient conditions for the existence of periodic solutions of\nthe Lorentz force equation, which models the motion of a charged particle under\nthe action of an electromagnetic field. The basic assumptions cover relevant\nmodels with singularities like Coulomb-like electric potentials or the magnetic\ndipole.\n"} {"abstract": " In this work we compare the capacity and achievable rate of uncoded faster\nthan Nyquist (FTN) signalling in the frequency domain, also referred to as\nspectrally efficient FDM (SEFDM).
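For reference, the Lorentz force equation in the periodic-solutions abstract above is presumably the standard relativistic equation of motion (our rendering):

```latex
\frac{d}{dt}\!\left(\frac{m\,\dot q}{\sqrt{1-|\dot q|^2/c^2}}\right)
  = q\,\bigl(E(t,q) + \dot q \times B(t,q)\bigr)
```

The singular models mentioned there correspond to fields $E$, $B$ derived from Coulomb-like potentials or a magnetic dipole.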
We propose a deep residual convolutional\nneural network detector for SEFDM signals in additive white Gaussian noise\nchannels, which makes it possible to approach the Mazo limit in systems with up\nto 60 subcarriers. Notably, the deep detectors achieve losses of less than\n0.4-0.7 dB for uncoded QPSK SEFDM systems of 12 to 60 subcarriers at a 15%\nspectral compression.\n"} {"abstract": " We propose an autoencoder-based geometric shaping that learns a constellation\nrobust to SNR and laser linewidth estimation errors. This constellation\nmaintains shaping gain in mutual information (up to 0.3 bits/symbol) with\nrespect to QAM over various SNR and laser linewidth values.\n"} {"abstract": " Since its development, Stokesian Dynamics has been a leading approach for the\ndynamic simulation of suspensions of particles at arbitrary concentrations with\nfull hydrodynamic interactions. Although originally developed for the\nsimulation of passive particle suspensions, the Stokesian Dynamics framework is\nequally well suited to the analysis and dynamic simulation of suspensions of\nactive particles, as we elucidate here. We show how the reciprocal theorem can\nbe used to formulate the exact dynamics for a suspension of arbitrary active\nparticles and then show how the Stokesian Dynamics method provides a rigorous\nway to approximate and compute the dynamics of dense active suspensions where\nmany-body hydrodynamic interactions are important.\n"} {"abstract": " As the photovoltaic sector approaches 1 TW in cumulative installed capacity,\nwe provide an overview of the current challenges to achieve further\ntechnological improvements. On the raw materials side, we see no fundamental\nlimitation to expansion in capacity of the current market technologies, even\nthough basic estimates predict that the PV sector will become the largest\nconsumer of Ag in the world after 2030. On the other hand, recent market data\non PV costs indicates that the largest cost fraction is now infrastructure and\narea-related, and nearly independent of the core cell technology. Therefore,\nadditional value adding is likely to proceed via an increase in energy yield\nmetrics such as the power density and/or efficiency of the PV module. However,\ncurrent market technologies are near their fundamental detailed balance\nefficiency limits. The transition to multijunction PV in tandem configurations\nis regarded as the most promising path to surpass this limitation and increase\nthe power per unit area of PV modules. So far, each specific multijunction\nconcept faces particular obstacles that have prevented its upscaling, but the\nfield is rapidly improving. In this review work, we provide a global comparison\nbetween the different types of multijunction concepts, including III-Vs,\nSi-based tandems and the emergence of perovskite/Si devices. Coupled with\nanalyses of new notable developments in the field, we discuss the challenges\ncommon to different multijunction cell architectures, and the specific\nchallenges of each type of device, both on a cell level and on a module\nintegration level. From the analysis, we conclude that several tandem concepts\nare nearing the disruption level where a breakthrough into mainstream PV is\npossible.\n"} {"abstract": " Human trajectory forecasting in crowds, at its core, is a sequence prediction\nproblem with specific challenges of capturing inter-sequence dependencies\n(social interactions) and consequently predicting socially-compliant multimodal\ndistributions.
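The SEFDM abstract above compresses the subcarrier spacing below the orthogonal (OFDM) spacing. A sketch of the corresponding modulator, an IDFT-like sum with compression factor alpha (our own illustration; alpha = 0.85 matches the quoted 15% compression):

```python
import numpy as np

def sefdm_modulate(symbols, alpha, oversample=1):
    """Time-domain SEFDM samples: like an IDFT, but spacing scaled by alpha < 1."""
    n = len(symbols)
    q = n * oversample
    k = np.arange(q)[:, None]          # time index
    m = np.arange(n)[None, :]          # subcarrier index
    carriers = np.exp(2j * np.pi * alpha * k * m / q)
    return carriers @ symbols / np.sqrt(n)

qpsk = (np.random.choice([-1, 1], 12)
        + 1j * np.random.choice([-1, 1], 12)) / np.sqrt(2)
x = sefdm_modulate(qpsk, alpha=0.85)   # 15% spectral compression
print(x.shape)                          # (12,) time-domain SEFDM samples
```

Because alpha < 1 destroys orthogonality, the subcarriers interfere, which is exactly the detection problem the deep residual network is trained to undo.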
In recent years, neural network-based methods have been shown to\noutperform hand-crafted methods on distance-based metrics. However, these\ndata-driven methods still suffer from one crucial limitation: lack of\ninterpretability. To overcome this limitation, we leverage the power of\ndiscrete choice models to learn interpretable rule-based intents, and\nsubsequently utilise the expressibility of neural networks to model the\nscene-specific residual. Extensive experimentation on the interaction-centric\nbenchmark TrajNet++ demonstrates the ability of our proposed architecture\nto explain its predictions without compromising the accuracy.\n"} {"abstract": " A quantum many-body system with a conserved electric charge can have a DC\nresistivity that is either exactly zero (implying it supports dissipationless\ncurrent) or nonzero. Exactly zero resistivity is related to conservation laws\nthat prevent the current from degrading. In this paper, we carefully examine\nthe situations in which such a circumstance can occur. We find that exactly\nzero resistivity requires either continuous translation symmetry, or an\ninternal symmetry that has a certain kind of \"mixed anomaly\" with the electric\ncharge. (The symmetry could be a generalized global symmetry associated with\nthe emergence of unbreakable loop or higher dimensional excitations.) However,\neven if one of these is satisfied, we show that there is still a mechanism to\nget nonzero resistivity, through critical fluctuations that drive the\nsusceptibility of the conserved quantity to infinity; we call this mechanism\n\"critical drag\". Critical drag is thus a mechanism for resistivity that, unlike\nconventional mechanisms, is unrelated to broken symmetries. We furthermore\nargue that an emergent symmetry that has the appropriate mixed anomaly with\nelectric charge is in fact an inevitable consequence of compressibility in\nsystems with lattice translation symmetry. Critical drag therefore seems to be\nthe only way (other than through irrelevant perturbations breaking the emergent\nsymmetry, that disappear at the renormalization group fixed point) to get\nnonzero resistivity in such systems. Finally, we present a very simple and\nconcrete model -- the \"Quantum Lifshitz Model\" -- that illustrates the critical\ndrag mechanism as well as the other considerations of the paper.\n"} {"abstract": " Voltage manipulation of skyrmions is a promising path towards low-energy\nspintronic devices. Here, voltage effects on skyrmions in a GdOx/Gd/Co/Pt\nheterostructure are observed experimentally. The results show that the skyrmion\ndensity can be both enhanced and depleted by the application of an electric\nfield, along with the ability, at certain magnetic fields, to switch the\nskyrmion state on and off completely. Further, a zero magnetic field skyrmion\nstate can be stabilized under a negative bias voltage using a defined voltage\nand magnetic field sequence. The voltage effects measured here occur on a\nfew-second timescale, suggesting an origin in voltage-controlled magnetic\nanisotropy rather than ionic effects. By investigating the skyrmion nucleation\nrate as a function of temperature, we extract the energy barrier to skyrmion\nnucleation in our sample. Further, micromagnetic simulations are used to\nexplore the effect of changing the anisotropy and Dzyaloshinskii-Moriya\ninteraction on skyrmion density.
Our work demonstrates the control of skyrmions\nby voltages, showing functionalities desirable for commercial devices.\n"} {"abstract": " We investigate the nonequilibrium dynamics of the spinless Haldane model with\nnearest-neighbor interactions on the honeycomb lattice by employing an unbiased\nnumerical method. In this system, a first-order transition from the Chern\ninsulator (CI) at weak coupling to the charge-density-wave (CDW) phase at\nstrong coupling can be characterized by a level crossing of the lowest energy\nlevels. Here we show that adiabatically following the eigenstates across this\nlevel crossing, their Chern numbers are preserved, leading to the\nidentification of a topologically-nontrivial low-energy excited state in the\nCDW regime. By promoting a resonant energy excitation via an ultrafast\ncircularly polarized pump pulse, we find that the system acquires a\nnon-vanishing Hall response as a result of the large overlap enhancement\nbetween the time-dependent wave-function and the topologically non-trivial\nexcited state. This is suggestive of a photoinduced topological phase\ntransition via unitary dynamics, despite a proper definition of the Chern\nnumber remaining elusive for an out-of-equilibrium interacting system. We\ncontrast these results with more common quench protocols, where such features\nare largely absent in the dynamics even if the post-quench Hamiltonian displays\na topologically nontrivial ground state.\n"} {"abstract": " Over the past few years, there is a heated debate and serious public concerns\nregarding online content moderation, censorship, and the principle of free\nspeech on the Web. To ease these concerns, social media platforms like Twitter\nand Facebook refined their content moderation systems to support soft\nmoderation interventions. Soft moderation interventions refer to warning labels\nattached to potentially questionable or harmful content to inform other users\nabout the content and its nature while the content remains accessible, hence\nalleviating concerns related to censorship and free speech. In this work, we\nperform one of the first empirical studies on soft moderation interventions on\nTwitter. Using a mixed-methods approach, we study the users who share tweets\nwith warning labels on Twitter and their political leaning, the engagement that\nthese tweets receive, and how users interact with tweets that have warning\nlabels. Among other things, we find that 72% of the tweets with warning labels\nare shared by Republicans, while only 11% are shared by Democrats. By analyzing\ncontent engagement, we find that tweets with warning labels had more engagement\ncompared to tweets without warning labels. Also, we qualitatively analyze how\nusers interact with content that has warning labels finding that the most\npopular interactions are related to further debunking false claims, mocking the\nauthor or content of the disputed tweet, and further reinforcing or resharing\nfalse claims. Finally, we describe concrete examples of inconsistencies, such\nas warning labels that are incorrectly added or warning labels that are not\nadded on tweets despite sharing questionable and potentially harmful\ninformation.\n"} {"abstract": " We present a combined neutron diffraction (ND) and high-field muon spin\nrotation ($\\mu$SR) study of the magnetic and superconducting phases of the\nhigh-temperature superconductor La$_{1.94}$Sr$_{0.06}$CuO$_{4+y}$ ($T_{c} =\n38$~K). 
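A natural reading of the temperature-dependent nucleation analysis in the skyrmion abstract above is an Arrhenius law, from which the barrier $E_b$ is extracted (our assumption; the paper's exact fitting form may differ):

```latex
\Gamma(T) = \Gamma_0 \exp\!\left(-\frac{E_b}{k_B T}\right)
```

A fit of the measured nucleation rate $\Gamma$ versus $1/T$ on a log scale then yields $E_b$ from the slope.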
We observe a linear dependence of the ND signal from the modulated\nantiferromagnetic order (m-AFM) on the applied field. The magnetic volume\nfraction measured with $\\mu$SR increases linearly from 0\\% to $\\sim$40\\% with\napplied magnetic field up to 8~T. This allows us to conclude, in contrast to\nearlier field-dependent neutron diffraction studies, that the long-range m-AFM\nregions are induced by an applied field, and that their ordered magnetic moment\nremains constant.\n"} {"abstract": " We study the running vacuum model in which the vacuum energy density depends\non the square of the Hubble parameter, in comparison with the $\\Lambda$CDM model.\nIn this work, the Bayesian inference method is employed to test our model against\nthe standard $\\Lambda$CDM model and appraise its relative significance, using the\ncombined data sets Pantheon+CMB+BAO and Pantheon+CMB+BAO+Hubble data. The model\nparameters and their corresponding errors are estimated from the marginal\nlikelihood function. Marginalizing over all model parameters with suitable\npriors, we have obtained the Bayes factor as the ratio of the Bayesian evidence\nof our model and the $\\Lambda$CDM model. The analysis, based on Jeffreys' scale\nof Bayesian inference, shows that the evidence of our model against the\n$\\Lambda$CDM model is weak for both data combinations. Even though the running\nvacuum model gives a good account of the evolution of the universe, it is not\nsuperior to the $\\Lambda$CDM model.\n"} {"abstract": " Private blockchain networks are used by enterprises to manage decentralized\nprocesses without trusted mediators and without exposing their assets publicly\non an open network like Ethereum. Yet external parties that cannot join such\nnetworks may have a compelling need to be informed about certain data items on\ntheir shared ledgers along with certifications of data authenticity; e.g., a\nmortgage bank may need to know about the sale of a mortgaged property from a\nnetwork managing property deeds. These parties are willing to compensate the\nnetworks in exchange for privately sharing information with proof of\nauthenticity and authorization for external use. We have devised a novel and\ncryptographically secure protocol to effect a fair exchange between rational\nnetwork members and information recipients using a public blockchain and atomic\nswap techniques. Using our protocol, any member of a private blockchain can\natomically reveal private blockchain data with proofs in exchange for a\nmonetary reward to an external party if and only if the external party is a\nvalid recipient. The protocol preserves confidentiality of data for the\nrecipient, and in addition, allows it to mount a challenge if the data turns\nout to be inauthentic. We also formally analyze the security and privacy of\nthis protocol, which can be used in a wide array of practical scenarios.\n"} {"abstract": " Developers create software branches for tentative feature addition and bug\nfixing, and periodically merge branches to release software with new features\nor repair patches.
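The model comparison in the running-vacuum abstract above uses the standard Bayes factor, i.e. the ratio of marginal likelihoods (evidences):

```latex
B_{12} \;=\; \frac{Z_{\mathrm{RVM}}}{Z_{\Lambda\mathrm{CDM}}},
\qquad
Z_i \;=\; \int \mathcal{L}_i(\theta)\,\pi_i(\theta)\,\mathrm{d}\theta
```

On the Jeffreys scale, values of $\ln B_{12}$ near zero count as weak evidence, which is the regime the abstract reports.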
When the program edits from different branches textually\noverlap (i.e., textual conflicts), or the co-application of those edits leads to\ncompilation or runtime errors (i.e., compiling or dynamic conflicts), it is\nchallenging and time-consuming for developers to eliminate merge conflicts.\nPrior studies examined the popularity of merge conflicts and how conflicts were\nrelated to code smells or the software development process; tools were built to\nfind and solve conflicts. However, some fundamental research questions are still\nnot comprehensively explored, including (1) how conflicts were introduced, (2)\nhow developers manually resolved conflicts, and (3) what conflicts cannot be\nhandled by current tools. For this paper, we took a hybrid approach that\ncombines automatic detection with manual inspection to reveal 204 merge\nconflicts and their resolutions in 15 open-source repositories. Our data\nanalysis reveals three phenomena. First, compiling and dynamic conflicts are\nharder to detect, although current tools mainly focus on textual conflicts.\nSecond, in the same merging context, developers usually resolved similar textual\nconflicts with similar strategies. Third, developers manually fixed most of the\ninspected compiling and dynamic conflicts by editing the merged version\nsimilarly to what they did for one of the branches. Our research reveals the\nchallenges and opportunities for automatic detection and resolution of merge\nconflicts; it also sheds light on related areas like systematic program editing\nand change recommendation.\n"} {"abstract": " We consider averaging a number of candidate models to produce a prediction of\nlower risk in the context of partially linear functional additive models. These\nmodels incorporate the parametric effect of scalar variables and the additive\neffect of a functional variable to describe the relationship between a response\nvariable and regressors. We develop a model averaging scheme that assigns the\nweights by minimizing a cross-validation criterion. Under the framework of\nmodel misspecification, the resulting estimator is proved to be asymptotically\noptimal in terms of the lowest possible square error loss for prediction. Also,\nsimulation studies and real data analysis demonstrate the good performance of\nour proposed method.\n"} {"abstract": " The imprints of large-scale structures on the Cosmic Microwave Background can\nbe studied via the CMB lensing and Integrated Sachs-Wolfe (ISW) signals. In\nparticular, the stacked ISW signal around supervoids has been claimed in\nseveral works to be anomalously high. In this study, we find cluster and void\nsuperstructures using four tomographic redshift bins spanning $0 < z < 1$.\n"} {"abstract": " The execution of quantum circuits on real systems has largely been limited to\nthose which are simply time-ordered sequences of unitary operations followed by\na projective measurement. As hardware platforms for quantum computing continue\nto mature in size and capability, it is imperative to enable quantum circuits\nbeyond their conventional construction. Here we break into the realm of dynamic\nquantum circuits on a superconducting-based quantum system.
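The model-averaging abstract above picks weights by minimizing a cross-validation criterion over the simplex. A crude sketch of that optimization with projected gradient descent on synthetic out-of-fold predictions (the paper's estimator and constraints may differ in detail):

```python
import numpy as np

def cv_weights(cv_preds, y, steps=2000, lr=0.05):
    """cv_preds: (n_samples, n_models) out-of-fold predictions."""
    m = cv_preds.shape[1]
    w = np.full(m, 1.0 / m)
    for _ in range(steps):
        grad = 2 * cv_preds.T @ (cv_preds @ w - y) / len(y)
        w = np.clip(w - lr * grad, 0, None)
        w /= w.sum()                      # crude projection back onto the simplex
    return w

rng = np.random.default_rng(0)
y = rng.normal(size=200)
preds = np.column_stack([y + rng.normal(0, s, 200) for s in (0.3, 1.0, 3.0)])
print(np.round(cv_weights(preds, y), 2))  # weight concentrates on model 0
```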
Dynamic quantum\ncircuits involve not only the evolution of the quantum state throughout the\ncomputation, but also periodic measurements of a subset of qubits mid-circuit\nand concurrent processing of the resulting classical information within\ntimescales shorter than the execution times of the circuits. Using noisy\nquantum hardware, we explore one of the most fundamental quantum algorithms,\nquantum phase estimation, in its adaptive version, which exploits dynamic\ncircuits, and compare the results to a non-adaptive implementation of the same\nalgorithm. We demonstrate that this version of real-time quantum computing with\ndynamic circuits can offer a substantial and tangible advantage when noise and\nlatency are sufficiently low in the system, opening the door to a new realm of\navailable algorithms on real quantum systems.\n"} {"abstract": " Using classical electrodynamics, this work analyzes the dynamics of a closed\nmicrowave cavity as a function of its center of energy. Starting from the\nprinciple of momentum conservation, expressions for the maximum electromagnetic\nmomentum stored in a free microwave cavity are obtained. Next, it is shown\nthat, for coherent fields and special shape conditions, this momentum component\nmay not completely average out to zero when the fields change in the transient\nregime. Non-zero conditions are illustrated for the asymmetric conical frustum,\nwhose exact modes cannot be calculated analytically. One concludes that the\nelectromagnetic momentum can be imparted to the mechanical body so as to\ndisplace it in relation to the original center of energy. However, the average\ntime range of the effect is much shorter than any time regime probed by the\nexperimental tests performed so far, suggesting that the effect has not yet been\nobserved in copper-made resonators.\n"} {"abstract": " Constitutive models are widely used for modeling complex systems in science\nand engineering, where first-principle-based, well-resolved simulations are\noften prohibitively expensive. For example, in fluid dynamics, constitutive\nmodels are required to describe nonlocal, unresolved physics such as turbulence\nand laminar-turbulent transition. However, traditional constitutive models\nbased on partial differential equations (PDEs) often lack robustness and are\ntoo rigid to accommodate diverse calibration datasets. We propose a\nframe-independent, nonlocal constitutive model based on a vector-cloud neural\nnetwork that can be learned with data. The model predicts the closure variable\nat a point based on the flow information in its neighborhood. Such nonlocal\ninformation is represented by a group of points, each having a feature vector\nattached to it, and thus the input is referred to as a vector cloud. The cloud\nis mapped to the closure variable through a frame-independent neural network,\ninvariant both to coordinate translation and rotation and to the ordering of\npoints in the cloud. As such, the network can deal with any number of\narbitrarily arranged grid points and thus is suitable for unstructured meshes\nin fluid simulations.
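Adaptive quantum phase estimation, the algorithm run with dynamic circuits in the abstract above, can be emulated classically to see why mid-circuit measurement plus feedback works: each measured bit conditions the rotation used for the next, more significant bit. A toy simulation (ours, not the paper's code):

```python
import math
import random

def iterative_qpe(phi, n_bits, rng=random.Random(1)):
    """Read off the bits of phi (an exact n_bits binary fraction), one per round."""
    bits = [0] * (n_bits + 1)                  # bits[j] = j-th binary digit of phi
    for j in range(n_bits, 0, -1):             # least significant bit first
        # classically accumulated feedback angle from already-measured bits
        theta = sum(bits[j + m] * math.pi / 2 ** m
                    for m in range(1, n_bits - j + 1))
        p1 = math.sin(math.pi * 2 ** (j - 1) * phi - theta / 2) ** 2
        bits[j] = int(rng.random() < p1)       # simulated qubit measurement
    return sum(bits[j] / 2 ** j for j in range(1, n_bits + 1))

print(iterative_qpe(0.3125, n_bits=5))         # recovers 0.3125 = 0.01010 in binary
```

The non-adaptive variant must instead prepare and read out all bits without feedback, which is the comparison the experiment makes.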
The merits of the proposed network are demonstrated for\nscalar transport PDEs on a family of parameterized periodic hill geometries.\nThe vector-cloud neural network is a promising tool not only as a nonlocal\nconstitutive model but also as a general surrogate model for PDEs on\nirregular domains.\n"} {"abstract": " We report here results on the analysis of correlated flux variations between\nthe optical and GeV $\\gamma$-ray bands in three bright BL Lac objects, namely\nAO\\, 0235+164, OJ 287 and PKS 2155$-$304. This was based on the analysis of\nabout 10 years of data from the {\\it Fermi} Gamma-ray Space Telescope covering\nthe period from 08 August 2008 to 08 August 2018 along with optical data\ncovering the same period. For all the sources, during the flares analysed in\nthis work, the optical and $\\gamma$-ray flux variations are found to be closely\ncorrelated. From broadband spectral energy distribution modelling of different\nepochs in these sources using the one zone leptonic emission model, we found\nthat the optical-UV emission is dominated by synchrotron emission from the jet.\nThe $\\gamma$-ray emission in the low synchrotron peaked sources AO\\, 0235+164\nand OJ 287 is found to be well fit with an external Compton (EC) component,\nwhile the $\\gamma$-ray emission in the high synchrotron peaked source PKS\n2155$-$304 is well fit with a synchrotron self-Compton component. Further, we note\nthat the $\\gamma$-ray emission during the high flux state of AO 0235+164\n(epochs A and B) requires seed photons from both the dusty torus and broad line\nregion, while the $\\gamma$-ray emission in OJ 287 and during epochs C and D of\nAO\\,0235+164 can be modelled by EC scattering of infra-red photons from the\ntorus.\n"} {"abstract": " In this work we describe the High-Dimensional Matrix Mechanism (HDMM), a\ndifferentially private algorithm for answering a workload of predicate counting\nqueries. HDMM represents query workloads using a compact implicit matrix\nrepresentation and exploits this representation to efficiently optimize over (a\nsubset of) the space of differentially private algorithms for one that is\nunbiased and answers the input query workload with low expected error. HDMM can\nbe deployed for both $\\epsilon$-differential privacy (with Laplace noise) and\n$(\\epsilon, \\delta)$-differential privacy (with Gaussian noise), although the\ncore techniques are slightly different for each. We demonstrate empirically\nthat HDMM can efficiently answer queries with lower expected error than\nstate-of-the-art techniques, and in some cases, it nearly matches existing\nlower bounds for the particular class of mechanisms we consider.\n"} {"abstract": " This note gives a detailed proof of the following statement. Let $d\\in\n\\mathbb{N}$ and $m,n \\ge d + 1$, with $m + n \\ge \\binom{d+2}{2} + 1$. Then the\ncomplete bipartite graph $K_{m,n}$ is generically globally rigid in dimension\n$d$.\n"} {"abstract": " The emergence of new technologies and innovative communication tools permits\nus to transcend societal challenges. While particle accelerators are essential\ninstruments to improve our quality of life through science and technology, an\nadequate ecosystem is needed to activate and maximize this potential.\nResearch Infrastructure (RI) and industries, supported by enlightened\norganizations and education, can generate a sustainable environment to serve\nthis purpose. 
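The generic global rigidity statement for $K_{m,n}$ quoted above is purely arithmetic, so it can be checked directly; a small sketch (function name is ours):

```python
# Sketch of the arithmetic condition quoted above for K_{m,n} to be
# generically globally rigid in dimension d: m, n >= d + 1 and
# m + n >= C(d+2, 2) + 1.
from math import comb

def kmn_generically_globally_rigid(m: int, n: int, d: int) -> bool:
    return min(m, n) >= d + 1 and m + n >= comb(d + 2, 2) + 1

print(kmn_generically_globally_rigid(5, 6, 3))  # True: 11 >= C(5,2) + 1 = 11
print(kmn_generically_globally_rigid(4, 4, 3))  # False: 8 < 11
```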
In this paper, we will discuss state-of-the-art infrastructures\ntaking the lead to reach this impact, thus contributing to economic and social\ntransformation.\n"} {"abstract": " Bayesian decision theory provides an elegant framework for acting optimally\nunder uncertainty when tractable posterior distributions are available. Modern\nBayesian models, however, typically involve intractable posteriors that are\napproximated with potentially crude surrogates. This difficulty has\nengendered loss-calibrated techniques that aim to learn posterior\napproximations that favor high-utility decisions. In this paper, focusing on\nBayesian neural networks, we develop methods for correcting approximate\nposterior predictive distributions encouraging them to prefer high-utility\ndecisions. In contrast to previous work, our approach is agnostic to the choice\nof the approximate inference algorithm, allows for efficient test time decision\nmaking through amortization, and empirically produces higher quality decisions.\nWe demonstrate the effectiveness of our approach through controlled experiments\nspanning a diversity of tasks and datasets.\n"} {"abstract": " Isolated post-capillary pulmonary hypertension (Ipc-PH) occurs due to left\nheart failure, which contributes to 1 out of every 9 deaths in the United\nStates. In some patients, through unknown mechanisms, Ipc-PH transitions to\ncombined pre-/post-capillary PH (Cpc-PH), diagnosed by an increase in pulmonary\nvascular resistance and associated with a dramatic increase in mortality. We\nhypothesize that altered mechanical forces and subsequent vasoactive signaling\nin the pulmonary capillary bed drive the transition from Ipc-PH to Cpc-PH.\nHowever, even in a healthy pulmonary circulation, the mechanical forces in the\nsmallest vessels (the arterioles, venules, and capillary bed) have not been\nquantitatively defined. This study is the first to examine this question via a\ncomputational fluid dynamics model of the human pulmonary arteries, veins,\narterioles, and venules. Using this model we predict temporal and spatial\ndynamics of cyclic stretch and wall shear stress. In the large vessels,\nnumerical simulations show that increases in shear stress coincide with larger\nflow and pressure. In the microvasculature, we found that as vessel radius\ndecreases, shear stress increases and flow decreases. In arterioles, this\ncorresponds with lower pressures; however, the venules and smaller veins have\nhigher pressure than larger veins. Our model provides predictions for pressure,\nflow, shear stress, and cyclic stretch that provide a way to analyze and\ninvestigate hypotheses related to disease progression in the pulmonary\ncirculation.\n"} {"abstract": " We report spatially resolved measurements of static and fluctuating electric\nfields over conductive (Au) and non-conductive (SiO2) surfaces. Using an\nultrasensitive `nanoladder' cantilever probe to scan over these surfaces at\ndistances of a few tens of nanometers, we record changes in the probe resonance\nfrequency and damping that we associate with static and fluctuating fields,\nrespectively. We find that the two quantities are spatially correlated and of\nsimilar magnitude for the two materials. We quantitatively describe the\nobserved effects on the basis of trapped surface charges and dielectric\nfluctuations in an adsorbate layer. 
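For the Bayesian decision-theoretic setting in the loss-calibration abstract above, the decision rule itself is a single expectation: pick the action maximizing expected utility under the (possibly approximate) posterior predictive. A toy sketch with made-up numbers, illustrating why a crude predictive can flip the chosen action:

```python
# Toy illustration of expected-utility decision making under an approximate
# posterior predictive p(y|x). All numbers are invented for illustration.
import numpy as np

p_y = np.array([0.7, 0.3])          # approximate predictive over outcomes y
utility = np.array([[ 1.0, -5.0],   # U[a, y]: rows = actions (e.g. "treat",
                    [ 0.0,  0.0]])  # "abstain"), cols = outcomes
expected_u = utility @ p_y          # E_y U(a, y) for each action a
best_action = int(np.argmax(expected_u))
print(expected_u, best_action)      # small errors in p_y can change argmax
```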
Our results provide direct, spatial\nevidence for surface dissipation in adsorbates that affects nanomechanical\nsensors, trapped ions, superconducting resonators, and color centers in\ndiamond.\n"} {"abstract": " This paper studies the estimation of large-scale optimal transport maps\n(OTM), which is a well-known challenging problem owing to the curse of\ndimensionality. Existing literature approximates the large-scale OTM by a\nseries of one-dimensional OTM problems through iterative random projection.\nSuch methods, however, suffer from slow or no convergence in practice due to\nthe nature of randomly selected projection directions. Instead, we propose an\nestimation method of large-scale OTM by combining the idea of projection\npursuit regression and sufficient dimension reduction. The proposed method,\nnamed projection pursuit Monge map (PPMM), adaptively selects the most\n``informative'' projection direction in each iteration. We theoretically show\nthe proposed dimension reduction method can consistently estimate the most\n``informative'' projection direction in each iteration. Furthermore, the PPMM\nalgorithm weakly converges to the target large-scale OTM in a reasonable\nnumber of steps. Empirically, PPMM is computationally easy and converges fast.\nWe assess its finite sample performance through the applications of Wasserstein\ndistance estimation and generative models.\n"} {"abstract": " Nowadays, as communities start to feel the need for human presence in\nalternative ways, there has been tremendous research and development in\nadvancing telepresence robots. People tend to feel closer and more comfortable\nwith telepresence robots, as many sense a human presence in them. In general,\nmany people feel a sense of agency from the face of a robot, but even\ntelepresence robots without arm and body motions tend to give a sense of human\npresence. It is important to identify and configure how telepresence robots\naffect the sense of presence and agency of people by including a human face and\nslight face and arm motions. Therefore, we carried out extensive research via a\nweb-based experiment to determine the prototype that can result in soothing\nhuman interaction with the robot. The experiments featured videos of a\ntelepresence robot (n = 128; 2 x 2 between-participant study; robot face factor:\nvideo-conference vs. robot-like face; arm motion factor: moving vs. static) to\ninvestigate the factors significantly affecting human presence and agency with\nthe robot. We used two telepresence robots: an affordable robot platform and a\nmodified version for human interaction enhancements. The findings suggest that\nparticipants feel a sense of agency closer to human-likeness when the robot's face\nis replaced with a human's face shown without motion. The robot's motion\ninvokes a feeling of human presence whether the face is human or robot-like.\n"} {"abstract": " Dense depth map capture is challenging in existing active sparse-illumination-based\ndepth acquisition techniques, such as LiDAR. Various techniques have been\nproposed to estimate a dense depth map based on fusion of the sparse depth map\nmeasurement with the RGB image. Recent advances in hardware enable adaptive\ndepth measurements resulting in further improvement of the dense depth map\nestimation. In this paper, we study the topic of estimating dense depth from\ndepth sampling. 
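Projection-based OTM estimators such as PPMM, described above, reduce each iteration to a one-dimensional OT problem, and the 1D empirical OT map between equal-size samples is simply sorted matching. A sketch of that building block (the informative-direction selection, PPMM's actual contribution, is omitted):

```python
# Sketch of the 1D building block used by projection-based OTM estimators:
# after projecting onto a direction, the one-dimensional optimal transport
# map between two equal-size samples pairs order statistics monotonically.
import numpy as np

def ot_map_1d(x, y):
    """Monotone (optimal) pairing of two equal-size 1D samples."""
    ix, iy = np.argsort(x), np.argsort(y)
    t = np.empty_like(y)
    t[ix] = y[iy]          # i-th smallest x is sent to i-th smallest y
    return t

rng = np.random.default_rng(2)
x = rng.normal(size=1000)                 # source sample
y = 2.0 * rng.normal(size=1000) + 3.0     # target sample
t = ot_map_1d(x, y)
print(bool(np.all(np.diff(t[np.argsort(x)]) >= 0)))  # True: map is monotone
```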
The adaptive sparse depth sampling network is jointly trained\nwith a fusion network of an RGB image and sparse depth, to generate optimal\nadaptive sampling masks. We show that such adaptive sampling masks can\ngeneralize well to many RGB and sparse depth fusion algorithms under a variety\nof sampling rates (as low as $0.0625\\%$). The proposed adaptive sampling method\nis fully differentiable and flexible to be trained end-to-end with upstream\nperception algorithms.\n"} {"abstract": " For every set of parabolic weights, we construct a Harder-Narasimhan\nstratification for the moduli stack of parabolic vector bundles on a curve. It\nis based on the notion of parabolic slope, introduced by Mehta and Seshadri. We\nalso prove that the stratification is schematic, that each stratum is complete,\nand establish an analogue of Behrend's conjecture for parabolic vector bundles.\nA comparison with recent $\\Theta$-stratification approaches is discussed.\n"} {"abstract": " Counterfactual explanations (CEs) are a practical tool for demonstrating why\nmachine learning classifiers make particular decisions. For CEs to be useful,\nit is important that they are easy for users to interpret. Existing methods for\ngenerating interpretable CEs rely on auxiliary generative models, which may not\nbe suitable for complex datasets, and incur engineering overhead. We introduce\na simple and fast method for generating interpretable CEs in a white-box\nsetting without an auxiliary model, by using the predictive uncertainty of the\nclassifier. Our experiments show that our proposed algorithm generates more\ninterpretable CEs, according to IM1 scores, than existing methods.\nAdditionally, our approach allows us to estimate the uncertainty of a CE, which\nmay be important in safety-critical applications, such as those in the medical\ndomain.\n"} {"abstract": " This paper presents DeepI2P: a novel approach for cross-modality registration\nbetween an image and a point cloud. Given an image (e.g. from an RGB camera) and\na general point cloud (e.g. from a 3D Lidar scanner) captured at different\nlocations in the same scene, our method estimates the relative rigid\ntransformation between the coordinate frames of the camera and Lidar. Learning\ncommon feature descriptors to establish correspondences for the registration is\ninherently challenging due to the lack of appearance and geometric correlations\nacross the two modalities. We circumvent the difficulty by converting the\nregistration problem into a classification and inverse camera projection\noptimization problem. A classification neural network is designed to label\nwhether the projection of each point in the point cloud is within or beyond the\ncamera frustum. These labeled points are subsequently passed into a novel\ninverse camera projection solver to estimate the relative pose. Extensive\nexperimental results on Oxford Robotcar and KITTI datasets demonstrate the\nfeasibility of our approach. Our source code is available at\nhttps://github.com/lijx10/DeepI2P\n"} {"abstract": " Nanoscale layered ferromagnets have demonstrated fascinating two-dimensional\nmagnetism down to atomic layers, providing a peculiar playground of spin orders\nfor investigating fundamental physics and spintronic applications. However, a\nstrategy for growing films with designed magnetic properties has not yet been\nwell established. Herein, we present a versatile method to control the Curie\ntemperature (T_{C}) and magnetic anisotropy during growth of ultrathin\nCr_{2}Te_{3} films. 
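A hedged sketch of the mechanism in the counterfactual-explanation abstract above: take gradient steps on the input that push the classifier toward a target class while penalizing predictive entropy (one possible uncertainty proxy) and distance from the original input. The model, weights, and exact objective are stand-ins, not the authors' algorithm.

```python
# Illustrative sketch: uncertainty-aware counterfactual search by gradient
# descent on the input of a (hypothetical) white-box classifier.
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 3))  # stand-in classifier
x0 = torch.randn(1, 4)                               # instance to explain
x = x0.clone().requires_grad_(True)
target = torch.tensor([2])                           # desired class
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    p = logits.softmax(dim=1)
    entropy = -(p * p.clamp_min(1e-9).log()).sum()   # uncertainty penalty
    loss = (torch.nn.functional.cross_entropy(logits, target)
            + 0.1 * entropy                          # prefer confident CEs
            + 0.1 * (x - x0).pow(2).sum())           # stay close to x0
    loss.backward()
    opt.step()

print(model(x).argmax(dim=1))  # ideally tensor([2]) once the class flips
```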
We demonstrate an increase of the T_{C} from 165 K to 310 K, in\nsync with the magnetic anisotropy switching from an out-of-plane orientation to an\nin-plane one, via control of the Te source flux during film\ngrowth, leading to different c-lattice parameters while preserving the\nstoichiometries and thicknesses of the films. We attributed this modulation of\nmagnetic anisotropy to the switching of the orbital magnetic moment, using\nX-ray magnetic circular dichroism analysis. We also inferred that different\nc-lattice constants might be responsible for the magnetic anisotropy change,\nsupported by theoretical calculations. These findings emphasize the potential\nof ultrathin Cr_{2}Te_{3} films as candidates for developing room-temperature\nspintronics applications, and similar growth strategies could be applicable to\nfabricate other nanoscale layered magnetic compounds.\n"} {"abstract": " A bargaining game is investigated for cooperative energy management in\nmicrogrids. This game incorporates a fully distributed and realistic\ncooperative power scheduling algorithm (CoDES) as well as a distributed Nash\nBargaining Solution (NBS)-based method of allocating the overall power bill\nresulting from CoDES. A novel weather-based stochastic renewable generation\n(RG) prediction method is incorporated in the power scheduling. We demonstrate\nthe proposed game using a 4-user grid-connected microgrid model with diverse\nuser demands, storage, and RG profiles and examine the effect of weather\nprediction on day-ahead power scheduling and cost/profit allocation. Finally,\nthe impact of users' ambivalence about cooperation and/or dishonesty on the\nbargaining outcome is investigated, and it is shown that the proposed game is\nresilient to malicious users' attempts to avoid payment of their fair share of\nthe overall bill.\n"} {"abstract": " The computer vision community has paid much attention to the development of\nvisible image super-resolution (SR) using deep neural networks (DNNs) and has\nachieved impressive results. The advancement of non-visible light sensors, such\nas acoustic imaging sensors, has attracted much attention, as they allow people\nto visualize the intensity of sound waves beyond the visible spectrum. However,\nbecause of the limitations imposed on acquiring acoustic data, new methods for\nimproving the resolution of the acoustic images are necessary. At this time,\nthere is no acoustic imaging dataset designed for the SR problem. This work\nproposes a novel backprojection model architecture for the acoustic image\nsuper-resolution problem, together with the Acoustic Map Imaging VUB-ULB Dataset\n(AMIVU). The dataset provides large simulated and real captured images at\ndifferent resolutions. The proposed XCycles BackProjection model (XCBP), in\ncontrast to the feedforward model approach, fully uses the iterative correction\nprocedure in each cycle to reconstruct the residual error correction for the\nencoded features in both low- and high-resolution space. The proposed approach\nwas evaluated on the dataset and clearly outperformed the classical\ninterpolation operators and the recent feedforward state-of-the-art models. It\nalso contributed to a drastically reduced sub-sampling error produced during\nthe data acquisition.\n"} {"abstract": " The symmetry operators generating the hidden $\\mathbb{Z}_2$ symmetry of the\nasymmetric quantum Rabi model (AQRM) at bias $\\epsilon \\in\n\\frac{1}{2}\\mathbb{Z}$ have recently been constructed by V. V. 
Mangazeev et al.\n[J. Phys. A: Math. Theor. 54 12LT01 (2021)]. We start with this result to\ndetermine symmetry operators for the $N$-qubit generalisation of the AQRM, also\nknown as the biased Dicke model, at special biases. We also prove for general\n$N$ that the symmetry operators, which commute with the Hamiltonian of the\nbiased Dicke model, generate a $\\mathbb{Z}_2$ symmetry.\n"} {"abstract": " Weakened random oracle models (WROMs) are variants of the random oracle model\n(ROM). The WROMs have the random oracle and an additional oracle which breaks\nsome property of a hash function. Analyzing the security of cryptographic\nschemes in WROMs, we can specify the property of a hash function on which the\nsecurity of cryptographic schemes depends. Liskov (SAC 2006) proposed WROMs and\nlater Numayama et al. (PKC 2008) formalized them as CT-ROM, SPT-ROM, and\nFPT-ROM. In each model, there is an additional oracle to break collision\nresistance, second preimage resistance, or preimage resistance, respectively. Tan\nand Wong (ACISP 2012) proposed the generalized FPT-ROM (GFPT-ROM), which was\nintended to capture the chosen prefix collision attack suggested by Stevens et\nal. (EUROCRYPT 2007). In this paper, in order to analyze the security of\ncryptographic schemes more precisely, we formalize GFPT-ROM and propose\nthree additional WROMs which capture the chosen prefix collision attack and its\nvariants. In particular, we focus on signature schemes such as RSA-FDH, its\nvariants, and DSA, in order to understand essential roles of WROMs in their\nsecurity proofs.\n"} {"abstract": " Einstein-Maxwell-dilaton theory with non-trivial dilaton potential is known\nto admit asymptotically flat and (Anti-)de Sitter charged black hole solutions.\nWe investigate the conditions for the presence of horizons as a function of the\nparameters mass $M$, charge $Q$ and dilaton coupling strength $\\alpha$. We\nobserve that there is a value of $\\alpha$ which separates two regions, one where\nthe black hole is Reissner-Nordstr\\"om-like and one where it is\nSchwarzschild-like. We find that for de Sitter and small non-vanishing\n$\\alpha$, the extremal case is not reached by the solution. We also discuss the\nattractive or repulsive nature of the leading long distance interaction between\ntwo such black holes, or a test particle and one black hole, from a world-line\neffective field theory point of view. Finally, we discuss possible\nmodifications of the Weak Gravity Conjecture in the presence of both a\ndilatonic coupling and a cosmological constant.\n"} {"abstract": " Network intrusion attacks are a known threat. To detect such attacks, network\nintrusion detection systems (NIDSs) have been developed and deployed. These\nsystems apply machine learning models to high-dimensional vectors of features\nextracted from network traffic to detect intrusions. Advances in NIDSs have\nmade it challenging for attackers, who must execute attacks without being\ndetected by these systems. Prior research on bypassing NIDSs has mainly focused\non perturbing the features extracted from the attack traffic to fool the\ndetection system; however, this may jeopardize the attack's functionality. In\nthis work, we present TANTRA, a novel end-to-end Timing-based Adversarial\nNetwork Traffic Reshaping Attack that can bypass a variety of NIDSs. Our\nevasion attack utilizes a long short-term memory (LSTM) deep neural network\n(DNN) which is trained to learn the time differences between the target\nnetwork's benign packets. 
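A minimal sketch of the timing model just described: an LSTM trained to predict the next inter-packet time difference from recent benign traffic. The synthetic data and hyperparameters are assumptions; TANTRA's actual pipeline is more elaborate.

```python
# Illustrative sketch: next-step prediction of inter-packet time deltas
# with an LSTM, the core timing model described in the abstract above.
import torch

class DeltaLSTM(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=1, hidden_size=hidden,
                                  batch_first=True)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, deltas):            # deltas: (batch, seq, 1)
        out, _ = self.lstm(deltas)
        return self.head(out)             # predicted next delta per step

model = DeltaLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
benign = torch.rand(64, 50, 1) * 0.01     # synthetic inter-arrival times (s)
for _ in range(100):
    opt.zero_grad()
    pred = model(benign[:, :-1])          # predict delta_{t+1} from history
    loss = torch.nn.functional.mse_loss(pred, benign[:, 1:])
    loss.backward()
    opt.step()
# At attack time, the predicted deltas would be used to re-time malicious
# packets without altering their payloads.
```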
The trained LSTM is used to set the time differences\nbetween the malicious traffic packets (attack), without changing their content,\nsuch that they will "behave" like benign network traffic and will not be\ndetected as an intrusion. We evaluate TANTRA on eight common intrusion attacks\nand three state-of-the-art NIDSs, achieving an average success rate of\n99.99\\% in network intrusion detection system evasion. We also propose a novel\nmitigation technique to address this new evasion attack.\n"} {"abstract": " We performed deep observations to search for radio pulsations in the\ndirections of 375 unassociated Fermi Large Area Telescope (LAT) gamma-ray\nsources using the Giant Metrewave Radio Telescope (GMRT) at 322 and 607 MHz. In\nthis paper we report the discovery of three millisecond pulsars (MSPs), PSR\nJ0248+4230, PSR J1207$-$5050 and PSR J1536$-$4948. We conducted follow-up\ntiming observations for around 5 years with the GMRT and derived phase-coherent\ntiming models for these MSPs. PSR J0248$+$4230 and J1207$-$5050 are isolated\nMSPs having periodicities of 2.60 ms and 4.84 ms. PSR J1536-4948 is a 3.07 ms\npulsar in a binary system with an orbital period of around 62 days about a\ncompanion of minimum mass 0.32 solar masses. We also present multi-frequency\npulse profiles of these MSPs from the GMRT observations. PSR J1536-4948 is an\nMSP with an extremely wide pulse profile having multiple components. Using the\nradio timing ephemeris we subsequently detected gamma-ray pulsations from these\nthree MSPs, confirming them as the sources powering the gamma-ray emission. For\nPSR J1536-4948 we performed combined radio-gamma-ray timing using around 11.6\nyears of gamma-ray pulse times of arrival (TOAs) along with the radio TOAs.\nPSR J1536-4948 also shows evidence for pulsed gamma-ray emission out to above\n25 GeV, confirming earlier associations of this MSP with a >10 GeV point\nsource. The multi-wavelength pulse profiles of all three MSPs offer challenges\nto models of radio and gamma-ray emission in pulsar magnetospheres.\n"} {"abstract": " This paper studies the precoder design problem of achieving max-min fairness\n(MMF) amongst users in multigateway multibeam satellite communication systems\nwith feeder link interference. We propose a beamforming strategy based on a\nnewly introduced transmission scheme known as rate-splitting multiple access\n(RSMA). RSMA relies on multi-antenna rate-splitting at the transmitter and\nsuccessive interference cancellation (SIC) at the receivers, such that the\nintended message for a user is split into a common part and a private part and\nthe interference is partially decoded and partially treated as noise. In this\npaper, we formulate the MMF problem subject to per-antenna power constraints at\nthe satellite for the system with imperfect channel state information at the\ntransmitter (CSIT). We also consider the case of two-stage precoding which is\nassisted by on-board processing (OBP) at the satellite. Numerical results\nobtained through simulations for RSMA and the conventional linear precoding\nmethod are compared. When RSMA is used, an MMF rate gain is achieved, and this gain\nincreases when OBP is used. 
RSMA is proven to be promising for multigateway\nmultibeam satellite systems in which there are various practical challenges such\nas feeder link interference, CSIT uncertainty, per-antenna power constraints,\nuneven user distribution per beam and frame-based processing.\n"} {"abstract": " This paper introduces PyMatching, a fast open-source Python package for\ndecoding quantum error-correcting codes with the minimum-weight perfect\nmatching (MWPM) algorithm. PyMatching includes the standard MWPM decoder as\nwell as a variant, which we call local matching, that restricts each syndrome\ndefect to be matched to another defect within a local neighbourhood. The\ndecoding performance of local matching is almost identical to that of the\nstandard MWPM decoder in practice, while reducing the computational complexity\napproximately quadratically. We benchmark the performance of PyMatching,\nshowing that local matching is several orders of magnitude faster than\nimplementations of the full MWPM algorithm using NetworkX or Blossom V for\nproblem sizes typically considered in error correction simulations. PyMatching\nand its dependencies are open-source, and it can be used to decode any quantum\ncode for which syndrome defects come in pairs using a simple Python interface.\nPyMatching supports the use of weighted edges, hook errors, boundaries and\nmeasurement errors, enabling fast decoding and simulation of fault-tolerant\nquantum computing.\n"} {"abstract": " We implement two recently developed fast Coulomb solvers, HSMA3D [J. Chem.\nPhys. 149 (8) (2018) 084111] and HSMA2D [J. Chem. Phys. 152 (13) (2020)\n134109], into a new user package HSMA for the molecular dynamics simulation\nengine LAMMPS. The HSMA package is designed for efficient and accurate modeling\nof electrostatic interactions in 3D and 2D periodic systems with dielectric\neffects at O(N) cost. The implementation is hybrid MPI and OpenMP\nparallelized and compatible with existing LAMMPS functionalities. The\nvectorization technique following AVX512 instructions is adopted for\nacceleration. To establish the validity of our implementation, we have\npresented extensive comparisons to the widely used particle-particle\nparticle-mesh (PPPM) algorithm in LAMMPS and other dielectric solvers. With the\nproper choice of algorithm parameters and parallelization setup, the package\nenables calculations of electrostatic interactions that outperform the standard\nPPPM in speed for a wide range of particle numbers.\n"} {"abstract": " The growing share of proactive actors in the electricity markets calls for\nmore attention to prosumers and more support for their decision-making under\ndecentralized electricity markets. In view of the changing paradigm, it is\ncrucial to study the long-term planning under the decentralized and\nprosumer-centric markets to unravel the effects of such markets on the planning\ndecisions. In the first part of the two-part paper, we propose a\nprosumer-centric framework for concurrent generation and transmission planning.\nHere, three planning models are presented, in which a peer-to-peer market with\nproduct differentiation, a pool market and a mixed bilateral/pool market,\ntogether with their associated trading costs, are explicitly modeled,\nrespectively. To fully reveal the individual costs and benefits, we start by\nformulating the optimization problems of various actors, i.e. prosumers, the\ntransmission system operator, the energy market operator and the carbon market\noperator. 
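The MWPM step underlying PyMatching, described above, can be illustrated on a toy repetition code: pair up syndrome defects by a minimum-weight matching on their pairwise distances. The sketch below uses NetworkX (one of the baselines mentioned) rather than PyMatching's own API, and it ignores boundaries for simplicity.

```python
# Toy illustration of MWPM decoding: defects of a repetition code are paired
# by a minimum-weight perfect matching (max-weight matching on negated
# distances). Not PyMatching's interface; positions are invented.
import networkx as nx

defects = [2, 3, 9, 10]          # positions of flipped parity checks
G = nx.Graph()
for i, a in enumerate(defects):
    for j, b in enumerate(defects):
        if i < j:
            G.add_edge(i, j, weight=-abs(a - b))  # negate for min-weight
pairs = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(p)) for p in pairs))    # [(0, 1), (2, 3)]
# Each matched pair indicates a short error chain between the two defects.
```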
Moreover, to\nenable decentralized planning where the privacy of the prosumers is preserved,\ndistributed optimization algorithms are presented based on the corresponding\ncentralized optimization problems.\n"} {"abstract": " Previous work mainly focuses on improving cross-lingual transfer for NLU\ntasks with a multilingual pretrained encoder (MPE), or improving the\nperformance on supervised machine translation with BERT. However, whether the\nMPE can help to facilitate the cross-lingual transferability of an NMT model\nremains under-explored. In this paper, we focus on a zero-shot\ncross-lingual transfer task in NMT. In this task, the NMT model is trained with\na parallel dataset of only one language pair and an off-the-shelf MPE, then it is\ndirectly tested on zero-shot language pairs. We propose SixT, a simple yet\neffective model for this task. SixT leverages the MPE with a two-stage training\nschedule and gets further improvement with a position disentangled encoder and\na capacity-enhanced decoder. Using this method, SixT significantly outperforms\nmBART, a pretrained multilingual encoder-decoder model explicitly designed for\nNMT, with an average improvement of 7.1 BLEU on zero-shot any-to-English test\nsets across 14 source languages. Furthermore, with much less training\ncomputation cost and training data, our model achieves better performance on 15\nany-to-English test sets than CRISS and m2m-100, two strong multilingual NMT\nbaselines.\n"} {"abstract": " Particle tracking in large-scale numerical simulations of turbulent flows\npresents one of the major bottlenecks in parallel performance and scaling\nefficiency. Here, we describe a particle tracking algorithm for large-scale\nparallel pseudo-spectral simulations of turbulence which scales well up to\nbillions of tracer particles on modern high-performance computing\narchitectures. We summarize the standard parallel methods used to solve the\nfluid equations in our hybrid MPI/OpenMP implementation. As the main focus, we\ndescribe the implementation of the particle tracking algorithm and document its\ncomputational performance. To address the extensive inter-process communication\nrequired by particle tracking, we introduce a task-based approach to overlap\npoint-to-point communications with computations, thereby enabling improved\nresource utilization. We characterize the computational cost as a function of\nthe number of particles tracked and compare it with the flow field computation,\nshowing that the cost of particle tracking is very small for typical\napplications.\n"} {"abstract": " Spontaneous imbibition has been receiving much attention due to its\nsignificance in many subsurface and industrial applications. Unveiling\npore-scale wetting dynamics, and particularly its upscaling to the Darcy scale,\nis still unresolved. In this work, we conduct image-based pore-network\nmodeling of cocurrent spontaneous imbibition and the corresponding quasi-static\nimbibition, in homogeneous sintered glass beads as well as heterogeneous\nEstaillades. A wide range of viscosity ratios and wettability conditions are\ntaken into account. Based on our pore-scale results, we show the influence of\npore-scale heterogeneity on imbibition dynamics and nonwetting entrapment. We\nelucidate different pore-filling mechanisms in imbibition, which helps us\nunderstand wetting dynamics. Most importantly, we develop a non-equilibrium\nmodel for relative permeability of the wetting phase, which adequately\nincorporates wetting dynamics. 
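The communication/computation overlap described in the particle-tracking abstract above is commonly realized with non-blocking point-to-point calls; a schematic mpi4py sketch, where the ring exchange pattern and buffer sizes are illustrative assumptions rather than the paper's implementation:

```python
# Schematic overlap of particle exchange and local work with mpi4py.
# Run with an MPI launcher, e.g.: mpiexec -n 2 python this_script.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
right, left = (rank + 1) % size, (rank - 1) % size

outgoing = np.random.rand(1000, 3)        # particles leaving this rank
incoming = np.empty_like(outgoing)

reqs = [comm.Isend(outgoing, dest=right, tag=0),   # post non-blocking sends
        comm.Irecv(incoming, source=left, tag=0)]  # and receives first

local_work = np.sin(np.random.rand(500_000)).sum() # overlapped computation

MPI.Request.Waitall(reqs)                 # communication completes here
print(rank, local_work, incoming.shape)
```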
This is crucial to the final goal of developing\na two-phase imbibition model with measurable material properties such as\ncapillary pressure and relative permeability. Finally, we propose some future\nwork on both numerical and experimental verifications of the developed\nnon-equilibrium permeability model.\n"} {"abstract": " This paper introduces the notion of an unravelled abstract regular polytope,\nand proves that $\\SL_3(q) \\rtimes \\langle t \\rangle$, where $t$ is the transpose inverse\nautomorphism of $\\SL_3(q)$, possesses such polytopes for various congruences of\n$q$. A large number of small examples of such polytopes are given, along with\nextensive details of their various properties.\n"} {"abstract": " If $R$ is a commutative unital ring and $M$ is a unital $R$-module, then each\nelement of $\\operatorname{End}_R(M)$ determines a left\n$\\operatorname{End}_{R}(M)[X]$-module structure on $\\operatorname{End}_{R}(M)$,\nwhere $\\operatorname{End}_{R}(M)$ is the $R$-algebra of endomorphisms of $M$\nand $\\operatorname{End}_{R}(M)[X] =\\operatorname{End}_{R}(M)\\otimes_RR[X]$.\nThese structures provide a very short proof of the Cayley-Hamilton theorem,\nwhich may be viewed as a reformulation of the proof in Algebra by Serge Lang.\nSome generalisations of the Cayley-Hamilton theorem can be easily proved using\nthe proposed method.\n"} {"abstract": " We present a fascinating model that has lately caught attention among\nphysicists working in complexity-related fields. Though it originated from\nmathematics and later from economics, the model is very enlightening in many\naspects that we shall highlight in this review. It is called The Stable\nMarriage Problem (though the marriage metaphor can be generalized to many other\ncontexts), and it consists of matching men and women, considering\npreference-lists where individuals express their preference over the members of\nthe opposite gender. This problem appeared for the first time in 1962 in the\nseminal paper of Gale and Shapley and has aroused interest in many fields of\nscience, including economics, game theory, computer science, etc. Recently it\nhas also attracted many physicists who, using the powerful tools of statistical\nmechanics, have approached it as an optimization problem. Here we present a\ncomplete overview of the Stable Marriage Problem emphasizing its\nmultidisciplinary aspect, and reviewing the key results in the disciplines that\nit has influenced most. We focus, in particular, on the old and recent results\nachieved by physicists, finally introducing two new promising models inspired\nby the philosophy of the Stable Marriage Problem. Moreover, we present an\ninnovative reinterpretation of the problem, useful to highlight the\nrevolutionary role of information in the contemporary economy.\n"} {"abstract": " In this article, we prove that the Buchstaber invariant of the 4-dimensional real\nuniversal complex is no less than 24 as a follow-up to the work of Ayzenberg\n[2] and Sun [14]. Moreover, a lower bound for Buchstaber invariants of\n$n$-dimensional real universal complexes is given as an improvement of\nErokhovet's result in [7].\n"} {"abstract": " The interest in dynamic processes on networks has been steadily rising in recent\nyears. In this paper, we consider the $(\\alpha,\\beta)$-Thresholded Network\nDynamics ($(\\alpha,\\beta)$-Dynamics), where $\\alpha\\leq \\beta$, in which only\nstructural dynamics (dynamics of the network) are allowed, guided by local\nthresholding rules executed in each node. 
In particular, in each discrete round\n$t$, each pair of nodes $u$ and $v$ that are allowed to communicate by the\nscheduler computes a value $\\mathcal{E}(u,v)$ (the potential of the pair) as a\nfunction of the local structure of the network at round $t$ around the two\nnodes. If $\\mathcal{E}(u,v) < \\alpha$ then the link (if it exists) between $u$\nand $v$ is removed; if $\\alpha \\leq \\mathcal{E}(u,v) < \\beta$ then an existing\nlink between $u$ and $v$ is maintained; if $\\beta \\leq \\mathcal{E}(u,v)$ then a\nlink between $u$ and $v$ is established if not already present.\n The microscopic structure of $(\\alpha,\\beta)$-Dynamics appears to be simple,\nso that we are able to rigorously argue about it, but still flexible, so that\nwe are able to design meaningful microscopic local rules that give rise to\ninteresting macroscopic behaviors. Our goals are the following: a) to\ninvestigate the properties of the $(\\alpha,\\beta)$-Thresholded Network Dynamics\nand b) to show that $(\\alpha,\\beta)$-Dynamics is expressive enough to solve\ncomplex problems on networks.\n Our contribution in these directions is twofold. We rigorously establish the\nclaim about the expressiveness of $(\\alpha,\\beta)$-Dynamics, both by designing\na simple protocol that provably computes the $k$-core of the network and\nby showing that $(\\alpha,\\beta)$-Dynamics is in fact Turing-Complete. Second\nand most importantly, we construct general tools for proving stabilization that\nwork for a subclass of $(\\alpha,\\beta)$-Dynamics and prove speed of convergence\nin a restricted setting.\n"} {"abstract": " Deep generative models have emerged as a powerful class of priors for signals\nin various inverse problems such as compressed sensing, phase retrieval and\nsuper-resolution. Here, we assume an unknown signal to lie in the range of some\npre-trained generative model. A popular approach for signal recovery is via\ngradient descent in the low-dimensional latent space. While gradient descent\nhas achieved good empirical performance, its theoretical behavior is not well\nunderstood. In this paper, we introduce the use of stochastic gradient Langevin\ndynamics (SGLD) for compressed sensing with a generative prior. Under mild\nassumptions on the generative model, we prove the convergence of SGLD to the\ntrue signal. We also demonstrate competitive empirical performance to standard\ngradient descent.\n"} {"abstract": " The hull of a linear code over a finite field is the intersection of the code\nand its dual, a notion introduced by Assmus and Key. In this paper, we develop\na method to construct linear codes with trivial hull (LCD codes) and\none-dimensional hull by employing the positive characteristic analogues of\nGauss sums. These codes are quasi-abelian, and sometimes doubly circulant. Some\nsufficient conditions for a linear code to be an LCD code (resp. a linear code\nwith one-dimensional hull) are presented. It is worth mentioning that we\npresent a lower bound on the minimum distances of the constructed linear codes.\nAs an application, using these conditions, we obtain some optimal or almost\noptimal LCD codes (resp. 
linear codes with one-dimensional hull) with respect\nto the online Database of Grassl.\n"} {"abstract": " The performance of visual quality prediction models is commonly assumed to be\nclosely tied to their ability to capture perceptually relevant image aspects.\nModels are thus either based on sophisticated feature extractors carefully\ndesigned from extensive domain knowledge or optimized through feature learning.\nIn contrast to this, we find feature extractors constructed from random noise\nto be sufficient to learn a linear regression model whose quality predictions\nreach high correlations with human visual quality ratings, on par with a model\nwith learned features. We analyze this curious result and show that besides the\nquality of feature extractors, their quantity also plays a crucial role - with\ntop performance only being achieved in highly overparameterized models.\n"} {"abstract": " The WL-rank of a digraph $\\Gamma$ is defined to be the rank of the coherent\nconfiguration of $\\Gamma$. We construct a new infinite family of strictly Deza\nCayley graphs for which the WL-rank is equal to the number of vertices. The\ngraphs from this family are divisible design graphs and integral.\n"} {"abstract": " The city of Rio de Janeiro is one of the biggest cities in Brazil. Drug gangs\nand paramilitary groups called \\textit{mil\\'icias} control some regions of the\ncity where the government is not present, especially in the slums. Due to the\ncharacteristics of these two distinct groups, it was observed that the evolution\nof COVID-19 is different in those two regions, in comparison with the regions\ncontrolled by the government. In order to understand qualitatively those\nobservations, we divided the city into three regions controlled by the\ngovernment, by the drug gangs and by the \\textit{mil\\'icias}, respectively, and\nwe consider a SIRD-like epidemic model where the three regions are coupled.\nConsidering different levels of exposure, the model is capable of qualitatively\nreproducing the distinct evolution of the COVID-19 disease in the three\nregions, suggesting that organized crime shapes the COVID-19 evolution in\nthe city of Rio de Janeiro. This case study suggests that the model can be used\nin general for any metropolitan region with groups of people that can be\ncategorized by their level of exposure.\n"} {"abstract": " In Specific Power Absorption (SPA) models for Magnetic Fluid Hyperthermia\n(MFH) experiments, the magnetic relaxation time of the nanoparticles (NPs) is\nknown to be a fundamental descriptor of the heating mechanisms. The relaxation\ntime is mainly determined by the interplay between the magnetic properties of\nthe NPs and the rheological properties of the NPs' environment. Although the role of\nmagnetism in MFH has been extensively studied, the thermal properties of the\nNP medium and their changes during MFH experiments have so far been\nunderrated. Here, we study ZnxFe3-xO4 NPs dispersed through different media with\na phase transition in the temperature range of the experiment: clarified butter\noil (CBO) and paraffin. These systems show non-linear behavior of the heating\nrate within the temperature range of the MFH experiments. 
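A hedged sketch of the random-feature finding reported in the quality-prediction abstract above: frozen random ReLU features followed by a linear (ridge) read-out. Synthetic vectors stand in for images and human ratings; the real studies use rated image datasets.

```python
# Illustrative sketch: quality prediction from frozen random features plus
# a linear ridge regression, in the overparameterized regime.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_images, dim, n_features = 300, 1024, 4096      # many more features than data
images = rng.normal(size=(n_images, dim))        # stand-ins for image vectors
mos = images[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_images)  # fake MOS

W = rng.normal(size=(dim, n_features)) / np.sqrt(dim)  # frozen random filters
feats = np.maximum(images @ W, 0.0)              # random ReLU features

reg = Ridge(alpha=1.0).fit(feats[:200], mos[:200])
print(np.corrcoef(reg.predict(feats[200:]), mos[200:])[0, 1])  # held-out corr
```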
For CBO, a fast\nincrease occurs at $306 K$, associated with changes in the viscosity ($\\eta(T)$) and\nspecific heat ($c_p(T)$) of the medium below and above its melting temperature.\nThis increment in the heating rate takes place around $318 K$ for paraffin.\nMagnetic and morphological characterizations of NPs together with the observed\nagglomeration of the nanoparticles above $306 K$ indicate that the fast\nincrease in MFH curves could not be associated with a change in the magnetic\nrelaxation mechanism, with N\\'eel relaxation being dominant. In fact,\nsuccessive experimental runs performed up to temperatures below and above the CBO\nmelting point resulted in different MFH curves due to agglomeration of NPs\ndriven by magnetic field inhomogeneity during the experiments. Similar effects\nwere observed for paraffin. Our results highlight the relevance of the NP\nmedium's thermodynamic properties for an accurate measurement of the heating\nefficiency for in vitro and in vivo environments, where the thermal properties\nare largely variable within the temperature window of MFH experiments.\n"} {"abstract": " Recent studies indicate that hierarchical Vision Transformer with a macro\narchitecture of interleaved non-overlapped window-based self-attention \\&\nshifted-window operation is able to achieve state-of-the-art performance in\nvarious visual recognition tasks, and challenges the ubiquitous convolutional\nneural networks (CNNs) using densely slid kernels. Most follow-up works attempt\nto replace the shifted-window operation with other kinds of cross-window\ncommunication paradigms, while treating self-attention as the de-facto standard\nfor window-based information aggregation. In this manuscript, we question\nwhether self-attention is the only choice for hierarchical Vision Transformer\nto attain strong performance, and study the effects of different kinds of\ncross-window communication. To this end, we replace self-attention layers with\nembarrassingly simple linear mapping layers, and the resulting proof-of-concept\narchitecture termed LinMapper can achieve very strong performance in\nImageNet-1k image recognition. Moreover, we find that LinMapper is able to\nbetter leverage the pre-trained representations from image recognition and\ndemonstrates excellent transfer learning properties on downstream dense\nprediction tasks such as object detection and instance segmentation. We also\nexperiment with other alternatives to self-attention for content aggregation\ninside each non-overlapped window under different cross-window communication\napproaches, which all give similar competitive results. Our study reveals that\nthe \\textbf{macro architecture} of Swin model families, rather than specific\naggregation layers or specific means of cross-window communication, may be more\nresponsible for its strong performance and is the real challenger to the\nubiquitous CNN's dense sliding window paradigm. Code and models will be\npublicly available to facilitate future research.\n"} {"abstract": " Safe, environmentally conscious and flexible, these are the central\nrequirements for future mobility. In the European border region between\nGermany, France and Luxembourg, mobility in the world of work and pleasure is a\ndecisive factor. It must be simple, affordable and available to all. The\nautomation and intelligent connection of road traffic plays an important role\nin this. 
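A rough sketch of the window-based linear aggregation that the LinMapper abstract above describes: partition feature maps into non-overlapped windows and mix each window's tokens with a plain linear layer instead of self-attention. Shapes and the per-channel mixing choice are our assumptions, not the paper's code.

```python
# Illustrative sketch: replace windowed self-attention with a simple linear
# token-mixing layer inside each non-overlapped window.
import torch

B, H, W, C, win = 2, 8, 8, 32, 4
x = torch.randn(B, H, W, C)

# partition into (B * num_windows, win*win, C) token groups
xw = (x.view(B, H // win, win, W // win, win, C)
       .permute(0, 1, 3, 2, 4, 5)
       .reshape(-1, win * win, C))

token_mix = torch.nn.Linear(win * win, win * win)  # mixes tokens per channel
yw = token_mix(xw.transpose(1, 2)).transpose(1, 2) # (B*nW, win*win, C)
print(yw.shape)
# A cross-window communication step (e.g. window shifting) would follow.
```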
Due to the distributed settlement structure with many small towns and\nvillages and a few central hot spots, a fully available public transport system is very\ncomplex and expensive, and only a few bus and train lines exist. In this\ncontext, the trinational research project TERMINAL aims to establish a\ncross-border automated minibus in regular traffic and to explore the user\nacceptance for commuter traffic. Additionally, mobility-on-demand services are\ntested, and both will be embedded within the existing public transport\ninfrastructure.\n"} {"abstract": " Firing Squad Synchronisation on Cellular Automata is the dynamical\nsynchronisation of finitely many cells without any prior knowledge of their\nrange. This can be conceived as a signal with an infinite speed. Most of the\nproposed constructions naturally translate to the continuous setting of signal\nmachines and generate fractal figures with an accumulation on a horizontal\nline, i.e. synchronously, in the space-time diagram. Signal machines are\nstudied in a series of articles named Abstract Geometrical Computation.\n In the present article, we design a signal machine that is able to\nsynchronise/accumulate on any non-infinite slope. The slope is encoded in the\ninitial configuration. This is done by constructing an infinite tree such that\neach node computes the way the tree expands.\n The interest of Abstract Geometrical Computation is to do away with the\nconstraint of discrete space, while tackling new difficulties from continuous\nspace. The interest of this paper in particular is to provide basic tools for\nfurther study of computable accumulation lines in the signal machine model.\n"} {"abstract": " Numerical qualification of an eco-friendly alternative gas mixture for\navalanche mode operation of Resistive Plate Chambers is the soul of this work.\nTo identify the gas mixture, a numerical model developed elsewhere by the\nauthors has been first established by comparing the simulated figures of merit\n(efficiency and streamer probability) with the experimental data for the gas\nmixture used in INO-ICAL. Then it has been used to simulate the same properties\nof a gas mixture based on argon, carbon dioxide and nitrogen, identified as a\npotential replacement by studying its different properties. Efficacy of this\neco-friendly gas mixture has been studied by comparing the simulated results\nwith those for the standard gas mixture used in INO-ICAL as well as with experimental\ndata of other eco-friendly hydrofluorocarbon (HFO1234ze) based potential\nreplacements. To increase the efficacy of the proposed gas mixture, studies of\nthe traditional way (addition of a small amount of SF$_6$) and an alternative\napproach (exploring the option of high-end electronics) were carried out.\n"} {"abstract": " We introduce a linearised form of the square root of the Todd class inside\nthe Verbitsky component of a hyper-K\\"ahler manifold using the extended Mukai\nlattice. This enables us to define a Mukai vector for certain objects in the\nderived category taking values inside the extended Mukai lattice which is\nfunctorial for derived equivalences. As applications, we obtain a structure\ntheorem for derived equivalences between hyper-K\\"ahler manifolds as well as an\nintegral lattice associated to the derived category of hyper-K\\"ahler manifolds\ndeformation equivalent to the Hilbert scheme of a K3 surface mimicking the\nsurface case.\n"} {"abstract": " Robots are becoming more and more commonplace in many industry settings. 
This\nsuccessful adoption can be partly attributed to (1) their increasingly\naffordable cost and (2) the possibility of developing intelligent,\nsoftware-driven robots. Unfortunately, robotics software consumes significant\namounts of energy. Moreover, robots are often battery-driven, meaning that even\na small energy improvement can help reduce a robot's energy footprint, increase\nits autonomy, and improve the user experience. In this paper, we study the Robot Operating\nSystem (ROS) ecosystem, the de-facto standard for developing and prototyping\nrobotics software. We analyze 527 energy-related data points (including\ncommits, pull-requests, and issues on ROS-related repositories, ROS-related\nquestions on StackOverflow, ROS Discourse, ROS Answers, and the official ROS\nWiki). Our results include a quantification of the interest of roboticists in\nsoftware energy efficiency, 10 recurrent causes of energy-related issues, 14\nsolutions to such issues, and their implied trade-offs with respect to other\nquality attributes. Those contributions support roboticists and researchers\ntowards having energy-efficient software in future robotics projects.\n"} {"abstract": " We design and analyze an algorithm for first-order stochastic optimization of\na large class of functions on $\\mathbb{R}^d$. In particular, we consider the\n\\emph{variationally coherent} functions which can be convex or non-convex. The\niterates of our algorithm on variationally coherent functions converge almost\nsurely to the global minimizer $\\boldsymbol{x}^*$. Additionally, after $T$\niterations, the very same algorithm with the same hyperparameters guarantees on\nconvex functions that the expected suboptimality gap is bounded by\n$\\widetilde{O}(\\|\\boldsymbol{x}^* - \\boldsymbol{x}_0\\| T^{-1/2+\\epsilon})$ for\nany $\\epsilon>0$. It is the first algorithm to achieve both these properties at\nthe same time. Also, the rate for convex functions essentially matches the\nperformance of parameter-free algorithms. Our algorithm is an instance of the\nFollow The Regularized Leader algorithm with the added twist of using\n\\emph{rescaled gradients} and time-varying linearithmic regularizers.\n"} {"abstract": " A novel structure for the relativistic hydrodynamics of classical plasmas is derived\nfollowing the microscopic dynamics of charged particles. The derivation is\nstarted from the microscopic definition of concentration. Obviously, the\nconcentration evolution leads to the continuity equation and gives the\ndefinition of the particle current. Introducing no arbitrary functions, we consider\nthe evolution of the current (which does not coincide with the momentum density).\nIt leads to a set of new functions which, to the best of our knowledge, have not\nbeen considered in the literature before. One of these functions is the average\nreverse relativistic (gamma) factor. Its current is also considered as one of\nthe basic functions. Evolution of the new functions appears via the concentration and\nparticle current so that the set of equations partially closes itself. Other\nfunctions are presented as functions of the basic functions as part of the truncation\nprocedure. Two pairs of chosen functions construct two four-vectors. Evolution\nof these four-vectors leads to the appearance of two four-tensors which are\nconsidered instead of the energy-momentum tensor. The Langmuir waves are\nconsidered within the suggested model.\n"} {"abstract": " We derive combinatorial necessary conditions for discrete-time quantum walks\ndefined by regular mixed graphs to be periodic. 
If the quantum walk is\nperiodic, all the eigenvalues of the time evolution matrices must be algebraic\nintegers. Focusing on this, we explore which ring the coefficients of the\ncharacteristic polynomials should belong to. On the other hand, the\ncoefficients of the characteristic polynomials of $\\eta$-Hermitian adjacency\nmatrices have combinatorial implications. From these, we can find combinatorial\nimplications in the coefficients of the characteristic polynomials of the time\nevolution matrices, and thus derive combinatorial necessary conditions for\nmixed graphs to be periodic. For example, if a $k$-regular mixed graph with $n$\nvertices is periodic, then $2n/k$ must be an integer. As an application of this\nwork, we determine the periodicity of mixed complete graphs and mixed graphs with a\nprime number of vertices.\n"} {"abstract": " In this paper, we present a multiscale framework for solving the Helmholtz\nequation in heterogeneous media without scale separation and in the high\nfrequency regime where the wavenumber $k$ can be large. The main innovation is\nthat our methods achieve a nearly exponential rate of convergence with respect\nto the computational degrees of freedom, using a coarse grid of mesh size\n$O(1/k)$ without suffering from the well-known pollution effect. The key idea\nis a coarse-fine scale decomposition of the solution space that adapts to the\nmedia property and wavenumber; this decomposition is inspired by the multiscale\nfinite element method. We show that the coarse part is of low complexity in the\nsense that it can be approximated with a nearly exponential rate of convergence\nvia local basis functions, while the fine part is local such that it can be\ncomputed efficiently using the local information of the right hand side. The\ncombination of the two parts yields the overall nearly exponential rate of\nconvergence. We demonstrate the effectiveness of our methods theoretically and\nnumerically; an exponential rate of convergence is consistently observed and\nconfirmed. In addition, we observe the robustness of our methods regarding the\nhigh contrast in the media numerically.\n"} {"abstract": " In this article we introduce the zero-divisor graphs $\\Gamma_\\mathscr{P}(X)$\nand $\\Gamma^\\mathscr{P}_\\infty(X)$ of the two rings $C_\\mathscr{P}(X)$ and\n$C^\\mathscr{P}_\\infty(X)$; here $\\mathscr{P}$ is an ideal of closed sets in $X$\nand $C_\\mathscr{P}(X)$ is the aggregate of those functions in $C(X)$ whose\nsupports lie in $\\mathscr{P}$. $C^\\mathscr{P}_\\infty(X)$ is the $\\mathscr{P}$-analogue\nof the ring $C_\\infty (X)$. We find conditions on the topology on\n$X$ under which $\\Gamma_\\mathscr{P}(X)$ (respectively,\n$\\Gamma^\\mathscr{P}_\\infty(X)$) becomes triangulated/hypertriangulated. We\nrealize that $\\Gamma_\\mathscr{P}(X)$ (respectively,\n$\\Gamma^\\mathscr{P}_\\infty(X)$) is a complemented graph if and only if the\nspace of minimal prime ideals in $C_\\mathscr{P}(X)$ (respectively\n$C^\\mathscr{P}_\\infty(X)$) is compact. This places the special case of this\nresult corresponding to the choice $\\mathscr{P}\\equiv$ the ideal of all closed sets in $X$,\nobtained by Azarpanah and Motamedi in \\cite{Azarpanah}, in a wider setting. We\nalso give an example of a non-locally finite graph having finite chromatic\nnumber. 
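The divisibility consequence stated in the quantum-walk abstract above is directly checkable; a tiny sketch (the function name is ours):

```python
# Sketch of the necessary condition quoted above: if a k-regular mixed graph
# on n vertices is periodic, then 2n/k must be an integer. This is only a
# quick filter for ruling out candidates, not a sufficient condition.
def may_be_periodic(n: int, k: int) -> bool:
    return (2 * n) % k == 0

print(may_be_periodic(6, 4))   # True: 12/4 = 3, periodicity not excluded
print(may_be_periodic(7, 4))   # False: 14/4 is not an integer
```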
Finally, it is established, with some special choices of the ideals\n$\\mathscr{P}$ and $\\mathscr{Q}$ on $X$ and $Y$ respectively, that the rings\n$C_\\mathscr{P}(X)$ and $C_\\mathscr{Q}(Y)$ are isomorphic if and only if\n$\\Gamma_\\mathscr{P}(X)$ and $\\Gamma_\\mathscr{Q}(Y)$ are isomorphic.\n"} {"abstract": " Ever since its foundations were laid nearly a century ago, quantum theory has\nprovoked questions about the very nature of reality. We address these questions\nby considering the universe, and the multiverse, fundamentally as complex\npatterns, or mathematical structures. Basic mathematical structures can be\nexpressed more simply in terms of emergent parameters. Even simple mathematical\nstructures can interact within their own structural environment, in a\nrudimentary form of self-awareness, which suggests a definition of reality in a\nmathematical structure as simply the complete structure. The absolute\nrandomness of quantum outcomes is most satisfactorily explained by a multiverse\nof discrete, parallel universes. Some of these have to be identical to each\nother, but that introduces a dilemma, because each mathematical structure must\nbe unique. The resolution is that the parallel universes must be embedded\nwithin a mathematical structure, the multiverse, which allows universes to be\nidentical within themselves, but nevertheless distinct, as determined by their\nposition in the structure. The multiverse needs more emergent parameters than\nour universe and so it can be considered to be a superstructure.\nCorrespondingly, its reality can be called a super-reality. While every\nuniverse in the multiverse is part of the super-reality, the complete\nsuper-reality is forever beyond the horizon of any of its component universes.\n"} {"abstract": " Semi-device independent (Semi-DI) quantum random number generators (QRNGs)\nhave gained attention for security applications, offering an excellent trade-off\nbetween security and generation rate. This paper presents proof-of-principle\ntime-bin encoding semi-DI QRNG experiments based on a prepare-and-measure\nscheme. The protocol requires two simple assumptions and a measurable\ncondition: an upper bound on the prepared pulses' energy. We lower-bound the\nconditional min-entropy from the energy bound and the input-output correlation,\ndetermining the amount of genuine randomness that can be certified. Moreover,\nwe present a generalized optimization problem for bounding the min-entropy in\nthe case of multiple inputs and outcomes in the form of a semidefinite program\n(SDP). The protocol is tested with a simple experimental setup, capable of\nrealizing two configurations for the ternary time-bin encoding scheme. The\nexperimental setup is easy-to-implement and comprises commercially available\noff-the-shelf (COTS) components at the telecom wavelength, granting a secure\nand certifiable entropy source. The combination of ease-of-implementation,\nscalability, high-security level, and output-entropy makes our system a\npromising candidate for commercial QRNGs.\n"} {"abstract": " While artificial intelligence provides the backbone for many tools people use\naround the world, recent work has brought to attention that the algorithms\npowering AI are not free of politics, stereotypes, and bias. While most work in\nthis area has focused on the ways in which AI can exacerbate existing\ninequalities and discrimination, very little work has studied how governments\nactively shape training data. 
We describe how censorship has affected the\ndevelopment of Wikipedia corpora, text data which are regularly used as\npre-training inputs for NLP algorithms. We show that word embeddings trained on\nBaidu Baike, an online Chinese encyclopedia, have very different associations\nbetween adjectives and a range of concepts about democracy, freedom, collective\naction, equality, and people and historical events in China than those of its regularly\nblocked but uncensored counterpart - Chinese language Wikipedia. We examine the\nimplications of these discrepancies by studying their use in downstream AI\napplications. Our paper shows how government repression, censorship, and\nself-censorship may impact training data and the applications that draw from\nthem.\n"} {"abstract": " Graphene nanoribbons (GNRs) possess distinct symmetry-protected topological\nphases. We show, through first-principles calculations, that by applying an\nexperimentally accessible transverse electric field (TEF), certain boron and\nnitrogen periodically co-doped GNRs have tunable topological phases. The\ntunability arises from a field-induced band inversion due to an opposite\nresponse of the conduction- and valence-band states to the electric field. With\na spatially-varying applied field, segments of GNRs of distinct topological\nphases are created, resulting in a field-programmable array of topological\njunction states, each of which may be occupied with charge or spin. Our findings not\nonly show that an electric field may be used as an easy tuning knob for\ntopological phases in quasi-one-dimensional systems, but also provide new\ndesign principles for future GNR-based quantum electronic devices through their\ntopological characters.\n"} {"abstract": " Self-supervised contrastive learning offers a means of learning informative\nfeatures from a pool of unlabeled data. In this paper, we delve into another\nuseful approach -- providing a way of selecting a core-set that is entirely\nunlabeled. In this regard, contrastive learning, one of a large number of\nself-supervised methods, was recently proposed and has consistently delivered\nthe highest performance. This prompted us to choose two leading methods for\ncontrastive learning: the simple framework for contrastive learning of visual\nrepresentations (SimCLR) and the momentum contrastive (MoCo) learning\nframework. We calculated the cosine similarities for each example of an epoch\nfor the entire duration of the contrastive learning process and subsequently\naccumulated the cosine-similarity values to obtain the coreset score. Our\nassumption was that a sample with low similarity would likely behave as a\ncoreset. Compared with existing coreset selection methods with labels, our\napproach reduced the cost associated with human annotation. The unsupervised\nmethod implemented in this study for coreset selection obtained improved\nresults over a randomly chosen subset, and was comparable to existing\nsupervised coreset selection on various classification datasets (e.g., CIFAR,\nSVHN, and QMNIST).\n"} {"abstract": " We study the action of the homeomorphism group of a surface $S$ on the fine\ncurve graph ${\mathcal C}^\dagger(S)$.
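The coreset-scoring recipe in the contrastive-learning abstract above reduces to a few lines. A rough sketch (ours, not the authors' code: the pairing of features from two augmented views and the 10% selection fraction are illustrative assumptions):

```python
import numpy as np

def accumulate_scores(view1_feats, view2_feats, scores):
    """Add one epoch's per-sample cosine similarity between the features of
    two augmented views (shape: n_samples x dim) to the running scores."""
    a = view1_feats / np.linalg.norm(view1_feats, axis=1, keepdims=True)
    b = view2_feats / np.linalg.norm(view2_feats, axis=1, keepdims=True)
    return scores + np.sum(a * b, axis=1)

def select_coreset(scores, fraction=0.1):
    """Keep the lowest-scoring samples, per the stated assumption that
    low accumulated similarity marks likely coreset members."""
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[:k]
```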
While the definition of\n$\mathcal{C}^\dagger(S)$ parallels the classical curve graph for mapping class\ngroups, we show that the dynamics of the action of ${\mathrm{Homeo}}(S)$ on\n$\mathcal{C}^\dagger(S)$ is much richer: homeomorphisms induce parabolic\nisometries in addition to elliptics and hyperbolics, and all positive reals are\nrealized as asymptotic translation lengths.\n When the surface $S$ is a torus, we relate the dynamics of the action of a\nhomeomorphism on $\mathcal{C}^\dagger(S)$ to the dynamics of its action on the\ntorus via the classical theory of rotation sets. We characterize homeomorphisms\nacting hyperbolically, show that asymptotic translation length provides a lower\nbound for the area of the rotation set, and, while no characterisation purely\nin terms of rotation sets is possible, we give sufficient conditions for\nelements to be elliptic or parabolic.\n"} {"abstract": " A quantum internet aims at harnessing networked quantum technologies, namely\nby distributing bipartite entanglement between distant nodes. However,\nmultipartite entanglement between the nodes may empower the quantum internet\nfor additional or better applications for communications, sensing, and\ncomputation. In this work, we present an algorithm for generating multipartite\nentanglement between different nodes of a quantum network with noisy quantum\nrepeaters and imperfect quantum memories, where the links are entangled pairs.\nOur algorithm is optimal for GHZ states with 3 qubits, maximising\nsimultaneously the final state fidelity and the rate of entanglement\ndistribution. Furthermore, we determine the conditions yielding this\nsimultaneous optimality for GHZ states with a higher number of qubits, and for\nother types of multipartite entanglement. Our algorithm is also general in the\nsense that it can simultaneously optimise arbitrary parameters. This work opens\nthe way to optimally generate multipartite quantum correlations over noisy\nquantum networks, an important resource for distributed quantum technologies.\n"} {"abstract": " We present a novel mapping for studying 2D many-body quantum systems by\nsolving an effective, one-dimensional long-range model in place of the original\ntwo-dimensional short-range one. In particular, we address the problem of\nchoosing an efficient mapping from the 2D lattice to a 1D chain that optimally\npreserves the locality of interactions within the tensor network (TN) structure. By using Matrix\nProduct States (MPS) and Tree Tensor Network (TTN) algorithms, we compute the\nground state of the 2D quantum Ising model in transverse field with lattice\nsize up to $64\times64$, comparing the results obtained from different mappings\nbased on two space-filling curves, the snake curve and the Hilbert curve. We\nshow that the locality-preserving properties of the Hilbert curve lead to a\nclear improvement of numerical precision, especially for large sizes, and turn\nout to provide the best performance for the simulation of 2D lattice systems\nvia 1D TN structures.\n"} {"abstract": " Biological agents have meaningful interactions with their environment despite\nthe absence of immediate reward signals. In such instances, the agent can learn\npreferred modes of behaviour that lead to predictable states -- necessary for\nsurvival. In this paper, we pursue the notion that this learnt behaviour can be\na consequence of reward-free preference learning that ensures an appropriate\ntrade-off between exploration and preference satisfaction.
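For intuition about the two space-filling mappings compared in the tensor-network abstract above, here is a small self-contained sketch (our own illustration; the paper's actual lattice-to-chain pipeline is more involved). `snake_index` is the row-by-row boustrophedon order, and `hilbert_index` is the classic bitwise Hilbert-curve order, valid when the side length `L` is a power of two:

```python
def snake_index(x, y, L):
    """Snake-curve position of grid point (x, y): row-major order,
    reversing direction on every other row."""
    return y * L + (x if y % 2 == 0 else L - 1 - x)

def hilbert_index(x, y, L):
    """Hilbert-curve position of grid point (x, y); L must be a power of two.
    Standard bitwise xy-to-d conversion with quadrant rotation/reflection."""
    d = 0
    s = L // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            if rx == 1:
                x, y = L - 1 - x, L - 1 - y
            x, y = y, x  # rotate the quadrant
        s //= 2
    return d

# Neighbouring lattice sites stay closer along the Hilbert order on average,
# which is the locality property the abstract credits for better precision.
order = sorted((hilbert_index(x, y, 4), (x, y)) for x in range(4) for y in range(4))
```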
For this, we\nintroduce a model-based Bayesian agent equipped with a preference learning\nmechanism (pepper) using conjugate priors. These conjugate priors are used to\naugment the expected free energy planner for learning preferences over states\n(or outcomes) across time. Importantly, our approach enables the agent to learn\npreferences that encourage adaptive behaviour at test time. We illustrate this\nin the OpenAI Gym FrozenLake and the 3D mini-world environments -- with and\nwithout volatility. Given a constant environment, these agents learn confident\n(i.e., precise) preferences and act to satisfy them. Conversely, in a volatile\nsetting, perpetual preference uncertainty maintains exploratory behaviour. Our\nexperiments suggest that learnable (reward-free) preferences entail a trade-off\nbetween exploration and preference satisfaction. Pepper offers a\nstraightforward framework suitable for designing adaptive agents when reward\nfunctions cannot be predefined, as in real environments.\n"} {"abstract": " We introduce a new supervised learning algorithm to train spiking\nneural networks for classification. The algorithm overcomes a limitation of\nexisting multi-spike learning methods: it solves the problem of interference\nbetween interacting output spikes during a learning trial. This problem of\nlearning interference causes learning performance in existing approaches to\ndecrease as the number of output spikes increases, and represents an important\nlimitation in existing multi-spike learning approaches. We address learning\ninterference by introducing a novel mechanism to balance the magnitudes of\nweight adjustments during learning, which in theory allows every spike to\nsimultaneously converge to its desired timing. Our results indicate that our\nmethod achieves significantly higher memory capacity and faster convergence\ncompared to existing approaches for multi-spike classification. On the\nubiquitous Iris and MNIST datasets, our algorithm achieves competitive\npredictive performance with state-of-the-art approaches.\n"} {"abstract": " We exhibit explicit and easily realisable bijections between Hecke--Kiselman\nmonoids of type $A_n$/$\widetilde{A}_n$; certain braid diagrams on the\nplane/cylinder; and couples of integer sequences of particular types. This\nyields a fast solution of the word problem and an efficient normal form for\nthese HK monoids. Yang--Baxter type actions play an important role in our\nconstructions.\n"} {"abstract": " Here, we designed two promising schemes to realize the high-entropy structure\nin a series of quasi-two-dimensional compounds, transition metal\ndichalcogenides (TMDCs). In the intra-layer high-entropy scheme, (HEM)X2\ncompounds with a high-entropy structure in the MX2 slabs were obtained; here HEM\ndenotes high-entropy metals, such as TiZrNbMoTa. Superconductivity with\nTc~7.4 K was found in a Mo-rich (HEM)X2. On the other hand, in the intercalation\nscheme, we intercalated HEM atoms (FeCoCrNiMn) into the gap between the\nsandwiched MX2 slabs, resulting in a series of (HEM)xMX2 compounds with x in the\nrange 0~0.5, in which HEM is mainly composed of 3d transition metal\nelements, such as FeCoCrNiMn.
With the introduction of multi-component magnetic\natoms, ferromagnetic spin-glass states with strong 2D characteristics ensued.\nBy tuning the x content, three kinds of two-dimensional lattices in the high-entropy\nintercalated layer were observed, including the 1*1 triangular lattice and two kinds of\nsuperlattices, \sqrt3*\sqrt3 and \sqrt3*2, at x=0.333 and x>0.5, respectively.\nMeanwhile, the spin frustration in the two-dimensional high-entropy magnetic\nplane will be enhanced with the development of \sqrt3*\sqrt3 and will be\nreduced significantly when changing into the \sqrt3*2 phase. The high-entropy\nTMDCs and versatile two-dimensional high-entropy structures found by us possess\ngreat potential for finding new physics in low-dimensional high-entropy structures\nand for future applications.\n"} {"abstract": " Reversible data hiding in encrypted images (RDH-EI) has attracted increasing\nattention, since it can protect the privacy of original images while the\nembedded data can be exactly extracted. Recently, some RDH-EI schemes with\nmultiple data hiders have been proposed using the secret sharing technique.\nHowever, these schemes protect the contents of the original images with a\nlightweight security level. In this paper, we propose a high-security RDH-EI\nscheme with multiple data hiders. First, we introduce a cipher-feedback secret\nsharing (CFSS) technique. It follows the cryptography standards by introducing\nthe cipher-feedback strategy of AES. Then, using the CFSS technique, we devise\na new (r,n)-threshold (r<=n) RDH-EI scheme with multiple data hiders called\nCFSS-RDHEI. It can encrypt an original image into n encrypted images with\nreduced size using an encryption key and send each encrypted image to one data\nhider. Each data hider can independently embed secret data into the encrypted\nimage to obtain the corresponding marked encrypted image. The original image\ncan be completely recovered from r marked encrypted images and the encryption\nkey. Performance evaluations show that our CFSS-RDHEI scheme has a high embedding\nrate and its generated encrypted images are much smaller, compared to existing\nsecret sharing-based RDH-EI schemes. Security analysis demonstrates that it can\nachieve high security to defend against some commonly used security attacks.\n"} {"abstract": " Narrow linewidth visible light lasers are critical for atomic, molecular and\noptical (AMO) applications including atomic clocks, quantum computing, atomic\nand molecular spectroscopy, and sensing. Historically, such lasers are\nimplemented at the tabletop scale, using semiconductor lasers stabilized to\nlarge optical reference cavities. Photonic integration of high spectral-purity\nvisible light sources will enable experiments to increase in complexity and\nscale. Stimulated Brillouin scattering (SBS) is a promising approach to realize\nhighly coherent on-chip visible light laser emission. While progress has been\nmade on integrated SBS lasers at telecommunications wavelengths, a barrier has\nexisted to translating this performance to the visible, namely the realization of\nBrillouin-active waveguides in ultra-low optical loss photonics. We have\novercome this barrier, demonstrating the first visible light photonic\nintegrated SBS laser, which operates at 674 nm to address the 88Sr+ optical\nclock transition. To guide the laser design, we use a combination of\nmulti-physics simulation and Brillouin spectroscopy in a 2 meter spiral\nwaveguide to identify the 25.110 GHz first order Stokes frequency shift and 290\nMHz gain bandwidth.
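The (r,n)-threshold behaviour described in the RDH-EI abstract above is easiest to see in the textbook Shamir scheme, sketched below over a prime field. This is generic secret sharing for intuition only; the paper's CFSS construction additionally chains an AES cipher-feedback strategy that is not reproduced here:

```python
import random

P = 2**61 - 1  # prime modulus; secrets must be smaller than P

def share(secret, r, n):
    """Split secret into n points on a random degree-(r-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(r - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 recovers the secret from any r points."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, r=3, n=5)
assert reconstruct(shares[:3]) == 42  # any three of the five shares suffice
```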
The laser is implemented in an 8.9 mm radius silicon\nnitride all-waveguide resonator with 1.09 dB per meter loss and a Q of 55.4\nmillion. Lasing is demonstrated, with an on-chip 14.7 mW threshold, a 45% slope\nefficiency, and linewidth narrowing to 269 Hz as the pump is increased from below\nthreshold. To illustrate the wavelength flexibility of this design,\nwe also demonstrate lasing at 698 nm, the wavelength for the optical clock\ntransition in neutral strontium. This demonstration of a waveguide-based,\nphotonic integrated SBS laser that operates in the visible, and the reduced\nsize and sensitivity to environmental disturbances, shows promise for diverse\nAMO applications.\n"} {"abstract": " The cosmic-ray ionization rate ($\zeta$, s$^{-1}$) plays an important role in\nthe interstellar medium. It controls ion-molecular chemistry and provides a\nsource of heating. Here we perform a grid of calculations using the spectral\nsynthesis code CLOUDY along nine sightlines towards HD 169454, HD 110432, HD\n204827, $\lambda$ Cep, X Per, HD 73882, HD 154368, Cyg OB2 5, and Cyg OB2 12. The\nvalue of $\zeta$ is determined by matching the observed column densities of\nH$_3^+$ and H$_2$. The presence of polycyclic aromatic hydrocarbons (PAHs)\naffects the free electron density, which changes the H$_3^+$ density and the\nderived ionization rate. PAHs are ubiquitous in the Galaxy, but there are also\nregions where PAHs do not exist. Hence, we consider clouds with a range of PAH\nabundances and show their effects on the H$_3^+$ abundance. We predict an\naverage cosmic-ray ionization rate for H$_2$ ($\zeta$(H$_2$)) = (7.88 $\pm$\n2.89) $\times$ 10$^{-16}$ s$^{-1}$ for models with average Galactic PAH\nabundances (PAH/H = 10$^{-6.52}$), except Cyg OB2 5 and Cyg OB2 12. The value\nof $\zeta$ is nearly 1 dex smaller for sightlines toward Cyg OB2 12. We\nestimate the average value of $\zeta$(H$_2$) = (95.69 $\pm$ 46.56) $\times$\n10$^{-16}$ s$^{-1}$ for models without PAHs.\n"} {"abstract": " The rate-regulation trade-off defined between two objective functions, one\npenalizing the packet rate and one the state deviation and control effort, can\nexpress the performance bound of a networked control system. However, the\ncharacterization of the set of globally optimal solutions in this trade-off for\nmulti-dimensional controlled Gauss-Markov processes has been an open problem.\nIn the present article, we characterize a policy profile that belongs to this\nset. We prove that such a policy profile consists of a symmetric threshold\ntriggering policy, which can be expressed in terms of the value of information,\nand a certainty-equivalent control policy, which uses a conditional expectation\nwith linear dynamics.\n"} {"abstract": " The results obtained from state-of-the-art human pose estimation (HPE) models\ndegrade rapidly when evaluating people at a low resolution, but can super\nresolution (SR) be used to help mitigate this effect? By using various SR\napproaches we enhanced two low resolution datasets and evaluated the change in\nperformance of both an object and keypoint detector as well as end-to-end HPE\nresults. We make the following observations. First, we find that for people\nwho were originally depicted at a low resolution (segmentation area in pixels),\ntheir keypoint detection performance would improve once SR was applied.
Second,\nthe keypoint detection performance gained is dependent on that person's pixel\ncount in the original image prior to any application of SR; keypoint detection\nperformance improved when SR was applied to people with a small initial\nsegmentation area, but degraded as this became larger. To address this, we\nintroduced a novel Mask-RCNN approach, utilising a segmentation area threshold\nto decide when to use SR during the keypoint detection step. This approach\nachieved the best results on our low resolution datasets for each HPE\nperformance metric.\n"} {"abstract": " The Bloch theorem is a general theorem restricting the persistent current\nassociated with a conserved U(1) charge in a ground state or in thermal\nequilibrium. It gives an upper bound of the magnitude of the current density,\nwhich is inversely proportional to the system size. In a recent preprint, Else\nand Senthil applied the argument for the Bloch theorem to a generalized Gibbs\nensemble, assuming the presence of an additional conserved charge, and\npredicted a nonzero current density in the nonthermal steady state [D. V. Else\nand T. Senthil, arXiv:2106.15623]. In this work, we provide a complementary\nderivation based on the canonical ensemble, given that the additional charge is\nstrictly conserved within the system by itself. Furthermore, using the example\nwhere the additional conserved charge is the momentum operator, we argue that\nthe persistent current tends to vanish when the system is in contact with an\nexternal momentum reservoir in the co-moving frame of the reservoir.\n"} {"abstract": " For a linear algebraic group $G$ over $\bf Q$, we consider the period domains\n$D$ classifying $G$-mixed Hodge structures, and construct the extended period\ndomains $D_{\mathrm{BS}}$, $D_{\mathrm{SL}(2)}$, and $\Gamma \backslash\nD_{\Sigma}$. In particular, we give toroidal partial compactifications of mixed\nMumford--Tate domains.\n"} {"abstract": " In this work, after making an attempt to improve the formulation of the model\non particle transport within astrophysical plasma outflows and constructing the\nappropriate algorithms, we test the reliability and effectiveness of our method\nthrough numerical simulations on well-studied Galactic microquasars such as the SS\n433 and Cyg X-1 systems. Then, we concentrate on predictions of the\nassociated emissions, focusing on detectable high energy neutrinos and\n$\gamma$-rays originating from the extra-galactic M33 X-7 system, which is a\nrecently discovered X-ray binary located in the neighboring galaxy Messier 33\nand has not yet been modeled in detail. The particle and radiation energy\ndistributions, produced from magnetized astrophysical jets in the context of\nour method, are assumed to originate from decay and scattering processes taking\nplace among the secondary particles created when hot (relativistic) protons of\nthe jet scatter on thermal (cold) ones (p-p interaction mechanism inside the\njet). These distributions are computed by solving the system of coupled\nintegro-differential transport equations of multi-particle processes (reaction\nchain) following the inelastic proton-proton (p-p) collisions.
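The segmentation-area gate from the pose-estimation abstract above amounts to a one-line decision rule. A minimal sketch (the threshold value and the bicubic stand-in for a learned SR model are our own assumptions, not the paper's settings):

```python
from PIL import Image

AREA_THRESHOLD = 32 * 32  # hypothetical pixel-area cut-off, tuned per dataset

def maybe_super_resolve(crop, seg_area, scale=4):
    """Upscale a person crop only when its segmentation area is small, the
    regime where SR helped; bicubic resizing stands in for a learned SR model."""
    if seg_area < AREA_THRESHOLD:
        return crop.resize((crop.width * scale, crop.height * scale),
                           Image.BICUBIC)
    return crop
```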
For the\ndetection of such high energy neutrinos as well as multi-wavelength (radio,\nX-ray and gamma-ray) emissions, extremely sensitive detection instruments are\nin operation or have been designed, such as the CTA, IceCube, ANTARES, KM3NeT,\nIceCube-Gen-2, and various space telescopes.\n"} {"abstract": " With the ever-increasing speed and volume of knowledge production and\nconsumption, scholarly communication systems have been rapidly transformed into\ndigitised and networked open ecosystems, where preprint servers have played a\npivotal role. However, evidence is scarce regarding how this paradigm shift has\naffected the dynamics of collective attention on scientific knowledge. Herein,\nwe address this issue by investigating the citation dynamics of more than 1.5\nmillion eprints on arXiv, the most prominent and oldest eprint archive. The\ndiscipline-average citation history curves are estimated by applying a\nnonlinear regression model to the long-term citation data. The revealed\nspatiotemporal characteristics, including the growth and obsolescence patterns,\nare shown to vary across disciplines, reflecting the different publication and\ncitation practices. The results are used to develop a spatiotemporally\nnormalised citation index, called the $\gamma$-index, with an approximately\nnormal distribution. It can be used to compare the citational impact of\nindividual papers across disciplines and time periods, providing a less biased\nmeasure of research impact than those widely used in the literature and in\npractice. Further, a stochastic model for the observed spatiotemporal citation\ndynamics is derived, reproducing both the Lognormal Law for the cumulative\ncitation distribution and the time trajectory of average citations in a unified\nformalism.\n"} {"abstract": " Behavioural symptoms and urinary tract infections (UTI) are among the most\ncommon problems faced by people with dementia. One of the key challenges in the\nmanagement of these conditions is early detection and timely intervention in\norder to reduce distress and avoid unplanned hospital admissions. Using in-home\nsensing technologies and machine learning models for sensor data integration\nand analysis provides opportunities to detect and predict clinically\nsignificant events and changes in health status. We have developed an\nintegrated platform to collect in-home sensor data and performed an\nobservational study to apply machine learning models for agitation and UTI risk\nanalysis. We collected a large dataset from 88 participants with a mean age of\n82 and a standard deviation of 6.5 (47 females and 41 males) to evaluate a new\ndeep learning model that utilises attention and rational mechanisms. The\nproposed solution can process a large volume of data over a period of time and\nextract significant patterns in time-series data (i.e. attention) and use the\nextracted features and patterns to train risk analysis models (i.e. rational).\nThe proposed model can explain the predictions by indicating which time-steps\nand features are used in a long time series. The model provides\na recall of 91\% and a precision of 83\% in detecting the risk of agitation and\nUTIs. This model can be used for early detection of conditions such as UTIs and\nthe management of neuropsychiatric symptoms such as agitation in association with\ninitial treatment and early intervention approaches.
In our study, we have\ndeveloped a set of clinical pathways for early interventions using the alerts\ngenerated by the proposed model, and a clinical monitoring team has been set up\nto use the platform and respond to the alerts according to the created\nintervention plans.\n"} {"abstract": " The rook graph is a graph whose edges represent all the possible legal moves\nof the rook chess piece on a chessboard. The problem we consider is the\nfollowing. Given any set $M$ containing pairs of cells such that each cell of\nthe $m_1 \times m_2$ chessboard is in exactly one pair, we determine the values\nof the positive integers $m_1$ and $m_2$ for which it is possible to construct\na closed tour of all the cells of the chessboard which uses all the pairs of\ncells in $M$ and some edges of the rook graph. This is an alternative\nformulation of a graph-theoretical problem presented in [Electron. J. Combin.\n28(1) (2021), #P1.7] involving the Cartesian product $G$ of two complete graphs\n$K_{m_1}$ and $K_{m_2}$, which is, in fact, isomorphic to the $m_{1}\times\nm_{2}$ rook graph. The problem revolves around determining the values of the\nparameters $m_1$ and $m_2$ that would allow any perfect matching of the\ncomplete graph on the same vertex set of $G$ to be extended to a Hamiltonian\ncycle by using only edges in $G$.\n"} {"abstract": " Long-lived storage of arbitrary transverse multimodes is important for\nestablishing a high-channel-capacity quantum network. Most of the pioneering\nworks focused on atomic diffusion as the dominant impact on the retrieved\npattern in an atom-based memory. In this work, we demonstrate that the\nunsynchronized Larmor precession of atoms in the inhomogeneous magnetic field\ndominates the distortion of the pattern stored in a cold-atom-based memory. We\nfind that this distortion effect can be eliminated by applying a strong uniform\npolarization magnetic field. By preparing atoms in magnetically insensitive\nstates, the destructive interference between different spin-wave components is\ndiminished, and the stored localized patterns are synchronized further in a\nsingle spin-wave component; then, a clear enhancement in the long-time\npreservation of patterns is obtained. The reported results are very promising for\nstudying transverse multimode decoherence in storage and high-dimensional\nquantum networks in the future.\n"} {"abstract": " In this paper, we consider a simplified model of turbulence for large\nReynolds numbers driven by a constant-power energy input on large scales. In\nthe statistically stationary regime, the behaviour of the kinetic energy is\ncharacterised by two well-defined phases: a laminar phase where the kinetic\nenergy grows linearly for a (random) time $t_w$, followed by abrupt\navalanche-like energy drops of size $S$ due to strong intermittent\nfluctuations of energy dissipation. We study the probability distributions\n$P[t_w]$ and $P[S]$, which both exhibit quite well-defined scaling behaviour.\nAlthough $t_w$ and $S$ are not statistically correlated, we suggest, and\nnumerically verify, that their scaling properties are related, based on a\nsimple but non-trivial scaling argument.
We propose that the same approach\ncan be used for other systems showing avalanche-like behaviour, such as\namorphous solids and seismic events.\n"} {"abstract": " Planar graphs can be represented as intersection graphs of different types of\ngeometric objects in the plane, e.g., circles (Koebe, 1936), line segments\n(Chalopin \& Gon{\c{c}}alves, 2009), \textsc{L}-shapes (Gon{\c{c}}alves et al,\n2018). For general graphs, however, even deciding whether such representations\nexist is often $NP$-hard. We consider apex graphs, i.e., graphs that can be\nmade planar by removing one vertex from them. We show, somewhat surprisingly,\nthat deciding whether geometric representations exist for apex graphs is\n$NP$-hard.\n More precisely, we show that for every positive integer $k$, recognizing\nevery graph class $\mathcal{G}$ which satisfies $\textsc{PURE-2-DIR} \subseteq\n\mathcal{G} \subseteq \textsc{1-STRING}$ is $NP$-hard, even when the input\ngraphs are apex graphs of girth at least $k$. Here, \textsc{PURE-2-DIR} is the class\nof intersection graphs of axis-parallel line segments (where intersections are\nallowed only between horizontal and vertical segments) and \textsc{1-STRING} is\nthe class of intersection graphs of simple curves (where two curves share at\nmost one point) in the plane. This partially answers an open question raised by\nKratochv{\'\i}l \& Pergel (2007).\n Most known $NP$-hardness reductions for these problems are from variants of\n3-SAT. We reduce from the \textsc{PLANAR HAMILTONIAN PATH COMPLETION} problem,\nwhich uses the more intuitive notion of planarity. As a result, our proof is\nmuch simpler and encapsulates several classes of geometric graphs.\n"} {"abstract": " The present paper reports on the numerical investigation of lifted turbulent\njet flames with H2/N2 fuel issuing into a vitiated coflow of lean combustion\nproducts of H2/air using the conditional moment closure (CMC) method. A 2D\naxisymmetric formulation has been used for the predictions of fluid flow, while\nCMC equations are solved with detailed chemistry to represent the\nturbulence-chemistry interaction. Simulations are carried out for different\ncoflow temperatures, jet and coflow velocities in order to investigate the\nimpact on the flame lift-off height as well as on the flame stabilization.\nFurthermore, the role of conditional velocity models on the flame has also been\ninvestigated. In addition, the effect of mixing is investigated over a range of\ncoflow temperatures and the stabilization mechanism is determined from the\nanalysis of the transport budgets. It is found that the lift-off height is\nhighly sensitive to the coflow temperature, while the lift-off height predicted\nusing the mixing model constant $C_\Phi=4$ is the closest to\nthe experimental results. For all the coflow temperatures, the balance is found\nbetween the chemical, axial convection and molecular diffusion terms, while the\ncontribution from axial and radial diffusion is negligible, thus indicating\nauto-ignition as the flame stabilization mechanism.\n"} {"abstract": " When the Rashba and Dresselhaus spin-orbit coupling are both present for a\ntwo-dimensional electron in a perpendicular magnetic field, a striking\nresemblance to the anisotropic quantum Rabi model in quantum optics is found.
We\nperform a generalized Rashba coupling approximation to obtain a solvable\nHamiltonian by keeping the nearest-mixing terms of Landau states, which is\nreformulated in a form similar to that with only Rashba coupling. Each Landau\nstate becomes a new displaced-Fock state with a displacement shift instead of\nthe original harmonic oscillator Fock state, yielding eigenstates in closed\nform. Analytical energies are consistent with numerical ones over a wide range of\ncoupling strengths, even for a strong Zeeman splitting. In the presence of an\nelectric field, the spin conductance and the charge conductance obtained\nanalytically are in good agreement with the numerical results. As the\ncomponent of the Dresselhaus coupling increases, we find that the spin Hall\nconductance exhibits a pronounced resonant peak at a larger value of the\ninverse of the magnetic field. Meanwhile, the charge conductance exhibits a\nseries of plateaus as well as a jump at the resonant magnetic field. Our method\nprovides an easy-to-implement analytical treatment of two-dimensional electron\ngas systems with both types of spin-orbit couplings.\n"} {"abstract": " In recent years, there has been a resurgence in methods that use distributed\n(neural) representations to represent and reason about semantic knowledge for\nrobotics applications. However, while robots often observe previously unknown\nconcepts, these representations typically assume that all concepts are known a\npriori, and incorporating new information requires all concepts to be learned\nafresh. Our work relaxes this limiting assumption of existing representations\nand tackles the incremental knowledge graph embedding problem by leveraging the\nprinciples of a range of continual learning methods. Through an experimental\nevaluation with several knowledge graphs and embedding representations, we\nprovide insights about trade-offs for practitioners to match a semantics-driven\nrobotics application to a suitable continual knowledge graph embedding method.\n"} {"abstract": " As power systems are undergoing a significant transformation with more\nuncertainty, less inertia, and operation closer to limits, there is an increasing\nrisk of large outages. Thus, there is an imperative need to enhance grid\nemergency control to maintain system reliability and security. Towards this\nend, great progress has been made in developing deep reinforcement learning\n(DRL) based grid control solutions in recent years. However, existing DRL-based\nsolutions have two main limitations: 1) they cannot cope well with a wide\nrange of grid operation conditions, system parameters, and contingencies; 2)\nthey generally lack the ability to quickly adapt to new grid operation conditions,\nsystem parameters, and contingencies, limiting their applicability for\nreal-world applications. In this paper, we mitigate these limitations by\ndeveloping a novel deep meta reinforcement learning (DMRL) algorithm. The DMRL\ncombines meta strategy optimization with DRL, and trains policies\nmodulated by a latent space that can quickly adapt to new scenarios. We test\nthe developed DMRL algorithm on the IEEE 300-bus system.
We demonstrate fast\nadaptation of the meta-trained DRL policies with latent variables to new\noperating conditions and scenarios using the proposed method and achieve\nsuperior performance compared to the state-of-the-art DRL and model predictive\ncontrol (MPC) methods.\n"} {"abstract": " Full-stack autonomous driving perception modules usually consist of\ndata-driven models based on multiple sensor modalities. However, these models\nmight be biased to the sensor setup used for data acquisition. This bias can\nseriously impair the perception models' transferability to new sensor setups,\nwhich continuously occur due to the market's competitive nature. We envision\nsensor data abstraction as an interface between sensor data and machine\nlearning applications for highly automated vehicles (HAD).\n For this purpose, we review the primary sensor modalities, camera, lidar, and\nradar, published in autonomous-driving related datasets, examine single sensor\nabstraction and abstraction of sensor setups, and identify critical paths\ntowards an abstraction of sensor data from multiple perception configurations.\n"} {"abstract": " We reconstruct the Lorentzian graviton propagator in asymptotically safe\nquantum gravity from Euclidean data. The reconstruction is applied to both the\ndynamical fluctuation graviton and the background graviton propagator. We prove\nthat the spectral function of the latter necessarily has negative parts similar\nto, and for the same reasons as, the gluon spectral function. In turn, the\nspectral function of the dynamical graviton is positive. We argue that the\nlatter enters cross sections and other observables in asymptotically safe\nquantum gravity. Hence, its positivity may hint at the unitarity of\nasymptotically safe quantum gravity.\n"} {"abstract": " Distributed data processing ecosystems are widespread and their components\nare highly specialized, making efficient interoperability urgent.\nRecently, Apache Arrow was chosen by the community to serve as a format\nmediator, providing efficient in-memory data representation. Arrow enables\nefficient data movement between data processing and storage engines,\nsignificantly improving interoperability and overall performance. In this work,\nwe design a new zero-cost data interoperability layer between Apache Spark and\nArrow-based data sources through the Arrow Dataset API. Our novel data\ninterface helps separate the computation (Spark) and data (Arrow) layers. This\nenables practitioners to seamlessly use Spark to access data from all Arrow\nDataset API-enabled data sources and frameworks. To benefit our community, we\nopen-source our work and show that consuming data through Apache Arrow is\nzero-cost: our novel data interface is either on par with or more performant than\nnative Spark.\n"} {"abstract": " Excessive evaporative loss of water from the topsoil in arid-land agriculture\nis compensated via irrigation, which exploits massive freshwater resources. The\ncumulative effects of decades of unsustainable freshwater consumption in many\narid regions are now threatening food-water security. While plastic mulches can\nreduce evaporation from the topsoil, their cost and non-biodegradability limit\ntheir utility. In response, we report on superhydrophobic sand (SHS), a\nbio-inspired enhancement of common sand with a nanoscale wax coating. When SHS\nwas applied as a 5 mm-thick mulch over the soil, evaporation was dramatically\nreduced and crop yields increased.
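For a flavour of the Arrow Dataset API underlying the Spark interoperability abstract above, the snippet below is plain pyarrow usage (not the authors' Spark connector; the path and column names are placeholders):

```python
import pyarrow.dataset as ds

# Scan a directory of Parquet files through the Arrow Dataset API; the
# column projection and row filter are pushed down into the scan itself.
dataset = ds.dataset("data/events/", format="parquet")
table = dataset.to_table(columns=["user_id", "value"],
                         filter=ds.field("value") > 0)
print(table.num_rows)
```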
Multi-year field trials of SHS application\nwith tomato (Solanum lycopersicum), barley (Hordeum vulgare), and wheat\n(Triticum aestivum) under normal irrigation enhanced yields by 17%-73%. Under\nbrackish water irrigation (5500 ppm NaCl), SHS mulching produced 53%-208%\ngains in fruit and grain yields for tomato and barley. Thus, SHS could\nbenefit agriculture and city-greening in arid regions.\n"} {"abstract": " Light curves of the accreting white dwarf pulsator GW Librae spanning a 7.5\nmonth period in 2017 were obtained as part of the Next Generation Transit\nSurvey. This data set comprises 787 hours of photometry from 148 clear nights,\nallowing the behaviour of the long (hours) and short period (20min) modulation\nsignals to be tracked from night to night over a much longer observing baseline\nthan has been previously achieved. The long period modulations intermittently\ndetected in previous observations of GW Lib are found to be a persistent\nfeature, evolving between states with periods ~83min and 2-4h on time-scales of\nseveral days. The 20min signal is found to have a broadly stable amplitude and\nfrequency for the duration of the campaign, but the previously noted phase\ninstability is confirmed. Ultraviolet observations obtained with the Cosmic\nOrigins Spectrograph onboard the Hubble Space Telescope constrain the\nultraviolet-to-optical flux ratio to ~5 for the 4h modulation, and <=1 for the\n20min period, with caveats introduced by non-simultaneous observations. These\nresults add further observational evidence that these enigmatic signals must\noriginate from the white dwarf, highlighting the continued gap in our theoretical\nunderstanding of the mechanisms that drive them.\n"} {"abstract": " Code summarization is the task of generating natural language descriptions of\nsource code, which is important for program understanding and maintenance.\nExisting approaches treat the task as a machine translation problem (e.g., from\nJava to English) and apply Neural Machine Translation (NMT) models to solve the\nproblem. These approaches only consider a given code unit (e.g., a method)\nwithout its broader context. The lack of context may hinder the NMT model\nfrom gathering sufficient information for code summarization. Furthermore,\nexisting approaches use a fixed vocabulary and do not fully consider the words\nin code, while many words in the code summary may come from the code. In this\nwork, we present a neural network model named ToPNN for code summarization,\nwhich uses the topics in a broader context (e.g., class) to guide the neural\nnetworks that combine the generation of new words and the copy of existing\nwords in code. Based on this model, we present an approach for generating natural\nlanguage code summaries at the method level (i.e., method comments). We\nevaluate our approach using a dataset with 4,203,565 commented Java methods.\nThe results show significant improvement over state-of-the-art approaches and\nconfirm the positive effect of class topics and the copy mechanism.\n"} {"abstract": " We show that the identification problem for a class of dynamic panel logit\nmodels with fixed effects has a connection to the truncated moment problem in\nmathematics. We use this connection to show that the sharp identified set of\nthe structural parameters is characterized by a set of moment equality and\ninequality conditions. This result provides sharp bounds in models where moment\nequality conditions do not exist or do not point identify the parameters.
We\nalso show that the sharp identifying content of the non-parametric latent\ndistribution of the fixed effects is characterized by a vector of its\ngeneralized moments, and that the number of moments grows linearly in T. This\nfinal result lets us point identify, or sharply bound, specific classes of\nfunctionals, without solving an optimization problem with respect to the latent\ndistribution.\n"} {"abstract": " Task environments developed in Minecraft are becoming increasingly popular\nfor artificial intelligence (AI) research. However, most of these are currently\nconstructed manually, thus failing to take advantage of procedural content\ngeneration (PCG), a capability unique to virtual task environments. In this\npaper, we present mcg, an open-source library to facilitate implementing PCG\nalgorithms for voxel-based environments such as Minecraft. The library is\ndesigned with human-machine teaming research in mind, and thus takes a\n'top-down' approach to generation, simultaneously generating low- and high-level\nmachine-readable representations that are suitable for empirical research.\nThese can be consumed by downstream AI applications that consider human spatial\ncognition. The benefits of this approach include rapid, scalable, and efficient\ndevelopment of virtual environments, the ability to control the statistics of\nthe environment at a semantic level, and the ability to generate novel\nenvironments in response to player actions in real time.\n"} {"abstract": " Aspects of ultrahomogeneous and existentially closed Heyting algebras are\nstudied. Roelcke non-precompactness, non-simplicity, and non-amenability of the\nautomorphism group of the Fra\"iss\'e limit of finite Heyting algebras are\nexamined, among other properties.\n"} {"abstract": " With the continuing spread of misinformation and disinformation online, it is\nof increasing importance to develop combating mechanisms at scale in the form\nof automated systems that support multiple languages. One task of interest is\nclaim veracity prediction, which can be addressed using stance detection with\nrespect to relevant documents retrieved online. To this end, we present our new\nArabic Stance Detection dataset (AraStance) of 4,063 claim--article pairs from\na diverse set of sources comprising three fact-checking websites and one news\nwebsite. AraStance covers false and true claims from multiple domains (e.g.,\npolitics, sports, health) and several Arab countries, and it is well-balanced\nbetween related and unrelated documents with respect to the claims. We\nbenchmark AraStance, along with two other stance detection datasets, using a\nnumber of BERT-based models. Our best model achieves an accuracy of 85\% and a\nmacro F1 score of 78\%, which leaves room for improvement and reflects the\nchallenging nature of AraStance and the task of stance detection in general.\n"} {"abstract": " Penrose et al. investigated the physical incoherence of the spacetime with\nnegative mass via the bending of light. Precise estimates of the time delay of null\ngeodesics were needed and played a pivotal role in their proof. In this paper,\nwe construct an intermediate diagonal metric and reduce this\nproblem to a causality comparison in the compactified spacetimes regarding\ntimelike connectedness near the conformal infinities. This different approach\nallows us to avoid the difficulties and subtle issues Penrose et\nal. met.
It provides a new, substantially simpler, and physically natural\nnon-PDE viewpoint from which to understand the positive mass theorem. This elementary\nargument modestly applies to asymptotically flat solutions which are vacuum and\nstationary near infinity.\n"} {"abstract": " Traditional channel coding with feedback constructs and transmits a codeword\nonly after all message bits are available at the transmitter. This paper joins\nGuo & Kostina and Lalitha et al. in developing approaches for causal (or\nprogressive) encoding, where the transmitter may begin transmitting codeword\nsymbols as soon as the first message bit arrives. Building on the work of\nHorstein, Shayevitz and Feder, and Naghshvar et al., this paper extends our\nprevious computationally efficient systematic algorithm for traditional\nposterior matching to produce a four-phase encoder that progressively encodes\nusing only the message bits causally available. Systematic codes work well with\nposterior matching on a channel with feedback, and they provide an immediate\nbenefit when causal encoding is employed instead of traditional encoding. Our\nalgorithm captures additional gains in the interesting region where the\ntransmission rate $\mu$ is higher than the rate $\lambda$ at which message bits\nbecome available. In this region, transmission of additional symbols beyond\nsystematic bits, before a traditional encoder would have begun transmission,\nfurther improves performance.\n"} {"abstract": " Recently, there have been efforts towards understanding the sampling\nbehaviour of event-triggered control (ETC), for obtaining metrics on its\nsampling performance and predicting its sampling patterns. Finite-state\nabstractions, capturing the sampling behaviour of ETC systems, have proven\npromising in this respect. So far, such abstractions have been constructed for\nnon-stochastic systems. Here, inspired by this framework, we abstract the\nsampling behaviour of stochastic narrow-sense linear periodic ETC (PETC)\nsystems via Interval Markov Chains (IMCs). Particularly, we define functions\nover sequences of state-measurements and interevent times that can be expressed\nas discounted cumulative sums of rewards, and compute bounds on their expected\nvalues by constructing appropriate IMCs and equipping them with suitable\nrewards. Finally, we argue that our results are extendable to more general\nforms of functions, thus providing a generic framework to define and study\nvarious ETC sampling indicators.\n"} {"abstract": " Various unusual behaviors of artificial materials are governed by their\ntopological properties, among which the edge state at the boundary of a\nphotonic or phononic lattice has become a popular notion. However,\nthis remarkable bulk-boundary correspondence and the related phenomena are\nmissing in thermal materials. One reason is that heat diffusion is described in\na non-Hermitian framework because of its dissipative nature. The other is that\nthe relevant temperature field is mostly composed of modes that extend over\nwide ranges, making it difficult to render within the tight-binding theory\ncommonly employed in wave physics. Here, we overcome the above challenges\nand perform systematic studies on heat diffusion in thermal lattices. Based on\na continuum model, we introduce a state vector to link the Zak phase with the\nexistence of the edge state, and thereby analytically prove the thermal\nbulk-boundary correspondence.
We experimentally demonstrate the predicted edge\nstates with a topologically protected and localized heat dissipation capacity.\nOur findings set up a solid foundation for exploring topology in novel heat-transfer\nmanipulations.\n"} {"abstract": " Initial hopes of quickly eradicating the COVID-19 pandemic proved futile, and\nthe goal shifted to controlling the peak of the infection, so as to minimize\nthe load on healthcare systems. To that end, public health authorities\nintervened aggressively to institute social distancing, lock-down policies, and\nother Non-Pharmaceutical Interventions (NPIs). Given the high social,\neducational, psychological, and economic costs of NPIs, authorities tune them,\nalternately tightening or relaxing rules, so that, in effect,\na relatively flat infection rate results. For example, during the summer in\nparts of the United States, daily infection numbers dropped to a plateau. This\npaper approaches NPI tuning as a control-theoretic problem, starting from a\nsimple dynamic model for social distancing based on the classical SIR epidemics\nmodel. Using a singular-perturbation approach, the plateau becomes a\nQuasi-Steady-State (QSS) of a reduced two-dimensional SIR model regulated by\nadaptive dynamic feedback. It is shown that the QSS can be assigned and it is\nglobally asymptotically stable. Interestingly, the dynamic model for social\ndistancing can be interpreted as a nonlinear integral controller. Problems of\ndata fitting and parameter identifiability are also studied for this model. The\npaper also discusses how this simple model allows for meaningful study of the\neffect of population size, vaccinations, and the emergence of second waves.\n"} {"abstract": " The discovery of superconductivity in infinite-layer nickelates brings us\ntantalizingly close to a new material class that mirrors the cuprate\nsuperconductors. Here, we report on magnetic excitations in these nickelates,\nmeasured using resonant inelastic x-ray scattering (RIXS) at the Ni L3-edge, to\nshed light on the material complexity and microscopic physics. Undoped NdNiO2\npossesses a branch of dispersive excitations with a bandwidth of approximately\n200 meV, reminiscent of strongly-coupled, antiferromagnetically aligned spins\non a square lattice, despite a lack of evidence for long range magnetic order.\nThe significant damping of these modes indicates the importance of coupling to\nrare-earth itinerant electrons. Upon doping, the spectral weight and energy\ndecrease slightly, while the modes become overdamped. Our results highlight the\nrole of Mottness in infinite-layer nickelates.\n"} {"abstract": " In the next decades, ultra-high-energy neutrinos in the EeV energy range will\nbe potentially detected by next-generation neutrino telescopes. Although their\nprimary goals are to observe cosmogenic neutrinos and to gain insight into\nextreme astrophysical environments, they can also indirectly probe the nature\nof dark matter. In this paper, we study the projected sensitivity of upcoming\nneutrino radio telescopes, such as RNO-G, GRAND, and the IceCube-Gen2 radio array,\nto decaying dark matter scenarios. We investigate different dark matter\ndecay channels and masses, from $10^7$ to $10^{15}$ GeV. By assuming the\nobservation of cosmogenic or newborn pulsar neutrinos, we forecast conservative\nconstraints on the lifetime of heavy dark matter particles.
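A minimal numerical sketch of the closed loop described in the NPI-tuning abstract above: SIR dynamics in which social distancing acts as an integral controller steering infections toward a set-point plateau. All parameter values are illustrative, not the paper's:

```python
def simulate(beta0=0.4, gamma=0.1, k=2.0, I_star=0.02, T=300.0, dt=0.05):
    """SIR with distancing level u integrating the error (I - I_star)."""
    S, I, u = 0.99, 0.01, 0.0
    history = []
    for _ in range(int(T / dt)):
        beta = beta0 * max(0.0, 1.0 - u)   # distancing suppresses contacts
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        u = min(max(u + k * (I - I_star) * dt, 0.0), 1.0)  # integral action
        S, I = S + dS * dt, I + dI * dt
        history.append((S, I, u))
    return history  # I(t) settles near I_star: the observed plateau
```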
We find that these\nlimits are competitive with and highly complementary to previous\nmulti-messenger analyses.\n"} {"abstract": " Decision trees and their ensembles are very popular models of supervised\nmachine learning. In this paper we merge the ideas underlying decision trees,\ntheir ensembles, and formal concept analysis (FCA) by proposing a new supervised\nmachine learning model which can be constructed in polynomial time and is applicable\nto both classification and regression problems. Specifically, we first propose a\npolynomial-time algorithm for constructing a part of the concept lattice that\nis based on a decision tree. Second, we describe a prediction scheme based on a\nconcept lattice for solving both classification and regression tasks with\nprediction quality comparable to that of state-of-the-art models.\n"} {"abstract": " This paper presents a new stochastic finite element method for computing\nstructural stochastic responses. The method provides a new expansion of\nthe stochastic response and decouples it into a combination of\na series of deterministic responses with random variable coefficients. A\ndedicated iterative algorithm is proposed to determine the deterministic\nresponses and corresponding random variable coefficients one by one. The\nalgorithm computes the deterministic responses and corresponding random\nvariable coefficients in their individual spaces and is insensitive to the\nstochastic dimension, so it can be readily applied to high-dimensional stochastic\nproblems without extra difficulty. More importantly, the\ndeterministic responses can be computed efficiently by use of existing Finite\nElement Method (FEM) solvers; thus, the proposed method is easy to embed\ninto existing FEM structural analysis software. Three practical examples,\nincluding low-dimensional and high-dimensional stochastic problems, are given\nto demonstrate the accuracy and effectiveness of the proposed method.\n"} {"abstract": " Controllable person image generation aims to produce realistic human images\nwith desirable attributes (e.g., the given pose, cloth textures or hair style).\nHowever, the large spatial misalignment between the source and target images\nmakes the standard architectures for image-to-image translation unsuitable\nfor this task. Most of the state-of-the-art architectures avoid the alignment\nstep during the generation, which causes many artifacts, especially for person\nimages with complex textures. To solve this problem, we introduce a novel\nSpatially-Adaptive Warped Normalization (SAWN), which integrates a learned\nflow-field to warp modulation parameters. This allows us to align person\nspatial-adaptive styles with pose features efficiently. Moreover, we propose a\nnovel self-training part replacement strategy to refine the pretrained model\nfor the texture-transfer task, significantly improving the quality of the\ngenerated cloth and the preservation ability of irrelevant regions. Our\nexperimental results on the widely used DeepFashion dataset demonstrate a\nsignificant improvement of the proposed method over the state-of-the-art\nmethods on both pose-transfer and texture-transfer tasks. The source code is\navailable at https://github.com/zhangqianhui/Sawn.\n"} {"abstract": " We develop the Google matrix analysis of the multiproduct world trade network\nobtained from the UN COMTRADE database in recent years. The comparison is done\nbetween this new approach and the usual Import-Export description of this world\ntrade network.
The Google matrix analysis takes into account the multiplicity\nof trade transactions, thus better highlighting the world influence of\nspecific countries and products. It shows that after Brexit, the European Union\nof 27 countries has the leading position in the world trade network ranking,\nahead of the USA and China. Our approach also determines the sensitivity of\ncountry trade balances to specific products, showing the dominant role of\nmachinery and mineral fuels in multiproduct exchanges. It also underlines the\ngrowing influence of Asian countries.\n"} {"abstract": " We establish the correspondence between two apparently unrelated but in fact\ncomplementary approaches to a relativistic deformed kinematics: the geometric\nproperties of momentum space and the loss of absolute locality in canonical\nspacetime, which can be restored with the introduction of a generalized\nspacetime. This correspondence is made explicit for the case of\n$\kappa$-Poincar\'e kinematics and compared with its properties in the Hopf\nalgebra framework.\n"} {"abstract": " We investigate the possible presence of dark matter (DM) in massive and\nrotating neutron stars (NSs). For this purpose, we extend our previous work [1]\nto introduce a light new physics vector mediator besides a scalar one in order\nto ensure feeble interaction between fermionic DM and $\beta$ stable hadronic\nmatter in NSs. The masses of the DM fermion and the mediators, and the couplings, are\nchosen to be consistent with the self-interaction constraint from the Bullet cluster and\nwith the present-day relic abundance. Assuming that both the scalar and vector\nmediators contribute equally to the relic abundance, we compute the equation of\nstate (EoS) of the DM admixed NSs to find that the present consideration of the\nvector new physics mediator does not bring any significant change to the EoS and\nstatic NS properties of DM admixed NSs compared to the case where only the\nscalar mediator was considered [1]. However, the obtained structural properties\nin static conditions are in good agreement with the various constraints on them\nfrom massive pulsars like PSR J0348+0432 and PSR J0740+6620, the gravitational\nwave (GW170817) data and the recently obtained results of NICER experiments for\nPSR J0030+0451 and PSR J0740+6620. We also extended our work to compute the\nrotational properties of DM admixed NSs rotating at different angular\nvelocities. The present results in this regard suggest that the secondary\ncomponent of GW190814 may be a rapidly rotating massive DM admixed NS. The\nconstraints on rotational frequency from pulsars like PSR B1937+21 and PSR\nJ1748-2446ad are also satisfied by our present results. Also, the constraints\non the moment of inertia are satisfied assuming slow rotation. The universality\nrelation in terms of normalized moment of inertia also holds well for our DM\nadmixed EoS.\n"} {"abstract": " This paper presents a distributed optimization algorithm tailored for solving\noptimal control problems arising in multi-building coordination. The buildings,\ncoordinated by a grid operator, join a demand response program to balance the\nvoltage surge using an energy-cost criterion. In order to model the\nhierarchical structure of the building network, we formulate a distributed\nconvex optimization problem with separable objectives and coupled affine\nequality constraints.
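The Google matrix machinery invoked in the trade-network abstract above is, at its core, PageRank on a weighted adjacency matrix. A textbook power-iteration sketch (the damping factor and column normalisation are the standard conventions and an assumption here, not necessarily the paper's exact construction):

```python
import numpy as np

def google_rank(A, alpha=0.85, iters=200):
    """A[i, j]: flow from node j to node i (e.g. trade volume).
    Columns are normalised to form a stochastic matrix, dangling
    nodes are spread uniformly, then PageRank is power-iterated."""
    n = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols == 0, 1, cols), 1.0 / n)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = alpha * S @ p + (1 - alpha) / n
    return p / p.sum()
```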
A variant of the Augmented Lagrangian based Alternating Direction Inexact Newton (ALADIN) method for solving the considered class of problems is then presented, along with a convergence guarantee. To illustrate the effectiveness of the proposed method, we compare it to the Alternating Direction Method of Multipliers (ADMM) by running both an ALADIN- and an ADMM-based model predictive controller on a benchmark case study."} {"abstract": " The Flatland competition aimed at finding novel approaches to solve the vehicle re-scheduling problem (VRSP). The VRSP is concerned with scheduling trips in traffic networks and the re-scheduling of vehicles when disruptions occur, for example the breakdown of a vehicle. While solving the VRSP in various settings has been an active area in operations research (OR) for decades, the ever-growing complexity of modern railway networks makes dynamic real-time scheduling of traffic virtually impossible. Recently, multi-agent reinforcement learning (MARL) has successfully tackled challenging tasks where many agents need to be coordinated, such as multiplayer video games. However, the coordination of hundreds of agents in a real-life setting like a railway network remains challenging, and the Flatland environment used for the competition models these real-world properties in a simplified manner. Submissions had to bring as many trains (agents) to their target stations in as little time as possible. While the best submissions were in the OR category, participants found many promising MARL approaches. Using both centralized and decentralized learning based approaches, top submissions used graph representations of the environment to construct tree-based observations. Further, different coordination mechanisms were implemented, such as communication and prioritization between agents. This paper presents the competition setup, four outstanding solutions to the competition, and a cross-comparison between them."} {"abstract": " In this paper, we establish a large deviations principle (LDP) for interacting particle systems that arise from state and action dynamics of discrete-time mean-field games under the equilibrium policy of the infinite-population limit. The LDP is proved under weak Feller continuity of state and action dynamics. The proof is based on transferring the LDP for empirical measures of initial states and noise variables under the setwise topology to the original game model via the contraction principle, which was first suggested by Delarue, Lacker, and Ramanan to establish the LDP for continuous-time mean-field games under common noise. We also compare our work with the LDP results established in prior literature for interacting particle systems, which are in a sense uncontrolled versions of mean-field games."} {"abstract": " In this paper, we consider enumeration problems for edge-distinct and vertex-distinct Eulerian trails. Here, two Eulerian trails are \emph{edge-distinct} if the edge sequences are not identical, and they are \emph{vertex-distinct} if the vertex sequences are not identical. As the main result, we propose optimal enumeration algorithms for both problems, that is, these algorithms run in $\mathcal{O}(N)$ total time, where $N$ is the number of solutions.
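For contrast with these optimal output-sensitive algorithms, a naive backtracking enumerator of edge-distinct Eulerian trails on a toy multigraph (correct, but with none of the $\mathcal{O}(N)$ total-time guarantees):

```python
from collections import defaultdict

def eulerian_trails(edges):
    """Yield every edge-distinct Eulerian trail of an undirected multigraph
    as a sequence of edge indices (plain backtracking over unused edges)."""
    inc = defaultdict(list)                  # vertex -> incident edge ids
    for i, (u, v) in enumerate(edges):
        inc[u].append(i)
        inc[v].append(i)
    odd = [v for v in inc if len(inc[v]) % 2 == 1]
    starts = odd if odd else list(inc)       # odd-degree endpoints, if any
    used = [False] * len(edges)

    def extend(v, trail):
        if len(trail) == len(edges):
            yield list(trail)
            return
        for i in inc[v]:
            if not used[i]:
                used[i] = True
                u, w = edges[i]
                yield from extend(w if v == u else u, trail + [i])
                used[i] = False

    for s in starts:
        yield from extend(s, [])

for trail in eulerian_trails([(0, 1), (1, 2), (2, 0), (0, 3)]):
    print(trail)
```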
Our algorithms are based on the reverse search technique introduced by [Avis and Fukuda, DAM 1996], and the push out amortization technique introduced by [Uno, WADS 2015]."} {"abstract": " In order to prevent the spread of COVID-19, governments have often required regional or national lockdowns, which have caused extensive economic stagnation over broad areas as the shock of the lockdowns has diffused to other regions through supply chains. Using supply-chain data for 1.6 million firms in Japan, this study examines how governments can mitigate these economic losses when they are obliged to implement lockdowns. Through tests of all combinations of two-region lockdowns, we find that coordinated, i.e., simultaneous, lockdowns yield smaller GDP losses than uncoordinated lockdowns. Furthermore, we test practical scenarios in which Japan's 47 regions impose lockdowns over three months and find that GDP losses are lower if nationwide lockdowns are coordinated than if they are uncoordinated."} {"abstract": " Finite-time coherent sets (FTCSs) are distinguished regions of phase space that resist mixing with the surrounding space for some finite period of time; physical manifestations include eddies and vortices in the ocean and atmosphere, respectively. The boundaries of finite-time coherent sets are examples of Lagrangian coherent structures (LCSs). The selection of the time duration over which FTCS and LCS computations are made in practice is crucial to their success. If this time is longer than the lifetime of coherence of individual objects then existing methods will fail to detect the shorter-lived coherence. It is of clear practical interest to determine the full lifetime of coherent objects, but in complicated practical situations, for example a field of ocean eddies with varying lifetimes, this is impossible with existing approaches. Moreover, determining the timing of emergence and destruction of coherent sets is of significant scientific interest. In this work we introduce new constructions to address these issues. The key components are an inflated dynamic Laplace operator and the concept of semi-material FTCSs. We make strong mathematical connections between the inflated dynamic Laplacian and the standard dynamic Laplacian [Froyland 2015], showing that the latter arises as a limit of the former. The spectrum and eigenfunctions of the inflated dynamic Laplacian directly provide information on the number, lifetimes, and evolution of coherent sets."} {"abstract": " The ambipolar electrostatic potential, rising along the magnetic field line from the grounded wall to the centre of the linear gas dynamic trap, governs the achievable suppression of axial heat and particle losses. In this paper, a visible-range optical diagnostic is described that uses the Doppler shift of plasma emission lines to measure this accelerating potential drop. We used a room-temperature hydrogen jet, puffed directly onto the line of sight, as the charge exchange target for plasma ions moving in the expanding flux from the mirror towards the wall. The velocity distribution functions of both bulk plasma protons and $He^{2+}$ ions can be studied spectroscopically; the latter population is produced via a neutral He tracer puff into the central cell plasma. In this way, the potential in the centre and in the mirror area can be measured simultaneously, along with the ion temperature.
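A back-of-the-envelope illustration of the measurement principle, with a hypothetical line shift (only the H-alpha rest wavelength is taken from the abstract):

```python
# Doppler estimate of the ion flow speed, and the potential drop that would
# accelerate a proton to that speed (illustrative numbers only).
c  = 2.998e8        # speed of light, m/s
e  = 1.602e-19      # elementary charge, C
mp = 1.673e-27      # proton mass, kg

lam0 = 656.3e-9     # H-alpha rest wavelength, m
dlam = 0.10e-9      # hypothetical measured Doppler shift, m

v   = c * dlam / lam0          # line-of-sight velocity, m/s
phi = mp * v**2 / (2.0 * e)    # equivalent accelerating potential, V
print(f"v = {v / 1e3:.1f} km/s, potential drop = {phi:.0f} V")
```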
A reasonable accuracy of $4{-}8\%$ was achieved in observations with a frame rate of $\approx 1~\mathrm{kHz}$. Active acquisition on the gas jet also provides a spatial resolution better than 5~mm in the midplane radial coordinate, because of the strong compression of the object size when projected to the centre along the magnetic flux surface. The charge exchange radiation diagnostic operates with three emission lines: H-$\alpha$ 656.3~nm, He-I 667.8~nm and He-I 587.6~nm. Recorded spectra are shown in the paper and examples of physical dependences are presented. The considered experimental technique can be scaled to an upgraded multi-point diagnostic for the next generation of linear traps and other magnetic confinement systems."} {"abstract": " Among the top approaches of recent years, link prediction using knowledge graph embedding (KGE) models has gained significant attention for knowledge graph completion. Various embedding models have been proposed so far, among which some recent KGE models obtain state-of-the-art performance on link prediction tasks by using embeddings with a high dimension (e.g. 1000), which increases the cost of training and evaluation given the large scale of KGs. In this paper, we propose a simple but effective performance boosting strategy for KGE models by using multiple low dimensions in different repetition rounds of the same model. For example, instead of training a model one time with a large embedding size of 1200, we repeat the training of the model 6 times in parallel with an embedding size of 200 and then combine the 6 separate models for testing, while the overall number of adjustable parameters is the same (6*200=1200) and the total memory footprint remains the same. We show that our approach enables different models to better cope with their expressiveness issues in modeling various graph patterns such as symmetric, 1-n, n-1 and n-n. In order to justify our findings, we conduct experiments on various KGE models. Experimental results on standard benchmark datasets, namely FB15K, FB15K-237 and WN18RR, show that multiple low-dimensional models of the same kind outperform the corresponding single high-dimensional models on link prediction in a certain range and have advantages in training efficiency by using parallel training, while the overall number of adjustable parameters is the same."} {"abstract": " One of the properties of interest in the analysis of networks is \emph{global communicability}, i.e., how easy or difficult it is, generally, to reach nodes from other nodes by following edges. Different global communicability measures provide quantitative assessments of this property, emphasizing different aspects of the problem. This paper investigates the sensitivity of global measures of communicability to local changes.
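One global measure studied below is the total network communicability $TC(A) = \mathbf{1}^T e^A \mathbf{1}$; a minimal sketch of probing its sensitivity to a single-edge perturbation on a hypothetical directed, weighted network:

```python
import numpy as np
from scipy.linalg import expm

def total_communicability(A):
    """TC(A) = 1^T exp(A) 1: sums walk-weighted connections over all pairs."""
    return expm(A).sum()

# Hypothetical directed, weighted adjacency matrix.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.5, 0.0, 0.0]])

base = total_communicability(A)
for i, j in [(0, 1), (1, 2), (2, 0)]:
    P = A.copy()
    P[i, j] += 0.1            # increase one edge weight slightly
    print(f"edge {(i, j)}: delta TC = {total_communicability(P) - base:.4f}")
```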
In particular, for directed, weighted networks, we study how different global measures of communicability change when the weight of a single edge is changed; or, in the unweighted case, when an edge is added or removed. The measures we study include the \emph{total network communicability}, based on the matrix exponential of the adjacency matrix, and the \emph{Perron network communicability}, defined in terms of the Perron root of the adjacency matrix and the associated left and right eigenvectors. Finding what local changes lead to the largest changes in global communicability has many potential applications, including assessing the resilience of a system to failure or attack, guidance for incremental system improvements, and studying the sensitivity of global communicability measures to errors in the network connection data."} {"abstract": " Although micro-lensing of macro-lensed quasars and supernovae provides unique opportunities for several kinds of investigations, it can add unwanted and sometimes substantial noise. While micro-lensing flux anomalies may be safely ignored for some observations, they severely limit others. "Worst-case" estimates can inform the decision whether or not to undertake an extensive examination of micro-lensing scenarios. Here, we report "worst-case" micro-lensing uncertainties for point sources lensed by singular isothermal potentials, parameterized by a convergence equal to the shear and by the stellar fraction. The results can be straightforwardly applied to non-isothermal potentials utilizing the mass sheet degeneracy. We use micro-lensing maps to compute fluctuations in image micro-magnifications and estimate the stellar fraction at which the fluctuations are greatest for a given convergence. We find that the worst-case fluctuations happen at a stellar fraction $\kappa_\star=\frac{1}{|\mu_{macro}|}$. For macro-minima, fluctuations in both magnification and demagnification appear to be bounded ($1.5>\Delta m>-1.3$, where $\Delta m$ is magnitude relative to the average macro-magnification). Magnifications for macro-saddles are bounded as well ($\Delta m > -1.7$). In contrast, demagnifications for macro-saddles appear to have unbounded fluctuations as $1/\mu_{macro}\rightarrow0$ and $\kappa_\star\rightarrow0$."} {"abstract": " A high-order quasi-conservative discontinuous Galerkin (DG) method is proposed for the numerical simulation of compressible multi-component flows. A distinct feature of the method is a predictor-corrector strategy to define the grid velocity. A Lagrangian mesh is first computed based on the flow velocity and then used as an initial mesh in a moving mesh method (the moving mesh partial differential equation or MMPDE method) to improve its quality. The fluid dynamic equations are discretized in the direct arbitrary Lagrangian-Eulerian framework using DG elements and the non-oscillatory kinetic flux, while the species equation is discretized using a quasi-conservative DG scheme to avoid numerical oscillations near material interfaces. A selection of one- and two-dimensional examples is presented to verify the convergence order and the constant-pressure-velocity preservation property of the method.
They also demonstrate that the incorporation of the Lagrangian meshing with the MMPDE moving mesh method works well in concentrating mesh points in regions of shocks and material interfaces."} {"abstract": " We provide evidence for the existence of a new strongly-coupled four-dimensional $\mathcal{N}=2$ superconformal field theory arising as a non-trivial IR fixed point on the Coulomb branch of the mass-deformed superconformal Lagrangian theory with gauge group $G_2$ and four fundamental hypermultiplets. Notably, our analysis proceeds by using various geometric constraints to bootstrap the data of the theory, and makes no explicit reference to the Seiberg-Witten curve. We conjecture a corresponding VOA and check that the vacuum character satisfies a linear modular differential equation of fourth order. We also propose an identification with existing class $\mathcal{S}$ constructions."} {"abstract": " Social touch is essential for our social interactions, communication, and well-being. It has been shown to reduce anxiety and loneliness, and is a key channel for transmitting emotions for which words are not sufficient, such as love, sympathy and reassurance. However, direct physical contact is not always possible due to being remotely located, interacting in a virtual environment, or as a result of a health issue. Mediated social touch enables physical interactions, despite the distance, by transmitting through devices the haptic cues that constitute social touch. As this technology is fairly new, the users' needs and their expectations of a device's design and features are unclear, as well as who would use this technology, and under which conditions. To better understand these aspects of the mediated interaction, we conducted an online survey of 258 respondents located in the USA. The results give insights into the types of interactions and device features that the US population would like to use."} {"abstract": " We establish a second anti-blocker theorem for non-commutative convex corners, show that the anti-blocking operation is continuous on bounded sets of convex corners, and define optimisation parameters for a given convex corner that generalise well-known graph theoretic quantities. We define the entropy of a state with respect to a convex corner, characterise its maximum value in terms of a generalised fractional chromatic number and establish entropy splitting results that demonstrate the entropic complementarity between a convex corner and its anti-blocker. We identify two extremal tensor products of convex corners and examine the behaviour of the introduced parameters with respect to tensoring. Specialising to non-commutative graphs, we obtain quantum versions of the fractional chromatic number and the clique covering number, as well as a notion of non-commutative graph entropy of a state, which we show to be continuous with respect to the state and the graph. We define the Witsenhausen rate of a non-commutative graph and compute the values of our parameters in some specific cases."} {"abstract": " We focus on BPS solutions of the gauged O(3) Sigma model, due to Schroers, and use these ideas to study the geometry of the moduli space. The model has an asymmetry parameter $\tau$ breaking the symmetry between vortices and antivortices in the field equations. It is shown that the moduli space is incomplete both on the Euclidean plane and on a compact surface.
On the Euclidean plane, the L2 metric on the moduli space is approximated for well-separated cores, and results consistent with similar approximations for the Ginzburg-Landau functional are found. The scattering angle of approaching vortex-antivortex pairs of different effective mass is computed numerically and is shown to be different from the well-known scattering of approaching Ginzburg-Landau vortices. The volume of the moduli space for general $\tau$ is computed for the case of the round sphere and flat tori. The model on a compact surface is deformed by introducing a neutral field and a Chern-Simons term. A lower bound for the Chern-Simons constant $\kappa$ such that the extended model admits a solution is shown to exist, and if the total numbers of vortices and antivortices differ, the existence of an upper bound is also shown. Existence of multiple solutions to the governing elliptic problem is established on a compact surface, as well as the existence of two limiting behaviours as $\kappa \to 0$. A localization formula for the deformation is found for both Ginzburg-Landau and O(3) Sigma model vortices, and it is shown that it can be extended to the coalescence set. This rules out the possibility that this is Kim-Lee's term in the case of Ginzburg-Landau vortices; moreover, the deformation term is compared on the plane with the Ricci form of the surface and is shown to be different, hence also ruling out that this is the term proposed by Collie-Tong to model vortex dynamics with Chern-Simons interaction."} {"abstract": " We show that the maximal number of singular points of a normal quartic surface $X \subset \mathbb{P}^3_K$ defined over an algebraically closed field $K$ of characteristic $2$ is at most $20$, and that if equality is attained, then the minimal resolution of $X$ is a supersingular K3 surface and the singular points are $20$ nodes. We produce examples with 14 nodes. In a sequel to this paper (in two parts, the second in collaboration with Matthias Sch\"utt) we show that the optimal bound is indeed 14, and that if equality is attained, then the minimal resolution of $X$ is a supersingular K3 surface and the singular points are $14$ nodes. We also obtain some smaller upper bounds under several geometric assumptions holding at one of the singular points $P$ (structure of the tangent cone, separability/inseparability of the projection with centre $P$)."} {"abstract": " All-electronic interrogation of biofluid flow velocity by sensors incorporated in ultra-low-power or self-sustained systems offers the promise of enabling multifarious emerging research and applications. Electrical sensors based on nanomaterials offer high spatiotemporal resolution and exceptional sensitivity to external flow stimuli, and are easily integrated and fabricated using scalable techniques. However, existing nano-based electrical flow-sensing technologies still lack precision and stability and are typically only applicable to simple aqueous solutions or liquid/gas dual-phase mixtures, making them unsuitable for monitoring the low-flow (~micrometer/second) yet important characteristics of continuous biofluids (e.g., hemorheological behaviors in microcirculation).
Here we show that monolayer-graphene single microelectrodes harvesting charge from continuous aqueous flow provide an ideal flow sensing strategy: our devices deliver over six months of stability and sub-micrometer/second resolution in real-time quantification of whole-blood flows with multiscale amplitude-temporal characteristics in a microfluidic chip. The flow transduction is enabled by low-noise charge transfer at the graphene/water interface in response to flow-sensitive rearrangement of the interfacial electrical double layer. Our results demonstrate the feasibility of using a graphene-based self-powered strategy for monitoring biofluid flow velocity, with key performance metrics orders of magnitude higher than other electrical approaches."} {"abstract": " Randomization-based Machine Learning methods for prediction are currently a hot topic in Artificial Intelligence, due to their excellent performance in many prediction problems, with a bounded computation time. The application of randomization-based approaches to renewable energy prediction problems has been massive in the last few years, including many different types of randomization-based approaches, their hybridization with other techniques and also the description of new versions of classical randomization-based algorithms, including deep and ensemble approaches. In this paper we review the most important characteristics of randomization-based machine learning approaches and their application to renewable energy prediction problems. We describe the most important methods and algorithms of this family of modeling methods, and perform a critical literature review, examining prediction problems related to solar, wind, marine/ocean and hydro-power renewable sources. We support our critical analysis with an extensive experimental study, comprising real-world problems related to solar, wind and hydro-power energy, where randomization-based algorithms are found to achieve superior results at a significantly lower computational cost than other modeling counterparts. We end our survey with a prospect of the most important challenges and research directions that remain open in this field, along with an outlook motivating further research efforts in this exciting area."} {"abstract": " Internal interfaces in a domain can exist as material defects or can appear due to the propagation of cracks. Discretization of such geometries and solution of the contact problem on the internal interfaces can be computationally challenging. We employ an unfitted Finite Element (FE) framework for the discretization of the domains and develop a tailored, globally convergent, and efficient multigrid method for solving contact problems on the internal interfaces. In the unfitted FE methods, structured background meshes are used and only the underlying finite element space has to be modified to incorporate the discontinuities. The non-penetration conditions on the embedded interfaces of the domains are discretized using the method of Lagrange multipliers. We reformulate the arising variational inequality problem as a quadratic minimization problem with linear inequality constraints. Our multigrid method can solve such problems by employing a tailored multilevel hierarchy of the FE spaces and a novel approach for tackling the discretized non-penetration conditions. We employ pseudo-$L^2$ projection-based transfer operators to construct a hierarchy of nested FE spaces from the hierarchy of non-nested meshes.
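The smoother described next is a modified projected Gauss-Seidel variant; as background, a minimal sketch of a plain projected Gauss-Seidel sweep for a bound-constrained quadratic minimization (a hypothetical 1D obstacle problem, not the authors' modified method):

```python
import numpy as np

def projected_gauss_seidel(A, b, lower, x, sweeps=50):
    """Sweeps for min 0.5*x^T A x - b^T x  s.t.  x >= lower, with A symmetric
    positive definite: each unknown is relaxed exactly, then projected back
    onto its bound (here standing in for a non-penetration condition)."""
    for _ in range(sweeps):
        for i in range(len(b)):
            r = b[i] - A[i] @ x + A[i, i] * x[i]   # residual excluding x[i]
            x[i] = max(lower[i], r / A[i, i])      # relax, then project
    return x

# 1D Laplacian pressed onto an obstacle: discrete -u'' = f with u >= g.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = -0.5 * np.ones(n)                  # downward load
g = -0.05 * np.ones(n)                 # obstacle height
u = projected_gauss_seidel(A, b, g, np.zeros(n))
print("min(u) =", round(u.min(), 4), "| nodes on the obstacle:",
      int(np.sum(u <= g + 1e-10)))
```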
The essential component of our multigrid method is a technique that decouples the linear constraints using an orthogonal transformation of the basis. The decoupled constraints are handled by a modified variant of the projected Gauss-Seidel method, which we employ as a smoother in the multigrid method. These components of the multigrid method allow us to enforce the linear constraints locally and ensure the global convergence of our method. We demonstrate the robustness, efficiency, and level-independent convergence of the proposed method for Signorini's problem and two-body contact problems."} {"abstract": " Given a simple undirected graph $G$, an orientation of $G$ assigns every edge of $G$ a direction. Borradaile et al. gave a greedy polynomial-time algorithm, SC-Path-Reversal, which finds a strongly connected orientation that minimizes the maximum indegree, and conjectured that SC-Path-Reversal is indeed optimal for the "minimizing the lexicographic order" objective as well. In this note, we give a positive answer to the conjecture, that is, we show that the algorithm SC-Path-Reversal finds a strongly connected orientation that minimizes the lexicographic order of indegrees."} {"abstract": " Intermediate-mass planets, from super-Earth to Neptune-sized bodies, are the most common type of planet in the galaxy. The prevailing theory of planet formation, core accretion, predicts significantly fewer intermediate-mass giant planets than observed. The competing mechanism for planet formation, disk instability, can produce massive gas giant planets on wide orbits, such as HR8799, by direct fragmentation of the protoplanetary disk. Previously, fragmentation in magnetized protoplanetary disks has only been considered when the magneto-rotational instability is the driving mechanism for magnetic field growth. Yet, this instability is naturally superseded by the spiral-driven dynamo when more realistic, non-ideal MHD conditions are considered. Here we report on MHD simulations of disk fragmentation in the presence of a spiral-driven dynamo. Fragmentation leads to the formation of long-lived bound protoplanets with masses that are at least one order of magnitude smaller than in conventional disk instability models. These light clumps survive shear and do not grow further due to the shielding effect of the magnetic field, whereby magnetic pressure stifles local inflow of matter. The outcome is a population of gas-rich planets with intermediate masses, while gas giants are found to be rarer, in qualitative agreement with the observed mass distribution of exoplanets."} {"abstract": " Since the education sector is associated with highly dynamic business environments which are controlled and maintained by information systems, recent technological advancements and the increasing pace of adopting artificial intelligence (AI) technologies create a need to identify and analyze the issues regarding their implementation in the education sector. However, a study of the contemporary literature revealed that relatively little research has been undertaken in this area. To fill this void, we have identified the benefits and challenges of implementing artificial intelligence in the education sector, preceded by a short discussion on the concepts of AI and its evolution over time. Moreover, we have also reviewed modern AI technologies for learners and educators, currently available on the software market, evaluating their usefulness.
Last but not least, we have developed a strategy implementation model, described by a five-stage, generic process, along with the corresponding configuration guide. To verify and validate its design, we separately developed three implementation strategies for three different higher education organizations. We believe that the obtained results will contribute to a better understanding of the specificities of AI systems, services and tools, and pave the way for their smooth implementation."} {"abstract": " In compositional zero-shot learning, the goal is to recognize unseen compositions (e.g. old dog) of observed visual primitives, i.e. states (e.g. old, cute) and objects (e.g. car, dog), in the training set. This is challenging because the same state can, for example, alter the visual appearance of a dog drastically differently from that of a car. As a solution, we propose a novel graph formulation called Compositional Graph Embedding (CGE) that learns image features, compositional classifiers, and latent representations of visual primitives in an end-to-end manner. The key to our approach is exploiting the dependency between states, objects, and their compositions within a graph structure to enforce the relevant knowledge transfer from seen to unseen compositions. By learning a joint compatibility that encodes semantics between concepts, our model allows for generalization to unseen compositions without relying on an external knowledge base like WordNet. We show that in the challenging generalized compositional zero-shot setting our CGE significantly outperforms the state of the art on MIT-States and UT-Zappos. We also propose a new benchmark for this task based on the recent GQA dataset. Code is available at: https://github.com/ExplainableML/czsl"} {"abstract": " The biochemical reaction networks that regulate living systems are all stochastic to varying degrees. The resulting randomness affects biological outcomes at multiple scales, from the functional states of single proteins in a cell to the evolutionary trajectory of whole populations. Controlling how the distribution of these outcomes changes over time -- via external interventions like time-varying concentrations of chemical species -- is a complex challenge. In this work, we show how counterdiabatic (CD) driving, first developed to control quantum systems, provides a versatile tool for steering biological processes. We develop a practical graph-theoretic framework for CD driving in discrete-state continuous-time Markov networks. We illustrate the formalism with examples from gene regulation and chaperone-assisted protein folding, demonstrating the possibility that nature can exploit CD driving to accelerate response to sudden environmental changes. We generalize the method to continuum Fokker-Planck models, and apply it to study AFM single-molecule pulling experiments in regimes where the typical assumption of adiabaticity breaks down, as well as an evolutionary model with competing genetic variants subject to time-varying selective pressures. The AFM analysis shows how CD driving can eliminate non-equilibrium artifacts due to large force ramps in such experiments, allowing accurate estimation of biomolecular properties."} {"abstract": " Quantum properties, such as entanglement and coherence, are indispensable resources in various quantum information processing tasks.
However, there is still no efficient and scalable way to detect these useful features, especially for high-dimensional and multipartite quantum systems. In this work, we exploit the convexity of samples without the desired quantum features and design an unsupervised machine learning method to detect the presence of such features as anomalies. Particularly, in the context of entanglement detection, we propose a complex-valued neural network composed of a pseudo-siamese network and a generative adversarial net, and then train it with only separable states to construct non-linear witnesses for entanglement. It is shown via numerical examples, ranging from two-qubit to ten-qubit systems, that our network is able to achieve high detection accuracy, above 97.5% on average. Moreover, it is capable of revealing rich structures of entanglement, such as partial entanglement among subsystems. Our results are readily applicable to the detection of other quantum resources such as Bell nonlocality and steerability, and thus our work could provide a powerful tool to extract quantum features hidden in multipartite quantum data."} {"abstract": " Effective traffic optimization strategies can improve the performance of transportation networks significantly. Most existing works develop traffic optimization strategies depending on the local traffic states of congested road segments, where congestion propagation is neglected. This paper proposes a novel distributed traffic optimization method for urban freeways considering the potentially congested road segments, which are called the potential-homogeneous-area. The proposed approach is based on the intuition that the evolution of congestion may affect neighboring segments due to the mobility of traffic flow. We identify the potential-homogeneous-area by applying our proposed temporal-spatial lambda-connectedness method using historical traffic data. Further, a global dynamic capacity constraint for this area is integrated with the cell transmission model (CTM) in the traffic optimization problem. To reduce computational complexity and improve scalability, we propose a fully distributed algorithm to solve the problem, based on the partial augmented Lagrangian and dual-consensus alternating direction method of multipliers (ADMM). By this means, distributed coordination of ramp metering and variable speed limit control is achieved. We prove that the proposed algorithm converges to the optimal solution as long as the traffic optimization objective is convex. The performance of the proposed method is evaluated by macroscopic simulation using real data from Shanghai, China."} {"abstract": " We give a numerical condition for right-handedness of a dynamically convex Reeb flow on $S^3$. Our condition is stated in terms of an asymptotic ratio between the amount of rotation of the linearised flow and the linking number of trajectories with a periodic orbit that spans a disk-like global surface of section. As an application, we find an explicit constant $\delta_* < 0.7225$ such that if a Riemannian metric on the $2$-sphere is $\delta$-pinched with $\delta > \delta_*$, then its geodesic flow lifts to a right-handed flow on $S^3$. In particular, all finite collections of periodic orbits of such a geodesic flow bind open books whose pages are global surfaces of section."} {"abstract": " Conventional Supervised Learning approaches focus on the mapping from input features to output labels.
After training, the learnt model alone is applied to testing features to predict testing labels in isolation, with the training data wasted and their associations ignored. To take full advantage of the vast amount of training data and their associations, we propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning. We first introduce an additional component called Memory to memorize all the training data. Then we learn the differences between labels as well as the associations between features through the combination of a differential equation and some sampling methods. Finally, in the evaluation phase, we predict unknown labels by inference from the memorized facts plus the learnt differences and associations, in a geometrically meaningful manner. We first build this theory in unary situations and apply it to image recognition, then extend it to link prediction as a binary situation, in which our method outperforms strong state-of-the-art baselines on the ogbl-ddi dataset."} {"abstract": " Existing sequential recommendation methods rely on large amounts of training data and usually suffer from the data sparsity problem. To tackle this, the pre-training mechanism has been widely adopted, which attempts to leverage large-scale data to perform self-supervised learning and transfer the pre-trained parameters to downstream tasks. However, previous pre-trained models for recommendation focus on leveraging universal sequence patterns from user behaviour sequences and item information, while ignoring personalized interests captured from heterogeneous user information, which has been shown to be effective for personalized recommendation. In this paper, we propose a method to enhance pre-trained models with heterogeneous user information, called User-aware Pre-training for Recommendation (UPRec). Specifically, UPRec leverages user attributes and structured social graphs to construct self-supervised objectives in the pre-training stage and proposes two user-aware pre-training tasks. Comprehensive experimental results on several real-world large-scale recommendation datasets demonstrate that UPRec can effectively integrate user information into pre-trained models and thus provide more appropriate recommendations for users."} {"abstract": " This paper is devoted to the analysis of the distribution of the total magnetic quantum number $M$ in a relativistic subshell with $N$ equivalent electrons of momentum $j$. This distribution is analyzed through its cumulants and through their generating function, for which an analytical expression is provided. This function also allows us to get the values of the cumulants at any order. Such values are useful to obtain the moments at various orders. Since the cumulants of distinct subshells are additive, this study directly applies to any relativistic configuration. Recursion relations on the generating function are given. It is shown that the generating function of the magnetic quantum number distribution may be expressed as an $n$-th derivative of a polynomial. This leads to recurrence relations for this distribution which are very efficient even in the case of large $j$ or $N$. The magnetic quantum number distribution is numerically studied using the Gram-Charlier and Edgeworth expansions. The inclusion of high-order terms may improve the accuracy of the Gram-Charlier representation, for instance when a small and a large angular momentum coexist in the same configuration.
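As a numeric cross-check of the generating-function picture, the degeneracy of each total $M$ for $N$ equivalent electrons is the coefficient of $x^N$ in $\prod_{m=-j}^{j}(1 + x\,t^{m})$; a minimal sketch with hypothetical $j$ and $N$:

```python
from collections import defaultdict

def m_distribution(twoj, N):
    """Degeneracy of each total M (stored as 2*M to stay integral) for N
    equivalent electrons in a subshell of momentum j = twoj/2: coefficient
    of x^N in prod_m (1 + x*t^m), the Pauli-principle generating function."""
    counts = defaultdict(int)
    counts[(0, 0)] = 1                        # (electrons placed, 2*M)
    for twom in range(-twoj, twoj + 1, 2):    # m = -j, -j+1, ..., j
        for (k, M2), c in list(counts.items()):
            counts[(k + 1, M2 + twom)] += c   # occupy this m-orbital
    return {M2: c for (k, M2), c in counts.items() if k == N}

# Example: j = 5/2 subshell with N = 3 electrons.
dist = m_distribution(twoj=5, N=3)
for M2 in sorted(dist):
    print(f"M = {M2 / 2:+.1f}  degeneracy {dist[M2]}")
print("total states:", sum(dist.values()))    # binomial(6, 3) = 20
```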
However, such series do not converge when high orders are considered, and keeping only the first two terms often provides a fair approximation of the magnetic quantum number distribution. The Edgeworth series offers an interesting alternative, though this expansion is also divergent and of an asymptotic nature."} {"abstract": " In this paper we present a continuation method which transforms spatially distributed ODE systems into continuous PDEs. We show that this continuation can be performed both for linear and nonlinear systems, including multidimensional, space- and time-varying systems. When applied to a large-scale network, the continuation provides a PDE describing the evolution of a continuous state approximation that respects the spatial structure of the original ODE. Our method is illustrated by multiple examples including transport equations, Kuramoto equations and heat diffusion equations. As a main example, we perform the continuation of a Newtonian system of interacting particles and obtain the Euler equations for compressible fluids, thereby providing an original alternative solution to Hilbert's 6th problem. Finally, we leverage our derivation of the Euler equations to control multiagent systems, designing a nonlinear control algorithm for robot formation based on its continuous approximation."} {"abstract": " Due to the recent growth of discoveries of strong gravitational lensing (SGL) systems, one can statistically study both lens properties and cosmological parameters from 161 galactic-scale SGL systems. We analyze the meVSL model with the velocity dispersion of lenses by adopting a power-law mass model depending on redshift and surface mass density. The analysis shows that meVSL models with various dark energy models, including $\Lambda$CDM, $\omega$CDM, and CPL, provide negative values of the meVSL parameter $b$ when we adopt the Planck prior on $\Omega_{m0}$. These indicate a faster speed of light and a stronger gravitational force in the past. However, if we adopt the WMAP prior on $\Omega_{m0}$, then we obtain null results on $b$ within the 1-$\sigma$ CL for the different dark energy models."} {"abstract": " Most online message threads are inherently cluttered, and any new user, or an existing user visiting after a hiatus, will have a difficult time understanding what is being discussed in the thread. Similarly, cluttered responses in a message thread make analyzing the messages a difficult problem. The need for disentangling the clutter is much higher when the platform where the discussion takes place does not provide functions to retrieve the reply relations of the messages. This introduces an interesting problem, which \cite{wang2011learning} phrase as a structural learning problem. We create vector embeddings for the posts in a thread that capture both linguistic and positional features in relation to the context in which a given message appears. Using these post embeddings, we compute a similarity-based connectivity matrix, which is then converted into a graph. After employing a pruning mechanism, the resultant graph can be used to discover the reply relations of the posts in the thread. The process of discovering or disentangling chat is kept as an unsupervised mechanism.
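A minimal sketch of the similarity-graph step with toy embeddings (the cosine similarity, the pruning threshold and the parent-selection rule here are illustrative assumptions, not the exact pipeline of the paper):

```python
import numpy as np

def reply_relations(post_vecs, threshold=0.6):
    """Link each post to its most similar earlier post when the cosine
    similarity clears the pruning threshold; otherwise the post starts a
    new thread. Returns one parent index per post (-1 = thread root)."""
    X = np.asarray(post_vecs, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T                        # similarity-based connectivity
    parents = [-1]
    for i in range(1, len(X)):
        j = int(np.argmax(sim[i, :i]))   # best-matching earlier post
        parents.append(j if sim[i, j] >= threshold else -1)
    return parents

# Four toy post embeddings: two interleaved conversations.
posts = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
print(reply_relations(posts))            # expected: [-1, -1, 0, 1]
```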
We present our experimental results on a data set obtained from Telegram with limited metadata."} {"abstract": " The relationship between the magnetic interaction and photoinduced dynamics in antiferromagnetic perovskites is investigated in this study. In La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$ thin films, commensurate spin ordering is accompanied by charge disproportionation, whereas SrFeO${}_{3}$ thin films show incommensurate helical antiferromagnetic spin ordering due to increased ferromagnetic coupling compared to La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$. To understand the photoinduced spin dynamics in these materials, we investigate the spin ordering through time-resolved resonant soft X-ray scattering. In La${}_{1/3}$Sr${}_{2/3}$FeO${}_{3}$, ultrafast quenching of the magnetic ordering within 130 fs through a nonthermal process is observed, triggered by charge transfer between the Fe atoms. We compare this to the photoinduced dynamics of the helical magnetic ordering of SrFeO${}_{3}$. We find that the change in the magnetic coupling through optically induced charge transfer can offer an even more efficient channel for spin-order manipulation."} {"abstract": " In this paper we study the equations of the elimination ideal associated with $n+1$ generic multihomogeneous polynomials defined over a product of projective spaces of dimension $n$. We first prove a duality property and then make this duality explicit by introducing multigraded Sylvester forms. These results provide a partial generalization of similar properties that are known in the setting of homogeneous polynomial systems defined over a single projective space. As an important consequence, we derive a new family of elimination matrices that can be used for solving zero-dimensional multiprojective polynomial systems by means of linear algebra methods."} {"abstract": " We report microscopic, cathodoluminescence, chemical and O isotopic measurements of FeO-poor isolated olivine grains (IOG) in the carbonaceous chondrites Allende (CV3), Northwest Africa 5958 (C2-ung), Northwest Africa 11086 (CM2-an), and Allan Hills 77307 (CO3.0). The general petrographic, chemical and isotopic similarity with bona fide type I chondrules confirms that the IOG derived from them. The concentric CL zoning, reflecting a decrease in refractory elements toward the margins, and the frequent rimming by enstatite are taken as evidence of interaction of the IOG with the gas as stand-alone objects. This indicates that they were splashed out of chondrules when these were still partially molten. CaO-rich refractory forsterites, which are restricted to $\Delta^{17}O < -4\permil$, likely escaped equilibration at lower temperatures because of their large size and possibly quicker quenching. The IOG thus bear witness to frequent collisions in the chondrule-forming regions."} {"abstract": " The nova rate in the Milky Way remains largely uncertain, despite its vital importance in constraining models of Galactic chemical evolution as well as understanding progenitor channels for Type Ia supernovae. The rate has been previously estimated in the range of $\approx10-300$ yr$^{-1}$, either based on extrapolations from a handful of very bright optical novae or the nova rates in nearby galaxies; both methods are subject to debatable assumptions. The total discovery rate of optical novae remains much smaller ($\approx5-10$ yr$^{-1}$) than these estimates, even with the advent of all-sky optical time domain surveys.
Here, we present a systematic sample of 12 spectroscopically confirmed Galactic novae detected in the first 17 months of Palomar Gattini-IR (PGIR), a wide-field near-infrared time domain survey. Operating in the $J$-band ($\approx1.2$ $\mu$m), which is relatively immune to dust extinction, the PGIR sample has an extinction distribution highly skewed to large extinction values ($> 50$% of events obscured by $A_V\gtrsim5$ mag). Using recent estimates for the distribution of mass and dust in the Galaxy, we show that the observed extinction distribution of the PGIR sample is commensurate with that expected from dust models. The PGIR extinction distribution is inconsistent with that reported in previous optical searches (null hypothesis probability $< 0.01$%), suggesting that a large population of highly obscured novae has been systematically missed in previous optical searches. We perform the first quantitative simulation of a $3\pi$ time domain survey to estimate the Galactic nova rate using PGIR, and derive a rate of $\approx 46.0^{+12.5}_{-12.4}$ yr$^{-1}$. Our results suggest that all-sky near-infrared time-domain surveys are well poised to uncover the Galactic nova population."} {"abstract": " Reconfigurable intelligent surface (RIS) is considered a revolutionary technology for future wireless communication networks. In this letter, we consider the acquisition of the cascaded channels, which is a challenging task due to the massive number of passive RIS elements. To reduce the pilot overhead, we adopt the element-grouping strategy, where each element in one group shares the same reflection coefficient and is assumed to have the same channel condition. We analyze the channel interference caused by the element-grouping strategy and further design two deep-learning-based networks. The first one aims to refine the partial channels by eliminating the interference, while the second one tries to extrapolate the full channels from the refined partial channels. We cascade the two networks and train them jointly. Simulation results show that the proposed scheme provides significant gain compared to the conventional element-grouping method without interference elimination."} {"abstract": " The progenitors of present-day galaxy clusters give important clues about the evolution of the large-scale structure, cosmic mass assembly, and galaxy evolution. Simulations are a major tool for these studies since they are used to interpret observations. In this work, we introduce a set of "protocluster-lightcones", dubbed PCcones. They are mock galaxy catalogs generated from the Millennium Simulation with the L-GALAXIES semi-analytic model. These lightcones were constructed by placing a desired structure at the redshift of interest in the centre of the cone. This approach allows us to adopt a set of observational constraints, such as magnitude limits and uncertainties in magnitudes and photometric redshifts (photo-zs), to produce realistic simulations of photometric surveys. We show that photo-zs obtained with PCcones are more accurate than those obtained directly with the Millennium Simulation, mostly due to the difference in how apparent magnitudes are computed. We apply PCcones in the determination of the expected accuracy of protocluster detection using photo-zs in the $z=1-3$ range in the wide layer of HSC-SSP and the 10-year LSST forecast.
With our technique, we expect to recover only $\sim38\%$ and $\sim 43\%$ of all massive galaxy cluster progenitors with more than 70\% purity for HSC-SSP and LSST, respectively. Indeed, the combination of observational constraints and photo-z uncertainties affects the detection of structures critically for both emulations, indicating the need for spectroscopic redshifts to improve detection. We also compare our mocks of the Deep CFHTLS at $z<1.5$ with observed cluster catalogs, as an extra validation of the lightcones and methods."} {"abstract": " We reanalyse the solar eclipse linked to the Biblical passage about the military leader Joshua who ordered the sun to halt in the midst of the day (Joshua 10:12). Although there is agreement that the basic story is rooted in a real event, the date is subject to different opinions. We review the historical emergence of the text and confirm that the total eclipse of the sun of 30 September 1131 BCE is the most likely candidate. The Besselian Elements for this eclipse are re-computed. The error for the deceleration parameter of Earth's rotation, $\Delta T$, is improved by a factor of 2."} {"abstract": " Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms, their corresponding sentiment polarities, and the opinion terms. There exist seven subtasks in ABSA. Most studies focus only on subsets of these subtasks, which leads to various complicated ABSA models that are hard to unify in a single framework. In this paper, we redefine every subtask target as a sequence mixing pointer indexes and sentiment class indexes, which converts all ABSA subtasks into a unified generative formulation. Based on the unified formulation, we exploit the pre-trained sequence-to-sequence model BART to solve all ABSA subtasks in an end-to-end framework. Extensive experiments on four ABSA datasets for seven subtasks demonstrate that our framework achieves substantial performance gains and provides a truly unified end-to-end solution for the whole set of ABSA subtasks, which could benefit multiple tasks."} {"abstract": " Recently, a geometric approach to operator mixing in massless QCD-like theories -- that involves canonical forms based on the Poincar\'e-Dulac theorem for the linear system that defines the renormalized mixing matrix in the coordinate representation $Z(x,\mu)$ -- has been advocated in arXiv:2103.15527. As a consequence, a classification of operator mixing into four cases -- depending on the canonical forms of $- \frac{\gamma(g)}{\beta(g)}$, with $\gamma(g)=\gamma_0 g^2+\cdots$ the matrix of the anomalous dimensions and $\beta(g)=-\beta_0 g^3 + \cdots$ the beta function -- has been proposed: (I) nonresonant $\frac{\gamma_0}{\beta_0}$ diagonalizable, (II) resonant $\frac{\gamma_0}{\beta_0}$ diagonalizable, (III) nonresonant $\frac{\gamma_0}{\beta_0}$ nondiagonalizable, (IV) resonant $\frac{\gamma_0}{\beta_0}$ nondiagonalizable. In particular, in arXiv:2103.15527 a detailed analysis of case (I) -- where operator mixing reduces to all orders of perturbation theory to the multiplicatively renormalizable case -- has been provided.
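For concreteness, in case (I) the defining linear system and its one-loop UV asymptotics take the schematic form below (a standard computation under the conventions above, not a quotation from the paper):

```latex
% With gamma(g) = gamma_0 g^2 + ... and beta(g) = -beta_0 g^3 + ...,
%   -gamma(g)/beta(g) = (gamma_0/beta_0)(1/g) + O(1),
% so in a basis diagonalizing gamma_0/beta_0 (eigenvalues lambda_i),
% each operator renormalizes multiplicatively:
\frac{\partial Z}{\partial g} \;=\; -\,\frac{\gamma(g)}{\beta(g)}\,Z
\quad\Longrightarrow\quad
Z_i(x,\mu)\;\sim\;\left(\frac{g(\mu)}{g(x)}\right)^{\lambda_i},
\qquad
\lambda_i \in \operatorname{spec}\!\left(\frac{\gamma_0}{\beta_0}\right).
```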
In the present paper, following the aforementioned approach, we work out in the remaining three cases the canonical forms for $- \frac{\gamma(g)}{\beta(g)}$ to all orders of perturbation theory, the corresponding UV asymptotics of $Z(x,\mu)$, and the physics interpretation. We also work out in detail physical realizations of cases (I) and (II)."} {"abstract": " Ova-angular rotations of a prime number are characterized, constructed using the Dirichlet theorem. The geometric properties arising from this theory are analyzed and some applications are presented, including Goldbach's conjecture, the existence of infinitely many primes of the form $\rho = k^2+1$ and the convergence of the sum of the inverses of the Mersenne primes. Although the mathematics used is elementary, the usefulness of this theory based on geometric properties can be appreciated. The paper ends by introducing the ova-angular square matrix."} {"abstract": " Data from multifactor HCI experiments often violates the normality assumption of parametric tests (i.e., nonconforming data). The Aligned Rank Transform (ART) is a popular nonparametric analysis technique that can find main and interaction effects in nonconforming data, but leads to incorrect results when used to conduct contrast tests. We created a new algorithm called ART-C for conducting contrasts within the ART paradigm and validated it on 72,000 data sets. Our results indicate that ART-C does not inflate Type I error rates, unlike contrasts based on ART, and that ART-C has more statistical power than a t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also extended a tool called ARTool with our ART-C algorithm for both Windows and R. Our validation had some limitations (e.g., only six distribution types, no mixed factorial designs, no random slopes), and data drawn from Cauchy distributions should not be analyzed with ART-C."} {"abstract": " We investigate the formation and growth of massive black hole (BH) seeds in dusty star-forming galaxies, relying on and extending the framework proposed by Boco et al. 2020. Specifically, the latter envisages the migration of stellar compact remnants (neutron stars and stellar-mass black holes) via gaseous dynamical friction towards the galaxy nuclear region, and their subsequent merging to grow a massive central BH seed. In this paper we add two relevant ingredients: (i) we include primordial BHs, which could constitute a fraction $f_{\rm pBH}$ of the dark matter, as an additional component participating in the seed growth; (ii) we predict the stochastic gravitational wave background generated during the seed growth, both from stellar compact remnant and from primordial BH mergers. We find that the latter events contribute most to the initial growth of the central seed during a timescale of $10^6-10^7\,\rm yr$, before stellar compact remnant mergers and gas accretion take over. In addition, if the fraction of primordial BHs $f_{\rm pBH}$ is large enough, gravitational waves emitted by their mergers in the nuclear galactic regions could be detected by future interferometers like the Einstein Telescope, DECIGO and LISA. As for the associated stochastic gravitational wave background, we predict that it extends over the wide frequency band $10^{-6}\lesssim f [{\rm Hz}]\lesssim 10$, which is very different from the typical range produced by mergers of isolated binary compact objects.
On the one hand, the detection of such a background could be a smoking gun to test the proposed seed growth mechanism; on the other hand, it constitutes a relevant contaminant from astrophysical sources to be characterized and subtracted in the challenging search for a primordial background of cosmological origin."} {"abstract": " The viewing size of a signer correlates with legibility, i.e., the ease with which a viewer can recognize individual signs. The WCAG 2.0 guidelines (G54) mention in the notes that there should be a mechanism to adjust the size to ensure the signer is discernible, but do not state minimum discernibility guidelines. The fluent range (the range over which sign viewers can follow the signers at maximum speed) extends from about 7{\deg} to 20{\deg}, which is far greater than 2{\deg} for print. Assuming a standard viewing distance of 16 inches from a 5-inch smartphone display, the corresponding sizes are from 2 to 5 inches, i.e., from 1/3rd to full-screen. This is consistent with vision science findings about human visual processing properties, and how they play a dominant role in constraining the distribution of signer sizes."} {"abstract": " Time-of-Flight Magnetic Resonance Angiographs (TOF-MRAs) enable visualization and analysis of cerebral arteries. This analysis may indicate normal variation of the configuration of the cerebrovascular system or vessel abnormalities, such as aneurysms. A model would be useful to represent normal cerebrovascular structure and variabilities in a healthy population and to differentiate from abnormalities. Current anomaly detection methods using autoencoding convolutional neural networks usually use a voxelwise mean error for optimization. We propose optimizing a variational autoencoder (VAE) with structural similarity loss (SSIM) for TOF-MRA reconstruction. A patch-trained 2D fully-convolutional VAE was optimized for TOF-MRA reconstruction by comparing vessel segmentations of original and reconstructed MRAs. The method was trained and tested on two datasets: the IXI dataset and a subset from the ADAM challenge. Both trained networks were tested on a dataset including subjects with aneurysms. We compared VAE optimization with L2 loss and SSIM loss. Performance was evaluated between original and reconstructed MRAs using mean square error, mean-SSIM, peak signal-to-noise ratio and the dice similarity index (DSI) of segmented vessels. The L2-optimized VAE outperforms SSIM, with improved reconstruction metrics and DSIs for both datasets. Optimization using SSIM performed best for visual image quality, but with a discrepancy in quantitative reconstruction and vascular segmentation. The larger, more diverse IXI dataset had overall better performance. Reconstruction metrics, including SSIM, were lower for MRAs including aneurysms. An SSIM-optimized VAE improved the visual perceptual image quality of TOF-MRA reconstructions. An L2-optimized VAE performed best for TOF-MRA reconstruction, where the vascular segmentation is important. SSIM is a potential metric for anomaly detection of MRAs."} {"abstract": " We investigate weak$^*$ derived sets, that is, the sets of weak$^*$ limits of bounded nets, of convex subsets of duals of non-reflexive Banach spaces and their possible iterations.
We prove that the dual space of any non-reflexive Banach space contains convex subsets of any finite order and a convex subset of order $\omega + 1$.
"} {"abstract": " The scaling relations between the black hole (BH) mass and soft lag properties for both active galactic nuclei (AGNs) and BH X-ray binaries (BHXRBs) suggest the same underlying physical mechanism at work in accreting BH systems spanning a broad range of mass. However, the low-mass end of AGNs has never been explored in detail. In this work, we extend the existing scaling relations to lower-mass AGNs, which serve as anchors between the normal-mass AGNs and BHXRBs. For this purpose, we construct a sample of low-mass AGNs ($M_{\rm BH}<3\times 10^{6} M_{\rm \odot}$) from the XMM-Newton archive and measure frequency-resolved time delays between the soft (0.3-1 keV) and hard (1-4 keV) X-ray emissions. We report that the soft band lags behind the hard band emission at high frequencies $\sim[1.3-2.6]\times 10^{-3}$ Hz, which is interpreted as a sign of reverberation from the inner accretion disc in response to the direct coronal emission. At low frequencies ($\sim[3-8]\times 10^{-4}$ Hz), the hard band lags behind the soft band variations, which we explain in the context of the inward propagation of luminosity fluctuations through the corona. Assuming a lamppost geometry for the corona, we find that the X-ray source of the sample extends at an average height and radius of $\sim 10r_{\rm g}$ and $\sim 6r_{\rm g}$, respectively. Our results confirm that the scaling relations between the BH mass and soft lag amplitude/frequency derived for higher-mass AGNs can be safely extrapolated to lower-mass AGNs, and that the accretion process is indeed independent of the BH mass.
"} {"abstract": " The presence of interface recombination in a complex multilayered thin-film solar structure causes a disparity between the internal open-circuit voltage (VOC,in), measured by photoluminescence, and the external open-circuit voltage (VOC,ex), i.e., an additional VOC deficit. Achieving higher VOC,ex values requires a comprehensive understanding of the connection between the VOC deficit and interface recombination. Here, a deep near-surface defect model at the absorber/buffer interface is developed for copper indium diselenide solar cells grown under Cu-excess conditions to explain the disparity between VOC,in and VOC,ex. The model is based on experimental analysis of admittance spectroscopy and deep-level transient spectroscopy, which show the signature of a deep acceptor defect. Further, temperature-dependent current-voltage measurements confirm the presence of near-surface defects as the cause of interface recombination. The numerical simulations show a strong decrease in the local VOC,in near the absorber/buffer interface, leading to a VOC deficit in the device. This loss mechanism leads to interface recombination without a reduced interface bandgap or Fermi-level pinning. Further, these findings demonstrate that VOC,in measurements alone can be inconclusive and might conceal information on interface recombination pathways, establishing the need for complementary techniques like temperature-dependent current-voltage measurements to identify the cause of interface recombination in the devices.
"} {"abstract": " We study random walks on the isometry group of a Gromov hyperbolic space or Teichm\"uller space.
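The frequency-resolved soft/hard lags in the AGN abstract above are conventionally measured from the phase of the cross spectrum. The sketch below shows that core step on synthetic light curves; the segment averaging, coherence weighting, and noise corrections of a real analysis are omitted, and all signal parameters are illustrative.

```python
import numpy as np

# Frequency-resolved time lag between two light curves via the cross
# spectrum; positive lag means the "hard" curve lags the "soft" one.
def cross_spectrum_lag(soft, hard, dt):
    n = len(soft)
    freqs = np.fft.rfftfreq(n, d=dt)
    cs = np.fft.rfft(soft) * np.conj(np.fft.rfft(hard))
    lag = np.angle(cs[1:]) / (2 * np.pi * freqs[1:])   # skip the f=0 bin
    return freqs[1:], lag

dt, n, true_lag = 1.0, 4096, 50.0
f0 = 8 / (n * dt)                       # put the signal exactly on a bin
t = np.arange(n) * dt
soft = np.sin(2 * np.pi * f0 * t)
hard = np.sin(2 * np.pi * f0 * (t - true_lag))   # hard delayed by 50 s
freqs, lag = cross_spectrum_lag(soft, hard, dt)
print(lag[np.argmin(np.abs(freqs - f0))])        # close to +50
```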
We prove that the translation lengths of random isometries satisfy a central limit theorem if and only if the random walk has a finite second moment. While doing this, we recover the central limit theorem of Benoist and Quint for the displacement of a reference point and establish its converse. Also discussed are the corresponding laws of the iterated logarithm. Finally, we prove sublinear geodesic tracking by random walks with finite $(1/2)$-th moment and logarithmic tracking by random walks with finite exponential moment.
"} {"abstract": " A multi-agent optimization problem motivated by the management of energy systems is discussed. The associated cost function is separable and convex, although not necessarily strongly convex, and there exist edge-based coupling equality constraints. In this regard, we propose a distributed algorithm based on solving the dual of the augmented problem. Furthermore, we consider that the communication network might be time-varying and that the algorithm might be carried out asynchronously. The time-varying nature and the asynchronicity are modeled as random processes. Then, we show the convergence and the convergence rate of the proposed algorithm under the aforementioned conditions.
"} {"abstract": " Deploying sophisticated deep learning models on embedded devices with the purpose of solving real-world problems is a struggle using today's technology. Privacy and data limitations, network connection issues, and the need for fast model adaptation are some of the challenges that render today's approaches unfit for many applications on the edge and make real-time on-device training a necessity. Google is currently working on tackling these challenges by adding an experimental transfer learning API to their TensorFlow Lite machine learning library. In this paper, we show that although transfer learning is a good first step for on-device model training, it suffers from catastrophic forgetting when faced with more realistic scenarios. We present this issue by testing a simple transfer learning model on the CORe50 benchmark as well as by demonstrating its limitations directly on an Android application we developed. In addition, we expand the TensorFlow Lite library to include continual learning capabilities, by integrating a simple replay approach into the head of the current transfer learning model. We test our continual learning model on the CORe50 benchmark to show that it tackles catastrophic forgetting, and we demonstrate its ability to continually learn, even under non-ideal conditions, using the application we developed. Finally, we open-source the code of our Android application to enable developers to integrate continual learning into their own smartphone applications, as well as to facilitate further development of continual learning functionality into the TensorFlow Lite environment.
"} {"abstract": " This chapter presents an overview of actuator attacks that exploit zero dynamics, and countermeasures against them. First, the zero-dynamics attack is re-introduced based on a canonical representation called normal form. Then it is shown that the target dynamic system is at elevated risk if the associated zero dynamics is unstable. From there on, several questions are raised in series to determine when the target system is immune to an attack of this kind. The first question is: Is the target system secure from zero-dynamics attacks if it does not have any unstable zeros?
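The "simple replay approach" named in the on-device continual learning abstract above can be reduced to a small buffer that mixes old examples into every training step. The sketch below is an illustrative assumption of that idea (buffer size, reservoir sampling policy, and the `train_step` stub are all mine), not the authors' TensorFlow Lite implementation.

```python
import random

class ReplayBuffer:
    """Keeps a uniform sample of the stream via reservoir sampling."""
    def __init__(self, capacity=200):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

def train_step(model, batch):
    pass  # placeholder for one gradient update on `batch`

def continual_fit(model, stream, buffer, replay_k=8):
    for example in stream:
        buffer.add(example)
        # Mix the fresh example with replayed old ones to counter
        # catastrophic forgetting.
        train_step(model, [example] + buffer.sample(replay_k))
```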
An answer provided for this question is: No, the target system may still be at risk due to another attack surface emerging in the process of implementation. This is followed by a series of follow-up questions, and in the course of providing answers, variants of the classic zero-dynamics attack are presented, from which the vulnerability of the target system is explored in depth. At the end, countermeasures are proposed to render the attack ineffective. Because it is known that the zero dynamics in continuous-time systems cannot be modified by feedback, the main idea of the countermeasure is to relocate any unstable zero to a stable region at the stage of digital implementation, through modified digital samplers and holders. Adversaries can still attack actuators, but due to the relocated zeros, such attacks are of little use in damaging the target system.
"} {"abstract": " Guitar fretboards are designed based on the equation of the ideal string. That is, it neglects several factors, such as nonlinearities and the bending stiffness of the strings. Due to this fact, the intonation of guitars along the whole neck is not perfect, and guitars are correctly tuned only in an \emph{average} sense. There are commercially available fretboards that differ from the traditional design.\footnote{One example is the \cite{patent} by the Company True Temperament AB, where each fretboard is made using CNC processes.} As a final application of this work, we would like to redesign the fretboard layout considering the effects of bending stiffness. The main goal of this project is to analyze the differences between the solutions for vibrations of the ideal string and of a stiff string. These differences should lead to changes in the fret distribution for a guitar and, hopefully, improve the overall intonation of the instrument. We will start by analyzing the ideal string equation and, after a good understanding of this analytical solution, proceed with the more complex stiff-string equation. Topics like separation of variables, Fourier transforms, and perturbation analysis might prove useful during the course of this project.
"} {"abstract": " This investigation presents evidence of the relation between the dynamics of intense events in small-scale turbulence and the energy cascade. We use the generalised (H\"older) means to track the temporal evolution of intense events of the enstrophy and the dissipation in direct numerical simulations of isotropic turbulence. We show that these events are modulated by large-scale fluctuations, and that their evolution is consistent with a local multiplicative cascade, as hypothesised by a broad class of intermittency models of turbulence.
"} {"abstract": " A new implicit-explicit local differential transform method (IELDTM) is derived here for time integration of the nonlinear advection-diffusion processes represented by the (2+1)-dimensional Burgers equation. The IELDTM is adaptively constructed as a stability-preserving, high-order time integrator for the spatially discretized Burgers equation. For spatial discretization of the model equation, the Chebyshev spectral collocation method (ChCM) is utilized. A robust stability analysis and a global error analysis of the IELDTM are presented with respect to the direction parameter \theta. With the help of the global error analysis, adaptivity equations are derived to minimize the computational costs of the algorithms.
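As a concrete companion to the fretboard abstract above: the ideal-string fret rule is pure 12-tone equal temperament, while a stiff string is slightly sharp, so its frets must sit a little differently. The sketch below contrasts the two, using the common inharmonicity model $f_1 \propto (1/L)\sqrt{1 + B(L)}$ with $B(L) = \beta/L^2$; the scale length and the stiffness parameter $\beta$ are assumed toy values, and this is only a first illustration, not the project's redesign procedure.

```python
import math

SCALE = 650.0   # mm, classical guitar scale length (assumption)
BETA = 30.0     # mm^2, stiffness parameter in B(L) = BETA / L^2 (assumption)

def ideal_fret(n):
    """12-TET fret distance from the nut for an ideal (flexible) string."""
    return SCALE * (1.0 - 2.0 ** (-n / 12.0))

def pitch(length):
    """Relative fundamental of a stiff string: (1/L) * sqrt(1 + B(L))."""
    return (1.0 / length) * math.sqrt(1.0 + BETA / length ** 2)

def stiff_fret(n, tol=1e-9):
    """Find, by bisection, the speaking length whose stiff-string pitch is
    2^(n/12) above the open string; the fret sits at SCALE minus that."""
    target = 2.0 ** (n / 12.0) * pitch(SCALE)
    lo, hi = 1.0, SCALE              # pitch decreases with length
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pitch(mid) > target:      # mid is too short (too sharp)
            lo = mid
        else:
            hi = mid
    return SCALE - 0.5 * (lo + hi)

for n in (1, 5, 12):
    print(n, round(ideal_fret(n), 3), round(stiff_fret(n), 3))
```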
The produced method is shown to eliminate the accuracy disadvantage of the classical \theta-method and the stability disadvantages of the existing DTM-based methods. Two examples of the Burgers equation in one and two dimensions have been solved via the ChCM-IELDTM hybridization, and the produced results are compared with the literature. The present time integrator has been proven to produce more accurate numerical results than the MATLAB solvers ode45 and ode15s.
"} {"abstract": " Concatenation and equilibrium swelling of Olympic gels, which are composed of entangled cyclic polymers, are studied by Monte Carlo simulations. The average number of concatenated molecules per cyclic polymer, $f_n$, is found to depend on the degree of polymerization, $N$, and the polymer volume fraction at network preparation, ${\phi}_0$, as $f_n ~ {\phi}_0^{{\nu}/(3{\nu}-1)}N$ with scaling exponent ${\nu} = 0.588$. In contrast to chemically cross-linked polymer networks, we observe that Olympic gels made of longer cyclic chains exhibit a smaller equilibrium swelling degree, $Q ~ N^{-0.28} {\phi}_0^{-0.72}$, at the same polymer volume fraction ${\phi}_0$. This observation is explained by a desinterspersion process of overlapping non-concatenated rings upon swelling, which is tested directly by analyzing the change in overlap of the molecules upon swelling.
"} {"abstract": " Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information, similar to the mammalian neocortex. In contrast to compute-intensive deep neural networks that employ separate training and inference phases, TNNs are capable of extremely efficient online incremental/continual learning and are excellent candidates for building edge-native sensory processing units. This work proposes a microarchitecture framework for implementing TNNs using standard CMOS. Gate-level implementations of three key building blocks are presented: 1) multi-synapse neurons, 2) multi-neuron columns, and 3) unsupervised and supervised online learning algorithms based on Spike Timing Dependent Plasticity (STDP). The proposed microarchitecture is embodied in a set of characteristic scaling equations for assessing the gate count, area, delay and power for any TNN design. Post-synthesis results (in 45nm CMOS) for the proposed designs are presented, and their online incremental learning capability is demonstrated.
"} {"abstract": " We propose new methods for in-domain and cross-domain Named Entity Recognition (NER) on historical data for Dutch and French. For the cross-domain case, we address domain shift by integrating unsupervised in-domain data via contextualized string embeddings, and OCR errors by injecting synthetic OCR errors into the source domain, thereby addressing data-centric domain adaptation. We propose a general approach to imitate OCR errors in arbitrary input data. Our cross-domain as well as our in-domain results outperform several strong baselines and establish state-of-the-art results. We publish preprocessed versions of the French and Dutch Europeana NER corpora.
"} {"abstract": " COVID-19 has impacted nations differently based on their policy implementations. Effective policy requires taking into account public information and adaptability to new knowledge. Epidemiological models built to understand COVID-19 seldom provide the policymaker with the capability for adaptive pandemic control (APC).
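For background on the STDP-based learning blocks in the TNN abstract above, here is the classic pair-based exponential STDP rule in a few lines. The time constants and learning rates are generic textbook values, and the paper's gate-level unsupervised/supervised rules differ in detail; this only illustrates the principle.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post: potentiate
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)  # post before pre: depress

w = 0.5
for t_pre, t_post in [(0.0, 5.0), (30.0, 28.0)]:
    w = min(1.0, max(0.0, w + stdp_dw(t_pre, t_post)))  # clamp to [0, 1]
    print(round(w, 4))
```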
The core challenges to be overcome include (a) the inability to handle a high degree of non-homogeneity in different contributing features across the pandemic timeline, (b) the lack of an approach that enables adaptive incorporation of public health expert knowledge, and (c) the lack of transparent models that enable understanding of the decision-making process in suggesting policy. In this work, we take early steps to address these challenges using Knowledge Infused Policy Gradient (KIPG) methods. Prior work on knowledge infusion does not handle the soft and hard imposition of the varying forms of knowledge found in disease information and in guidelines that must be complied with. Furthermore, the models do not attend to non-homogeneity in feature counts, manifesting as partial observability in informing the policy. Additionally, interpretable structures are extracted post-learning instead of learning an interpretable model, as required for APC. To this end, we introduce a mathematical framework for KIPG methods that can (a) induce relevant feature counts over multi-relational features of the world, (b) handle latent non-homogeneous counts as hidden variables that are linear combinations of kernelized aggregates over the features, and (c) infuse knowledge as functional constraints in a principled manner. The study establishes a theory for imposing hard and soft constraints and simulates it through experiments. In comparison with knowledge-intensive baselines, we show quick, sample-efficient adaptation to new knowledge and interpretability in the learned policy, especially in a pandemic context.
"} {"abstract": " Bayesian optimization is a popular method for optimizing expensive black-box functions. The objective functions of hard real-world problems are oftentimes characterized by a fluctuating landscape of many local optima. Bayesian optimization risks over-exploiting such traps, leaving an insufficient query budget for exploring the global landscape. We introduce Coordinate Backoff Bayesian Optimization (CobBO) to alleviate those challenges. CobBO captures a smooth approximation of the global landscape by interpolating the values of queried points projected to randomly selected promising subspaces. A smaller query budget is thus required for the Gaussian process regressions applied over the lower-dimensional subspaces. This approach can be viewed as a variant of coordinate ascent, tailored for Bayesian optimization, using a stopping rule for backing off from a certain subspace and switching to another coordinate subset. Extensive evaluations show that CobBO finds solutions comparable to or better than other state-of-the-art methods for dimensions ranging from tens to hundreds, while reducing the trial complexity.
"} {"abstract": " Automatic software development has been a research hotspot in the field of software engineering (SE) in the past decade. In particular, deep learning (DL) has been applied and has achieved a lot of progress in various SE tasks. Among all applications, automatic code generation by machines as a general concept, including code completion and code synthesis, is a common expectation in the field of SE, as it may greatly reduce the development burden of software developers and improve the efficiency and quality of the software development process to a certain extent. Code completion is an important part of modern integrated development environments (IDEs).
Code completion technology effectively helps programmers complete class names, method names, keywords, etc., improving the efficiency of program development and reducing spelling errors in the coding process. Such tools use static analysis on the code and provide candidates for completion arranged in alphabetical order. Code synthesis is implemented from two aspects, one based on input-output samples and the other based on functionality descriptions. In this study, we introduce existing techniques for these two aspects and the corresponding DL techniques, and present some possible future research directions.
"} {"abstract": " We initiate the study of incentive-compatible forecasting competitions in which multiple forecasters make predictions about one or more events and compete for a single prize. We have two objectives: (1) to incentivize forecasters to report truthfully and (2) to award the prize to the most accurate forecaster. Proper scoring rules incentivize truthful reporting if all forecasters are paid according to their scores. However, incentives become distorted if only the best-scoring forecaster wins a prize, since forecasters can often increase their probability of having the highest score by reporting more extreme beliefs. In this paper, we introduce two novel forecasting competition mechanisms. Our first mechanism is incentive compatible and guaranteed to select the most accurate forecaster with probability higher than any other forecaster. Moreover, we show that in the standard single-event, two-forecaster setting and under mild technical conditions, no other incentive-compatible mechanism selects the most accurate forecaster with higher probability. Our second mechanism is incentive compatible when forecasters' beliefs are such that information about one event does not lead to belief updates on other events, and it selects the best forecaster with probability approaching 1 as the number of events grows. Our notion of incentive compatibility is more general than previous definitions of dominant strategy incentive compatibility in that it allows for reports to be correlated with the event outcomes. Moreover, our mechanisms are easy to implement and can be generalized to the related problems of outputting a ranking over forecasters and hiring a forecaster with high accuracy on future events.
"} {"abstract": " In this paper we consider nonautonomous optimal control problems of infinite-horizon type, whose control actions are given by $L^1$-functions. We verify that the value function is locally Lipschitz. The equivalence between dynamic programming inequalities and Hamilton-Jacobi-Bellman (HJB) inequalities for proximal sub (super) gradients is proven. Using this result, we show that the value function is a Dini solution of the HJB equation. We obtain a verification result for the class of Dini sub-solutions of the HJB equation and also prove a minimax property of the value function with respect to the sets of Dini semi-solutions of the HJB equation. We introduce the concept of viscosity solutions of the HJB equation in the infinite-horizon setting and prove the equivalence between this and the concept of Dini solutions. In the appendix we provide an existence theorem.
"} {"abstract": " The evolution towards Industry 4.0 is driving the need for innovative solutions in the area of network management, considering the complex, dynamic and heterogeneous nature of ICT supply chains.
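A numerical illustration of the claim in the forecasting-competition abstract above that proper scoring rules incentivize truthful reporting when forecasters are paid by score: under the quadratic (Brier) score, the expected payoff is maximized exactly at the true belief. The belief value 0.7 is an arbitrary example.

```python
import numpy as np

def expected_quadratic_score(report, p_true):
    # Score 1 - (outcome - report)^2, averaged over outcome ~ Bernoulli(p_true).
    return p_true * (1 - (1 - report) ** 2) + (1 - p_true) * (1 - report ** 2)

p_true = 0.7
grid = np.linspace(0, 1, 1001)
scores = [expected_quadratic_score(q, p_true) for q in grid]
print(grid[int(np.argmax(scores))])  # 0.7: the truthful report is optimal
```

The distortion the abstract warns about arises precisely when this per-forecaster payment is replaced by a single winner-take-all prize.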
To this end, Intent-Based Networking (IBN), which has already been shown to change how network management is carried out today, can be implemented as a solution to facilitate the management of large ICT supply chains. In this paper, we first present a comparison of the main architectural components of typical IBN systems and then study the key engineering requirements for integrating IBN with ICT supply chain network systems while considering AI methods. We also propose a general architecture design that enables intent translation of ICT supply chain specifications into lower-level policies, and finally show an example of how access control is performed in a modeled ICT supply chain system.
"} {"abstract": " The last milestone achievement for the roundoff-error-free solution of general mixed integer programs over the rational numbers was a hybrid-precision branch-and-bound algorithm published by Cook, Koch, Steffy, and Wolter in 2013. We describe a substantial revision and extension of this framework that integrates symbolic presolving, features an exact repair step for solutions from primal heuristics, employs a faster rational LP solver based on LP iterative refinement, and is able to produce independently verifiable certificates of optimality. We study the significantly improved performance and give insights into the computational behavior of the new algorithmic components. On the MIPLIB 2017 benchmark set, we observe an average speedup of 6.6x over the original framework and 2.8 times as many instances solved within a time limit of two hours.
"} {"abstract": " HIP 41378 f is a temperate $9.2\pm0.1 R_{\oplus}$ planet with a period of 542.08 days and an extremely low density of $0.09\pm0.02$ g cm$^{-3}$. It transits the bright star HIP 41378 (V=8.93), making it an exciting target for atmospheric characterization, including transmission spectroscopy. HIP 41378 was monitored photometrically between the dates of 2019 November 19 and November 28. We detected a transit of HIP 41378 f with NGTS, just the third transit ever detected for this planet, which confirms the orbital period. This is also the first ground-based detection of a transit of HIP 41378 f. Additional ground-based photometry was also obtained and used to constrain the time of the transit. The transit was measured to occur 1.50 hours earlier than predicted. We use an analytic transit timing variation (TTV) model to show that the observed TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using our TTV model, we predict the epochs of future transits of HIP 41378 f, with derived transit centres of T$_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (May 2021) and T$_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (Nov 2022).
"} {"abstract": " The canonical approach to video-and-language learning (e.g., video question answering) dictates that a neural model learn from offline-extracted dense video features from vision models and text features from language models. These feature extractors are trained independently and usually on tasks different from the target domains, rendering these fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overhead of dense video features, it is often difficult (or infeasible) to plug feature extractors directly into existing approaches for easy finetuning.
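The exact-MIP abstract above relies on LP iterative refinement. The sketch below shows the refinement idea on a plain linear system rather than an LP: solve approximately in floating point, compute the residual exactly in rational arithmetic, and re-solve for a correction. This is a simplified analogue under that substitution, not the framework's actual LP algorithm, which additionally rescales and handles bounds and inequalities.

```python
from fractions import Fraction
import numpy as np

def refine(A, b, rounds=3):
    """Iterative refinement for A x = b with exact rational residuals."""
    A_f = np.array([[float(x) for x in row] for row in A])
    x = [Fraction(0)] * len(b)
    for _ in range(rounds):
        # Exact residual r = b - A x, computed in rational arithmetic.
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        # Approximate correction from the floating-point solver.
        d = np.linalg.solve(A_f, np.array([float(ri) for ri in r]))
        x = [xj + Fraction(di).limit_denominator(10 ** 12)
             for xj, di in zip(x, d)]
    return x

A = [[Fraction(1), Fraction(1, 3)], [Fraction(1, 3), Fraction(1)]]
b = [Fraction(1), Fraction(2)]
print(refine(A, b))   # converges to the exact solution [3/8, 15/8]
```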
To provide a remedy to this dilemma, we propose a generic framework, ClipBERT, that enables affordable end-to-end learning for video-and-language tasks by employing sparse sampling, where only a single or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering on six datasets demonstrate that ClipBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. Videos in the datasets come from considerably different domains and lengths, ranging from 3-second generic-domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect what factors lead to this success. Our code is publicly available at https://github.com/jayleicn/ClipBERT
"} {"abstract": " Space-time visualizations of macroscopic or microscopic traffic variables are a qualitative tool used by traffic engineers to understand and analyze different aspects of road traffic dynamics. We present a deep learning method to learn the macroscopic traffic speed dynamics from these space-time visualizations, and demonstrate its application in the framework of traffic state estimation. Compared to existing estimation approaches, our approach allows a finer estimation resolution, eliminates the dependence on the initial conditions, and is agnostic to external factors such as traffic demand, road inhomogeneities and driving behaviors. Our model respects causality in traffic dynamics, which improves the robustness of estimation. We present the high-resolution traffic speed fields estimated for several freeway sections using the data obtained from the Next Generation Simulation Program (NGSIM) and German Highway (HighD) datasets. We further demonstrate the quality and utility of the estimation by inferring vehicle trajectories from the estimated speed fields, and discuss the benefits of deep neural network models in approximating the traffic dynamics.
"} {"abstract": " We consider the problem of collectively detecting multiple events, particularly in cross-sentence settings. The key to dealing with the problem is to encode semantic information and model event inter-dependency at the document level. In this paper, we reformulate it as a Seq2Seq task and propose a Multi-Layer Bidirectional Network (MLBiNet) to capture the document-level association of events and semantic information simultaneously. Specifically, a bidirectional decoder is first devised to model event inter-dependency within a sentence when decoding the event tag vector sequence. Secondly, an information aggregation module is employed to aggregate sentence-level semantic and event tag information. Finally, we stack multiple bidirectional decoders and feed cross-sentence information, forming a multi-layer bidirectional tagging architecture to iteratively propagate information across sentences. We show that our approach provides significant improvement in performance compared to the current state-of-the-art results.
"} {"abstract": " Detecting transparent objects in natural scenes is challenging due to the low contrast in texture, brightness and colors.
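The sparse-sampling recipe at the heart of the ClipBERT abstract above reduces to a small skeleton: sample a few short clips per video per training step and aggregate per-clip predictions. Clip length, clip count, and the mean-pooling aggregation below are illustrative choices of mine; the model itself is deliberately left as a stub.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_clips(num_frames, clip_len=8, n_clips=2):
    starts = rng.integers(0, num_frames - clip_len + 1, size=n_clips)
    return [np.arange(s, s + clip_len) for s in starts]

def predict(video, clip_model, clip_len=8, n_clips=2):
    # video: (T, H, W, C); clip_model maps one clip to class logits.
    clips = [video[idx] for idx in sample_clips(len(video), clip_len, n_clips)]
    return np.mean([clip_model(c) for c in clips], axis=0)

video = rng.random((180, 8, 8, 3))                  # stand-in for 180 frames
toy_model = lambda clip: clip.mean(axis=(0, 1, 2))  # stand-in "model"
print(predict(video, toy_model))
```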
Recent deep-learning-based works reveal that it is effective to leverage boundaries for transparent object detection (TOD). However, these methods usually encounter a boundary-related imbalance problem, leading to limited generalization capability. Specifically, boundaries in the background that share the same characteristics as the boundaries of transparent objects, but occur in much smaller numbers, usually hurt performance. To overcome the boundary-related imbalance problem, we propose a novel content-dependent data augmentation method termed FakeMix. Since collecting these troublesome background boundaries is hard without corresponding annotations, we generate them by appending the boundaries of transparent objects from other samples into the current image during training, which adjusts the data space and improves the generalization of the models. Further, we present AdaptiveASPP, an enhanced version of ASPP, which can capture multi-scale and cross-modality features dynamically. Extensive experiments demonstrate that our methods clearly outperform the state-of-the-art methods. We also show that our approach transfers well to related tasks in which the model encounters similar difficulties, such as mirror detection, glass detection, and camouflaged object detection. Code will be made publicly available.
"} {"abstract": " I review the meaning of General Relativity (GR), viewed as a dynamical field, rather than as geometry, as effected by the 1958-61 anti-geometrical work of ADM. This very brief non-technical summary is intended for historians.
"} {"abstract": " We establish an asymptotic formula for the number of lattice points in the sets \[ \mathbf S_{h_1, h_2, h_3}(\lambda) := \{x\in\mathbb Z_+^3:\lfloor h_1(x_1)\rfloor+\lfloor h_2(x_2)\rfloor+\lfloor h_3(x_3)\rfloor=\lambda\} \quad \text{with}\quad \lambda\in\mathbb Z_+; \] where the functions $h_1, h_2, h_3$ are constant multiples of regularly varying functions of the form $h(x):=x^c\ell_h(x)$, where the exponent $c>1$ (but close to $1$) and the function $\ell_h(x)$ is taken from a certain wide class of slowly varying functions. Taking $h_1(x)=h_2(x)=h_3(x)=x^c$ we will also derive an asymptotic formula for the number of lattice points in the sets \[ \mathbf S_{c}^3(\lambda) := \{x \in \mathbb Z^3 : \lfloor |x_1|^c \rfloor + \lfloor |x_2|^c \rfloor + \lfloor |x_3|^c \rfloor= \lambda \} \quad \text{with}\quad \lambda\in\mathbb Z_+; \] which can be thought of as a perturbation of the classical Waring problem in three variables.
 We will use the latter asymptotic formula to study, as the main results of this paper, norm and pointwise convergence of the ergodic averages \[ \frac{1}{\#\mathbf S_{c}^3(\lambda)}\sum_{n\in \mathbf S_{c}^3(\lambda)}f(T_1^{n_1}T_2^{n_2}T_3^{n_3}x) \quad \text{as}\quad \lambda\to\infty; \] where $T_1, T_2, T_3:X\to X$ are commuting invertible and measure-preserving transformations of a $\sigma$-finite measure space $(X, \nu)$, for any function $f\in L^p(X)$ with $p>\frac{11-4c}{11-7c}$. Finally, we will study the equidistribution problem corresponding to the spheres $\mathbf S_{c}^3(\lambda)$.
"} {"abstract": " We present a general series representation formula for the local solution of the Bernoulli equation with Caputo fractional derivatives.
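To make the perturbed-Waring spheres $\mathbf S_{c}^3(\lambda)$ from the lattice-point abstract above concrete, the brute-force census below enumerates the defining condition directly. It is feasible only for small $\lambda$; the abstract's asymptotic formula is the real content, and this merely instantiates the definition.

```python
from math import floor

def sphere_count(lam, c):
    """Count x in Z^3 with floor(|x1|^c)+floor(|x2|^c)+floor(|x3|^c)=lam."""
    # |x_i| <= (lam + 1)^(1/c) suffices, since each term is at most lam.
    m = int((lam + 1) ** (1.0 / c)) + 1
    count = 0
    for x1 in range(-m, m + 1):
        for x2 in range(-m, m + 1):
            for x3 in range(-m, m + 1):
                s = (floor(abs(x1) ** c) + floor(abs(x2) ** c)
                     + floor(abs(x3) ** c))
                count += s == lam
    return count

c = 1.1
for lam in range(4):
    print(lam, sphere_count(lam, c))
```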
We then focus on a generalization of the fractional logistic equation and we present some related numerical simulations.
"} {"abstract": " In high energy physics (HEP), jets are collections of correlated particles produced ubiquitously in particle collisions such as those at the CERN Large Hadron Collider (LHC). Machine learning (ML)-based generative models, such as generative adversarial networks (GANs), have the potential to significantly accelerate LHC jet simulations. However, despite jets having a natural representation as a set of particles in momentum-space, a.k.a. a particle cloud, there exist no generative models applied to such a dataset. In this work, we introduce a new particle cloud dataset (JetNet), and apply to it existing point cloud GANs. Results are evaluated using (1) 1-Wasserstein distances between high- and low-level feature distributions, (2) a newly developed Fr\'{e}chet ParticleNet Distance, and (3) the coverage and (4) minimum matching distance metrics. Existing GANs are found to be inadequate for physics applications, hence we develop a new message passing GAN (MPGAN), which outperforms existing point cloud GANs on virtually every metric and shows promise for use in HEP. We propose JetNet as a novel point-cloud-style dataset for the ML community to experiment with, and set MPGAN as a benchmark to improve upon for future generative models. Additionally, to facilitate research and improve accessibility and reproducibility in this area, we release the open-source JetNet Python package with interfaces for particle cloud datasets, implementations for evaluation and loss metrics, and more tools for ML in HEP development.
"} {"abstract": " One of the big challenges of current electronics is the design and implementation of hardware neural networks that perform fast and energy-efficient machine learning. Spintronics is a promising catalyst for this field with the capabilities of nanosecond operation and compatibility with existing microelectronics. Considering large-scale, viable neuromorphic systems however, variability of device properties is a serious concern. In this paper, we show an autonomously operating circuit that performs hardware-aware machine learning utilizing probabilistic neurons built with stochastic magnetic tunnel junctions. We show that in-situ learning of weights and biases in a Boltzmann machine can counter device-to-device variations and learn the probability distribution of meaningful operations such as a full adder. This scalable autonomously operating learning circuit using spintronics-based neurons could be especially of interest for standalone artificial-intelligence devices capable of fast and efficient learning at the edge.
"} {"abstract": " We revisit the problem of the estimation of the differential entropy $H(f)$ of a random vector $X$ in $R^d$ with density $f$, assuming that $H(f)$ exists and is finite. In this note, we study the consistency of the popular nearest neighbor estimate $H_n$ of Kozachenko and Leonenko. Without any smoothness condition we show that the estimate is consistent ($E\{|H_n - H(f)|\} \to 0$ as $n \to \infty$) if and only if $\mathbb{E} \{ \log ( \| X \| + 1 )\} < \infty$. Furthermore, if $X$ has compact support, then $H_n \to H(f)$ almost surely.
"} {"abstract": " Identifying and understanding quality phrases from context is a fundamental task in text mining.
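For readers unfamiliar with the estimator $H_n$ studied in the entropy abstract above, here is the Kozachenko-Leonenko nearest-neighbor estimate in its standard $k=1$ form, checked against the closed-form entropy of a Gaussian; the sample size is an arbitrary demo choice.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x):
    """Kozachenko-Leonenko estimate; x has shape (n, d)."""
    n, d = x.shape
    r, _ = cKDTree(x).query(x, k=2)    # r[:, 1]: distance to nearest neighbor
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log V_d
    return digamma(n) - digamma(1) + log_vd + d * np.mean(np.log(r[:, 1]))

rng = np.random.default_rng(1)
sample = rng.normal(size=(2000, 1))
print(kl_entropy(sample), 0.5 * np.log(2 * np.pi * np.e))  # both near 1.42
```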
The most challenging part of this task arguably lies in uncommon, emerging, and domain-specific phrases. The infrequent nature of these phrases significantly hurts the performance of phrase mining methods that rely on sufficient phrase occurrences in the input corpus. Context-aware tagging models, though not restricted by frequency, heavily rely on domain experts for either massive sentence-level gold labels or handcrafted gazetteers. In this work, we propose UCPhrase, a novel unsupervised context-aware quality phrase tagger. Specifically, we induce high-quality phrase spans as silver labels from consistently co-occurring word sequences within each document. Compared with typical context-agnostic distant supervision based on existing knowledge bases (KBs), our silver labels are rooted deeply in the input domain and context, thus having unique advantages in preserving contextual completeness and capturing emerging, out-of-KB phrases. Training a conventional neural tagger based on silver labels usually faces the risk of overfitting phrase surface names. Alternatively, we observe that the contextualized attention maps generated from a transformer-based neural language model effectively reveal the connections between words in a surface-agnostic way. Therefore, we pair such attention maps with the silver labels to train a lightweight span prediction model, which can be applied to new input to recognize (unseen) quality phrases regardless of their surface names or frequency. Thorough experiments on various tasks and datasets, including corpus-level phrase ranking, document-level keyphrase extraction, and sentence-level phrase tagging, demonstrate the superiority of our design over state-of-the-art pre-trained, unsupervised, and distantly supervised methods.
"} {"abstract": " We show that the nature of the topological fluctuations in $SU(3)$ gauge theory changes drastically at the finite temperature phase transition. Starting from temperatures right above the phase transition, topological fluctuations come in well-separated lumps of unit charge that form a non-interacting ideal gas. Our analysis is based on a novel method to count not only the net topological charge, but also separately the number of positively and negatively charged lumps in lattice configurations using the spectrum of the overlap Dirac operator. This enables us to determine the joint distribution of the number of positively and negatively charged topological objects, and we find this distribution to be consistent with that of an ideal gas of unit-charged topological objects.
"} {"abstract": " Measuring the acoustic characteristics of a space is often done by capturing its impulse response (IR), a representation of how a full-range stimulus sound excites it. This work generates an IR from a single image, which can then be applied to other signals using convolution, simulating the reverberant characteristics of the space shown in the image. Recording these IRs is both time-intensive and expensive, and often infeasible for inaccessible locations. We use an end-to-end neural network architecture to generate plausible audio impulse responses from single images of acoustic environments.
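The "applied to other signals using convolution" step in the image-to-IR abstract above is a one-liner once an IR is in hand. The sketch below applies a synthetic exponentially decaying noise tail, standing in for a generated or measured room response, to a dry tone; all signal parameters are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

sr = 16000
rng = np.random.default_rng(0)

t_ir = np.arange(int(0.5 * sr)) / sr
ir = rng.normal(size=t_ir.size) * np.exp(-t_ir / 0.15)  # fake 0.5 s room tail
ir /= np.max(np.abs(ir))

t = np.arange(sr) / sr
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-t / 0.3)    # plucked-like tone

wet = fftconvolve(dry, ir)[: dry.size]                  # "reverberated" signal
wet /= np.max(np.abs(wet))
print(dry.shape, wet.shape)
```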
We evaluate our method both by comparisons to ground truth data and by human expert evaluation. We demonstrate our approach by generating plausible impulse responses from diverse settings and formats, including well-known places, music halls, rooms in paintings, images from animations and computer games, synthetic environments generated from text, panoramic images, and video conference backgrounds.
"} {"abstract": " Lithium-sulfur (Li-S) batteries have become one of the most attractive alternatives to conventional Li-ion batteries due to their high theoretical specific energy density (2500 Wh/kg for Li-S vs. $\sim$250 Wh/kg for Li-ion). Accurate state estimation in Li-S batteries is urgently needed for safe and efficient operation. To the best of the authors' knowledge, electrochemical model-based observers have not been reported for Li-S batteries, primarily due to the complex dynamics that make state observer design a challenging problem. In this work, we demonstrate a state estimation scheme based on a zero-dimensional electrochemical model for Li-S batteries. The nonlinear differential-algebraic equation (DAE) model is incorporated into an extended Kalman filter. This observer design estimates both the differential and algebraic states that represent the dynamic behavior inside the cell, from voltage and current measurements only. The effectiveness of the proposed estimation algorithm is illustrated by numerical simulation results. Our study shows how an electrochemical model can be utilized for practical state estimation of Li-S batteries.
"} {"abstract": " We investigate the influence of general forms of disorder on the robustness of superconductivity in multiband materials. Specifically, we consider a general two-band system where the bands arise from an orbital degree of freedom of the electrons. Within the Born approximation, we show that the interplay of the spin-orbital structure of the normal-state Hamiltonian, disorder scattering, and superconducting pairing potentials can lead to significant deviations from the expected robustness of the superconductivity. This can be conveniently formulated in terms of the so-called "superconducting fitness". In particular, we verify a key role for unconventional $s$-wave states, permitted by the spin-orbital structure and which may pair electrons that are not time-reversed partners. To exemplify the role of Fermi surface topology and spin-orbital texture, we apply our formalism to the candidate topological superconductor Cu$_x$Bi$_2$Se$_3$, for which only a single band crosses the Fermi energy, as well as models of the iron pnictides, which possess multiple Fermi pockets.
"} {"abstract": " Classical models for multivariate or spatial extremes are mainly based upon the asymptotically justified max-stable or generalized Pareto processes. These models are suitable when asymptotic dependence is present, i.e., the joint tail decays at the same rate as the marginal tail. However, recent environmental data applications suggest that asymptotic independence is equally important and, unfortunately, existing spatial models in this setting that are both flexible and can be fitted efficiently are scarce. Here, we propose a new spatial copula model based on the generalized hyperbolic distribution, which is a specific normal mean-variance mixture and is very popular in financial modeling. The tail properties of this distribution have been studied in the literature, but with contradictory results.
It turns out that the proofs from the literature contain mistakes. We here give a corrected theoretical description of its tail dependence structure and then exploit the model to analyze a simulated dataset from the inverted Brown-Resnick process, hindcast significant wave height data in the North Sea, and wind gust data in the state of Oklahoma, USA. We demonstrate that our proposed model is flexible enough to capture the dependence structure not only in the tail but also in the bulk.
"} {"abstract": " Speech enhancement is an essential task of improving speech quality in noisy scenarios. Several state-of-the-art approaches have introduced visual information for speech enhancement, since the visual aspect of speech is essentially unaffected by the acoustic environment. This paper proposes a novel framework that involves visual information for speech enhancement, by incorporating a Generative Adversarial Network (GAN). In particular, the proposed visual speech enhancement GAN consists of two networks trained in an adversarial manner: i) a generator that adopts a multi-layer feature fusion convolution network to enhance input noisy speech, and ii) a discriminator that attempts to minimize the discrepancy between the distributions of the clean speech signal and the enhanced speech signal. Experimental results demonstrate the superior performance of the proposed model against several state-of-the-art methods.
"} {"abstract": " Recent evidence based on APOGEE data for stars within a few kpc of the Galactic centre suggests that dissolved globular clusters (GCs) contribute significantly to the stellar mass budget of the inner halo. In this paper we enquire into the origins of tracers of GC dissolution, N-rich stars, that are located in the inner 4 kpc of the Milky Way. From an analysis of the chemical compositions of these stars we establish that about 30% of the N-rich stars previously identified in the inner Galaxy may have an accreted origin. This result is confirmed by an analysis of the kinematic properties of our sample. The specific frequency of N-rich stars is quite large in the accreted population, exceeding that of its in situ counterparts by nearly an order of magnitude, in disagreement with predictions from numerical simulations. We hope that our numbers provide a useful test for models of GC formation and destruction.
"} {"abstract": " Hypergraphs offer an explicit formalism to describe multibody interactions in complex systems. To connect dynamics and function in systems with these higher-order interactions, network scientists have generalised random-walk models to hypergraphs and studied the multibody effects on flow-based centrality measures. But mapping the large-scale structure of those flows requires effective community detection methods. We derive unipartite, bipartite, and multilayer network representations of hypergraph flows and explore how they and the underlying random-walk model change the number, size, depth, and overlap of identified multilevel communities. These results help researchers choose the appropriate modelling approach when mapping flows on hypergraphs.
"} {"abstract": " A number of solar filaments/prominences demonstrate failed eruptions, when a filament at first suddenly starts to ascend and then decelerates and stops at some greater height in the corona. The mechanism of the termination of eruptions is not yet clear. One of the confining forces able to stop the eruption is the force of gravity.
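One common random-walk model behind the hypergraph flow representations described above: from a vertex, pick an incident hyperedge uniformly, then a vertex in that hyperedge uniformly. The sketch below builds the resulting transition matrix for an unweighted, lazy variant (the walker may stay put); other conventions exclude the current vertex or weight hyperedges.

```python
import numpy as np

def transition_matrix(n, hyperedges):
    """Vertex -> hyperedge -> vertex walk on an unweighted hypergraph."""
    P = np.zeros((n, n))
    degree = [sum(v in e for e in hyperedges) for v in range(n)]
    for e in hyperedges:
        for u in e:
            for v in e:
                P[u, v] += (1 / degree[u]) * (1 / len(e))
    return P

edges = [{0, 1, 2}, {1, 2, 3}, {3, 4}]
P = transition_matrix(5, edges)
pi = np.full(5, 0.2)
for _ in range(200):                  # power iteration to the stationary law
    pi = pi @ P
print(P.sum(axis=1), pi.round(3))     # rows sum to 1; pi is stationary
```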
Using a simple model of a partial current-carrying torus loop anchored to the photosphere, and photospheric magnetic field measurements as the boundary condition for the potential magnetic field extrapolation into the corona, we estimated the masses of 15 eruptive filaments. The values of the filament mass show a rather wide distribution in the range of $4\times10^{15}$ -- $270\times10^{16}$ g. Masses of most of the filaments, lying in the middle of the range, are in accordance with estimates made earlier on the basis of spectroscopic and white-light observations.
"} {"abstract": " Heavy-ion therapy, particularly using scanned (active) beam delivery, provides a precise and highly conformal dose distribution, with maximum dose deposition for each pencil beam at its endpoint (Bragg peak), and low entrance and exit dose. To take full advantage of this precision, robust range verification methods are required; these methods ensure that the Bragg peak is positioned correctly in the patient and the dose is delivered as prescribed. Relative range verification allows intra-fraction monitoring of Bragg peak spacing to ensure full coverage with each fraction, as well as inter-fraction monitoring to ensure all fractions are delivered consistently. To validate the proposed filtered Interaction Vertex Imaging method for relative range verification, a ${}^{16}$O beam was used to deliver 12 Bragg peak positions in a 40 mm poly-(methyl methacrylate) phantom. Secondary particles produced in the phantom were monitored using position-sensitive silicon detectors. Events recorded on these detectors, along with a measurement of the treatment beam axis, were used to reconstruct the sites of origin of these secondary particles in the phantom. The distal edge of the depth distribution of these reconstructed points was determined with logistic fits, and the translation in depth required to minimize the $\chi^2$ statistic between these fits was used to compute the range shift between any two Bragg peak positions. In all cases, the range shift was determined with sub-millimeter precision, to a standard deviation of the mean of 220(10) $\mu$m. This result validates filtered Interaction Vertex Imaging as a reliable relative range verification method, which should be capable of monitoring each energy step in each fraction of a scanned heavy-ion treatment plan.
"} {"abstract": " Type IIn supernovae (SNe IIn) are a relatively infrequently observed subclass of SNe whose photometric and spectroscopic properties are varied. A common thread among SNe IIn is the complex multiple-component hydrogen Balmer lines. Owing to the heterogeneity of SNe IIn, online databases contain some outdated, erroneous, or even contradictory classifications. SN IIn classification is further complicated by SN impostors and contamination from underlying HII regions. We have compiled a catalogue of systematically classified nearby (redshift z < 0.02) SNe IIn using the Open Supernova Catalogue (OSC). We present spectral classifications for 115 objects previously classified as SNe IIn. Our classification is based upon results obtained by fitting multiple Gaussians to the H-alpha profiles. We compare classifications reported by the OSC and Transient Name Server (TNS) along with the best-matched templates from SNID. We find that 28 objects have been misclassified as SNe IIn.
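A toy version of the range-shift analysis in the heavy-ion abstract above: fit a logistic distal edge to each of two depth profiles, then grid-search the translation that minimizes the $\chi^2$ between the fitted curves. The profiles below are synthetic stand-ins for reconstructed vertex depth distributions, and all shape parameters are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge(x, a, x0, w):
    """Logistic fall-off modelling a distal edge at depth x0."""
    return a / (1.0 + np.exp((x - x0) / w))

rng = np.random.default_rng(3)
depth = np.linspace(0, 40, 200)                       # mm
prof1 = edge(depth, 1.0, 25.0, 0.8) + 0.02 * rng.normal(size=depth.size)
prof2 = edge(depth, 1.0, 27.5, 0.8) + 0.02 * rng.normal(size=depth.size)

p1, _ = curve_fit(edge, depth, prof1, p0=(1.0, 20.0, 1.0))
p2, _ = curve_fit(edge, depth, prof2, p0=(1.0, 20.0, 1.0))

shifts = np.linspace(-5, 5, 1001)
chi2 = [np.sum((edge(depth - s, *p1) - edge(depth, *p2)) ** 2) for s in shifts]
print(shifts[int(np.argmin(chi2))])                   # close to the true 2.5 mm
```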
TNS and OSC can be unreliable; they disagree on the classifications of 51 of the objects and contain a number of erroneous classifications. Furthermore, OSC and TNS hold misclassifications for 34 and 12, respectively, of the transients we classify as SNe IIn. In total, we classify 87 SNe IIn. We highlight the importance of ensuring that online databases remain up to date when new or even contemporaneous data become available. Our work shows the great range of spectral properties and features that SNe IIn exhibit, which may be linked to multiple progenitor channels and environmental diversity. We set out a classification scheme for SNe IIn based on the H-alpha profile, which is not greatly affected by the inhomogeneity of SNe IIn.
"} {"abstract": " Isobaric $^{96}_{44}$Ru+$^{96}_{44}$Ru and $^{96}_{40}$Zr+$^{96}_{40}$Zr collisions at $\sqrt{s_{_{NN}}}=200$ GeV have been conducted at the Relativistic Heavy Ion Collider to circumvent the large flow-induced background in searching for the chiral magnetic effect (CME), predicted by the topological feature of quantum chromodynamics (QCD). Considering that the background in isobar collisions is approximately twice that in Au+Au collisions (due to the smaller multiplicity) and the CME signal is approximately half (due to the weaker magnetic field), we caution that the CME may not be detectable with the collected isobar data statistics, within $\sim$2$\sigma$ significance, if the axial charge per entropy density ($n_5/s$) and the QCD vacuum transition probability are system independent. This expectation is generally verified by the Anomalous-Viscous Fluid Dynamics (AVFD) model. While our estimate provides an approximate "experimental" baseline, theoretical uncertainties on the CME remain large.
"} {"abstract": " Complex objects usually carry multiple labels and can be represented by multiple modal representations, e.g., complex articles contain text and image information as well as multiple annotations. Previous methods assume that the homogeneous multi-modal data are consistent, while in real applications the raw data are disordered, e.g., an article consists of a variable number of inconsistent text and image instances. Therefore, Multi-modal Multi-instance Multi-label (M3) learning provides a framework for handling such tasks and has exhibited excellent performance. However, M3 learning faces two main challenges: 1) how to effectively utilize label correlation; 2) how to take advantage of multi-modal learning to process unlabeled instances. To solve these problems, we first propose a novel Multi-modal Multi-instance Multi-label Deep Network (M3DN), which considers M3 learning in an end-to-end multi-modal deep network and utilizes a consistency principle among the bag-level predictions of the different modalities. Based on the M3DN, we learn the latent ground label metric with optimal transport. Moreover, we introduce extrinsic unlabeled multi-modal multi-instance data, and propose the M3DNS, which considers an instance-level auto-encoder for each single modality and a modified bag-level optimal transport to strengthen the consistency among modalities. Thereby M3DNS can better predict labels and exploit label correlation simultaneously.
Experiments on benchmark datasets and the real-world WKG Game-Hub dataset validate the effectiveness of the proposed methods.
"} {"abstract": " Bunch splitting is an RF manipulation method for changing the bunch structure, bunch numbers and bunch intensity in the high-intensity synchrotrons that serve as the injector for a particle collider. An efficient way to realize bunch splitting is to use a combination of different harmonic RF systems, such as the two-fold bunch splitting of a bunch with a combination of fundamental-harmonic and doubled-harmonic RF systems. The two-fold bunch splitting and three-fold bunch splitting methods have been experimentally verified and successfully applied to the LHC/PS. In this paper, a generalized multi-fold bunch splitting method is given. The five-fold bunch splitting method using specially designed multi-harmonic RF systems was studied and tentatively applied to the medium-stage synchrotron (MSS), the third accelerator of the injector chain of the Super Proton-Proton Collider (SPPC), to mitigate the pileup effects and collective instabilities of a single bunch in the SPPC. The results show that the five-fold bunch splitting is feasible and that both the bunch population distribution and the longitudinal emittance growth after the splitting are acceptable, e.g., a few percent in the population deviation and less than 10% in the total emittance growth.
"} {"abstract": " We construct a random unitary Gaussian circuit for continuous-variable (CV) systems subject to Gaussian measurements. We show that when the measurement rate is nonzero, the steady-state entanglement entropy saturates to an area-law scaling. This is different from a many-body qubit system, where a generic entanglement transition is widely expected. Due to the unbounded local Hilbert space, the time scale to destroy entanglement is always much shorter than the one to build it, while a balance could be achieved for a finite local Hilbert space. By the same reasoning, the absence of a transition should also hold for other non-unitary Gaussian CV dynamics.
"} {"abstract": " Here we report a record thermoelectric power factor of up to 160 $\mu$W m$^{-1}$ K$^{-2}$ for the conjugated polymer poly(3-hexylthiophene) (P3HT). This result is achieved through the combination of high-temperature rubbing of thin films together with the use of a large molybdenum dithiolene p-dopant with a high electron affinity. Comparison of the UV-vis-NIR spectra of the chemically doped samples to electrochemically oxidized material reveals an oxidation level of 10%, i.e., one polaron for every 10 repeat units. The high power factor arises due to an increase in the charge-carrier mobility and hence the electrical conductivity along the rubbing direction. We conclude that P3HT, with its facile synthesis and outstanding processability, should not be ruled out as a potential thermoelectric material.
"} {"abstract": " The Python package ComCH is a lightweight specialized computer algebra system that provides models for well-known objects, the surjection and Barratt-Eccles operads, parameterizing the product structure of algebras that are commutative in a derived sense.
The primary examples of such algebras treated by ComCH are the cochain complexes of spaces, for which it provides effective constructions of Steenrod cohomology operations at all primes.
"} {"abstract": " By probing the population of binary black hole (BBH) mergers detected by LIGO-Virgo, we can infer properties about the underlying black hole formation channels. A mechanism known as pair-instability (PI) supernova is expected to prevent the formation of black holes from stellar collapse with mass greater than $\sim 40-65\,M_\odot$ and less than $\sim 120\,M_\odot$. Any BBH merger detected by LIGO-Virgo with a component black hole in this gap, known as the PI mass gap, likely originated from an alternative formation channel. Here, we firmly establish GW190521 as an outlier to the stellar-mass BBH population if the PI mass gap begins at or below $65\, M_{\odot}$. In addition, for a PI lower boundary of $40-50\, M_{\odot}$, we find it unlikely that the remaining distribution of detected BBH events, excluding GW190521, is consistent with the stellar-mass population.
"} {"abstract": " In this paper, we propose Zero Aware Configurable Data Encoding by Skipping Transfer (ZAC-DEST), a data encoding scheme to reduce the energy consumption of DRAM channels, specifically targeted towards approximate computing and error-resilient applications. ZAC-DEST exploits the similarity between recent data transfers across channels and information about the error resilience behavior of applications to reduce on-die termination and switching energy by reducing the number of 1's transmitted over the channels. ZAC-DEST also provides a number of knobs for trading off the application's accuracy for energy savings, and vice versa, and can be applied to both training and inference. We apply ZAC-DEST to five machine learning applications. On average, across all applications and configurations, we observed a reduction of $40$% in termination energy and $37$% in switching energy as compared to the state-of-the-art data encoding technique BD-Coder, with an average output quality loss of $10$%. We show that if both training and testing are done assuming the presence of ZAC-DEST, the output quality of the applications can be improved up to 9 times as compared to when ZAC-DEST is only applied during testing, leading to energy savings during training and inference with increased output quality.
"} {"abstract": " We study two well-known $SU(N)$ chiral gauge theories with fermions in the symmetric, anti-symmetric and fundamental representations. We give a detailed description of the global symmetry, including various discrete quotients. Recent work argues that these theories exhibit a subtle mod 2 anomaly, ruling out certain phases in which the theories confine without breaking their global symmetry, leaving a gapless composite fermion in the infra-red. We point out that no such anomaly exists. We further exhibit an explicit path to the gapless fermion phase, showing that there is no kinematic obstruction to realising these phases.
"} {"abstract": " The nontrivial topology of spin systems such as skyrmions in real space can promote complex electronic states. Here, we provide a general viewpoint on the emergence of topological electronic states in spin systems, based on the methods of noncommutative K-theory.
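The ZAC-DEST abstract above does not spell out its encoding, so for intuition here is the classic zero-aware bus-invert trick it is related in spirit to: per word, transmit the bitwise complement (plus a one-bit flag) whenever that lowers the number of 1's driven onto the channel. This is plain bus-invert coding, shown for illustration only; ZAC-DEST itself additionally exploits inter-transfer similarity and application error resilience.

```python
WIDTH = 8

def encode(word):
    """Return (encoded word, invert flag) minimizing transmitted 1's."""
    ones = bin(word).count("1")
    if ones > WIDTH - ones:                  # complement has fewer 1's
        return (~word) & ((1 << WIDTH) - 1), 1
    return word, 0

def decode(word, flag):
    return (~word) & ((1 << WIDTH) - 1) if flag else word

for w in (0b11110111, 0b00010000):
    enc, flag = encode(w)
    assert decode(enc, flag) == w            # encoding is lossless
    print(f"{w:08b} -> {enc:08b} flag={flag}")
```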
By realizing that the structure of the observable\nalgebra of spin textures is determined by the algebraic properties of the\nnoncommutative hypertorus, we arrive at a unified understanding of topological\nelectronic states which we predict to arise in various noncollinear setups. The\npower of our approach lies in an ability to categorize emergent topological\nstates algebraically without referring to smooth real- or reciprocal-space\nquantities. This opens a way towards an educated design of topological phases\nin aperiodic, disordered, or non-smooth textures of spins and charges\ncontaining topological defects.\n"} {"abstract": " Pairwise alignment of DNA sequencing data is a ubiquitous task in\nbioinformatics and typically represents a heavy computational burden. A\nstandard approach to speed up this task is to compute "sketches" of the DNA\nreads (typically via hashing-based techniques) that allow the efficient\ncomputation of pairwise alignment scores. We propose a rate-distortion\nframework to study the problem of computing sketches that achieve the optimal\ntradeoff between sketch size and alignment estimation distortion. We consider\nthe simple setting of i.i.d. error-free sources of length $n$ and introduce a\nnew sketching algorithm called "locational hashing." While standard approaches\nin the literature based on min-hashes require $B = (1/D) \cdot O\left( \log n\n\right)$ bits to achieve a distortion $D$, our proposed approach only requires\n$B = \log^2(1/D) \cdot O(1)$ bits (a standard min-hash construction, for\ncontrast, is illustrated below). This can lead to significant computational\nsavings in pairwise alignment estimation.\n"} {"abstract": " In this article we propose a novel method to estimate the frequency\ndistribution of linguistic variables while controlling for statistical\nnon-independence due to shared ancestry. Unlike previous approaches, our\ntechnique uses all available data, from language families large and small as\nwell as from isolates, while controlling for different degrees of relatedness\non a continuous scale estimated from the data. Our approach involves three\nsteps: First, distributions of phylogenies are inferred from lexical data.\nSecond, these phylogenies are used as part of a statistical model to\nstatistically estimate transition rates between parameter states. Finally, the\nlong-term equilibrium of the resulting Markov process is computed. As a case\nstudy, we investigate a series of potential word-order correlations across the\nlanguages of the world.\n"} {"abstract": " In a previous work by the author it was shown that every finite dimensional\nalgebraic structure over an algebraically closed field of characteristic zero K\ngives rise to a character $K[X]_{aug}\to K$, where $K[X]_{aug}$ is a commutative\nHopf algebra that encodes scalar invariants of structures. This enabled us to\nthink of some characters $K[X]_{aug}\to K$ as algebraic structures with closed\norbit. In this paper we study structures in general symmetric monoidal\ncategories, and not only in $Vec_K$. We show that every character $\chi :\nK[X]_{aug}\to K$ arises from such a structure, by constructing a category\n$C_{\chi}$ that is analogous to the universal construction from TQFT. We then\ngive necessary and sufficient conditions for a given character to arise from a\nstructure in an abelian category with finite dimensional hom-spaces. We call\nsuch characters good characters. We show that if $\chi$ is good then $C_{\chi}$\nis abelian and semisimple, and that the set of good characters forms a\nK-algebra. 
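As context for the min-hash baseline cited in the sketching abstract above, the following Python sketch estimates k-mer Jaccard similarity between two reads from fixed-size fingerprints. It illustrates only the standard approach that locational hashing improves upon (the paper's own algorithm is not reproduced here); the values of k and the number of hash functions are arbitrary.

```python
import hashlib

def minhash_sketch(seq: str, k: int = 8, num_hashes: int = 64) -> list[int]:
    """Classic k-mer min-hash: for each of num_hashes salted hash
    functions, keep the minimum hash value over all k-mers of seq."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    sketch = []
    for salt in range(num_hashes):
        sketch.append(min(
            int.from_bytes(hashlib.blake2b(f"{salt}:{km}".encode(),
                                           digest_size=8).digest(), "big")
            for km in kmers))
    return sketch

def jaccard_estimate(s1: list[int], s2: list[int]) -> float:
    # The fraction of matching minima estimates the k-mer Jaccard index.
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

a = "ACGTACGTGGTTAACCGGTTACGT" * 4
b = a[:60] + "TTTT" + a[64:]                 # a slightly mutated copy
print(jaccard_estimate(minhash_sketch(a), minhash_sketch(b)))
```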
This gives us a way to interpolate algebraic structures, and also\nsymmetric monoidal categories, generalizing Deligne's categories\n$Rep(S_t)$, $Rep(GL_t(K))$, $Rep(O_t)$, and also some of the symmetric monoidal\ncategories introduced by Knop. We also explain how one can recover the recent\nconstruction of 2-dimensional TQFT of Khovanov, Ostrik, and Kononov, by the\nmethods presented here. We give new examples of interpolations of the\ncategories $Rep(Aut_{O}(M))$ where $O$ is a discrete valuation ring with a\nfinite residue field, and $M$ is a finite module over it. We also generalize the\nconstruction of wreath products with $S_t$, which was introduced by Knop.\n"} {"abstract": " We explore the connections between Green's functions for certain differential\nequations, covariance functions for Gaussian processes, and the smoothing\nsplines problem. Conventionally, the smoothing spline problem is considered in\na setting of reproducing kernel Hilbert spaces, but here we present a more\ndirect approach. With this approach, some choices that are implicit in the\nreproducing kernel Hilbert space setting stand out, one example being the choice of\nboundary conditions and more elaborate shape restrictions.\n The paper first explores the Laplace operator and the Poisson equation and\nstudies the corresponding Green's functions under various boundary conditions\nand constraints. Explicit functional forms are derived in a range of examples.\nThese examples include several novel forms of the Green's function that, to the\nauthor's knowledge, have not previously been presented. Next we present a\nsmoothing spline problem where we penalize the integrated squared derivative of\nthe function to be estimated. We then show how the solution can be explicitly\ncomputed using the Green's function for the Laplace operator. In the last part\nof the paper, we explore the connection between Gaussian processes and\ndifferential equations, and show how the Laplace operator is related to\nBrownian processes and how processes that arise due to boundary conditions and\nshape constraints can be viewed as conditional Gaussian processes. The\npresented connection between Green's functions for the Laplace operator and\ncovariance functions for Brownian processes allows us to introduce several\nnovel Brownian processes with specific behaviors. Finally, we consider the\nconnection between Gaussian process priors and smoothing splines.\n"} {"abstract": " In this paper, we study Toeplitz algebras generated by a certain class of\nToeplitz operators on the $p$-Fock space and the $p$-Bergman space with\n$1 < p < \infty$.\n"} {"abstract": " ... where $\epsilon > 0$ is any\nsmall constant. This improves upon the previous $O(N)$-time $O(\log \log\nN)$-energy algorithm by Chang et al. [STOC 2017]. We provide lower bounds to\nshow that the time-energy trade-off of our algorithm is near-optimal.\n $\textbf{Dense instances:}$ For the dense instances where the number of\ndevices is $n = \Theta(N)$, we design a deterministic leader election algorithm\nusing only $O(1)$ energy. This improves upon the $O(\log^* N)$-energy algorithm\nby Jurdzi\'{n}ski et al. [PODC 2002] and the $O(\alpha(N))$-energy algorithm by\nChang et al. [STOC 2017]. 
More specifically, we show that the optimal\ndeterministic energy complexity of leader election is\n$\Theta\left(\max\left\{1, \log \frac{N}{n}\right\}\right)$ if the devices\ncannot simultaneously transmit and listen, and it is $\Theta\left(\max\left\{1,\n\log \log \frac{N}{n}\right\}\right)$ if they can.\n"} {"abstract": " The carbon pump of the world's ocean plays a vital role in the biosphere and\nclimate of the earth, urging improved understanding of the functions and\ninfluences of the ocean for climate change analyses. State-of-the-art\ntechniques are required to develop models that can capture the complexity of\nocean currents and temperature flows. This work explores the benefits of using\nphysics-informed neural networks (PINNs) for solving partial differential\nequations related to ocean modeling, such as the Burgers, wave, and\nadvection-diffusion equations. We explore the trade-offs of using data vs.\nphysical models in PINNs for solving partial differential equations. PINNs\naccount for the deviation from physical laws in order to improve learning and\ngeneralization. We observed how the relative weight between the data and\nphysical model in the loss function influences training results, where small\ndata sets benefit more from the added physics information.\n"} {"abstract": " We study (2,2s+1) RSOS lattice models deformed by the current-current\noperator. Solving the deformed Bethe ansatz equations for the model in the\nregime III we find explicit expressions for the ground state energy as well as\nfor the energy, the momentum and the scattering matrix of the breather-like\nexcitations. In the scaling limit we get an additional confirmation to the\nproposal that the bi-local deformations are a lattice analogue of the $T\bar{T}$\nperturbations of integrable massive QFTs proposed by Smirnov and Zamolodchikov.\n"} {"abstract": " Multi-step manipulation tasks in unstructured environments are extremely\nchallenging for a robot to learn. Such tasks interlace high-level reasoning\nthat consists of the expected states that can be attained to achieve an overall\ntask and low-level reasoning that decides what actions will yield these states.\nWe propose a model-free deep reinforcement learning method to learn multi-step\nmanipulation tasks. We introduce a Robotic Manipulation Network (RoManNet),\nwhich is a vision-based model architecture, to learn the action-value functions\nand predict manipulation action candidates. We define a Task Progress based\nGaussian (TPG) reward function that computes the reward based on actions that\nlead to successful motion primitives and progress towards the overall task\ngoal. To balance the ratio of exploration/exploitation, we introduce a Loss\nAdjusted Exploration (LAE) policy that determines actions from the action\ncandidates according to the Boltzmann distribution of loss estimates (sketched\nbelow). We demonstrate the effectiveness of our approach by training RoManNet to learn\nseveral challenging multi-step robotic manipulation tasks in both simulation\nand the real world. Experimental results show that our method outperforms the\nexisting methods and achieves state-of-the-art performance in terms of success\nrate and action efficiency. The ablation studies show that TPG and LAE are\nespecially beneficial for tasks like multiple block stacking. Code is available\nat: https://github.com/skumra/romannet\n"} {"abstract": " This paper deals with the identification of machines in a smart city\nenvironment. 
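Picking up the forward reference to the Loss Adjusted Exploration policy above, a minimal Boltzmann selection over loss estimates might look as follows; the temperature value and the convention that higher-loss candidates are sampled more often are illustrative assumptions, not RoManNet's exact LAE rule.

```python
import numpy as np

def boltzmann_select(loss_estimates: np.ndarray, temperature: float,
                     rng: np.random.Generator) -> int:
    """Sample an action index with probability proportional to
    exp(loss / T), so poorly understood (high-loss) candidates are
    explored more often; lowering T makes the choice greedier."""
    logits = loss_estimates / temperature
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(1)
losses = np.array([0.2, 1.5, 0.7])            # per-candidate loss estimates
picks = [boltzmann_select(losses, 0.5, rng) for _ in range(1000)]
print([picks.count(i) for i in range(3)])     # candidate 1 dominates
```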
The concept of machine biometrics is proposed in this work for the\nfirst time, as a way to authenticate machine identities interacting with humans\nin everyday life. This notion is motivated by an era in which autonomous\nvehicles, social robots, etc., are considered active members of contemporary\nsocieties. In this context, the case of car identification from the engine\nbehavioral biometrics is examined. For this purpose, 22 sound features were\nextracted and their discrimination capabilities were tested in combination with\n9 different machine learning classifiers, towards identifying 5 car\nmanufacturers. The experimental results revealed the ability of the proposed\nbiometrics to identify cars with high accuracy up to 98% for the case of the\nMultilayer Perceptron (MLP) neural network model.\n"} {"abstract": " Multiparty computation is gaining importance because its primary objective\nis to replace trusted third parties in distributed computation. This work\npresents two multiparty shuffling protocols where the parties, each possessing a\nprivate input, agree on a random permutation while keeping the permutation\nsecret. The proposed shuffling protocols are based on permutation networks and\nare therefore data-oblivious. The first proposal, $n\text{-}permute$, permutes\n$n$ inputs in all $n!$ possible ways. The $n$-permute network consists of\n$2\log{n}-1$ layers, and in each layer there are $n/2$ gates. Our second\nprotocol is $n_{\pi}$-permute shuffling, which defines a permutation set\n$\Pi=\{\pi_1,\dots,\pi_N\}$ where $|\Pi| < n!$, and the resultant shuffling is\na random permutation $\pi_i \in \Pi$. The $n_{\pi}$-permute network contains\nfewer layers compared to the $n$-permute network. Let $n=n_1 n_2$; then the\n$n_{\pi}$-permute network has $2\log{n_1}-1+\log{n_2}$ layers.\nThe proposed shuffling protocols are unconditionally secure against a malicious\nadversary who can corrupt at most $t$ parties.\n"} {"abstract": " ... massive ($\log M_*/M_\odot > 10.5$) galaxies that span a wide range in morphology, star formation activity\nand environment, and therefore is representative of the massive galaxy\npopulation at $z\sim0.8$. We find that quiescent and star-forming galaxies\noccupy the parameter space of the $g$-band FP differently and thus have\ndifferent distributions in the dynamical mass-to-light ratio ($M_{\rm\ndyn}/L_g$), largely owing to differences in the stellar age and recent star\nformation history, and, to a lesser extent, the effects of dust attenuation. In\ncontrast, we show that both star-forming and quiescent galaxies lie on the same\nmass FP at $z\sim 0.8$, with a comparable level of intrinsic scatter about the\nplane. We examine the variation in $M_{\rm dyn}/M_*$ through the thickness of\nthe mass FP, finding no significant residual correlations with stellar\npopulation properties, S\'ersic index, or galaxy overdensity. Our results\nsuggest that, at fixed size and velocity dispersion, the variations in $M_{\rm\ndyn}/L_g$ of massive galaxies reflect an approximately equal contribution of\nvariations in $M_*/L_g$, and variations in the dark matter fraction or initial\nmass function.\n"} {"abstract": " Series of distributions indexed by equally spaced time points are ubiquitous\nin applications and their analysis constitutes one of the challenges of the\nemerging field of distributional data analysis. To quantify such distributional\ntime series, we propose a class of intrinsic autoregressive models that operate\nin the space of optimal transport maps. 
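Before the abstract continues, a toy one-dimensional illustration of the objects just introduced: in 1D the optimal transport map between two samples is the monotone (quantile) rearrangement, and an AR(1)-style step moves along the Wasserstein geodesic between the map and the identity. The Gaussian samples and the coefficient beta below are invented for illustration; this is not the authors' full estimator.

```python
import numpy as np

def ot_map_1d(source: np.ndarray, target: np.ndarray):
    """1D optimal transport map: match sorted samples (monotone
    rearrangement), returned as an interpolating function T with
    T pushing the source distribution onto the target."""
    qs, qt = np.sort(source), np.sort(target)
    return lambda x: np.interp(x, qs, qt)

rng = np.random.default_rng(2)
barycenter = rng.normal(0.0, 1.0, 5000)       # reference distribution
current = rng.normal(0.8, 1.3, 5000)          # distribution at time t
T = ot_map_1d(barycenter, current)

beta = 0.6                                     # AR coefficient on maps
x = np.linspace(-3, 3, 7)
T_next = x + beta * (T(x) - x)                 # geodesic step between T and id
print(np.round(T(x), 2), np.round(T_next, 2))
```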
The autoregressive transport models\nthat we introduce here are based on regressing optimal transport maps on each\nother, where predictors can be transport maps from an overall barycenter to a\ncurrent distribution or transport maps between past consecutive distributions\nof the distributional time series. Autoregressive transport models and\nassociated distributional regression models specify the link between predictor\nand response transport maps by moving along geodesics in Wasserstein space.\nThese models emerge as natural extensions of the classical autoregressive\nmodels in Euclidean space. Unique stationary solutions of autoregressive\ntransport models are shown to exist under a geometric moment contraction\ncondition of Wu and Shao (2004), using properties of iterated random functions.\nWe also discuss an extension to a varying coefficient model for first-order\nautoregressive transport models. In addition to simulations, the proposed\nmodels are illustrated with distributional time series of house prices across\nU.S. counties and of stock returns across the S&P 500 stock index.\n"} {"abstract": " Pedestrian trajectory prediction for surveillance video is one of the\nimportant research topics in the field of computer vision and a key technology\nof intelligent surveillance systems. Social relationships among pedestrians are a\nkey factor influencing pedestrian walking patterns but were mostly ignored in\nthe literature. Pedestrians with different social relationships play different\nroles in the motion decision of the target pedestrian. Motivated by this idea, we\npropose a Social Relationship Attention LSTM (SRA-LSTM) model to predict future\ntrajectories. We design a social relationship encoder to obtain the\nrepresentation of their social relationship through the relative position\nbetween each pair of pedestrians. Afterwards, the social relationship feature\nand latent movements are adopted to acquire the social relationship attention\nof this pair of pedestrians. Social interaction modeling is achieved by\nutilizing social relationship attention to aggregate movement information from\nneighbor pedestrians. In experiments on two public pedestrian walking video\ndatasets (ETH and UCY), our model achieves superior performance compared\nwith state-of-the-art methods. Comparative experiments with other attention\nmethods also demonstrate the effectiveness of social relationship attention.\n"} {"abstract": " Adversarial training tends to result in models that are less accurate on\nnatural (unperturbed) examples compared to standard models. This can be\nattributed to either an algorithmic shortcoming or a fundamental property of\nthe training data distribution, which admits different solutions for optimal\nstandard and adversarial classifiers. In this work, we focus on the latter case\nunder a binary Gaussian mixture classification problem. Unlike earlier work, we\naim to derive the natural accuracy gap between the optimal Bayes and\nadversarial classifiers, and study the effect of different distributional\nparameters, namely separation between class centroids, class proportions, and\nthe covariance matrix, on the derived gap. We show that under certain\nconditions, the natural error of the optimal adversarial classifier, as well as\nthe gap, are locally minimized when classes are balanced, contradicting the\nperformance of the Bayes classifier where perfect balance induces the worst\naccuracy. 
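A small Monte Carlo illustration of the setting just described: for a symmetric binary Gaussian mixture and a linear rule sign(w.x), the worst-case l_inf attack of budget eps has a closed form, so natural and adversarial errors can be estimated directly. The dimension, separation and budget are arbitrary, and this compares one fixed classifier with and without attack rather than deriving the paper's gap between optimal classifiers.

```python
import numpy as np

# y ~ +/-1 equally likely, x ~ N(y*mu, I). For sign(w.x), the worst-case
# l_inf attack is x - eps*y*sign(w), so the attacked margin drops by
# exactly eps*||w||_1.
rng = np.random.default_rng(3)
d, n, eps = 10, 200_000, 0.05
mu = np.full(d, 0.3)
y = rng.choice([-1.0, 1.0], size=n)
x = y[:, None] * mu + rng.normal(size=(n, d))

w = mu                                   # Bayes direction for balanced classes
margin = y * (x @ w)
nat_err = np.mean(margin < 0)            # natural error of sign(w.x)
adv_err = np.mean(margin < eps * np.abs(w).sum())
print(f"natural={nat_err:.4f}, adversarial={adv_err:.4f}, "
      f"gap={adv_err - nat_err:.4f}")
```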
Moreover, we show that with an $\ell_\infty$ bounded perturbation and\nan adversarial budget of $\epsilon$, this gap is $\Theta(\epsilon^2)$ for the\nworst-case parameters, which for suitably small $\epsilon$ indicates the\ntheoretical possibility of achieving robust classifiers with near-perfect\naccuracy, which is rarely reflected in practical algorithms.\n"} {"abstract": " Differential privacy (DP) provides a robust model to achieve privacy\nguarantees for released information. We examine the protection potency of\nsanitized multi-dimensional frequency distributions via DP randomization\nmechanisms against homogeneity attack (HA). HA allows adversaries to obtain the\nexact values on sensitive attributes for their targets without having to\nidentify them from the released data. We propose measures for disclosure risk\nfrom HA and derive closed-form relationships between the privacy loss\nparameters in DP and the disclosure risk from HA. The availability of the\nclosed-form relationships assists in understanding the abstract concepts of DP and\nprivacy loss parameters by putting them in the context of a concrete privacy\nattack and offers a perspective for choosing privacy loss parameters when\nemploying DP mechanisms in information sanitization and release in practice. We\napply the closed-form mathematical relationships in real-life datasets to\ndemonstrate the assessment of disclosure risk due to HA on differentially\nprivate sanitized frequency distributions at various privacy loss parameters.\n"} {"abstract": " This review presents a concise, yet comprehensive discussion on the evolution\nof theoretical methods employed to determine the ground and excited states of\nmolecules in weak and strong magnetic fields. The weak-field cases have been\nstudied previously in the context of NMR, where the shielding tensor was\ndetermined by correcting the Diamagnetic Shielding operator up to the second\norder. However, the magnetic fields due to neutron stars are extremely high\nand cannot be treated perturbatively. Thus, in the interest of the\nastrophysical and astrochemical community, this review aims to elaborate on the\ncomputational advancements in quantum mechanics from Hartree-Fock (HF) to\nDensity Functional Theory (DFT), in the context of molecules in a high (and\nultrahigh) magnetic field. It is found that the mean-field approximation of\nelectron-electron correlation, as in the case of Hartree-Fock Theory, yields\ninaccurate results. On the contrary, CCSD and DFT are found to overcome these\nchallenges. However, treating electron-electron correlations in DFT can be\nchallenging for heavier ions such as transition metals. To circumvent this, we\npropose the use of DFT/CCSD along with effective Hamiltonian methods, which are\nlikely to offer physical insights with reasonable accuracy.\n"} {"abstract": " Exponential Puiseux semirings are additive submonoids of $\mathbb{Q}_{\geq 0}$\ngenerated by almost all of the nonnegative powers of a positive rational\nnumber, and they are natural generalizations of rational cyclic semirings. In\nthis paper, we investigate some of the factorization invariants of exponential\nPuiseux semirings and briefly explore the connections of these properties with\nsemigroup-theoretical invariants. 
Specifically, we prove that sets of lengths\nof atomic exponential Puiseux semirings are almost arithmetic progressions with\na common bound, while unions of sets of lengths are arithmetic progressions.\nAdditionally, we provide exact formulas to compute the catenary degrees of\nthese monoids and show that minima and maxima of their sets of distances are\nalways attained at Betti elements. We conclude by providing various\ncharacterizations of the atomic exponential Puiseux semirings with finite omega\nfunctions; in particular, we completely describe them in terms of their\npresentations.\n"} {"abstract": " The Kitaev spin liquid (KSL) system has attracted tremendous attention in recent\nyears because of its fundamental significance in condensed matter physics and\npromising applications in fault-tolerant topological quantum computation.\nMaterial realization of such a system remains a major challenge in the field\ndue to the unusual configuration of anisotropic spin interactions, though great\neffort has been made before. Here we reveal that rare-earth chalcohalides REChX\n(RE=rare earth, Ch=O, S, Se, Te, X=F, Cl, Br, I) can serve as a family of KSL\ncandidates. Most family members have the typical SmSI-type structure with a\nhigh symmetry of R-3m, and the rare-earth magnetic ions form an undistorted\nhoneycomb lattice. The strong spin-orbit coupling of 4f electrons intrinsically\noffers anisotropic spin interactions as required by the Kitaev model. We have grown\nthe crystals of YbOCl and synthesized the polycrystals of SmSI, ErOF, HoOF and\nDyOF, and made careful structural characterizations. We carry out magnetic and\nheat capacity measurements down to 1.8 K and find no obvious magnetic\ntransition in any of the samples except DyOF. The van der Waals interlayer coupling\nhighlights the true two-dimensionality of the family, which is vital for the\nexact realization of Abelian/non-Abelian anyons, and the graphene-like feature\nwill be a prominent advantage for developing miniaturized devices. The family\nis expected to act as an inspiring material platform for the exploration of KSL\nphysics.\n"} {"abstract": " Starting from nonequilibrium quantum field theory on a closed time path, we\nderive kinetic equations for the strong-field regime of quantum electrodynamics\n(QED) using a systematic expansion in the gauge coupling $e$. The strong field\nregime is characterized by a large photon field of order $\mathcal{O}(1/e)$,\nwhich is relevant for the description of, e.g., intense laser fields, the\ninitial stages of off-central heavy ion collisions, and condensed matter\nsystems with net fermion number. The strong field enters the dynamical\nequations via both quantum Vlasov and collision terms, which we derive to order\n$\mathcal{O}(e^2)$. The kinetic equations feature generalized scattering\namplitudes that have their own equation of motion in terms of the fermion\nspectral function. The description includes single photon emission,\nelectron-positron pair photoproduction, vacuum (Schwinger) pair production,\ntheir inverse processes, medium effects and contributions from the field, which\nare not restricted to the so-called locally-constant crossed field\napproximation. This extends known kinetic equations commonly used in\nstrong-field QED of intense laser fields. In particular, we derive an\nexpression for the asymptotic fermion pair number that includes leading-order\ncollisions and remains valid for strongly inhomogeneous fields. 
For the purpose\nof analytically highlighting limiting cases, we also consider plane-wave fields\nfor which it is shown how to recover Furry-picture scattering amplitudes by\nfurther assuming negligible occupations. Known on-shell descriptions are\nrecovered in the case of simply peaked ultrarelativistic fermion occupations.\nCollisional strong-field equations are necessary to describe the dynamics to\nthermal equilibrium starting from strong-field initial conditions.\n"} {"abstract": " We present an explanation system for applications that leverage Answer Set\nProgramming (ASP). Given a program P, an answer set A of P, and an atom a in\nthe program P, our system generates all explanation graphs of a which help\nexplain why a is true (or false) given the program P and the answer set A. We\nillustrate the functionality of the system using some examples from the\nliterature.\n"} {"abstract": " The presence of constant power loads (CPLs) in dc shipboard microgrids may\nlead to unstable conditions. The present work investigates the stability\nproperties of dc microgrids where CPLs are fed by fuel cells (FCs), and energy\nstorage systems (ESSs) equipped with voltage droop control. With respect to the\nprevious literature, the dynamics of the duty cycles of the dc-dc converters\nimplementing the droop regulation are considered. A mathematical model has been\nderived, and tuned to best mimic the behavior of the electrical representation\nimplemented in DIgSILENT. Then the model is used to find the sufficient\nconditions for stability with respect to the droop coefficient, the dc-bus\ncapacitor, and the inductances of the dc-dc converters.\n"} {"abstract": " We embed Nelson's stochastic quantization in the Schwartz-Meyer second order\ngeometry framework. The result is a non-perturbative theory of quantum\nmechanics on (pseudo)-Riemannian manifolds. Within this approach, we derive\nstochastic differential equations for massive spin-0 test particles charged\nunder scalar potentials, vector potentials and gravity. Furthermore, we derive\nthe associated Schr\"odinger equation. The resulting equations show that\nmassive scalar particles must be conformally coupled to gravity in a theory of\nquantum gravity. We conclude with a discussion of some prospects of the\nstochastic framework.\n"} {"abstract": " A general form of codebook design for code-domain non-orthogonal multiple\naccess (CD-NOMA) can be considered equivalent to an autoencoder (AE)-based\nconstellation design for multi-user multidimensional modulation (MU-MDM). Due\nto a constrained design space for optimal constellation, e.g., fixed resource\nmapping and equal power allocation to all codebooks, however, existing AE\narchitectures produce constellations with suboptimal bit-error-rate (BER)\nperformance. Accordingly, we propose a new architecture for MU-MDM AE and\nunderlying training methodology for joint optimization of resource mapping and\na constellation design with bit-to-symbol mapping, aiming at approaching the\nBER performance of a single-user MDM (SU-MDM) AE model with the same spectral\nefficiency. The core design of the proposed AE architecture is dense resource\nmapping combined with the novel power allocation layer that normalizes the sum\nof user codebook power across all resources (a simplified power-normalization\nsketch is given below). This globalizes the domain\nof the constellation design by enabling flexible resource mapping and power\nallocation. Furthermore, it allows the AE-based training to approach globally\noptimal MU-MDM constellations for CD-NOMA. 
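As the forward reference above indicates, a plausible numpy rendering of such a sum-power normalization follows; the tensor layout (users x codewords x resources) and the unit-average-power-per-resource convention are assumptions made for illustration, and the paper's trainable layer is more elaborate than this fixed rescaling.

```python
import numpy as np

def normalize_codebook_power(codebooks: np.ndarray) -> np.ndarray:
    """Rescale so that, with every user transmitting one uniformly chosen
    codeword, the expected total power across the K resources equals K,
    i.e. unit average power per resource element."""
    U, M, K = codebooks.shape                  # users x codewords x resources
    expected_power = np.sum(np.abs(codebooks) ** 2) / M
    return codebooks * np.sqrt(K / expected_power)

rng = np.random.default_rng(4)
cb = rng.normal(size=(4, 8, 2)) + 1j * rng.normal(size=(4, 8, 2))
cb = normalize_codebook_power(cb)
print(np.sum(np.abs(cb) ** 2) / 8)             # ~= K = 2 after normalization
```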
Extensive BER simulation results\ndemonstrate that the proposed design outperforms the existing CD-NOMA designs\nwhile approaching the single-user BER performance achieved by the equivalent\nSU-MDM AE within 0.3 dB over the additive white Gaussian noise channel.\n"} {"abstract": " We investigate the dynamics of wealth inequality in an economy where\nhouseholds have positional preferences, with the strength of the positional\nconcern determined endogenously by inequality of wealth distribution in the\nsociety. We demonstrate that in the long run such an economy converges to a\nunique egalitarian steady-state equilibrium, with all households holding equal\npositive wealth, when the initial inequality is sufficiently low. Otherwise,\nthe steady state is characterised by polarisation of households into rich, who\nown all the wealth, and poor, whose wealth is zero. A fiscal policy with\ngovernment consumption funded by taxes on labour income and wealth can move the\neconomy from any initial state towards an egalitarian equilibrium with a higher\naggregate wealth.\n"} {"abstract": " Drop condensation and evaporation as a result of the gradient in vapor\nconcentration are important in both engineering and natural systems. One of the\ninteresting natural examples is transpiration on plant leaves. Most of the water in\nthe inner space of the leaves escapes through stomata, whose rate depends on\nthe surface topography and a difference in vapor concentrations inside and just\noutside of the leaves. Previous research on the vapor flux on various surfaces\nhas focused on numerically solving the vapor diffusion equation or using\nscaling arguments based on a simple solution with a flat surface. In the\npresent work, we present and discuss simple analytical solutions on various 2D\nsurface shapes (e.g., semicylinder, semi-ellipse, hair). The method of solving\nthe diffusion equation is to use the complex potential theory, which provides\nanalytical solutions for vapor concentration and flux. We find that a high mass\nflux of vapor is formed near the top of the microstructures while a low mass\nflux is developed near the stomata at the leaf surface. Such a low vapor flux\nnear the stomata may affect transpiration in two ways. First, condensed\ndroplets on the stomata will not grow due to a low mass flux of vapor, which\nwill not inhibit the gas exchange through the stomatal opening. Second, the low\nmass flux from the atmosphere will facilitate the release of highly concentrated\nvapor from the substomatal space.\n"} {"abstract": " We review methods to shuttle quantum particles fast and robustly. Ideal\nrobustness amounts to the invariance of the desired transport results with\nrespect to deviations, noisy or otherwise, from the nominal driving protocol\nfor the control parameters; this can include environmental perturbations.\n"Fast" is defined with respect to adiabatic transport times. Special attention\nis paid to shortcut-to-adiabaticity protocols that achieve, in\nfaster-than-adiabatic times, the same results as slow adiabatic driving.\n"} {"abstract": " Similarity caching systems have recently attracted the attention of the\nscientific community, as they can be profitably used in many application\ncontexts, like multimedia retrieval, advertising, object recognition,\nrecommender systems and online content-match applications. 
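To fix ideas before the abstract continues, here is a minimal single-node similarity cache in Python: a request is served by the closest stored object when it lies within a dissimilarity threshold (an approximate hit, traded against some utility loss), otherwise the object is fetched and inserted with LRU eviction. The networked forwarding along cache paths and the placement optimization studied in the paper are omitted, and the Euclidean dissimilarity and threshold are illustrative choices.

```python
from collections import OrderedDict
import numpy as np

class SimilarityCache:
    def __init__(self, capacity: int, threshold: float):
        self.capacity, self.threshold = capacity, threshold
        self.store: "OrderedDict[int, np.ndarray]" = OrderedDict()

    def request(self, key: int, obj: np.ndarray) -> str:
        # Serve by the most similar stored object, if close enough.
        best = min(self.store.items(),
                   key=lambda kv: np.linalg.norm(kv[1] - obj),
                   default=None)
        if best is not None and np.linalg.norm(best[1] - obj) <= self.threshold:
            self.store.move_to_end(best[0])      # refresh LRU position
            return "approx-hit"
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        self.store[key] = obj
        return "miss"

cache = SimilarityCache(capacity=2, threshold=0.6)
for i, v in enumerate([np.zeros(3), np.full(3, 0.3), np.ones(3)]):
    print(i, cache.request(i, v))                # miss, approx-hit, miss
```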
In such systems, a\nuser request for an object $o$, which is not in the cache, can be (partially)\nsatisfied by a similar stored object $o'$, at the cost of a loss of user\nutility. In this paper we make a first step into the novel area of similarity\ncaching networks, where requests can be forwarded along a path of caches to get\nthe best efficiency-accuracy tradeoff. The offline problem of content placement\ncan be easily shown to be NP-hard, while different polynomial algorithms can be\ndevised to approach the optimal solution in discrete cases. As the content\nspace grows large, we propose a continuous problem formulation whose solution\nexhibits a simple structure in a class of tree topologies. We verify our\nfindings using synthetic and realistic request traces.\n"} {"abstract": " The breadth of a Lie algebra $L$ is defined to be the maximal dimension of\nthe image of $ad_x=[x,-]:L\to L$, for $x\in L$. Here, we initiate an\ninvestigation into the breadth of three families of Lie algebras defined by\nposets and provide combinatorial breadth formulas for members of each family.\n"} {"abstract": " Using a recently developed technique to estimate the equilibrium free energy\nof glassy materials, we explore whether equilibrium simulation methods can be used\nto estimate the solubility of amorphous solids. As an illustration, we compute\nthe chemical potentials of the constituent particles of a two-component\nKob-Andersen model glass former. To compute the chemical potential for\ndifferent components, we combine the calculation of the overall free energy of\nthe glass with a calculation of the chemical potential difference of the two\ncomponents. We find that the standard method to compute chemical potential\ndifferences by thermodynamic integration yields not only a wide scatter in the\nchemical potential values but, more seriously, the average of the thermodynamic\nintegration results is well above the extrapolated value for the supercooled\nliquid. However, we find that if we compute the difference of the chemical\npotential of the components with the non-equilibrium free energy expression\nproposed by Jarzynski, we obtain a good match with the extrapolated value of\nthe supercooled liquid. The extension of the Jarzynski method that we propose\nopens a potentially powerful route to compute free-energy related equilibrium\nproperties of glasses. We find that the solubility estimate of amorphous\nmaterials obtained from direct coexistence simulations is only in fair\nagreement with the solubility prediction based on the chemical potential\ncalculations of a hypothetical "well-equilibrated glass". In direct coexistence\nsimulations, we find that, in qualitative agreement with experiments, the\namorphous solubility decreases with time and attains a low solubility value.\n"} {"abstract": " With the rapid development of wireless sensor networks, smart devices, and\ntraditional information and communication technologies, there is tremendous\ngrowth in the use of Internet of Things (IoT) applications and services in our\neveryday life. IoT systems deal with high volumes of data. This data can be\nparticularly sensitive, as it may include health, financial, location, and\nother highly personal information. Fine-grained security management in IoT\ndemands effective access control. Several proposals discuss access control for\nthe IoT, however, a limited focus is given to the emerging blockchain-based\nsolutions for IoT access control. 
In this paper, we review the recent trends\nand critical needs for blockchain-based solutions for IoT access control. We\nidentify several important aspects of blockchain for IoT access control,\nincluding decentralised control and the secure storage and sharing of\ninformation in a trustless manner, along with their benefits and limitations.\nFinally, we note some\nfuture research directions on how to converge blockchain and IoT access control\nefficiently and effectively.\n"} {"abstract": " Relative abundance is a common metric to estimate the composition of species\nin ecological surveys reflecting patterns of commonness and rarity of\nbiological assemblages. Measurements of coral reef compositions formed by four\ncommunities along Australia's Great Barrier Reef (GBR) gathered between 2012\nand 2017 are the focus of this paper. We undertake the task of finding clusters\nof transect locations with similar community composition and investigate\nchanges in clustering dynamics over time. During these years, an unprecedented\nsequence of extreme weather events (cyclones and coral bleaching) impacted the\n58 surveyed locations. The dependence between constituent parts of a\ncomposition presents a challenge for existing multivariate clustering\napproaches. In this paper, we introduce a finite mixture of Dirichlet\ndistributions with group-specific parameters, where cluster memberships are\ndictated by unobserved latent variables. The inference is carried out in a Bayesian\nframework, where MCMC strategies are outlined to sample from the posterior\nmodel. Simulation studies are presented to illustrate the performance of the\nmodel in a controlled setting. The application of the model to the 2012 coral\nreef data reveals that clusters were spatially distributed in similar ways\nacross reefs, which indicates a potential influence of wave exposure at the\norigin of coral reef community composition. The number of clusters estimated by\nthe model decreased from four in 2012 to two from 2014 until 2017. Posterior\nprobabilities of transect allocations to the same cluster substantially\nincrease through time showing a potential homogenization of community\ncomposition across the whole GBR. The Bayesian model highlights the diversity\nof coral reef community composition within a coral reef and rapid changes\nacross large spatial scales that may contribute to undermining the future of\nthe GBR's biodiversity.\n"} {"abstract": " With the entrance of cosmology into its new era of high precision experiments,\nlow- and high-redshift observations set off tensions in the measurements of\nboth the present-day expansion rate ($H_0$) and the clustering of matter\n($S_8$). We provide a simultaneous explanation of these tensions using the\nParker-Raval Vacuum Metamorphosis (VM) model with the neutrino sector extended\nbeyond the three massless Standard Model flavours and the curvature of the\nuniverse considered as a model parameter. To estimate the effect on\ncosmological observables we implement various extensions of the VM model in the\nstandard \texttt{CosmoMC} pipeline and establish which regions of parameter\nspace are empirically viable to resolve the $H_0$ and $S_8$ tensions. We find\nthat the likelihood analyses of the physically motivated VM model, which has\nthe same number of free parameters as in the spatially-flat $\Lambda$CDM model,\nalways give $H_0$ in agreement with the local measurements (even when BAO or\nPantheon data are included) at the price of much larger $\chi^2$ than\n$\Lambda$CDM. 
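For the Dirichlet-mixture clustering model described in the coral reef abstract above, the core quantity is the posterior probability that a transect's composition belongs to each cluster. A toy version with made-up weights and Dirichlet parameters (the paper's Bayesian MCMC treatment samples these rather than fixing them) can be computed as follows.

```python
import numpy as np
from scipy.stats import dirichlet

# Posterior cluster responsibilities under a two-component Dirichlet
# mixture, evaluated for one compositional observation (entries sum to 1).
weights = np.array([0.6, 0.4])                 # hypothetical mixture weights
alphas = [np.array([8.0, 1.0, 1.0, 2.0]),      # hypothetical cluster 1
          np.array([2.0, 2.0, 4.0, 4.0])]      # hypothetical cluster 2

x = np.array([0.70, 0.05, 0.05, 0.20])         # one transect's composition
logp = np.log(weights) + np.array([dirichlet.logpdf(x, a) for a in alphas])
resp = np.exp(logp - logp.max())
resp /= resp.sum()
print(resp)                                     # cluster allocation probabilities
```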
The inclusion of massive neutrinos and extra relativistic species,\nquantified through the two well-known parameters $\sum m_{\nu}$ and $N_{\rm eff}$,\ndoes not modify this result, and in some cases improves the goodness of the\nfit. In particular, for the original VM+$\sum m_\nu$+$N_{\rm eff}$ and the\nPlanck+BAO+Pantheon dataset combination, we find evidence for $\sum\nm_{\nu}=0.80^{+0.18}_{-0.22}~{\rm eV}$ at more than $3\sigma$, no indication\nfor extra neutrino species, $H_0=71.0\pm1.2$~km/s/Mpc in agreement with local\nmeasurements, and $S_8=0.755\pm0.032$ that solves the tension with the weak\nlensing measurements. [Abridged]\n"} {"abstract": " We compute the normalization of the multiple D-instanton amplitudes in type\nIIB string theory and show that the result agrees with the prediction of\nS-duality due to Green and Gutperle.\n"} {"abstract": " In this paper, we prove that for suitably chosen Dwork motives, the local\nGalois representation arising from middle cohomology of fibers over a point\nwith $p$-adic valuation $<0$ on the base is regular and ordinary. The result\nwill be crucial in the forthcoming work Potential Automorphy for $GL_n$. The\nproof consists of the construction of a semistable blowup and a use of\nHyodo-Kato's log crystalline cohomology theory.\n"} {"abstract": " State-of-the-art deep-learning-based voice activity detectors (VADs) are\noften trained with anechoic data. However, real acoustic environments are\ngenerally reverberant, which causes the performance to significantly\ndeteriorate. To mitigate this mismatch between training data and real data, we\nsimulate an augmented training set that contains nearly five million\nutterances. This extension comprises anechoic utterances and their\nreverberant modifications, generated by convolutions of the anechoic utterances\nwith a variety of room impulse responses (RIRs). We consider five different\nmodels to generate RIRs, and five different VADs that are trained with the\naugmented training set. We test all trained systems in three different real\nreverberant environments. Experimental results show a $20\%$ increase on average\nin accuracy, precision and recall for all detectors and response models,\ncompared to anechoic training. Furthermore, one of the RIR models consistently\nyields better performance than the other models, for all the tested VADs.\nAdditionally, one of the VADs consistently outperformed the other VADs in all\nexperiments.\n"} {"abstract": " This paper illustrates how multilevel functional models can detect and\ncharacterize biomechanical changes across different sport training sessions. Our\nanalysis focuses on the relevant cases to identify differences in knee\nbiomechanics in recreational runners during low- and high-intensity exercise\nsessions with the same energy expenditure by recording $20$ steps. To do so, we\nreview the existing literature on multilevel models and then propose a new\nhypothesis test to look at the changes between different levels of the\nmultilevel model, such as low- and high-intensity training sessions. We also evaluate\nthe reliability of measures of three-dimensional knee angles from the\nfunctional intra-class correlation coefficient (ICC) obtained from the\ndecomposition performed with the multilevel functional model taking into account\n$20$ measures recorded in each test. 
The results show that there are no\nstatistically significant differences between the two modes of exercise.\nHowever, we have to be careful with the conclusions since, as we have shown,\nhuman gait patterns are very individual and heterogeneous between groups of\nathletes, and other alternatives to the p-value may be more appropriate to\ndetect statistical differences in biomechanical changes in this context.\n"} {"abstract": " Re-identification (ReID) aims to identify the same instance across different\ncameras. Existing ReID methods mostly utilize alignment-based or\nattention-based strategies to generate effective feature representations.\nHowever, most of these methods only extract general features from the single\ninput image itself, overlooking the relevance between the images being\ncompared. To fill this gap, we propose a novel end-to-end trainable dynamic\nconvolution framework named Instance and Pair-Aware Dynamic Networks in this\npaper. The proposed model is composed of three main branches where a\nself-guided dynamic branch is constructed to strengthen instance-specific\nfeatures, focusing on every single image. Furthermore, we also design a\nmutual-guided dynamic branch to generate pair-aware features for each pair of\nimages to be compared. Extensive experiments are conducted in order to verify\nthe effectiveness of our proposed algorithm. We evaluate our algorithm on\nseveral mainstream person and vehicle ReID datasets including CUHK03,\nDukeMTMCreID, Market-1501, VeRi776 and VehicleID. On some datasets our\nalgorithm outperforms state-of-the-art methods, while on others it achieves\ncomparable performance.\n"} {"abstract": " We review the fundamentals and highlight the differences between some\ncommonly used definitions for the PPN gamma parameter ($\gamma$) and the\ngravitational slip ($\eta$). Here we stress the usefulness of a gamma-like\nparameter used by Berry and Gair ($\gamma_{\scriptscriptstyle \Sigma}$) that\nparametrizes the bending of light and the Shapiro time delay in situations in\nwhich the standard $\gamma$ cannot be promptly used. First we apply our\nconsiderations to two well-known cases, but for which some conflicting results\ncan be found: massive Brans-Dicke gravity and $f(R)$ gravity (both the metric\nand the Palatini versions). Although the slip parameter is always well defined,\nit has in general no direct relation to either light deflection or the Shapiro\ntime delay, hence care should be taken when imposing the PPN $\gamma$ bounds on\nthe slip. We stress that, for any system with a well-posed Newtonian limit,\nPalatini $f(R)$ theories always have $\gamma = 1$; while metric $f(R)$ theories\ncan only have two values: either 1 or 1/2. The extension towards Horndeski\ngravity shows no qualitative surprises, and $\gamma_{\scriptscriptstyle\n\Sigma}$ is a constant in this context (only assuming that the Horndeski\npotentials can be approximated by analytical functions). This implies that a\nprecise study on the bending of light for different impact parameters can in\nprinciple be used to rule out the complete Horndeski action as an action for\ngravity. Also, we comment on the consequences for $\gamma$ inferences at\nexternal galaxies.\n"} {"abstract": " We study fair resource allocation under a connectedness constraint wherein a\nset of indivisible items are arranged on a path and only connected subsets of\nitems may be allocated to the agents. 
An allocation is deemed fair if it\nsatisfies equitability up to one good (EQ1), which requires that agents'\nutilities are approximately equal (a brute-force EQ1 check on a tiny instance\nis sketched after this abstract). We show that achieving EQ1 in conjunction\nwith well-studied measures of economic efficiency (such as Pareto optimality,\nnon-wastefulness, maximum egalitarian or utilitarian welfare) is\ncomputationally hard even for binary additive valuations. On the algorithmic\nside, we show that by relaxing the efficiency requirement, a connected EQ1\nallocation can be computed in polynomial time for any given ordering of agents,\neven for general monotone valuations. Interestingly, the allocation computed by\nour algorithm has the highest egalitarian welfare among all allocations\nconsistent with the given ordering. On the other hand, if efficiency is\nrequired, then tractability can still be achieved for binary additive\nvaluations with interval structure. Along the way, we strengthen some of the\nexisting results in the literature for other fairness notions such as\nenvy-freeness up to one good (EF1), and also provide novel results for\nnegatively-valued items or chores.\n"} {"abstract": " We derive a simple and precise approximation to probability density functions\nin sampling distributions based on the Fourier cosine series. After clarifying\nthe required conditions, we illustrate the approximation on two examples: the\ndistribution of the sum of uniformly distributed random variables, and the\ndistribution of sample skewness drawn from a normal population. The probability\ndensity function of the first example can be explicitly expressed, but that of\nthe second example has no explicit expression.\n"} {"abstract": " We analytically compute all four-loop QCD corrections to the photon-quark and\nHiggs-gluon form factors involving a closed massless fermion loop. Our\ncalculation of non-planar vertex integrals confirms a previous conjecture for\nthe analytical form of the non-fermionic contributions to the collinear\nanomalous dimensions of quarks and gluons.\n"} {"abstract": " $^{72}$As is a promising positron emitter for diagnostic imaging that can be\nemployed locally using a $^{72}$Se generator. However, current reaction\npathways to $^{72}$Se have insufficient nuclear data for efficient production\nusing regional 100-200 MeV high-intensity proton accelerators. In order to\naddress this deficiency, stacked-target irradiations were performed at LBNL,\nLANL, and BNL to measure the production of the $^{72}$Se/$^{72}$As PET\ngenerator system via $^{75}$As(p,x) between 35 and 200 MeV. This work provides\nthe most well-characterized excitation function for $^{75}$As(p,4n)$^{72}$Se\nstarting from threshold. Additional focus was given to reporting the first\nmeasurements of $^{75}$As(p,x)$^{68}$Ge and bolstering an already robust\nproduction capability for the highly valuable $^{68}$Ge/$^{68}$Ga PET\ngenerator. Thick target yield comparisons with prior established formation\nroutes to both generators are made. In total, high-energy proton-induced cross\nsections are reported for 55 measured residual products from $^{75}$As, Cu, and\nTi targets, where the latter two materials were present as monitor foils. These\nresults were compared with literature data as well as the default theoretical\ncalculations of the nuclear model codes TALYS, CoH, EMPIRE, and ALICE. Reaction\nmodeling at these energies is typically unsatisfactory due to few prior\npublished data and many interacting physics models. 
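As promised in the fair-division abstract above, here is a brute-force sketch of checking equitability up to one good on a path: each agent receives a contiguous block, and EQ1 is read as u_i(A_i) >= u_j(A_j) - max_{g in A_j} u_j(g) for all pairs i, j. The tiny binary-valuation instance and the fixed agent ordering are illustrative assumptions; the paper's polynomial-time algorithm is not reproduced here.

```python
def is_eq1(utils: list[list[int]], blocks: list[tuple[int, int]]) -> bool:
    """Check EQ1 for contiguous blocks [a, b) on a path of items, with
    additive utilities utils[i][g]; feasible only for tiny instances."""
    vals = [sum(utils[i][a:b]) for i, (a, b) in enumerate(blocks)]
    for i in range(len(blocks)):
        for j in range(len(blocks)):
            a, b = blocks[j]
            if b > a and vals[i] < vals[j] - max(utils[j][a:b]):
                return False                 # even removing j's best good fails
            if b == a and vals[i] < vals[j]:
                return False                 # empty bundle: plain comparison
    return True

# 5 items on a path, 2 agents with binary additive utilities; the single
# cut position determines the two contiguous bundles in agent order.
utils = [[1, 0, 1, 1, 0], [0, 1, 1, 0, 1]]
for cut in range(6):
    print(cut, is_eq1(utils, [(0, cut), (cut, 5)]))
```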
Therefore, a detailed\nassessment of the TALYS code was performed with simultaneous parameter\nadjustments applied according to a standardized procedure. Particular attention\nwas paid to the formulation of the two-component exciton model in the\ntransition between the compound and pre-equilibrium regions, with a linked\ninvestigation of level density models for nuclei off of stability and their\nimpact on modeling predictive power.\n"} {"abstract": " In general, reliable communication via multiple-input multiple-output (MIMO)\northogonal frequency division multiplexing (OFDM) requires accurate channel\nestimation at the receiver. The existing literature largely focuses on\ndenoising methods for channel estimation that depend on either (i)~channel\nanalysis in the time-domain with prior channel knowledge or (ii)~supervised\nlearning techniques which require large pre-labeled datasets for training. To\naddress these limitations, we present a frequency-domain denoising method based\non a reinforcement learning framework that does not need a priori channel\nknowledge and pre-labeled data. Our methodology includes a new successive\nchannel denoising process based on channel curvature computation, for which we\nobtain a channel curvature magnitude threshold to identify unreliable channel\nestimates. Based on this process, we formulate the denoising mechanism as a\nMarkov decision process, where we define the actions through a geometry-based\nchannel estimation update, and the reward function based on a policy that\nreduces mean squared error (MSE). We then resort to Q-learning to update the\nchannel estimates. Numerical results verify that our denoising algorithm can\nsuccessfully mitigate noise in channel estimates. In particular, our algorithm\nprovides a significant improvement over the practical least squares (LS)\nestimation method and provides performance that approaches that of the ideal\nlinear minimum mean square error (LMMSE) estimation with perfect knowledge of\nchannel statistics.\n"} {"abstract": " Tactile perception using vibration sensation helps robots recognize their\nenvironment's physical properties and perform complex tasks. A sliding motion\nis applied to target objects to generate tactile vibration data. However,\nsituations exist where such a sliding motion is infeasible due to geometrical\nconstraints in the environment or an object's fragility which cannot resist\nfriction forces. This paper explores a novel approach to achieve\nvibration-based tactile perception without a sliding motion. To this end, our\nkey idea is injecting a mechanical vibration into a soft tactile sensor system\nand measuring the propagated vibration inside it by a sensor. Soft tactile\nsensors are deformed by the contact state, and the touched objects' shape or\ntexture should change the characteristics of the vibration propagation.\nTherefore, the propagated-vibration data are expected to contain useful\ninformation for recognizing touched environments. We developed a prototype\nsystem for a proof-of-concept: a mechanical vibration is applied to a\nbiomimetic (soft and vibration-based) tactile sensor from a small, mounted\npiezoelectric actuator. As a verification experiment, we performed two\nclassification tasks for sandpaper's grit size and a slit's gap widths using\nour approach and compared their accuracies with that of using sliding motions.\nOur approach resulted in 70% accuracy for the grit size classification and 99%\naccuracy for the gap width classification. 
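Relating to the curvature screen in the channel-estimation abstract above, the sketch below flags subcarrier estimates whose discrete curvature |h[k-1] - 2h[k] + h[k+1]| exceeds a threshold and replaces them with the neighbour average, a simple stand-in for the paper's geometry-based, Q-learning-driven update; the threshold, channel model and noise level here are invented for the demo.

```python
import numpy as np

def denoise_by_curvature(h_est: np.ndarray, threshold: float) -> np.ndarray:
    """Replace channel estimates with large discrete curvature by the
    average of their neighbours (interior subcarriers only)."""
    h = h_est.copy()
    curv = np.abs(h[:-2] - 2 * h[1:-1] + h[2:])
    for k in np.where(curv > threshold)[0] + 1:
        h[k] = 0.5 * (h[k - 1] + h[k + 1])     # simple geometric update
    return h

rng = np.random.default_rng(5)
true = np.exp(1j * 2 * np.pi * np.arange(64) / 64)   # smooth toy channel
noise = 0.3 * (rng.normal(size=64) + 1j * rng.normal(size=64))
noisy = true + noise
den = denoise_by_curvature(noisy, threshold=0.8)
print(np.mean(np.abs(noisy - true) ** 2), np.mean(np.abs(den - true) ** 2))
```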
These results are comparable to or\nbetter than the comparison methods with a sliding motion.\n"} {"abstract": " Biological synaptic plasticity exhibits nonlinearities that are not accounted\nfor by classic Hebbian learning rules. Here, we introduce a simple family of\ngeneralized nonlinear Hebbian learning rules. We study the computations\nimplemented by their dynamics in the simple setting of a neuron receiving\nfeedforward inputs. These nonlinear Hebbian rules allow a neuron to learn\ntensor decompositions of its higher-order input correlations. The particular\ninput correlation decomposed and the form of the decomposition depend on the\nlocation of nonlinearities in the plasticity rule. For simple, biologically\nmotivated parameters, the neuron learns eigenvectors of higher-order input\ncorrelation tensors. We prove that tensor eigenvectors are attractors and\ndetermine their basins of attraction. We calculate the volume of those basins,\nshowing that the dominant eigenvector has the largest basin of attraction. We\nthen study arbitrary learning rules and find that any learning rule that admits\na finite Taylor expansion into the neural input and output also has stable\nequilibria at generalized eigenvectors of higher-order input correlation\ntensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode\nhigher-order input correlations in a simple fashion.\n"} {"abstract": " We consider reversible and surjective cellular automata perturbed with noise.\nWe show that, in the presence of positive additive noise, the cellular\nautomaton forgets all the information regarding its initial configuration\nexponentially fast. In particular, the state of a finite collection of cells\nwith diameter n becomes indistinguishable from pure noise after O(log n) time\nsteps. This highlights the seemingly unavoidable need for irreversibility in\norder to perform scalable reliable computation in the presence of noise.\n"} {"abstract": " Automatic classification of disordered speech can provide an objective tool\nfor identifying the presence and severity of speech impairment. Classification\napproaches can also help identify hard-to-recognize speech samples to teach ASR\nsystems about the variable manifestations of impaired speech. Here, we develop\nand compare different deep learning techniques to classify the intelligibility\nof disordered speech on selected phrases. We collected samples from a diverse\nset of 661 speakers with a variety of self-reported disorders speaking 29 words\nor phrases, which were rated by speech-language pathologists for their overall\nintelligibility using a five-point Likert scale. We then evaluated classifiers\ndeveloped using 3 approaches: (1) a convolutional neural network (CNN) trained\nfor the task, (2) classifiers trained on non-semantic speech representations\nfrom CNNs that used an unsupervised objective [1], and (3) classifiers trained\non the acoustic (encoder) embeddings from an ASR system trained on typical\nspeech [2]. We found that the ASR encoder's embeddings considerably outperform\nthe other two on detecting and classifying disordered speech. Further analysis\nshows that the ASR embeddings cluster speech by the spoken phrase, while the\nnon-semantic embeddings cluster speech by speaker. 
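To illustrate the nonlinear Hebbian mechanism described two abstracts above, the following online simulation uses the update dw proportional to x*y^2 with y = w.x, which performs power iteration on the third-order input correlation tensor and tends to align w with a direction of input skew; the input distribution, learning rate and iteration count are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
d, eta, steps = 5, 5e-3, 50_000
v = rng.normal(size=d); v /= np.linalg.norm(v)   # direction of input skew

w = rng.normal(size=d); w /= np.linalg.norm(w)
for _ in range(steps):
    # Gaussian background plus an occasional bump along v gives the
    # inputs a nonzero third-order correlation in the v direction.
    x = rng.normal(size=d) + (rng.random() < 0.5) * 2.0 * v
    y = w @ x
    w += eta * x * y**2           # nonlinear Hebbian step, f(y) = y^2
    w /= np.linalg.norm(w)        # Oja-style normalization
print(abs(w @ v))                 # alignment with the skew direction
```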
Also, longer phrases are\nmore indicative of intelligibility deficits than single words.\n"} {"abstract": " Motivated by the rising abundance of observational data with continuous\ntreatments, we investigate the problem of estimating the average dose-response\ncurve (ADRF). Available parametric methods are limited in their model space,\nand previous attempts in leveraging neural network to enhance model\nexpressiveness relied on partitioning continuous treatment into blocks and\nusing separate heads for each block; this however produces in practice\ndiscontinuous ADRFs. Therefore, the question of how to adapt the structure and\ntraining of neural network to estimate ADRFs remains open. This paper makes two\nimportant contributions. First, we propose a novel varying coefficient neural\nnetwork (VCNet) that improves model expressiveness while preserving continuity\nof the estimated ADRF. Second, to improve finite sample performance, we\ngeneralize targeted regularization to obtain a doubly robust estimator of the\nwhole ADRF curve.\n"} {"abstract": " In Statistical Relational Artificial Intelligence, a branch of AI and machine\nlearning which combines the logical and statistical schools of AI, one uses the\nconcept {\\em para\\-metrized probabilistic graphical model (PPGM)} to model\n(conditional) dependencies between random variables and to make probabilistic\ninferences about events on a space of \"possible worlds\". The set of possible\nworlds with underlying domain $D$ (a set of objects) can be represented by the\nset $\\mathbf{W}_D$ of all first-order structures (for a suitable signature)\nwith domain $D$. Using a formal logic we can describe events on $\\mathbf{W}_D$.\nBy combining a logic and a PPGM we can also define a probability distribution\n$\\mathbb{P}_D$ on $\\mathbf{W}_D$ and use it to compute the probability of an\nevent. We consider a logic, denoted $PLA$, with truth values in the unit\ninterval, which uses aggregation functions, such as arithmetic mean, geometric\nmean, maximum and minimum instead of quantifiers. However we face the problem\nof computational efficiency and this problem is an obstacle to the wider use of\nmethods from Statistical Relational AI in practical applications. We address\nthis problem by proving that the described probability will, under certain\nassumptions on the PPGM and the sentence $\\varphi$, converge as the size of $D$\ntends to infinity. The convergence result is obtained by showing that every\nformula $\\varphi(x_1, \\ldots, x_k)$ which contains only \"admissible\"\naggregation functions (e.g. arithmetic and geometric mean, max and min) is\nasymptotically equivalent to a formula $\\psi(x_1, \\ldots, x_k)$ without\naggregation functions.\n"} {"abstract": " An analytical derivation of the vibrational density of states (DOS) of\nliquids, and in particular of its characteristic linear in frequency low-energy\nregime, has always been elusive because of the presence of an infinite set of\npurely imaginary modes -- the instantaneous normal modes (INMs). By combining\nan analytic continuation of the Plemelj identity to the complex plane with the\npurely overdamped dynamics of the INMs, we derive a closed-form analytic\nexpression for the low-frequency DOS of liquids. The obtained result explains\nfrom first principles the widely observed linear in frequency term of the DOS\nin liquids, whose slope appears to increase with the average lifetime of the\nINMs. 
The analytic results are robustly confirmed by fitting simulation data\nfor Lennard-Jones liquids, and they also recover the Arrhenius law for the\naverage relaxation time of the INMs, as expected.\n"} {"abstract": " Inferring the causal structure of a system typically requires interventional\ndata, rather than just observational data. Since interventional experiments can\nbe costly, it is preferable to select interventions that yield the maximum\namount of information about a system. We propose a novel Bayesian method for\noptimal experimental design by sequentially selecting interventions that\nminimize the expected posterior entropy as rapidly as possible. A key feature\nis that the method can be implemented by computing simple summaries of the\ncurrent posterior, avoiding the computationally burdensome task of repeatedly\nperforming posterior inference on hypothetical future datasets drawn from the\nposterior predictive. After deriving the method in a general setting, we apply\nit to the problem of inferring causal networks. We present a series of\nsimulation studies in which we find that the proposed method performs favorably\ncompared to existing alternative methods. Finally, we apply the method to real\nand simulated data from a protein-signaling network.\n"} {"abstract": " Hand gestures are a new and promising interface for locomotion in virtual\nenvironments. While several previous studies have proposed different hand\ngestures for virtual locomotion, little is known about their differences in\nterms of performance and user preference in virtual locomotion tasks. In the\npresent paper, we present three different hand gesture interfaces and their\nalgorithms for locomotion, called the Finger Distance gesture, the\nFinger Number gesture and the Finger Tapping gesture. These gestures were\ninspired by previous studies of gesture-based locomotion interfaces and are\ntypical gestures that people are familiar with in their daily lives.\nImplementing these hand gesture interfaces in the present study enabled us to\nsystematically compare the differences between these gestures. In addition, to\ncompare the usability of these gestures to locomotion interfaces using\ngamepads, we also designed and implemented a gamepad interface based on the\nXbox One controller. We compared these four interfaces through two virtual\nlocomotion tasks. These tasks assessed their performance and user preference on\nspeed control and waypoint navigation. Results showed that the user preference and\nperformance of the Finger Distance gesture were comparable to those of the\ngamepad interface. The Finger Number gesture also had performance and\nuser preference close to those of the Finger Distance gesture. Our study demonstrates\nthat the Finger Distance gesture and the Finger Number gesture are very\npromising interfaces for virtual locomotion. We also discuss why the Finger\nTapping gesture needs further improvements before it can be used for virtual\nwalking.\n"} {"abstract": " Harvesting waste heat with temperatures lower than 100 °C can improve system\nefficiency and reduce greenhouse gas emissions, yet it has been a longstanding\nand challenging task. Electrochemical methods for harvesting low-grade heat\nhave attracted research interest in recent years due to the relatively high\neffective temperature coefficient of the electrolytes (> 1 mV/K) compared with\nthe thermopower of traditional thermoelectric devices. 
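As a rough sense of scale for such temperature coefficients, the open-circuit voltage of a cell of this kind is approximately the temperature coefficient times the applied temperature difference. A back-of-the-envelope sketch with illustrative numbers (not measured values from the work discussed below):

```python
# Illustrative arithmetic only: V_oc ~ alpha * dT for an electrochemical heat harvester.
alpha = 1.5e-3  # effective temperature coefficient in V/K (illustrative, > 1 mV/K)
dT = 50.0       # temperature difference across the cell in K (illustrative)
print(f"V_oc ~ {alpha * dT * 1e3:.0f} mV")  # prints: V_oc ~ 75 mV
```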
Compared with other\nelectrochemical devices such as the temperature-variation based thermally\nregenerative electrochemical cycle and temperature-difference based\nthermogalvanic cells, the thermally regenerative flow battery (TRFB) has the\nadvantages of providing a continuous power output, decoupling the heat source\nand heat sink, recuperating heat, and being compatible with stacking for scaling\nup. However, the TRFB suffers from the issue of stable operation due to the\nchallenge of pH matching between catholyte and anolyte solutions with desirable\ntemperature coefficients. In this work, we demonstrate a pH-neutral TRFB based\non KI/KI3 and K3Fe(CN)6/K4Fe(CN)6 as the catholyte and anolyte, respectively,\nwith a cell temperature coefficient of 1.9 mV/K and a power density of 9\n$\mu$W/cm$^2$. This work also presents a comprehensive model with a coupled analysis\nof mass transfer and reaction kinetics in a porous electrode that can\naccurately capture the flow rate dependence of power density and energy\nconversion efficiency. We estimate that the efficiency of the pH-neutral TRFB\ncan reach 11% of the Carnot efficiency at the maximum power output with a\ntemperature difference of 37 K. Via this analysis, we identify that the mass\ntransfer overpotential inside the porous electrode and the resistance of the\nion exchange membrane are the two major factors limiting the efficiency and\npower density, pointing to directions for future improvements.\n"} {"abstract": " We investigate dynamical changes and their corresponding phase space complexity\nin a stochastic red blood cell system. The system is obtained by incorporating\npower noise with the associated sinusoidal flow. Both chaotic and non-chaotic\ndynamics of the sinusoidal flow in red blood cells are identified by the 0-1 test.\nFurthermore, the dynamical complexity of the sinusoidal flow in the system is\ninvestigated by heterogeneous recurrence-based entropy. A numerical\nsimulation is performed to quantify the existence of chaotic dynamics and\ncomplexity for the sinusoidal blood flow.\n"} {"abstract": " This paper presents a Domain Specific Language (DSL) for generically\ndescribing cyber attacks, agnostic to the specific system-under-test (SUT). The\ncreation of the presented DSL is motivated by an automotive use case. The\nconcepts of the DSL are generic, such that attacks on arbitrary systems can be\naddressed. The ongoing trend to improve the user experience of vehicles with\nconnected services implies enhanced connectivity, and remotely accessible\ninterfaces open potential attack vectors. This might also impact safety and the\nproprietary nature of potential SUTs. Reusing tests of attack vectors to\nindustrialize testing them on multiple SUTs mandates an abstraction mechanism\nto port an attack from one system to another. The DSL therefore generically\ndescribes attacks for use with a test case generator (and\nexecution environment), also described in this paper. The latter uses this\ndescription and a database with SUT-specific information to generate attack\nimplementations for a multitude of different (automotive) SUTs.\n"} {"abstract": " We prove the rationality of the Poincar\'e series of multiplier ideals in any\ndimension, thus extending the main results for surfaces of Galindo and\nMonserrat and of Alberich-Carrami\~nana et al. Our results also hold for\nPoincar\'e series of test ideals. 
In order to do so, we introduce a theory of\nHilbert functions indexed over $\\mathbb{R}$ which gives an unified treatment of\nboth cases.\n"} {"abstract": " We study the transverse geometric behavior of 2-dimensional foliations in\n3-manifolds. We show that an R-covered transversely orientable foliation with\nGromov hyperbolic leaves in a closed 3-manifold admits a regulating, transverse\npseudo-Anosov flow (in the appropriate sense) in each atoroidal piece of the\nmanifold. The flow is a blow of a one prong pseudo-Anosov flow. In addition we\nshow that there is a regulating flow for the whole foliation. We also determine\nhow deck transformations act on the universal circle of the foliation.\n"} {"abstract": " Quantum computers promise tremendous impact across applications -- and have\nshown great strides in hardware engineering -- but remain notoriously error\nprone. Careful design of low-level controls has been shown to compensate for\nthe processes which induce hardware errors, leveraging techniques from optimal\nand robust control. However, these techniques rely heavily on the availability\nof highly accurate and detailed physical models which generally only achieve\nsufficient representative fidelity for the most simple operations and generic\nnoise modes. In this work, we use deep reinforcement learning to design a\nuniversal set of error-robust quantum logic gates on a superconducting quantum\ncomputer, without requiring knowledge of a specific Hamiltonian model of the\nsystem, its controls, or its underlying error processes. We experimentally\ndemonstrate that a fully autonomous deep reinforcement learning agent can\ndesign single qubit gates up to $3\\times$ faster than default DRAG operations\nwithout additional leakage error, and exhibiting robustness against calibration\ndrifts over weeks. We then show that $ZX(-\\pi/2)$ operations implemented using\nthe cross-resonance interaction can outperform hardware default gates by over\n$2\\times$ and equivalently exhibit superior calibration-free performance up to\n25 days post optimization using various metrics. We benchmark the performance\nof deep reinforcement learning derived gates against other black box\noptimization techniques, showing that deep reinforcement learning can achieve\ncomparable or marginally superior performance, even with limited hardware\naccess.\n"} {"abstract": " A sender sells an object of unknown quality to a receiver who pays his\nexpected value for it. Sender and receiver might hold different priors over\nquality. The sender commits to a monotonic categorization of quality. We\ncharacterize the sender's optimal monotonic categorization. Using our\ncharacterization, we study the optimality of full pooling or full separation,\nthe alternation of pooling and separation, and make precise a sense in which\npooling is dominant relative to separation. We discuss applications, extensions\nand generalizations, among them the design of a grading scheme by a\nprofit-maximizing school which seeks to signal student qualities and\nsimultaneously incentivize students to learn. Such incentive constraints force\nmonotonicity, and can also be embedded as a distortion of the school's prior\nover student qualities, generating a categorization problem with distinct\nsender and receiver priors.\n"} {"abstract": " We present a new radiative transfer method (SPH-M1RT) that is coupled\ndynamically with smoothed particle hydrodynamics (SPH). 
We implement it in the\n(task-based parallel) SWIFT galaxy simulation code, but it can be\nstraightforwardly implemented in other SPH codes. Our moment-based method\nsimultaneously solves the radiation energy and flux equations in SPH, making it\nadaptive in space and time. We modify the M1 closure relation to stabilize\nradiation fronts in the optically thin limit. We also introduce anisotropic\nartificial viscosity and high-order artificial diffusion schemes, which allow\nthe code to handle radiation transport accurately in both the optically thin\nand optically thick regimes. Non-equilibrium thermo-chemistry is solved using a\nsemi-implicit sub-cycling technique. The computational cost of our method is\nindependent of the number of sources and can be lowered further by using the\nreduced speed of light approximation. We demonstrate the robustness of our\nmethod by applying it to a set of standard tests from the cosmological\nradiative transfer comparison project of Iliev et al. The SPH-M1RT scheme is\nwell-suited for modelling situations in which numerous sources emit ionising\nradiation, such as cosmological simulations of galaxy formation or simulations\nof the interstellar medium.\n"} {"abstract": " In this research paper, I will elaborate on a method to evaluate machine\ntranslation models based on their performance on underlying syntactical\nphenomena between the English and Arabic languages. This method is especially\nimportant, as "neural" and "machine learning" models are hard to fine-tune and\nchange. Thus, finding a way to evaluate them easily and diversely would greatly\nhelp the task of bettering them.\n"} {"abstract": " Many clinical studies evaluate the benefit of treatment based on both\nsurvival and other ordinal/continuous clinical outcomes, such as neurocognitive\nscores or quality-of-life scores. In these studies, there are situations when\nthe clinical outcomes are truncated by death, where subjects die before their\nclinical outcome is measured. Treating outcomes as "missing" or "censored" due\nto death can be misleading for treatment effect evaluation. We show that if we\nuse the median in the survivors or in the always-survivors to summarize\nclinical outcomes, we may conclude a trade-off exists between the probability\nof survival and good clinical outcomes, even in settings where both the\nprobability of survival and the probability of any good clinical outcome are\nbetter for one treatment. Therefore, we advocate not always treating death as a\nmechanism through which clinical outcomes are missing, but rather as part of\nthe outcome measure. To account for the survival status, we describe the\nsurvival-incorporated median as an alternative summary measure for outcomes in\nthe presence of death. The survival-incorporated median is the threshold such\nthat 50\% of the population is alive with an outcome above that threshold. We\nuse conceptual examples to show that the survival-incorporated median provides\na simple and useful summary measure to inform clinical practice.\n"} {"abstract": " The unified set of yields of particles produced in proton-proton collisions\nat $\sqrt{s}$ = 17.3 GeV (laboratory beam momentum 158 GeV/c) is evaluated,\ncombining the experimental results of the NA49 and NA61/SHINE collaborations at\nthe CERN SPS. With the statistical hadronization code Thermal-Fist we confirm\nthe unacceptably high value of $\chi^2$, both in the canonical and in the grand\ncanonical - strangeness canonical approach, with a common volume for all the\nhadrons. 
The use of the energy-dependent width of the Breit-Wigner\nparametrization for the mass distributions of unstable particles improves the\nquality of the description of particle yields only slightly. We confirm the\nobservation that exclusion of the $\phi$ meson yield makes the fit result\nacceptable. The complete experimental data set of particle yields can be\nreasonably fitted if the canonical volumes of hadrons without and with open\nstrangeness are allowed to vary independently. The canonical volume of\nstrangeness was found to be larger than that for non-strange hadrons, which is\ncompatible with the femtoscopy measurements of the p+p system at $\sqrt{s} = $ 27.4\nGeV and 900 GeV. The model with the best-fit parameters allows us to predict the\nyields of several not yet measured particles emitted from p+p at $\sqrt{s}$ =\n17.3 GeV.\n"} {"abstract": " In this paper, we extend a recently introduced multi-fidelity control variate\nfor the uncertainty quantification of the Boltzmann equation to the case of\nkinetic models arising in the study of multiagent systems. For these phenomena,\nwhere the effect of uncertainties is particularly evident, several models have\nbeen developed whose equilibrium states are typically unknown. In particular,\nwe aim to develop efficient numerical methods based on solving the kinetic\nequations in the phase space by Direct Simulation Monte Carlo (DSMC) coupled to\na Monte Carlo sampling in the random space. To this end, exploiting the\nknowledge of the corresponding mean-field approximation we develop novel\nmean-field Control Variate (MFCV) methods that are able to strongly reduce the\nvariance of the standard Monte Carlo sampling method in the random space. We\nverify these observations with several numerical examples based on classical\nmodels, including wealth exchange and opinion formation models for collective\nphenomena.\n"} {"abstract": " We demonstrate an on-demand source of microwave single photons with 71--99\%\nintrinsic quantum efficiency. The source is narrowband (300 kHz) and\ntuneable over a 600 MHz range around 5.2 GHz. Such a device is an important\nelement in numerous quantum technologies and applications. The device consists\nof a superconducting transmon qubit coupled to the open end of a transmission\nline. A $\pi$-pulse excites the qubit, which subsequently rapidly emits a\nsingle photon into the transmission line. A cancellation pulse then suppresses\nthe reflected $\pi$-pulse by 33.5 dB, resulting in 0.005 photons leaking into\nthe photon emission channel. We verify strong antibunching of the emitted\nphoton field and determine its Wigner function. Non-radiative decay and $1/f$\nflux noise both affect the quantum efficiency. We also study the device\nstability over time and identify uncorrelated discrete jumps of the pure\ndephasing rate at different qubit frequencies on a time scale of hours, which\nwe attribute to independent two-level system defects in the device dielectrics,\ndispersively coupled to the qubit.\n"} {"abstract": " The heavy fermion state with Kondo-hybridization (KH), usually manifested in\nf-electron systems with lanthanide or actinide elements, was recently\ndiscovered in several 3d transition metal compounds without f-electrons.\nHowever, KH has not yet been observed in 4d/5d transition metal compounds,\nsince the more extended 4d/5d orbitals do not usually form flat bands that supply\nlocalized electrons appropriate for Kondo pairing. 
Here, we report a doping-\nand temperature-dependent angle-resolved photoemission study of 4d\nCa2-xSrxRuO4, which shows the signature of KH. We observed a spectral weight\ntransfer in the {\gamma}-band, reminiscent of an orbital-selective Mott phase\n(OSMP). The Mott localized {\gamma}-band induces KH with the itinerant\n{\beta}-band, resulting in spectral weight suppression around the Fermi level.\nOur work is the first to demonstrate the evolution of the OSMP with possible KH\namong 4d electrons, and thereby expands the material boundary of Kondo physics\nto 4d multi-orbital systems.\n"} {"abstract": " Excellent thermoelectric performance in out-of-layer n-doped SnSe has\nbeen observed experimentally (Chang et al., Science 360, 778-783 (2018)).\nHowever, a first-principles investigation of the dominant scattering mechanisms\ngoverning all thermoelectric transport properties is lacking. In the present\nwork, by applying extensive first-principles calculations of electron-phonon\ncoupling combined with the calculation of the scattering by ionized\nimpurities, we investigate the reasons behind the superior figure of merit as\nwell as the enhancement of zT above 600 K in n-doped out-of-layer SnSe, as\ncompared to p-doped SnSe with similar carrier densities. For the n-doped case,\nthe relaxation time is dominated by ionized impurity scattering and increases\nwith temperature, a feature that maintains the power factor at high values at\nhigher temperatures and simultaneously causes the carrier thermal conductivity\nat zero electric current (k_el) to decrease faster at higher temperatures,\nleading to an ultrahigh zT = 3.1 at 807 K. We rationalize the roles played by\nk_el and k^0 (the thermal conductivity due to carrier transport under\nisoelectrochemical conditions) in the determination of zT. Our results show that the\nratio between k^0 and the lattice thermal conductivity indeed corresponds to\nthe upper limit for zT, whereas the difference between the calculated zT and the\nupper limit is proportional to k_el.\n"} {"abstract": " We consider $\mathbb{Z}_2$-synchronization on the Euclidean lattice. Every\nvertex of $\mathbb{Z}^d$ is assigned an independent symmetric random sign\n$\theta_u$, and for every edge $(u,v)$ of the lattice, one observes the product\n$\theta_u\theta_v$ flipped independently with probability $p$. The task is to\nreconstruct the products $\theta_u\theta_v$ for pairs of vertices $u$ and $v$ which\nare arbitrarily far apart. Abb\'e, Massouli\'e, Montanari, Sly and Srivastava\n(2018) showed that synchronization is possible if and only if $p$ is below a\ncritical threshold $\tilde{p}_c(d)$, and efficiently so for $p$ small enough.\nWe augment this synchronization setting with a model of side information\npreserving the sign symmetry of $\theta$, and propose an \emph{efficient}\nalgorithm which synchronizes a randomly chosen pair of far away vertices on\naverage, up to a differently defined critical threshold $p_c(d)$. We conjecture\nthat $ p_c(d)=\tilde{p}_c(d)$ for all $d \ge 2$. 
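The observation model itself is simple to simulate, which helps make the setting concrete. A minimal numpy sketch of noisy edge observations on a small 2D grid (the actual reconstruction algorithm, based on renormalization and a multiscale scheme, is far more involved and is not reproduced here):

```python
# Minimal sketch of the Z2-synchronization observation model on a 2D grid:
# each vertex gets a random sign; each edge reports the product of its
# endpoint signs, flipped independently with probability p.
import numpy as np

rng = np.random.default_rng(1)
L, p = 32, 0.1
theta = rng.choice([-1, 1], size=(L, L))          # vertex signs
flip_h = rng.random((L, L - 1)) < p               # horizontal edge noise
flip_v = rng.random((L - 1, L)) < p               # vertical edge noise
obs_h = theta[:, :-1] * theta[:, 1:] * np.where(flip_h, -1, 1)
obs_v = theta[:-1, :] * theta[1:, :] * np.where(flip_v, -1, 1)

# Naive baseline: estimate theta_u * theta_v by multiplying observed edge
# signs along a path; errors accumulate with path length, which is exactly
# why more sophisticated algorithms are needed at large distances.
path_est = np.cumprod(obs_h[0])                   # along the first row
truth = theta[0, 0] * theta[0, 1:]
print("fraction correct along row:", (path_est == truth).mean())
```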
Our strategy is to\n\\emph{renormalize} the synchronization model in order to reduce the effective\nnoise parameter, and then apply a variant of the multiscale algorithm of AMMSS.\nThe success of the renormalization procedure is conditional on a plausible but\nunproved assumption about the regularity of the free energy of an Ising spin\nglass model on $\\mathbb{Z}^d$.\n"} {"abstract": " Graph Neural Networks (GNNs) are the subject of intense focus by the machine\nlearning community for problems involving relational reasoning. GNNs can be\nbroadly divided into spatial and spectral approaches. Spatial approaches use a\nform of learned message-passing, in which interactions among vertices are\ncomputed locally, and information propagates over longer distances on the graph\nwith greater numbers of message-passing steps. Spectral approaches use\neigendecompositions of the graph Laplacian to produce a generalization of\nspatial convolutions to graph structured data which access information over\nshort and long time scales simultaneously. Here we introduce the Spectral Graph\nNetwork, which applies message passing to both the spatial and spectral\ndomains. Our model projects vertices of the spatial graph onto the Laplacian\neigenvectors, which are each represented as vertices in a fully connected\n\"spectral graph\", and then applies learned message passing to them. We apply\nthis model to various benchmark tasks including a graph-based variant of MNIST\nclassification, molecular property prediction on MoleculeNet and QM9, and\nshortest path problems on random graphs. Our results show that the Spectral GN\npromotes efficient training, reaching high performance with fewer training\niterations despite having more parameters. The model also provides robustness\nto edge dropout and outperforms baselines for the classification tasks. We also\nexplore how these performance benefits depend on properties of the dataset.\n"} {"abstract": " Cookie banners are devices implemented by websites to allow users to manage\ntheir privacy settings with respect to the use of cookies. They are part of a\nuser's daily web browsing experience since legislation in Europe requires\nwebsites to show such notices. In this paper, we carry out a large-scale study\nof more than 17,000 websites including more than 7,500 cookie banners in Greece\nand the UK to determine compliance and tracking transparency levels. Our\nanalysis shows that although more than 60% of websites store third-party\ncookies in both countries, only less than 50% show a cookie notice and hence a\nsubstantial proportion do not comply with the law even at the very basic level.\nWe find only a small proportion of the surveyed websites providing a direct\nopt-out option, with an overwhelming majority either nudging users towards\nprivacy-intrusive choices or making cookie rejection much harder than consent.\nOur results differ significantly in some cases from previous smaller-scale\nstudies and hence underline the importance of large-scale studies for a better\nunderstanding of the big picture in cookie practices.\n"} {"abstract": " We present an analytic computation of the two-loop QCD corrections to\n$u\\bar{d}\\to W^+b\\bar{b}$ for an on-shell $W$-boson using the leading colour\nand massless bottom quark approximations. 
We perform an integration-by-parts\nreduction of the unpolarised squared matrix element using finite field\nreconstruction techniques and identify an independent basis of special\nfunctions that allows an analytic subtraction of the infrared and ultraviolet\npoles. This basis is valid for all planar topologies for five-particle\nscattering with an off-shell leg.\n"} {"abstract": " Operator spreading under unitary time evolution has attracted a lot of\nattention recently, as a way to probe many-body quantum chaos. While quantities\nsuch as out-of-time-ordered correlators (OTOC) do distinguish interacting from\nnon-interacting systems, it has remained unclear to what extent they can truly\ndiagnose chaotic {\\it vs} integrable dynamics in many-body quantum systems.\nHere, we analyze operator spreading in generic 1D many-body quantum systems\nusing a combination of matrix product operator (MPO) and analytical techniques,\nfocusing on the operator {\\em right-weight}. First, we show that while small\nbond dimension MPOs allow one to capture the exponentially-decaying tail of the\noperator front, in agreement with earlier results, they lead to significant\nquantitative and qualitative errors for the actual front -- defined by the\nmaximum of the right-weight. We find that while the operator front broadens\ndiffusively in both integrable and chaotic interacting spin chains, the precise\nshape and scaling of the height of the front in integrable systems is anomalous\nfor all accessible times. We interpret these results using a quasiparticle\npicture. This provides a sharp, though rather subtle signature of many-body\nquantum chaos in the operator front.\n"} {"abstract": " We developed a theory of electric and thermoelectric conductivity of lightly\ndoped SrTiO$_3$ in the non-degenerate region $k_B T \\geq E_F$, assuming that\nthe major source of electron scattering is their interaction with soft\ntransverse optical phonons present due to proximity to ferroelectric\ntransition. We have used kinetic equation approach within relaxation-time\napproximation and we have determined energy-dependent transport relaxation time\n$\\tau(E)$ by the iterative procedure. Using electron effective mass $m$ and\nelectron-transverse phonon coupling constant $\\lambda$ as two fitting\nparameters, we are able to describe quantitatively a large set of the measured\ntemperature dependences of resistivity $R(T)$ and Seebeck coefficient\n$\\mathcal{S}(T)$ for a broad range of electron densities studied experimentally\nin recent paper [1]. In addition, we calculated Nernst ratio $\\nu=N/B$ in the\nlinear approximation over weak magnetic field in the same temperature range.\n"} {"abstract": " Recently, it has been argued that encoder-decoder models can be made more\ninterpretable by replacing the softmax function in the attention with its\nsparse variants. In this work, we introduce a novel, simple method for\nachieving sparsity in attention: we replace the softmax activation with a ReLU,\nand show that sparsity naturally emerges from such a formulation. Training\nstability is achieved with layer normalization with either a specialized\ninitialization or an additional gating function. Our model, which we call\nRectified Linear Attention (ReLA), is easy to implement and more efficient than\npreviously proposed sparse attention mechanisms. We apply ReLA to the\nTransformer and conduct experiments on five machine translation tasks. 
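The core mechanism is easy to state in a few lines: compute attention scores as usual, apply ReLU instead of softmax, and stabilize the output with layer normalization. A minimal numpy sketch of the general idea (the paper's exact normalization and gating variants may differ):

```python
# Minimal sketch of rectified linear attention: ReLU replaces the softmax
# over attention scores; layer normalization stabilizes the output.
import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def rela(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.maximum(scores, 0.0)   # ReLU: a row can be all zeros,
    return layer_norm(weights @ V)      # i.e. a head can "attend to nothing"

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(rela(Q, K, V).shape)  # (5, 8)
```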
ReLA\nachieves translation performance comparable to several strong baselines, with\ntraining and decoding speed similar to that of the vanilla attention. Our\nanalysis shows that ReLA delivers a high sparsity rate and head diversity, and\nthe induced cross attention achieves better accuracy with respect to\nsource-target word alignment than recent sparsified softmax-based models.\nIntriguingly, ReLA heads also learn to attend to nothing (i.e. 'switch off')\nfor some queries, which is not possible with sparsified softmax alternatives.\n"} {"abstract": " Gradient-descent based iterative algorithms pervade a variety of problems in\nestimation, prediction, learning, control, and optimization. Recently, iterative\nalgorithms based on higher-order information have been explored in an attempt\nto achieve accelerated learning. In this paper, we explore a specific\nhigh-order tuner that has been shown to result in stability with time-varying\nregressors in linearly parametrized systems, and accelerated convergence with\nconstant regressors. We show that this tuner continues to provide bounded\nparameter estimates even if the gradients are corrupted by noise. Additionally,\nwe also show that the parameter estimates converge exponentially to a compact\nset whose size is dependent on the noise statistics. As the HT algorithms can be\napplied to a wide range of problems in estimation, filtering, control, and\nmachine learning, the result obtained in this paper represents an important\nextension to the topic of real-time and fast decision making.\n"} {"abstract": " A result due to Williams, Stampfli and Fillmore shows that an essential\nisometry $T$ on a Hilbert space $\mathcal{H}$ is a compact perturbation of an\nisometry if and only if ind$(T)\le 0$. A recent result of S. Chavan yields an\nanalogous characterization of essential spherical isometries\n$T=(T_1,\dots,T_n)\in\mathcal{B}(\mathcal{H})^n$ with\ndim($\bigcap_{i=1}^n\ker(T_i))\le$ dim$(\bigcap_{i=1}^n\ker(T_i^*))$. In the\npresent note we show that in dimension $n>1$ the result of Chavan holds without\nany condition on the dimensions of the joint kernels of $T$ and $T^*$.\n"} {"abstract": " Collective phenomena in the Tavis-Cummings model have been widely studied,\nfocusing on the phase transition features. On many occasions, variational\napproaches that consider separate radiation-matter systems have been used. In\nthis paper, we examine the role of the quantum entanglement of an assembly of\ntwo-level emitters coupled to a single-mode cavity; this allows us to\ncharacterise the quantum correlated state for each regime. Statistical\nproperties of the system, e.g., the first four statistical moments, show\nclearly the structure of the light and matter distributions. Even though the\nsecond order correlation function goes to one in some regimes, the statistical\nanalysis evidences a sharp departure from coherent behaviour, contrary to the\ncommon understanding.\n"} {"abstract": " In this paper we prove the existence of nonnegative solutions to parabolic\nCauchy-Dirichlet problems with superlinear gradient terms which are possibly\nsingular. The model equation is \[\n u_t - \Delta_p u=g(u)|\nabla u|^q+h(u)f(t,x)\qquad \text{in\n}(0,T)\times\Omega, \] where $\Omega$ is an open bounded subset of\n$\mathbb{R}^N$ with $N>2$, $04.2$ the expected wind strength\naccording to our criterion is small enough so that the compression is slower\nthan the sound speed of the BES and sound waves can be triggered. 
In this case\nour criterion somewhat underestimates the onset of collapse and detailed\nnumerical analyses are required.\n"} {"abstract": " Among the peculiarities of ODEs -- especially those of Mechanics -- besides the\nproblem of reducing them to quadratures and solving them either in series or in\nclosed form, one is faced with inversion. E.g., when one wishes to pass from\ntime as a function of the Lagrangian coordinates to the latter as functions of time.\nThis paper solves in almost closed form the system of nonlinear ODEs of the\n2D motion (say, co-ordinates $\theta$ and $\psi$) of a gravity-free double\npendulum (GFDP) not subjected to any force. In this way its motion is governed\nby the initial conditions only. The relevant strongly nonlinear ODEs have been\nreduced to hyper-elliptic quadratures which, through the Integral\nRepresentation Theorem (hereinafter IRT), have been expressed through the Lauricella\nhypergeometric functions $F_D^{(j)}, j=3, 4, 5, 6 $. The IRT has been applied\nafter a change of variable which improves their use and accelerates the series\nconvergence. The $\psi$ is given in terms of $F_D^{(4)}$ -- which is inverted\nby means of the Fourier Series tool and put as an argument inside the\n$F_D^{(5)}$ -- thereby allowing the computation of $\theta$. We thus gain insight into\nthe time laws and trajectories of both bobs forming the\nGFDP, which -- after the inversion -- is therefore completely solved in\nexplicit closed form. Suitable sample problems of the three possible cases of\nmotion are carried out and their analysis closes the work. The Lauricella\nfunctions employed here to solve the differential equations -- in the absence of\nspecific software packages -- have been implemented thanks to some reduction theorems\nwhich will be the subject of a forthcoming paper. To the best of our knowledge, this work\nadds a new contribution concerning the detection and inversion of solutions of\nnonlinear Hamiltonian systems.\n"} {"abstract": " We show that adding differential privacy to Explainable Boosting Machines\n(EBMs), a recent method for training interpretable ML models, yields\nstate-of-the-art accuracy while protecting privacy. Our experiments on multiple\nclassification and regression datasets show that DP-EBM models suffer\nsurprisingly little accuracy loss even with strong differential privacy\nguarantees. In addition to high accuracy, two other benefits of applying DP to\nEBMs are: a) trained models provide exact global and local interpretability,\nwhich is often important in settings where differential privacy is needed; and\nb) the models can be edited after training without loss of privacy to correct\nerrors which DP noise may have introduced.\n"} {"abstract": " Human personality traits are the key drivers behind our decision-making,\ninfluencing our life path on a daily basis. Inference of personality traits,\nsuch as Myers-Briggs Personality Type, as well as an understanding of\ndependencies between personality traits and users' behavior on various social\nmedia platforms is of crucial importance to modern research and industry\napplications. The emergence of diverse and cross-purpose social media avenues\nmakes it possible to perform user personality profiling automatically and\nefficiently based on data represented across multiple data modalities. 
However,\nresearch efforts on personality profiling from multi-source multi-modal\nsocial media data are relatively sparse, and the level of impact of different\nsocial network data on machine learning performance has yet to be\ncomprehensively evaluated. Furthermore, no such benchmark dataset exists in the\nresearch community. This study is one of the first attempts\ntowards bridging such an important research gap. Specifically, in this work, we\ninfer the Myers-Briggs Personality Type indicators by applying a novel\nmulti-view fusion framework, called "PERS", and comparing the performance\nresults not just across data modalities but also with respect to different\nsocial network data sources. Our experimental results demonstrate PERS's\nability to learn from multi-view data for personality profiling by efficiently\nleveraging the significantly different data arriving from diverse social\nmultimedia sources. We have also found that the selection of a machine learning\napproach is of crucial importance when choosing social network data sources and\nthat people tend to reveal multiple facets of their personality in different\nsocial media avenues. Our released social multimedia dataset facilitates future\nresearch in this direction.\n"} {"abstract": " In his 1987 paper, Todorcevic remarks that Sierpinski's onto mapping\nprinciple (1932) and the Erdos-Hajnal-Milner negative Ramsey relation (1966)\nare equivalent to each other, and follow from the existence of a Luzin set.\nRecently, Guzman and Miller showed that these two principles are also\nequivalent to the existence of a nonmeager set of reals of cardinality\n$\aleph_1$. We expand this circle of equivalences and show that these\npropositions are equivalent also to the high-dimensional version of the\nErdos-Hajnal-Milner negative Ramsey relation, thereby improving a CH theorem of\nGalvin (1980).\n Then we consider the validity of these relations in the context of strong\ncolorings over partitions and prove the consistency of a positive Ramsey\nrelation, as follows: It is consistent with the existence of both a Luzin set\nand a Souslin tree that for some countable partition p, all colorings are\np-special.\n"} {"abstract": " Mastery of order-disorder processes in highly non-equilibrium nanostructured\noxides has significant implications for the development of emerging energy\ntechnologies. However, we are presently limited in our ability to quantify and\nharness these processes at high spatial, chemical, and temporal resolution,\nparticularly in extreme environments. Here we describe the percolation of\ndisorder at the model oxide interface LaMnO$_3$ / SrTiO$_3$, which we visualize\nduring in situ ion irradiation in the transmission electron microscope. We\nobserve the formation of a network of disorder during the initial stages of ion\nirradiation and track the global progression of the system to full disorder. We\ncouple these measurements with detailed structural and chemical probes,\nexamining possible underlying defect mechanisms responsible for this unique\npercolative behavior.\n"} {"abstract": " The Principal-Agent Theory model is widely used to explain the governance role\nwhere there is a separation of ownership and control, as it defines clear\nboundaries between governance and executives. 
However, examination of recent\ncorporate failure reveals the concerning contribution of the Board of Directors\nto such failures and calls into question governance effectiveness in the\npresence of a powerful and charismatic CEO. This study proposes a framework for\nanalyzing the relationship between the Board of Directors and the CEO, and how\ncertain relationships affect the power structure and behavior of the Board,\nwhich leads to a role reversal in the Principal-Agent Theory, as the Board\nassumes the role of the CEO's agent. This study's results may help create a red\nflag for a board and leader's behavior that may result in governance failure.\n"} {"abstract": " We prove that there are at least as many exact embedded Lagrangian fillings\nas seeds for Legendrian links of affine type $\\tilde{\\mathsf{D}}\n\\tilde{\\mathsf{E}}$. We also provide as many Lagrangian fillings with certain\nsymmetries as seeds of type $\\tilde{\\mathsf{B}}_n$, $\\tilde{\\mathsf{F}}_4$,\n$\\tilde{\\mathsf{G}}_2$, and $\\mathsf{E}_6^{(2)}$. These families are the first\nknown Legendrian links with infinitely many fillings that exhaust all seeds in\nthe corresponding cluster structures. Furthermore, we show that Legendrian\nrealization of Coxeter mutation of type $\\tilde{\\mathsf{D}}$ corresponds to the\nLegendrian loop considered by Casals and Ng.\n"} {"abstract": " Most modern unsupervised domain adaptation (UDA) approaches are rooted in\ndomain alignment, i.e., learning to align source and target features to learn a\ntarget domain classifier using source labels. In semi-supervised domain\nadaptation (SSDA), when the learner can access few target domain labels, prior\napproaches have followed UDA theory to use domain alignment for learning. We\nshow that the case of SSDA is different and a good target classifier can be\nlearned without needing alignment. We use self-supervised pretraining (via\nrotation prediction) and consistency regularization to achieve well separated\ntarget clusters, aiding in learning a low error target classifier. With our\nPretraining and Consistency (PAC) approach, we achieve state of the art target\naccuracy on this semi-supervised domain adaptation task, surpassing multiple\nadversarial domain alignment methods, across multiple datasets. PAC, while\nusing simple techniques, performs remarkably well on large and challenging SSDA\nbenchmarks like DomainNet and Visda-17, often outperforming recent state of the\nart by sizeable margins. Code for our experiments can be found at\nhttps://github.com/venkatesh-saligrama/PAC\n"} {"abstract": " In this paper we derive quantitative estimates in the context of stochastic\nhomogenization for integral functionals defined on finite partitions, where the\nrandom surface integrand is assumed to be stationary. Requiring the integrand\nto satisfy in addition a multiscale functional inequality, we control\nquantitatively the fluctuations of the asymptotic cell formulas defining the\nhomogenized surface integrand. As a byproduct we obtain a simplified cell\nformula where we replace cubes by almost flat hyperrectangles.\n"} {"abstract": " Searches for periodicity in time series are often done with models of\nperiodic signals, whose statistical significance is assessed via false alarm\nprobabilities or Bayes factors. However, a statistically significant periodic\nmodel might not originate from a strictly periodic source. In astronomy in\nparticular, one expects transient signals that show periodicity for a certain\namount of time before vanishing. 
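One simple way to operationalize this idea is to fit a sinusoid multiplied by a time window and compare it against an effectively unwindowed (persistent) fit, anticipating the window-based diagnostics described below. A toy numpy sketch (the grids, window shape and data are illustrative assumptions, not the paper's pipeline):

```python
# Sketch: fit a sinusoid multiplied by a time window, so that transient
# periodicity can be distinguished from strictly periodic signals.
# Amplitudes are solved by linear least squares for given (freq, t0, width).
import numpy as np

def windowed_fit(t, y, freq, t0, width):
    w = np.exp(-0.5 * ((t - t0) / width) ** 2)        # Gaussian time window
    A = np.column_stack([w * np.cos(2 * np.pi * freq * t),
                         w * np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2)                # residual sum of squares

t = np.linspace(0, 100, 500)
y = np.exp(-0.5 * ((t - 30) / 10) ** 2) * np.sin(2 * np.pi * 0.2 * t)
rss_transient = windowed_fit(t, y, 0.2, 30, 10)       # matched window
rss_persistent = windowed_fit(t, y, 0.2, 50, 1e6)     # ~constant window
print(rss_transient < rss_persistent)                 # True: transient wins
```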
This situation is encountered for instance in\nthe search for planets in radial velocity data. While planetary signals are\nexpected to have a stable phase, amplitude and frequency - except when strong\nplanet-planet interactions are present - signals induced by stellar activity\nwill typically not exhibit the same stability. In the present article, we\nexplore the use of periodic functions multiplied by time windows to diagnose\nwhether an apparently periodic signal is truly so. We suggest diagnostics to\ncheck whether a signal is consistently present in the time series, and has a\nstable phase, amplitude and period. The tests are expressed both in a\nperiodogram and a Bayesian framework. Our methods are applied to the Solar\nHARPS-N data as well as to HD 215152, HD 69830 and HD 13808. We find that (i) the\nHARPS-N Solar data exhibit signals at the Solar rotation period and its first\nharmonic ($\sim$ 13.4 days). The frequency and phase of the 13.4 day signal\nappear constant within the estimation uncertainties, but its amplitude presents\nsignificant variations which can be mapped to activity levels; (ii) as\npreviously reported, we find four, three and two planets orbiting HD 215152, HD\n69830 and HD 13808.\n"} {"abstract": " The task of multi-label image classification is to recognize all the object\nlabels present in an image. Though advancing for years, small objects,\nsimilar objects and objects with high conditional probability are still the\nmain bottlenecks of previous convolutional neural network (CNN) based models,\nlimited by convolutional kernels' representational capacity. Recent vision\ntransformer networks utilize the self-attention mechanism to extract\npixel-granularity features, which express richer local semantic\ninformation but are insufficient for mining global spatial dependence. In\nthis paper, we point out the three crucial problems that CNN-based methods\nencounter and explore the possibility of designing specific transformer\nmodules to address them. We put forward a Multi-label Transformer\narchitecture (MlTr) constructed with window partitioning, in-window pixel\nattention, and cross-window attention, particularly improving the performance of\nmulti-label image classification tasks. The proposed MlTr shows\nstate-of-the-art results on various prevalent multi-label datasets such as\nMS-COCO, Pascal-VOC, and NUS-WIDE with 88.5%, 95.8%, and 65.5%, respectively.\nThe code will be available soon at https://github.com/starmemda/MlTr/\n"} {"abstract": " The purpose of this study is to examine Olympic champions' characteristics on\nInstagram, first to understand whether differences exist between male and female\nathletes and then to find possible correlations between these characteristics.\nWe utilized a content analytic method to analyze Olympic gold medalists'\nphotographs on Instagram. In this way, we fetched data from the Instagram pages of\nall those Rio 2016 Olympic gold medalists whose accounts were publicly\navailable. The analysis of the data revealed the existence of a positive monotonic\nrelationship between the following/follower ratio and the\nengagement-to-follower ratio for male gold medalists, and a strong negative monotonic\nrelationship between age and the ratio of self-presenting posts for both male and\nfemale gold medalists, which even takes a linear form for men. 
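Monotonic-but-not-necessarily-linear associations such as these are what Spearman's rank correlation is designed to capture. A toy sketch with hypothetical per-athlete ratios (illustrative data, not the study's):

```python
# Spearman's rho measures monotone association via ranks, so it captures
# nonlinear-but-monotonic relationships that Pearson's r can understate.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
follow_ratio = rng.uniform(0, 2, size=50)                     # following/followers
engagement = follow_ratio ** 3 + rng.normal(0, 0.5, size=50)  # monotone + noise

rho, pval = spearmanr(follow_ratio, engagement)
print(f"Spearman rho = {rho:.2f}, p = {pval:.1e}")
```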
These findings,\naligned with the relevant theories and literature, may come together to help\nathletes manage and expand their personal brand on social media.\n"} {"abstract": " The evolution of young stars and disks is driven by the interplay of several\nprocesses, notably the accretion and ejection of material. Critical to correctly\ndescribing the conditions of planet formation, these processes are best probed\nspectroscopically. About five-hundred orbits of the Hubble Space Telescope\n(HST) are being devoted in 2020-2022 to the ULLYSES public survey of about 70\nlow-mass (M<2Msun) young (age<10 Myr) stars at UV wavelengths. Here we present\nthe PENELLOPE Large Program that is being carried out at the ESO Very Large\nTelescope (VLT) to acquire, contemporaneous to HST, optical ESPRESSO/UVES\nhigh-resolution spectra to investigate the kinematics of the emitting gas, and\nUV-to-NIR X-Shooter medium-resolution flux-calibrated spectra to provide the\nfundamental parameters that HST data alone cannot provide, such as extinction\nand stellar properties. The data obtained by PENELLOPE have no proprietary\ntime, and the fully reduced spectra are made available to the whole community.\nHere, we describe the data and the first scientific analysis of the accretion\nproperties for the sample of thirteen targets located in the Orion OB1\nassociation and in the sigma-Orionis cluster, observed in Nov-Dec 2020. We find\nthat the accretion rates are in line with those observed previously in\nsimilarly young star-forming regions, with a variability on a timescale of days\nof <3. The comparison of the fits to the continuum excess emission obtained\nwith a slab model on the X-Shooter spectra and the HST/STIS spectra shows a\nshortcoming in the X-Shooter estimates of <10%, well within the assumed\nuncertainty. Its origin can be either a wrong UV extinction curve or the\nsimplicity of this modelling, and will be investigated in the course of the\nPENELLOPE program. The combined ULLYSES and PENELLOPE data will be key for a\nbetter understanding of the accretion/ejection mechanisms in young stars.\n"} {"abstract": " Advances in imagery at atomic and near-atomic resolution, such as cryogenic\nelectron microscopy (cryo-EM), have led to an influx of high resolution images\nof proteins and other macromolecular structures to data banks worldwide.\nProducing a protein structure from the discrete voxel grid data of cryo-EM maps\ninvolves interpolation into the continuous spatial domain. We present a novel\ndata format called the neural cryo-EM map, which is formed from a set of neural\nnetworks that accurately parameterize cryo-EM maps and provide native,\nspatially continuous data for density and gradient. As a case study of this\ndata format, we create graph-based interpretations of high resolution\nexperimental cryo-EM maps. Normalized cryo-EM map values interpolated using the\nnon-linear neural cryo-EM format are more accurate, consistently scoring less\nthan 0.01 mean absolute error, than a conventional tri-linear interpolation,\nwhich scores up to 0.12 mean absolute error. Our graph-based interpretations of\n115 experimental cryo-EM maps from 1.15 to 4.0 Angstrom resolution provide high\ncoverage of the underlying amino acid residue locations, while the accuracy of\nthe nodes is correlated with resolution. 
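The key ingredient of such a format is that a small network maps a 3D coordinate to a density value, so automatic differentiation yields a spatially continuous gradient for free. A minimal sketch of this mechanism (the architecture and training details here are illustrative assumptions, not the paper's):

```python
# Minimal "neural map" sketch: an MLP maps (x, y, z) -> density, and
# autograd provides a continuous spatial gradient of the learned field.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

xyz = torch.tensor([[0.1, 0.2, 0.3]], requires_grad=True)
density = net(xyz)
(grad,) = torch.autograd.grad(density.sum(), xyz)  # continuous gradient
print(density.item(), grad)
```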
The nodes of graphs created from atomic\nresolution maps (higher than 1.6 Angstroms) provide greater than 99% residue\ncoverage as well as 85% full atomic coverage with a mean of 0.19 Angstrom\nroot mean squared deviation (RMSD). Other graphs have a mean of 84% residue\ncoverage with less specificity of the nodes due to experimental noise and\ndifferences of density context at lower resolutions. This work may be\ngeneralized for transforming any 3D grid-based data format into a non-linear,\ncontinuous, and differentiable format for downstream geometric deep\nlearning applications.\n"} {"abstract": " Many real-life applications involve estimation of curves that exhibit\ncomplicated shapes including jumps or varying-frequency oscillations. Practical\nmethods have been devised that can adapt to a locally varying complexity of an\nunknown function (e.g. variable-knot splines, sparse wavelet reconstructions,\nkernel methods or trees/forests). However, the overwhelming majority of\nexisting asymptotic minimaxity theory is predicated on homogeneous smoothness\nassumptions. Focusing on locally Holderian functions, we provide new locally\nadaptive posterior concentration rate results under the supremum loss for\nwidely used Bayesian machine learning techniques in white noise and\nnon-parametric regression. In particular, we show that popular spike-and-slab\npriors and Bayesian CART are uniformly locally adaptive. In addition, we\npropose a new class of repulsive partitioning priors which relate to variable\nknot splines and which are exact-rate adaptive. For uncertainty quantification,\nwe construct locally adaptive confidence bands whose width depends on the local\nsmoothness and which achieve uniform asymptotic coverage under local\nself-similarity. To illustrate that spatial adaptation is not at all automatic,\nwe provide lower-bound results showing that popular hierarchical Gaussian\nprocess priors fall short of spatial adaptation.\n"} {"abstract": " Back-translation is an effective strategy to improve the performance of\nNeural Machine Translation~(NMT) by generating pseudo-parallel data. However,\nseveral recent works have found that better translation quality of the\npseudo-parallel data does not necessarily lead to better final translation\nmodels, while lower-quality but more diverse data often yields stronger\nresults. In this paper, we propose a novel method to generate pseudo-parallel\ndata from a pre-trained back-translation model. Our method is a meta-learning\nalgorithm which adapts a pre-trained back-translation model so that the\npseudo-parallel data it generates would train a forward-translation model to do\nwell on a validation set. In our evaluations on both the standard datasets WMT\nEn-De'14 and WMT En-Fr'14, as well as in a multilingual translation setting, our\nmethod leads to significant improvements over strong baselines. Our code will\nbe made available.\n"} {"abstract": " We investigate the problem of fast-forwarding quantum evolution, whereby the\ndynamics of certain quantum systems can be simulated with gate complexity that\nis sublinear in the evolution time. We provide a definition of fast-forwarding\nthat considers the model of quantum computation, the Hamiltonians that induce\nthe evolution, and the properties of the initial states. Our definition\naccounts for any asymptotic complexity improvement of the general case and we\nuse it to demonstrate fast-forwarding in several quantum systems. 
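One mechanism enabling this, used below for systems whose Hamiltonians can be efficiently block-diagonalized, is that once a diagonalizing transformation H = U D U† is in hand, evolving to any time t costs the same. A toy dense-matrix illustration of that principle (classical linear algebra for intuition, not a quantum-circuit implementation):

```python
# Sketch of the diagonalization mechanism behind fast-forwarding: after one
# eigendecomposition H = U D U^dagger, exp(-iHt) is cheap for any t.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                        # Hermitian toy "Hamiltonian"
evals, U = np.linalg.eigh(H)             # done once, reused for every t

def evolve(t):
    # cost independent of t: rescale columns of U by the phase factors
    return (U * np.exp(-1j * evals * t)) @ U.conj().T

t = 37.0
print(np.allclose(evolve(t), expm(-1j * H * t)))  # True
```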
In\nparticular, we show that some local spin systems whose Hamiltonians can be\ntaken into block diagonal form using an efficient quantum circuit, such as\nthose that are permutation-invariant, can be exponentially fast-forwarded. We\nalso show that certain classes of positive semidefinite local spin systems,\nalso known as frustration-free, can be polynomially fast-forwarded, provided\nthe initial state is supported on a subspace of sufficiently low energies.\nLast, we show that all quadratic fermionic systems and number-conserving\nquadratic bosonic systems can be exponentially fast-forwarded in a model where\nquantum gates are exponentials of specific fermionic or bosonic operators,\nrespectively. Our results extend the classes of physical Hamiltonians that were\npreviously known to be fast-forwarded, while not necessarily requiring methods\nthat diagonalize the Hamiltonians efficiently. We further develop a connection\nbetween fast-forwarding and precise energy measurements that also accounts for\npolynomial improvements.\n"} {"abstract": " Social networks' omnipresence and ease of use have revolutionized the\ngeneration and distribution of information in today's world. However, easy\naccess to information does not equal an increased level of public knowledge.\nUnlike traditional media channels, social networks also facilitate faster and\nwider spread of disinformation and misinformation. Viral spread of false\ninformation has serious implications for the behaviors, attitudes and beliefs of\nthe public, and ultimately can seriously endanger democratic processes.\nLimiting false information's negative impact through early detection and\ncontrol of extensive spread presents the main challenge facing researchers\ntoday. In this survey paper, we extensively analyze a wide range of different\nsolutions for the early detection of fake news in the existing literature. More\nprecisely, we examine Machine Learning (ML) models for the identification and\nclassification of fake news, online fake news detection competitions,\nstatistical outputs as well as the advantages and disadvantages of some of the\navailable data sets. Finally, we evaluate the online web browsing tools\navailable for detecting and mitigating fake news and present some open research\nchallenges.\n"} {"abstract": " To accommodate the explosive growth of the Internet-of-Things (IoT),\nincorporating interference alignment (IA) into existing multiple access (MA)\nschemes is under investigation. However, when it is applied in MIMO networks to\nimprove the system capacity, a problem regarding information delay\narises which does not meet the low-latency requirement. Therefore, in this\npaper, we first propose a new metric, degree of delay (DoD), to quantify the\nissue of information delay, and characterize DoD for three typical transmission\nschemes, i.e., TDMA, beamforming based TDMA (BD-TDMA), and retrospective\ninterference alignment (RIA). Our analysis shows that the DoD value\nmainly depends on three factors, i.e., the delay-sensitive factor, the size of the data\nset, and the queueing delay slot. The first two reflect the relationship between\nquality of service (QoS) and information delay sensitivity, and normalize the time\ncost for each symbol, respectively. These two factors are independent of the\ntransmission scheme, and thus we aim to reduce the queueing delay slot to\nimprove DoD. Herein, three novel joint IA schemes are proposed for MIMO\ndownlink networks with different numbers of users. 
That is, the hybrid antenna array\n(HAA) based partial interference elimination and retrospective interference\nregeneration scheme (HAA-PIE-RIR), the HAA based improved PIE and RIR scheme\n(HAA-IPIE-RIR), and the HAA based cyclic interference elimination and RIR scheme\n(HAA-CIE-RIR). Based on the first scheme, the second scheme extends the\napplication scenarios from $2$-user to $K$-user while causing a heavy\ncomputational burden. The third scheme relieves this computational burden,\nthough it has a certain degree-of-freedom (DoF) loss due to insufficient\nutilization of space resources.\n"} {"abstract": " We present new H$\alpha$ photometry for the Star-Formation Reference Survey\n(SFRS), a representative sample of star-forming galaxies in the local Universe.\nCombining these data with the panchromatic coverage of the SFRS, we provide\ncalibrations of H$\alpha$-based star-formation rates (SFRs) with and without\ncorrection for the contribution of [$\rm N_{II}$] emission. We consider the\neffect of extinction corrections based on the Balmer decrement, infrared excess\n(IRX), and spectral energy distribution (SED) fits. We compare the SFR\nestimates derived from SED fits, polycyclic aromatic hydrocarbons, hybrid\nindicators such as 24 $\mu$m + H$\alpha$, 8 $\mu$m + H$\alpha$, FIR + FUV, and\nH$\alpha$ emission for a sample of purely star-forming galaxies. We provide a\nnew calibration for 1.4 GHz-based SFRs by comparing to the H$\alpha$ emission,\nand we measure a dependence of the radio-to-H$\alpha$ emission ratio on\ngalaxy stellar mass. Active galactic nuclei introduce biases in the\ncalibrations of different SFR indicators but have only a minimal effect on the\ninferred SFR densities from galaxy surveys. Finally, we quantify the\ncorrelation between galaxy metallicity and extinction.\n"} {"abstract": " Fungal cells are capable of sensing extracellular cues through reception,\ntransduction and response systems which allow them to communicate with their\nhost and adapt to their environment. They display effective regulatory protein\nexpression which enhances and regulates their response and adaptation to a\nvariety of triggers such as stress, hormones, light, chemicals and host\nfactors. In our recent studies, we have shown that $Pleurotus$ oyster fungi\ngenerate electrical potential impulses in the form of spike events as a result\nof their exposure to environmental, mechanical and chemical triggers,\ndemonstrating that it is possible to discern the nature of the stimuli from the\nfungi's electrical responses. Harnessing the power of fungal sensing and\nintelligent capabilities, we explored the communication protocols of fungi as\nreporters of human chemical secretions such as hormones, addressing the\nquestion of whether fungi can sense human signals. We exposed $Pleurotus$ oyster fungi\nto cortisol, directly applied to the surface of a hemp shavings substrate\ncolonised by fungi, and recorded the electrical activity of the fungi. The response\nof the fungi to cortisol was additionally studied through the application of\nX-rays to identify changes in the fungal tissue, since cortisol received by the\nsubstrate can inhibit the flow of calcium and, in turn, reduce\nphysiological changes. 
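Spike events in such recordings are typically extracted with simple baseline subtraction and thresholding. A minimal illustrative sketch on a synthetic trace (the time base, threshold and spike shapes are assumptions, not the study's data or analysis pipeline):

```python
# Illustrative spike detection: moving-average baseline subtraction plus
# a fixed threshold, applied to a synthetic extracellular potential trace.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 3600, 1.0)                       # one sample per second
v = rng.normal(0, 0.02, size=t.size)              # baseline noise, mV
for t0 in (600, 1500, 2800):                      # three synthetic spikes
    v += 0.5 * np.exp(-0.5 * ((t - t0) / 20) ** 2)

baseline = np.convolve(v, np.ones(301) / 301, mode="same")
above = (v - baseline) > 0.2                      # 0.2 mV threshold
spike_starts = np.flatnonzero(np.diff(above.astype(int)) == 1)
print("detected spike onsets (s):", t[spike_starts])
```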
This study could pave the way for future research on\nadaptive fungal wearables capable of detecting physiological states of humans\nand biosensors made of living fungi.\n"} {"abstract": " Developers of AI-Intensive Systems--i.e., systems that involve both\n\"traditional\" software and Artificial Intelligence--are recognizing the need to\norganize development systematically and use engineered methods and tools. Since\nan AI-Intensive System (AIIS) relies heavily on software, it is expected that\nSoftware Engineering (SE) methods and tools can help. However, AIIS development\ndiffers from the development of \"traditional\" software systems in a few\nsubstantial aspects. Hence, traditional SE methods and tools are not suitable\nor sufficient by themselves and need to be adapted and extended. A quest for\n\"SE for AI\" methods and tools has started. We believe that, in this effort, we\nshould learn from experience and avoid repeating some of the mistakes made in\nthe quest for SE in past years. To this end, a fundamental instrument is a set\nof concepts and a notation to deal with AIIS and the problems that characterize\ntheir development processes. In this paper, we propose to describe AIIS via a\nnotation that was proposed for SE and embeds a set of concepts that are\nsuitable to represent AIIS as well. We demonstrate the usage of the notation by\nmodeling some characteristics that are particularly relevant for AIIS.\n"} {"abstract": " Tables on the web constitute a valuable data source for many applications,\nlike factual search and knowledge base augmentation. However, as genuine tables\ncontaining relational knowledge only account for a small proportion of tables\non the web, reliable genuine web table classification is a crucial first step\nof table extraction. Previous works usually rely on explicit feature\nconstruction from the HTML code. In contrast, we propose an approach for web\ntable classification by exploiting the full visual appearance of a table, which\nworks purely by applying a convolutional neural network on the rendered image\nof the web table. Since these visual features can be extracted automatically,\nour approach circumvents the need for explicit feature construction. A new\nhand-labeled gold-standard dataset containing HTML source code and images for 13,112\ntables was generated for this task. Transfer learning techniques are applied to\nthe well-known VGG16 and ResNet50 architectures. The evaluation of CNN image\nclassification with fine-tuned ResNet50 (F1 93.29%) shows that this approach\nachieves results comparable to previous solutions using explicitly defined HTML\ncode based features. By combining visual and explicit features, an F-measure of\n93.70% can be achieved by Random Forest classification, which beats current\nstate-of-the-art methods.\n"} {"abstract": " We show an application of a subdiffusion equation with a Caputo fractional time\nderivative with respect to another function $g$ to describe subdiffusion in a\nmedium having a structure evolving over time. In this case a continuous\ntransition from subdiffusion to another type of diffusion may occur. The process\ncan be interpreted as \"ordinary\" subdiffusion with a fixed subdiffusion parameter\n(subdiffusion exponent) $\\alpha$ in which the time scale is changed by the function\n$g$. As an example, we consider the transition from \"ordinary\" subdiffusion to\nultraslow diffusion. The function $g$ generates the additional aging process\nsuperimposed on the \"standard\" aging generated by \"ordinary\" subdiffusion.
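As a point of reference for the operator used here: a standard form of the Caputo derivative with respect to an increasing function $g$, for $0<\\alpha<1$ (conventions and normalizations vary in the literature, so this is indicative rather than the paper's exact definition), reads

\\[ {}^{C}D^{\\alpha;g}_{t} u(t) = \\frac{1}{\\Gamma(1-\\alpha)} \\int_{0}^{t} \\big(g(t)-g(s)\\big)^{-\\alpha}\\, u'(s)\\, ds, \\]

which reduces to the usual Caputo derivative when $g(t)=t$.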
The\naging process is analyzed using the coefficient of relative aging of\n$g$-subdiffusion with respect to \"ordinary\" subdiffusion. The method of\nsolving the $g$-subdiffusion equation is also presented.\n"} {"abstract": " Galaxies can be classified as passive ellipticals or star-forming discs.\nEllipticals dominate at the high end of the mass range, and therefore there\nmust be a mechanism responsible for the quenching of star-forming galaxies.\nThis could be due either to secular processes linked to the mass and star\nformation of galaxies or to external processes linked to the surrounding\nenvironment. In this paper, we analytically model the processes that govern\ngalaxy evolution and quantify their contribution. We have specifically studied\nthe effects of mass quenching, gas stripping, and mergers on galaxy quenching.\nTo achieve this, we first assumed a set of differential equations that describe\nthe processes that shape galaxy evolution. We then modelled the parameters of\nthese equations by maximising likelihood. These equations describe the\nevolution of galaxies individually, but the parameters of the equations are\nconstrained by matching the extrapolated intermediate-redshift galaxies with\nthe low-redshift galaxy population. In this study, we modelled the processes\nthat change star formation and stellar mass in massive galaxies from the GAMA\nsurvey between z~0.4 and the present. We identified and quantified the\ncontributions from mass quenching, gas stripping, and mergers to galaxy\nquenching. The quenching timescale is on average 1.2 Gyr and a closer look\nreveals support for the slow-then-rapid quenching scenario. The major merging\nrate of galaxies is about once per 10~Gyr, while the rate of ram pressure\nstripping is significantly higher. In galaxies with decreasing star formation,\nwe show that star formation is lost to fast quenching mechanisms such as ram\npressure stripping, countered by mergers, at a rate of about 41% Gyr$^{-1}$,\nand to mass quenching at 49% Gyr$^{-1}$. (abridged)\n"} {"abstract": " Low-rank tensors are an established framework for high-dimensional\nleast-squares problems. We propose to extend this framework by including the\nconcept of block-sparsity. In the context of polynomial regression, each\nsparsity pattern corresponds to some subspace of homogeneous multivariate\npolynomials. This allows us to adapt the ansatz space to align better with\nknown sample complexity results. The resulting method is tested in numerical\nexperiments and demonstrates improved computational resource utilization and\nsample efficiency.\n"} {"abstract": " We construct the global phase portraits of inflationary dynamics in\nteleparallel gravity models with a scalar field nonminimally coupled to the\ntorsion scalar. The adopted set of variables can clearly distinguish between\ndifferent asymptotic states as fixed points, including the kinetic and\ninflationary regimes. The key role in the description of inflation is played by\nthe heteroclinic orbits which run from the asymptotic saddle points to the\nlate-time attractor point and are approximated by nonminimal slow-roll\nconditions. To seek the asymptotic fixed points we outline a heuristic method\nin terms of the \"effective potential\" and \"effective mass\", which can be\napplied to any nonminimally coupled theory.
As particular examples we study positive\nquadratic nonminimal couplings with quadratic and quartic potentials, and note\nhow the portraits differ qualitatively from the known scalar-curvature\ncounterparts. For quadratic models, inflation can occur only at small nonminimal\ncoupling to torsion, as for larger coupling the asymptotic de Sitter saddle\npoint disappears from the physical phase space. Teleparallel models with\nquartic potentials are not viable for inflation at all, since for small\nnonminimal coupling the asymptotic saddle point exhibits weaker-than-exponential\nexpansion, and for larger coupling it disappears altogether.\n"} {"abstract": " We consider a dynamic network of individuals that may hold one of two\ndifferent opinions in a two-party society. As a dynamical model, agents can\nendlessly create and delete links to satisfy a preferred degree, and the\nnetwork is shaped by \\emph{homophily}, a form of social interaction.\nCharacterized by the parameter $J \\in [-1,1]$, the latter plays a role similar\nto Ising spins: agents create links to others of the same opinion with\nprobability $(1+J)/2$, and delete them with probability $(1-J)/2$. Using Monte\nCarlo simulations and mean-field theory, we focus on the network structure in\nthe steady state. We study the effects of $J$ on degree distributions and the\nfraction of cross-party links. While the extreme cases of homophily or\nheterophily ($J= \\pm 1$) are easily understood to result in complete\npolarization or anti-polarization, intermediate values of $J$ lead to\ninteresting features of the network. Our model exhibits the intriguing feature\nof an \"overwhelming transition\" occurring when communities of different sizes\nare subject to sufficient heterophily: agents of the minority group are\noversubscribed and their average degree greatly exceeds that of the majority\ngroup. In addition, we introduce an original measure of polarization which\ndisplays distinct advantages over the commonly used average edge homogeneity.\n"} {"abstract": " Learning-based representation has become the key to the success of many\ncomputer vision systems. While many 3D representations have been proposed, it\nis still an unaddressed problem how to represent a dynamically changing 3D\nobject. In this paper, we introduce a compositional representation for 4D\ncaptures, i.e., a deforming 3D object over a temporal span, that disentangles\nshape, initial state, and motion. Each component is represented by\na latent code via a trained encoder. To model the motion, a neural Ordinary\nDifferential Equation (ODE) is trained to update the initial state conditioned\non the learned motion code, and a decoder takes the shape code and the updated\nstate code to reconstruct the 3D model at each time stamp. To this end, we\npropose an Identity Exchange Training (IET) strategy to encourage the network\nto learn to effectively decouple each component. Extensive experiments\ndemonstrate that the proposed method outperforms existing state-of-the-art deep\nlearning based methods on 4D reconstruction, and significantly improves on\nvarious tasks, including motion transfer and completion.\n"} {"abstract": " In this paper, we discuss the In\\\"on\\\"u-Wigner contraction of the conformal\nalgebra. We start with the light-cone form of the Poincar\\'e algebra and extend\nit to write down the conformal algebra in $d$ dimensions.
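To make the link dynamics of the two-opinion network model above concrete, here is a toy Monte Carlo update implementing the stated probabilities $(1\\pm J)/2$; the preferred degree $\\kappa$ and the bookkeeping details are simplified stand-ins, not the paper's implementation.

    import random

    def step(links, opinions, J, kappa=5):
        # One toy update: agent i adds a link while below its preferred
        # degree kappa, otherwise cuts one. Same-opinion links are created
        # with probability (1+J)/2 and deleted with probability (1-J)/2.
        n = len(opinions)
        i = random.randrange(n)
        degree = sum(1 for link in links if i in link)
        if degree < kappa:
            j = random.choice([k for k in range(n) if k != i])
            p = (1 + J) / 2 if opinions[i] == opinions[j] else (1 - J) / 2
            if random.random() < p:
                links.add(frozenset((i, j)))
        else:
            link = random.choice([l for l in links if i in l])
            j = next(iter(link - {i}))
            p = (1 - J) / 2 if opinions[i] == opinions[j] else (1 + J) / 2
            if random.random() < p:
                links.discard(link)

    links, opinions = set(), [1, 1, -1, -1]
    for _ in range(10_000):
        step(links, opinions, J=0.5)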
To contract the\nconformal algebra, we choose five dimensions for simplicity and compactify the\nthird transverse direction into a circle of radius $R$ following the Kaluza-Klein\ndimensional reduction method. We identify the inverse radius, $1/R$, as the\ncontraction parameter. After the contraction, the resulting representation is\nfound to be the continuous spin representation in four dimensions. Even though\nthe scaling symmetry survives the contraction, the special conformal\ntranslation vector changes and behaves like the four-momentum vector. We also\ndiscuss the generalization to $d$ dimensions.\n"} {"abstract": " Predicting (1) when the next hospital admission occurs and (2) what will\nhappen in the next admission for a patient by mining electronic health record\n(EHR) data can provide granular readmission predictions to assist clinical\ndecision making. Recurrent neural network (RNN) and point process models are\nusually employed in modelling temporal sequential data. Simple RNN models\nassume that sequences of hospital visits follow strict causal dependencies\nbetween consecutive visits. However, in the real world, a patient may have\nmultiple co-existing chronic medical conditions, i.e., multimorbidity, which\nresults in a cascade of visits where a non-immediate historical visit can be\nmost influential to the next visit. Although a point process (e.g., Hawkes\nprocess) is able to model a cascade temporal relationship, it strongly relies\non a prior generative process assumption. We propose a novel model, MEDCAS, to\naddress these challenges. MEDCAS combines the strengths of RNN-based models and\npoint processes by integrating point processes in modelling visit types and\ntime gaps into an attention-based sequence-to-sequence learning model, which is\nable to capture the temporal cascade relationships. To supplement the patients\nwith short visit sequences, a structural modelling technique with graph-based\nmethods is used to construct the markers of the point process in MEDCAS.\nExtensive experiments on three real-world EHR datasets have been performed and\nthe results demonstrate that \\texttt{MEDCAS} outperforms state-of-the-art\nmodels in both tasks.\n"} {"abstract": " The use of non-differentiable priors in Bayesian statistics has become\nincreasingly popular, in particular in Bayesian imaging analysis. Current\nstate-of-the-art methods are approximate in the sense that they replace the posterior\nwith a smooth approximation via Moreau-Yosida envelopes, and apply\ngradient-based discretized diffusions to sample from the resulting\ndistribution. We characterize the error of the Moreau-Yosida approximation and\npropose a novel implementation using underdamped Langevin dynamics. In\nmission-critical cases, however, replacing the posterior with an approximation\nmay not be a viable option. Instead, we show that Piecewise-Deterministic\nMarkov Processes (PDMP) can be utilized for exact posterior inference from\ndistributions satisfying almost everywhere differentiability. Furthermore, in\ncontrast with diffusion-based methods, the suggested PDMP-based samplers place\nno assumptions on the prior shape, nor require access to a computationally\ncheap proximal operator, and consequently have a much broader scope of\napplication.
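For reference, the Moreau-Yosida envelope and proximal map mentioned above have the standard definitions (not specific to this paper's notation)

\\[ f^{\\lambda}(x) = \\min_{y}\\Big\\{ f(y) + \\tfrac{1}{2\\lambda}\\|x-y\\|^{2} \\Big\\}, \\qquad \\operatorname{prox}_{\\lambda f}(x) = \\arg\\min_{y}\\Big\\{ f(y) + \\tfrac{1}{2\\lambda}\\|x-y\\|^{2} \\Big\\}, \\]

where $\\lambda>0$ controls the amount of smoothing; for convex $f$ the envelope is continuously differentiable even when $f$ itself is not.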
Through detailed numerical examples, including a\nnon-differentiable circular distribution and a non-convex genomics model, we\nelucidate the relative strengths of these sampling methods on problems of\nmoderate to high dimensions, underlining the benefits of PDMP-based methods\nwhen accurate sampling is decisive.\n"} {"abstract": " Ultra sparse-view computed tomography (CT) algorithms can reduce the radiation\nexposure of patients, but those algorithms lack an explicit cycle consistency\nloss minimization and an explicit log-likelihood maximization in testing. Here,\nwe propose X2CT-FLOW for the maximum a posteriori (MAP) reconstruction of a\nthree-dimensional (3D) chest CT image from a single or a few two-dimensional\n(2D) projection images using a progressive flow-based deep generative model,\nespecially for ultra low-dose protocols. The MAP reconstruction can\nsimultaneously optimize the cycle consistency loss and the log-likelihood. The\nproposed algorithm is built upon a newly developed progressive flow-based deep\ngenerative model, which features exact log-likelihood estimation,\nefficient sampling, and progressive learning. We applied X2CT-FLOW to the\nreconstruction of 3D chest CT images from biplanar projection images without\nnoise contamination (assuming a standard-dose protocol) and with strong noise\ncontamination (assuming an ultra low-dose protocol). With the standard-dose\nprotocol, our images reconstructed from 2D projection images and the 3D ground-truth\nCT images showed good agreement in terms of structural similarity (SSIM, 0.7675\non average), peak signal-to-noise ratio (PSNR, 25.89 dB on average), mean\nabsolute error (MAE, 0.02364 on average), and normalized root mean square error\n(NRMSE, 0.05731 on average). Moreover, with the ultra low-dose protocol, our\nimages reconstructed from 2D projection images and the 3D ground-truth CT images\nalso showed good agreement in terms of SSIM (0.7008 on average), PSNR (23.58 dB\non average), MAE (0.02991 on average), and NRMSE (0.07349 on average).\n"} {"abstract": " In this paper, we have studied the performance and role of local optimizers\nin quantum variational circuits. We studied the performance of the two most\npopular optimizers and compared their results with some popular classical\nmachine learning algorithms. The classical algorithms we used in our study are\nsupport vector machine (SVM), gradient boosting (GB), and random forest (RF).\nThese were compared with a variational quantum classifier (VQC) using two sets\nof local optimizers, viz. AQGD and COBYLA. For experimenting with VQC, IBM\nQuantum Experience and IBM Qiskit were used, while for classical machine learning\nmodels, scikit-learn was used. The results show that machine learning on noisy\nintermediate-scale quantum (NISQ) machines can produce results comparable to\nthose on classical machines. For our experiments, we have used a popular\nrestaurant sentiment analysis dataset. Features were extracted from this\ndataset, and PCA was then applied to reduce the feature set to 5 features.\nQuantum ML models were trained for 100 and 150 epochs using the EfficientSU2\nvariational circuit. Overall, four quantum ML models and three classical ML\nmodels were trained. The performance of the trained models was evaluated using\nstandard evaluation measures, viz. accuracy, precision, recall, and F-score. In\nall cases, the AQGD-based model trained for 100 epochs performed better than all\nother models.
It produced an accuracy of 77% and an F-score of 0.785, which were the\nhighest across all the trained models.\n"} {"abstract": " Classification algorithms have been widely adopted to detect anomalies for\nvarious systems, e.g., IoT, cloud and face recognition, under the common\nassumption that the data source is clean, i.e., features and labels are\ncorrectly set. However, data collected from the wild can be unreliable due to\ncareless annotations or malicious data transformation for incorrect anomaly\ndetection. In this paper, we extend a two-layer on-line data selection\nframework, Robust Anomaly Detector (RAD), with a newly designed ensemble\nprediction in which both layers contribute to the final anomaly detection\ndecision. To adapt to the on-line nature of anomaly detection, we consider\nadditional features of conflicting opinions of classifiers, repetitive\ncleaning, and oracle knowledge. We learn on-line from incoming data streams and\ncontinuously cleanse the data, so as to adapt to the increasing learning\ncapacity from the larger accumulated data set. Moreover, we explore the concept\nof oracle learning that provides additional true-label information for\ndifficult data points. We specifically focus on three use cases, (i) detecting\n10 classes of IoT attacks, (ii) predicting 4 classes of task failures of big\ndata jobs, and (iii) recognising the faces of 100 celebrities. Our evaluation results\nshow that RAD can robustly improve the accuracy of anomaly detection, to reach\nup to 98.95% for IoT device attacks (i.e., +7%), up to 85.03% for cloud task\nfailures (i.e., +14%) under 40% label noise, and for its extension, it can\nreach up to 77.51% for face recognition (i.e., +39%) under 30% label noise. The\nproposed RAD and its extensions are general and can be applied to different\nanomaly detection algorithms.\n"} {"abstract": " So far the null results from axion searches have enforced a huge hierarchy\nbetween the Peccei-Quinn and electroweak symmetry breaking scales. The\ninevitable Higgs portal then imposes a severe fine-tuning on the standard model\nHiggs scalar. We find that if the Peccei-Quinn global symmetry has a set of\nresidual discrete symmetries, these global and discrete symmetries can achieve\na chain of breakings at low scales, such as the accessible TeV scale. This novel\nmechanism can accommodate some new phenomena, including a sizable coupling of\nthe standard model Higgs boson to the axion.\n"} {"abstract": " BaNi$_{2}$As$_{2}$ is a non-magnetic analogue of BaFe$_{2}$As$_{2}$, the\nparent compound of a prototype ferro-pnictide high-temperature superconductor.\nRecent diffraction studies on BaNi$_{2}$As$_{2}$ demonstrate the existence of\ntwo types of periodic lattice distortions above and below the tetragonal to\ntriclinic phase transition, suggesting that charge-density-wave (CDW) order\ncompetes with superconductivity. We apply time-resolved optical spectroscopy and\ndemonstrate the existence of collective CDW amplitude modes.
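For orientation, the classical side of the variational-quantum-classifier study above can be sketched in a few lines of scikit-learn; the five PCA components match the abstract, while the data loading and SVM settings are placeholders.

    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Reduce the extracted text features to 5 components, then fit an SVM.
    # X_train, y_train stand in for the (not publicly specified) features.
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())
    # clf.fit(X_train, y_train)
    # print(clf.score(X_test, y_test))   # accuracy, cf. the reported 77%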
The smooth\nevolution of these modes through the structural phase transition implies that the\nCDW order in the triclinic phase evolves smoothly from the unidirectional CDW\nin the tetragonal phase and suggests that the CDW order drives the structural\nphase transition.\n"} {"abstract": " The solenoid scan is one of the most common methods for the in-situ\nmeasurement of the thermal emittance of a photocathode in an rf photoinjector.\nThe fringe field of the solenoid overlaps with the gun rf field in quite a\nnumber of photoinjectors, which makes accurate knowledge of the transfer matrix\nchallenging, thus increasing the measurement uncertainty of the thermal\nemittance. This paper summarizes two methods that have been used to solve the\noverlap issue and explains their deficiencies. Furthermore, we provide a new\nmethod to eliminate the measurement error due to the overlap issue in solenoid\nscans. The new method is systematically demonstrated using theoretical\nderivations, beam dynamics simulations, and experimental data based on the\nphotoinjector configurations from three different groups, proving that the\nmeasurement error with the new method is very small and can be ignored in most\nphotoinjector configurations.\n"} {"abstract": " Reinforcement learning (RL) research focuses on general solutions that can be\napplied across different domains. This results in methods that RL practitioners\ncan use in almost any domain. However, recent studies often lack the\nengineering steps (\"tricks\") that may be needed to use RL effectively, such as\nreward shaping, curriculum learning, and splitting a large task into smaller\nchunks. Such tricks are common, if not necessary, to achieve state-of-the-art\nresults and win RL competitions. To ease the engineering effort, we distill\ndescriptions of tricks from state-of-the-art results and study how well these\ntricks can improve a standard deep Q-learning agent. The long-term goal of this\nwork is to enable combining proven RL methods with domain-specific tricks by\nproviding a unified software framework and accompanying insights in multiple\ndomains.\n"} {"abstract": " We study private synthetic data generation for query release, where the goal\nis to construct a sanitized version of a sensitive dataset, subject to\ndifferential privacy, that approximately preserves the answers to a large\ncollection of statistical queries. We first present an algorithmic framework\nthat unifies a long line of iterative algorithms in the literature. Under this\nframework, we propose two new methods. The first method, private entropy\nprojection (PEP), can be viewed as an advanced variant of MWEM that adaptively\nreuses past query measurements to boost accuracy. Our second method, generative\nnetworks with the exponential mechanism (GEM), circumvents computational\nbottlenecks in algorithms such as MWEM and PEP by optimizing over generative\nmodels parameterized by neural networks, which capture a rich family of\ndistributions while enabling fast gradient-based optimization. We demonstrate\nthat PEP and GEM empirically outperform existing algorithms.
Furthermore, we\nshow that GEM nicely incorporates prior information from public data while\novercoming limitations of PMW^Pub, the existing state-of-the-art method that\nalso leverages public data.\n"} {"abstract": " We show that a one-dimensional regular continuous Markov process \\(X\\) with\nscale function \\(s\\) is a Feller--Dynkin process precisely if the\nspace-transformed process \\(s(X)\\) is a martingale when stopped at the boundaries of\nits state space. As a consequence, the Feller--Dynkin and the martingale\nproperty are equivalent for regular diffusions on natural scale with open state\nspace. By means of a counterexample, we also show that this equivalence fails\nfor multi-dimensional diffusions. Moreover, for It\\^o diffusions we discuss\nrelations to Cauchy problems.\n"} {"abstract": " Recent facial image synthesis methods have been mainly based on conditional\ngenerative models. Sketch-based conditions can effectively describe the\ngeometry of faces, including the contours of facial components, hair\nstructures, as well as salient edges (e.g., wrinkles) on face surfaces, but lack\neffective control of appearance, which is influenced by color, material,\nlighting condition, etc. To have more control of generated results, one\npossible approach is to apply existing disentangling works to disentangle face\nimages into geometry and appearance representations. However, existing\ndisentangling methods are not optimized for human face editing, and cannot\nachieve fine control of facial details such as wrinkles. To address this issue,\nwe propose DeepFaceEditing, a structured disentanglement framework specifically\ndesigned for face images to support face generation and editing with\ndisentangled control of geometry and appearance. We adopt a local-to-global\napproach to incorporate the face domain knowledge: local component images are\ndecomposed into geometry and appearance representations, which are fused\nconsistently using a global fusion module to improve generation quality. We\nexploit sketches to assist in extracting a better geometry representation,\nwhich also supports intuitive geometry editing via sketching. The resulting\nmethod can either extract the geometry and appearance representations from face\nimages, or directly extract the geometry representation from face sketches.\nSuch representations allow users to easily edit and synthesize face images,\nwith decoupled control of their geometry and appearance. Both qualitative and\nquantitative evaluations show the superior detail and appearance control\nabilities of our method compared to state-of-the-art methods.\n"} {"abstract": " We provide comprehensive regularity results and optimal conditions for a\ngeneral class of functionals involving Orlicz multi-phase of the type\n\\begin{align}\n \\label{abst:1}\n v\\mapsto \\int_{\\Omega} F(x,v,Dv)\\,dx, \\end{align} exhibiting non-standard\ngrowth conditions and non-uniformly elliptic properties.\n The model functional under consideration is given by the Orlicz multi-phase\nintegral \\begin{align}\n \\label{abst:2}\n v\\mapsto \\int_{\\Omega} f(x,v)\\left[ G(|Dv|) +\n\\sum\\limits_{k=1}^{N}a_k(x)H_{k}(|Dv|) \\right]\\,dx,\\quad N\\geqslant 1,\n\\end{align} where $G,H_{k}$ are $N$-functions and $ 0\\leqslant a_{k}(\\cdot)\\in\nL^{\\infty}(\\Omega) $ with $0 < \\nu \\leqslant f(\\cdot) \\leqslant L$.
Its\nellipticity ratio varies according to the geometry of the level sets\n$\\{a_{k}(x)=0\\}$ of the modulating coefficient functions $a_{k}(\\cdot)$ for\nevery $k\\in \\{1,\\ldots,N\\}$.\n We give a unified treatment to show various regularity results for such\nmulti-phase problems with the coefficient functions\n$\\{a_{k}(\\cdot)\\}_{k=1}^{N}$ not necessarily H\\\"older continuous even for a\nlower level of the regularity. Moreover, assuming that minima of the functional\nabove belong to better spaces such as $C^{0,\\gamma}(\\Omega)$ or\n$L^{\\kappa}(\\Omega)$ for some $\\gamma\\in (0,1)$ and $\\kappa\\in (1,\\infty]$, we\naddress optimal conditions on the nonlinearity for each variant under which we\nbuild comprehensive regularity results.\n On the other hand, since there is a lack of homogeneity properties in the\nnonlinearity, we consider an appropriate scaling that keeps the structure of\nthe problems, under which we apply harmonic-type approximation in a setting\nthat varies with the a priori assumption on minima. We believe that the methods and\nproofs developed in this paper are suitable for building regularity theorems for a\nlarger class of non-autonomous functionals.\n"} {"abstract": " We study thermodynamic processes in contact with a heat bath that may have an\narbitrary time-varying periodic temperature profile. Within the framework of\nstochastic thermodynamics, and for models of thermodynamic engines in the\nidealized case of underdamped particles in the low-friction regime, we derive\nexplicit bounds as well as optimal control protocols that draw maximum power\nand achieve maximum efficiency at any specified level of power.\n"} {"abstract": " We experimentally investigate the effect of electron temperature on transport\nin the two-dimensional Dirac surface states of the three-dimensional\ntopological insulator HgTe. We find that around the minimal conductivity point,\nwhere both electrons and holes are present, heating the carriers with a DC\ncurrent results in a non-monotonic differential resistance of narrow channels.\nWe show that the observed initial increase in resistance can be attributed to\nelectron-hole scattering, while the decrease follows naturally from the change\nin Fermi energy of the charge carriers. Both effects are governed dominantly by\na van Hove singularity in the bulk valence band. The results demonstrate the\nimportance of interband electron-hole scattering in the transport properties of\ntopological insulators.\n"} {"abstract": " In this work, we present a program in the computational environment GeoGebra\nthat enables a graphical study of Newton's Method. Using this\ncomputational device, we analyze the convergence of Newton's Method applied to\nvarious examples of real functions. We then give a guide to the\nconstruction of the program in GeoGebra.\n"} {"abstract": " In a recent paper, Fakkousy et al. show that the 3D H\\'{e}non-Heiles system with\nHamiltonian $ H = \\frac{1}{2} (p_1 ^2 + p_2 ^2 + p_3 ^2) +\\frac{1}{2} (A q_1 ^2\n+ C q_2 ^2 + B q_3 ^2) + (\\alpha q_1 ^2 + \\gamma q_2 ^2)q_3 +\n\\frac{\\beta}{3}q_3 ^3 $ is integrable in the sense of Liouville when $\\alpha =\n\\gamma, \\frac{\\alpha}{\\beta} = 1, A = B = C$; or $\\alpha = \\gamma,\n\\frac{\\alpha}{\\beta} = \\frac{1}{6}, A = C$, $B$ arbitrary; or $\\alpha = \\gamma,\n\\frac{\\alpha}{\\beta} = \\frac{1}{16}, A = C, \\frac{A}{B} = \\frac{1}{16}$ (and of\ncourse, when $\\alpha=\\gamma=0$, in which case the Hamiltonian is separable).
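Since the GeoGebra abstract above centers on Newton's Method, a bare-bones reference implementation may help fix ideas; the stopping rule and example function are arbitrary choices for the sketch.

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        # Plain Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n).
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("no convergence from this starting point")

    # Example: the positive root of f(x) = x^2 - 2.
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...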
It\nis known that the second case remains integrable for $A, C, B$ arbitrary. Using\nMorales-Ramis theory, we prove that there are no other cases of integrability\nfor this system.\n"} {"abstract": " We derive novel explicit formulas for the inverses of truncated block\nToeplitz matrices that correspond to a multivariate minimal stationary process.\nThe main ingredients of the formulas are the Fourier coefficients of the phase\nfunction attached to the spectral density of the process. The derivation of the\nformulas is based on a recently developed finite prediction theory applied to\nthe dual process of the stationary process. We illustrate the usefulness of the\nformulas by two applications. The first one is a strong convergence result for\nsolutions of general block Toeplitz systems for a multivariate short-memory\nprocess. The second application is closed-form formulas for the inverses of\ntruncated block Toeplitz matrices corresponding to a multivariate ARMA process.\nThe significance of the latter is that they provide us with a linear-time\nalgorithm to compute the solutions of the corresponding block Toeplitz systems.\n"} {"abstract": " In this article, we consider mixed local and nonlocal Sobolev\n$(q,p)$-inequalities with extremal in the case $0<q<1$.\n"} {"abstract": " Our\nmain tool is a description of the regularly varying tails of $\\mu$ in terms of\nthe behavior of the corresponding $S$-transform at $0^-$. We also describe the\ntails of $\\boxtimes$ infinitely divisible measures in terms of the tails of the\ncorresponding L\\'evy measure, treat symmetric measures with regularly varying\ntails and prove the free analog of the Breiman lemma.\n"} {"abstract": " Motivated by the need for {\\em social distancing} during a pandemic, we\nconsider an approach to schedule the visitors of a facility (e.g., a general\nstore). Our algorithms take input from the citizens and schedule the store's\ndiscrete time-slots based on how important it is for each citizen to visit the\nfacility. Naturally, the formulation applies to several similar problems. We\nconsider {\\em indivisible} job requests that take single or multiple slots to\ncomplete. The salient properties of our approach are: it (a)~ensures social\ndistancing by enforcing a maximum population in a given time-slot at the\nfacility, (b)~aims to prioritize individuals based on the importance of the\njobs, (c)~maintains truthfulness of the reported importance by adding a {\\em\ncooling-off} period after their allocated time-slot, during which the\nindividual cannot re-access the same facility, (d)~guarantees voluntary\nparticipation of the citizens, and yet (e)~is computationally tractable. The\nmechanisms we propose are prior-free. We show that the problem becomes\nNP-complete for indivisible multi-slot demands, and provide a polynomial-time\nmechanism that is truthful, individually rational, and approximately optimal.\nExperiments with data collected from a store show that visitors with more\nimportant (single-slot) jobs are allocated more preferred slots, which comes at\nthe cost of a longer cooling-off period and significantly reduces social\ncongestion. For the multi-slot jobs, our mechanism yields a reasonable\napproximation while reducing the computation time significantly.\n"} {"abstract": " Customization is a general trend in software engineering, demanding systems\nthat support variable stakeholder requirements. Two opposing strategies are\ncommonly used to create variants: software clone & own and software\nconfiguration with an integrated platform.
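A toy version of the slot-allocation idea in the facility-scheduling abstract above, for single-slot jobs only; the greedy order, the capacity, and the importance-proportional cooling-off are illustrative simplifications, not the paper's mechanism.

    def schedule(requests, n_slots, capacity, base_cooloff=1):
        # Greedy sketch: serve visitors in order of reported importance,
        # give each the earliest slot with spare capacity, and attach a
        # cooling-off period that grows with the reported importance
        # (the truthfulness device described in the abstract).
        load = [0] * n_slots
        plan = {}
        for name, importance in sorted(requests, key=lambda r: -r[1]):
            for t in range(n_slots):
                if load[t] < capacity:
                    load[t] += 1
                    plan[name] = (t, base_cooloff * importance)
                    break
        return plan

    print(schedule([("a", 3), ("b", 1), ("c", 2)], n_slots=2, capacity=2))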
Organizations often start with the\nformer, which is cheap, agile, and supports quick innovation, but does not\nscale. The latter scales by establishing an integrated platform that shares\nsoftware assets between variants, but requires high up-front investments or\nrisky migration processes. So, could we have a method that allows an easy\ntransition or even combines the benefits of both strategies? We propose a method\nand tool that supports a truly incremental development of variant-rich systems,\nexploiting a spectrum between both opposing strategies. We design, formalize,\nand prototype the variability-management framework virtual platform. It bridges\nclone & own and platform-oriented development. Relying on\nprogramming-language-independent conceptual structures representing software\nassets, it offers operators for engineering and evolving a system, comprising:\ntraditional, asset-oriented operators and novel, feature-oriented operators for\nincrementally adopting concepts of an integrated platform. The operators record\nmeta-data that is exploited by other operators to support the transition. Among\nothers, they eliminate expensive feature-location effort or the need to trace\nclones. Our evaluation simulates the evolution of a real-world, clone-based\nsystem, measuring its costs and benefits.\n"} {"abstract": " The Kronecker product-based algorithm for context-free path querying (CFPQ)\nwas proposed by Orachev et al. (2020). We reduce this algorithm to operations\nover Boolean matrices and extend it with a mechanism to extract all paths of\ninterest. We also prove an $O(n^3/\\log{n})$ time complexity bound for the proposed\nalgorithm, where $n$ is the number of vertices of the input graph. Thus, we provide\nan alternative way to construct a slightly subcubic algorithm for CFPQ, which\nis based on linear algebra and incremental transitive closure (a classic\ngraph-theoretic problem), as opposed to the algorithm with the same complexity\nproposed by Chaudhuri (2008). Our evaluation shows that our algorithm is a good\ncandidate to be the universal algorithm for both regular and context-free path\nquerying.\n"} {"abstract": " We estimate the black hole spin parameter in GRS 1915+105 using the\ncontinuum-fitting method with revised mass and inclination constraints based on\nthe very long baseline interferometric parallax measurement of the distance to\nthis source. We fit Rossi X-ray Timing Explorer observations selected to be\naccretion disk-dominated spectral states as described in McClintock et al.\n(2006) and Middleton et al. (2006), which previously gave discrepant spin\nestimates with this method. We find that, using the new system parameters, the\nspin in both datasets increased, providing a best-fit spin of $a_*=0.86$ for\nthe Middleton et al. data and a poor fit for the McClintock et al. dataset,\nwhich becomes pegged at the BHSPEC model limit of $a_*=0.99$. We explore the\nimpact of the uncertainties in the system parameters, showing that the best-fit\nspin ranges from $a_*= 0.4$ to 0.99 for the Middleton et al. dataset and allows\nreasonable fits to the McClintock et al. dataset with near-maximal spin for\nsystem distances greater than $\\sim 10$ kpc. We discuss the uncertainties and\nimplications of these estimates.\n"} {"abstract": " The CHIME/FRB Project has recently released its first catalog of fast radio\nbursts (FRBs), containing 492 unique sources. We present results from angular\ncross-correlations of CHIME/FRB sources with galaxy catalogs.
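The Boolean-matrix flavour of the Kronecker-product CFPQ algorithm above can be illustrated on toy adjacency matrices; the two graphs below and the naive squaring-based closure are purely for exposition, not the paper's optimized routine.

    import numpy as np

    def transitive_closure(m):
        # Repeated Boolean squaring until a fixed point is reached.
        closure = m.copy()
        while True:
            step = (closure.astype(int) @ closure.astype(int)) > 0
            nxt = closure | step
            if (nxt == closure).all():
                return closure
            closure = nxt

    a = np.array([[0, 1], [0, 0]], dtype=bool)  # toy graph: edge 0 -> 1
    b = np.array([[0, 1], [1, 0]], dtype=bool)  # toy automaton transitions
    print(transitive_closure(np.kron(a, b)).astype(int))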
We find a\nstatistically significant ($p$-value $\\sim 10^{-4}$, accounting for\nlook-elsewhere factors) cross-correlation between CHIME FRBs and galaxies in\nthe redshift range $0.3 \\lesssim z \\lesssim 0.5$, in three photometric galaxy\nsurveys: WISE$\\times$SCOS, DESI-BGS, and DESI-LRG. The level of\ncross-correlation is consistent with an order-one fraction of the CHIME FRBs\nbeing in the same dark matter halos as survey galaxies in this redshift range.\nWe find statistical evidence for a population of FRBs with large host\ndispersion measure ($\\sim 400$ pc cm$^{-3}$), and show that this can plausibly\narise from gas in large halos ($M \\sim 10^{14} M_\\odot$), for FRBs near the\nhalo center ($r \\lesssim 100$ kpc). These results will improve in future\nCHIME/FRB catalogs, with more FRBs and better angular resolution.\n"} {"abstract": " 3D object detection with a single image is an essential and challenging task\nfor autonomous driving. Recently, keypoint-based monocular 3D object detection\nhas made tremendous progress and achieved a great speed-accuracy trade-off.\nHowever, there still exists a huge gap with LIDAR-based methods in terms of\naccuracy. To improve their performance without sacrificing efficiency, we\npropose a lightweight feature pyramid network called Lite-FPN to\nachieve multi-scale feature fusion in an effective and efficient way, which can\nboost the multi-scale detection capability of keypoint-based detectors.\nIn addition, the misalignment between classification score and localization\nprecision is further relieved by introducing a novel regression loss named\nattention loss. With the proposed loss, predictions with high confidence but\npoor localization are treated with more attention during the training phase.\nComparative experiments based on several state-of-the-art keypoint-based\ndetectors on the KITTI dataset show that our proposed methods manage to achieve\nsignificant improvements in both accuracy and frame rate. The code and\npretrained models will be released at\n\\url{https://github.com/yanglei18/Lite-FPN}.\n"} {"abstract": " Extracting information from documents usually relies on natural language\nprocessing methods working on one-dimensional sequences of text. In some cases,\nfor example, for the extraction of key information from semi-structured\ndocuments, such as invoice documents, spatial and formatting information of the\ntext is crucial to understanding the contextual meaning. Convolutional neural\nnetworks are already common in computer vision models to process and extract\nrelationships in multidimensional data. Therefore, natural language processing\nmodels have already been combined with computer vision models in the past, to\nbenefit from, e.g., positional information and to improve the performance of these\nkey information extraction models. Existing models were either trained on\nunpublished data sets or on an annotated collection of receipts, which did not\nfocus on PDF-like documents. Hence, in this research project a template-based\ndocument generator was created to compare state-of-the-art models for\ninformation extraction. An existing information extraction model \"Chargrid\"\n(Katti et al., 2019) was reconstructed, and the impact of a bounding box\nregression decoder, as well as the impact of an NLP pre-processing step, were\nevaluated for information extraction from documents. The results have shown\nthat NLP-based pre-processing is beneficial for model performance.
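One plausible reading of the confidence-weighted regression loss sketched in the Lite-FPN abstract above is given below in PyTorch; the paper's exact attention loss may be defined differently, so treat this purely as an illustration of the idea.

    import torch.nn.functional as F

    def attention_like_loss(reg_pred, reg_target, cls_score):
        # Weight each box's L1 regression error by the (detached)
        # classification confidence, so confidently classified but poorly
        # localized predictions receive the largest training signal.
        # This form is an assumption, not the paper's published definition.
        per_box = F.l1_loss(reg_pred, reg_target, reduction="none").mean(dim=-1)
        return (cls_score.detach() * per_box).mean()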
However, the\nuse of a bounding box regression decoder increases the model performance only\nfor fields that do not follow a rectangular shape.\n"} {"abstract": " In this paper, we provide (i) a rigorous general theory to elicit conditions\non (tail-dependent) heavy-tailed cyber-risk distributions under which a risk\nmanagement firm might find it (non)sustainable to provide aggregate cyber-risk\ncoverage services for smart societies, and (ii) a real-data-driven numerical\nstudy to validate claims made in theory assuming boundedly rational cyber-risk\nmanagers, alongside providing ideas to boost markets that aggregate dependent\ncyber-risks with heavy tails. To the best of our knowledge, this is the only\ncomplete general theory to date on the feasibility of aggregate cyber-risk\nmanagement.\n"} {"abstract": " The discovery of pulsars is of great significance in the field of physics and\nastronomy. As astronomical equipment produces large amounts of pulsar\ndata, an algorithm for automatically identifying pulsars has become urgent. We\npropose a deep learning framework for pulsar recognition. In response to the\nextreme imbalance between positive and negative examples and the hard negative\nsample issue presented in the HTRU Medlat Training Data, there are two coping\nstrategies in our framework: the smart under-sampling and the improved loss\nfunction. We also apply the early-fusion strategy to integrate features\nobtained from different attributes before classification to improve the\nperformance. To the best of our knowledge, this is the first study that integrates\nthese strategies and techniques together in pulsar recognition. The experimental\nresults show that our framework outperforms previous works with respect to\neither the training time or F1 score. We can not only speed up the training\ntime by 10X compared with the state-of-the-art work, but also get a competitive\nresult in terms of F1 score.\n"} {"abstract": " Quantum coherence and quantum correlations are studied in the strongly\ninteracting system composed of two qubits and an oscillator in the presence\nof a parametric medium. To analytically solve the system, we employ the\nadiabatic approximation approach. It assumes each qubit's characteristic\nfrequency is substantially lower than the oscillator frequency. To validate our\napproximation, a good agreement between the calculated energy spectrum of the\nHamiltonian and its numerical result is presented. The time evolution of the\nreduced density matrices of the two-qubit and the oscillator subsystems is\ncomputed from the tripartite initial state. Starting with a factorized\ntwo-qubit initial state, the quasi-periodicity in the revival and collapse\nphenomenon that occurs in the two-qubit population inversion is studied. Based\non the measure of relative entropy of coherence, we investigate the quantum\ncoherence and its explicit dependence on the parametric term both for the\ntwo-qubit and the individual qubit subsystems by adopting different choices of\nthe initial states. Similarly, the existence of quantum correlations is\ndemonstrated by studying the geometric discord and concurrence. Moreover, by\nnumerically minimizing the Hilbert-Schmidt distance, the dynamically produced\nnear maximally entangled states are reconstructed. The reconstructed states are\nobserved to be nearly pure generalized Bell states.
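For reference, the relative entropy of coherence used above has the closed form $C(\\rho)=S(\\Delta(\\rho))-S(\\rho)$, where $\\Delta$ removes the off-diagonal elements in the reference basis and $S$ is the von Neumann entropy; a small numerical check follows (the test state is our choice for the demo, not the paper's).

    import numpy as np

    def von_neumann_entropy(rho):
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log2(evals)).sum())

    def rel_entropy_of_coherence(rho):
        # C(rho) = S(diag(rho)) - S(rho) in a fixed reference basis.
        dephased = np.diag(np.diag(rho))
        return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

    plus = np.array([[0.5, 0.5], [0.5, 0.5]])  # |+><+|, maximally coherent
    print(rel_entropy_of_coherence(plus))      # -> 1.0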
Furthermore, utilizing the\noscillator density matrix, the quadrature variance and phase-space distribution\nof the associated Husimi $Q$-function are computed in the minimum-entropy\nregime, and we conclude that the obtained nearly pure evolved state is a squeezed\ncoherent state.\n"} {"abstract": " It is an open question to give a combinatorial interpretation of the Falk\ninvariant of a hyperplane arrangement, i.e., the third rank of successive\nquotients in the lower central series of the fundamental group of the\narrangement. In this article, we give a combinatorial formula for this\ninvariant in the case of hyperplane arrangements that are complete lift\nrepresentations of certain gain graphs. As a corollary, we compute the Falk\ninvariant for the cone of the braid, Shi, Linial and semiorder arrangements.\n"} {"abstract": " We discuss compatibility between various quantum aspects of bosonic fields,\nrelevant for quantum optics and quantum thermodynamics, and the mesoscopic\nformalism of reduced state of the field (RSF). In particular, we derive exact\nconditions under which Gaussian and Bogoliubov-type evolutions can be cast into\nthe RSF framework. In that regard, special emphasis is put on Gaussian thermal\noperations. To strengthen the link between the RSF formalism and the notion of\nclassicality for bosonic quantum fields, we prove that RSF contains no\ninformation about entanglement in two-mode Gaussian states. For the same\npurpose, we show that the entropic characterisation of RSF by means of the von\nNeumann entropy is qualitatively the same as its description based on the Wehrl\nentropy. Our findings help bridge the conceptual gap between quantum and\nclassical mechanics.\n"} {"abstract": " Many machine learning techniques incorporate identity-preserving\ntransformations into their models to generalize their performance to previously\nunseen data. These transformations are typically selected from a set of\nfunctions that are known to maintain the identity of an input when applied\n(e.g., rotation, translation, flipping, and scaling). However, there are many\nnatural variations that cannot be labeled for supervision or defined through\nexamination of the data. As suggested by the manifold hypothesis, many of these\nnatural variations live on or near a low-dimensional, nonlinear manifold.\nSeveral techniques represent manifold variations through a set of learned Lie\ngroup operators that define directions of motion on the manifold. However,\nthese approaches are limited because they require transformation labels when\ntraining their models and they lack a method for determining which regions of\nthe manifold are appropriate for applying each specific operator. We address\nthese limitations by introducing a learning strategy that does not require\ntransformation labels and by developing a method that learns the local regions\nwhere each operator is likely to be used while preserving the identity of\ninputs. Experiments on MNIST and Fashion MNIST highlight our model's ability to\nlearn identity-preserving transformations on multi-class datasets.\nAdditionally, we train on CelebA to showcase our model's ability to learn\nsemantically meaningful transformations on complex datasets in an unsupervised\nmanner.\n"} {"abstract": " We study the fundamental design automation problem of equivalence checking in\nthe NISQ (Noisy Intermediate-Scale Quantum) computing realm where quantum noise\nis inevitably present.
The notion of approximate equivalence of (possibly\nnoisy) quantum circuits is defined based on the Jamiolkowski fidelity, which\nmeasures the average distance between output states of two super-operators when\nthe input is chosen at random. By employing tensor network contraction, we\npresent two algorithms, aimed at different situations in which the amount of\nnoise varies, for computing the fidelity between an ideal quantum circuit and\nits noisy implementation. The effectiveness of our algorithms is demonstrated\nby experimenting on benchmarks of real NISQ circuits. When compared with the\nstate-of-the-art implementation incorporated in Qiskit, experimental results\nshow that the proposed algorithms outperform it in both efficiency and\nscalability.\n"} {"abstract": " In this article we consider the length functional defined on the space of\nimmersed planar curves. The $L^2(ds)$ Riemannian metric gives rise to the curve\nshortening flow as the gradient flow of the length functional. Motivated by the\ntriviality of the metric topology in this space, we consider the gradient flow\nof the length functional with respect to the $H^1(ds)$-metric. Circles with\nradius $r_0$ shrink with $r(t) = \\sqrt{W(e^{c-2t})}$ under the flow, where $W$\nis the Lambert $W$ function and $c = r_0^2 + \\log r_0^2$. We conduct a thorough\nstudy of this flow, giving existence of eternal solutions and convergence for\ngeneral initial data, preservation of regularity in various spaces, qualitative\nproperties of the flow after an appropriate rescaling, and numerical\nsimulations.\n"} {"abstract": " Relational knowledge bases (KBs) are commonly used to represent world\nknowledge in machines. However, while advantageous for their high degree of\nprecision and interpretability, KBs are usually organized according to\nmanually-defined schemas, which limit their expressiveness and require\nsignificant human effort to engineer and maintain. In this review, we take a\nnatural language processing perspective on these limitations, examining how\nthey may be addressed in part by training deep contextual language models (LMs)\nto internalize and express relational knowledge in more flexible forms. We\npropose to organize knowledge representation strategies in LMs by the level of\nKB supervision provided, from no KB supervision at all to entity- and\nrelation-level supervision. Our contributions are threefold: (1) We provide a\nhigh-level, extensible taxonomy for knowledge representation in LMs; (2) Within\nour taxonomy, we highlight notable models, evaluation tasks, and findings, in\norder to provide an up-to-date review of current knowledge representation\ncapabilities in LMs; and (3) We suggest future research directions that build\nupon the complementary aspects of LMs and KBs as knowledge representations.\n"} {"abstract": " In this paper, we show that the (admissible) character stack, which is a\nstack version of the character variety, is an open substack of the\nTeichm\\\"uller stack of homogeneous spaces of SL(2,C). We show that the\ntautological family over the representation variety, given by deforming the\nholonomy, is always a complete family. This is a generalisation of the work of\nE. Ghys about deformations of complex structures on these homogeneous spaces.\n"} {"abstract": " In this manuscript we demonstrate a method to reconstruct the wavefront of\nfocused beams from a measured diffraction pattern behind a diffracting mask in\nreal-time.
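The circle solution quoted in the curve-flow abstract above is easy to evaluate numerically via SciPy's Lambert W; the initial radius and time grid below are arbitrary choices for the sketch.

    import numpy as np
    from scipy.special import lambertw

    # r(t) = sqrt(W(exp(c - 2t))) with c = r0^2 + log(r0^2), as quoted above.
    r0 = 1.0
    c = r0**2 + np.log(r0**2)
    t = np.linspace(0.0, 3.0, 7)
    r = np.sqrt(lambertw(np.exp(c - 2 * t)).real)
    print(r)   # shrinks monotonically from r(0) = r0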
The phase problem is solved by means of a neural network, which is\ntrained with simulated data and verified with experimental data. The neural\nnetwork allows live reconstructions within a few milliseconds, reconstructions\nthat previously took several seconds with iterative phase retrieval, thus\nallowing the adjustment of complex systems and correction by adaptive optics in\nreal time. The neural network additionally outperforms iterative phase retrieval\non high-noise diffraction patterns.\n"} {"abstract": " To realize high-accuracy classification of high spatial resolution (HSR)\nimages, this letter proposes a new multi-feature fusion-based scene\nclassification framework (MF2SCF) by fusing local, global, and color features\nof HSR images. Specifically, we first extract the local features with the help\nof image slicing and densely connected convolutional networks (DenseNet), where\nthe outputs of dense blocks in the fine-tuned DenseNet-121 model are jointly\naveraged and concatenated to describe local features. Second, from the\nperspective of complex networks (CN), we model an HSR image as an undirected\ngraph based on pixel distance, intensity, and gradient, and obtain a gray-scale\nimage (GSI), a gradient of image (GoI), and three CN-based feature images to\ndelineate global features. To make the global feature descriptor resistant to\nrotation and illumination, we apply uniform local binary patterns\n(LBP) to the GSI, GoI, and feature images, respectively, and generate the final\nglobal feature representation by concatenating spatial histograms. Third, the\ncolor features are determined based on the normalized HSV histogram, where HSV\nstands for hue, saturation, and value, respectively. Finally, the three feature\nvectors are jointly concatenated for scene classification. Experimental results\nshow that MF2SCF significantly improves the classification accuracy compared\nwith state-of-the-art LBP-based methods and deep learning-based methods.\n"} {"abstract": " In this paper we study quasilinear elliptic equations driven by the double\nphase operator and a right-hand side which has the combined effects of a\nsingular term and a parametric term. Based on the fibering method, using the\nNehari manifold, we prove the existence of at least two weak\nsolutions for such problems when the parameter is sufficiently small.\n"} {"abstract": " Deep learning has made significant impacts on multi-view stereo systems.\nState-of-the-art approaches typically involve building a cost volume, followed\nby multiple 3D convolution operations to recover the input image's pixel-wise\ndepth. While such end-to-end learning of plane-sweeping stereo advances accuracy\non public benchmarks, these methods are typically very slow to compute. We present\nMVS2D, a highly efficient multi-view stereo algorithm that seamlessly\nintegrates multi-view constraints into single-view networks via an attention\nmechanism. Since MVS2D builds only on 2D convolutions, it is at least\n$2\\times$ faster than all the notable counterparts. Moreover, our algorithm\nproduces precise depth estimations and 3D reconstructions, achieving\nstate-of-the-art results on challenging benchmarks ScanNet, SUN3D, RGBD, and\nthe classical DTU dataset. Our algorithm also outperforms all other algorithms\nin the setting of inexact camera poses.
Our code is released at\n\\url{https://github.com/zhenpeiyang/MVS2D}\n"} {"abstract": " We are interested in solutions of the nonlinear Klein-Gordon equation (NLKG)\nin $\\mathbb{R}^{1+d}$, $d\\ge1$, which behave as a soliton or a sum of solitons\nin large time. In the spirit of other articles focusing on the supercritical\ngeneralized Korteweg-de Vries equations and on the nonlinear Schr{\\\"o}dinger\nequations, we obtain an $N$-parameter family of solutions of (NLKG) which\nconverges exponentially fast to a sum of given (unstable) solitons. For $N =\n1$, this family completely describes the set of solutions converging to the\nsoliton considered; for $N\\ge 2$, we prove uniqueness in a class with explicit\nalgebraic rate of convergence.\n"} {"abstract": " In this paper we completely solve the family of parametrised Thue equations\n\\[\n X(X-F_n Y)(X-2^n Y)-Y^3=\\pm 1, \\] where $F_n$ is the $n$-th Fibonacci number.\nIn particular, for any integer $n\\geq 3$ the Thue equation has only the trivial\nsolutions $(\\pm 1,0), (0,\\mp 1), \\mp(F_n,1), \\mp(2^n,1)$.\n"} {"abstract": " Indexing intervals is a fundamental problem, finding a wide range of\napplications. Recent work on managing large collections of intervals in main\nmemory focused on overlap joins and temporal aggregation problems. In this\npaper, we propose novel and efficient in-memory indexing techniques for\nintervals, with a focus on interval range queries, which are a basic component\nof many search and analysis tasks. First, we propose an optimized version of a\nsingle-level (flat) domain-partitioning approach, which may have large space\nrequirements due to excessive replication. Then, we propose a hierarchical\npartitioning approach, which assigns each interval to at most two partitions\nper level and has controlled space requirements. Novel elements of our\ntechniques include the division of the intervals at each partition into groups\nbased on whether they begin inside or before the partition boundaries, reducing\nthe information stored at each partition to what is absolutely necessary, and the\neffective handling of data sparsity and skew. Experimental results on real and\nsynthetic interval sets of different characteristics show that our approaches\nare typically one order of magnitude faster than the state-of-the-art.\n"} {"abstract": " We propose a novel storage scheme for three-nucleon (3N) interaction matrix\nelements relevant for the normal-ordered two-body approximation used\nextensively in ab initio calculations of atomic nuclei. This scheme reduces the\nrequired memory by approximately two orders of magnitude, which allows the\ngeneration of 3N interaction matrix elements with the standard truncation of\n$E_{\\rm 3max}=28$, well beyond the previous limit of 18. We demonstrate that\nthis is sufficient to obtain the ground-state energy of $^{132}$Sn converged to\nwithin a few MeV with respect to the $E_{\\rm 3max}$ truncation. In addition, we\nstudy the asymptotic convergence behavior and perform extrapolations to the\nuntruncated limit. Finally, we investigate the impact of truncations made when\nevolving free-space 3N interactions with the similarity renormalization group.\nWe find that the contribution of blocks with angular momentum $J_{\\rm rel}>9/2$\nto the ground-state energy is dominated by a basis-truncation artifact which\nvanishes in the large-space limit, so these computationally expensive\ncomponents can be neglected.
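The trivial solutions listed in the Thue-equation abstract above are quick to verify by direct substitution; the range of $n$ below is an arbitrary choice for the check.

    # Check X(X - F_n Y)(X - 2^n Y) - Y^3 = +-1 on the four trivial families.
    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    T = lambda x, y, n: x * (x - fib(n) * y) * (x - 2**n * y) - y**3

    for n in range(3, 12):
        for x, y in [(1, 0), (-1, 0), (0, 1), (0, -1),
                     (fib(n), 1), (-fib(n), -1), (2**n, 1), (-2**n, -1)]:
            assert T(x, y, n) in (1, -1)
    print("all trivial solutions check out")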
For the two sets of nuclear interactions employed in this work, the resulting binding energy of $^{132}$Sn agrees with the experimental value within theoretical uncertainties. This work enables converged ab initio calculations of heavy nuclei.
"} {"abstract": " We show that the standard notion of entanglement is not defined for gravitationally anomalous two-dimensional theories because they do not admit a local tensor factorization of the Hilbert space into local Hilbert spaces. Qualitatively, the modular flow cannot act consistently and unitarily in a finite region, if there are different numbers of states with a given energy traveling in the two opposite directions. We make this precise by decomposing it into two observations: First, a two-dimensional CFT admits a consistent quantization on a space with boundary only if it is not anomalous. Second, a local tensor factorization always leads to a definition of a consistent, unitary, energy-preserving boundary condition. As a corollary we establish a generalization of the Nielsen-Ninomiya theorem to all two-dimensional unitary local QFTs: No continuum quantum field theory in two dimensions can admit a lattice regulator unless its gravitational anomaly vanishes. We also show that the conclusion can be generalized to six dimensions by dimensional reduction on a four-manifold of nonvanishing signature. We advocate that these points be used to reinterpret the gravitational anomaly quantum-information-theoretically, as a fundamental obstruction to the localization of quantum information.
"} {"abstract": " Thermal jitter (phase noise) from a free-running ring oscillator is a common, easily implementable physical randomness source in True Random Number Generators (TRNGs). We show how to evaluate entropy, autocorrelation, and bit pattern distributions of ring oscillator noise sources, even with low jitter levels or some bias. Entropy justification is required in NIST 800-90B and AIS-31 testing and for applications such as the RISC-V entropy source extension. Our numerical evaluation algorithms outperform Monte Carlo simulations in speed and accuracy. We also propose a new lower bound estimation formula for the entropy of ring oscillator sources which applies more generally than previous ones.
"} {"abstract": " This paper applies t-SNE, a visualisation technique familiar from Deep Neural Network research, to argumentation graphs by applying it to the output of graph embeddings generated using several different methods. It shows that such a visualisation approach can work for argumentation and can reveal interesting structural properties of argumentation graphs, opening up paths for further research in the area.
"} {"abstract": " The fracture stress of materials typically depends on the sample size and is traditionally explained in terms of extreme value statistics. A recent work reported results on the carrying capacity of long polyamide and polyester wires and interpreted the results in terms of a probabilistic argument known as the St. Petersburg paradox. Here, we show that the same results can be better explained in terms of extreme value statistics. We also discuss the relevance of rate dependent effects.
"} {"abstract": " This paper proposes a model to explain the potential role of inter-group conflicts in determining the rise and fall of signaling norms.
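For the t-SNE abstract above, the projection step is a one-liner with scikit-learn. A minimal sketch, assuming random vectors as stand-ins for the graph embeddings:

```python
# Hedged illustration of the t-SNE workflow referenced above: project
# (here synthetic, stand-in) graph-embedding vectors to 2D for inspection.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for node embeddings of an argumentation graph: 100 nodes, 64 dims,
# drawn from two clusters so some structure is visible after projection.
emb = np.vstack([rng.normal(0, 1, (50, 64)), rng.normal(3, 1, (50, 64))])

proj = TSNE(n_components=2, perplexity=30, init="pca",
            random_state=0).fit_transform(emb)
print(proj.shape)  # (100, 2): ready for a scatter plot
```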
In one population, assortative matching according to types is sustained by signaling. In the other population, individuals do not signal and they are randomly matched. Types evolve within each population. At the same time, the two populations may engage in conflicts. Due to assortative matching, high types grow faster in the population with signaling, yet they bear the cost of signaling, which lowers their population's fitness in the long run. We show that the survival of the signaling population depends crucially on the timing and the intensity of inter-group conflicts.
"} {"abstract": " For predicting the kinetics of nucleic acid reactions, continuous-time Markov chains (CTMCs) are widely used. The rate of a reaction can be obtained through the mean first passage time (MFPT) of its CTMC. However, a typical issue in CTMCs is that the number of states could be large, making MFPT estimation challenging, particularly for events that happen on a long time scale (rare events). We propose the pathway elaboration method, a time-efficient probabilistic truncation-based approach for detailed-balance CTMCs. It can be used for estimating the MFPT for rare events in addition to rapidly evaluating perturbed parameters without expensive recomputations. We demonstrate that pathway elaboration is suitable for predicting nucleic acid kinetics by conducting computational experiments on 267 measurements that cover a wide range of rates for different types of reactions. We utilize pathway elaboration to gain insight into the kinetics of two contrasting reactions, one being a rare event. We then compare the performance of pathway elaboration with the stochastic simulation algorithm (SSA) for MFPT estimation on 237 of the reactions for which SSA is feasible. We further build truncated CTMCs with SSA and transition path sampling (TPS) to compare with pathway elaboration. Finally, we use pathway elaboration to rapidly evaluate perturbed model parameters during optimization with respect to experimentally measured rates for these 237 reactions. The testing error on the remaining 30 reactions, which involved rare events and were not feasible to simulate with SSA, improved comparably with the training error. Our framework and dataset are available at https://github.com/DNA-and-Natural-Algorithms-Group/PathwayElaboration.
"} {"abstract": " We study an optimal control problem for a simple transportation model on a path graph. We give a closed form solution for the optimal controller, which can also account for planned disturbances using feed-forward. The optimal controller is highly structured, which allows the controller to be implemented using only local communication, conducted through two sweeps through the graph.
"} {"abstract": " We present the GeneScore, a concept of feature reduction for Machine Learning analysis of biomedical data. Using expert knowledge, the GeneScore integrates different molecular data types into a single score. We show that the GeneScore is superior to a binary matrix in the classification of cancer entities from SNV, Indel, CNV, gene fusion and gene expression data.
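The MFPT quantity at the heart of the pathway elaboration abstract above has a compact linear-algebra definition. A toy sketch on a three-state CTMC (the generator matrix is invented for illustration, not taken from the paper):

```python
# Mean first passage time (MFPT) from a CTMC rate matrix Q: with the target
# state made absorbing, the MFPTs m of the remaining states solve
# Q_sub @ m = -1, where Q_sub drops the target row and column.
import numpy as np

Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])   # toy generator matrix (rows sum to 0)
target = 2
keep = [i for i in range(Q.shape[0]) if i != target]
Q_sub = Q[np.ix_(keep, keep)]
mfpt = np.linalg.solve(Q_sub, -np.ones(len(keep)))
print(dict(zip(keep, mfpt)))  # expected hitting time of state 2 from 0 and 1
```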
The GeneScore is a straightforward way to facilitate state-of-the-art analysis, while making use of the available scientific knowledge on the nature of the molecular data features used.
"} {"abstract": " In everyday life, the speech recognition performance of human listeners is influenced by diverse factors, such as the acoustic environment, the talker and listener positions, possibly impaired hearing, and optional hearing devices. Prediction models come closer to considering all required factors simultaneously to predict the individual speech recognition performance in complex acoustic environments. While such predictions may still not be sufficiently accurate for serious applications, they can already be performed and demand an accessible representation. In this contribution, an interactive representation of speech recognition performance is proposed, which focuses on the listener's head orientation and the spatial dimensions of an acoustic scene. An exemplary modeling toolchain, including an acoustic rendering model, a hearing device model, and a listener model, was used to generate a data set for demonstration purposes. Using the spatial speech recognition maps to explore this data set demonstrated the suitability of the approach to observe possibly relevant behavior. The proposed representation provides a suitable target to compare and validate different modeling approaches in ecologically relevant contexts. Eventually, it may serve as a tool to use validated prediction models in the design of spaces and devices which take speech communication into account.
"} {"abstract": " Let $K$ be an imaginary quadratic field with class number 1; in this paper we obtain the functional equation of the $p$-adic $L$-function of: (1) a small slope $p$-stabilisation of a Bianchi modular form, and (2) a critical slope $p$-stabilisation of a base change Bianchi modular form that is $\Sigma$-smooth. To treat case (2) we use $p$-adic families of Bianchi modular forms.
"} {"abstract": " For every prime number $p\geq 3$ and every integer $m\geq 1$, we prove the existence of a continuous Galois representation $\rho: G_\mathbb{Q} \rightarrow Gl_m(\mathbb{Z}_p)$ which has open image and is unramified outside $\{p,\infty\}$ (resp. outside $\{2,p,\infty\}$) when $p\equiv 3$ mod $4$ (resp. $p \equiv 1$ mod $4$).
"} {"abstract": " Low-dimensional node embeddings play a key role in analyzing graph datasets. However, little work studies exactly what information is encoded by popular embedding methods, and how this information correlates with performance in downstream machine learning tasks. We tackle this question by studying whether embeddings can be inverted to (approximately) recover the graph used to generate them. Focusing on a variant of the popular DeepWalk method (Perozzi et al., 2014; Qiu et al., 2018), we present algorithms for accurate embedding inversion - i.e., from the low-dimensional embedding of a graph G, we can find a graph H with a very similar embedding. We perform numerous experiments on real-world networks, observing that significant information about G, such as specific edges and bulk properties like triangle density, is often lost in H. However, community structure is often preserved or even enhanced.
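For readers unfamiliar with the DeepWalk family studied in the abstract above, a toy version of the pipeline (truncated random walks feeding a skip-gram model) might look as follows; the graph, walk lengths and hyperparameters are arbitrary illustrative choices, not the paper's setup:

```python
# Toy DeepWalk-style embedding: random walks on a graph are treated as
# "sentences" and fed to a skip-gram Word2Vec model.
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()

def random_walks(graph, num_walks=10, walk_len=20, seed=0):
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes():
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                cur = rng.choice(list(graph.neighbors(cur)))
                walk.append(cur)
            walks.append([str(n) for n in walk])  # tokens must be strings
    return walks

model = Word2Vec(random_walks(G), vector_size=32, window=5,
                 min_count=1, sg=1, epochs=5, seed=0)
print(model.wv["0"].shape)  # (32,) embedding of node 0
```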
Our findings are a step towards a more rigorous understanding of exactly what information embeddings encode about the input graph, and why this information is useful for learning tasks.
"} {"abstract": " Here, we propose an original approach for human activity recognition (HAR) with commercial IEEE 802.11ac (WiFi) devices, which generalizes across different persons, days and environments. To achieve this, we devise a technique to extract, clean and process the received phases from the channel frequency response (CFR) of the WiFi channel, obtaining an estimate of the Doppler shift at the receiver of the communication link. The Doppler shift reveals the presence of moving scatterers in the environment, while not being affected by (environment specific) static objects. The proposed HAR framework is trained on data collected as a person performs four different activities and is tested on unseen setups, to assess its performance as the person, the day and/or the environment change with respect to those considered at training time. In the worst case scenario, the proposed HAR technique reaches an average accuracy higher than 95%, validating the effectiveness of the extracted Doppler information, used in conjunction with a learning algorithm based on a neural network, in recognizing human activities in a subject and environment independent fashion.
"} {"abstract": " Good approximate eigenstates of a Hamiltonian operator which possesses a point as well as a continuous spectrum have been obtained using the Lanczos algorithm. Iterating with the bare Hamiltonian operator yields spurious solutions which can easily be identified. The rms radius of the ground state eigenvector, for example, is calculated using the bare operator.
"} {"abstract": " Frequency estimation is a fundamental problem in many areas. The well-known A&M and its variant estimators have established an estimation framework by iteratively interpolating the discrete Fourier transform (DFT) coefficients. In general, those estimators require two DFT interpolations per iteration, have uneven initial estimation performance against frequencies, and are incompetent for small sample numbers due to the low-order approximations involved. Exploiting the iterative estimation framework of A&M, we unprecedentedly introduce the Pad\'e approximation to frequency estimation, unveil some features about the updating function used for refining the estimation in each iteration, and develop a simple closed-form solution to solving the residual estimation error. Extensive simulation results are provided, validating the superiority of the new estimator over the state-of-the-art estimators in wide ranges of key parameters.
"} {"abstract": " The COVID-19 pandemic is causing devastating effects on the health of the global population. There are several efforts to prevent the spread of the virus. Among those efforts, cleaning and disinfecting public areas have become important tasks. In order to contribute in this direction, this paper proposes a coverage path planning algorithm for a spraying drone, a micro aerial vehicle with a mounted sprayer/sprinkler system, to disinfect areas. In contrast with planners in the state-of-the-art, this proposal presents i) a new sprayer/sprinkler model that fits a more realistic coverage volume to the drop dispersion and ii) a planning algorithm that efficiently restricts the flight to the region of interest, avoiding potential collisions in bounded scenes.
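The iterative DFT-interpolation framework referenced in the frequency-estimation abstract above can be sketched with the textbook A&M-style update. This is the classical estimator, not the paper's Padé-based refinement, and the test signal is invented:

```python
# Iterative DFT-interpolation frequency estimation in the spirit of the A&M
# framework: refine the strongest-bin estimate using two DTFT probes at
# +-0.5 bin around the current estimate.
import numpy as np

def dtft(x, f):
    """Single DTFT coefficient of x at normalized frequency f."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n))

def estimate_freq(x, iterations=3):
    N = len(x)
    k = int(np.argmax(np.abs(np.fft.fft(x))))  # coarse: strongest DFT bin
    delta = 0.0
    for _ in range(iterations):                # two interpolations per pass
        Sp = dtft(x, (k + delta + 0.5) / N)
        Sm = dtft(x, (k + delta - 0.5) / N)
        delta += 0.5 * np.real((Sp + Sm) / (Sp - Sm))
    return (k + delta) / N

rng = np.random.default_rng(1)
n = np.arange(256)
x = np.exp(2j * np.pi * 0.1234 * n) + 0.1 * rng.standard_normal(256)
print(estimate_freq(x))   # close to 0.1234
```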
The drone with the algorithm has been tested in several simulation scenes, showing that the algorithm is effective and covers more area compared to other approaches in the literature. Note that the proposal is not limited to disinfection applications, but can be applied to other ones, such as painting or precision agriculture.
"} {"abstract": " The combination of machine learning with control offers many opportunities, in particular for robust control. However, due to strong safety and reliability requirements in many real-world applications, providing rigorous statistical and control-theoretic guarantees is of utmost importance, yet difficult to achieve for learning-based control schemes. We present a general framework for learning-enhanced robust control that allows for systematic integration of prior engineering knowledge, is fully compatible with modern robust control and still comes with rigorous and practically meaningful guarantees. Building on the established Linear Fractional Representation and Integral Quadratic Constraints framework, we integrate Gaussian Process Regression as a learning component and state-of-the-art robust controller synthesis. In a concrete robust control example, our approach is demonstrated to yield improved performance with more data, while guarantees are maintained throughout.
"} {"abstract": " Thermal conduction in polymer nanocomposites depends on several parameters including the thermal conductivity and geometrical features of the nanoparticles, the particle loading, their degree of dispersion and the formation of percolating networks. Molecular junctions between free-standing conductive nanoparticles were previously proposed to enhance the efficiency of thermal contact. This work reports for the first time an investigation of molecular junctions within a graphene polymer nanocomposite. Molecular dynamics simulations were conducted to investigate the thermal transport efficiency of molecular junctions in tight contact with the polymer, quantifying the contribution of molecular junctions when graphene and the junctions are surrounded by polydimethylsiloxane (PDMS). A strong dependence of the thermal conductance in the PDMS/graphene model was found, with the best performances obtained with short and conformationally rigid molecular junctions.
"} {"abstract": " We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of non-convex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in $\epsilon$-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.
"} {"abstract": " In this paper, we consider distributed Nash equilibrium seeking in monotone and hypomonotone games.
We first assume that each player has knowledge of the\nopponents' decisions and propose a passivity-based modification of the standard\ngradient-play dynamics, that we call \"Heavy Anchor\". We prove that Heavy Anchor\nallows a relaxation of strict monotonicity of the pseudo-gradient, needed for\ngradient-play dynamics, and can ensure exact asymptotic convergence in merely\nmonotone regimes. We extend these results to the setting where each player has\nonly partial information of the opponents' decisions. Each player maintains a\nlocal decision variable and an auxiliary state estimate and communicates with\ntheir neighbours to learn the opponents' actions. We modify Heavy Anchor via a\ndistributed Laplacian feedback and show how we can exploit\nequilibrium-independent passivity properties to achieve convergence to a Nash\nequilibrium in hypomonotone regimes.\n"} {"abstract": " In this paper we find curves minimizing the elastic energy among curves whose\nlength is fixed and whose ends are pinned. Applying the shooting method, we can\nidentify all critical points explicitly and determine which curve is the global\nminimizer. As a result we show that the critical points consist of wavelike\nelasticae and the minimizers do not have any loops or interior inflection\npoints.\n"} {"abstract": " This paper initiates a discussion of mechanism design when the participating\nagents exhibit preferences that deviate from expected utility theory (EUT). In\nparticular, we consider mechanism design for systems where the agents are\nmodeled as having cumulative prospect theory (CPT) preferences, which is a\ngeneralization of EUT preferences. We point out some of the key modifications\nneeded in the theory of mechanism design that arise from agents having CPT\npreferences and some of the shortcomings of the classical mechanism design\nframework. In particular, we show that the revelation principle, which has\ntraditionally played a fundamental role in mechanism design, does not continue\nto hold under CPT. We develop an appropriate framework that we call mediated\nmechanism design which allows us to recover the revelation principle for CPT\nagents. We conclude with some interesting directions for future work.\n"} {"abstract": " We study the Becker-D\\\"oring bubblelator, a variant of the Becker-D\\\"oring\ncoagulation-fragmentation system that models the growth of clusters by gain or\nloss of monomers. Motivated by models of gas evolution oscillators from\nphysical chemistry, we incorporate injection of monomers and depletion of large\nclusters. For a wide range of physical rates, the Becker-D\\\"oring system itself\nexhibits a dynamic phase transition as mass density increases past a critical\nvalue. We connect the Becker-D\\\"oring bubblelator to a transport equation\ncoupled with an integrodifferential equation for excess monomer density by\nformal asymptotics in the near-critical regime. For suitable\ninjection/depletion rates, we argue that time-periodic solutions appear via a\nHopf bifurcation. 
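Returning to the Nash-seeking abstract above: the standard gradient-play dynamics that Heavy Anchor modifies can be demonstrated on a toy two-player quadratic game. This is my own example with a strongly monotone pseudo-gradient, not the paper's merely monotone setting:

```python
# Gradient play in a two-player game: each player descends its own cost
# along its own decision variable; the stacked partial gradients form the
# pseudo-gradient whose monotonicity governs convergence.
import numpy as np

# Quadratic game: J1(x) = 0.5*x1^2 + x1*x2,  J2(x) = 0.5*x2^2 - x1*x2.
def pseudo_gradient(x):
    x1, x2 = x
    return np.array([x1 + x2,      # dJ1/dx1
                     x2 - x1])     # dJ2/dx2

x = np.array([2.0, -1.5])
step = 0.1
for _ in range(200):
    x = x - step * pseudo_gradient(x)
print(x)   # approaches the Nash equilibrium (0, 0)
```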
Numerics confirm that the generation and removal of large clusters can become desynchronized, leading to temporal oscillations associated with bursts of large-cluster nucleation.
"} {"abstract": " Graph convolutional neural networks (GCNs) generalize traditional convolutional neural networks (CNNs) from low-dimensional regular graphs (e.g., images) to high dimensional irregular graphs (e.g., text documents on word embeddings). Due to inevitable faulty data collection instruments, deceptive data manipulation, or other system errors, the data might be error-contaminated. Even a small amount of error such as noise can compromise the ability of GCNs and render them inadmissible to a large extent. The key challenge is how to effectively and efficiently employ GCNs in the presence of erroneous data. In this paper, we propose novel Robust Graph Convolutional Neural Networks for possibly erroneous single-view or multi-view data where data may come from multiple sources. By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly. Experimental results on various real-world datasets demonstrate the superiority of the proposed model over the baseline methods and its robustness against different types of error.
"} {"abstract": " Top-K SpMV is a key component of similarity-search on sparse embeddings. This sparse workload does not perform well on general-purpose NUMA systems that employ traditional caching strategies. Instead, modern FPGA accelerator cards have a few tricks up their sleeve. We introduce a Top-K SpMV FPGA design that leverages reduced precision and a novel packet-wise CSR matrix compression, enabling custom data layouts and delivering bandwidth efficiency often unreachable even in architectures with higher peak bandwidth. With HBM-based boards, we are 100x faster than a multi-threaded CPU implementation and 2x faster than a GPU with 20% higher bandwidth, with 14.2x higher power-efficiency.
"} {"abstract": " To extract the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $|V_{ub}|$, we have re-analyzed all the available inputs (data and theory) on the $B\to\pi l\nu$ decays including the newly available inputs on the form-factors from the light cone sum rule (LCSR) approach. We have reproduced and compared the results with the procedure taken up by the Heavy Flavor Averaging Group (HFLAV), while commenting on the effect of outliers on the fits. After removing the outliers and creating a comparable group of data-sets, we mention a few scenarios in the extraction of $|V_{ub}|$. In all those scenarios, the extracted values of $|V_{ub}|$ are higher than that obtained by HFLAV. Our best results for $|V_{ub}|^{exc.}$ are $(3.88 \pm 0.13)\times 10^{-3}$ and $(3.87 \pm 0.13)\times 10^{-3}$ in frequentist and Bayesian approaches, respectively, which are consistent with that extracted from inclusive decays $|V_{ub}|^{inc}$ within the $1~\sigma$ confidence interval.
"} {"abstract": " Lithium metal has been an attractive candidate as a next generation anode material. Despite its popularity, stability issues of lithium in the liquid electrolyte and the formation of lithium whiskers have kept it from practical use. Three-dimensional (3D) current collectors have been proposed as an effective method to mitigate whisker growth.
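A plain CPU reference for the Top-K SpMV workload described above helps make the computation concrete (the paper's contribution is the FPGA design; this sketch only shows the operation being accelerated, with invented sizes):

```python
# Top-K SpMV: multiply a sparse matrix by a dense query vector and keep the
# K rows with the largest scores, the core of similarity search on sparse
# embeddings.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(10000, 512, density=0.01, format="csr", random_state=0)
q = rng.standard_normal(512)

scores = A @ q                                      # the SpMV itself
K = 5
topk = np.argpartition(scores, -K)[-K:]             # unordered top-K indices
topk = topk[np.argsort(scores[topk])[::-1]]         # sort them by score
print(list(zip(topk, scores[topk])))
```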
Although extensive research efforts have been made, the effects of three key parameters of the 3D current collectors, namely the surface area, the tortuosity factor, and the surface chemistry, on the performance of lithium metal batteries remain elusive. Herein, we quantitatively studied the role of these three parameters by synthesizing four types of porous copper networks with different sizes of well-structured micro-channels. X-ray microscale computed tomography (micro-CT) allowed us to assess the surface area, the pore size and the tortuosity factor of the porous copper materials. A metallic Zn coating was also applied to study the influence of surface chemistry on the performance of the 3D current collectors. The effects of these parameters on the performance were studied in detail through Scanning Electron Microscopy (SEM) and Titration Gas Chromatography (TGC). Stochastic simulations further allowed us to interpret the role of the tortuosity factor in lithiation. By understanding these effects, the optimal range of the key parameters is found for the porous copper anodes and their performance is predicted. Using these parameters to inform the design of porous copper anodes for Li deposition, Coulombic efficiencies (CE) of up to 99.56% are achieved, thus paving the way for the design of effective 3D current collector systems.
"} {"abstract": " This position paper summarizes a recently developed research program focused on inference in the context of data centric science and engineering applications, and forecasts its trajectory forward over the next decade. Often one endeavours in this context to learn complex systems in order to make more informed predictions and high stakes decisions under uncertainty. Some key challenges which must be met in this context are robustness, generalizability, and interpretability. The Bayesian framework addresses these three challenges elegantly, while bringing with it a fourth, undesirable feature: it is typically far more expensive than its deterministic counterparts. In the 21st century, and increasingly over the past decade, a growing number of methods have emerged which allow one to leverage cheap low-fidelity models in order to precondition algorithms for performing inference with more expensive models and make Bayesian inference tractable in the context of high-dimensional and expensive models. Notable examples are multilevel Monte Carlo (MLMC), multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC), which are able to provably achieve a dimension-independent (including $\infty$-dimension) canonical complexity rate of $1/\mathrm{MSE}$ with respect to the mean squared error (MSE). Some parallelizability is typically lost in an inference context, but recently this has been largely recovered via novel double randomization approaches. Such an approach delivers i.i.d. samples of quantities of interest which are unbiased with respect to the infinite resolution target distribution. Over the coming decade, this family of algorithms has the potential to transform data centric science and engineering, as well as classical machine learning applications such as deep learning, by scaling up and scaling out fully Bayesian inference.
"} {"abstract": " We present a study on magnetotransport in films of the topological Dirac semimetal Cd$_{3}$As$_{2}$ doped with Sb grown by molecular beam epitaxy.
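The two-level idea behind the MLMC methods named in the position paper above can be sketched in a few lines; the forward model, resolutions and sample counts here are invented stand-ins:

```python
# Schematic two-level multilevel Monte Carlo (MLMC): many cheap coarse-model
# samples carry most of the variance, while a few expensive coupled
# corrections remove the coarse-model bias (telescoping sum).
import numpy as np

rng = np.random.default_rng(0)

def model(theta, h):
    # Deterministic stand-in quantity of interest with an O(h) discretization
    # bias; the evaluation cost would scale like 1/h in a real solver.
    return np.sin(theta) + h * np.cos(theta)

theta = rng.standard_normal(100_000)          # samples of the random input
E_coarse = model(theta, h=0.1).mean()         # many cheap coarse evaluations

theta_few = theta[:1_000]                     # few expensive corrections,
correction = (model(theta_few, h=0.01)        # coupled: same inputs at both
              - model(theta_few, h=0.1)).mean()   # resolutions

print(E_coarse + correction)   # telescoped estimate at the fine level
```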
In our weak antilocalization analysis, we find a significant enhancement of the spin-orbit scattering rate, indicating that Sb doping leads to a strong increase of the pristine band-inversion energy. We discuss possible origins of this large enhancement by comparing Sb-doped Cd$_{3}$As$_{2}$ with other compound semiconductors. Sb-doped Cd$_{3}$As$_{2}$ will be a suitable system for further investigations and functionalization of topological Dirac semimetals.
"} {"abstract": " Ultra-reliable low latency communications (URLLC) arose to serve industrial IoT (IIoT) use cases within the 5G. Currently, it has inherent limitations to support future services. Based on state-of-the-art research and practical deployment experience, in this article, we introduce and advocate for three variants: broadband, scalable and extreme URLLC. We discuss use cases and key performance indicators and identify technology enablers for the new service modes. We bring practical considerations from the IIoT testbed and provide an outlook toward some new research directions.
"} {"abstract": " The design of algorithms that leverage machine learning alongside combinatorial optimization techniques is a young but thriving area of operations research. Even as trends emerge, the literature has still not converged on the proper way of combining these two techniques or on the predictor architectures that should be used. We focus on operations research problems for which no efficient algorithms are known, but that are variants of classic problems for which efficient algorithms exist. Elaborating on recent contributions that suggest using a machine learning predictor to approximate the variant by the classic problem, we introduce the notion of structured approximation of an operations research problem by another. We provide a generic learning algorithm to fit these approximations. This algorithm requires only instances of the variant in the training set, unlike previous learning algorithms that also require the solution of these instances. Using tools from statistical learning theory, we prove a result showing the convergence speed of the estimator, and deduce an approximation ratio guarantee on the performance of the algorithm obtained for the variant. Numerical experiments on a single machine scheduling and a stochastic vehicle scheduling problem from the literature show that our learning algorithm is competitive with algorithms that have access to optimal solutions, leading to state-of-the-art algorithms for the variant considered.
"} {"abstract": " Spatial reasoning on multi-view line drawings by state-of-the-art supervised deep networks has recently been shown to achieve puzzlingly low performance on the SPARE3D dataset. To study the reason behind the low performance and to further our understanding of these tasks, we design controlled experiments on both input data and network designs. Guided by the hindsight from these experiment results, we propose a simple contrastive learning approach along with other network modifications to improve the baseline performance. Our approach uses a self-supervised binary classification network to compare the line drawing differences between various views of any two similar 3D objects. It enables deep networks to effectively learn detail-sensitive yet view-invariant line drawing representations of 3D objects.
Experiments show that our method could significantly increase the baseline performance in SPARE3D, while some popular self-supervised learning methods cannot.
"} {"abstract": " We have entered an era of a pandemic that has shaken the world with major impact on medical systems, economics and agriculture. Prominent computational and mathematical models have been unreliable due to the complexity of the spread of infections. Moreover, lack of data collection and reporting makes any such modelling attempts unreliable. Hence we need to re-look at the situation with the latest data sources and the most comprehensive forecasting models. Deep learning models such as recurrent neural networks are well suited for modelling temporal sequences. In this paper, prominent recurrent neural networks, in particular \textit{long short term memory} (LSTM) networks, bidirectional LSTM, and encoder-decoder LSTM models, are used for multi-step (short-term) forecasting of the spread of COVID-19 infections among selected states in India. We select states with COVID-19 hotspots in terms of the rate of infections and compare with states where infections have been contained or reached their peak, and provide a two months ahead forecast that shows that cases will slowly decline. Our results show that long-term forecasts are promising, which motivates the application of the method in other countries or areas. We note that although we made some progress in forecasting, the challenges in modelling remain due to data quality and the difficulty of capturing factors such as population density, travel logistics, and social aspects such as culture and lifestyle.
"} {"abstract": " We measure the small-scale clustering of the Data Release 16 extended Baryon Oscillation Spectroscopic Survey Luminous Red Galaxy sample, corrected for fibre-collisions using Pairwise Inverse Probability weights, which give unbiased clustering measurements on all scales. We fit to the monopole and quadrupole moments and to the projected correlation function over the separation range $7-60\,h^{-1}$Mpc with a model based on the Aemulus cosmological emulator to measure the growth rate of cosmic structure, parameterized by $f\sigma_8$. We obtain a measurement of $f\sigma_8(z=0.737)=0.408\pm0.038$, which is $1.4\sigma$ lower than the value expected from 2018 Planck data for a flat $\Lambda$CDM model, and is more consistent with recent weak-lensing measurements. The level of precision achieved is 1.7 times better than more standard measurements made using only the large-scale modes of the same sample. We also fit to the data using the full range of scales $0.1-60\,h^{-1}$Mpc modelled by the Aemulus cosmological emulator and find a $4.5\sigma$ tension in the amplitude of the halo velocity field with the Planck+$\Lambda$CDM model, driven by a mismatch on the non-linear scales. This may not be cosmological in origin, and could be due to a breakdown in the Halo Occupation Distribution model used in the emulator. Finally, we perform a robust analysis of possible sources of systematics, including the effects of redshift uncertainty and incompleteness due to target selection that were not included in previous analyses fitting to clustering measurements on small scales.
"} {"abstract": " We study the physical properties of four-dimensional, string-theoretical, horizonless "fuzzball" geometries by imaging their shadows. Their microstructure traps light rays straying near the would-be horizon on long-lived, highly redshifted chaotic orbits.
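A condensed sketch of the direct multi-step LSTM forecasting setup described in the COVID-19 abstract above, using Keras on a synthetic series; the window, horizon and layer sizes are illustrative assumptions, not the paper's architecture:

```python
# Direct multi-step forecasting with an LSTM: a sliding window of past values
# is mapped to the next HORIZON values in one shot.
import numpy as np
from tensorflow import keras

series = np.sin(np.linspace(0, 60, 600))            # stand-in case-count curve
WINDOW, HORIZON = 30, 5                             # 30 steps in, 5 steps out
n = len(series) - WINDOW - HORIZON
X = np.stack([series[i:i + WINDOW] for i in range(n)])
y = np.stack([series[i + WINDOW:i + WINDOW + HORIZON] for i in range(n)])

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    keras.layers.Dense(HORIZON),                    # direct multi-step output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:][..., None], verbose=0))  # forecast of next 5 steps
```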
In fuzzballs sufficiently near\nthe scaling limit this creates a shadow much like that of a black hole, while\navoiding the paradoxes associated with an event horizon. Observations of the\nshadow size and residual glow can potentially discriminate between fuzzballs\naway from the scaling limit and alternative models of black compact objects.\n"} {"abstract": " We introduce a simplified model of physiological coughing or sneezing, in the\nform of a thin liquid layer subject to a rapid (30 m/s) air stream. The setup\nis simulated using the Volume-Of-Fluid method with octree mesh adaptation, the\nlatter allowing grid sizes small enough to capture the Kolmogorov length scale.\nThe results confirm the trend to an intermediate distribution between a\nLog-Normal and a Pareto distribution $P(d) \\propto d^{-3.3}$ for the\ndistribution of droplet sizes in agreement with a previous re-analysis of\nexperimental results by one of the authors. The mechanism of atomisation does\nnot differ qualitatively from the multiphase mixing layer experiments and\nsimulations. No mechanism for a bimodal distribution, also sometimes observed,\nis evidenced in these simulations.\n"} {"abstract": " We show that the ring of modular forms with characters for the even\nunimodular lattice of signature (2,18) is obtained from the invariant ring of\n$\\mathrm{Sym}(\\mathrm{Sym}^8(V) \\oplus \\mathrm{Sym}^{12}(V))$ with respect to\nthe action of $\\mathrm{SL}(V)$ by adding a Borcherds product of weight 132 with\none relation of weight 264, where $V$ is a 2-dimensional $\\mathbb{C}$-vector\nspace. The proof is based on the study of the moduli space of elliptic K3\nsurfaces with a section.\n"} {"abstract": " We present an overview of phase field modeling of active matter systems as a\ntool for capturing various aspects of complex and active interfaces. We first\ndescribe how interfaces between different phases are characterized in phase\nfield models and provide simple fundamental governing equations that describe\ntheir evolution. For a simple model, we then show how physical properties of\nthe interface, such as surface tension and interface thickness, can be\nrecovered from these equations. We then explain how the phase field formulation\ncan be coupled to various active matter realizations and discuss three\nparticular examples of continuum biphasic active matter: active\nnematic-isotropic interfaces, active matter in viscoelastic environments, and\nactive shells in fluid background. Finally, we describe how multiple phase\nfields can be used to model active cellular monolayers and present a general\nframework that can be applied to the study of tissue behaviour and collective\nmigration.\n"} {"abstract": " The chiral magnetic effect with a fluctuating chiral imbalance is more\nrealistic in the evolution of quark-gluon plasma, which reflects the random\ngluonic topological transition. Incorporating this dynamics, we calculate the\nchiral magnetic current in response to space-time dependent axial gauge\npotential and magnetic field in AdS/CFT correspondence. In contrast to\nconventional treatment of constant axial chemical potential, the response\nfunction here is the AVV three-point function of the $\\mathcal{N}=4$ super\nYang-Mills at strong coupling. Through an iterative solution of the nonlinear\nequations of motion in Schwarzschild-AdS$_5$ background, we are able to express\nthe AVV function in terms of two Heun functions and prove its UV/IR finiteness,\nas expected for $\\mathcal{N}=4$ super Yang-Mills theory. 
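The interface-relaxation dynamics in the phase-field overview above can be illustrated with a minimal 1D Allen-Cahn integration; the grid, time step and noise level are my own choices for a stable explicit scheme:

```python
# Minimal 1D phase-field (Allen-Cahn) sketch:
#   d(phi)/dt = eps^2 * phi_xx - phi^3 + phi
# relaxes a noisy profile toward phases +-1 separated by a thin interface.
import numpy as np

N, L, eps, dt = 256, 1.0, 0.02, 5e-5
dx = L / N
rng = np.random.default_rng(0)
phi = np.sign(np.linspace(-1, 1, N)) + 0.1 * rng.standard_normal(N)

for _ in range(20000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic BC
    phi += dt * (eps**2 * lap - phi**3 + phi)

print(phi.min(), phi.max())  # roughly -1 and +1 away from the interface
```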
We found that the dependence of the chiral magnetic current on a non-constant chiral imbalance is non-local, different from the hydrodynamic approximation, and demonstrates the subtlety of the infrared limit discovered in the field theoretic approach. We expect our results to enrich the understanding of the phenomenology of the chiral magnetic effect in the context of relativistic heavy ion collisions.
"} {"abstract": " We re-examine the celebrated Doob--McKean identity that identifies a conditioned one-dimensional Brownian motion as the radial part of a 3-dimensional Brownian motion or, equivalently, a Bessel-3 process, albeit now in the analogous setting of isotropic $\alpha$-stable processes. We find a natural analogue that matches the Brownian setting, with the role of the Brownian motion replaced by that of the isotropic $\alpha$-stable process, providing one interprets the components of the original identity in the right way.
"} {"abstract": " In this paper, we consider Bayesian point estimation and predictive density estimation in the binomial case. After presenting preliminary results on these problems, we compare the risk functions of the Bayes estimators based on the truncated and untruncated beta priors and obtain dominance conditions when the probability parameter is less than or equal to a known constant. The case where there are both a lower bound restriction and an upper bound restriction is also treated. Then our problems are shown to be related to similar problems in the Poisson case. Finally, numerical studies are presented.
"} {"abstract": " In this paper, we obtain a characterization of GVZ-groups in terms of commutators and monolithic quotients. This characterization is based on counting formulas due to Gallagher.
"} {"abstract": " Network traffic is growing at an unprecedented rate globally. The modern network infrastructure makes classic network intrusion detection methods inefficient at classifying an inflow of vast network traffic. This paper aims to present a modern approach towards building a network intrusion detection system (NIDS) by using various deep learning methods. To further improve our proposed scheme and make it effective in real-world settings, we use deep transfer learning techniques, where we transfer the knowledge learned by our model in a source domain with plentiful computational and data resources to a target domain with sparse availability of both resources. Our proposed method achieved a 98.30% classification accuracy score in the source domain and an improved 98.43% classification accuracy score in the target domain, with a boost in classification speed, using the UNSW-15 dataset. This study demonstrates that deep transfer learning techniques make it possible to construct large deep learning models to perform network classification, which can be deployed in real-world target domains where they can maintain their classification performance and improve their classification speed despite the limited accessibility of resources.
"} {"abstract": " Efficient error-controlled lossy compressors are becoming critical to the success of today's large-scale scientific applications because of the ever-increasing volume of data produced by the applications. In the past decade, many lossless and lossy compressors have been developed with distinct design principles for different scientific datasets in largely diverse scientific domains.
In order to support researchers and users assessing and\ncomparing compressors in a fair and convenient way, we establish a standard\ncompression assessment benchmark -- Scientific Data Reduction Benchmark\n(SDRBench). SDRBench contains a vast variety of real-world scientific datasets\nacross different domains, summarizes several critical compression quality\nevaluation metrics, and integrates many state-of-the-art lossy and lossless\ncompressors. We demonstrate evaluation results using SDRBench and summarize six\nvaluable takeaways that are helpful to the in-depth understanding of lossy\ncompressors.\n"} {"abstract": " The logistic linear mixed model (LLMM) is one of the most widely used\nstatistical models. Generally, Markov chain Monte Carlo algorithms are used to\nexplore the posterior densities associated with the Bayesian LLMMs. Polson,\nScott and Windle's (2013) Polya-Gamma data augmentation (DA) technique can be\nused to construct full Gibbs (FG) samplers for the LLMMs. Here, we develop\nefficient block Gibbs (BG) samplers for Bayesian LLMMs using the Polya-Gamma DA\nmethod. We compare the FG and BG samplers in the context of a real data\nexample, as the correlation between the fixed effects and the random effects\nchanges as well as when the dimensions of the design matrices vary. These\nnumerical examples demonstrate superior performance of the BG samplers over the\nFG samplers. We also derive conditions guaranteeing geometric ergodicity of the\nBG Markov chain when the popular improper uniform prior is assigned on the\nregression coefficients, and proper or improper priors are placed on the\nvariance parameters of the random effects. This theoretical result has\nimportant practical implications as it justifies the use of asymptotically\nvalid Monte Carlo standard errors for Markov chain based estimates of the\nposterior quantities.\n"} {"abstract": " We consider bivariate polynomials over the skew field of quaternions, where\nthe indeterminates commute with all coefficients and with each other. We\nanalyze existence of univariate factorizations, that is, factorizations with\nunivariate linear factors. A necessary condition for existence of univariate\nfactorizations is factorization of the norm polynomial into a product of\nunivariate polynomials. This condition is, however, not sufficient. Our central\nresult states that univariate factorizations exist after multiplication with a\nsuitable univariate real polynomial as long as the necessary factorization\ncondition is fulfilled. We present an algorithm for computing this real\npolynomial and a corresponding univariate factorization. If a univariate\nfactorization of the original polynomial exists, a suitable input of the\nalgorithm produces a constant multiplication factor, thus giving an a\nposteriori condition for existence of univariate factorizations. Some\nfactorizations obtained in this way are of interest in mechanism science. We\npresent an example of a curious closed-loop mechanism with eight revolute\njoints.\n"} {"abstract": " We deal with the construction of linear connections associated with second\norder ordinary differential equations with and without first order constraints.\nWe use a novel method allowing glueing of submodule covariant derivatives to\nproduce new, closed form expressions for the Massa-Pagani connection and our\nextension of it to the constrained case.\n"} {"abstract": " Speaker segmentation consists in partitioning a conversation between one or\nmore speakers into speaker turns. 
Usually addressed as the late combination of three sub-tasks (voice activity detection, speaker change detection, and overlapped speech detection), we propose to train an end-to-end segmentation model that does it directly. Inspired by the original end-to-end neural speaker diarization approach (EEND), the task is modeled as a multi-label classification problem using permutation-invariant training. The main difference is that our model operates on short audio chunks (5 seconds) but at a much higher temporal resolution (every 16ms). Experiments on multiple speaker diarization datasets conclude that our model can be used with great success on both voice activity detection and overlapped speech detection. Our proposed model can also be used as a post-processing step, to detect and correctly assign overlapped speech regions. Relative diarization error rate improvement over the best considered baseline (VBx) reaches 17% on AMI, 13% on DIHARD 3, and 13% on VoxConverse.
"} {"abstract": " Interpreting the environmental, behavioural and psychological data from in-home sensory observations and measurements can provide valuable insights into the health and well-being of individuals. The presence of neuropsychiatric and psychological symptoms in people with dementia has a significant impact on their well-being and disease prognosis. Agitation in people with dementia can be due to many reasons such as pain or discomfort, medical reasons such as side effects of a medicine, communication problems and environment. This paper discusses a model for analysing the risk of agitation in people with dementia and how in-home monitoring data can support them. We propose a semi-supervised model which combines a self-supervised learning model and a Bayesian ensemble classification. We train and test the proposed model on a dataset from a clinical study. The dataset was collected from sensors deployed in 96 homes of patients with dementia. The proposed model outperforms the state-of-the-art models in recall and f1-score values by 20%. The model also indicates better generalisability compared to the baseline models.
"} {"abstract": " With the rise of the "big data" phenomenon in recent years, data is coming in many different complex forms. One example of this is multi-way data that come in the form of higher-order tensors such as coloured images and movie clips. Although there has been a recent rise in models for looking at the simple case of three-way data in the form of matrices, there is a relative paucity of higher-order tensor variate methods. The most common tensor distribution in the literature is the tensor variate normal distribution; however, its use can be problematic if the data exhibit skewness or outliers. Herein, we develop four skewed tensor variate distributions which to our knowledge are the first skewed tensor distributions to be proposed in the literature, and are able to parameterize both skewness and tail weight. Properties and parameter estimation are discussed, and real and simulated data are used for illustration.
"} {"abstract": " We consider the additional entropy production (EP) incurred by a fixed quantum or classical process on some initial state $\rho$, above the minimum EP incurred by the same process on any initial state.
We show that this additional EP, which we term the "mismatch cost of $\rho$", has a universal information-theoretic form: it is given by the contraction of the relative entropy between $\rho$ and the least-dissipative initial state $\varphi$ over time. We derive versions of this result for integrated EP incurred over the course of a process, for trajectory-level fluctuating EP, and for instantaneous EP rate. We also show that mismatch cost for fluctuating EP obeys an integral fluctuation theorem. Our results demonstrate a fundamental relationship between "thermodynamic irreversibility" (generation of EP) and "logical irreversibility" (inability to know the initial state corresponding to a given final state). We use this relationship to derive quantitative bounds on the thermodynamics of quantum error correction and to propose a thermodynamically-operationalized measure of the logical irreversibility of a quantum channel. Our results hold for both finite and infinite dimensional systems, and generalize beyond EP to many other thermodynamic costs, including nonadiabatic EP, free energy loss, and entropy gain.
"} {"abstract": " We consider two-dimensional Schroedinger equations with honeycomb potentials and slow time-periodic forcing of the form: $$i\psi_t (t,x) = H^\varepsilon(t)\psi=\left(H^0+2i\varepsilon A (\varepsilon t) \cdot \nabla \right)\psi,\quad H^0=-\Delta +V (x) .$$ The unforced Hamiltonian, $H^0$, is known to generically have Dirac (conical) points in its band spectrum. The evolution under $H^\varepsilon(t)$ of {\it band limited Dirac wave-packets} (spectrally localized near the Dirac point) is well-approximated on large time scales ($t\lesssim \varepsilon^{-2+}$) by an effective time-periodic Dirac equation with a gap in its quasi-energy spectrum. This quasi-energy gap is typical of many reduced models of time-periodic (Floquet) materials and plays a role in conclusions drawn about the full system: conduction vs. insulation, topological vs. non-topological bands. Much is unknown about the nature of the quasi-energy spectrum of the original time-periodic Schroedinger equation, and it is believed that no such quasi-energy gap occurs. In this paper, we explain how to transfer quasi-energy gap information about the effective Dirac dynamics to conclusions about the full Schroedinger dynamics. We introduce the notion of an {\it effective quasi-energy gap}, and establish its existence in the Schroedinger model. In the current setting, an effective quasi-energy gap is an interval of quasi-energies which does not support modes with large spectral projection onto band-limited Dirac wave-packets. The notion of effective quasi-energy gap is a physically relevant relaxation of the strict notion of quasi-energy spectral gap; if a system is tuned to drive or measure at momenta and energies near the Dirac point of $H^0$, then the resulting modes in the effective quasi-energy gap will only be weakly excited and detected.
"} {"abstract": " We prove that, for a Poisson vertex algebra V, the canonical injective homomorphism of the variational cohomology of V to its classical cohomology is an isomorphism, provided that V, viewed as a differential algebra, is an algebra of differential polynomials in finitely many differential variables. This theorem is one of the key ingredients in the computation of vertex algebra cohomology.
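The mismatch-cost formula stated above has a direct classical analogue that is easy to check numerically: the extra EP equals the contraction of relative entropy under the map. A small sketch with an invented two-state stochastic matrix:

```python
# Classical mismatch cost: extra EP = D(rho||phi) - D(rho'||phi'), where
# primes denote distributions after the stochastic map T is applied. The
# quantity is nonnegative by the data-processing inequality.
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

T = np.array([[0.9, 0.2],      # column-stochastic transition matrix
              [0.1, 0.8]])
rho = np.array([0.5, 0.5])     # actual initial distribution
phi = np.array([0.3, 0.7])     # least-dissipative ("prior") initial state

mismatch = kl(rho, phi) - kl(T @ rho, T @ phi)
print(mismatch)   # >= 0
```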
For its proof, we introduce the sesquilinear Hochschild and\nHarrison cohomology complexes and prove a vanishing theorem for the symmetric\nsesquilinear Harrison cohomology of the algebra of differential polynomials in\nfinitely many differential variables.\n"} {"abstract": " We address the problem of analysing the complexity of concurrent programs\nwritten in Pi-calculus. We are interested in parallel complexity, or span,\nunderstood as the execution time in a model with maximal parallelism. A type\nsystem for parallel complexity has been recently proposed by Baillot and\nGhyselen but it is too imprecise for non-linear channels and cannot analyse\nsome concurrent processes. Aiming for a more precise analysis, we design a type\nsystem which builds on the concepts of sized types and usages. The new variant\nof usages we define accounts for the various ways a channel is employed and\nrelies on time annotations to track under which conditions processes can\nsynchronize. We prove that a type derivation for a process provides an upper\nbound on its parallel complexity.\n"} {"abstract": " Deep neural networks are vulnerable to small input perturbations known as\nadversarial attacks. Inspired by the fact that these adversaries are\nconstructed by iteratively minimizing the confidence of a network for the true\nclass label, we propose the anti-adversary layer, aimed at countering this\neffect. In particular, our layer generates an input perturbation in the\nopposite direction of the adversarial one and feeds the classifier a perturbed\nversion of the input. Our approach is training-free and theoretically\nsupported. We verify the effectiveness of our approach by combining our layer\nwith both nominally and robustly trained models and conduct large-scale\nexperiments from black-box to adaptive attacks on CIFAR10, CIFAR100, and\nImageNet. Our layer significantly enhances model robustness while coming at no\ncost on clean accuracy.\n"} {"abstract": " Coded caching is an emerging technique to reduce the data transmission load\nduring the peak-traffic times. In such a scheme, each file in the data center\nor library is usually divided into a number of packets to pursue a low\nbroadcasting rate based on the designed placements at each user's cache.\nHowever, the implementation complexity of this scheme increases as the number\nof packets increases. It is crucial to design a scheme with a small\nsubpacketization level, while maintaining a relatively low transmission rate.\nIt is known that the design of caches in users (i.e., the placement phase) and\nbroadcasting (i.e., the delivery phase) can be unified in one matrix, namely\nthe placement delivery array (PDA). This paper proposes a novel PDA\nconstruction by selecting proper orthogonal arrays (POAs), which generalizes\nsome known constructions but with a more flexible memory size. Based on the\nproposed PDA construction, an effective transformation is further proposed to\nenable a coded caching scheme to have a smaller subpacketization level.\nMoreover, two new coded caching schemes with the coded placement are\nconsidered. It is shown that the proposed schemes yield a lower\nsubpacketization level and transmission rate over some existing schemes.\n"} {"abstract": " The Newcomb-Benford law, also known as the first-digit law, gives the\nprobability distribution associated with the first digit of a dataset, so that,\nfor example, the first significant digit has a probability of $30.1$ % of being\n$1$ and $4.58$ % of being $9$. 
This law can be extended to the second and subsequent significant digits. This article presents an introduction to the discovery of the law and its derivation from the scale invariance property, as well as some applications and examples. Additionally, a simple model of a Markov process inspired by scale invariance is proposed. Within this model, it is proved that the probability distribution irreversibly converges to the Newcomb-Benford law, in analogy to the irreversible evolution toward equilibrium of physical systems in thermodynamics and statistical mechanics.
"} {"abstract": " Nature-inspired algorithms are commonly used for solving various optimization problems. In the past few decades, various researchers have proposed a large number of nature-inspired algorithms. Some of these algorithms have proved to be very efficient as compared to other classical optimization methods. A young researcher attempting to undertake or solve a problem using nature-inspired algorithms is bogged down by a plethora of proposals that exist today. Not every algorithm is suited for all kinds of problems. Some score over others. In this paper, an attempt has been made to summarize various leading research proposals that shall pave the way for any new entrant to easily understand the journey so far. Here, we classify the nature-inspired algorithms as natural evolution based, swarm intelligence based, biological based, science based and others. In this survey, widely acknowledged nature-inspired algorithms, namely ACO, ABC, EAM, FA, FPA, GA, GSA, JAYA, PSO, SFLA, TLBO and WCA, have been studied. The purpose of this review is to present an exhaustive analysis of various nature-inspired algorithms based on their source of inspiration, basic operators, control parameters, features, variants and areas of application where these algorithms have been successfully applied. It shall also assist in identifying and shortlisting the methodologies that are best suited for the problem.
"} {"abstract": " Let $\mathbb{F}_q$ be a finite field of order $q$. In this paper, we study the distribution of rectangles in a given set in $\mathbb{F}_q^2$. More precisely, for any $0<\delta\le 1$, we prove that there exists an integer $q_0=q_0(\delta)$ with the following property: if $q\ge q_0$ and $A$ is a multiplicative subgroup of $\mathbb{F}^*_q$ with $|A|\ge q^{2/3}$, then any set $S\subset \mathbb{F}_q^2$ with $|S|\ge \delta q^2$ contains at least $\gg \frac{|S|^4|A|^2}{q^5}$ rectangles with side-lengths in $A$. We also consider the case of rectangles with one fixed side-length and the other in a multiplicative subgroup $A$.
"} {"abstract": " Usually, managers or technical leaders in software projects assign issues manually. This task may become more complex as the issue descriptions become more detailed. This complexity can also make the process more prone to errors (misassignments) and time-consuming. In the literature, many studies aim to address this problem by using machine learning strategies. Although there is no specific solution that works for all companies, experience reports are useful to guide the choices in industrial auto-assignment projects. This paper presents an industrial initiative conducted in a global electronics company that aims to minimize the time spent and the errors that can arise in the issue assignment process.
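Both claims of the Newcomb-Benford abstract above (the digit law itself and the convergence of a scale-invariant multiplicative process to it) are easy to verify numerically; the uniform rescaling factor below is my stand-in for the proposed Markov model:

```python
# First-digit law P(d) = log10(1 + 1/d), and a multiplicative random process
# whose first-digit distribution converges to it.
import numpy as np

benford = np.log10(1 + 1 / np.arange(1, 10))       # P(1)=0.301, ..., P(9)=0.046

rng = np.random.default_rng(0)
x = np.ones(100_000)
for _ in range(50):                                # repeated random rescaling
    x *= rng.uniform(0.5, 2.0, size=x.size)

first_digit = (x / 10.0 ** np.floor(np.log10(x))).astype(int)
empirical = np.bincount(first_digit, minlength=10)[1:10] / x.size
print(np.round(benford, 3))
print(np.round(empirical, 3))                      # close to the Benford values
```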
As main contributions, we present a literature review, an\nindustrial report comparing different algorithms, and lessons learned during\nthe project.\n"} {"abstract": " We study astrometric residuals from a simultaneous fit of Hyper Suprime-Cam\nimages. We aim to characterize these residuals and study the extent to which\nthey are dominated by atmospheric contributions for bright sources. We use\nGaussian process interpolation, with a correlation function (kernel), measured\nfrom the data, to smooth and correct the observed astrometric residual field.\nWe find that Gaussian process interpolation with a von K\'arm\'an kernel allows\nus to reduce the covariances of astrometric residuals for nearby sources by\nabout one order of magnitude, from 30 mas$^2$ to 3 mas$^2$ at angular scales of\n~1 arcmin, and to halve the r.m.s. residuals. Those reductions using Gaussian\nprocess interpolation are similar to recent results published with the Dark\nEnergy Survey dataset. We are then able to detect the small static astrometric\nresiduals due to the Hyper Suprime-Cam sensor effects. We discuss how the\nGaussian process interpolation of astrometric residuals impacts galaxy shape\nmeasurements, in particular in the context of cosmic shear analyses at the\nRubin Observatory Legacy Survey of Space and Time.\n"} {"abstract": " The system of two nonlinear coupled oscillators is studied. As a partial case,\nthis system of equations reduces to the Duffing oscillator, which has many\napplications for describing physical processes. It is well known that the\ninverse scattering transform is one of the most powerful methods for solving\nthe Cauchy problems of partial differential equations. To solve the Cauchy\nproblem for nonlinear differential equations we can use the Lax pair\ncorresponding to this equation. The Lax pair for an ordinary differential\nequation or for a system of ordinary differential equations allows us to find\nfirst integrals, which in turn allow us to settle the question of integrability for\ndifferential equations. In this report we present the Lax pair for the system\nof coupled oscillators. Using the Lax pair we get two first integrals for the\nsystem of equations. The considered system of equations can also be reduced to\na fourth-order ordinary differential equation, and the Lax pair can be used\nfor this fourth-order ordinary differential equation. Some special cases of\nthe system of equations are considered.\n"} {"abstract": " This paper continues the program initiated in the works by the authors [60],\n[61] and [62] and by the authors with Li [51] and [52] to establish higher\norder Poincar\'e-Sobolev, Hardy-Sobolev-Maz'ya, Adams and Hardy-Adams\ninequalities on real hyperbolic spaces using the method of Helgason-Fourier\nanalysis on the hyperbolic spaces. The aim of this paper is to establish such\ninequalities on the Siegel domains and complex hyperbolic spaces. Firstly, we\nprove a factorization theorem for the operators on the complex hyperbolic space\nwhich is closely related to Geller's operator, as well as the CR invariant\ndifferential operators on the Heisenberg group and CR sphere. Secondly, by\nusing, among other things, the Kunze-Stein phenomenon on a closed linear group\n$SU(1,n)$ and Helgason-Fourier analysis techniques on the complex hyperbolic\nspaces, we establish the Poincar\'e-Sobolev and Hardy-Sobolev-Maz'ya inequalities on\nthe Siegel domain $\mathcal{U}^{n}$ and the unit ball\n$\mathbb{B}_{\mathbb{C}}^{n}$. 
Finally, we establish the sharp Hardy-Adams\ninequalities and sharp Adams type inequalities on Sobolev spaces of any\npositive fractional order on the complex hyperbolic spaces. The factorization\ntheorem we proved is of independent interest for the Heisenberg group and CR\nsphere and the CR invariant differential operators therein.\n"} {"abstract": " Low Earth orbit (LEO) satellite constellations rely on inter-satellite links\n(ISLs) to provide global connectivity. However, one significant challenge is to\nestablish and maintain inter-plane ISLs, which support communication between\ndifferent orbital planes. This is due to the fast movement of the\ninfrastructure and to the limited computation and communication capabilities on\nthe satellites. In this paper, we make use of antenna arrays with either Butler\nmatrix beam switching networks or digital beam steering to establish the\ninter-plane ISLs in a LEO satellite constellation. Furthermore, we present a\ngreedy matching algorithm to establish inter-plane ISLs with the objective of\nmaximizing the sum of rates. This is achieved by sequentially selecting the\npairs, switching or pointing the beams and, finally, setting the data rates.\nOur results show that, by selecting an update period of 30 seconds for the\nmatching, reliable communication can be achieved throughout the constellation,\nwhere the impact of interference on the rates is less than 0.7 % when compared\nto orthogonal links, even for relatively small antenna arrays. Furthermore,\ndoubling the number of antenna elements increases the rates by around one order\nof magnitude.\n"} {"abstract": " Given a random real quadratic field from $\{ \mathbb{Q}(\sqrt{p}\,) ~|~ p\n\text{ primes} \}$, the conjectural probability $\mathbb{P}(h=q)$ that it has\nclass number $q$ is given for all positive odd integers $q$. Some related\nconjectures of the Cohen-Lenstra heuristic are given here as corollaries. These\nresults suggest that the set of real quadratic number fields may have some\nnatural hierarchical structures.\n"} {"abstract": " Dispersionless bands -- \emph{flatbands} -- provide an excellent testbed for\nnovel physical phases due to the fine-tuned character of flatband tight-binding\nHamiltonians. The accompanying macroscopic degeneracy makes any perturbation\nrelevant, no matter how small. For short-range hoppings flatbands support\ncompact localized states, which allowed the development of systematic flatband\ngenerators in $d=1$ dimension in Phys. Rev. B {\bf 95} 115135 (2017) and Phys.\nRev. B {\bf 99} 125129 (2019). Here we extend this generator approach to $d=2$\ndimensions. The \emph{shape} of a compact localized state turns into an\nimportant additional flatband classifier. This allows us to obtain analytical\nsolutions for classes of $d=2$ flatband networks and to re-classify and\nre-obtain known ones, such as the checkerboard, kagome, Lieb and Tasaki\nlattices. Our generator can be straightforwardly generalized to three lattice\ndimensions as well.\n"} {"abstract": " In this article we introduce the notion of a Ribaucour partial tube and use\nit to derive several applications. These are based on a characterization of\nRibaucour partial tubes as the immersions of a product of two manifolds into a\nspace form such that the distributions given by the tangent spaces of the\nfactors are orthogonal to each other with respect to the induced metric, are\ninvariant under all shape operators, and one of them is spherical. 
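The greedy sum-of-rates matching summarized in the LEO constellation abstract above can be sketched as follows; the rate matrix and its values are hypothetical placeholders for whatever link budget the beam geometry would actually give.

```python
import numpy as np

def greedy_isl_matching(rates):
    """Greedily pair satellites of two adjacent orbital planes.

    rates[i, j] is an assumed precomputed achievable rate between satellite
    i of plane A and satellite j of plane B. Pairs are picked sequentially
    in order of decreasing rate, mirroring the greedy sum-rate objective."""
    rates = np.asarray(rates, dtype=float).copy()
    pairs, total = [], 0.0
    for _ in range(min(rates.shape)):
        i, j = np.unravel_index(np.argmax(rates), rates.shape)
        if not np.isfinite(rates[i, j]):
            break
        pairs.append((int(i), int(j)))
        total += rates[i, j]
        rates[i, :] = -np.inf    # satellite i is now matched
        rates[:, j] = -np.inf    # satellite j is now matched
    return pairs, total

# Toy example: hypothetical rates (bit/s) for 4 satellites per plane.
rng = np.random.default_rng(1)
print(greedy_isl_matching(rng.uniform(1e6, 1e9, size=(4, 4))))
```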
Our first\napplication is a classification of all hypersurfaces with dimension at least\nthree of a space form that carry a spherical foliation of codimension one,\nextending previous results by Dajczer, Rovenski and the second author for the\ntotally geodesic case. We proceed to prove a general decomposition theorem for\nimmersions of product manifolds, which extends several related results. Other\nmain applications concern the class of hypersurfaces of $\mathbb{R}^{n+1}$ that\nare of Enneper type, that is, hypersurfaces that carry a family of lines of\ncurvature, corresponding to a simple principal curvature, whose orthogonal\n$(n-1)$-dimensional distribution is integrable and whose leaves are contained\nin hyperspheres or affine hyperplanes of $\mathbb{R}^{n+1}$. We show how\nRibaucour partial tubes in the sphere can be used to describe all\n$n$-dimensional hypersurfaces of Enneper type for which the leaves of the\n$(n-1)$-dimensional distribution are contained in affine hyperplanes of\n$\mathbb{R}^{n+1}$, and then show how a general hypersurface of Enneper type\ncan be constructed in terms of a hypersurface in the latter class. We give an\nexplicit description of some special hypersurfaces of Enneper type, among which\nare natural generalizations of the so-called Joachimsthal surfaces.\n"} {"abstract": " This paper proposes a method to relax the conditional independence assumption\nof connectionist temporal classification (CTC)-based automatic speech\nrecognition (ASR) models. We train a CTC-based ASR model with auxiliary CTC\nlosses in intermediate layers in addition to the original CTC loss in the last\nlayer. During both training and inference, each generated prediction in the\nintermediate layers is added to the input of the next layer to condition the\nprediction of the last layer on those intermediate predictions. Our method is\neasy to implement and retains the merits of CTC-based ASR: a simple model\narchitecture and fast decoding speed. We conduct experiments on three different\nASR corpora. Our proposed method improves a standard CTC model significantly\n(e.g., more than 20 % relative word error rate reduction on the WSJ corpus)\nwith little computational overhead. Moreover, for the TEDLIUM2 corpus and the\nAISHELL-1 corpus, it achieves a comparable performance to a strong\nautoregressive model with beam search, but the decoding speed is at least 30\ntimes faster.\n"} {"abstract": " Physical-layer key generation (PKG) can generate symmetric keys between two\ncommunication ends based on the reciprocal uplink and downlink channels. By\nsmartly reconfiguring the radio signal propagation, an intelligent reflecting\nsurface (IRS) is able to improve the secret key rate of PKG. However, existing\nworks involving IRS-assisted PKG are concentrated in single-antenna wireless\nnetworks. This paper therefore investigates the problem of PKG in the IRS-assisted\nmultiple-input single-output (MISO) system, which aims to maximize the secret\nkey rate by optimally designing the IRS passive beamforming. First, we analyze\nthe correlation between the channel state information (CSI) of the eavesdropper and\nof the legitimate ends, and derive an expression for the upper bound of the secret key rate\nunder a passive eavesdropping attack. Then, an optimal algorithm for designing\nIRS reflecting coefficients based on Semi-Definite Relaxation (SDR) and Taylor\nexpansion is proposed to maximize the secret key rate. 
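A minimal PyTorch-style sketch of the intermediate-prediction conditioning idea from the CTC abstract above. The layer type, sizes and shared heads are assumptions for illustration, not the paper's exact architecture; CTC losses would be attached to both the final and the auxiliary logits.

```python
import torch
import torch.nn as nn

class SelfConditionedCTCEncoder(nn.Module):
    """Each intermediate prediction is projected back to the model dimension
    and added to the next layer's input (a sketch of the conditioning idea)."""

    def __init__(self, d_model=256, vocab=100, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
             for _ in range(n_layers)])
        self.to_vocab = nn.Linear(d_model, vocab)    # shared CTC head
        self.from_vocab = nn.Linear(vocab, d_model)  # re-injection projection

    def forward(self, x):                  # x: (batch, time, d_model)
        aux_logits = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i < len(self.layers) - 1:   # auxiliary CTC at inner layers
                logits = self.to_vocab(x)
                aux_logits.append(logits)
                x = x + self.from_vocab(logits.softmax(dim=-1))
        return self.to_vocab(x), aux_logits

enc = SelfConditionedCTCEncoder()
final_logits, aux = enc(torch.randn(2, 50, 256))
# A CTC loss would be applied to final_logits and to every entry of aux.
```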
Numerical results show\nthat our optimal IRS-assisted PKG scheme can achieve a much higher secret key\nrate when compared with two benchmark schemes.\n"} {"abstract": " We investigate the $\lambda$-Hilbert transform, the $\lambda$-Poisson integral and the\nconjugate $\lambda$-Poisson integral on the atomic Hardy space in the Dunkl\nsetting and establish a new version of the Paley type inequality which extends the\nresults in \cite{F} and \cite{ZhongKai Li 3}.\n"} {"abstract": " Arc-locally semicomplete and arc-locally in-semicomplete digraphs were\nintroduced by Bang-Jensen in 1993 as a common generalization of both semicomplete\nand semicomplete bipartite digraphs. Later, Bang-Jensen (2004),\nGaleana-Sanchez and Goldfeder (2009) and Wang and Wang (2009) provided a\ncharacterization of strong arc-locally semicomplete digraphs. In 2009, Wang and\nWang characterized strong arc-locally in-semicomplete digraphs. In 2012,\nGaleana-Sanchez and Goldfeder provided a characterization of all arc-locally\nsemicomplete digraphs which generalizes some results by Bang-Jensen. In this\npaper, we characterize the structure of arbitrary connected arc-locally (out)\nin-semicomplete digraphs and arbitrary connected arc-locally semicomplete\ndigraphs.\n"} {"abstract": " We study Markov population processes on large graphs, with the local state\ntransition rates of a single vertex being a linear function of its neighborhood.\nA simple way to approximate such processes is by a system of ODEs called the\nhomogeneous mean-field approximation (HMFA). Our main result is showing that\nHMFA is guaranteed to be the large graph limit of the stochastic dynamics on a\nfinite time horizon if and only if the graph-sequence is quasi-random. An explicit\nerror bound is given, of order $\frac{1}{\sqrt{N}}$ plus the largest\ndiscrepancy of the graph. For Erd\H{o}s R\'{e}nyi and random regular graphs we\nshow an error bound of order the inverse square root of the average degree. In\ngeneral, diverging average degrees are shown to be a necessary condition for the\nHMFA to be accurate. Under special conditions, some of these results also apply\nto more detailed types of approximations such as the inhomogeneous mean-field\napproximation (IHMFA). We pay special attention to epidemic applications such\nas the SIS process.\n"} {"abstract": " Downscaling aims to link the behaviour of the atmosphere at fine scales to\nproperties measurable at coarser scales, and has the potential to provide high\nresolution information at a lower computational and storage cost than numerical\nsimulation alone. This is especially appealing for targeting convective scales,\nwhich are at the edge of what is possible to simulate operationally. Since\nconvective scale weather has a high degree of independence from larger scales,\na generative approach is essential. We here propose a statistical method for\ndownscaling moist variables to convective scales using conditional Gaussian\nrandom fields, with an application to wet bulb potential temperature (WBPT)\ndata over the UK. Our model uses adaptive covariance estimation to capture\nthe variable spatial properties at convective scales. We further propose a\nmethod for validation, which has historically been a challenge for\ngenerative models.\n"} {"abstract": " Quantum spins of mesoscopic size are a well-studied playground for\nengineering non-classical states. If the spin represents the collective state\nof an ensemble of qubits, its non-classical behavior is linked to entanglement\nbetween the qubits. 
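As a concrete instance of the homogeneous mean-field approximation mentioned in the Markov population abstract above, the SIS process on a graph with mean degree ⟨k⟩ reduces to a single ODE for the infected fraction; the parameter values in this sketch are illustrative assumptions.

```python
from scipy.integrate import solve_ivp

# HMFA for the SIS epidemic: every vertex sees the population average,
# giving one ODE for the infected fraction i(t):
#   di/dt = tau * <k> * i * (1 - i) - gamma * i
tau, gamma, k_avg = 0.4, 1.0, 6.0   # illustrative rates and mean degree

def hmfa_sis(t, y):
    i = y[0]
    return [tau * k_avg * i * (1.0 - i) - gamma * i]

sol = solve_ivp(hmfa_sis, (0.0, 20.0), [0.01])
print(sol.y[0, -1])   # approaches the endemic level 1 - gamma/(tau*k_avg)
```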
In this work, we report on an experimental study of\nentanglement in dysprosium's electronic spin. Its ground state, of angular\nmomentum $J=8$, can formally be viewed as a set of $2J$ qubits symmetric upon\nexchange. To access entanglement properties, we partition the spin by optically\ncoupling it to an excited state $J'=J-1$, which removes a pair of qubits in a\nstate defined by the light polarization. Starting with the well-known W and\nsqueezed states, we extract the concurrence of qubit pairs, which quantifies\ntheir non-classical character. We also directly demonstrate entanglement\nbetween the 14- and 2-qubit subsystems via an increase in entropy upon\npartition. In a complementary set of experiments, we probe decoherence of a\nstate prepared in the excited level $J'=J+1$ and interpret spontaneous emission\nas a loss of a qubit pair in a random state. This allows us to contrast the\nrobustness of pairwise entanglement of the W state with the fragility of the\ncoherence involved in a Schr\"odinger cat state. Our findings open up the\npossibility to engineer novel types of entangled atomic ensembles, in which\nentanglement occurs within each atom's electronic spin as well as between\ndifferent atoms.\n"} {"abstract": " High quality (HQ) video services occupy large portions of the total bandwidth\nand are among the main causes of congestion at network bottlenecks. Since video\nis resilient to data loss, throwing away less important video packets can ease\nnetwork congestion with minimal damage to video quality and free up bandwidth\nfor other data flows. Frame type is one of the features that can be used to\ndetermine the importance of video packets, but this information is stored in\nthe packet payload. Due to the limited processing power of devices in high\nthroughput/speed networks, data encryption and user credibility issues, it is\ncostly for the network to find the frame type of each packet. Therefore, a fast\nand reliable standalone method to recognize video packet types at the network level\nis desired. This paper proposes a method to model the structure of live video\nstreams in a network node, which results in determining the frame type of each\npacket. It enables the network nodes to mark and, if need be, to discard less\nimportant video packets ahead of congestion, and therefore preserve video\nquality and free up bandwidth for more important packet types. The method does\nnot need to read the IP layer payload and uses only the packet header data for\ndecisions. Experimental results indicate that, while dropping packets based on the\npredicted packet type degrades video quality by 0.5-3 dB with respect to dropping\nbased on the true type, it yields a 7-20 dB improvement over dropping packets randomly.\n"} {"abstract": " Improving wind turbine efficiency is essential for reducing the costs of\nenergy production. The highly nonlinear dynamics of wind turbines and their\nuncertain operating conditions have posed many challenges for their control\nmethods. In this work, a robust control strategy based on sliding mode control and an\nadaptive fuzzy disturbance observer is proposed for speed tracking in a\nvariable speed wind turbine. First, the nonlinear mathematical model that\ndescribes the dynamics of the variable speed wind turbine is derived. This\nnonlinear model is then used to derive the control methodology and to find\nstability and robustness conditions. The control approach is designed to track\nthe optimal wind speed that causes maximum energy extraction. 
The stability\ncondition was verified using the Lyapunov stability theory. A simulation study\nwas conducted to verify the method, and a comparative analysis was used to\nmeasure its effectiveness. The results showed a high tracking ability and\nrobustness of the developed methodology. Moreover, higher power extraction was\nobserved when compared to a classical control method.\n"} {"abstract": " Many modern systems for speaker diarization, such as the recently-developed\nVBx approach, rely on clustering of DNN speaker embeddings followed by\nresegmentation. Two problems with this approach are that the DNN is not\ndirectly optimized for this task, and the parameters need significant retuning\nfor different applications. We have recently presented progress in this\ndirection with a Leave-One-Out Gaussian PLDA (LGP) clustering algorithm and an\napproach to training the DNN such that embeddings directly optimize performance\nof this scoring method. This paper presents a new two-pass version of this\nsystem, where the second pass uses finer time resolution to significantly\nimprove overall performance. For the Callhome corpus, we achieve the first\npublished error rate below 4% without any task-dependent parameter tuning. We\nalso show significant progress towards a robust single solution for multiple\ndiarization tasks.\n"} {"abstract": " We use 3D fully kinetic particle-in-cell simulations to study the occurrence\nof magnetic reconnection in a simulation of decaying turbulence created by\nanisotropic counter-propagating low-frequency Alfv\\'en waves consistent with\ncritical-balance theory. We observe the formation of small-scale\ncurrent-density structures such as current filaments and current sheets as well\nas the formation of magnetic flux ropes as part of the turbulent cascade. The\nlarge magnetic structures present in the simulation domain retain the initial\nanisotropy while the small-scale structures produced by the turbulent cascade\nare less anisotropic. To quantify the occurrence of reconnection in our\nsimulation domain, we develop a new set of indicators based on intensity\nthresholds to identify reconnection events in which both ions and electrons are\nheated and accelerated in 3D particle-in-cell simulations. According to the\napplication of these indicators, we identify the occurrence of reconnection\nevents in the simulation domain and analyse one of these events in detail. The\nevent is related to the reconnection of two flux ropes, and the associated ion\nand electron exhausts exhibit a complex three-dimensional structure. We study\nthe profiles of plasma and magnetic-field fluctuations recorded along\nartificial-spacecraft trajectories passing near and through the reconnection\nregion. Our results suggest the presence of particle heating and acceleration\nrelated to small-scale reconnection events within magnetic flux ropes produced\nby the anisotropic Alfv\\'enic turbulent cascade in the solar wind. These events\nare related to current structures of order a few ion inertial lengths in size.\n"} {"abstract": " We propose the spatial-temporal aggregated predictor (STAP) modeling\nframework to address measurement and estimation issues that arise when\nassessing the relationship between built environment features (BEF) and health\noutcomes. Many BEFs can be mapped as point locations and thus traditional\nexposure metrics are based on the number of features within a pre-specified\nspatial unit. 
The size of the spatial unit--or spatial scale--that is most\nappropriate for a particular health outcome is unknown and its choice\ninextricably impacts the estimated health effect. A related issue is the lack\nof knowledge of the temporal scale--or the length of exposure time that is\nnecessary for the BEF to render its full effect on the health outcome. The\nproposed STAP model enables investigators to estimate both the spatial and\ntemporal scales for a given BEF in a data-driven fashion, thereby providing a\nflexible solution for measuring the relationship between outcomes and spatial\nproximity to point-referenced exposures. Simulation studies verify the validity\nof our method for estimating the scales as well as the association between\nthe availability of BEFs and health outcomes. We apply this method to estimate the\nspatial-temporal association between supermarkets and BMI using data from the\nMulti-Ethnic Atherosclerosis Study, demonstrating the method's applicability in\ncohort studies.\n"} {"abstract": " In a rectangular domain, a boundary-value problem is considered for a\nmixed-type equation with a regularized Caputo-like counterpart of the hyper-Bessel\ndifferential operator and the bi-ordinal Hilfer fractional derivative. Using\nthe method of separation of variables and the Laplace transform, the unique solvability\nof the considered problem is established. Moreover, we have found the\nexplicit solution of initial problems for a differential equation with the\nbi-ordinal Hilfer derivative and a regularized Caputo-like counterpart of the\nhyper-Bessel differential operator with a non-zero starting point.\n"} {"abstract": " Prediction of human actions in social interactions has important applications\nin the design of social robots or artificial avatars. In this paper, we model\nhuman interaction generation as a discrete multi-sequence generation problem\nand present SocialInteractionGAN, a novel adversarial architecture for\nconditional interaction generation. Our model builds on a recurrent\nencoder-decoder generator network and a dual-stream discriminator. This\narchitecture allows the discriminator to jointly assess the realism of\ninteractions and that of individual action sequences. Within each stream a\nrecurrent network operating on short subsequences endows the output signal with\nlocal assessments, better guiding the forthcoming generation. Crucially,\ncontextual information on interacting participants is shared among agents and\nreinjected in both the generation and the discriminator evaluation processes.\nWe show that the proposed SocialInteractionGAN succeeds in producing highly\nrealistic action sequences of interacting people, comparing favorably to a\ndiversity of recurrent and convolutional discriminator baselines. Evaluations\nare conducted using modified Inception Score and Fr{\'e}chet Inception Distance\nmetrics, which we specifically design for discrete sequential generated data.\nThe distribution of generated sequences is shown to approach closely that of\nreal data. In particular our model properly learns the dynamics of interaction\nsequences, while exploiting the full range of actions.\n"} {"abstract": " We consider, in general terms, the possible parameter space of thermal dark\nmatter candidates. We assume that the dark matter particle is fundamental and\nwas in thermal equilibrium in a hidden sector with a temperature $T'$, which\nmay differ from the Standard Model temperature $T$. The candidates lie\nin a region in the $T'/T$ vs. 
$m_{\\rm dm}$ plane, which is bounded by both\nmodel-independent theoretical considerations and observational constraints. The\nformer consists of limits from dark matter candidates that decoupled when\nrelativistic (the relativistic floor) and from those that decoupled when\nnon-relativistic with the largest annihilation cross section allowed by\nunitarity (the unitarity wall), while the latter concerns big bang\nnucleosynthesis ($N_{\\rm eff}$ ceiling) and free streaming. We present three\nsimplified dark matter scenarios, demonstrating concretely how each fits into\nthe domain.\n"} {"abstract": " We design a multi-purpose environment for autonomous UAVs offering different\ncommunication services in a variety of application contexts (e.g., wireless\nmobile connectivity services, edge computing, data gathering). We develop the\nenvironment, based on OpenAI Gym framework, in order to simulate different\ncharacteristics of real operational environments and we adopt the Reinforcement\nLearning to generate policies that maximize some desired performance.The\nquality of the resulting policies are compared with a simple baseline to\nevaluate the system and derive guidelines to adopt this technique in different\nuse cases. The main contribution of this paper is a flexible and extensible\nOpenAI Gym environment, which allows to generate, evaluate, and compare\npolicies for autonomous multi-drone systems in multi-service applications. This\nenvironment allows for comparative evaluation and benchmarking of different\napproaches in a variety of application contexts.\n"} {"abstract": " The discovery of superconductivity in the infinite-layer nickelates has\nopened new perspectives in the context of quantum materials. We analyze, via\nfirst-principles calculations, the electronic properties of La$_2$NiO$_3$F --\nthe first single-layer T'-type nickelate -- and compare these properties with\nthose of related nickelates and isostructural cuprates. We find that\nLa$_2$NiO$_3$F is essentially a single-band system with a Fermi surface\ndominated by the Ni-3$d_{x^2-y^2}$ states with an exceptional 2D character. In\naddition, the hopping ratio is similar to that of the highest $T_c$ cuprates\nand there is a remarkable $e_g$ splitting together with a charge transfer\nenergy of 3.6~eV. According to these descriptors, along with a comparison to\nNd$_2$CuO$_4$, we thus indicate single-layer T'-type nickelates of this class\nas very promising analogs of cuprate-like physics while keeping distinct\nNi$^{1+}$ features.\n"} {"abstract": " We conducted an investigation to find when a mistake was introduced in a\nwidely accessed Internet document, namely the RFC index. With great surprise,\nwe discovered that a it may go unnoticed for a very long period, namely more\nthat twenty-six years. This raises some questions to what does it mean to have\nopen access and the meaning of Linus' laws that \"given enough eyeballs, all\nbugs are shallow\"\n"} {"abstract": " In this paper, we reformulate the Bakry-\\'Emery curvature on a weighted graph\nin terms of the smallest eigenvalue of a rank one perturbation of the so-called\ncurvature matrix using Schur complement. This new viewpoint allows us to show\nvarious curvature function properties in a very conceptual way. We show that\nthe curvature, as a function of the dimension parameter, is analytic, strictly\nmonotone increasing and strictly concave until a certain threshold after which\nthe function is constant. 
Furthermore, we derive the curvature of the Cartesian\nproduct using the crucial observation that the curvature matrix of the product\nis the direct sum of the curvature matrices of the components. Our approach to the\ncurvature functions of graphs can be employed to establish analogous results for the curvature\nfunctions of weighted Riemannian manifolds. Moreover, as an application, we\nconfirm a conjecture (in the general weighted case) that the\ncurvature does not decrease under certain graph modifications.\n"} {"abstract": " For the first time, based both on experimental facts and on our theoretical\nconsiderations, we show that Fermi systems with flat bands should be tuned with\nthe superconducting state. Experimental measurements on magic-angle twisted\nbilayer graphene of the Fermi velocity $V_F$ as a function of the temperature\n$T_c$ of the superconducting phase transition have revealed $V_F\propto T_c\propto\n1/N_s(0)$, where $N_s(0)$ is the density of states at the Fermi level. We show\nthat the high-$T_c$ compounds $\rm Bi_2Sr_2CaCu_2O_{8+x}$ exhibit the same\nbehavior. Such an observation is a challenge to theories of high-$T_c$\nsuperconductivity, in which $V_F$ is negatively correlated with $T_c$, for\n$T_c\propto 1/V_F\propto N_s(0)$. We show that the theoretical idea of forming\nflat bands in strongly correlated Fermi systems can explain this behavior and\nother experimental data collected on both $\rm Bi_2Sr_2CaCu_2O_{8+x}$ and\ntwisted bilayer graphene. Our findings place stringent constraints on theories\ndescribing the nature of high-$T_c$ superconductivity and the deformation of\nthe flat band by the superconducting phase transition.\n"} {"abstract": " This paper presents a novel, non-standard set of vector instruction types for\nexploring custom SIMD instructions in a softcore. The new types allow\nsimultaneous access to a relatively high number of operands, reducing the\ninstruction count where applicable. Additionally, a high-performance\nopen-source RISC-V (RV32 IM) softcore is introduced, optimised for exploring\ncustom SIMD instructions and streaming performance. By providing instruction\ntemplates for instruction development in HDL/Verilog, efficient FPGA-based\ninstructions can be developed with few low-level lines of code. In order to\nimprove custom SIMD instruction performance, the softcore's cache hierarchy is\noptimised for bandwidth, such as with very wide blocks for the last-level\ncache. The approach is demonstrated on example memory-intensive applications on\nan FPGA. Although the exploration is based on the softcore, the goal is to\nprovide a means to experiment with advanced SIMD instructions which could be\nloaded in future CPUs that feature reconfigurable regions as custom\ninstructions. Finally, we provide some insights on the challenges and\neffectiveness of such future micro-architectures.\n"} {"abstract": " The law of centripetal force governing the motion of celestial bodies in\neccentric conic sections has been established and thoroughly investigated by\nSir Isaac Newton in his Principia Mathematica. Yet its profound implications on\nthe understanding of such motions are still evolving. In a paper to the Royal\nAcademy of Science, Sir William Hamilton demonstrated that this law underlies\nthe circular character of hodographs for Kepler orbits, a fact which was the\nobject of later research and exploration by Richard Feynman and many other\nauthors [1]. 
In effect, a minute examination of the geometry of elliptic\ntrajectories reveals interesting geometric properties and relations which,\ncombined with the law of conservation of angular momentum, lead\neventually, without any recourse to differential equations, to\nthe equation of the trajectory and to the derivation of the\nequation of its corresponding hodograph. In this respect, and for the sake of\nfounding the approach on a solid basis, I devised two mathematical theorems: one\nconcerning the existence of geometric means, and the other related to\nestablishing the parametric equation of an off-center circle; compounded\nwith other simple arguments, these ultimately give rise to the inverse\nsquare law of force that governs the motion of bodies in elliptic trajectories,\nas well as to the equation of their inherent circular hodographs.\n"} {"abstract": " 3D point-clouds and 2D images are different visual representations of the\nphysical world. While human vision can understand both representations,\ncomputer vision models designed for 2D image and 3D point-cloud understanding\nare quite different. Our paper investigates the potential for transferability\nbetween these two representations by empirically examining whether this\napproach works, what factors affect the transfer performance, and how to make\nit work even better. We discovered that we can indeed use the same neural net\nmodel architectures to understand both images and point-clouds. Moreover, we\ncan transfer pretrained weights from image models to point-cloud models with\nminimal effort. Specifically, based on a 2D ConvNet pretrained on an image\ndataset, we can transfer the image model to a point-cloud model by\n\textit{inflating} 2D convolutional filters to 3D and then finetuning its input,\noutput, and optionally normalization layers. The transferred model can achieve\ncompetitive performance on 3D point-cloud classification, indoor and driving\nscene segmentation, even beating a wide range of point-cloud models that adopt\ntask-specific architectures and use a variety of tricks.\n"} {"abstract": " We present a study of the environment of 27 z=3-4.5 bright quasars from the\nMUSE Analysis of Gas around Galaxies (MAGG) survey. With medium-depth MUSE\nobservations (4 hours on target per field), we characterise the effects of\nquasars on their surroundings by studying simultaneously the properties of\nextended gas nebulae and Lyalpha emitters (LAEs) in the quasar host haloes. We\ndetect extended (up to ~ 100 kpc) Lyalpha emission around all MAGG quasars,\nfinding a very weak redshift evolution between z=3 and z=6. By stacking the\nMUSE datacubes, we confidently detect extended emission of CIV and only\nmarginally detect extended HeII up to ~40 kpc, implying that the gas is metal\nenriched. Moreover, our observations show a significant overdensity of LAEs\nwithin 300 km/s from the quasar systemic redshifts estimated from the nebular\nemission. The luminosity functions and equivalent width distributions of these\nLAEs show shapes similar to those of LAEs away from quasars, suggesting that\nthe Lyalpha emission of the majority of these sources is not significantly\nboosted by the quasar radiation or other processes related to the quasar\nenvironment. 
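A minimal PyTorch sketch of the 2D-to-3D filter inflation step described in the point-cloud transfer abstract above: the 2D kernel is replicated along the new depth axis and rescaled so that the 3D filter initially matches the 2D response on depth-constant input. The depth and padding choices below are assumptions for illustration.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Replicate a pretrained 2D kernel along a new depth axis and divide
    by `depth`, the usual inflation trick for transferring 2D weights."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        # (O, I, kH, kW) -> (O, I, depth, kH, kW)
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

conv3d = inflate_conv2d_to_3d(nn.Conv2d(3, 16, 3, padding=1), depth=3)
out = conv3d(torch.randn(1, 3, 8, 32, 32))   # input is (N, C, D, H, W)
```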
Within this framework, the observed LAE overdensities and our\nkinematic measurements imply that bright quasars at z=3-4.5 are hosted by\nhaloes in the mass range ~ 10^{12.0}-10^{12.5} Msun.\n"} {"abstract": " Although deep neural networks are successful for many tasks in the speech\ndomain, the high computational and memory costs of deep neural networks make it\ndifficult to directly deploy high-performance neural network systems on\nlow-resource embedded devices. There are several mechanisms to reduce the size\nof neural networks, e.g., parameter pruning and parameter quantization.\nThis paper focuses on how to apply binary neural networks to the task of\nspeaker verification. The proposed binarization of training parameters can\nlargely maintain the performance while significantly reducing storage space\nrequirements and computational costs. Experimental results show that, after\nbinarizing the Convolutional Neural Network, the ResNet34-based network\nachieves an EER of around 5% on the Voxceleb1 testing dataset and even\noutperforms the traditional real number network on the text-dependent dataset\nXiaole, while having a 32x memory saving.\n"} {"abstract": " In this paper, we present a model-free learning-based control scheme for the\nsoft snake robot to improve its contact-aware locomotion performance in a\ncluttered environment. The control scheme includes two cooperative controllers:\na bio-inspired controller (C1) that controls both the steering and velocity of\nthe soft snake robot, and an event-triggered regulator (R2) that controls the\nsteering of the snake in anticipation of obstacle contacts and during contact.\nThe inputs from the two controllers are composed as the input to a Matsuoka CPG\nnetwork to generate smooth and rhythmic actuation inputs to the soft snake. To\nenable stable and efficient learning with two controllers, we develop a\ngame-theoretic process, fictitious play, to train C1 and R2 with a shared\npotential-field-based reward function for goal tracking tasks. The proposed\napproach is tested and evaluated in the simulator and shows significant\nimprovement of locomotion performance in the obstacle-based environment\ncompared to two baseline controllers.\n"} {"abstract": " The primary objective of this paper is the study of different instances of\nthe elliptic Stark conjectures of Darmon, Lauder and Rotger, in a situation\nwhere the elliptic curve attached to the modular form $f$ has split\nmultiplicative reduction at $p$ and the arithmetic phenomena are especially\nrich. For that purpose, we resort to the principle of improved $p$-adic\n$L$-functions and study their $\mathcal L$-invariants. We further interpret\nthese results in terms of derived cohomology classes coming from the setting of\ndiagonal cycles, showing that the same $\mathcal L$-invariant which arises in\nthe theory of $p$-adic $L$-functions also governs the arithmetic of Euler\nsystems. Thus, we can reduce, in the split multiplicative situation, the\nconjecture of Darmon, Lauder and Rotger to a more familiar statement about\nhigher order derivatives of a triple product $p$-adic $L$-function at a point\nlying inside the region of classical interpolation, in the realm of the more\nwell-known exceptional zero conjectures.\n"} {"abstract": " The allocation of venture capital is one of the primary factors determining\nwho takes products to market, which startups succeed or fail, and as such who\ngets to participate in the shaping of our collective economy. 
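A common way to realize the weight binarization referenced in the speaker-verification abstract above is sign binarization trained with a straight-through estimator; the sketch below shows that generic technique, not necessarily the paper's exact scheme, and the 32x figure follows from storing 1 bit instead of a 32-bit float per weight.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)          # weights in {-1, 0, +1}; 0 is rare

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass the gradient through only where |w| <= 1 (hard-tanh window).
        return grad_out * (w.abs() <= 1).to(grad_out.dtype)

w = torch.randn(16, 3, 3, 3, requires_grad=True)  # e.g. a conv kernel
loss = (BinarizeSTE.apply(w) ** 2).sum()
loss.backward()                       # gradients flow to the real weights
```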
While gender\ndiversity contributes to startup success, most funding is allocated to\nmale-only entrepreneurial teams. In the wake of COVID-19, 2020 is seeing a\nnotable decline in funding to female and mixed-gender teams, giving rise to an\nurgent need to study and correct the longstanding gender bias in startup\nfunding allocation. We conduct an in-depth data analysis of over 48,000\ncompanies on Crunchbase, comparing funding allocation based on the gender\ncomposition of founding teams. Detailed findings across diverse industries and\ngeographies are presented. Further, we construct machine learning models to\npredict whether startups will reach an equity round, revealing the surprising\nfinding that the CEO's gender is the primary determining factor for attaining\nfunding. Policy implications for this pressing issue are discussed.\n"} {"abstract": " Macroscopic realism (MR) is the notion that a time-evolving system possesses\ndefinite properties, irrespective of past or future measurements. Quantum\nmechanical theories can, however, produce violations of MR. Most research to\ndate has focused on a single set of conditions for MR, the Leggett-Garg\ninequalities (LGIs), and on a single data set, the "standard data set", which\nconsists of single-time averages and second-order correlators of a dichotomic\nvariable Q for three times. However, if such conditions are all satisfied, then\nwhere is the quantum behaviour? In this paper, we provide an answer to this\nquestion by considering expanded data sets obtained from finer-grained\nmeasurements and MR conditions on those sets. We consider three different\nsituations in which there are violations of MR that go undetected by the\nstandard LGIs. First, we explore higher-order LGIs on a data set involving\nthird- and fourth-order correlators, using a spin-1/2 and spin-1 system.\nSecond, we explore the pentagon inequalities (PIs) and a data set consisting of\nall possible averages and second-order correlators for measurements of Q at\nfive times. Third, we explore the LGIs for a trichotomic variable and\nmeasurements made with a trichotomic operator to, again, identify violations\nfor a spin-1 system beyond those seen with a single dichotomic variable. We\nalso explore the regimes in which combinations of two and three-time LGIs can\nbe satisfied and violated in a spin-1 system, extending recent work. We discuss\nthe possible experimental implementation of all the above results.\n"} {"abstract": " The carrier transport and the motion of a vortex system in the mixed state of\nthe electron-doped high-temperature superconductor Nd2-xCexCuO4 were\ninvestigated. To study the anisotropy of galvanomagnetic effects of the highly\nlayered NdCeCuO system we have synthesized Nd2-xCexCuO4/SrTiO3 epitaxial films\nwith non-standard orientations of the c-axis and conductive CuO2 layers\nrelative to the substrate. The variation of the angle of inclination of the\nmagnetic field B, relative to the current J, reveals that the behavior of both\nthe in-plane r_xx(B) and the out-of-plane r_xy(B) resistivities in the mixed state\nis mainly determined by the component of B perpendicular to J, which indicates\nthe crucial role of the Lorentz force F_L~[JxB] and defines the motion of\nJosephson vortices across the CuO2 layers.\n"} {"abstract": " We consider the problem of finding an inductive construction, based on vertex\nsplitting, of triangulated spheres with a fixed number of additional edges\n(braces). 
We show that for any positive integer $b$ there is such an inductive\nconstruction of triangulations with $b$ braces, having finitely many base\ngraphs. In particular we establish a bound for the maximum size of a base graph\nwith $b$ braces that is linear in $b$. In the case that $b=1$ or $2$ we\ndetermine the list of base graphs explicitly. Using these results we show that\ndoubly braced triangulations are (generically) minimally rigid in two distinct\ngeometric contexts arising from a hypercylinder in $\mathbb{R}^4$ and a class\nof mixed norms on $\mathbb{R}^3$.\n"} {"abstract": " The narrow escape problem is a first-passage problem concerned with randomly\nmoving particles in a physical domain, being trapped by absorbing surface traps\n(windows), such that the measure of traps is small compared to the domain size.\nThe expected value of time required for a particle to escape is defined as the mean\nfirst passage time (MFPT), which satisfies the Poisson partial differential\nequation subject to a mixed Dirichlet-Neumann boundary condition. The primary\nobjective of this work is to directly simulate multiple particles\nundergoing Brownian motion in a three-dimensional sphere with boundary traps,\ncompute MFPT values by averaging Brownian escape times, and compare the results\nwith asymptotic results obtained by solving the Poisson PDE problem. A\ncomprehensive study of results obtained from the simulations shows that the\ndifference between Brownian and asymptotic results for the escape times mostly\ndoes not exceed $1\%$. This comparison in some sense validates the narrow\nescape PDE problem itself as an approximation (averaging) of the multiple\nphysical Brownian motion runs. This work also predicts how many\nsingle-particle simulations are required to match the predicted asymptotic\naveraged MFPT values. The next objective of this work is to study the dynamics of\nBrownian particles near the boundary by estimating the average percentage of\ntime spent by a Brownian particle near the domain boundary, for both\nanisotropic and isotropic diffusion. It is shown that the Brownian particles\nspend more time in the boundary layer than predicted by the boundary layer relative\nvolume, with the effect being more pronounced in a narrow layer near the\nspherical wall. It is also shown that taking into account anisotropic diffusion\nyields larger times spent by a particle near the boundary, and smaller escape\ntimes, than those predicted by the isotropic diffusion model.\n"} {"abstract": " This paper considers the narrow escape problem of a Brownian particle within\na three-dimensional Riemannian manifold under the influence of a force field.\nWe compute an asymptotic expansion of the mean sojourn time for Brownian particles.\nAs an auxiliary result, we obtain the singular structure of the restricted\nNeumann Green's function, which may be of independent interest.\n"} {"abstract": " I propose the use of two magnetic Wollaston prisms to correct the linear\nLarmor phase aberration of MIEZE, introduced by the transverse size of the\nsample. With this approach, the resolution function of MIEZE can be optimized\nfor any scattering angle of interest. 
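A minimal Monte Carlo sketch of the direct Brownian simulation described in the narrow-escape abstract above: particles diffuse inside the unit sphere, are absorbed on a polar cap and reflected elsewhere. The cap size, time step and particle count are illustrative assumptions, and the simple radial reflection is one of several possible boundary treatments.

```python
import numpy as np

def mfpt_unit_sphere(n_particles=400, D=1.0, dt=1e-4, cap_angle=0.5, seed=0):
    """Average Brownian escape time from the unit sphere through one
    absorbing polar cap of angular radius cap_angle (reflecting wall)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 3))          # all particles start at centre
    time = np.zeros(n_particles)
    alive = np.ones(n_particles, dtype=bool)
    step = np.sqrt(2.0 * D * dt)
    while alive.any():
        idx = np.flatnonzero(alive)
        pos[idx] += step * rng.standard_normal((idx.size, 3))
        time[idx] += dt
        r = np.linalg.norm(pos[idx], axis=1)
        out = idx[r >= 1.0]                   # particles that reached the wall
        if out.size:
            cos_t = np.clip(pos[out, 2] / np.linalg.norm(pos[out], axis=1), -1, 1)
            hit = np.arccos(cos_t) <= cap_angle   # escaped through the trap
            alive[out[hit]] = False
            back = out[~hit]                  # reflect the rest radially
            nrm = np.linalg.norm(pos[back], axis=1, keepdims=True)
            pos[back] *= (2.0 - nrm) / nrm
    return time.mean()

print(mfpt_unit_sphere())   # to be compared with the PDE asymptotics
```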
The optimum magnetic fields required for\nthe magnetic Wollaston prisms depend only on the scattering angle and the\nfrequency of the RF flippers, and they are independent of the neutron wavelength\nand beam divergence, which makes the approach suitable for both pulsed and constant\nwavelength neutron sources.\n"} {"abstract": " We consider $n$ independent $p$-dimensional Gaussian vectors with a covariance\nmatrix having Toeplitz structure. We test that these vectors have independent\ncomponents against a stationary distribution with sparse Toeplitz covariance\nmatrix, and also select the support of non-zero entries. We assume that the\nnon-zero values can occur in the recent past (time-lag less than $p/2$). We\nbuild test procedures that combine sum and scan-type procedures, but are\ncomputationally fast, and show their non-asymptotic behaviour for both one-sided\n(only positive correlations) and two-sided alternatives. We also\nexhibit a selector of significant lags and bound the Hamming-loss risk of the\nestimated support. These results can be extended to the case of nearly Toeplitz\ncovariance structure and to sub-Gaussian vectors. Numerical results illustrate\nthe excellent behaviour of both test procedures and support selectors - the larger\nthe dimension $p$, the faster the rates.\n"} {"abstract": " A graph $G$ is called interval colorable if it has a proper edge coloring\nwith colors $1,2,3,\dots$ such that the colors of the edges incident to every\nvertex of $G$ form an interval of integers. Not all graphs are interval\ncolorable; in fact, only a few families have been proved to admit interval\ncolorings. In this paper we introduce and investigate a new notion, the\ninterval coloring thickness of a graph $G$, denoted\n${\theta_{\mathrm{int}}}(G)$, which is the minimum number of interval colorable\nedge-disjoint subgraphs of $G$ whose union is $G$.\n Our investigation is motivated by scheduling problems with compactness\nrequirements, in particular, problems whose solution may consist of several\nschedules, but where each schedule must not contain any waiting periods or idle\ntimes for any of the involved parties. We first prove that every connected properly\n$3$-edge colorable graph with maximum degree $3$ is interval colorable, and\nusing this result, we deduce an upper bound on ${\theta_{\mathrm{int}}}(G)$ for\ngeneral graphs $G$. We demonstrate that this upper bound can be improved in the\ncase when $G$ is bipartite, planar or complete multipartite and consider some\napplications in timetabling.\n"} {"abstract": " CO$_2$ dissociation stimulated by vibrational excitation in non-equilibrium\ndischarges has drawn much attention. Ns-discharges are known for their\nhighly non-equilibrium conditions. It is therefore of interest to investigate\nthe CO$_2$ excitation in such discharges. In this paper, we demonstrate the\nability to monitor the time evolution of CO$_2$ ro-vibrational excitation\nwith a well-selected wavelength window around 2289.0 cm$^{-1}$ and a single CW\nquantum cascade laser (QCL), with both high accuracy and temporal resolution.\nThe rotational and vibrational temperatures for both the symmetric and the\nasymmetric modes of CO$_2$ in the afterglow of a CO$_2$ + He ns-discharge were\nmeasured with a temporal resolution of 1.5 $\mu$s. 
The non-thermal feature and\nthe preferential excitation of the asymmetric stretch mode of CO$_2$ were\nexperimentally observed, with peak temperatures of $T_{v3, max}$ = 966 $\pm$\n1.5 K, $T_{v12, max}$ = 438.4 $\pm$ 1.2 K and $T_{rot}$ = 334.6 $\pm$ 0.6 K\nreached at 3 $\mu$s after the nanosecond pulse. In the following relaxation\nprocess, an exponential decay with a time constant of 69 $\mu$s was observed\nfor the asymmetric stretch (001) state, consistent with the dominant\ndeexcitation mechanism due to VT transfer with He and deexcitation on the wall.\nFurthermore, a synchronous oscillation of the gas temperature and the total\npressure was also observed and can be explained by two-line thermometry and\nan adiabatic process. The period of the oscillation and its dependence on the gas\ncomponents is consistent with a standing acoustic wave excited by the\nns-discharge.\n"} {"abstract": " Let $G=(V,E)$ be a graph and $P\subseteq V$ a set of points. Two points are\nmutually visible if there is a shortest path between them without further\npoints. $P$ is a mutual-visibility set if its points are pairwise mutually\nvisible. The mutual-visibility number of $G$ is the size of a largest\nmutual-visibility set. In this paper we begin the study of this new\ninvariant and of mutual-visibility sets in undirected graphs. We introduce the\nmutual-visibility problem, which asks for a mutual-visibility set\nlarger than a given size. We show that this problem is NP-complete,\nwhereas checking whether a given set of points is a mutual-visibility set\ncan be done in polynomial time. Then we study mutual-visibility sets and\nmutual-visibility numbers on special classes of graphs, such as block graphs,\ntrees, grids, tori, complete bipartite graphs and cographs. We also provide some\nrelations between the mutual-visibility number of a graph and other invariants.\n"} {"abstract": " The binary metallic phosphide Nb2P5 belongs to a technologically important class\nof materials. Quite surprisingly, a large number of physical properties of\nNb2P5, including elastic properties and their anisotropy, acoustic, electronic\n(DOS, charge density distribution, electron density difference),\nthermo-physical, bonding characteristics, and optical properties have not been\ninvestigated at all. In the present work we have explored all these properties\nin detail for the first time, employing a density functional theory based\nfirst-principles method. Nb2P5 is found to be a mechanically stable,\nelastically anisotropic compound with weak brittle character. The bonding\namong the atoms is dominated by covalent and ionic contributions, with a small\nsignature of metallic character. The compound possesses a high level of\nmachinability. Nb2P5 is a moderately hard compound. The band structure\ncalculations reveal metallic conduction with a large electronic density of\nstates at the Fermi level. Calculated values of different thermal properties\nindicate that Nb2P5 has the potential to be used as a thermal barrier coating\nmaterial. The energy dependent optical parameters show close agreement with the\nunderlying electronic band structure. The optical absorption and reflectivity\nspectra and the static index of refraction of Nb2P5 show that the compound\nholds promise to be used in the optoelectronic device sector. 
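The polynomial-time membership check mentioned in the mutual-visibility abstract above can be realized with two breadth-first searches per pair: u and v are mutually visible exactly when some shortest u-v path avoids the remaining points of P. A small self-contained sketch (the adjacency-list input format is an assumption):

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src, blocked=frozenset()):
    """BFS distances from src that never enter `blocked` vertices."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist and w not in blocked:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_mutual_visibility_set(adj, P):
    """True iff every pair of P admits a shortest path whose internal
    vertices avoid the other points of P (polynomial-time check)."""
    P = set(P)
    for u, v in combinations(P, 2):
        d_free = bfs_dist(adj, u).get(v)               # ordinary distance
        d_avoid = bfs_dist(adj, u, P - {u, v}).get(v)  # avoiding other points
        if d_free is None or d_avoid != d_free:
            return False
    return True

# Path graph 0-1-2-3: {0, 3} is a mutual-visibility set, {0, 1, 3} is not.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_mutual_visibility_set(adj, {0, 3}), is_mutual_visibility_set(adj, {0, 1, 3}))
```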
Unlike the notable\nanisotropy in the elastic and mechanical properties, the optical parameters are\nfound to be almost isotropic.\n"} {"abstract": " Bayesian nonparametric hierarchical priors are highly effective in providing\nflexible models for latent data structures exhibiting sharing of information\nbetween and across groups. Most prominent is the Hierarchical Dirichlet Process\n(HDP), and its subsequent variants, which model latent clustering between and\nacross groups. The HDP may be viewed as a more flexible extension of Latent\nDirichlet Allocation models (LDA), and has been applied to, for example, topic\nmodelling, natural language processing, and datasets arising in health-care. We\nfocus on analogous latent feature allocation models, where the data structures\ncorrespond to multisets or unbounded sparse matrices. The fundamental\ndevelopment in this regard is the Hierarchical Indian Buffet process (HIBP),\nwhich utilizes a hierarchy of Beta processes over J groups, where each group\ngenerates binary random matrices, reflecting within group sharing of features,\naccording to beta-Bernoulli IBP priors. To encompass HIBP versions of\nnon-Bernoulli extensions of the IBP, we introduce hierarchical versions of\ngeneral spike and slab IBPs. We provide explicit novel descriptions of the\nmarginal, posterior and predictive distributions of the HIBP and its\ngeneralizations, which allow for exact sampling and simpler practical\nimplementation. We highlight common structural properties of these processes\nand establish relationships to existing IBP type and related models arising in\nthe literature. Examples of potential applications may involve topic models,\nPoisson factorization models, random count matrix priors and neural network\nmodels.\n"} {"abstract": " We introduce a new model, of logarithmic type, of a wave-like plate equation\nwith a nonlocal logarithmic damping mechanism. We consider the Cauchy problem\nfor this new model in the whole space, and study the asymptotic profile and\noptimal decay rates of solutions as time goes to infinity in the L^{2}-sense. The\noperator L considered in this paper was first introduced, to dissipate the\nsolutions of the wave equation, in a 2020 paper by Charao and Ikehata.\nWe discuss the asymptotic behavior of the solution to our Cauchy problem as time goes to\ninfinity, and in particular, we classify the behavior of\nthe solutions into three types from the viewpoint of regularity of the initial\ndata: diffusion-like, wave-like, and a combination of both.\n"} {"abstract": " This paper details speckle observations of binary stars taken at the Lowell\nDiscovery Telescope, the WIYN Telescope, and the Gemini telescopes between 2016\nJanuary and 2019 September. The observations taken at Gemini and Lowell were\ndone with the Differential Speckle Survey Instrument (DSSI), and those done at\nWIYN were taken with the successor instrument to DSSI at that site, the\nNN-EXPLORE Exoplanet Star and Speckle Imager (NESSI). In total, we present 378\nobservations of 178 systems, and we show that the uncertainty in the measurement\nprecision for the combined data set is ~2 mas in separation, ~1-2 degrees in\nposition angle depending on the separation, and $\sim$0.1 magnitudes in\nmagnitude difference. Together with data already in the literature, these new\nresults permit 25 visual orbits and one spectroscopic-visual orbit to be\ncalculated for the first time. 
In the case of the spectroscopic-visual\nanalysis, which is done on the trinary star HD 173093, we calculate masses with\na precision better than 1% for all three stars in that system. Twenty-one of\nthe visual orbits calculated have a K dwarf as the primary star; we add these\nto the known orbits of K dwarf primary stars and discuss the basic orbital\nproperties of these stars at this stage. Although incomplete, the data that\nexist so far indicate that binaries with K dwarf primaries tend not to have\nlow-eccentricity orbits at separations of one to a few tens of AU, that is, on\nsolar-system scales.\n"} {"abstract": " For many real-world classification problems, e.g., sentiment classification,\nmost existing machine learning methods are biased towards the majority class\nwhen the Imbalance Ratio (IR) is high. To address this problem, we propose a\nset convolution (SetConv) operation and an episodic training strategy to\nextract a single representative for each class, so that classifiers can later\nbe trained on a balanced class distribution. We prove that our proposed\nalgorithm is permutation-invariant with respect to the order of the inputs, and experiments\non multiple large-scale benchmark text datasets show the superiority of our\nproposed framework when compared to other SOTA methods.\n"} {"abstract": " In this paper we prove the convergence of solutions to discrete models for\nbinary waveguide arrays toward those of their formal continuum limit, for which\nwe also show the existence of localized standing waves. This work rigorously\njustifies formal arguments and numerical simulations present in the physics\nliterature.\n"} {"abstract": " What would be the effect of locally poking a static scene? We present an\napproach that learns naturally-looking global articulations caused by a local\nmanipulation at a pixel level. Training requires only videos of moving objects\nbut no information about the underlying manipulation of the physical scene. Our\ngenerative model learns to infer natural object dynamics as a response to user\ninteraction and learns about the interrelations between different object body\nregions. Given a static image of an object and a local poking of a pixel, the\napproach then predicts how the object would deform over time. In contrast to\nexisting work on video prediction, we do not synthesize arbitrary realistic\nvideos but enable local interactive control of the deformation. Our model is\nnot restricted to particular object categories and can transfer dynamics onto\nnovel unseen object instances. Extensive experiments on diverse objects\ndemonstrate the effectiveness of our approach compared to common video\nprediction frameworks. Project page is available at https://bit.ly/3cxfA2L .\n"} {"abstract": " We propose a straightforward implementation of the phenomenon of diffractive\nfocusing with uniform atomic Bose-Einstein condensates. Both analytical and\nnumerical methods not only illustrate the influence of the atom-atom\ninteraction on the focusing factor and the focus time, but also allow us to\nderive the optimal conditions for observing focusing of this type in the case\nof interacting matter waves.\n"} {"abstract": " Consider the first order differential system given by\n \begin{equation*}\n \begin{array}{l}\n \dot{x}= y, \qquad \dot{y}= -x+a(1-y^{2n})y, \end{array}\n \end{equation*} where $a$ is a real parameter and the dots denote derivatives\nwith respect to the time $t$. 
Such a system is known as the generalized Rayleigh\nsystem; it appears, for instance, in the modeling of diabatic chemical\nprocesses through a constant-area duct, where the effect of adding or rejecting\nheat is considered. In this paper we characterize the global dynamics of this\ngeneralized Rayleigh system. In particular, we prove the existence of a unique\nlimit cycle when the parameter $a\\ne 0$.\n"} {"abstract": " The ALICE Collaboration reports the first fully-corrected measurements of the\n$N$-subjettiness observable for track-based jets in heavy-ion collisions. This\nstudy is performed using data recorded in pp and Pb$-$Pb collisions at\ncentre-of-mass energies of $\\sqrt{s} = 7$ TeV and $\\sqrt{s_{\\rm NN}} = 2.76$\nTeV, respectively. In particular, the ratio of 2-subjettiness to 1-subjettiness,\n$\\tau_{2}/\\tau_{1}$, which is sensitive to the rate of two-pronged jet\nsubstructure, is presented. Energy loss of jets traversing the strongly\ninteracting medium in heavy-ion collisions is expected to change the rate of\ntwo-pronged substructure relative to vacuum. The results are presented for jets\nwith a resolution parameter of $R = 0.4$ and charged jet transverse momentum of\n$40 \\leq p_{\\rm T,\\rm jet} \\leq 60$ GeV/$c$, which constitute a larger jet\nresolution parameter and a lower jet transverse momentum interval than previous\nmeasurements in heavy-ion collisions. This has been achieved by utilising a\nsemi-inclusive hadron-jet coincidence technique to suppress the larger jet\ncombinatorial background in this kinematic region. No significant modification\nof the $\\tau_{2}/\\tau_{1}$ observable for track-based jets in Pb$-$Pb\ncollisions is observed relative to vacuum PYTHIA6 and PYTHIA8 references at the\nsame collision energy. The measurements of $\\tau_{2}/\\tau_{1}$, together with\nthe splitting aperture angle $\\Delta R$, are also performed in pp collisions at\n$\\sqrt{s}=7$ TeV for inclusive jets. These results are compared with PYTHIA\ncalculations at $\\sqrt{s}=7$ TeV, in order to validate the model as a vacuum\nreference for the Pb$-$Pb centre-of-mass energy. The PYTHIA references for\n$\\tau_{2}/\\tau_{1}$ are shifted to larger values compared to the measurement in\npp collisions. This hints at a reduction in the rate of two-pronged jets in\nPb$-$Pb collisions compared to pp collisions.\n"} {"abstract": " We present an ALMA 1.3 mm (Band 6) continuum survey of lensed submillimeter\ngalaxies (SMGs) at $z=1.0\\sim3.2$ with an angular resolution of $\\sim0.2$\".\nThese galaxies were uncovered by the Herschel Lensing Survey (HLS), and feature\nexceptionally bright far-infrared continuum emission ($S_\\mathrm{peak} \\gtrsim\n90$ mJy) owing to their lensing magnification. We detect 29 sources in 20\nfields of massive galaxy clusters with ALMA. Using both the Spitzer/IRAC\n(3.6/4.5 $\\mathrm{\\mu m}$) and ALMA data, we have successfully modeled the\nsurface brightness profiles of 26 sources in the rest-frame near- and\nfar-infrared. Similar to previous studies, we find the median dust-to-stellar\ncontinuum size ratio to be small ($R_\\mathrm{e,dust}/R_\\mathrm{e,star} =\n0.38\\pm0.14$) for the observed SMGs, indicating that star formation is\ncentrally concentrated. This is, however, not the case for two spatially\nextended main-sequence SMGs with a low surface brightness at 1.3 mm ($\\lesssim\n0.1$ mJy arcsec$^{-2}$), in which the star formation is distributed over the\nentire galaxy ($R_\\mathrm{e,dust}/R_\\mathrm{e,star}>1$).
As a whole, our SMG\nsample shows a tight anti-correlation between\n($R_\\mathrm{e,dust}/R_\\mathrm{e,star}$) and far-infrared surface brightness\n($\\Sigma_\\mathrm{IR}$) over a factor of $\\simeq$ 1000 in $\\Sigma_\\mathrm{IR}$.\nThis indicates that SMGs with less vigorous star formation (i.e., lower\n$\\Sigma_\\mathrm{IR}$) lack a central starburst and are likely to retain a broader\nspatial distribution of star formation over the whole galaxy (i.e., larger\n$R_\\mathrm{e,dust}/R_\\mathrm{e,star}$). The same trend can be reproduced with\ncosmological simulations as a result of a central starburst and potentially\nsubsequent \"inside-out\" quenching, which likely accounts for the emergence of\ncompact quiescent galaxies at $z\\sim2$.\n"} {"abstract": " Single-layer Pb on top of (111) surfaces of group IV semiconductors hosts\ncharge density waves and superconductivity depending on the coverage and on the\nsubstrate. These systems are normally considered to be experimental\nrealizations of single-band Hubbard models, and their properties are mostly\ninvestigated using lattice models with frozen structural degrees of freedom,\nalthough the reliability of this approximation is unclear. Here, we consider\nthe case of Pb/Ge(111) at 1/3 coverage, for which surface X-ray diffraction and\nARPES data are available. By performing first principles calculations, we\ndemonstrate that the non-local exchange between Pb and the substrate drives the\nsystem into a $3\\times 3$ charge density wave. The electronic structure of this\ncharge ordered phase is mainly determined by two effects: the magnitude of the\nPb distortion and the large spin-orbit coupling. Finally, we show that the\neffect applies also to the $3\\times 3$ phase of Pb/Si(111), where the\nPb-substrate exchange interaction increases the bandwidth by more than a factor\nof 1.5 with respect to DFT+U, in better agreement with STS data. The delicate\ninterplay between substrate, structural and electronic degrees of freedom\ninvalidates the widespread interpretation available in the literature that\nconsiders these compounds as physical realizations of single-band Hubbard models.\n"} {"abstract": " Unmanned aerial vehicle (UAV)-enabled wireless power transfer (WPT) has\nrecently emerged as a promising technique to provide sustainable energy supply\nfor widely distributed low-power ground devices (GDs) in large-scale wireless\nnetworks. Compared with the energy transmitters (ETs) in conventional WPT\nsystems, which are deployed at fixed locations, UAV-mounted aerial ETs can fly\nflexibly in the three-dimensional (3D) space to charge nearby GDs more\nefficiently. This paper provides a tutorial overview on UAV-enabled WPT and its\nappealing applications, in particular focusing on how to exploit UAVs'\ncontrollable mobility via their 3D trajectory design to maximize the amount of\nenergy transferred to all GDs in a wireless network with fairness. First, we\nconsider the single-UAV-enabled WPT scenario with one UAV wirelessly charging\nmultiple GDs at known locations. To solve the energy maximization problem in\nthis case, we present a general trajectory design framework consisting of three\ninnovative approaches to optimize the UAV trajectory, which are multi-location\nhovering, successive-hover-and-fly, and time-quantization-based optimization,\nrespectively. Next, we consider the multi-UAV-enabled WPT scenario where\nmultiple UAVs cooperatively charge many GDs in a large area.
Building upon the\nsingle-UAV trajectory design, we propose two efficient schemes to jointly\noptimize multiple UAVs' trajectories, based on the principles of UAV swarming\nand GD clustering, respectively. Furthermore, we consider two important\nextensions of UAV-enabled WPT, namely UAV-enabled wireless powered\ncommunication networks (WPCN) and UAV-enabled wireless powered mobile edge\ncomputing (MEC).\n"} {"abstract": " We explore the intrinsic dynamics of spherical shells immersed in a fluid in\nthe vicinity of their buckled state, through experiments and 3D axisymmetric\nsimulations. The results are supported by a theoretical model that accurately\ndescribes the buckled shell as a two-variable-only oscillator. We quantify the\neffective \"softening\" of shells above the buckling threshold, as observed in\nrecent experiments on interactions between encapsulated microbubbles and\nacoustic waves. The main dissipation mechanism in the neighboring fluid is also\nevidenced.\n"} {"abstract": " In this paper, we analyze the effect of transport infrastructure investments\nin railways. As a testing ground, we use data from a new historical database\nthat includes annual panel data on approximately 2,400 Swedish rural\ngeographical areas during the period 1860-1917. We use a staggered event study\ndesign that is robust to treatment effect heterogeneity. Importantly, we find\nextremely large reduced-form effects of having access to railways. For real\nnonagricultural income, the cumulative treatment effect is approximately 120%\nafter 30 years. Equally important, we also show that our reduced-form effect is\nlikely to reflect growth rather than a reorganization of existing economic\nactivity, since we find no spillover effects between treated and untreated\nregions. Specifically, our results are consistent with the big push hypothesis,\nwhich argues that simultaneous/coordinated investment, such as large\ninfrastructure investment in railways, can generate economic growth if there\nare strong aggregate demand externalities (e.g., Murphy et al. 1989). We used\nplant-level data to further corroborate this mechanism. Indeed, we find that\ninvestments in local railways dramatically, and independently of initial\nconditions, increase local industrial production and employment on the order of\n100-300% across almost all industrial sectors.\n"} {"abstract": " The presence of relativistic electrons within the diffuse gas phase of galaxy\nclusters is now well established, but their detailed origin remains unclear.\nCosmic ray protons are also expected to accumulate during the formation of\nclusters and would lead to gamma-ray emission through hadronic interactions\nwithin the thermal gas. Recently, the detection of gamma-ray emission has been\nreported toward the Coma cluster with Fermi-LAT. Assuming that this gamma-ray\nemission arises from hadronic interactions in the ICM, we aim to explore the\nimplications of this signal for the cosmic ray content of the Coma cluster. We\nuse the MINOT software to build a physical model of the cluster and apply it to\nthe Fermi-LAT data. We also consider contamination from compact sources and the\nimpact of various systematic effects. We confirm that a significant gamma-ray\nsignal is observed within the characteristic radius $\\theta_{500}$ of the Coma\ncluster, with a test statistic TS~27 for our baseline model. The presence of a\npossible point source may account for most of the observed signal.
However,\nthis source could also correspond to the peak of the diffuse emission of the\ncluster itself, and extended models match the data better. We constrain the\ncosmic ray to thermal energy ratio within $R_{500}$ to $X_{\\rm\nCRp}=1.79^{+1.11}_{-0.30}$\\% and the slope of the energy spectrum of cosmic\nrays to $\\alpha=2.80^{+0.67}_{-0.13}$. Finally, we compute the synchrotron\nemission associated with the secondary electrons produced in hadronic\ninteractions assuming steady state. This emission is about four times lower\nthan the overall observed radio signal, so that primary cosmic ray electrons or\nreacceleration of secondary electrons is necessary to explain the total\nemission. Assuming a hadronic origin of the signal, our results provide the\nfirst quantitative measurement of the cosmic ray proton content in a\ncluster. [Abridged]\n"} {"abstract": " We study idempotent, model, and Toeplitz operators that attain the norm.\nNotably, we prove that if $\\mathcal{Q}$ is a backward shift invariant subspace\nof the Hardy space $H^2(\\mathbb{D})$, then the model operator $S_{\\mathcal{Q}}$\nattains its norm. Here $S_{\\mathcal{Q}} = P_{\\mathcal{Q}}M_z|_{\\mathcal{Q}}$,\nthe compression of the shift $M_z$ on the Hardy space $H^2(\\mathbb{D})$ to\n$\\mathcal{Q}$.\n"} {"abstract": " Power system simulations that extend over a time period of minutes, hours, or\neven longer are called extended-term simulations. As power systems evolve into\ncomplex systems with increasing interdependencies and richer dynamic behaviors\nacross a wide range of timescales, extended-term simulation is needed for many\npower system analysis tasks (e.g., resilience analysis, renewable energy\nintegration, cascading failures), and there is an urgent need for efficient and\nrobust extended-term simulation approaches. The conventional approaches are\ninsufficient for dealing with the extended-term simulation of multi-timescale\nprocesses. This paper proposes an extended-term simulation approach based on\nthe holomorphic embedding (HE) methodology. Its accuracy and computational\nefficiency are backed by HE's high accuracy in event-driven simulation, larger\nand adaptive time steps, and flexible switching between full-dynamic and\nquasi-steady-state (QSS) models. We used this proposed extended-term simulation\napproach to evaluate bulk power system restoration plans, and it demonstrates\nsatisfactory accuracy and efficiency in this complex simulation task.\n"} {"abstract": " The efficiency of the adiabatic demagnetization of the nuclear spin system\n(NSS) of a solid is limited if quadrupole effects are present. Nevertheless, despite\na considerable quadrupole interaction, recent experiments validated the\nthermodynamic description of the NSS in GaAs. This suggests that the nuclear spin\ntemperature can be used as a universal indicator of the NSS state in the presence\nof external perturbations. We implement this idea by analyzing the modification\nof the NSS temperature in response to an oscillating magnetic field at various\nfrequencies, an approach termed warm-up spectroscopy.
It is tested in an\nn-GaAs sample where both mechanical strain and built-in electric field may\ncontribute to the quadrupole splitting, yielding the parameters of the electric\nfield gradient tensors for 75As and both Ga isotopes, 69Ga and 71Ga.\n"} {"abstract": " Unmanned aerial vehicles (UAVs) play an increasingly important role in\nmilitary, public, and civilian applications, where providing connectivity to\nUAVs is crucial for their real-time control, video streaming, and data\ncollection. Considering that cellular networks offer wide-area, high-speed, and\nsecure wireless connectivity, cellular-connected UAVs have been considered as\nan appealing solution to provide UAV connectivity with enhanced reliability,\ncoverage, throughput, and security. Due to the nature of UAV mobility, the\nthroughput, reliability and End-to-End (E2E) delay of UAV communication under\nvarious flight heights, video resolutions, and transmission frequencies remain\nunknown. To evaluate these parameters, we develop a cellular-connected UAV\ntestbed based on the Long Term Evolution (LTE) network with its uplink video\ntransmission and downlink control\\&command (CC) transmission. We also design\nalgorithms for sending control signals and controlling the UAV. The indoor\nexperimental results provide fundamental insights for cellular-connected UAV\nsystem design from the perspectives of transmission frequency, adaptability,\nand link outage.\n"} {"abstract": " Tsetlin machines (TMs) are a pattern recognition approach that uses finite\nstate machines for learning and propositional logic to represent patterns. In addition to being\nnatively interpretable, they have provided competitive accuracy for various\ntasks. In this paper, we increase the computing power of TMs by proposing a\nfirst-order logic-based framework with Herbrand semantics. The resulting TM is\nrelational and can take advantage of logical structures appearing in natural\nlanguage to learn rules that represent how actions and consequences are\nrelated in the real world. The outcome is a logic program of Horn clauses,\nbringing in a structured view of unstructured data. In closed-domain\nquestion-answering, the first-order representation produces 10x more compact\nKBs, along with an increase in answering accuracy from 94.83% to 99.48%. The\napproach is further robust towards erroneous, missing, and superfluous\ninformation, distilling the aspects of a text that are important for real-world\nunderstanding.\n"} {"abstract": " Markerless motion capture and understanding of professional non-daily human\nmovements is an important yet unsolved task, which suffers from complex motion\npatterns and severe self-occlusion, especially in the monocular setting. In\nthis paper, we propose SportsCap -- the first approach for simultaneously\ncapturing 3D human motions and understanding fine-grained actions from\nmonocular challenging sports video input. Our approach utilizes the semantic\nand temporally structured sub-motion prior in the embedding space for motion\ncapture and understanding in a data-driven multi-task manner. To enable robust\ncapture under complex motion patterns, we propose an effective motion embedding\nmodule to recover both the implicit motion embedding and explicit 3D motion\ndetails via a corresponding mapping function as well as a sub-motion\nclassifier.
Based on such hybrid motion information, we introduce a\nmulti-stream spatial-temporal Graph Convolutional Network (ST-GCN) to predict\nthe fine-grained semantic action attributes, and adopt a semantic attribute\nmapping block to assemble various correlated action attributes into a\nhigh-level action label for the overall detailed understanding of the whole\nsequence, so as to enable various applications like action assessment or motion\nscoring. Comprehensive experiments on both public and our proposed datasets\nshow that with a challenging monocular sports video input, our novel approach\nnot only significantly improves the accuracy of 3D human motion capture, but\nalso recovers accurate fine-grained semantic action attributes.\n"} {"abstract": " Methods for stochastic trace estimation often require the repeated evaluation\nof expressions of the form $z^T p_n(A)z$, where $A$ is a symmetric matrix and\n$p_n$ is a degree $n$ polynomial written in the standard or Chebyshev basis. We\nshow how to evaluate these expressions using only $\\lceil n/2\\rceil$\nmatrix-vector products, thus substantially reducing the cost of existing trace\nestimation algorithms that use Chebyshev interpolation or Taylor series.\n"} {"abstract": " The inverse Higgs phenomenon, which plays an important r\^ole in physical\nsystems with Goldstone bosons (such as the phonons in a crystal), involves\nnonholonomic mechanical constraints. By formulating field theories with\nsymmetries and constraints in a general way using the language of differential\ngeometry, we show that many examples of constraints in inverse Higgs phenomena\nfall into a special class, which we call coholonomic constraints, that are dual\n(in the sense of category theory) to holonomic constraints. Just as for\nholonomic constraints, systems with coholonomic constraints are equivalent to\nunconstrained systems (whose degrees of freedom are known as essential\nGoldstone bosons), making it easier to study their consistency and dynamics.\nThe remaining examples of inverse Higgs phenomena in the literature require the\ndual of a slight generalisation of a holonomic constraint, which we call\n(co)meronomic. Our formalism simplifies and clarifies the many ad hoc\nassumptions and constructions present in the literature. In particular, it\nidentifies which are necessary and which are merely convenient. It also opens\nthe way to studying much more general dynamical examples, including systems\nwhich have no well-defined notion of a target space.\n"} {"abstract": " This article discusses a dark energy cosmological model in the standard\ntheory of gravity - general relativity - with a broad scalar field as a source.\nExact solutions of Einstein's field equations are derived by considering a\nparticular form of the deceleration parameter $q$, which shows a smooth transition\nfrom a decelerated to an accelerated phase in the evolution of the universe. The\nexternal datasets such as the Hubble ($H(z)$) datasets, Supernovae (SN) datasets,\nand Baryonic Acoustic Oscillation (BAO) datasets are used for constraining the\nmodel parameters appearing in the functional form of $q$. The transition\nredshift is obtained at $z_{t}=0.67_{-0.36}^{+0.26}$ for the combined data\nset ($H(z)+SN+BAO$), where the model shows signature-flipping and is consistent\nwith recent observations.
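Returning to the stochastic trace-estimation abstract above: the $\lceil n/2\rceil$ matrix-vector saving can be realized with the standard Chebyshev product identity $2T_iT_j = T_{i+j} + T_{|i-j|}$. The sketch below is our own illustration of this idea (not necessarily the paper's exact algorithm) for a polynomial given in the Chebyshev basis:

```python
import numpy as np

def chebyshev_quadratic_form(A, z, c):
    """Return z^T p_n(A) z for p_n(x) = sum_k c[k] * T_k(x), A symmetric,
    using only ceil(n/2) products with A. All moments t_k = z^T T_k(A) z,
    k <= 2m, are inner products of v_j = T_j(A) z for j <= m = ceil(n/2)."""
    n = len(c) - 1
    m = (n + 1) // 2
    v = [np.asarray(z, dtype=float)]
    if m >= 1:
        v.append(A @ v[0])                    # v[1] = T_1(A) z
    for j in range(1, m):
        v.append(2 * (A @ v[j]) - v[j - 1])   # three-term recurrence
    t = np.zeros(n + 1)
    t[0] = v[0] @ v[0]
    if n >= 1:
        t[1] = v[0] @ v[1]
    for k in range(2, n + 1):
        i, j = (k + 1) // 2, k // 2           # i + j = k, i - j in {0, 1}
        t[k] = 2 * (v[i] @ v[j]) - t[i - j]
    return float(np.dot(c, t))
```

A Hutchinson-type estimate of $\mathrm{tr}\,p_n(A)$ then averages this quadratic form over random sign vectors $z$.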
Moreover, the present value of the deceleration\nparameter comes out to be $q_{0}=-0.50_{-0.11}^{+0.12}$ and the jerk parameter\n$j_{0}=-0.98_{-0.02}^{+0.06}$ (close to 1 in magnitude) for the combined datasets, which\nis compatible with the Planck 2018 results. The analysis also constrains the matter\ndensity parameter, i.e., $\\Omega_{m_{0}}\\leq 0.269$, for the smooth evolution of the scalar\nfield EoS parameter. It is seen that the effective energy density of the matter\nfield is higher than the energy density in the presence of the scalar field. The\nevolution of the physical and geometrical parameters is discussed in some detail\nusing the numerically constrained values of the model parameters. Moreover, we have\nperformed the state-finder analysis to investigate the nature of dark energy.\n"} {"abstract": " Electrical energy consumption data accessibility for low voltage end users is\none of the pillars of smart grids. In some countries, despite the presence of\nsmart meters, fragmentary data availability and/or the lack of\nstandardization hinder the creation of post-metering value-added services and\nconfine such innovative solutions to the prototypal and experimental level. We\ntake inspiration from the technology adopted in Italy, where the national\nregulatory authority actively supported the definition of a solution agreed\nupon by all the involved stakeholders. In this context, smart meters are\nenabled to convey data to low voltage end users through a power line\ncommunication channel (CHAIN 2) in near real-time. The aim of this paper is\ntwofold. On the one hand, it describes the proof of concept that the channel\nunderwent and its subsequent validation (with performance nearing a 99% success\nrate). On the other hand, it defines a classification framework (I2MA) for\npost-metering value-added services, in order to categorize each use case based\non both the level of service and the expected benefits, and understand its maturity\nlevel. As an example, we apply the methodology to the 16 use cases defined in\nItaly. The lessons learned from the regulatory, technological, and functional\napproach of the Italian experience bring us to the provision of recommendations\nfor researchers and industry experts. In particular, we argue that a\nwell-functioning market for post-metering value-added services can flourish when:\ni) distribution system operators certify the measurements coming from smart\nmeters; ii) national regulatory authorities support the technological\ninnovation needed for setting up this market; and iii) service providers create\ncustomer-oriented solutions based on smart meters' data.\n"} {"abstract": " Robust edge transport can occur when particles in crystalline lattices\ninteract with an external magnetic field. This system is well described by\nBloch's theorem, with the spectrum being composed of bands of bulk states and\nin-gap edge states. When the confining lattice geometry is altered to be\nquasicrystalline, Bloch's theorem breaks down. However, we still expect to\nobserve the basic characteristics of bulk states and current-carrying edge\nstates. Here, we show that for quasicrystals in magnetic fields, there is also\na third option: the bulk localised transport states. These states share the\nin-gap nature of the well-known edge states and can support transport along\nthem, but they are fully contained within the bulk of the system, with no\nsupport along the edge.
We consider both finite and infinite systems, using\nrigorous error-controlled computational techniques that are not prone to\nfinite-size effects. The bulk localised transport states are preserved for\ninfinite systems, in stark contrast to the normal edge states. This allows for\ntransport to be observed in infinite systems, without any perturbations,\ndefects, or boundaries being introduced. We confirm the in-gap topological\nnature of the bulk localised transport states for finite and infinite systems\nby computing common topological measures, namely the Bott index and the local Chern\nmarker. The bulk localised transport states form due to a magnetic aperiodicity\narising from the interplay of length scales between the magnetic field and the\nquasiperiodic lattice. Bulk localised transport could have interesting\napplications similar to those of the edge states on the boundary, but ones that\ncould now take advantage of the larger bulk of the lattice. The infinite-size\ntechniques introduced here, especially the calculation of topological measures,\ncould also be widely applied to other crystalline, quasicrystalline, and\ndisordered models.\n"} {"abstract": " It is well established that glassy materials can undergo aging, i.e., their\nproperties gradually change over time. There is rapidly growing evidence that\ndense active and living systems also exhibit many features of glassy behavior,\nbut it is still largely unknown how physical aging is manifested in such active\nglassy materials. Our goal is to explore whether active and passive thermal\nglasses age in fundamentally different ways. To address this, we numerically\nstudy the aging dynamics following a quench from high to low temperature for\ntwo-dimensional passive and active Brownian model glass-formers. We find that\naging in active thermal glasses is governed by a time-dependent competition\nbetween thermal and active effects, with an effective temperature that\nexplicitly evolves with the age of the material. Moreover, unlike passive aging\nphenomenology, we find that the degree of dynamic heterogeneity in active aging\nsystems is relatively small and remarkably constant with age. We conclude that\nthe often-invoked mapping between an active system and a passive one with a\nhigher effective temperature rigorously breaks down upon aging, and that the\naging dynamics of thermal active glasses differs in several distinct ways from\nboth the passive and athermal active case.\n"} {"abstract": " The radio nebula W50 is a unique object interacting with the jets of the\nmicroquasar SS433. The SS433/W50 system is a good target for investigating the\nenergy of cosmic-ray particles accelerated by galactic jets. We report\nobservations of the radio nebula W50 conducted with the NSF's Karl G. Jansky Very\nLarge Array (VLA) in the L band (1.0 -- 2.0 GHz). We investigate the secular\nchange of W50 on the basis of the observations in 1984, 1996, and 2017, and\nfind that most of its structures were stable for 33 years. We revise the upper\nlimit velocity of the eastern terminal filament by half, to 0.023$c$, assuming a\ndistance of 5.5 kpc. We also analyze the observational data of the Arecibo\nObservatory 305-m telescope and identify the HI cavity around W50 in the\nvelocity range 33.77 km s$^{-1}$ -- 55.85 km s$^{-1}$. From this result, we\nestimate the maximum energy of the cosmic-ray protons accelerated by the jet\nterminal region to be above 10$^{15.5}$ eV.
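Returning to the quasicrystal abstract above: the Bott index it computes for finite samples is commonly evaluated from projected position unitaries (the Loring-Hastings construction). The sketch below is our own illustration, not the error-controlled infinite-system machinery of the paper:

```python
import numpy as np
from scipy.linalg import logm

def bott_index(H, x, y, L, e_fermi=0.0):
    """Bott index of the occupied subspace of a finite lattice Hamiltonian H,
    with site coordinates x, y folded onto an L-by-L torus (illustrative)."""
    evals, evecs = np.linalg.eigh(H)
    occ = evecs[:, evals < e_fermi]
    P = occ @ occ.conj().T                            # occupied-state projector
    I = np.eye(H.shape[0])
    U = P @ np.diag(np.exp(2j * np.pi * x / L)) @ P + (I - P)
    V = P @ np.diag(np.exp(2j * np.pi * y / L)) @ P + (I - P)
    # Bott index = (1/2pi) Im tr log(V U V^dag U^dag), quantized when gapped
    return np.imag(np.trace(logm(V @ U @ V.conj().T @ U.conj().T))) / (2 * np.pi)
```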
We also use the luminosity of the\ngamma-rays in the range 0.5 -- 10 GeV to estimate the total energy of\naccelerated protons to be below 5.2 $\\times$ 10$^{48}$ erg.\n"} {"abstract": " We present the design for a novel type of dual-band photodetector in the\nthermal infrared spectral range, the Optically Controlled Dual-band quantum dot\nInfrared Photodetector (OCDIP). This concept is based on a quantum dot ensemble\nwith a unimodal size distribution, whose absorption spectrum can be controlled\nby optically-injected carriers. An external pumping laser varies the electron\ndensity in the QDs, permitting control of the available electronic transitions\nand thus of the absorption spectrum. We grew a test sample which we studied by AFM\nand photoluminescence. Based on the experimental data, we simulated the\ninfrared absorption spectrum of the sample, which showed two absorption bands\nat 5.85 $\\mu$m and 8.98 $\\mu$m depending on the excitation power.\n"} {"abstract": " We present a novel ultrastable superconducting radio-frequency (RF) ion trap\nrealized as a combination of an RF cavity and a linear Paul trap. Its RF\nquadrupole mode at 34.52 MHz reaches a quality factor of $Q\\approx2.3\\times\n10^5$ at a temperature of 4.1 K and is used to radially confine ions in an\nultralow-noise pseudopotential. This concept is expected to strongly suppress\nmotional heating rates and related frequency shifts which limit the ultimate\naccuracy achieved in advanced ion traps for frequency metrology. Running with\nits low-vibration cryogenic cooling system, electron beam ion trap and\ndeceleration beamline supplying highly charged ions (HCI), the superconducting\ntrap offers ideal conditions for optical frequency metrology with ionic\nspecies. We report its proof-of-principle operation as a quadrupole mass filter\nwith HCI, and trapping of Doppler-cooled ${}^9\\text{Be}^+$ Coulomb crystals.\n"} {"abstract": " With Regulation UNECE R157 on Automated Lane-Keeping Systems, the first\nframework for the introduction of passenger cars with Level 3 systems became\navailable in 2020. In accordance with recent research projects involving\nacademia and the automotive industry, the Regulation utilizes scenario-based\ntesting for the safety assessment. The complexity of the safety validation of\nautomated driving systems necessitates system-level simulations. The\nRegulation, however, is missing the required parameterization necessary for\ntest case generation. To overcome this problem, we incorporate the exposure and\nconsider the heterogeneous behavior of the traffic participants by extracting\nconcrete scenarios according to the Regulation's scenario definition from the\nestablished naturalistic highway dataset highD. We present a methodology to\nfind the scenarios in real-world data, extract the parameters for modeling the\nscenarios and transfer them to simulation. In this process, more than 340\nscenarios were extracted. OpenSCENARIO files were generated to enable an\nexemplary transfer of the scenarios to CARLA and esmini. We compare the\ntrajectories to examine the similarity of the scenarios in the simulation to\nthe recorded scenarios. In order to foster research, we publish the resulting\ndataset called ConScenD together with instructions for usage with both\nsimulation tools. The dataset is available online at\nhttps://www.levelXdata.com/scenarios.\n"} {"abstract": " In this paper, we characterize the asymptotic and large-scale behavior of the\neigenvalues of wavelet random matrices in high dimensions.
We assume that\npossibly non-Gaussian, finite-variance $p$-variate measurements are made of a\nlow-dimensional $r$-variate ($r \\ll p$) fractional stochastic process with\nnon-canonical scaling coordinates and in the presence of additive\nhigh-dimensional noise. The measurements are correlated both time-wise and\nbetween rows. We show that the $r$ largest eigenvalues of the wavelet random\nmatrices, when appropriately rescaled, converge to scale invariant functions in\nthe high-dimensional limit. By contrast, the remaining $p-r$ eigenvalues remain\nbounded. Under additional assumptions, we show that, up to a log\ntransformation, the $r$ largest eigenvalues of wavelet random matrices exhibit\nasymptotically Gaussian distributions. The results have direct consequences for\nstatistical inference.\n"} {"abstract": " Annually, a large number of injuries and deaths around the world are related\nto motor vehicle accidents. This number has recently been reduced to some\nextent through the use of driver-assistance systems. Developing driver-assistance\nsystems (i.e., automated driving systems) can play a crucial role in reducing\nthis number further. Estimating and predicting surrounding vehicles' movement is\nessential for an automated vehicle and advanced safety systems. Moreover,\npredicting the trajectory is influenced by numerous factors, such as drivers'\nbehavior during accidents, the history of the vehicle's movement and of the\nsurrounding vehicles, and their positions in the traffic scene. The vehicle must\nmove along a safe path in traffic and react to other drivers' unpredictable\nbehaviors in the shortest time. Herein, to predict the path of automated vehicles,\na model with low computational complexity is proposed, which is trained on aerial\nimages of the road. Our method is based on an encoder-decoder\nmodel that utilizes a social tensor to model the effect of the surrounding\nvehicles' movement on the target vehicle. The proposed model can predict the\nvehicle's future path on any freeway only by viewing the images related to the\nhistory of the target vehicle's movement and its neighbors. Deep learning was\nused as a tool for extracting the features of these images. Using the HighD\ndatabase, a dataset of aerial road images was created, and the\nmodel's performance was evaluated on this new database. We achieved an RMSE of\n1.91 for the next 5 seconds and found that the proposed method had less error\nthan the best path-prediction methods in previous studies.\n"} {"abstract": " The paper provides a version of the rational Hodge conjecture for dg\ncategories. The noncommutative Hodge conjecture is equivalent to the version\nproposed in \\cite{perry2020integral} for admissible subcategories. We obtain\nexamples of evidence for the Hodge conjecture via techniques of noncommutative\ngeometry. Finally, we show that the noncommutative Hodge conjecture for smooth\nproper connective dg algebras is true.\n"} {"abstract": " Let $(X, D)$ be a log smooth log canonical pair such that $K_X+D$ is ample.\nExtending a theorem of Guenancia and building on his techniques, we show that\nnegatively curved K\\\"{a}hler-Einstein crossing edge metrics converge to\nK\\\"{a}hler-Einstein mixed cusp and edge metrics smoothly away from the divisor\nwhen some of the cone angles converge to $0$.
We further show that near the\ndivisor such normalized K\\\"{a}hler-Einstein crossing edge metrics converge to a\nmixed cylinder and edge metric in the pointed Gromov-Hausdorff sense when some\nof the cone angles converge to $0$ at (possibly) different speeds.\n"} {"abstract": " Novice programmers face numerous barriers while attempting to learn how to\ncode that may deter them from pursuing a computer science degree or a career in\nsoftware development. In this work, we propose a tool concept to address the\nparticularly challenging barrier of novice programmers holding misconceptions\nabout how their code behaves. Specifically, the concept involves an inquisitive\ncode editor that: (1) identifies misconceptions by periodically prompting the\nnovice programmer with questions about their program's behavior, (2) corrects\nthe misconceptions by generating explanations based on the program's actual\nbehavior, and (3) prevents further misconceptions by inserting test code and\nutilizing other educational resources. We have implemented portions of the\nconcept as plugins for the Atom code editor and conducted informal surveys with\nstudents and instructors. Next steps include deploying the tool prototype to\nstudents enrolled in introductory programming courses.\n"} {"abstract": " We show that any Brauer tree algebra has precisely $\\binom{2n}{n}$\n$2$-tilting complexes, where $n$ is the number of edges of the associated\nBrauer tree. More explicitly, for an external edge $e$ and an integer $j\\neq0$,\nwe show that the number of $2$-tilting complexes $T$ with $g_e(T)=j$ is\n$\\binom{2n-|j|-1}{n-1}$, where $g_e(T)$ denotes the $e$-th entry of the $g$-vector of\n$T$. To prove this, we use a geometric model of Brauer graph algebras on\nclosed oriented marked surfaces and a classification of $2$-tilting complexes\ndue to Adachi-Aihara-Chan.\n"} {"abstract": " We measure the evolution of the rest-frame UV luminosity function (LF) and\nthe stellar mass function (SMF) of Lyman-alpha (Lya) emitters (LAEs) from z~2\nto z~6 by exploring ~4000 LAEs from the SC4K sample. We find a correlation\nbetween Lya luminosity (LLya) and rest-frame UV (M_UV), with best-fit\nM_UV=-1.6+-0.2 log10(LLya/erg/s)+47+-12, and a shallower relation between LLya\nand stellar mass (Mstar), with best-fit log10(Mstar/Msun)=0.9+-0.1\nlog10(LLya/erg/s)-28+-4.0. An increasing LLya cut predominantly lowers the\nnumber density of faint M_UV and low Mstar LAEs. We estimate a proxy for the\nfull UV LFs and SMFs of LAEs with simple assumptions on the faint-end slope.\nFor the UV LF, we find a brightening of the characteristic UV luminosity\n(M_UV*) with increasing redshift and a decrease of the characteristic number\ndensity (Phi*). For the SMF, we measure a characteristic stellar mass\n(Mstar*/Msun) increase with increasing redshift, and a Phi* decline. However,\nif we apply a uniform luminosity cut of log10(LLya/erg/s) >= 43.0, we find\nmuch milder to no evolution in the UV LF and SMF of LAEs. The UV luminosity\ndensity (rho_UV) of the full sample of LAEs shows moderate evolution and the\nstellar mass density (rho_M) decreases, with both being always lower than the\ntotal rho_UV and rho_M of more typical galaxies but slowly approaching them\nwith increasing redshift.
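As a sanity check on the Brauer tree counts above (our own verification, under our assumption that $g_e(T)$ takes the nonzero values $j$ with $|j| \le n$): summing the per-$j$ counts $\binom{2n-|j|-1}{n-1}$ does recover the total of $\binom{2n}{n}$ $2$-tilting complexes.

```python
from math import comb

# Check that sum over j != 0, |j| <= n of binom(2n-|j|-1, n-1) equals
# binom(2n, n), the stated total number of 2-tilting complexes.
for n in range(1, 9):
    total = sum(comb(2 * n - abs(j) - 1, n - 1)
                for j in range(-n, n + 1) if j != 0)
    assert total == comb(2 * n, n), (n, total)
print("per-j counts sum to binom(2n, n) for n = 1..8")
```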
Overall, our results indicate that both rho_UV and\nrho_M of LAEs slowly approach the measurements of continuum-selected galaxies\nat z>6, which suggests a key role of LAEs in the epoch of reionisation.\n"} {"abstract": " Accurate numerical solutions for the Schr\\\"odinger equation are of utmost\nimportance in quantum chemistry. However, the computational cost of current\nhigh-accuracy methods scales poorly with the number of interacting particles.\nCombining Monte Carlo methods with unsupervised training of neural networks has\nrecently been proposed as a promising approach to overcome the curse of\ndimensionality in this setting and to obtain accurate wavefunctions for\nindividual molecules at a moderately scaling computational cost. These methods\ncurrently do not exploit the regularity exhibited by wavefunctions with respect\nto their molecular geometries. Inspired by recent successful applications of\ndeep transfer learning in machine translation and computer vision tasks, we\nattempt to leverage this regularity by introducing a weight-sharing constraint\nwhen optimizing neural network-based models for different molecular geometries.\nThat is, we restrict the optimization process such that up to 95 percent of\nweights in a neural network model are in fact equal across varying molecular\ngeometries. We find that this technique can accelerate optimization when\nconsidering sets of nuclear geometries of the same molecule by an order of\nmagnitude and that it opens a promising route towards pre-trained neural\nnetwork wavefunctions that yield high accuracy even across different molecules.\n"} {"abstract": " As a non-linear extension of the classic Linear Discriminant Analysis (LDA),\nDeep Linear Discriminant Analysis (DLDA) replaces the original Categorical Cross\nEntropy (CCE) loss function with an eigenvalue-based loss function to make a deep\nneural network (DNN) able to learn linearly separable hidden representations. In\nthis paper, we first point out that DLDA focuses on training the cooperative\ndiscriminative ability of all the dimensions in the latent subspace, while putting\nless emphasis on training the separating capacity of each single dimension. To\nimprove DLDA, a regularization method on the within-class scatter matrix is\nproposed to strengthen the discriminative ability of each dimension, and also to\nkeep the dimensions complementary to each other. Experiment results on STL-10,\nCIFAR-10 and the Pediatric Pneumonic Chest X-ray Dataset showed that our proposed\nregularization method, Regularized Deep Linear Discriminant Analysis (RDLDA),\noutperformed DLDA and a conventional neural network with CCE as the objective. To\nfurther improve the discriminative ability of RDLDA in the local space, an\nalgorithm named Subclass RDLDA is also proposed.\n"} {"abstract": " A fog-radio access network (F-RAN) architecture is studied for an\nInternet-of-Things (IoT) system in which wireless sensors monitor a number of\nmulti-valued events and transmit in the uplink using grant-free random access\nto multiple edge nodes (ENs). Each EN is connected to a central processor (CP)\nvia a finite-capacity fronthaul link. In contrast to conventional\ninformation-agnostic protocols based on separate source-channel (SSC) coding,\nwhere each device uses a separate codebook, this paper considers an\ninformation-centric approach based on joint source-channel (JSC) coding via a\nnon-orthogonal generalization of type-based multiple access (TBMA).
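The essence of the TBMA idea just introduced is that the receiver only needs the type (the histogram of the sensors' local estimates), never per-sensor messages. A toy numpy sketch of this (our own illustration; simple least-squares type recovery stands in for the paper's Bayesian message passing, and the codebook and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
V, K, N = 4, 30, 64                            # event values, sensors, codeword length
C = rng.standard_normal((V, N)) / np.sqrt(N)   # one shared codeword per event value
estimates = rng.integers(0, V, size=K)         # sensors' local estimates
y = C[estimates].sum(axis=0) + 0.05 * rng.standard_normal(N)  # superposed channel output
counts, *_ = np.linalg.lstsq(C.T, y, rcond=None)  # recover the type directly
print(np.round(counts).astype(int), np.bincount(estimates, minlength=V))
```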
By\nleveraging the semantics of the observed signals, all sensors measuring the\nsame event share the same codebook (with non-orthogonal codewords), and all\nsuch sensors making the same local estimate of the event transmit the same\ncodeword. The F-RAN architecture directly detects the event values without\nfirst performing individual decoding for each device. Cloud and edge detection\nschemes based on Bayesian message passing are designed and trade-offs between\ncloud and edge processing are assessed.\n"} {"abstract": " We report parametric resonances (PRs) in the mean-field dynamics of a\none-dimensional dipolar Bose-Einstein condensate (DBEC) in widely varying\ntrapping geometries. The chief goal is to characterize the energy levels of\nthis system by analytical methods. The significance of this study arises from the\ncommonly known fact that, in the presence of interactions, the energy levels of a\ntrapped BEC are hard to calculate analytically. The latter characterization is\nachieved by matching the PR energies to the energy levels of the confining trap\nusing perturbative methods. Further, this work reveals\nthe role of the interplay between dipole-dipole interactions (DDI) and trapping\ngeometry in defining the energies and amplitudes of the PRs. The PRs are\ninduced by a negative Gaussian potential whose depth oscillates with time.\nMoreover, the DDI play a role in this induction. The dynamics of this system is\nmodeled by the time-dependent Gross-Pitaevskii equation (TDGPE), which is\nnumerically solved by the Crank-Nicolson method. The PRs are discussed on the\nbasis of analytical methods: first, it is shown that PRs similar to the ones\nobtained from the TDGPE can be reproduced by the Lagrangian variational method.\nSecond, the energies at which the PRs arise are closely matched with the\nenergy levels of the corresponding trap calculated by time-independent\nperturbation theory. Third, the most probable transitions between the trap\nenergy levels yielding PRs are determined by time-dependent perturbation\ntheory. The most significant result of this work is that we have been able to\ncharacterize the above-mentioned energy levels of a DBEC in a complex trapping\npotential.\n"} {"abstract": " The digital transformation is underway, creating digital shadows of\n(almost) all physical entities and moving them to the Internet. The era of the\nInternet of Everything has therefore started to come into play, giving rise to\nunprecedented traffic growth. In this context, optical core networks forming\nthe backbone of the Internet infrastructure have been facing the critical issue\nof reaching the capacity limit of conventional fiber, a phenomenon widely\nreferred to as the capacity crunch. For many years, the many-fold increases in\nfiber capacity have been achieved by exploiting physical dimensions for\nmultiplexing optical signals, such as wavelength, polarization, time and,\nlately, space-division multiplexing using multi-core fibers, and this route\nseems to be coming to an end as almost all known dimensions have been exploited.\nThis necessitates a departure from traditional approaches in order to use the\nfiber capacity more efficiently and thereby improve economies of scale. This\npaper lays out a new perspective to integrate network coding (NC) functions into\noptical networks to achieve greater capacity efficiency by upgrading\nintermediate node functionalities.
In addition to\nreviewing recent proposals on new research problems enabled by NC operation in\noptical networks, we report state-of-the-art findings in the literature in an\neffort to renew interest in NC for optical networks, and discuss three critical\npoints for pushing forward its applicability and practicality, including i) NC\nas a new dimension for multiplexing optical signals, ii) algorithmic aspects of\nNC-enabled optical network design, and iii) NC as an entirely fresh way of\nsecuring optical signals at the physical layer.\n"} {"abstract": " In this paper, we propose a novel lightweight relation extraction approach of\nstructural-block-driven convolutional neural learning. Specifically, we\ndetect the essential sequential tokens associated with entities through\ndependency analysis, named a structural block, and only encode the block on\na block-wise and an inter-block-wise representation, utilizing multi-scale\nCNNs. This is to 1) eliminate the noise from irrelevant parts of a sentence,\nand 2) enhance the relevant block representation with both block-wise and\ninter-block-wise semantically enriched representations. Our method has the\nadvantage of being independent of long-sentence context, since we only encode\nthe sequential tokens within a block boundary. Experiments on two datasets,\ni.e., SemEval2010 and KBP37, demonstrate the significant advantages of our\nmethod. In particular, we achieve new state-of-the-art performance on the\nKBP37 dataset and performance comparable to the state of the art on the\nSemEval2010 dataset.\n"} {"abstract": " A model investigating the role of geometry on the alpha dose rate of spent\nnuclear fuel has been developed. This novel approach utilises a new piecewise\nfunction to describe the probability of alpha escape as a function of\nparticulate radius, decay range within the material, and position relative to the\nsurface. The alpha dose rates were produced for particulates of radii 1 $\\mu$m\nto 10 mm, showing considerable changes in the 1 $\\mu$m to 50 $\\mu$m range.\nResults indicate that for decreasing particulate sizes, approaching radii equal\nto or less than the range of the $\\alpha$-particle within the fuel, there is a\nsignificant increase in the rate of energy emitted per unit mass of fuel\nmaterial. The influence of geometry is more significant for smaller radii,\nshowing clear differences in dose rate curves below 50 $\\mu$m. These\nconsiderations are essential for any future accurate prediction of the\ndissolution rates and hydrogen gas release, driven by the radiolytic yields of\nparticulate spent nuclear fuel.\n"} {"abstract": " We investigate the effectiveness of three different job-search and training\nprogrammes for German long-term unemployed persons. On the basis of an\nextensive administrative data set, we evaluated the effects of those programmes\non various levels of aggregation using Causal Machine Learning. We found that\nparticipants benefit from the investigated programmes, with placement services\nbeing the most effective. Effects are realised quickly and are long-lasting for\nany programme. While the effects are rather homogeneous for men, we found\ndifferential effects for women across various characteristics. Women benefit in\nparticular when local labour market conditions improve.
Regarding\nthe allocation mechanism of the unemployed to the different programmes, we\nfound the observed allocation to be as effective as a random allocation.\nTherefore, we propose data-driven rules for the allocation of the unemployed to\nthe respective labour market programmes that would improve the status quo.\n"} {"abstract": " Faceted summarization provides briefings of a document from different\nperspectives. Readers can quickly comprehend the main points of a long document\nwith the help of a structured outline. However, little research has been\nconducted on this subject, partially due to the lack of large-scale faceted\nsummarization datasets. In this study, we present FacetSum, a faceted\nsummarization benchmark built on Emerald journal articles, covering a diverse\nrange of domains. Different from traditional document-summary pairs, FacetSum\nprovides multiple summaries, each targeted at specific sections of a long\ndocument, including the purpose, method, findings, and value. Analyses and\nempirical results on our dataset reveal the importance of bringing structure\ninto summaries. We believe FacetSum will spur further advances in summarization\nresearch and foster the development of NLP systems that can leverage the\nstructured information in both long texts and summaries.\n"} {"abstract": " Considering a double-headed Brownian motor moving with both translational and\nrotational degrees of freedom, we investigate the directed transport properties\nof the system in a traveling-wave potential. It is found that the traveling\nwave provides the essential condition for directed transport in the system,\nand that at an appropriate angular frequency the positive current can be\noptimized. A general current reversal appears upon modulating the angular\nfrequency of the traveling wave, the noise intensity, the external driving\nforce and the rod length. By transforming the dynamical equation in the\ntraveling-wave potential into that in a tilted potential, the mechanism of the\ncurrent reversal is analyzed. For both Gaussian and Levy noises, the currents\nshow a similar dependence on the parameters. Moreover, the current in the tilted\npotential shows a typical stochastic resonance effect. The external driving\nforce also has a resonance-like effect on the current in the tilted potential.\nBut the current in the traveling-wave potential exhibits the reverse behavior\nof that in the tilted potential. Besides, the currents clearly depend on the\nstability index of the Levy noise under certain conditions.\n"} {"abstract": " We demonstrate size-selective optical trapping and transport for\nnanoparticles near an optical nanofiber taper. Using a two-wavelength,\ncounter-propagating mode configuration, we show that 100 nm diameter and 150 nm\ndiameter gold nanospheres (GNSs) are trapped by the evanescent field in the\ntaper region at different optical powers. Conversely, when one nanoparticle\nspecies is trapped, the other may be transported, leading to a sieve-like\neffect. Our results show that sophisticated optical manipulation can be\nachieved in a passive configuration by taking advantage of mode behavior in\nnanophotonic devices.\n"} {"abstract": " We investigated changes in the b value of the Gutenberg-Richter law in and\naround the focal areas of the earthquakes of March 20 and May 1, 2021, with\nmagnitudes (M) 6.9 and 6.8, respectively, which occurred off the Pacific coast\nof Miyagi prefecture, northeastern Japan.
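The traveling-wave-to-tilted-potential transformation invoked in the Brownian motor abstract above can be made explicit in the simplest overdamped, single-coordinate caricature (our schematic; the actual model also carries a rotational degree of freedom). In the frame co-moving with a wave of speed $u$,
\[
\dot{x} = -\partial_x V(x - ut) + F + \xi(t), \qquad y := x - ut
\quad\Longrightarrow\quad
\dot{y} = -\,\partial_y\big[\,V(y) - (F - u)\,y\,\big] + \xi(t),
\]
so the wave speed enters as an extra effective tilt, which is why the current in the traveling-wave potential mirrors, with reversed behavior, the current in the tilted one.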
We showed that the b value in these\nfocal areas had been noticeably small, especially within a few years before the\noccurrence of the M6.9 earthquake in its vicinity, indicating that differential\nstress had been high in the focal areas. The coseismic slip of the 2011 Tohoku\nearthquake seems to have stopped just short of the east side of the focus of\nthe M6.9 earthquake. Furthermore, the afterslip of the 2011 Tohoku earthquake\nwas relatively small in the focal areas of the M6.9 and M6.8 earthquakes,\ncompared to the surrounding regions. In addition, the focus of the M6.9\nearthquake was situated close to the border point where the interplate slip in\nthe period from 2012 through 2021 has been considerably larger on the northern\nside than on the southern side. The high-stress state inferred from the b-value\nanalysis is concordant with those characteristics of interplate slip events. We\nfound that the M6.8 earthquake on May 1 occurred near an area where the b value\nremained small, even after the M6.9 quake. The areas ruptured by the two\nearthquakes now seem to almost coincide with the small-b-value region that had\nexisted before their occurrence. The b value on the east side of the focal\nareas of the M6.9 and M6.8 earthquakes, which corresponds to the eastern part of\nthe source region of the 1978 off-Miyagi prefecture earthquake, was consistently\nlarge, while the seismicity enhanced by the two earthquakes also shows a large\nb value, implying that stress in the region has not been very high.\n"} {"abstract": " During the early history of unitary quantum theory, Kato's exceptional\npoints (EPs, a.k.a. non-Hermitian degeneracies) of Hamiltonians $H(\\lambda)$\ndid not play any significant role, mainly due to the Stone theorem, which firmly\nconnected unitarity with Hermiticity. During the recent wave of\noptimism, people started believing that corridors of unitary access to the\nEPs could be opened, leading, say, to a new picture of quantum phase transitions\nvia an {\\it ad hoc} weakening of Hermiticity (replaced by\nquasi-Hermiticity). Subsequently, pessimism prevailed (the paths of access\nappeared to be fragile). In a setting restricted to the quantum physics of closed\nsystems, a return to optimism is advocated here: the apparent fragility of the\ncorridors is claimed to follow from a misinterpretation of the theory in its\nquasi-Hermitian formulation. Several perturbed versions of the realistic\nmany-body Bose-Hubbard model are chosen for illustration purposes.\n"} {"abstract": " Existing works on visual counting primarily focus on one specific category at\na time, such as people, animals, and cells. In this paper, we are interested in\ncounting everything, that is, counting objects from any category given only a\nfew annotated instances from that category. To this end, we pose counting as a\nfew-shot regression task. To tackle this task, we present a novel method that\ntakes a query image together with a few exemplar objects from the query image\nand predicts a density map for the presence of all objects of interest in the\nquery image. We also present a novel adaptation strategy to adapt our network\nto any novel visual category at test time, using only a few exemplar objects\nfrom the novel category.
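For context on the b-value analysis in the earthquake abstract above: the b value of the Gutenberg-Richter law is conventionally estimated with Aki's maximum-likelihood formula plus Utsu's binning correction. A minimal sketch (ours; the paper's exact windowing and completeness choices are not reproduced here):

```python
import numpy as np

def aki_b_value(mags, m_c, dm=0.1):
    """Maximum-likelihood b value of the Gutenberg-Richter law from catalogue
    magnitudes mags, completeness magnitude m_c, bin width dm (illustrative).
    b = log10(e) / (<M> - (m_c - dm/2)), following Aki (1965) and Utsu."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]                       # keep only the complete part
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
```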
We also introduce a dataset of 147 object categories\ncontaining over 6000 images that are suitable for the few-shot counting task.\nThe images are annotated with two types of annotation, dots and bounding boxes,\nand they can be used for developing few-shot counting models. Experiments on\nthis dataset show that our method outperforms several state-of-the-art object\ndetectors and few-shot counting approaches. Our code and dataset can be found\nat https://github.com/cvlab-stonybrook/LearningToCountEverything.\n"} {"abstract": " Turbulent puffs are ubiquitous in everyday life phenomena. Understanding\ntheir dynamics is important in a variety of situations ranging from industrial\nprocesses to pure and applied science. In all these fields, a deep knowledge of\nthe statistical structure of temperature and velocity space/time fluctuations\nis of paramount importance to construct models of chemical reaction (in\nchemistry), of condensation of virus-containing droplets (in virology and/or\nbiophysics), and of optimal mixing strategies in industrial applications. As a\nmatter of fact, results on turbulence in a puff are confined to bulk properties\n(i.e., average puff velocity and typical decay/growth time) and date back to\nthe second half of the 20th century. There is thus a huge gap to fill to pass\nfrom bulk properties to two-point statistical observables. Here we fill this\ngap by exploiting theory and numerics in concert to predict and validate the\nspace/time scaling behaviors of both velocity and temperature structure\nfunctions, including intermittency corrections. Excellent agreement between\ntheory and simulations is found. Our results are expected to have a profound\nimpact on the development of evaporation models for virus-containing droplets\ncarried by a turbulent puff, with benefits for the comprehension of the airborne\nroute of virus contagion.\n"} {"abstract": " The installation of electric vehicle charging stations (EVCS) will be\nessential to promote user acceptance of electric vehicles (EVs).\nHowever, if EVCS are exclusively supplied by the grid, negative impacts on grid\nstability together with possible CO2 emission increases could result.\nThe introduction of hybrid renewable energy systems (HRES) for EVCS can cope with\nboth drawbacks by reducing the load on the grid and generating clean\nelectricity. This paper develops a methodology based on a weighted\nmulticriteria process to design the most suitable configuration for HRES in\nEVCS. This methodology determines the local renewable resources and the EVCS\nelectricity demand. Then, taking into account environmental, economic and\ntechnical aspects, it deduces the most adequate HRES design for the EVCS.\nBesides, an experimental stage to validate the design deduced from the\nmulticriteria process is included. Therefore, the final design for the HRES in\nEVCS is supported not only by a complete numerical evaluation, but also by an\nexperimental verification that the demand is fully covered. Application of the\nmethodology to Valencia (Spain) shows that an off-grid HRES with solar PV,\nwind resources and battery support would be the most suitable configuration\nfor the system. This solution was also experimentally verified.\n"} {"abstract": " We use large deviation theory to obtain the free energy of the XY model on a\nfully connected graph on each site of which there is a randomly oriented field\nof magnitude $h$.
The phase diagram is obtained for two symmetric distributions\nof the random orientations: (a) a uniform distribution and (b) a distribution\nwith cubic symmetry. In both cases, the ordered state reflects the symmetry of\nthe underlying disorder distribution. The phase boundary has a multicritical\npoint, which separates a locus of continuous transitions (for small values of\n$h$) from a locus of first-order transitions (for large $h$). The free energy\nis a function of a single variable in case (a) and a function of two variables\nin case (b), leading to different characters of the multicritical points in the\ntwo cases.\n"} {"abstract": " Machine learning (ML) tools such as encoder-decoder deep convolutional neural\nnetworks (CNNs) are able to extract relationships between inputs and outputs of\nlarge complex systems directly from raw data. For time-varying systems the\npredictive capabilities of ML tools degrade as the systems are no longer\naccurately represented by the data sets with which the ML models were trained.\nRe-training is possible, but only if the changes are slow and if new\ninput-output training data measurements can be made online non-invasively. In\nthis work we present an approach to deep learning for time-varying systems in\nwhich adaptive feedback based only on available system output measurements is\napplied to encoded low-dimensional dense layers of encoder-decoder type CNNs.\nWe demonstrate our method in developing an inverse model of a complex charged\nparticle accelerator system, mapping output beam measurements to input beam\ndistributions while both the accelerator components and the unknown input beam\ndistribution quickly vary with time. We demonstrate our results using\nexperimental measurements of the input and output beam distributions of the\nHiRES ultra-fast electron diffraction (UED) microscopy beam line at Lawrence\nBerkeley National Laboratory. We show how our method can be used to aid both\nphysics and ML-based surrogate online models to provide non-invasive beam\ndiagnostics, and we also demonstrate how our method can be used to automatically\ntrack the time-varying quantum efficiency map of a particle accelerator's\nphotocathode.\n"} {"abstract": " The extremely large magnetoresistance (XMR) effect in nonmagnetic semimetals\nhas attracted intensive attention recently. Here we propose an XMR candidate\nmaterial, SrPd, based on first-principles electronic structure calculations in\ncombination with a semi-classical model. The calculated carrier densities in\nSrPd indicate that there is a good electron-hole compensation, while the\ncalculated intrinsic carrier mobilities are as high as 10$^5$\ncm$^2$V$^{-1}$s$^{-1}$. There are only two doubly degenerate bands crossing the\nFermi level for SrPd, thus a semi-classical two-band model is available for\ndescribing its transport properties. Accordingly, the magnetoresistance of SrPd\nunder a magnetic field of $4$ Tesla is predicted to reach ${10^5} \\%$ at low\ntemperature. Furthermore, the calculated topological invariant indicates that\nSrPd is topologically trivial. Our theoretical studies suggest that SrPd can\nserve as an ideal platform to examine the charge compensation mechanism of the\nXMR effect.\n"} {"abstract": " Chemically peculiar stars in eclipsing binary systems are rare objects that\nallow the derivation of fundamental stellar parameters and important\ninformation on the evolutionary status and the origin of the observed chemical\npeculiarities.
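The "semi-classical two-band model" invoked in the SrPd abstract above has a standard textbook form, and the abstract's numbers can be checked against it; the worked values below are an editorial illustration using the quoted order-of-magnitude mobility, not figures taken from the paper.

```latex
% Semiclassical two-band magnetoresistance (standard textbook form):
\[
  \mathrm{MR}(B)=\frac{\rho_{xx}(B)-\rho_{xx}(0)}{\rho_{xx}(0)}
  =\frac{\sigma_e\sigma_h(\mu_e+\mu_h)^2B^2}
        {(\sigma_e+\sigma_h)^2+(\sigma_e\mu_h-\sigma_h\mu_e)^2B^2},
  \qquad \sigma_{e,h}=n_{e,h}\,e\,\mu_{e,h}.
\]
% For perfect electron-hole compensation (n_e = n_h) the field-dependent term
% in the denominator vanishes and the magnetoresistance grows without saturation:
\[
  \mathrm{MR}(B)=\mu_e\mu_h B^2
  \approx\left(10~\mathrm{m^2\,V^{-1}\,s^{-1}}\right)^2\times(4~\mathrm{T})^2
  = 1600 \quad (\text{i.e. } 1.6\times10^{5}\,\%),
\]
% consistent with the 10^5 % scale quoted for SrPd at 4 Tesla.
```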
Here we present an investigation of the known eclipsing binary\nsystem BD+09 1467 = V680 Mon. Using spectra from the Large Sky Area\nMulti-Object Fiber Spectroscopic Telescope (LAMOST) and our own observations, we\nidentify the primary component of the system as a mercury-manganese (HgMn/CP3)\nstar (spectral type kB9 hB8 HeB9 V HgMn). Furthermore, photometric time series\ndata from the Transiting Exoplanet Survey Satellite (TESS) indicate that the\nsystem is a "heartbeat star", a member of a rare class of eccentric binary stars with\nshort-period orbits that exhibit a characteristic signature near the time of\nperiastron in their light curves due to the tidal distortion of the components.\nUsing all available photometric observations, we present an updated ephemeris\nand binary system parameters as derived from modelling of the system with the\nELISa code, which indicates that the secondary star has an effective\ntemperature of Teff = 8300+-200 K (spectral type of about A4). V680 Mon is only\nthe fifth known eclipsing CP3 star and the first one in a heartbeat binary.\nFurthermore, our results indicate that the star is located on the zero-age main\nsequence and is a possible member of the open cluster NGC 2264. As such, it lends\nitself perfectly to detailed studies and may turn out to be a keystone in the\nunderstanding of the development of CP3 star peculiarities.\n"} {"abstract": " To achieve reliable mining results for massive vessel trajectories, one of\nthe most important challenges is how to efficiently compute the similarities\nbetween different vessel trajectories. The computation of vessel trajectory\nsimilarity has recently attracted increasing attention in the maritime data\nmining research community. However, traditional shape- and warping-based\nmethods often suffer from several drawbacks, such as high computational cost and\nsensitivity to unwanted artifacts and non-uniform sampling rates. To\neliminate these drawbacks, we propose an unsupervised learning method which\nautomatically extracts low-dimensional features through a convolutional\nauto-encoder (CAE). In particular, we first generate the informative trajectory\nimages by remapping the raw vessel trajectories into two-dimensional matrices\nwhile maintaining the spatio-temporal properties. Based on the massive vessel\ntrajectories collected, the CAE can learn the low-dimensional representations\nof informative trajectory images in an unsupervised manner. The trajectory\nsimilarity is finally equivalent to efficiently computing the similarities\nbetween the learned low-dimensional features, which strongly correlate with the\nraw vessel trajectories. Comprehensive experiments on realistic data sets have\ndemonstrated that the proposed method largely outperforms traditional\ntrajectory similarity computation methods in terms of efficiency and\neffectiveness. The high-quality trajectory clustering performance could also be\nguaranteed according to the CAE-based trajectory similarity computation\nresults.\n"} {"abstract": " The gap generation in the dice model with local four-fermion interaction is\nstudied. Due to the presence of two valleys with degenerate electron states,\nthere are two main types of gaps. The intra- and intervalley gaps describe the\nelectron and hole pairing in the same and different valleys, respectively. We\nfound that while the generation of the intravalley gap takes place only in the\nsupercritical regime, the intervalley gap is generated for an arbitrarily small\ncoupling.
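To make the CAE pipeline of the vessel-trajectory abstract above concrete, here is a minimal sketch of its two non-learned steps: remapping a raw trajectory into a two-dimensional matrix and comparing learned features. The grid size, bounds, and the stand-in encoder are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def trajectory_to_image(traj, grid=64, bounds=(-180.0, 180.0, -90.0, 90.0)):
    """Remap a raw trajectory (an (N, 2) array of lon/lat fixes) into a 2D
    occupancy matrix, preserving the spatial footprint of the voyage."""
    lon0, lon1, lat0, lat1 = bounds
    img = np.zeros((grid, grid), dtype=np.float32)
    xs = ((traj[:, 0] - lon0) / (lon1 - lon0) * (grid - 1)).astype(int)
    ys = ((traj[:, 1] - lat0) / (lat1 - lat0) * (grid - 1)).astype(int)
    img[np.clip(ys, 0, grid - 1), np.clip(xs, 0, grid - 1)] = 1.0
    return img

def cosine_similarity(a, b):
    """Trajectory similarity as cosine similarity of low-dimensional features."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Stand-in for the trained CAE encoder: any map from an image to a feature vector.
encode = lambda img: img.reshape(-1)[::16]

t1 = np.column_stack([np.linspace(0, 30, 200), np.linspace(10, 20, 200)])
t2 = t1 + np.random.default_rng(0).normal(0, 0.2, t1.shape)  # noisy near-duplicate
print(cosine_similarity(encode(trajectory_to_image(t1)), encode(trajectory_to_image(t2))))
```

In the paper's setting, `encode` would be the bottleneck of the trained convolutional auto-encoder rather than a fixed subsampling.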
The physical reason for the absence of the critical coupling is the\ncatalysis of the intervalley gap generation by the flat band in the electron\nspectrum of the dice model. The completely quenched kinetic energy in the flat\nband, when integrated over momentum in the gap equation, leads to an extremely large\nintervalley gap proportional to the area of the Brillouin zone.\n"} {"abstract": " Automated defect inspection is critical for effective and efficient\nmaintenance, repair, and operations in advanced manufacturing. However,\nautomated defect inspection is often constrained by the lack of defect\nsamples, especially when we adopt deep neural networks for this task. This\npaper presents Defect-GAN, an automated defect synthesis network that generates\nrealistic and diverse defect samples for training accurate and robust defect\ninspection networks. Defect-GAN learns through defacement and restoration\nprocesses, where the defacement generates defects on normal surface images\nwhile the restoration removes defects to generate normal images. It employs a\nnovel compositional layer-based architecture for generating realistic defects\nwithin various image backgrounds with different textures and appearances. It\ncan also mimic the stochastic variations of defects and offer flexible control\nover the locations and categories of the generated defects within the image\nbackground. Extensive experiments show that Defect-GAN is capable of\nsynthesizing various defects with superior diversity and fidelity. In addition,\nthe synthesized defect samples demonstrate their effectiveness in training\nbetter defect inspection networks.\n"} {"abstract": " Let $\\mathcal{G}$ denote the variety generated by infinite dimensional\nGrassmann algebras; i.e., the collection of all unitary associative algebras\nsatisfying the identity $[[z_1,z_2],z_3]=0$, where $[z_i,z_j]=z_iz_j-z_jz_i$.\nConsider the free algebra $F_3$ in $\\mathcal{G}$ generated by\n$X_3=\\{x_1,x_2,x_3\\}$. The commutator ideal $F_3'$ of the algebra $F_3$ has a\nnatural $K[X_3]$-module structure. We call an element $p\\in F_3$ symmetric if\n$p(x_1,x_2,x_3)=p(x_{\\xi1},x_{\\xi2},x_{\\xi3})$ for each permutation $\\xi\\in\nS_3$. Symmetric elements form the subalgebra $F_3^{S_3}$ of invariants of the\nsymmetric group $S_3$ in $F_3$. We give a free generating set for the\n$K[X_3]^{S_3}$-module $(F_3')^{S_3}$.\n"} {"abstract": " We investigate the efficiency of two very different spoken term detection\napproaches for transcription when the available data is insufficient to train a\nrobust ASR system. This work is grounded in a very low-resource language\ndocumentation scenario where only a few minutes of recording have been\ntranscribed for a given language so far. Experiments on two oral languages show\nthat a pretrained universal phone recognizer, fine-tuned with only a few\nminutes of target language speech, can be used for spoken term detection with a\nbetter overall performance than a dynamic time warping approach. In addition,\nwe show that representing phoneme recognition ambiguity in a graph structure\ncan further boost the recall while maintaining high precision in the low\nresource spoken term detection task.\n"} {"abstract": " Multilingual Transformer-based language models, usually pretrained on more\nthan 100 languages, have been shown to achieve outstanding results in a wide\nrange of cross-lingual transfer tasks.
However, it remains unknown whether the\noptimization for different languages conditions the capacity of the models to\ngeneralize over syntactic structures, and how languages with syntactic\nphenomena of different complexity are affected. In this work, we explore the\nsyntactic generalization capabilities of the monolingual and multilingual\nversions of BERT and RoBERTa. More specifically, we evaluate the syntactic\ngeneralization potential of the models on English and Spanish tests, comparing\nthe syntactic abilities of monolingual and multilingual models on the same\nlanguage (English), and of multilingual models on two different languages\n(English and Spanish). For English, we use the available SyntaxGym test suite;\nfor Spanish, we introduce SyntaxGymES, a novel ensemble of targeted syntactic\ntests in Spanish, designed to evaluate the syntactic generalization\ncapabilities of language models through the SyntaxGym online platform.\n"} {"abstract": " There has been much recent interest in two-sided markets and dynamics\nthereof. In a rather general discrete-time feedback model, we show conditions\nthat assure that, for each agent, the long-run average allocation of a resource\nto the agent has a limit, which is independent of any initial conditions. We\ncall this property unique ergodicity.\n Our model encompasses two-sided markets and more complicated interconnections\nof workers and customers, such as in a supply chain. It allows for\nnon-linearity of the response functions of market participants. Finally, it\nallows for uncertainty in the response of market participants by considering a\nset of possible responses to either price or other signals and a measure to\nsample from these.\n"} {"abstract": " Social acceptability is an important consideration for HCI designers who\ndevelop technologies for social contexts. However, the current theoretical\nfoundations of social acceptability research do not account for the complex\ninteractions among the actors in social situations and the specific role of\ntechnology. In order to improve the understanding of how context shapes and is\nshaped by situated technology interactions, we suggest reframing the social\nspace as a dynamic bundle of social practices and exploring it with simulation\nstudies using agent-based modeling. We outline possible research directions\nthat focus on specific interactions among practices as well as regularities in\nemerging patterns.\n"} {"abstract": " In sharp contrast to the response of silica particles, we show that the\nmetal-dielectric Janus particles with boojum defects in a nematic liquid\ncrystal are self-propelled under the action of an electric field applied\nperpendicular to the director. The particles can be transported along any\ndirection in the plane of the sample by selecting the appropriate orientation\nof the Janus vector with respect to the director. The direction of motion of\nthe particles is controllable by varying the field amplitude and frequency.
The\ndemonstrated command over the motility of the particles is promising for tunable\ntransport and microrobotic applications.\n"} {"abstract": " Sleep staging is fundamental for sleep assessment and disease diagnosis.\nAlthough previous attempts to classify sleep stages have achieved high\nclassification performance, several challenges remain open: 1) How to\neffectively extract salient waves in multimodal sleep data; 2) How to capture\nthe multi-scale transition rules among sleep stages; 3) How to adaptively seize\nthe key role of a specific modality for sleep staging. To address these\nchallenges, we propose SalientSleepNet, a multimodal salient wave detection\nnetwork for sleep staging. Specifically, SalientSleepNet is a temporal fully\nconvolutional network based on the $\\rm U^2$-Net architecture that was\noriginally proposed for salient object detection in computer vision. It is\nmainly composed of two independent $\\rm U^2$-like streams to extract the\nsalient features from multimodal data, respectively. Meanwhile, the multi-scale\nextraction module is designed to capture multi-scale transition rules among\nsleep stages. Besides, the multimodal attention module is proposed to\nadaptively capture valuable information from multimodal data for the specific\nsleep stage. Experiments on two datasets demonstrate that SalientSleepNet\noutperforms the state-of-the-art baselines. It is worth noting that this model\nhas the fewest parameters compared with the existing deep neural\nnetwork models.\n"} {"abstract": " Spin-phonon interaction is an important channel for spin and energy\nrelaxation in magnetic insulators. Understanding this interaction is critical\nfor developing magnetic insulator-based spintronic devices. Quantifying this\ninteraction in yttrium iron garnet (YIG), one of the most extensively\ninvestigated magnetic insulators, remains challenging because of the large\nnumber of atoms in a unit cell. Here, we report temperature-dependent and\npolarization-resolved Raman measurements in a YIG bulk crystal. We first\nclassify the phonon modes based on their symmetry. We then develop a modified\nmean-field theory and define a symmetry-adapted parameter to quantify\nspin-phonon interaction in a phonon-mode specific way for the first time in\nYIG. Based on this improved mean-field theory, we discover a positive\ncorrelation between the spin-phonon interaction strength and the phonon\nfrequency.\n"} {"abstract": " We propose a method for the unsupervised reconstruction of a\ntemporally-coherent sequence of surfaces from a sequence of time-evolving point\nclouds, yielding dense, semantically meaningful correspondences between all\nkeyframes. We represent the reconstructed surface as an atlas, using a neural\nnetwork. Using canonical correspondences defined via the atlas, we encourage\nthe reconstruction to be as isometric as possible across frames, leading to\nsemantically-meaningful reconstruction. Through experiments and comparisons, we\nempirically show that our method achieves results that exceed the state of the\nart in the accuracy of unsupervised correspondences and the accuracy of surface\nreconstruction.\n"} {"abstract": " Citrus segmentation is a key step of automatic citrus picking.
While most\ncurrent image segmentation approaches achieve good results by\npixel-wise segmentation, these supervised learning-based methods require a\nlarge amount of annotated data and do not consider the continuous temporal\nchanges of citrus position in real-world applications. In this paper, we first\ntrain a simple CNN with a small number of labelled citrus images in a\nsupervised manner, which can roughly predict the citrus location from each\nframe. Then, we extend a state-of-the-art unsupervised learning approach to\npre-learn the citrus's potential movements between frames from unlabelled\ncitrus videos. To take advantage of both networks, we employ the multimodal\ntransformer to combine supervised learned static information and unsupervised\nlearned movement information. The experimental results show that combining both\nnetworks allows the prediction accuracy to reach 88.3$\\%$ IoU and 93.6$\\%$\nprecision, outperforming the original supervised baseline by 1.2$\\%$ and 2.4$\\%$.\nCompared with most of the existing citrus segmentation methods, our method uses\na small amount of supervised data and a large amount of unsupervised data,\nwhile learning the pixel-level location information and the temporal\ninformation of citrus changes to enhance the segmentation effect.\n"} {"abstract": " A bipartite experiment consists of one set of units being assigned treatments\nand another set of units for which we measure outcomes. The two sets of units\nare connected by a bipartite graph, governing how the treated units can affect\nthe outcome units. In this paper, we consider estimation of the average total\ntreatment effect in the bipartite experimental framework under a linear\nexposure-response model. We introduce the Exposure Reweighted Linear (ERL)\nestimator, and show that the estimator is unbiased, consistent and\nasymptotically normal, provided that the bipartite graph is sufficiently\nsparse. To facilitate inference, we introduce an unbiased and consistent\nestimator of the variance of the ERL point estimator. In addition, we introduce\na cluster-based design, Exposure-Design, that uses heuristics to increase the\nprecision of the ERL estimator by realizing a desirable exposure distribution.\n"} {"abstract": " Given a stream of graph edges from a dynamic graph, how can we assign anomaly\nscores to edges and subgraphs in an online manner, for the purpose of detecting\nunusual behavior, using constant time and memory? For example, in intrusion\ndetection, existing work seeks to detect either anomalous edges or anomalous\nsubgraphs, but not both. In this paper, we first extend the count-min sketch\ndata structure to a higher-order sketch. This higher-order sketch has the\nuseful property of preserving the dense subgraph structure (dense subgraphs in\nthe input turn into dense submatrices in the data structure). We then propose\nfour online algorithms that utilize this enhanced data structure, which (a)\ndetect both edge and graph anomalies; (b) process each edge and graph in\nconstant memory and constant update time per newly arriving edge; and (c)\noutperform state-of-the-art baselines on four real-world datasets. Our method\nis the first streaming approach that incorporates dense subgraph search to\ndetect graph anomalies in constant memory and time.\n"} {"abstract": " Effectively modeling phenomena present in highly nonlinear dynamical systems\nwhilst also accurately quantifying uncertainty is a challenging task, which\noften requires problem-specific techniques.
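A minimal sketch of the higher-order count-min-sketch idea from the edge-stream abstract above: each hash pair maps the two endpoints of an edge to a row and a column of a small matrix, so dense subgraphs in the input become dense submatrices in the structure. Hash choices and sizes below are illustrative, not the authors' implementation.

```python
import numpy as np

class HigherOrderCMS:
    """Count-min-style sketch for edge streams: d independent hash pairs,
    each mapping an edge (u, v) into a w x w count matrix."""

    def __init__(self, d=4, w=256, seed=0):
        rng = np.random.default_rng(seed)
        self.mats = np.zeros((d, w, w))
        self.salts = rng.integers(1, 2**31, size=(d, 2))
        self.w = w

    def _cells(self, u, v):
        for k, (su, sv) in enumerate(self.salts):
            yield k, hash((int(su), u)) % self.w, hash((int(sv), v)) % self.w

    def update(self, u, v, count=1.0):
        for k, i, j in self._cells(u, v):
            self.mats[k, i, j] += count

    def estimate(self, u, v):
        # Count-min guarantee: the minimum over hash functions upper-bounds
        # the true count, with bounded overestimation.
        return min(self.mats[k, i, j] for k, i, j in self._cells(u, v))

cms = HigherOrderCMS()
cms.update("10.0.0.1", "10.0.0.7")
print(cms.estimate("10.0.0.1", "10.0.0.7"))  # >= true count of this edge
```

Anomaly scores for edges or whole graph snapshots can then be computed from such count matrices; dense-submatrix search on `mats` is the part the paper's four algorithms add on top.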
We present a novel, domain-agnostic\napproach to tackling this problem, using compositions of physics-informed\nrandom features, derived from ordinary differential equations. The architecture\nof our model leverages recent advances in approximate inference for deep\nGaussian processes, such as layer-wise weight-space approximations, which allow\nus to incorporate random Fourier features, and stochastic variational inference\nfor approximate Bayesian inference. We provide evidence that our model is\ncapable of capturing highly nonlinear behaviour in real-world multivariate time\nseries data. In addition, we find that our approach achieves comparable\nperformance to a number of other probabilistic models on benchmark regression\ntasks.\n"} {"abstract": " Laser-induced ultrafast demagnetization has puzzled researchers around the\nworld for over two decades. Intrinsic complexity in electronic, magnetic, and\nphononic subsystems is difficult to understand microscopically. So far it is\nnot possible to explain demagnetization using a single mechanism, which\nsuggests that a crucial piece of information is still missing. In this paper, we return\nto a fundamental aspect of physics: spin and its change within each band in the\nentire Brillouin zone. We employ fcc Ni as an example and use an extremely\ndense {\\bf k} mesh to map out spin changes for every band close to the Fermi\nlevel along all the high symmetry lines. To our surprise, the spin angular momentum\nat some special {\\bf k} points abruptly changes from $\\pm \\hbar/2$ to $\\mp\n\\hbar/2$ simply by moving from one crystal momentum point to the next. This\nexplains why intraband transitions, which the spin superdiffusion model is\nbased upon, can induce a sharp spin moment reduction, and why electric current\ncan change spin orientation in spintronics. These special {\\bf k} points, which\nare called spin Berry points, are not random and appear when several bands are\nclose to each other, so the Berry potential of spin majority states is\ndifferent from that of spin minority states. Although spin Berry points jump\nwithin a single band, when we group several neighboring bands together, they\nform distinctive smooth spin Berry lines. It is the band structure that\ndisrupts those lines. Spin Berry points are crucial to laser-induced ultrafast\ndemagnetization and spintronics.\n"} {"abstract": " Surface-response functions are one of the most promising routes for bridging\nthe gap between fully quantum-mechanical calculations and phenomenological\nmodels in quantum nanoplasmonics. Within all the currently available recipes\nfor obtaining such response functions, \\emph{ab initio} calculations remain one\nof the most predominant, wherein the surface-response functions are retrieved\nvia the metal's non-equilibrium response to an external perturbation. Here, we\npresent a complementary approach where one of the most appealing\nsurface-response functions, namely the Feibelman $d$-parameters, yields a finite\ncontribution even in the case where they are calculated directly from the\nequilibrium properties described under the local-response approximation (LRA),\nbut with a spatially varying equilibrium electron density. Using model\ncalculations that mimic both spill-in and spill-out of the equilibrium electron\ndensity, we show that the obtained $d$-parameters are in qualitative agreement\nwith more elaborate, but also more computationally demanding, \\emph{ab initio}\nmethods.
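The physics-informed random-features abstract above builds on random Fourier features; as a reference point, the generic construction of Rahimi and Recht (2007) for approximating an RBF kernel looks as follows. Dimensions and the lengthscale are placeholders, and the paper composes ODE-derived features rather than this plain variant.

```python
import numpy as np

def rff(X, D=500, lengthscale=1.0, seed=0):
    """Random Fourier features: phi(x) such that phi(x) @ phi(y) approximates
    the RBF kernel exp(-||x - y||^2 / (2 * lengthscale**2))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / lengthscale, size=(d, D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)                 # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Phi = rff(X)
K_approx = Phi @ Phi.T  # compare against the exact RBF Gram matrix
```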
The analytical work presented here illustrates how microscopic\nsurface-response functions can emerge out of entirely local electrodynamic\nconsiderations.\n"} {"abstract": " In this paper, a novel and robust algorithm is proposed for adaptive\nbeamforming based on the idea of reconstructing the autocorrelation sequence\n(ACS) of a random process from a set of measured data. This is obtained from\nthe first column and the first row of the sample covariance matrix (SCM) after\naveraging along its diagonals. Then, the power spectrum of the correlation\nsequence is estimated using the discrete Fourier transform (DFT). The DFT\ncoefficients corresponding to the angles within the noise-plus-interference\nregion are used to reconstruct the noise-plus-interference covariance matrix\n(NPICM), while the desired signal covariance matrix (DSCM) is estimated by\nidentifying and removing the noise-plus-interference component from the SCM. In\nparticular, the spatial power spectrum of the estimated received signal is\nutilized to compute the correlation sequence corresponding to the\nnoise-plus-interference, in which the dominant DFT coefficient of the\nnoise-plus-interference is captured. A key advantage of the proposed adaptive\nbeamforming is that only a small amount of prior information is required. Specifically, an\nimprecise knowledge of the array geometry and of the angular sectors in which\nthe interferences are located is needed. Simulation results demonstrate that\ncompared with previous reconstruction-based beamformers, the proposed approach\ncan achieve better overall performance in the case of multiple mismatches over\na very large range of input signal-to-noise ratios.\n"} {"abstract": " Neural models have transformed the fundamental information retrieval problem\nof mapping a query to a giant set of items. However, the need for efficient and\nlow latency inference forces the community to reconsider efficient approximate\nnear-neighbor search in the item space. To this end, learning to index is\ngaining much interest in recent times. Methods have to trade off between obtaining\nhigh accuracy and maintaining load balance and scalability in distributed\nsettings. We propose a novel approach called IRLI (pronounced `early'), which\niteratively partitions the items by learning the relevant buckets directly from\nthe query-item relevance data. Furthermore, IRLI employs a superior\npower-of-$k$-choices based load balancing strategy. We mathematically show that\nIRLI retrieves the correct item with high probability under very natural\nassumptions and provides superior load balancing. IRLI surpasses the best\nbaseline's precision on multi-label classification while being $5x$ faster on\ninference. For near-neighbor search tasks, the same method outperforms the\nstate-of-the-art Learned Hashing approach NeuralLSH by requiring only\n~1/6th of the candidates for the same recall. IRLI is both data and model\nparallel, making it ideal for distributed GPU implementation. We demonstrate\nthis advantage by indexing 100 million dense vectors and surpassing the popular\nFAISS library by >10% on recall.\n"} {"abstract": " The magnetic ground state of polycrystalline N\\'eel skyrmion hosting material\nGaV$_4$S$_8$ has been investigated using ac susceptibility and powder neutron\ndiffraction. In the absence of an applied magnetic field GaV$_4$S$_8$ undergoes\na transition from a paramagnetic to a cycloidal state below 13~K and then to a\nferromagnetic-like state below 6~K.
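The diagonal-averaging step described in the beamforming abstract above fits in a few lines: a Toeplitz-consistent autocorrelation sequence is read off the sample covariance matrix, and its DFT yields the spatial power spectrum whose coefficients are then attributed to the signal or the noise-plus-interference sectors. This is an illustrative reading of the procedure, not the authors' code.

```python
import numpy as np

def acs_power_spectrum(R):
    """Autocorrelation sequence (ACS) from an M x M sample covariance matrix,
    estimated by averaging along its diagonals, plus the DFT power spectrum
    of the Hermitian-extended sequence."""
    M = R.shape[0]
    r = np.array([np.diagonal(R, offset=-k).mean() for k in range(M)])
    acs = np.concatenate([r, np.conj(r[:0:-1])])  # lags 0..M-1, then -(M-1)..-1
    return r, np.real(np.fft.fft(acs))            # spatial power spectrum samples
```

In the paper's scheme, the spectrum samples falling inside the known interference sectors would then be used to rebuild the NPICM, with the remainder attributed to the desired signal.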
With evidence from ac susceptibility and\npowder neutron diffraction, we have identified the commensurate magnetic\nstructure at 1.5 K, with ordered magnetic moments of $0.23(2)~\\mu_{\\mathrm{B}}$\non the V1 sites and $0.22(1)~\\mu_{\\mathrm{B}}$ on the V2 sites. These moments\nhave ferromagnetic-like alignment but with a 39(8)$^{\\circ}$ canting of the\nmagnetic moments on the V2 sites away from the V$_4$ cluster. In the\nincommensurate magnetic phase that exists between 6 and 13 K, we provide a\nthorough and careful analysis of the cycloidal magnetic structure exhibited by\nthis material using powder neutron diffraction.\n"} {"abstract": " Deep learning has advanced from fully connected architectures to structured\nmodels organized into components, e.g., the transformer composed of positional\nelements, modular architectures divided into slots, and graph neural nets made\nup of nodes. In structured models, an interesting question is how to conduct\ndynamic and possibly sparse communication among the separate components. Here,\nwe explore the hypothesis that restricting the transmitted information among\ncomponents to discrete representations is a beneficial bottleneck. The\nmotivating intuition is human language, in which communication occurs through\ndiscrete symbols. Even though individuals have different understandings of what\na "cat" is based on their specific experiences, the shared discrete token makes\nit possible for communication among individuals to be unimpeded by individual\ndifferences in internal representation. To discretize the values of concepts\ndynamically communicated among specialist components, we extend the\nquantization mechanism from the Vector-Quantized Variational Autoencoder to\nmulti-headed discretization with shared codebooks and use it for\ndiscrete-valued neural communication (DVNC). Our experiments show that DVNC\nsubstantially improves systematic generalization in a variety of architectures\n-- transformers, modular architectures, and graph neural networks. We also show\nthat the DVNC is robust to the choice of hyperparameters, making the method\nvery useful in practice. Moreover, we establish a theoretical justification of\nour discretization process, proving that it has the ability to increase noise\nrobustness and reduce the underlying dimensionality of the model.\n"} {"abstract": " Optimal stopping is the problem of deciding the right time at which to take a\nparticular action in a stochastic system, in order to maximize an expected\nreward. It has many applications in areas such as finance, healthcare, and\nstatistics. In this paper, we employ deep Reinforcement Learning (RL) to learn\noptimal stopping policies in two financial engineering applications: namely\noption pricing, and optimal option exercise. We present for the first time a\ncomprehensive empirical evaluation of the quality of optimal stopping policies\nidentified by three state of the art deep RL algorithms: double deep Q-learning\n(DDQN), categorical distributional RL (C51), and Implicit Quantile Networks\n(IQN). In the case of option pricing, our findings indicate that in a\ntheoretical Black-Scholes environment, IQN successfully identifies nearly\noptimal prices. On the other hand, it is slightly outperformed by C51 when\nconfronted with real stock data movements in a put option exercise problem that\ninvolves assets from the S&P500 index.
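A minimal sketch of the multi-headed discretization described in the neural-communication abstract above: a transmitted vector is split into heads, and each segment is snapped to the nearest entry of a shared codebook. The straight-through gradient trick needed for training is omitted, and all sizes are illustrative.

```python
import numpy as np

def discretize(h, codebook, heads=4):
    """Multi-headed discretization with a shared codebook: split `h` into
    `heads` segments and replace each segment by its nearest codebook entry."""
    segs = h.reshape(heads, -1)                              # (heads, dim/heads)
    d = ((segs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)                                   # one token per head
    return codebook[idx].reshape(-1), idx

codebook = np.random.default_rng(0).normal(size=(32, 16))   # 32 shared codes
h = np.random.default_rng(1).normal(size=64)                # a 64-d message
quantized, tokens = discretize(h, codebook)                 # tokens: 4 discrete symbols
```

The communicated message is then `quantized` (or just `tokens`), which is what limits the information passing between components to a discrete vocabulary.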
More importantly, the C51 algorithm is\nable to identify an optimal stopping policy that achieves 8% more out-of-sample\nreturns than the best of four natural benchmark policies. We conclude with a\ndiscussion of our findings, which should pave the way for relevant future\nresearch.\n"} {"abstract": " We present the sympathetic eruption of a standard and a blowout coronal jet\noriginating from two adjacent coronal bright points (CBP1 and CBP2) in a polar\ncoronal hole, using soft X-ray and extreme ultraviolet observations\nrespectively taken by the Hinode and the Solar Dynamic Observatory. In the\nevent, a collimated jet with obvious westward lateral motion first launched\nfrom CBP1, during which a small bright point appeared around CBP1's east end,\nand magnetic flux cancellation was observed within the eruption source region.\nBased on these characteristics, we interpret the observed jet as a standard jet\nassociated with photospheric magnetic flux cancellation. About 15 minutes later,\nthe westward-moving jet spire interacted with CBP2 and resulted in magnetic\nreconnection between them, which caused the formation of the second jet above\nCBP2 and the appearance of a bright loop system in between the two CBPs. In\naddition, we observed the writhing, kinking, and violent eruption of a small\nkink structure close to CBP2's west end but inside the jet-base, which made the\nsecond jet brighter and broader than the first one. These features suggest that\nthe second jet should be a blowout jet triggered by the magnetic reconnection\nbetween CBP2 and the spire of the first jet. We conclude that the two\nsuccessive jets were physically connected to each other rather than a temporal\ncoincidence, and this observation also suggests that coronal jets can be\ntriggered by external eruptions or disturbances, besides internal magnetic\nactivities or magnetohydrodynamic instabilities.\n"} {"abstract": " This paper compares notions of double sliceness for links. The main result is\nto show that a large family of 2-component Montesinos links are not strongly\ndoubly slice despite being weakly doubly slice and having doubly slice\ncomponents. Our principal obstruction to strong double slicing comes from\nconsidering branched double covers. To this end we prove a result classifying\nSeifert fibered spaces which admit smooth embeddings into integer homology\n$S^1 \\times S^3$s by maps inducing surjections on the first homology group. A\nnumber of other results and examples pertaining to doubly slice links are also\ngiven.\n"} {"abstract": " To boost the capacity of the cellular system, operators have in the past\nreused the same licensed spectrum by deploying 4G LTE small cells (femtocells).\nOver time, however, this small-cell licensed spectrum has become insufficient\nfor future applications like augmented reality (AR) and virtual reality (VR).\nHence, cellular operators looked for alternative unlicensed spectrum in the\nWi-Fi 5 GHz band, which 3GPP later named LTE Licensed Assisted Access (LAA).\nThe recent and current rollout of LAA deployments (in developed nations like\nthe US) provides an opportunity to understand the profound ground truth of\ncoexistence. This paper gives a high-level overview of my past, present, and\nfuture research work in the direction of small cell benefits.
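The option-exercise task in the optimal-stopping abstract above can be phrased as a tiny Markov decision process. The sketch below uses tabular Q-learning on simulated Black-Scholes paths as a stand-in for the deep RL agents (DDQN, C51, IQN) that the paper actually evaluates; every parameter and the state discretization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T, steps = 100.0, 100.0, 0.05, 0.2, 1.0, 50
dt = T / steps

def simulate_path():
    z = rng.normal(size=steps)
    return S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))

# Tabular Q over (time step, coarse price bucket); actions: 0 = hold, 1 = exercise.
n_buckets, alpha, eps = 40, 0.05, 0.1
Q = np.zeros((steps, n_buckets, 2))
bucket = lambda s: int(np.clip((s - 50.0) / (100.0 / n_buckets), 0, n_buckets - 1))

for episode in range(20000):
    path = simulate_path()
    for t, s in enumerate(path):
        b = bucket(s)
        a = rng.integers(2) if rng.random() < eps else int(Q[t, b].argmax())
        if a == 1 or t == steps - 1:           # exercise, or forced exercise at expiry
            reward = np.exp(-r * (t + 1) * dt) * max(K - s, 0.0)  # discounted put payoff
            Q[t, b, a] += alpha * (reward - Q[t, b, a])
            break
        target = Q[t + 1, bucket(path[t + 1]), :].max()
        Q[t, b, 0] += alpha * (target - Q[t, b, 0])

# After training, Q[t, b].argmax() approximates the exercise policy, and
# Q[0, bucket(S0)].max() approximates the option value under that policy.
```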
In the future, we shift the\nfocus onto the latest unlicensed band: 6 GHz, where the latest Wi-Fi version,\n802.11ax, will coexist with the latest cellular technology, 5G New Radio (NR), in\nunlicensed\n"} {"abstract": " Controlling the activity of proteins with azobenzene photoswitches is a\npotent tool for manipulating their biological function. With the help of light,\none can change, e.g., binding affinities, control allostery, or tamper with\ncomplex biological processes. Additionally, due to their intrinsically fast\nphotoisomerisation, azobenzene photoswitches can serve as triggers to initiate\nout-of-equilibrium processes. Such switching of the activity, therefore,\ninitiates a cascade of conformational events that can only be accessed with\ntime-resolved methods. In this Review, we will show how combining the potency\nof azobenzene photoswitching with transient spectroscopic techniques helps to\ndisclose the order of events and provide an experimental observation of\nbiomolecular interactions in real time. This will ultimately help us to\nunderstand how proteins accommodate, adapt and readjust their structure to\nanswer an incoming signal, and it will complete our knowledge of the dynamical\ncharacter of proteins.\n"} {"abstract": " Nonuniform structure of low-density nuclear matter, known as nuclear pasta,\nis expected to appear not only in the inner crust of neutron stars but also in\ncore-collapse supernova explosions and neutron-star mergers. We perform fully\nthree-dimensional calculations of inhomogeneous nuclear matter and neutron-star\nmatter in the low-density region using the Thomas-Fermi approximation. The\nnuclear interaction is described in the relativistic mean-field approach with\nthe point-coupling interaction, where the meson exchange in each channel is\nreplaced by the contact interaction between nucleons. We investigate the\ninfluence of nuclear symmetry energy and its density dependence on pasta\nstructures by introducing a coupling term between the isoscalar-vector and\nisovector-vector interactions. It is found that the properties of pasta phases\nin neutron-rich matter are strongly dependent on the symmetry energy and\nits slope. In addition to typical shapes like droplets, rods, slabs, tubes, and\nbubbles, some intermediate pasta structures are also observed in cold stellar\nmatter with a relatively large proton fraction. We find that nonspherical\nshapes are unlikely to be formed in neutron-star crusts, since the proton\nfraction obtained in $\\beta$ equilibrium is rather small. The inner crust\nproperties may lead to a visible difference in the neutron-star radius.\n"} {"abstract": " This paper addresses the Mountain Pass Theorem for locally Lipschitz\nfunctions on finite-dimensional vector spaces in terms of tangencies. Namely,\nlet $f \\colon \\mathbb R^n \\to \\mathbb R$ be a locally Lipschitz function with a\nmountain pass geometry. Let $$c := \\inf_{\\gamma \\in \\mathcal\nA}\\max_{t\\in[0,1]}f(\\gamma(t)),$$ where $\\mathcal{A}$ is the set of all\ncontinuous paths joining $x^*$ to $y^*.$ We show that either $c$ is a critical\nvalue of $f$ or $c$ is a tangency value at infinity of $f.$ This reduces to the\nMountain Pass Theorem of Ambrosetti and Rabinowitz in the case where the\nfunction $f$ is definable (such as semi-algebraic) in an o-minimal structure.\n"} {"abstract": " Recent studies on metamorphic petrology as well as microstructural\nobservations suggest the influence of mechanical effects upon chemically active\nmetamorphic minerals.
Thus, the understanding of such a coupling is crucial to\ndescribe the dynamics of geomaterials. In this effort, we derive a\nthermodynamically-consistent framework to characterize the evolution of\nchemically active minerals. We model the metamorphic mineral assemblages as a\nsolid-species solution where the species mass transport and chemical reaction\ndrive the stress generation process. The theoretical foundations of the\nframework rely on modern continuum mechanics, thermodynamics far from\nequilibrium, and the phase-field model. We treat the mineral solid solution as\na continuum body, and following the Larch\\'e and Cahn network model, we define\ndisplacement and strain fields. Consequently, we obtain a set of coupled\nchemo-mechanical equations. We use the aforementioned framework to study single\nminerals as solid solutions during metamorphism. Furthermore, we emphasise the\nuse of the phase-field framework as a promising tool to model complex\nmulti-physics processes in geoscience. Without loss of generality, we use\ncommon physical and chemical parameters found in the geoscience literature to\nportray a comprehensive view of the underlying physics. Thereby, we carry out\n2D and 3D numerical simulations using material parameters for metamorphic\nminerals to showcase and verify the chemo-mechanical interactions of mineral\nsolid solutions that undergo spinodal decomposition, chemical reactions, and\ndeformation.\n"} {"abstract": " Reversible covalent kinase inhibitors (RCKIs) are a class of novel kinase\ninhibitors attracting increasing attention because they simultaneously show the\nselectivity of covalent kinase inhibitors, yet avoid permanent\nprotein-modification-induced adverse effects. Over the last decade, RCKIs have\nbeen reported to target different kinases, including atypical kinases.\nCurrently, three RCKIs are undergoing clinical trials to treat specific\ndiseases, for example, Pemphigus, an autoimmune disorder. In this perspective,\nfirst, RCKIs are systematically summarized, including characteristics of\nelectrophilic groups, chemical scaffolds, nucleophilic residues, and binding\nmodes. Second, we provide insights into privileged electrophiles, the\ndistribution of nucleophiles and hence effective design strategies for RCKIs.\nFinally, we provide a brief perspective on future design strategies for RCKIs,\nincluding those that target proteins other than kinases.\n"} {"abstract": " We prove that the sublinearly Morse boundary of every known cubulated group\ncontinuously injects into the Gromov boundary of a certain hyperbolic graph. We\nalso show that for all CAT(0) cube complexes, convergence to sublinearly Morse\ngeodesic rays has a simple combinatorial description using the hyperplanes\ncrossed by such sequences. As an application of this combinatorial description,\nwe show that a certain subspace of the Roller boundary continuously surjects\nonto the subspace of the visual boundary consisting of sublinearly Morse geodesic\nrays.\n"} {"abstract": " Depth completion aims to generate a dense depth map from the sparse depth map\nand aligned RGB image. However, current depth completion methods use extremely\nexpensive 64-line LiDAR (about $100,000) to obtain sparse depth maps, which will\nlimit their application scenarios. Compared with the 64-line LiDAR, the\nsingle-line LiDAR is much less expensive and much more robust.
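Of the coupled chemo-mechanical processes listed in the phase-field abstract above, the spinodal-decomposition ingredient alone already fits in a few lines. The following is a generic explicit Cahn-Hilliard step on a periodic grid with illustrative, dimensionless parameters, not the paper's coupled solver.

```python
import numpy as np

# dc/dt = M * Laplacian(mu), with chemical potential mu = c^3 - c - kappa * Laplacian(c).
N, M, kappa, dt = 128, 1.0, 1.0, 0.01
rng = np.random.default_rng(0)
c = 0.1 * (rng.random((N, N)) - 0.5)   # small fluctuations about the critical composition

def lap(f):
    """5-point periodic Laplacian on a unit-spacing grid."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

for step in range(5000):
    mu = c**3 - c - kappa * lap(c)     # double-well free energy + interface penalty
    c += dt * M * lap(mu)              # conservative (mass-preserving) dynamics
# c now shows coarsening domains characteristic of spinodal decomposition.
```

In the paper's framework, this composition field would additionally be coupled to displacement and strain fields through the stress-generation terms.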
Therefore, we\npropose a method to tackle the problem of single-line depth completion, in\nwhich we aim to generate a dense depth map from the single-line LiDAR info and\nthe aligned RGB image. A single-line depth completion dataset is proposed based\non the existing 64-line depth completion dataset (KITTI). A network called the\nSemantic Guided Two-Branch Network (SGTBN), which contains global and local\nbranches to extract and fuse global and local info, is proposed for this task. A\nsemantic-guided depth upsampling module is used in our network to make full use\nof the semantic info in RGB images. In addition to the usual MSE loss, we add the\nvirtual normal loss to strengthen the constraint of high-order 3D geometry in our\nnetwork. Our network outperforms the state of the art in the single-line depth\ncompletion task. Besides, compared with monocular depth estimation, our\nmethod also has significant advantages in precision and model size.\n"} {"abstract": " Spectral observations below Lyman-alpha are now obtained with the Cosmic\nOrigins Spectrograph (COS) on the Hubble Space Telescope (HST). It is therefore\nnecessary to provide an accurate treatment of the blue wing of the Lyman-alpha\nline that enables correct calculations of radiative transport in DA and DBA\nwhite dwarf stars. On the theoretical front, we recently developed very\naccurate H-He potential energies for the hydrogen 1s, 2s, and 2p states.\nNevertheless, an uncertainty remained about the asymptotic correlation of the\nSigma states and the electronic dipole transition moments. A similar difficulty\noccurred in our first calculations for the resonance broadening of hydrogen\nperturbed by collisions with neutral H atoms. The aim of this paper is twofold.\nFirst, we clarify the question of the asymptotic correlation of the Sigma\nstates, and we show that relativistic contributions, even very tiny, may need\nto be accounted for in a correct long-range and asymptotic description of the\nstates because of the specific 2s 2p Coulomb degeneracy in hydrogen. This\neffect of relativistic corrections, inducing a small splitting of the 2s and 2p\nstates of H, is shown to be important for the Sigma-Sigma transition dipole\nmoments in H-He and is also discussed in H-H. Second, we use existing (H-H) and\nnewly determined (H-He) accurate potentials and properties to provide a\ntheoretical investigation of the collisional effects on the blue wing of the\nLyman-alpha line of H perturbed by He and H. We study the relative\ncontributions in the blue wing of the H and He atoms according to their\nrelative densities. We finally present a comparison with recent COS\nobservations and propose an assignment for a feature centered at 1190 A.\n"} {"abstract": " We apply a recently developed unsupervised machine learning scheme for local\natomic environments to characterize large-scale, disordered aggregates formed\nby sequence-defined macromolecules. This method provides new insight into the\nstructure of these disordered, dilute aggregates, which has proven difficult to\nunderstand using collective variables manually derived from expert knowledge.\nIn contrast to such conventional order parameters, we are able to classify the\nglobal aggregate structure directly using descriptions of the local\nenvironments. The resulting characterization provides a deeper understanding of\nthe range of possible self-assembled structures and their relationships to each\nother.
We also provide a detailed analysis of the effects of finite system\nsize, stochasticity, and kinetics of these aggregates based on the learned\ncollective variables. Interestingly, we find that the spatiotemporal evolution\nof systems in the learned latent space is smooth and continuous, despite being\nderived from only a single snapshot from each of about 1000 monomer sequences.\nThese results demonstrate the insight which can be gained by applying\nunsupervised machine learning to soft matter systems, especially when suitable\norder parameters are not known.\n"} {"abstract": " Recently, significant progress has been made in single-view depth estimation\nthanks to increasingly large and diverse depth datasets. However, these\ndatasets are largely limited to specific application domains (e.g. indoor,\nautonomous driving) or static in-the-wild scenes due to hardware constraints or\ntechnical limitations of 3D reconstruction. In this paper, we introduce the\nfirst depth dataset, DynOcc, consisting of dynamic in-the-wild scenes. Our\napproach leverages the occlusion cues in these dynamic scenes to infer depth\nrelationships between points of selected video frames. To achieve accurate\nocclusion detection and depth order estimation, we employ a novel occlusion\nboundary detection, filtering and thinning scheme followed by a robust\nforeground/background classification method. In total, our DynOcc dataset\ncontains 22M depth pairs out of 91K frames from a diverse set of videos. Using\nour dataset we achieved state-of-the-art results measured in weighted human\ndisagreement rate (WHDR). We also show that the depth maps inferred by models\ntrained with DynOcc can preserve sharper depth boundaries.\n"} {"abstract": " Large global companies need to speed up their innovation activities to\nincrease competitive advantage. However, such companies' organizational\nstructures impede their ability to capture trends they are well aware of due to\nbureaucracy, slow decision-making, distributed departments, and distributed\nprocesses. One way to strengthen the innovation capability is through fostering\ninternal startups. We report findings from an embedded multiple-case study of\nfive internal startups in a globally distributed company to identify barriers\nto software product innovation: late involvement of software developers, a\nmissing or unclarified executive sponsor, yearly budgeting and planning, unclear\ndecision-making authority, and lack of digital infrastructure for\nexperimentation and access to data from external actors. Drawing on the\nframework of continuous software engineering proposed by Fitzgerald and Stol,\nwe discuss the role of BizDev in software product innovation. We suggest that\nlack of continuity, rather than the lack of speed, is an ultimate challenge for\ninternal startups in large global companies.\n"} {"abstract": " We study the background (equilibrium), linear and nonlinear spin currents in\n2D Rashba spin-orbit coupled systems with Zeeman splitting and in 3D\nnoncentrosymmetric metals, using a spin current operator modified by inclusion of\nthe anomalous velocity. The linear spin Hall current arises due to the\nanomalous velocity of charge carriers induced by the Berry curvature. The\nnonlinear spin current occurs due to the band velocity and/or the anomalous\nvelocity.
For 2D Rashba systems, the background spin current saturates at high\nFermi energy (independent of the Zeeman coupling), the linear spin current exhibits\na plateau at the Zeeman gap, and the nonlinear spin currents are peaked at the gap\nedges. The magnitude of the nonlinear spin current peaks is enhanced with the\nstrength of the Zeeman interaction. The linear spin current is polarized out of\nplane, while the nonlinear ones are polarized in-plane. We witness a pure\nanomalous nonlinear spin current with spin polarization along the direction of\npropagation. In 3D noncentrosymmetric metals, background and linear spin\ncurrents are monotonically increasing functions of Fermi energy, while\nnonlinear spin currents vary non-monotonically as a function of Fermi energy\nand are independent of the Berry curvature. These findings may provide useful\ninformation to manipulate spin currents in Rashba spin-orbit coupled systems.\n"} {"abstract": " Using a navigation process with the datum $(F,V)$, in which $F$ is a Finsler\nmetric and the smooth tangent vector field $V$ satisfies $F(-V(x))>1$\neverywhere, a Lorentz Finsler metric $\\tilde{F}$ can be induced. Isoparametric\nfunctions and isoparametric hypersurfaces with or without involving a smooth\nmeasure can be defined for $\\tilde{F}$. When the vector field $V$ in the\nnavigation datum is homothetic, we prove the local correspondences between\nisoparametric functions and isoparametric hypersurfaces before and after this\nnavigation process. Using these correspondences, we provide some examples of\nisoparametric functions and isoparametric hypersurfaces on a Funk space of\nLorentz Randers type.\n"} {"abstract": " We study the high frequency Hall conductivity in a two-dimensional (2D) model\nof conduction electrons coupled to a background magnetic skyrmion texture via\nan effective Hund's coupling term. For an ordered skyrmion crystal, a Kubo\nformula calculation using the basis of skyrmion crystal Chern bands reveals a\nresonant Hall response at a frequency set by the Hund's coupling:\n$\\hbar\\omega_{\\text{res}} \\approx J_H$. A complementary real-space Kubo formula\ncalculation for an isolated skyrmion in a box reveals a similar resonant Hall\nresponse. A linear relation between the area under the Hall resonance curve and\nthe skyrmion density is discovered numerically and is further elucidated using\na gradient expansion, which is valid for smooth textures, and a local\napproximation based on a spin-trimer calculation. We point out the issue of\ndistinguishing this skyrmion contribution from a similar feature arising from\nspin-orbit interactions, as demonstrated in a model for Rashba spin-orbit\ncoupled electrons in a collinear ferromagnet, which is analogous to the\ndifficulty of unambiguously separating the d.c. topological Hall effect from\nthe anomalous Hall effect. The resonant feature in the high frequency\ntopological Hall effect is proposed to provide a potentially useful local\noptical signature of skyrmions via probes such as scanning magneto-optical Kerr\nmicroscopy.\n"} {"abstract": " We report on observations of the active K2 dwarf $\\epsilon$ Eridani based on\ncontemporaneous SPIRou, NARVAL, and TESS data obtained over two months in late\n2018, when the activity of the star was reported to be in a non-cyclic phase.\nWe first recover the fundamental parameters of the target from both visible and\nnIR spectral fitting. The large-scale magnetic field is investigated from\npolarimetric data.
From unpolarized spectra, we estimate the total magnetic\nflux through Zeeman broadening of magnetically sensitive nIR lines and the\nchromospheric emission using the CaII H & K lines. The TESS photometric\nmonitoring is modeled with pseudo-periodic Gaussian Process Regression.\nFundamental parameters of $\\epsilon$ Eridani derived from visible and\nnear-infrared wavelengths provide us with consistent results, also in agreement\nwith published values. We report a progressive increase of macroturbulence\ntowards larger nIR wavelengths. Zeeman broadening of individual lines\nhighlights an unsigned surface magnetic field $B_{\\rm mono} = 1.90 \\pm 0.13$\nkG, with a filling factor $f = 12.5 \\pm 1.7$% (unsigned magnetic flux $Bf = 237\n\\pm 36$ G). The large-scale magnetic field geometry, chromospheric emission,\nand broadband photometry display clear signs of non-rotational evolution over\nthe course of data collection. Characteristic decay times deduced from the\nlight curve and longitudinal field measurements fall in the range 30-40 d,\nwhile the characteristic timescale of surface differential rotation, as derived\nthrough the evolution of the magnetic geometry, is equal to $57 \\pm 5$ d. The\nlarge-scale magnetic field exhibits a combination of properties not observed\npreviously for $\\epsilon$ Eridani, with a surface field among the weakest\npreviously reported, but also mostly axisymmetric, and dominated by a toroidal\ncomponent.\n"} {"abstract": " Maintenance of existing software requires a large amount of time for\ncomprehending the source code. The architecture of a software system, however, may not\nbe clear to maintainers if up-to-date documentation is not available.\nSoftware clustering is often used as a remodularisation and architecture\nrecovery technique to help recover a semantic representation of the software\ndesign. Due to the diverse domains, structure, and behaviour of software\nsystems, the suitability of different clustering algorithms for different\nsoftware systems has not been investigated thoroughly. Research that introduces new\nclustering techniques usually validates the approach on a specific domain,\nwhich might limit its generalisability. If the chosen test subjects could only\nrepresent a narrow perspective of the whole picture, researchers might risk not\nbeing able to address the external validity of their findings. This work aims\nto fill this gap by introducing a new approach, Explaining Software Clustering\nfor Remodularisation, to evaluate the effectiveness of different software\nclustering approaches. This work focuses on hierarchical clustering and Bunch\nclustering algorithms and provides information about their suitability\naccording to the features of the software, which, as a consequence, enables the\nselection of the most suitable algorithm and configuration from our existing\npool of choices for a particular software system. The proposed framework is\ntested on 30 open source software systems with varying sizes and domains, and\ndemonstrates that it can characterise both the strengths and weaknesses of the\nanalysed software clustering algorithms using software features extracted from\nthe code.
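The "pseudo-periodic Gaussian Process Regression" used for the TESS light curve in the epsilon Eridani abstract above typically relies on a quasi-periodic covariance; a generic version with placeholder hyperparameters (not the fitted values from the paper) can be sketched as follows.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, A=1.0, P=11.0, ell=30.0, gamma=5.0):
    """Quasi-periodic GP covariance commonly used for spotted-star light curves:
    a squared-exponential decay (spot evolution, timescale `ell` in days)
    multiplying a periodic term (rotation period `P` in days)."""
    tau = t1[:, None] - t2[None, :]
    return A * np.exp(-tau**2 / (2 * ell**2)
                      - gamma * np.sin(np.pi * tau / P)**2)

t = np.linspace(0, 60, 200)                 # two months of observations
K = quasi_periodic_kernel(t, t) + 1e-6 * np.eye(len(t))  # jitter for stability
sample = np.random.default_rng(0).multivariate_normal(np.zeros(len(t)), K)
```

The decay timescale `ell` plays the role of the 30-40 d characteristic decay times quoted in the abstract, while `P` encodes the rotation period.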
The proposed approach also provides a better understanding of the\nalgorithms' behaviour through the application of dimensionality reduction\ntechniques.\n"} {"abstract": " A cleaner analytic technique for quantifying compounds in dense suspensions is\nneeded for wastewater and environmental analysis, chemical or bio-conversion\nprocess monitoring, biomedical diagnostics, food quality control, among others.\nIn this work, we introduce a green, fast, one-step method called nanoextraction\nfor the extraction and detection of target analytes from sub-milliliter dense\nsuspensions using surface nanodroplets, without toxic solvents or pre-removal\nof the solid contents. With nanoextraction, we achieve a limit of detection\n(LOD) of 10^(-9) M for a fluorescent model analyte obtained from a particle\nsuspension sample. This LOD is lower than that in water without particles (10^(-8)\nM), potentially due to the interaction of the particles and the analyte. The high\nparticle concentration in the suspension sample thus does not reduce the\nextraction efficiency, although the extraction process was slowed down by up to 5\nmin. As proof of principle, we demonstrate nanoextraction for the\nquantification of model compounds in a wastewater slurry containing 30 wt% sands\nand oily components (i.e. heavy oils). The nanoextraction and detection\ntechnology developed in this work may be used as a fast analytic technology for\ncomplex slurry samples in environmental and industrial waste, or in biomedical\ndiagnostics.\n"} {"abstract": " We explicitly compute the homology groups with coefficients in a field of\ncharacteristic zero of cocyclic subgroups of even Artin groups of FC-type. We\nalso give some partial results in the case when the coefficients are taken in a\nfield of prime characteristic.\n"} {"abstract": " We prove that for all positive integers $n$ and $k$, there exists an integer\n$N = N(n,k)$ satisfying the following. If $U$ is a set of $k$ direction vectors\nin the plane and $\\mathcal{J}_U$ is the set of all line segments in direction\n$u$ for some $u\\in U$, then for every $N$ families $\\mathcal{F}_1, \\ldots,\n\\mathcal{F}_N$, each consisting of $n$ mutually disjoint segments in\n$\\mathcal{J}_U$, there is a set $\\{A_1, \\ldots, A_n\\}$ of $n$ disjoint segments\nin $\\bigcup_{1\\leq i\\leq N}\\mathcal{F}_i$ and distinct integers $p_1, \\ldots,\np_n\\in \\{1, \\ldots, N\\}$ satisfying that $A_j\\in \\mathcal{F}_{p_j}$ for all\n$j\\in \\{1, \\ldots, n\\}$. We generalize this property for underlying lines in\n$k$ fixed directions to $k$ families of simple curves with certain conditions.\n"} {"abstract": " L. Moret-Bailly constructed families $\\mathfrak{C}\\rightarrow \\mathbb{P}^1$\nof genus 2 curves with supersingular jacobians. In this paper we first classify\nthe reducible fibers of a Moret-Bailly family using linear algebra over a\nquaternion algebra. The main result is an algorithm that exploits properties of\ntwo reducible fibers to compute a hyperelliptic model for any irreducible fiber\nof a Moret-Bailly family.\n"} {"abstract": " This paper presents a method for gaze estimation from face images. We\ntrain several gaze estimators adopting four different network architectures,\nincluding an architecture designed for gaze estimation (i.e., iTracker-MHSA) and\nthree originally designed for general computer vision tasks (i.e., BoTNet,\nHRNet, ResNeSt). Then, we select the best six estimators and ensemble their\npredictions through a linear combination.
The method ranks first on the\nleaderboard of the ETH-XGaze Competition, achieving an average angular error of\n$3.11^{\circ}$ on the ETH-XGaze test set.\n"} {"abstract": " In a separable Hilbert space $X$, we study the controlled evolution equation\n\begin{equation*} u'(t)+Au(t)+p(t)Bu(t)=0, \end{equation*} where $A\geq-\sigma\nI$ ($\sigma\geq0$) is a self-adjoint linear operator, $B$ is a bounded linear\noperator on $X$, and $p\in L^2_{loc}(0,+\infty)$ is a bilinear control.\n We give sufficient conditions in order for the above nonlinear control system\nto be locally controllable to the $j$th eigensolution for any $j\geq1$. We also\nderive semi-global controllability results in large time and discuss\napplications to parabolic equations in low space dimension. Our method is\nconstructive and all the constants involved in the main results can be\nexplicitly computed.\n"} {"abstract": " In the cuprates, one-dimensional chain compounds provide a unique opportunity\nto understand the microscopic physics due to the availability of reliable\ntheories. However, progress has been limited by the inability to controllably\ndope these materials. Here, we report the synthesis and spectroscopic analysis\nof the one-dimensional cuprate Ba$_{2-x}$Sr$_x$CuO$_{3+\delta}$ over a wide\nrange of hole doping. Our angle-resolved photoemission experiments reveal the\ndoping evolution of the holon and spinon branches. We identify a prominent\nfolding branch whose intensity fails to match predictions of the simple Hubbard\nmodel. An additional strong near-neighbor attraction, which may arise from\ncoupling to phonons, quantitatively explains experiments for all accessible\ndoping levels. Considering structural and quantum chemistry similarities among\ncuprates, this attraction will play a similarly crucial role in the high-$T_C$\nsuperconducting counterparts.\n"} {"abstract": " Antonio Colla was a meteorologist and astronomer who made sunspot\nobservations at the Meteorological Observatory of the Parma University (Italy).\nHe carried out his sunspot records from 1830 to 1843, just after the Dalton\nMinimum. We have recovered 71 observation days for this observer.\nUnfortunately, many of these records are qualitative and we could only obtain\nthe number of sunspot groups and/or single sunspots from 25 observations.\nHowever, we highlight the importance of these records because Colla is not\nincluded in the sunspot group database as an observer and, therefore, neither\nare his sunspot observations. According to the number of groups, the sunspot\nobservations made by Colla are similar to those of several observers of his\ntime. For common observation days, only Stark significantly recorded more\ngroups than Colla. Moreover, we have calculated the sunspot areas and positions\nfrom Colla's sunspot drawings, concluding that both the areas and positions\nrecorded by this observer seem unrealistic. Therefore, Colla's drawings can be\ninterpreted as sketches including reliable information on the number of groups,\nbut the information on sunspot areas and positions should not be used for\nscientific purposes.\n"} {"abstract": " All current approaches for statically enforcing differential privacy in\nhigher order languages make use of either linear or relational refinement\ntypes. A barrier to adoption for these approaches is the lack of support for\nexpressing these \"fancy types\" in mainstream programming languages.
For\nexample, no mainstream language supports relational refinement types, and\nalthough Rust and modern versions of Haskell both employ some linear typing\ntechniques, they are inadequate for embedding enforcement of differential\nprivacy, which requires \"full\" linear types a la Girard. We propose a new type\nsystem that enforces differential privacy, avoids the use of linear and\nrelational refinement types, and can be easily embedded in mainstream richly\ntyped programming languages such as Scala, OCaml and Haskell. We demonstrate\nsuch an embedding in Haskell, illustrate its expressiveness on case studies,\nand prove that our type-based enforcement of differential privacy is sound.\n"} {"abstract": " We develop a multicomponent lattice Boltzmann (LB) model for the 2D\nRayleigh--Taylor turbulence with a Shan-Chen pseudopotential implemented on\nGPUs. In the immiscible case this method is able to accurately overcome the\ninherent numerical complexity caused by the complicated structure of the\ninterface that appears in the fully developed turbulent regime. Accuracy of the\nLB model is tested both for early and late stages of instability. For the\ndeveloped turbulent motion we analyze the balance between different terms\ndescribing variations of the kinetic and potential energies. Then, we analyze\nthe role of the interface in the energy balance, and also the effects of the\nvorticity induced by the interface in the energy dissipation. Statistical\nproperties are compared for miscible and immiscible flows. Our results can also\nbe considered as a first validation step to extend the application of the LB\nmodel to 3D immiscible Rayleigh-Taylor turbulence.\n"} {"abstract": " We present a geometrically exact nonlinear analysis of elastic in-plane beams\nin the context of finite but small strain theory. The formulation utilizes the\nfull beam metric and obtains the complete analytic elastic constitutive model\nby employing the exact relation between the reference and equidistant strains.\nThus, we account for the nonlinear strain distribution over the thickness of a\nbeam. In addition to the full analytical constitutive model, four simplified\nones are presented. Their comparison provides a thorough examination of the\ninfluence of a beam's metric on the structural response. We show that the\nappropriate formulation depends on the curviness of a beam at all\nconfigurations. Furthermore, the nonlinear distribution of strain along the\nthickness of strongly curved beams must be considered to obtain a complete and\naccurate response.\n"} {"abstract": " Autoencoders as tools behind anomaly searches at the LHC have the structural\nproblem that they only work in one direction, extracting jets with higher\ncomplexity but not the other way around. To address this, we derive classifiers\nfrom the latent space of (variational) autoencoders, specifically in Gaussian\nmixture and Dirichlet latent spaces. In particular, the Dirichlet setup solves\nthe problem and improves both the performance and the interpretability of the\nnetworks.\n"} {"abstract": " We reconsider the work of Elkalla on subnormal subgroups of 3-manifold\ngroups, giving essentially algebraic arguments that extend to the case of\n$PD_3$-groups and group pairs. However, the argument relies on an $L^2$-Betti\nnumber hypothesis which has not yet been shown to hold in general.\n"} {"abstract": " Crystals and other condensed matter systems described by density waves often\nexhibit dislocations.
Here we show, by considering the topology of the ground\nstate manifolds (GSMs) of such systems, that dislocations in the density phase\nfield always split into disclinations, and that the disclinations themselves\nare constrained to sit at particular points in the GSM. Consequently, the\ntopology of the GSM forbids zero-energy dislocation glide, giving rise to a\nPeierls-Nabarro barrier.\n"} {"abstract": " In this paper, we present a model for semantic memory that allows machines to\ncollect information and experiences to become more proficient with time. After\nsemantic analysis of the sensory and other related data, the processed\ninformation is stored in a knowledge graph, which is then used to comprehend\nthe work instructions expressed in natural language. This imparts cognitive\nbehavior to industrial robots, enabling them to execute the required tasks in a\ndeterministic manner. The paper outlines the architecture of the system along\nwith an implementation of the proposal.\n"} {"abstract": " Magnetic Resonance Fingerprinting (MRF) is a method to extract quantitative\ntissue properties such as T1 and T2 relaxation rates from arbitrary pulse\nsequences using conventional magnetic resonance imaging hardware. MRF pulse\nsequences have thousands of tunable parameters which can be chosen to maximize\nprecision and minimize scan time. Here we perform de novo automated design of\nMRF pulse sequences by applying physics-inspired optimization heuristics. Our\nexperimental data suggest that systematic errors dominate over random errors in\nMRF scans under clinically-relevant conditions of high undersampling. Thus, in\ncontrast to prior optimization efforts, which focused on statistical error\nmodels, we use a cost function based on explicit first-principles simulation of\nsystematic errors arising from Fourier undersampling and phase variation. The\nresulting pulse sequences display features qualitatively different from\npreviously used MRF pulse sequences and achieve fourfold shorter scan time than\nprior human-designed sequences of equivalent precision in T1 and T2.\nFurthermore, the optimization algorithm has discovered the existence of MRF\npulse sequences with intrinsic robustness against shading artifacts due to\nphase variation.\n"} {"abstract": " The 5G mobile network brings several new features that can be applied to\nexisting and new applications. High reliability, low latency, and high data\nrate are some of the features which fulfill the requirements of vehicular\nnetworks. Vehicular networks aim to provide safety for road users and several\nadditional advantages such as enhanced traffic efficiency and in-vehicle\ninfotainment services. This paper summarizes the most important aspects of\nNR-V2X, which is standardized by 3GPP, focusing on sidelink communication. The\nmain part of this work covers 3GPP Rel-16, which is the first 3GPP release for\nNR-V2X, and the work/study items of the future Rel-17.\n"} {"abstract": " Existing emotion-aware conversational models usually focus on controlling the\nresponse contents to align with a specific emotion class, whereas empathy is\nthe ability to understand and be concerned about the feelings and experiences\nof others. Hence, it is critical to learn the causes that evoke the users'\nemotion for empathetic responding, a.k.a. emotion causes. To gather emotion\ncauses in online environments, we leverage counseling strategies and develop an\nempathetic chatbot to utilize the causal emotion information.
On a real-world\nonline dataset, we verify the effectiveness of the proposed approach by\ncomparing our chatbot with several SOTA methods using automatic metrics,\nexpert-based human judgements as well as user-based online evaluation.\n"} {"abstract": " We investigate an M/M/1 queue operating in two switching environments, where\nthe switch is governed by a two-state time-homogeneous Markov chain. This model\nallows us to describe a system that is subject to regular operating phases\nalternating with anomalous working phases or random repair periods. We first\nobtain the steady-state distribution of the process in terms of a generalized\nmixture of two geometric distributions. In the special case when only one kind\nof switch is allowed, we analyze the transient distribution, and investigate\nthe busy period problem. The analysis is also performed by means of a suitable\nheavy-traffic approximation which leads to a continuous random process. Its\ndistribution satisfies a partial differential equation with randomly\nalternating infinitesimal moments. For the approximating process we determine\nthe steady-state distribution, the transient distribution and a\nfirst-passage-time density.\n"} {"abstract": " Signomials are obtained by generalizing polynomials to allow for arbitrary\nreal exponents. This generalization offers great expressive power, but has\nhistorically sacrificed the organizing principle of ``degree'' that is central\nto polynomial optimization theory. We reclaim that principle here through the\nconcept of signomial rings, which we use to derive complete convex relaxation\nhierarchies of upper and lower bounds for signomial optimization via sums of\narithmetic-geometric exponentials (SAGE) nonnegativity certificates. The\nPositivstellensatz underlying the lower bounds relies on the concept of\nconditional SAGE and removes regularity conditions required by earlier works,\nsuch as convexity and Archimedeanity of the feasible set. Through worked\nexamples we illustrate the practicality of this hierarchy in areas such as\nchemical reaction network theory and chemical engineering. These examples\ninclude comparisons to direct global solvers (e.g., BARON and ANTIGONE) and the\nLasserre hierarchy (where appropriate). The completeness of our hierarchy of\nupper bounds follows from a generic construction whereby a Positivstellensatz\nfor signomial nonnegativity over a compact set provides for arbitrarily strong\nouter approximations of the corresponding cone of nonnegative signomials. While\nworking toward that result, we prove basic facts on the existence and\nuniqueness of solutions to signomial moment problems.\n"} {"abstract": " For a random walk in a uniformly elliptic and i.i.d. environment on $\mathbb\nZ^d$ with $d \geq 4$, we show that the quenched and annealed large deviations\nrate functions agree on any compact set contained in the boundary $\partial\n\mathbb{D}:=\{ x \in \mathbb R^d : |x|_1 =1\}$ of their domain which does not\nintersect any of the $(d-2)$-dimensional facets of $\partial \mathbb{D}$,\nprovided that the disorder of the environment is~low~enough. As a consequence,\nwe obtain a simple explicit formula for both rate functions on $\partial\n\mathbb{D}$ at low disorder. In contrast to previous works, our results do not\nassume any ballistic behavior of the random walk and are not restricted to\nneighborhoods of any given point (on the boundary $\partial \mathbb{D}$).
In\naddition, our~results complement those in [BMRS19], where, using different\nmethods, we investigate the equality of the rate functions in the interior of\ntheir domain. Finally, for a general parametrized family of environments,\nwe~show that the strength of disorder determines a phase transition in the\nequality of both rate functions, in the sense that for each $x \in \partial\n\mathbb{D}$ there exists $\varepsilon_x$ such that the two rate functions agree\nat $x$ when the disorder is smaller than $\varepsilon_x$ and disagree when it\nis larger. This further reconfirms the idea, introduced in [BMRS19], that the\ndisorder of the environment is in general intimately related to the equality\nof the rate functions.\n"} {"abstract": " In this paper, we consider graphon particle systems with heterogeneous\nmean-field type interactions and the associated finite particle approximations.\nUnder suitable growth (resp. convexity) assumptions, we obtain uniform-in-time\nconcentration estimates, over finite (resp. infinite) time horizon, for the\nWasserstein distance between the empirical measure and its limit, extending the\nwork of Bolley--Guillin--Villani.\n"} {"abstract": " It is well known that entanglement can benefit quantum information processing\ntasks. Quantum illumination, when first proposed, is surprising as\nentanglement's benefit survives entanglement-breaking noise. Since then, many\nefforts have been devoted to studying quantum sensing in noisy scenarios. The\napplicability of such schemes, however, is limited to a binary quantum\nhypothesis testing scenario. In terms of target detection, such schemes\ninterrogate a single polarization-azimuth-elevation-range-Doppler resolution\nbin at a time, limiting the impact to radar detection. We resolve this\nbinary-hypothesis limitation by proposing a quantum ranging protocol enhanced\nby entanglement. By formulating a ranging task as a multiary hypothesis testing\nproblem, we show that entanglement enables a 6-dB advantage in the error\nexponent against the optimal classical scheme. Moreover, the proposed ranging\nprotocol can also be utilized to implement a pulse-position modulated\nentanglement-assisted communication protocol. Our ranging protocol reveals\nentanglement's potential in general quantum hypothesis testing tasks and paves\nthe way towards a quantum-ranging radar with a provable quantum advantage.\n"} {"abstract": " In this paper, channel estimation techniques and phase shift design for\nintelligent reflecting surface (IRS)-empowered single-user multiple-input\nmultiple-output (SU-MIMO) systems are proposed. Among four channel estimation\ntechniques developed in the paper, the two novel ones, single-path approximated\nchannel (SPAC) and selective emphasis on rank-one matrices (SEROM), have low\ntraining overhead to enable practical IRS-empowered SU-MIMO systems. SPAC is\nmainly based on parameter estimation by approximating IRS-related channels as\ndominant single-path channels. SEROM exploits IRS phase shifts as well as\ntraining signals for channel estimation and easily adjusts its training\noverhead. A closed-form solution for IRS phase shift design is also developed\nto maximize spectral efficiency, where the solution only requires basic linear\noperations.
Numerical results show that SPAC and SEROM combined with the\nproposed IRS phase shift design achieve high spectral efficiency even with low\ntraining overhead compared to existing methods.\n"} {"abstract": " Our Galaxy and the nearby Andromeda galaxy (M31) are the most massive members\nof the Local Group, and they seem to be a bound pair, despite the uncertainties\non the relative motion of the two galaxies. A number of studies have shown that\nthe two galaxies will likely undergo a close approach in the next 4$-$5 Gyr. We\nused direct $N$-body simulations to model this interaction to shed light on the\nfuture of the Milky Way - Andromeda system and for the first time explore the\nfate of the two supermassive black holes (SMBHs) that are located at their\ncenters. We investigated how the uncertainties on the relative motion of the\ntwo galaxies, linked with the initial velocities and the density of the diffuse\nenvironment in which they move, affect the estimate of the time they need to\nmerge and form ``Milkomeda''. After the galaxy merger, we follow the evolution\nof their two SMBHs up to their close pairing and fusion. Under the fiducial set\nof parameters, we find that the Milky Way and Andromeda will have their closest\napproach in the next 4.3 Gyr and merge over a span of 10 Gyr. Although the time\nof the first encounter is consistent with other predictions, we find that the\nmerger occurs later than previously estimated. We also show that the two SMBHs\nwill spiral in the inner region of Milkomeda and coalesce in less than 16.6 Myr\nafter the merger of the two galaxies. Finally, we evaluate the\ngravitational-wave emission caused by the inspiral of the SMBHs, and we discuss\nthe detectability of similar SMBH mergers in the nearby Universe ($z\leq 2$)\nthrough next-generation gravitational-wave detectors.\n"} {"abstract": " With the emerging need to create fairness-aware solutions for search and\nrecommendation systems, evaluating such solutions remains a daunting challenge.\nWhile many of the traditional information retrieval (IR) metrics can capture\nthe relevance, diversity and novelty that make up utility with respect to\nusers, they are not suitable for inferring whether the presented results are\nfair from the perspective of responsible information exposure. On the other\nhand, various fairness metrics have been proposed but they do not account for\nthe user utility or do not measure it adequately. To address this problem, we\npropose a new metric called Fairness-Aware IR (FAIR). By unifying standard IR\nmetrics and fairness measures into an integrated metric, FAIR offers a new\nperspective for evaluating fairness-aware ranking results. Based on this\nmetric, we developed an effective ranking algorithm that jointly optimized user\nutility and fairness. The experimental results showed that our FAIR metric\ncould highlight results with good user utility and fair information exposure.\nWe showed how FAIR related to existing metrics and demonstrated the\neffectiveness of our FAIR-based algorithm. We believe our work opens up a new\ndirection of pursuing a computationally feasible metric for evaluating and\nimplementing fairness-aware IR systems.\n"} {"abstract": " Real-Time Networks (RTNs) provide latency guarantees for time-critical\napplications and aim to support different traffic categories via various\nscheduling mechanisms.
Those scheduling mechanisms rely on a precise network\nperformance measurement to dynamically adjust the scheduling strategies.\nMachine Learning (ML) offers an iterative procedure to measure network\nperformance. Network Calculus (NC) can calculate the bounds for the main\nperformance indexes such as latencies and throughputs in an RTN for ML. Thus,\nthe integration of ML and NC improves overall calculation efficiency. This\npaper provides a survey of different approaches to Real-Time Network\nperformance measurement via NC as well as ML and presents their results,\ndependencies, and application scenarios.\n"} {"abstract": " Given a bipartite graph with bipartition $(A,B)$ where $B$ is equipartitioned\ninto $k\ge2$ blocks, can the vertices in $A$ be picked one by one so that at\nevery step, the picked vertices cover roughly the same number of vertices in\neach of these blocks? We show that, if each block has cardinality $m$, the\nvertices in $B$ have the same degree, and each vertex in $A$ has at most $cm$\nneighbors in every block where $c>0$ is a small constant, then there is an\nordering $v_1,\ldots,v_n$ of the vertices in $A$ such that for every\n$j\in\{1,\ldots,n\}$, the numbers of vertices with a neighbor in\n$\{v_1,\ldots,v_j\}$ in every two blocks differ by at most $\sqrt{2(k-1)c}\cdot\nm$. This is related to a well-known lemma of Steinitz, and partially answers an\nunpublished question of Scott and Seymour.\n"} {"abstract": " Software verification may yield spurious failures when environment\nassumptions are not accounted for. Environment assumptions are the expectations\nthat a system or a component makes about its operational environment and are\noften specified in terms of conditions over the inputs of that system or\ncomponent. In this article, we propose an approach to automatically infer\nenvironment assumptions for Cyber-Physical Systems (CPS). Our approach improves\nthe state-of-the-art in three different ways: First, we learn assumptions for\ncomplex CPS models involving signal and numeric variables; second, the learned\nassumptions include arithmetic expressions defined over multiple variables;\nthird, we identify the trade-off between soundness and informativeness of\nenvironment assumptions and demonstrate the flexibility of our approach in\nprioritizing either of these criteria.\n We evaluate our approach using a public domain benchmark of CPS models from\nLockheed Martin and a component of a satellite control system from LuxSpace, a\nsatellite system provider. The results show that our approach outperforms\nstate-of-the-art techniques on learning assumptions for CPS models, and\nfurther, when applied to our industrial CPS model, our approach is able to\nlearn assumptions that are sufficiently close to the assumptions manually\ndeveloped by engineers to be of practical value.\n"} {"abstract": " A fully discrete and fully explicit low-regularity integrator is constructed\nfor the one-dimensional periodic cubic nonlinear Schr\\\"odinger equation.
The\nmethod can be implemented by using fast Fourier transform with $O(N\ln N)$\noperations at every time level, and is proved to have an $L^2$-norm error bound\nof $O(\tau\sqrt{\ln(1/\tau)}+N^{-1})$ for $H^1$ initial data, without requiring\nany CFL condition, where $\tau$ and $N$ denote the temporal stepsize and the\nnumber of degrees of freedom in the spatial discretisation, respectively.\n"} {"abstract": " Isotropic hyper-elasticity, together with the equilibrium equation and the\nusual boundary conditions, is formulated directly on the body B, a\nthree-dimensional compact and orientable manifold with boundary equipped with a\nmass measure. The Pearson-Sewell-Beatty pressure potential is formulated in an\nintrinsic geometric manner. It is shown that Poincar{\'e}'s formula, extended\nto infinite dimension, provides, in a straightforward manner, the optimal\n(non-holonomic) constraints for such a pressure potential to exist.\n"} {"abstract": " Rare-earth titanates are Mott insulators whose magnetic ground state --\nantiferromagnetic (AFM) or ferromagnetic (FM) -- can be tuned by the radius of\nthe rare-earth element. Here, we combine phenomenology and first-principles\ncalculations to shed light on the generic magnetic phase diagram of a\nchemically-substituted titanate on the rare-earth site that interpolates\nbetween an AFM and a FM state. Octahedral rotations present in these\nperovskites cause the AFM order to acquire a small FM component -- and\nvice-versa -- removing any multi-critical point from the phase diagram.\nHowever, for a wide parameter range, a first-order metamagnetic transition line\nterminating at a critical end-point survives inside the magnetically ordered\nphase. Similarly to the liquid-gas transition, a Widom line emerges from the\nend-point, characterized by enhanced fluctuations. In contrast to metallic\nferromagnets, this metamagnetic transition involves two symmetry-equivalent and\ninsulating canted spin states. Moreover, instead of a magnetic field, we show\nthat uniaxial strain can be used to tune this transition to zero-temperature,\ninducing a quantum critical end-point.\n"} {"abstract": " Machine learning models $-$ now commonly developed to screen, diagnose, or\npredict health conditions $-$ are evaluated with a variety of performance\nmetrics. An important first step in assessing the practical utility of a model\nis to evaluate its average performance over an entire population of interest.\nIn many settings, it is also critical that the model makes good predictions\nwithin predefined subpopulations. For instance, showing that a model is fair or\nequitable requires evaluating the model's performance in different demographic\nsubgroups. However, subpopulation performance metrics are typically computed\nusing only data from that subgroup, resulting in higher variance estimates for\nsmaller groups. We devise a procedure to measure subpopulation performance that\ncan be more sample-efficient than the typical subsample estimates. We propose\nusing an evaluation model $-$ a model that describes the conditional\ndistribution of the predictive model score $-$ to form model-based metric (MBM)\nestimates. Our procedure incorporates model checking and validation, and we\npropose a computationally efficient approximation of the traditional\nnonparametric bootstrap to form confidence intervals. We evaluate MBMs on two\nmain tasks: a semi-synthetic setting where ground truth metrics are available\nand a real-world hospital readmission prediction task.
We find that MBMs\nconsistently produce more accurate and lower variance estimates of model\nperformance for small subpopulations.\n"} {"abstract": " We introduce a novel approach, the Cosmological Trajectories Method (CTM), to\nmodel nonlinear structure formation in the Universe by expanding\ngravitationally-induced particle trajectories around the Zel'dovich\napproximation. A new Beyond Zel'dovich approximation is presented, which\nexpands the CTM to leading second-order in the gravitational interaction and\nallows for post-Born gravitational scattering. In the Beyond Zel'dovich\napproximation we derive the exact expression for the matter clustering power\nspectrum. This is calculated to leading order and is available in the CTM\nMODULE. We compare the Beyond Zel'dovich approximation power spectrum and\ncorrelation function to other methods including 1-loop Standard Perturbation\nTheory (SPT), 1-loop Lagrangian Perturbation Theory (LPT) and Convolution\nLagrangian Perturbation Theory (CLPT). We find that the Beyond Zel'dovich\napproximation power spectrum performs well, matching simulations to within\n$\pm{10}\%$, on mildly non-linear scales, and at redshifts above $z=1$ it\noutperforms the Zel'dovich approximation. We also find that the Beyond\nZel'dovich approximation models the BAO peak in the correlation function at\n$z=0$ more accurately, to within $\pm{5}\%$ of simulations, than the Zel'dovich\napproximation, SPT 1-loop and CLPT.\n"} {"abstract": " In this study we analyze linear combinatorial optimization problems where the\ncost vector is not known a priori, but is only observable through a finite data\nset. In contrast to the related studies, we presume that the number of\nobservations with respect to particular components of the cost vector may vary.\nThe goal is to find a procedure that transforms the data set into an estimate\nof the expected value of the objective function (which is referred to as a\nprediction rule) and a procedure that retrieves a candidate decision (which is\nreferred to as a prescription rule). We aim at finding the least conservative\nprediction and prescription rules, which satisfy some specified asymptotic\nguarantees. We demonstrate that the resulting vector optimization problems\nadmit a weakly optimal solution, which can be obtained by solving a particular\ndistributionally robust optimization problem. Specifically, the decision-maker\nmay optimize the worst-case expected loss across all probability distributions\nwith given component-wise relative entropy distances from the empirical\nmarginal distributions. Finally, we perform numerical experiments to analyze\nthe out-of-sample performance of the proposed solution approach.\n"} {"abstract": " Word Sense Disambiguation (WSD) is a long-standing task in Natural Language\nProcessing (NLP) that aims to automatically identify the most relevant meaning\nof the words in a given context. Developing standard WSD test collections is an\nimportant prerequisite for developing and evaluating different WSD systems in\nthe language of interest. Although many WSD test collections have been\ndeveloped for a variety of languages, no standard All-words WSD benchmark is\navailable for Persian. In this paper, we address this shortage for the Persian\nlanguage by introducing SBU-WSD-Corpus, as the first standard test set for the\nPersian All-words WSD task. SBU-WSD-Corpus is manually annotated with senses\nfrom the Persian WordNet (FarsNet) sense inventory.
To this end, three annotators used SAMP (a tool for sense annotation\nbased on the FarsNet lexical graph) to perform the annotation task.\nSBU-WSD-Corpus consists of 19 Persian documents in different domains such as\nSports, Science, Arts, etc. It includes 5892 content words of Persian running\ntext and 3371 manually sense-annotated words (2073 nouns, 566 verbs, 610\nadjectives, and 122 adverbs). Providing baselines for future studies on the\nPersian All-words WSD task, we evaluate several WSD models on SBU-WSD-Corpus.\nThe corpus is publicly available at\nhttps://github.com/hrouhizadeh/SBU-WSD-Corpus.\n"} {"abstract": " In this article, we calculate the density of primes $\mathfrak{p}$ for which\nthe $\mathfrak{p}$-th Fourier coefficient $C^*(\mathfrak{p}, f)$ (resp.,\n$C(\mathfrak{p}, f)$) of a primitive Hilbert modular form $f$ generates the\ncoefficient field $F_f$ (resp., $E_f$), under certain conditions on the images\nof $\lambda$-adic residual Galois representations attached to $f$. Then, we\nproduce some examples of primitive forms $f$ satisfying these conditions. Our\nwork is a generalization of \cite{KSW08} to primitive Hilbert modular forms.\n"} {"abstract": " Adding propositional quantification to the modal logics K, T or S4 is known\nto lead to undecidability but CTL with propositional quantification under the\ntree semantics (tQCTL) admits a non-elementary Tower-complete satisfiability\nproblem. We investigate the complexity of strict fragments of tQCTL as well as\nof the modal logic K with propositional quantification under the tree\nsemantics. More specifically, we show that tQCTL restricted to the temporal\noperator EX is already Tower-hard, which is unexpected as EX can only enforce\nlocal properties. When tQCTL restricted to EX is interpreted on N-bounded trees\nfor some N >= 2, we prove that the satisfiability problem is AExpPol-complete;\nAExpPol-hardness is established by reduction from a recently introduced tiling\nproblem, instrumental for studying the model-checking problem for interval\ntemporal logics. As consequences of our proof method, we prove Tower-hardness\nof tQCTL restricted to EF or to EXEF and of the well-known modal logics such as\nK, KD, GL, K4 and S4 with propositional quantification under a semantics based\non classes of trees.\n"} {"abstract": " We are interested in the nature of the spectrum of the one-dimensional\nSchr\\\"odinger operator $$\n - \frac{d^2}{dx^2}-Fx + \sum_{n \in \mathbb{Z}}g_n \delta(x-n)\n \qquad\text{in } L^2(\mathbb{R}) $$ with $F>0$ and two different choices of\nthe coupling constants $\{g_n\}_{n\in \mathbb{Z}}$. In the first model $g_n\n\equiv \lambda$ and we prove that if $F\in \pi^2 \mathbb{Q}$ then the spectrum\nis $\mathbb{R}$ and is furthermore absolutely continuous away from an explicit\ndiscrete set of points. In the second model $g_n$ are independent random\nvariables with mean zero and variance $\lambda^2$. Under certain assumptions on\nthe distribution of these random variables we prove that almost surely the\nspectrum is $\mathbb{R}$ and it is dense pure point if $F < \lambda^2/2$ and\npurely singular continuous if $F> \lambda^2/2$.\n"} {"abstract": " We report the discovery of a 'folded' gravitationally lensed image,\n'Hamilton's Object', found in an HST image of the field near the AGN SDSS\nJ223010.47-081017.8 ($z=0.62$).
The lensed images are sourced by a galaxy at a\nspectroscopic redshift of 0.8200$\pm0.0005$ and form a fold configuration on a\ncaustic caused by a foreground galaxy cluster at a photometric redshift of\n0.526$\pm0.018$ seen in the corresponding Pan-STARRS PS1 image and marginally\ndetected as a faint ROSAT All-Sky Survey X-ray source. The lensed images\nexhibit properties similar to those of other folds where the source galaxy\nfalls very close to or straddles the caustic of a galaxy cluster. The folded\nimages are stretched in a direction roughly orthogonal to the critical curve,\nbut the configuration is that of a tangential cusp. Guided by morphological\nfeatures, published simulations and similar fold observations in the\nliterature, we identify a third or counter-image, confirmed by spectroscopy.\nBecause the fold configuration shows highly distinctive surface brightness\nfeatures, follow-up observations of microlensing or detailed investigations of\nthe individual surface brightness features at higher resolution can further\nshed light on kpc-scale dark matter properties. We determine the local lens\nproperties at the positions of the multiple images according to the\nobservation-based lens reconstruction of Wagner et al. (2019). The analysis is\nin accordance with a mass density which hardly varies on an arc-second scale (6\nkpc) over the areas covered by the multiple images.\n"} {"abstract": " Over-the-air computation (OAC) is a promising technique to realize fast model\naggregation in the uplink of federated edge learning. OAC, however, hinges on\naccurate channel-gain precoding and strict synchronization among the edge\ndevices, which are challenging in practice. As such, how to design the maximum\nlikelihood (ML) estimator in the presence of residual channel-gain mismatch and\nasynchronies is an open problem. To fill this gap, this paper formulates the\nproblem of misaligned OAC for federated edge learning and puts forth a whitened\nmatched filtering and sampling scheme to obtain oversampled, but independent,\nsamples from the misaligned and overlapped signals. Given the whitened samples,\na sum-product ML estimator and an aligned-sample estimator are devised to\nestimate the arithmetic sum of the transmitted symbols. In particular, the\ncomputational complexity of our sum-product ML estimator is linear in the\npacket length and hence is significantly lower than that of the conventional ML\nestimator. Extensive simulations on the test accuracy versus the average\nreceived energy per symbol to noise power spectral density ratio (EsN0) yield\ntwo main results: 1) In the low EsN0 regime, the aligned-sample estimator can\nachieve superior test accuracy provided that the phase misalignment is\nnon-severe. In contrast, the ML estimator does not work well due to the error\npropagation and noise enhancement in the estimation process. 2) In the high\nEsN0 regime, the ML estimator attains the optimal learning performance\nregardless of the severity of phase misalignment. On the other hand, the\naligned-sample estimator suffers from a test-accuracy loss caused by phase\nmisalignment.\n"} {"abstract": " We investigate the properties of the glass phase of a recently introduced\nspin glass model of soft spins subjected to an anharmonic quartic local\npotential, which serves as a model of low-temperature molecular or soft\nglasses. We solve the model using mean field theory and show that, at low\ntemperatures, it is described by full replica symmetry breaking (fullRSB).
As a\nconsequence, at zero temperature the glass phase is marginally stable. We show\nthat, in this case, marginal stability comes from a combination of both soft\nlinear excitations -- appearing in a gapless spectrum of the Hessian of linear\nexcitations -- and pseudogapped non-linear excitations -- corresponding to\nnearly degenerate two-level systems. Therefore, this model is a natural\ncandidate to describe what happens in soft glasses, where quasi-localized soft\nmodes in the density of states appear together with non-linear modes that\ntrigger avalanches and are conjectured to be essential for describing the\nuniversal low-temperature anomalies of glasses.\n"} {"abstract": " For a rooted cluster algebra $\mathcal{A}(Q)$ over a valued quiver $Q$, a\n\emph{symmetric cluster variable} is any cluster variable that belongs to a\ncluster associated with a quiver $\sigma (Q)$, for some permutation $\sigma$.\nThe subalgebra of $\mathcal{A}(Q)$ generated by all symmetric cluster variables\nis called the \emph{symmetric mutation subalgebra} and is denoted by\n$\mathcal{B}(Q)$. In this paper, we identify the class of cluster algebras that\nsatisfy $\mathcal{B}(Q)=\mathcal{A}(Q)$, which contains almost every quiver of\nfinite mutation type. In the process of proving the main theorem, we provide a\nclassification of quiver mutation classes based on their weights. Some\nproperties of symmetric mutation subalgebras are given.\n"} {"abstract": " We investigate the State-Controlled Cellular Neural Network (SC-CNN)\nframework of the Murali-Lakshmanan-Chua (MLC) circuit system subjected to two\nlogical signals. By exploiting the attractors generated by this circuit in\ndifferent regions of phase-space, we show that the nonlinear circuit is capable\nof producing all the logic gates, namely OR, AND, NOR, NAND, Ex-OR and Ex-NOR\ngates available in digital systems. Further, the circuit system emulates\nthree-input gates and Set-Reset flip-flop logic as well. Moreover, all these\nlogical elements and the flip-flop are found to be tolerant to noise. These\nphenomena are also experimentally demonstrated. Thus our realization of all\nlogic gates and a memory latch in a nonlinear circuit system paves the way to\nreplace or complement the existing technology with a limited amount of\nhardware.\n"} {"abstract": " Convolutional neural networks (CNNs) are able to attain better visual\nrecognition performance than fully connected neural networks despite having\nfar fewer parameters due to their parameter sharing principle. Hence, modern\narchitectures are designed to contain a very small number of fully-connected\nlayers, often at the end, after multiple layers of convolutions. It is\ninteresting to observe that we can replace large fully-connected layers with\nrelatively small groups of tiny matrices applied on the entire image. Moreover,\nalthough this strategy already reduces the number of parameters, most of the\nconvolutions can be eliminated as well, without suffering any loss in\nrecognition performance. However, there is no solid recipe to detect this\nhidden subset of convolutional neurons that is responsible for the majority of\nthe recognition work.
Hence, in this work, we use the matrix characteristics\nbased on eigenvalues in addition to the classical weight-based importance\nassignment approach for pruning to shed light on the internal mechanisms of a\nwidely used family of CNNs, namely residual neural networks (ResNets), for the\nimage classification problem using the CIFAR-10, CIFAR-100 and Tiny ImageNet\ndatasets.\n"} {"abstract": " Access to informative databases is a crucial part of notable research\ndevelopments. In the field of domestic audio classification, there have been\nsignificant advances in recent years. Although several audio databases exist,\nthese can be limited in terms of the amount of information they provide, such\nas the exact location of the sound sources, and the associated noise levels. In\nthis work, we detail our approach to generating an unbiased synthetic domestic\naudio database, consisting of sound scenes and events, emulated in both quiet\nand noisy environments. Data is carefully curated such that it reflects issues\ncommonly faced in a dementia patient's environment, and recreates scenarios\nthat could occur in real-world settings. Similarly, the room impulse response\ngenerated is based on a typical one-bedroom apartment at the Hebrew SeniorLife\nFacility. As a result, we present an 11-class database containing excerpts of\nclean and noisy signals of 5 seconds duration each, uniformly sampled at 16\nkHz. Our baseline model, using Continuous Wavelet Transform scalograms and\nAlexNet, yielded a weighted F1-score of 86.24 percent.\n"} {"abstract": " Deuterium diffusion is investigated in nitrogen-doped homoepitaxial ZnO\nlayers. The samples were grown under slightly Zn-rich growth conditions by\nplasma-assisted molecular beam epitaxy on m-plane ZnO substrates and have a\nnitrogen content [N] varied up to 5x10^18 at.cm^-3 as measured by secondary ion\nmass spectrometry (SIMS). All were exposed to a radio-frequency deuterium\nplasma for 1 h at room temperature. Deuterium diffusion is observed in all\nepilayers while its penetration depth decreases as the nitrogen concentration\nincreases. This is strong evidence of a diffusion mechanism limited by the\ntrapping of deuterium on a nitrogen-related trap. The SIMS profiles are\nanalyzed using a two-trap model including a shallow trap, associated with a\nfast diffusion, and a deep trap, related to nitrogen. The capture radius of the\nnitrogen-related trap is determined to be 20 times smaller than the value\nexpected for nitrogen-deuterium pairs formed by coulombic attraction between D+\nand nitrogen-related acceptors. The (N2)O deep donor is proposed as the deep\ntrapping site for deuterium and accounts well for the small capture radius and\nthe observed photoluminescence quenching and recovery after deuteration of the\nZnO:N epilayers. It is also found that this defect is by far the N-related\ndefect with the highest concentration in the studied samples.\n"} {"abstract": " This paper considers a pursuit-evasion scenario among three agents -- an\nevader, a pursuer, and a defender. We design cooperative guidance laws for the\nevader and the defender team to safeguard the evader from an attacking pursuer.\nUnlike differential games, optimal control formulations, and other heuristic\nmethods, we propose a novel perspective on designing effective nonlinear\nfeedback control laws for the evader-defender team using a time-constrained\nguidance approach. The evader lures the pursuer onto a collision course by\noffering itself as bait.
At the same time, the defender protects the evader\nfrom the pursuer by exercising control over the engagement duration. Depending\non the nature of the mission, the defender may choose to take an aggressive or\ndefensive stance. Such consideration widens the applicability of the proposed\nmethods in various three-agent motion planning scenarios such as aircraft\ndefense, asset guarding, search and rescue, surveillance, and secure\ntransportation. We use a fixed-time sliding mode control strategy to design the\ncontrol laws for the evader-defender team and a nonlinear finite-time\ndisturbance observer to estimate the pursuer's maneuver. Finally, we present\nsimulations to demonstrate favorable performance under various engagement\ngeometries, thus vindicating the efficacy of the proposed designs.\n"} {"abstract": " We present a class of diffraction-free partially coherent beams, each member\nof which comprises a finite-power, non-accelerating Airy bump residing on\na statistically homogeneous, Gaussian-correlated background. We examine\nfree-space propagation of soft apertured realizations of the proposed beams and\nshow that their evolution is governed by two spatial scales: the coherence\nwidth of the background and the aperture size. The relative magnitude of these\nfactors determines the practical range of propagation distances over which the\nnovel beams can withstand diffraction. The proposed beams can find applications\nin imaging and optical communications through random media.\n"} {"abstract": " Unsupervised person re-identification (Re-ID) aims to match pedestrian images\nfrom different camera views in an unsupervised setting. Existing methods for\nunsupervised person Re-ID are usually built upon the pseudo labels from\nclustering. However, the quality of clustering depends heavily on the quality\nof the learned features, which are overwhelmingly dominated by the colors in\nimages, especially in the unsupervised setting. In this paper, we propose a\nCluster-guided Asymmetric Contrastive Learning (CACL) approach for unsupervised\nperson Re-ID, in which cluster structure is leveraged to guide the feature\nlearning in a properly designed asymmetric contrastive learning framework. To\nbe specific, we propose a novel cluster-level contrastive loss to help the\nsiamese network effectively mine the invariance in feature learning with\nrespect to the cluster structure within and between different data augmentation\nviews, respectively. Extensive experiments conducted on three benchmark\ndatasets demonstrate the superior performance of our proposal.\n"} {"abstract": " The power-conserving interconnection of port-thermodynamic systems via their\npower ports results in another port-thermodynamic system, while the same holds\nfor the rate of entropy increasing interconnection via their entropy flow\nports. Control by interconnection of port-thermodynamic systems seeks to\ncontrol a plant port-thermodynamic system by the interconnection with a\ncontroller port-thermodynamic system. The stability of the interconnected\nport-thermodynamic system is investigated by Lyapunov functions based on\ngenerating functions for the submanifold characterizing the state properties as\nwell as additional conserved quantities.
A crucial tool is the use of canonical\npoint transformations on the symplectized thermodynamic phase space.\n"} {"abstract": " Graph neural networks (GNNs) have been proven to be mature enough for\nhandling graph-structured data on node-level graph representation learning\ntasks. However, the graph pooling technique for learning expressive graph-level\nrepresentation is critical yet still challenging. Existing pooling methods\neither struggle to capture the local substructure or fail to effectively\nutilize high-order dependency, thus diminishing the expression capability. In\nthis paper we propose HAP, a hierarchical graph-level representation learning\nframework, which is adaptively sensitive to graph structures, i.e., HAP\nclusters local substructures incorporating high-order dependencies. HAP\nutilizes a novel cross-level attention mechanism MOA to naturally focus more on\nthe close neighborhood while effectively capturing higher-order dependency that\nmay contain crucial information. It also learns a global graph content GCont\nthat extracts the graph pattern properties to keep the pre- and post-coarsening\ngraph content stable, thus providing global guidance in graph coarsening. This\nnovel innovation also facilitates generalization across graphs with the same\nform of features. Extensive experiments on fourteen datasets show that HAP\nsignificantly outperforms twelve popular graph pooling methods on the graph\nclassification task with a maximum accuracy improvement of 22.79%, and exceeds\nthe performance of state-of-the-art graph matching and graph similarity\nlearning algorithms by over 3.5% and 16.7%, respectively.\n"} {"abstract": " We demonstrate the potential of a dopamine-modified\n0.5(Ba0.7Ca0.3)TiO3-0.5Ba(Zr0.2Ti0.8)O3 filler-incorporated poly-vinylidene\nfluoride (PVDF) composite prepared by the solution cast method as both flexible\nenergy storage and harvesting devices.
Mainly, we consider situations, when the conformal module of\nconjugacy classes of braids serves as obstruction for the existence of\nhomotopies (or isotopies) of smooth objects involving braids to the respective\nholomorphic objects, and present theorems on the restricted validity of\nGromov's Oka principle in these situations.\n"} {"abstract": " We study the asymptotic properties of the stochastic Cahn-Hilliard equation\nwith the logarithmic free energy by establishing different dimension-free\nHarnack inequalities according to various kinds of noises. The main\ncharacteristics of this equation are the singularities of the logarithmic free\nenergy at 1 and --1 and the conservation of the mass of the solution in its\nspatial variable. Both the space-time colored noise and the space-time white\nnoise are considered. For the highly degenerate space-time colored noise, the\nasymptotic log-Harnack inequality is established under the so-called\nessentially elliptic conditions. And the Harnack inequality with power is\nestablished for non-degenerate space-time white noise.\n"} {"abstract": " In this article, for positive integers $n\\geq m\\geq 1$, the parameter spaces\nfor the isomorphism classes of the generic point arrangements of cardinality\n$n$, and the antipodal point arrangements of cardinality $2n$ in the Eulidean\nspace $\\mathbb{R}^m$ are described using the space of totally nonzero\nGrassmannian $Gr^{tnz}_{mn}(\\mathbb{R})$. A stratification\n$\\mathcal{S}^{tnz}_{mn}(\\mathbb{R})$ of the totally nonzero Grassmannian\n$Gr^{tnz}_{mn}(\\mathbb{R})$ is mentioned and the parameter spaces are\nrespectively expressed as quotients of the space\n$\\mathcal{S}^{tnz}_{mn}(\\mathbb{R})$ of strata under suitable actions of the\nsymmetric group $S_n$ and the semidirect product group $(\\mathbb{R}^*)^n\\rtimes\nS_n$. The cardinalities of the space $\\mathcal{S}^{tnz}_{mn}(\\mathbb{R})$ of\nstrata and of the parameter spaces $S_n\\backslash\n\\mathcal{S}^{tnz}_{mn}(\\mathbb{R}), ((\\mathbb{R}^*)^n\\rtimes S_n)\\backslash\n\\mathcal{S}^{tnz}_{mn}(\\mathbb{R})$ are enumerated in dimension $m=2$.\nInterestingly enough, the enumerated value of the isomorphism classes of the\ngeneric point arrangements in the Euclidean plane is expressed in terms of the\nnumber theoretic Euler-totient function. The analogous enumeration questions\nare still open in higher dimensions for $m\\geq 3$.\n"} {"abstract": " There is growing interest in hydrogen (H$_2$) use for long-duration energy\nstorage in a future electric grid dominated by variable renewable energy (VRE)\nresources. Modelling the role of H$_2$ as grid-scale energy storage, often\nreferred as \"power-to-gas-to-power (P2G2P)\" overlooks the cost-sharing and\nemission benefits from using the deployed H$_2$ production and storage assets\nto also supply H$_2$ for decarbonizing other end-use sectors where direct\nelectrification may be challenged. Here, we develop a generalized modelling\nframework for co-optimizing energy infrastructure investment and operation\nacross power and transportation sectors and the supply chains of electricity\nand H$_2$, while accounting for spatio-temporal variations in energy demand and\nsupply. Applying this sector-coupling framework to the U.S. Northeast under a\nrange of technology cost and carbon price scenarios, we find a greater value of\npower-to-H$_2$ (P2G) versus P2G2P routes. 
P2G provides flexible demand\nresponse, while the extra cost and efficiency penalties of P2G2P routes make\nthe solution less attractive for grid balancing. The effects of sector-coupling\nare significant, boosting VRE generation by 12-55% with both increased\ncapacities and reduced curtailments and reducing the total system cost (or\nlevelized costs of energy) by 6-14% under 96% decarbonization scenarios. Both\nthe cost savings and emission reductions from sector coupling increase with\nH$_2$ demand for other end-uses, more than doubling for a 96% decarbonization\nscenario as H$_2$ demand quadruples. Moreover, we found that the deployment of\ncarbon capture and storage is more cost-effective in the H$_2$ sector because\nof the lower cost and higher utilization rate. These findings highlight the\nimportance of using an integrated multi-sector energy system framework with\nmultiple energy vectors in planning energy system decarbonization pathways.\n"} {"abstract": " Massive multiple-input multiple-output (MIMO) is a key technology for\nimproving the spectral and energy efficiency in 5G-and-beyond wireless\nnetworks. For a tractable analysis, most of the previous works on Massive MIMO\nhave been focused on the system performance with complex Gaussian channel\nimpulse responses under rich-scattering environments. In contrast, this paper\ninvestigates the uplink ergodic spectral efficiency (SE) of each user under the\ndouble scattering channel model. We derive a closed-form expression of the\nuplink ergodic SE by exploiting the maximum ratio (MR) combining technique\nbased on imperfect channel state information. We further study the asymptotic\nSE behaviors as a function of the number of antennas at each base station (BS)\nand the number of scatterers available at each radio channel. We then formulate\nand solve a total energy optimization problem for the uplink data transmission\nthat aims at simultaneously satisfying the required SEs from all the users with\nlimited data power resource. Notably, our proposed algorithms can cope with the\ncongestion issue appearing when at least one user is served by lower SE than\nrequested. Numerical results illustrate the effectiveness of the closed-form\nergodic SE over Monte-Carlo simulations. Besides, the system can still provide\nthe required SEs to many users even under congestion.\n"} {"abstract": " High-order implicit shock tracking is a new class of numerical methods to\napproximate solutions of conservation laws with non-smooth features. These\nmethods align elements of the computational mesh with non-smooth features to\nrepresent them perfectly, allowing high-order basis functions to approximate\nsmooth regions of the solution without the need for nonlinear stabilization,\nwhich leads to accurate approximations on traditionally coarse meshes. The\nhallmark of these methods is the underlying optimization formulation whose\nsolution is a feature-aligned mesh and the corresponding high-order\napproximation to the flow; the key challenge is robustly solving the central\noptimization problem. In this work, we develop a robust optimization solver for\nhigh-order implicit shock tracking methods so they can be reliably used to\nsimulate complex, high-speed, compressible flows in multiple dimensions.
The\nproposed method integrates practical robustness measures into a sequential\nquadratic programming method, including dimension- and order-independent\nsimplex element collapses, mesh smoothing, and element-wise solution\nre-initialization, which prove to be necessary to reliably track complex\ndiscontinuity surfaces, such as curved and reflecting shocks, shock formation,\nand shock-shock interaction. A series of nine numerical experiments --\nincluding two- and three-dimensional compressible flows with complex\ndiscontinuity surfaces -- is used to demonstrate: 1) the robustness of the\nsolver, 2) the meshes produced are high-quality and track continuous,\nnon-smooth features in addition to discontinuities, 3) the method achieves the\noptimal convergence rate of the underlying discretization even for flows\ncontaining discontinuities, and 4) the method produces highly accurate\nsolutions on extremely coarse meshes relative to approaches based on shock\ncapturing.\n"} {"abstract": " Ben Reichardt showed in a series of results that the general adversary bound\nof a function characterizes its quantum query complexity. This survey seeks to\naggregate the background and definitions necessary to understand the proof.\nNotable among these are the lower bound proof, span programs, witness size, and\nsemi-definite programs. These definitions, in addition to examples and detailed\nexpositions, serve to give the reader a better intuition of the graph-theoretic\nnature of the upper bound. We also include an application of this result to\nlower bounds on De Morgan formula size.\n"} {"abstract": " In video object tracking, there exist rich temporal contexts among successive\nframes, which have been largely overlooked in existing trackers. In this work,\nwe bridge the individual video frames and explore the temporal contexts across\nthem via a transformer architecture for robust object tracking. Different from\nclassic usage of the transformer in natural language processing tasks, we\nseparate its encoder and decoder into two parallel branches and carefully\ndesign them within the Siamese-like tracking pipelines. The transformer encoder\npromotes the target templates via attention-based feature reinforcement, which\nbenefits the high-quality tracking model generation. The transformer decoder\npropagates the tracking cues from previous templates to the current frame,\nwhich facilitates the object searching process. Our transformer-assisted\ntracking framework is neat and trained in an end-to-end manner. With the\nproposed transformer, a simple Siamese matching approach is able to outperform\nthe current top-performing trackers. By combining our transformer with the\nrecent discriminative tracking pipeline, our method sets several new\nstate-of-the-art records on prevalent tracking benchmarks.\n"} {"abstract": " Thoracic disease detection from chest radiographs using deep learning methods\nhas been an active area of research in the last decade. Most previous methods\nattempt to focus on the diseased organs of the image by identifying spatial\nregions responsible for significant contributions to the model's prediction. In\ncontrast, expert radiologists first locate the prominent anatomical structures\nbefore determining if those regions are anomalous. Therefore, integrating\nanatomical knowledge within deep learning models could bring substantial\nimprovement in automatic disease classification. 
This work proposes an\nanatomy-aware attention-based architecture named Anatomy X-Net that\nprioritizes the spatial features guided by the pre-identified anatomy regions.\nWe leverage a semi-supervised learning method using the JSRT dataset containing\norgan-level annotation to obtain the anatomical segmentation masks (for lungs\nand heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses\nthe pre-trained DenseNet-121 as the backbone network with two corresponding\nstructured modules, the Anatomy Aware Attention (AAA) and Probabilistic\nWeighted Average Pooling (PWAP), in a cohesive framework for anatomical\nattention learning. Our proposed method sets new state-of-the-art performance\non the official NIH test set with an AUC score of 0.8439, proving the efficacy\nof utilizing the anatomy segmentation knowledge to improve thoracic disease\nclassification. Furthermore, the Anatomy X-Net yields an averaged AUC of 0.9020\non the Stanford CheXpert dataset, improving on existing methods and\ndemonstrating the generalizability of the proposed framework.\n"} {"abstract": " Neural architecture search (NAS) is a recent methodology for automating the\ndesign of neural network architectures. Differentiable neural architecture\nsearch (DARTS) is a promising NAS approach that dramatically increases search\nefficiency. However, it has been shown to suffer from performance collapse,\nwhere the search often leads to detrimental architectures. Many recent works\ntry to address this issue of DARTS by identifying indicators for early\nstopping, regularising the search objective to reduce the dominance of some\noperations, or changing the parameterisation of the search problem. In this\nwork, we hypothesise that performance collapse can arise from poor local\noptima around typical initial architectures and weights. We address this issue\nby developing a more global optimisation scheme that is able to better explore\nthe space without changing the DARTS problem formulation. Our experiments show\nthat our changes in the search algorithm allow the discovery of architectures\nwith both better test performance and fewer parameters.\n"} {"abstract": " [Zhang, ICML 2018] provided the first decentralized actor-critic algorithm\nfor multi-agent reinforcement learning (MARL) that offers convergence\nguarantees. In that work, policies are stochastic and are defined on finite\naction spaces. We extend those results to offer a provably-convergent\ndecentralized actor-critic algorithm for learning deterministic policies on\ncontinuous action spaces. Deterministic policies are important in real-world\nsettings. To handle the lack of exploration inherent in deterministic policies,\nwe consider both off-policy and on-policy settings. We provide the expression\nof a local deterministic policy gradient, decentralized deterministic\nactor-critic algorithms and convergence guarantees for linearly-approximated\nvalue functions. This work will help enable decentralized MARL in\nhigh-dimensional action spaces and pave the way for more widespread use of\nMARL.\n"} {"abstract": " The Force Concept Inventory (FCI) can be used as an assessment tool to\nmeasure the gains in a cohort of students. In this study it was given pre- and\npost-mechanics lectures to first-year mechanics students (N=256) at the\nUniversity of Johannesburg. 
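For reference, pre/post "gains" of this kind are commonly quantified in the FCI literature by the Hake-style normalized gain; the definition below is an assumption, since the abstract does not state which gain measure it uses:

```latex
% Normalized (Hake) gain -- assumed definition of "gain"
g \;=\; \frac{\langle\text{post}\rangle - \langle\text{pre}\rangle}{100\% - \langle\text{pre}\rangle}
```

For example, a cohort improving from a 40% pre-test average to a 55% post-test average would have $g = (55-40)/(100-40) = 0.25$.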
From these results we examine the\neffect of switching mid-semester from traditional classes to online classes, as\nimposed by the COVID-19 lockdown in South Africa. Overall gains and student\nperspectives indicate no appreciable difference in gain when benchmarked\nagainst previous studies using this assessment tool. When compared with 2019\ngrades, the 2020 semester grades do not appear to be greatly affected.\nFurthermore, initial statistical analyses also indicate a gender difference in\nmean gains in favour of females at the 95% significance level (for paired data,\nN=48). A survey given to students also appeared to indicate that most students\nwere aware of their conceptual performance in physics, and that the main\nconstraint on their studies was the difficulty associated with being online. As\nsuch, the change in pedagogy and the stresses of lockdown do not appear to have\ndepressed FCI gains or grades.\n"} {"abstract": " A Neural Machine Translation model is a sequence-to-sequence converter based\non neural networks. Existing models use recurrent neural networks to construct\nboth the encoder and decoder modules. In alternative research, the recurrent\nnetworks were substituted by convolutional neural networks for capturing the\nsyntactic structure in the input sentence and decreasing the processing time.\nWe incorporate the strengths of both approaches by proposing a\nconvolutional-recurrent encoder for capturing the context information as well\nas the sequential information from the source sentence. Word embedding and\nposition embedding of the source sentence are performed prior to the\nconvolutional encoding layer, which is essentially an n-gram feature extractor\ncapturing phrase-level context information. The rectified output of the\nconvolutional encoding layer is added to the original embedding vector, and the\nsum is normalized by layer normalization. The normalized output is given as a\nsequential input to the recurrent encoding layer that captures the temporal\ninformation in the sequence. For the decoder, we use the attention-based\nrecurrent neural network. A translation task on the German-English dataset\nverifies the efficacy of the proposed approach through the higher BLEU scores\nachieved as compared to the state of the art.\n"} {"abstract": " A sample of 1.3 mm continuum cores in the Dragon infrared dark cloud (also\nknown as G28.37+0.07 or G28.34+0.06) is analyzed statistically. Based on their\nassociation with molecular outflows, the sample is divided into protostellar\nand starless cores. Statistical tests suggest that the protostellar cores are\nmore massive than the starless cores, even after temperature and opacity biases\nare accounted for. We suggest that the mass difference indicates core mass\ngrowth since their formation. The mass growth implies that massive star\nformation may not have to start with massive prestellar cores, depending on the\ncore mass growth rate. Its impact on the relation between the core mass function\nand the stellar initial mass function is to be further explored.\n"} {"abstract": " In this paper, we investigate the algebraic nature of the value of a higher\nGreen function on an orthogonal Shimura variety at a single CM point. This is\nmotivated by a conjecture of Gross and Zagier in the setting of higher Green\nfunctions on the product of two modular curves. 
In the process, we study an\nanalogue of harmonic Maass forms in the setting of Hilbert modular forms, and\nobtain results concerning the arithmetic of their holomorphic part Fourier\ncoefficients. As a consequence, we confirm the conjecture of Gross and Zagier\nunder a mild condition on the discriminant of the CM point.\n"} {"abstract": " There is a strong consensus that combining the versatility of machine\nlearning with the assurances given by formal verification is highly desirable.\nIt is much less clear what verified machine learning should mean exactly. We\nconsider this question from the (unexpected?) perspective of computable\nanalysis. This allows us to define the computational tasks underlying verified\nML in a model-agnostic way, and show that they are in principle computable.\n"} {"abstract": " Value functions are central to Dynamic Programming and Reinforcement Learning\nbut their exact estimation suffers from the curse of dimensionality,\nchallenging the development of practical value-function (VF) estimation\nalgorithms. Several approaches have been proposed to overcome this issue, from\nnon-parametric schemes that aggregate states or actions to parametric\napproximations of state and action VFs via, e.g., linear estimators or deep\nneural networks. Relevantly, several high-dimensional state problems can be\nwell-approximated by an intrinsic low-rank structure. Motivated by this and\nleveraging results from low-rank optimization, this paper proposes different\nstochastic algorithms to estimate a low-rank factorization of the $Q(s, a)$\nmatrix. This is a non-parametric alternative to VF approximation that\ndramatically reduces the computational and sample complexities relative to\nclassical $Q$-learning methods that estimate $Q(s,a)$ separately for each\nstate-action pair.\n"} {"abstract": " Low-noise frequency conversion of single photons is a critical tool in\nestablishing fibre-based quantum networks. We show that a single photonic\ncrystal fibre can achieve frequency conversion by Bragg-scattering four-wave\nmixing of source photons from an ultra-broad wavelength range by engineering a\nsymmetric group velocity profile. Furthermore, we discuss how pump tuning can\nmitigate realistic discrepancies in device fabrication. This enables a single\nhighly adaptable frequency conversion interface to link disparate nodes in a\nquantum network via the telecoms band.\n"} {"abstract": " In this paper, we establish the existence and uniqueness of a Ricci flow that\nadmits an embedded closed convex surface in $\mathbb{R}^3$ as its metric initial\ncondition. The main point is a family of smooth Ricci flows starting from\nsmooth convex surfaces whose metrics converge uniformly to the metric of the\ninitial surface in the intrinsic sense.\n"} {"abstract": " We derive the stellar-to-halo mass relation (SHMR), namely $f_\star\propto\nM_\star/M_{\rm h}$ versus $M_\star$ and $M_{\rm h}$, for early-type galaxies\nfrom their near-IR luminosities (for $M_\star$) and the position-velocity\ndistributions of their globular cluster systems (for $M_{\rm h}$). Our\nindividual estimates of $M_{\rm h}$ are based on fitting a dynamical model with\na distribution function expressed in terms of action-angle variables and\nimposing a prior on $M_{\rm h}$ from the concentration-mass relation in the\nstandard $\Lambda$CDM cosmology. 
We find that the SHMR for early-type galaxies\ndeclines with mass beyond a peak at $M_\star\sim 5\times 10^{10}M_\odot$ and\n$M_{\rm h}\sim 10^{12}M_\odot$ (near the mass of the Milky Way). This result is\nconsistent with the standard SHMR derived by abundance matching for the general\npopulation of galaxies, and with previous, less robust derivations of the SHMR\nfor early types. However, it contrasts sharply with the monotonically rising\nSHMR for late types derived from extended HI rotation curves and the same\n$\Lambda$CDM prior on $M_{\rm h}$ as we adopt for early types. The SHMR for\nmassive galaxies varies more or less continuously, from rising to falling, with\ndecreasing disc fraction and decreasing Hubble type. We also show that the\ndifferent SHMRs for late and early types are consistent with the similar\nscaling relations between their stellar velocities and masses (Tully-Fisher and\nFaber-Jackson relations). Differences in the relations between the stellar and\nhalo virial velocities account for the similarity of the scaling relations. We\nargue that all these empirical findings are natural consequences of a picture\nin which galactic discs are built mainly by smooth and gradual inflow,\nregulated by feedback from young stars, while galactic spheroids are built by a\ncooperation between merging, black-hole fuelling, and feedback from AGNs.\n"} {"abstract": " Recently, the experimental discovery of high-$T_c$ superconductivity in\ncompressed hydrides H$_3$S and LaH$_{10}$ at megabar pressures has triggered\nsearches for various superconducting superhydrides. It was experimentally\nobserved that thorium hydrides, ThH$_{10}$ and ThH$_9$, are stabilized at much\nlower pressures compared to LaH$_{10}$. Based on first-principles\ndensity-functional theory calculations, we reveal that the isolated Th\nframeworks of ThH$_{10}$ and ThH$_9$ have relatively more excess electrons in\ninterstitial regions than the La framework of LaH$_{10}$. Such interstitial\nexcess electrons easily participate in the formation of an anionic H cage\nsurrounding the metal atom. The resulting Coulomb attraction between cationic Th\natoms and anionic H cages is estimated to be stronger than that of LaH$_{10}$,\nthereby giving rise to larger chemical precompressions in\nThH$_{10}$ and ThH$_9$. Such a formation mechanism of H clathrates can also be\napplied to another experimentally synthesized superhydride, CeH$_9$, confirming\nthe experimental evidence that the chemical precompression in CeH$_9$ is larger\nthan that in LaH$_{10}$. Our findings demonstrate that interstitial excess\nelectrons in the isolated metal frameworks of high-pressure superhydrides play\nan important role in generating the chemical precompression of H clathrates.\n"} {"abstract": " The fluid flow along the Riga plate with the influence of magnetic force in a\nrotating system has been investigated numerically. The governing equations have\nbeen derived from the Navier-Stokes equations. Applying the boundary layer\napproximation, the appropriate boundary layer equations have been obtained. By\nusing the usual transformations, the governing equations have been\ntransformed into coupled dimensionless non-linear partial differential\nequations. The dimensionless equations have been solved numerically by\nan explicit finite difference scheme. The simulated results have been obtained by\nusing MATLAB R2015a. The stability and convergence criteria have also been\nanalyzed. 
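To make the class of scheme concrete, here is a minimal sketch of an explicit finite difference update applied to a model 1D convection-diffusion equation; the coefficients, grid, and boundary conditions are illustrative assumptions, not the paper's coupled Riga-plate system:

```python
import numpy as np

# FTCS (forward-time, centered-space) sketch for u_t + c*u_x = nu*u_xx.
c, nu = 1.0, 0.01          # assumed convection speed and diffusivity
nx, nt = 101, 500
dx, dt = 1.0 / (nx - 1), 5e-4

# Stability of the explicit scheme roughly requires
#   nu*dt/dx**2 <= 1/2  and  c*dt/dx <= 1
# (the kind of criteria "analyzed" in the abstract above).
assert nu * dt / dx**2 <= 0.5 and c * dt / dx <= 1.0

u = np.exp(-200 * (np.linspace(0, 1, nx) - 0.3) ** 2)  # initial profile
for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - c * dt / (2 * dx) * (un[2:] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = 0.0, 0.0  # Dirichlet boundaries
```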
The effects of several parameters on the primary velocity, secondary\nvelocity, and temperature distributions, as well as on the local shear stress and\nNusselt number, have been shown graphically.\n"} {"abstract": " Mobile app developers use paid advertising campaigns to acquire new users,\nand they need to know the campaigns' performance to guide their spending.\nDetermining the campaign that led to an install requires that the app and\nadvertising network share an identifier that allows matching ad clicks to\ninstalls. Ad networks use the identifier to build user profiles that help with\ntargeting and personalization. Modern mobile operating systems have features to\nprotect the privacy of the user. The privacy features of Apple's iOS 14\nrequire all apps to explicitly obtain system permission for tracking, instead of\nasking the user to opt out of tracking as before. If the user does not allow\ntracking, the identifier for advertisers (IDFA) required for attributing the\ninstallation to the campaign is not shared. The lack of an identifier for the\nattribution profoundly changes how user acquisition campaigns' performance is\nmeasured. For users who do not allow tracking, there is a new feature that\nstill allows following campaign performance. The app can set an integer, the\nso-called conversion value, for each user, and the developer can get the number\nof installs per conversion value for each campaign. This paper investigates the\ntask of distributing revenue to advertising campaigns using the conversion\nvalues. Our contributions are to formalize the problem, find the theoretically\noptimal revenue attribution function for any conversion value schema, and show\nempirical results on past data of a free-to-play mobile game using different\nconversion value schemas.\n"} {"abstract": " We show that for Lebesgue almost all $d$-tuples $(\theta_1,\ldots,\theta_d)$,\nwith $|\theta_j|>1$, any self-affine measure for a homogeneous non-degenerate\niterated function system $\{Ax+a_j\}_{j=1}^m$ in ${\mathbb R}^d$, where\n$A^{-1}$ is a diagonal matrix with the entries $(\theta_1,\ldots,\theta_d)$,\nhas power Fourier decay at infinity.\n"} {"abstract": " We present ALMA [C II] 158 $\mu$m line and far-infrared (FIR) continuum\nemission observations toward HSC J120505.09$-$000027.9 (J1205$-$0000) at $z =\n6.72$ with a beam size of $\sim 0''.8 \times 0''.5$ (or 4.1 kpc $\times$ 2.6\nkpc), the most distant red quasar known to date. Red quasars are modestly\nreddened by dust, and are thought to be in rapid transition from an obscured\nstarburst to an unobscured normal quasar, driven by powerful active galactic\nnucleus (AGN) feedback which blows out a cocoon of interstellar medium (ISM).\nThe FIR continuum of J1205$-$0000 is bright, with an estimated luminosity of\n$L_{\rm FIR} \sim 3 \times 10^{12}~L_\odot$. The [C II] line emission is\nextended on scales of $r \sim 5$ kpc, greater than the FIR continuum. The line\nprofiles in the extended regions are complex and broad (FWHM $\sim 630-780$ km\ns$^{-1}$). Although it is not practical to identify the nature of this extended\nstructure, possible explanations include (i) companion/merging galaxies and\n(ii) massive AGN-driven outflows. For the case of (i), the companions are\nmodestly star-forming ($\sim 10~M_\odot$ yr$^{-1}$), but are not detected by\nour Subaru optical observations ($y_{\rm AB,5\sigma} = 24.4$ mag). For the case\nof (ii), our lower limit on the cold neutral outflow rate is $\sim 100~M_\odot$\nyr$^{-1}$. 
The outflow kinetic energy and momentum are both much smaller than\npredicted by energy-conserving wind models, suggesting that the AGN\nfeedback in this quasar is not capable of completely suppressing its star\nformation.\n"} {"abstract": " Baryon production is studied within the framework of quantized fragmentation\nof the QCD string. Baryons appear in the model in a fairly intuitive way, with\nthe help of causally connected string breakups. A simple helical approximation\nof the QCD flux tube, with parameters constrained by the mass spectrum of light\nmesons, is sufficient to reproduce the masses of light baryons.\n"} {"abstract": " The minimal flavor structures for both quarks and leptons are proposed to\naddress the fermion mass hierarchy and flavor mixings by bi-unitary\ndecomposition of the fermion mass matrix. The real matrix ${\bf M}_0^f$ fully\naccounts for the family mass hierarchy, which is expressed by a close-to-flat\nmatrix structure. The left-handed unitary phase ${\bf F}_L^f$ provides the\norigin of CP violation in quark and lepton mixings, which can be explained as a\nquantum effect between Yukawa interaction states and weak gauge states. The\nminimal flavor structure is realized by just 10 parameters without any\nredundancy, corresponding to 6 fermion masses, 3 mixing angles and 1 CP\nviolation in the quark/lepton sector. This approach provides a general flavor\nstructure independent of the specific quark or lepton flavor data. We verify\nthe validity of the flavor structure by reproducing quark/lepton masses and\nmixings. Some possible scenarios that yield the flavor structure are also\ndiscussed.\n"} {"abstract": " Our goal is to develop a flux limiter for the Flux-Corrected Transport method\nfor a nonconservative convection-diffusion equation. For this, we consider a\nhybrid difference scheme that is a linear combination of a monotone scheme and\na scheme of high-order accuracy. The flux limiter is computed as an approximate\nsolution of a corresponding optimization problem with a linear objective\nfunction. The constraints for this optimization problem are derived from\ninequalities that are valid for the monotone scheme and apply to the hybrid\nscheme. Our numerical results with the flux limiters, which are exact and\napproximate solutions to the optimization problem, are in good agreement.\n"} {"abstract": " A scalable system for real-time analysis of electron temperature and density\nbased on signals from the Thomson scattering diagnostic, initially developed\nfor and installed on the NSTX-U experiment, was recently adapted for the Large\nHelical Device (LHD) and operated for the first time during plasma discharges.\nDuring its initial operation run, it routinely recorded and processed signals\nfor four spatial points at the laser repetition rate of 30 Hz, well within the\nsystem's rated capability of 60 Hz. We present examples of data collected from\nthis initial run and describe subsequent adaptations to the analysis code to\nimprove the fidelity of the temperature calculations.\n"} {"abstract": " The DARWIN observatory is a proposed next-generation experiment to search for\nparticle dark matter and other rare interactions. It will operate a 50 t liquid\nxenon detector, with 40 t in the time projection chamber (TPC). To inform the\nfinal detector design and technical choices, a series of technological\nquestions must first be addressed. 
Here we describe a full-scale demonstrator\nin the vertical dimension, Xenoscope, with the main goal of achieving electron\ndrift over a 2.6 m distance, which is the scale of the DARWIN TPC. We have\ndesigned and constructed the facility infrastructure, including the cryostat,\ncryogenic and purification systems, the xenon storage and recuperation system,\nas well as the slow control system. We have also designed a xenon purity\nmonitor and the TPC, with the fabrication of the former nearly complete. In a\nfirst commissioning run of the facility without an inner detector, we\ndemonstrated the nominal operational reach of Xenoscope and benchmarked the\ncomponents of the cryogenic and slow control systems, demonstrating reliable\nand continuous operation of all subsystems over 40 days. The infrastructure is\nthus ready for the integration of the purity monitor, followed by the TPC.\nFurther applications of the facility include R&D on the high voltage\nfeedthrough for DARWIN, measurements of electron cloud diffusion, as well as\nmeasurements of optical properties of liquid xenon. In the future, Xenoscope\nwill be available as a test platform for the DARWIN collaboration to\ncharacterise new detector technologies.\n"} {"abstract": " Due to their broad application to different fields of theory and practice,\ngeneralized Petersen graphs $GPG(n,s)$ have been extensively investigated.\nDespite the regularity of generalized Petersen graphs, determining an exact\nformula for the diameter is still a difficult problem. In their paper, Beenker\nand Van Lint proved that if the circulant graph $C_n(1,s)$ has diameter\n$d$, then $GPG(n,s)$ has diameter at least $d+1$ and at most $d+2$. In this\npaper, we provide necessary and sufficient conditions so that the diameter of\n$GPG(n,s)$ is equal to $d+1$, and sufficient conditions so that the diameter of\n$GPG(n,s)$ is equal to $d+2$. Afterwards, we give exact values for the diameter\nof $GPG(n,s)$ for almost all cases of $n$ and $s$. Furthermore, we show that\nthere exists an algorithm computing the diameter of generalized Petersen graphs\nwith running time $O(\log n)$.\n"} {"abstract": " In many astrophysical applications, the cost of solving a chemical network\nrepresented by a system of ordinary differential equations (ODEs) grows\nsignificantly with the size of the network and can often represent a\nsignificant computational bottleneck, particularly in coupled chemo-dynamical\nmodels. Although standard numerical techniques and complex solutions tailored\nto thermochemistry can somewhat reduce the cost, more recently, machine\nlearning algorithms have begun to attack this challenge via data-driven\ndimensionality reduction techniques. In this work, we present a new class of\nmethods that take advantage of machine learning techniques to reduce complex\ndata sets (autoencoders), the optimization of multi-parameter systems (standard\nbackpropagation), and the robustness of well-established ODE solvers to\nexplicitly incorporate time dependence. This new method allows us to find a\ncompressed and simplified version of a large chemical network in a\nsemi-automated fashion that can be solved with a standard ODE solver, while\nalso enabling interpretability of the compressed, latent network. 
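A minimal sketch of this general idea follows: an autoencoder compresses the species vector, a small network supplies the latent right-hand side, and an off-the-shelf solver evolves it. The PyTorch/SciPy usage, layer sizes, and initial abundances are illustrative assumptions (and training is omitted), not the authors' implementation:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.integrate import solve_ivp

N_SPECIES, N_LATENT = 29, 5  # full vs. compressed network size (from the text)

encoder = nn.Sequential(nn.Linear(N_SPECIES, 32), nn.Tanh(), nn.Linear(32, N_LATENT))
decoder = nn.Sequential(nn.Linear(N_LATENT, 32), nn.Tanh(), nn.Linear(32, N_SPECIES))
latent_rhs = nn.Sequential(nn.Linear(N_LATENT, 32), nn.Tanh(), nn.Linear(32, N_LATENT))

def rhs(t, z):
    # Learned latent ODE right-hand side, callable by a standard solver.
    with torch.no_grad():
        return latent_rhs(torch.tensor(z, dtype=torch.float32)).numpy()

# Encode initial abundances, evolve in the latent space, decode back.
x0 = np.abs(np.random.rand(N_SPECIES))          # placeholder initial abundances
z0 = encoder(torch.tensor(x0, dtype=torch.float32)).detach().numpy()
sol = solve_ivp(rhs, (0.0, 1.0), z0, method="LSODA")
x_t = decoder(torch.tensor(sol.y[:, -1], dtype=torch.float32)).detach().numpy()
```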
As a proof of\nconcept, we tested the method on an astrophysically-relevant chemical network\nwith 29 species and 224 reactions, obtaining a reduced but representative\nnetwork with only 5 species and 12 reactions, and a 65x speed-up.\n"} {"abstract": " Short Read Alignment Mapping Metrics (SRAMM) is an efficient and versatile\ncommand line tool providing additional short read mapping metrics, filtering,\nand graphs. Short read aligners report MAPping Quality (MAPQ), but these methods\ngenerally are neither standardized nor well described in the literature or\nsoftware manuals. Additionally, third party mapping quality programs are typically\ncomputationally intensive or designed for specific applications. SRAMM\nefficiently generates multiple different concept-based mapping scores to\nprovide for an informative post-alignment examination and filtering process of\naligned short reads for various downstream applications. SRAMM is compatible\nwith Python 2.6+ and Python 3.6+ on all operating systems. It works with any\nshort read aligner that generates SAM/BAM/CRAM file outputs and reports 'AS'\ntags. It is freely available under the MIT license at\nhttp://github.com/achon/sramm.\n"} {"abstract": " We aim to give more insight into adiabatic evolution concerning the occurrence\nof anti-crossings and their link to the spectral minimum gap $\Delta_{min}$. We\nstudy in detail adiabatic quantum computation applied to a specific\ncombinatorial problem called weighted max $k$-clique. We give a clear intuition\nfor the parametrization introduced by V. Choi, explaining why the\ncharacterization is not general enough. We show that the instantaneous vectors\ninvolved in the anti-crossing vary abruptly across it, making the instantaneous\nground-state hard to follow during the evolution. This result leads us to relax\nthe parametrization to make it more general.\n"} {"abstract": " A q-Levenberg-Marquardt method is an iterative procedure that blends the\nq-steepest descent and q-Gauss-Newton methods. When the current solution is far\nfrom the correct one, the algorithm acts as the q-steepest descent method.\nOtherwise, the algorithm acts as the q-Gauss-Newton method. A damping parameter\nis used to interpolate between these two methods. The q-parameter is used to\nescape from local minima and to speed up the search process near the optimal\nsolution.\n"} {"abstract": " For a complete graph $K_n$ of order $n$, an edge-labeling $c:E(K_n)\to \{\n-1,1\}$ satisfying $c(E(K_n))=0$, and a spanning forest $F$ of $K_n$, we\nconsider the problem of minimizing $|c(E(F'))|$ over all isomorphic copies $F'$\nof $F$ in $K_n$. In particular, we ask under which additional conditions there\nis a zero-sum copy, that is, a copy $F'$ of $F$ with $c(E(F'))=0$.\n We show that there is always a copy $F'$ of $F$ with $|c(E(F'))|\leq\n\Delta(F)+1$, where $\Delta(F)$ is the maximum degree of $F$. We conjecture\nthat this bound can be improved to $|c(E(F'))|\leq (\Delta(F)-1)/2$ and verify\nthis for $F$ being the star $K_{1,n-1}$. Under some simple necessary\ndivisibility conditions, we show the existence of a zero-sum $P_3$-factor, and,\nfor sufficiently large $n$, also of a zero-sum $P_4$-factor.\n"} {"abstract": " Deepfakes have raised serious concerns about the authenticity of visual\ncontent. Prior works revealed the possibility of disrupting deepfakes by adding\nadversarial perturbations to the source data, but we argue that the threat has\nnot been eliminated yet. 
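For context, the generic form of such a source-data disruption is a gradient-sign perturbation; a minimal sketch under assumed interfaces (the model, loss, and budget below are placeholders, and the prior works referenced above differ in detail):

```python
import torch

def fgsm_perturb(x, model, loss_fn, eps=0.01):
    """Fast-gradient-sign-style perturbation of source data x.

    model: the deepfake generator to disrupt; loss_fn: a measure of output
    distortion to be maximized. Both are assumed, illustrative interfaces.
    """
    x = x.clone().detach().requires_grad_(True)
    out = model(x)
    loss = loss_fn(out)   # large loss = badly distorted deepfake output
    loss.backward()
    # Ascend the distortion loss within an eps-bounded perturbation.
    return (x + eps * x.grad.sign()).detach()
```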
This paper presents MagDR, a mask-guided detection and\nreconstruction pipeline for defending deepfakes from adversarial attacks. MagDR\nstarts with a detection module that defines a few criteria to judge the\nabnormality of the output of deepfakes, and then uses it to guide a learnable\nreconstruction procedure. Adaptive masks are extracted to capture the change in\nlocal facial regions. In experiments, MagDR defends three main deepfake tasks,\nand the learned reconstruction pipeline transfers across input data,\nshowing promising performance in defending against both black-box and white-box\nattacks.\n"} {"abstract": " We propose a variational autoencoder architecture to model both ignorable and\nnonignorable missing data using pattern-set mixtures as proposed by Little\n(1993). Our model explicitly learns to cluster the missing data into\nmissingness pattern sets based on the observed data and missingness masks.\nUnderpinning our approach is the assumption that the data distribution under\nmissingness is probabilistically semi-supervised by samples from the observed\ndata distribution. Our setup trades off the characteristics of ignorable and\nnonignorable missingness and can thus be applied to data of both types. We\nevaluate our method on a wide range of data sets with different types of\nmissingness and achieve state-of-the-art imputation performance. Our model\noutperforms many common imputation algorithms, especially when the amount of\nmissing data is high and the missingness mechanism is nonignorable.\n"} {"abstract": " In this paper, we study linear filters to process signals defined on\nsimplicial complexes, i.e., signals defined on nodes, edges, triangles, etc. of\na simplicial complex, thereby generalizing filtering operations for graph\nsignals. We propose a finite impulse response filter based on the Hodge\nLaplacian, and demonstrate how this filter can be designed to amplify or\nattenuate certain spectral components of simplicial signals. Specifically, we\ndiscuss how, unlike in the case of node signals, the Fourier transform in the\ncontext of edge signals can be understood in terms of two orthogonal subspaces\ncorresponding to the gradient-flow signals and curl-flow signals arising from\nthe Hodge decomposition. By assigning different filter coefficients to the\nassociated terms of the Hodge Laplacian, we develop a subspace-varying filter\nwhich enables more nuanced control over these signal types. Numerical\nexperiments are conducted to show the potential of simplicial filters for\nsub-component extraction, denoising and model approximation.\n"} {"abstract": " In this paper we address the explainability of web search engines. We propose\ntwo explainable elements on the search engine result page: a visualization of\nquery term weights and a visualization of passage relevance. The idea is that\nsearch engines that indicate to the user why results are retrieved are valued\nhigher by users and gain user trust. We deduce the query term weights from the\nterm gating network in the Deep Relevance Matching Model (DRMM) and visualize\nthem as a doughnut chart. In addition, we train a passage-level ranker with\nDRMM that selects the most relevant passage from each document and shows it as\na snippet on the result page. Next to the snippet we show a document thumbnail\nwith this passage highlighted. We evaluate the proposed interface in an online\nuser study, asking users to judge the explainability and assessability of the\ninterface. 
We found that users judge our proposed interface to be significantly more\nexplainable and easier to assess than a regular search engine result page.\nHowever, they are not significantly better at selecting the relevant documents\nfrom the top-5. This indicates that the explainability of the search engine\nresult page leads to a better user experience. Thus, we conclude that the\nproposed explainable elements are promising as visualizations for search engine\nusers.\n"} {"abstract": " Simulating the time evolution of quantum systems is one of the most promising\napplications of quantum computing and also appears as a subroutine in many\napplications such as Green's function methods. In the current era of NISQ\nmachines we assess the state of algorithms for simulating time dynamics with\nlimited resources. We propose the Jaynes-Cummings model and extensions to it as\nuseful toy models to investigate time evolution algorithms on near-term quantum\ncomputers. Using these simple models, direct Trotterisation of the time\nevolution operator produces deep circuits, requiring coherence times out of\nreach on current NISQ hardware. Therefore we test two alternative responses to\nthis problem: variational compilation of the time evolution operator, and\nvariational quantum simulation of the wavefunction ansatz. We demonstrate\nnumerically to what extent these methods are successful in time evolving this\nsystem. The costs in terms of circuit depth and number of measurements are\ncompared quantitatively, along with other drawbacks and advantages of each\nmethod. We find that the computational requirements of both methods make them\nsuitable for performing time evolution simulations of our models on NISQ\nhardware. Our results also indicate that variational quantum compilation\nproduces more accurate results than variational quantum simulation, at the cost\nof a larger number of measurements.\n"} {"abstract": " In this paper we give a systematic review of the theory of Gibbs measures of\nthe Potts model on Cayley trees (developed since 2013) and discuss many\napplications of the Potts model to real-world situations: mainly biology,\nphysics, and some examples of alloy behavior, cell sorting, financial\nengineering, flocking birds, flowing foams, image segmentation, medicine,\nsociology, etc.\n"} {"abstract": " We introduce a new class of commutative noetherian DG-rings which generalizes\nthe class of regular local rings. These are defined to be local DG-rings\n$(A,\bar{\mathfrak{m}})$ such that the maximal ideal $\bar{\mathfrak{m}}\n\subseteq \mathrm{H}^0(A)$ can be generated by an $A$-regular sequence. We call\nthese DG-rings sequence-regular DG-rings, and make a detailed study of them.\nUsing methods of Cohen-Macaulay differential graded algebra, we prove that the\nAuslander-Buchsbaum-Serre theorem about localization generalizes to this\nsetting. This allows us to define global sequence-regular DG-rings, and to\nintroduce this regularity condition to derived algebraic geometry. It is shown\nthat these DG-rings share many properties of classical regular local rings, and\nin particular we are able to construct canonical residue DG-fields in this\ncontext. Finally, we show that sequence-regular DG-rings are ubiquitous, and in\nparticular, any eventually coconnective derived algebraic variety over a\nperfect field is generically sequence-regular.\n"} {"abstract": " Tissues are characterized by layers of functional units such as cells and\nextracellular matrix (ECM). 
Nevertheless, how dynamics at interlayer interfaces\nhelp transmit cellular forces in tissues remains overlooked. Here, we\ninvestigate a multi-layer system where a layer of epithelial cells is seeded\nupon an elastic substrate in contact with a hard surface. Our experiments show\nthat, upon a cell extrusion event in the cellular layer, long-range wave\npropagation emerges in the substrate only when the two substrate layers are\nweakly attached to each other. We then derive a theoretical model which\nquantitatively reproduces the wave dynamics and explains how frictional sliding\nbetween substrate layers helps propagate cellular forces at a variety of\nscales, depending on the stiffness, thickness, and slipperiness of the\nsubstrate. These results highlight the importance of interfacial friction\nbetween layers in transmitting mechanical cues in tissues in vivo.\n"} {"abstract": " This paper proposes a differentiable robust LQR layer for reinforcement\nlearning and imitation learning under model uncertainty and stochastic\ndynamics. The robust LQR layer can exploit the advantages of robust optimal\ncontrol and model-free learning. It provides a new type of inductive bias for\nstochasticity and uncertainty modeling in control systems. In particular, we\npropose an efficient way to differentiate through a robust LQR optimization\nprogram by rewriting it as a convex program (i.e., a semi-definite program) for\nthe worst-case cost. Based on recent work on using convex optimization inside\nneural network layers, we develop a fully differentiable layer for optimizing\nthis worst-case cost, i.e., we compute the derivative of a performance measure\nw.r.t. the model's unknown parameters, model uncertainty and stochasticity\nparameters. We demonstrate the proposed method on imitation learning and\napproximate dynamic programming on stochastic and uncertain domains. The\nexperimental results show that the proposed method can optimize robust policies\nunder uncertain situations and achieves significantly better\nperformance than existing methods that do not model uncertainty directly.\n"} {"abstract": " Recent work on graph generative models has made remarkable progress towards\ngenerating increasingly realistic graphs, as measured by global graph features\nsuch as degree distribution, density, and clustering coefficients. Deep\ngenerative models have also made significant advances through better modelling\nof the local correlations in the graph topology, which have been very useful\nfor predicting unobserved graph components, such as the existence of a link or\nthe class of a node, from nearby observed graph components. A complete\nscientific understanding of graph data should address both global and local\nstructure. In this paper, we propose a joint model for both as complementary\nobjectives in a graph VAE framework. Global structure is captured by\nincorporating graph kernels in a probabilistic model whose loss function is\nclosely related to the maximum mean discrepancy (MMD) between the global\nstructures of the reconstructed and the input graphs. The ELBO objective\nderived from the model regularizes a standard local link reconstruction term\nwith an MMD term. Our experiments demonstrate a significant improvement in the\nrealism of the generated graph structures, typically by 1-2 orders of magnitude\nin graph structure metrics, compared to leading graph VAE and GAN models. 
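For reference, the MMD term mentioned above has the standard two-sample form; a minimal sketch with an assumed RBF kernel over precomputed graph embeddings (the paper's exact kernel and estimator may differ):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between two samples.

    X, Y: (n, d) and (m, d) arrays of graph feature embeddings, e.g. from a
    graph kernel. The RBF kernel and gamma are illustrative assumptions.
    """
    def rbf(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2 * rbf(X, Y).mean()

# Example: compare embeddings of reconstructed vs. input graphs.
rng = np.random.default_rng(0)
print(mmd2(rng.normal(size=(64, 16)), rng.normal(0.5, 1.0, size=(64, 16))))
```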
Local\nlink reconstruction improves as well in many cases.\n"} {"abstract": " All-solid-state batteries are claimed to be the next-generation battery\nsystem, in view of their safety accompanied by high energy densities. A new\nadvanced, multiscale-compatible, and fully three-dimensional model for solid\nelectrolytes is presented in this note. The response of the electrolyte is\nstudied in depth theoretically and numerically, analyzing the equilibrium and\nsteady-state behaviors, the limiting factors, as well as the most relevant\nconstitutive parameters according to the sensitivity analysis of the model.\n"} {"abstract": " Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be\ndeployed to provide wireless connectivity to ground devices in the event of\nincreased network demand, points of failure in existing infrastructure, or\ndisasters. However, it is challenging to conserve the energy of UAVs during\nprolonged coverage tasks, considering their limited on-board battery capacity.\nReinforcement learning (RL)-based approaches have previously been used to\nimprove the energy utilization of multiple UAVs; however, a central cloud\ncontroller is assumed to have complete knowledge of the end-devices' locations,\ni.e., the controller periodically scans and sends updates for UAV\ndecision-making. This assumption is impractical in dynamic network environments\nwith UAVs serving mobile ground devices. To address this problem, we propose a\ndecentralized Q-learning approach, where each UAV-BS is equipped with an\nautonomous agent that maximizes the connectivity of mobile ground devices while\nimproving its energy utilization. Experimental results show that the proposed\ndesign significantly outperforms the centralized approaches in jointly\nmaximizing the number of connected ground devices and the energy utilization of\nthe UAV-BSs.\n"} {"abstract": " One-shot voice conversion (VC), which performs conversion across arbitrary\nspeakers with only a single target-speaker utterance for reference, can be\neffectively achieved by speech representation disentanglement. Existing work\ngenerally ignores the correlation between different speech representations\nduring training, which causes leakage of content information into the speaker\nrepresentation and thus degrades VC performance. To alleviate this issue, we\nemploy vector quantization (VQ) for content encoding and introduce mutual\ninformation (MI) as the correlation metric during training, to achieve proper\ndisentanglement of content, speaker and pitch representations, by reducing\ntheir inter-dependencies in an unsupervised manner. Experimental results\nreflect the superiority of the proposed method in learning effective\ndisentangled speech representations for retaining source linguistic content and\nintonation variations, while capturing target speaker characteristics. In doing\nso, the proposed approach achieves higher speech naturalness and speaker\nsimilarity than current state-of-the-art one-shot VC systems. Our code,\npre-trained models and demo are available at\nhttps://github.com/Wendison/VQMIVC.\n"} {"abstract": " One major impediment in rapidly deploying object detection models for\nindustrial applications is the lack of large annotated datasets. We have\nrecently presented the Stacked Carton Dataset (SCD), which contains carton\nimages from three scenarios: a comprehensive pharmaceutical logistics company\n(CPLC), an e-commerce logistics company (ECLC), and a fruit market (FM). 
However, due to domain\nshift, a model trained on one of the three scenarios in SCD generalizes poorly\nwhen applied to the remaining scenarios. To solve this\nproblem, a novel image synthesis method is proposed to replace the foreground\ntexture of the source datasets with the texture of the target datasets. Our\nmethod can keep the context relationship of foreground objects and backgrounds\nunchanged and greatly augment the target datasets. We first propose a surface\nsegmentation algorithm to achieve texture decoupling of each instance.\nSecond, a contour reconstruction algorithm is proposed to keep the occlusion\nand truncation relationship of the instance unchanged. Finally, a Gaussian\nfusion algorithm is used to replace the foreground texture from the source\ndatasets with the texture from the target datasets. The novel image synthesis\nmethod boosts AP by at least 4.3%~6.5% on RetinaNet and 3.4%~6.8% on\nFaster R-CNN for the target domain. Code is available at\nhttps://github.com/hustgetlijun/RCAN.\n"} {"abstract": " The Transient High Energy Sources and Early Universe Surveyor (THESEUS) is an\nESA M5 candidate mission currently in Phase A, with launch in $\sim$2032. The\naim of the mission is to complete a Gamma Ray Burst survey and monitor transient\nX-ray events. The University of Leicester is the PI institute for the Soft X-ray\nInstrument (SXI), and is responsible for both the optic and detector\ndevelopment. The SXI consists of two wide-field, lobster-eye X-ray modules.\nEach module consists of 64 Micro Pore Optics (MPOs) in an 8 by 8 array and 8\nCMOS detectors in each focal plane. The geometry of the MPOs comprises a\nsquare-packed array of microscopic pores with a square cross-section, arranged\nover a spherical surface with a radius of curvature twice the focal length of\nthe optic. Working in the photon energy range 0.3-5 keV, the optimum $L/d$\nratio (length of pore $L$ and pore width $d$) is upwards of 50 and is constant\nacross the whole optic aperture for the SXI. The performance goal for the SXI\nmodules is an angular resolution of 4.5 arcmin and a localisation accuracy of\n$\sim$1 arcmin, employing an $L/d$ of 60. During the Phase A study, we are\ninvestigating methods to improve the current performance and consistency of the\nMPOs, in cooperation with the manufacturer Photonis France SAS. We present the\noptics design of the THESEUS SXI modules and the programme of work designed to\nimprove the MPOs' performance and the results from the study.\n"} {"abstract": " Recent advances in the literature have demonstrated that standard supervised\nlearning algorithms are ill-suited for problems with endogenous explanatory\nvariables. To correct for the endogeneity bias, many variants of nonparametric\ninstrumental variable regression methods have been developed. In this paper, we\npropose an alternative algorithm called boostIV that builds on the traditional\ngradient boosting algorithm and corrects for the endogeneity bias. The\nalgorithm is very intuitive and resembles an iterative version of the standard\n2SLS estimator. Moreover, our approach is data driven, meaning that the\nresearcher does not have to take a stance on either the form of the target\nfunction approximation or the choice of instruments. We demonstrate that our\nestimator is consistent under mild conditions. We carry out extensive Monte\nCarlo simulations to demonstrate the finite sample performance of our algorithm\ncompared to other recently developed methods. 
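Since the algorithm is described as resembling an iterated 2SLS, the basic 2SLS building block is sketched below for reference (a minimal numpy version; the boosting loop and basis expansion of boostIV itself are omitted):

```python
import numpy as np

def two_sls(y, X, Z):
    """Standard two-stage least squares.

    X: endogenous regressors (n, p); Z: instruments (n, q), q >= p.
    """
    # Stage 1: project X onto the instrument space.
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    X_hat = Pz @ X
    # Stage 2: regress y on the projected regressors.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta

# Toy example: one endogenous regressor, two instruments, a shared confounder.
rng = np.random.default_rng(1)
n = 1000
Z = rng.normal(size=(n, 2))
u = rng.normal(size=n)                    # confounder
x = Z @ np.array([1.0, -0.5]) + u + rng.normal(size=n)
y = 2.0 * x + u + rng.normal(size=n)      # true coefficient: 2.0
print(two_sls(y, x[:, None], Z))          # close to [2.0]; OLS would be biased
```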
We show that boostIV is at worst\non par with the existing methods and on average significantly outperforms them.\n"} {"abstract": " We report the interfacing of the Exciting-Plus (\"EP\") FLAPW DFT code with the\nSIRIUS multi-functional DFT library. Use of the SIRIUS library enhances EP with\nadditional task parallelism in ground state DFT calculations. Without\nsignificant change in the EP source code, the additional eigensystem solver\nmethod from the SIRIUS library can be exploited for performance gains in\ndiagonalizing the Kohn-Sham Hamiltonian. We benchmark the interfaced code\nagainst the original EP using small bulk systems, and then demonstrate\nperformance on much larger molecular magnet systems that are well beyond the\ncapability of the original EP code.\n"} {"abstract": " Chimera states have attracted significant attention as symmetry-broken states\nexhibiting the unexpected coexistence of coherence and incoherence. Despite the\nvaluable insights gained from analyzing specific systems, an understanding of\nthe general physical mechanism underlying the emergence of chimeras is still\nlacking. Here, we show that many stable chimeras arise because coherence in\npart of the system is sustained by incoherence in the rest of the system. This\nmechanism may be regarded as a deterministic analog of noise-induced\nsynchronization and is shown to underlie the emergence of strong chimeras.\nThese are chimera states whose coherent domain is formed by identically\nsynchronized oscillators. Recognizing this mechanism offers a new meaning to\nthe interpretation that chimeras are a natural link between coherence and\nincoherence.\n"} {"abstract": " We present a probabilistic 3D generative model, named Generative Cellular\nAutomata, which is able to produce diverse and high quality shapes. We\nformulate the shape generation process as sampling from the transition kernel\nof a Markov chain, where the sampling chain eventually evolves to the full\nshape of the learned distribution. The transition kernel employs the local\nupdate rules of cellular automata, effectively reducing the search space in a\nhigh-resolution 3D grid space by exploiting the connectivity and sparsity of 3D\nshapes. Our progressive generation only focuses on the sparse set of occupied\nvoxels and their neighborhood, thus enabling the utilization of an expressive\nsparse convolutional network. We propose an effective training scheme to obtain\nthe local homogeneous rule of generative cellular automata with sequences that\nare slightly different from the sampling chain but converge to the full shapes\nin the training data. Extensive experiments on probabilistic shape completion\nand shape generation demonstrate that our method achieves competitive\nperformance against recent methods.\n"} {"abstract": " Theoretical models of a spin-polarized voltage probe (SPVP) tunnel-coupled to\nthe helical edge states (HES) of a quantum spin Hall system (QSHS) are studied.\nOur first model of the SPVP comprises $N_{P}$ spin-polarized modes (subprobes),\neach of which is locally tunnel-coupled to the HES, while the SPVP, as a whole,\nis subjected to a self-consistency condition ensuring zero average current on\nthe probe. We carry out a numerical analysis which shows that the optimal\nsituation for reading off spin-resolved voltage from the HES depends on the\ninterplay of the probe-edge tunnel-coupling and the number of modes in the\nprobe ($N_P$). 
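The zero-average-current condition can be made concrete with a toy sketch in an assumed linear-response Landauer-Büttiker form (the transmissions and lead voltages below are illustrative, not the paper's microscopic model):

```python
import numpy as np
from scipy.optimize import brentq

# The probe potential V_p is fixed self-consistently by demanding zero net
# probe current:  I_p(V_p) = sum_j T_pj * (V_p - V_j) = 0.
T_pj = np.array([0.3, 0.7])   # assumed transmissions, probe <-> leads j
V_j = np.array([0.0, 1.0])    # lead voltages (arbitrary units)

def probe_current(V_p):
    return np.sum(T_pj * (V_p - V_j))

V_probe = brentq(probe_current, V_j.min(), V_j.max())
print(V_probe)  # transmission-weighted average of lead voltages: 0.7 here
```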
We further investigate the stability of our findings by\nintroducing Gaussian fluctuations in {\it{(i)}} the tunnel-coupling between the\nsubprobes and the HES about a chosen average value and {\it{(ii)}} the\nspin-polarization of the subprobes about a chosen direction of the net\npolarization of the SPVP. We also perform a numerical analysis corresponding to\nthe situation where four such SPVPs are implemented in a self-consistent fashion\nacross a ferromagnetic barrier on the HES and demonstrate that this model\nfacilitates the measurement of spin-resolved four-probe voltage drops across\nthe ferromagnetic barrier. As a second model, we employ the edge state of a\nquantum anomalous Hall state (QAHS) as the SPVP, which is tunnel-coupled over an\nextended region to the HES. A two-dimensional lattice simulation for the\nquantum transport of the proposed device setup comprising a junction of QSHS\nand QAHS is considered, and a feasibility study of using the edge of the QAHS as\nan efficient spin-polarized voltage probe is carried out in the presence of an\noptimal disorder strength.\n"} {"abstract": " Planck data provide precise constraints on cosmological parameters when\nassuming the base $\Lambda$CDM model, including a $0.17\%$ measurement of the\nage of the Universe, $t_0=13.797 \pm 0.023\,{\rm Gyr}$. However, the\npersistence of the \"Hubble tension\" calls the base $\Lambda$CDM model's\ncompleteness into question and has spurred interest in models such as Early\nDark Energy (EDE) that modify the assumed expansion history of the Universe. We\ninvestigate the effect of EDE on the redshift-time relation $z \leftrightarrow\nt$ and find that it differs from the base $\Lambda$CDM model by at least\n${\approx} 4\%$ at all $t$ and $z$. As long as EDE remains observationally\nviable, any inferred $t \leftarrow z$ or $z \leftarrow t$ quoted to a higher\nlevel of precision does not reflect the current status of our understanding of\ncosmology. This uncertainty has important astrophysical implications: the\nreionization epoch ($10>z>6$) corresponds to disjoint lookback time periods\nin the base $\Lambda$CDM and EDE models, and the EDE value of $t_0=13.25 \pm\n0.17~{\rm Gyr}$ is in tension with published ages of some stars, star clusters,\nand ultra-faint dwarf galaxies. However, most published stellar ages do not\ninclude an uncertainty in accuracy (due to, e.g., uncertain distances and\nstellar physics) that is estimated to be $\sim7-10\%$, potentially reconciling\nstellar ages with $t_{0,\rm EDE}$. We discuss how the big data era for stars is\nproviding extremely precise ages ($<1\%$) and how improved distances and\ntreatment of stellar physics such as convection could result in ages accurate\nto $4-5\%$, comparable to the current accuracy of $t \leftrightarrow z$. Such\nprecise and accurate stellar ages can provide detailed insight into the\nhigh-redshift Universe independent of a cosmological model.\n"} {"abstract": " We are concerned with random ordinary differential equations (RODEs). Our\nmain question of interest is how uncertainties in system parameters propagate\nthrough the possibly highly nonlinear dynamical system and affect the system's\nbifurcation behavior. We come up with a methodology to determine the\nprobability of the occurrence of different types of bifurcations (sub- vs\nsuper-critical) along a given bifurcation curve based on the probability\ndistribution of the input parameters. 
In a first step, we reduce the system's\nbehavior to the dynamics on its center manifold, thereby still capturing the\nmajor qualitative behavior of the RODEs. In a second step, we analyze the\nreduced RODEs and quantify the probability of the occurrence of different types\nof bifurcations based on the (nonlinear) functional appearance of uncertain\nparameters. To realize this major step, we present three approaches: an\nanalytical one, where the probability can be calculated explicitly based on\nMellin transformation and inversion; a semi-analytical one, consisting of a\ncombination of the analytical approach with a moment-based numerical estimation\nprocedure; and a particular sampling-based approach using the unscented\ntransformation. We complement our new methodology with various numerical\nexamples.\n"} {"abstract": " The \"Subset Sum problem\" is a very well-known NP-complete problem. In this\nwork, a top-k variation of the \"Subset Sum problem\" is considered. This problem\nhas wide application in recommendation systems, where instead of the k best\nobjects, the k best subsets of objects with the lowest (or highest) overall\nscores are required. Given an input set R of n real numbers and a positive\ninteger k, our target is to generate the k best subsets of R such that the sum\nof their elements is minimized. Our solution methodology is based on\nconstructing a metadata structure G for a given n. Each node of G stores a bit\nvector of size n from which a subset of R can be retrieved. Here it is shown\nthat the construction of the whole graph G is not needed. To answer a query,\nimplicit traversal of only the required portion of G on demand is sufficient,\nwhich eliminates the preprocessing step, thereby reducing the overall time\nand space requirement. A modified algorithm is then proposed to generate each\nsubset incrementally, where it is shown that it is possible to do away with the\nexplicit storage of the bit vector. This not only improves the space\nrequirement but also improves the asymptotic time complexity. Finally, a\nvariation of our algorithm that reports only the top-k subset sums has been\ncompared with an existing algorithm, which shows that our algorithm performs\nbetter in terms of both time and space requirements by a constant factor.\n"} {"abstract": " The attention mechanism enables graph neural networks (GNNs) to learn\nattention weights between the target node and its one-hop neighbors, further\nimproving performance. However, most existing GNNs are oriented\nto homogeneous graphs, and each layer can only aggregate the information of\none-hop neighbors. Stacking multi-layer networks introduces considerable noise\nand easily leads to over-smoothing. We propose a Multi-hop Heterogeneous\nNeighborhood information Fusion graph representation learning method (MHNF).\nSpecifically, we first propose a hybrid metapath autonomous extraction model to\nefficiently extract multi-hop hybrid neighbors. Then, we propose a hop-level\nheterogeneous information aggregation model, which selectively aggregates\ndifferent-hop neighborhood information within the same hybrid metapath.\nFinally, a hierarchical semantic attention fusion model (HSAF) is proposed,\nwhich can efficiently integrate different-hop and different-path neighborhood\ninformation, respectively. This approach solves the problem of aggregating\nmulti-hop neighborhood information and can learn hybrid metapaths for the\ntarget task, reducing the limitation of manually specifying metapaths. 
In addition,\nHSAF can extract the internal node information of the metapaths and better\nintegrate the semantic information at different levels. Experimental results on\nreal datasets show that MHNF is superior to state-of-the-art methods in node\nclassification and clustering tasks (10.94% - 69.09% and 11.58% - 394.93%\nrelative improvement on average, respectively).\n"} {"abstract": " We present a calculation of the up, down, strange and charm quark masses\nperformed within the lattice QCD framework. We use the twisted mass fermion\naction and carry out simulations that include in the sea two light\nmass-degenerate quarks, as well as the strange and charm quarks. In the\nanalysis we use gauge ensembles simulated at three values of the lattice\nspacing and with light quarks that correspond to pion masses in the range from\n350 MeV to the physical value, while the strange and charm quark masses are\ntuned approximately to their physical values. We use several quantities to set\nthe scale in order to check for finite lattice spacing effects and in the\ncontinuum limit we get compatible results. The quark mass renormalization is\ncarried out non-perturbatively using the RI'-MOM method converted into the\n$\overline{\rm MS}$ scheme. For the determination of the quark masses we use\nphysical observables from both the meson and the baryon sectors, obtaining\n$m_{ud} = 3.636(66)(^{+60}_{-57})$~MeV and $m_s =\n98.7(2.4)(^{+4.0}_{-3.2})$~MeV in the $\overline{\rm MS}(2\,{\rm GeV})$ scheme\nand $m_c = 1036(17)(^{+15}_{-8})$~MeV in the $\overline{\rm MS}(3\,{\rm GeV})$\nscheme, where the first errors are statistical and the second ones are\ncombinations of systematic errors. For the quark mass ratios we get $m_s /\nm_{ud} = 27.17(32)(^{+56}_{-38})$ and $m_c / m_s = 11.48(12)(^{+25}_{-19})$.\n"} {"abstract": " Coupled flow-induced flapping dynamics of flexible plates are governed by\nthree non-dimensional numbers: Reynolds number, mass-ratio, and non-dimensional\nflexural rigidity. The traditional definition of these parameters is limited to\nisotropic single-layered flexible plates. There is a need to define these\nparameters for a more generic plate made of multiple isotropic layers placed on\ntop of each other. In this work, we derive the non-dimensional parameters for a\nflexible plate of $n$-isotropic layers and validate the non-dimensional\nparameters with the aid of numerical simulations.\n"} {"abstract": " To help agents reason about scenes in terms of their building blocks, we wish\nto extract the compositional structure of any given scene (in particular, the\nconfiguration and characteristics of objects comprising the scene). This\nproblem is especially difficult when scene structure needs to be inferred while\nalso estimating the agent's location/viewpoint, as the two variables jointly\ngive rise to the agent's observations. We present an unsupervised variational\napproach to this problem. Leveraging the shared structure that exists across\ndifferent scenes, our model learns to infer two sets of latent representations\nfrom RGB video input alone: a set of \"object\" latents, corresponding to the\ntime-invariant, object-level contents of the scene, as well as a set of \"frame\"\nlatents, corresponding to global time-varying elements such as viewpoint.
This\nfactorization of latents allows our model, SIMONe, to represent object\nattributes in an allocentric manner which does not depend on viewpoint.\nMoreover, it allows us to disentangle object dynamics and summarize their\ntrajectories as time-abstracted, view-invariant, per-object properties. We\ndemonstrate these capabilities, as well as the model's performance in terms of\nview synthesis and instance segmentation, across three procedurally generated\nvideo datasets.\n"} {"abstract": " In recent years, the use of sophisticated statistical models that influence\ndecisions in domains of high societal relevance has been on the rise. Although\nthese models can often bring substantial improvements in the accuracy and\nefficiency of organizations, many governments, institutions, and companies are\nreluctant to adopt them, as their output is often difficult to explain in\nhuman-interpretable ways. Hence, these models are often regarded as\nblack-boxes, in the sense that their internal mechanisms can be opaque to human\naudit. In real-world applications, particularly in domains where decisions can\nhave a sensitive impact--e.g., criminal justice, estimating credit scores,\ninsurance risk, health risks, etc.--model interpretability is desired.\nRecently, the academic literature has proposed a substantial number of methods\nfor providing interpretable explanations for machine learning models. This\nsurvey reviews the most relevant and novel methods that form the\nstate-of-the-art for addressing the particular problem of explaining individual\ninstances in machine learning. It seeks to provide a succinct review that can\nguide data science and machine learning practitioners in the search for\nappropriate methods for their problem domain.\n"} {"abstract": " The Kolkata Paise Restaurant Problem is a challenging game, in which $n$\nagents must decide where to have lunch during their lunch break. The game is\nvery interesting because there are exactly $n$ restaurants and each restaurant\ncan accommodate only one agent. If two or more agents happen to choose the same\nrestaurant, only one gets served and the others have to return to work hungry.\nIn this paper we tackle this problem from an entirely new angle. We abolish\ncertain implicit assumptions, which allows us to propose a novel strategy that\nresults in greater utilization for the restaurants. We emphasize the spatially\ndistributed nature of our approach, which, for the first time, perceives the\nlocations of the restaurants as uniformly distributed in the entire city area.\nThis critical change in perspective has profound ramifications in the\ntopological layout of the restaurants, which now makes it completely realistic\nto assume that every agent has a second chance. Every agent may now visit, in\ncase of failure, more than one restaurant, within the predefined time\nconstraints.\n"} {"abstract": " A graph G is said to be orderenergetic if its energy equals its order, and\nhypoenergetic if its energy is less than its order. Two non-isomorphic graphs\nof the same order are said to be equienergetic if their energies are equal.
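The definitions above are easy to check numerically: the energy of a graph is the sum of the absolute values of the eigenvalues of its adjacency matrix. A small sketch, using two standard examples (the 4-cycle C4, whose spectrum 2, 0, 0, -2 makes it orderenergetic, and the path P3, which is hypoenergetic):

```python
# Graph energy E(G) = sum of |eigenvalues| of the adjacency matrix.
import numpy as np

def graph_energy(A: np.ndarray) -> float:
    return float(np.abs(np.linalg.eigvalsh(A)).sum())

C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
print(graph_energy(C4))  # 4.0 == order -> orderenergetic

P3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
print(graph_energy(P3))  # ~2.83 < 3 -> hypoenergetic
```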
In this paper, we construct some new families of\norderenergetic graphs, hypoenergetic graphs, equienergetic graphs,\nequiorderenergetic graphs and equihypoenergetic graphs.\n"} {"abstract": " In order to get $\lambda$-models with a rich $\infty$-groupoid structure,\nwhich we call \"homotopy $\lambda$-models\", a general technique is described for\nsolving domain equations on any cartesian closed $\infty$-category (c.c.i.)\nwith enough points. Finally, the technique is applied in a particular c.c.i.,\nwhere some examples of homotopy $\lambda$-models are given.\n"} {"abstract": " The central engines of Active Galactic Nuclei (AGNs) are powered by accreting\nsupermassive black holes, and while AGNs are known to play an important role in\ngalaxy evolution, the key physical processes occur on scales that are too small\nto be resolved spatially (aside from a few exceptional cases). Reverberation\nmapping is a powerful technique that overcomes this limitation by using echoes\nof light to determine the geometry and kinematics of the central regions.\nVariable ionizing radiation from close to the black hole drives correlated\nvariability in surrounding gas/dust, but with a time delay due to the light\ntravel time between the regions, allowing reverberation mapping to effectively\nreplace spatial resolution with time resolution. Reverberation mapping is used\nto measure black hole masses and to probe the innermost X-ray emitting region,\nthe UV/optical accretion disk, the broad emission line region and the dusty\ntorus. In this article we provide an overview of the technique and its varied\napplications.\n"} {"abstract": " In this paper, we study the Feldman-Katok metric. We give entropy formulas by\nreplacing the Bowen metric with the Feldman-Katok metric. Some related topics\nare also discussed.\n"} {"abstract": " We propose two systematic constructions of deletion-correcting codes for\nprotecting quantum information. The first one works with qudits of any\ndimension, but only one deletion is corrected and the constructed codes are\nasymptotically bad. The second one corrects multiple deletions and can\nconstruct asymptotically good codes. The second one also allows conversion of\nstabilizer-based quantum codes to deletion-correcting codes, and entanglement\nassistance.\n"} {"abstract": " Resistive random access memories are promising for non-volatile memory and\nbrain-inspired computing applications. High variability and low yield of these\ndevices are key drawbacks hindering reliable training of physical neural\nnetworks. In this study, we show that doping an oxide electrolyte, Al2O3, with\nelectronegative metals makes resistive switching significantly more\nreproducible, surpassing the reproducibility requirements for obtaining\nreliable hardware neuromorphic circuits. The underlying mechanism is the ease\nof creating oxygen vacancies in the vicinity of electronegative dopants, due to\nthe capture of the associated electrons by dopant mid-gap states, and the\nweakening of Al-O bonds. These oxygen vacancies and vacancy clusters also bind\nsignificantly to the dopant, thereby serving as preferential sites and building\nblocks in the formation of conducting paths. We validate this theory\nexperimentally by implanting multiple dopants over a range of\nelectronegativities, and find superior repeatability and yield with highly\nelectronegative metals, Au, Pt and Pd.
These devices also exhibit a gradual SET\ntransition, enabling multibit switching that is desirable for analog computing.\n"} {"abstract": " Characterizing the privacy degradation over compositions, i.e., privacy\naccounting, is a fundamental topic in differential privacy (DP) with many\napplications to differentially private machine learning and federated learning.\nWe propose a unification of recent advances (Renyi DP, privacy profiles, $f$-DP\nand the PLD formalism) via the \emph{characteristic function} ($\phi$-function)\nof a certain \emph{dominating} privacy loss random variable. We show that our\napproach allows \emph{natural} adaptive composition like Renyi DP, provides\n\emph{exactly tight} privacy accounting like PLD, and can be (often\n\emph{losslessly}) converted to privacy profile and $f$-DP, thus providing\n$(\epsilon,\delta)$-DP guarantees and interpretable tradeoff functions.\nAlgorithmically, we propose an \emph{analytical Fourier accountant} that\nrepresents the \emph{complex} logarithm of $\phi$-functions symbolically and\nuses Gaussian quadrature for numerical computation. On several popular DP\nmechanisms and their subsampled counterparts, we demonstrate the flexibility\nand tightness of our approach in theory and experiments.\n"} {"abstract": " The aim of this work is to determine abundances of neutron-capture elements\nfor thin- and thick-disc F, G, and K stars in several sky fields near the north\necliptic pole and to compare the results with the Galactic chemical evolution\nmodels, to explore elemental gradients according to stellar ages, mean\ngalactocentric distances, and maximum heights above the Galactic plane. The\nobservational data were obtained with the 1.65m telescope at the Moletai\nAstronomical Observatory and a fibre-fed high-resolution spectrograph.\nElemental abundances were determined using a differential spectrum synthesis\nwith the MARCS stellar model atmospheres and accounting for the\nhyperfine-structure effects. We determined abundances of Sr, Y, Zr, Ba, La, Ce,\nPr, Nd, Sm, and Eu for 424 thin- and 82 thick-disc stars. The sample of\nthick-disc stars shows a clearly visible decrease in [Eu/Mg] with increasing\n[Fe/H] compared to the thin-disc stars, bringing more evidence of a different\nchemical evolution in these two Galactic components. Abundance-age correlation\nslopes for the investigated thin-disc stars are slightly negative for the\nmajority of s-process dominated elements, while r-process dominated elements\nhave positive correlations. Our sample of thin-disc stars with ages spanning\nfrom 0.1 to 9 Gyr gives the [Y/Mg]=0.022 ($\pm$0.015)-0.027 ($\pm$0.003)*age\n[Gyr] relation. For the thick-disc stars, when we also took data from other\nstudies into account, we found that [Y/Mg] cannot serve as an age indicator.\nThe radial [El/Fe] gradients in the thin disc are negligible for the s-process\ndominated elements and become positive for the r-process dominated elements.\nThe vertical gradients are negative for the light s-process dominated elements\nand become positive for the r-process dominated elements. In the thick disc,\nthe radial [El/Fe] slopes are negligible, and the vertical slopes are\npredominantly negative.\n"} {"abstract": " The independent cascade (IC) model is a widely used influence propagation\nmodel for social networks.
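For readers unfamiliar with it, the classical IC dynamics (without the unobserved confounders studied in this paper) can be simulated in a few lines; the graph, edge probabilities, and seed set below are toy values.

```python
# Minimal independent cascade simulation: each newly activated node gets one
# chance to activate each inactive neighbor with the edge's probability.
import random

def independent_cascade(edges, seeds, seed=0):
    """edges: dict mapping node -> list of (neighbor, activation_prob)."""
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in edges.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

graph = {0: [(1, 0.5), (2, 0.5)], 1: [(3, 0.3)], 2: [(3, 0.8)]}
print(independent_cascade(graph, seeds=[0]))
```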
In this paper, we incorporate concepts and techniques\nfrom causal inference to study the identifiability of parameters from\nobservational data in an extended IC model with unobserved confounding factors,\nwhich models more realistic propagation scenarios but has rarely been studied\nin influence propagation modeling before. We provide the conditions for the\nidentifiability or unidentifiability of parameters for several special\nstructures including the Markovian IC model, semi-Markovian IC model, and IC\nmodel with a global unobserved variable. Parameter identifiability is important\nfor other tasks such as influence maximization under diffusion networks\nwith unobserved confounding factors.\n"} {"abstract": " We investigate the reasons for the performance degradation incurred with\nbatch-independent normalization. We find that the prototypical techniques of\nlayer normalization and instance normalization both induce the appearance of\nfailure modes in the neural network's pre-activations: (i) layer normalization\ninduces a collapse towards channel-wise constant functions; (ii) instance\nnormalization induces a lack of variability in instance statistics, symptomatic\nof an alteration of the expressivity. To alleviate failure mode (i) without\naggravating failure mode (ii), we introduce the technique \"Proxy Normalization\"\nthat normalizes post-activations using a proxy distribution. When combined with\nlayer normalization or group normalization, this batch-independent\nnormalization emulates batch normalization's behavior and consistently matches\nor exceeds its performance.\n"} {"abstract": " We propose a family of lossy integer compressions for Stochastic Gradient\nDescent (SGD) that do not communicate a single float. This is achieved by\nmultiplying floating-point vectors with a number known to every device and then\nrounding to an integer number. Our theory shows that the iteration complexity\nof SGD does not change up to constant factors when the vectors are scaled\nproperly. Moreover, this holds for both convex and non-convex functions, with\nand without overparameterization. In contrast to other compression-based\nalgorithms, ours preserves the convergence rate of SGD even on non-smooth\nproblems. Finally, we show that when the data is significantly heterogeneous,\nit may become increasingly hard to keep the integers bounded and propose an\nalternative algorithm, IntDIANA, to solve this type of problem.\n"} {"abstract": " Intelligent reflecting surface (IRS) has emerged as a competitive solution to\naddress blockage issues in millimeter wave (mmWave) and Terahertz (THz)\ncommunications due to its capability of reshaping wireless transmission\nenvironments. Nevertheless, obtaining the channel state information of\nIRS-assisted systems is quite challenging because of the passive\ncharacteristics of the IRS. In this paper, we consider the problem of beam\ntraining/alignment for IRS-assisted downlink mmWave/THz systems, where a\nmulti-antenna base station (BS) with a hybrid structure serves a single-antenna\nuser aided by the IRS.
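The integer-only compression scheme for SGD described two abstracts above (shared scale, then rounding) is easy to prototype. A minimal sketch, where the stochastic rounding and the scale value are our illustrative choices, not necessarily the paper's exact scheme:

```python
# Toy integer compression for distributed SGD: multiply by a scale known to
# every device, round stochastically to integers, divide the scale back out.
import numpy as np

def int_compress(x, scale, rng):
    y = x * scale
    low = np.floor(y)
    # round up with probability equal to the fractional part (unbiased)
    return (low + (rng.random(x.shape) < (y - low))).astype(np.int64)

def int_decompress(q, scale):
    return q / scale

rng = np.random.default_rng(1)
g = rng.normal(size=5)                        # a local gradient
q = int_compress(g, scale=1024.0, rng=rng)    # only integers are communicated
err = np.max(np.abs(int_decompress(q, 1024.0) - g))
print(q, err)                                 # per-entry error <= 1/1024
```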
By exploiting the inherent sparse structure of the\nBS-IRS-user cascade channel, the beam training problem is formulated as a joint\nsparse sensing and phaseless estimation problem, which involves devising a\nsparse sensing matrix and developing an efficient estimation algorithm to\nidentify the best beam alignment from compressive phaseless measurements.\nTheoretical analysis reveals that the proposed method can identify the best\nalignment with only a modest amount of training overhead. Simulation results\nshow that, for both line-of-sight (LOS) and NLOS scenarios, the proposed method\nobtains a significant performance improvement over existing state-of-the-art\nmethods. Notably, it can achieve performance close to that of the exhaustive\nbeam search scheme, while reducing the training overhead by 95%.\n"} {"abstract": " A single-hop beeping network is a distributed communication model in which\nall stations can communicate with one another by transmitting only one-bit\nmessages, called beeps. This paper focuses on two fundamental problems in\ndistributed computing: the naming and counting problems. We are\nparticularly interested in optimizing the energy complexity and the running\ntime of algorithms to resolve these problems. Our contribution is to design\nrandomized algorithms with an optimal running time of O(n log n) and an energy\ncomplexity of O(log n) for both the naming and counting problems on single-hop\nbeeping networks of n stations.\n"} {"abstract": " We initiate the study of dark matter models based on a gapped continuum. Dark\nmatter consists of a mixture of states with a continuous mass distribution,\nwhich evolves as the universe expands. We present an effective field theory\ndescribing the gapped continuum, outline the structure of the Hilbert space and\nshow how to deal with the thermodynamics of such a system. This formalism\nenables us to study the cosmological evolution and phenomenology of gapped\ncontinuum DM in detail. As a concrete example, we consider a weakly-interacting\ncontinuum (WIC) model, a gapped continuum counterpart of the familiar WIMP. The\nDM interacts with the SM via a Z-portal. The model successfully reproduces the\nobserved relic density, while direct detection constraints are avoided due to\nthe effect of continuum kinematics. The model has striking observational\nconsequences, including continuous decays of DM states throughout cosmological\nhistory, as well as cascade decays of DM states produced at colliders. We also\ndescribe how the WIC theory can arise from a local, unitary scalar QFT\npropagating on a five-dimensional warped background with a soft wall.\n"} {"abstract": " In this paper we discuss the computation of Casimir energy on a quantum\ncomputer. The Casimir energy is an ideal quantity to calculate on a quantum\ncomputer as near-term hybrid classical-quantum algorithms exist to calculate\nthe ground state energy and the Casimir energy gives physical implications for\nthis quantity in a variety of settings. Depending on boundary conditions and\nwhether the field is bosonic or fermionic, we illustrate how the Casimir energy\ncalculation can be set up on a quantum computer and calculated using the\nVariational Quantum Eigensolver algorithm with IBM QISKit. We compare the\nresults based on a lattice regularization with a finite number of qubits with\nthe continuum calculation for free boson fields, free fermion fields and chiral\nfermion fields.
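A purely classical toy version of this lattice-versus-continuum comparison, assuming a free massless boson in one dimension with Dirichlet ends (no qubits or VQE involved): with nearest-neighbour couplings the mode frequencies are omega_n = 2 sin(n*pi/(2(N+1))) in lattice units, the zero-point energy E(N) contains bulk and surface pieces, and the residual finite-size term should approach the continuum Casimir value -pi/(24 L).

```python
# Zero-point energy of a massless boson on a 1D Dirichlet lattice (a = 1),
# E(N) = sum_n sin(n*pi/(2(N+1))). Fitting E(L) = c0*L + c1 + c2/L with
# L = N + 1 should recover c2 ~ -pi/24, the continuum Casimir coefficient.
import numpy as np

def zero_point_energy(N: int) -> float:
    n = np.arange(1, N + 1)
    return float(np.sin(n * np.pi / (2 * (N + 1))).sum())

Ns = np.array([20, 40, 80, 160, 320])
L = Ns + 1.0
E = np.array([zero_point_energy(N) for N in Ns])

design = np.stack([L, np.ones_like(L), 1 / L], axis=1)
coeffs, *_ = np.linalg.lstsq(design, E, rcond=None)
print(coeffs[2], -np.pi / 24)   # fitted 1/L coefficient vs -0.1309...
```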
We use a regularization method introduced by Bergman and Thorn\nto compute the Casimir energy of a chiral fermion. We show how the accuracy of\nthe calculation varies with the number of qubits. We show how the number of\nPauli terms which are used to represent the Hamiltonian on a quantum computer\nscales with the number of qubits. We discuss the application of the Casimir\ncalculations on quantum computers to cosmology, nanomaterials, string models,\nKaluza-Klein models and dark energy.\n"} {"abstract": " We study the average number $\mathcal{A}(G)$ of colors in the non-equivalent\ncolorings of a graph $G$. We show some general properties of this graph\ninvariant and determine its value for some classes of graphs. We then\nconjecture several lower bounds on $\mathcal{A}(G)$ and prove that these\nconjectures are true for specific classes of graphs such as triangulated graphs\nand graphs with maximum degree at most 2.\n"} {"abstract": " Strong evidence suggests that transformative correlated electron behavior may\nexist only in unrealized clean-limit 2D materials such as 1T-TaS2.\nUnfortunately, experiment and theory suggest that extrinsic disorder in\nfree-standing 2D layers impedes correlation-driven quantum behavior. Here we\ndemonstrate a new route to realizing fragile 2D quantum states through\nepitaxial polytype engineering of van der Waals materials. The isolation of\ntruly 2D charge density waves (CDWs) between metallic layers stabilizes\ncommensurate long-range order and lifts the coupling between neighboring CDW\nlayers to restore mirror symmetries via interlayer CDW twinning. The\ntwinned-commensurate charge density wave (tC-CDW) reported herein has a single\nmetal-insulator phase transition at ~350 K as measured structurally and\nelectronically. Fast in-situ transmission electron microscopy and scanned\nnanobeam diffraction map the formation of tC-CDWs. This work introduces\nepitaxial polytype engineering of van der Waals materials to access latent 2D\nground states distinct from conventional 2D fabrication.\n"} {"abstract": " We find a minimal set of generators for the coordinate ring of Calogero-Moser\nspace $\mathcal{C}_3$ and the algebraic relations among them explicitly. We\ngive a new presentation for the algebra of $3\times3$ invariant matrices\ninvolving the defining relations of $\mathbb{C}[\mathcal{C}_3]$. We find an\nexplicit description of the commuting variety of $3\times3$ matrices and its\norbits under the action of the affine Cremona group.\n"} {"abstract": " Ranking has always been one of the top concerns in information retrieval\nresearch. For decades, the lexical matching signal has dominated the ad-hoc\nretrieval process, but solely using this signal in retrieval may cause the\nvocabulary mismatch problem. In recent years, with the development of\nrepresentation learning techniques, many researchers have turned to Dense\nRetrieval (DR) models for better ranking performance. Although several existing\nDR models have already obtained promising results, their performance\nimprovement heavily relies on the sampling of training examples. Many effective\nsampling strategies are not efficient enough for practical usage, and for most\nof them, there is still a lack of theoretical analysis of how and why\nperformance improvements happen. To shed light on these research questions, we\ntheoretically investigate different training strategies for DR models and try\nto explain why hard negative sampling performs better than random sampling.
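The intuition can be made concrete with a toy contrastive setup: for the same query, top-scoring ("hard") negatives produce a larger softmax loss, and hence a larger gradient signal, than randomly drawn negatives. Everything below is simulated data, not the paper's models or analysis.

```python
# Random vs. (static) hard negative sampling for a toy dense retriever,
# scored with a softmax cross-entropy ranking loss over inner products.
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=32)                  # query embedding
pos = q + 0.1 * rng.normal(size=32)      # relevant passage (near the query)
pool = rng.normal(size=(1000, 32))       # candidate negative passages

def loss(q, pos, negs):
    scores = np.concatenate([[q @ pos], negs @ q])   # positive at index 0
    return -scores[0] + np.log(np.exp(scores).sum())

random_negs = pool[rng.choice(len(pool), 8, replace=False)]
hard_negs = pool[np.argsort(pool @ q)[-8:]]          # top-scoring negatives
print(loss(q, pos, random_negs), loss(q, pos, hard_negs))
# The hard-negative loss is larger, i.e., more informative per example.
```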
Through the analysis, we also find that\nthere are many potential risks in static hard negative sampling, which is\nemployed by many existing training methods. Therefore, we propose two training\nstrategies named a Stable Training Algorithm for dense Retrieval (STAR) and a\nquery-side training Algorithm for Directly Optimizing Ranking pErformance\n(ADORE), respectively. STAR improves the stability of the DR training process\nby introducing random negatives. ADORE replaces the widely-adopted static hard\nnegative sampling method with a dynamic one to directly optimize the ranking\nperformance. Experimental results on two publicly available retrieval benchmark\ndatasets show that either strategy gains significant improvements over existing\ncompetitive baselines and a combination of them leads to the best performance.\n"} {"abstract": " The ability to reliably prepare non-classical states will play a major role\nin the realization of quantum technology. NOON states, belonging to the class\nof Schroedinger cat states, have emerged as a leading candidate for several\napplications. Starting from a model of dipolar bosons confined to a closed\ncircuit of four sites, we show how to generate NOON states. This is achieved by\ndesigning protocols to transform initial Fock states to NOON states through use\nof time evolution, application of an external field, and local projective\nmeasurements. By variation of the external field strength, we demonstrate how\nthe system can be controlled to encode a phase into a NOON state. We also\ndiscuss the physical feasibility, via an optical lattice setup. Our proposal\nilluminates the benefits of quantum integrable systems in the design of\natomtronic protocols.\n"} {"abstract": " Elastic similarity measures are a class of similarity measures specifically\ndesigned to work with time series data. When scoring the similarity between two\ntime series, they allow points that do not correspond in timestamps to be\naligned. This can compensate for misalignments in the time axis of time series\ndata, and for similar processes that proceed at variable and differing paces.\nElastic similarity measures are widely used in machine learning tasks such as\nclassification, clustering and outlier detection when using time series data.\n There is a multitude of research on various univariate elastic similarity\nmeasures. However, except for multivariate versions of the well-known Dynamic\nTime Warping (DTW), there is a lack of work generalising other similarity\nmeasures to multivariate cases. This paper adapts two existing strategies used\nin multivariate DTW, namely, Independent and Dependent DTW, to several commonly\nused elastic similarity measures.\n Using 23 datasets from the University of East Anglia (UEA) multivariate\narchive, for nearest neighbour classification, we demonstrate that each measure\noutperforms all others on at least one dataset and that there are datasets for\nwhich either the dependent versions of all measures are more accurate than\ntheir independent counterparts or vice versa. This latter finding suggests that\nthese differences arise from a fundamental property of the data.
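A compact sketch of the two multivariate strategies just mentioned, using squared Euclidean ground costs: "dependent" DTW aligns all channels with one warping path, while "independent" DTW sums per-channel univariate DTW distances. This is a didactic O(nm) implementation, not the optimised code used in the experiments.

```python
# Dependent vs. independent multivariate DTW with squared-difference costs.
import numpy as np

def dtw(x, y, dist):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(x[i - 1], y[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_dependent(X, Y):   # X, Y: arrays of shape (length, channels)
    return dtw(X, Y, lambda a, b: float(((a - b) ** 2).sum()))

def dtw_independent(X, Y):
    return sum(dtw(X[:, c], Y[:, c], lambda a, b: float((a - b) ** 2))
               for c in range(X.shape[1]))

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(20, 3)), rng.normal(size=(25, 3))
print(dtw_dependent(X, Y), dtw_independent(X, Y))
```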
We also show\nthat an ensemble of such nearest neighbour classifiers is highly competitive\nwith other state-of-the-art multivariate time series classifiers.\n"} {"abstract": " We report the enhanced superconducting properties of the double-chain-based\nsuperconductor Pr$_{2}$Ba$_{4}$Cu$_{7}$O$_{15-\delta}$ synthesized by the\ncitrate pyrolysis technique. The reduction heat treatment in vacuum results in\nthe appearance of a superconducting state with $T_\mathrm{c}$=22-24 K,\naccompanied by higher residual resistivity ratios (10-12). The superconducting\nvolume fractions are estimated from the ZFC data to be 50$\sim55\%$, indicating\nbulk superconductivity. From the magneto-transport data, we evaluate the\ntemperature dependence of the superconducting critical field, to establish the\nsuperconducting phase diagram. The upper critical magnetic field is estimated\nto be about 35 T at low temperatures from the resistive transition data using\nthe Werthamer-Helfand-Hohenberg formula. The Hall coefficient $R_{H}$ of the\n48-h-reduced superconducting sample is determined to be -0.5$\times10^{-3}$\ncm$^{3}$/C at 30 K, suggesting a higher electron concentration. These findings\nhave a close relationship with homogeneous distributions of the superconducting\ngrains and improved weak links between the superconducting grains in the\npresent synthesis process.\n"} {"abstract": " Automatic speech recognition (ASR) models are typically designed to operate\non a single input data type, e.g., single- or multi-channel audio streamed from\na device. This design decision assumes the primary input data source does not\nchange and if an additional (auxiliary) data source is occasionally available,\nit cannot be used. An ASR model that operates on both primary and auxiliary\ndata can achieve better accuracy compared to a primary-only solution; and a\nmodel that can serve both primary-only (PO) and primary-plus-auxiliary (PPA)\nmodes is highly desirable. In this work, we propose a unified ASR model that\ncan serve both modes. We demonstrate its efficacy in a realistic scenario where\na set of devices typically stream a single primary audio channel, and two\nadditional auxiliary channels only when upload bandwidth allows it. The\narchitecture enables a unique methodology that uses both types of input audio\nduring training time. Our proposed approach achieves up to 12.5% relative\nword-error-rate reduction (WERR) compared to a PO baseline, and up to 16.0%\nrelative WERR in low-SNR conditions. The unique training methodology achieves\nup to 2.5% relative WERR compared to a PPA baseline.\n"} {"abstract": " The magnetic dipole moments of the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,\n$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states are extracted in the framework\nof the light-cone QCD sum rules. In the calculations, we use the hadronic\nmolecular form of interpolating currents, and photon distribution amplitudes to\nget the magnetic dipole moment of $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,\n$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ tetraquark states. The magnetic\ndipole moments are obtained as $\mu_{Z_{c}} = 0.66^{+0.27}_{-0.25}$,\n$\mu_{Z^{1}_{c}}=1.03^{+0.32}_{-0.29}$, $\mu_{Z_{cs}}=0.73^{+0.28}_{-0.26}$,\n$\mu_{Z^1_{cs}}=0.77^{+0.27}_{-0.25}$ for the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,\n$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states, respectively.
We observe that\nthe results obtained for the $Z_{c}(4020)^+$, $Z_{c}(4200)^+$,\n$Z_{cs}(4000)^{+}$ and $Z_{cs}(4220)^{+}$ states are large enough to be\nmeasured experimentally. As a by-product, we predict the magnetic dipole\nmoments of the neutral $Z_{cs}(4000)$ and $Z_{cs}(4220)$ states. The results\npresented here can serve as helpful input for experimental as well as\ntheoretical studies of the properties of hidden-charm tetraquark states with\nand without strangeness.\n"} {"abstract": " Randomized Controlled Trials (RCTs) are often considered the gold standard\nfor concluding on the causal effect of a given intervention on an outcome, but\nthey may lack external validity when the population eligible for the RCT is\nsubstantially different from the target population. Having at hand a sample of\nthe target population of interest allows one to generalize the causal effect.\nIdentifying this target population treatment effect requires covariates in both\nsets to capture all treatment effect modifiers that are shifted between the two\nsets. However, such covariates are often not available in both sets. Standard\nestimators then use either weighting (IPSW), outcome modeling (G-formula), or\ncombine the two in doubly robust approaches (AIPSW). In this paper, after\ncompleting existing proofs on the complete case consistency of those three\nestimators, we compute the expected bias induced by a missing covariate,\nassuming a Gaussian distribution and a semi-parametric linear model. This\nenables sensitivity analysis for each missing covariate pattern, giving the\nsign of the expected bias. We also show that there is no gain in imputing a\npartially-unobserved covariate. Finally, we study the replacement of a missing\ncovariate by a proxy. We illustrate all these results on simulations, as well\nas semi-synthetic benchmarks using data from the Tennessee Student/Teacher\nAchievement Ratio (STAR), and with a real-world example from critical care\nmedicine.\n"} {"abstract": " The present study shows how any De Morgan algebra may be enriched by a\n'perfection operator' that allows one to express the Boolean properties of\nnegation-consistency and negation-determinedness. The corresponding variety of\n'perfect paradefinite algebras' (PP-algebras) is shown to be term-equivalent to\nthe variety of involutive Stone algebras, introduced by R. Cignoli and M.\nSagastume, and more recently studied from a logical perspective by M. Figallo\nand L. Cant\'u. Such equivalence then plays an important role in the\ninvestigation of the 1-assertional logic and also the order-preserving logic\nassociated to the PP-algebras. The latter logic, which we call PP$\leq$,\nhappens to be characterised by a single 6-valued matrix and consists very\nnaturally in a Logic of Formal Inconsistency and Formal Undeterminedness. The\nlogic PP$\leq$ is here axiomatised, by means of an analytic finite\nHilbert-style calculus, and a related axiomatization procedure is presented\nthat covers the logics of other classes of De Morgan algebras as well as\nsuper-Belnap logics enriched by a perfection connective.\n"} {"abstract": " This work uses genetic programming to explore the space of continuous\noptimisers, with the goal of discovering novel ways of doing optimisation. In\norder to keep the search space broad, the optimisers are evolved from scratch\nusing Push, a Turing-complete, general-purpose, language.
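Returning to the RCT-generalization estimators discussed above, here is a schematic IPSW computation on simulated data (covariate shift on one effect modifier); the data-generating process and all names are our own toy construction. Dropping a column of the covariates reproduces the missing-covariate bias qualitatively.

```python
# Schematic IPSW: reweight RCT units by the odds of target-vs-trial
# membership estimated from a logistic sampling-score model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_rct, n_tgt = 2000, 4000
X_rct = rng.normal(0.5, 1.0, size=(n_rct, 2))    # trial sample (shifted X0)
X_tgt = rng.normal(0.0, 1.0, size=(n_tgt, 2))    # target-population sample
T = rng.integers(0, 2, size=n_rct)               # randomized treatment (p=1/2)
Y = 1.0 * T + 2.0 * T * X_rct[:, 0] + rng.normal(size=n_rct)  # X0 modifies it

# Sampling score P(in trial | x), fitted on the stacked samples
X_all = np.vstack([X_rct, X_tgt])
S = np.r_[np.ones(n_rct), np.zeros(n_tgt)].astype(int)
ps = LogisticRegression().fit(X_all, S).predict_proba(X_rct)[:, 1]
w = (1 - ps) / ps                                # target/trial density ratio

tau_ipsw = np.sum(w * 2 * (2 * T - 1) * Y) / np.sum(w)
print(tau_ipsw)   # ~1.0, the target ATE; the unweighted RCT estimate is ~2.0
```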
The resulting\noptimisers are found to be diverse, and explore their optimisation landscapes\nusing a variety of interesting, and sometimes unusual, strategies.\nSignificantly, when applied to problems that were not seen during training,\nmany of the evolved optimisers generalise well, and often outperform existing\noptimisers. This supports the idea that novel and effective forms of\noptimisation can be discovered in an automated manner. This paper also shows\nthat pools of evolved optimisers can be hybridised to further increase their\ngenerality, leading to optimisers that perform robustly over a broad variety of\nproblem types and sizes.\n"} {"abstract": " This article is concerned with the global exact controllability for ideal\nincompressible magnetohydrodynamics in a rectangular domain where the controls\nare situated in both vertical walls. First, global exact controllability via\nboundary controls is established for a related Els\"asser type system by\napplying the return method, introduced in [Coron J.M., Math. Control Signals\nSystems, 5(3) (1992) 295--312]. Similar results are then inferred for the\noriginal magnetohydrodynamics system with the help of a special pressure-like\ncorrector in the induction equation. Overall, the main difficulties stem from\nthe nonlinear coupling between the fluid velocity and the magnetic field in\ncombination with the aim of exactly controlling the system. In order to\novercome some of the obstacles, we introduce ad-hoc constructions, such as\nsuitable initial data extensions outside of the physical part of the domain and\na certain weighted space.\n"} {"abstract": " We consider a stochastic game between three types of players: an inside\ntrader, noise traders and a market maker. In a similar fashion to Kyle's model,\nwe assume that the insider first chooses the size of her market-order and then\nthe market maker determines the price by observing the total order-flow\nresulting from the insider and the noise traders' transactions. In addition to\nthe classical framework, a revenue term is added to the market maker's\nperformance function, which is proportional to the order flow and to the size\nof the bid-ask spread. We derive the maximizer for the insider's revenue\nfunction and prove sufficient conditions for an equilibrium in the game. Then,\nwe use neural network methods to verify that this equilibrium holds. We show\nthat the equilibrium state in this model experiences interesting phase\ntransitions, as the weight of the revenue term in the market maker's\nperformance function changes. Specifically, the asset price in equilibrium\nexperiences three different phases: a linear pricing rule without a spread, a\npricing rule that includes a linear mid-price and a bid-ask spread, and a\nmetastable state with a zero mid-price and a large spread.\n"} {"abstract": " Based on density functional theory (DFT), we investigate the electronic\nproperties of bulk and single-layer ZrTe$_4$Se. The band structure of bulk\nZrTe$_4$Se undergoes a semimetal-to-topological insulator (TI) phase\ntransition under uniaxial strain. The maximum global band gap is 0.189 eV at\n7\% tensile strain. Meanwhile, the Z$_2$ invariants (0; 110) demonstrate\nconclusively that it is a weak topological insulator (WTI). The two Dirac cones\nfor the (001) surface further confirm the nontrivial topological nature.
The\nsingle-layer ZrTe$_4$Se is a quantum spin Hall (QSH) insulator with a band gap\nof 86.4 meV and Z$_2$=1; the nontrivial metallic edge states further confirm\nthe nontrivial topological nature. The maximum global band gap is 0.211 eV at\n8\% tensile strain. When the compressive strain is more than 1\%, the band\nstructure of single-layer ZrTe$_4$Se undergoes a TI-to-semimetal transition.\nThese theoretical analyses may provide a method for searching for\nlarge-band-gap TIs and a platform for topological nanoelectronic device\napplications.\n"} {"abstract": " Attosecond nonlinear Fourier transform (NFT) pump-probe spectroscopy is an\nexperimental technique which allows investigation of the electronic excitation,\nionization, and unimolecular dissociation processes. NFT spectroscopy\nutilizes ultrafast multiphoton ionization in the extreme ultraviolet spectral\nrange and detects the dissociation products of the unstable ionized species. In\nthis paper, a quantum mechanical description of NFT spectra is suggested, which\nis based on second-order perturbation theory in molecule-light interaction\nand high-level ab initio calculations of CO2 and CO2+ in the Franck-Condon\nzone. The calculations capture the characteristic features of the available\nexperimental NFT spectra of CO2. Approximate analytic expressions are derived\nand used to assign the calculated spectra in terms of participating electronic\nstates and harmonic photon frequencies. The developed approach provides a\nconvenient framework within which the origin and the significance of\nnear-harmonic and non-harmonic NFT spectral lines can be analyzed. The\nframework is scalable and the spectra of di- and triatomic species as well as\nthe dependences on the control parameters can be predicted semi-quantitatively.\n"} {"abstract": " This work continues the study of the thermal Hamiltonian, initially proposed\nby J. M. Luttinger in 1964 as a model for the conduction of thermal currents in\nsolids. The previous work [DL] contains a complete study of the \"free\" model in\none spatial dimension along with a preliminary scattering result for\nconvolution-type perturbations. This work complements the results obtained in\n[DL] by providing a detailed analysis of the perturbation theory for the\none-dimensional thermal Hamiltonian. In more detail, the following results are\nestablished: the regularity and decay properties for elements in the domain of\nthe unperturbed thermal Hamiltonian; the determination of a class of\nself-adjoint and relatively compact perturbations of the thermal Hamiltonian;\nthe proof of the existence and completeness of wave operators for a subclass of\nsuch potentials.\n"} {"abstract": " We theoretically and observationally investigate different choices of initial\nconditions for the primordial mode function that are imposed during an epoch\npreceding inflation. By deriving predictions for the observables resulting from\nseveral alternate quantum vacuum prescriptions, we show that some choices of\nvacua are, in principle, observationally distinguishable from others. Comparing\nthese predictions to the Planck 2018 observations via a Bayesian analysis shows\nno significant evidence to favour any of the quantum vacuum prescriptions over\nthe others. In addition, we consider frozen initial conditions, representing a\nwhite-noise initial state at the big-bang singularity.
Under certain\nassumptions the cosmological concordance model and frozen initial conditions\nare found to produce identical predictions for the cosmic microwave background\nanisotropies. Frozen initial conditions may thus provide an alternative\ntheoretical paradigm to explain observations that were previously understood in\nterms of the inflation of a quantum vacuum.\n"} {"abstract": " While self-supervised representation learning (SSL) has received widespread\nattention from the community, recent research argues that its performance\nsuffers a cliff fall when the model size decreases. Current methods mainly rely\non contrastive learning to train the network, and in this work we propose a\nsimple yet effective method, Distilled Contrastive Learning (DisCo), to ease\nthe issue by a large margin. Specifically, we find that the final embedding\nobtained by mainstream SSL methods contains the most fruitful information, and\npropose to distill the final embedding to maximally transmit a teacher's\nknowledge to a lightweight model by constraining the last embedding of the\nstudent to be consistent with that of the teacher. In addition, in the\nexperiment, we find that there exists a phenomenon termed Distilling BottleNeck\nand propose to enlarge the embedding dimension to alleviate this problem. Our\nmethod does not introduce any extra parameter to lightweight models during\ndeployment. Experimental results demonstrate that our method achieves the\nstate-of-the-art on all lightweight models. Particularly, when\nResNet-101/ResNet-50 is used as the teacher to teach EfficientNet-B0, the\nlinear result of EfficientNet-B0 on ImageNet is very close to\nResNet-101/ResNet-50, but the number of parameters of EfficientNet-B0 is only\n9.4\%/16.3\% of ResNet-101/ResNet-50. Code is available at\nhttps://github.com/Yuting-Gao/DisCo-pytorch.\n"} {"abstract": " Irreducible symplectic varieties are higher-dimensional analogues of K3\nsurfaces. In this paper, we prove the finiteness of twists of irreducible\nsymplectic varieties via a fixed finite field extension of characteristic $0$.\nThe main ingredient of the proof is the cone conjecture for irreducible\nsymplectic varieties, which was proved by Markman and Amerik--Verbitsky. As\nbyproducts, we also discuss the cone conjecture over non-closed fields by\nBright--Logan--van Luijk's method. We also give an application to the\nfiniteness of derived equivalent twists. Moreover, we discuss the case of K3\nsurfaces or Enriques surfaces over fields of positive characteristic.\n"} {"abstract": " We report the discovery of a new effect, namely, the effect of magnetically\ninduced transparency. The effect is observed in a magnetically active helically\nstructured periodical medium. Changing the external magnetic field and\nabsorption, one can tune the frequency and the linewidth of the transparency\nband.\n"} {"abstract": " Absolute Concentration Robustness (ACR) was introduced by Shinar and Feinberg\nas a way to define robustness of equilibrium species concentration in a\nmass-action dynamical system. Their aim was to devise a mathematical condition\nthat will ensure robustness in the function of the biological system being\nmodeled. The robustness of function rests on what we refer to as empirical\nrobustness--the concentration of a species remains unvarying, when measured in\nthe long run, across arbitrary initial conditions.
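The notion of empirical robustness is easy to visualise on the archetypal Shinar-Feinberg motif A + B -> 2B (rate k1), B -> A (rate k2), whose positive steady state has A = k2/k1 regardless of the initial condition, provided enough total mass is present. A minimal simulation, with rate values chosen only for illustration:

```python
# Mass-action ODEs for A + B -> 2B (k1) and B -> A (k2): the long-run
# concentration of A converges to k2/k1 from any sufficiently large start.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 1.0

def rhs(t, z):
    a, b = z
    flux = k1 * a * b - k2 * b
    return [-flux, flux]

for a0, b0 in [(1.0, 1.0), (5.0, 0.2), (0.3, 4.0)]:
    sol = solve_ivp(rhs, (0.0, 200.0), [a0, b0], rtol=1e-9)
    print(f"start A={a0}, B={b0} -> A_inf = {sol.y[0, -1]:.4f}")  # ~0.5 = k2/k1

# Starting with B = 0 leaves A frozen instead: the size of such exceptional
# initial sets is exactly what the wide/narrow basin distinction captures.
```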
While there is a positive\ncorrelation between ACR and empirical robustness, ACR is neither necessary nor\nsufficient for empirical robustness, a fact that can be noticed even in simple\nbiochemical systems. To develop a stronger connection with empirical\nrobustness, we define dynamic ACR, a property related to dynamics, rather than\nonly to equilibrium behavior, and one that guarantees convergence to a robust\nvalue. We distinguish between wide basin and narrow basin versions of dynamic\nACR, related to the size of the set of initial values that do not result in\nconvergence to the robust value. We give numerous examples which help\ndistinguish the various flavors of ACR as well as clearly illustrate and\ncircumscribe the conditions that appear in the definitions. We discuss general\ndynamical systems with ACR properties as well as parametrized families of\ndynamical systems related to reaction networks. We discuss connections between\nACR and complex balance, two notions central to the theory of reaction\nnetworks. We give precise conditions for presence and absence of dynamic ACR in\ncomplex balanced systems, which in turn yields a large body of reaction\nnetworks with dynamic ACR.\n"} {"abstract": " In this work, we study the secure index coding problem where there are\nsecurity constraints on both legitimate receivers and eavesdroppers. We develop\ntwo performance bounds (i.e., converse results) on the symmetric secure\ncapacity. The first one is an extended version of the basic acyclic chain bound\n(Liu and Sadeghi, 2019) that takes security constraints into account. The\nsecond converse result is a novel information-theoretic lower bound on the\nsymmetric secure capacity, which is interesting as all the existing converse\nresults in the literature for secure index coding give upper bounds on the\ncapacity.\n"} {"abstract": " In this paper we consider the influence of relativistic rotation on the\nconfinement/deconfinement transition in gluodynamics within lattice simulation.\nWe perform the simulation in the reference frame which rotates with the system\nunder investigation, where rotation is reduced to an external gravitational\nfield. To study the confinement/deconfinement transition, the Polyakov loop and\nits susceptibility are calculated for various lattice parameters and the values\nof angular velocities which are characteristic of heavy-ion collision\nexperiments. Different types of boundary conditions (open, periodic, Dirichlet)\nare imposed in the directions orthogonal to the rotation axis. Our data for the\ncritical temperature are well described by a simple quadratic function\n$T_c(\Omega)/T_c(0) = 1 + C_2 \Omega^2$ with $C_2>0$ for all boundary\nconditions and all lattice parameters used in the simulations. From this we\nconclude that the critical temperature of the confinement/deconfinement\ntransition in gluodynamics increases with increasing angular velocity. This\nconclusion does not depend on the boundary conditions used in our study and we\nbelieve that this is a universal property of gluodynamics.\n"} {"abstract": " Non-destructive evaluation (NDE) through inspection and monitoring is an\nintegral part of asset integrity management. The relationship between the\ncondition of interest and the quantity measured by NDE is described with\nprobabilistic models such as PoD or ROC curves.
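As background for what follows, a widely used PoD form is the log-normal/probit curve PoD(a) = Phi((ln a - mu)/sigma) in the defect size a; the parameter values below are illustrative only, not taken from the paper.

```python
# A standard probability-of-detection (PoD) curve and an 'a90'-style summary.
import numpy as np
from scipy.stats import norm

mu, sigma = np.log(2.0), 0.5     # ln(size) with 50% detection; spread

def pod(a):
    return norm.cdf((np.log(a) - mu) / sigma)

for a in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"defect size {a:>4}: PoD = {pod(a):.3f}")

# Size detected with 90% probability, obtained by inverting the curve:
print("a90 =", np.exp(mu + sigma * norm.ppf(0.90)))
```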
These models are used to assess\nthe quality of the information provided by NDE systems, which is affected by\nfactors such as the experience of the inspector, environmental conditions, ease\nof access, or imprecision in the measuring device. In this paper, we show how\nthe different probabilistic models of NDE are connected within a unifying\nframework. Using this framework, we derive insights into how these models\nshould be learned, calibrated, and applied. We investigate how the choice of\nthe model can affect the maintenance decisions taken on the basis of NDE\nresults. In addition, we analyze the impact of experimental design on the\nperformance of a given NDE system in a decision-making context.\n"} {"abstract": " In a real-world setting, biological agents do not have infinite resources to\nlearn new things. It is thus useful to recycle previously acquired knowledge in\na way that allows for faster, less resource-intensive acquisition of multiple\nnew skills. Neural networks in the brain are likely not entirely re-trained\nwith new tasks, but how they leverage existing computations to learn new tasks\nis not well understood. In this work, we study this question in artificial\nneural networks trained on commonly used neuroscience paradigms. Building on\nrecent work from the multi-task learning literature, we propose two\ningredients: (1) network modularity, and (2) learning task primitives.\nTogether, these ingredients form inductive biases we call structural and\nfunctional, respectively. Using a corpus of nine different tasks, we show that\na modular network endowed with task primitives allows for learning multiple\ntasks well while keeping parameter counts, and updates, low. We also show that\nthe skills acquired with our approach are more robust to a broad range of\nperturbations compared to those acquired with other multi-task learning\nstrategies. This work offers a new perspective on achieving efficient\nmulti-task learning in the brain, and makes predictions for novel neuroscience\nexperiments in which targeted perturbations are employed to explore solution\nspaces.\n"} {"abstract": " We prove that quantum information propagates with a finite velocity in any\nmodel of interacting bosons whose (possibly time-dependent) Hamiltonian\ncontains spatially local single-boson hopping terms along with arbitrary local\ndensity-dependent interactions. More precisely, with density matrix $\rho\n\propto \exp[-\mu N]$ (with $N$ the total boson number), ensemble-averaged\ncorrelators of the form $\langle [A_0,B_r(t)]\rangle $, along with\nout-of-time-ordered correlators, must vanish as the distance $r$ between two\nlocal operators grows, unless $t \ge r/v$ for some finite speed $v$. In\none-dimensional models, we give a useful extension of this result that\ndemonstrates the smallness of all matrix elements of the commutator\n$[A_0,B_r(t)]$ between finite density states if $t/r$ is sufficiently small.\nOur bounds are relevant for physically realistic initial conditions in\nexperimentally realized models of interacting bosons. In particular, we prove\nthat $v$ can scale no faster than linear in number density in the Bose-Hubbard\nmodel: this scaling matches previous results in the high density limit.
The quantum walk formalism\nunderlying our proof provides an alternative method for bounding quantum\ndynamics in models with unbounded operators and infinite-dimensional Hilbert\nspaces, where Lieb-Robinson bounds have been notoriously challenging to prove.\n"} {"abstract": " The Sihl river, located near the city of Zurich in Switzerland, is under\ncontinuous and tight surveillance as it flows directly under the city's main\nrailway station. To issue early warnings and conduct accurate risk\nquantification, a dense network of monitoring stations is necessary inside the\nriver basin. However, as of 2021 only three automatic stations are operated in\nthis region, naturally raising the question: how to extend this network for\noptimal monitoring of extreme rainfall events?\n So far, existing methodologies for station network design have mostly focused\non maximizing interpolation accuracy or minimizing the uncertainty of some\nmodel's parameter estimates. In this work, we propose new principles inspired\nby extreme value theory for optimal monitoring of extreme events. For\nstationary processes, we study the theoretical properties of the induced\nsampling design that yields non-trivial point patterns resulting from a\ncompromise between a boundary effect and the maximization of inter-location\ndistances. For general applications, we propose a theoretically justified\nfunctional peak-over-threshold model and provide an algorithm for sequential\nstation selection. We then issue recommendations for possible extensions of the\nSihl river monitoring network, by efficiently leveraging both station and radar\nmeasurements available in this region.\n"} {"abstract": " Spherical matrix arrays arguably represent an advantageous tomographic\ndetection geometry for non-invasive deep tissue mapping of vascular networks\nand oxygenation with volumetric optoacoustic tomography (VOT). Hybridization of\nVOT with ultrasound (US) imaging remains difficult with this configuration due\nto the relatively large inter-element pitch of spherical arrays. We suggest a\nnew approach for combining VOT and US contrast-enhanced imaging employing\ninjection of clinically-approved microbubbles. Power Doppler (PD) and US\nlocalization imaging were enabled with a sparse US acquisition sequence and\nmodel-based inversion based on infimal convolution of total variation (ICTV)\nregularization. Experiments in tissue-mimicking phantoms and in vivo in mice\ndemonstrate the powerful capabilities of the new dual-mode imaging system for\nblood velocity mapping and anatomical imaging with enhanced resolution and\ncontrast.\n"} {"abstract": " We characterise the selection cuts and clustering properties of a\nmagnitude-limited sample of bright galaxies that is part of the Bright Galaxy\nSurvey (BGS) of the Dark Energy Spectroscopic Instrument (DESI) using the ninth\ndata release of the Legacy Imaging Surveys (DR9). We describe changes in the\nDR9 selection compared to the DR8 one as explored in Ruiz-Macias et al. (2021).\nWe also compare the DR9 selection in three distinct regions: BASS/MzLS in the\nnorth Galactic Cap (NGC), DECaLS in the NGC, and DECaLS in the south Galactic\nCap (SGC). We investigate the systematics associated with the selection and\nassess its completeness by matching the BGS targets with the Galaxy and Mass\nAssembly (GAMA) survey.
We measure the angular clustering for the overall\nbright sample (r $\leq$ 19.5) and as a function of apparent magnitude and\ncolour. This enables us to determine the clustering strength and slope by\nfitting a power-law model that can be used to generate accurate mock catalogues\nfor this tracer. We use a counts-in-cells technique to explore higher-order\nstatistics and cross-correlations with external spectroscopic data sets in\norder to check the evolution of the clustering with redshift and the redshift\ndistribution of the BGS targets using clustering-redshifts. While this work\nvalidates the properties of the BGS bright targets, the final target selection\npipeline and clustering properties of the entire DESI BGS will be fully\ncharacterised and validated with the spectroscopic data of Survey Validation.\n"} {"abstract": " In this work, we aim to improve the expressive capacity of waveform-based\ndiscriminative music networks by modeling both sequential (temporal) and\nhierarchical information in an efficient end-to-end architecture. We present\nMuSLCAT, or Multi-scale and Multi-level Convolutional Attention Transformer, a\nnovel architecture for learning robust representations of complex music tags\ndirectly from raw waveform recordings. We also introduce a lightweight variant\nof MuSLCAT called MuSLCAN, short for Multi-scale and Multi-level Convolutional\nAttention Network. Both MuSLCAT and MuSLCAN model features from multiple scales\nand levels by integrating a frontend-backend architecture. The frontend targets\ndifferent frequency ranges while modeling long-range dependencies and\nmulti-level interactions by using two convolutional attention networks with\nattention-augmented convolution (AAC) blocks. The backend dynamically\nrecalibrates multi-scale and multi-level features extracted from the frontend\nby incorporating self-attention. The difference between MuSLCAT and MuSLCAN is\ntheir backend components. MuSLCAT's backend is a modified version of BERT,\nwhile MuSLCAN's is a simple AAC block. We validate the proposed MuSLCAT and\nMuSLCAN architectures by comparing them to state-of-the-art networks on four\nbenchmark datasets for music tagging and genre recognition. Our experiments\nshow that MuSLCAT and MuSLCAN consistently yield competitive results when\ncompared to state-of-the-art waveform-based models yet require considerably\nfewer parameters.\n"} {"abstract": " Trading in Over-The-Counter (OTC) markets is facilitated by broker-dealers,\nin comparison to public exchanges, e.g., the New York Stock Exchange (NYSE).\nDealers play an important role in stabilizing prices and providing liquidity in\nOTC markets. We apply machine learning methods to model and predict the trading\nbehavior of OTC dealers for US corporate bonds. We create sequences of daily\nhistorical transaction reports for each dealer over a vocabulary of US\ncorporate bonds. Using this history of dealer activity, we predict the future\ntrading decisions of the dealer. We consider a range of neural network-based\nprediction models. We propose an extension, the Pointwise-Product ReZero (PPRZ)\nTransformer model, and demonstrate the improved performance of our model. We\nshow that individual history provides the best predictive model for the most\nactive dealers. For less active dealers, a collective model provides improved\nperformance. Further, clustering dealers based on their similarity can improve\nperformance.
Finally, prediction accuracy varies based on the activity level of\nboth the bond and the dealer.\n"} {"abstract": " Cosmology is well suited to study the effects of long range interactions due\nto the large densities in the early Universe. In this article, we explore how\nthe energy density and equation of state of a fermion system diverge from the\ncommonly assumed ideal gas form under the presence of scalar long range\ninteractions with a range much smaller than cosmological scales. In this\nscenario, \"small\"-scale physics can impact our largest-scale observations. As a\nbenchmark, we apply the formalism to self-interacting neutrinos, performing an\nanalysis of present and future cosmological data. Our results show that the\ncurrent cosmological neutrino mass bound is fully avoided in the presence of a\nlong range interaction, opening the possibility for a laboratory neutrino mass\ndetection in the near future. We also demonstrate an interesting\ncomplementarity between neutrino laboratory experiments and the future EUCLID\nsurvey.\n"} {"abstract": " This paper considers a new problem of adapting a pre-trained model of human\nmesh reconstruction to out-of-domain streaming videos. However, most previous\nmethods based on the parametric SMPL model \\cite{loper2015smpl} underperform in\nnew domains with unexpected, domain-specific attributes, such as camera\nparameters, lengths of bones, backgrounds, and occlusions. Our general idea is\nto dynamically fine-tune the source model on test video streams with additional\ntemporal constraints, such that it can mitigate the domain gaps without\nover-fitting the 2D information of individual test frames. A subsequent\nchallenge is how to avoid conflicts between the 2D and temporal constraints. We\npropose to tackle this problem using a new training algorithm named Bilevel\nOnline Adaptation (BOA), which divides the overall multi-objective optimization\nprocess into two steps of weight probe and weight update in a training\niteration. We demonstrate that BOA leads to state-of-the-art results on two\nhuman mesh reconstruction benchmarks.\n"} {"abstract": " This paper introduces a conditional generative adversarial network to\nredesign a street-level image of urban scenes by generating 1) an urban\nintervention policy, 2) an attention map that localises where intervention is\nneeded, 3) a high-resolution street-level image (1024 X 1024 or 1536 X 1536)\nafter implementing the intervention. We also introduce a new dataset that\ncomprises aligned street-level images of before and after urban interventions\nfrom real-life scenarios that make this research possible. The introduced\nmethod has been trained on different ranges of urban interventions applied to\nrealistic images. The trained model shows strong performance in re-modelling\ncities, outperforming existing methods that apply image-to-image translation in\nother domains, while being computed on a single GPU. This research opens the door\nfor machine intelligence to play a role in re-thinking and re-designing the\ndifferent attributes of cities based on adversarial learning, going beyond the\nmainstream of facial landmarks manipulation or image synthesis from semantic\nsegmentation.\n"} {"abstract": " Laser cooling of solids keeps attracting attention owing to a broad range of\napplications that extends from cm-sized all-optical cryocoolers for\nairborne and space-based applications to cooling of nanoparticles for\nbiological and mesoscopic physics.
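A minimal sketch of the two-step weight-probe/weight-update idea from the BOA abstract above, assuming PyTorch; `loss_2d` and `loss_temporal` are hypothetical callables standing in for the paper's objectives, and the first-order treatment of the probe step is a simplification, not the authors' exact algorithm.

```python
import copy
import torch

def boa_step(model, optimizer, frames, loss_2d, loss_temporal, inner_lr=1e-4):
    # Step 1 (weight probe): a temporary gradient step on the 2D loss only.
    probe = copy.deepcopy(model)
    g2d = torch.autograd.grad(loss_2d(probe, frames), list(probe.parameters()))
    with torch.no_grad():
        for p, g in zip(probe.parameters(), g2d):
            p -= inner_lr * g
    # Step 2 (weight update): gradients of the temporal loss are taken at the
    # probed weights and applied to the original model (first-order bilevel),
    # so the 2D and temporal cues do not directly conflict.
    gt = torch.autograd.grad(loss_temporal(probe, frames), list(probe.parameters()))
    optimizer.zero_grad()
    loss_2d(model, frames).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), gt):
            p.grad = g.clone() if p.grad is None else p.grad + g
    optimizer.step()
```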
Laser cooling of nanoparticles is a\nchallenging task. We propose to use Mie resonances to enhance anti-Stokes\nfluorescence laser cooling in rare-earth (RE) doped nanoparticles made of\nlow-phonon glasses or crystals. As an example, we consider an Yb3+:YAG\nnanosphere pumped at the long wavelength tail of the Yb3+ absorption spectrum\nat 1030 nm. We show that if the radius of the nanosphere is adjusted to the\npump wavelength in such a manner that the pump excites some of its Mie resonant\nmodes, the cooling power density generated in the sample is considerably\nenhanced and the temperature of the sample is consequently considerably (~ 63%)\ndecreased. This concept can be extended to nanoparticles of different shapes\nand made from different low-phonon RE doped materials suitable for laser\ncooling by anti-Stokes fluorescence.\n"} {"abstract": " Estimating 3D human poses from video is a challenging problem. The lack of 3D\nhuman pose annotations is a major obstacle for supervised training and for\ngeneralization to unseen datasets. In this work, we address this problem by\nproposing a weakly-supervised training scheme that does not require 3D\nannotations or calibrated cameras. The proposed method relies on temporal\ninformation and triangulation. Using 2D poses from multiple views as the input,\nwe first estimate the relative camera orientations and then generate 3D poses\nvia triangulation. The triangulation is only applied to the views with high 2D\nhuman joint confidence. The generated 3D poses are then used to train a\nrecurrent lifting network (RLN) that estimates 3D poses from 2D poses. We\nfurther apply a multi-view re-projection loss to the estimated 3D poses and\nenforce the 3D poses estimated from multi-views to be consistent. Therefore,\nour method relaxes the constraints: in practice, only multi-view videos are\nrequired for training, which is convenient for in-the-wild settings. At\ninference, RLN merely requires single-view videos. The proposed method\noutperforms previous works on two challenging datasets, Human3.6M and\nMPI-INF-3DHP. Codes and pretrained models will be publicly available.\n"} {"abstract": " Two dimensional (2D) transition metal dichalcogenide (TMDC) materials, such\nas MoS2, WS2, MoSe2, and WSe2, have received extensive attention in the past\ndecade due to their extraordinary physical properties. The unique properties\nmake them ideal materials for various electronic, photonic and\noptoelectronic devices. However, their performance is limited by the relatively\nweak light-matter interactions due to their atomically thin form factor.\nResonant nanophotonic structures provide a viable way to address this issue and\nenhance light-matter interactions in 2D TMDCs. Here, we provide an overview of\nthis research area, showcasing relevant applications, including exotic light\nemission, absorption and scattering features. We start by overviewing the\nconcept of excitons in 1L-TMDC and the fundamental theory of cavity-enhanced\nemission, followed by a discussion on the recent progress of enhanced light\nemission, strong coupling and valleytronics. The atomically thin nature of\n1L-TMDC enables a broad range of ways to tune its electric and optical\nproperties. Thus, we continue by reviewing advances in TMDC-based tunable\nphotonic devices. Next, we survey the recent progress in enhanced light\nabsorption over narrow and broad bandwidths using 1L or few-layer TMDCs, and\ntheir applications for photovoltaics and photodetectors.
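The triangulation step in the weakly-supervised 3D pose abstract above can be illustrated with a standard direct linear transform (DLT); this generic sketch is not the paper's code and omits the confidence-based view selection.

```python
import numpy as np

def triangulate_joint(projections, points_2d):
    """DLT triangulation of one joint from several views.

    projections: list of 3x4 camera projection matrices.
    points_2d:   list of (x, y) observations, one per view.
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        rows.append(x * P[2] - P[0])  # each view adds two linear
        rows.append(y * P[2] - P[1])  # constraints on the 3D point
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                        # null-space solution
    return X[:3] / X[3]               # de-homogenize
```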
We also review recent\nefforts in engineering light scattering, e.g., inducing Fano resonances,\nwavefront engineering in 1L or few-layer TMDCs by either integrating resonant\nstructures, such as plasmonic/Mie resonant metasurfaces, or directly patterning\nmonolayer/few-layer TMDCs. We then overview the intriguing physical properties\nof different types of van der Waals heterostructures, and their applications in\noptoelectronic and photonic devices. Finally, we draw our opinion on potential\nopportunities and challenges in this rapidly developing field of research.\n"} {"abstract": " Event perception tasks such as recognizing and localizing actions in\nstreaming videos are essential for tackling visual understanding tasks.\nProgress has primarily been driven by the use of large-scale, annotated\ntraining data in a supervised manner. In this work, we tackle the problem of\nlearning \\textit{actor-centered} representations through the notion of\ncontinual hierarchical predictive learning to localize actions in streaming\nvideos without any training annotations. Inspired by cognitive theories of\nevent perception, we propose a novel, self-supervised framework driven by the\nnotion of hierarchical predictive learning to construct actor-centered features\nby attention-based contextualization. Extensive experiments on three benchmark\ndatasets show that the approach can learn robust representations for localizing\nactions using only one epoch of training, i.e., we train the model continually\nin streaming fashion - one frame at a time, with a single pass through training\nvideos. We show that the proposed approach outperforms unsupervised and weakly\nsupervised baselines while offering competitive performance to fully supervised\napproaches. Finally, we show that the proposed model can generalize to\nout-of-domain data without significant loss in performance and without any\nfinetuning for both the recognition and localization tasks.\n"} {"abstract": " One of the main reasons for the success of Evolutionary Algorithms (EAs) is\ntheir general-purposeness, i.e., the fact that they can be applied\nstraightforwardly to a broad range of optimization problems, without any\nspecific prior knowledge. On the other hand, it has been shown that\nincorporating a priori knowledge, such as expert knowledge or empirical\nfindings, can significantly improve the performance of an EA. However,\nintegrating knowledge in EAs poses numerous challenges. It is often the case\nthat the features of the search space are unknown, hence any knowledge\nassociated with the search space properties can hardly be used. In addition, a\npriori knowledge is typically problem-specific and hard to generalize. In this\npaper, we propose a framework, called Knowledge Integrated Evolutionary\nAlgorithm (KIEA), which facilitates the integration of existing knowledge into\nEAs. Notably, the KIEA framework is EA-agnostic (i.e., it works with any\nevolutionary algorithm), problem-independent (i.e., it is not dedicated to a\nspecific type of problem), expandable (i.e., its knowledge base can grow over\ntime). Furthermore, the framework integrates knowledge while the EA is running,\nthus optimizing the use of the needed computational power. In the preliminary\nexperiments shown here, we observe that the KIEA framework produces in the\nworst case an 80% improvement on the convergence time, w.r.t.
the corresponding\n\"knowledge-free\" EA counterpart.\n"} {"abstract": " Object handover is a common human collaboration behavior that attracts\nattention from researchers in Robotics and Cognitive Science. Though visual\nperception plays an important role in the object handover task, the whole\nhandover process has rarely been specifically explored. In this work, we propose a\nnovel richly-annotated dataset, H2O, for visual analysis of human-human object\nhandovers. The H2O, which contains 18K video clips involving 15 people who hand\nover 30 objects to each other, is a multi-purpose benchmark. It can support\nseveral vision-based tasks, from which, we specifically provide a baseline\nmethod, RGPNet, for a less-explored task named Receiver Grasp Prediction.\nExtensive experiments show that the RGPNet can produce plausible grasps based\non the giver's hand-object states in the pre-handover phase. Besides, we also\nreport the hand and object pose errors with existing baselines and show that\nthe dataset can serve as the video demonstrations for robot imitation learning\non the handover task. Dataset, model and code will be made public.\n"} {"abstract": " Prior works have found it beneficial to combine provably noise-robust loss\nfunctions, e.g., mean absolute error (MAE), with standard categorical loss\nfunctions, e.g., cross entropy (CE), to improve their learnability. Here, we\npropose to use Jensen-Shannon divergence as a noise-robust loss function and\nshow that it interestingly interpolates between CE and MAE with a controllable\nmixing parameter. Furthermore, we make a crucial observation that CE exhibits\nlower consistency around noisy data points. Based on this observation, we adopt\na generalized version of the Jensen-Shannon divergence for multiple\ndistributions to encourage consistency around data points. Using this loss\nfunction, we show state-of-the-art results on both synthetic (CIFAR) and\nreal-world (e.g., WebVision) noise with varying noise rates.\n"} {"abstract": " In this paper, we investigate the outage performance of an intelligent\nreflecting surface (IRS)-assisted non-orthogonal multiple access (NOMA) uplink,\nin which a group of the surface reflecting elements are configured to boost the\nsignal of one of the user equipments (UEs), while the remaining elements are\nused to boost the other UE. By approximating the received powers as Gamma\nrandom variables, tractable expressions for the outage probability under NOMA\ninterference cancellation are obtained. We evaluate the outage over different\nsplits of the elements and varying pathloss differences between the two UEs.\nThe analysis shows that for small pathloss differences, the split should be\nchosen such that most of the IRS elements are configured to boost the stronger\nUE, while for large pathloss differences, it is more beneficial to boost the\nweaker UE. Finally, we investigate a robust selection of the elements' split\nunder the criterion of minimizing the maximum outage between the two UEs.\n"} {"abstract": " This paper addresses issues surrounding the concept of fractional quantum\nmechanics, related to light propagation in inhomogeneous nonlinear media,\nspecifically restricted to so-called gravitational optics. Besides the\nSchr\\\"odinger-Newton equation, we are also concerned with linear and nonlinear\nAiry beam accelerations in flat and curved spaces and fractal photonics,\nrelated to the nonlinear Schr\\\"odinger equation, where the impact of the fractional\nLaplacian is discussed.
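A sketch of the noise-robust Jensen-Shannon loss described in the abstract above (two-distribution case), assuming PyTorch; the exact normalization that makes the CE/MAE interpolation precise follows the paper and is omitted here.

```python
import torch
import torch.nn.functional as F

def js_loss(logits, targets, num_classes, pi=0.5, eps=1e-12):
    """Weighted Jensen-Shannon divergence between predictions and labels.

    pi is the mixing parameter: small pi behaves CE-like, large pi MAE-like
    (up to the scaling discussed in the paper).
    """
    p = F.softmax(logits, dim=1)
    y = F.one_hot(targets, num_classes).float()
    m = (1 - pi) * y + pi * p                      # mixture distribution
    kl_ym = (y * (torch.log(y + eps) - torch.log(m + eps))).sum(1)
    kl_pm = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(1)
    return ((1 - pi) * kl_ym + pi * kl_pm).mean()
```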
Another important feature of the gravitational optics'\nimplementation is its geometry with the paraxial approximation, where quantum\nmechanics, in particular, fractional quantum mechanics, is an effective\ndescription of optical effects. In this case, fractional-time differentiation\nreflects this geometry effect as well.\n"} {"abstract": " In the first part of the paper, we study the Cauchy problem for the\nadvection-diffusion equation $\\partial_t v + \\text{div }(v\\boldsymbol{b} ) =\n\\Delta v$ associated with a merely integrable, divergence-free vector field\n$\\boldsymbol{b}$ defined on the torus. We first introduce two notions of\nsolutions (distributional and parabolic), recalling the corresponding available\nresults of existence and uniqueness. Then, we establish a regularity criterion,\nwhich in turn provides uniqueness for distributional solutions. This is\nmotivated by the recent results in [31] where the authors showed non-uniqueness\nof distributional solutions to the advection-diffusion equation even though the\nparabolic one is unique. In the second part of the paper, we precisely describe\nthe vanishing viscosity scheme for the transport/continuity equation drifted by\n$\\boldsymbol{b}$, i.e. $\\partial_t u + \\text{div }(u\\boldsymbol{b} ) = 0$.\nUnder Sobolev assumptions on $\\boldsymbol{b} $, we give two independent proofs\nof the convergence of such scheme to the Lagrangian solution of the transport\nequation. The first proof slightly generalizes the original one of [21]. The\nother one is quantitative and yields rates of convergence. This offers a\ncompletely general selection criterion for the transport equation (even beyond\nthe distributional regime) which compensates the wild non-uniqueness phenomenon\nfor solutions with low integrability arising from convex integration schemes,\nas shown in recent works [10, 31, 32, 33], and rules out the possibility of\nanomalous dissipation.\n"} {"abstract": " Quantile regression presents a complete picture of the effects on the\nlocation, scale, and shape of the dependent variable at all points, not just\nthe mean. We focus on two challenges for citation count analysis by quantile\nregression: discontinuity and substantial mass points at lower counts. A\nBayesian hurdle quantile regression model for count data with a substantial\nmass point at zero was proposed by King and Song (2019). It uses quantile\nregression for modeling the nonzero data and logistic regression for modeling\nthe probability of zeros versus nonzeros. We show that substantial mass points\nfor low citation counts will almost certainly also affect parameter estimation\nin the quantile regression part of the model, similar to a mass point at zero.\nWe update the King and Song model by shifting the hurdle point past the main\nmass points. This model delivers more accurate quantile regression for\nmoderately to highly cited articles, especially at quantiles corresponding to\nvalues just beyond the mass points, and enables estimates of the extent to\nwhich factors influence the chances that an article will be low cited. To\nillustrate the potential of this method, it is applied to simulated citation\ncounts and data from Scopus.\n"} {"abstract": " Financial markets are a source of non-stationary multidimensional time series\nwhich has been drawing attention for decades.
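The shifted-hurdle idea in the citation-count abstract above can be sketched as a two-part frequentist analogue (the paper's model is Bayesian): logistic regression for being at or below the hurdle, quantile regression beyond it. Variable names and the hurdle value are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def fit_shifted_hurdle(X, counts, hurdle=3, q=0.9):
    X = sm.add_constant(X)
    # Part 1: probability of being low cited (at or below the shifted hurdle).
    low = (counts <= hurdle).astype(float)
    logit_fit = sm.Logit(low, X).fit(disp=0)
    # Part 2: quantile regression on log counts above the hurdle, where the
    # mass points at 0, 1, 2, ... no longer distort the estimates.
    above = counts > hurdle
    quant_fit = sm.QuantReg(np.log(counts[above]), X[above]).fit(q=q)
    return logit_fit, quant_fit
```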
Each financial instrument has its own\nproperties that change over time, making its analysis a complex task.\nBetter understanding and improved methods for financial time\nseries analysis are essential for successful operation in financial markets. In\nthis study we propose a volume-based data pre-processing method for making\nfinancial time series more suitable for machine learning pipelines. We use a\nstatistical approach for assessing the performance of the method. Namely, we\nformally state the hypotheses, set up associated classification tasks, compute\neffect sizes with confidence intervals, and run statistical tests to validate\nthe hypotheses. We additionally assess the trading performance of the proposed\nmethod on historical data and compare it to a previously published approach.\nOur analysis shows that the proposed volume-based method allows successful\nclassification of the financial time series patterns, and also leads to better\nclassification performance than a price action-based method, excelling\nspecifically on more liquid financial instruments. Finally, we propose an\napproach for obtaining feature interactions directly from tree-based models, using\nthe CatBoost estimator as an example, and formally assess, with a positive outcome,\nthe relatedness of the proposed approach and SHAP feature interactions.\n"} {"abstract": " Autonomous systems like aircraft and assistive robots often operate in\nscenarios where guaranteeing safety is critical. Methods like Hamilton-Jacobi\nreachability can provide guaranteed safe sets and controllers for such systems.\nHowever, often these same scenarios have unknown or uncertain environments,\nsystem dynamics, or predictions of other agents. As the system is operating, it\nmay learn new knowledge about these uncertainties and should therefore update\nits safety analysis accordingly. To date, however, work on learning and updating safety\nanalysis has been limited to small systems of about two dimensions due to the\ncomputational complexity of the analysis. In this paper we synthesize several\ntechniques to speed up computation: decomposition, warm-starting, and adaptive\ngrids. Using this new framework we can update safe sets by one or more orders\nof magnitude faster than prior work, making this technique practical for many\nrealistic systems. We demonstrate our results on simulated 2D and 10D\nnear-hover quadcopters operating in a windy environment.\n"} {"abstract": " Structural behaviour of PbMn$_{7}$O$_{12}$ has been studied by high\nresolution synchrotron X-ray powder diffraction. This material belongs to a\nfamily of quadruple perovskite manganites that exhibit an incommensurate\nstructural modulation associated with an orbital density wave. It has been\nfound that the structural modulation in PbMn$_{7}$O$_{12}$ onsets at 294 K with\nthe incommensurate propagation vector $\\mathbf{k}_s=(0,0,\\sim2.08)$. At 110 K\nanother structural transition takes place where the propagation vector suddenly\ndrops down to a \\emph{quasi}-commensurate value $\\mathbf{k}_s=(0,0,2.0060(6))$.\nThe \\emph{quasi}-commensurate phase is stable in the temperature range of 40 K -\n110 K, and below 40 K the propagation vector jumps back to the incommensurate\nvalue $\\mathbf{k}_s=(0,0,\\sim2.06)$. Both low temperature structural\ntransitions are strongly first order with large thermal hysteresis.
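The volume-based pre-processing in the financial time series abstract above is not spelled out; the snippet below shows one generic construction in that spirit, constant-volume bars, with illustrative names only.

```python
import numpy as np

def volume_bars(prices, volumes, bar_volume):
    """Aggregate a trade stream into bars that each hold ~bar_volume volume,
    so bars are denser when trading is active (a more stationary series)."""
    bars, bucket, acc = [], [], 0.0
    for price, vol in zip(prices, volumes):
        bucket.append(price)
        acc += vol
        if acc >= bar_volume:
            bars.append((bucket[0], max(bucket), min(bucket), bucket[-1]))  # OHLC
            bucket, acc = [], 0.0
    return np.array(bars)
```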
The orbital\ndensity wave in the \\emph{quasi}-commensurate phase has been found to be\nsubstantially suppressed in comparison with the incommensurate phases, which\nnaturally explains the unusual magnetic behaviour recently reported for this\nperovskite. Analysis of the refined structural parameters revealed that\nthe presence of the \\emph{quasi}-commensurate phase is likely to be associated\nwith a competition between the Pb$^{2+}$ lone electron pair and Mn$^{3+}$\nJahn-Teller instabilities.\n"} {"abstract": " We describe the new version (v3.06h) of the code HFODD that solves the\nuniversal nonrelativistic nuclear DFT Hartree-Fock or Hartree-Fock-Bogolyubov\nproblem by using the Cartesian deformed harmonic-oscillator basis. In the new\nversion, we implemented the following new features: (i) zero-range three- and\nfour-body central terms, (ii) zero-range three-body gradient terms, (iii)\nzero-range tensor terms, (iv) zero-range isospin-breaking terms, (v)\nfinite-range higher-order regularized terms, (vi) finite-range separable terms,\n(vii) zero-range two-body pairing terms, (viii) multi-quasiparticle blocking,\n(ix) Pfaffian overlaps, (x) particle-number and parity symmetry restoration,\n(xi) axialization, (xii) Wigner functions, (xiii) choice of the\nharmonic-oscillator basis, (xiv) fixed Omega partitions, (xv) consistency\nformula between energy and fields, and we corrected several errors of the\nprevious versions.\n"} {"abstract": " This note derives parametrizations for surfaces of revolution that satisfy an\naffine-linear relation between their respective curvature radii. Alongside,\nparametrizations for the uniform normal offsets of those surfaces are obtained.\nThose parametrizations are found explicitly for countably-infinitely many of\nthem, and of those, it is shown which are algebraic. Lastly, for those surfaces\nwhich have a constant ratio of principal curvatures, parametrizations with a\nconstant angle between the parameter curves are found.\n"} {"abstract": " It is well-known that the univariate Multiquadric quasi-interpolation\noperator is constructed based on the piecewise linear interpolation by |x|. In\nthis paper, we first introduce a new transcendental RBF based on the hyperbolic\ntangent function as a smooth approximant to f(r)=r with higher accuracy and\nbetter convergence properties than the multiquadric. Then Wu-Schaback's\nquasi-interpolation formula is rewritten using the proposed RBF. It preserves\nconvexity and monotonicity. We prove that the proposed scheme converges with a\nrate of O(h^2) and has a higher degree of smoothness. Some numerical\nexperiments are given in order to demonstrate the efficiency and accuracy of\nthe method.\n"} {"abstract": " Satisfiability of boolean formulae (SAT) has been a topic of research in\nlogic and computer science for a long time. In this paper we are interested in\nunderstanding the structure of satisfiable and unsatisfiable sentences. In\nprevious work we initiated a new approach to SAT by formulating a mapping from\npropositional logic sentences to graphs, allowing us to find structural\nobstructions to 2SAT (clauses with exactly 2 literals) in terms of graphs. Here\nwe generalize these ideas to multi-hypergraphs in which the edges can have more\nthan 2 vertices and can have multiplicity. This is needed for understanding the\nstructure of SAT for sentences made of clauses with 3 or more literals (3SAT),\nwhich is a building block of NP-completeness theory.
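A sketch of a Wu-Schaback-style quasi-interpolant built from a smooth tanh-based approximant to |x|, as in the RBF abstract above. The kernel choice phi(r) = r*tanh(r/c) and the omission of boundary corrections are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def quasi_interpolant(x_nodes, f_vals, x_eval, c=0.1):
    """Quasi-interpolation on equally spaced nodes: interior basis functions
    are scaled second differences of phi, which tend to the classical
    piecewise-linear hat functions as c -> 0."""
    h = x_nodes[1] - x_nodes[0]
    phi = lambda r: r * np.tanh(r / c)  # smooth approximant to |r|
    out = np.zeros_like(x_eval, dtype=float)
    for j in range(1, len(x_nodes) - 1):  # boundary terms omitted for brevity
        psi = (phi(x_eval - x_nodes[j - 1]) - 2 * phi(x_eval - x_nodes[j])
               + phi(x_eval - x_nodes[j + 1])) / (2 * h)
        out += f_vals[j] * psi
    return out
```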
We introduce a decision\nproblem that we call GraphSAT, as a first step towards a structural view of\nSAT. Each propositional logic sentence can be mapped to a multi-hypergraph by\nassociating each variable with a vertex (ignoring the negations) and each\nclause with a hyperedge. Such a graph then becomes a representative of a\ncollection of possible sentences and we can then formulate the notion of\nsatisfiability of such a graph. With this coarse representation of classes of\nsentences one can then investigate structural obstructions to SAT. To make the\nproblem tractable, we prove a local graph rewriting theorem which allows us to\nsimplify the neighborhood of a vertex without knowing the rest of the graph. We\nuse this to deduce several reduction rules, allowing us to modify a graph\nwithout changing its satisfiability status which can then be used in a program\nto simplify graphs. We study a subclass of 3SAT by examining sentences living\non triangulations of surfaces and show that for any compact surface there\nexists a triangulation that can support unsatisfiable sentences, giving\nspecific examples of such triangulations for various surfaces.\n"} {"abstract": " Models whose ground states can be written as an exact matrix product state\n(MPS) provide valuable insights into phases of matter. While MPS-solvable\nmodels are typically studied as isolated points in a phase diagram, they can\nbelong to a connected network of MPS-solvable models, which we call the MPS\nskeleton. As a case study where we can completely unearth this skeleton, we\nfocus on the one-dimensional BDI class -- non-interacting spinless fermions\nwith time-reversal symmetry. This class, labelled by a topological winding\nnumber, contains the Kitaev chain and is Jordan-Wigner-dual to various\nsymmetry-breaking and symmetry-protected topological (SPT) spin chains. We show\nthat one can read off from the Hamiltonian whether its ground state is an MPS:\ndefining a polynomial whose coefficients are the Hamiltonian parameters,\nMPS-solvability corresponds to this polynomial being a perfect square. We\nprovide an explicit construction of the ground state MPS, its bond dimension\ngrowing exponentially with the range of the Hamiltonian. This complete\ncharacterization of the MPS skeleton in parameter space has three significant\nconsequences: (i) any two topologically distinct phases in this class admit a\npath of MPS-solvable models between them, including the phase transition which\nobeys an area law for its entanglement entropy; (ii) we illustrate that the\nsubset of MPS-solvable models is dense in this class by constructing a sequence\nof MPS-solvable models which converge to the Kitaev chain (equivalently, the\nquantum Ising chain in a transverse field); (iii) a subset of these MPS states\ncan be particularly efficiently processed on a noisy intermediate-scale quantum\ncomputer.\n"} {"abstract": " In 1979 I. Cior\\u{a}nescu and L. 
Zsid\\'o have proved a minimum modulus\ntheorem for entire functions dominated by the restriction to the positive half\naxis of a canonical product of genus zero, having all roots on the positive\nimaginary axis and satisfying a certain condition.\n Here we prove that the above result is optimal: if a canonical product\n{\\omega} of genus zero, having all roots on the positive imaginary axis, does\nnot satisfy the condition in the 1979 paper, then always there exists an entire\nfunction dominated by the restriction to the positive half axis of {\\omega},\nwhich does not satisfy the desired minimum modulus conclusion. This has\nrelevant implication concerning the subjectivity of ultra differential\noperators with constant coefficients.\n"} {"abstract": " With the massive damage in the world caused by Coronavirus Disease 2019\nSARS-CoV-2 (COVID-19), many related research topics have been proposed in the\npast two years. The Chest Computed Tomography (CT) scans are the most valuable\nmaterials to diagnose the COVID-19 symptoms. However, most schemes for COVID-19\nclassification of Chest CT scan is based on a single-slice level, implying that\nthe most critical CT slice should be selected from the original CT scan volume\nmanually. We simultaneously propose 2-D and 3-D models to predict the COVID-19\nof CT scan to tickle this issue. In our 2-D model, we introduce the Deep\nWilcoxon signed-rank test (DWCC) to determine the importance of each slice of a\nCT scan to overcome the issue mentioned previously. Furthermore, a\nConvolutional CT scan-Aware Transformer (CCAT) is proposed to discover the\ncontext of the slices fully. The frame-level feature is extracted from each CT\nslice based on any backbone network and followed by feeding the features to our\nwithin-slice-Transformer (WST) to discover the context information in the pixel\ndimension. The proposed Between-Slice-Transformer (BST) is used to aggregate\nthe extracted spatial-context features of every CT slice. A simple classifier\nis then used to judge whether the Spatio-temporal features are COVID-19 or\nnon-COVID-19. The extensive experiments demonstrated that the proposed CCAT and\nDWCC significantly outperform the state-of-the-art methods.\n"} {"abstract": " Recovery of power flow to critical infrastructures, after grid failure, is a\ncrucial need arising in scenarios that are increasingly becoming more frequent.\nThis article proposes a power transition and recovery strategy by proposing a\nmode-dependent droop control-based inverters. The control strategy of inverters\nachieves the following objectives 1) regulate the output active and reactive\npower by the droop-based inverters to a desired value while operating in\non-grid mode 2) seamless transition and recovery of power flow injections into\nthe critical loads in the network by inverters operating in off-grid mode after\nthe main grid fails; 3) require minimal information of grid/network status and\nconditions for the mode transition of droop control. A framework for assessing\nthe stability of the system and to guide the choice of parameters for\ncontrollers is developed using control-oriented modeling. A comprehensive\ncontroller hardware-in-the-loop-based real-time simulation study on a\ntest-system based on the realistic electrical network of M-Health Fairview,\nUniversity of Minnesota Medical Center, corroborates the efficacy of the\nproposed controller strategy.\n"} {"abstract": " Entity linking (EL) for the rapidly growing short text (e.g. 
search queries\nand news titles) is critical to industrial applications. Most existing\napproaches relying on adequate context for long text EL are not effective for\nthe concise and sparse short text. In this paper, we propose a novel framework\ncalled Multi-turn Multiple-choice Machine reading comprehension (M3) to solve\nthe short text EL from a new perspective: a query is generated for each\nambiguous mention exploiting its surrounding context, and an option selection\nmodule is employed to identify the golden entity from candidates using the\nquery. In this way, the M3 framework lets the limited context interact sufficiently with\ncandidate entities during the encoding process, while implicitly considering\nthe dissimilarities inside the candidate bunch in the selection stage. In\naddition, we design a two-stage verifier incorporated into M3 to address the\ncommonly encountered unlinkable problem in short text. To further consider the\ntopical coherence and interdependence among referred entities, M3 leverages a\nmulti-turn fashion to deal with mentions in a sequential manner by revisiting\nhistorical cues. Evaluation shows that our M3 framework achieves\nstate-of-the-art performance on five Chinese and English datasets for\nreal-world short text EL.\n"} {"abstract": " We give a review of the calculations of the masses of tetraquarks with two\nand four heavy quarks in the framework of the relativistic quark model based on\nthe quasipotential approach and QCD. The diquark-antidiquark picture of heavy\ntetraquarks is used. The quasipotentials of the quark-quark and\ndiquark-antidiquark interactions are constructed similarly to the previous\nconsideration of mesons and baryons. Diquarks are considered in the colour\ntriplet state. It is assumed that the diquark and antidiquark interact in the\ntetraquark as a whole and the internal structure of the diquarks is taken into\naccount by the calculated form factor of the diquark-gluon interaction. All\nparameters of the model are kept fixed from our previous calculations of meson\nand baryon properties. A detailed comparison of the obtained predictions for\nheavy tetraquark masses with available experimental data is given. Many\ncandidates for tetraquarks are found. It is argued that the structures in the\ndi-$J/\\psi$ mass spectrum observed recently by the LHCb Collaboration can be\ninterpreted as $cc\\bar c\\bar c$ tetraquarks.\n"} {"abstract": " In the last two decades, optical vortices carried by twisted light wavefronts\nhave attracted a great deal of interest, providing not only new physical\ninsights into light-matter interactions, but also a transformative platform for\nboosting optical information capacity. Meanwhile, advances in nanoscience and\nnanotechnology lead to the emerging field of nanophotonics, offering an\nunprecedented level of light manipulation via nanostructured materials and\ndevices. Many exciting ideas and concepts come up when optical vortices meet\nnanophotonic devices. Here, we provide a mini review on recent achievements\nmade in nanophotonics for the generation and detection of optical vortices and\nsome of their applications.\n"} {"abstract": " Decision makers involved in the management of civil assets and systems\nusually take actions under constraints imposed by societal regulations. Some of\nthese constraints are related to epistemic quantities, such as the probability of\nfailure events and the corresponding risks. Sensors and inspectors can provide\nuseful information supporting the control process (e.g.
the maintenance process\nof an asset), and decisions about collecting this information should rely on an\nanalysis of its cost and value. When societal regulations encode an economic\nperspective that is not aligned with that of the decision makers, the Value of\nInformation (VoI) can be negative (i.e., information sometimes hurts), and\nalmost irrelevant information can even have a significant value (either\npositive or negative) for agents acting under these epistemic constraints. We\nrefer to these phenomena as Information Avoidance (IA) and Information\nOverValuation (IOV). In this paper, we illustrate how to assess VoI in\nsequential decision making under epistemic constraints (such as those imposed by\nsocietal regulations), by modeling a Partially Observable Markov Decision\nProcess (POMDP) and evaluating non-optimal policies via Finite State\nControllers (FSCs). We focus on the value of collecting information at the current\ntime and on that of collecting sequential information; we illustrate how these\nvalues are related, and we discuss how IA and IOV can occur in those settings.\n"} {"abstract": " In this work, we show the generative capability of an image classifier\nnetwork by synthesizing high-resolution, photo-realistic, and diverse images at\nscale. The overall methodology, called Synthesize-It-Classifier (STIC), does\nnot require an explicit generator network to estimate the density of the data\ndistribution and sample images from that, but instead uses the classifier's\nknowledge of the boundary to perform gradient ascent w.r.t. class logits and\nthen synthesizes images using the Gram Matrix Metropolis Adjusted Langevin\nAlgorithm (GRMALA) by drawing on a blank canvas. During training, the\nclassifier iteratively uses these synthesized images as fake samples and\nre-estimates the class boundary in a recurrent fashion to improve both the\nclassification accuracy and the quality of synthetic images. The STIC shows that the\nmixing of the hard fake samples (i.e. those synthesized by the one-hot class\nconditioning) and the soft fake samples (which are synthesized as a convex\ncombination of classes, i.e. a mixup of classes) improves class interpolation.\nWe demonstrate an Attentive-STIC network that shows an iterative drawing of\nsynthesized images on the ImageNet dataset that has thousands of classes. In\naddition, we introduce synthesis using a class-conditional score classifier\n(Score-STIC) instead of a normal image classifier and show improved results on\nseveral real-world datasets, i.e. ImageNet, LSUN, and CIFAR 10.\n"} {"abstract": " We prove new $L^p$-$L^q$-estimates for solutions to elliptic differential\noperators with constant coefficients in $\\mathbb{R}^3$. We use the estimates\nfor the decay of the Fourier transform of particular surfaces in $\\mathbb{R}^3$\nwith vanishing Gaussian curvature due to Erd\\H{o}s--Salmhofer to derive new\nFourier restriction--extension estimates. These allow for constructing\ndistributional solutions in $L^q(\\mathbb{R}^3)$ for $L^p$-data via limiting\nabsorption by well-known means.\n"} {"abstract": " For the Langevin model of the dynamics of a Brownian particle with\nperturbations orthogonal to its current velocity, in a regime when the particle\nvelocity modulus becomes constant, an equation is derived for the characteristic function\n$\\psi (t,\\lambda )=M\\left[\\exp (\\lambda ,x(t))/V={\\rm v}(0)\\right]$ of the\nposition $x(t)$ of the Brownian particle.
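The gradient-ascent synthesis described in the STIC abstract above can be sketched as plain Langevin ascent on a class logit starting from a blank canvas; the Gram-matrix adjustment (GRMALA) is not reproduced, and the classifier interface is an assumption.

```python
import torch

def synthesize(classifier, target_class, steps=200, lr=1.0, noise=0.01):
    """Langevin-style ascent on the target class logit from a blank canvas."""
    x = torch.zeros(1, 3, 32, 32, requires_grad=True)  # blank canvas
    for _ in range(steps):
        logit = classifier(x)[0, target_class]   # assumes logits of shape [1, C]
        grad, = torch.autograd.grad(logit, x)
        with torch.no_grad():
            x += lr * grad + noise * torch.randn_like(x)  # ascent + noise
            x.clamp_(0.0, 1.0)
    return x.detach()
```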
The obtained results confirm the\nconclusion that the model of the dynamics of a Brownian particle, which is\nconstructed on the basis of an unconventional physical interpretation of the\nLangevin equations, i.e. stochastic equations with orthogonal influences,\nleads to the interpretation of an ensemble of Brownian particles as a system\nwith wave properties. These results are consistent with the previously obtained\nconclusions that, with a certain agreement of the coefficients in the original\nstochastic equation, for small random influences and friction, the Langevin\nequations lead to a description of the probability density of the position of a\nparticle based on wave equations. For large random influences and friction, the\nprobability density is a solution to the diffusion equation, with a diffusion\ncoefficient that is lower than in the classical diffusion model.\n"} {"abstract": " The advent of deep learning has brought an impressive advance to monocular\ndepth estimation, e.g., supervised monocular depth estimation has been\nthoroughly investigated. However, large RGB-to-depth datasets\nmay not always be available since collecting accurate depth ground truth\nfor RGB images is a time-consuming and expensive task. Although the\nnetwork can be trained on an alternative dataset to overcome the dataset scale\nproblem, the trained model is hard to generalize to the target domain due to\nthe domain discrepancy. Adversarial domain alignment has demonstrated its\nefficacy in mitigating the domain shift on simple image classification tasks in\nprevious works. However, traditional approaches hardly handle the conditional\nalignment as they solely consider the feature map of the network. In this\npaper, we propose an adversarial training model that leverages semantic\ninformation to narrow the domain gap. Based on the experiments conducted on the\ndatasets for the monocular depth estimation task including KITTI and\nCityscapes, the proposed compact model achieves state-of-the-art performance\ncomparable to complex latest models and shows favorable results on boundaries\nand objects at far distances.\n"} {"abstract": " Reliable and accurate localization and mapping are key components of most\nautonomous systems. Besides geometric information about the mapped environment,\nthe semantics plays an important role to enable intelligent navigation\nbehaviors. In most realistic environments, this task is particularly\ncomplicated due to dynamics caused by moving objects, which can corrupt the\nmapping step or derail localization. In this paper, we propose an extension of\na recently published surfel-based mapping approach exploiting three-dimensional\nlaser range scans by integrating semantic information to facilitate the mapping\nprocess. The semantic information is efficiently extracted by a fully\nconvolutional neural network and rendered on a spherical projection of the\nlaser range data. This computed semantic segmentation results in point-wise\nlabels for the whole scan, allowing us to build a semantically-enriched map\nwith labeled surfels. This semantic map enables us to reliably filter moving\nobjects, but also improve the projective scan matching via semantic\nconstraints.
Our experimental evaluation on challenging highway sequences from\nthe KITTI dataset with very few static structures and a large number of moving cars\nshows the advantage of our semantic SLAM approach in comparison to a purely\ngeometric, state-of-the-art approach.\n"} {"abstract": " Robotic fabric manipulation has applications in home robotics, textiles,\nsenior care and surgery. Existing fabric manipulation techniques, however, are\ndesigned for specific tasks, making it difficult to generalize across different\nbut related tasks. We build upon the Visual Foresight framework to learn fabric\ndynamics that can be efficiently reused to accomplish different sequential\nfabric manipulation tasks with a single goal-conditioned policy. We extend our\nearlier work on VisuoSpatial Foresight (VSF), which learns visual dynamics on\ndomain randomized RGB images and depth maps simultaneously and completely in\nsimulation. In this earlier work, we evaluated VSF on multi-step fabric\nsmoothing and folding tasks against 5 baseline methods in simulation and on the\nda Vinci Research Kit (dVRK) surgical robot without any demonstrations at train\nor test time. A key finding was that depth sensing significantly improves\nperformance: RGBD data yields an 80% improvement in fabric folding success rate\nin simulation over pure RGB data. In this work, we vary 4 components of VSF,\nincluding data generation, visual dynamics model, cost function, and\noptimization procedure. Results suggest that training visual dynamics models\nusing longer, corner-based actions can improve the efficiency of fabric folding\nby 76% and enable a physical sequential fabric folding task that VSF could not\npreviously perform with 90% reliability. Code, data, videos, and supplementary\nmaterial are available at https://sites.google.com/view/fabric-vsf/.\n"} {"abstract": " It is essential to help drivers have appropriate understandings of level 2\nautomated driving systems to maintain driving safety. A human machine interface\n(HMI) was proposed to present real-time results of image recognition by the\nautomated driving systems to drivers. It was expected that drivers could better\nunderstand the capabilities of the systems by observing the proposed HMI.\nDriving simulator experiments with 18 participants were performed to evaluate\nthe effectiveness of the proposed system. Experimental results indicated that\nthe proposed HMI could effectively inform drivers of potential risks\ncontinuously and help drivers better understand the level 2 automated driving\nsystems.\n"} {"abstract": " Using measurements from the PAMELA and ARINA spectrometers onboard the RESURS\nDK-1 satellite, we have examined the 27-day intensity variations in galactic\ncosmic ray (GCR) proton fluxes in 2007-2008. The PAMELA and ARINA data allow\nfor the first time a study of time profiles and the rigidity dependence of the\n27-day variations observed directly in space in a wide rigidity range from ~300\nMV to several GV. We find that the rigidity dependence of the amplitude of the\n27-day GCR variations cannot be described by the same power-law at both low and\nhigh energies. A flat interval occurs at rigidity R = <0.6-1.0> GV with a\npower-law index gamma = - 0.13+/-0.44 for PAMELA, whereas for R >= 1 GV the\npower-law dependence is evident with index gamma = - 0.51+/-0.11.
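The moving-object filtering enabled by the semantic map in the SLAM abstract above reduces, in its simplest form, to masking scan points by their per-point label; the class IDs below are hypothetical.

```python
import numpy as np

MOVING_CLASSES = {10, 11, 13}  # hypothetical label IDs, e.g. car, bicycle, truck

def filter_moving_points(points, labels):
    """Keep only scan points whose semantic label belongs to a static class."""
    keep = ~np.isin(labels, list(MOVING_CLASSES))
    return points[keep], labels[keep]
```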
We describe\nthe rigidity dependence of the 27-day GCR variations for PAMELA and ARINA data\nin the framework of the modulation potential concept using the force-field\napproximation for GCR transport. For a physical interpretation, we have\nconsidered the relationship between the 27-day GCR variations and solar wind\nplasma and other heliospheric parameters. Moreover, we have discussed possible\nimplications of MHD modeling of the solar wind plasma together with a\nstochastic GCR transport model concerning the effects of corotating interaction\nregions.\n"} {"abstract": " The G\\\"odel translation provides an embedding of the intuitionistic logic\n$\\mathsf{IPC}$ into the modal logic $\\mathsf{Grz}$, which then embeds into the\nmodal logic $\\mathsf{GL}$ via the splitting translation. Combined with\nSolovay's theorem that $\\mathsf{GL}$ is the modal logic of the provability\npredicate of Peano Arithmetic $\\mathsf{PA}$, both $\\mathsf{IPC}$ and\n$\\mathsf{Grz}$ admit arithmetical interpretations. When attempting to 'lift'\nthese results to the monadic extensions $\\mathsf{MIPC}$, $\\mathsf{MGrz}$, and\n$\\mathsf{MGL}$ of these logics, the same techniques no longer work. Following a\nconjecture made by Esakia, we add an appropriate version of Casari's formula to\nthese monadic extensions (denoted by a '+'), obtaining that the G\\\"odel\ntranslation embeds $\\mathsf{M^{+}IPC}$ into $\\mathsf{M^{+}Grz}$ and the\nsplitting translation embeds $\\mathsf{M^{+}Grz}$ into $\\mathsf{MGL}$. As proven\nby Japaridze, Solovay's result extends to the monadic system $\\mathsf{MGL}$,\nwhich leads us to an arithmetical interpretation of both $\\mathsf{M^{+}IPC}$\nand $\\mathsf{M^{+}Grz}$.\n"} {"abstract": " Curves of maximal slope are a reference gradient-evolution notion in metric\nspaces and arise as variational formulation of a vast class of nonlinear\ndiffusion equations. Existence theories for curves of maximal slope are often\nbased on minimizing-movements schemes, most notably on the Euler scheme. We\npresent here an alternative minimizing-movements approach, yielding more\nregular discretizations, serving as a-posteriori convergence estimator, and\nallowing for a simple convergence proof.\n"} {"abstract": " We propose a formalism to model and reason about reconfigurable multi-agent\nsystems. In our formalism, agents interact and communicate in different modes\nso that they can pursue joint tasks; agents may dynamically synchronize,\nexchange data, adapt their behaviour, and reconfigure their communication\ninterfaces. Inspired by existing multi-robot systems, we represent a system as\na set of agents (each with local state), executing independently and only\ninfluence each other by means of message exchange. Agents are able to sense\ntheir local states and partially their surroundings. We extend LTL to be able\nto reason explicitly about the intentions of agents in the interaction and\ntheir communication protocols. We also study the complexity of satisfiability\nand model-checking of this extension.\n"} {"abstract": " AM CVn systems are a rare type of accreting binary that consists of a white\ndwarf and a helium-rich, degenerate donor star. Using the Zwicky Transient\nFacility (ZTF), we searched for new AM CVn systems by focusing on blue,\noutbursting stars. We first selected outbursting stars using the ZTF alerts. We\ncross-matched the candidates with $Gaia$ and Pan-STARRS catalogs. The initial\nselection of candidates based on the $Gaia$ $BP$-$RP$ contains 1751 unknown\nobjects. 
We used the Pan-STARRS $g$-$r$ and $r$-$i$ colors in combination with\nthe $Gaia$ color to identify 59 high-priority candidates. We obtained\nidentification spectra of 35 sources, of which 18 are high-priority candidates,\nand discovered 9 new AM CVn systems and one magnetic CV which shows only He-II\nlines. Using the outburst recurrence time, we estimate the orbital periods,\nwhich are in the range of 29 to 50 minutes. We conclude that targeted follow-up\nof blue, outbursting sources is an efficient method to find new AM CVn systems,\nand we plan to follow up all candidates we identified to systematically study\nthe population of outbursting AM CVn systems.\n"} {"abstract": " For better user satisfaction and business effectiveness, more and more\nattention has been paid to the sequence-based recommendation system, which is\nused to infer the evolution of users' dynamic preferences, and recent studies\nhave noticed that the evolution of users' preferences can be better understood\nfrom the implicit and explicit feedback sequences. However, most of the\nexisting recommendation techniques do not consider the noise contained in\nimplicit feedback, which will lead to a biased representation of user\ninterest and a suboptimal recommendation performance. Meanwhile, the existing\nmethods utilize the item sequence for capturing the evolution of user interest. The\nperformance of these methods is limited by the length of the sequence, and they\ncannot effectively model long-term interest over a long period of time. Based on\nthis observation, we propose a novel CTR model named denoising user-aware\nmemory network (DUMN). Specifically, the framework: (i) proposes a feature\npurification module based on orthogonal mapping, which uses the representation\nof explicit feedback to purify the representation of implicit feedback, and\neffectively denoises the implicit feedback; (ii) designs a user memory network\nto model the long-term interests in a fine-grained way by improving the memory\nnetwork, which is ignored by the existing methods; and (iii) develops a\npreference-aware interactive representation component to fuse the long-term and\nshort-term interests of users based on gating to understand the evolution of\nunbiased preferences of users. Extensive experiments on two real e-commerce\nuser behavior datasets show that DUMN has a significant improvement over the\nstate-of-the-art baselines. The code of the DUMN model has been uploaded as\nadditional material.\n"} {"abstract": " The spin Hall effect (SHE) and the magnetic spin Hall effect (MSHE) are\nresponsible for electrical spin current generation, which is a key concept of\nmodern spintronics. We theoretically investigated the spin conductivity induced\nby spin-dependent s-d scattering in a ferromagnetic 3d alloy model by employing\nmicroscopic transport theory based on the Kubo formula. We derived a novel\nextrinsic mechanism that contributes to both the SHE and MSHE. This mechanism\ncan be understood as the contribution from anisotropic (spatially dependent)\nspin-flip scattering due to the combination of the orbital-dependent\nanisotropic shape of s-d hybridization and spin flipping, with the orbital\nshift caused by spin-orbit interaction with the d-orbitals.
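A minimal sketch of the orthogonal-mapping purification from the DUMN abstract above: the implicit-feedback representation is stripped of its projection onto the explicit-feedback representation. Tensor shapes and names are assumptions, not the paper's code.

```python
import torch

def purify(implicit, explicit, eps=1e-8):
    """implicit, explicit: [batch, dim]. Returns the component of the
    implicit representation orthogonal to the explicit one."""
    num = (implicit * explicit).sum(-1, keepdim=True)
    den = (explicit * explicit).sum(-1, keepdim=True).clamp_min(eps)
    return implicit - (num / den) * explicit  # remove aligned (noisy) part
```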
We also show that\nthis mechanism is valid under crystal-field splitting among the d-orbitals in\neither cubic or tetragonal symmetry.\n"} {"abstract": " The nonleptonic decays\n$\\Lambda_{b}\\rightarrow\\Sigma_{c}^{*-}\\pi^{+},\\Xi_{c}^{*0}K^{0}$ and\n$\\Lambda_{b}\\rightarrow\\Delta^{0}D^{0},\\Sigma^{*-}D_{0}^{+}$ are studied. In\naddition, the decays\n$\\Lambda_{b}\\rightarrow\\Xi_{c}^{0}K^{0},\\Sigma^{-}D_{s}^{+}$ are analyzed. For\nall these decays the dominant contribution comes from $W-$exchange, and for the\ndecay $\\Lambda_{b}\\rightarrow\\Lambda_{c}^{+}\\pi^{-}$, in addition to\nfactorization, the baryon pole contribution to the $p$-wave (parity conserving)\ndecay amplitude $B$ is discussed.\n"} {"abstract": " The universal fractality of river networks is very well known; however, an\nunderstanding of the underlying mechanisms is still lacking in terms\nof stochastic processes. By introducing a dynamically changing probability, we\ndescribe the fractal nature of river networks stochastically. The\ndynamical probability depends on the drainage area at a site, which is a key\ndynamical quantity of the system, while the river network is developed by\nthe probability, which induces dynamical persistency in river flows, resulting\nin the self-affine property observed in real river basins, although the process is\na Markovian process with short-term memory.\n"} {"abstract": " Recovering 3D human pose from 2D joints is still a challenging problem,\nespecially without any 3D annotation, video information, or multi-view\ninformation. In this paper, we present an unsupervised GAN-based model\nconsisting of multiple weight-sharing generators to estimate a 3D human pose\nfrom a single image without 3D annotations. In our model, we introduce\nsingle-view-multi-angle consistency (SVMAC) to significantly improve the\nestimation performance. With 2D joint locations as input, our model estimates a\n3D pose and a camera simultaneously. During training, the estimated 3D pose is\nrotated by random angles and the estimated camera projects the rotated 3D poses\nback to 2D. The 2D reprojections will be fed into weight-sharing generators to\nestimate the corresponding 3D poses and cameras, which are then mixed to impose\nSVMAC constraints to self-supervise the training process. The experimental\nresults show that our method outperforms the state-of-the-art unsupervised\nmethods by 2.6% on Human 3.6M and 15.0% on MPI-INF-3DHP. Moreover, qualitative\nresults on MPII and LSP show that our method can generalize well to unknown\ndata.\n"} {"abstract": " Congestion caused in the electrical network due to renewable generation can\nbe effectively managed by integrating electric and thermal infrastructures, the\nlatter being represented by large scale District Heating (DH) networks, often\nfed by large combined heat and power (CHP) plants. The CHP plants could further\nimprove the profit margin of district heating multi-utilities by selling\nelectricity in the power market by adjusting the ratio between generated heat\nand power. The latter is possible only for certain CHP plants, which allow\ndecoupling the generation of the two commodities, namely the ones provided by two\nindependent variables (degrees-of-freedom) or by integrating them with thermal\nenergy storage and Power-to-Heat (P2H) units. CHP units can, therefore, help in\nthe congestion management of the electricity network.
A detailed mixed-integer\nlinear programming (MILP) optimization model is introduced for solving the\nnetwork-constrained unit commitment of integrated electric and thermal\ninfrastructures. The developed model contains a detailed characterization of\nthe useful effects of CHP units, i.e., heat and power, as a function of one and\ntwo independent variables. A lossless DC flow approximation models the\nelectricity transmission network. The district heating model includes the use\nof gas boilers, electric boilers, and thermal energy storage. The conducted\nstudies on the IEEE 24-bus system highlight the importance of a comprehensive\nanalysis of multi-energy systems to harness the flexibility derived from the\njoint operation of electric and heat sectors and to manage congestion in the\nelectrical network.\n"} {"abstract": " We establish weighted inequalities for $BMO$ commutators of sublinear\noperators for all $0 < p < \\infty$.\n"} {"abstract": " We study the process $cg\\to bH^+$ with $H^+\\to AW^+$, which opens up when\n$m_{H^+} > m_A + m_{W^+}$, where $H^+$ and $A$ are\ncharged and $CP$-odd Higgs bosons in the general two Higgs Doublet Model\n(g2HDM). We show that the $cg\\to bH^+\\to b A W^+$ process can be discovered at\nLHC Run 3, while the full Run 2 data at hand can constrain the parameter space\nsignificantly by searching for the same-sign dilepton final state. The process\nhas unique implications on the hint of $gg\\to A \\to t \\bar t$ excess at\n$m_A\\approx 400$ GeV reported by CMS. When combined with other existing\nconstraints, the $cg\\to bH^+\\to b A W^+$ process can essentially rule out the\ng2HDM explanation of such an excess.\n"} {"abstract": " Representations and O-operators of Hom-(pre)-Jacobi-Jordan algebras are\nintroduced and studied. The anticommutator of a Hom-pre-Jacobi-Jordan algebra\nis a Hom-Jacobi-Jordan algebra and the left multiplication operator gives a\nrepresentation of a Hom-Jacobi-Jordan algebra. The notions of matched pairs and\nNijenhuis operators of Hom-(pre)-Jacobi-Jordan algebras are given and various\nrelevant constructions are obtained.\n"} {"abstract": " We observe minima of the longitudinal resistance corresponding to the quantum\nHall effect of composite fermions at quantum numbers $p=1$, 2, 3, 4, and 6 in\nan ultraclean strongly interacting bivalley SiGe/Si/SiGe two-dimensional\nelectron system. The minima at $p=3$ disappear below a certain electron\ndensity, although the surrounding minima at $p=2$ and $p=4$ survive at\nsignificantly lower densities. Furthermore, the onset for the resistance\nminimum at a filling factor $\\nu=3/5$ is found to be independent of the tilt\nangle of the magnetic field. These surprising results indicate the intersection\nor merging of the quantum levels of composite fermions with different valley\nindices, which reveals the valley effect on fractions.\n"} {"abstract": " This paper presents a joint source separation algorithm that simultaneously\nreduces acoustic echo, reverberation and interfering sources. Target speech\nsignals are separated from the mixture by maximizing independence with respect to the\nother sources. It is shown that the separation process can be decomposed into\ncascading sub-processes that separately relate to acoustic echo cancellation,\nspeech dereverberation and source separation, all of which are solved using\nauxiliary-function-based independent component/vector analysis techniques, and\ntheir solving orders are exchangeable.
The cascaded solution leads not only to\nlower computational complexity but also to better separation performance than\nthe vanilla joint algorithm.\n"} {"abstract": " Current event-centric knowledge graphs rely heavily on explicit connectives\nto mine relations between events. Unfortunately, due to the sparsity of\nconnectives, these methods severely undermine the coverage of EventKGs. The\nlack of high-quality labelled corpora further exacerbates that problem. In this\npaper, we propose a knowledge projection paradigm for event relation\nextraction: projecting discourse knowledge to narratives by exploiting the\ncommonalities between them. Specifically, we propose the Multi-tier Knowledge\nProjection Network (MKPNet), which can leverage multi-tier discourse knowledge\neffectively for event relation extraction. In this way, the labelled data\nrequirement is significantly reduced, and implicit event relations can be\neffectively extracted. Intrinsic experimental results show that MKPNet achieves\nnew state-of-the-art performance, and extrinsic experimental results verify\nthe value of the extracted event relations.\n"} {"abstract": " We elucidate universal many-body properties of a one-dimensional,\ntwo-component ultracold Fermi gas near the $p$-wave Feshbach resonance. The\nlow-energy scattering in this system can be characterized by two parameters,\nthat is, the $p$-wave scattering length and the effective range. At the\nunitarity limit, where the $p$-wave scattering length diverges and the\neffective range is reduced to zero without conflicting with the causality\nbound, the system obeys universal thermodynamics as observed in a unitary Fermi\ngas with contact $s$-wave interaction in three dimensions. This is in contrast\nto a Fermi gas with the $p$-wave resonance in three dimensions, in which the\neffective range is inevitably finite. We present the universal equation of\nstate in this unitary $p$-wave Fermi gas within the many-body $T$-matrix\napproach as well as the virial expansion method. Moreover, we examine the\nsingle-particle spectral function in the high-density regime where the virial\nexpansion is no longer valid. On the basis of the Hartree-like self-energy\nshift at the divergent scattering length, we conjecture that the equivalence of\nthe Bertsch parameter across spatial dimensions holds even for a\none-dimensional unitary $p$-wave Fermi gas.\n"} {"abstract": " We have developed two metrics related to AGN variability observables\n(time-lags, periodicity, and Structure Function (SF)) to evaluate the\nperformance of LSST OpSim FBS 1.5, 1.6, and 1.7 in AGN time-domain analysis.\nFor this purpose, we generate an ensemble of AGN light curves based on AGN\nempirical relations and LSST OpSim cadences. Although our metrics show that\ndenser LSST cadences produce more reliable time-lag, periodicity, and SF\nmeasurements, the discrepancies in the performance between different LSST OpSim\ncadences are not drastic based on Kullback-Leibler divergence. This is\ncomplementary to Yu and Richards' results on DCR and SF metrics, extending them\nto include the point of view of AGN variability.\n"} {"abstract": " Motivated by the search for methods to establish strong minimality of certain\nlow-order algebraic differential equations, a measure of how far a finite-rank\nstationary type is from being minimal is introduced and studied: the {\em\ndegree of nonminimality} is the minimum number of realisations of the type\nrequired to witness a nonalgebraic forking extension.
Conditional on the truth\nof a conjecture of Borovik and Cherlin on the generic multiple-transitivity of\nhomogeneous spaces definable in the stable theory being considered, it is shown\nthat the nonminimality degree is bounded by the $U$-rank plus $2$. The\nBorovik-Cherlin conjecture itself is verified for algebraic and meromorphic\ngroup actions, and a bound of $U$-rank plus $1$ is then deduced unconditionally\nfor differentially closed fields and compact complex manifolds. An application\nis given regarding the transcendence of solutions to algebraic differential\nequations.\n"} {"abstract": " Network games study the strategic interaction of agents connected through a\nnetwork. Interventions in such a game -- actions a coordinator or planner may\ntake that change the utility of the agents and thus shift the equilibrium\naction profile -- are introduced to improve the planner's objective. We study\nthe problem of intervention in network games where the network has a group\nstructure with local planners, each associated with a group. The agents play a\nnon-cooperative game while the planners may or may not have the same\noptimization objective. We model this problem using a sequential move game\nwhere planners make interventions followed by agents playing the intervened\ngame. We provide equilibrium analysis and algorithms that find the subgame\nperfect equilibrium. We also propose a two-level efficiency definition to study\nthe efficiency loss of equilibrium actions in this type of game.\n"} {"abstract": " Hadamard's maximal determinant problem consists in finding the maximal value\nof the determinant of a square $n\times n$ matrix whose entries are plus or\nminus ones. This is a difficult mathematical problem that has not yet been\nsolved. In the present paper a simplified version of the problem is considered\nand studied numerically.\n"} {"abstract": " The complexity and non-Euclidean structure of graph data hinder the\ndevelopment of data augmentation methods similar to those in computer vision.\nIn this paper, we propose a feature augmentation method for graph nodes based\non topological regularization, in which topological structure information is\nintroduced into the end-to-end model. Specifically, we first obtain topology\nembeddings of nodes through an unsupervised representation learning method\nbased on random walks. Then, the topological embeddings, as additional\nfeatures, and the original node features are input into a dual graph neural\nnetwork for propagation, and two different high-order neighborhood\nrepresentations of nodes are obtained. On this basis, we propose a\nregularization technique to bridge the differences between the two node\nrepresentations, eliminate the adverse effects caused by using the topological\nfeatures of graphs directly, and greatly improve the performance. We have\ncarried out extensive experiments on a large number of datasets to prove the\neffectiveness of our model.\n"} {"abstract": " Magnetic induction tomography (MIT) is an efficient solution for long-term\nbrain disease monitoring, which focuses on reconstructing the bio-impedance\ndistribution inside the human brain using non-intrusive electromagnetic fields.\nHowever, high-quality brain image reconstruction remains challenging since\nreconstructing images from the measured weak signals is a highly non-linear and\nill-conditioned problem. In this work, we propose a generative adversarial\nnetwork (GAN)-enhanced MIT technique, named MITNet, based on a complex\nconvolutional neural network (CNN).
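MITNet, just described, is built on a complex convolutional neural network. One standard way to realize a complex convolution with real-valued primitives, shown here as a generic PyTorch sketch rather than the paper's actual layer, is to pair two real convolutions and combine them by the complex product rule:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Applies (w_re + i w_im) to (x_re + i x_im) via two real convolutions."""
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_re, x_im):
        # complex multiplication: (a + ib)(c + id) = (ac - bd) + i(ad + bc)
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_im(x_re) + self.conv_re(x_im)
        return out_re, out_im

# toy usage on a batch of complex-valued measurement patches
layer = ComplexConv2d(1, 8, 3, padding=1)
re, im = layer(torch.randn(2, 1, 16, 16), torch.randn(2, 1, 16, 16))
```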
The experimental results on the real-world\ndataset validate the performance of our technique, which outperforms the\nstate-of-the-art method by 25.27%.\n"} {"abstract": " Growth occurs in a wide range of systems ranging from biological tissue to\nadditive manufacturing. This work considers surface growth, in which mass is\nadded to the boundary of a continuum body from the ambient medium or from\nwithin the body. In contrast to bulk growth in the interior, the description of\nsurface growth requires the addition of new continuum particles to the body.\nThis is challenging for standard continuum formulations for solids that are\nmeant for situations with a fixed amount of material. Recent approaches to\nhandle this have used time-evolving reference configurations.\n In this work, an Eulerian approach to this problem is formulated, enabling\nthe side-stepping of the issue of constructing the reference configuration.\nHowever, this raises the complementary challenge of determining the stress\nresponse of the solid, which typically requires the deformation gradient that\nis not immediately available in the Eulerian formulation. To resolve this, the\napproach introduces additional kinematic descriptors, namely the relaxed\nzero-stress deformation and the elastic deformation; in contrast to the\ndeformation gradient, these have the important advantage that they are not\nrequired to satisfy kinematic compatibility. The resulting model has only the\ndensity, velocity, and elastic deformation as variables in the Eulerian\nsetting.\n The introduction in this formulation of the relaxed deformation and the\nelastic deformation provides a description of surface growth whereby the added\nmaterial can bring in its own kinematic information. Loosely, the added\nmaterial "brings in its own reference configuration" through the specification\nof the relaxed deformation and the elastic deformation of the added material.\nThis kinematic description enables, e.g., modeling of non-normal growth using a\nstandard normal growth velocity and a simple approach to prescribing boundary\nconditions.\n"} {"abstract": " We theoretically investigate the out-of-equilibrium dynamics in a binary\nBose-Einstein condensate confined within two-dimensional box potentials. One\nspecies of the condensate interacts with a pair of oppositely wound, but\notherwise identical Laguerre-Gaussian laser pulses, while the other species is\ninfluenced only via the interspecies interaction. Starting from the\nHamiltonian, we derive the equations of motion that accurately delineate the\nbehavior of the condensates during and after the light-matter interaction.\nDepending on the number of helical windings (or the magnitude of topological\ncharge), the species directly participating in the interaction with the lasers\nis dynamically segmented into distinct parts which collide as the pulses\ngradually diminish. This collision event generates nonlinear structures in the\nrelated species, coupled with the complementary structures produced in the\nother species, due to the interspecies interaction. The long-time dynamics of\nthe optically perturbed species is found to develop the Kolmogorov-Saffman\nscaling law in the incompressible kinetic energy spectrum, a characteristic\nfeature of the quantum turbulent state. However, the same scaling law is not\ndefinitively exhibited in the other species.
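A Kolmogorov-Saffman-type claim like the one in the condensate abstract above is typically checked by fitting a power law to the incompressible kinetic-energy spectrum over an assumed inertial range. The following NumPy snippet is a generic diagnostic of that kind, not the paper's analysis pipeline:

```python
import numpy as np

def spectral_slope(k, E, k_min, k_max):
    """Least-squares slope of log E(k) vs log k over an assumed inertial range."""
    sel = (k >= k_min) & (k <= k_max)
    slope, _ = np.polyfit(np.log(k[sel]), np.log(E[sel]), 1)
    return slope

# synthetic spectrum with a -5/3 inertial range plus multiplicative noise
k = np.logspace(0, 2, 200)
E = k ** (-5.0 / 3.0) * np.exp(0.05 * np.random.randn(k.size))
print(spectral_slope(k, E, 2.0, 50.0))   # close to -5/3
```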
This study warrants the usage of\nLaguerre-Gaussian beams for future experiments on quantum turbulence in\nBose-Einstein condensates.\n"} {"abstract": " Face representation learning using datasets with a massive number of\nidentities requires appropriate training methods. The softmax-based approach,\ncurrently the state-of-the-art in face recognition, in its usual "full softmax"\nform is not suitable for datasets with millions of persons. Several methods,\nbased on the "sampled softmax" approach, were proposed to remove this\nlimitation. These methods, however, have a set of disadvantages. One of them is\nthe problem of "prototype obsolescence": classifier weights (prototypes) of the\nrarely sampled classes receive too scarce gradients and become outdated and\ndetached from the current encoder state, resulting in incorrect training\nsignals. This problem is especially serious in ultra-large-scale datasets. In\nthis paper, we propose a novel face representation learning model called\nPrototype Memory, which alleviates this problem and allows training on a\ndataset of any size. Prototype Memory consists of the limited-size memory\nmodule for storing recent class prototypes and employs a set of algorithms to\nupdate it in an appropriate way. New class prototypes are generated on the fly\nusing exemplar embeddings in the current mini-batch. These prototypes are\nenqueued to the memory and used in the role of classifier weights for usual\nsoftmax classification-based training. To prevent obsolescence and keep the\nmemory in close connection with the encoder, prototypes are regularly\nrefreshed, and the oldest ones are dequeued and disposed of. Prototype Memory\nis computationally efficient and independent of dataset size. It can be used\nwith various loss functions, hard example mining algorithms and encoder\narchitectures. We prove the effectiveness of the proposed model by extensive\nexperiments on popular face recognition benchmarks.\n"} {"abstract": " A problem that is frequently encountered in a variety of mathematical\ncontexts is to find the common invariant subspaces of a single matrix, or a set\nof matrices. A new method is proposed that gives a definitive answer to this\nproblem. The key idea consists of finding common eigenvectors for exterior\npowers of the matrices concerned. A convenient formulation of the Pl\"ucker\nrelations is then used to ensure that these eigenvectors actually correspond to\nsubspaces or provide the initial constraints for eigenvectors involving\nparameters. A procedure for computing the divisors of a totally decomposable\nvector is also provided. Several examples are given for which the calculations\nare too tedious to do by hand and are performed by coding the conditions found\ninto Maple.\n"} {"abstract": " We present the analysis of the colour-magnitude diagram (CMD) morphology of\nthe ~ 800 Myr old star cluster NGC1831 in the Large Magellanic Cloud,\nexploiting deep, high-resolution photometry obtained using the Wide Field\nCamera 3 onboard the Hubble Space Telescope.
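The Prototype Memory abstract above describes queue-like bookkeeping: prototypes are generated from mini-batch exemplars, enqueued, refreshed when their class reappears, and the oldest are disposed of. A minimal sketch of that bookkeeping (the averaging and refresh rules here are assumptions, not the paper's exact algorithm):

```python
from collections import OrderedDict
import torch

class PrototypeMemory:
    def __init__(self, max_classes):
        self.max_classes = max_classes
        self.protos = OrderedDict()          # class_id -> prototype embedding

    def update(self, embeddings, labels):
        """Generate/refresh prototypes from the current mini-batch (assumed rule)."""
        for c in labels.unique().tolist():
            proto = embeddings[labels == c].mean(0)
            proto = proto / proto.norm()
            self.protos.pop(c, None)         # refresh: re-enqueue as newest
            self.protos[c] = proto
        while len(self.protos) > self.max_classes:
            self.protos.popitem(last=False)  # dispose of the oldest prototype

    def weights(self):
        """Stack prototypes for use as softmax classifier weights."""
        return torch.stack(list(self.protos.values()))
```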
We perform a simultaneous analysis\nof the wide upper main sequence and main sequence turn-off observed in the\ncluster, to verify whether these features are due to an extended star formation\nor a range of stellar rotation rates, or a combination of these two effects.\nComparing the observed CMD with Monte Carlo simulations of synthetic stellar\npopulations, we derive that the morphology of NGC1831 can be fully explained in\nthe context of the rotation velocity scenario, under the assumption of a\nbimodal distribution for the rotating stars, with ~40% of stars being\nslow rotators ($\Omega$ / $\Omega_{crit}$ < 0.5) and the remaining ~ 60% being\nfast rotators ($\Omega$ / $\Omega_{crit}$ > 0.9). We derive the dynamical\nproperties of the cluster, calculating the present cluster mass and escape\nvelocity, and predicting their past evolution starting at an age of 10 Myr. We\nfind that NGC1831 has an escape velocity $v_{esc}$ = 18.4 km/s at an age of 10\nMyr, above the previously suggested threshold of 15 km/s, below which the\ncluster cannot retain the material needed to create second-generation stars.\nThese results, combined with those obtained from the CMD morphology analysis,\nindicate that for the clusters whose morphology cannot be easily explained only\nin the context of the rotation velocity scenario, the threshold limit should be\nat least ~ 20 km/s.\n"} {"abstract": " The $q$-Onsager algebra $O_q$ is presented by two generators $W_0$, $W_1$ and\ntwo relations, called the $q$-Dolan/Grady relations. Recently Baseilhac and\nKoizumi introduced a current algebra $\mathcal A_q$ for $O_q$. Soon afterwards,\nBaseilhac and Shigechi gave a presentation of $\mathcal A_q$ by generators and\nrelations. We show that these generators give a PBW basis for $\mathcal A_q$.\nUsing this PBW basis, we show that the algebra $\mathcal A_q$ is isomorphic to\n$O_q \otimes \mathbb F \lbrack z_1, z_2, \ldots \rbrack$, where $\mathbb F$ is\nthe ground field and $\lbrace z_n \rbrace_{n=1}^\infty $ are mutually commuting\nindeterminates. Recall the positive part $U^+_q$ of the quantized enveloping\nalgebra $U_q(\widehat{\mathfrak{sl}}_2)$. Our results show that $O_q$ is\nrelated to $\mathcal A_q$ in the same way that $U^+_q$ is related to the\nalternating central extension of $U^+_q$. For this reason, we propose to call\n$\mathcal A_q$ the alternating central extension of $O_q$.\n"} {"abstract": " We study how violations of structural assumptions like expected utility and\nexponential discounting can be connected to reference-dependent preferences\nwith set-dependent reference points, even if behavior conforms with these\nassumptions when the reference is fixed. An axiomatic framework jointly and\nsystematically relaxes general rationality (WARP) and structural assumptions to\ncapture reference dependence across domains. It gives rise to a linear order\nthat determines reference points, which in turn determine the preference\nparameters for a choice problem. This allows us to study risk, time, and social\npreferences collectively, where seemingly independent anomalies are\ninterconnected through the lens of reference-dependent choice.\n"} {"abstract": " Early diagnosis is essential for the successful treatment of bowel cancers,\nincluding colorectal cancer (CRC), and capsule endoscopic imaging with robotic\nactuation can be a valuable diagnostic tool when combined with automated image\nanalysis.
We present a deep-learning-based detection and segmentation\nframework for recognizing lesions in colonoscopy and capsule endoscopy images.\nWe restructure established convolutional architectures, such as VGG and\nResNets, by converting them into fully convolutional networks (FCNs), fine-tune\nthem and study their capabilities for polyp segmentation and detection. We\nadditionally use Shape-from-Shading (SfS) to recover depth and provide a richer\nrepresentation of the tissue's structure in colonoscopy images. Depth is\nincorporated into our network models as an additional input channel to the RGB\ninformation and we demonstrate that the resulting network yields improved\nperformance. Our networks are tested on publicly available datasets and the\nmost accurate segmentation model achieved a mean segmentation IU of 47.78% and\n56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp\ndetection, the top-performing models we propose surpass the current state of\nthe art with detection recalls superior to 90% for all datasets tested. To our\nknowledge, we present the first work to use FCNs for polyp segmentation, in\naddition to proposing a novel combination of SfS and RGB that boosts\nperformance.\n"} {"abstract": " Infants acquire words and phonemes from unsegmented speech signals using\nsegmentation cues, such as distributional, prosodic, and co-occurrence cues.\nMany pre-existing computational models that represent the process tend to focus\non distributional or prosodic cues. This paper proposes a nonparametric\nBayesian probabilistic generative model called the prosodic hierarchical\nDirichlet process-hidden language model (Prosodic HDP-HLM). Prosodic HDP-HLM,\nan extension of HDP-HLM, considers both prosodic and distributional cues within\na single integrative generative model. We conducted three experiments on\ndifferent types of datasets, and demonstrate the validity of the proposed\nmethod. The results show that the Prosodic DAA successfully uses prosodic cues\nand outperforms a method that solely uses distributional cues. The main\ncontributions of this study are as follows: 1) We develop a probabilistic\ngenerative model for time series data including prosody that potentially has a\ndouble articulation structure; 2) We propose the Prosodic DAA by deriving the\ninference procedure for Prosodic HDP-HLM and show that Prosodic DAA can\ndiscover words directly from continuous human speech signals using statistical\ninformation and prosodic information in an unsupervised manner; 3) We show that\nprosodic cues contribute more to word segmentation when word frequencies are\nnaturally distributed, i.e., follow Zipf's law.\n"} {"abstract": " The first-principles momentum-dependent local ansatz wavefunction method\n(MLA) has been extended to the ferromagnetic state by introducing\nspin-dependent variational parameters. The theory is applied to ferromagnetic\nFe, Co, and Ni. It is shown that the MLA yields magnetizations comparable to\nthe results obtained by the GGA (generalized gradient approximation) in density\nfunctional theory. The projected momentum distribution functions as well as the\nmass enhancement factors are also calculated on the same footing, and are\ncompared with those in the paramagnetic state. It is shown that the calculated\nmass enhancement factor of Fe is strongly suppressed by the spin polarization\ndue to exchange splitting of the e${}_{\rm g}$ flat bands, while those of Co\nand Ni remain unchanged by the polarization.
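For the polyp-segmentation framework above, depth recovered by SfS enters as a fourth input channel next to RGB. A generic way to adapt a pretrained backbone's first convolution, sketched with torchvision's VGG16 (the initialization of the extra channel is an assumption, not the paper's recipe):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights=None)              # backbone; pretrained weights optional
old = model.features[0]                  # first conv: Conv2d(3, 64, ...)
new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                stride=old.stride, padding=old.padding)
with torch.no_grad():
    new.weight[:, :3] = old.weight                        # keep RGB filters
    new.weight[:, 3:] = old.weight.mean(1, keepdim=True)  # assumed depth init
    new.bias.copy_(old.bias)
model.features[0] = new

rgbd = torch.randn(1, 4, 224, 224)       # RGB + SfS depth stacked as channels
features = model.features(rgbd)
```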
These results are shown to be consistent with the experimental\nresults obtained from the low-temperature specific heats.\n"} {"abstract": " In this paper we develop an asymptotic theory for steadily travelling\ngravity-capillary waves in the small-surface-tension limit. In an\naccompanying work [Shelton et al. (2021), J. Fluid Mech., accepted/in press] it\nwas demonstrated that solutions associated with a perturbation about a\nleading-order gravity wave (a Stokes wave) contain surface-tension-driven\nparasitic ripples with an exponentially small amplitude. Thus a naive\nPoincar\'e expansion is insufficient for their description. Here, we shall\ndevelop specialised methodologies in exponential asymptotics for the derivation\nof the parasitic ripples on periodic domains. The ripples are shown to arise in\nconjunction with Stokes lines and the Stokes phenomenon. The analysis relies\ncrucially upon the derivation and analysis of singularities in the analytic\ncontinuation of the classic Stokes wave. A solvability condition is derived,\nshowing that solutions of this type do not exist at certain values of the Bond\nnumber. The asymptotic results are compared to full numerical solutions and\nshow excellent agreement. The work provides corrections to, and insight into, a\nseminal theory on parasitic capillary waves first proposed by Longuet-Higgins\n[J. Fluid Mech., vol. 16 (1), 1963, pp. 138-159].\n"} {"abstract": " Quality-Diversity algorithms refer to a class of evolutionary algorithms\ndesigned to find a collection of diverse and high-performing solutions to a\ngiven problem. In robotics, such algorithms can be used for generating a\ncollection of controllers covering most of the possible behaviours of a robot.\nTo do so, these algorithms associate a behavioural descriptor to each of these\nbehaviours. Each behavioural descriptor is used for estimating the novelty of\none behaviour compared to the others. In most existing algorithms, the\nbehavioural descriptor needs to be hand-coded, thus requiring prior knowledge\nabout the task to solve. In this paper, we introduce Autonomous Robots\nRealising their Abilities, an algorithm that uses a dimensionality reduction\ntechnique to automatically learn behavioural descriptors based on raw sensory\ndata. The performance of this algorithm is assessed on three robotic tasks in\nsimulation. The experimental results show that it performs similarly to\ntraditional hand-coded approaches without the requirement to provide any\nhand-coded behavioural descriptor. In the collection of diverse and\nhigh-performing solutions, it also manages to find behaviours that are novel\nwith respect to more features than its hand-coded baselines. Finally, we\nintroduce a variant of the algorithm which is robust to the dimensionality of\nthe behavioural descriptor space.\n"} {"abstract": " We introduce a protection-based IP security scheme to protect soft and firm\nIP cores which are used on FPGA devices. The scheme is based on Finite State\nMachine (FSM) obfuscation and exploits a Physical Unclonable Function (PUF) for\nFPGA-unique identification (ID) generation, which helps enable pay-per-device\nlicensing. We introduce a communication protocol to protect the rights of\nparties in this market.
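As a toy illustration of the FSM-obfuscation idea in the IP-protection abstract above: extra obfuscation states release the functional state machine only after an unlock sequence derived from a device-unique PUF response. Everything below (the hash-based key derivation, the simulated PUF response, the simplified reset-on-mismatch matching) is an assumption for illustration, not the paper's scheme:

```python
import hashlib

def unlock_sequence(puf_response: bytes, length: int = 4):
    """Derive a per-device unlock sequence from a (simulated) PUF response."""
    digest = hashlib.sha256(puf_response).digest()
    return [b % 4 for b in digest[:length]]    # input symbols in {0,1,2,3}

class ObfuscatedFSM:
    """Toy HARPOON-style wrapper: added states hide the functional FSM."""
    def __init__(self, puf_response: bytes):
        self.key = unlock_sequence(puf_response)
        self.pos = 0
        self.unlocked = False

    def step(self, symbol: int) -> str:
        if not self.unlocked:
            # simplified matching: restart on any wrong symbol
            self.pos = self.pos + 1 if symbol == self.key[self.pos] else 0
            self.unlocked = self.pos == len(self.key)
            return "obfuscated"
        return "functional"   # the original FSM transitions would run here

fsm = ObfuscatedFSM(b"device-unique-response")
for s in unlock_sequence(b"device-unique-response") + [0]:
    last = fsm.step(s)
print(last)   # 'functional' once the PUF-derived key has been replayed
```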
On standard benchmark circuits, the experimental results show that our\nscheme is secure, attack-resilient and can be implemented with low area, power\nand delay overheads.\n"} {"abstract": " In this paper, we address the Cauchy problem for the relativistic BGK model\nproposed by Anderson and Witting for massless particles in the\nFriedmann-Lemaitre-Robertson-Walker (FLRW) spacetime.\n"} {"abstract": " Acoustophoresis deals with the manipulation of sub-wavelength scatterers in\nan incident acoustic field. The geometric details of manipulated particles are\noften neglected by replacing them with equivalent symmetric geometries such as\nspheres, spheroids, cylinders or disks. It has been demonstrated that geometric\nasymmetry, represented by Willis coupling terms, can strongly affect the\nscattering of a small object, hence neglecting these terms may miss important\nforce contributions. In this work, we present a generalized formalism of\nacoustic radiation force and radiation torque based on the polarizability\ntensor, where Willis coupling terms are included to account for geometric\nasymmetry. Following Gorkov's approach, the effects of geometric asymmetry are\nexplicitly formulated as additional terms in the radiation force and torque\nexpressions. By breaking the symmetry of a sphere along one axis using\nintrusion and protrusion, we characterize the changes in the force and torque\nin terms of partial components, associated with the direct and Willis coupling\ncoefficients of the polarizability tensor. We investigate in detail the cases\nof standing and travelling plane waves, showing how the equilibrium positions\nand angles are shifted by these additional terms. We show that while the\ncontributions of asymmetry to the force are often negligible for small\nparticles, these terms greatly affect the radiation torque. Our presented\ntheory, providing a way of calculating radiation force and torque directly from\npolarizability coefficients, shows that in general it is essential to account\nfor the shape of objects undergoing acoustophoretic manipulation, and this may\nhave important implications for applications such as the manipulation of\nbiological cells.\n"} {"abstract": " Third-order approximate solutions for surface gravity waves in finite water\ndepth are studied in the context of potential flow theory. These solutions\nprovide explicit expressions for the surface elevation, free-surface velocity\npotential and velocity potential. The amplitude dispersion relation is also\nprovided. Two approaches are used to derive the third-order analytical\nsolution, resulting in two types of approximate solutions: the perturbation\nsolution and the Hamiltonian solution. The perturbation solution is obtained by\nthe classical perturbation technique, in which the time variable is expanded in\nmultiple scales to eliminate secular terms. The Hamiltonian solution is derived\nfrom the canonical transformation in the Hamiltonian theory of water waves. By\ncomparing the two types of solutions, it is found that they are completely\nequivalent for the first- to second-order solutions and the nonlinear\ndispersion, but for the third-order part only the sum-sum terms are the same.\nDue to the canonical transformation, which completely separates the dynamic\nand bound harmonics, the Hamiltonian solutions break through the difficulty\nthat the perturbation theory breaks down due to singularities in the transfer\nfunctions when the quartet resonance criterion is satisfied.
Furthermore, it is\nfound that some time-averaged quantities based on the Hamiltonian solution,\nsuch as the mean potential energy and mean kinetic energy, are equal to those\nin the initial state, in which the sea surface is assumed to be a Gaussian\nrandom process. This is because there are associated conserved quantities in\nthe Hamiltonian form. All of these show that the Hamiltonian solution is more\nsuitable and accurate for describing the third-order steady-state wave field.\nFinally, based on the Hamiltonian solution, some statistics are given, such as\nthe volume flux, skewness, and excess kurtosis.\n"} {"abstract": " Machine learning-inspired techniques have emerged as a new paradigm for\nanalysis of phase transitions in quantum matter. In this work, we introduce a\nsupervised learning algorithm for studying critical phenomena from measurement\ndata, which is based on iteratively training convolutional networks of\nincreasing complexity, and test it on the transverse field Ising chain and q=6\nPotts model. At the continuous Ising transition, we identify scaling behavior\nin the classification accuracy, from which we infer a characteristic\nclassification length scale. It displays a power-law divergence at the critical\npoint, with a scaling exponent that matches that of the diverging correlation\nlength. Our algorithm correctly identifies the thermodynamic phase of the\nsystem and extracts scaling behavior from projective measurements,\nindependently of the basis in which the measurements are performed.\nFurthermore, we show the classification length scale is absent for the $q=6$\nPotts model, which has a first-order transition and thus lacks a divergent\ncorrelation length. The main intuition underlying our finding is that, for\nmeasurement patches of sizes smaller than the correlation length, the system\nappears to be at the critical point, and therefore the algorithm cannot\nidentify the phase from which the data was drawn.\n"} {"abstract": " An electron is usually considered to have only one type of kinetic energy,\nbut could it have more, for its spin and charge, or by exciting other\nelectrons? In one dimension (1D), the physics of interacting electrons is\ncaptured well at low energies by the Tomonaga-Luttinger-Liquid (TLL) model, yet\nlittle has been observed experimentally beyond this linear regime. Here, we\nreport on measurements of many-body modes in 1D gated wires using a tunnelling\nspectroscopy technique. We observe two separate Fermi seas at high energies,\nassociated with spin and charge excitations, together with the emergence of\nthree additional 1D 'replica' modes that strengthen with decreasing wire\nlength. The effective interaction strength in the wires is varied by changing\nthe amount of 1D inter-subband screening by over 45%. Our findings demonstrate\nthe existence of spin-charge separation in the whole energy band outside the\nlow-energy limit of validity of the TLL model, and also set a limit on the\nvalidity of the newer nonlinear TLL theory.\n"} {"abstract": " In a Hilbertian framework, for the minimization of a general convex\ndifferentiable function $f$, we introduce new inertial dynamics and algorithms\nthat generate trajectories and iterates that converge rapidly towards the\nminimizer of $f$ with minimum norm.
Our study is based on the non-autonomous\nversion of the Polyak heavy ball method, which, at time $t$, is associated with\nthe strongly convex function obtained by adding to $f$ a Tikhonov\nregularization term with vanishing coefficient $\epsilon(t)$. In this dynamic,\nthe damping coefficient is proportional to the square root of the Tikhonov\nregularization parameter $\epsilon(t)$. By adjusting the speed of convergence\nof $\epsilon(t)$ towards zero, we will obtain both rapid convergence towards\nthe infimal value of $f$, and the strong convergence of the trajectories\ntowards the element of minimum norm of the set of minimizers of $f$. In\nparticular, we obtain an improved version of the dynamic of Su-Boyd-Cand\`es\nfor the accelerated gradient method of Nesterov. This study naturally leads to\ncorresponding first-order algorithms obtained by temporal discretization. In\nthe case of a proper lower semicontinuous and convex function $f$, we study the\nproximal algorithms in detail, and show that they benefit from similar\nproperties.\n"} {"abstract": " Terahertz (THz) frequency bands can be promising for data transmissions\nbetween the core network and access points (APs) for next-generation wireless\nsystems. In this paper, we analyze the performance of a dual-hop THz-RF\nwireless system where an AP facilitates data transmission between a core\nnetwork and user equipment (UE). We consider a generalized model for the\nend-to-end channel with an independent and not identically distributed\n(i.ni.d.) fading model for THz and RF links using the $\alpha$-$\mu$\ndistribution, the THz link with pointing errors, and asymmetrical relay\nposition. We derive a closed-form expression of the cumulative distribution\nfunction (CDF) of the end-to-end signal-to-noise ratio (SNR) for the THz-RF\nlink, which is valid for continuous values of $\mu$ for a generalized\nperformance analysis over THz fading channels. Using the derived CDF, we\nanalyze the performance of the THz-RF relayed system using the\ndecode-and-forward (DF) protocol by deriving analytical expressions of the\ndiversity order, moments of SNR, ergodic capacity, and average BER in terms of\nsystem parameters. We also analyze the considered system with an i.i.d. model\nand develop simplified performance expressions to provide analytical insight\ninto the system behavior under various practically relevant scenarios.\nSimulation and numerical analysis show a significant effect of the fading\nparameters of the THz link and a nominal effect of normalized beam-width on the\nperformance of the relay-assisted THz-RF system.\n"} {"abstract": " We consider the cosmology obtained using scalar fields with a negative\npotential energy, such as employed to obtain an Ekpyrotic phase of contraction.\nApplying the covariant entropy bound to the tower of states dictated by the\ndistance conjecture, we find that the relative slope of the potential\n$|V^{\prime}| / |V|$ is bounded from below by a constant of order one in\nPlanck units. This is consistent with the requirement to obtain slow Ekpyrotic\ncontraction. We also derive a refined condition on the potential which holds\nnear local minima of a negative potential.\n"} {"abstract": " Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful\nmeans for Computer-Aided Diagnosis (CADx). This approach requires building a\npopulation graph to aggregate structural information, where the graph adjacency\nmatrix represents the relationship between nodes.
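The inertial dynamic described in the minimization abstract above can be integrated numerically to see both effects at once: fast decrease of f and selection of the minimum-norm minimizer. A rough explicit-Euler sketch, where the test problem, the choice ε(t) = t^{-1/2} and all step sizes are assumptions:

```python
import numpy as np

def heavy_ball_tikhonov(grad_f, x0, T=200.0, dt=1e-3, p=0.5, gamma=3.0):
    """Explicit Euler for  x'' + gamma*sqrt(eps(t))*x' + grad_f(x) + eps(t)*x = 0,
    with vanishing Tikhonov coefficient eps(t) = t**(-p)."""
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    t = 1.0
    for _ in range(int(T / dt)):
        eps = t ** (-p)
        a = -gamma * np.sqrt(eps) * v - grad_f(x) - eps * x
        v += dt * a
        x += dt * v
        t += dt
    return x

# f(x) = 0.5 * ||A x - b||^2 has a whole line of minimizers; the vanishing
# Tikhonov term selects the one of minimum norm (third coordinate -> 0).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0])
print(heavy_ball_tikhonov(lambda x: A.T @ (A @ x - b), x0=[5.0, 5.0, 5.0]))
```

The coordinate that f does not constrain is driven to zero by the vanishing Tikhonov term, which is the strong minimum-norm convergence the abstract describes.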
Until now, this adjacency\nmatrix has usually been defined manually based on phenotypic information. In\nthis paper, we propose an encoder that automatically selects the appropriate\nphenotypic measures according to their spatial distribution, and uses a\ntext-similarity-awareness mechanism to calculate the edge weights between\nnodes. The encoder can automatically construct the population graph using\nphenotypic measures which have a positive impact on the final results, and\nfurther realizes the fusion of multimodal information. In addition, a novel\ngraph convolution network architecture using a multi-layer aggregation\nmechanism is proposed. The structure can capture deep structural information\nwhile suppressing over-smoothing, and increases the similarity between nodes of\nthe same type. Experimental results on two databases show that our method can\nsignificantly improve the diagnostic accuracy for autism spectrum disorder and\nbreast cancer, indicating its universality in leveraging multimodal data for\ndisease prediction.\n"} {"abstract": " We construct a $2$-generated pro-$2$ group with full normal Hausdorff\nspectrum $[0,1]$, with respect to each of the four standard filtration series:\nthe $2$-power series, the lower $2$-series, the Frattini series, and the\ndimension subgroup series. This answers a question of Klopsch and the second\nauthor, for the even prime case; the odd prime case was settled by the first\nauthor and Klopsch. Also, our construction gives the first example of a\nfinitely generated pro-$2$ group with full Hausdorff spectrum with respect to\nthe lower $2$-series.\n"} {"abstract": " Zebrafish is a powerful and widely-used model system for a host of biological\ninvestigations including cardiovascular studies and genetic screening.\nZebrafish are readily assessable during developmental stages; however, the\ncurrent methods for quantification and monitoring of cardiac functions mostly\ninvolve tedious manual work and inconsistent estimations. In this paper, we\ndeveloped and validated a Zebrafish Automatic Cardiovascular Assessment\nFramework (ZACAF) based on a U-net deep learning model for automated assessment\nof cardiovascular indices, such as ejection fraction (EF) and fractional\nshortening (FS) from microscopic videos of wildtype and cardiomyopathy mutant\nzebrafish embryos. Our approach yielded favorable performance with accuracy\nabove 90% compared with manual processing. We used only black and white regular\nmicroscopic recordings with frame rates of 5-20 frames per second (fps); thus,\nthe framework could be widely applicable with any laboratory resources and\ninfrastructure. Most importantly, the automatic feature holds promise to enable\nefficient, consistent and reliable processing and analysis capacity for large\namounts of videos, which can be generated by diverse collaborating teams.\n"} {"abstract": " We use direct $N$-body simulations to explore some possible scenarios for the\nfuture evolution of two massive clusters observed toward the center of\nNGC\,4654, a spiral galaxy with mass similar to that of the Milky Way. Using\narchival HST data, we obtain the photometric masses of the two clusters,\n$M=3\times 10^5$ M$_\odot$ and $M=1.7\times 10^6$ M$_\odot$, their half-light\nradii, $R_{\rm eff}\sim4$ pc and $R_{\rm eff} \sim 6$ pc, and their projected\ndistances from the photometric center of the galaxy (both $<22$ pc).
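A generic construction of the population-graph adjacency used in CADx pipelines like the one above (not the paper's learned encoder, which selects phenotypic measures automatically) combines an imaging-feature similarity kernel with phenotypic agreement:

```python
import numpy as np

def population_adjacency(features, phenotypes, sigma=1.0):
    """A_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)) * (phenotype agreement score).
    `phenotypes` is an (n_subjects, n_measures) array of categorical codes."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2 * sigma ** 2))
    agree = (phenotypes[:, None, :] == phenotypes[None, :, :]).mean(-1)
    A = sim * agree
    np.fill_diagonal(A, 0.0)   # no self-loops
    return A

A = population_adjacency(np.random.randn(5, 16),
                         np.random.randint(0, 2, size=(5, 3)))
print(A.shape)   # (5, 5) weighted adjacency over the subject population
```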
The\nknowledge of the structure and separation of these two clusters ($\sim 24$ pc)\nprovides a unique view for studying the dynamics of a galactic central zone\nhosting massive clusters. Varying some of the unknown clusters' orbital\nparameters, we carry out several $N$-body simulations showing that the future\nevolution of these clusters will inevitably result in their merger. We find\nthat, mainly depending on the shape of their relative orbit, they will merge\ninto the galactic center in less than 82 Myr. In addition to the tidal\ninteraction, a proper consideration of the dynamical friction braking would\nshorten the merging times to a few Myr. We also investigate the possibility of\nforming a massive nuclear star cluster (NSC) in the center of the galaxy by\nthis process. Our analysis suggests that for low-eccentricity orbits, and\nrelatively long merger times, the final merged cluster is spherical in shape,\nwith an effective radius of a few parsecs and a mass within the effective\nradius of the order of $10^5\,\mathrm{M_{\odot}}$. Because the central density\nof such a cluster is higher than that of the host galaxy, it is likely that\nthis merger remnant is the embryo of a future NSC.\n"} {"abstract": " Due to the unavailability of nationally representative data on time use, a\nsystematic analysis of the gender gap in unpaid household and care work has not\nbeen undertaken in the context of India. The present paper, using the recent\nTime Use Survey (2019) data, examines the socioeconomic and demographic factors\nassociated with variation in time spent on unpaid household and care work among\nmen and women. It analyses how much of the gender gap in the time allocated to\nunpaid work can be explained by differences in these factors. The findings show\nthat women spend much more time than men on unpaid household and care work. The\ndecomposition results reveal that differences in socioeconomic and demographic\nfactors between men and women do not explain most of the gender gap in unpaid\nhousehold work. Our results indicate that unobserved gender norms and practices\nmost crucially govern the allocation of unpaid work within Indian households.\n"} {"abstract": " We consider the problem of online classification under a privacy constraint.\nIn this setting a learner observes sequentially a stream of labelled examples\n$(x_t, y_t)$, for $1 \leq t \leq T$, and returns at each iteration $t$ a\nhypothesis $h_t$ which is used to predict the label of each new example $x_t$.\nThe learner's performance is measured by her regret against a known hypothesis\nclass $\mathcal{H}$. We require that the algorithm satisfies the following\nprivacy constraint: the sequence $h_1, \ldots, h_T$ of hypotheses output by the\nalgorithm needs to be an $(\epsilon, \delta)$-differentially private function\nof the whole input sequence $(x_1, y_1), \ldots, (x_T, y_T)$. We provide the\nfirst non-trivial regret bound for the realizable setting. Specifically, we\nshow that if the class $\mathcal{H}$ has constant Littlestone dimension then,\ngiven an oblivious sequence of labelled examples, there is a private learner\nthat makes in expectation at most $O(\log T)$ mistakes -- comparable to the\noptimal mistake bound in the non-private case, up to a logarithmic factor.\nMoreover, for general values of the Littlestone dimension $d$, the same mistake\nbound holds but with a doubly-exponential in $d$ factor.
A recent line of work\nhas demonstrated a strong connection between classes that are online learnable\nand those that are differentially-private learnable. Our results strengthen\nthis connection and show that an online learning algorithm can in fact be\ndirectly privatized (in the realizable setting). We also discuss an adaptive\nsetting and provide a sublinear regret bound of $O(\sqrt{T})$.\n"} {"abstract": " The intertwined processes of learning and evolution in complex environmental\nniches have resulted in a remarkable diversity of morphological forms.\nMoreover, many aspects of animal intelligence are deeply embodied in these\nevolved morphologies. However, the principles governing relations between\nenvironmental complexity, evolved morphology, and the learnability of\nintelligent control, remain elusive, partially due to the substantial challenge\nof performing large-scale in silico experiments on evolution and learning. We\nintroduce Deep Evolutionary Reinforcement Learning (DERL): a novel\ncomputational framework which can evolve diverse agent morphologies to learn\nchallenging locomotion and manipulation tasks in complex environments using\nonly low-level egocentric sensory information. Leveraging DERL, we demonstrate\nseveral relations between environmental complexity, morphological intelligence\nand the learnability of control. First, environmental complexity fosters the\nevolution of morphological intelligence as quantified by the ability of a\nmorphology to facilitate the learning of novel tasks. Second, evolution rapidly\nselects morphologies that learn faster, thereby enabling behaviors learned late\nin the lifetime of early ancestors to be expressed early in the lifetime of\ntheir descendants. In agents that learn and evolve in complex environments,\nthis result constitutes the first demonstration of a long-conjectured\nmorphological Baldwin effect. Third, our experiments suggest a mechanistic\nbasis for both the Baldwin effect and the emergence of morphological\nintelligence through the evolution of morphologies that are more physically\nstable and energy efficient, and can therefore facilitate learning and control.\n"} {"abstract": " In this work we consider Bayesian inference problems with intractable\nlikelihood functions. We present a method to compute an approximation of the\nposterior with a limited number of model simulations. The method features an\ninverse Gaussian Process regression (IGPR), i.e., a regression from the output\nof a simulation model to its input. Within the method, we provide an adaptive\nalgorithm with a tempering procedure to construct the approximations of the\nmarginal posterior distributions. With examples we demonstrate that IGPR has\ncompetitive performance compared to some commonly used algorithms, especially\nin terms of statistical stability and computational efficiency, while the price\nto pay is that it can only compute a weighted Gaussian approximation of the\nmarginal posteriors.\n"} {"abstract": " Recently it has become apparent that the Galactic center excess (GCE) is\nspatially correlated with the stellar distribution in the Galactic bulge. This\nhas given extra motivation for the unresolved population of millisecond pulsars\n(MSPs) explanation for the GCE. However, in the "recycling" channel the neutron\nstar forms from a core-collapse supernova that undergoes a random "kick" due\nto the asymmetry of the explosion. This would imply a smoothing out of the\nspatial distribution of the MSPs.
We use N-body simulations to model how the\nMSP spatial distribution changes. We estimate the probability distribution of\nnatal kick velocities using the resolved gamma-ray MSP proper motions, where\nMSPs have random velocities relative to the circular motion with a scale\nparameter of 77+/-6 km/s. We find that, due to the natal kicks, there is an\napproximately 10% increase in each of the bulge MSP spatial distribution\ndimensions, and the bulge MSP distribution also becomes less boxy, though it\nremains far from spherical.\n"} {"abstract": " The paper considers the problem of controlling Connected and Automated\nVehicles (CAVs) traveling through a three-entry roundabout so as to jointly\nminimize both the travel time and the energy consumption while providing\nspeed-dependent safety guarantees, as well as satisfying velocity and\nacceleration constraints. We first design a systematic approach to dynamically\ndetermine the safety constraints and derive the unconstrained optimal control\nsolution. A joint optimal control and barrier function (OCBF) method is then\napplied to efficiently obtain a controller that optimally tracks the\nunconstrained optimal solution while guaranteeing all the constraints.\nSimulation experiments are performed to compare the optimal controller to a\nbaseline of human-driven vehicles, showing effectiveness under symmetric and\nasymmetric roundabout configurations, balanced and imbalanced traffic rates and\ndifferent sequencing rules for CAVs.\n"} {"abstract": " The reversible implementation of classical functions accounts for the bulk of\nmost known quantum algorithms. As a result, a number of reversible circuit\nconstructions over the Clifford+$T$ gate set have been developed in recent\nyears which use both the state and phase spaces, or $X$ and $Z$ bases, to\nreduce circuit costs beyond what is possible at the strictly classical level.\nWe study and generalize two particular classes of these constructions: relative\nphase circuits, including Giles and Selinger's multiply-controlled $iX$ gates\nand Maslov's $4$-qubit Toffoli gate, and measurement-assisted circuits,\nincluding Jones' Toffoli gate and Gidney's temporary logical-AND. In doing so,\nwe introduce general methods for implementing classical functions up to phase\nand for measurement-assisted termination of temporary values. We then apply\nthese techniques to find novel $T$-count efficient constructions of some\nclassical functions in space-constrained regimes, notably multiply-controlled\nToffoli gates and temporary products.\n"} {"abstract": " A new family of operators, coined hierarchical measurement operators, is\nintroduced and discussed within the well-known hierarchical sparse recovery\nframework. Such an operator is a composition of block and mixing operations and\nnotably contains the Kronecker product as a special case. Results on their\nhierarchical restricted isometry property (HiRIP) are derived, generalizing\nprior work on recovery of hierarchically sparse signals from\nKronecker-structured linear measurements. Specifically, these results show\nthat, very surprisingly, sparsity properties of the block and mixing parts can\nbe traded against each other. The measurement structure is well-motivated by a\nmassive random access channel design in communication engineering.
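For the hierarchical measurement operators above, the Kronecker special case can be applied without ever forming the large matrix, via the identity kron(A, B) @ vec(X) = vec(A X B^T) for row-major vectorization. A NumPy check on a hierarchically sparse signal (matrix sizes and sparsity pattern are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))    # mixing part
B = rng.standard_normal((5, 10))   # block part

# Hierarchically sparse signal: few active blocks, few entries per block.
X = np.zeros((8, 10))
X[2, [1, 7]] = rng.standard_normal(2)
X[5, [0, 3]] = rng.standard_normal(2)

y_big = np.kron(A, B) @ X.ravel()   # explicit Kronecker measurement operator
y_fast = (A @ X @ B.T).ravel()      # same measurement, no big matrix formed
print(np.allclose(y_big, y_fast))   # True
```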
Numerical\nevaluation of user detection rates demonstrates the huge benefit of the\ntheoretical framework.\n"} {"abstract": " We consider fractional operators of the form $$\mathcal{H}^s=(\partial_t\n-\mathrm{div}_{x} ( A(x,t)\nabla_{x}))^s,\ (x,t)\in\mathbb R^n\times\mathbb\nR,$$ where $s\in (0,1)$ and $A=A(x,t)=\{A_{i,j}(x,t)\}_{i,j=1}^{n}$ is an\naccretive, bounded, complex, measurable, $n\times n$-dimensional matrix valued\nfunction. We study the fractional operators ${\mathcal{H}}^s$ and their\nrelation to the initial value problem $$(\lambda^{1-2s}\mathrm{u}')'(\lambda)\n=\lambda^{1-2s}\mathcal{H} \mathrm{u}(\lambda), \quad \lambda\in (0, \infty),$$\n$$\mathrm{u}(0) = u,$$ in $\mathbb R_+\times \mathbb R^n\times\mathbb R$.\nExploring this type of relation, and making the additional assumption that\n$A=A(x,t)=\{A_{i,j}(x,t)\}_{i,j=1}^{n}$ is real, we derive some local\nproperties of solutions to the non-local Dirichlet problem\n$$\mathcal{H}^su=(\partial_t -\mathrm{div}_{x} ( A(x,t)\nabla_{x}))^s u=0\\\n\mbox{ for $(x,t)\in \Omega \times J$},$$ $$ u=f\ \mbox{ for $(x,t)\in \mathbb\nR^{n+1}\setminus (\Omega \times J)$}. $$ Our contribution is that we allow for\nnon-symmetric and time-dependent coefficients.\n"} {"abstract": " In recent years, most of the accuracy gains for video action recognition have\ncome from newly designed CNN architectures (e.g., 3D-CNNs). These models are\ntrained by applying a deep CNN on a single clip of fixed temporal length. Since\neach video segment is processed by the 3D-CNN module separately, the\ncorresponding clip descriptor is local and the inter-clip relationships are\ninherently implicit. The common method of directly averaging the clip-level\noutputs into a video-level prediction is prone to failure due to the lack of a\nmechanism that can extract and integrate relevant information to represent the\nvideo.\n In this paper, we introduce the Gated Clip Fusion Network (GCF-Net) that can\ngreatly boost existing video action classifiers at the cost of a tiny\ncomputation overhead. The GCF-Net explicitly models the inter-dependencies\nbetween video clips to strengthen the receptive field of local clip\ndescriptors. Furthermore, the importance of each clip to an action event is\ncalculated and a relevant subset of clips is selected accordingly for a\nvideo-level analysis. On a large benchmark dataset (Kinetics-600), the proposed\nGCF-Net elevates the accuracy of existing action classifiers by 11.49% (based\non a central clip) and 3.67% (based on densely sampled clips), respectively.\n"} {"abstract": " We have seen a surge in research aimed toward adversarial attacks and\ndefenses in AI/ML systems. While it is crucial to formulate new attack methods\nand devise novel defense strategies for robustness, it is also imperative to\nrecognize who is responsible for implementing, validating, and justifying the\nnecessity of these defenses. In particular, it is important to ask which\ncomponents of the system are vulnerable to what types of adversarial attacks,\nand what expertise is needed to realize the severity of adversarial attacks. It\nis also important to ask how to evaluate and address the adversarial challenges\nin order to recommend defense strategies for different applications. This paper\nopens a discussion on who should examine and implement the adversarial defenses\nand the reason behind such efforts.\n"} {"abstract": " The coarse similarity class $[A]$ of $A$ is the set of all $B$ whose\nsymmetric difference with $A$ has asymptotic density 0.
There is a natural\nmetric $\\delta$ on the space $\\mathcal{S}$ of coarse similarity classes defined\nby letting $\\delta([A],[B])$ be the upper density of the symmetric difference\nof $A$ and $B$. We study the resulting metric space, showing in particular that\nbetween any two distinct points there are continuum many geodesic paths. We\nalso study subspaces of the form $\\{[A] : A \\in \\mathcal U\\}$ where $\\mathcal\nU$ is closed under Turing equivalence, and show that there is a tight\nconnection between topological properties of such a space and\ncomputability-theoretic properties of $\\mathcal U$.\n We then define a distance between Turing degrees based on Hausdorff distance\nin this metric space. We adapt a proof of Monin to show that the distances\nbetween degrees that occur are exactly 0, 1/2, and 1, and study which of these\nvalues occur most frequently in the senses of measure and category. We define a\ndegree to be attractive if the class of all degrees at distance 1/2 from it has\nmeasure 1, and dispersive otherwise. We study the distribution of attractive\nand dispersive degrees. We also study some properties of the metric space of\nTuring degrees under this Hausdorff distance, in particular the question of\nwhich countable metric spaces are isometrically embeddable in it, giving a\ngraph-theoretic sufficient condition.\n We also study the computability-theoretic and reverse-mathematical aspects of\na Ramsey-theoretic theorem due to Mycielski, which in particular implies that\nthere is a perfect set whose elements are mutually 1-random, as well as a\nperfect set whose elements are mutually 1-generic.\n Finally, we study the completeness of $(\\mathcal S,\\delta)$ from the\nperspectives of computability theory and reverse mathematics.\n"} {"abstract": " Recent experiments on the antiferromagnetic intercalated transition metal\ndichalcogenide $\\mathrm{Fe_{1/3}NbS_2}$ have demonstrated reversible\nresistivity switching by application of orthogonal current pulses below its\nmagnetic ordering temperature, making $\\mathrm{Fe_{1/3}NbS_2}$ promising for\nspintronics applications. Here, we perform density functional theory\ncalculations with Hubbard U corrections of the magnetic order, electronic\nstructure, and transport properties of crystalline $\\mathrm{Fe_{1/3}NbS_2}$,\nclarifying the origin of the different resistance states. The two\nexperimentally proposed antiferromagnetic ground states, corresponding to\nin-plane stripe and zigzag ordering, are computed to be nearly degenerate.\nIn-plane cross sections of the calculated Fermi surfaces are anisotropic for\nboth magnetic orderings, with the degree of anisotropy sensitive to the Hubbard\nU value. The in-plane resistance, computed within the Kubo linear response\nformalism using a constant relaxation time approximation, is also anisotropic,\nsupporting a hypothesis that the current-induced resistance changes are due to\na repopulating of AFM domains. Our calculations indicate that the transport\nanisotropy of $\\mathrm{Fe_{1/3}NbS_2}$ in the zigzag phase is reduced relative\nto stripe, consistent with the relative magnitudes of resistivity changes in\nexperiment. Finally, our calculations reveal the likely directionality of the\ncurrent-domain response, specifically, which domains are energetically\nstabilized for a given current direction.\n"} {"abstract": " We present the analysis of the microlensing event OGLE-2018-BLG-1428, which\nhas a short-duration ($\\sim 1$ day) caustic-crossing anomaly. 
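The metric in the coarse-similarity abstract above, δ([A],[B]) = upper density of the symmetric difference of A and B, can only be approximated from finite prefixes; a small sketch (the cutoff and burn-in below are arbitrary choices, since the limsup is a limit notion):

```python
def delta_approx(A, B, N, burn_in=None):
    """Approximate delta([A],[B]): the upper density (limsup of prefix densities)
    of the symmetric difference A ^ B, estimated up to a finite cutoff N."""
    burn_in = burn_in or N // 10   # skip small prefixes, where densities are noisy
    diff = set(A) ^ set(B)
    count, best = 0, 0.0
    for n in range(1, N + 1):
        count += (n - 1) in diff
        if n >= burn_in:
            best = max(best, count / n)
    return best

evens = range(0, 1000, 2)
multiples_of_4 = range(0, 1000, 4)
print(delta_approx(evens, multiples_of_4, 1000))   # ~0.25: density of {2, 6, 10, ...}
```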
The event was\ncaused by a planetary lens system with planet/host mass ratio\n$q=1.7\times10^{-3}$. Thanks to the detection of the caustic-crossing anomaly,\nthe finite source effect was well measured, but the microlens parallax was not\nconstrained due to the relatively short timescale ($t_{\rm E}=24$ days). From a\nBayesian analysis, we find that the host star is a dwarf star $M_{\rm\nhost}=0.43^{+0.33}_{-0.22} \ M_{\odot}$ at a distance $D_{\rm\nL}=6.22^{+1.03}_{-1.51}\ {\rm kpc}$ and the planet is a Jovian-mass planet\n$M_{\rm p}=0.77^{+0.77}_{-0.53} \ M_{\rm J}$ with a projected separation\n$a_{\perp}=3.30^{+0.59}_{-0.83}\ {\rm au}$. The planet orbits beyond the snow\nline of the host star. Considering the relative lens-source proper motion of\n$\mu_{\rm rel} = 5.58 \pm 0.38\ \rm mas\ yr^{-1}$, the lens can be resolved by\nadaptive optics with a 30m telescope in the future.\n"} {"abstract": " Spatial constraints such as rigid barriers affect the dynamics of cell\npopulations, potentially altering the course of natural evolution. In this\npaper, we study the population genetics of Escherichia coli proliferating in\nmicrochannels with open ends. Our experiments reveal that competition between\ntwo fluorescently labeled E. coli strains growing in a microchannel generates a\nself-organized stripe pattern aligned with the axial direction of the channel.\nTo account for this observation, we employ a lattice population model in which\nreproducing cells push entire lanes of cells towards the open ends of the\nchannel. By combining mathematical theory, numerical simulations, and\nexperiments, we find that the fixation dynamics is extremely fast along the\naxial direction, with a logarithmic dependence on the number of cells per lane.\nIn contrast, competition among lanes is a much slower process. We also\ndemonstrate that random mutations appearing in the middle of the channel and\nclose to its walls are much more likely to reach fixation than mutations\noccurring elsewhere.\n"} {"abstract": " Waste heat recovery for trucks via the organic Rankine cycle is a promising\ntechnology to reduce fuel consumption and emissions. As the vehicles are\noperated in street traffic, the heat source is subject to strong fluctuations.\nConsequently, such disturbances have to be considered to enable safe and\nefficient operation. Herein, we find optimal operating policies for several\nrepresentative scenarios by means of dynamic optimization and discuss the\nimplications for control strategy design. First, we optimize operation of a\ntypical driving cycle with data from a test rig. Results indicate that\noperating the cycle at minimal superheat is an appropriate operating policy.\nSecond, we consider a scenario where the permissible expander power is\ntemporarily limited, which is realistic in street traffic. In this case, an\noperating policy with flexible superheat can reduce the losses associated with\noperation at minimal superheat by up to 53% in the considered scenario. As the\nduration of power limitation increases, other constraints might become active,\nwhich results in part of the exhaust gas being bypassed, hence reduced savings.\n"} {"abstract": " We investigate a theoretical model for a dynamic Moir\'e grating which is\ncapable of producing slow and stopped light with improved performance when\ncompared with a static Moir\'e grating. A Moir\'e grating superimposes two\ngrating periods, which creates a narrow slow light resonance between two band\ngaps.
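The superposition of two grating periods in the Moiré-grating abstract above produces a slowly varying envelope; the beat (Moiré) period Λ₁Λ₂/|Λ₁-Λ₂| sets the length scale on which the narrow resonance forms. A few lines of NumPy make the beating explicit (the periods are arbitrary illustrative values):

```python
import numpy as np

lam1, lam2 = 1.00, 1.05            # the two superimposed grating periods (a.u.)
z = np.linspace(0, 60, 6001)
kappa = np.cos(2 * np.pi * z / lam1) + np.cos(2 * np.pi * z / lam2)

# sum-to-product: kappa = 2 cos(pi z (1/lam1 + 1/lam2)) cos(pi z (1/lam1 - 1/lam2)),
# i.e. a fast grating modulated by a slow Moire envelope.
moire_period = lam1 * lam2 / abs(lam1 - lam2)
print(moire_period)                 # ~21: length scale of the envelope
print(kappa[:3])                    # near z=0 the two gratings add in phase
```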
A Moir\\'e grating can be made dynamic by varying its coupling strength in\ntime. By increasing the coupling strength the reduction in group velocity in\nthe slow light resonance can be improved by many orders of magnitude while\nstill maintaining the wide bandwidth of the initial, weak grating. We show that\nfor a pulse propagating through the grating this is a consequence of altering\nthe pulse spectrum and therefore the grating can also perform bandwidth\nmodulation. Finally we present a possible realization of the system via an\nelectro-optic grating by applying a quasi-static electric field to a poled\n$\\chi^{(2)}$ nonlinear medium.\n"} {"abstract": " We studied the physical behavior of PdO nanoparticles at low temperatures,\nwhich presents an unusual behavior clearly related to macroscopic quantum\ntunneling. The samples show a tetragonal single phase with P42/mmc space group.\nMost importantly, the particle size was estimated at about 5.07 nm. Appropriate\ntechniques were used to determine the characteristic of these nanoparticles.\nThe most important aspect of this study is the magnetic characterization\nperformed at low temperatures. It shows a peak at 50 K in zero field cooling\nmode (ZFC) that corresponds to the Blocking temperature (T$_{B}$). These\nmeasurements in ZFC and field cooling (FC) indicates that the peak behavior is\ndue to different relaxation times of the asymmetrical barriers when the\nelectron changes from a metastable state to another. Below T$_{B}$ in FC mode,\nthe magnetization decreases with temperature until 36 K; this temperature is\nthe crossover temperature (T$_{Cr}$) related to the anisotropy of the barriers,\nindicative of macroscopic quantum tunneling.\n"} {"abstract": " This paper is concerned with the problem of nonlinear filter stability of\nergodic Markov processes. The main contribution is the conditional Poincar\\'e\ninequality (PI), which is shown to yield filter stability. The proof is based\nupon a recently discovered duality which is used to transform the nonlinear\nfiltering problem into a stochastic optimal control problem for a backward\nstochastic differential equation (BSDE). Based on these dual formalisms, a\ncomparison is drawn between the stochastic stability of a Markov process and\nthe filter stability. The latter relies on the conditional PI described in this\npaper, whereas the former relies on the standard form of PI.\n"} {"abstract": " Unexpected hypersurfaces are a brand name for some special linear systems.\nThey were introduced around 2017 and are a field of intensive study since then.\nThey attracted a lot of attention because of their close tights to various\nother areas of mathematics including vector bundles, arrangements of\nhyperplanes, geometry of projective varieties. Our research is motivated by the\nwhat is now known as the BMSS duality, which is a new way of deriving\nprojective varieties out of already constructed. The last author coined the\nconcept of companion surfaces in the setting of unexpected curves admitted by\nthe $B_3$ root system. Here we extend this construction in various directions.\nWe revisit the configurations of points associated to either root systems or to\nFermat arrangements and we study the geometry of the associated varieties and\ntheir companions.\n"} {"abstract": " We present AURA-net, a convolutional neural network (CNN) for the\nsegmentation of phase-contrast microscopy images. 
AURA-net uses transfer learning to accelerate training and Attention mechanisms to help the network focus on relevant image features. In this way, it can be trained efficiently with a very limited amount of annotations. Our network can thus be used to automate the segmentation of datasets that are generally considered too small for deep learning techniques. AURA-net also uses a loss inspired by active contours that is well-adapted to the specificity of phase-contrast images, further improving performance. We show that AURA-net outperforms state-of-the-art alternatives in several small (less than 100 images) datasets.\n"} {"abstract": " Multi-domain learning (MDL) refers to learning a set of models simultaneously, with each one specialized to perform a task in a certain domain. Generally, high labeling effort is required in MDL, as data needs to be labeled by human experts for every domain. Active learning (AL), which reduces labeling effort by only using the most informative data, can be utilized to address the above issue. The resultant paradigm is termed multi-domain active learning (MDAL). However, currently little research has been done in MDAL, not to mention any off-the-shelf solution. To fill this gap, we present a comprehensive comparative study of 20 different MDAL algorithms, which are established by combining five representative MDL models under different information-sharing schemes and four widely used AL strategies under different categories. We evaluate the algorithms on five datasets, involving textual and visual classification tasks. We find that the models which capture both domain-independent and domain-specific information are more likely to perform well throughout the AL loops. Besides, the simplest informativeness-based uncertainty strategy surprisingly performs well on most datasets. As our off-the-shelf recommendation, the combination of Multinomial Adversarial Networks (MAN) with the best vs second best (BvSB) uncertainty strategy shows its superiority in most cases, and this combination is also robust across datasets and domains.\n"} {"abstract": " This paper develops and investigates the impacts of multi-objective Nash optimum (user equilibrium) traffic assignment on a large-scale network for battery electric vehicles (BEVs) and internal combustion engine vehicles (ICEVs) in a microscopic traffic simulation environment. Eco-routing is a technique that finds the most energy efficient route. ICEV and BEV energy consumption patterns are significantly different with regard to their sensitivity to driving cycles. Unlike ICEVs, BEVs are more energy efficient on low-speed arterial trips compared to highway trips. Different energy consumption patterns require different eco-routing strategies for ICEVs and BEVs. This study found that eco-routing could reduce energy consumption for BEVs but also significantly increase their average travel time. The simulation study found that multi-objective routing could reduce the energy consumption of BEVs by 13.5, 14.2, 12.9, and 10.7 percent, as well as the fuel consumption of ICEVs by 0.1, 4.3, 3.4, and 10.6 percent for \"not congested\", \"slightly congested\", \"moderately congested\", and \"highly congested\" conditions, respectively.
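In its simplest scalarized form, the multi-objective routing idea reduces to a shortest-path search on a cost that blends travel time and energy; a toy sketch (all network data and weights invented) might look like:

```python
import networkx as nx

# Toy network; each edge carries a travel time (s) and an energy use (Wh).
G = nx.DiGraph()
G.add_edge("A", "B", time=120, energy=150)
G.add_edge("B", "C", time=100, energy=180)
G.add_edge("A", "C", time=260, energy=90)   # slower arterial, cheaper for a BEV

def eco_route(G, src, dst, w_time=0.5, w_energy=0.5):
    # Scalarize the two objectives into one edge weight, then run Dijkstra.
    cost = lambda u, v, d: w_time * d["time"] + w_energy * d["energy"]
    return nx.shortest_path(G, src, dst, weight=cost)

print(eco_route(G, "A", "C"))             # balanced weighting -> ['A', 'C']
print(eco_route(G, "A", "C", 0.9, 0.1))   # time-dominated -> ['A', 'B', 'C']
```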
The study also found that multi-objective user equilibrium routing reduced the average vehicle travel time by up to 10.1% compared to the standard user equilibrium traffic assignment for the highly congested conditions, producing a solution closer to the system optimum traffic assignment. The results indicate that multi-objective eco-routing can effectively reduce fuel/energy consumption with minimal impact on travel times for both BEVs and ICEVs.\n"} {"abstract": " Using the dynamical diquark model, we calculate the electric-dipole radiative decay widths to $X(3872)$ of the lightest negative-parity exotic candidates, including the four $I=0$, $J^{PC} \! = \! 1^{--}$ (\"$Y$\") states. The $O$(100--1000 keV) values obtained test the hypothesis of a common substructure shared by all of these states. We also calculate the magnetic-dipole radiative decay width for $Z_c(4020)^0 \! \to \! \gamma X(3872)$, and find it to be rather smaller ($<$~10 keV) than its predicted value in molecular models.\n"} {"abstract": " Adoption of Unmanned Aerial Vehicle (UAV) swarms is growing steadily among operators due to the time and cost benefits arising from their use. However, this kind of system faces an important problem, which is the calculation of an optimal path for each UAV. Solving this problem would allow one to control many UAVs simultaneously without human intervention, while saving battery between recharges and performing several tasks at once. The main aim is to develop a system capable of calculating the optimal flight path for a UAV swarm. The aim of these paths is to achieve full coverage of a flight area for tasks such as field prospection, regardless of the map size and the number of UAVs in the swarm. No targets or prior knowledge beyond the given map are required. Experiments have been conducted to determine whether it is optimal to establish a single control for all UAVs in the swarm or a control for each UAV. The results show that it is better to use one control for all UAVs because of the shorter flight time. In addition, the flight time is greatly affected by the size of the map. The results give starting points for future research, such as finding the optimal map size for each situation.\n"} {"abstract": " We present a neutron spectroscopy based method to study quantitatively the partial miscibility and phase behaviour of an organic photovoltaic active layer made of conjugated polymer:small molecule blends, presently illustrated with the regio-random poly(3-hexylthiophene-2,5-diyl) and fullerene [6,6]-Phenyl C$_{61}$ butyric acid methyl ester (RRa-P3HT:PCBM) system. We perform both inelastic neutron scattering and quasi-elastic neutron scattering measurements to study the structural dynamics of blends of different compositions, enabling us to resolve the phase behaviour. The difference of neutron cross sections between RRa-P3HT and PCBM, and the use of the deuteration technique, offer a unique opportunity to probe the miscibility limit of fullerene in the amorphous polymer-rich phase and to tune the contrast between the polymer and the fullerene phases, respectively.
Therefore, the proposed approach should be universal and relevant to study new non-fullerene acceptors that are closely related - in terms of chemical structures - to the polymer, where other conventional imaging and spectroscopic techniques present a poor contrast between the blend components.\n"} {"abstract": " Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. The Sombor and reduced Sombor indices of $G$ are defined as $SO(G)=\sum_{uv\in E(G)}\sqrt{\deg_G(u)^2+\deg_G(v)^2}$ and $SO_{red}(G)=\sum_{uv\in E(G)}\sqrt{(\deg_G(u)-1)^2+(\deg_G(v)-1)^2}$, respectively. We denote by $H_{n,\nu}$ the graph constructed from the star $S_n$ by adding $\nu$ edge(s) $(0\leq \nu\leq n-2)$ between a fixed pendent vertex and $\nu$ other pendent vertices. R\'eti et al. [T. R\'eti, T. Do\v{s}li\'c and A. Ali, On the Sombor index of graphs, $\textit{Contrib. Math. }$ $\textbf{3}$ (2021) 11-18] conjectured that the graph $H_{n,\nu}$ has the maximum Sombor index among all connected $\nu$-cyclic graphs of order $n$, where $5\leq \nu \leq n-2$. In this paper we confirm that this conjecture is true. It is also shown that the conjecture is valid for the reduced Sombor index. The relationship between the Sombor, reduced Sombor and first Zagreb indices of graphs is also investigated.\n"} {"abstract": " Meta-learning synthesizes and leverages the knowledge from a given set of tasks to rapidly learn new tasks using very little data. Meta-learning of linear regression tasks, where the regressors lie in a low-dimensional subspace, is an extensively-studied fundamental problem in this domain. However, existing results either guarantee highly suboptimal estimation errors, or require $\Omega(d)$ samples per task (where $d$ is the data dimensionality), thus providing little gain over separately learning each task. In this work, we study a simple alternating minimization method (MLLAM), which alternately learns the low-dimensional subspace and the regressors. We show that, for a constant subspace dimension, MLLAM obtains nearly-optimal estimation error, despite requiring only $\Omega(\log d)$ samples per task. However, the number of samples required per task grows logarithmically with the number of tasks. To remedy this in the low-noise regime, we propose a novel task subset selection scheme that ensures the same strong statistical guarantee as MLLAM, even with a bounded number of samples per task for an arbitrarily large number of tasks.\n"} {"abstract": " In the presence of spacetime torsion, the momentum components do not commute; therefore, in quantum field theory, summation over the momentum eigenvalues will replace integration over the momentum. In the Einstein--Cartan theory of gravity, in which torsion is coupled to spin, the separation between the eigenvalues increases with the magnitude of the momentum. Consequently, this replacement regularizes divergent integrals in Feynman diagrams with loops by turning them into convergent sums. In this article, we apply torsional regularization to the self-energy of a charged lepton in quantum electrodynamics. We show that this procedure eliminates the ultraviolet divergence. We also show that torsion gives a photon a small nonzero mass, which regularizes the infrared divergence. In the end, we calculate the finite bare masses of the electron, muon, and tau lepton: $0.4329\,\mbox{MeV}$, $90.95\,\mbox{MeV}$, and $1543\,\mbox{MeV}$, respectively.
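The quoted bare masses can be checked in one line against the observed lepton masses; the observed values below are standard PDG numbers, not taken from the paper:

```python
# Ratio of the quoted bare masses to the observed (renormalized) masses.
bare = {"e": 0.4329, "mu": 90.95, "tau": 1543.0}       # MeV, from the abstract
observed = {"e": 0.5110, "mu": 105.66, "tau": 1776.9}  # MeV, PDG values

for lepton in bare:
    print(lepton, f"{bare[lepton] / observed[lepton]:.1%}")
# e 84.7%, mu 86.1%, tau 86.8% -- consistent with the quoted ~85%
```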
These values constitute about $85\%$ of the observed, renormalized masses.\n"} {"abstract": " Wikipedia is an online encyclopedia available in 285 languages. It constitutes an extremely relevant Knowledge Base (KB), which could be leveraged by automatic systems for several purposes. However, the structure and organisation of such information are not amenable to automatic parsing and understanding, and it is therefore necessary to structure this knowledge. The goal of the current SHINRA2020-ML task is to leverage Wikipedia pages in order to categorise their corresponding entities across 268 hierarchical categories, belonging to the Extended Named Entity (ENE) ontology. In this work, we propose three distinct models based on the contextualised embeddings yielded by Multilingual BERT. We explore the performances of a linear layer with and without explicit usage of the ontology's hierarchy, and a Gated Recurrent Units (GRU) layer. We also test several pooling strategies to leverage BERT's embeddings and selection criteria based on the labels' scores. We were able to achieve good performance across a large variety of languages, including those not seen during the fine-tuning process (zero-shot languages).\n"} {"abstract": " In this work we present (and encourage the use of) the Williamson theorem and its consequences in several contexts in physics. We demonstrate this theorem using only basic concepts of linear algebra and symplectic matrices. As an immediate application in the context of small oscillations, we show that applying this theorem reveals the normal-mode coordinates and frequencies of the system in the Hamiltonian scenario. A modest introduction of the symplectic formalism in quantum mechanics is presented, using the theorem to study quantum normal modes and canonical distributions of thermodynamically stable systems described by quadratic Hamiltonians. As a last example, a more advanced topic concerning uncertainty relations is developed to show once more its utility from a distinct and modern perspective.\n"} {"abstract": " This paper presents the Multilingual COVID-19 Analysis Method (CMTA) for detecting and observing the spread of misinformation about this disease within texts. CMTA proposes a data science (DS) pipeline that applies machine learning models for processing, classifying (Dense-CNN) and analyzing (MBERT) multilingual (micro)-texts. DS pipeline data preparation tasks extract features from multilingual textual data and categorize it into specific information classes (i.e., 'false', 'partly false', 'misleading'). The CMTA pipeline has been tested on multilingual micro-texts (tweets), showing misinformation spread across different languages. To assess the performance of CMTA and put it in perspective, we performed a comparative analysis of CMTA with eight monolingual models used for detecting misinformation. The comparison shows that CMTA has surpassed various monolingual models and suggests that it can be used as a general method for detecting misinformation in multilingual micro-texts. CMTA experimental results show misinformation trends about COVID-19 in different languages during the first pandemic months.\n"} {"abstract": " We theoretically report the emergence of $Z_4$ parafermion edge modes in a periodically driven spinful superconducting chain with modest fermionic Hubbard interaction.
These parafermion edge modes represent $\pm \pi/(2T)$ quasienergy excitations ($T$ being the driving period), which have no static counterpart and arise from the interplay between interaction effects and periodic driving. At special parameter values, these exotic quasiparticles can be analytically and exactly derived. Strong numerical evidence of their robustness against variations in parameter values and spatial disorder is further presented. Our proposal offers a route toward realizing parafermions without fractional quantum Hall systems or complicated interactions.\n"} {"abstract": " In this rejoinder, we aim to address two broad issues that cover most comments made in the discussion. First, we discuss some theoretical aspects of our work and comment on how this work might impact the theoretical foundation of privacy-preserving data analysis. Taking a practical viewpoint, we next discuss how f-differential privacy (f-DP) and Gaussian differential privacy (GDP) can make a difference in a range of applications.\n"} {"abstract": " We present the first spectroscopically resolved H$\alpha$ emission map of the Large Magellanic Cloud's (LMC) galactic wind. By combining new Wisconsin H-alpha Mapper (WHAM) observations ($I_{\rm H\alpha}\gtrsim10~{\rm mR}$) with existing HI 21 cm emission observations, we have (1) mapped the LMC's near-side galactic wind over a local standard of rest (LSR) velocity range of $+50\le\rm v_{LSR}\le+250~{\rm km}~{\rm s}^{-1}$, (2) determined its morphology and extent, and (3) estimated its mass, outflow rate, and mass-loading factor. We observe H$\alpha$ emission from this wind out to typically 1 degree off the LMC's HI disk. Kinematically, we find that the diffuse gas in the warm-ionized phase of this wind persists at both low ($\lesssim100~{\rm km}~{\rm s}^{-1}$) and high ($\gtrsim100~{\rm km}~{\rm s}^{-1}$) velocities, relative to the LMC's HI disk. Furthermore, we find that the high-velocity component spatially aligns with the most intense star-forming region, 30~Doradus. We, therefore, conclude that this high-velocity material traces an active outflow. We estimate the mass of the warm ($T_e\approx10^4~\rm K$) ionized phase of the near-side LMC outflow to be $\log{\left(M_{\rm ionized}/M_\odot\right)=7.51\pm0.15}$ for the combined low and high velocity components. Assuming an ionization fraction of 75\% and that the wind is symmetrical about the LMC disk, we estimate that its total (neutral and ionized) mass is $\log{\left(M_{\rm total}/M_\odot\right)=7.93}$, its mass-flow rate is $\dot{M}_{\rm outflow}\approx1.43~M_\odot~\rm yr^{-1}$, and its mass-loading factor is $\eta\approx4.54$. Our average mass-loading factor results are roughly a factor of 2.5 larger than those of previous H$\alpha$ imaging and UV~absorption line studies, suggesting that those studies are missing nearly half the gas in the outflows.\n"} {"abstract": " While 2D occupancy maps commonly used in mobile robotics enable safe navigation in indoor environments, in order for robots to understand their environment to the level required for them to perform more advanced tasks, representing 3D geometry and semantic environment information is required. We propose a pipeline that can generate a multi-layer representation of indoor environments for robotic applications.
The proposed representation includes 3D metric-semantic layers, a 2D occupancy layer, and an object instance layer where known objects are replaced with an approximate model obtained through a novel model-matching approach. The metric-semantic layer and the object instance layer are combined to form an augmented representation of the environment. Experiments show that the proposed shape matching method outperforms a state-of-the-art deep learning method when tasked to complete unseen parts of objects in the scene. The pipeline performance translates well from simulation to real world as shown by F1-score analysis, with semantic segmentation accuracy using Mask R-CNN acting as the major bottleneck. Finally, we also demonstrate on a real robotic platform how the multi-layer map can be used to improve navigation safety.\n"} {"abstract": " Analyzing human affect is vital for human-computer interaction systems. Most methods are developed in restricted scenarios which are not practical for in-the-wild settings. The Affective Behavior Analysis in-the-wild (ABAW) 2021 Contest provides a benchmark for this in-the-wild problem. In this paper, we introduce a multi-modal and multi-task learning method by using both visual and audio information. We use both AU and expression annotations to train the model and apply a sequence model to further extract associations between video frames. We achieve an AU score of 0.712 and an expression score of 0.477 on the validation set. These results demonstrate the effectiveness of our approach in improving model performance.\n"} {"abstract": " It is well known that compressed sensing (CS) can boost massive random access protocols. Usually, the protocols operate in some overloaded regime where the sparsity can be exploited. In this paper, we consider a different approach by taking an orthogonal FFT base, subdividing its image into appropriate sub-channels, and letting each sub-channel take only a fraction of the load. To show that this approach can actually achieve the full capacity, we i) provide new concentration inequalities, and ii) devise a sparsity capture effect, i.e., where the sub-division can be driven such that the activity in each sub-channel is sparse by design. We show by simulations that the system is scalable, resulting in a roughly 30-fold capacity increase.\n"} {"abstract": " Matrices are often built and designed by applying procedures from lower order matrices. Matrix tensor products, direct sums and multiplication of matrices retain certain properties of the lower order matrices; matrices produced by these procedures are said to be {\em separable}. {\em Entangled} matrices is the term used for matrices which are not separable. Here design methods for entangled matrices are derived. These can retain properties of lower order matrices or acquire new required properties.\n Entangled matrices are often required in practice and a number of applications of the designs are given. Methods with which to construct multidimensional entangled paraunitary matrices are derived; these have applications for wavelet and filter bank design. New entangled unitary matrices are designed; these are used in quantum information theory.
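The separable/entangled dichotomy for tensor products has a convenient computational test: a matrix equals a Kronecker product exactly when its Van Loan-Pitsianis rearrangement has rank one. A sketch of that standard test (not the paper's construction):

```python
import numpy as np

def is_kron_separable(M, shape_A, shape_B, tol=1e-10):
    """True iff M = A (x) B for some A, B of the given shapes, tested via
    the rank of the Van Loan-Pitsianis rearrangement of M."""
    (p, q), (r, s) = shape_A, shape_B
    R = M.reshape(p, r, q, s).transpose(0, 2, 1, 3).reshape(p * q, r * s)
    return np.linalg.matrix_rank(R, tol=tol) == 1

A, B = np.random.rand(2, 2), np.random.rand(3, 3)
print(is_kron_separable(np.kron(A, B), (2, 2), (3, 3)))         # True
print(is_kron_separable(np.random.rand(6, 6), (2, 2), (3, 3)))  # False (generically)
```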
Efficient methods for designing new full diversity constellations of unitary matrices with excellent {\em quality} (a defined term) for space time applications are given.\n"} {"abstract": " We investigate theoretically coherent detection implemented simultaneously on a set of mutually orthogonal spatial modes in the image plane as a method to characterize properties of a composite thermal source below the Rayleigh limit. A general relation between the intensity distribution in the source plane and the covariance matrix for the complex field amplitudes measured in the image plane is derived. An algorithm to estimate parameters of a two-dimensional symmetric binary source is devised and verified using Monte Carlo simulations to provide super-resolving capability for a high ratio of signal to detection noise (SNR). Specifically, the separation between two point sources can be meaningfully determined down to $\textrm{SNR}^{-1/2}$ in the units determined by the spatial spread of the transfer function of the imaging system. The presented algorithm is shown to make a nearly optimal use of the measured data in the sub-Rayleigh region.\n"} {"abstract": " The pool block withholding attack is performed among mining pools in digital cryptocurrencies, such as Bitcoin. Instead of mining honestly, pools can be incentivized to infiltrate their own miners into other pools. These infiltrators report partial solutions but withhold full solutions; they share block rewards but make no contribution to block mining. The block withholding attack among mining pools can be modeled as a non-cooperative game called \"the miner's dilemma\", which reduces effective mining power in the system and leads to potential systemic instability in the blockchain. However, existing literature on the game-theoretic properties of this attack only gives a preliminary analysis, e.g., an upper bound of 3 for the pure price of anarchy (PPoA) in this game, with two pools involved and no miner betraying. The pure price of anarchy is a measure of how much mining power is wasted in the miner's dilemma game. Further tightening its upper bound will bring us more insight into the structure of this game, so as to design mechanisms to reduce the systemic loss caused by mutual attacks. In this paper, we give a tight bound of (1, 2] for the pure price of anarchy. Moreover, we show the tight bound holds in a more general setting, in which infiltrators may betray. We also prove the existence and uniqueness of the pure Nash equilibrium in this setting. Inspired by experiments on the game among three mining pools, we conjecture that similar results hold in the $N$-player miner's dilemma game ($N \geq 2$).\n"} {"abstract": " Computing the distribution of permanents of random matrices has been an outstanding open problem for several decades. In quantum computing, \"anti-concentration\" of this distribution is an unproven input for the proof of hardness of the task of boson-sampling. We study permanents of random i.i.d. complex Gaussian matrices, and more broadly, submatrices of random unitary matrices. Using a hybrid representation-theoretic and combinatorial approach, we prove strong lower bounds for all moments of the permanent distribution. We provide substantial evidence that our bounds are close to being tight and constitute accurate estimates for the moments.
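For very small $k$ the moments discussed here can be sanity-checked numerically; for instance, for i.i.d. standard complex Gaussian entries the second moment of the permanent is $k!$. A brute-force Monte Carlo sketch:

```python
import itertools
import numpy as np

def perm(M):
    # Brute-force permanent; fine for the tiny k used here.
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

rng = np.random.default_rng(0)
k, trials = 3, 20000
# i.i.d. standard complex Gaussian entries normalized so E|g|^2 = 1
G = (rng.standard_normal((trials, k, k))
     + 1j * rng.standard_normal((trials, k, k))) / np.sqrt(2)
m2 = np.mean([abs(perm(G[t])) ** 2 for t in range(trials)])
print(m2)  # fluctuates around k! = 6
```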
Let $U(d)^{k\\times k}$ be the\ndistribution of $k\\times k$ submatrices of $d\\times d$ random unitary matrices,\nand $G^{k\\times k}$ be the distribution of $k\\times k$ complex Gaussian\nmatrices. (1) Using the Schur-Weyl duality (or the Howe duality), we prove an\nexpansion formula for the $2t$-th moment of $|Perm(M)|$ when $M$ is drawn from\n$U(d)^{k\\times k}$ or $G^{k\\times k}$. (2) We prove a surprising size-moment\nduality: the $2t$-th moment of the permanent of random $k\\times k$ matrices is\nequal to the $2k$-th moment of the permanent of $t\\times t$ matrices. (3) We\ndesign an algorithm to exactly compute high moments of the permanent of small\nmatrices. (4) We prove lower bounds for arbitrary moments of permanents of\nmatrices drawn from $G^{ k\\times k}$ or $U(k)$, and conjecture that our lower\nbounds are close to saturation up to a small multiplicative error. (5) Assuming\nour conjectures, we use the large deviation theory to compute the tail of the\ndistribution of log-permanent of Gaussian matrices for the first time. (6) We\nargue that it is unlikely that the permanent distribution can be uniquely\ndetermined from the integer moments and one may need to supplement the moment\ncalculations with extra assumptions to prove the anti-concentration conjecture.\n"} {"abstract": " In this paper we attempt to develop a general $p-$Bergman theory on bounded\ndomains in $\\mathbb C^n$. To indicate the basic difference between $L^p$ and\n$L^2$ cases, we show that the $p-$Bergman kernel $K_p(z)$ is not real-analytic\non some bounded complete Reinhardt domains when $p\\ge 4$ is an even number. By\nthe calculus of variations we get a fundamental reproducing formula. This\ntogether with certain techniques from nonlinear analysis of the $p-$Laplacian\nyield a number of results, e.g., the off-diagonal $p-$Bergman kernel\n$K_p(z,\\cdot)$ is H\\\"older continuous of order $\\frac12$ for $p>1$ and of order\n$\\frac1{2(n+2)}$ for $p=1$. We also show that the $p-$Bergman metric $B_p(z;X)$\ntends to the Carath\\'eodory metric $C(z;X)$ as $p\\rightarrow \\infty$ and the\ngeneralized Levi form $i\\partial\\bar{\\partial}\\log K_p(z;X)$ is no less than\n$B_p(z;X)^2$ for $p\\ge 2$ and $ C(z;X)^2$ for $p\\le 2.$ Stability of $K_p(z,w)$\nor $B_p(z;X)$ as $p$ varies, boundary behavior of $K_p(z)$, as well as basic\nfacts on the $p-$Bergman prjection, are also investigated.\n"} {"abstract": " Quantitative evaluation has increased dramatically among recent video\ninpainting work, but the video and mask content used to gauge performance has\nreceived relatively little attention. Although attributes such as camera and\nbackground scene motion inherently change the difficulty of the task and affect\nmethods differently, existing evaluation schemes fail to control for them,\nthereby providing minimal insight into inpainting failure modes. 
To address this gap, we propose the Diagnostic Evaluation of Video Inpainting on Landscapes (DEVIL) benchmark, which consists of two contributions: (i) a novel dataset of videos and masks labeled according to several key inpainting failure modes, and (ii) an evaluation scheme that samples slices of the dataset characterized by a fixed content attribute, and scores performance on each slice according to reconstruction, realism, and temporal consistency quality. By revealing systematic changes in performance induced by particular characteristics of the input content, our challenging benchmark enables more insightful analysis into video inpainting methods and serves as an invaluable diagnostic tool for the field. Our code is available at https://github.com/MichiganCOG/devil .\n"} {"abstract": " Quantum dots (QDs) made from semiconductors are among the most promising platforms for the development of quantum computing and simulation chips, and have advantages over other platforms in high density integration and in compatibility with the standard semiconductor chip fabrication technology. However, development of a highly tunable semiconductor multiple QD system still remains a major challenge. Here, we demonstrate realization of a highly tunable linear quadruple QD (QQD) in a narrow bandgap semiconductor InAs nanowire using a fine finger gate technique. The QQD is studied by electron transport measurements in the linear response regime. Characteristic two-dimensional charge stability diagrams containing four groups of resonant current lines of different slopes are found for the QQD. It is shown that these current lines can be individually assigned as arising from resonant electron transport through the energy levels of different QDs. Benefiting from the excellent gate tunability, we also demonstrate tuning of the QQD to regimes where the energy levels of two QDs, three QDs and all the four QDs are energetically on resonance, respectively, with the Fermi level of the source and drain contacts. A capacitance network model is developed for the linear QQD and the simulated charge stability diagrams based on the model show good agreement with the experiments. Our work presents solid experimental evidence that multiple QDs in narrow bandgap semiconductor nanowires could be used as a versatile platform to achieve integrated qubits for quantum computing and to perform quantum simulations for complex many-body systems.\n"} {"abstract": " Predictive energy management of Connected and Automated Vehicles (CAVs), in particular those with multiple power sources, has the potential to significantly improve energy savings in real-world driving conditions. In particular, the eco-driving problem seeks to design optimal speed and power usage profiles based upon available information from connectivity and advanced mapping features to minimize the fuel consumption between two designated locations.\n In this work, the eco-driving problem is formulated as a three-state receding horizon optimal control problem and solved via Dynamic Programming (DP). The optimal solution, in terms of vehicle speed and battery State of Charge (SoC) trajectories, allows a connected and automated hybrid electric vehicle to intelligently pass the signalized intersections and minimize fuel consumption over a prescribed route.
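The backward recursion underlying such a DP solver is compact; the following toy version over a one-dimensional speed grid (all numbers invented, far simpler than the paper's three-state problem) shows the structure that a GPU implementation can parallelize over grid points:

```python
import numpy as np

speeds = np.arange(10, 21)                    # admissible speeds, m/s
N = 50                                        # number of distance steps
fuel = lambda v: 0.02 * v**2 - 0.5 * v + 5.0  # assumed fuel cost per step

J = np.zeros(len(speeds))                     # terminal cost-to-go
policy = np.zeros((N, len(speeds)), dtype=int)
for k in reversed(range(N)):                  # backward Bellman recursion
    J_new = np.empty_like(J)
    for i in range(len(speeds)):
        # allow holding speed or moving to a neighboring grid point
        nxt = [j for j in (i - 1, i, i + 1) if 0 <= j < len(speeds)]
        costs = [fuel(speeds[j]) + J[j] for j in nxt]
        best = int(np.argmin(costs))
        J_new[i], policy[k, i] = costs[best], nxt[best]
    J = J_new
print(f"optimal cost-to-go from 15 m/s: {J[5]:.1f}")
```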
To enable real-time implementation, a parallel architecture of DP is proposed for an NVIDIA GPU with CUDA programming. Simulation results indicate that the proposed optimal controller delivers more than 15% fuel economy benefits compared to a baseline control strategy and that the solver time can be reduced by more than 90% by the parallel implementation when compared to a serial implementation.\n"} {"abstract": " We investigate how sentence-level transformers can be modified into effective sequence labelers at the token level without any direct supervision. Existing approaches to zero-shot sequence labeling do not perform well when applied to transformer-based architectures. As transformers contain multiple layers of multi-head self-attention, information in the sentence gets distributed between many tokens, negatively affecting zero-shot token-level performance. We find that a soft attention module which explicitly encourages sharpness of attention weights can significantly outperform existing methods.\n"} {"abstract": " We calculate the possible interaction between a superconductor and the static Earth's gravitational fields, making use of the gravito-Maxwell formalism combined with the time-dependent Ginzburg-Landau theory. We try to estimate the most favourable conditions for enhancing the effect, optimizing the superconductor parameters characterizing the chosen sample. We also give a qualitative comparison of the behaviour of high-$T_\text{c}$ and classical low-$T_\text{c}$ superconductors with respect to the gravity/superfluid interplay.\n"} {"abstract": " Implications of the Raychaudhuri equation in the focusing of geodesic congruences are studied in the framework of the scalar-tensor theory of gravity. Brans-Dicke theory and Bekenstein's scalar field theory are taken up for investigation. In both theories, a static spherically symmetric distribution and a spatially homogeneous and isotropic cosmological model are dealt with, as specific examples. It is found that with reasonable physical conditions, there are possibilities for a violation of the convergence condition. This fact leads to a possibility of avoiding a singularity.\n"} {"abstract": " We show that, for vector spaces in which distance measurement is performed using a gauge, the existence of best coapproximations in $1$-codimensional closed linear subspaces implies in dimensions $\geq 2$ that the gauge is a norm, and in dimensions $\geq 3$ that the gauge is even a Hilbert space norm. We also show that coproximinality of all closed subspaces of a fixed dimension implies coproximinality of all subspaces of all lower finite dimensions.\n"} {"abstract": " We construct high-order semi-discrete-in-time and fully discrete (with Fourier-Galerkin in space) schemes for the incompressible Navier-Stokes equations with periodic boundary conditions, and carry out the corresponding error analysis. The schemes are of implicit-explicit type based on a scalar auxiliary variable (SAV) approach. It is shown that numerical solutions of these schemes are uniformly bounded without any restriction on the time step size. These uniform bounds enable us to carry out a rigorous error analysis for the schemes up to fifth order in a unified form, and derive global error estimates in $l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the two dimensional case as well as local error estimates in $l^\infty(0,T;H^1)\cap l^2(0,T;H^2)$ in the three dimensional case.
We also present numerical results confirming our theoretical convergence rates and demonstrating the advantages of higher-order schemes for flows with complex structures in the double shear layer problem.\n"} {"abstract": " Stochastically switching force terms appear frequently in models of biological systems under the action of active agents such as proteins. The interaction of switching force and Brownian motion can create an \"effective thermal equilibrium\" even though the system does not obey a potential function. In order to extend the field of energy landscape analysis to understand stability and transitions in switching systems, we derive the quasipotential that defines this effective equilibrium for a general overdamped Langevin system with a force switching according to a continuous-time Markov chain. Combined with the string method for computing most-probable transition paths, we apply our method to an idealized system and show the appearance of previously unreported numerical challenges. We present modifications to the algorithms to overcome these challenges, and show validity by demonstrating agreement between our computed quasipotential barrier and asymptotic Monte Carlo transition times in the system.\n"} {"abstract": " Convolutional Neural Networks (CNNs) have achieved great success due to the powerful feature learning ability of convolution layers. Specifically, the standard convolution traverses the input images/features using a sliding window scheme to extract features. However, not all the windows contribute equally to the prediction results of CNNs. In practice, the convolutional operation on some of the windows (e.g., smooth windows that contain very similar pixels) can be very redundant and may introduce noises into the computation. Such redundancy may not only deteriorate the performance but also incur unnecessary computational cost. Thus, it is important to reduce the computational redundancy of convolution to improve the performance. To this end, we propose a Content-aware Convolution (CAC) that automatically detects the smooth windows and applies a 1x1 convolutional kernel to replace the original large kernel. In this sense, we are able to effectively avoid the redundant computation on similar pixels. By replacing the standard convolution in CNNs with our CAC, the resultant models yield significantly better performance and lower computational cost than the baseline models with the standard convolution. More critically, we are able to dynamically allocate suitable computation resources according to the data smoothness of different images, making content-aware computation possible. Extensive experiments on various computer vision tasks demonstrate the superiority of our method over existing methods.\n"} {"abstract": " Let $G$ be an $n$-vertex graph and let $L:V(G)\rightarrow P(\{1,2,3\})$ be a list assignment over the vertices of $G$, where each vertex with list of size 3 and of degree at most 5 has at least three neighbors with lists of size 2. We can determine $L$-choosability of $G$ in $O(1.3196^{n_3+.5n_2})$ time, where $n_i$ is the number of vertices in $G$ with list of size $i$ for $i\in \{2,3\}$. As a corollary, we conclude that the 3-colorability of any graph $G$ with minimum degree at least 6 can be determined in $O(1.3196^{n-.5\Delta(G)})$ time.\n"} {"abstract": " Multi-instance learning is a type of weakly supervised learning.
It deals with tasks where the data is a set of bags and each bag is a set of instances. Only the bag labels are observed whereas the labels for the instances are unknown. An important advantage of multi-instance learning is that by representing objects as a bag of instances, it is able to preserve the inherent dependencies among parts of the objects. Unfortunately, most existing algorithms assume all instances to be \textit{identically and independently distributed}, an assumption that is violated in real-world scenarios since the instances within a bag are rarely independent. In this work, we propose the Multi-Instance Variational Auto-Encoder (MIVAE) algorithm which explicitly models the dependencies among the instances for predicting both bag labels and instance labels. Experimental results on several multi-instance benchmarks and end-to-end medical imaging datasets demonstrate that MIVAE performs better than state-of-the-art algorithms for both instance label and bag label prediction tasks.\n"} {"abstract": " The global pandemic caused by the COVID virus led universities to change the way they teach classes, moving to a distance mode. The subject \"Modelos y Sistemas de Costos\" of the CPA degree program of the Faculty of Economic Sciences and Administration of the Universidad de la Rep\'ublica (Uruguay) incorporated audiovisual material as a pedagogical resource, consisting of videos recorded by a group of highly experienced, top-ranked teachers. The objective of this research is to analyze the efficiency of the audiovisual resources used in the course, seeking to answer whether views of these materials follow certain patterns of behavior. We analyzed 13 videos, which received 16,340 views from at least 1,486 viewers. We found that viewing depends on the proximity to the test dates and that, although viewing time follows a curve that tracks the duration of the videos, it is limited, with an average viewing time of 10 minutes and 4 seconds. We also conclude that viewing-time efficiency is higher for short videos.\n"} {"abstract": " The $H$-join of a family of graphs $\mathcal{G}=\{G_1, \dots, G_p\}$, also called the generalized composition, $H[G_1, \dots, G_p]$, where all graphs are undirected, simple and finite, is the graph obtained by replacing each vertex $i$ of $H$ by $G_i$ and adding to the edges of all graphs in $\mathcal{G}$ the edges of the join $G_i \vee G_j$, for every edge $ij$ of $H$. Some well known graph operations are particular cases of the $H$-join of a family of graphs $\mathcal{G}$, as is the case for the lexicographic product (also called composition) of two graphs $H$ and $G$, $H[G]$. For a long time, the known expressions for the determination of the entire spectrum of the $H$-join in terms of the spectra of its components and an associated matrix were limited to families of regular graphs. In this work, we extend such a determination, as well as the determination of the characteristic polynomial, to families of arbitrary graphs.
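The H-join construction defined above is easy to transcribe into an adjacency-matrix routine, which is handy for checking the spectral results numerically; an illustrative sketch, whose example reduces to $K_5$ with the known spectrum $\{4, -1^{(4)}\}$:

```python
import numpy as np

def h_join_adjacency(AH, comps):
    """Adjacency matrix of H[G_1, ..., G_p]: block diagonal of the components,
    plus all-ones blocks between G_i and G_j whenever ij is an edge of H."""
    sizes = [A.shape[0] for A in comps]
    offs = np.cumsum([0] + sizes)
    M = np.zeros((offs[-1], offs[-1]))
    for i, A in enumerate(comps):
        M[offs[i]:offs[i+1], offs[i]:offs[i+1]] = A
    for i in range(len(comps)):
        for j in range(i + 1, len(comps)):
            if AH[i, j]:
                M[offs[i]:offs[i+1], offs[j]:offs[j+1]] = 1
                M[offs[j]:offs[j+1], offs[i]:offs[i+1]] = 1
    return M

AH = np.array([[0, 1], [1, 0]])      # H is a single edge
K3 = np.ones((3, 3)) - np.eye(3)     # G_1 = K_3
K2 = np.array([[0., 1.], [1., 0.]])  # G_2 = K_2
print(np.linalg.eigvalsh(h_join_adjacency(AH, [K3, K2])).round(3))
# [-1. -1. -1. -1.  4.]  (K_3 joined to K_2 is K_5)
```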
From the obtained results, the eigenvectors of the adjacency matrix of the $H$-join can also be determined in terms of the adjacency matrices of the components and an associated matrix.\n"} {"abstract": " A class of network codes has been proposed in the literature where the symbols transmitted on network edges are binary vectors and the coding operation performed in network nodes consists of the application of (possibly several) permutations on each incoming vector and XOR-ing the results to obtain the outgoing vector. These network codes, which we will refer to as permute-and-add network codes, involve simpler operations and are known to provide lower complexity solutions than scalar linear codes. The complexity of these codes is determined by their degree, which is the number of permutations applied on each incoming vector to compute an outgoing vector. Constructions of permute-and-add network codes for multicast networks are known. In this paper, we provide a new framework based on group algebras to design permute-and-add network codes for arbitrary (not necessarily multicast) networks. Our framework allows the use of any finite group of permutations (including circular shifts, proposed in prior work) and admits a trade-off between the coding rate and the degree of the code. Further, our technique permits elegant recovery and generalizations of the key results on permute-and-add network codes known in the literature.\n"} {"abstract": " The main purpose of this paper is to construct high-girth regular expander graphs with localized eigenvectors for general degrees, which is inspired by a recent work due to Alon, Ganguly and Srivastava (to appear in Israel J. Math.).\n"} {"abstract": " Video streaming has become an integral part of the Internet. To efficiently utilize the limited network bandwidth, it is essential to encode the video content. However, encoding is a computationally intensive task, involving high-performance resources provided by private infrastructures or public clouds. Public clouds, such as Amazon EC2, provide a large portfolio of services and instances optimized for specific purposes and budgets. The majority of Amazon instances use x86 processors, such as Intel Xeon or AMD EPYC. However, following the recent trends in computer architecture, Amazon introduced Arm-based instances that promise up to 40% better cost-performance ratio than comparable x86 instances for specific workloads. We evaluate in this paper the video encoding performance of x86 and Arm instances of four instance families using the latest FFmpeg version and two video codecs. We examine the impact of the encoding parameters, such as different presets and bitrates, on the time and cost for encoding. Our experiments reveal that Arm instances show high time and cost-saving potential of up to 33.63% for specific bitrates and presets, especially for the x264 codec. However, the x86 instances are more general and achieve low encoding times, regardless of the codec.\n"} {"abstract": " We have implemented training of neural networks in secure multi-party computation (MPC) using quantization commonly used in this setting. To the best of our knowledge, we are the first to present an MNIST classifier purely trained in MPC that comes within 0.2 percent of the accuracy of the same convolutional neural network trained via plaintext computation. More concretely, we have trained a network with two convolution and two dense layers to 99.2% accuracy in 25 epochs.
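The quantization alluded to here is typically fixed-point arithmetic, in which reals are scaled to integers so that MPC protocols can operate on them; a minimal illustration of the encoding idea (not the authors' actual protocol):

```python
# Fixed-point arithmetic of the kind commonly used in MPC training.
F = 16                                # number of fractional bits
enc = lambda x: round(x * 2**F)       # real -> scaled integer
dec = lambda y: y / 2**F              # scaled integer -> real

a, b = enc(0.125), enc(-1.5)
prod = (a * b) >> F                   # multiply, then rescale back
print(dec(prod))                      # -0.1875 == 0.125 * -1.5
```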
This took 3.5 hours in our MPC implementation (under one hour for 99% accuracy).\n"} {"abstract": " Interpretation of machine learning models has become one of the most important research topics due to the necessity of maintaining control and avoiding bias in these algorithms. Since many machine learning algorithms are published every day, there is a need for novel model-agnostic interpretation approaches that could be used to interpret a great variety of algorithms. Thus, one advantageous way to interpret machine learning models is to feed different input data to understand the changes in the prediction. Using such an approach, practitioners can define relations among data patterns and a model's decision. This work proposes a model-agnostic interpretation approach that uses visualization of feature perturbations induced by the PSO algorithm. We validate our approach on publicly available datasets, showing the capability to enhance the interpretation of different classifiers while yielding very stable results compared with state-of-the-art algorithms.\n"} {"abstract": " Using the integral field unit (IFU) data from the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, we collect a sample of 36 star forming galaxies that host galactic-scale outflows in the ionized gas phase. The control sample is matched in the three dimensional parameter space of stellar mass, star formation rate and inclination angle. Concerning the global properties, the outflow host galaxies tend to have smaller sizes, more asymmetric gas disks, more active star formation in the center and older stellar populations than the control galaxies. Comparing the stellar population properties along axes, we conclude that the star formation in the outflow host galaxies can be divided into two branches. One branch evolves following the inside-out formation scenario. The other, located in the galactic center, is triggered by gas accretion or galaxy interaction, and further drives the galactic-scale outflows. Besides, the enhanced star formation and metallicity along the minor axis of outflow host galaxies uncover the positive feedback and metal entrainment in the galactic-scale outflows. Observational data in different phases with higher spatial resolution are needed to reveal the influence of galactic-scale outflows on the star formation process in detail.\n"} {"abstract": " We consider in parallel pointed homotopy automorphisms of iterated wedge sums of topological spaces and boundary relative homotopy automorphisms of iterated connected sums of manifolds minus a disk. Under certain conditions on the spaces and manifolds, we prove that the rational homotopy groups of these homotopy automorphisms form finitely generated FI-modules, and thus satisfy representation stability for symmetric groups, in the sense of Church and Farb.\n"} {"abstract": " In this manuscript, we report a strong linear correlation between the shifted velocity and the line width of the broad blue-shifted [OIII] components in SDSS quasars.
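The correlation statistic used below is the Spearman rank coefficient; on synthetic data with a roughly linear trend plus scatter (all numbers invented, only the sample size echoes this abstract), it is computed as:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
v_shift = rng.uniform(50, 800, 535)              # blue-shifted velocity, km/s
width = 2.0 * v_shift + rng.normal(0, 300, 535)  # line width, km/s (toy relation)

rho, p = spearmanr(v_shift, width)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")  # strong positive correlation
```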
Broad blue-shifted [OIII] components are commonly treated as indicators of outflows related to the central engine; however, it is still an open question whether the outflows are related to central accretion properties or to local physical properties of NLRs (narrow emission line regions). Here, the reported strong linear correlation, with a Spearman rank correlation coefficient of 0.75, can be expected under the assumption of AGN (active galactic nuclei) feedback driven outflows, through a large sample of 535 SDSS quasars with reliable blue-shifted broad [OIII] components. Moreover, there are markedly different detection rates for broad blue-shifted and broad red-shifted [OIII] components in quasars, and no positive correlation can be found between the shifted velocity and the line width of the broad red-shifted [OIII] components, which provides further strong evidence against the possibility that local outflows in NLRs lead to the broad blue-shifted [OIII] components in quasars. Thus, the strong linear correlation can be treated as strong evidence for the broad blue-shifted [OIII] components being better indicators of outflows related to the central engine in AGN. Furthermore, rather than central BH masses, Eddington ratios and continuum luminosities play key roles in the properties of the broad blue-shifted [OIII] components in quasars.\n"} {"abstract": " This paper gives a detailed description of the system, and its results, that we developed as part of our participation in the CONSTRAINT shared task at AAAI-2021. The shared task comprises two tasks: a) COVID-19 fake news detection in English and b) hostile post detection in Hindi. Task-A is a binary classification problem with fake and real classes, while Task-B is a multi-label multi-class classification task with five hostile classes (i.e., defame, fake, hate, offense, non-hostile). Various techniques are used to perform the classification task, including SVM, CNN, BiLSTM, and CNN+BiLSTM with tf-idf and Word2Vec embedding techniques. Results indicate that SVM with tf-idf features achieved the highest 94.39% weighted $f_1$ score on the test set in Task-A. Label powerset SVM with n-gram features obtained the maximum coarse-grained and fine-grained $f_1$ scores of 86.03% and 50.98% on the Task-B test set, respectively.\n"} {"abstract": " Doping is considered to be the main method for improving the thermoelectric performance of layered sodium cobaltate (Na$_{1-x}$CoO$_2$). However, in the vast majority of past reports, the equilibrium location of the dopant in Na$_{1-x}$CoO$_2$'s complex layered lattice has not been confidently identified. Consequently, a universal strategy for choosing a suitable dopant for enhancing Na$_{1-x}$CoO$_2$'s figure of merit is yet to be established. Here, by examining the formation energy of Gd and Yb dopants in Na$_{0.75}$CoO$_2$ and Na$_{0.50}$CoO$_2$, we demonstrate that in an oxygen poor environment, Gd and Yb dopants reside in the Na layer, while in an oxygen rich environment these dopants replace a Co in the CoO$_2$ layer. When at the Na layer, Gd and Yb dopants reduce the carrier concentration via electron-hole recombination, simultaneously increasing the Seebeck coefficient ($S$) and reducing the electrical conductivity ($\sigma$). Na site doping, however, improves the thermoelectric power factor (PF) only in Na$_{0.50}$CoO$_2$. When replacing a Co, these dopants reduce $S$ and PF.
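The formation-energy bookkeeping behind such site-preference arguments has a standard form, $E_f = E_{\rm doped} - E_{\rm host} - \sum_i n_i \mu_i$, with the oxygen-rich/poor limits entering through the chemical potentials $\mu_i$; a sketch in which every numerical value is an invented placeholder:

```python
# Standard dopant formation-energy expression used in DFT site-preference
# studies; all numbers below are invented placeholders, not the paper's values.
def formation_energy(E_doped, E_host, added, mu):
    return E_doped - E_host - sum(n * mu[sp] for sp, n in added.items())

mu = {"Gd": -4.0, "Na": -1.3}   # assumed chemical potentials, eV
# Gd substituting on a Na site: one Gd added (+1), one Na removed (-1)
print(formation_energy(-512.7, -510.2, {"Gd": 1, "Na": -1}, mu))  # eV
```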
The results demonstrate how thermoelectric performance critically depends on the synthesis environment that must be fine-tuned for achieving any thermoelectric enhancement.\n"} {"abstract": " Equation learning aims to infer differential equation models from data. While a number of studies have shown that differential equation models can be successfully identified when the data are sufficiently detailed and corrupted with relatively small amounts of noise, the relationship between observation noise and uncertainty in the learned differential equation models remains unexplored. We demonstrate that for noisy data sets there exists great variation in both the structure of the learned differential equation models as well as the parameter values. We explore how to combine data sets to quantify uncertainty in the learned models, and at the same time draw mechanistic conclusions about the target differential equations. We generate noisy data using a stochastic agent-based model and combine equation learning methods with approximate Bayesian computation (ABC) to show that the correct differential equation model can be successfully learned from data, while a quantification of uncertainty is given by a posterior distribution in parameter space.\n"} {"abstract": " We perform binary neutron star merger simulations using a newly derived set of finite-temperature equations of state in the Brueckner-Hartree-Fock approach. We point out the important and opposite roles of finite temperature and rotation for stellar stability and systematically investigate the gravitational-wave properties, matter distribution, and ejecta properties in the postmerger phase for the different cases. The validity of several universal relations is also examined and the most suitable EOSs are identified.\n"} {"abstract": " Granger causality has been employed to investigate causality relations between components of stationary multiple time series. We generalize this concept by developing statistical inference for local Granger causality for multivariate locally stationary processes. Our proposed local Granger causality approach captures time-evolving causality relationships in nonstationary processes. The proposed local Granger causality is well represented in the frequency domain and estimated based on the parametric time-varying spectral density matrix using the local Whittle likelihood. Under regularity conditions, we demonstrate that the estimators converge to multivariate normal in distribution. Additionally, the test statistic for the local Granger causality is shown to be asymptotically distributed as a quadratic form of a multivariate normal distribution. The finite sample performance is confirmed with several simulation studies for multivariate time-varying autoregressive models. For practical demonstration, the proposed local Granger causality method uncovered new functional connectivity relationships between channels in brain signals. Moreover, the method was able to identify structural changes in financial data.\n"} {"abstract": " The Fermi surface properties of the nontrivial system YSi are investigated by de Haas-van Alphen (dHvA) oscillation measurements combined with first-principles calculations. Three main frequencies ($\alpha$, $\beta$, $\gamma$) are probed up to a $14$~T magnetic field in dHvA oscillations.
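Each dHvA frequency maps to an extremal Fermi-surface cross-section through the Onsager relation $A_F = (2\pi e/\hbar)F$; applied to the 21 T branch discussed next (the circular-orbit assumption is ours, for illustration):

```python
import numpy as np

e, hbar = 1.602176634e-19, 1.054571817e-34
F = 21.0                           # dHvA frequency, tesla (alpha branch)
A = 2 * np.pi * e * F / hbar       # extremal cross-section, m^-2
k_F = np.sqrt(A / np.pi)           # Fermi wave vector, assuming a circular orbit
print(f"A_F = {A:.2e} m^-2, k_F = {k_F * 1e-9:.3f} nm^-1")  # ~0.25 nm^-1
```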
The $\alpha$-branch corresponding to the $21$~T frequency possesses non-trivial topological character, with a $\pi$ Berry phase and a linear dispersion along the $\Gamma$ to $Z$ direction, with a small effective mass of $0.069~m_e$ and the second-lowest Landau level reached up to $14$~T. For the $B \parallel$ [010] direction, the 295~T frequency exhibits non-trivial $2D$ character with a $1.24\pi$ Berry phase and a high Fermi velocity of $6.7 \times 10^5$~ms$^{-1}$. The band structure calculations reveal multiple nodal crossings in the vicinity of the Fermi energy $E_f$ without spin-orbit coupling (SOC). Inclusion of SOC opens a small gap in the nodal crossings and results in nonsymmorphic symmetry enforced Dirac points at some high symmetry points, suggesting YSi to be a symmetry enforced topological metal.\n"} {"abstract": " Let $\alpha=(A_g,\alpha_g)_{g\in G}$ be a group-type partial action of a connected groupoid $G$ on a ring $A=\bigoplus_{z\in G_0}A_z$ and $B=A\star_{\alpha}G$ the corresponding partial skew groupoid ring. In the first part of this paper we investigate the relation of several ring theoretic properties between $A$ and $B$. For the second part, using that every Leavitt path algebra is isomorphic to a partial skew groupoid ring obtained from a partial groupoid action $\lambda$, we characterize when $\lambda$ is group-type. In such a case, we obtain ring theoretic properties of Leavitt path algebras from the results on general partial skew groupoid rings. Several examples that illustrate the results on Leavitt path algebras are presented.\n"} {"abstract": " Keyword spotting and in particular Wake-Up-Word (WUW) detection is a very important task for voice assistants. A very common issue of voice assistants is that they get easily activated by background noise like music, TV or background speech that accidentally triggers the device. In this paper, we propose a Speech Enhancement (SE) model adapted to the task of WUW detection that aims at increasing the recognition rate and reducing false alarms in the presence of these types of noises. The SE model is a fully-convolutional denoising auto-encoder at waveform level and is trained using a log-Mel Spectrogram and waveform reconstruction losses together with the BCE loss of a simple WUW classification network. A new database has been purposely prepared for the task of recognizing the WUW in challenging conditions, containing negative samples that are very phonetically similar to the keyword. The database is extended with public databases and an exhaustive data augmentation to simulate different noises and environments. The results obtained by concatenating the SE with simple and state-of-the-art WUW detectors show that the SE does not have a negative impact on the recognition rate in quiet environments while increasing the performance in the presence of noise, especially when the SE and WUW detector are trained jointly end-to-end.\n"} {"abstract": " Transfer reinforcement learning aims to improve the sample efficiency of solving unseen new tasks by leveraging experiences obtained from previous tasks. We consider the setting where all tasks (MDPs) share the same environment dynamic except for the reward function. In this setting, the MDP dynamic is good knowledge to transfer, and it can be inferred by a uniformly random policy. However, trajectories generated by a uniform random policy are not useful for policy improvement, which severely impairs sample efficiency.
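The "binary MDP dynamic" introduced in the next sentence can be pictured as simple reachability bookkeeping over observed transitions, gathered from any behavior policy; a toy sketch of the idea:

```python
from collections import defaultdict

# Record which (state, action) -> next_state transitions have ever been seen,
# from trajectories of an arbitrary policy (illustrative sketch only).
seen = defaultdict(set)

def update(trajectory):
    for s, a, s_next in trajectory:
        seen[(s, a)].add(s_next)

update([(0, "left", 0), (0, "right", 1), (1, "right", 2)])
reachable = lambda s, a, s2: s2 in seen[(s, a)]
print(reachable(0, "right", 1), reachable(0, "left", 2))  # True False
```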
Instead, we\nobserve that the binary MDP dynamic can be inferred from trajectories of any\npolicy, which avoids the need for a uniform random policy. As the binary MDP\ndynamic contains the state structure shared over all tasks, we believe it is\nsuitable for transfer. Built on this observation, we introduce a method to infer\nthe binary MDP dynamic on-line and at the same time utilize it to guide state\nembedding learning, which is then transferred to new tasks. We keep state\nembedding learning and policy learning separate. As a result, the learned\nstate embedding is task and policy agnostic, which makes it ideal for transfer\nlearning. In addition, to facilitate the exploration over the state space, we\npropose a novel intrinsic reward based on the inferred binary MDP dynamic. Our\nmethod can be used out of the box in combination with model-free RL algorithms. We\npresent two instances based on \algo{DQN} and \algo{A2C}. Empirical results\nof intensive experiments show the advantage of our proposed method in various\ntransfer learning tasks.\n"} {"abstract": " Traditionally, the efficiency and effectiveness of search systems have both\nbeen of great interest to the information retrieval community. However, an\nin-depth analysis of the interaction between the response latency and users'\nsubjective search experience in the mobile setting has been missing so far. To\naddress this gap, we conduct a controlled study that aims to reveal how\nresponse latency affects mobile web search. Our preliminary results indicate\nthat mobile web search users are four times more tolerant of response latency\nthan has been reported for desktop web search users. However, when exceeding a\ncertain threshold of 7-10 sec, the delays have a sizeable impact and users report\nfeeling significantly more tense, tired, terrible, frustrated and sluggish,\nall of which contribute to a worse subjective user experience.\n"} {"abstract": " Industrial plants suffer from a high degree of complexity and incompatibility\nin their communication infrastructure, caused by a wild mix of proprietary\ntechnologies. This prevents transformation towards Industry 4.0 and the\nIndustrial Internet of Things. Open Platform Communications Unified\nArchitecture (OPC UA) is a standardized protocol that addresses these problems\nwith uniform and semantic communication across all levels of the hierarchy.\nHowever, its adoption in embedded field devices, such as sensors and actuators, is\nstill lacking due to prohibitive memory and power requirements of software\nimplementations. We have developed a dedicated hardware engine that offloads\nprocessing of the OPC UA protocol and enables the realization of compact and\nlow-power field devices with OPC UA support. As part of a proof-of-concept\nembedded system we have implemented this engine in a 22 nm FDSOI technology. We\nmeasured performance, power consumption, and memory footprint of our test chip\nand compared it with a software implementation based on open62541 and a\nRaspberry Pi 2B. Our OPC UA hardware engine is 50 times more energy efficient\nand only requires 36 KiB of memory. The complete chip consumes only 24 mW under\nfull load, making it suitable for low-power embedded applications.\n"} {"abstract": " The Bogomolov multiplier $B_0(G)$ of a finite group $G$ is the subgroup of\nthe Schur multiplier $H^2(G,\mathbb Q/\mathbb Z)$ consisting of the cohomology\nclasses which vanish after restricting to any abelian subgroup of $G$. 
We\ngive a proof of a Hopf-type formula for $B_0(G)$ and derive an exact sequence for\nthe cohomological version of the Bogomolov multiplier. Using this exact sequence we\nprovide necessary and sufficient conditions for the corresponding inflation\nhomomorphism to be an epimorphism and to be the zero map. We provide some\nconditions for the triviality of $B_0(G)$ for central products of groups $G$ and\nshow that the Bogomolov multiplier of generalized discrete Heisenberg groups is\ntrivial. We also give a complete characterization of groups of order $p^6$, $p>3$,\nhaving trivial Bogomolov multiplier.\n"} {"abstract": " Assuming the Generalized Continuum Hypothesis, this paper answers the\nquestion: when is the tensor product of two ultrafilters equal to their\nCartesian product? It is necessary and sufficient that their Cartesian product\nis an ultrafilter; that the two ultrafilters commute in the tensor product;\nthat for all cardinals $\lambda$, one of the ultrafilters is both\n$\lambda$-indecomposable and $\lambda^+$-indecomposable; that the ultrapower\nembedding associated to each ultrafilter restricts to a definable embedding of\nthe ultrapower of the universe associated to the other.\n"} {"abstract": " Let $G$ be a finite permutation group on $\Omega$. An ordered sequence of\nelements of $\Omega$, $(\omega_1,\dots, \omega_t)$, is an irredundant base for\n$G$ if the pointwise stabilizer $G_{(\omega_1,\dots, \omega_t)}$ is trivial and\nno point is fixed by the stabilizer of its predecessors. If all irredundant\nbases of $G$ have the same size, we say that $G$ is an IBIS group. In this paper\nwe show that if a primitive permutation group is IBIS, then it must be almost\nsimple, of affine-type, or of diagonal type. Moreover, we prove that a\ndiagonal-type primitive permutation group is IBIS if and only if it is\nisomorphic to $PSL(2,2^f)\times PSL(2,2^f)$ for some $f\geq 2,$ in its diagonal\naction of degree $2^f(2^{2f}-1).$\n"} {"abstract": " Gliomas are among the most aggressive and deadly brain tumors. This paper\ndetails the proposed Deep Neural Network architecture for brain tumor\nsegmentation from Magnetic Resonance Images. The architecture consists of a\ncascade of three Deep Layer Aggregation neural networks, where each stage\nrefines the response using the feature maps and the probabilities of the\nprevious stage, together with the MRI channels, as inputs. The neuroimaging data are part\nof the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge\ndataset, where we evaluated our proposal on the BraTS 2020 Validation and Test\nsets. On the Test set, the experimental results achieved Dice scores of\n0.8858, 0.8297 and 0.7900, with Hausdorff distances of 5.32 mm, 22.32 mm and\n20.44 mm for the whole tumor, core tumor and enhanced tumor, respectively.\n"} {"abstract": " We study a model of a thermoelectric nanojunction driven by\nvibrationally-assisted tunneling. We apply the reaction coordinate formalism to\nderive a master equation governing its thermoelectric performance beyond the\nweak electron-vibrational coupling limit. Employing full counting statistics, we\ncalculate the current flow, thermopower, associated noise, and efficiency\nwithout resorting to the weak vibrational coupling approximation. We\ndemonstrate the intricacies of the power-efficiency-precision trade-off at strong\ncoupling, showing that the three cannot be maximised simultaneously in our\nmodel. 
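(The full-counting-statistics machinery invoked in this abstract can be illustrated on the simplest classical example: a single resonant level at large bias, with a counting field dressing the jump into the drain. Current and noise then follow from derivatives of the dominant eigenvalue of the dressed generator. A toy sketch -- this is not the paper's vibrationally coupled model:)

```python
import numpy as np

gL, gR = 1.0, 0.5                 # in/out tunnelling rates (arbitrary units)

def cgf(chi):
    """Dominant eigenvalue of the counting-field-dressed generator L(chi)."""
    L = np.array([[-gL, gR * np.exp(chi)],
                  [ gL, -gR            ]])
    return np.max(np.linalg.eigvals(L).real)

# Cumulants of the transported charge from finite-difference derivatives
# of the scaled cumulant generating function at chi = 0.
h = 1e-4
current = (cgf(h) - cgf(-h)) / (2 * h)
noise = (cgf(h) - 2 * cgf(0.0) + cgf(-h)) / h**2
print("current:", current, "(exact:", gL * gR / (gL + gR), ")")
print("Fano factor:", noise / current,
      "(exact:", (gL**2 + gR**2) / (gL + gR) ** 2, ")")
```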
Finally, we emphasise the importance of capturing non-additivity when\nconsidering strong coupling and multiple environments, demonstrating that an\nadditive treatment of the environments can violate the upper bound on\nthermoelectric efficiency imposed by Carnot.\n"} {"abstract": " The nuclear matrix element (NME) of the neutrinoless double-$\beta$\n($0\nu\beta\beta$) decay is an essential input for determining the neutrino\neffective mass, if the half-life of this decay is measured. The reliable\ncalculation of this NME has been a long-standing problem because of the\ndiversity of the predicted values of the NME depending on the calculation\nmethod. In this paper, we focus on the shell model and the QRPA. The shell\nmodel has a rich set of many-particle many-hole correlations, and the\nQRPA can achieve convergence of the calculated results with respect to\nthe extension of the single-particle space. It is difficult for the shell model\nto obtain the convergence of the $0\nu\beta\beta$ NME with respect to the\nvalence single-particle space. The many-body correlations of the QRPA are\ninsufficient for some nuclei. We propose a new method to phenomenologically\nmodify the results of the shell model and the QRPA, compensating for the\ndeficiencies of each method by complementarily using information from the\nother method. Extrapolations of the components of the $0\nu\beta\beta$ NME\nof the shell model are made toward a very large valence single-particle space.\nWe introduce a modification factor to the components of the $0\nu\beta\beta$\nNME of the QRPA. Our modification method gives similar values of the\n$0\nu\beta\beta$ NME of the two methods for $^{48}$Ca. The NME of the\ntwo-neutrino double-$\beta$ decay is also modified in a similar but simpler\nmanner, and the consistency of the two methods is improved.\n"} {"abstract": " Network softwarization has revolutionized the architecture of cellular\nwireless networks. State-of-the-art container-based virtual radio access\nnetworks (vRAN) provide enormous flexibility and reduced life cycle management\ncosts, but they also come with prohibitive energy consumption. We argue that\nfor future AI-native wireless networks to be flexible and energy efficient,\nthere is a need for a new abstraction in network softwarization that caters for\nneural-network-type workloads and allows a large degree of service\ncomposability. In this paper we present the NeuroRAN architecture, which\nleverages stateful functions as a user-facing execution model, and is\ncomplemented with virtualized resources and decentralized resource management.\nWe show that neural-network-based implementations of common transceiver\nfunctional blocks fit the proposed architecture, and we discuss key research\nchallenges related to compilation and code generation, resource management,\nreliability and security.\n"} {"abstract": " Semi-supervised learning through deep generative models and multi-lingual\npretraining techniques have achieved tremendous success across different\nareas of NLP. Nonetheless, their development has happened in isolation, while\nthe combination of both could potentially be effective for tackling\ntask-specific labelled-data shortages. To bridge this gap, we combine\nsemi-supervised deep generative models and multi-lingual pretraining to form a\npipeline for the document classification task. 
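(The pipeline itself couples a deep generative model with multilingual pretrained representations; as a much smaller stand-in for the same semi-supervised shape, here is generic self-training over TF-IDF features with scikit-learn. The tiny corpus and threshold are purely illustrative:)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

docs = ["cheap flights and hotel deals", "breaking election news today",
        "discount travel packages", "parliament votes on new bill",
        "island holiday offers", "senate passes budget law"]
y = [0, 1, 0, 1, -1, -1]          # -1 marks unlabelled documents

clf = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(), threshold=0.6),
)
clf.fit(docs, y)                  # unlabelled docs are pseudo-labelled iteratively
print(clf.predict(["weekend getaway discounts", "new law passed"]))
```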
Compared to strong supervised\nlearning baselines, our semi-supervised classification framework is highly\ncompetitive and outperforms the state-of-the-art counterparts in low-resource\nsettings across several languages.\n"} {"abstract": " We present Hubble Space Telescope Cosmic Origin Spectrograph (COS) UV line\nspectroscopy and integral-field unit (IFU) observations of the intra-group\nmedium in Stephan's Quintet (SQ). SQ hosts a 30 kpc long shocked ridge\ntriggered by a galaxy collision at a relative velocity of 1000 km/s, where\nlarge amounts of molecular gas coexist with a hot, X-ray emitting, plasma. COS\nspectroscopy at five positions sampling the diverse environments of the SQ\nintra-group medium reveals very broad (2000 km/s) Ly$\alpha$ line emission with\ncomplex line shapes. The Ly$\alpha$ line profiles are similar to or much\nbroader than those of H$\beta$, [CII]$\lambda157.7\mu$m and CO~(1-0) emission.\nThe extreme breadth of the Ly$\alpha$ emission, compared with H$\beta$, implies\nresonance scattering within the observed structure. Scattering indicates that\nthe neutral gas of the intra-group medium is clumpy, with a significant surface\ncovering factor. We observe significant variations in the Ly$\alpha$/H$\beta$\nflux ratio between positions and velocity components. From the mean line ratio\naveraged over positions and velocities, we estimate the effective escape\nfraction of Ly$\alpha$ photons to be 10-30%. Remarkably, over more than four\norders of magnitude in temperature, the powers radiated by X-rays, Ly$\alpha$,\nH$_2$, [CII] are comparable within a factor of a few, assuming that the ratio\nof the Ly$\alpha$ to H$_2$ fluxes over the whole shocked intra-group medium\nstays in line with that observed at those five positions. Both shocks and\nmixing layers could contribute to the energy dissipation associated with a\nturbulent energy cascade. Our results may be relevant for the cooling of gas at\nhigh redshifts, where the metal content is lower than in this local system, and\na high amplitude of turbulence is more common.\n"} {"abstract": " In this paper, we study the Nisnevich sheafification\n$\mathcal{H}^1_{\acute{e}t}(G)$ of the presheaf associating to a smooth scheme\nthe set of isomorphism classes of $G$-torsors, for a reductive group $G$. We\nshow that if $G$-torsors on affine lines are extended, then\n$\mathcal{H}^1_{\acute{e}t}(G)$ is homotopy invariant and show that the sheaf\nis unramified if and only if Nisnevich-local purity holds for $G$-torsors. We\nalso identify the sheaf $\mathcal{H}^1_{\acute{e}t}(G)$ with the sheaf of\n$\mathbb{A}^1$-connected components of the classifying space ${\rm\nB}_{\acute{e}t}G$. This establishes the homotopy invariance of the sheaves of\ncomponents as conjectured by Morel. 
It moreover provides a computation of the\nsheaf of $\mathbb{A}^1$-connected components in terms of unramified $G$-torsors\nover function fields whenever Nisnevich-local purity holds for $G$-torsors.\n"} {"abstract": " In this paper, we consider the structural change in a class of discrete-valued\ntime series, for which the true conditional distribution of the observations\nis assumed to be unknown.\n The conditional mean of the process depends on a parameter $\theta^*$ which\nmay change over time.\n We provide sufficient conditions for the consistency and the asymptotic\nnormality of the Poisson quasi-maximum likelihood estimator (QMLE) of the\nmodel.\n We consider an epidemic change-point detection and propose a test statistic\nbased on the QMLE of the parameter. Under the null hypothesis of a constant\nparameter (no change), the test statistic converges to a distribution obtained\nfrom a difference of two Brownian bridges. The test statistic diverges to\ninfinity under the epidemic alternative, which establishes that the proposed\nprocedure is consistent in power. The effectiveness of the proposed procedure\nis illustrated by simulated and real data examples.\n"} {"abstract": " We classify rank two vector bundles on a del Pezzo threefold $X$ of Picard\nrank one whose projectivizations are weak Fano. We also investigate the moduli\nspaces of such vector bundles when $X$ is of degree five, especially whether it\nis smooth, irreducible, or fine.\n"} {"abstract": " In this paper we study elimination of imaginaries in some classes of pure\nordered abelian groups. For the class of ordered abelian groups with bounded\nregular rank (equivalently with finite spines) we obtain weak elimination of\nimaginaries once we add sorts for the quotient groups $\Gamma/ \Delta$ for each\ndefinable convex subgroup $\Delta$, and sorts for the quotient groups $\Gamma/\n\Delta+ l\Gamma$ where $\Delta$ is a definable convex subgroup and $l \in\n\mathbb{N}_{\geq 2}$. We refer to these sorts as the \emph{quotient sorts}. For\nthe dp-minimal case we obtain a complete elimination of imaginaries, if we also\nadd constants to distinguish the cosets of $\Delta+n\Gamma$ in $\Gamma$, where\n$\Delta$ is a definable convex subgroup and $n \in \mathbb{N}_{\geq 2}$.\n"} {"abstract": " The main theme of the paper is the detailed discussion of the renormalization\nof the quantum field theory comprising two interacting scalar fields. The\npotential of the model is a fourth-order homogeneous polynomial in the fields,\nsymmetric with respect to the transformation\n$\phi_{i}\rightarrow{-\phi_{i}}$. We determine the Feynman rules for the model\nand then we present a detailed discussion of the renormalization of the theory\nat one loop. Next, we derive the one-loop renormalization group equations for\nthe running masses and coupling constants. At the level of two loops, we use\nthe FeynArts package of Mathematica to generate the two-loop Feynman diagrams\nand calculate the setting-sun diagram in detail.\n"} {"abstract": " This paper provides a theoretical and numerical comparison of classical first-order\nsplitting methods for solving smooth convex optimization problems and\ncocoercive equations. From a theoretical point of view, we compare convergence\nrates of gradient descent, forward-backward, Peaceman-Rachford, and\nDouglas-Rachford algorithms for minimizing the sum of two smooth convex\nfunctions when one of them is strongly convex. 
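(To make the object of comparison concrete: with $f(x)=\frac{1}{2}\|Ax-b\|^2$ smooth and $g(x)=\frac{\mu}{2}\|x\|^2$ strongly convex with a closed-form proximal map, plain gradient descent on $f+g$ and the forward-backward iteration can be run side by side. A self-contained sketch, not the paper's benchmark:)

```python
import numpy as np

rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((40, 20)), rng.standard_normal(40), 0.1
Lf = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f

F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * mu * np.sum(x ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda y, t: y / (1 + t * mu)      # prox of t * (mu/2)||x||^2

x_gd = np.zeros(20)
x_fb = np.zeros(20)
for _ in range(300):
    x_gd = x_gd - (grad_f(x_gd) + mu * x_gd) / (Lf + mu)   # gradient descent
    x_fb = prox_g(x_fb - grad_f(x_fb) / Lf, 1 / Lf)        # forward-backward
print("GD objective:", F(x_gd), " FB objective:", F(x_fb))
```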
A similar comparison is given in\nthe more general cocoercive setting in the presence of strong monotonicity,\nand we observe that the convergence rates in optimization are strictly better\nthan the corresponding rates for cocoercive equations for some algorithms. We\nobtain improved rates with respect to the literature in several instances by\nexploiting the structure of our problems. From a numerical point of view, we\nverify our theoretical results by implementing and comparing previous\nalgorithms in well-established signal and image inverse problems involving\nsparsity. We replace the widely used $\ell_1$ norm by the Huber loss and we\nobserve that fully proximal-based strategies have numerical and theoretical\nadvantages with respect to methods using gradient steps. In particular,\nPeaceman-Rachford is the best-performing algorithm in our examples.\n"} {"abstract": " In a binary classification problem where the goal is to fit an accurate\npredictor, the presence of corrupted labels in the training data set may create\nan additional challenge. However, in settings where likelihood maximization is\npoorly behaved (for example, if positive and negative labels are perfectly\nseparable), a small fraction of corrupted labels can improve performance by\nensuring robustness. In this work, we establish that in such settings,\ncorruption acts as a form of regularization, and we compute precise upper\nbounds on estimation error in the presence of corruptions. Our results suggest\nthat the presence of corrupted data points is beneficial only up to a small\nfraction of the total sample, scaling with the square root of the sample size.\n"} {"abstract": " We systematically investigate the complexity of counting subgraph patterns\nmodulo fixed integers. For example, it is known that the parity of the number\nof $k$-matchings can be determined in polynomial time by a simple reduction to\nthe determinant. We generalize this to an $n^{f(t,s)}$-time algorithm to\ncompute modulo $2^t$ the number of subgraph occurrences of patterns that are\n$s$ vertices away from being matchings. This shows that the known\npolynomial-time cases of subgraph detection (Jansen and Marx, SODA 2015) carry\nover into the setting of counting modulo $2^t$.\n Complementing our algorithm, we also give a simple and self-contained proof\nthat counting $k$-matchings modulo odd integers $q$ is Mod_q-W[1]-complete and\nprove that counting $k$-paths modulo $2$ is Parity-W[1]-complete, answering an\nopen question by Bj\"orklund, Dell, and Husfeldt (ICALP 2015).\n"} {"abstract": " We study the ground state for many interacting bosons in a double-well\npotential, in a joint limit where the particle number and the distance between\nthe potential wells both go to infinity. Two single-particle orbitals (one for\neach well) are macroscopically occupied, and we are concerned with deriving the\ncorresponding effective Bose-Hubbard Hamiltonian. We prove (i) an energy\nexpansion, including the two-modes Bose-Hubbard energy and two independent\nBogoliubov corrections (one for each potential well), (ii) a variance bound for\nthe number of particles falling inside each potential well. The latter is a\nsignature of a correlated ground state in that it violates the central limit\ntheorem.\n"} {"abstract": " In this paper, we propose a method for ensembling the outputs of multiple\nobject detectors to improve detection performance and the precision of bounding\nboxes on image data. 
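(A common core of detector ensembling is to cluster boxes from the different detectors by IoU and replace each cluster with a confidence-weighted average, which consolidates detections and tightens box coordinates. A minimal sketch of that idea -- the format and threshold are illustrative, not the paper's exact scheme:)

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(detections, thr=0.5):
    """Cluster (box, score) pairs from all detectors and average each cluster."""
    dets = sorted(detections, key=lambda d: -d[1])
    fused = []
    while dets:
        box, _ = dets[0]
        group = [d for d in dets if iou(box, d[0]) >= thr]
        dets = [d for d in dets if iou(box, d[0]) < thr]
        w = np.array([s for _, s in group])
        boxes = np.array([b for b, _ in group], dtype=float)
        fused.append((boxes.T @ w / w.sum(), w.mean()))   # weighted box + score
    return fused

dets = [((10, 10, 50, 50), 0.9), ((12, 8, 52, 49), 0.7),
        ((200, 200, 240, 250), 0.6)]
print(fuse(dets))
```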
We further extend it to video data by proposing a\ntwo-stage tracking-based scheme for detection refinement. The proposed method\ncan be used as a standalone approach for improving object detection\nperformance, or as a part of a framework for faster bounding box annotation in\nunseen datasets, assuming that the objects of interest are those present in\nsome common public datasets.\n"} {"abstract": " A major issue with the increasingly popular robust optimization is the tendency\nto produce overly conservative solutions. This paper addresses this issue by\nproposing a new parameterized robust criterion that is flexible enough to offer\nfine-tuned control of conservatism. The proposed criterion also leads to a new\napproach for competitive ratio analysis, which can reduce the complexity of\nanalysis to the level of the minimax regret analysis. The properties of this\nnew criterion are studied, facilitating its application and validating\nthe new approach for competitive ratio analysis. Finally, the criterion is\napplied to the well-studied robust one-way trading problem to demonstrate its\npotential in controlling conservatism and reducing the complexity of\ncompetitive ratio analysis.\n"} {"abstract": " The classic string indexing problem is to preprocess a string S into a\ncompact data structure that supports efficient pattern matching queries.\nTypical queries include existential queries (decide if the pattern occurs in\nS), reporting queries (return all positions where the pattern occurs), and\ncounting queries (return the number of occurrences of the pattern). In this\npaper we consider a variant of string indexing, where the goal is to compactly\nrepresent the string such that given two patterns $P_1$ and $P_2$ and a gap range\n$[\alpha,\beta]$ we can quickly find the consecutive occurrences of $P_1$ and $P_2$\nwith distance in $[\alpha,\beta]$, i.e., pairs of occurrences immediately\nfollowing each other and with distance within the range. We present data\nstructures that use $\tilde{O}(n)$ space and query time $\tilde{O}(|P_1|+|P_2|+n^{2/3})$ for\nexistence and counting and $\tilde{O}(|P_1|+|P_2|+n^{2/3}\cdot\mathrm{occ}^{1/3})$ for reporting. We\ncomplement this with a conditional lower bound based on the set intersection\nproblem showing that any solution using $\tilde{O}(n)$ space must use\n$\tilde{\Omega}(|P_1|+|P_2|+\sqrt{n})$ query time. To obtain our results we\ndevelop new techniques and ideas of independent interest including a new suffix\ntree decomposition and hardness of a variant of the set intersection problem.\n"} {"abstract": " We study the asymptotic behavior of solutions of the first-order linear consensus\nmodel with delay and anticipation, which is a system of neutral delay\ndifferential equations. We consider both transmission-type and\nreaction-type delays, which are motivated by modeling inputs. Studying the\nsimplified case of two agents, we show that, depending on the parameter regime,\nanticipation may have both a stabilizing and a destabilizing effect on the\nsolutions. In particular, we demonstrate numerically that a moderate level of\nanticipation generically promotes convergence towards consensus, while too high\na level disturbs it. Motivated by this observation, we derive sufficient\nconditions for asymptotic consensus in multiple-agent systems, which are\nexplicit in the delay length and the anticipation level, and\nindependent of the number of agents. 
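(For intuition on the stabilizing/destabilizing interplay, the two-agent case can be discretized with an explicit Euler scheme in which each agent reacts to the other's delayed state extrapolated by its delayed velocity. The specific model form below is an assumption made for illustration; the neutral system studied in this abstract is more general:)

```python
import numpy as np

tau, alpha, dt, T = 0.5, 0.3, 0.001, 20.0   # delay, anticipation, step, horizon
n, d = int(T / dt), int(tau / dt)
x = np.zeros((n, 2))
v = np.zeros((n, 2))
x[:d + 1] = [1.0, -1.0]                     # constant history on [-tau, 0]

for k in range(d, n - 1):
    # assumed model: x_i' = x_j(t - tau) + alpha * x_j'(t - tau) - x_i(t)
    anticipated = x[k - d][::-1] + alpha * v[k - d][::-1]
    v[k] = anticipated - x[k]
    x[k + 1] = x[k] + dt * v[k]

print("final disagreement |x1 - x2|:", abs(x[-2, 0] - x[-2, 1]))
```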
The proofs are based on the construction of\nsuitable Lyapunov-type functionals.\n"} {"abstract": " Tree shape statistics provide valuable quantitative insights into\nevolutionary mechanisms underpinning phylogenetic trees, a commonly used graph\nrepresentation of evolution systems ranging from viruses to species. By\ndeveloping limit theorems for a version of extended P\'olya urn models in which\nnegative entries are permitted for their replacement matrices, we present\nstrong laws of large numbers and central limit theorems for asymptotic joint\ndistributions of two subtree counting statistics, the number of cherries and\nthat of pitchforks, for random phylogenetic trees generated by two widely used\nnull tree models: the proportional to distinguishable arrangements (PDA) and\nthe Yule-Harding-Kingman (YHK) models. Our results indicate that the limiting\nbehaviour of these two statistics, when appropriately scaled, is independent\nof the initial trees used in the tree generating process.\n"} {"abstract": " Detecting novel objects from few examples has become an emerging topic in\ncomputer vision recently. However, these methods need fully annotated training\nimages to learn new object categories, which limits their applicability in real-world\nscenarios such as field robotics. In this work, we propose a\nprobabilistic multiple instance learning approach for few-shot Common Object\nLocalization (COL) and few-shot Weakly Supervised Object Detection (WSOD). In\nthese tasks, only image-level labels, which are much cheaper to acquire, are\navailable. We find that operating on features extracted from the last layer of\na pre-trained Faster-RCNN is more effective than previous episodic-learning-based\nfew-shot COL methods. Our model simultaneously learns the\ndistribution of the novel objects and localizes them via\nexpectation-maximization steps. As a probabilistic model, we employ the von\nMises-Fisher (vMF) distribution, which captures the semantic information better\nthan a Gaussian distribution when applied to the pre-trained embedding space.\nWhen the novel objects are localized, we utilize them to learn a linear\nappearance model to detect novel classes in new images. Our extensive\nexperiments show that the proposed method, despite being simple, outperforms\nstrong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.\n"} {"abstract": " Recommender systems usually face popularity bias issues: from the data\nperspective, items exhibit an uneven (long-tail) distribution of interaction\nfrequency; from the method perspective, collaborative filtering methods are\nprone to amplify the bias by over-recommending popular items. It is undoubtedly\ncritical to consider popularity bias in recommender systems, and existing work\nmainly eliminates the bias effect. However, we argue that not all biases in the\ndata are bad -- some items demonstrate higher popularity because of their\nbetter intrinsic quality. Blindly pursuing unbiased learning may remove the\nbeneficial patterns in the data, degrading the recommendation accuracy and user\nsatisfaction.\n This work studies an unexplored problem in recommendation -- how to leverage\npopularity bias to improve the recommendation accuracy. The key lies in two\naspects: how to remove the bad impact of popularity bias during training, and\nhow to inject the desired popularity bias in the inference stage that generates\ntop-K recommendations. This raises questions about the causal mechanism of the\nrecommendation generation process. 
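(One simple instantiation of "injecting a desired popularity bias at inference" is to rescale each user-item matching score by item popularity raised to a tunable exponent, so the injected bias can be dialled up or down. The paper's actual adjustment is derived from a causal intervention, so the sketch below shows only the generic shape of the idea:)

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((3, 6))          # matching scores: 3 users x 6 items
pop = np.array([50.0, 5.0, 200.0, 20.0, 1.0, 80.0])  # item interaction counts

def topk(s, k=3):
    return np.argsort(-s, axis=1)[:, :k]

gamma = 0.2                          # strength of the injected popularity bias
adjusted = scores * (pop / pop.sum()) ** gamma
print("plain top-3   :", topk(scores))
print("adjusted top-3:", topk(adjusted))
```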
Along this line, we find that item\npopularity plays the role of a confounder between the exposed items and the\nobserved interactions, causing the bad effect of bias amplification. To achieve\nour goal, we propose a new training and inference paradigm for recommendation\nnamed Popularity-bias Deconfounding and Adjusting (PDA). It removes the\nconfounding popularity bias in model training and adjusts the recommendation\nscore with the desired popularity bias via causal intervention. We demonstrate the\nnew paradigm on a latent factor model and perform extensive experiments on three\nreal-world datasets. Empirical studies validate that the deconfounded training\nis helpful to discover users' real interests, and the inference adjustment with\npopularity bias could further improve the recommendation accuracy.\n"} {"abstract": " Let $\mathfrak{F}_n$ be the set of all cuspidal automorphic representations\n$\pi$ of $\mathrm{GL}_n$ with unitary central character over a number field\n$F$. We prove the first unconditional zero density estimate for the set\n$\mathcal{S}=\{L(s,\pi\times\pi')\colon\pi\in\mathfrak{F}_n\}$ of\nRankin-Selberg $L$-functions, where $\pi'\in\mathfrak{F}_{n'}$ is fixed. We use\nthis density estimate to prove (i) a strong average form of effective\nmultiplicity one for $\mathrm{GL}_n$; (ii) that given $\pi\in\mathfrak{F}_n$\ndefined over $\mathbb{Q}$, the convolution $\pi\times\tilde{\pi}$ has a\npositive level of distribution in the sense of Bombieri-Vinogradov; (iii) that\nalmost all $L(s,\pi\times\pi')\in \mathcal{S}$ have a hybrid-aspect\nsubconvexity bound on $\mathrm{Re}(s)=\frac{1}{2}$; (iv) a hybrid-aspect\npower-saving upper bound for the variance in the discrepancy of the measures\n$|\varphi(x+iy)|^2 y^{-2}dxdy$ associated to $\mathrm{GL}_2$ Hecke-Maass\nnewforms $\varphi$ with trivial nebentypus, extending work of Luo and Sarnak\nfor level 1 cusp forms; and (v) a nonsplit analogue of quantum ergodicity:\nalmost all restrictions of Hilbert Hecke-Maass newforms to the modular surface\ndissipate as their Laplace eigenvalues grow.\n"} {"abstract": " In this paper, we introduce the task of multi-view RGB-based 3D object\ndetection as an end-to-end optimization problem. To address this problem, we\npropose ImVoxelNet, a novel fully convolutional method of 3D object detection\nbased on monocular or multi-view RGB images. The number of monocular images in\neach multi-view input can vary during training and inference; in fact, this\nnumber might be different for each multi-view input. ImVoxelNet successfully\nhandles both indoor and outdoor scenes, which makes it general-purpose.\nSpecifically, it achieves state-of-the-art results in car detection on KITTI\n(monocular) and nuScenes (multi-view) benchmarks among all methods that accept\nRGB images. Moreover, it surpasses existing RGB-based 3D object detection\nmethods on the SUN RGB-D dataset. On ScanNet, ImVoxelNet sets a new benchmark\nfor multi-view 3D object detection. The source code and the trained models are\navailable at https://github.com/saic-vul/imvoxelnet.\n"} {"abstract": " LAMOST Data Release 5, covering $\sim$17,000 $deg^2$ from $-10^{\circ}$ to\n$80^{\circ}$ in declination, contains 9 million co-added low-resolution\nspectra of celestial objects, each spectrum combined from repeated exposures,\nfrom two to tens of times, taken between Oct 2011 and Jun 2017. 
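(The per-exposure radial velocities described next are obtained by $\chi^2$ matching against templates; the essence of such a fit is easy to sketch: Doppler-shift a template over a velocity grid and take the minimizer. A toy version with one Gaussian absorption line, all numbers synthetic:)

```python
import numpy as np

c = 299792.458                                    # speed of light, km/s
wl = np.linspace(6540.0, 6580.0, 2000)            # wavelength grid, Angstrom
template = 1 - 0.6 * np.exp(-0.5 * ((wl - 6563.0) / 0.8) ** 2)

def shifted(v):
    """Template Doppler-shifted by radial velocity v (km/s)."""
    return np.interp(wl, wl * (1 + v / c), template)

rng = np.random.default_rng(2)
obs = shifted(37.0) + 0.01 * rng.standard_normal(wl.size)   # true RV = 37 km/s

v_grid = np.arange(-200.0, 200.0, 0.5)
chi2 = [np.sum((obs - shifted(v)) ** 2) for v in v_grid]
print("best-fit RV:", v_grid[int(np.argmin(chi2))], "km/s")
```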
In this paper, we present the\nspectra of individual exposures for all the objects in LAMOST Data Release 5.\nFor each spectrum, the equivalent widths of 60 lines from 11 different elements are\ncalculated with a new method combining the actual line core and fitted line\nwings. For stars earlier than F type, the Balmer lines are fitted with both\nemission and absorption profiles once two components are detected. The radial\nvelocity of each individual exposure is measured by minimizing ${\chi}^2$\nbetween the spectrum and its best template. Databases of the equivalent widths of\nspectral lines and the radial velocities of individual spectra are available\nonline. Radial velocity uncertainties for different stellar types and\nsignal-to-noise ratios are quantified by comparing different exposures of the\nsame objects. We notice that the radial velocity uncertainty depends on the\ntime lag between observations. For stars observed on the same day and with a\nsignal-to-noise ratio higher than 20, the radial velocity uncertainty is below\n5 km/s, increasing to 10 km/s for stars observed on different nights.\n"} {"abstract": " The rapid progress in clinical data management systems and artificial\nintelligence approaches enables the era of personalized medicine. Intensive care\nunits (ICUs) are the ideal clinical research environment for such development\nbecause they collect a large amount of clinical data and are highly computerized\nenvironments. We designed a retrospective clinical study on a prospective ICU\ndatabase using clinical natural language to help in the early diagnosis of\nheart failure in critically ill children. The methodology consisted of\nempirical experiments with a learning algorithm to learn the hidden\ninterpretation and representation of the French clinical note data. This study\nincluded 1386 patients' clinical notes with 5444 single lines of notes. There\nwere 1941 positive cases (36% of the total) and 3503 negative cases classified by\ntwo independent physicians using a standardized approach. The multilayer\nperceptron neural network outperforms other discriminative and generative\nclassifiers. Consequently, the proposed framework yields an overall\nclassification performance with 89% accuracy, 88% recall, and 89% precision.\nThis study successfully applied learning representation and machine learning\nalgorithms to detect heart failure from clinical natural language in a single\nFrench institution. Further work is needed to use the same methodology in other\ninstitutions and other languages.\n"} {"abstract": " We present a comprehensive overview of chirality and its optical\nmanifestation in plasmonic nanosystems and nanostructures. We discuss top-down\nfabricated structures that range from solid metallic nanostructures to\ngroupings of metallic nanoparticles arranged in three dimensions. We also\npresent the large variety of bottom-up synthesized structures. Using DNA,\npeptides, or other scaffolds, complex nanoparticle arrangements of up to\nhundreds of individual nanoparticles have been realized. Beyond this static\npicture, we also give an overview of recent demonstrations of active chiral\nplasmonic systems, where the chiral optical response can be controlled by an\nexternal stimulus. We discuss the prospect of using the unique properties of\ncomplex chiral plasmonic systems for enantiomeric sensing schemes.\n"} {"abstract": " Programming language detection is a common need in the analysis of large\nsource code bases. 
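(The recipe this abstract goes on to describe -- language-agnostic word tokens, 1-/2-gram frequency vectors, and a small fully connected classifier -- can be sketched end to end with scikit-learn. The toy snippets and labels below are stand-ins for the GitHub training set:)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

snippets = ["def add(a, b): return a + b",
            "for x in range(10): print(x)",
            "int main() { return 0; }",
            "printf(\"%d\", x);",
            "SELECT name FROM users;",
            "INSERT INTO t VALUES (1);"]
labels = ["python", "python", "c", "c", "sql", "sql"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # 1-/2-grams
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
clf.fit(snippets, labels)
print(clf.predict(["while True: print('hi')", "SELECT * FROM logs;"]))
```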
It is supported by a number of existing tools that rely on\nseveral features, most notably file extensions, to determine file types. We\nconsider the problem of accurately detecting the type of files commonly found\nin software code bases, based solely on textual file content. Doing so is\nhelpful to classify source code that lacks file extensions (e.g., code snippets\nposted on the Web or executable scripts), to avoid misclassifying source code\nthat has been recorded with wrong or uncommon file extensions, and also to shed\nsome light on the intrinsic recognizability of source code files. We propose a\nsimple model that (a) uses a language-agnostic word tokenizer for textual files,\n(b) groups tokens into 1-/2-grams, (c) builds feature vectors based on N-gram\nfrequencies, and (d) uses a simple fully connected neural network as the classifier.\nAs the training set we use textual files extracted from GitHub repositories with at\nleast 1000 stars, using existing file extensions as ground truth. Despite its\nsimplicity, the proposed model reaches 85% accuracy in our experiments, for a relatively\nhigh number of recognized classes (more than 130 file types).\n"} {"abstract": " An essential feature of the subdiffusion equations with the $\alpha$-order\ntime fractional derivative is the weak singularity at the initial time. The\nweak regularity of the solution is usually characterized by a regularity\nparameter $\sigma\in (0,1)\cup(1,2)$. Under this general regularity assumption,\nwe here obtain the pointwise-in-time error estimate of the widely used L1\nscheme for nonlinear subdiffusion equations. To this end, we present a refined\ndiscrete fractional-type Gr\"onwall inequality and a rigorous analysis of the\ntruncation errors. Numerical experiments are provided to demonstrate the\neffectiveness of our theoretical analysis.\n"} {"abstract": " In this paper, we present a new method for deformation control of\ndeformable objects, which utilizes both visual and tactile feedback. At\npresent, manipulation of deformable objects is mostly formulated by assuming\npositional constraints, but in many situations manipulation has to be\nperformed under actively applied force constraints. This scenario is considered\nin this research. In the proposed scheme, tactile feedback is integrated to\nensure a stable contact between the robot end-effector and the soft object to\nbe manipulated. The controlled contact force is also utilized to regulate the\ndeformation of the soft object, with its shape measured by a vision sensor. The\neffectiveness of the proposed method is demonstrated by a book page turning and\nshaping experiment.\n"} {"abstract": " With the introduction of Artificial Intelligence (AI) and related\ntechnologies in our daily lives, fear and anxiety about their misuse as well as\nthe hidden biases in their creation have led to a demand for regulation to\naddress such issues. Yet blindly regulating an innovation process that is not\nwell understood may stifle this process and reduce the benefits that society may\ngain from the generated technology, even under the best intentions. In this\npaper, starting from a baseline model that captures the fundamental dynamics of\na race for domain supremacy using AI technology, we demonstrate how socially\nunwanted outcomes may be produced when sanctioning is applied unconditionally\nto risk-taking, i.e. potentially unsafe, behaviours. 
To counter the\ndetrimental effect of over-regulation, we propose an alternative, voluntary commitment\napproach wherein technologists can freely choose between\nindependently pursuing their course of action or establishing binding\nagreements to act safely, with sanctioning of those that do not abide by what\nthey pledged. Overall, this work reveals for the first time how voluntary\ncommitments, with sanctions either by peers or an institution, lead to\nsocially beneficial outcomes in all scenarios envisageable in a short-term race\ntowards domain supremacy through AI technology. These results are directly\nrelevant for the design of governance and regulatory policies that aim to\nensure an ethical and responsible AI technology development process.\n"} {"abstract": " Chinese pre-trained language models usually process text as a sequence of\ncharacters, while ignoring coarser granularities, e.g., words. In this work,\nwe propose a novel pre-training paradigm for Chinese -- Lattice-BERT, which\nexplicitly incorporates word representations along with characters, and thus can\nmodel a sentence in a multi-granularity manner. Specifically, we construct a\nlattice graph from the characters and words in a sentence and feed all these\ntext units into transformers. We design a lattice position attention mechanism\nto exploit the lattice structures in self-attention layers. We further propose\na masked segment prediction task to push the model to learn from rich but\nredundant information inherent in lattices, while avoiding learning unexpected\ntricks. Experiments on 11 Chinese natural language understanding tasks show\nthat our model can bring an average increase of 1.5% under the 12-layer\nsetting, achieving a new state of the art among base-size models on the CLUE\nbenchmarks. Further analysis shows that Lattice-BERT can harness the lattice\nstructures, and the improvement comes from the exploration of redundant\ninformation and multi-granularity representations. Our code will be available\nat https://github.com/alibaba/pretrained-language-models/LatticeBERT.\n"} {"abstract": " We address the problem of novel view synthesis (NVS) from a few sparse source\nview images. Conventional image-based rendering methods estimate scene geometry\nand synthesize novel views in two separate steps. However, erroneous geometry\nestimation will decrease NVS performance, as view synthesis highly depends on\nthe quality of the estimated scene geometry. In this paper, we propose an\nend-to-end NVS framework to eliminate the error propagation issue. To be\nspecific, we construct a volume under the target view and design a source-view\nvisibility estimation (SVE) module to determine the visibility of the\ntarget-view voxels in each source view. Next, we aggregate the visibility of\nall source views to achieve a consensus volume. Each voxel in the consensus\nvolume indicates a surface existence probability. Then, we present a soft\nray-casting (SRC) mechanism to find the front-most surface in the target view\n(i.e., depth). Specifically, our SRC traverses the consensus volume along\nviewing rays and then estimates a depth probability distribution. We then warp\nand aggregate source view pixels to synthesize a novel view based on the\nestimated source-view visibility and target-view depth. Finally, our network is\ntrained in an end-to-end self-supervised fashion, thus significantly\nalleviating error accumulation in view synthesis. 
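(The soft ray-casting step admits a compact numerical reading: walking front to back along a ray through the consensus volume, the probability that voxel $i$ hosts the first surface is its own surface probability times the probability that no earlier voxel did, which yields a depth distribution and an expected depth. A sketch with made-up per-voxel probabilities:)

```python
import numpy as np

# Surface-existence probabilities along one viewing ray (front to back).
p = np.array([0.05, 0.10, 0.20, 0.70, 0.60, 0.30])
depth = np.linspace(0.5, 3.0, p.size)             # voxel depths along the ray

# Probability that voxel i is the *first* surface the ray hits.
no_hit_before = np.concatenate(([1.0], np.cumprod(1 - p)[:-1]))
w = p * no_hit_before

print("depth distribution:", np.round(w / w.sum(), 3))
print("expected depth:", np.sum(w * depth) / np.sum(w))
```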
Experimental results\ndemonstrate that our method generates novel views of higher quality than\nthe state-of-the-art.\n"} {"abstract": " In this paper, we present a first-order projection-free method, namely, the\nuniversal conditional gradient sliding (UCGS) method, for computing\n$\varepsilon$-approximate solutions to convex differentiable optimization\nproblems. For objective functions with H\"older continuous gradients, we show\nthat UCGS is able to terminate with $\varepsilon$-solutions with at most\n$O((M_\nu D_X^{1+\nu}/{\varepsilon})^{2/(1+3\nu)})$ gradient evaluations and\n$O((M_\nu D_X^{1+\nu}/{\varepsilon})^{4/(1+3\nu)})$ linear objective\noptimizations, where $\nu\in (0,1]$ and $M_\nu>0$ are the exponent and constant\nof the H\"older condition. Furthermore, UCGS is able to perform such\ncomputations without requiring any specific knowledge of the smoothness\ninformation $\nu$ and $M_\nu$. In the weakly smooth case when $\nu\in (0,1)$,\nboth complexity results improve the current state-of-the-art $O((M_\nu\nD_X^{1+\nu}/{\varepsilon})^{1/\nu})$ results on first-order projection-free\nmethods achieved by the conditional gradient method. Within the class of\nsliding-type algorithms, to the best of our knowledge, this is the first time a\nsliding-type algorithm is able to improve not only the gradient complexity but\nalso the overall complexity for computing an approximate solution. In the\nsmooth case when $\nu=1$, UCGS matches the state-of-the-art complexity result\nbut adds more features allowing for practical implementation.\n"} {"abstract": " Let $M\stackrel{\rho_0}{\curvearrowleft}S$ be a $C^\infty$ locally free\naction of a connected simply connected solvable Lie group $S$ on a closed\nmanifold $M$. Roughly speaking, $\rho_0$ is parameter rigid if any $C^\infty$\nlocally free action of $S$ on $M$ having the same orbits as $\rho_0$ is\n$C^\infty$ conjugate to $\rho_0$. In this paper we prove two types of result on\nparameter rigidity.\n First, let $G$ be a connected semisimple Lie group with finite center of real\nrank at least $2$ without compact factors nor simple factors locally isomorphic\nto $\mathrm{SO}_0(n,1)$ $(n\geq2)$ or $\mathrm{SU}(n,1)$ $(n\geq2)$, and let\n$\Gamma$ be an irreducible cocompact lattice in $G$. Let $G=KAN$ be an Iwasawa\ndecomposition. We prove that the action $\Gamma\backslash G\curvearrowleft AN$\nby right multiplication is parameter rigid. One of the three main ingredients\nof the proof consists of the rigidity theorems of Pansu and Kleiner-Leeb on the\nquasiisometries of Riemannian symmetric spaces of noncompact type.\n Secondly, we show that if $M\stackrel{\rho_0}{\curvearrowleft}S$ is parameter\nrigid, then the zeroth and first cohomology of the orbit foliation of $\rho_0$\nwith certain coefficients must vanish. This is a partial converse to the\nresults in the author's [Vanishing of cohomology and parameter rigidity of\nactions of solvable Lie groups. Geom. Topol. 21(1) (2017), 157-191], where we\nsaw sufficient conditions for parameter rigidity in terms of vanishing of the\nfirst cohomology with various coefficients.\n"} {"abstract": " This letter studies an unmanned aerial vehicle (UAV) aided multicasting (MC)\nsystem, which is enabled by simultaneous free space optics (FSO) backhaul and\npower transfer. 
The UAV applies the power-splitting technique to harvest\nwireless power and decode backhaul information simultaneously over the FSO\nlink, while at the same time using the harvested power to multicast the\nbackhauled information over the radio frequency (RF) links to multiple ground\nusers (GUs). We derive the UAV's achievable MC rate under the Poisson point\nprocess (PPP) based GU distribution. By jointly designing the FSO and RF links\nand the UAV altitude, we maximize the system-level energy efficiency (EE),\nwhich can be equivalently expressed as the ratio of the UAV's MC rate to the\noptical base station (OBS) transmit power, subject to the UAV's sustainable\noperation and reliable backhauling constraints. Due to the non-convexity of\nthis problem, we propose suboptimal solutions with low complexity. Numerical\nresults show the close-to-optimal EE performance obtained by properly balancing the\npower-rate tradeoff between the FSO power and the MC data transmissions.\n"} {"abstract": " The first mobile camera phone was sold only 20 years ago, when taking\npictures with one's phone was an oddity, and sharing pictures online was\nunheard of. Today, the smartphone is more camera than phone. How did this\nhappen? This transformation was enabled by advances in computational\nphotography - the science and engineering of making great images from small-form-factor\nmobile cameras. Modern algorithmic and computing advances, including\nmachine learning, have changed the rules of photography, bringing to it new\nmodes of capture, post-processing, storage, and sharing. In this paper, we give\na brief history of mobile computational photography and describe some of the\nkey technological components, including burst photography, noise reduction, and\nsuper-resolution. At each step, we may draw naive parallels to the human visual\nsystem.\n"} {"abstract": " We study dynamic clustering problems from the perspective of online learning.\nWe consider an online learning problem, called \textit{Dynamic $k$-Clustering},\nin which $k$ centers are maintained in a metric space over time (centers may\nchange positions) such that a dynamically changing set of $r$ clients is served\nin the best possible way. The connection cost at round $t$ is given by the\n\textit{$p$-norm} of the vector consisting of the distance of each client to\nits closest center at round $t$, for some $p\geq 1$ or $p = \infty$. We present\na \textit{$\Theta\left( \min(k,r) \right)$-regret} polynomial-time online\nlearning algorithm and show that, under some well-established computational\ncomplexity conjectures, \textit{constant-regret} cannot be achieved in\npolynomial time. In addition to the efficient solution of Dynamic\n$k$-Clustering, our work contributes to the long line of research on\ncombinatorial online learning.\n"} {"abstract": " Photonic quantum networking relies on entanglement distribution between\ndistant nodes, typically realized by swapping procedures. However, entanglement\nswapping is a demanding task in practice, mainly because of the limited\neffectiveness of entangled photon sources and Bell-state measurements necessary\nto realize the process. Here we experimentally activate a remote distribution\nof two-photon polarization entanglement which supersedes the need for initial\nentangled pairs and traditional Bell-state measurements. 
This alternative\nprocedure is accomplished thanks to the controlled spatial indistinguishability\nof four independent photons in three separated nodes of the network, which\nenables us to perform localized product-state measurements on the central node\nacting as a trigger. This experiment proves that the inherent\nindistinguishability of identical particles supplies new standards for feasible\nquantum communication in multinode photonic quantum networks.\n"} {"abstract": " Adapting the idea of training CartPole with a Deep Q-learning agent, we are\nable to find a promising result that prevents the pole from falling down. The\ncapacity of reinforcement learning (RL) to learn from the interaction between\nthe environment and the agent provides an optimal control strategy. In this paper,\nwe aim to solve the classic pendulum swing-up problem, i.e., making the\npendulum swing up to the upright position and stay balanced. The Deep Deterministic Policy\nGradient algorithm is introduced to operate over the continuous action domain in\nthis problem. Salient results for the optimal pendulum are demonstrated by an increasing\naverage return, a decreasing loss, and a live video in the code part.\n"} {"abstract": " We report the detection of [O I]145.5um in the BR 1202-0725 system, a compact\ngroup at z=4.7 consisting of a quasar (QSO), a submillimeter-bright galaxy\n(SMG), and three faint Lya emitters. By taking into account the previous\ndetections and upper limits, the [O I]/[C II] line ratios of the now five known\nhigh-z galaxies are higher than or on the high end of the observed values in\nlocal galaxies ([O I]/[C II]$\gtrsim$0.13). The high [O I]/[C II] ratios and\nthe joint analysis with the previous detection of [N II] lines for both the QSO\nand the SMG suggest the presence of warm and dense neutral gas in these highly\nstar-forming galaxies. This is further supported by new CO (12-11) line\ndetections and a comparison with cosmological simulations. There is a possible\npositive correlation between the [NII]122/205 line ratio and the [O I]/[C II]\nratio when all local and high-z sources are taken into account, indicating that\nthe denser the ionized gas, the denser and warmer the neutral gas (or vice\nversa). The detection of the [O I] line in the BR1202-0725 system with a\nrelatively short integration time with ALMA demonstrates the great\npotential of this line as a dense gas tracer for high-z galaxies.\n"} {"abstract": " We study the nonsingular black hole in an Anti-de Sitter background, taking\nthe negative cosmological constant as the pressure of the system. We\ninvestigate the horizon structure and find the critical values $m_0$ and\n$\tilde{k}_0$, such that $m>m_0$ (or $\tilde{k}<\tilde{k}_0$) corresponds to a\nblack hole solution with two horizons, namely the Cauchy horizon $x_-$ and the\nevent horizon $x_+$. For $m=m_0$ (or $\tilde{k}=\tilde{k}_0$), there exists an\nextremal black hole with a degenerate horizon $x_0=x_{\pm}$, and for $m<m_0$\n(or $\tilde{k}>\tilde{k}_0$), no black hole solution exists. In turn, we\ncalculate the thermodynamic properties and, by observing the behaviour of the\nGibbs free energy and the specific heat, we find that this black hole solution\nexhibits first-order (small to large black hole) and second-order phase\ntransitions. 
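(The Van der Waals comparison that follows is easy to reproduce symbolically: the critical point sits where the first and second derivatives of $P(v)$ vanish on the critical isotherm. A quick check for the ordinary Van der Waals fluid -- not the black hole equation of state itself:)

```python
import sympy as sp

v, T, a, b, R = sp.symbols("v T a b R", positive=True)
P = R * T / (v - b) - a / v**2            # Van der Waals equation of state

# Critical point: dP/dv = 0 and d^2P/dv^2 = 0 on the critical isotherm.
crit = sp.solve([sp.diff(P, v), sp.diff(P, v, 2)], [v, T], dict=True)[0]
v_c, T_c = crit[v], crit[T]
P_c = P.subs({v: v_c, T: T_c})

print("v_c =", v_c, " T_c =", sp.simplify(T_c), " P_c =", sp.simplify(P_c))
print("P_c v_c / (R T_c) =", sp.simplify(P_c * v_c / (R * T_c)))   # -> 3/8
```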
Further,\nwe study the $P-V$ criticality of the system and then calculate the critical\nexponents, showing that they are the same as those of the Van der Waals fluid.\n"} {"abstract": " In the context of autonomous vehicles, one of the most crucial tasks is to\nestimate the risk of the undertaken action. For navigation in complex urban\nenvironments, the Bayesian occupancy grid is one of the most popular types of\nmaps, where the information of occupancy is stored as the probability of\ncollision. Although widely used, this kind of representation is not well suited\nfor risk assessment: because of its discrete nature, the probability of\ncollision becomes dependent on the tessellation size. Therefore, risk\nassessments on Bayesian occupancy grids cannot yield risks with meaningful\nphysical units. In this article, we propose an alternative framework called\nDynamic Lambda-Field that is able to assess generic physical risks in dynamic\nenvironments without being dependent on the tessellation size. Using our\nframework, we are able to plan safe trajectories where the risk function can be\nadjusted depending on the scenario. We validate our approach with quantitative\nexperiments, showing the convergence speed of the grid and that the framework\nis suitable for real-world scenarios.\n"} {"abstract": " The use of crowdworkers in NLP research is growing rapidly, in tandem with\nthe exponential increase in research production in machine learning and AI.\nEthical discussion regarding the use of crowdworkers within the NLP research\ncommunity is typically confined in scope to issues related to labor conditions\nsuch as fair pay. We draw attention to the lack of ethical considerations\nrelated to the various tasks performed by workers, including labeling,\nevaluation, and production. We find that the Final Rule, the common ethical\nframework used by researchers, did not anticipate the use of online\ncrowdsourcing platforms for data collection, resulting in gaps between the\nspirit and practice of human-subjects ethics in NLP research. We enumerate\ncommon scenarios where crowdworkers performing NLP tasks are at risk of harm.\nWe thus recommend that researchers evaluate these risks by considering the\nthree ethical principles set up by the Belmont Report. We also clarify some\ncommon misconceptions regarding the Institutional Review Board (IRB)\napplication. We hope this paper will serve to reopen the discussion within our\ncommunity regarding the ethical use of crowdworkers.\n"} {"abstract": " Motivated by applications to single-particle cryo-electron microscopy\n(cryo-EM), we study several problems of function estimation in a low SNR\nregime, where samples are observed under random rotations of the function\ndomain. In a general framework of group orbit estimation with linear\nprojection, we describe a stratification of the Fisher information eigenvalues\naccording to a sequence of transcendence degrees in the invariant algebra, and\nrelate critical points of the log-likelihood landscape to a sequence of\nmethod-of-moments optimization problems. This extends previous results for a\ndiscrete rotation group without projection.\n We then compute these transcendence degrees and the forms of these moment\noptimization problems for several examples of function estimation under $SO(2)$\nand $SO(3)$ rotations, including a simplified model of cryo-EM as introduced by\nBandeira, Blum-Smith, Kileel, Perry, Weed, and Wein. 
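(For the $SO(2)$ case, the third-order sufficiency discussed next has a classical concrete face: a rotation by $\theta$ multiplies the $k$-th Fourier coefficient by $e^{ik\theta}$, so the bispectrum $B(k_1,k_2)=f_{k_1}f_{k_2}\overline{f_{k_1+k_2}}$ is rotation-invariant, and generically it pins down the signal up to rotation. A quick numerical check of the invariance:)

```python
import numpy as np

rng = np.random.default_rng(3)
K = 8
f = rng.standard_normal(K + 1) + 1j * rng.standard_normal(K + 1)  # f_0..f_K

theta = 0.7
g = f * np.exp(1j * np.arange(K + 1) * theta)   # Fourier data of rotated signal

def bispectrum(h):
    """Third-order rotation invariants B(k1, k2) = h_k1 h_k2 conj(h_{k1+k2})."""
    return np.array([h[k1] * h[k2] * np.conj(h[k1 + k2])
                     for k1 in range(K + 1) for k2 in range(K + 1 - k1)])

print("invariant under rotation:", np.allclose(bispectrum(f), bispectrum(g)))
```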
For several of these\nexamples, we affirmatively resolve numerical conjectures that\n$3^\text{rd}$-order moments are sufficient to locally identify a generic signal\nup to its rotational orbit.\n For low-dimensional approximations of the electric potential maps of two\nsmall protein molecules, we empirically verify that the noise-scalings of the\nFisher information eigenvalues conform with these theoretical predictions over\na range of SNR, in a model of $SO(3)$ rotations without projection.\n"} {"abstract": " The recently introduced harmonic resolvent framework is concerned with the\nstudy of the input-output dynamics of nonlinear flows in the proximity of a\nknown time-periodic orbit. These dynamics are governed by the harmonic\nresolvent operator, which is a linear operator in the frequency domain whose\nsingular value decomposition sheds light on the dominant input-output\nstructures of the flow. Although the harmonic resolvent is a mathematically\nwell-defined operator, the numerical computation of its singular value\ndecomposition requires inverting a matrix that becomes exactly singular as the\nperiodic orbit approaches an exact solution of the nonlinear governing\nequations. The very poor conditioning of this matrix hinders the convergence of\nclassical Krylov solvers, even in the presence of preconditioners, thereby\nincreasing the computational cost required to perform the harmonic resolvent\nanalysis. In this paper we show that a suitable augmentation of the (nearly)\nsingular matrix removes the singularity, and we provide a lower bound for the\nsmallest singular value of the augmented matrix. We also show that the desired\ndecomposition of the harmonic resolvent can be computed using the augmented\nmatrix, whose improved conditioning leads to a significant speedup in the\nconvergence of classical iterative solvers. We demonstrate this simple, yet\neffective, computational procedure on the Kuramoto-Sivashinsky equation in the\nproximity of an unstable time-periodic orbit.\n"} {"abstract": " The objective of this report is to review existing enterprise blockchain\ntechnologies - EOSIO-powered systems, Hyperledger Fabric and Besu, ConsenSys\nQuorum, R3 Corda and Ernst and Young's Nightfall - that provide data privacy\nwhile leveraging the data integrity benefits of blockchain. By reviewing and\ncomparing how and how well these technologies achieve data privacy, a snapshot\nof the industry's current best practices and data privacy models is captured.\nMajor enterprise technologies are contrasted with EOSIO to better\nunderstand how EOSIO can evolve to meet the trends seen in enterprise\nblockchain privacy. 
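(One pattern enumerated below -- anchoring private data to a shared ledger via a hash -- is small enough to sketch directly: only the digest of a canonicalized off-chain payload goes on-chain, and any holder of the payload can later verify it. The payload and ledger stand-in are illustrative:)

```python
import hashlib
import json

# Off-chain private payload; only its digest is anchored on the shared ledger.
payload = {"buyer": "alice", "seller": "bob", "price": 1000}
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(canonical).hexdigest()

ledger = [{"tx": 1, "anchor": digest}]      # stand-in for the shared ledger

# Later: a party holding the payload proves it matches the on-chain anchor.
check = hashlib.sha256(canonical).hexdigest()
print("integrity verified:", check == ledger[0]["anchor"])
```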
The following strategies and trends were generally observed\nin these technologies:\n Cryptography: the hashing algorithm was found to be the most used\ncryptographic primitive in enterprise privacy solutions.\n Coordination via on-chain contracts: a common strategy was to use a shared\npublic ledger to coordinate data privacy groups and, more generally, manage\nidentities and access control.\n Transaction and contract code sharing: there was a variety of different\nlevels of privacy around the business logic (smart contract code) visibility.\nSome solutions only allowed authorised peers to view code while others made\nthis accessible to everyone who was a member of the shared ledger.\n Data migrations for data privacy applications: significant challenges exist\nwhen using cryptographically stored data in terms of being able to run system\nupgrades.\n Multiple blockchain ledgers for data privacy: solutions attempted to create a\nnew private blockchain for every private data relationship, which was eventually\nabandoned in favour of one shared ledger with private data\ncollections/transactions that were anchored to the ledger with a hash in order\nto improve scaling.\n"} {"abstract": " Wavefront aberrations can reflect the imaging quality of high-performance\noptical systems better than geometric aberrations. Although laser\ninterferometers have emerged as the main tool for measurement of transmitted\nwavefronts, their application is greatly limited, as they are typically\ndesigned for operation at specific wavelengths. In a previous study, we\nproposed a method for determining the wavefront transmitted by an optical\nsystem at any wavelength in a certain band. Although this method works well for\nmost monochromatic systems, where the image plane is at the focal point for the\ntransmission wavelength, for general multi-color systems, it is more practical\nto measure the wavefront at the defocused image plane. Hence, in this paper, we\nhave developed a complete method for determining transmitted wavefronts in a\nbroad bandwidth at any defocused position, enabling wavefront measurements for\nmulti-color systems. Here, we assume that in small ranges, the Zernike\ncoefficients have a linear relationship with position, such that Zernike\ncoefficients at defocused positions can be derived from measurements performed\nat the focal point. We conducted experiments to verify these assumptions,\nvalidating the new method. The experimental setup has been improved so that it\ncan handle multi-color systems, and a detailed experimental process is\nsummarized. With this technique, application of broadband transmission\nwavefront measurement can be extended to most general optical systems, which is\nof great significance for characterization of achromatic and apochromatic\noptical lenses.\n"} {"abstract": " We study for the first time the $p\Sigma^-\to K^-d$ and $K^-d\to p\Sigma^-$\nreactions close to threshold and show that they are driven by a triangle\nmechanism, with the $\Lambda(1405)$, a proton and a neutron as intermediate\nstates, which develops a triangle singularity close to the $\bar{K}d$\nthreshold. We find that a mechanism involving virtual pion exchange and the\n$K^-p\to\pi^+\Sigma^-$ amplitude dominates over another one involving kaon\nexchange and the $K^-p\to K^-p$ amplitude. Moreover, of the two $\Lambda(1405)$\nstates, the one with higher mass around $1420$ MeV gives the largest\ncontribution to the process. 
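A concrete reading of the last trend in the blockchain review above (private data anchored to a shared ledger by a hash), as a minimal sketch; the record fields are hypothetical, and a production system would add salting and Merkle aggregation.

    import hashlib
    import json

    def anchor(record: dict) -> str:
        # Only this digest goes on the shared ledger; the record itself
        # stays in an off-chain private data collection.
        blob = json.dumps(record, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def verify(record: dict, on_chain_digest: str) -> bool:
        # Any party holding the private record can check it against the ledger.
        return anchor(record) == on_chain_digest

    digest = anchor({"buyer": "A", "qty": 100})
    print(verify({"buyer": "A", "qty": 100}, digest))  # True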
We show that the cross section, well within\nmeasurable range, is very sensitive to different models that, while reproducing\n$\bar{K}N$ observables above threshold, provide different extrapolations of the\n$\bar{K}N$ amplitudes below threshold. The observables of this reaction will\nprovide new constraints on the theoretical models, leading to more reliable\nextrapolations of the $\bar{K}N$ amplitudes below threshold and to more\naccurate predictions of the $\Lambda(1405)$ state of lower mass.\n"} {"abstract": " We present a newly enlarged census of the compact radio population towards\nthe Orion Nebula Cluster (ONC) using high-sensitivity continuum maps (3-10\n$\mu$Jy bm$^{-1}$) from a total of $\sim30$ h centimeter-wavelength\nobservations over an area of $\sim$20$'\times20'$ obtained in the C-band (4$-$8\nGHz) with the Karl G. Jansky Very Large Array (VLA) in its high-resolution\nA-configuration. We thus complement our previous deep survey of the innermost\nareas of the ONC, now covering the field of view of the Chandra Orion\nUltra-deep Project (COUP). Our catalog contains 521 compact radio sources of\nwhich 198 are new detections. Overall, we find that 17% of the (mostly stellar)\nCOUP sources have radio counterparts, while 53% of the radio sources have COUP\ncounterparts. Most notably, the radio detection fraction of X-ray sources is\nhigher in the inner cluster and almost constant for $r>3'$ (0.36 pc) from\n$\theta^1$ Ori C, suggesting a correlation between the radio emission mechanism\nof these sources and their distance from the most massive stars at the center\nof the cluster, for example due to increased photoionisation of circumstellar\ndisks. The combination with our previous observations four years prior led to\nthe discovery of fast proper motions of up to $\sim$373 km s$^{-1}$ from faint\nradio sources associated with ejecta of the OMC1 explosion. Finally, we search\nfor strong radio variability. We found changes in flux density by a factor of\n$\lesssim$5 within our observations and a few sources with changes by a factor\n$>$10 on long timescales of a few years.\n"} {"abstract": " With the advent of the Internet-of-Things (IoT) era, the ever-increasing\nnumber of devices and emerging applications have triggered the need for\nubiquitous connectivity and more efficient computing paradigms. These stringent\ndemands have posed significant challenges to the current wireless networks and\ntheir computing architectures. In this article, we propose a high-altitude\nplatform (HAP) network-enabled edge computing paradigm to tackle the key issues\nof massive IoT connectivity. Specifically, we first provide a comprehensive\noverview of the recent advances in non-terrestrial network-based edge computing\narchitectures. Then, the limitations of the existing solutions are further\nsummarized from the perspectives of the network architecture, random access\nprocedure, and multiple access techniques. To overcome the limitations, we\npropose a HAP-enabled aerial cell-free massive multiple-input multiple-output\nnetwork to realize the edge computing paradigm, where multiple HAPs cooperate\nvia the edge servers to serve IoT devices. For the case of a massive number of\ndevices, we further adopt a grant-free massive access scheme to guarantee\nlow-latency and high-efficiency massive IoT connectivity to the network.\nBesides, a case study is provided to demonstrate the effectiveness of the\nproposed solution. 
Finally, to shed light on the future research directions of\nHAP network-enabled edge computing paradigms, the key challenges and open\nissues are discussed.\n"} {"abstract": " Supervised machine learning, in which models are automatically derived from\nlabeled training data, is only as good as the quality of that data. This study\nbuilds on prior work that investigated to what extent 'best practices' around\nlabeling training data were followed in applied ML publications within a single\ndomain (social media platforms). In this paper, we expand by studying\npublications that apply supervised ML in a far broader spectrum of disciplines,\nfocusing on human-labeled data. We report to what extent a random sample of ML\napplication papers across disciplines gives specific details about whether best\npractices were followed, while acknowledging that a greater range of\napplication fields necessarily produces greater diversity of labeling and\nannotation methods. Because much of machine learning research and education\nonly focuses on what is done once a \"ground truth\" or \"gold standard\" of\ntraining data is available, it is especially relevant to discuss issues around\nthe equally important aspect of whether such data is reliable in the first\nplace. This determination becomes increasingly complex when applied to a\nvariety of specialized fields, as labeling can range from a task requiring\nlittle-to-no background knowledge to one that must be performed by someone with\ncareer expertise.\n"} {"abstract": " Conversational Artificial Intelligence (AI) used in industry settings can be\ntrained to closely mimic human behaviors, including lying and deception.\nHowever, lying is often a necessary part of negotiation. To address this, we\ndevelop a normative framework for when it is ethical or unethical for a\nconversational AI to lie to humans, based on whether there is what we call\n\"invitation of trust\" in a particular scenario. Importantly, cultural norms\nplay an important role in determining whether there is invitation of trust\nacross negotiation settings, and thus an AI trained in one culture may not be\ngeneralizable to others. Moreover, individuals may have different expectations\nregarding the invitation of trust and propensity to lie for human vs. AI\nnegotiators, and these expectations may vary across cultures as well. Finally,\nwe outline how a conversational chatbot can be trained to negotiate ethically\nby applying autoregressive models to large dialog and negotiations datasets.\n"} {"abstract": " Technology has the opportunity to assist older adults as they age in place,\ncoordinate caregiving resources, and meet unmet needs through access to\nresources. Currently, older adults use consumer technologies to support\neveryday life; however, these technologies are not always accessible or as\nuseful as they can be. Indeed, industry has attempted to create smart home\ntechnologies with older adults as a target user group; however, these solutions\nare often more focused on the technical aspects and are short-lived. In this\npaper, we advocate for older adults being involved in the design process - from\ninitial ideation to product development to deployment. 
We encourage federally\nfunded researchers and industry to create compensated, diverse older adult\nadvisory boards to address stereotypes about aging while ensuring their needs\nare considered.\n We envision artificial intelligence systems that augment resources instead of\nreplacing them - especially in under-resourced communities. Older adults rely\non their caregiver networks and community organizations for social, emotional,\nand physical support; thus, AI should be used to coordinate resources better\nand lower the burden of connecting with these resources. Although\nsociotechnical smart systems can help identify needs of older adults, the lack\nof affordable research infrastructure and translation of findings into consumer\ntechnology perpetuates inequities in designing for diverse older adults. In\naddition, there is a disconnect between the creation of smart sensing systems\nand creating understandable, actionable data for older adults and caregivers to\nutilize. We ultimately advocate for a well-coordinated research effort across\nthe United States that connects older adults, caregivers, community\norganizations, and researchers together to catalyze innovative and practical\nresearch for all stakeholders.\n"} {"abstract": " Let $k \geq 1$ be an integer and $n=3k-1$. Let $\mathbb{Z}_n$ denote the\nadditive group of integers modulo $n$ and let $C$ be the subset of\n$\mathbb{Z}_n$ consisting of the elements congruent to 1 modulo 3. The Cayley\ngraph $Cay(\mathbb{Z}_n; C)$ is known as the Andr\'asfai graph And($k$). In\nthis note, we wish to determine the automorphism group of this graph. We will\nshow that $Aut(And(k))$ is isomorphic to the dihedral group\n$\mathbb{D}_{2n}$.\n"} {"abstract": " The Landau form of the Fokker-Planck equation is the gold standard for\nplasmas dominated by small-angle collisions; however, its $O(N^2)$ work\ncomplexity has limited its practicality. This paper extends previous work on a\nfully conservative finite element method for this Landau collision operator\nwith adaptive mesh refinement, optimized for vector machines, by porting the\nalgorithm to the Cuda programming model with implementations in Cuda and\nKokkos, and by reporting results within a Vlasov-Maxwell-Landau model of a\nplasma thermal quench. With new optimizations of the Landau kernel and ports of\nthis kernel, the sparse matrix assembly and algebraic solver to Cuda, the cost\nof a well resolved Landau collision time advance is shown to be practical for\nkinetic plasma applications. This fully implicit Landau time integrator and the\nplasma quench model are available in the PETSc (Portable, Extensible Toolkit\nfor Scientific Computation) numerical library.\n"} {"abstract": " Let $X$ and $Y$ be two smooth manifolds of the same dimension. It was proved\nby Seeger, Sogge and Stein in \cite{SSS} that the Fourier integral operators\nwith real non-degenerate phase functions in the class $I^{\mu}_1(X,Y;\Lambda),$\n$\mu\leq -(n-1)/2,$ are bounded from $H^1$ to $L^1.$ The sharpness of the order\n$-(n-1)/2,$ for any elliptic operator was also proved in \cite{SSS} and\nextended to other types of canonical relations in \cite{Ruzhansky1999}. That\nthe operators in the class $I^{\mu}_1(X,Y;\Lambda),$ $\mu\leq -(n-1)/2,$\nsatisfy the weak (1,1) inequality was proved by Tao \cite{Tao:weak11}. 
In this\nnote, we prove that the weak (1,1) inequality for the order $ -(n-1)/2$ is\nsharp for any elliptic Fourier integral operator, as well as its versions for\ncanonical relations satisfying additional rank conditions.\n"} {"abstract": " A novel data-processing method was developed to facilitate scintillation\ndetector characterization. Combined with fan-beam calibration, this method can\nbe used to quickly and conveniently calibrate gamma-ray detectors for SPECT,\nPET, homeland security or astronomy. Compared with traditional calibration\nmethods, this new technique can accurately calibrate a photon-counting\ndetector, including DOI information, with greatly reduced time. The enabling\npart of this technique is fan-beam scanning combined with a data-processing\nstrategy called the common-data subset (CDS) method, which was used to\nsynthesize the detector's mean detector response functions (MDRFs). Using this\napproach, $2N$ scans ($N$ in x and $N$ in y direction) are necessary to finish\ncalibration of a 2D detector as opposed to $N^2$ scans with a pencil beam. For\na 3D detector calibration, only $3N$ scans are necessary to achieve the 3D\ndetector MDRFs that include DOI information. Moreover, this calibration\ntechnique can be used for detectors with complicated or irregular MDRFs. We\npresent both Monte-Carlo simulations and experimental results that support the\nfeasibility of this method.\n"} {"abstract": " While numerous attempts have been made to jointly parse syntax and semantics,\nhigh performance in one domain typically comes at the price of performance in\nthe other. This trade-off contradicts the large body of research focusing on\nthe rich interactions at the syntax-semantics interface. We explore multiple\nmodel architectures which allow us to exploit the rich syntactic and semantic\nannotations contained in the Universal Decompositional Semantics (UDS) dataset,\njointly parsing Universal Dependencies and UDS to obtain state-of-the-art\nresults in both formalisms. We analyze the behaviour of a joint model of syntax\nand semantics, finding patterns supported by linguistic theory at the\nsyntax-semantics interface. We then investigate to what degree joint modeling\ngeneralizes to a multilingual setting, where we find similar trends across 8\nlanguages.\n"} {"abstract": " This paper considers the Gaussian multiple-access channel (MAC) in the\nasymptotic regime where the number of users grows linearly with the code\nlength. We propose efficient coding schemes based on random linear models with\napproximate message passing (AMP) decoding and derive the asymptotic error rate\nachieved for a given user density, user payload (in bits), and user energy. The\ntradeoff between energy-per-bit and achievable user density (for a fixed user\npayload and target error rate) is studied, and it is demonstrated that in the\nlarge system limit, a spatially coupled coding scheme with AMP decoding\nachieves near-optimal tradeoffs for a wide range of user densities.\nFurthermore, in the regime where the user payload is large, we also study the\nspectral efficiency versus energy-per-bit tradeoff and discuss methods to\nreduce decoding complexity at large payload sizes.\n"} {"abstract": " Computational thinking has been a recent focus of education research within\nthe sciences. 
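Looking back at the multiple-access abstract above, here is a generic approximate message passing (AMP) sketch for a random linear model $y = Ax + w$ with a soft-thresholding denoiser; it is not the paper's coded-modulation scheme, and the dimensions, sparsity, and threshold constant are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    m, N, k, sigma = 250, 500, 25, 0.05
    A = rng.standard_normal((m, N)) / np.sqrt(m)  # iid N(0, 1/m) sensing matrix
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = 1.0
    y = A @ x + sigma * rng.standard_normal(m)

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    xh, z = np.zeros(N), y.copy()
    for _ in range(30):
        tau = 1.5 * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual energy
        xh_new = soft(xh + A.T @ z, tau)
        # Onsager correction keeps the effective noise Gaussian across iterations.
        z = y - A @ xh_new + z * (np.count_nonzero(xh_new) / m)
        xh = xh_new
    print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))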
However, there is a dearth of scholarly literature on how best to\nteach and to assess this topic, especially in disciplinary science courses.\nPhysics classes with computation integrated into the curriculum are a fitting\nsetting for investigating computational thinking. In this paper, we lay the\nfoundation for exploring computational thinking in introductory physics\ncourses. First, we review relevant literature to synthesize a set of potential\nlearning goals that students could engage in when working with computation. The\ncomputational thinking framework that we have developed features 14 practices\ncontained within 6 different categories. We use in-class video data as\nexistence proofs of the computational thinking practices proposed in our\nframework. In doing this work, we hope to provide ways for teachers to assess\ntheir students' development of computational thinking, while also giving\nphysics education researchers some guidance on how to study this topic in\ngreater depth.\n"} {"abstract": " In this study, an algorithm for blind and automatic modulation classification\nis proposed. It combines machine learning with signal feature extraction to\nrecognize a diverse range of modulations at low signal-to-noise ratio (SNR).\nThe presented algorithm contains four steps. First, it uses spectrum analysis\nto branch the modulated signal according to regular and irregular spectral\ncharacteristics. Second, a nonlinear soft-margin support vector (NS SVM)\nproblem is applied to the received signal, and its symbols are classified into\ncorrect and incorrect (support vector) symbols. The NS SVM step reduces the\neffect of physical-layer noise on the modulated signal. After that, k-center\nclustering finds the center of each class. Finally, the estimated scatter\ndiagram is correlated with pre-saved ideal scatter diagrams of the candidate\nmodulations, and the correlation outcome is the classification result. For\nfurther evaluation, the success rate, performance, and complexity are compared\nwith many published methods. The simulations show that the proposed algorithm\ncan classify modulated signals at lower SNR; for example, it recognizes 4-QAM\nat SNR=-4.2 dB and 4-FSK at SNR=2.1 dB with a 99% success rate. Moreover, owing\nto the use of a kernel function in the dual problem of the NS SVM and\nfeature-based functions, the proposed algorithm has low complexity and is\nsimple to implement in practice.\n"} {"abstract": " Jupiter family comets contribute a significant amount of debris to near-Earth\nspace. However, telescopic observations of these objects seem to suggest they\nhave short physical lifetimes. If this is true, the material generated will\nalso be short-lived, but fireball observation networks still detect material on\ncometary orbits. This study examines centimeter-meter scale sporadic meteoroids\ndetected by the Desert Fireball Network from 2014-2020 originating from Jupiter\nfamily comet-like orbits. Analyzing each event's dynamic history and physical\ncharacteristics, we confidently determined whether they originated from the\nmain asteroid belt or the trans-Neptunian region. Our results indicate that\n$<4\%$ of sporadic meteoroids on JFC-like orbits are genetically cometary. This\nobservation is statistically significant and shows that cometary material is\ntoo friable to survive in near-Earth space. 
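A toy rendering of the final matching step in the modulation-classification abstract above: received symbols are scored against pre-saved ideal scatter diagrams, here by mean nearest-point distance in place of the NS SVM and k-center stages; the constellations and noise level are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)
    ideal = {
        "4-QAM": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2),
        "BPSK": np.array([1 + 0j, -1 + 0j]),
    }
    # Received symbols: noisy 4-QAM (hypothetical channel and SNR).
    tx = rng.choice(ideal["4-QAM"], 4000)
    rx = tx + 0.25 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))

    def score(rx, ref):
        # Negative mean distance to the nearest ideal constellation point.
        return -np.abs(rx[:, None] - ref[None, :]).min(axis=1).mean()

    print(max(ideal, key=lambda m: score(rx, ideal[m])))  # "4-QAM"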
Even when considering shower\ncontributions, meteoroids on JFC-like orbits are primarily from the main belt.\nThus, the presence of genuine cometary meteorites in terrestrial collections is\nhighly unlikely.\n"} {"abstract": " Leakage of data from publicly available Machine Learning (ML) models is an\narea of growing significance as commercial and government applications of ML\ncan draw on multiple sources of data, potentially including users' and clients'\nsensitive data. We provide a comprehensive survey of contemporary advances on\nseveral fronts, covering involuntary data leakage which is natural to ML\nmodels, potential malevolent leakage which is caused by privacy attacks, and\ncurrently available defence mechanisms. We focus on inference-time leakage, as\nthe most likely scenario for publicly available models. We first discuss what\nleakage is in the context of different data, tasks, and model architectures. We\nthen propose a taxonomy across involuntary and malevolent leakage, available\ndefences, followed by the currently available assessment metrics and\napplications. We conclude with outstanding challenges and open questions,\noutlining some promising directions for future research.\n"} {"abstract": " Fitting concentric geometric objects to digitized data is an important\nproblem in many areas such as iris detection, autonomous navigation, and\nindustrial robotics operations. There are two common approaches to fitting\ngeometric shapes to data: the geometric (iterative) approach and algebraic\n(non-iterative) approach. The geometric approach is a nonlinear iterative\nmethod that minimizes the sum of the squares of Euclidean distances of the\nobserved points to the ellipses and is regarded as the most accurate method, but\nit needs a good initial guess to improve the convergence rate. The algebraic\napproach is based on minimizing the algebraic distances with some constraints\nimposed on parametric space. Each algebraic method depends on the imposed\nconstraint, and it can be solved with the aid of the generalized eigenvalue\nproblem. Only a few methods in the literature were developed to solve the problem\nof concentric ellipses. Here we study the statistical properties of existing\nmethods by first establishing a general mathematical and statistical\nframework for this problem. Using rigorous perturbation analysis, we derive the\nvariances and biases of each method under the small-sigma model. We also\ndevelop new estimators, which can be used as reliable initial guesses for other\niterative methods. Then we compare the performance of each method according to\ntheir theoretical accuracy. Not only do our methods described here outperform\nother existing non-iterative methods, they are also quite robust against large\nnoise. These methods and their practical performances are assessed by a series\nof numerical experiments on both synthetic and real data.\n"} {"abstract": " Gradient-based adversarial attacks on deep neural networks pose a serious\nthreat, since they can be deployed by adding imperceptible perturbations to the\ntest data of any network, and the risk they introduce cannot be assessed\nthrough the network's original training performance. Denoising and\ndimensionality reduction are two distinct methods that have been independently\ninvestigated to combat such attacks. 
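For the algebraic (non-iterative) fitting idea in the ellipse abstract above, a minimal sketch on a single circle rather than concentric ellipses: the Kasa-style fit solves a linear least-squares problem in the implicit parameters; the data generation is hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    t = rng.uniform(0, 2 * np.pi, 200)
    # Noisy points on a circle with center (2, -1) and radius 3.
    x = 2 + 3 * np.cos(t) + 0.05 * rng.standard_normal(200)
    y = -1 + 3 * np.sin(t) + 0.05 * rng.standard_normal(200)

    # Kasa-style algebraic fit: x^2 + y^2 + a x + b y + c = 0 is linear in (a, b, c).
    A = np.c_[x, y, np.ones_like(x)]
    a, b, c = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    center = (-a / 2, -b / 2)
    radius = np.sqrt(center[0]**2 + center[1]**2 - c)
    print(center, radius)

The geometric approach would instead iterate on the true point-to-curve distances, starting from an estimate such as this one.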
While denoising offers the ability to\ntailor the defense to the specific nature of the attack, dimensionality\nreduction offers the advantage of potentially removing previously unseen\nperturbations, along with reducing the training time of the network being\ndefended. We propose strategies to combine the advantages of these two defense\nmechanisms. First, we propose the cascaded defense, which involves denoising\nfollowed by dimensionality reduction. To reduce the training time of the\ndefense for a small trade-off in performance, we propose the hidden layer\ndefense, which involves feeding the output of the encoder of a denoising\nautoencoder into the network. Further, we discuss how adaptive attacks against\nthese defenses could become significantly weaker when an alternative defense is\nused, or when no defense is used. In this light, we propose a new metric to\nevaluate a defense, which measures the sensitivity of the adaptive attack to\nmodifications in the defense. Finally, we present a guideline for building an\nordered repertoire of defenses, a.k.a. a defense infrastructure, that adjusts\nto limited computational resources in the presence of uncertainty about the attack\nstrategy.\n"} {"abstract": " Gender-based crime is one of the most concerning scourges of contemporary\nsociety. Governments worldwide have invested substantial economic and human\nresources to radically eliminate this threat. Despite these efforts, providing\naccurate predictions of the risk that a victim of gender violence has of being\nattacked again is still a very hard open problem. The development of new\nmethods for issuing accurate, fair and quick predictions would allow police\nforces to select the most appropriate measures to prevent recidivism. In this\nwork, we propose to apply Machine Learning (ML) techniques to create models\nthat accurately predict the recidivism risk of a gender-violence offender. The\nrelevance of the contribution of this work is threefold: (i) the proposed ML\nmethod outperforms the preexisting risk assessment algorithm based on classical\nstatistical techniques, (ii) the study has been conducted through an official\nspecific-purpose database with more than 40,000 reports of gender violence, and\n(iii) two new quality measures are proposed for assessing the effective police\nprotection that a model supplies and the overload in the invested resources\nthat it generates. Additionally, we propose a hybrid model that combines the\nstatistical prediction methods with the ML method, permitting authorities to\nimplement a smooth transition from the preexisting model to the ML-based model.\nThis hybrid nature enables a decision-making process to optimally balance\nbetween the efficiency of the police system and aggressiveness of the\nprotection measures taken.\n"} {"abstract": " Global System for Mobile Communications (GSM) is a cellular network standard\nthat is popular and has been growing in recent years. It was developed to solve\nthe fragmentation issues of the first cellular systems, and it addresses digital\nmodulation methods, network structure levels, and services. It is fundamental\nfor organizations to become learning organizations that keep up with technology\nchanges so that their network services remain competitive. In this paper, a\nsimulation analysis using the NetSim tool is presented to compare different\ncellular network codecs for GSM network performance. Parameters such as\nthroughput, delay, and jitter are analyzed to assess the quality\nof service provided by each network codec. 
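A schematic of the cascaded defense from the adversarial-robustness abstract above (denoising followed by dimensionality reduction), with a trivial clipping denoiser standing in for the denoising autoencoder; the data and perturbation are hypothetical.

    import numpy as np

    def fit_pca(X, k):
        # Top-k principal subspace of the (clean) training data.
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        return mu, Vt[:k]

    def cascaded_defense(x, mu, P, clip=3.0):
        # Stage 1 (denoise): clip outlying coordinates toward the data mean.
        x = np.clip(x, mu - clip, mu + clip)
        # Stage 2 (dimensionality reduction): project onto the learned subspace.
        return mu + (x - mu) @ P.T @ P

    rng = np.random.default_rng(5)
    X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))
    mu, P = fit_pca(X, k=5)
    x_adv = X[0] + 0.5 * rng.standard_normal(20)  # crude perturbation stand-in
    print(cascaded_defense(x_adv, mu, P).shape)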
A unicast application for the cellular\nnetwork is modeled for different network scenarios. Based on the evaluation\nand simulation, it was found that G.711, GSM_FR, and GSM-EFR performed\nbetter than the other codecs and can be considered the best codecs for\ncellular networks. These codecs are therefore well suited to improving the\nperformance of the network in the near future.\n"} {"abstract": " User-facing software services are becoming increasingly reliant on remote\nservers to host Deep Neural Network (DNN) models, which perform inference tasks\nfor the clients. Such services require the client to send input data to the\nservice provider, who processes it using a DNN and returns the output\npredictions to the client. Due to the rich nature of the inputs such as images\nand speech, the input often contains more information than what is necessary to\nperform the primary inference task. Consequently, in addition to the primary\ninference task, a malicious service provider could infer secondary (sensitive)\nattributes from the input, compromising the client's privacy. The goal of our\nwork is to improve inference privacy by injecting noise to the input to hide\nthe irrelevant features that are not conducive to the primary classification\ntask. To this end, we propose Adaptive Noise Injection (ANI), which uses a\nlight-weight DNN on the client-side to inject noise to each input, before\ntransmitting it to the service provider to perform inference. Our key insight\nis that by customizing the noise to each input, we can achieve state-of-the-art\ntrade-off between utility and privacy (up to 48.5% degradation in\nsensitive-task accuracy with <1% degradation in primary accuracy),\nsignificantly outperforming existing noise injection schemes. Our method does\nnot require prior knowledge of the sensitive attributes and incurs minimal\ncomputational overheads.\n"} {"abstract": " Getman et al. (2021) reports the discovery, energetics, frequencies, and\neffects on environs of $>1000$ X-ray super-flares with X-ray energies $E_X \sim\n10^{34}-10^{38}$~erg from pre-main sequence (PMS) stars identified in the\n$Chandra$ MYStIX and SFiNCs surveys. Here we perform detailed plasma evolution\nmodeling of $55$ bright MYStIX/SFiNCs super-flares from these events. They\nconstitute a large sample of the most powerful stellar flares analyzed in a\nuniform fashion. They are compared with published X-ray super-flares from young\nstars in the Orion Nebula Cluster, older active stars, and the Sun. Several\nresults emerge. First, the properties of PMS X-ray super-flares are independent\nof the presence or absence of protoplanetary disks inferred from infrared\nphotometry, supporting the solar-type model of PMS flaring magnetic loops with\nboth footpoints anchored in the stellar surface. Second, most PMS super-flares\nresemble solar long duration events (LDEs) that are associated with coronal\nmass ejections. Slow rise PMS super-flares are an interesting exception. Third,\nstrong correlations of super-flare peak emission measure and plasma temperature\nwith the stellar mass are similar to established correlations for the PMS X-ray\nemission composed of numerous smaller flares. Fourth, a new correlation links loop\ngeometry to stellar mass; more massive stars appear to have thicker\nflaring loops. 
Finally, the slope of a long-standing relationship between the\nX-ray luminosity and magnetic flux of various solar-stellar magnetic elements\nappears steeper in PMS super-flares than for solar events.\n"} {"abstract": " The swampland is the set of seemingly consistent low-energy effective field\ntheories that cannot be consistently coupled to quantum gravity. In this review\nwe cover some of the conjectural properties that effective theories should\npossess in order not to fall in the swampland, and we give an overview of their\nmain applications to particle physics. The latter include predictions on\nneutrino masses, bounds on the cosmological constant, the electroweak and QCD\nscales, the photon mass, the Higgs potential and some insights about\nsupersymmetry.\n"} {"abstract": " This paper examines the use of Lie group and Lie Algebra theory to construct\nthe geometry of pairwise comparisons matrices. The Hadamard product (also known\nas coordinatewise, coordinate-wise, elementwise, or element-wise product) is\nanalyzed in the context of inconsistency and inaccuracy by the decomposition\nmethod.\n The two designed components are the approximation and orthogonal components.\nThe decomposition constitutes the theoretical foundation for the multiplicative\npairwise comparisons.\n Keywords: approximate reasoning, subjectivity, inconsistency,\nconsistency-driven, pairwise comparison, matrix Lie group, Lie algebra,\napproximation, orthogonality, decomposition.\n"} {"abstract": " Dark matter (DM) scattering and its subsequent capture in the Sun can boost\nthe local relic density, leading to an enhanced neutrino flux from DM\nannihilations that is in principle detectable at neutrino telescopes. We\ncalculate the event rates expected for a radiative seesaw model containing both\nscalar triplet and singlet-doublet fermion DM candidates. In the case of scalar\nDM, the absence of a spin dependent scattering on nuclei results in a low\ncapture rate in the Sun, which is reflected in an event rate of less than one\nper year in the current IceCube configuration with 86 strings. For\nsinglet-doublet fermion DM, there is a spin dependent scattering process next\nto the spin independent one, which significantly boosts the event rate and thus\nmakes indirect detection competitive with respect to the direct detection\nlimits imposed by PICO-60. Due to a correlation between both scattering\nprocesses, the limits on the spin independent cross section set by XENON1T\nexclude also parts of the parameter space that can be probed at IceCube.\nPreviously obtained limits by ANTARES, IceCube and Super-Kamiokande from the\nSun and the Galactic Center are shown to be much weaker.\n"} {"abstract": " Recent work has established that, for every positive integer $k$, every\n$n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is\nresilient to $f$ edge or vertex faults. For vertex faults, this bound is tight.\nHowever, the case of edge faults is not as well understood: the best known\nlower bound for general $k$ is $\\Omega(f^{\\frac12 - \\frac{1}{2k}} n^{1+1/k}\n+fn)$. Our main result is to nearly close this gap with an improved upper\nbound, thus separating the cases of edge and vertex faults. For odd $k$, our\nnew upper bound is $O_k(f^{\\frac12 - \\frac{1}{2k}} n^{1+1/k} + fn)$, which is\ntight up to hidden $poly(k)$ factors. For even $k$, our new upper bound is\n$O_k(f^{1/2} n^{1+1/k} +fn)$, which leaves a gap of $poly(k) f^{1/(2k)}$. 
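Before the proof discussion that follows, a sketch of the classic greedy $(2k-1)$-spanner construction that underlies the fault-tolerant greedy algorithm analyzed in the spanner abstract above; the fault-tolerant variant additionally enforces the stretch bound under every fault set of size at most $f$. This uses networkx, and the graph parameters are hypothetical.

    import networkx as nx

    def greedy_spanner(G, k):
        # Scan edges by nondecreasing weight; keep an edge only if the
        # current spanner does not already provide stretch (2k-1) for it.
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        for u, v, d in sorted(G.edges(data=True), key=lambda e: e[2]["weight"]):
            try:
                dist = nx.dijkstra_path_length(H, u, v, weight="weight")
            except nx.NetworkXNoPath:
                dist = float("inf")
            if dist > (2 * k - 1) * d["weight"]:
                H.add_edge(u, v, weight=d["weight"])
        return H

    G = nx.gnm_random_graph(60, 400, seed=0)
    nx.set_edge_attributes(G, {e: 1.0 for e in G.edges}, "weight")
    print(G.number_of_edges(), "->", greedy_spanner(G, k=2).number_of_edges())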
Our\nproof is an analysis of the fault-tolerant greedy algorithm, which requires\nexponential time, but we also show that there is a polynomial-time algorithm\nwhich creates edge fault-tolerant spanners that are larger only by factors of\n$k$.\n"} {"abstract": " In warehouse and manufacturing environments, manipulation platforms are\nfrequently deployed at conveyor belts to perform pick and place tasks. Because\nobjects on the conveyor belts are moving, robots have limited time to pick them\nup. This brings the requirement for fast and reliable motion planners that\ncould provide provable real-time planning guarantees, which the existing\nalgorithms do not provide. Besides the planning efficiency, the success of\nmanipulation tasks relies heavily on the accuracy of the perception system\nwhich is often noisy, especially if the target objects are perceived from a\ndistance. For fast moving conveyor belts, the robot cannot wait for a perfect\nestimate before it starts executing its motion. In order to be able to reach\nthe object in time, it must start moving early on (relying on the initial noisy\nestimates) and adjust its motion on-the-fly in response to the pose updates\nfrom perception. We propose a planning framework that meets these requirements\nby providing provable constant-time planning and replanning guarantees. To this\nend, we first introduce and formalize a new class of algorithms called\nConstant-Time Motion Planning algorithms (CTMP) that guarantee to plan in\nconstant time and within a user-defined time bound. We then present our\nplanning framework for grasping objects off a conveyor belt as an instance of\nthe CTMP class of algorithms.\n"} {"abstract": " Fluid-structure interactions are a widespread phenomenon in nature. Although\ntheir numerical modeling has come a long way, the application of numerical\ndesign tools to these multiphysics problems is still lagging behind.\nGradient-based optimization is the most popular approach in topology\noptimization currently. Hence, it is necessary to utilize mesh deformation\ntechniques that have continuous, smooth derivatives. In this work, we address\nmesh deformation techniques for structured, quadrilateral meshes. We discuss\nand comment on two legacy mesh deformation techniques, namely the spring\nanalogy model and the linear elasticity model. In addition, we propose a new\ntechnique based on the Yeoh hyperelasticity model. We focus on mesh quality as\na gateway to mesh admissibility. We propose layered selective stiffening such\nthat the elements adjacent to the fluid-structure interface - where the bulk of\nthe mesh distortion occurs - are stiffened in consecutive layers. The legacy\nand the new models are able to sustain large deformations without deprecating\nthe mesh quality, and the results are enhanced by using layered selective\nstiffening.\n"} {"abstract": " The ability to generate high-fidelity synthetic data is crucial when\navailable (real) data is limited or where privacy and data protection standards\nallow only for limited use of the given data, e.g., in medical and financial\ndata-sets. 
Current state-of-the-art methods for synthetic data generation are\nbased on generative models, such as Generative Adversarial Networks (GANs).\nEven though GANs have achieved remarkable results in synthetic data generation,\nthey are often challenging to interpret. Furthermore, GAN-based methods can\nsuffer when used with mixed real and categorical variables. Moreover, loss\nfunction (discriminator loss) design itself is problem-specific, i.e., the\ngenerative model may not be useful for tasks it was not explicitly trained for.\nIn this paper, we propose to use a probabilistic model as a synthetic data\ngenerator. Learning the probabilistic model for the data is equivalent to\nestimating the density of the data. Based on the copula theory, we divide the\ndensity estimation task into two parts, i.e., estimating univariate marginals\nand estimating the multivariate copula density over the univariate marginals.\nWe use normalising flows to learn both the copula density and univariate\nmarginals. We benchmark our method on both simulated and real data-sets in\nterms of density estimation as well as the ability to generate high-fidelity\nsynthetic data.\n"} {"abstract": " Logistic Regression (LR) is a widely used statistical method in empirical\nbinary classification studies. However, real-life scenarios oftentimes share\ncomplexities that prevent the use of the as-is LR model, and instead\nhighlight the need to include high-order interactions to capture data\nvariability. This becomes even more challenging because of: (i) datasets\ngrowing wider, with more and more variables; (ii) studies being typically\nconducted in strongly imbalanced settings; (iii) samples going from very large\nto extremely small; (iv) the need of providing both predictive models and\ninterpretable results. In this paper we present a novel algorithm, Learning\nhigh-order Interactions via targeted Pattern Search (LIPS), to select\ninteraction terms of varying order to include in an LR model for an imbalanced\nbinary classification task when input data are categorical. LIPS's rationale\nstems from the duality between item sets and categorical interactions. The\nalgorithm relies on an interaction learning step based on a well-known frequent\nitem set mining algorithm, and a novel dissimilarity-based interaction\nselection step that allows the user to specify the number of interactions to be\nincluded in the LR model. In addition, we particularize two variants (Scores\nLIPS and Clusters LIPS) that can address even more specific needs. Through a\nset of experiments we validate our algorithm and prove its wide applicability\nto real-life research scenarios, showing that it outperforms a benchmark\nstate-of-the-art algorithm.\n"} {"abstract": " The concept of exceptional point of degeneracy (EPD) is used to conceive a\ndegenerate synchronization regime that is able to enhance the level of output\npower and power conversion efficiency for backward wave oscillators (BWOs)\noperating at millimeter-wave and Terahertz frequencies. Standard BWOs operating\nat such high frequency ranges typically generate output power not exceeding\ntens of watts with very poor power conversion efficiency on the order of 1%.\nThe novel concept of degenerate synchronization for the BWO based on a folded\nwaveguide is implemented by engineering distributed gain and power extraction\nalong the slow-wave waveguide. 
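To illustrate the copula decomposition in the synthetic-data abstract above: a minimal sketch that uses a Gaussian copula with empirical marginals instead of the paper's normalising flows; the toy data set is hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    data = np.c_[rng.gamma(2, 1, 1000), rng.standard_normal(1000)]
    data[:, 1] += 0.8 * data[:, 0]  # make the two columns dependent

    # Step 1: map each column to uniform marginals via empirical ranks.
    u = (stats.rankdata(data, axis=0) - 0.5) / len(data)
    # Step 2: model the dependence of the Gaussianized ranks (the copula part).
    z = stats.norm.ppf(u)
    cov = np.cov(z, rowvar=False)
    # Step 3: sample new dependence structure, then invert the marginals.
    z_new = rng.multivariate_normal(np.zeros(2), cov, size=1000)
    u_new = stats.norm.cdf(z_new)
    synth = np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(data.shape[1])]
    )
    print(synth.shape, np.corrcoef(synth.T)[0, 1])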
The distributed power extraction along the\nfolded waveguide is useful to satisfy the necessary conditions to have an EPD\nat the synchronization point. Particle-in-cell (PIC) simulation results show\nthat a BWO operating in the EPD regime is capable of generating output power\nexceeding 3 kW with conversion efficiency exceeding 20% at a frequency of\n88.5 GHz.\n"} {"abstract": " We consider nonlinear impulsive systems on Banach spaces subjected to\ndisturbances and look for dwell-time conditions guaranteeing the ISS\nproperty. In contrast to many existing results, our conditions cover the case\nwhere both continuous and discrete dynamics can be unstable simultaneously.\nLyapunov-type methods are used for this purpose. The effectiveness of our\napproach is illustrated on a rather nontrivial example, which is a feedback\nconnection of an ODE system and a PDE system.\n"} {"abstract": " In this work, we consider the problem of joint calibration and\ndirection-of-arrival (DOA) estimation using sensor arrays. This joint\nestimation problem is referred to as self calibration. Unlike many previous\niterative approaches, we propose geometry-independent convex optimization\nalgorithms for jointly estimating the sensor gain and phase errors as well as\nthe source DOAs. We derive these algorithms based on both the conventional\nelement-space data model and the covariance data model. We focus on sparse and\nregular arrays formed using scalar sensors as well as vector sensors. The\ndeveloped algorithms are obtained by transforming the underlying bilinear\ncalibration model into a linear model, and subsequently by using standard\nconvex relaxation techniques to estimate the unknown parameters. Prior to the\nalgorithm discussion, we also derive identifiability conditions for the\nexistence of a unique solution to the self calibration problem. To demonstrate\nthe effectiveness of the developed techniques, numerical experiments and\ncomparisons to the state-of-the-art methods are provided. Finally, the results\nfrom an experiment that was performed in an anechoic chamber using an acoustic\nvector sensor array are presented to demonstrate the usefulness of the proposed\nself calibration techniques.\n"} {"abstract": " Heterogeneous graph neural networks (HGNNs) as an emerging technique have\nshown superior capacity of dealing with heterogeneous information networks\n(HINs). However, most HGNNs follow a semi-supervised learning manner, which\nnotably limits their wide use in reality since labels are usually scarce in\nreal applications. Recently, contrastive learning, a self-supervised method,\nhas become one of the most exciting learning paradigms and shows great potential\nwhen there are no labels. In this paper, we study the problem of\nself-supervised HGNNs and propose a novel co-contrastive learning mechanism for\nHGNNs, named HeCo. Different from traditional contrastive learning which only\nfocuses on contrasting positive and negative samples, HeCo employs a\ncross-view contrastive mechanism. Specifically, two views of a HIN (network\nschema and meta-path views) are proposed to learn node embeddings, so as to\ncapture both local and high-order structures simultaneously. Then the\ncross-view contrastive learning, as well as a view mask mechanism, is proposed,\nwhich is able to extract the positive and negative embeddings from two views.\nThis enables the two views to collaboratively supervise each other and finally\nlearn high-level node embeddings. 
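The cross-view contrastive step in the HeCo abstract above can be written compactly as an InfoNCE-style loss; this sketch treats the same node's embeddings from the two views as the positive pair and everything else in the batch as negatives, which simplifies HeCo's meta-path-based positive selection. Written with PyTorch; the temperature value is hypothetical.

    import torch
    import torch.nn.functional as F

    def cross_view_nce(z_schema, z_metapath, tau=0.5):
        # Normalize, then score all node pairs across the two views.
        z1 = F.normalize(z_schema, dim=1)
        z2 = F.normalize(z_metapath, dim=1)
        sim = z1 @ z2.t() / tau  # [n, n] scaled cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        # Each node must pick itself in the other view (diagonal positives),
        # symmetrized over the two contrast directions.
        return 0.5 * (F.cross_entropy(sim, labels)
                      + F.cross_entropy(sim.t(), labels))

    loss = cross_view_nce(torch.randn(128, 64), torch.randn(128, 64))
    print(loss.item())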
Moreover, two extensions of HeCo are designed\nto generate harder negative samples with high quality, which further boosts the\nperformance of HeCo. Extensive experiments conducted on a variety of real-world\nnetworks show the superior performance of the proposed methods over the\nstate of the art.\n"} {"abstract": " We study the average behaviour of the Iwasawa invariants for Selmer groups of\nelliptic curves, considered over anticyclotomic $\mathbb{Z}_p$-extensions in\nboth the definite and indefinite settings. The results in this paper lie at the\nintersection of arithmetic statistics and Iwasawa theory.\n"} {"abstract": " We propose a novel numerical method for high-dimensional\nHamilton--Jacobi--Bellman (HJB) type elliptic partial differential equations\n(PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled by\nthe actor-critic framework inspired by reinforcement learning, based on neural\nnetwork parametrization of the value and control functions. Within the\nactor-critic framework, we employ a policy gradient approach to improve the\ncontrol, while for the value function, we derive a variance-reduced\nleast-squares temporal difference method using stochastic calculus. To\nnumerically discretize the stochastic control problem, we employ an adaptive\nstep size scheme to improve the accuracy near the domain boundary. Numerical\nexamples up to $20$ spatial dimensions including the linear quadratic\nregulators, the stochastic Van der Pol oscillators, the diffusive Eikonal\nequations, and fully nonlinear elliptic PDEs derived from a regulator problem\nare presented to validate the effectiveness of our proposed method.\n"} {"abstract": " High-contrast imaging observations are fundamentally limited by the spatially\nand temporally correlated noise source called speckles. Suppression of speckle\nnoise is the key goal of wavefront control and adaptive optics (AO),\ncoronagraphy, and a host of post-processing techniques. Speckles average at a\nrate set by the statistical speckle lifetime, and speckle-limited integration\ntime in long exposures is directly proportional to this lifetime. As progress\ncontinues in post-coronagraph wavefront control, residual atmospheric speckles\nwill become the limiting noise source in high-contrast imaging, so a complete\nunderstanding of their statistical behavior is crucial to optimizing\nhigh-contrast imaging instruments. Here we present a novel power spectral\ndensity (PSD) method for calculating the lifetime, and develop a semi-analytic\nmethod for predicting intensity PSDs behind a coronagraph. Considering a\nfrozen-flow turbulence model, we analyze the residual atmospheric speckle\nlifetimes in a MagAO-X-like AO system as well as 25--39 m giant segmented\nmirror telescope (GSMT) scale systems. We find that standard AO control\nshortens atmospheric speckle lifetime from ~130 ms to ~50 ms, and predictive\ncontrol will further shorten the lifetime to ~20 ms on 6.5 m MagAO-X. We find\nthat speckle lifetimes vary with diameter, wind speed, seeing, and location\nwithin the AO control region. On bright stars lifetimes remain within a rough\nrange of ~20 ms to ~100 ms. 
Due to control system dynamics, there are no simple\nscaling laws which apply across a wide range of system characteristics.\nFinally, we use these results to argue that telemetry-based post-processing\nshould enable ground-based telescopes to achieve the photon-noise limit in\nhigh-contrast imaging.\n"} {"abstract": " The Eliashberg theory of superconductivity accounts for the fundamental\nphysics of conventional electron-phonon superconductors, including the\nretardation of the interaction and the effect of the Coulomb pseudopotential,\nto predict the critical temperature $T_c$ and other properties. McMillan,\nAllen, and Dynes derived approximate closed-form expressions for the critical\ntemperature predicted by this theory, which depends essentially on the\nelectron-phonon spectral function $\alpha^2F(\omega)$, using $\alpha^2F$ for\nlow-$T_c$ superconductors. Here we show that modern machine learning techniques\ncan substantially improve these formulae, accounting for more general shapes of\nthe $\alpha^2F$ function. Using symbolic regression and the sure independence\nscreening and sparsifying operator (SISSO) framework, together with a database\nof artificially generated $\alpha^2F$ functions, ranging from multimodal\nEinstein-like models to calculated spectra of polyhydrides, as well as\nnumerical solutions of the Eliashberg equations, we derive a formula for $T_c$\nthat performs as well as Allen-Dynes for low-$T_c$ superconductors, and\nsubstantially better for higher-$T_c$ ones. The expression identified through\nour data-driven approach corrects the systematic underestimation of $T_c$ while\nreproducing the physical constraints originally outlined by Allen and Dynes.\nThis equation should replace the Allen-Dynes formula for the prediction of\nhigher-temperature superconductors and for the estimation of $\lambda$ from\nexperimental data.\n"} {"abstract": " The HI Ly$\alpha$ (1215.67 Å) emission line dominates the\nfar-UV spectra of M dwarf stars, but strong absorption from neutral hydrogen in\nthe interstellar medium makes observing Ly$\alpha$ challenging even for the\nclosest stars. As part of the Far-Ultraviolet M-dwarf Evolution Survey (FUMES),\nthe Hubble Space Telescope has observed 10 early-to-mid M dwarfs with ages\nranging from $\sim$24 Myr to several Gyrs to evaluate how the incident UV\nradiation evolves through the lifetime of exoplanetary systems. We reconstruct\nthe intrinsic Ly$\alpha$ profiles from STIS G140L and E140M spectra and achieve\nreconstructed fluxes with 1-$\sigma$ uncertainties ranging from 5% to a factor\nof two for the low resolution spectra (G140L) and 3-20% for the high resolution\nspectra (E140M). We observe broad, 500-1000 km s$^{-1}$ wings of the Ly$\alpha$\nline profile, and analyze how the line width depends on stellar properties. We\nfind that stellar effective temperature and surface gravity are the dominant\nfactors influencing the line width with little impact from the star's magnetic\nactivity level, and that the surface flux density of the Ly$\alpha$ wings may\nbe used to estimate the chromospheric electron density. The Ly$\alpha$\nreconstructions on the G140L spectra are the first attempted on\n$\lambda/\Delta\lambda\sim$1000 data. We find that the reconstruction precision\nis not correlated with the SNR of the observation; rather, it depends on the\nintrinsic broadness of the stellar Ly$\alpha$ line. 
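For reference alongside the Allen-Dynes abstract above, the closed-form expression that the machine-learned formula is meant to replace; the $\lambda$, $\mu^*$, and $\omega_{\log}$ values below are illustrative only.

    import numpy as np

    def allen_dynes_tc(lam, mu_star, omega_log):
        # Allen-Dynes form of the McMillan Tc formula (omega_log in kelvin),
        # without the strong-coupling correction factors f1, f2.
        return (omega_log / 1.2) * np.exp(
            -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
        )

    print(allen_dynes_tc(lam=1.0, mu_star=0.10, omega_log=300.0))  # ~21 K

The systematic underestimation mentioned above sets in for large $\lambda$, where this expression saturates while the Eliashberg $T_c$ keeps growing roughly as $\sqrt{\lambda}$.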
Young, low-gravity stars\nhave the broadest lines and therefore provide more information to the fit at\nlow spectral resolution, breaking degeneracies among model parameters.\n"} {"abstract": " We study the current-induced torques in asymmetric magnetic tunnel junctions\ncontaining a conventional ferromagnet and a magnetic Weyl semimetal contact.\nThe Weyl semimetal hosts chiral bulk states and topologically protected Fermi\narc surface states which were found to govern the voltage behavior and\nefficiency of current-induced torques. We report how bulk chirality dictates\nthe sign of the non-equilibrium torques acting on the ferromagnet and discuss\nthe existence of large field-like torques acting on the magnetic Weyl semimetal\nwhich exceed the theoretical maximum of conventional magnetic tunnel\njunctions. The latter are derived from the Fermi arc spin texture and display a\ncounter-intuitive dependence on the Weyl nodes separation. Our results shed\nlight on the new physics of multilayered spintronic devices comprising\nmagnetic Weyl semimetals, which might open doors for new energy-efficient\nspintronic devices.\n"} {"abstract": " This paper contains two finite-sample results about the sign test. First, we\nshow that the sign test is unbiased against two-sided alternatives even when\nobservations are not identically distributed. Second, we provide simple\ntheoretical counterexamples to show that correlation that is unaccounted for\nleads to size distortion and over-rejection. Our results have implications for\npractitioners, who are increasingly employing randomization tests for\ninference.\n"} {"abstract": " A pair of biadjoint functors between two categories produces a collection of\nelements in the centers of these categories, one for each isotopy class of\nnested circles in the plane. If the centers are equipped with a trace map into\nthe ground field, then one assigns an element of that field to a diagram of\nnested circles. We focus on the self-adjoint functor case of this construction\nand study the reverse problem of recovering such a functor and a category given\nvalues associated to diagrams of nested circles.\n"} {"abstract": " Superpixels serve as a powerful preprocessing tool in numerous computer\nvision tasks. By using superpixel representation, the number of image\nprimitives can be largely reduced by orders of magnitude. With the rise of\ndeep learning in recent years, a few works have attempted to feed deeply\nlearned features / graphs into existing classical superpixel techniques.\nHowever, none of them are able to produce superpixels in near real-time, which\nis crucial to the applicability of superpixels in practice. In this work, we\npropose a two-stage graph-based framework for superpixel segmentation. In the\nfirst stage, we introduce an efficient Deep Affinity Learning (DAL) network\nthat learns pairwise pixel affinities by aggregating multi-scale information.\nIn the second stage, we propose a highly efficient superpixel method called\nHierarchical Entropy Rate Segmentation (HERS). Using the learned affinities\nfrom the first stage, HERS builds a hierarchical tree structure that can\nproduce any number of highly adaptive superpixels instantaneously. We\ndemonstrate, through visual and numerical experiments, the effectiveness and\nefficiency of our method compared to various state-of-the-art superpixel\nmethods.\n"} {"abstract": " This paper is an excerpt of an early version of Chapter 2 of the book\n"Validity, Reliability, and Significance. 
Empirical Methods for NLP and Data\nScience", by Stefan Riezler and Michael Hagmann, published in December 2021 by\nMorgan & Claypool. Please see the book's homepage at\nhttps://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1688\nfor a more recent and comprehensive discussion.\n"} {"abstract": " Training deep reinforcement learning agents on environments with multiple\nlevels / scenes from the same task has become essential for many applications\naiming to achieve generalization and domain transfer from simulation to the\nreal world. While such a strategy is helpful with generalization, the use of\nmultiple scenes significantly increases the variance of samples collected for\npolicy gradient computations. Current methods effectively continue to view\nthis collection of scenes as a single Markov decision process (MDP), and thus\nlearn a scene-generic value function V(s). However, we argue that the sample\nvariance for a multi-scene environment is best minimized by treating each scene\nas a distinct MDP, and then learning a joint value function V(s,M) dependent on\nboth state s and MDP M. We further demonstrate that the true joint value\nfunction for a multi-scene environment follows a multi-modal distribution\nwhich is not captured by traditional CNN / LSTM based critic networks. To this\nend, we propose a dynamic value estimation (DVE) technique, which approximates\nthe true joint value function through a sparse attention mechanism over\nmultiple value function hypotheses / modes. The resulting agent not only shows\nsignificant improvements in the final reward score across a range of OpenAI\nProcGen environments, but also exhibits enhanced navigation efficiency and\nprovides an implicit mechanism for unsupervised state-space skill\ndecomposition.\n"} {"abstract": " Deep learning as a service (DLaaS) has been intensively studied to facilitate\nthe wider deployment of the emerging deep learning applications. However, DLaaS\nmay compromise the privacy of both clients and cloud servers. Although some\nprivacy preserving deep neural network (DNN) based inference techniques have\nbeen proposed by composing cryptographic primitives, the challenges of\ncomputational efficiency have not been well addressed due to the complexity of\nDNN models and expensive cryptographic primitives. In this paper, we propose a\nnovel privacy preserving cloud-based DNN inference framework (namely, "PROUD"),\nwhich greatly improves the computational efficiency. Finally, we conduct\nextensive experiments on two commonly-used datasets to validate both the\neffectiveness and efficiency of PROUD, which also outperforms the\nstate-of-the-art techniques.\n"} {"abstract": " We derive the interaction of fermions with a dynamical space-time based on\nthe postulate that the description of physics should be independent of the\nreference frame, which means to require the form-invariance of the fermion\naction under diffeomorphisms. The derivation is worked out in the Hamiltonian\nformalism as a canonical transformation along the line of non-Abelian gauge\ntheories. This yields a closed set of field equations for fermions,\nunambiguously fixing their coupling to dynamical space-time. We encounter, in\naddition to the well-known minimal coupling, anomalous couplings to curvature\nand torsion. 
In torsion-free geometries this anomalous interaction reduces to a Pauli-type coupling with the curvature scalar via a spontaneously emerging new coupling constant with the dimension of mass or, equivalently, inverse length. A consistent model Hamiltonian for the free gravitational field and the impact of its functional form on the structure of the dynamical space-time geometry are discussed.\n"} {"abstract": " This paper introduces a shoebox room simulator able to systematically generate synthetic datasets of binaural room impulse responses (BRIRs) given an arbitrary set of head-related transfer functions (HRTFs). The evaluation of machine hearing algorithms frequently requires BRIR datasets in order to simulate the acoustics of any environment. However, currently available solutions typically consider only HRTFs measured on dummy heads, which poorly characterize the high variability in spatial sound perception. Our solution makes it possible to integrate a room impulse response (RIR) simulator with different HRTF sets represented in the Spatially Oriented Format for Acoustics (SOFA). The source code and the compiled binaries for different operating systems allow both advanced and non-expert users to benefit from our toolbox; see https://github.com/spatialaudiotools/sofamyroom/ .\n"} {"abstract": " Backwards Stochastic Differential Equations (BSDEs) have been widely employed in various areas of applied and financial mathematics. In particular, BSDEs appear extensively in the pricing and hedging of financial derivatives, stochastic optimal control problems and optimal stopping problems. Most BSDEs cannot be solved analytically and thus numerical methods must be applied in order to approximate their solutions. Many numerical methods have been proposed over the past few decades, for the most part in a complex and scattered manner, with each requiring a variety of different and similar assumptions and conditions. The aim of the present paper is thus to systematically survey various numerical methods for BSDEs and, in particular, to compare and categorise them. To this end, we focus on the core features of each method: the main assumptions, the numerical algorithm itself, key convergence properties, and advantages and disadvantages, in order to provide an exhaustive, up-to-date coverage of numerical methods for BSDEs, with insightful summaries of each and a useful comparison and categorisation.\n"} {"abstract": " With the continuing rapid development of artificial microrobots and active particles, questions of microswimmer guidance and control are becoming ever more relevant and prevalent. In both the applications and theoretical study of such microscale swimmers, control is often mediated by an engineered property of the swimmer, such as in the case of magnetically propelled microrobots. In this work, we will consider a modality of control that is applicable in more generality, effecting guidance via modulation of a background fluid flow. Here, considering a model swimmer in a commonplace flow and simple geometry, we analyse and subsequently establish the efficacy of flow-mediated microswimmer positional control, later touching upon a question of optimal control. Moving beyond idealised notions of controllability and towards considerations of practical utility, we then evaluate the robustness of this control modality to sources of variation that may be present in applications, examining in particular the effects of measurement inaccuracy and rotational noise. 
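To make the kind of robustness test just described concrete, here is a minimal toy simulation; this is a sketch under assumed dynamics, not the paper's actual model. A self-propelled swimmer crosses a channel while a bang-bang controller modulates the background flow strength, using noisy position measurements and subject to rotational noise; the shear-reorientation rate and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T = 1e-2, 200.0
v, D_rot, meas_sig = 1.0, 0.1, 0.05   # swim speed, rotational diffusion, sensor noise
y, theta, y_target = -0.5, 0.0, 0.3   # cross-channel position, heading, setpoint

for _ in range(int(T / dt)):
    y_meas = y + meas_sig * rng.normal()       # imperfect position measurement
    u = -1.0 if y_meas < y_target else 1.0     # bang-bang flow-strength modulation
    # assumed model: shear of the modulated flow reorients the swimmer at rate -u/2,
    # while rotational diffusion perturbs the heading (Euler-Maruyama step)
    theta += -0.5 * u * dt + np.sqrt(2.0 * D_rot * dt) * rng.normal()
    y += v * np.sin(theta) * dt                # cross-stream self-propulsion

print(f"final offset from target: {y - y_target:+.3f}")
```

Increasing meas_sig or D_rot degrades the achievable positional accuracy, which is the qualitative trade-off such a robustness analysis quantifies.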
This exploration gives rise to a number of cautionary observations, which, overall, demonstrate the need for the careful assessment of both policy and behavioural robustness when designing control schemes for use in practice.\n"} {"abstract": " We theoretically investigate the fluorescence intensity correlation (FIC) of Ar clusters and Mo-doped iron oxide nanoparticles subjected to intense, femtosecond and sub-femtosecond XFEL pulses for high-resolution and elemental contrast imaging. We present the FIC of {\\Ka} and {\\Kah} emission in Ar clusters, discuss the impact of sample damage on retrieving high-resolution structural information, and compare the obtained structural information with that from the coherent diffractive imaging (CDI) approach. We found that, while sub-femtosecond pulses will substantially benefit the CDI approach, few-femtosecond pulses may be sufficient for achieving high-resolution information with FIC. Furthermore, we show that the fluorescence intensity correlation computed from the fluorescence of Mo atoms in Mo-doped iron oxide nanoparticles can be used to image dopant distributions.\n"} {"abstract": " Today, almost all banks have adopted ICT as a means of enhancing their banking service quality. These banks provide ICT-based electronic services, also called electronic banking, internet banking or online banking, to their customers. Despite the increasing adoption of electronic banking and its relevance to end-user satisfaction, few investigations have been conducted on the factors that enhance end users' satisfaction. In this research, an empirical analysis has been conducted of the factors that influence electronic banking users' satisfaction and of the relationship between these factors and customer satisfaction. The study will help the banking industry improve the level of customer satisfaction and strengthen the bond between a bank and its customers.\n"} {"abstract": " Monitoring the state of contact is essential for robotic devices, especially grippers that implement gecko-inspired adhesives, where intimate contact is crucial for a firm attachment. However, due to the lack of deformable sensors, few have demonstrated tactile sensing for gecko grippers. We present Viko, an adaptive gecko gripper that utilizes vision-based tactile sensors to monitor contact state. The sensor provides high-resolution real-time measurements of contact area and shear force. Moreover, the sensor is adaptive, low-cost, and compact. We integrated gecko-inspired adhesives into the sensor surface without impeding its adaptiveness and performance. Using a robotic arm, we evaluate the performance of the gripper through a series of grasping tests. The gripper has a maximum payload of 8 N even at a low fingertip pitch angle of 30 degrees. We also showcase the gripper's ability to adjust fingertip pose for better contact using sensor feedback. 
Further, everyday object picking is presented as a demonstration of the gripper's adaptiveness.\n"} {"abstract": " A starlike univalent function $f$ is characterized by the function $zf'(z)/f(z)$; several subclasses of these functions were studied in the past by restricting the function $zf'(z)/f(z)$ to take values in a region $\Omega$ on the right-half plane, or, equivalently, by requiring the function $zf'(z)/f(z)$ to be subordinate to the corresponding mapping of the unit disk $\mathbb{D}$ to the region $\Omega$. The mappings $w_1(z):=z+\sqrt{1+z^2}, w_2(z):=\sqrt{1+z}$ and $w_3(z):=e^z$ map the unit disk $\mathbb{D}$ to various regions in the right half plane. For normalized analytic functions $f$ satisfying the conditions that $f(z)/g(z), g(z)/zp(z)$ and $p(z)$ are subordinate to the functions $w_i, i=1,2,3$ in various ways for some analytic functions $g(z)$ and $p(z)$, we determine the sharp radius for them to belong to various subclasses of starlike functions.\n"} {"abstract": " We report on the discovery of FRB 20200120E, a repeating fast radio burst (FRB) with low dispersion measure (DM), detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project. The source DM of 87.82 pc cm$^{-3}$ is the lowest recorded from an FRB to date, yet is significantly higher than the maximum expected from the Milky Way interstellar medium in this direction (~ 50 pc cm$^{-3}$). We have detected three bursts and one candidate burst from the source over the period 2020 January-November. The baseband voltage data for the event on 2020 January 20 enabled a sky localization of the source to within $\simeq$ 14 sq. arcmin (90% confidence). The FRB localization is close to M81, a spiral galaxy at a distance of 3.6 Mpc. The FRB appears on the outskirts of M81 (projected offset $\sim$ 20 kpc) but well inside its extended HI and thick disks. We empirically estimate the probability of chance coincidence with M81 to be $< 10^{-2}$. However, we cannot reject a Milky Way halo origin for the FRB. Within the FRB localization region, we find several interesting cataloged M81 sources and a radio point source detected in the Very Large Array Sky Survey (VLASS). We searched for prompt X-ray counterparts in Swift/BAT and Fermi/GBM data, and for two of the FRB 20200120E bursts, we rule out coincident SGR 1806$-$20-like X-ray bursts. Due to the proximity of FRB 20200120E, future follow-up for prompt multi-wavelength counterparts and sub-arcsecond localization could constrain proposed FRB models.\n"} {"abstract": " This paper investigates transmission power control in an over-the-air federated edge learning (Air-FEEL) system. Different from conventional power control designs (e.g., to minimize the individual mean squared error (MSE) of the over-the-air aggregation at each round), we consider a new power control design aiming at directly maximizing the convergence speed. Towards this end, we first analyze the convergence behavior of Air-FEEL (in terms of the optimality gap) subject to aggregation errors at different communication rounds. It is revealed that if the aggregation estimates are unbiased, then the training algorithm converges exactly to the optimal point under mild conditions, while if they are biased, then the algorithm converges with an error floor determined by the accumulated estimate bias over communication rounds. 
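The unbiased-versus-biased dichotomy stated here can be illustrated in a few lines of NumPy; this is a toy sketch of gradient descent on a quadratic objective, not the Air-FEEL setup itself, and the noise and bias magnitudes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, rounds, lr = 10, 2000, 0.05
w_star = np.zeros(d)                 # minimizer of f(w) = 0.5 * ||w||^2

def run(bias):
    w = rng.normal(size=d)
    for _ in range(rounds):
        noise = 0.1 * rng.normal(size=d)   # zero-mean aggregation error
        w -= lr * (w + noise + bias)       # gradient of f at w is w itself
    return np.linalg.norm(w - w_star)

print("gap, unbiased aggregation:", run(bias=np.zeros(d)))
print("gap, biased aggregation:  ", run(bias=0.05 * np.ones(d)))
```

The biased run stalls at an error floor set by the bias, while the unbiased run keeps fluctuating near the optimum, mirroring the stated convergence behavior.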
Next, building upon the convergence results, we optimize the power control to directly minimize the derived optimality gaps under both biased and unbiased aggregations, subject to a set of average and maximum power constraints at individual edge devices. We transform both problems into convex forms and obtain their structured optimal solutions, both appearing in a form of regularized channel inversion, by using the Lagrangian duality method. Finally, numerical results show that the proposed power control policies achieve significantly faster convergence for Air-FEEL, as compared with benchmark policies with fixed power transmission or conventional MSE minimization.\n"} {"abstract": " The social media platform is a convenient medium to express personal thoughts and share useful information. It is fast, concise, and has the ability to reach millions. It is an effective place to archive thoughts, share artistic content, receive feedback, promote products, etc. Despite having numerous advantages, these platforms have given a boost to hostile posts. Hate speech and derogatory remarks are being posted for personal satisfaction or political gain. Hostile posts can have a bullying effect, rendering the entire platform experience hostile; therefore, the detection of hostile posts is important to maintain social media hygiene. The problem is more pronounced for languages like Hindi, which are low in resources. In this work, we present approaches for hostile text detection in the Hindi language. The proposed approaches are evaluated on the Constraint@AAAI 2021 Hindi hostility detection dataset. The dataset consists of hostile and non-hostile texts collected from social media platforms. The hostile posts are further segregated into overlapping classes of fake, offensive, hate, and defamation. We evaluate a host of deep learning approaches based on CNN, LSTM, and BERT for this multi-label classification problem. The pre-trained Hindi fastText word embeddings by IndicNLP and Facebook are used in conjunction with the CNN and LSTM models. Two variations of pre-trained multilingual transformer language models, mBERT and IndicBERT, are used. We show that the BERT-based models perform best; moreover, the CNN and LSTM models also perform competitively with the BERT-based models.\n"} {"abstract": " In this paper, we address zero-shot learning (ZSL), the problem of recognizing categories for which no labeled visual data are available during training. We focus on the transductive setting, in which unlabelled visual data from unseen classes are available. State-of-the-art paradigms in ZSL typically exploit generative adversarial networks to synthesize visual features from semantic attributes. We posit that the main limitation of these approaches is to adopt a single model to face two problems: 1) generating realistic visual features, and 2) translating semantic attributes into visual cues. Differently, we propose to decouple such tasks, solving them separately. In particular, we train an unconditional generator to solely capture the complexity of the distribution of visual data and subsequently pair it with a conditional generator devoted to enriching the prior knowledge of the data distribution with the semantic content of the class embeddings. 
We present a detailed ablation study to dissect the effect of our proposed decoupling approach, while demonstrating its superiority over the related state-of-the-art.\n"} {"abstract": " We construct closed immersions from initial degenerations of the spinor variety $\mathbb{S}_n$ to inverse limits of strata associated to even $\Delta$-matroids. As an application, we prove that these initial degenerations are smooth and irreducible for $n\leq 5$ and identify the log canonical model of the Chow quotient of $\mathbb{S}_5$ by the action of the diagonal torus of $\operatorname{GL}(5)$.\n"} {"abstract": " We provide an abstract characterization for the Cuntz semigroup of unital commutative AI-algebras, as well as a characterization for abstract Cuntz semigroups of the form $\text{Lsc} (X,\overline{\mathbb{N}})$ for some $T_1$-space $X$. In our investigations, we also uncover new properties that the Cuntz semigroup of every AI-algebra satisfies.\n"} {"abstract": " Understanding the effects of interventions, such as restrictions on community and large group gatherings, is critical to controlling the spread of COVID-19. Susceptible-Infectious-Recovered (SIR) models are traditionally used to forecast infection rates but do not provide insights into the causal effects of interventions. We propose a spatiotemporal model that estimates the causal effect of changes in community mobility (intervention) on infection rates. Using an approximation to the SIR model and incorporating spatiotemporal dependence, the proposed model estimates direct and indirect (spillover) effects of intervention. Under an interference and treatment ignorability assumption, this model is able to estimate causal intervention effects, and additionally allows for spatial interference between locations. Reductions in community mobility were measured by cell phone movement data. The results suggest that the reductions in mobility decrease Coronavirus cases 4 to 7 weeks after the intervention.\n"} {"abstract": " Governments, healthcare providers, and private organizations on a global scale have been using digital tracking to keep COVID-19 outbreaks under control. Although this method could limit pandemic contagion, it raises significant concerns about user privacy. Known as \"Contact Tracing Apps\", these mobile applications are facilitated by Cellphone Service Providers (CSPs), who enable spatial and temporal real-time user tracking. Accordingly, it might be speculated that CSPs collect information in violation of privacy regulations such as the GDPR, CCPA, and others. To further clarify, we conducted an in-depth analysis comparing privacy legislation with the real-world practices adopted by CSPs. We found that three of the regulations analyzed (GDPR, COPPA, and CCPA) define mobile location data as private information, and two (T-Mobile US, Boost Mobile) of the five CSPs that were analyzed did not comply with the COPPA regulation. Our results are crucial in view of the threat these violations represent, especially when it comes to children's data. As such, proper security and privacy auditing is necessary to curtail such violations. 
We conclude by providing actionable recommendations to address these concerns and enable privacy-preserving monitoring of the COVID-19 spread through contact tracing applications.\n"} {"abstract": " Nonlinear surface-plasmon polaritons~(NSPPs) in nanophotonic waveguides are excited with dissimilar temporal properties due to input field modifications and material characteristics, but they possess similar nonlinear spectral evolution. In this work, we uncover the origin of this similarity and establish that the spectral dynamics is an inherent property of the system that depends on the synthetic dimension and is beyond waveguide geometrical dimensionality. To this aim, we design an ultra-low-loss nonlinear plasmonic waveguide to establish the invariance of the surface plasmonic frequency combs~(FCs) and phase singularities for plasmonic peregrine waves and the Akhmediev breather. By finely tuning the nonlinear coefficient of the interaction interface, we uncover the conservation conditions of this plasmonic system and employ the mean-value evolution of the quantum NSPP field commensurate with the Schr\\\"odinger equation to evaluate the spectral dynamics of the plasmonic FCs~(PFCs). Through providing suppressed interface losses and modified nonlinearity as dual requirements for conservative conditions, we propose exciting PFCs as equally spaced invariant quantities of this plasmonic scheme and prove that the spectral dynamics of the NSPPs within the interaction interface yields the formation of a plasmonic analog of the synthetic photonic lattice, which we term the \textit{synthetic plasmonic lattice}~(SPL).\n"} {"abstract": " We continue our previous study of cylindrically symmetric, static electrovacuum spacetimes generated by a magnetic field, optionally involving the cosmological constant, and investigate several classes of exact solutions. These spacetimes are due to magnetic fields that are perpendicular to the axis of symmetry.\n"} {"abstract": " Factorial designs are widely used due to their ability to accommodate multiple factors simultaneously. Factor-based regression with main effects and some interactions is the dominant strategy for downstream data analysis, delivering point estimators and standard errors via one single regression. Justification of these convenient estimators from the design-based perspective requires quantifying their sampling properties under the assignment mechanism, conditioning on the potential outcomes. To this end, we derive the sampling properties of the factor-based regression estimators from both saturated and unsaturated models, and demonstrate the appropriateness of the robust standard errors for Wald-type inference. We then quantify the bias-variance trade-off between the saturated and unsaturated models from the design-based perspective, and establish a novel design-based Gauss--Markov theorem that ensures the latter's gain in efficiency when the omitted nuisance effects indeed do not exist. As a byproduct of the process, we unify the definitions of factorial effects in various literatures and propose a location-shift strategy for their direct estimation from factor-based regressions. 
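As a concrete companion to the factor-based regression with robust standard errors discussed above, consider this minimal sketch; it is an illustration rather than the authors' code, and the 2x2 design, effect sizes, and HC2 covariance choice are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
# Hypothetical 2x2 factorial assignment with a small interaction effect.
df = pd.DataFrame({"A": rng.integers(0, 2, n), "B": rng.integers(0, 2, n)})
df["y"] = 1.0 + 0.5 * df.A - 0.3 * df.B + 0.2 * df.A * df.B + rng.normal(size=n)

# Saturated factor-based regression; robust (HC2) standard errors support
# Wald-type inference for the main effects and the interaction.
fit = smf.ols("y ~ A * B", data=df).fit(cov_type="HC2")
print(fit.summary().tables[1])
```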
Our theory and simulation suggest using factor-based inference for general factorial effects, preferably with parsimonious specifications in accordance with prior knowledge of zero nuisance effects.\n"} {"abstract": " One of the most ubiquitous and technologically important phenomena in nature is the nucleation of homogeneous flowing systems. The microscopic effects of shear on a nucleating system are still imperfectly understood, although in recent years a consistent picture has emerged. The opposing effects of shear can be split into two major contributions for simple liquids: an increase in the energetic cost of nucleation, and an enhancement of the kinetics. In this perspective, we describe the latest computational and theoretical techniques which have been developed over the past two decades. We collate and unify the overarching influences of shear, temperature, and supersaturation on the process of homogeneous nucleation. Experimental techniques and capabilities are discussed against the backdrop of results from simulations and theory. Although we primarily focus on simple liquids, we also touch upon the sheared nucleation of more complex systems, including glasses and polymer melts. We speculate on the promising directions and possible advances that could come to fruition in the future.\n"} {"abstract": " In this paper, we develop general techniques for computing the G-index of a closed, spin, hyperbolic 2- or 4-manifold, and apply these techniques to compute the G-index of the fully symmetric spin structure of the Davis hyperbolic 4-manifold.\n"} {"abstract": " In binary classification, kernel-free linear or quadratic support vector machines are proposed to avoid dealing with difficulties such as finding appropriate kernel functions or tuning their hyper-parameters. Furthermore, Universum data points, which do not belong to any class, can be exploited to embed prior knowledge into the corresponding models so that the generalization performance is improved. In this paper, we design novel kernel-free Universum quadratic surface support vector machine models. Further, we propose the L1 norm regularized version, which is beneficial for detecting potential sparsity patterns in the Hessian of the quadratic surface and for reducing to the standard linear models if the data points are (almost) linearly separable. The proposed models are convex, so that standard numerical solvers can be utilized for solving them. Nonetheless, we formulate a least squares version of the L1 norm regularized model and next design an effective tailored algorithm that only requires solving one linear system. Several theoretical properties of these models are then reported/proved as well. We finally conduct numerical experiments on both artificial and public benchmark data sets to demonstrate the feasibility and effectiveness of the proposed models.\n"} {"abstract": " To operate efficiently across a wide range of workloads with varying power requirements, a modern processor applies different current management mechanisms, which briefly throttle instruction execution while they adjust voltage and frequency to accommodate power-hungry instructions (PHIs) in the instruction stream. 
Doing so 1) reduces the power consumption of non-PHI instructions in typical workloads and 2) optimizes system voltage regulators' cost and area for the common use case while limiting current consumption when executing PHIs. However, these mechanisms may compromise a system's confidentiality guarantees. In particular, we observe that multilevel side-effects of throttling mechanisms, due to PHI-related current management mechanisms, can be detected by two different software contexts (i.e., sender and receiver) running on 1) the same hardware thread, 2) co-located Simultaneous Multi-Threading (SMT) threads, and 3) different physical cores. Based on these new observations on current management mechanisms, we develop a new set of covert channels, IChannels, and demonstrate them on real modern Intel processors (which span more than 70% of the entire client and server processor market). Our analysis shows that IChannels provides more than 24x the channel capacity of state-of-the-art power management covert channels. We propose practical and effective mitigations for each covert channel in IChannels by leveraging the insights we gain through a rigorous characterization of real systems.\n"} {"abstract": " We develop the integration theory of two-parameter controlled paths $Y$, allowing us to define integrals of the form \begin{equation} \int_{[s,t] \times [u,v]} Y_{r,r'} \;d(X_{r}, X_{r'}) \end{equation} where $X$ is the geometric $p$-rough path that controls $Y$. This extends to arbitrary regularity the definition presented for $2\leq p<3$ in the recent paper of Hairer and Gerasimovi\v{c}s, where it is used in the proof of a version of H\\\"{o}rmander's theorem for a class of SPDEs. We extend the Fubini-type theorem of the same paper by showing that this two-parameter integral coincides with the two iterated one-parameter integrals \[ \int_{[s,t] \times [u,v]} Y_{r,r'} \;d(X_{r}, X_{r'}) = \int_{s}^{t} \int_{u}^{v} Y_{r,r'} \;dX_{r'} \;dX_{r} = \int_{u}^{v} \int_{s}^{t} Y_{r,r'} \;dX_{r} \;dX_{r'}. \] A priori these three integrals have distinct definitions, and so this parallels the classical Fubini theorem for product measures. By extending the two-parameter Young-Towghi inequality in this context, we derive a maximal inequality for the discrete integrals approximating the two-parameter integral. We also extend the analysis to consider integrals of the form \begin{equation*} \int_{[s,t] \times [u,v]} Y_{r,r'} \; d(X_{r}, \tilde{X}_{r'}) \end{equation*} for possibly different rough paths $X$ and $\tilde{X}$, and obtain the corresponding Fubini-type theorem. We prove continuity estimates for these integrals in the appropriate rough path topologies. As an application, we consider the signature kernel, which has recently emerged as a useful tool in data science, as an example of a two-parameter controlled rough path which also solves a two-parameter rough integral equation.\n"} {"abstract": " We revisit the calculation of the vacuum energy density in compact spacetimes. By explicitly computing the effective action through the heat kernel method, we compute the vacuum energy density for the general case of $k$ compact spatial dimensions in a $(p+k)$-dimensional Minkowski spacetime. 
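For orientation, the heat-kernel route runs schematically as follows; these are standard textbook identities quoted as background, with sign conventions and regularization left implicit, and not expressions taken from the paper. For a free scalar of mass $m$ with $p$ non-compact Euclidean dimensions and one compact dimension of circumference $L$, \[ \Gamma \;=\; \tfrac{1}{2}\ln\det\left(-\Box+m^{2}\right) \;=\; -\tfrac{1}{2}\int_{0}^{\infty}\frac{dt}{t}\,\operatorname{Tr} e^{-t\left(-\Box+m^{2}\right)}, \qquad \operatorname{Tr} e^{-t\left(-\Box+m^{2}\right)} \;=\; \frac{V_{p}}{(4\pi t)^{p/2}}\sum_{n\in\mathbb{Z}}e^{-t\left[(2\pi n/L)^{2}+m^{2}\right]}, \] and the vacuum energy density follows from the effective action per unit volume.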
Additionally, we use this formalism to calculate the Casimir force on a piston placed in such spacetimes, and note the deviations from previously reported results in the literature.\n"} {"abstract": " We study the problem of simulating a two-user multiple access channel over a multiple access network of noiseless links. Two encoders observe independent and identically distributed (i.i.d.) copies of a source random variable each, while a decoder observes i.i.d. copies of a side-information random variable. There are rate-limited noiseless communication links and independent pairwise shared randomness resources between each encoder and the decoder. The decoder has to output approximately i.i.d. copies of another random variable jointly distributed with the two sources and the side information. We are interested in the rate tuples which permit this simulation. This setting can be thought of as a multi-terminal generalization of the point-to-point channel simulation problem studied by Bennett et al. (2002) and Cuff (2013). General inner and outer bounds on the rate region are derived. For the specific case where the sources at the encoders are conditionally independent given the side-information at the decoder, we completely characterize the rate region. Our bounds recover the existing results on function computation over such multi-terminal networks. We then show through an example that an additional independent source of shared randomness between the encoders strictly improves the communication rate requirements, even if the additional randomness is not available to the decoder. Furthermore, we provide inner and outer bounds for this more general setting with independent pairwise shared randomness resources between all three possible node pairs.\n"} {"abstract": " A cluster algebra is a commutative algebra whose structure is determined by a skew-symmetrizable matrix or a quiver. When a skew-symmetrizable matrix is invariant under an action of a finite group and this action is admissible, the folded cluster algebra is obtained from the original one. Any cluster algebra of non-simply-laced affine type can be obtained by folding a cluster algebra of simply-laced affine type with a specific $G$-action. In this paper, we study the combinatorial properties of quivers in cluster algebras of affine type. We prove that for any quiver of simply-laced affine type, $G$-invariance and $G$-admissibility are equivalent. This leads us to prove that the set of $G$-invariant seeds forms the folded cluster pattern.\n"} {"abstract": " We consider the decay $B\to\ell\ell\ell^{\prime}\nu$, taking into account the leading $1/m_b$ and $q^2$ corrections calculated in the QCD factorization framework as well as the soft corrections calculated employing dispersion relations and quark-hadron duality. We extend the existing results for the radiative decay $B\to\gamma\ell\nu$ to the case of non-zero (but small) $q^2$, the invariant mass squared of the dilepton pair $\ell^+\ell^-$. This restricts us to the case $\ell\neq\ell'$, as otherwise the same-sign $\ell$ and $\ell'$ cannot be distinguished. 
We further study the sensitivity of the results to the leading moment of the $B$-meson distribution amplitude and discuss the potential to extract this quantity at LHCb and the Belle II experiment.\n"} {"abstract": " We develop a resource theory of symmetric distinguishability, the fundamental objects of which are elementary quantum information sources, i.e., sources that emit one of two possible quantum states with given prior probabilities. Such a source can be represented by a classical-quantum state of a composite system $XA$, corresponding to an ensemble of two quantum states, with $X$ being classical and $A$ being quantum. We study the resource theory for two different classes of free operations: $(i)$ ${\rm{CPTP}}_A$, which consists of quantum channels acting only on $A$, and $(ii)$ conditional doubly stochastic (CDS) maps acting on $XA$. We introduce the notion of symmetric distinguishability of an elementary source and prove that it is a monotone under both these classes of free operations. We study the tasks of distillation and dilution of symmetric distinguishability, both in the one-shot and asymptotic regimes. We prove that in the asymptotic regime, the optimal rate of converting one elementary source to another is equal to the ratio of their quantum Chernoff divergences, under both these classes of free operations. This imparts a new operational interpretation to the quantum Chernoff divergence. We also obtain interesting operational interpretations of the Thompson metric, in the context of the dilution of symmetric distinguishability.\n"} {"abstract": " In this study, we investigated the employment status of recent University of Ottawa physics MSc and PhD graduates, finding that 94% of graduates are either employed or pursuing further physics education one year post-graduation. Our database was populated from the public online repository of MSc and PhD theses submitted between the academic years of 2011 and 2019, with employment information collected in 2020 from the professional social media platform LinkedIn. Our results highlight that graduates primarily find employment quickly and in their field of study, with most graduates employed in either academia or physics-related industries. We also found that a significant portion of employed graduates, 20%, find employment in non-traditional physics careers, such as business management and healthcare. Graduates with careers in academia tend to have lower online connectivity compared to graduates with careers in industry or non-traditional fields, suggesting a greater importance of online networking for students interested in non-academic careers.\n"} {"abstract": " Bound systems of $\Xi^-$--$^{14}_{}{\rm N}$ are studied via $\Xi^-$ capture at rest followed by the emission of twin single-$\Lambda$ hypernuclei in the emulsion detectors. Two events forming extremely deep $\Xi^-$ bound states were obtained by a hybrid-method analysis in the E07 experiment at J-PARC and by reanalysis of the E373 experiment at KEK-PS. The decay mode of one event was assigned as $\Xi^-+^{14}_{}{\rm N}\to^{5}_{\Lambda}{\rm He}$+$^{5}_{\Lambda}{\rm He}$+$^{4}_{}{\rm He}$+n. Since there are no excited states of the daughter particles, the binding energy of the $\Xi^-$ hyperon, $B_{\Xi^-}$, in the $^{14}_{}{\rm N}$ nucleus was uniquely determined to be 6.27 $\pm$ 0.27 MeV. 
Another $\\Xi^-$--$^{14}_{}{\\rm N}$ system via the decay\n$^{9}_{\\Lambda}{\\rm Be}$ + $^{5}_{\\Lambda}{\\rm He}$ + n brings a $B_{\\Xi^-}$\nvalue, 8.00 $\\pm$ 0.77 MeV or 4.96 $\\pm$ 0.77 MeV, where the two possible\nvalues of $B_{\\Xi^-}$ correspond to the ground and the excited states of the\ndaughter $^{9}_{\\Lambda}{\\rm Be}$ nucleus, respectively. Because the\n$B_{\\Xi^-}$ values are larger than those of the previously reported events\n(KISO and IBUKI), which are both interpreted as the nuclear $1p$ state of the\n$\\Xi^-$--$^{14}_{}{\\rm N}$ system, these new events give the first indication\nof the nuclear $1s$ state of the $\\Xi$ hypernucleus, $^{15}_{\\Xi}{\\rm C}$.\n"} {"abstract": " Recent research on disorder effects in topological phases in quasicrystalline\nsystems has received much attention. In this work, by numerically computing the\n(spin) Bott index and the thermal conductance, we reveal the effects of\ndisorder on a class D chiral topological superconductor and a class DIII\ntime-reversal-invariant topological superconductor in a two-dimensional\nAmmann-Beenker tiling quasicrystalline lattice. We demonstrate that both the\ntopologically protected chiral and helical Majorana edge modes are robust\nagainst weak disorder in the quasicrystalline lattice. More fascinating is the\ndiscovery of disorder-induced topologically nontrivial phases exhibiting chiral\nand helical Majorana edge modes in class D and DIII topological superconductor\nsystems, respectively. Our findings open the door for the research on\ndisorder-induced Majorana edge modes in quasicrystalline systems.\n"} {"abstract": " Inverse patchy colloids are nano- to micro-scale particles with a surface\ndivided into differently charged regions. This class of colloids combines\ndirectional, selective bonding with a relatively simple particle design: owing\nto the competitive interplay between the orientation-dependent attraction and\nrepulsion -- induced by the interactions between like/oppositely charged areas\n-- experimentally accessible surface patterns are complex enough to favor the\nstabilization of specific structures of interest. Most important, the behavior\nof heterogeneously charged units can be ideally controlled by means of external\nparameters, such as the pH and the salt concentration. We present a concise\nreview about this class of systems, spanning the range from the synthesis of\nmodel inverse patchy particles to their self-assembly, covering their\ncoarse-grained modeling and the related numerical/analytical treatments.\n"} {"abstract": " With the development of earth observation technology, massive amounts of\nremote sensing (RS) images are acquired. To find useful information from these\nimages, cross-modal RS image-voice retrieval provides a new insight. This paper\naims to study the task of RS image-voice retrieval so as to search effective\ninformation from massive amounts of RS data. Existing methods for RS\nimage-voice retrieval rely primarily on the pairwise relationship to narrow the\nheterogeneous semantic gap between images and voices. However, apart from the\npairwise relationship included in the datasets, the intra-modality and\nnon-paired inter-modality relationships should also be taken into account\nsimultaneously, since the semantic consistency among non-paired representations\nplays an important role in the RS image-voice retrieval task. Inspired by this,\na semantics-consistent representation learning (SCRL) method is proposed for RS\nimage-voice retrieval. 
The main novelty is that the proposed method takes the pairwise, intra-modality, and non-paired inter-modality relationships into account simultaneously, thereby improving the semantic consistency of the learned representations for RS image-voice retrieval. The proposed SCRL method consists of two main steps: 1) semantics encoding and 2) semantics-consistent representation learning. Firstly, an image encoding network is adopted to extract high-level image features with a transfer learning strategy, and a voice encoding network with dilated convolutions is devised to obtain high-level voice features. Secondly, a consistent representation space is constructed by modeling the three kinds of relationships to narrow the heterogeneous semantic gap and learn semantics-consistent representations across the two modalities. Extensive experimental results on three challenging RS image-voice datasets show the effectiveness of the proposed method.\n"} {"abstract": " We have analysed the Ca-K images obtained at the Kodaikanal Observatory as a function of latitude and time for the period 1913 - 2004, covering Solar Cycles 15 to 23. We have classified the chromospheric activity into plage, Enhanced Network (EN), Active Network (AN), and Quiet Network (QN) areas to differentiate between large strong active regions and small weak active regions. The strong active regions represent the toroidal component and the weak active regions the poloidal component of the magnetic field. We find that plage areas, mostly within the 50 deg latitude belt, vary with the about 11-year solar cycle. We also find that the weak activity represented by EN, AN and QN varies with an approximately 11-year period, with significant amplitude up to about 50 deg latitude in both hemispheres. The amplitude of variation is minimum around 50 deg latitude and increases again by a small amount in the polar regions. In addition, the plots of plage, EN, AN and QN areas as a function of time indicate that the maxima of activity at different latitudes occur at different epochs. To determine the phase difference between the different latitude belts, we have computed the cross-correlation coefficients of the other latitude belts with the 35 deg latitude belt. We find that activity shifts from mid-latitude belts towards equatorial belts at a fast speed at the beginning of a solar cycle and at a slower speed as the cycle progresses. The speed of the shift varies between approximately 19 and 3 m/s considering all the data for the observed period. This speed can be linked with the speed of the meridional flows that are believed to occur between the convection zone and the surface of the Sun.\n"} {"abstract": " This study discusses the Review Bomb, a phenomenon consisting of a massive attack by groups of Internet users on a website that displays users' reviews of products. It has gained attention especially on websites that aggregate numerical ratings. Although this phenomenon can be considered an example of online misinformation, it differs from conventional review spam, which happens over larger time spans. In particular, the Bomb occurs suddenly and for a short time, because in this way it leverages the notorious cold-start problem: if reviews are submitted by a lot of fresh new accounts, it is hard to justify preventative measures. The present research work is focused on the case of The Last of Us Part II, a video game published by Sony that was the target of the widest Review Bomb phenomenon, which occurred in June 2020. 
By performing an observational analysis of a linguistic corpus of English reviews and the features of their users, this study confirms that the Bomb was an ideological attack aimed at breaking down the rating system of the platform Metacritic. The evidence supports the view that the bombing had the unintended consequence of inducing a reaction from users, ending in a consistent polarisation of ratings towards extreme values. The results not only illustrate the theory of polarity in online reviews, but they also provide insights for research on the problem of cold-start detection of spam reviews. In particular, the study shows the relevance of detecting users who discuss contextual elements instead of the product, and users with anomalous features.\n"} {"abstract": " Arrays of nanoparticles exploited in light scattering applications commonly feature only either a periodic or a rather random arrangement of their constituents. In the periodic case, light scattering is mostly governed by the strong spatial correlations of the arrangement, expressed by the structure factor. In the random case, structural correlations cancel each other out and light scattering is mostly governed by the scattering properties of the individual scatterer, expressed by the form factor. In contrast to these extreme cases, we show here, for the first time, that hyperuniform disorder in self-organized large-area arrays of high-refractive-index nanodisks enables both the structure and the form factor to impact the resulting scattering pattern, offering novel means to tailor light scattering. The scattering response of our nearly hyperuniform interfaces can be exploited in a large variety of applications and constitutes a novel class of advanced optical materials.\n"} {"abstract": " We study the clustering task under anisotropic Gaussian Mixture Models where the covariance matrices from different clusters are unknown and are not necessarily identical. We characterize the dependence of signal-to-noise ratios on the cluster centers and covariance matrices and obtain the minimax lower bound for the clustering problem. In addition, we propose a computationally feasible procedure and prove that it achieves the optimal rate within a few iterations. The proposed procedure is a hard EM type algorithm, and it can also be seen as a variant of Lloyd's algorithm adjusted to the anisotropic covariance matrices.\n"} {"abstract": " Deep learning based generative adversarial networks (GAN) can effectively perform image reconstruction with under-sampled MR data. In general, a large number of training samples are required to improve the reconstruction performance of a certain model. However, in real clinical applications, it is difficult to obtain tens of thousands of raw patient data sets to train the model, since saving k-space data is not part of the routine clinical flow. Therefore, enhancing the generalizability of a network based on small samples is urgently needed. In this study, three novel applications were explored based on parallel imaging combined with the GAN model (PI-GAN) and transfer learning. The model was pre-trained with public Calgary brain images and then fine-tuned for use in (1) patients with tumors in our center; (2) different anatomies, including knee and liver; and (3) different k-space sampling masks with acceleration factors (AFs) of 2 and 6. For the brain tumor dataset, the transfer learning results could remove the artifacts found in PI-GAN and yield smoother brain edges. 
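The fine-tuning recipe implied here can be sketched generically; the toy network, freezing depth, optimizer settings, and the checkpoint name below are assumptions for illustration, not the paper's published configuration:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained reconstruction generator (2-channel complex MR images).
model = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # early layers: generic features
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),              # late layers: domain-specific
)
# model.load_state_dict(torch.load("pigan_calgary_brain.pt"))  # hypothetical checkpoint

# Freeze the early layers; fine-tune only the last convolution on the small target set.
for p in model[:4].parameters():
    p.requires_grad = False
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.L1Loss()

# One dummy batch stands in for the small knee/liver/tumor fine-tuning set.
for undersampled, target in [(torch.randn(4, 2, 64, 64), torch.randn(4, 2, 64, 64))]:
    opt.zero_grad()
    loss = loss_fn(model(undersampled), target)
    loss.backward()
    opt.step()
```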
The transfer learning results for the knee and liver were superior to those of the PI-GAN model trained with its own dataset using a smaller number of training cases. However, the learning procedure converged more slowly in the knee datasets compared to the learning in the brain tumor datasets. The reconstruction performance was improved by transfer learning in both the models with AFs of 2 and 6; of these two models, the one with AF=2 showed better results. The results also showed that transfer learning with the pre-trained model could solve the problem of inconsistency between the training and test datasets and facilitate generalization to unseen data.\n"} {"abstract": " Deep learning algorithms are a key component of many state-of-the-art vision systems, especially as Convolutional Neural Networks (CNN) outperform most solutions in terms of accuracy. To apply such algorithms in real-time applications, one has to address the challenges of memory and computational complexity. To deal with the first issue, we use networks with reduced precision, specifically a binary neural network (also known as XNOR). To satisfy the computational requirements, we propose to use highly parallel and low-power FPGA devices. In this work, we explore the possibility of accelerating XNOR networks for traffic sign classification. The trained binary networks are implemented on the ZCU 104 development board, equipped with a Zynq UltraScale+ MPSoC device, using two different approaches. Firstly, we propose a custom HDL accelerator for XNOR networks, which enables inference at almost 450 fps. Even better results are obtained with the second method - the Xilinx FINN accelerator - which enables input images to be processed at around 550 fps. Both approaches provide over 96% accuracy on the test set.\n"} {"abstract": " This paper presents a novel Sliding Mode Control (SMC) algorithm to handle mismatched uncertainties in systems via a novel Self-Learning Disturbance Observer (SLDO). A computationally efficient SLDO is developed within a framework of a feedback-error learning scheme in which a conventional estimation law and a Neuro-Fuzzy Structure (NFS) work in parallel. In this framework, the NFS estimates the mismatched disturbances and becomes the leading disturbance estimator, while the former feeds the learning error to the NFS to learn system behavior. The simulation results demonstrate that the proposed SMC based on the SLDO (SMC-SLDO) ensures robust control performance in the presence of mismatched time-varying uncertainties when compared to SMC, integral SMC (ISMC) and SMC based on a Basic Nonlinear Disturbance Observer (SMC-BNDO), and also retains the nominal control performance in the absence of mismatched uncertainties. Additionally, the SMC-SLDO not only counteracts mismatched time-varying uncertainties but also improves the transient response performance in the presence of mismatched time-invariant uncertainties. Moreover, the controller gain of the SMC-SLDO is required to be selected larger than the upper bound of the disturbance estimation error rather than the upper bound of the actual disturbance to guarantee system stability, which results in eliminating the chattering effects on the control signal.\n"} {"abstract": " Graph Neural Networks (GNNs) require a relatively large number of labeled nodes and a reliable/uncorrupted graph connectivity structure in order to obtain good performance on the semi-supervised node classification task. 
The performance of GNNs can degrade significantly as the number of labeled nodes decreases or the graph connectivity structure is corrupted by adversarial attacks or noise in data measurement/collection. Therefore, it is important to develop GNN models that are able to achieve good performance when there is limited supervision knowledge -- a few labeled nodes and noisy graph structures. In this paper, we propose a novel Dual GNN learning framework to address this challenging task. The proposed framework has two GNN-based node prediction modules. The primary module uses the input graph structure to induce regular node embeddings and predictions with a regular GNN baseline, while the auxiliary module constructs a new graph structure through fine-grained spectral clustering and learns new node embeddings and predictions. By integrating the two modules in a dual GNN learning framework, we perform joint learning in an end-to-end fashion. This general framework can be applied to many GNN baseline models. The experimental results validate that the proposed dual GNN framework can greatly outperform the GNN baseline methods when the labeled nodes are scarce and the graph connectivity structure is noisy.\n"} {"abstract": " This study examines habits and perceptions related to pay-to-publish and open access practices in fields that have attracted little research to date: philosophy and ethics. The study is undertaken in the Spanish context, where the culture of publication and the book and journal publishing industry have some specific characteristics with regard to paying to publish, such as not offering open access distribution of books published for a fee. The study draws on data from a survey of 201 researchers, a public debate with 26 researchers, and 14 in-depth interviews. The results reveal some interesting insights into the criteria researchers apply when selecting publishers and journals for their work, the extent of paying to publish (widespread in the case of books and modest for journals) and the debates that arise over the effects it has on manuscript review and on unequal access to resources to cover publication fees. Data on the extent of open access and researchers' views on the dissemination of publicly funded research are also presented.\n"} {"abstract": " Despite evidence for the existence of dark matter (DM) from very high and low redshifts, a moderate amount of DM particle decay remains a valid possibility. This includes both models with very long-lived yet unstable particles and mixed scenarios where only a small fraction of the dark matter is allowed to decay. In this paper, we investigate how DM particles decaying into radiation affect non-linear structure formation. We look at the power spectrum and its redshift evolution, varying both the decay lifetime ($\tau$) and the fraction of decaying-to-total dark matter ($f$), and we propose a fitting function that reaches sub-percent precision below $k\sim10$ h/Mpc. Based on this fit, we perform a forecast analysis for a Euclid-like weak lensing (WL) survey, including both massive neutrino and baryonic feedback parameters. 
We find that with WL observations alone, it is possible to rule out decay lifetimes smaller than $\tau=75$ Gyr (at 95 percent CL) for the case that all DM is unstable. This constraint improves to $\tau=182$ Gyr if the WL data are combined with CMB priors from the Planck satellite, and to $\tau=275$ Gyr if we further assume baryonic feedback to be fully constrained by upcoming Sunyaev-Zeldovich or X-ray data. The latter shows a factor of 3.2 improvement compared to constraints from CMB data alone. Regarding the scenario of a strongly decaying sub-component of dark matter with $\tau\sim 30$ Gyr or lower, it will be possible to rule out a decaying-to-total fraction of $f>0.49$, $f>0.21$, and $f>0.13$ (at the 95 percent CL) for the same three scenarios. We conclude that the upcoming stage-IV WL surveys will allow us to significantly improve current constraints on the stability of the dark matter sector.\n"} {"abstract": " The inscribed angle theorem, a famous result about the angle subtended by a chord within a circle, is well known and commonly taught in school curricula. In this paper, we present a generalisation of this result (and other related circle theorems) to the rectangular hyperbola. The notion of angle is replaced by pseudo-angle, defined via the Minkowski inner product. Indeed, in Minkowski space, the unit hyperbola is the set of points a unit metric distance from the origin, analogous to the Euclidean unit circle. While this is a result of pure geometrical interest, the connection to Minkowski space allows an interpretation in terms of special relativity where, in the limit $c\to\infty$, it leads to a familiar result from non-relativistic dynamics. This non-relativistic result can be interpreted as an inscribed angle theorem for the parabola, which we show can also be obtained from the Euclidean inscribed angle theorem by taking the limit of a family of ellipses analogous to the non-relativistic limit $c\to\infty$. This simple result could be used as a pedagogical example to consolidate understanding of pseudo-angles in non-Euclidean spaces or to demonstrate the power of analytic continuation.\n"} {"abstract": " Human pose estimation is a major computer vision problem with applications ranging from augmented reality and video capture to surveillance and movement tracking. In the medical context, the latter may be an important biomarker for neurological impairments in infants. Whilst many methods exist, their application has been limited by the need for large, well-annotated datasets and the inability to generalize to humans of different shapes and body compositions, e.g. children and infants. In this paper we present a novel method for learning pose estimators for human adults and infants in an unsupervised fashion. We approach this as a learnable template matching problem facilitated by deep feature extractors. Human-interpretable landmarks are estimated by transforming a template consisting of predefined body parts that are characterized by 2D Gaussian distributions. Enforcing a connectivity prior guides our model to meaningful human shape representations. We demonstrate the effectiveness of our approach on two different datasets including adults and infants.\n"} {"abstract": " The Rindler spacetime describing a series of accelerating observers is Ricci flat, but it still has novel optical effects. 
In the WKB approximation, we derive the light geodesics in the Rindler frame from the covariant wave equation and the geodesic equations. Then, we use the ABCD matrix optics method to explore the propagation characteristics of the Rindler frame, thus linking three different optical transformation settings (geometry, gravity and vacuum refractive index) together. Moreover, the propagation characteristics of a hollow beam in Rindler spacetime are described analytically; those characteristics are quite different from the ones in flat spacetime. Based on these calculations, we demonstrate the position uncertainty relationship between the transverse beam size and the momentum, which surprisingly coincides with the result derived from quantization. We hope to provide a simple method to analyze beam propagation in an accelerated frame.\n"} {"abstract": " We present a novel approach for disentangling the content of a text image from all aspects of its appearance. The appearance representation we derive can then be applied to new content, for one-shot transfer of the source style to new content. We learn this disentanglement in a self-supervised manner. Our method processes entire word boxes, without requiring segmentation of text from background, per-character processing, or making assumptions on string lengths. We show results in different text domains which were previously handled by specialized methods, e.g., scene text and handwritten text. To these ends, we make a number of technical contributions: (1) We disentangle the style and content of a textual image into a non-parametric, fixed-dimensional vector. (2) We propose a novel approach inspired by StyleGAN but conditioned over the example style at different resolutions and content. (3) We present novel self-supervised training criteria which preserve both source style and target content using a pre-trained font classifier and text recognizer. Finally, (4) we also introduce Imgur5K, a new challenging dataset for handwritten word images. We offer numerous qualitative photo-realistic results of our method. We further show that our method surpasses previous work in quantitative tests on scene text and handwriting datasets, as well as in a user study.\n"} {"abstract": " In this paper we present our system for the detection and classification of acoustic scenes and events (DCASE) 2020 Challenge Task 4: Sound event detection and separation in domestic environments. We introduce two new models: the forward-backward convolutional recurrent neural network (FBCRNN) and the tag-conditioned convolutional neural network (CNN). The FBCRNN employs two recurrent neural network (RNN) classifiers sharing the same CNN for preprocessing. With one RNN processing a recording in the forward direction and the other in the backward direction, the two networks are trained to jointly predict audio tags, i.e., weak labels, at each time step within a recording, given that at each time step they have jointly processed the whole recording. The proposed training encourages the classifiers to tag events as soon as possible. Therefore, after training, the networks can be applied to shorter audio segments of, e.g., 200 ms, allowing sound event detection (SED). Further, we propose a tag-conditioned CNN to complement SED. It is trained to predict strong labels while using (predicted) tags, i.e., weak labels, as additional input. For training, pseudo strong labels from an FBCRNN ensemble are used. 
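One plausible reading of the FBCRNN layout, reconstructed from the description above rather than taken from the released code, with placeholder layer sizes, is:

```python
import torch
import torch.nn as nn

class FBCRNN(nn.Module):
    """Shared CNN front-end with forward and backward per-frame tagging heads."""
    def __init__(self, n_mels=64, n_tags=10, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        feat = 64 * (n_mels // 4)
        self.rnn_fwd = nn.GRU(feat, hidden, batch_first=True)
        self.rnn_bwd = nn.GRU(feat, hidden, batch_first=True)
        self.head_fwd = nn.Linear(hidden, n_tags)
        self.head_bwd = nn.Linear(hidden, n_tags)

    def forward(self, x):                                # x: (batch, 1, n_mels, frames)
        z = self.cnn(x).flatten(1, 2).transpose(1, 2)    # (batch, frames, feat)
        tags_fwd = self.head_fwd(self.rnn_fwd(z)[0])     # at frame t: has seen 0..t
        tags_bwd = self.head_bwd(self.rnn_bwd(z.flip(1))[0]).flip(1)  # has seen t..T
        return tags_fwd, tags_bwd                        # per-frame weak-label logits

fwd, bwd = FBCRNN()(torch.randn(2, 1, 64, 100))          # two 100-frame log-mel clips
```

At each frame the forward head has processed the past and the backward head the future, so jointly the pair has seen the whole recording, matching the joint tagging objective described above.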
The\npresented system scored fourth and third place in the systems and teams\nrankings, respectively. Subsequent improvements allow our system to even\noutperform the challenge baseline and winner systems on average by\n18.0% and 2.2% event-based F1-score, respectively, on the validation set. Source\ncode is publicly available at https://github.com/fgnt/pb_sed.\n"} {"abstract": " Block-sparse signal recovery without knowledge of block sizes and boundaries,\nsuch as those encountered in multi-antenna mmWave channel models, is a hard\nproblem for compressed sensing (CS) algorithms. We propose a novel Sparse\nBayesian Learning (SBL) method for block-sparse recovery based on popular CS\nbased regularizers with the function input variable related to total variation\n(TV). Contrary to conventional approaches that impose the regularization on the\nsignal components, we regularize the SBL hyperparameters. This iterative\nTV-regularized SBL algorithm employs a majorization-minimization approach and\nreduces each iteration to a convex optimization problem, enabling a flexible\nchoice of numerical solvers. The numerical results illustrate that the\nTV-regularized SBL algorithm is robust to the nature of the block structure and\nable to recover signals with both block-patterned and isolated components,\nproving useful for various signal recovery systems.\n"} {"abstract": " Population synthesis studies of binary black-hole mergers often lack robust\nblack-hole spin estimates as they cannot accurately follow tidal spin-up during\nthe late black-hole-Wolf-Rayet evolutionary phase. We provide an analytical\napproximation of the dimensionless second-born black-hole spin given the binary\norbital period and Wolf-Rayet stellar mass at helium depletion or carbon\ndepletion. These approximations are obtained from fitting a sample of around\n$10^5$ detailed MESA simulations that follow the evolution and spin-up of close\nblack-hole--Wolf-Rayet systems with metallicities in the range\n$[10^{-4},1.5Z_\odot]$. Following the potential spin-up of the Wolf-Rayet\nprogenitor, the second-born black-hole spin is calculated using up-to-date core\ncollapse prescriptions that account for any potential disk formation in the\ncollapsing Wolf-Rayet star. The fits for second-born black hole spin provided\nin this work can be readily applied to any astrophysical modeling that relies\non rapid population synthesis, and will be useful for the interpretation of\ngravitational-wave sources using such models.\n"} {"abstract": " Vector representations have become a central element in semantic language\nmodelling, leading to mathematical overlaps with many fields including quantum\ntheory. Compositionality is a core goal for such representations: given\nrepresentations for 'wet' and 'fish', how should the concept 'wet fish' be\nrepresented?\n This position paper surveys this question from two points of view. The first\nconsiders the question of whether an explicit mathematical representation can\nbe successful using only tools from within linear algebra, or whether other\nmathematical tools are needed. The second considers whether semantic vector\ncomposition should be explicitly described mathematically, or whether it can be\na model-internal side-effect of training a neural network.\n A third and newer question is whether a compositional model can be\nimplemented on a quantum computer.
Given the fundamentally linear nature of\nquantum mechanics, we propose that these questions are related, and that this\nsurvey may help to highlight candidate operations for future quantum\nimplementation.\n"} {"abstract": " Combination and aggregation techniques can significantly improve forecast\naccuracy. This also holds for probabilistic forecasting methods where\npredictive distributions are combined. There are several time-varying and\nadaptive weighting schemes such as Bayesian model averaging (BMA). However, the\nquality of different forecasts may vary not only over time but also within the\ndistribution. For example, some distribution forecasts may be more accurate in\nthe center of the distributions, while others are better at predicting the\ntails. Therefore, we introduce a new weighting method that considers the\ndifferences in performance over time and within the distribution. We discuss\npointwise combination algorithms based on aggregation across quantiles that\noptimize with respect to the continuous ranked probability score (CRPS). After\nanalyzing the theoretical properties of pointwise CRPS learning, we discuss B-\nand P-Spline-based estimation techniques for batch and online learning, based\non quantile regression and prediction with expert advice. We prove that the\nproposed fully adaptive Bernstein online aggregation (BOA) method for pointwise\nCRPS online learning has optimal convergence properties. They are confirmed in\nsimulations and a probabilistic forecasting study for European emission\nallowance (EUA) prices.\n"} {"abstract": " This paper presents the definition and implementation of a quantum computer\narchitecture to enable creating a new computational device - a quantum computer\nas an accelerator. In this paper, we present explicitly the idea of a quantum\naccelerator which contains the full stack of the layers of an accelerator. Such\na stack starts at the highest level describing the target application of the\naccelerator. Important to realise is that qubits are defined as perfect qubits,\nimplying they do not decohere and perform good quantum gate operations. The\nnext layer abstracts the quantum logic outlining the algorithm that is to be\nexecuted on the quantum accelerator. In our case, the logic is expressed in the\nuniversal quantum-classical hybrid computation language developed in the group,\ncalled OpenQL. We also have to start thinking about how to verify, validate and\ntest the quantum software such that the compiler generates a correct version of\nthe quantum circuit. The OpenQL compiler translates the program to a common\nassembly language, called cQASM. We need to develop a quantum operating system\nthat manages all the hardware of the micro-architecture. The layer below the\nmicro-architecture is responsible for the mapping and routing of the qubits on\nthe topology such that the nearest-neighbour constraint can be respected. At\nany moment in the future when we are capable of generating multiple good\nqubits, the compiler can convert the cQASM to generate the eQASM, which is\nexecutable on a particular experimental device incorporating the\nplatform-specific parameters. This way, we are able to distinguish clearly the\nexperimental research towards better qubits, and the industrial and societal\napplications that need to be developed and executed on a quantum device.\n"} {"abstract": " Correspondence-based rotation search and point cloud registration are two\nfundamental problems in robotics and computer vision.
However, the presence of\noutliers, sometimes even occupying the great majority of the putative\ncorrespondences, can make many existing algorithms either fail or have very\nhigh computational cost. In this paper, we present RANSIC (RANdom Sampling with\nInvariant Compatibility), a fast and highly robust method applicable to both\nproblems based on a new paradigm combining random sampling with invariance and\ncompatibility. Generally, RANSIC starts with randomly selecting small subsets\nfrom the correspondence set, then seeks potential inliers as graph vertices\nfrom the random subsets through the compatibility tests of invariants\nestablished in each problem, and eventually returns the eligible inliers when\nthere exists at least one K-degree vertex (K is automatically updated depending\non the problem) and the residual errors satisfy a certain termination condition\nat the same time. In multiple synthetic and real experiments, we demonstrate\nthat RANSIC is fast to use, robust against over 95% outliers, and also able to\nrecall approximately 100% inliers, outperforming other state-of-the-art solvers\nfor both the rotation search and the point cloud registration problems.\n"} {"abstract": " Computational models of biological processes provide one of the most powerful\nmethods for a detailed analysis of the mechanisms that drive the behavior of\ncomplex systems. Logic-based modeling has enhanced our understanding and\ninterpretation of those systems. Defining rules that determine how the output\nactivity of biological entities is regulated by their respective inputs has\nproven to be challenging, due to increasingly larger models and the presence of\nnoise in data, allowing multiple model parameterizations to fit the\nexperimental observations.\n We present several Boolean function metrics that provide modelers with the\nappropriate framework to analyze the impact of a particular model\nparameterization. We demonstrate the link between a semantic characterization\nof a Boolean function and its consistency with the model's underlying\nregulatory structure. We further define the properties that outline such\nconsistency and show that several of the Boolean functions under study violate\nthem, questioning their biological plausibility and subsequent use. We also\nillustrate that regulatory functions can have major differences with regard to\ntheir asymptotic output behavior, with some of them being biased towards\nspecific Boolean outcomes when others are dependent on the ratio between\nactivating and inhibitory regulators.\n Application results show that in a specific signaling cancer network, the\nfunction bias can be used to guide the choice of logical operators for a model\nthat matches data observations. Moreover, graph analysis indicates that the\nstandardized Boolean function bias becomes more prominent with increasing\nnumbers of regulators, confirming the fact that rule specification can\neffectively determine regulatory outcome despite the complex dynamics of\nbiological networks.\n"} {"abstract": " The application of machine learning (ML) and genetic programming (GP) to the\nimage compression domain has produced promising results in many cases. The need\nfor compression arises due to the exorbitant size of data shared on the\ninternet.
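For the Boolean-function metrics abstract above, the output bias of a regulatory rule can be computed exhaustively over its input states; a small sketch follows, where the activator/inhibitor rule itself is an illustrative assumption, not one taken from the paper.

```python
# Output bias of a Boolean regulatory rule: the fraction of input states
# on which the rule returns 1 (a simple, exhaustive computation).
from itertools import product

def bias(rule, n_inputs):
    states = list(product([0, 1], repeat=n_inputs))
    return sum(rule(s) for s in states) / len(states)

# illustrative rule: two activators s[0], s[1]; one inhibitor s[2]
rule = lambda s: int((s[0] or s[1]) and not s[2])
print(bias(rule, 3))  # 0.375
```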
Compression is required for text, videos, and images, which are used\nalmost everywhere on the web, be it news articles, social media posts, blogs,\neducational platforms, the medical domain, or government services; all of these\nneed packets for transmission, and hence compression is necessary to\navoid overwhelming the network. This paper discusses some of the\nimplementations of image compression algorithms that use techniques such as\nArtificial Neural Networks, Residual Learning, Fuzzy Neural Networks,\nConvolutional Neural Nets, Deep Learning, and Genetic Algorithms. The paper also\ndescribes an implementation of Vector Quantization using GA to generate the\ncodebook which is used for lossy image compression. All these approaches stand\nin stark contrast to the standard approaches to processing images due to\nthe highly parallel and computationally extensive nature of machine learning\nalgorithms. Such non-linear abilities of ML and GP make them widely popular for\nuse in multiple domains. Traditional approaches are also combined with\nartificially intelligent systems, leading to hybrid systems, to achieve better\nresults.\n"} {"abstract": " The thermodynamic properties of Bi-Sn were studied at 600 and 900K using a\nquasi-lattice theory. After successful fitting of Gibbs free energies of mixing\nand thermodynamic activities, the fitting parameters were used to investigate\nthe enthalpy of mixing, the entropy of mixing, concentration fluctuations,\nWarren-Cowley short range order parameter, surface concentrations and surface\ntensions of the binary systems. Positive and symmetrically shaped enthalpies of\nmixing were observed over the entire composition range, while negative excess\nentropies of mixing were observed. Bi-Sn showed a slight preference for like-atoms as\nnearest neighbours over the entire composition range. The nature of atomic order in\nBi-Sn at 600 and 900K appeared similar. The highest tendency for\nhomocoordination exists at the composition where the mole fraction of Bi is about 40%.\nIt was also observed that Bi (whose surface tension is lower than that of Sn)\nhas the highest surface enrichment in the Bi-Sn systems. Unlike many previous\napplications of the quasi-lattice theory where constant values were used to\napproximate coordination numbers, temperature and composition-dependent\ncoordination numbers were applied in this work.\n"} {"abstract": " We have performed ab-initio molecular dynamics simulations to elucidate the\nmechanism of the phase transition at high pressure from hexagonal graphite (HG)\nto hexagonal diamond (HD) or to cubic diamond (CD). The transition from HG to\nHD is found to occur swiftly in a very small time of 0.2 ps, with large\ncooperative displacements of all the atoms. We observe that alternate layers of\natoms in HG slide in opposite directions by (1/3, 1/6, 0) and (-1/3, -1/6, 0),\nrespectively, which is about 0.7 {\AA} along the $\pm$[2, 1, 0] direction, while\nsimultaneously puckering by about $\pm$0.25 {\AA} perpendicular to the a-b plane.\nThe transition from HG to CD occurred with more complex cooperative\ndisplacements. In this case, six successive HG layers slide in pairs by 1/3\nalong [0, 1, 0], [-1, -1, 0] and [1, 0, 0], respectively, along with the\npuckering as above. We have also performed calculations of the phonon spectrum\nin HG at high pressure, which reveal soft phonon modes that may facilitate the\nphase transition involving the sliding and puckering of the HG layers.
The\nzero-point vibrational energy and the vibrational entropy are found to have an\nimportant role in stabilizing HG up to higher pressures (>10 GPa) and\ntemperatures than those estimated (<6 GPa) from previous enthalpy calculations.\n"} {"abstract": " Recent medical imaging studies have given rise to distinct but inter-related\ndatasets corresponding to multiple experimental tasks or longitudinal visits.\nStandard scalar-on-image regression models that fit each dataset separately are\nnot equipped to leverage information across inter-related images, and existing\nmulti-task learning approaches are compromised by the inability to account for\nthe noise that is often observed in images. We propose a novel joint\nscalar-on-image regression framework involving wavelet-based image\nrepresentations with grouped penalties that are designed to pool information\nacross inter-related images for joint learning, and which explicitly accounts\nfor noise in high-dimensional images via a projection-based approach. In the\npresence of non-convexity arising due to noisy images, we derive non-asymptotic\nerror bounds under non-convex as well as convex grouped penalties, even when\nthe number of voxels increases exponentially with sample size. A projected\ngradient descent algorithm is used for computation, which is shown to\napproximate the optimal solution via well-defined non-asymptotic optimization\nerror bounds under noisy images. Extensive simulations and application to a\nmotivating longitudinal Alzheimer's disease study illustrate significantly\nimproved predictive ability and greater power to detect true signals that are\nsimply missed by existing methods without noise correction due to the\nattenuation to null phenomenon.\n"} {"abstract": " The availability of multi-omics data has revolutionized the life sciences by\ncreating avenues for integrated system-level approaches. Data integration links\nthe information across datasets to better understand the underlying biological\nprocesses. However, high-dimensionality, correlations and heterogeneity pose\nstatistical and computational challenges. We propose a general framework,\nprobabilistic two-way partial least squares (PO2PLS), which addresses these\nchallenges. PO2PLS models the relationship between two datasets using joint and\ndata-specific latent variables. For maximum likelihood estimation of the\nparameters, we implement a fast EM algorithm and show that the estimator is\nasymptotically normally distributed. A global test for testing the relationship\nbetween two datasets is proposed, and its asymptotic distribution is derived.\nNotably, several existing omics integration methods are special cases of\nPO2PLS. Via extensive simulations, we show that PO2PLS performs better than\nalternatives in feature selection and prediction performance. In addition, the\nasymptotic distribution appears to hold when the sample size is sufficiently\nlarge. We illustrate PO2PLS with two examples from commonly used study designs:\na large population cohort and a small case-control study. Besides recovering\nknown relationships, PO2PLS also identified novel findings. The methods are\nimplemented in our R-package PO2PLS. Supplementary materials for this article\nare available online.\n"} {"abstract": " Hard sphere systems are often used to model simple fluids. The configuration\nspaces of hard spheres in a three-dimensional torus modulo various symmetry\ngroups are comparatively simple, and could provide valuable information about\nthe nature of phase transitions.
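The projected gradient descent computation mentioned in the scalar-on-image abstract above follows the generic project-after-step pattern; here is a minimal sketch on a toy least-squares problem with an l2-ball constraint (the objective and constraint are illustrative, not the paper's).

```python
# Generic projected gradient descent: take a gradient step, then project
# back onto the feasible set.
import numpy as np

def pgd(grad, project, x0, step=0.1, n_iter=200):
    x = x0
    for _ in range(n_iter):
        x = project(x - step * grad(x))
    return x

# toy example: least squares restricted to the unit l2 ball
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([3.0, 1.0])
grad = lambda x: A.T @ (A @ x - b)
project = lambda x: x / max(1.0, np.linalg.norm(x))
print(pgd(grad, project, np.zeros(2)))
```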
Specifically, the topological changes in the\nconfiguration space as a function of packing fraction have been conjectured to\nbe related to the onset of first-order phase transitions. The critical\nconfigurations for one to twelve spheres are sampled using a Morse-theoretic\napproach, and are available in an online, interactive database. Explicit\ntriangulations are constructed for the configuration spaces of the two-sphere\nsystem, and their topological and geometric properties are studied. The\ncritical configurations are found to be associated with geometric changes to\nthe configuration space that connect previously distant regions and reduce the\nconfiguration space diameter as measured by the commute time and diffusion\ndistances. The number of such critical configurations around the packing\nfraction of the solid-liquid phase transition increases exponentially with the\nnumber of spheres, suggesting that the onset of the first-order phase\ntransition in the thermodynamic limit is associated with a discontinuity in the\nconfiguration space diameter.\n"} {"abstract": " Answering a long-standing question, we give an example of a Hilbert module\nand a nonzero bounded right linear map having a kernel with trivial orthogonal\ncomplement. In particular, this kernel is different from its own double\northogonal complement.\n"} {"abstract": " Neural networks often require large amounts of data to generalize and can be\nill-suited for modeling small and noisy experimental datasets. Standard network\narchitectures trained on scarce and noisy data will return predictions that\nviolate the underlying physics. In this paper, we present methods for embedding\neven--odd symmetries and conservation laws in neural networks and propose novel\nextensions and use cases for physical constraint embedded neural networks. We\ndesign an even--odd decomposition architecture for disentangling a neural\nnetwork parameterized function into its even and odd components and demonstrate\nthat it can accurately infer symmetries without prior knowledge. We highlight\nthe noise-resilient properties of physical constraint embedded neural networks\nand demonstrate their utility as physics-informed noise regulators. Here we\nemploy a conservation-of-energy constraint embedded network as a\nphysics-informed noise regulator for a symbolic regression task. We show that\nour approach returns a symbolic representation of the neural network\nparameterized function that aligns well with the underlying physics while\noutperforming a baseline symbolic regression approach.\n"} {"abstract": " Small form-factor, narrowband, and highly directive antennas are of critical\nimportance in a variety of applications spanning wireless communications,\nremote sensing, Raman spectroscopy, and single photon emission enhancement.\nSurprisingly, we show that the classical directivity limit can be appreciably\nsurpassed for electrically small multilayer spherical antennas excited by a\npoint electric dipole even if limiting ourselves to purely dielectric\nmaterials.
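The even--odd decomposition in the physics-constrained network abstract above rests on the identity f(x) = [f(x)+f(-x)]/2 + [f(x)-f(-x)]/2; a minimal sketch follows, wrapping an arbitrary (hypothetical) network so the split holds by construction.

```python
# Even-odd split of any function f: f(x) = even(x) + odd(x), where
# even(-x) = even(x) and odd(-x) = -odd(x) by construction.
import torch

def even_odd(f, x):
    fx, fmx = f(x), f(-x)
    return 0.5 * (fx + fmx), 0.5 * (fx - fmx)

# hypothetical stand-in network, not the paper's architecture
f = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))
x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)
even, odd = even_odd(f, x)
assert torch.allclose(even + odd, f(x), atol=1e-6)
```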
Experimentally feasible designs of superdirective antennas are\nestablished by using a stochastic optimization algorithm combined with a\nrigorous analytic solution.\n"} {"abstract": " Recently, [JHEP 20 131 (2020)] obtained (a similar, scaled version of) the\n($a,b$)-phase diagram derived from the Kazakov--Zinn-Justin solution of the\nHermitian two-matrix model with interactions \[\mathrm{Tr\,}\Big\{\frac{a}{4}\n(A^4+B^4)+\frac{b}{2} ABAB\Big\}\,,\] starting from Functional Renormalization.\nWe comment on something unexpected: the phase diagram of [JHEP 20 131 (2020)]\nis based on a $\beta_b$-function that does not have the one-loop structure of\nthe Wetterich-Morris Equation. This raises the question of how to reproduce the\nphase diagram from a set of $\beta$-functions that is, in its totality,\nconsistent with Functional Renormalization. A non-minimalist, yet simple\ntruncation that could lead to the phase diagram is provided. Additionally, we\nidentify the ensemble for which the result of op. cit. would be entirely\ncorrect.\n"} {"abstract": " Climate change, which is now considered one of the biggest threats to\nhumanity, is also the reason behind various other environmental concerns.\nContinued negligence might lead us to an irreparably damaged environment. After\nthe partial failure of the Paris Agreement, it is quite evident that we as\nindividuals need to come together to bring about a change on a large scale to\nhave a significant impact. This paper discusses our approach towards obtaining\na realistic measure of the carbon footprint index being consumed by a user\nthrough day-to-day activities performed via a smart phone app and offering\nincentives in weekly and monthly leader board rankings along with a reward\nsystem. The app helps ease decision making on tasks like travel, shopping, and\nelectricity consumption, and offers a different and rather numerical perspective\non daily choices.\n"} {"abstract": " In this article we present recent advances in interval methods for rigorous\ncomputation of Poincar\'e maps. We also discuss the impact of the choice of\nPoincar\'e section and coordinate system on the obtained bounds for computing\nPoincar\'e maps near fixed points.\n"} {"abstract": " Foraminifera are single-celled marine organisms that construct shells that\nremain as fossils in the marine sediments. Classifying and counting these\nfossils are important in e.g. paleo-oceanographic and -climatological research.\nHowever, the identification and counting process has been performed manually\nsince the 1800s and is laborious and time-consuming. In this work, we present a\ndeep learning-based instance segmentation model for classifying, detecting, and\nsegmenting microscopic foraminifera. Our model is based on the Mask R-CNN\narchitecture, using model weight parameters that have been learned on the COCO\ndetection dataset. We use a fine-tuning approach to adapt the parameters on a\nnovel object detection dataset of more than 7000 microscopic foraminifera and\nsediment grains. The model achieves a (COCO-style) average precision of $0.78\n\pm 0.00$ on the classification and detection task, and $0.80 \pm 0.00$ on the\nsegmentation task. When the model is evaluated without challenging sediment\ngrain images, the average precision for both tasks increases to $0.84 \pm 0.00$\nand $0.86 \pm 0.00$, respectively. Prediction results are analyzed both\nquantitatively and qualitatively and discussed.
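A hedged sketch of the fine-tuning setup the foraminifera abstract above describes, using the standard torchvision recipe; the class count is an assumption, and the paper's exact heads and hyperparameters may differ.

```python
# Fine-tuning a COCO-pretrained Mask R-CNN for a new label set by swapping
# the box and mask prediction heads (standard torchvision recipe).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 1 + 4  # background + assumed foraminifera/sediment classes
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box head for the new label set
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)

# replace the mask head likewise
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
```

Training then proceeds on the new dataset with all (or selected) layers unfrozen, exactly as for any torchvision detection model.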
Based on our findings we\npropose several directions for future work, and conclude that our proposed\nmodel is an important step towards automating the identification and counting\nof microscopic foraminifera.\n"} {"abstract": " For the observational modeling of horizontal abundance distributions and of\nmagnetic geometries in chemically peculiar (CP) stars, Zeeman Doppler mapping\n(ZDM) has become the method of choice. Comparisons between abundance maps\nobtained for CP stars and predictions from numerical simulations of atomic\ndiffusion have always proved unsatisfactory, with the blame routinely put on\ntheory. Expanding a previous study aimed at clarifying the question of the\nuniqueness of ZDM maps, this paper inverts the roles between observational\nmodeling and time-dependent diffusion results, casting a cold eye on essential\nassumptions and algorithms underlying ZDM, in particular the Tikhonov-style\nregularization functionals, from 1D to 3D. We show that these have been\nestablished solely for mathematical convenience, but that they in no way\nreflect the physical reality in the atmospheres of magnetic CP stars.\nRecognizing that the observed strong magnetic fields in most well-mapped stars\nrequire the field geometry to be force-free, we demonstrate that many published\nmaps do not meet this condition. There follows a discussion of the frequent\nchanges in magnetic and abundance maps of well observed stars and a caveat\nconcerning the use of least squares deconvolution in ZDM analyses. It emerges\nthat because of the complexity and non-linearity of the field-dependent\nchemical stratifications, Tikhonov-based ZDM inversions cannot recover the true\nabundance and magnetic geometries. As our findings additionally show, there is\nno way to define a physically meaningful 3D regularization functional instead.\nZDM remains dysfunctional and does not provide any observational constraints\nfor the modeling of atomic diffusion.\n"} {"abstract": " In this paper, we propose different algorithms for the solution of a tensor\nlinear discrete ill-posed problem arising in the application of the meshless\nmethod for solving PDEs in three-dimensional space using multiquadric radial\nbasis functions. It is well known that the truncated singular value\ndecomposition (TSVD) is the most common effective solver for ill-conditioned\nsystems, but unfortunately the operation count for solving a linear system with\nthe TSVD is prohibitively high for large-scale matrices. In the present\nwork, we propose algorithms based on the use of the well-known Einstein product\nfor two tensors to define the tensor global Arnoldi and the tensor Golub-Kahan\nbidiagonalization algorithms. Using the so-called Tikhonov regularization\ntechnique, we will be able to provide computable approximate regularized\nsolutions in a few iterations.\n"} {"abstract": " Fairness is an important property in data-mining applications, including\nrecommender systems. In this work, we investigate a case where users of a\nrecommender system need (or want) to be fair to a protected group of items. For\nexample, in a job market, the user is the recruiter, an item is the job seeker,\nand the protected attribute is gender or race. Even if recruiters want to use a\nfair talent recommender system, the platform may not provide a fair recommender\nsystem, or recruiters may not be able to ascertain whether the recommender\nsystem's algorithm is fair.
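For orientation on the Tikhonov technique named in the tensor ill-posed-problem abstract above, the matrix-case regularized solution reads as follows (the paper itself works with the Einstein-product tensor analogue):

```latex
x_\lambda \;=\; \arg\min_x \;\|Ax-b\|_2^2 + \lambda^2\|x\|_2^2
        \;=\; \left(A^\top A + \lambda^2 I\right)^{-1} A^\top b
```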
In this case, recruiters cannot utilize the\nrecommender system, or they may become unfair to job seekers. In this work, we\npropose methods to enable the users to build their own fair recommender\nsystems. Our methods can generate fair recommendations even when the platform\ndoes not (or cannot) provide fair recommender systems. The key challenge is\nthat a user does not have access to the log data of other users or the latent\nrepresentations of items. This restriction prohibits us from adopting existing\nmethods, which are designed for platforms. The main idea is that a user has\naccess to unfair recommendations provided by the platform. Our methods leverage\nthe outputs of an unfair recommender system to construct a new fair recommender\nsystem. We empirically validate that our proposed method improves fairness\nsubstantially without harming much performance of the original unfair system.\n"} {"abstract": " We provide a categorical interpretation for _escrows_, i.e. trading protocols\nin trustless environment, where the exchange between two agents is mediated by\na third party where the buyer locks the money until they receive the goods they\nwant from the seller. A simplified escrow system can be modeled as a certain\nkind of _optic_ in a monoidal category $\\mathcal M$ (e.g., the category of sets\nwith cartesian product); escrows can be regarded as morphisms of a category\n$\\mathcal E(\\mathcal M)$, with the same objects of $\\mathcal M$, and where the\nhom-objects are $\\langle X , Y \\rangle = \\mathsf{Opt}_{\\mathcal M}(\\left[\n\\begin{smallmatrix} Y \\\\ X \\end{smallmatrix} \\right], \\left[\n\\begin{smallmatrix} X \\\\ Y \\end{smallmatrix} \\right])$. When $X$ is a comonoid\nand $Y$ is a monoid in $\\mathcal M$, $\\mathcal E(\\mathcal M)(X,Y)$ is a monoid\nin $\\mathsf{Set}$ (or in the base of enrichment chosen to model one's specific\nproblem), acting on the set of optics $\\left[ \\begin{smallmatrix} B \\\\ B\n\\end{smallmatrix} \\right] \\to \\left[ \\begin{smallmatrix} X \\\\ Y\n\\end{smallmatrix} \\right]$. Moreover, we define a map $$\\lhd : \\langle Y , X\n\\rangle \\times \\mathsf{Opt}(\\left[ \\begin{smallmatrix} Y \\\\ X \\end{smallmatrix}\n\\right], \\left[ \\begin{smallmatrix} B \\\\ B \\end{smallmatrix} \\right]) \\to\n\\mathsf{Opt}(\\left[ \\begin{smallmatrix} Y \\\\ X \\end{smallmatrix} \\right],\n\\left[ \\begin{smallmatrix}{X\\otimes B}\\\\ {Y\\otimes B} \\end{smallmatrix}\n\\right])$$ having action-like properties. This has the following\ninterpretation: the object $B$ acts as an intermediary in a transaction between\n$X$ and $Y$, modeled by an escrow in $\\langle Y , X \\rangle$.\n"} {"abstract": " We study the mean properties of a large representative sample of 217 galaxies\nshowing CIII] emission at $2$day) timescales while a high frequency ($\\sim$10$^{-3}$\nHz) component emerges after the transition into the hard state. At late times\n($\\sim$500 days after peak), a second accretion state transition occurs, from\nthe hard into the quiescent state, as identified by the sudden collapse of the\nbolometric (X-ray+UV) emission to levels below 10$^{-3.4}$ L$_{\\rm Edd}$. Our\nfindings illustrate that TDEs can be used to study the scale (in)variance of\naccretion processes in individual SMBHs. 
Consequently, they provide a new\navenue to study accretion states over seven orders of magnitude in black hole\nmass, removing limitations inherent to commonly used ensemble studies.\n"} {"abstract": " In this short note we classify the Cartan subalgebras in all von Neumann\nalgebras associated with graph product groups and their free ergodic measure\npreserving actions on probability spaces.\n"} {"abstract": " In this paper, we consider the problem of reducing the semitotal domination\nnumber of a given graph by contracting $k$ edges, for some fixed $k \geq 1$. We\nshow that this can always be done with at most 3 edge contractions and further\ncharacterise those graphs requiring 1, 2 or 3 edge contractions, respectively,\nto decrease their semitotal domination number. We then study the complexity of\nthe problem for $k=1$ and obtain in particular a complete complexity dichotomy\nfor monogenic classes.\n"} {"abstract": " Bayesian optimization has emerged as a powerful strategy to accelerate\nscientific discovery by means of autonomous experimentation. However, expensive\nmeasurements are required to accurately estimate materials properties, and can\nquickly become a hindrance to exhaustive materials discovery campaigns. Here,\nwe introduce Gemini: a data-driven model capable of using inexpensive\nmeasurements as proxies for expensive measurements by correcting systematic\nbiases between property evaluation methods. We recommend using Gemini for\nregression tasks with sparse data and in an autonomous workflow setting where\nits predictions of expensive-to-evaluate objectives can be used to construct a\nmore informative acquisition function, thus reducing the number of expensive\nevaluations an optimizer needs to achieve desired target values. In a\nregression setting, we showcase the ability of our method to make accurate\npredictions of DFT-calculated bandgaps of hybrid organic-inorganic perovskite\nmaterials. We further demonstrate the benefits that Gemini provides to\nautonomous workflows by augmenting the Bayesian optimizer Phoenics to yield a\nscalable optimization framework leveraging multiple sources of measurement.\nFinally, we simulate an autonomous materials discovery platform for optimizing\nthe activity of electrocatalysts for the oxygen evolution reaction. Realizing\nautonomous workflows with Gemini, we show that the number of measurements of a\ncomposition space comprising expensive and rare metals needed to achieve a\ntarget overpotential is significantly reduced when measurements from a proxy\ncomposition system with less expensive metals are available.\n"} {"abstract": " Virtual clusters are widely used computing platforms that can be deployed in\nmultiple cloud platforms. The ability to dynamically grow and shrink the number\nof nodes has paved the way for customised elastic computing both for High\nPerformance Computing and High Throughput Computing workloads. However,\nelasticity is typically restricted to a single cloud site, thus hindering the\nability to provision computational resources from multiple geographically\ndistributed cloud sites. To this aim, this paper introduces an architecture of\nopen-source components that coherently deploy a virtual elastic cluster across\nmultiple cloud sites to perform large-scale computing.
These hybrid virtual\nelastic clusters are automatically deployed and configured using an\nInfrastructure as Code (IaC) approach on a distributed hybrid testbed that\nspans different organizations, including on-premises and public clouds,\nsupporting automated tunneling of communications across the cluster nodes with\nadvanced VPN topologies. The results indicate that cluster-based computing of\nembarrassingly parallel jobs can benefit from hybrid virtual clusters that\naggregate computing resources from multiple cloud back-ends and bring them\ntogether into a dedicated, albeit virtual network.\n The work presented in this article has been partially funded by the European\nUnion's (EU) Horizon 2020 research project DEEP Hybrid-DataCloud (grant\nagreement No 777435).\n"} {"abstract": " As an integral part of our culture and way of life, language is intricately\nrelated to migrations of people. To understand whether and how migration shapes\nlanguage formation processes, we examine the dynamics of the naming game with\nmigrating agents. (i) When all agents may migrate, the dynamics generates an\neffective surface tension, which drives the coarsening. Such a behaviour is\nvery robust and appears for a wide range of densities of agents and their\nmigration rates. (ii) However, when only multilingual agents are allowed to\nmigrate, monolingual islands are typically formed. In such a case, when the\nmigration rate is sufficiently large, the majority of agents acquire a common\nlanguage, which spontaneously emerges with no indication of the surface-tension\ndriven coarsening. A relatively slow coarsening that takes place in a dense\nstatic population is very fragile, and most likely, an arbitrarily small\nmigration rate can divert the system toward quick formation of monolingual\nislands. Our work shows that migration influences language formation processes\nbut additional details like density, or mobility of agents are needed to\nspecify more precisely this influence.\n"} {"abstract": " Continued fractions are used to give an alternate proof that $e^{x/y}$ is\nirrational.\n"} {"abstract": " We investigate a general formulation for clustering and transductive few-shot\nlearning, which integrates prototype-based objectives, Laplacian regularization\nand supervision constraints from a few labeled data points. We propose a\nconcave-convex relaxation of the problem, and derive a computationally\nefficient block-coordinate bound optimizer, with convergence guarantee. At each\niteration, our optimizer computes independent (parallel) updates for each\npoint-to-cluster assignment. Therefore, it could be trivially distributed for\nlarge-scale clustering and few-shot tasks. Furthermore, we provide a thorough\nconvergence analysis based on point-to-set maps. We report comprehensive\nclustering and few-shot learning experiments over various data sets, showing\nthat our method yields competitive performances, in terms of accuracy and\noptimization quality, while scaling up to large problems. Using standard\ntraining on the base classes, without resorting to complex meta-learning and\nepisodic-training strategies, our approach outperforms state-of-the-art\nfew-shot methods by significant margins, across various models, settings and\ndata sets. Surprisingly, we found that even standard clustering procedures\n(e.g., K-means), which correspond to particular, non-regularized cases of our\ngeneral model, already achieve competitive performances in comparison to the\nstate-of-the-art in few-shot learning.
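The K-means baseline mentioned in the last sentence above is easy to reproduce; here is a hedged sketch over precomputed episode features, where the feature extractor and the episode array format are assumptions rather than the paper's setup.

```python
# Plain K-means baseline for transductive few-shot classification:
# cluster support+query features jointly, then label each cluster by a
# majority vote over the support points it captured.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_episode(support_feat, support_y, query_feat, n_way):
    feats = np.vstack([support_feat, query_feat])
    assign = KMeans(n_clusters=n_way, n_init=10).fit(feats).labels_
    s_assign = assign[: len(support_feat)]
    q_assign = assign[len(support_feat):]
    cluster_to_class = {}
    for c in range(n_way):
        votes = support_y[s_assign == c]
        # fall back to the cluster index if no support point landed here
        cluster_to_class[c] = np.bincount(votes).argmax() if len(votes) else c
    return np.array([cluster_to_class[c] for c in q_assign])
```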
These surprising results point to the\nlimitations of the current few-shot benchmarks, and question the viability of a\nlarge body of convoluted few-shot learning techniques in the recent literature.\n"} {"abstract": " This paper deals with Hensel minimal, non-trivially valued fields $K$ of\nequicharacteristic zero, whose axiomatic theory was introduced in a recent\narticle by Cluckers-Halupczok-Rideau. We additionally require that the standard\nalgebraic language be induced (up to interdefinability) for the imaginary sort\n$RV$. This condition is satisfied by the majority of classical tame structures\non Henselian fields, including Henselian fields with analytic structure. The\nmain purpose is to carry over many results of our previous papers to the above\ngeneral axiomatic settings including, among others, the theorem on existence of\nthe limit, curve selection, the closedness theorem, several non-Archimedean\nversions of the Lojasiewicz inequalities as well as the theorems on extending\ncontinuous definable functions and on existence of definable retractions. We\nestablish an embedding theorem for regular definable spaces and the definable\nultranormality of definable Hausdorff LC-spaces. Also given are examples that\ncurve selection and the closedness theorem, key results for numerous\napplications, may no longer be true after expanding the language for the\nleading term structure $RV$. In the case of Henselian fields with analytic\nstructure, a more precise version of the theorem on existence of the limit (a\nversion of Puiseux's theorem) is provided. Further, we establish definable\nversions of resolution of singularities (hypersurface case) and transformation\nto normal crossings by blowing up, on arbitrary strong analytic manifolds in\nHensel minimal expansions of analytic structures. Also introduced are\nmeromorphous functions, i.e. continuous quotients of strong analytic functions\non strong analytic manifolds. Finally, we prove a finitary meromorphous version\nof the Nullstellensatz.\n"} {"abstract": " In this paper, we prove that the Fechner and Stevens laws are equivalent\n(coincide up to isomorphism). Therefore, the problem does not exist.\n"} {"abstract": " Deep learning has achieved promising segmentation performance on 3D left\natrium MR images. However, annotations for segmentation tasks are expensive\nand difficult to obtain. In this paper, we introduce a novel\nhierarchical consistency regularized mean teacher framework for 3D left atrium\nsegmentation. In each iteration, the student model is optimized by multi-scale\ndeep supervision and hierarchical consistency regularization, concurrently.\nExtensive experiments have shown that our method achieves competitive\nperformance as compared with full annotation, outperforming other\nstate-of-the-art semi-supervised segmentation methods.\n"} {"abstract": " We present a collection recommender system that can automatically create and\nrecommend collections of items at a user level. Unlike regular recommender\nsystems, which output top-N relevant items, a collection recommender system\noutputs collections of items such that the items in the collections are\nrelevant to a user, and the items within a collection follow a specific theme.\nOur system builds on top of the user-item representations learnt by item\nrecommender systems.
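As a hedged illustration of the kind of pipeline the collection recommender abstract describes here and in the sentences that follow, one can reduce item embeddings, cluster them into themes, and attach naive titles; all names and sizes below are assumptions, not the production system.

```python
# Sketch: group item embeddings into themed, titled collections via
# dimensionality reduction and clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def build_collections(item_vecs, item_names, n_collections=8):
    z = PCA(n_components=32).fit_transform(item_vecs)   # assumes dim >= 32
    labels = KMeans(n_clusters=n_collections, n_init=10).fit_predict(z)
    collections = []
    for c in range(n_collections):
        members = [item_names[i] for i in np.where(labels == c)[0]]
        if members:
            # naive placeholder title: first member's name; a real system
            # would derive titles and ratings from item metadata
            collections.append({"title": members[0], "items": members})
    return collections
```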
We employ dimensionality reduction and clustering\ntechniques along with intuitive heuristics to create collections with their\nratings and titles.\n We test these ideas in a real-world setting of music recommendation, within a\npopular music streaming service. We find that there is a 2.3x increase in\nrecommendation-driven consumption when recommending collections over items.\nFurther, it results in effective utilization of real estate and leads to\nrecommending a larger and more diverse set of items. To our knowledge, these are\nthe first experiments of their kind at such a large scale.\n"} {"abstract": " The number and importance of AI-based systems in all domains is growing. With\nthe pervasive use and the dependence on AI-based systems, the quality of these\nsystems becomes essential for their practical usage. However, quality assurance\nfor AI-based systems is an emerging area that has not been well explored and\nrequires collaboration between the SE and AI research communities. This paper\ndiscusses terminology and challenges on quality assurance for AI-based systems\nto set a baseline for that purpose. Therefore, we define basic concepts and\ncharacterize AI-based systems along the three dimensions of artifact type,\nprocess, and quality characteristics. Furthermore, we elaborate on the key\nchallenges of (1) understandability and interpretability of AI models, (2) lack\nof specifications and defined requirements, (3) need for validation data and\ntest input generation, (4) defining expected outcomes as test oracles, (5)\naccuracy and correctness measures, (6) non-functional properties of AI-based\nsystems, (7) self-adaptive and self-learning characteristics, and (8) dynamic\nand frequently changing environments.\n"} {"abstract": " An action functional is developed for nonlinear dislocation dynamics. This\nserves as a first step towards the application of effective field theory in\nphysics to evaluate its potential in obtaining a macroscopic description of\ndislocation dynamics describing the plasticity of crystalline solids.\nConnections arise between the continuum mechanics and material science of\ndefects in solids, effective field theory techniques in physics, and fracton\ntensor gauge theories.\n"} {"abstract": " Cyber Physical Systems (CPS) are characterized by their ability to integrate\nthe physical and information or cyber worlds. Their deployment in critical\ninfrastructure has demonstrated a potential to transform the world. However,\nharnessing this potential is limited by their critical nature and the\nfar-reaching effects of cyber attacks on humans, infrastructure and the environment.\nAn attraction for cyber concerns in CPS arises from the process of sending\ninformation from sensors to actuators over the wireless communication medium,\nthereby widening the attack surface. Traditionally, CPS security has been\ninvestigated from the perspective of preventing intruders from gaining access\nto the system using cryptography and other access control techniques. Most\nresearch work has therefore focused on the detection of attacks in CPS.\nHowever, in a world of increasing adversaries, it is becoming more difficult to\ntotally protect CPS from adversarial attacks, hence the need to focus on making\nCPS resilient. Resilient CPS are designed to withstand disruptions and remain\nfunctional despite the operation of adversaries. One of the dominant\nmethodologies explored for building resilient CPS is dependent on machine\nlearning (ML) algorithms.
However, drawing on recent research in adversarial\nML, we posit that ML algorithms for securing CPS must themselves be resilient.\nThis paper is therefore aimed at comprehensively surveying the interactions\nbetween resilient CPS using ML and resilient ML when applied in CPS. The paper\nconcludes with a number of research trends and promising future research\ndirections. Furthermore, with this paper, readers can have a thorough\nunderstanding of recent advances in ML-based security and securing ML for CPS\nand countermeasures, as well as research trends in this active research area.\n"} {"abstract": " I point out fatal mathematical errors in the paper \"Quantum correlations are\nweaved by the spinors of the Euclidean primitives\" by Joy Christian, published\n(2019) in the journal Royal Society Open Science.\n"} {"abstract": " This article presents an algorithm for reducing measurement uncertainty of\none physical quantity when given oversampled measurements of two physical\nquantities with correlated noise. The algorithm assumes that the aleatoric\nmeasurement uncertainty in both physical quantities follows a Gaussian\ndistribution and relies on sampling faster than it is possible for the\nmeasurand (the true value of the physical quantity that we are trying to\nmeasure) to change (due to the system thermal time constant) to calculate the\nparameters of the noise distribution. In contrast to the Kalman and particle\nfilters, which respectively require state update equations and a map of one\nphysical quantity, our algorithm requires only the oversampled sensor\nmeasurements. When applied to temperature-compensated humidity sensors, it\nprovides reduced uncertainty in humidity estimates from correlated temperature\nand humidity measurements. In an experimental evaluation, the algorithm\nachieves average uncertainty reduction of 10.3 %. The algorithm incurs an\nexecution time overhead of 5.3 % when compared to the minimum algorithm\nrequired to measure and calculate the uncertainty. Detailed instruction-level\nemulation of a C-language implementation compiled to the RISC-V architecture\nshows that the uncertainty reduction program required 0.05 % more instructions\nper iteration than the minimum operations required to calculate the\nuncertainty.\n"} {"abstract": " MOA-2006-BLG-074 was selected as one of the most promising planetary\ncandidates in a retrospective analysis of the MOA collaboration: its asymmetric\nhigh-magnification peak can be perfectly explained by a source passing across a\ncentral caustic deformed by a small planet. However, after a detailed analysis\nof the residuals, we have realized that a single lens and a source orbiting\nwith a faint companion provides a more satisfactory explanation for all the\nobserved deviations from a Paczynski curve and the only physically acceptable\ninterpretation. Indeed the orbital motion of the source is constrained enough\nto allow a very good characterization of the binary source from the\nmicrolensing light curve. The case of MOA-2006-BLG-074 suggests that the\nso-called xallarap effect must be taken seriously in any attempts to obtain\naccurate planetary demographics from microlensing surveys.\n"} {"abstract": " Iteratively reweighted least squares (IRLS) is a popular approach to solve\nsparsity-enforcing regression problems in machine learning. State of the art\napproaches are more efficient but typically rely on specific coordinate pruning\nschemes.
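For reference, the classical epsilon-smoothed IRLS scheme for the Lasso that the abstract above starts from looks as follows; this is the textbook baseline, not the paper's reparametrized bilevel method.

```python
# Epsilon-smoothed IRLS for min_x 0.5*||Ax-b||^2 + lam*||x||_1: each
# iteration majorizes the l1 term by a quadratic at the current iterate
# and solves the resulting linear system.
import numpy as np

def irls_lasso(A, b, lam, n_iter=100, eps=1e-8):
    AtA, Atb = A.T @ A, A.T @ b
    # ridge warm start (starting from zero would freeze the weights)
    x = np.linalg.solve(AtA + lam * np.eye(A.shape[1]), Atb)
    for _ in range(n_iter):
        W = np.diag(lam / (np.abs(x) + eps))
        x = np.linalg.solve(AtA + W, Atb)
    return x
```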
In this work, we show how a surprisingly simple reparametrization of\nIRLS, coupled with a bilevel resolution (instead of an alternating scheme), is\nable to achieve top performances on a wide range of sparsity (such as Lasso,\ngroup Lasso and trace norm regularizations), regularization strength (including\nhard constraints), and design matrices (ranging from correlated designs to\ndifferential operators). Similarly to IRLS, our method only involves linear\nsystems resolutions, but in sharp contrast, corresponds to the minimization of\na smooth function. Despite being non-convex, we show that there are no spurious\nminima and that saddle points are \"ridable\", so that there always exists a\ndescent direction. We thus advocate for the use of a BFGS quasi-Newton solver,\nwhich makes our approach simple, robust and efficient. We perform a numerical\nbenchmark of the convergence speed of our algorithm against state of the art\nsolvers for Lasso, group Lasso, trace norm and linearly constrained problems.\nThese results highlight the versatility of our approach, removing the need to\nuse different solvers depending on the specificity of the ML problem under\nstudy.\n"} {"abstract": " Self-interacting dark matter (SIDM) models offer one way to reconcile\ninconsistencies between observations and predictions from collisionless cold\ndark matter (CDM) models on dwarf-galaxy scales. In order to incorporate the\neffects of both baryonic and SIDM interactions, we study a suite of\ncosmological-baryonic simulations of Milky-Way (MW)-mass galaxies from the\nFeedback in Realistic Environments (FIRE-2) project where we vary the SIDM\nself-interaction cross-section $\sigma/m$. We compare the shape of the main\ndark matter (DM) halo at redshift $z=0$ predicted by SIDM simulations (at\n$\sigma/m=0.1$, $1$, and $10$ cm$^2$ g$^{-1}$) with CDM simulations using the\nsame initial conditions. In the presence of baryonic feedback effects, we find\nthat SIDM models do not produce the large differences in the inner structure of\nMW-mass galaxies predicted by SIDM-only models. However, we do find that the\nradius where the shape of the total mass distribution begins to differ from\nthat of the stellar mass distribution is dependent on $\sigma/m$. This\ntransition could potentially be used to set limits on the SIDM cross-section in\nthe MW.\n"} {"abstract": " We reanalyze the experimental NMC data on the nonsinglet structure function\n$F_2^p-F_2^n$ and E866 data on the nucleon sea asymmetry $\bar{d}/\bar{u}$\nusing the truncated moments approach elaborated in our previous papers. With\nthe help of the special truncated sum one can overcome the problem of the\nunavoidable experimental restrictions on the Bjorken $x$ and effectively study\nthe fundamental sum rules for the parton distributions and structure functions.\nUsing only the data from the measured region of $x$, we obtain the Gottfried\nsum $\int_0^1 F_2^{ns}/x\, dx$ and the integrated nucleon sea asymmetry\n$\int_0^1 (\bar{d}-\bar{u})\, dx$. We compare our results with the reported\nexperimental values and with the predictions obtained for different global\nparametrizations for the parton distributions. We also discuss the discrepancy\nbetween the NMC and E866 results on $\int_0^1 (\bar{d}-\bar{u})\, dx$.
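The two integrals discussed above are tied together by the standard leading-order Gottfried-sum relation (charge symmetry assumed), which is why a discrepancy in one shows up in the other:

```latex
S_G \;=\; \int_0^1 \frac{F_2^p - F_2^n}{x}\,dx
    \;=\; \frac{1}{3} \;-\; \frac{2}{3}\int_0^1 \big(\bar{d}(x)-\bar{u}(x)\big)\,dx
```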
We\ndemonstrate that this discrepancy can be resolved by taking into account the\nhigher-twist effects.\n"} {"abstract": " The emission properties of tin plasmas, produced by the irradiation of\npreformed liquid tin targets by several-ns-long 2-$\mu$m-wavelength laser\npulses, are studied in the extreme ultraviolet (EUV) regime. In a two-pulse\nscheme, a pre-pulse laser is first used to deform tin microdroplets into thin,\nextended disks before the main (2$\mu$m) pulse creates the EUV-emitting plasma.\nIrradiating 30- to 300-$\mu$m-diameter targets with 2-$\mu$m laser pulses, we\nfind that the efficiency in creating EUV light around 13.5nm follows the\nfraction of laser light that overlaps with the target. Next, the effects of a\nchange in 2-$\mu$m drive laser intensity (0.6-1.8$\times 10^{11}$W/cm$^2$) and\npulse duration (3.7-7.4ns) are studied. It is found that the angular dependence\nof the emission of light within a 2\% bandwidth around 13.5nm and within the\nbackward 2$\pi$ hemisphere around the incoming laser beam is almost independent\nof intensity and duration of the 2-$\mu$m drive laser. With increasing target\ndiameter, the emission in this 2\% bandwidth becomes increasingly anisotropic,\nwith a greater fraction of light being emitted into the hemisphere of the\nincoming laser beam. For direct comparison, a similar set of experiments is\nperformed with a 1-$\mu$m-wavelength drive laser. Emission spectra, recorded in\na 5.5-25.5nm wavelength range, show significant self-absorption of light around\n13.5nm in the 1-$\mu$m case, while in the 2-$\mu$m case only an opacity-related\nbroadening of the spectral feature at 13.5nm is observed. This work\ndemonstrates the enhanced capabilities and performance of 2-$\mu$m-driven\nplasmas produced from disk targets when compared to 1-$\mu$m-driven plasmas,\nproviding strong motivation for the use of 2-$\mu$m lasers as drive lasers in\nfuture high-power sources of EUV light.\n"} {"abstract": " Advancements in digital technologies have enabled researchers to develop\na variety of Computational Music applications. Such applications are required\nto capture, process, and generate data related to music. Therefore, it is\nimportant to digitally represent music in a music-theoretic and concise manner.\nExisting approaches for representing music are ineffective in terms of\nutilizing music theory. In this paper, we address the disconnect between music\ntheory and computational music by developing an open-source representation tool\nbased on music theory. Through a wide range of use cases, we run an analysis on\nclassical music pieces to show the usefulness of the developed music embedding.\n"} {"abstract": " Network function virtualization (NFV) and content caching are two promising\ntechnologies that hold great potential for network operators and designers.\nThis paper optimizes the deployment of NFV and content caching in 5G networks\nand focuses on the associated power consumption savings. In addition, it\nintroduces an approach to combine content caching with NFV in one integrated\narchitecture for energy-aware 5G networks. A mixed integer linear programming\n(MILP) model has been developed to minimize the total power consumption by\njointly optimizing the cache size, virtual machine (VM) workload, and the\nlocations of both cache nodes and VMs. The results were investigated under the\nimpact of core network virtual machines (CNVMs) inter-traffic.
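A toy MILP in the spirit of the placement model described above, written with PuLP; the node names, power figures, capacities, and demand constraint are all illustrative assumptions, not the paper's formulation.

```python
# Toy placement MILP: choose cache/VM host nodes to minimize total power
# while covering a fixed demand.
import pulp

nodes = ["OLT", "metro", "core"]
power = {"OLT": 5.0, "metro": 8.0, "core": 12.0}   # W per placement (assumed)
capacity = {"OLT": 2, "metro": 4, "core": 8}       # units served (assumed)

prob = pulp.LpProblem("placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", nodes, cat="Binary")
prob += pulp.lpSum(power[n] * place[n] for n in nodes)          # total power
prob += pulp.lpSum(capacity[n] * place[n] for n in nodes) >= 6  # demand cover
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: int(place[n].value()) for n in nodes})
```

The full model in the paper additionally couples cache sizes, VM workloads, and inter-traffic, but the objective/constraint structure is of this general MILP form.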
The results show\nthat the optical line terminal (OLT) access network nodes are the optimum\nlocation for content caching and for hosting VMs during busy times of the day\nwhilst IP over WDM core network nodes are the optimum locations for caching and\nVM placement during off-peak time. Furthermore, the results reveal that a\nvirtualization-only approach is better than a caching-only approach for video\nstreaming services where the virtualization-only approach, compared to the\ncaching-only approach, achieves a maximum power saving of 7% (average 5%) when\nno CNVMs inter-traffic is considered and 6% (average 4%) with CNVMs\ninter-traffic at 10% of the total backhaul traffic. On the other hand, the\nintegrated approach has a maximum power saving of 15% (average 9%) with and\nwithout CNVMs inter-traffic compared to the virtualization-only approach, and\nit achieves a maximum power saving of 21% (average 13%) without CNVMs\ninter-traffic and 20% (average 12%) when CNVMs inter-traffic is considered\ncompared with the caching-only approach. In order to validate the MILP models\nand achieve real-time operation in our approaches, a heuristic was developed.\n"} {"abstract": " A symbolic method for solving linear recurrences of combinatorial and\nstatistical interest is introduced. This method essentially relies on a\nrepresentation of polynomial sequences as moments of a symbol that looks like the\nframework of a random variable with no reference to any probability space. We\ngive several examples of applications and state an explicit form for the class\nof linear recurrences involving Sheffer sequences satisfying a special initial\ncondition. The results presented here can be easily implemented in symbolic\nsoftware.\n"} {"abstract": " By Hacon-McKernan-Xu, there is a positive lower bound in each dimension for\nthe volume of all klt varieties with ample canonical class. We show that these\nbounds must go to zero extremely fast as the dimension increases, by\nconstructing a klt $n$-fold with ample canonical class whose volume is less\nthan $1/2^{2^n}$. These examples should be close to optimal.\n We also construct a klt Fano variety of each dimension $n$ such that\n$H^0(X,-mK_X)=0$ for all $1\leq m < b$ with $b$ roughly $2^{2^n}$. Here again\nthere is some bound in each dimension, by Birkar's theorem on boundedness of\ncomplements, and we are showing that the bound must increase extremely fast\nwith the dimension.\n"} {"abstract": " The effective low-energy late-time description of many body systems near\nthermal equilibrium provided by classical hydrodynamics in terms of dissipative\ntransport phenomena receives important corrections once the effects of\nstochastic fluctuations are taken into account. One such physical effect is the\noccurrence of long-time power law tails in correlation functions of conserved\ncurrents. In the hydrodynamic regime $\vec{k} \rightarrow 0$ this amounts to\nnon-analytic dependence of the correlation functions on the frequency $\omega$.\nIn this article, we consider a relativistic fluid with a conserved global\n$U(1)$ charge in the presence of a strong background magnetic field, and\ncompute the long-time tails in correlation functions of the stress tensor. The\npresence of the magnetic field renders the system anisotropic. In the absence\nof the magnetic field, there are three out-of-equilibrium transport parameters\nthat arise at the first order in the hydrodynamic derivative expansion, all of\nwhich are dissipative.
In the presence of a background magnetic field, there\nare ten independent out-of-equilibrium transport parameters at the first order,\nthree of which are non-dissipative and the rest are dissipative. We provide the\nmost general linearized equations about a given state of thermal equilibrium\ninvolving the various transport parameters in the presence of a magnetic field,\nand use them to compute the long-time tails for the fluid.\n"} {"abstract": " Countering hate speech in social media is one of the most challenging social\nproblems of our time. There are various types of anti-social behavior in social\nmedia. Foremost among them is aggressive behavior, which causes many social\nissues, affecting the social lives and mental health of social media\nusers. In this paper, we propose an end-to-end ensemble-based architecture to\nautomatically identify and classify aggressive tweets. Tweets are classified\ninto three categories: Covertly Aggressive, Overtly Aggressive, and\nNon-Aggressive. The proposed architecture is an ensemble of smaller subnetworks\nthat are able to characterize the feature embeddings effectively. We\ndemonstrate qualitatively that each of the smaller subnetworks is able to learn\nunique features. Our best model is an ensemble of Capsule Networks and results\nin a 65.2% F1 score on the Facebook test set, yielding a performance\ngain of 0.95% over the TRAC-2018 winners. The code and the model weights are\npublicly available at\nhttps://github.com/parthpatwa/Hater-O-Genius-Aggression-Classification-using-Capsule-Networks.\n"} {"abstract": " We study the transport properties for a family of geometrically frustrated\nmodels on the triangular lattice with an interaction scale far exceeding the\nsingle-particle bandwidth. Starting from the interaction-only limit, which can\nbe solved exactly, we analyze the transport and thermodynamic behavior as a\nfunction of filling and temperature at the leading non-trivial order in the\nsingle-particle hopping. Over a broad range of intermediate temperatures, we\nfind evidence of a dc resistivity scaling linearly with temperature and with\ntypical values far exceeding the quantum of resistance, $h/e^2$. At a sequence\nof commensurate fillings, the bad-metallic regime eventually crosses over into\ninteraction-induced insulating phases in the limit of low temperatures. We\ndiscuss the relevance of our results to experiments in cold-atom and moir\'e\nheterostructure-based platforms.\n"} {"abstract": " The aim of this note is to completely determine the second homology group of\nthe special queer Lie superalgebra $\mathfrak{sq}_n(R)$ coordinatized by a\nunital associative superalgebra $R$, which will be achieved via an isomorphism\nbetween the special linear Lie superalgebra $\mathfrak{sl}_{n}(R\otimes Q_1)$\nand the special queer Lie superalgebra $\mathfrak{sq}_n(R)$.\n"} {"abstract": " In a multiple linear regression model, the algebraic formula of the\ndecomposition theorem explains the relationship between the univariate\nregression coefficient and partial regression coefficient using geometry. It\nwas found that univariate regression coefficients are decomposed into their\nrespective partial regression coefficients according to the parallelogram rule.\nMulticollinearity is analyzed with the help of the decomposition theorem.
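As a concrete two-regressor instance of the decomposition just described (the standard identity, stated for reference rather than quoted from the paper): if $y$ is regressed on $x_1$ alone and also on $(x_1, x_2)$ jointly, then
\[
b_{y1} = \hat{\beta}_1 + \hat{\beta}_2\, b_{21}, \qquad b_{21} = \frac{\operatorname{Cov}(x_1, x_2)}{\operatorname{Var}(x_1)},
\]
so the univariate coefficient $b_{y1}$ splits into the partial coefficient $\hat{\beta}_1$ plus a term carried by the auxiliary regression of $x_2$ on $x_1$; it is exactly this second term that multicollinearity inflates.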
It\nwas also shown that the insignificance of partial regression coefficients of\nimportant explanatory variables is a sample phenomenon, while the cause of a\nsign-expectation deviation may be the population structure between the\nexplained and explanatory variables or may be the result of sample\nselection. At present, some methods for diagnosing multicollinearity consider\nonly the correlation of the explanatory variables, so these methods are\nbasically unreliable, and handling multicollinearity remains blind until the\ncauses are distinguished. An increase in the sample size can help identify the\ncauses of multicollinearity, and the difference method can play an auxiliary\nrole.\n"} {"abstract": " As the earliest stage of planet formation, massive, optically thick, and\ngas-rich protoplanetary disks provide key insights into the physics of star and\nplanet formation. When viewed edge-on, high resolution images offer a unique\nopportunity to study both the radial and vertical structures of these disks and\nrelate this to vertical settling, radial drift, grain growth, and changes in\nthe midplane temperatures. In this work, we present multi-epoch HST and Keck\nscattered light images, and an ALMA 1.3 mm continuum map for the remarkably\nflat edge-on protoplanetary disk SSTC2DJ163131.2-242627, a young solar-type\nstar in $\rho$ Ophiuchus. We model the 0.8 $\mu$m and 1.3 mm images in separate\nMCMC runs to investigate the geometry and dust properties of the disk using the\nMCFOST radiative transfer code. In scattered light, we are sensitive to the\nsmaller dust grains in the surface layers of the disk, while the sub-millimeter\ndust continuum observations probe larger grains closer to the disk midplane. An\nMCMC run combining both datasets using a covariance-based log-likelihood\nestimation was marginally successful, implying insufficient complexity in our\ndisk model. The disk is well characterized by a flared disk model with an\nexponentially tapered outer edge viewed nearly edge-on, though some degree of\ndust settling is required to reproduce the vertically thin profile and lack of\napparent flaring. A colder than expected disk midplane, evidence for dust\nsettling, and residual radial substructures all point to a more complex radial\ndensity profile to be probed with future, higher resolution observations.\n"} {"abstract": " Silicon ferroelectric field-effect transistors (FeFETs) with a low-k\ninterfacial layer (IL) between the ferroelectric gate stack and silicon channel\nsuffer from high write voltage, limited write endurance and large\nread-after-write latency due to early IL breakdown and charge trapping and\ndetrapping at the interface. We demonstrate low-voltage, high-speed memory\noperation with high write endurance using an IL-free back-end-of-line (BEOL)\ncompatible FeFET. We fabricate IL-free FeFETs with 28nm channel length and\n126nm width under a thermal budget <400C by integrating a 5nm thick Hf0.5Zr0.5O2\ngate stack with an amorphous Indium Tungsten Oxide (IWO) semiconductor channel. We\nreport a 1.2V memory window and read current window of 10^5 for program and\nerase, write latency of 20ns with +/-2V write pulses, read-after-write latency\n<200ns, write endurance cycles exceeding 5x10^10 and 2-bit/cell programming\ncapability.
Array-level analysis establishes the IL-free BEOL FeFET as a promising\ncandidate for logic-compatible high-performance on-chip buffer memory and\nmulti-bit weight cell for compute-in-memory accelerators.\n"} {"abstract": " We prove the asymptotic functional Poisson laws in the total variation norm\nand obtain estimates of the corresponding convergence rates for a large class\nof hyperbolic dynamical systems. These results generalize the ones obtained\nbefore in this area. Applications to intermittent solenoids, Axiom A systems,\nH\'enon attractors, and billiards are also considered.\n"} {"abstract": " Software engineering educators are continually challenged by rapidly evolving\nconcepts, technologies, and industry demands. Due to the omnipresence of\nsoftware in a digitalized society, higher education institutions (HEIs) have to\neducate the students such that they learn how to learn, and that they are\nequipped with a profound basic knowledge and with the latest knowledge about modern\nsoftware and system development. Since industry demands change constantly, HEIs\nare challenged in meeting such current and future demands in a timely manner.\nThis paper analyzes the current state of practice in software engineering\neducation. Specifically, we want to compare contemporary education with\nindustrial practice to understand if frameworks, methods and practices for\nsoftware and system development taught at HEIs reflect industrial practice. For\nthis, we conducted an online survey and collected information about 67 software\nengineering courses. Our findings show that development approaches taught at\nHEIs quite closely reflect industrial practice. We also found that the choice\nof what process to teach is sometimes driven by the wish to make a course\nsuccessful. Especially when this happens for project courses, it could be\nbeneficial to put more emphasis on building learning sequences with other\ncourses.\n"} {"abstract": " This study explores the potential of modern implicit solvers for stochastic\npartial differential equations in the simulation of real-time complex Langevin\ndynamics. Not only do these methods offer asymptotic stability, rendering the\nissue of runaway solutions moot, but they also allow us to simulate at\ncomparatively large Langevin time steps, leading to lower computational cost. We\ncompare different ways of regularizing the underlying path integral and\nestimate the errors introduced due to the finite Langevin time. Based on that\ninsight, we implement benchmark (non-)thermal simulations of the quantum\nanharmonic oscillator on the canonical Schwinger-Keldysh contour of short\nreal-time extent.\n"} {"abstract": " In this article, we consider a class of finite rank perturbations of Toeplitz\noperators that have simple eigenvalues on the unit circle. Under a suitable\nassumption on the behavior of the essential spectrum, we show that such\noperators are power bounded. The problem originates in the approximation of\nhyperbolic partial differential equations with boundary conditions by means of\nfinite difference schemes. Our result gives a positive answer to a conjecture\nby Trefethen, Kreiss and Wu that only a weak form of the so-called Uniform\nKreiss-Lopatinskii Condition is sufficient to imply power boundedness.\n"} {"abstract": " We address the problem of exposure correction of dark, blurry and noisy\nimages captured in low-light conditions in the wild.
Classical image-denoising\nfilters work well in the frequency space but are constrained by several factors\nsuch as the correct choice of thresholds, frequency estimates, etc. On the other\nhand, traditional deep networks are trained end-to-end in the RGB space by\nformulating this task as an image-translation problem. However, that is done\nwithout any explicit constraints on the inherent noise of the dark images, and\nsuch networks thus produce noisy and blurry outputs. To this end we propose a DCT/FFT-based\nmulti-scale loss function, which, when combined with traditional losses, trains\na network to translate the important features for visually pleasing output. Our\nloss function is end-to-end differentiable, scale-agnostic, and generic; i.e.,\nit can be applied to both RAW and JPEG images in most existing frameworks\nwithout additional overhead. Using this loss function, we report significant\nimprovements over the state-of-the-art using quantitative metrics and\nsubjective tests.\n"} {"abstract": " The \textit{node reliability} of a graph $G$ is the probability that at least\none node is operational and that the operational nodes can all communicate in\nthe subgraph that they induce, given that the edges are perfectly reliable but\neach node operates independently with probability $p\in[0,1]$. We show that\nunlike many other notions of graph reliability, the number of maximal intervals\nof decrease in $[0,1]$ is unbounded, and that there can be arbitrarily many\ninflection points in the interval as well.\n"} {"abstract": " For the Minkowski question mark function $?(x)$ we consider the derivative of the\nfunction $f_n(x) = \underbrace{?(?(...?}_\text{n times}(x)))$. Apart from\nobvious cases (rational numbers for example) it is non-trivial to find explicit\nexamples of numbers $x$ for which $f'_n(x)=0$. In this paper we present a set\nof irrational numbers, such that for every element $x_0$ of this set and for\nany $n\in\mathbb{Z}_+$ one has $f'_n(x_0)=0$.\n"} {"abstract": " Since their inception, learning techniques under the Reservoir Computing\nparadigm have shown a great modeling capability for recurrent systems without\nthe computing overheads required for other approaches. Among them, different\nflavors of echo state networks have attracted much attention over time, mainly\ndue to the simplicity and computational efficiency of their learning algorithm.\nHowever, these advantages do not compensate for the fact that echo state\nnetworks remain black-box models whose decisions cannot be easily explained\nto the general audience. This work addresses this issue by conducting an\nexplainability study of Echo State Networks when applied to learning tasks with\ntime series, image and video data. Specifically, the study proposes three\ndifferent techniques capable of eliciting understandable information about the\nknowledge grasped by these recurrent models, namely, potential memory, temporal\npatterns and pixel absence effect. Potential memory addresses questions related\nto the effect of the reservoir size on the capability of the model to store\ntemporal information, whereas temporal patterns unveils the recurrent\nrelationships captured by the model over time. Finally, pixel absence effect\nattempts to evaluate the effect of the absence of a given pixel when the echo\nstate network model is used for image and video classification.
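Since the echo state network itself is the object under study here, a minimal reference implementation of its two ingredients may be useful: a fixed random reservoir and a ridge-regression readout. This is the standard formulation rather than the authors' exact configuration, and all sizes, scalings, and the toy task below are illustrative assumptions.

```python
# Minimal echo state network: fixed random reservoir + ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # input weights (never trained)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # reservoir weights (never trained)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1 (echo state)

def run_reservoir(u):
    """Collect reservoir states x_t = tanh(W_in u_t + W x_{t-1})."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.1 * np.arange(1000))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge fit
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Only W_out is fit; the reservoir stays random. This is what makes the learning algorithm cheap, and also why the trained model is opaque without explainability techniques like the three proposed above.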
We showcase the\nbenefits of our proposed suite of techniques over three different domains of\napplicability: time series modeling, image and, for the first time in the\nrelated literature, video classification. Our results reveal that the proposed\ntechniques not only allow for an informed understanding of the way these models\nwork, but also serve as diagnostic tools capable of detecting issues inherited\nfrom data (e.g. presence of hidden bias).\n"} {"abstract": " We point out qualitatively different possibilities for the role of\nCP-conserving processes in generating cosmological particle-antiparticle\nasymmetries, with illustrative examples from models in leptogenesis and\nasymmetric dark matter production. In particular, we consider scenarios in\nwhich the CP-violating and CP-conserving processes are either both decays or\nboth scatterings, thereby being naturally of comparable rates. This is in\ncontrast to the previously considered CP-conserving processes in models of\nleptogenesis in different see-saw mechanisms, in which the CP-conserving\nscatterings typically have lower rates compared to the CP-violating decays, due\nto a Boltzmann suppression. We further point out that the CP-conserving\nprocesses can play a dual role if the asymmetry is generated in the mother\nsector itself, in contrast to the conventional scenarios in which it is\ngenerated in the daughter sector. This is because the CP-conserving processes\ninitially suppress the asymmetry generation by controlling the\nout-of-equilibrium number densities of the bath particles, but subsequently\nmodify the ratio of particle to antiparticle yields at the present epoch by\neliminating the symmetric component of the bath particles through\npair-annihilations, leading to a competing effect stemming from the same\nprocess at different epochs. We find that the asymmetric yields for relevant\nparticle-antiparticle systems can vary by orders of magnitude depending upon\nthe relative size of the CP-conserving and violating reaction rates.\n"} {"abstract": " Magnetic field-line reconnection is a universal plasma process responsible\nfor the conversion of magnetic field energy to plasma heating and charged\nparticle acceleration. Solar flares and Earth's magnetospheric substorms are\nthe two most investigated dynamical systems where magnetic reconnection is believed\nto be responsible for global magnetic field reconfiguration and energization of\nplasma populations. Such a reconfiguration includes the formation of long-living\ncurrent systems connecting the primary energy release region and the cold dense\nconductive plasma of the photosphere/ionosphere. In both flares and substorms the\nevolution of this current system correlates with the formation and dynamics of\nenergetic particle fluxes. Our study is focused on this similarity between\nflares and substorms. Using a wide range of datasets available for flare and\nsubstorm investigations, we qualitatively compare the dynamics of currents and\nenergetic particle fluxes for one flare and one substorm. We showed that there\nis a clear correlation between energetic particle bursts (associated with\nenergy release due to magnetic reconnection) and magnetic field\nreconfiguration/formation of the current system.
We then discuss how datasets of\nin-situ measurements in the magnetospheric substorm can help in the interpretation\nof datasets gathered for the solar flare.\n"} {"abstract": " The design of provably correct controllers for continuous-state stochastic\nsystems crucially depends on approximate finite-state abstractions and their\naccuracy quantification. For this quantification, one generally uses\napproximate stochastic simulation relations, whose constant precision limits\nthe achievable guarantees on the control design. This limitation especially\naffects higher-dimensional stochastic systems and complex formal\nspecifications. This work allows for variable precision by defining a\nsimulation relation that contains multiple precision layers. For bi-layered\nsimulation relations, we develop a robust dynamic programming approach yielding\na lower bound on the satisfaction probability of temporal logic specifications.\nWe illustrate the benefit of bi-layered simulation relations for linear\nstochastic systems in an example.\n"} {"abstract": " We are concerned with interior and global gradient estimates for solutions to\na class of singular quasilinear elliptic equations with measure data, whose\nprototype is given by the $p$-Laplace equation $-\Delta_p u=\mu$ with $p\in\n(1,2)$. The cases when $p\in \big(2-\frac 1 n,2\big)$ and $p\in\n\big(\frac{3n-2}{2n-1},2-\frac{1}{n}\big]$ were studied in [9] and [22],\nrespectively. In this paper, we improve the results in [22] and address the\nopen case when $p\in \big(1,\frac{3n-2}{2n-1}\big]$. Interior and global\nmodulus of continuity estimates of the gradients of solutions are also\nestablished.\n"} {"abstract": " A system of interacting classical oscillators is discussed, similar to a\nquantum mechanical system of a discrete energy level interacting with the\nenergy quasi-continuum of states considered by Fano. The limit of a continuous\nspectrum is analyzed together with the possible connection of the problem under\nstudy with the generation of coherent phonons.\n"} {"abstract": " The sequence of deformation bursts during plastic deformation exhibits\nscale-free features. In addition to the burst or avalanche sizes and the rate\nof avalanches, the process is characterized by correlations in the series, which\nbecome manifest in the resulting shape of the stress-strain curve. We analyze\nsuch features of plastic deformation with 2D and 3D simulations of discrete\ndislocation dynamics models and we show that only under severe plastic\ndeformation do the ensuing memory effects become negligible. The role of past\ndeformation history and dislocation pinning by disorder are studied. In\ngeneral, the correlations have the effect of reducing the scatter of the\nindividual stress-strain curves around the mean one.\n"} {"abstract": " We introduce the concept of impedance matching to axion dark matter by posing\nthe question of why axion detection is difficult, even though there is enough\npower in each square meter of incident dark-matter flux to energize an LED light\nbulb. By quantifying backreaction on the axion field, we show that a small\naxion-photon coupling does not by itself prevent an order-unity fraction of the\ndark matter from being absorbed through an optimal impedance match. We further\nshow, in contrast, that the electromagnetic charges and the self-impedance of\ntheir coupling to photons provide the principal constraint on power absorption\nintegrated across a search band.
Using the equations of axion electrodynamics,\nwe demonstrate stringent limitations on absorbed power in linear,\ntime-invariant, passive receivers. Our results yield fundamental constraints,\narising from the photon-electron interaction, on improving integrated power\nabsorption beyond the cavity haloscope technique. The analysis also has\nsignificant practical implications, showing apparent tension with the\nsensitivity projections for a number of planned axion searches. We additionally\nprovide a basis for more accurate signal power calculations and calibration\nmodels, especially for receivers using multi-wavelength open configurations\nsuch as dish antennas and dielectric haloscopes.\n"} {"abstract": " Given a simple connected compact Lie group $K$ and a maximal torus $T$ of\n$K$, the Weyl group $W=N_K(T)/T$ naturally acts on $T$. First, we use the\ncombinatorics of the (extended) affine Weyl group to provide an explicit\n$W$-equivariant triangulation of $T$. We describe the associated cellular\nhomology chain complex and give a formula for the cup product on its dual\ncochain complex, making it a $\mathbb{Z}[W]$-dg-algebra. Next, remarking that\nthe combinatorics of this dg-algebra is still valid for Coxeter groups, we\nassociate a closed compact manifold $\mathbf{T}(W)$ to any finite irreducible\nCoxeter group $W$, which coincides with a torus if $W$ is a Weyl group and is\nhyperbolic in other cases. Of course, we focus our study on\nnon-crystallographic groups, which are $I_2(m)$ with $m=5$ or $m\ge 7$, $H_3$\nand $H_4$. The manifold $\mathbf{T}(W)$ comes with a $W$-action and an\nequivariant triangulation, whose related $\mathbb{Z}[W]$-dg-algebra is the one\nmentioned above. We finish by computing the homology of $\mathbf{T}(W)$, as a\nrepresentation of $W$.\n"} {"abstract": " We present an approach for compressing volumetric scalar fields using\nimplicit neural representations. Our approach represents a scalar field as a\nlearned function, wherein a neural network maps a point in the domain to an\noutput scalar value. By setting the number of weights of the neural network to\nbe smaller than the input size, we achieve compressed representations of scalar\nfields, thus framing compression as a type of function approximation. Combined\nwith carefully quantizing network weights, we show that this approach yields\nhighly compact representations that outperform state-of-the-art volume\ncompression approaches. The conceptual simplicity of our approach enables a\nnumber of benefits, such as support for time-varying scalar fields, optimizing\nto preserve spatial gradients, and random-access field evaluation. We study the\nimpact of network design choices on compression performance, highlighting how\nsimple network architectures are effective for a broad range of volumes.\n"} {"abstract": " The proliferation of resourceful mobile devices that store rich,\nmultidimensional and privacy-sensitive user data motivates the design of\nfederated learning (FL), a machine-learning (ML) paradigm that enables mobile\ndevices to produce an ML model without sharing their data. However, the\nmajority of the existing FL frameworks rely on centralized entities. In this\nwork, we introduce IPLS, a fully decentralized federated learning framework\nthat is partially based on the interplanetary file system (IPFS).
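IPLS is described at the architectural level; for reference, the aggregation primitive at the heart of any federated averaging scheme, centralized or decentralized, can be sketched as below. This is the generic FedAvg weighting rule, not IPLS's actual IPFS-based protocol.

```python
# Generic federated-averaging step: combine local model updates,
# weighted by the size of each participant's local dataset.
import numpy as np

def fedavg(updates, sizes):
    """Weighted average of parameter vectors from participating peers."""
    sizes = np.asarray(sizes, dtype=float)
    weights = sizes / sizes.sum()
    return sum(w * u for w, u in zip(weights, updates))

local_models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(fedavg(local_models, sizes=[100, 300]))   # -> [2.5 3.5]
```

In a decentralized setting the same rule is applied peer-side to whatever subset of updates a participant has retrieved, which is why robustness to intermittent connectivity matters.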
By using IPLS\nand connecting into the corresponding private IPFS network, any party can\ninitiate the training process of an ML model or join an ongoing training\nprocess that has already been started by another party. IPLS scales with the\nnumber of participants, is robust against intermittent connectivity and dynamic\nparticipant departures/arrivals, requires minimal resources, and guarantees\nthat the accuracy of the trained model quickly converges to that of a\ncentralized FL framework with an accuracy drop of less than one per thousand.\n"} {"abstract": " In this paper, we introduce a new concept: the Lions tree. These objects\narise in Taylor expansions involving the Lions derivative and prove invaluable\nin classifying the dynamics of mean-field stochastic differential equations.\n We discuss Lions trees, derive an algebra spanned by Lions trees and explore\nhow couplings between Lions trees lead to a coupled Hopf algebra. Using this\nframework, we construct a new way to characterise rough signals driving\nmean-field equations: the probabilistic rough path. A comprehensive\ngeneralisation of the ideas first introduced in \cite{2019arXiv180205882.2B},\nthis framework promises powerful insights into how interactions with a collective\ndetermine the dynamics of an individual within this collective.\n"} {"abstract": " We consider a Bayesian framework based on "probability of decision" for\ndose-finding trial designs. The proposed PoD-BIN design evaluates the posterior\npredictive probabilities of up-and-down decisions. In PoD-BIN, multiple grades\nof toxicity, categorized as mild toxicity (MT) and dose-limiting toxicity\n(DLT), are modeled simultaneously, and the primary outcome of interest is\ntime-to-toxicity for both MT and DLT. This allows the possibility of enrolling\nnew patients when previously enrolled patients are still being followed for\ntoxicity, thus potentially shortening trial length. The Bayesian decision rules\nin PoD-BIN utilize the probability of decisions to balance the need to speed up\nthe trial and the risk of exposing patients to overly toxic doses. We\ndemonstrate via numerical examples the resulting balance of speed and safety of\nPoD-BIN and compare to existing designs.\n"} {"abstract": " Recently, Blockchain technology adoption has expanded to many application\nareas due to the evolution of smart contracts. However, developing smart\ncontracts is non-trivial and challenging due to the lack of tools and expertise\nin this field. A promising solution to overcome this issue is to use\nModel-Driven Engineering (MDE); however, using models still involves a learning\ncurve and might not be suitable for non-technical users. To tackle this\nchallenge, chatbots or conversational interfaces can be used to assist\nnon-technical users in specifying a smart contract in a gradual and interactive\nmanner.\n In this paper, we propose iContractBot, a chatbot for modeling and developing\nsmart contracts. Moreover, we investigate how to integrate iContractBot with\niContractML, a domain-specific modeling language for developing smart\ncontracts, and instantiate intention models from the chatbot.
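To make the model-to-text step described next concrete, here is a deliberately small illustration: a hypothetical intention model rendered into a Solidity-like stub. The model fields and the template are invented for this sketch and do not reflect iContractML's actual metamodel.

```python
# Toy model-to-text transformation: dict-based intention model -> contract stub.
model = {
    "contract": "Escrow",
    "assets": [{"name": "payment", "type": "uint256"}],
    "participants": ["buyer", "seller"],
}

def render(m):
    """Render the intention model as Solidity-like source text."""
    fields = "\n".join(f"    {a['type']} public {a['name']};" for a in m["assets"])
    roles = "\n".join(f"    address public {p};" for p in m["participants"])
    return f"contract {m['contract']} {{\n{fields}\n{roles}\n}}"

print(render(model))
```

A real pipeline would validate the model against the metamodel before generation and emit deployment artifacts for a concrete platform rather than a bare stub.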
The iContractBot\nframework provides a domain-specific language (DSL) based on the user intention\nand performs model-to-text transformation to generate the smart contract code.\nA smart contract use case is presented to demonstrate how iContractBot can be\nutilized for creating models and generating the deployment artifacts for smart\ncontracts based on a simple conversation.\n"} {"abstract": " The physical mechanism of meridians (acupuncture lines) is studied and a\ntheoretical model is proposed. The meridians are explained as an alternating\nsystem responsible for the integration and the regulation of life in addition\nto the neuro-humoral regulation. We propose that meridian conduction is a\nkind of low-frequency mechanical wave (soliton) propagating along the slits of muscles.\nThe anatomical-physiological and experimental evidence is reviewed. It is\ndemonstrated that the stabilization of the soliton is guaranteed by the\ncoupling between muscle vibration and cell activation. Therefore the\npropagation of the mechanical wave dominates the excitation of cell groups along\nthe meridian. The meridian wave equations and their solutions are deduced, and how\nthese results can be used in studying human health is briefly discussed.\n"} {"abstract": " We investigate the dynamics brought on by an impulse perturbation in two\ninfinite-range quantum Ising models coupled to each other and to a dissipative\nbath. We show that, if dissipation is faster at higher excitation energies,\nthe pulse perturbation cools down the low-energy sector of the system, at the\nexpense of the high-energy one, eventually stabilising a transient\nsymmetry-broken state at temperatures higher than the equilibrium critical one.\nSuch a non-thermal quasi-steady state may survive for quite a long time after the\npulse, if the latter is properly tailored.\n"} {"abstract": " Consistent alpha generation, i.e., maintaining an edge over the market,\nunderpins the ability of asset traders to reliably generate profits. Technical\nindicators and trading strategies are commonly used tools to determine when to\nbuy/hold/sell assets, yet these are limited by the fact that they operate on\nknown values. Over the past decades, multiple studies have investigated the\npotential of artificial intelligence in stock trading in conventional markets,\nwith some success. In this paper, we present RCURRENCY, an RNN-based trading\nengine that predicts data in the highly volatile digital asset market and is\nable to successfully manage an asset portfolio in a live environment. By\ncombining asset value prediction and conventional trading tools, RCURRENCY\ndetermines whether to buy, hold or sell digital currencies at a given point in\ntime. Experimental results show that, given the data of an interval $t$, a\nprediction with an error of less than 0.5\% of the data at the subsequent\ninterval $t+1$ can be obtained. Evaluation of the system through backtesting\nshows that RCURRENCY can be used to successfully not only maintain a stable\nportfolio of digital assets in a simulated live environment using real\nhistorical trading data but even increase the portfolio value over time.\n"} {"abstract": " The formation of Uranus' regular moons has been suggested to be linked to the\norigin of its enormous spin axial tilt (~98^o). A giant impact between\nproto-Uranus and a 2-3 M_Earth impactor could lead to a large tilt and to the\nformation of an impact-generated disc, where prograde and circular satellites\nare accreted.
The most intriguing feature of the current regular Uranian\nsatellite system is that it possesses a positive trend in the mass-distance\ndistribution and likely also in the bulk density, implying that viscous\nspreading of the disc after the giant impact plays a crucial role in shaping\nthe architecture of the final system. In this paper, we investigate the\nformation of Uranus' satellites by combining results of SPH simulations for the\ngiant impact, a 1D semi-analytic disc model for viscous spreading of the\npost-impact disc, and N-body simulations for the assembly of satellites from a\ndisc of moonlets. Assuming the condensed rock (i.e., silicate) remains small\nand available to stick onto the relatively rapidly growing condensed water-ice,\nwe find that the best case in reproducing the observed mass and bulk\ncomposition of Uranus' satellite system is a purely rocky impactor of 3 M_Earth\ncolliding with the young Uranus at an impact parameter b = 0.75. Such an\noblique collision could also naturally explain Uranus' large tilt and, possibly,\nits low internal heat flux. The giant impact scenario can naturally explain the\nkey features of Uranus and its regular moons. We therefore suggest that the\nUranian satellite system formed as a result of an impact rather than from a\ncircumplanetary disc.\n"} {"abstract": " We investigate a set of techniques for RNN Transducers (RNN-Ts) that were\ninstrumental in lowering the word error rate on three different tasks\n(Switchboard 300 hours, conversational Spanish 780 hours and conversational\nItalian 900 hours). The techniques pertain to architectural changes, speaker\nadaptation, language model fusion, model combination and the general training\nrecipe. First, we introduce a novel multiplicative integration of the encoder\nand prediction network vectors in the joint network (as opposed to additive).\nSecond, we discuss the applicability of i-vector speaker adaptation to RNN-Ts\nin conjunction with data perturbation. Third, we explore the effectiveness of\nthe recently proposed density ratio language model fusion for these tasks. Last\nbut not least, we describe the other components of our training recipe and\ntheir effect on recognition performance. We report a 5.9% and 12.5% word error\nrate on the Switchboard and CallHome test sets of the NIST Hub5 2000 evaluation\nand a 12.7% WER on the Mozilla CommonVoice Italian test set.\n"} {"abstract": " Autoregressive (AR) models, such as attention-based encoder-decoder\nmodels and the RNN-Transducer, have achieved great success in speech recognition.\nThey predict the output sequence conditioned on the previous tokens and\nacoustic encoded states, which is inefficient on GPUs. Non-autoregressive\n(NAR) models can get rid of the temporal dependency between the output tokens\nand predict all the output tokens in at least one step. However, NAR\nmodels still face two major problems. On the one hand, there is still a great\ngap in performance between the NAR models and the advanced AR models. On the\nother hand, it is difficult for most of the NAR models to train and converge. To\naddress these two problems, we propose a new model named the two-step\nnon-autoregressive transformer (TSNAT), which improves the performance and\naccelerates the convergence of the NAR model by learning prior knowledge from\na parameter-sharing AR model. Furthermore, we introduce a two-stage method\ninto the inference process, which improves the model performance greatly.
All\nthe experiments are conducted on the public Chinese Mandarin dataset AISHELL-1.\nThe results show that the TSNAT can achieve performance competitive with the\nAR model and outperform many complicated NAR models.\n"} {"abstract": " Mathematical models are formal and simplified representations of the\nknowledge related to a phenomenon. In classical epidemic models, a neglected\naspect is the heterogeneity of disease transmission and progression linked to\nthe viral load of each infectious individual. Here, we attempt to investigate\nthe interplay between the evolution of individuals' viral load and the epidemic\ndynamics from a theoretical point of view. In the framework of multi-agent\nsystems, we propose a particle stochastic model describing the infection\ntransmission through interactions among agents and the individual physiological\ncourse of the disease. Agents have a double microscopic state: a discrete\nlabel that denotes the epidemiological compartment to which they belong and\nswitches as a consequence of a Markovian process, and a microscopic trait,\nrepresenting a normalized measure of their viral load, that changes as a\nconsequence of binary interactions or interactions with a background.\nSpecifically, we consider Susceptible--Infected--Removed--like dynamics where\ninfectious individuals may be isolated from the general population and the\nisolation rate may depend on the viral load sensitivity and frequency of tests.\nWe derive kinetic evolution equations for the distribution functions of the\nviral load of the individuals in each compartment, whence, via suitable\nupscaling procedures, we obtain a macroscopic model for the densities and viral\nload momentum. We then perform a qualitative analysis of the ensuing\nmacroscopic model, and we present numerical tests in the case of both constant\nand viral load-dependent isolation control. Also, the matching between the\naggregate trends obtained from the macroscopic descriptions and the original\nparticle dynamics simulated by a Monte Carlo approach is investigated.\n"} {"abstract": " This is a summary of research done in the author's second and third year of\nundergraduate mathematics at The University of Toronto. As the previous details\nwere largely scattered and disorganized, the author decided to rewrite the\ncumulative research. The goal of this paper is to construct a family of\nanalytic functions $\alpha \uparrow^n z : (1,e^{1/e}) \times \mathbb{C}_{\Re(z)\n> 0} \to \mathbb{C}_{\Re(z) > 0}$ using methods from fractional calculus. This\nfamily satisfies the hyper-operator chain, $\alpha \uparrow^{n-1} \alpha\n\uparrow^n z = \alpha \uparrow^n (z+1)$, with the initial condition $\alpha\n\uparrow^0 z = \alpha \cdot z$.\n"} {"abstract": " Fiber-reinforced ceramic-matrix composites are advanced materials resistant\nto high temperatures, with application to aerospace engineering. Their analysis\ndepends on the detection of embedded fibers, with semi-supervised techniques\nusually employed to separate fibers within the fiber beds. Here we present an\nopen computational pipeline to detect fibers in ex-situ X-ray computed\ntomography fiber beds. To separate the fibers in these samples, we tested four\ndifferent architectures of fully convolutional neural networks.
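The Dice and Matthews coefficients quoted next are standard agreement metrics for binary segmentation masks; for reference, their usual definitions can be computed as below (generic definitions, not code from the paper's pipeline).

```python
# Standard overlap metrics between a ground-truth mask and a predicted mask.
import numpy as np

def dice(truth, pred):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    truth, pred = truth.astype(bool), pred.astype(bool)
    return 2 * np.logical_and(truth, pred).sum() / (truth.sum() + pred.sum())

def matthews(truth, pred):
    """Matthews correlation coefficient from the binary confusion matrix."""
    truth, pred = truth.astype(bool), pred.astype(bool)
    tp = np.logical_and(truth, pred).sum()
    tn = np.logical_and(~truth, ~pred).sum()
    fp = np.logical_and(~truth, pred).sum()   # predicted positive, truly negative
    fn = np.logical_and(truth, ~pred).sum()   # missed positive
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

truth = np.array([[1, 1, 0], [0, 1, 0]])
pred = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(truth, pred), matthews(truth, pred))
```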
When comparing\nour neural network approach to a semi-supervised one, we obtained Dice and\nMatthews coefficients greater than $92.28 \pm 9.65\%$, reaching up to $98.42\n\pm 0.03\%$, showing that the network results are close to the\nhuman-supervised ones in these fiber beds, in some cases separating fibers that\nhuman-curated algorithms could not find. The software we generated in this\nproject is open source, released under a permissive license, and can be freely\nadapted and re-used in other domains. All data and instructions on how to\ndownload and use it are also available.\n"} {"abstract": " We have designed a two-stage, 10-step process to give organisations a method\nto analyse small local energy systems (SLES) projects based on their Cyber\nPhysical System components in order to develop future-proof energy systems.\n SLES are often developed for a specific range of use cases and functions, and\nthese match the specific requirements and needs of the community, location or\nsite under consideration. During the design and commissioning, new and specific\ncyber-physical architectures are developed. These are the control and data\nsystems that are needed to bridge the gap between the physical assets, the\ncaptured data and the control signals. Often, the cyber-physical architecture\nand infrastructure are focused on functionality and the delivery of the specific\napplications.\n But we find that technologies and approaches have arisen from other fields\nthat, if used within SLES, could support the flexibility, scalability and\nreusability vital to their success. As these can improve the operational data\nsystems, they can also be used to enhance predictive functions. If used and\ndeployed effectively, these new approaches can offer longer-term improvements\nin the use and effectiveness of SLES, while allowing the concepts and designs\nto be capitalised upon through wider roll-out and the offering of commercial\nservices or products.\n"} {"abstract": " Mukai varieties are Fano varieties of Picard number one and coindex three. In\ngenus seven to ten they are linear sections of some special homogeneous\nvarieties. We describe the generic automorphism groups of these varieties. When\nthey are expected to be trivial for dimensional reasons, we show they are\nindeed trivial, up to three interesting and unexpected exceptions in genera 7,\n8, 9, and codimension 4, 3, 2 respectively. We conclude in particular that a\ngeneric prime Fano threefold of genus g has no automorphisms for 7 $\le$ g\n$\le$ 10. In the Appendix by Y. Prokhorov, the latter statement is extended to\ng = 12.\n"} {"abstract": " Over a decade ago De Loera, Haws and K\"oppe conjectured that Ehrhart\npolynomials of matroid polytopes have only positive coefficients and that the\ncoefficients of the corresponding $h^*$-polynomials form a unimodal sequence.\nThe first of these intensively studied conjectures has recently been disproved\nby the first author, who gave counterexamples in all ranks greater than or equal to\nthree. In this article we complete the picture by showing that Ehrhart\npolynomials of matroids of lower rank have indeed only positive coefficients.\nMoreover, we show that they are coefficient-wise bounded by the Ehrhart\npolynomials of minimal and uniform matroids.
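Recall the standard definitions behind these statements: for a lattice polytope $P \subset \mathbb{R}^n$, the Ehrhart polynomial counts lattice points in integer dilates, and the $h^*$-polynomial is the numerator of its generating function (textbook definitions, included here only for reference):
\[
L_P(t) = \#\left(tP \cap \mathbb{Z}^n\right) \quad \text{for } t \in \mathbb{Z}_{\geq 0}, \qquad
\sum_{t \geq 0} L_P(t)\, z^t = \frac{h^*(z)}{(1-z)^{\dim P + 1}}.
\]
Positivity of the coefficients of $L_P$ and unimodality of the coefficients of $h^*$ are the two conjectured properties discussed here.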
We furthermore address the second\nconjecture by proving that $h^*$-polynomials of matroid polytopes of sparse\npaving matroids of rank two are real-rooted and therefore have log-concave and\nunimodal coefficients.\n"} {"abstract": " We present GrammarTagger, an open-source grammar profiler which, given an\ninput text, identifies grammatical features useful for language education. The\nmodel architecture enables it to learn from a small number of texts annotated\nwith spans and their labels, which 1) enables easier and more intuitive\nannotation, 2) supports overlapping spans, and 3) is less prone to error\npropagation, compared to complex hand-crafted rules defined on\nconstituency/dependency parses. We show that we can bootstrap a grammar\nprofiler model with $F_1 \approx 0.6$ from only a couple hundred sentences both\nin English and Chinese, which can be further boosted via learning a\nmultilingual model. With GrammarTagger, we also build Octanove Learn, a search\nengine of language learning materials indexed by their reading difficulty and\ngrammatical features. The code and pretrained models are publicly available at\n\url{https://github.com/octanove/grammartagger}.\n"} {"abstract": " Recently, research on mental health conditions using public online data,\nincluding Reddit, has surged in NLP and health research but has not reported\nuser characteristics, which are important to judge the generalisability of\nfindings. This paper shows how existing NLP methods can yield information on\nclinical, demographic, and identity characteristics of almost 20K Reddit users\nwho self-report a bipolar disorder diagnosis. This population consists of\nslightly more feminine- than masculine-gendered, mainly young or middle-aged,\nUS-based adults who often report additional mental health diagnoses, which is\ncompared with general Reddit statistics and epidemiological studies.\nAdditionally, this paper carefully evaluates all methods and discusses ethical\nissues.\n"} {"abstract": " In the present article, we study the Hawking effect and the bounds on the\ngreybody factor in a spacetime with radial deformation. This deformation is\nexpected to carry the imprint of a non-Einsteinian theory of gravity, but\nshares some of the important characteristics of general relativity (GR). In\nparticular, this radial deformation restores the asymptotic behavior, and\nalso allows for the separation of the scalar field equation in terms of the\nangular and radial coordinates -- making it suitable to study the Hawking\neffect and greybody factors. However, the radial deformation would introduce a\nchange in the location of the horizon, and therefore, the temperature of the\nHawking effect naturally changes. In fact, we observe that the deformation\nparameter has an enhancing effect on both the temperature and the bounds on the\ngreybody factor, which introduces a useful distinction from the Kerr spacetime.\nWe discuss these effects elaborately, and broadly study the thermal behavior of\na radially deformed spacetime.\n"} {"abstract": " This paper demonstrates how spectrum up to 1 THz will support mobile\ncommunications beyond 5G in the coming decades. Results of rooftop surrogate\nsatellite/tower base station measurements at 140 GHz show the natural isolation\nbetween terrestrial networks and surrogate satellite systems, as well as\nbetween terrestrial mobile users and co-channel fixed backhaul links.
These\nfirst-of-their-kind measurements and accompanying analysis show that by keeping\nthe energy radiated by terrestrial emitters on the horizon (e.g., elevation\nangles $\leq$15\textdegree), there will not likely be interference in the same\nor adjacent bands between passive satellite sensors and terrestrial terminals,\nor between mobile links and terrestrial backhaul links at frequencies above 100\nGHz.\n"} {"abstract": " In this paper we discuss applications of the theory developed in [21] and\n[22] in studying certain Galois groups and splitting fields of rational\nfunctions in $\mathbb Q\left(X_0(N)\right)$ using Hilbert's irreducibility\ntheorem and modular forms. We also consider the computational aspect of the problem\nusing MAGMA and SAGE.\n"} {"abstract": " We provide the first construction of stationary measures for the open KPZ\nequation on the spatial interval $[0,1]$ with general inhomogeneous Neumann\nboundary conditions at $0$ and $1$ depending on real parameters $u$ and $v$,\nrespectively. When $u+v\geq 0$ we uniquely characterize the constructed\nstationary measures through their multipoint Laplace transform, which we prove\nis given in terms of a stochastic process that we call the continuous dual Hahn\nprocess.\n"} {"abstract": " Generative models are now capable of producing highly realistic images that\nlook nearly indistinguishable from the data on which they are trained. This\nraises the question: if we have good enough generative models, do we still need\ndatasets? We investigate this question in the setting of learning\ngeneral-purpose visual representations from a black-box generative model rather\nthan directly from data. Given an off-the-shelf image generator without any\naccess to its training data, we train representations from the samples output\nby this generator. We compare several representation learning methods that can\nbe applied to this setting, using the latent space of the generator to generate\nmultiple "views" of the same semantic content. We show that for contrastive\nmethods, this multiview data can naturally be used to identify positive pairs\n(nearby in latent space) and negative pairs (far apart in latent space). We\nfind that the resulting representations rival those learned directly from real\ndata, but that good performance requires care in the sampling strategy applied\nand the training method. Generative models can be viewed as a compressed and\norganized copy of a dataset, and we envision a future where more and more\n"model zoos" proliferate while datasets become increasingly unwieldy, missing,\nor private. This paper suggests several techniques for dealing with visual\nrepresentation learning in such a future. Code is released on our project page:\nhttps://ali-design.github.io/GenRep/\n"} {"abstract": " Generative Adversarial Networks (GANs) currently achieve the state-of-the-art\nsound synthesis quality for pitched musical instruments using a 2-channel\nspectrogram representation consisting of log magnitude and instantaneous\nfrequency (the "IFSpectrogram"). Many other synthesis systems use\nrepresentations derived from the magnitude spectra, and then depend on a\nbackend component to invert the output magnitude spectrograms, which generally\nresults in audible artefacts associated with the inversion process.
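The backend inversion step referred to here can be illustrated with the classic Griffin-Lim iteration, sketched below with SciPy as a generic stand-in; the paper itself advocates PGHI, which is not reproduced here, and the window size and iteration count are arbitrary choices.

```python
# Griffin-Lim phase recovery from a magnitude-only spectrogram.
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=50, nperseg=512):
    """Alternate between the magnitude constraint and STFT consistency."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))   # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase, nperseg=nperseg)       # back to time domain
        _, _, S = stft(x, nperseg=nperseg)               # re-analyze
        cols = min(S.shape[1], mag.shape[1])             # guard against frame drift
        mag, phase = mag[:, :cols], np.exp(1j * np.angle(S[:, :cols]))
    _, x = istft(mag * phase, nperseg=nperseg)
    return x

sig = np.sin(2 * np.pi * 440 * np.arange(16384) / 16000)  # toy 440 Hz tone
_, _, S = stft(sig, nperseg=512)
rec = griffin_lim(np.abs(S))
```

Because only the magnitude is retained, the recovered phase is an estimate; the mismatch is the source of the audible artefacts mentioned above.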
However, for\nsignals that have closely-spaced frequency components such as non-pitched and\nother noisy sounds, training the GAN on the 2-channel IFSpectrogram\nrepresentation offers no advantage over magnitude-spectra-based\nrepresentations. In this paper, we propose that training GANs on single-channel\nmagnitude spectra, and using the Phase Gradient Heap Integration (PGHI)\ninversion algorithm, is a better overall approach for audio synthesis\nmodeling of diverse signals that include pitched, non-pitched, and dynamically\ncomplex sounds. We show that this method produces higher-quality output for\nwideband and noisy sounds, such as pops and chirps, compared to using the\nIFSpectrogram. Furthermore, the sound quality for pitched sounds is comparable\nto using the IFSpectrogram, even while using a simpler representation with half\nthe memory requirements.\n"} {"abstract": " While the anomalous Hall effect can manifest even without an external\nmagnetic field, time reversal symmetry is nonetheless still broken by the\ninternal magnetization of the sample. Recently, it has been shown that certain\nmaterials without an inversion center allow for a nonlinear type of anomalous\nHall effect whilst retaining time reversal symmetry. The effect may arise\neither from Berry curvature or from various asymmetric scattering mechanisms.\nHere, we report the observation of an extremely large $c$-axis nonlinear\nanomalous Hall effect in the non-centrosymmetric T$_d$ phase of MoTe$_2$ and\nWTe$_2$ without intrinsic magnetic order. We find that the effect is dominated\nby skew-scattering at higher temperatures combined with another scattering\nprocess active at low temperatures. Application of higher bias yields an\nextremely large Hall ratio of $E_\perp /E_\parallel$=2.47 and corresponding\nanomalous Hall conductivity of order 8x10$^7$S/m.\n"} {"abstract": " Rationalizing which parts of a molecule drive the predictions of a molecular\ngraph convolutional neural network (GCNN) can be difficult. To help, we propose\ntwo simple regularization techniques to apply during the training of GCNNs:\nBatch Representation Orthonormalization (BRO) and Gini regularization. BRO,\ninspired by molecular orbital theory, encourages graph convolution operations\nto generate orthonormal node embeddings. Gini regularization is applied to the\nweights of the output layer and constrains the number of dimensions the model\ncan use to make predictions. We show that Gini and BRO regularization can\nimprove the accuracy of state-of-the-art GCNN attribution methods on artificial\nbenchmark datasets. In a real-world setting, we demonstrate that medicinal\nchemists significantly prefer explanations extracted from regularized models.\nWhile we only study these regularizers in the context of GCNNs, both can be\napplied to other types of neural networks.\n"} {"abstract": " Most online multi-object trackers perform object detection stand-alone in a\nneural net without any input from tracking. In this paper, we present a new\nonline joint detection and tracking model, TraDeS (TRAck to DEtect and\nSegment), exploiting tracking clues to assist detection end-to-end. TraDeS\ninfers object tracking offsets from a cost volume, which are used to propagate\nprevious object features for improving current object detection and\nsegmentation. Effectiveness and superiority of TraDeS are shown on 4 datasets,\nincluding MOT (2D tracking), nuScenes (3D tracking), MOTS and Youtube-VIS\n(instance segmentation tracking).
Project page:\nhttps://jialianwu.com/projects/TraDeS.html.\n"} {"abstract": " For partial, nondeterministic, finite state machines, a new conformance\nrelation called strong reduction is presented. It complements other existing\nconformance relations in the sense that the new relation is well-suited for\nmodel-based testing of systems whose inputs are enabled or disabled, depending\non the actual system state. Examples of such systems are graphical user\ninterfaces and systems with interfaces that can be enabled or disabled in a\nmechanical way. We present a new test generation algorithm producing complete\ntest suites for strong reduction. The suites are executed according to the\ngrey-box testing paradigm: it is assumed that the state-dependent sets of\nenabled inputs can be identified during test execution, while the\nimplementation states remain hidden, as in black-box testing. It is shown that\nthis grey-box information is exploited by the generation algorithm in such a\nway that the resulting best-case test suite size is only linear in the state\nspace size of the reference model. Moreover, examples show that this may lead\nto significant reductions of test suite size in comparison to true black-box\ntesting for strong reduction.\n"} {"abstract": " Residual coherence is a graphical tool for selecting potential second-order\ninteraction terms as functions of a single time series and its lags. This paper\nextends the notion of residual coherence to account for interaction terms of\nmultiple time series. Moreover, an alternative criterion, integrated spectrum,\nis proposed to facilitate this graphical selection.\n A financial market application shows that new insights can be gained\nregarding implied market volatility.\n"} {"abstract": " V838 Mon erupted in 2002, quickly becoming the prototype of a new type of\nstellar eruption known today as (luminous) red novae. The red nova outbursts\nare thought to be caused by stellar mergers. The merger in V838 Mon took place\nin a triple or higher system involving two B-type stars. We mapped the merger\nsite with ALMA at a resolution of 25 mas in continuum dust emission and in\nrotational lines of simple molecules, including CO, SiO, SO, SO$_2$, AlOH, and\nH$_2$S. We use radiative transfer calculations to reproduce the remnant's\narchitecture at the epoch of the ALMA observations. For the first time, we\nidentify the position of the B-type companion relative to the outbursting\ncomponent of V838 Mon. The stellar remnant is surrounded by a clumpy wind with\ncharacteristics similar to winds of red supergiants. The merger product is also\nassociated with an elongated structure, $17.6 \times 7.6$ mas, seen in\ncontinuum emission, which we interpret as a disk seen at a moderate\ninclination. Maps of continuum and molecular emission also show a complex\nregion of interaction between the B-type star (its gravity, radiation, and\nwind) and the flow of matter ejected in 2002. The remnant's molecular mass is\nabout 0.1 M$_{\odot}$ and the dust mass is 8.3$\cdot$10$^{-3}$ M$_{\odot}$. The\nmass of the atomic component remains unconstrained. The most interesting region\nfor understanding the merger of V838 Mon remains unresolved but appears\nelongated. Studying it in more detail will require even higher angular\nresolution. ALMA maps show us an extreme form of interaction between the\nmerger ejecta and a distant (250 au) companion. This interaction is similar to\nthat known from the Antares AB system but at a much higher mass loss rate.
The\nB-type star not only deflects the merger ejecta but also changes its chemical\ncomposition through the involvement of circumstellar shocks.\n"} {"abstract": " Unmanned aerial vehicles (UAVs) are expected to be an integral part of\nwireless networks. In this paper, we aim to find collision-free paths for\nmultiple cellular-connected UAVs, while satisfying requirements of connectivity\nwith ground base stations (GBSs) in the presence of a dynamic jammer. We first\nformulate the problem as a sequential decision-making problem in a discrete\ndomain, with connectivity, collision avoidance, and kinematic constraints. We\nthen propose an offline temporal difference (TD) learning algorithm with\nonline signal-to-interference-plus-noise ratio (SINR) mapping to solve the\nproblem. More specifically, a value network is constructed and trained offline\nby the TD method to encode the interactions among the UAVs and between the UAVs and\nthe environment; and an online SINR mapping deep neural network (DNN) is\ndesigned and trained by supervised learning, to encode the influence and\nchanges due to the jammer. Numerical results show that, without any information\non the jammer, the proposed algorithm can achieve performance levels close to\nthat of the ideal scenario with a perfect SINR map. Real-time navigation for\nmultiple UAVs can be efficiently performed with high success rates, and collisions\nare avoided.\n"} {"abstract": " A multi-objective optimization problem is $C^r$ weakly simplicial if there\nexists a $C^r$ surjection from a simplex onto the Pareto set/front such that\nthe image of each subsimplex is the Pareto set/front of a subproblem, where\n$0\leq r\leq \infty$. This property is helpful for computing a parametric-surface\napproximation of the entire Pareto set and Pareto front. It is known that all\nunconstrained strongly convex $C^r$ problems are $C^{r-1}$ weakly simplicial\nfor $1\leq r \leq \infty$. In this paper, we show that all unconstrained\nstrongly convex problems are $C^0$ weakly simplicial. The usefulness of this\ntheorem is demonstrated in a sparse modeling application: we reformulate the\nelastic net as a non-differentiable multi-objective strongly convex problem and\napproximate its Pareto set (the set of all trained models with different\nhyper-parameters) and Pareto front (the set of performance metrics of the\ntrained models) by using a B\'ezier simplex fitting method, which accelerates\nhyper-parameter search.\n"} {"abstract": " Three $q$-versions of Lommel polynomials are studied. Included are explicit\nrepresentations, recurrences, continued fractions, and connections to\nassociated Askey--Wilson polynomials. Combinatorial results are emphasized,\nincluding a general theorem for when $R_I$ moments coincide with orthogonal\npolynomial moments. The combinatorial results use weighted Motzkin paths,\nSchr\"oder paths, and parallelogram polyominoes.\n"} {"abstract": " Unsupervised time series clustering is a challenging problem with diverse\nindustrial applications such as anomaly detection, bio-wearables, etc. These\napplications typically involve small, low-power devices on the edge that\ncollect and process real-time sensory signals. State-of-the-art time-series\nclustering methods perform some form of loss minimization that is extremely\ncomputationally intensive from the perspective of edge devices.
In this work,\nwe propose a neuromorphic approach to unsupervised time series clustering based\non Temporal Neural Networks that is capable of ultra low-power, continuous\nonline learning. We demonstrate its clustering performance on a subset of UCR\nTime Series Archive datasets. Our results show that the proposed approach\neither outperforms or performs similarly to most of the existing algorithms\nwhile being far more amenable for efficient hardware implementation. Our\nhardware assessment analysis shows that in 7 nm CMOS the proposed architecture,\non average, consumes only about 0.005 mm^2 die area and 22 uW power and can\nprocess each signal with about 5 ns latency.\n"} {"abstract": " Various methods for solving the inverse reinforcement learning (IRL) problem\nhave been developed independently in machine learning and economics. In\nparticular, the method of Maximum Causal Entropy IRL is based on the\nperspective of entropy maximization, while related advances in the field of\neconomics instead assume the existence of unobserved action shocks to explain\nexpert behavior (Nested Fixed Point Algorithm, Conditional Choice Probability\nmethod, Nested Pseudo-Likelihood Algorithm). In this work, we make previously\nunknown connections between these related methods from both fields. We achieve\nthis by showing that they all belong to a class of optimization problems,\ncharacterized by a common form of the objective, the associated policy and the\nobjective gradient. We demonstrate key computational and algorithmic\ndifferences which arise between the methods due to an approximation of the\noptimal soft value function, and describe how this leads to more efficient\nalgorithms. Using insights which emerge from our study of this class of\noptimization problems, we identify various problem scenarios and investigate\neach method's suitability for these problems.\n"} {"abstract": " The thermodynamic uncertainty relation originally proven for systems driven\ninto a non-equilibrium steady state (NESS) allows one to infer the total\nentropy production rate by observing any current in the system. This kind of\ninference scheme is especially useful when the system contains hidden degrees\nof freedom or hidden discrete states, which are not accessible to the\nexperimentalist. A recent generalization of the thermodynamic uncertainty\nrelation to arbitrary time-dependent driving allows one to infer entropy\nproduction not only by measuring current-observables but also by observing\nstate variables. A crucial question then is to understand which observable\nyields the best estimate for the total entropy production. In this paper we\naddress this question by analyzing the quality of the thermodynamic uncertainty\nrelation for various types of observables for the generic limiting cases of\nfast driving and slow driving. We show that in both cases observables can be\nfound that yield an estimate of order one for the total entropy production. We\nfurther show that the uncertainty relation can even be saturated in the limit\nof fast driving.\n"} {"abstract": " The KOALA experiment measures the differential cross section of\n(anti)proton-proton elastic scattering over a wide range of four-momentum\ntransfer squared 0.0008 < |t| < 0.1 (GeV/c)$^2$ . The forward scattering\nparameters and the absolute luminosity can be deduced by analyzing the\ncharacteristic shape of the differential cross-section spectrum. The experiment\nis based on fixed target kinematics and uses an internal hydrogen cluster jet\ntarget. 
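The quoted |t| window translates directly into recoil-proton kinetic energies through the fixed-target elastic relation |t| = 2 m_p T_R. A quick numerical check, using only that standard relation:

```python
M_P = 0.938272  # proton mass [GeV/c^2]

def recoil_T_mev(t_abs_gev2: float) -> float:
    """Kinetic energy [MeV] of the recoil proton in elastic scattering
    off a proton at rest: |t| = 2 * m_p * T_R."""
    return t_abs_gev2 / (2.0 * M_P) * 1e3

for t in (0.0008, 0.1):
    print(f"|t| = {t} (GeV/c)^2  ->  T_R = {recoil_T_mev(t):.2f} MeV")
# ~0.43 MeV at the low edge and ~53 MeV at the high edge, which is why
# the ~30 keV (FWHM) resolution of the recoil detector matters most at low |t|.
```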
The wide range of |t| is achieved by measuring the total kinetic energy\nof the recoil protons near 90{\\deg} with a recoil detector, which consists of\nsilicon and germanium single-sided strip sensors. The energy resolution of the\nrecoil detector is better than 30 keV (FWHM). A forward detector consisting of\ntwo layers of plastic scintillators measures the elastically scattered beam\nparticles in the forward direction close to the beam axis. It helps suppress\nthe large background at small recoil angles and improves the identification of\nelastic scattering events in the low |t| range. The KOALA setup has been\ninstalled and commissioned at COSY in order to validate the detector by\nmeasuring proton-proton elastic scattering. The results from this\ncommissioning are presented here.\n"} {"abstract": " Nowadays, High Energy Physics experiments can accumulate unprecedented\nstatistics of heavy flavour decays, which allows new methods to be applied that\nare based on the study of very rare phenomena and that were previously out of\nreach. In this paper we propose a new method to measure the composition of\n$K^0$-$\\overline{K}^0$ produced in decays of heavy hadrons. This composition\ncontains important information, in particular about weak and strong phases\nbetween the amplitudes of the produced $K^0$ and $\\overline{K}^0$. We consider\nthe possibility of measuring these parameters with a time-dependent $K^0 \\to\n\\pi^+ \\pi^-$ analysis. Due to $CP$-violation in kaon mixing, the time-dependent\ndecay rates of $K^0$ and $\\overline{K}^0$ differ, and the initial amplitudes\nare revealed in the $CP$-violating decay pattern. In particular, we consider\nthe charmed hadron decays $D^+ \\to K^0 \\pi^+$, $D_s^+ \\to K^0 K^+$, $\\Lambda_c\n\\to p K^0$ and, with some assumptions, $D^0 \\to K^0 \\pi^0$. This can be used to\ntest the sum rule for charmed mesons and to obtain input for the full\nconstraint of the two-body amplitudes of $D$-mesons.\n"} {"abstract": " The levitation of a volatile droplet on a highly superheated surface is known\nas the Leidenfrost effect. The wetting state during the transition from full\nwetting of a surface by a droplet at room temperature to Leidenfrost bouncing,\ni.e., zero wetting at high superheating, is not fully understood. Here,\nvisualizations of the droplet thermal and wetting footprint in the Leidenfrost\ntransition state are presented using two optical techniques: mid-infrared\nthermography and wetting-sensitive total internal reflection imaging under\ncarefully selected experimental conditions, impact Weber number < 10 and\ndroplet diameter < capillary length, using an indium-tin-oxide coated sapphire\nheater. The experimental regime was designed to create relatively stable\ndroplet dynamics, where the effects of oscillatory and capillary instabilities\nwere minimized. The thermography for an ethanol droplet in the Leidenfrost\ntransition state (superheat range of 82 K-97 K) revealed a thermal footprint\nwith a central hot zone surrounded by a cooler periphery, indicative of a\npartial wetting state during the Leidenfrost transition. High-speed total\ninternal reflection imaging also confirmed the partial wetting footprint, with\nwetting areas around a central non-wetting zone. 
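The two experimental constraints mentioned above (impact Weber number below 10, droplet diameter below the capillary length) are easy to quantify. A sketch using room-temperature ethanol properties, which are assumed values for illustration:

```python
import math

rho, sigma, g = 789.0, 0.022, 9.81   # ethanol: kg/m^3, N/m; gravity: m/s^2

ell_c = math.sqrt(sigma / (rho * g))             # capillary length
print(f"capillary length: {ell_c * 1e3:.2f} mm")  # ~1.7 mm

D, v = 1.5e-3, 0.3                               # droplet diameter [m], impact speed [m/s]
We = rho * v**2 * D / sigma                      # impact Weber number
print(f"Weber number: {We:.1f}")                 # ~4.8, inside the We < 10 regime
```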
Results presented here using\nethanol as a test fluid shed light on the geometry and dynamics of a volatile\ndroplet's footprint in the Leidenfrost transition state.\n"} {"abstract": " We study the null set $N(\\mathcal{P})$ of the Fourier-Laplace transform of a\npolytope $\\mathcal{P} \\subset \\mathbb{R}^d$, and we find that $N(\\mathcal{P})$\ndoes not contain (almost all) circles in $\\mathbb{R}^d$. As a consequence, the\nnull set does not contain the algebraic varieties $\\{z \\in \\mathbb{C}^d \\mid\nz_1^2 + \\dots + z_d^2 = \\alpha^2\\}$ for each fixed $\\alpha \\in \\mathbb{C}$, and\nhence we get an explicit proof that the Pompeiu property is true for all\npolytopes. Our proof uses the Brion-Barvinok theorem, which gives a concrete\nformulation for the Fourier-Laplace transform of a polytope, and it also uses\nproperties of Bessel functions. The original proof that polytopes (as well as\nother bodies) possess the Pompeiu property was given by Brown, Schreiber, and\nTaylor (1973) for dimension 2. Williams (1976) later observed that the same\nproof also works for $d>2$ and, using eigenvalues of the Laplacian, gave\nanother proof valid for $d \\geq 2$ that polytopes have the Pompeiu property.\n"} {"abstract": " Data is now generated at such a high rate that analyzing it and quickly\nobtaining results has become essential. Most relational databases primarily\nsupport SQL querying, with only limited support for complex data analysis. For\nthis reason, data scientists have no option but to use a different system for\ncomplex data analysis, and data science frameworks are therefore in huge\ndemand. But to use such a framework, all the data needs to be loaded into it.\nThis requires significant data movement across multiple systems, which can be\nexpensive.\n We believe that a single system that can perform both data analysis tasks and\nSQL querying is now urgently needed. This spares data scientists the expensive\ntransfer of data across systems. In our work, we present DaskDB, a system built\nover Python's Dask framework, which is a scalable data science system having\nsupport for both data analytics and in situ SQL query processing over\nheterogeneous data sources. DaskDB supports invoking any Python APIs as\nUser-Defined Functions (UDF) over SQL queries. It can thus be easily integrated\nwith most existing Python data science applications, without modifying the\nexisting code. Since joining two relations is a vital but expensive operation,\na novel distributed learned index is also introduced to improve join\nperformance. Our experimental evaluation demonstrates that DaskDB\nsignificantly outperforms existing systems.\n"} {"abstract": " We give sufficient conditions on the exponent $p: \\mathbb R^d\\rightarrow\n[1,\\infty)$ for the boundedness of the non-centered Gaussian maximal function\non variable Lebesgue spaces $L^{p(\\cdot)}(\\mathbb R^d, \\gamma_d)$, as well as\nof the new higher order Riesz transforms associated with the Ornstein-Uhlenbeck\nsemigroup, which are the natural extensions of the supplementary first order\nGaussian Riesz transforms defined by A. Nowak and K. Stempak in\n\\cite{nowakstempak}.\n"} {"abstract": " We derive the explicit form of the martingale representation for\nsquare-integrable processes that are martingales with respect to the natural\nfiltration of the super-Brownian motion. 
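The UDF-over-SQL pattern described in the DaskDB abstract above can be illustrated without DaskDB itself, whose API is not shown there. As a stand-in, the sqlite3 module in the Python standard library also lets an arbitrary Python function be registered and invoked from SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 120.0), ("south", 80.0), ("north", 45.5)])

def zscore_stub(x: float) -> float:
    """Placeholder for an arbitrary Python data-science routine."""
    return (x - 81.8) / 30.0

# Register the Python function as a SQL UDF, then call it in a query.
conn.create_function("py_udf", 1, zscore_stub)
for row in conn.execute("SELECT region, py_udf(amount) FROM sales ORDER BY region"):
    print(row)
```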
This is done by using a weak extension\nof the Dupire derivative for functionals of superprocesses.\n"} {"abstract": " The polymer model framework is a classical tool from statistical mechanics\nthat has recently been used to obtain approximation algorithms for spin systems\non classes of bounded-degree graphs; examples include the ferromagnetic Potts\nmodel on expanders and on the grid. One of the key ingredients in the analysis\nof polymer models is controlling the growth rate of the number of polymers,\nwhich has been typically achieved so far by invoking the bounded-degree\nassumption. Nevertheless, this assumption is often restrictive and obstructs\nthe applicability of the method to more general graphs. For example, sparse\nrandom graphs typically have bounded average degree and good expansion\nproperties, but they include vertices with unbounded degree, and therefore are\nexcluded from the current polymer-model framework.\n We develop a less restrictive framework for polymer models that relaxes the\nstandard bounded-degree assumption, by reworking the relevant polymer models\nfrom the edge perspective. The edge perspective allows us to bound the growth\nrate of the number of polymers in terms of the total degree of polymers, which\nin turn can be related more easily to the expansion properties of the\nunderlying graph. To apply our methods, we consider random graphs with\nunbounded degrees from a fixed degree sequence (with minimum degree at least 3)\nand obtain approximation algorithms for the ferromagnetic Potts model, which is\na standard benchmark for polymer models. Our techniques also extend to more\ngeneral spin systems.\n"} {"abstract": " In this note, we extend the renormalization horseshoe we have recently\nconstructed with N. Goncharuk for analytic diffeomorphisms of the circle to\ntheir small two-dimensional perturbations. As one consequence, Herman rings\nwith rotation numbers of bounded type survive on a codimension one set of\nparameters under small two-dimensional perturbations.\n"} {"abstract": " Type Ia supernovae (SNe Ia) span a range of luminosities and timescales, from\nrapidly evolving subluminous to slowly evolving overluminous subtypes. Previous\ntheoretical work has, for the most part, been unable to match the entire\nbreadth of observed SNe Ia with one progenitor scenario. Here, for the first\ntime, we apply non-local thermodynamic equilibrium radiative transfer\ncalculations to a range of accurate explosion models of sub-Chandrasekhar-mass\nwhite dwarf detonations. The resulting photometry and spectra are in excellent\nagreement with the range of observed non-peculiar SNe Ia through 15 d after the\ntime of B-band maximum, yielding one of the first examples of a quantitative\nmatch to the entire Phillips (1993) relation. The intermediate-mass element\nvelocities inferred from theoretical spectra at maximum light for the more\nmassive white dwarf explosions are higher than those of bright observed SNe Ia,\nbut these and other discrepancies likely stem from the one-dimensional nature\nof our explosion models and will be improved upon by future non-local\nthermodynamic equilibrium radiation transport calculations of multi-dimensional\nsub-Chandrasekhar-mass white dwarf detonations.\n"} {"abstract": " This article explores the territorial differences in the onset and spread of\nCOVID-19 and the excess mortality associated with the pandemic, across the\nEuropean NUTS3 regions and US counties. 
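The urban/rural Rt comparisons below rest on estimating Rt from case counts. A minimal estimator, an illustrative simplification rather than the paper's pipeline, uses the exponential-growth relation R ~ exp(r T), with r the fitted growth rate and T a mean generation interval:

```python
import numpy as np

cases = np.array([12, 18, 25, 37, 52, 80, 110, 160])   # daily counts (toy data)
days = np.arange(len(cases))

r = np.polyfit(days, np.log(cases), 1)[0]   # per-day exponential growth rate
T_gen = 5.0                                  # assumed generation interval [days]
print(f"growth rate r = {r:.3f}/day, R estimate = {np.exp(r * T_gen):.2f}")
```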
Both in Europe and in the US, the\npandemic arrived earlier and recorded higher Rt values in urban regions than in\nintermediate and rural ones. A similar gap is also found in the data on excess\nmortality. In the weeks during the first phase of the pandemic, urban regions\nin EU countries experienced excess mortality of up to 68pp more than rural\nones. We show that, during the initial days of the pandemic, territorial\ndifferences in Rt by the degree of urbanisation can be largely explained by the\nlevel of internal, inbound and outbound mobility. The differences in the spread\nof COVID-19 by rural-urban typology and the role of mobility are less clear\nduring the second wave. This could be linked to the fact that the infection is\nwidespread across territories, to changes in mobility patterns during the\nsummer period as well as to the different containment measures which reverse\nthe causality between mobility and Rt.\n"} {"abstract": " Neural information retrieval systems typically use a cascading pipeline, in\nwhich a first-stage model retrieves a candidate set of documents and one or\nmore subsequent stages re-rank this set using contextualized language models\nsuch as BERT. In this paper, we propose DeepImpact, a new document\nterm-weighting scheme suitable for efficient retrieval using a standard\ninverted index. Compared to existing methods, DeepImpact improves impact-score\nmodeling and tackles the vocabulary-mismatch problem. In particular, DeepImpact\nleverages DocT5Query to enrich the document collection and, using a\ncontextualized language model, directly estimates the semantic importance of\ntokens in a document, producing a single-value representation for each token in\neach document. Our experiments show that DeepImpact significantly outperforms\nprior first-stage retrieval approaches by up to 17% on effectiveness metrics\nw.r.t. DocT5Query, and, when deployed in a re-ranking scenario, can reach the\nsame effectiveness of state-of-the-art approaches with up to 5.1x speedup in\nefficiency.\n"} {"abstract": " Performance metrics are a core component of the evaluation of any machine\nlearning model and used to compare models and estimate their usefulness. Recent\nwork started to question the validity of many performance metrics for this\npurpose in the context of software defect prediction. Within this study, we\nexplore the relationship between performance metrics and the cost saving\npotential of defect prediction models. We study whether performance metrics are\nsuitable proxies to evaluate the cost saving capabilities and derive a theory\nfor the relationship between performance metrics and cost saving potential.\n"} {"abstract": " Many real-life applications involve simultaneously forecasting multiple time\nseries that are hierarchically related via aggregation or disaggregation\noperations. For instance, commercial organizations often want to forecast\ninventories simultaneously at store, city, and state levels for resource\nplanning purposes. In such applications, it is important that the forecasts, in\naddition to being reasonably accurate, are also consistent w.r.t one another.\nAlthough forecasting such hierarchical time series has been pursued by\neconomists and data scientists, the current state-of-the-art models use strong\nassumptions, e.g., all forecasts being unbiased estimates, noise distribution\nbeing Gaussian. Besides, state-of-the-art models have not harnessed the power\nof modern nonlinear models, especially ones based on deep learning. 
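The kind of objective proposed next is simple to write down: a quantile (pinball) loss plus a penalty that pushes child forecasts to aggregate to the parent forecast. The exact regularizer is not specified in the abstract, so the form below is an illustrative assumption:

```python
import numpy as np

def pinball(y, yhat, q):
    e = y - yhat
    return np.mean(np.maximum(q * e, (q - 1) * e))

def hierarchical_loss(y_parent, yhat_parent, y_children, yhat_children,
                      q=0.5, lam=1.0):
    fit = pinball(y_parent, yhat_parent, q) + pinball(y_children, yhat_children, q)
    # consistency term: children should sum to the parent forecast
    consistency = np.mean((yhat_children.sum(axis=1) - yhat_parent) ** 2)
    return fit + lam * consistency

rng = np.random.default_rng(1)
y_c = rng.poisson(10.0, size=(100, 3)).astype(float)   # e.g. 3 stores
y_p = y_c.sum(axis=1)                                  # city = sum of stores
print(hierarchical_loss(y_p, y_p * 0.9, y_c, y_c * 1.1))
```

Because both terms are differentiable almost everywhere, the same construction plugs into any gradient-trained forecasting model, which is the claim the abstract goes on to make.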
In this\npaper, we propose using a flexible nonlinear model that optimizes quantile\nregression loss coupled with suitable regularization terms to maintain the\nconsistency of forecasts across hierarchies. The theoretical framework\nintroduced herein can be applied to any forecasting model with an underlying\ndifferentiable loss function. A proof of optimality of our proposed method is\nalso provided. Simulation studies over a range of datasets highlight the\nefficacy of our approach.\n"} {"abstract": " The consequences of the attractive, short-range nucleon-nucleon (NN)\ninteraction for the wave functions of the Elliott SU(3) and the proxy-SU(3)\nsymmetry are discussed. The NN interaction favors the most symmetric spatial\nSU(3) irreducible representation, which corresponds to the maximal spatial\noverlap among the fermions. The percentage of the symmetric components out of\nthe total in an SU(3) wave function is introduced, through which it is found\nthat no SU(3) irrep is more symmetric than the highest weight irrep for a\ncertain number of valence particles in a three-dimensional, isotropic, harmonic\noscillator shell. The consideration of the highest weight irreps in nuclei and\nin alkali metal clusters leads to the prediction of a prolate to oblate shape\ntransition beyond the mid-shell region.\n"} {"abstract": " While deep learning-based 3D face generation has made progress recently,\nthe problem of dynamic 3D (4D) facial expression synthesis is less\ninvestigated. In this paper, we propose a novel solution to the following\nquestion: given one input 3D neutral face, can we generate dynamic 3D (4D)\nfacial expressions from it? To tackle this problem, we first propose a mesh\nencoder-decoder architecture (Expr-ED) that exploits a set of 3D landmarks to\ngenerate an expressive 3D face from its neutral counterpart. Then, we extend it\nto 4D by modeling the temporal dynamics of facial expressions using a\nmanifold-valued GAN capable of generating a sequence of 3D landmarks from an\nexpression label (Motion3DGAN). The generated landmarks are fed into the mesh\nencoder-decoder, ultimately producing a sequence of 3D expressive faces. By\ndecoupling the two steps, we separately address the non-linearity induced by\nthe mesh deformation and motion dynamics. The experimental results on the CoMA\ndataset show that our mesh encoder-decoder guided by landmarks brings a\nsignificant improvement with respect to other landmark-based 3D fitting\napproaches, and that we can generate high quality dynamic facial expressions.\nThis framework further enables the 3D expression intensity to be continuously\nadapted from low to high intensity. Finally, we show our framework can be\napplied to other tasks, such as 2D-3D facial expression transfer.\n"} {"abstract": " We propose a method to exploit high finesse optical resonators for light\nassisted coherent manipulation of atomic ensembles, overcoming the limit\nimposed by the finite response time of the cavity. The key element of our\nscheme is to rapidly switch the interaction between the atoms and the cavity\nfield with an auxiliary control process as, for example, the light shift\ninduced by an optical beam. The scheme is applicable to many different atomic\nspecies, both in trapped and free fall configurations, and can be adopted to\ncontrol the internal and/or external atomic degrees of freedom. 
Our method will\nopen new possibilities in cavity-aided atom interferometry and in the\npreparation of highly non-classical atomic states.\n"} {"abstract": " We investigate quantum transport through a Kondo impurity, assuming both a\nlarge number of orbital channels $\\mathcal K$$\\gg $$1$ for the itinerant\nelectrons and a semi-classical spin ${\\cal S}$ $\\gg $ $1$ for the impurity. The\nnon-Fermi liquid regime of the Kondo problem is achieved in the overscreened\nsector $\\mathcal K>2\\mathcal{S}$. We show that there exist two distinct\nsemiclassical regimes for the quantum transport through the impurity: i) $\\mathcal\nK$ $\\gg$ $\\mathcal S$ $\\gg$ $1$, where the differential conductance vanishes, and ii)\n$\\mathcal S$$/$$\\mathcal K{=}\\mathcal C$ with $ 0$$<$$\\mathcal C$$<$$1/2$,\nwhere the differential conductance reaches a non-vanishing fraction of its unitary\nvalue. Using a conformal field theory approach, we analyze the behavior of the quantum\ntransport observables and the residual entropy in both semiclassical regimes. We\nshow that the semiclassical limit ii) preserves the key features of resonance\nscattering and the most essential fingerprints of the non-Fermi liquid\nbehavior. We discuss the possible realization of the two semiclassical regimes in\nsemiconductor quantum transport experiments.\n"} {"abstract": " The Multi-voltage Threshold (MVT) method, which samples a signal at certain\nreference voltages, is well developed and has been adopted in pre-clinical\nand clinical digital positron emission tomography (PET) systems. To improve its\nenergy measurement performance, we propose a Peak Picking MVT (PP-MVT)\ndigitizer in this paper. Firstly, a sampled peak point (the highest point of the pulse\nsignal), which carries the amplitude feature voltage and the amplitude\narrival time, is added to traditional MVT with a simple peak sampling circuit.\nSecondly, an amplitude deviation statistical analysis, which compares the\nenergy deviation of various reconstruction models, is used to select adaptive\nreconstruction models for signal pulses with different amplitudes. After\nprocessing 30,000 randomly chosen pulses sampled by an oscilloscope with a\n22Na point source, our method achieves an energy resolution of 17.50% within a\n450-650 keV energy window, which is 2.44% better than the result of traditional\nMVT with the same thresholds; and we obtain a count of 15225 in the same energy\nwindow, while the result of MVT is 14678. When PP-MVT involves fewer\nthresholds than traditional MVT, the advantages of better energy resolution and\na larger count number are still maintained, which shows the robustness and\nflexibility of the PP-MVT digitizer. This improvement indicates that adding\npeak feature information can improve signal sampling and reconstruction, as\nevidenced by the better performance in energy determination in radiation\nmeasurement.\n"} {"abstract": " We present a Python-based renderer built on NVIDIA's OptiX ray tracing engine\nand the OptiX AI denoiser, designed to generate high-quality synthetic images\nfor research in computer vision and deep learning. Our tool enables the\ndescription and manipulation of complex dynamic 3D scenes containing object\nmeshes, materials, textures, lighting, volumetric data (e.g., smoke), and\nbackgrounds. Metadata, such as 2D/3D bounding boxes, segmentation masks, depth\nmaps, normal maps, material properties, and optical flow vectors, can also be\ngenerated. 
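The multi-voltage-threshold sampling described above is easy to prototype: record only the crossing times of a few reference voltages, then add the peak sample and compare pulse-integral ("energy") estimates. Pulse shape and threshold values below are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 2000)                        # ns
pulse = 500.0 * (np.exp(-t / 20.0) - np.exp(-t / 4.0))  # mV, toy bi-exponential

def mvt_samples(t, v, thresholds):
    pts = []
    for th in thresholds:
        above = (v >= th).astype(int)
        for i in np.flatnonzero(np.diff(above)):        # threshold crossings
            f = (th - v[i]) / (v[i + 1] - v[i])          # linear interpolation
            pts.append((t[i] + f * (t[i + 1] - t[i]), th))
    return sorted(pts)

samples = mvt_samples(t, pulse, [50.0, 100.0, 150.0, 200.0, 250.0])
ts, vs = map(np.array, zip(*samples))
i_pk = int(pulse.argmax())                               # the extra peak point
ts2, vs2 = map(np.array, zip(*sorted(samples + [(t[i_pk], pulse[i_pk])])))

true_e = np.trapz(pulse, t)
print("MVT-only energy ratio:", np.trapz(vs, ts) / true_e)
print("with peak point:      ", np.trapz(vs2, ts2) / true_e)
```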
In this work, we discuss design goals, architecture, and\nperformance. We demonstrate the use of data generated by path tracing for\ntraining an object detector and pose estimator, showing improved performance in\nsim-to-real transfer in situations that are difficult for traditional\nraster-based renderers. We offer this tool as an easy-to-use, performant,\nhigh-quality renderer for advancing research in synthetic data generation and\ndeep learning.\n"} {"abstract": " In this paper, we consider visualization of displacement fields via optical\nflow methods in elastographic experiments consisting of a static compression of\na sample. We propose an elastographic optical flow method (EOFM) which takes\ninto account experimental constraints, such as appropriate boundary conditions,\nthe use of speckle information, as well as the inclusion of structural\ninformation derived from knowledge of the background material. We present\nnumerical results based on both simulated and experimental data from an\nelastography experiment in order to demonstrate the relevance of our proposed\napproach.\n"} {"abstract": " Population growth in the last decades has resulted in the production of about\n2.01 billion tons of municipal waste per year. The current waste management\nsystems are not capable of providing adequate solutions for the disposal and\nuse of these wastes. Recycling and reuse have proven to be a solution to the\nproblem, but large-scale waste segregation is a tedious task and on a small\nscale it depends on public awareness. This research used convolutional neural\nnetworks and computer vision to develop a tool for the automation of solid\nwaste sorting. The Fotini10k dataset was constructed, which has more than\n10,000 images divided into the categories of 'plastic bottles', 'aluminum cans'\nand 'paper and cardboard'. ResNet50, MobileNetV1 and MobileNetV2 were retrained\nwith ImageNet weights on the Fotini10k dataset. As a result, a top-1 accuracy of\n99% was obtained on the test dataset with all three networks. To explore the\npossible use of these networks in mobile applications, the three networks were\nquantized to float16 weights. By doing so, inference times were halved on the\nRaspberry Pi and reduced threefold on computer processing units. It was also\npossible to reduce the size of the networks by half. Under quantization, the\ntop-1 accuracy of 99% was maintained by all three networks. When MobileNetV2\nwas quantized to int8, it obtained a top-1 accuracy of 97%.\n"} {"abstract": " We consider speeding up stochastic gradient descent (SGD) by parallelizing it\nacross multiple workers. We assume the same data set is shared among $N$\nworkers, who can take SGD steps and coordinate with a central server. While it\nis possible to obtain a linear reduction in the variance by averaging all the\nstochastic gradients at every step, this requires a lot of communication\nbetween the workers and the server, which can dramatically reduce the gains\nfrom parallelism. The Local SGD method, proposed and analyzed in the earlier\nliterature, suggests machines should make many local steps between such\ncommunications. While the initial analysis of Local SGD showed it needs $\\Omega\n( \\sqrt{T} )$ communications for $T$ local gradient steps in order for the\nerror to scale proportionately to $1/(NT)$, this has been successively improved\nin a string of papers, with the state of the art requiring $\\Omega \\left( N\n\\left( \\mbox{ poly} (\\log T) \\right) \\right)$ communications. 
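The Local SGD scheme under discussion is compact enough to simulate: each worker takes H local stochastic-gradient steps, and only the averaging step counts as communication. A minimal sketch on a toy least-squares objective (all problem data and step sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(1000)

N, H, rounds, lr = 8, 20, 25, 1e-3
x = np.zeros(10)
for _ in range(rounds):                  # one communication per round
    locals_ = []
    for _ in range(N):
        w = x.copy()
        for _ in range(H):               # local steps between communications
            i = rng.integers(len(b), size=32)               # a minibatch
            w -= lr * 2 * A[i].T @ (A[i] @ w - b[i]) / len(i)
        locals_.append(w)
    x = np.mean(locals_, axis=0)         # averaging = the communication step
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Counting communications in this sketch makes the trade-off in the abstract tangible: the error of the averaged iterate depends on the total number of gradient steps, while the cost that matters in practice is the number of rounds.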
In this paper, we\nsuggest a Local SGD scheme that communicates less overall by communicating less\nfrequently as the number of iterations grows. Our analysis shows that this can\nachieve an error that scales as $1/(NT)$ with a number of communications that\nis completely independent of $T$. In particular, we show that $\\Omega(N)$\ncommunications are sufficient. Empirical evidence suggests this bound is close\nto tight as we further show that $\\sqrt{N}$ or $N^{3/4}$ communications fail to\nachieve linear speed-up in simulations. Moreover, we show that under mild\nassumptions, the main of which is twice differentiability on any neighborhood\nof the optimal solution, one-shot averaging which only uses a single round of\ncommunication can also achieve the optimal convergence rate asymptotically.\n"} {"abstract": " Intent classification and slot filling are two critical tasks for natural\nlanguage understanding. Traditionally the two tasks have been deemed to proceed\nindependently. However, more recently, joint models for intent classification\nand slot filling have achieved state-of-the-art performance, and have proved\nthat there exists a strong relationship between the two tasks. This article is\na compilation of past work in natural language understanding, especially joint\nintent classification and slot filling. We observe three milestones in this\nresearch so far: Intent detection to identify the speaker's intention, slot\nfilling to label each word token in the speech/text, and finally, joint intent\nclassification and slot filling tasks. In this article, we describe trends,\napproaches, issues, data sets, evaluation metrics in intent classification and\nslot filling. We also discuss representative performance values, describe\nshared tasks, and provide pointers to future work, as given in prior works. To\ninterpret the state-of-the-art trends, we provide multiple tables that describe\nand summarise past research along different dimensions, including the types of\nfeatures, base approaches, and dataset domain used.\n"} {"abstract": " This is the user manual for CosmoLattice, a modern package for lattice\nsimulations of the dynamics of interacting scalar and gauge fields in an\nexpanding universe. CosmoLattice incorporates a series of features that makes\nit very versatile and powerful: $i)$ it is written in C++ fully exploiting the\nobject oriented programming paradigm, with a modular structure and a clear\nseparation between the physics and the technical details, $ii)$ it is MPI-based\nand uses a discrete Fourier transform parallelized in multiple spatial\ndimensions, which makes it specially appropriate for probing scenarios with\nwell-separated scales, running very high resolution simulations, or simply very\nlong ones, $iii)$ it introduces its own symbolic language, defining field\nvariables and operations over them, so that one can introduce differential\nequations and operators in a manner as close as possible to the continuum,\n$iv)$ it includes a library of numerical algorithms, ranging from $O(\\delta\nt^2)$ to $O(\\delta t^{10})$ methods, suitable for simulating global and gauge\ntheories in an expanding grid, including the case of `self-consistent'\nexpansion sourced by the fields themselves. Relevant observables are provided\nfor each algorithm (e.g.~energy densities, field spectra, lattice snapshots)\nand we note that remarkably all our algorithms for gauge theories always\nrespect the Gauss constraint to machine precision. 
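The lowest-order, $O(\delta t^2)$ member of the algorithm family mentioned above is the leapfrog (kick-drift-kick) scheme. A flat-space, 1+1-dimensional Python toy is sketched below; CosmoLattice itself is C++ and includes expansion, so this only illustrates the integrator and its energy behavior:

```python
import numpy as np

N, dx, dt, m2 = 256, 0.1, 0.01, 1.0
x = np.arange(N) * dx
phi = np.exp(-((x - 0.5 * N * dx) ** 2))       # initial field bump
pi = np.zeros(N)                                # conjugate momentum

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def energy(phi, pi):
    grad = (np.roll(phi, -1) - phi) / dx
    return np.sum(0.5 * pi**2 + 0.5 * grad**2 + 0.5 * m2 * phi**2) * dx

e0 = energy(phi, pi)
pi += 0.5 * dt * (laplacian(phi) - m2 * phi)    # half kick starts leapfrog
for _ in range(1000):
    phi += dt * pi                              # drift
    pi += dt * (laplacian(phi) - m2 * phi)      # kick
pi -= 0.5 * dt * (laplacian(phi) - m2 * phi)    # resynchronize momentum
print("relative energy drift:", abs(energy(phi, pi) - e0) / e0)
```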
In this manual we explain\nhow to obtain and run CosmoLattice in a computer (let it be your laptop,\ndesktop or a cluster). We introduce the general structure of the code and\ndescribe in detail the basic files that any user needs to handle. We explain\nhow to implement any model characterized by a scalar potential and a set of\nscalar fields, either singlets or interacting with $U(1)$ and/or $SU(2)$ gauge\nfields. CosmoLattice is publicly available at www.cosmolattice.net.\n"} {"abstract": " A spin-1/2 Heisenberg model on honeycomb lattice is investigated by doing\ntriplon analysis and quantum Monte Carlo calculations. This model, inspired by\nCu$_2$(pymca)$_3$(ClO$_4$), has three different antiferromagnetic exchange\ninteractions ($J_A$, $J_B$, $J_C$) on three different sets of nearest-neighbour\nbonds which form a kagome superlattice. While the model is bipartite and\nunfrustrated, its quantum phase diagram is found to be dominated by a quantum\nparamagnetic phase that is best described as a spin-gapped hexagonal-singlet\nstate. The N\\'eel antiferromagnetic order survives only in a small region\naround $J_A=J_B=J_C$. The magnetization produced by external magnetic field is\nfound to exhibit plateaus at 1/3 and 2/3 of the saturation value, or at 1/3\nalone, or no plateaus. Notably, the plateaus exist only inside a bounded region\nwithin the hexagonal-singlet phase. This study provides a clear understanding\nof the spin-gapped behaviour and magnetization plateaus observed in\nCu$_2$(pymca)$_3$(ClO$_4$), and also predicts the possible disappearance of 2/3\nplateau under pressure.\n"} {"abstract": " Atomic carbon (CI) has been proposed to be a global tracer of the molecular\ngas as a substitute for CO, however, its utility remains unproven. To evaluate\nthe suitability of CI as the tracer, we performed [CI]$(^3P_1-^3P_0)$\n(hereinafter [CI](1-0)) mapping observations of the northern part of the nearby\nspiral galaxy M83 with the ASTE telescope and compared the distributions of\n[CI](1-0) with CO lines (CO(1-0), CO(3-2), and $^{13}$CO(1-0)), HI, and\ninfrared (IR) emission (70, 160, and 250$ \\mu$m). The [CI](1-0) distribution in\nthe central region is similar to that of the CO lines, whereas [CI](1-0) in the\narm region is distributed outside the CO. We examined the dust temperature,\n$T_{\\rm dust}$, and dust mass surface density, $\\Sigma_{\\rm dust}$, by fitting\nthe IR continuum-spectrum distribution with a single-temperature modified\nblackbody. The distribution of $\\Sigma_{\\rm dust}$ shows a much better\nconsistency with the integrated intensity of CO(1-0) than with that of\n[CI](1-0), indicating that CO(1-0) is a good tracer of the cold molecular gas.\nThe spatial distribution of the [CI] excitation temperature, $T_{\\rm ex}$, was\nexamined using the intensity ratio of the two [CI] transitions. An appropriate\n$T_{\\rm ex}$ at the central, bar, arm, and inter-arm regions yields a constant\n[C]/[H$_2$] abundance ratio of $\\sim7 \\times 10^{-5}$ within a range of 0.1 dex\nin all regions. We successfully detected weak [CI](1-0) emission, even in the\ninter-arm region, in addition to the central, arm, and bar regions, using\nspectral stacking analysis. The stacked intensity of [CI](1-0) is found to be\nstrongly correlated with $T_{\\rm dust}$. 
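The single-temperature modified-blackbody fit used above for $T_{\rm dust}$ and $\Sigma_{\rm dust}$ reduces to a two-parameter least-squares problem over the 70, 160, and 250 $\mu$m bands. A sketch with emissivity index beta = 2 and synthetic fluxes, both assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8
beta = 2.0

def mbb(lam_um, amp, T):
    """Flux proportional to nu^beta * B_nu(T); nu rescaled to keep amp ~ O(1)."""
    nu = c / (lam_um * 1e-6)
    return amp * (nu / 1e12) ** (3 + beta) / np.expm1(h * nu / (k * T))

lam = np.array([70.0, 160.0, 250.0])
obs = mbb(lam, 1.0, 22.0) * (1 + 0.03 * np.array([0.5, -1.0, 0.8]))  # toy data
(amp_fit, T_fit), _ = curve_fit(mbb, lam, obs, p0=[1.0, 20.0])
print(f"fitted T_dust ~ {T_fit:.1f} K")   # the amplitude maps to Sigma_dust
```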
Our results indicate that the atomic\ncarbon is a photodissociation product of CO, and consequently, compared to\nCO(1-0), [CI](1-0) is less reliable in tracing the bulk of \"cold\" molecular gas\nin the galactic disk.\n"} {"abstract": " Using Galois theory of functional equations, we give a new proof of the main\nresult of the paper \"Transcendental transcendency of certain functions of\nPoincar\\'e\" by J.F. Ritt, on the differential transcendence of the solutions of\nthe functional equation R(y(t))=y(qt), where R is a rational function with\ncomplex coefficients which verifies R(0)=0, R'(0)=q, where q is a complex\nnumber with |q|>1. We also give a partial result in the case of an algebraic\nfunction R.\n"} {"abstract": " We construct an injection from the set of permutations of length $n$ that\ncontain exactly one copy of the decreasing pattern of length $k$ to the set of\npermutations of length $n+2$ that avoid that pattern. We then prove that the\ngenerating function counting the former is not rational, and in the case when\n$k$ is even and $k\\geq 4$, it is not even algebraic. We extend our injection\nand our nonrationality result to a larger class of patterns.\n"} {"abstract": " This paper proposes a non-autoregressive extension of our previously proposed\nsequence-to-sequence (S2S) model-based voice conversion (VC) methods. S2S\nmodel-based VC methods have attracted particular attention in recent years for\ntheir flexibility in converting not only the voice identity but also the pitch\ncontour and local duration of input speech, thanks to the ability of the\nencoder-decoder architecture with the attention mechanism. However, one of the\nobstacles to making these methods work in real-time is the autoregressive (AR)\nstructure. To overcome this obstacle, we develop a method to obtain a model\nthat is free from an AR structure and behaves similarly to the original S2S\nmodels, based on a teacher-student learning framework. In our method, called\n\"FastS2S-VC\", the student model consists of encoder, decoder, and attention\npredictor. The attention predictor learns to predict attention distributions\nsolely from source speech along with a target class index with the guidance of\nthose predicted by the teacher model from both source and target speech. Thanks\nto this structure, the model is freed from an AR structure and allows for\nparallelization. Furthermore, we show that FastS2S-VC is suitable for real-time\nimplementation based on a sliding-window approach, and describe how to make it\nrun in real-time. Through speaker-identity and emotional-expression conversion\nexperiments, we confirmed that FastS2S-VC was able to speed up the conversion\nprocess by 70 to 100 times compared to the original AR-type S2S-VC methods,\nwithout significantly degrading the audio quality and similarity to target\nspeech. We also confirmed that the real-time version of FastS2S-VC can be run\nwith a latency of 32 ms when run on a GPU.\n"} {"abstract": " A nested coordinate system is a reassigning of independent variables to take\nadvantage of geometric or symmetry properties of a particular application.\nPolar, cylindrical and spherical coordinate systems are primary examples of\nsuch a regrouping that have proved their importance in the separation of\nvariables method for solving partial differential equations. 
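The canonical example of such a regrouping, written out in code as the spherical coordinate map and its inverse:

```python
import math

def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)          # polar angle
    phi = math.atan2(y, x)            # azimuth
    return r, theta, phi

def to_cartesian(r, theta, phi):
    s = math.sin(theta)
    return r * s * math.cos(phi), r * s * math.sin(phi), r * math.cos(theta)

p = (1.0, 2.0, 2.0)
print(to_cartesian(*to_spherical(*p)))   # round-trips to (1.0, 2.0, 2.0)
```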
Geometric algebra\noffers powerful complimentary algebraic tools that are unavailable in other\ntreatments.\n"} {"abstract": " The study of the mapping class group of the plane minus a Cantor set uses a\ngraph of loops, which is an analogous of the curve graph in the study of\nmapping class groups of compact surfaces. The Gromov boundary of this loop\ngraph can be described in terms of \"cliques of high-filling rays\": high-filling\nrays are simple geodesics of the surface which are complicated enough to be\ninfinitely far away from any loop in the graph. Moreover, these rays are\narranged in cliques: any two high-filling rays which are both disjoint from a\nthird one are necessarily mutually disjoint. Every such clique is a point of\nthe Gromov-boundary of the loop graph. Some examples of cliques with any finite\nnumber of high-filling rays are already known.\n In this paper, we construct an infinite clique of high-filling rays.\n"} {"abstract": " We consider the asymptotic expansion of the functional series\n\\[S_{\\mu,\\gamma}(a;\\lambda)=\\sum_{n=1}^\\infty \\frac{n^\\gamma e^{-\\lambda\nn^2/a^2}}{(n^2+a^2)^\\mu}\\] for real values of the parameters $\\gamma$,\n$\\lambda>0$ and $\\mu\\geq0$ as $|a|\\to \\infty$ in the sector $|\\arg\\,a|<\\pi/4$.\nFor general values of $\\gamma$ the expansion is of algebraic type with terms\ninvolving the Riemann zeta function and a terminating confluent hypergeometric\nfunction. Of principal interest in this study is the case corresponding to even\ninteger values of $\\gamma$, where the algebraic-type expansion consists of a\nfinite number of terms together with a contribution comprising an infinite\nsequence of increasingly subdominant exponentially small expansions. This\nsituation is analogous to the well-known Poisson-Jacobi formula corresponding\nto the case $\\mu=\\gamma=0$. Numerical examples are provided to illustrate the\naccuracy of these expansions.\n"} {"abstract": " We present a novel method for predicting accurate depths from monocular\nimages with high efficiency. This optimal efficiency is achieved by exploiting\nwavelet decomposition, which is integrated in a fully differentiable\nencoder-decoder architecture. We demonstrate that we can reconstruct\nhigh-fidelity depth maps by predicting sparse wavelet coefficients. In contrast\nwith previous works, we show that wavelet coefficients can be learned without\ndirect supervision on coefficients. Instead we supervise only the final depth\nimage that is reconstructed through the inverse wavelet transform. We\nadditionally show that wavelet coefficients can be learned in fully\nself-supervised scenarios, without access to ground-truth depth. Finally, we\napply our method to different state-of-the-art monocular depth estimation\nmodels, in each case giving similar or better results compared to the original\nmodel, while requiring less than half the multiply-adds in the decoder network.\nCode at https://github.com/nianticlabs/wavelet-monodepth\n"} {"abstract": " This work presents a novel target-free extrinsic calibration algorithm for a\n3D Lidar and an IMU pair using an Extended Kalman Filter (EKF) which exploits\nthe \\textit{motion based calibration constraint} for state update. 
The steps\ninclude data collection by motion excitation of the Lidar-Inertial sensor\nsuite along all degrees of freedom, determination of the inter-sensor rotation\nby using the rotational component of the aforementioned \\textit{motion based\ncalibration constraint} in a least squares optimization framework, and finally,\nthe determination of the inter-sensor translation using the \\textit{motion based\ncalibration constraint} for state update in an Extended Kalman Filter (EKF)\nframework. We experimentally validate our method using data collected in our\nlab and open-source our contribution\n(https://github.com/unmannedlab/imu_lidar_calibration) for the robotics\nresearch community.\n"} {"abstract": " The field of quantum simulations in ultra-cold atomic gases has been\nremarkably successful. In principle it allows for an exact treatment of a\nvariety of highly relevant lattice models and their emergent phases of matter.\nHowever, the theoretical literature so far lacks a systematic study of the\neffects of the trap potential and of the finite size of the systems, as studies\nof such non-periodic, correlated fermionic lattice models are numerically\ndemanding beyond one dimension. We use the recently introduced real-space\ntruncated unity functional renormalization group to study these boundary and\ntrap effects with a focus on their impact on the superconducting phase of the\n$2$D Hubbard model. We find that not only do the experiments need to reach\ntemperatures lower than current capabilities allow, but also that system size\nand trap potential shape play a crucial role in simulating emergent phases of\nmatter.\n"} {"abstract": " The vast majority of semantic segmentation approaches rely on pixel-level\nannotations that are tedious and time consuming to obtain and suffer from\nsignificant inter and intra-expert variability. To address these issues, recent\napproaches have leveraged categorical annotations at the slide level, which in\ngeneral suffer from poor robustness and generalization. In this paper, we\npropose a novel weakly supervised multi-instance learning approach that\ndeciphers quantitative slide-level annotations which are fast to obtain and\nregularly present in clinical routine. The potential of the proposed approach\nis demonstrated for tumor segmentation of solid cancer subtypes. The proposed\napproach achieves superior performance on out-of-distribution, out-of-location,\nand out-of-domain testing sets.\n"} {"abstract": " The purpose of this report is to examine measures of importance of components\nin systems in terms of reliability. Since the first work on this subject by\nBirnbaum (1968), many interesting studies have appeared and important\nindicators have been constructed that allow one to rank the components of\ncomplex systems. They are helpful in analyzing the reliability of designed\nsystems and in establishing principles of operation and maintenance. The\nsignificance measures presented here are collected and discussed with regard to\nthe motivation behind their creation. They concern an approach in which both\ncomponents and systems are binary, and the possibility of generalization to\nmultistate systems is only mentioned. Among those discussed is one new proposal\nusing the methods of game theory, combining sensitivity to the structure of the\nsystem with the operational effects on system performance. 
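Birnbaum's measure, which started this line of work, is short enough to compute exactly for small systems: I_B(i) = h(1_i, p) - h(0_i, p), with h the system reliability. A sketch for an assumed series-parallel structure (component 1 in series with the parallel pair {2, 3}), chosen purely for illustration:

```python
from itertools import product

def structure(x1, x2, x3):
    return x1 and (x2 or x3)

def reliability(p, fix=None):
    """Exact h(p) by enumeration; fix=(i, v) pins component i at state v."""
    h = 0.0
    for states in product((0, 1), repeat=len(p)):
        if fix is not None and states[fix[0]] != fix[1]:
            continue                      # keep only the pinned state
        pr = 1.0
        for j, (pj, xj) in enumerate(zip(p, states)):
            if fix is not None and j == fix[0]:
                continue                  # pinned component: no probability factor
            pr *= pj if xj else 1.0 - pj
        h += pr * structure(*states)
    return h

p = [0.9, 0.8, 0.7]
for i in range(3):
    I_B = reliability(p, fix=(i, 1)) - reliability(p, fix=(i, 0))
    print(f"Birnbaum importance of component {i + 1}: {I_B:.3f}")
```

The series component dominates (I_B = 0.94 here), which matches the intuition that the structure, not only the component reliabilities, drives importance.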
The presented significance measures\nuse knowledge of the system structure as well as of reliability and wear and\ntear, and of whether the components can be repaired and maintained.\n"} {"abstract": " Driven by the increased complexity of dynamical systems, solving systems of\ndifferential equations through numerical simulation in optimization problems\nhas become computationally expensive. This paper provides a smart data-driven\nmechanism to construct low-dimensional surrogate models. These surrogate models\nreduce the computational time for solving complex optimization problems\nby using training instances derived from evaluations of the true objective\nfunctions. The surrogate models are constructed using a combination of proper\northogonal decomposition and radial basis functions and provide system\nresponses by simple matrix multiplication. Using relative maximum absolute\nerror as the measure of accuracy of approximation, it is shown that surrogate\nmodels with Latin hypercube sampling and spline radial basis functions dominate\nvariable-order methods in optimization time, while preserving accuracy. These\nsurrogate models also show robustness in the presence of model non-linearities.\nTherefore, these computationally efficient predictive surrogate models are\napplicable in various fields, specifically for solving inverse problems and\noptimal control problems, some examples of which are demonstrated in this\npaper.\n"} {"abstract": " First-order nonadiabatic coupling matrix elements (fo-NACMEs) are the basic\nquantities in theoretical descriptions of electronically nonadiabatic processes\nthat are ubiquitous in molecular physics and chemistry. Given the large size of\nsystems of chemical interest, time-dependent density functional theory (TDDFT)\nis usually the first choice. However, the lack of wave functions in TDDFT\nrenders the formulation of NAC-TDDFT for fo-NACMEs conceptually difficult. The\npresent account aims to analyze the available variants of NAC-TDDFT in a\ncritical but concise manner and meanwhile point out the proper ways for\nimplementation. It can be concluded, from both theoretical and numerical points\nof view, that the equation of motion-based variant of NAC-TDDFT is the right\nchoice. Possible future developments of this variant are also highlighted.\n"} {"abstract": " Immense field enhancement and nanoscale confinement of light are possible\nwithin nanoparticle-on-mirror (NPoM) plasmonic resonators, which enable novel\noptically-activated physical and chemical phenomena, and render these\nnanocavities greatly sensitive to minute structural changes, down to the atomic\nscale. Although a few of these structural parameters, primarily linked to the\nnanoparticle and the mirror morphology, have been identified, the impact of\nmolecular assembly and organization of the spacer layer between them has often\nbeen left uncharacterized. Here, we experimentally investigate how the complex\nand reconfigurable nature of a thiol-based self-assembled monolayer (SAM)\nadsorbed on the mirror surface impacts the optical properties of the NPoMs. We\nfabricate NPoMs with distinct molecular organizations by controlling the\nincubation time of the mirror in the thiol solution. Afterwards, we investigate\nthe structural changes that occur under laser irradiation by tracking the\nbonding dipole plasmon mode, while also monitoring Stokes and anti-Stokes Raman\nscattering from the molecules as a probe of their integrity. 
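The surrogate construction described in the surrogate-model abstract above boils down to one small linear solve at training time and one matrix multiplication at prediction time. A sketch with a Gaussian kernel and a toy response, both illustrative assumptions:

```python
import numpy as np

def rbf_matrix(X, C, eps=1.5):
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)                          # Gaussian RBF kernel

rng = np.random.default_rng(7)
X_train = rng.uniform(-1, 1, size=(80, 2))            # design sites (e.g. LHS)
y_train = np.sin(3 * X_train[:, 0]) * X_train[:, 1]   # toy system response

# Fit: solve for RBF weights (tiny ridge term for numerical safety).
w = np.linalg.solve(rbf_matrix(X_train, X_train)
                    + 1e-10 * np.eye(len(X_train)), y_train)

# Predict: the surrogate response is a single matrix multiplication.
X_new = rng.uniform(-1, 1, size=(5, 2))
y_pred = rbf_matrix(X_new, X_train) @ w
print(np.abs(y_pred - np.sin(3 * X_new[:, 0]) * X_new[:, 1]).max())
```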
First, we find an\neffective decrease in the SAM height as the laser power increases, compatible\nwith an irreversible change of molecule orientation caused by heating. Second,\nwe observe that the nanocavities prepared with a densely packed and more\nordered monolayer of molecules are more prone to changes in their resonance\ncompared to samples with sparser and more disordered SAMs. Our measurements\nindicate that molecular orientation and packing on the mirror surface play a\nkey role in determining the stability of NPoM structures and hence highlight\nthe under-recognized significance of SAM characterization in the development of\nNPoM-based applications.\n"} {"abstract": " Recent literature has demonstrated that the use of per-channel energy\nnormalization (PCEN) yields significant performance improvements over traditional\nlog-scaled mel-frequency spectrograms in acoustic sound event detection (SED)\nin a multi-class setting with overlapping events. However, the configuration of\nPCEN's parameters is sensitive to the recording environment, the\ncharacteristics of the class of events of interest, and the presence of\nmultiple overlapping events. This leads to improvements on a class-by-class\nbasis, but poor cross-class performance. In this article, we experiment with\nPCEN spectrograms as an alternative method for SED in urban audio using the\nUrbanSED dataset, demonstrating per-class improvements based on parameter\nconfiguration. Furthermore, we address cross-class performance with PCEN using\na novel method, Multi-Rate PCEN (MRPCEN). We demonstrate cross-class SED\nperformance with MRPCEN, showing improvements over traditional single-rate\nPCEN.\n"} {"abstract": " We study the Bose polaron problem in a nonequilibrium setting, by considering\nan impurity embedded in a quantum fluid of light realized by exciton-polaritons\nin a microcavity, subject to a coherent drive and dissipation on account of\npump and cavity losses. We obtain the polaron effective mass, the drag force\nacting on the impurity, and determine polaron trajectories at a semiclassical\nlevel. We find different dynamical regimes, originating from the unique\nfeatures of the excitation spectrum of driven-dissipative polariton fluids, in\nparticular a non-trivial regime of acceleration against the flow. Our work\npromotes the study of impurity dynamics as an alternative testbed for probing\nsuperfluidity in quantum fluids of light.\n"} {"abstract": " In this paper, we focus on the performance of vehicle-to-vehicle (V2V)\ncommunication adopting the Dedicated Short Range Communication (DSRC)\napplication in periodic broadcast mode. An analytical model is studied and a\nfixed point method is used to analyze the packet delivery ratio (PDR) and mean\ndelay based on the IEEE 802.11p standard in a fully connected network under the\nassumption of perfect PHY performance. With the characteristics of V2V\ncommunication, we develop the Semi-persistent Contention Density Control\n(SpCDC) scheme to improve the DSRC performance. We use Monte Carlo simulation\nto verify the results obtained by the analytical model. The simulation results\nshow that the packet delivery ratio in the SpCDC scheme increases by more than\n10% compared with IEEE 802.11p in heavy vehicle load scenarios. 
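For the PCEN front-end discussed above, the standard formulation smooths the per-channel energy with a one-pole IIR filter and applies adaptive gain compression; the multi-rate idea can then be sketched by stacking several smoothing rates. Parameter values below follow common defaults and are assumptions here:

```python
import numpy as np

def pcen(E, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """E: (freq_bins, frames) mel energies -> PCEN(t, f)."""
    M = np.empty_like(E)
    M[:, 0] = E[:, 0]
    for t in range(1, E.shape[1]):
        M[:, t] = (1 - s) * M[:, t - 1] + s * E[:, t]   # IIR smoother
    return (E / (eps + M) ** alpha + delta) ** r - delta ** r

def multi_rate_pcen(E, rates=(0.015, 0.05, 0.15)):
    return np.stack([pcen(E, s=s) for s in rates])      # one channel per rate

E = np.abs(np.random.default_rng(0).standard_normal((40, 200))) ** 2
print(multi_rate_pcen(E).shape)    # (3, 40, 200)
```

Each smoothing rate s sets a different adaptation time constant, which is the knob that trades off per-class against cross-class behavior.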
Meanwhile, the mean\nreception delay decreases by more than 50%, providing more reliable road\nsafety.\n"} {"abstract": " Spectral factorization is a prominent tool with several important\napplications in various areas of applied science. Wiener and Masani proved the\nexistence of matrix spectral factorization. Their theorem has been extended to\nthe multivariable case by Helson and Lowdenslager. Solving the problem\nnumerically is challenging in both situations, and also important due to its\npractical applications. Therefore, several authors have developed algorithms\nfor factorization. The Janashia-Lagvilava algorithm is a relatively new method\nfor matrix spectral factorization which has proved to be useful in several\napplications. In this paper, we extend this method to the multivariable case.\nConsequently, a new numerical algorithm for multivariable matrix spectral\nfactorization is constructed.\n"} {"abstract": " We investigate the torque field and skyrmion movement at an interface between\na ferromagnet hosting a skyrmion and a material with strong spin-orbit\ninteraction. We analyze both semiconductor materials and topological insulators\nusing a Hamiltonian model that includes a linear term. The current inducing the\nspin torque is considered to flow in the single-band limit; therefore, a\nquantum model of the current is used. Skyrmion movement due to spin-transfer\ntorque proves to be more difficult in the presence of spin-orbit interaction in\nthe case where only interface in-plane currents are present. However, edge\neffects in narrow nanowires can be used to drive the skyrmion movement and to\nexert limited control over its direction of motion. We also show the\ndifferences and similarities between torque fields due to electric current in\nthe many- and single-band limits.\n"} {"abstract": " For any second-order scalar PDE $\\mathcal{E}$ in one unknown function, which\nwe interpret as a hypersurface of a second-order jet space $J^2$, we construct,\nby means of the characteristics of $\\mathcal{E}$, a sub-bundle of the contact\ndistribution of the underlying contact manifold $J^1$, consisting of conic\nvarieties. We call it the contact cone structure associated with $\\mathcal{E}$.\nWe then focus on symplectic Monge-Amp\\`ere equations in 3 independent\nvariables, that are naturally parametrized by a 13-dimensional real projective\nspace. If we pass to the field of complex numbers $\\mathbb{C}$, this projective\nspace turns out to be the projectivization of the 14-dimensional irreducible\nrepresentation of the simple Lie group $\\mathsf{Sp}(6,\\mathbb{C})$: the\nassociated moment map allows one to define a rational map $\\varpi$ from the space\nof symplectic 3D Monge-Amp\\`ere equations to the projectivization of the space\nof quadratic forms on a $6$-dimensional symplectic vector space. We study in\ndetail the relationship between the zero locus of the image of $\\varpi$,\nherewith called the cocharacteristic variety, and the contact cone structure of\na 3D Monge-Amp\\`ere equation $\\mathcal{E}$: under the hypothesis of\nnon-degenerate symbol, we prove that these two constructions coincide. A key\ntool in achieving such a result will be a complete list of mutually\nnon-equivalent quadratic forms on a $6$-dimensional symplectic space, which is\nof interest in its own right.\n"} {"abstract": " What do word vector representations reveal about the emotions associated with\nwords? 
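One unsupervised baseline this question invites is to score a word by the cosine similarity between its vector and the centroid of a few emotion seed words. A sketch with toy 4-dimensional vectors standing in for pretrained embeddings:

```python
import numpy as np

# Toy embeddings; real use would load pretrained word vectors instead.
emb = {
    "furious": np.array([0.9, 0.1, 0.0, 0.2]),
    "angry":   np.array([0.8, 0.2, 0.1, 0.1]),
    "irate":   np.array([0.85, 0.15, 0.05, 0.2]),
    "table":   np.array([0.0, 0.9, 0.4, 0.1]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

anger_centroid = np.mean([emb["furious"], emb["angry"]], axis=0)
for w in ("irate", "table"):
    print(w, round(float(cosine(emb[w], anger_centroid)), 3))
```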
In this study, we consider the task of estimating word-level emotion\nintensity scores for specific emotions, exploring unsupervised, supervised, and\nfinally a self-supervised method of extracting emotional associations from word\nvector representations. Overall, we find that word vectors carry substantial\npotential for inducing fine-grained emotion intensity scores, showing a far\nhigher correlation with human ground truth ratings than achieved by\nstate-of-the-art emotion lexicons.\n"} {"abstract": " Landau suggested that the low-temperature properties of metals can be\nunderstood in terms of long-lived quasiparticles with all complex interactions\nincluded in Fermi-liquid parameters, such as the effective mass $m^{\\star}$.\nDespite the wide applicability of this picture, electronic transport in bad or\nstrange metals and unconventional superconductors is controversially discussed\nin terms of a possible collapse of the quasiparticle concept. Here we explore the\nelectrodynamic response of correlated metals at half filling for varying\ncorrelation strength upon approaching a Mott insulator. We reveal persistent\nFermi-liquid behavior with pronounced quadratic dependences of the optical\nscattering rate on temperature and frequency, along with a puzzling elastic\ncontribution to relaxation. The strong increase of the resistivity beyond the\nIoffe-Regel-Mott limit is accompanied by a `displaced Drude peak' in the\noptical conductivity. Our results, supported by a theoretical model for the\noptical response, demonstrate the emergence of a bad metal from resilient\nquasiparticles that are subject to dynamical localization and dissolve near the\nMott transition.\n"} {"abstract": " In his 1935 Gedankenexperiment, Erwin Schr\\\"{o}dinger imagined a poisonous\nsubstance which has a 50% probability of being released, based on the decay of\na radioactive atom. As such, the life of the cat and the state of the poison\nbecome entangled, and the fate of the cat is determined upon opening the box.\nWe present an experimental technique that keeps the cat alive in every case.\nThis method relies on the time-resolved Hong-Ou-Mandel effect: two long,\nidentical photons impinging on a beam splitter always bunch in either of the\noutputs. Interpreting the first photon detection as the state of the poison,\nwe identify the second photon with the state of the cat. We show that, even\nafter the collapse of the first photon's state, their fates are intertwined\nthrough quantum interference. We demonstrate this by a sudden phase change\nbetween the inputs, administered conditionally on the outcome of the first\ndetection, which steers the second photon to a pre-defined output and ensures\nthat the cat is always observed alive.\n"} {"abstract": " The existence of the three-dimensional quantum Hall effect (3DQHE), due to\nspontaneous Fermi surface instabilities in a strong magnetic field, was\nproposed decades ago and has stimulated recent progress in experiments. Recent\nexperimental reports show that Hall plateaus and vanishing transverse\nmagneto-resistivities (TMRs), the two main signatures of 3DQHE, are not easy to\nobserve in natural materials. Two different explanations of the slowly varying,\nslope-like Hall plateaus and non-vanishing TMRs (which can be called the\nquasi-quantized Hall effect (QQHE)) have been proposed. By studying\nmagneto-transport in a simple effective periodic 3D system, we first show how\n3DQHE can be achieved in certain parameter regimes. 
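For the time-resolved Hong-Ou-Mandel scheme described above, the interference is easy to check numerically: for two single photons in pure temporal modes, the coincidence probability at a 50:50 beam splitter is (1 - |overlap|^2) / 2, so identical long photons never produce a coincidence. Gaussian wave packets are assumed here for illustration:

```python
import numpy as np

t = np.linspace(-50, 50, 20001)
sigma = 5.0

def packet(delay):
    psi = np.exp(-((t - delay) ** 2) / (4 * sigma**2))
    return psi / np.sqrt(np.trapz(np.abs(psi) ** 2, t))   # unit norm

for tau in (0.0, 5.0, 20.0):
    overlap = np.trapz(np.conj(packet(0.0)) * packet(tau), t)
    p_cc = (1 - abs(overlap) ** 2) / 2
    print(f"delay {tau:5.1f}: coincidence probability {p_cc:.3f}")
# delay 0 -> 0.000 (perfect bunching); large delay -> approaches 0.500
```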
We find two new mechanisms that may give rise to QQHE. One mechanism is\nthe \"low\" Fermi energy effect, and the other is the \"strong\" impurity effect.\nOur study also shows that the artificial superlattice is an ideal platform\nfor realizing 3DQHE with a high layer-barrier periodic potential.\n"} {"abstract": " Presence is often considered the most important quale describing the\nsubjective feeling of being in a computer-generated (virtual) or\ncomputer-mediated environment. The identification and separation of two\northogonal presence components, i.e., the place illusion and the plausibility\nillusion, has been an accepted theoretical model describing Virtual Reality\n(VR) experiences for some time. In this model, immersion is a proposed\ncontributing factor to the place illusion. Lately, copresence and social\npresence illusions have extended this model, and coherence was proposed as a\ncontributing factor to the plausibility illusion. Such factors strive to\nidentify (objectively) measurable characteristics of an experience, e.g.,\nsystem properties that allow controlled manipulations of VR experiences. This\nperspective article challenges this presence-oriented VR theory. First, we\nargue that a place illusion cannot be the major construct to describe the much\nwider scope of Virtual, Augmented, and Mixed Reality (VR, AR, MR; or XR for\nshort). Second, we argue that there is no plausibility illusion but merely\nplausibility, and we derive the place illusion as a consequence of a plausible\ngeneration of spatial cues, and similarly for all of the current model's\nso-defined illusions. Finally, we propose coherence and plausibility to become\nthe central essential conditions in a novel theoretical model describing XR\nexperiences and effects.\n"} {"abstract": " Many teleoperation tasks require three or more tools working together, which\nneed the cooperation of multiple operators. The effectiveness of such schemes\nmay be limited by communication. Trimanipulation by a single operator using an\nartificial third arm controlled together with their natural arms is a promising\nsolution to this issue. Foot-controlled interfaces have previously shown the\ncapability to be used for the continuous control of robot arms. However, the\nuse of such interfaces for controlling a supernumerary robotic limb (SRL) in\ncoordination with the natural limbs is not well understood. In this paper, a\nteleoperation task imitating physically coupled hands in a virtual reality\nscene was conducted with 14 subjects to evaluate human performance during\ntri-manipulation. The participants were required to move three limbs together\nin a coordinated way mimicking three arms holding a shared physical object. It\nwas found that after a short practice session, the three-hand tri-manipulation\nusing a single subject's hands and foot was still slower than dyad operation;\nhowever, it displayed a similar success rate and higher motion\nefficiency than two-person cooperation.\n"} {"abstract": " The linear frequency modulated (LFM) frequency agile radar (FAR) can\nsynthesize a wide signal bandwidth through coherent processing while keeping\nthe bandwidth of each pulse narrow. In this way, high range resolution profiles\n(HRRP) can be obtained without increasing the hardware system cost.\nFurthermore, the agility improves both robustness to jamming and\nspectrum efficiency.
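For orientation, the coherent processing behind HRRP extraction is a line-spectrum estimation problem. The sketch below shows the Newton-refined periodogram step at the heart of NOMP-style estimators on a synthetic single tone; it is an illustrative construction of ours, not the paper's algorithm, and all parameters (oversampling factor, iteration count) are invented.

# Sketch: refine a coarse FFT frequency estimate by Newton steps on the
# periodogram G(f) = |c(f)|^2 with c(f) = sum_n y[n] exp(-2*pi*i*f*n).
import numpy as np

rng = np.random.default_rng(1)
N = 128
f_true = 0.2037                                   # cycles per sample
n = np.arange(N)
y = np.exp(2j * np.pi * f_true * n) + 0.05 * rng.normal(size=N)

gamma = 4                                         # oversampling factor
K = gamma * N
f = np.argmax(np.abs(np.fft.fft(y, K))) / K       # coarse grid estimate

for _ in range(5):                                # Newton refinement
    e = np.exp(-2j * np.pi * f * n)
    c0 = np.sum(y * e)
    c1 = np.sum(y * e * (-2j * np.pi * n))        # dc/df
    c2 = np.sum(y * e * (-2j * np.pi * n) ** 2)   # d^2c/df^2
    g1 = 2 * np.real(np.conj(c0) * c1)            # G'(f)
    g2 = 2 * (np.real(np.conj(c0) * c2) + np.abs(c1) ** 2)  # G''(f)
    if g2 < 0:                                    # step only toward a maximum
        f -= g1 / g2
print(f"refined frequency {f:.6f} vs true {f_true:.6f}")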
Motivated by the Newtonized orthogonal matching pursuit\n(NOMP) algorithm for the line spectral estimation problem, a NOMP variant for FAR,\ntermed NOMP-FAR, is designed to process each coarse range bin and extract the HRRPs\nand velocities of multiple targets, including a guide for determining the\noversampling factor and the stopping criterion. In addition, it is shown that\na target will cause false alarms in nearby coarse range bins; a\npostprocessing algorithm is therefore proposed to suppress these ghost targets.\nNumerical simulations are conducted to demonstrate the effectiveness of\nNOMP-FAR.\n"} {"abstract": " We present the task description and a discussion of the results of the DCASE\n2021 Challenge Task 2. In 2020, we organized an unsupervised anomalous sound\ndetection (ASD) task, identifying whether a given sound was normal or anomalous\nwithout anomalous training data. In 2021, we organized an advanced unsupervised\nASD task under domain-shift conditions, which focuses on the inevitable problem\nof the practical use of ASD systems. The main challenge of this task is to\ndetect unknown anomalous sounds where the acoustic characteristics of the\ntraining and testing samples are different, i.e., domain-shifted. This problem\nfrequently occurs due to changes in seasons, manufactured products, and/or\nenvironmental noise. We received 75 submissions from 26 teams, and several\nnovel approaches have been developed in this challenge. On the basis of the\nanalysis of the evaluation results, we found that there are two types of\nremarkable approaches that the top-5 winning teams adopted: 1) ensemble approaches\nof ``outlier exposure'' (OE)-based detectors and ``inlier modeling'' (IM)-based\ndetectors and 2) approaches based on IM-based detection for features learned in\na machine-identification task.\n"} {"abstract": " Quantitative information on tumor heterogeneity and cell load could assist in\ndesigning effective and refined personalized treatment strategies. We recently\nshowed that such information can be inferred from the diffusion\nparameter D derived from diffusion-weighted MRI (DWI) if a relation between\nD and cell density can be established. However, such a relation cannot a priori\nbe assumed to be constant for all patients and tumor types. Hence, to assist in\nclinical decisions in palliative settings, the relation needs to be established\nwithout tumor resection. It is here demonstrated that biopsies may contain\nsufficient information for this purpose if the biopsy locations are chosen\nsystematically, as elaborated in this paper. A superpixel-based method\nfor automated optimal localization of biopsies from the DWI D-map is proposed.\nThe performance of the DWI-guided procedure is evaluated by extensive\nsimulations of biopsies. Needle biopsies yield sufficient histological\ninformation to establish a quantitative relationship between D-value and cell\ndensity, provided they are taken from regions with high, intermediate, and low\nD-value in DWI. The automated localization of the biopsy regions is\ndemonstrated on an NSCLC patient tumor. In this case, even two or three\nbiopsies give a reasonable estimate. Simulations of needle biopsies under\ndifferent conditions indicate that the DWI guidance greatly improves the\nestimation results. Tumor cellularity and heterogeneity in solid tumors may be\nreliably investigated from DWI and a few needle biopsies that are sampled in\nregions of well-separated D-values, excluding adipose tissue.
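A minimal sketch of the region-selection idea just described: pick biopsy voxels near low, intermediate, and high quantiles of the D-value distribution from a D-map. The map, the exclusion threshold, and the quantile levels here are all synthetic stand-ins for illustration only.

# Sketch: choose candidate biopsy voxels with well-separated D-values.
import numpy as np

rng = np.random.default_rng(2)
D = rng.gamma(shape=3.0, scale=0.4, size=(128, 128))   # stand-in D-map
mask = D > 0.3                                         # exclude e.g. adipose voxels

vals = D[mask]
targets = np.quantile(vals, [0.1, 0.5, 0.9])           # low / intermediate / high D

for name, t in zip(["low", "intermediate", "high"], targets):
    # voxel (within the mask) whose D-value is closest to the target quantile
    idx = np.unravel_index(
        np.argmin(np.where(mask, np.abs(D - t), np.inf)), D.shape)
    print(f"{name:12s} D~{D[idx]:.2f} at voxel {idx}")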
This procedure could provide a way of embedding assistance for cancer\ndiagnosis and treatment, based on personalized information, into the clinical\nworkflow.\n"} {"abstract": " In this paper we give a classification of the asymptotic expansion of the\n$q$-expansion of reciprocals of Eisenstein series $E_k$ of weight $k$ for the\nmodular group $\\func{SL}_2(\\mathbb{Z})$. For $k \\geq 12$ even, this extends\nresults of Hardy and Ramanujan, and Berndt, Bialek and Yee, utilizing the\nCircle Method on the one hand, and results of Petersson, and Bringmann and\nKane, developing a theory of meromorphic Poincar{\\'e} series on the other. We\nfollow a uniform approach, based on the zeros of the Eisenstein series with the\nlargest imaginary part. These special zeros provide information on the\nsingularities of the Fourier expansion of $1/E_k(z)$ with respect to $q = e^{2\n\\pi i z}$.\n"} {"abstract": " Interactive single-image segmentation is ubiquitous in scientific and\ncommercial imaging software. In this work, we focus on the single-image\nsegmentation problem given only a few seeds, such as scribbles. Inspired by the\ndynamic receptive field in the human visual system, we propose the\nGaussian dynamic convolution (GDC) to quickly and efficiently aggregate\ncontextual information for neural networks. The core idea is to randomly select\nthe spatial sampling area according to Gaussian-distributed offsets. Our\nGDC can be easily used as a module to build lightweight or complex segmentation\nnetworks. We adopt the proposed GDC to address typical single-image\nsegmentation tasks. Furthermore, we also build a Gaussian dynamic pyramid\npooling to show its potential and generality in common semantic segmentation.\nExperiments demonstrate that the GDC outperforms other existing convolutions on\nthree benchmark segmentation datasets including Pascal-Context, Pascal-VOC\n2012, and Cityscapes. Additional experiments are also conducted to illustrate\nthat the GDC can produce richer and more vivid features compared with other\nconvolutions. In general, our GDC helps convolutional neural\nnetworks form an overall impression of the image.\n"} {"abstract": " For any positive integer $r$, we construct a smooth complex projective\nrational surface which has at least $r$ real forms not isomorphic over\n$\\mathbb{R}$.\n"} {"abstract": " We present a very simple form of the supercharges and the Hamiltonian of\n${\\cal N} {=}\\,2$ supersymmetric extension of $n$-particle\nRuijsenaars--Schneider models for three cases of the interaction:\n$1/(x_i-x_j)$, $1/\\tan(x_i-x_j)$, $1/\\tanh(x_i-x_j)$. The long \"fermionic tails\"\nof the supercharges and the Hamiltonian roll up into simple rational functions\nof fermionic bilinears.\n"} {"abstract": " We present a study of the structure and differential capacitance of electric\ndouble layers of aqueous electrolytes. We consider Electric Double Layer\nCapacitors (EDLC) composed of spherical cations and anions in a dielectric\ncontinuum confined between a planar cathode and anode. The model system\nincludes steric as well as Coulombic ion-ion and ion-electrode interactions. We\ncompare results of computationally expensive, but \"exact\", Brownian Dynamics\n(BD) simulations with approximate, but cheap, calculations based on classical\nDensity Functional Theory (DFT).
Excellent overall agreement is found for a\nlarge set of system parameters $-$ including variations in concentrations,\nionic size- and valency-asymmetries, applied voltages, and electrode separation\n$-$ provided the differences between the canonical ensemble of the BD\nsimulations and the grand-canonical ensemble of DFT are properly taken into\naccount. In particular, a careful distinction is made between the differential\ncapacitance $C_N$ at fixed number of ions and $C_\\mu$ at fixed ionic chemical\npotential. Furthermore, we derive and exploit their thermodynamic relations.\nThese relations will also be useful for future comparisons between the two\nensembles.\n"} {"abstract": " We analyse an extremal question on the degrees of the link graphs of a finite\nregular graph, that is, the subgraphs induced by non-trivial spheres. We show\nthat if $G$ is $d$-regular and connected but not complete then some link graph\nof $G$ has minimum degree at most $\\lfloor 2d/3\\rfloor-1$, and if $G$ is\nsufficiently large in terms of $d$ then some link graph has minimum degree at\nmost $\\lfloor d/2\\rfloor-1$; both bounds are best possible. We also give the\ncorresponding best-possible result for the problem where\nsubgraphs induced by balls, rather than spheres, are considered.\n We motivate these questions by posing a conjecture concerning expansion of\nlink graphs in large bounded-degree graphs, together with a heuristic\njustification thereof.\n"} {"abstract": " \\#P-hardness of computing matrix immanants is proved for each member of a\nbroad class of shapes and restricted sets of matrices. The class is\ncharacterized in the following way. If a shape of size $n$ in it is of the form\n$(w,\\mathbf{1}+\\lambda)$ or its conjugate is of that form, where $\\mathbf{1}$\nis the all-$1$ vector, then $|\\lambda|$ is $n^{\\varepsilon}$ for some\n$0<\\varepsilon$, $\\lambda$ can be tiled with $1\\times 2$ dominos and\n$(3w+3h(\\lambda)+1)|\\lambda| \\le n$, where $h(\\lambda)$ is the height of\n$\\lambda$. The problem remains \\#P-hard if the immanants are evaluated on\n$0$-$1$ matrices. We also give hardness proofs of some immanants whose shape\n$\\lambda = (\\mathbf{1}+\\lambda_d)$ has size $n$ such that $|\\lambda_d| =\nn^{\\varepsilon}$ for some $0<\\varepsilon<\\frac{1}{2}$, and for some $w$, the\nshape $\\lambda_d/(w)$ is tilable with $1\\times 2$ dominos. The \\#P-hardness\nresult holds when these immanants are evaluated on adjacency matrices of\nplanar, directed graphs; however, in these cases the edges have small positive\ninteger weights.\n"} {"abstract": " The chromaticity diagram associated with the CIE 1931 color matching\nfunctions is shown to be slightly non-convex. While having no impact on\npractical colorimetric computations, the non-convexity does have a significant\nimpact on the shape of some optimal object color reflectance distributions\nassociated with the outer surface of the object color solid. Instead of the\nusual two-transition Schr\\\"{o}dinger form, many optimal colors exhibit higher\ntransition counts. A linear programming formulation is developed and is used to\nlocate where these higher-transition optimal object colors reside on the object\ncolor solid surface. The regions of higher transition count appear to have a\npoint-symmetric complementary structure.
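To make the kind of linear program involved concrete, here is a hedged sketch: maximize luminance at a fixed target chromaticity over reflectances bounded in [0, 1]. The color-matching functions below are crude Gaussian stand-ins, not the real CIE 1931 tables, and the target chromaticity is arbitrary.

# Sketch of an optimal-color LP: maximize Y at fixed chromaticity (x_t, y_t).
import numpy as np
from scipy.optimize import linprog

lam = np.linspace(380, 780, 81)
g = lambda mu, s: np.exp(-0.5 * ((lam - mu) / s) ** 2)
xbar = 1.06 * g(600, 38) + 0.36 * g(446, 19)   # stand-in CMFs, not CIE data
ybar = 1.00 * g(556, 47)
zbar = 1.78 * g(449, 23)

xt, yt = 0.40, 0.35                            # target chromaticity
# With X = xbar.r, Y = ybar.r, Z = zbar.r, fixing chromaticity gives two
# linear constraints: (1-xt)X - xt(Y+Z) = 0 and -yt(X+Z) + (1-yt)Y = 0.
A_eq = np.vstack([(1 - xt) * xbar - xt * ybar - xt * zbar,
                  -yt * xbar + (1 - yt) * ybar - yt * zbar])
b_eq = np.zeros(2)

res = linprog(c=-ybar, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * lam.size, method="highs")
assert res.success
r = res.x
transitions = np.count_nonzero(np.diff(r > 0.5))   # count 0/1 transitions
print(f"max Y = {-res.fun:.3f}, transitions in optimal reflectance: {transitions}")

With only two equality constraints and box bounds, an optimal vertex has at most two fractional components, which is why the classical optimum is close to a two-transition reflectance.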
The final peer-reviewed version (to\nappear) contains additional material concerning convexification of the\ncolor-matching functions and additional analysis of modern\n\"physiologically-relevant\" CMFs transformed from cone fundamentals.\n"} {"abstract": " Raman spectroscopy is an advantageous method for studying the local structure\nof materials, but the interpretation of measured spectra is complicated by the\npresence of oblique phonons in polycrystals of polar materials. Whilst group\ntheory considerations and standard ab initio calculations are helpful, they are\noften valid only for single crystals. In this paper, we introduce a method for\ncomputing Raman spectra of polycrystalline materials from first principles. We\nstart from the standard approach based on the (Placzek) rotation invariants of\nthe Raman tensors and extend it to include the effect of the coupling between\nthe lattice vibrations and the induced electric field, and the electro-optic\ncontribution, relevant for polar materials like ferroelectrics. As exemplified\nby applying the method to rhombohedral BaTiO3, AlN, and LiNbO3, such an\nextension brings the simulated Raman spectrum into much better correspondence\nwith the experimental one. Additional advantages of the method are that it is\ngeneral, permits automation, and thus can be used in a high-throughput fashion.\n"} {"abstract": " This paper is a continuation of our article (European J. Math.,\nhttps://doi.org/10.1007/s40879-020-00419-8). The notion of a poor complex\ncompact manifold was introduced there and the group $Aut(X)$ for a $P^1$-bundle\nover such a manifold was proven to be very Jordan. We call a group $G$ very\nJordan if it contains a normal abelian subgroup $G_0$ such that the orders of\nfinite subgroups of the quotient $G/G_0$ are bounded by a constant depending on\n$G$ only.\n In this paper we provide explicit examples of infinite families of poor\nmanifolds of any complex dimension, namely simple tori of algebraic dimension\nzero. Then we consider a non-trivial holomorphic $P^1$-bundle $(X,p,Y)$ over a\nnon-uniruled complex compact Kaehler manifold $Y$. We prove that $Aut(X)$ is\nvery Jordan provided some additional conditions on the set of sections of $p$\nare met. Applications to $P^1$-bundles over non-algebraic complex tori are\ngiven.\n"} {"abstract": " In this work we obtain a geometric characterization of the measures $\\mu$ in\n$\\mathbb{R}^{n+1}$ with polynomial upper growth of degree $n$ such that the\n$n$-dimensional Riesz transform $\\mathcal{R}\\mu (x) = \\int\n\\frac{x-y}{|x-y|^{n+1}}\\,d\\mu(y)$ belongs to $L^2(\\mu)$, under the assumption\nthat $\\mu$ satisfies the following Wolff energy estimate, for any ball\n$B\\subset\\mathbb{R}^{n+1}$: $$\\int_B \\int_0^\\infty\n\\left(\\frac{\\mu(B(x,r))}{r^{n-\\frac38}}\\right)^2\\,\\frac{dr}r\\,d\\mu(x)\\leq\nM\\,\\bigg(\\frac{\\mu(2B)}{r(B)^{n-\\frac38}}\\bigg)^2\\,\\mu(2B).$$ More precisely,\nwe show that $\\mu$ satisfies the following estimate:\n$$\\|\\mathcal{R}\\mu\\|_{L^2(\\mu)}^2 + \\|\\mu\\|\\approx \\int\\!\\!\\int_0^\\infty\n\\beta_{\\mu,2}(x,r)^2\\,\\frac{\\mu(B(x,r))}{r^n}\\,\\frac{dr}r\\,d\\mu(x) + \\|\\mu\\|,$$\nwhere $\\beta_{\\mu,2}(x,r)^2 = \\inf_L \\frac1{r^n}\\int_{B(x,r)}\n\\left(\\frac{\\mathrm{dist}(y,L)}r\\right)^2\\,d\\mu(y),$ with the infimum taken\nover all affine $n$-planes $L\\subset\\mathbb{R}^{n+1}$.
In a companion paper,\nwhich relies on the results obtained in this work, it is shown that the same\nresult holds without the above assumption regarding the Wolff energy of $\\mu$.\nThis result has important consequences for the Painlev\\'e problem for Lipschitz\nharmonic functions.\n"} {"abstract": " We propose a new approach for trading VIX futures. We assume that the term\nstructure of VIX futures follows a Markov model. Our trading strategy selects a\nposition in VIX futures by maximizing the expected utility for a day-ahead\nhorizon given the current shape and level of the term structure.\nComputationally, we model the functional dependence between the VIX futures\ncurve, the VIX futures positions, and the expected utility as a deep neural\nnetwork with five hidden layers. Out-of-sample backtests of the VIX futures\ntrading strategy suggest that this approach gives rise to reasonable portfolio\nperformance, and to positions in which the investor will be either long or\nshort VIX futures contracts depending on the market environment.\n"} {"abstract": " In this paper we study the connectivity of Fatou components for maps in a\nlarge family of singular perturbations. We prove that, for some parameters\ninside the family, the dynamical planes for the corresponding maps present\nFatou components of arbitrarily large connectivity, and we determine these\nconnectivities precisely. In particular, these results extend the ones obtained\nin [Can17, Can18].\n"} {"abstract": " Cross features play an important role in click-through rate (CTR) prediction.\nMost of the existing methods adopt a DNN-based model to capture the cross\nfeatures in an implicit manner. These implicit methods may lead to\nsuboptimal performance due to their limited explicit semantic modeling.\nAlthough traditional statistical explicit semantic cross features can address\nthe problem in these implicit methods, they still suffer from some challenges,\nincluding lack of generalization and expensive memory cost. Few works focus on\ntackling these challenges. In this paper, we take the first step in learning\nthe explicit semantic cross features and propose Pre-trained Cross Feature\nlearning Graph Neural Networks (PCF-GNN), a GNN-based pre-trained model aiming\nat generating cross features in an explicit fashion. Extensive experiments are\nconducted on both public and industrial datasets, where PCF-GNN shows\ncompetence in both performance and memory-efficiency in various tasks.\n"} {"abstract": " The number of photographs taken worldwide is growing rapidly and steadily.\nWhile a small subset of these images is annotated and shared by users through\nsocial media platforms, due to the sheer number of images in personal photo\nrepositories (shared or not shared), finding specific images remains\nchallenging. This survey explores existing image retrieval techniques as well\nas photo-organizer applications to highlight their relative strengths in\naddressing this challenge.\n"} {"abstract": " Partitioning graphs into blocks of roughly equal size is widely used when\nprocessing large graphs. Currently there is a gap in the space of available\npartitioning algorithms. On the one hand, there are streaming algorithms that\nhave been adopted to partition massive graph data on small machines. In the\nstreaming model, vertices arrive one at a time including their neighborhood and\nthen have to be assigned directly to a block.
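For reference, a minimal sketch of such a one-pass streaming assignment using a Fennel-style objective (the scoring rule and the balance weight alpha follow the published Fennel heuristic; the graph and all sizes here are synthetic stand-ins):

# Sketch: stream vertices in, assign each to the block maximizing
# |neighbors already in block| - alpha*gamma*|block|^(gamma-1).
import numpy as np

rng = np.random.default_rng(3)
n, k = 1000, 4
adj = [set(int(u) for u in rng.integers(0, n, size=6)) for _ in range(n)]
for v in range(n):                       # symmetrize the synthetic graph
    for u in list(adj[v]):
        adj[u].add(v)

m = sum(len(a) for a in adj) / 2
gamma = 1.5
alpha = np.sqrt(k) * m / n ** gamma      # Fennel's balance weight

assignment = {}
blocks = [set() for _ in range(k)]
for v in range(n):                       # vertices arrive one at a time
    def score(b):
        gain = len(blocks[b] & adj[v])   # neighbors already placed in b
        penalty = alpha * gamma * len(blocks[b]) ** (gamma - 1)
        return gain - penalty
    best = max(range(k), key=score)
    assignment[v] = best
    blocks[best].add(v)

cut = sum(1 for v in range(n) for u in adj[v]
          if u != v and assignment[u] != assignment[v]) // 2
print("block sizes:", [len(b) for b in blocks], "cut edges:", cut)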
These algorithms can partition\nhuge graphs quickly with little memory, but they produce partitions with low\nsolution quality. On the other hand, there are offline (shared-memory)\nmultilevel algorithms that produce partitions with high quality but also need a\nmachine with enough memory. We make a first step to close this gap by\npresenting an algorithm that computes significantly improved partitions of huge\ngraphs using a single machine with little memory in a streaming setting. First,\nwe adopt the buffered streaming model, which is a more reasonable approach in\npractice. In this model, a processing element can store a buffer, or batch, of\nnodes before making assignment decisions. When our algorithm receives a batch\nof nodes, we build a model graph that represents the nodes of the batch and the\nalready present partition structure. This model enables us to apply multilevel\nalgorithms and in turn compute much higher quality solutions of huge graphs on\ncheap machines than previously possible. To partition the model, we develop a\nmultilevel algorithm that optimizes an objective function that has previously\nbeen shown to be effective for the streaming setting. This also removes the\ndependency on the number of blocks k from the running time compared to the\nprevious state-of-the-art. Overall, our algorithm computes, on average, 75.9%\nbetter solutions than Fennel using a very small buffer size. In addition, for\nlarge values of k our algorithm becomes faster than Fennel.\n"} {"abstract": " Let $X$ be a nonempty set and let $T(X)$ be the full transformation semigroup\non $X$. The main objective of this paper is to study the subsemigroup\n$\\overline{\\Omega}(X, Y)$ of $T(X)$ defined by \\[\\overline{\\Omega}(X, Y) =\n\\{f\\in T(X)\\colon Yf = Y\\},\\] where $Y$ is a fixed nonempty subset of $X$. We\ndescribe regular elements in $\\overline{\\Omega}(X, Y)$ and show that\n$\\overline{\\Omega}(X, Y)$ is regular if and only if $Y$ is finite. We\ncharacterize unit-regular elements in $\\overline{\\Omega}(X, Y)$ and prove that\n$\\overline{\\Omega}(X, Y)$ is unit-regular if and only if $X$ is finite. We\ncharacterize Green's relations on $\\overline{\\Omega}(X, Y)$ and prove that\n$\\mathcal{D} =\\mathcal{J}$ on $\\overline{\\Omega}(X, Y)$ if and only if $Y$ is\nfinite. We also determine ideals of $\\overline{\\Omega}(X, Y)$ and investigate\nits kernel. This paper extends several results that have appeared in the\nliterature.\n"} {"abstract": " We elaborate on the correspondence between the canonical partition function\nin asymptotically AdS universes and the no-boundary proposal for positive\nvacuum energy. For the case of a pure cosmological constant, the analytic\ncontinuation of the AdS partition function is seen to define the no-boundary\nwave function (in dS) uniquely in the simplest minisuperspace model. A\nconsideration of the AdS gravitational path integral implies that on the dS\nside, saddle points with Hawking-Moss/Coleman-De Luccia-type tunnelling\ngeometries are irrelevant. This implies that simple topology changing\ngeometries do not contribute to the nucleation of the universe. The analytic\nAdS/dS equivalence holds up once tensor fluctuations are added. It also works,\nat the level of the saddle point approximation, when a scalar field with a mass\nterm is included, though in the latter case, it is the mass that must be\nanalytically continued.
Our results illustrate the emergence of time from space\nby means of a Stokes phenomenon, in the case of positive vacuum energy.\nFurthermore, we arrive at a new characterisation of the no-boundary condition,\nnamely that there should be no momentum flux at the nucleation of the universe.\n"} {"abstract": " The ultimate detection limit of optical biosensors is often set by\nvarious noise sources, including those introduced by the optical measurement\nsetup. While sophisticated modifications to instrumentation may reduce noise, a\nsimpler approach that can benefit all sensor platforms is the application of\nsignal processing to minimize the deleterious effects of noise. In this work,\nwe show that applying complex Morlet wavelet convolution to Fabry-P\\'erot\ninterference fringes characteristic of thin film reflectometric biosensors\neffectively filters out white noise and low frequency reflectance variations.\nSubsequent calculation of an average difference in phase between the filtered\nanalyte and reference signals yields a significant reduction in the limit of\ndetection (LOD), enabling closer competition with current state-of-the-art\ntechniques. This method is applied to experimental data sets of thin-film\nporous silicon (PSi) sensors in buffered solution and complex media obtained\nfrom two different laboratories. The demonstrated improvement in LOD achieved\nusing wavelet convolution and average phase difference paves the way for PSi\noptical biosensors to operate with clinically relevant detection limits for\nmedical diagnostics, environmental monitoring, and food safety.\n"} {"abstract": " In addition to the well-known gas phase mass-metallicity relation (MZR),\nrecent spatially-resolved observations have shown that local galaxies also obey\na mass-metallicity gradient relation (MZGR) whereby metallicity gradients can\nvary systematically with galaxy mass. In this work, we use our\nrecently-developed analytic model for metallicity distributions in galactic\ndiscs, which includes a wide range of physical processes -- radial advection,\nmetal diffusion, cosmological accretion, and metal-enriched outflows -- to\nsimultaneously analyse the MZR and MZGR. We show that the same physical\nprinciples govern the shape of both: centrally-peaked metal production favours\nsteeper gradients, and this steepening is diluted by the addition of metal-poor\ngas, which is supplied by inward advection for low-mass galaxies and by\ncosmological accretion for massive galaxies. The MZR and the MZGR both bend at\ngalaxy stellar mass $\\sim 10^{10} - 10^{10.5}\\,\\rm{M_{\\odot}}$, and we show\nthat this feature corresponds to the transition of galaxies from the\nadvection-dominated to the accretion-dominated regime. We also find that both\nthe MZR and MZGR strongly suggest that low-mass galaxies preferentially lose\nmetals entrained in their galactic winds. While this metal-enrichment of the\ngalactic outflows is crucial for reproducing both the MZR and the MZGR at the\nlow-mass end, we show that the flattening of gradients in massive galaxies is\nexpected regardless of the nature of their winds.\n"} {"abstract": " Direct Volume Rendering (DVR) using Volumetric Path Tracing (VPT) is a\nscientific visualization technique that simulates light transport within\nobjects' matter using physically-based lighting models.
Monte Carlo (MC) path tracing is\noften used with surface models, yet its application for volumetric models is\ndifficult due to the complexity of integrating MC light-paths in volumetric\nmedia with no or smooth material boundaries. Moreover, auxiliary\ngeometry-buffers (G-buffers) produced for volumes are typically very noisy and\nfail to guide image denoisers that rely on this information to preserve image\ndetails. This makes existing real-time denoisers, which take noise-free\nG-buffers as their input, less effective when denoising VPT images. We propose\nthe necessary modifications to an image-based denoiser previously used when\nrendering surface models, and demonstrate effective denoising of VPT images. In\nparticular, our denoising exploits temporal coherence between frames, without\nrelying on noise-free G-buffers, which has been a common assumption of existing\ndenoisers for surface models. Our technique preserves high-frequency details\nthrough weighted recursive least squares, which handles heterogeneous noise for\nvolumetric models. We show for various real data sets that our method improves\nthe visual fidelity and temporal stability of VPT during classic DVR operations\nsuch as camera movements, modifications of the light sources, and edits to\nthe volume transfer function.\n"} {"abstract": " In the last decade, substantial progress has been made towards standardizing\nthe syntax of graph query languages, and towards understanding their semantics\nand complexity of evaluation. In this paper, we consider temporal property\ngraphs (TPGs) and propose temporal regular path queries (TRPQs) that\nincorporate time into TPG navigation. Starting with design principles, we\npropose a natural syntactic extension of the MATCH clause of popular graph\nquery languages. We then formally present the semantics of TRPQs, and study the\ncomplexity of their evaluation. We show that TRPQs can be evaluated in\npolynomial time if TPGs are time-stamped with time points, and identify\nfragments of the TRPQ language that admit efficient evaluation over a more\nsuccinct interval-annotated representation. Finally, we implement a fragment of\nthe language in a state-of-the-art dataflow framework, and experimentally\ndemonstrate that TRPQs can be evaluated efficiently.\n"} {"abstract": " Let $G$ be a simple undirected graph with vertex set $V(G)=\\{v_1, v_2,\n\\ldots, v_n\\}$ and edge set $E(G)$. The Sombor matrix $\\mathcal{S}(G)$ of a\ngraph $G$ is defined so that its $(i,j)$-entry is equal to $\\sqrt{d_i^2+d_j^2}$\nif the vertices $v_i$ and $v_j$ are adjacent, and zero otherwise, where $d_i$\ndenotes the degree of vertex $v_i$ in $G$. In this paper, lower and upper\nbounds on the spectral radius, energy, and Estrada index of the Sombor matrix of\ngraphs are obtained, and the respective extremal graphs are characterized.\n"} {"abstract": " COVID-19 has impacted Indian engineering institutions (EIs) enormously. It\nhas tightened its grip on EIs, forcing campuses that were previously half shut\nto close completely to prevent the risk of spreading COVID-19. In such a\nsituation, attracting new enrollments on EI campuses is a difficult and\nchallenging task, as students' behavior and family preferences have changed\ndrastically under mental stress and the emotions attached to it.
Consequently, examining the choice characteristics that influence the selection\nof an EI during the COVID-19 pandemic becomes a prerequisite for normalizing new\nenrollments.\n The purpose of this study is to critically examine choice characteristics\nthat affect students' choice of an EI and, consequently, to explore relationships\nbetween institutional characteristics and the suitability of EIs during the\nCOVID-19 pandemic across student characteristics. The findings of this study\nrevealed dissimilarities across student characteristics regarding the\nsuitability of EIs under pandemic conditions. Regression analysis revealed that\nEI characteristics such as proximity, image and reputation, quality education\nand curriculum delivery have significantly contributed to suitability under\nCOVID-19. At the micro level, multiple relationships were noted between EI\ncharacteristics and the suitability of EIs under the pandemic across student\ncharacteristics. The study has successfully demonstrated how choice\ncharacteristics can be used to regulate the suitability of EIs under the\nCOVID-19 pandemic for the inclusion of diversity. It is useful for policy makers\nand academicians seeking to reposition EIs to attract diversity during the\npandemic. This study is the first to provide insights into the performance of\nchoice characteristics and their relationship with the suitability of EIs under\na pandemic and can be a yardstick in administering new enrollments.\n"} {"abstract": " We introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based\nalgorithm, to sample from a log-concave distribution restricted to a convex\nbody. We prove that, starting from a warm start, the walk mixes to a\nlog-concave target distribution $\\pi(x) \\propto e^{-f(x)}$, where $f$ is\n$L$-smooth and $m$-strongly-convex, within accuracy $\\varepsilon$ after\n$\\widetilde O(\\kappa d^2 \\ell^2 \\log (1 / \\varepsilon))$ steps for a\nwell-rounded convex body, where $\\kappa = L / m$ is the condition number of the\nnegative log-density, $d$ is the dimension, $\\ell$ is an upper bound on the\nnumber of reflections, and $\\varepsilon$ is the accuracy parameter. We also\ndeveloped an efficient open-source implementation of ReHMC and we performed an\nexperimental study on various high-dimensional datasets. The experiments\nsuggest that ReHMC outperforms Hit-and-Run and Coordinate-Hit-and-Run regarding\nthe time it needs to produce an independent sample and introduces practical\ntruncated sampling in thousands of dimensions.\n"} {"abstract": " When flipping a fair coin, let $W = L_1L_2...L_N$ with $L_i\\in\\{H,T\\}$ be a\nbinary word of length $N=2$ or $N=3$. In this paper, we establish second- and\nthird-order linear recurrence relations and their generating functions to\ndiscuss the probabilities $p_{W}(n)$ that binary words $W$ appear for the first\ntime after $n$ coin tosses.\n"} {"abstract": " Climate change and global warming are among the most significant challenges\nof the new century. A viable solution to mitigate greenhouse gas emissions is a\nglobally incentivized market mechanism, as proposed in the Kyoto Protocol. In\nthis view, emissions of carbon dioxide (or other greenhouse gases) are treated\nas a commodity, forming a carbon trading system. There have been attempts to\ndevelop this idea over the past decade, with limited success.
The main\nchallenges of current systems are fragmented implementations, lack of\ntransparency leading to over-crediting and double-spending, and substantial\ntransaction costs that transfer wealth to brokers and agents. We aim to create\na Carbon Credit Ecosystem using smart contracts that operate in conjunction\nwith blockchain technology in order to bring more transparency, accessibility,\nliquidity, and standardization to carbon markets. This ecosystem includes a\ntokenization mechanism to securely digitize carbon credits with clear minting\nand burning protocols, a transparent mechanism for the distribution of tokens,\na free automated market maker for trading the carbon tokens, and mechanisms to\nengage all stakeholders, including the energy industry, project verifiers,\nliquidity providers, NGOs, concerned citizens, and governments. This approach\ncould be used in a variety of other credit/trading systems.\n"} {"abstract": " We demonstrate a successful navigation and docking control system for the\nJohn Deere Tango autonomous mower, using only a single camera as the input.\nThis vision-only system is of interest because it is inexpensive, simple for\nproduction, and requires no external sensing. This is in contrast to existing\nsystems that rely on integrated position sensors and global positioning system\n(GPS) technologies. To produce our system we combined a state-of-the-art object\ndetection architecture, You Only Look Once (YOLO), with a reinforcement\nlearning (RL) architecture, Double Deep Q-Networks (Double DQN). The object\ndetection network identifies features on the mower and passes its output to the\nRL network, providing it with a low-dimensional representation that enables\nrapid and robust training. Finally, the RL network learns how to navigate the\nmachine to the desired spot in a custom simulation environment. When tested on\nmower hardware, the system is able to dock with centimeter-level accuracy from\narbitrary initial locations and orientations.\n"} {"abstract": " Little research has specifically addressed real-time semantic segmentation in\nrainy environments. However, the demand in this area is huge and it is\nchallenging for lightweight networks. Therefore, this paper proposes a\nlightweight network specially designed for foreground segmentation in rainy\nenvironments, named De-raining Semantic Segmentation Network (DRSNet). By\nanalyzing the characteristics of raindrops, the MultiScaleSE Block is\npurpose-designed to encode the input image; it uses multi-scale dilated\nconvolutions to increase the receptive field and an SE attention mechanism to\nlearn per-channel weights. In order to combine semantic information between\ndifferent encoder and decoder layers, we propose Asymmetric Skip: the higher\nsemantic layer of the encoder is bilinearly interpolated, passed through a\npointwise convolution, and then added element-wise to the lower semantic layer\nof the decoder. In controlled experiments, the MultiScaleSE Block and\nAsymmetric Skip improve on SEResNet18 and Symmetric Skip, respectively, on the\nForeground Accuracy index. DRSNet has only 0.54M parameters and 0.20 GFLOPs of\nfloating-point operations.
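A rough PyTorch sketch of a block matching the MultiScaleSE description above: parallel dilated 3x3 convolutions merged and gated by squeeze-and-excitation. The dilation rates, reduction ratio, and activations are our own guesses for illustration, not the paper's configuration.

# Sketch of a MultiScaleSE-style block: multi-scale dilated context + SE gate.
import torch
import torch.nn as nn

class MultiScaleSESketch(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False)
            for d in dilations)
        self.bn = nn.BatchNorm2d(out_ch)
        self.se = nn.Sequential(                 # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, 1),
            nn.Sigmoid())

    def forward(self, x):
        y = sum(b(x) for b in self.branches)     # merge multi-scale context
        y = torch.relu(self.bn(y))
        return y * self.se(y)                    # per-channel reweighting

x = torch.randn(1, 16, 64, 64)
print(MultiScaleSESketch(16, 32)(x).shape)       # -> torch.Size([1, 32, 64, 64])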
State-of-the-art results and real-time\nperformance are achieved on both the UESTC all-day Scenery add rain\n(UAS-add-rain) and the Baidu People Segmentation add rain (BPS-add-rain)\nbenchmarks with input sizes of 192*128, 384*256, and 768*512. The speed of\nDRSNet exceeds that of all networks within 1 GFLOPs, and its Foreground Accuracy\nindex is also the best among networks of similar size on both benchmarks.\n"} {"abstract": " Understanding electrical energy demand at the consumer level plays an\nimportant role in planning the distribution of electrical networks and the\noffering of off-peak tariffs, but observing individual consumption patterns is\nstill expensive. On the other hand, aggregated load curves are normally\navailable at the substation level. The proposed methodology separates substation\naggregated loads into estimated mean consumption curves, called typical curves,\nincluding information given by explanatory variables. In addition, a model-based\nclustering approach for substations is proposed based on the similarity of\ntheir consumers' typical curves and covariance structures. The methodology is\napplied to a real substation load monitoring dataset from the United Kingdom\nand tested in eight simulated scenarios.\n"} {"abstract": " The measurements of $V_{us}$ in leptonic $(K_{\\mu 2})$ and semileptonic\n$(K_{l3})$ kaon decays exhibit a $3\\sigma$ disagreement, which could originate\neither from physics beyond the Standard Model or some large unidentified\nStandard Model systematic effects. Clarifying this issue requires a careful\nexamination of all existing Standard Model inputs. Making use of a\nnewly-proposed computational framework and the most recent lattice QCD results,\nwe perform a comprehensive re-analysis of the electroweak radiative corrections\nto the $K_{e3}$ decay rates that achieves an unprecedented level of precision\nof $10^{-4}$, which improves the current best results by almost an order of\nmagnitude. No large systematic effects are found, which suggests that the\nelectroweak radiative corrections should be removed from the ``list of\nculprits'' responsible for the $K_{\\mu 2}$--$K_{l3}$ discrepancy.\n"} {"abstract": " We present a new method to capture detailed human motion, sampling more than\n1000 unique points on the body. Our method outputs highly accurate 4D\n(spatio-temporal) point coordinates and, crucially, automatically assigns a\nunique label to each of the points. The locations and unique labels of the\npoints are inferred from individual 2D input images only, without relying on\ntemporal tracking or any human body shape or skeletal kinematics models.\nTherefore, our captured point trajectories contain all of the details from the\ninput images, including motion due to breathing, muscle contractions and flesh\ndeformation, and are well suited to be used as training data to fit advanced\nmodels of the human body and its motion. The key idea behind our system is a\nnew type of motion capture suit which contains a special pattern with\ncheckerboard-like corners and two-letter codes. The images from our\nmulti-camera system are processed by a sequence of neural networks which are\ntrained to localize the corners and recognize the codes, while being robust to\nsuit stretching and self-occlusions of the body. Our system relies only on\nstandard RGB or monochrome sensors and fully passive lighting and the passive\nsuit, making our method easy to replicate, deploy and use.
Our experiments\ndemonstrate highly accurate captures of a wide variety of human poses,\nincluding challenging motions such as yoga, gymnastics, or rolling on the\nground.\n"} {"abstract": " The application of remaining useful life (RUL) prediction has gained great\nimportance in terms of energy optimization, cost-effectiveness, and risk\nmitigation. The existing RUL prediction algorithms mostly build on deep\nlearning frameworks. In this paper, we implement LSTM and GRU models and\ncompare the obtained results with a proposed genetically trained neural\nnetwork. The current models solely depend on Adam and SGD for optimization and\nlearning. Although the models have worked well with these optimizers, even\nsmall uncertainties in prognostic prediction can result in huge losses. We\nhope to improve the consistency of the predictions by adding another layer of\noptimization using Genetic Algorithms. The hyper-parameters, learning rate and\nbatch size, are optimized beyond manual capacity. These models and the proposed\narchitecture are tested on the NASA Turbofan Jet Engine dataset. The optimized\narchitecture can select these hyper-parameters autonomously and provides\nsuperior results.\n"} {"abstract": " In this paper, we study the optimal transmission of a multi-quality tiled 360\nvirtual reality (VR) video from a multi-antenna server (e.g., access point or\nbase station) to multiple single-antenna users in a multiple-input\nmultiple-output (MIMO)-orthogonal frequency division multiple access (OFDMA)\nsystem. We minimize the total transmission power subject to the subcarrier\nallocation constraints, rate allocation constraints, and successful\ntransmission constraints, by optimizing the beamforming vectors and the\nsubcarrier, transmission power, and rate allocation. The formulated resource\nallocation problem is a challenging mixed discrete-continuous optimization\nproblem. We obtain an asymptotically optimal solution in the case of a large\nantenna array, and a suboptimal solution in the general case. As far as we\nknow, this is the first work providing optimization-based design for 360 VR\nvideo transmission in MIMO-OFDMA systems. Finally, numerical results show that\nthe proposed solutions achieve significant performance improvements over the\nexisting solutions.\n"} {"abstract": " We discover that deep ReLU neural network classifiers can see a\nlow-dimensional Riemannian manifold structure on data. Such structure comes via\nthe local data matrix, a variation of the Fisher information matrix, where the\nrole of the model parameters is taken by the data variables. We obtain a\nfoliation of the data domain and we show that the dataset on which the model is\ntrained lies on a leaf, the data leaf, whose dimension is bounded by the number\nof classification labels. We validate our results with some experiments with\nthe MNIST dataset: paths on the data leaf connect valid images, while other\nleaves cover noisy images.\n"} {"abstract": " Backhauling services through satellite systems have doubled between 2012 and\n2018. There is an increasing demand for this service, for which satellite\nsystems typically allocate a fixed resource. This solution may not help in\noptimizing the usage of the scarce satellite resource.\n This study measures the relevance of using dynamic resource allocation\nmechanisms for backhaul services through satellite systems.
The satellite\nsystem is emulated with OpenSAND, the LTE system with Amarisoft, and the\nexperiments are orchestrated by OpenBACH. We compare the relevance of applying\nTCP PEP mechanisms and dynamic resource allocations for different traffic\nservices by measuring the QoE for web browsing, data transfer and VoIP\napplications.\n The main conclusions are the following. When the system is congested, PEP and\nlayer-2 access mechanisms do not provide significant improvements. When the\nsystem is not congested, data transfer can be greatly improved through protocol\nand channel-access-mechanism optimization. Tuning the Constant Rate\nAssignment can help in reducing the cost of the resource and provide QoE\nimprovements when the network is not loaded.\n"} {"abstract": " Grain boundaries (GBs) are planar lattice defects that govern the properties\nof many types of polycrystalline materials. Hence, their structures have been\ninvestigated in great detail. However, much less is known about their chemical\nfeatures, owing to the experimental difficulty of probing these features at the\natomic length scale inside bulk material specimens. Atom probe tomography (APT)\nis a tool capable of accomplishing this task, with an ability to quantify\nchemical characteristics at near-atomic scale. Using APT data sets, we present\nhere a machine-learning-based approach for the automated quantification of\nchemical features of GBs. We trained a convolutional neural network (CNN) using\ntwenty thousand synthesized images of grain interiors, GBs, or triple\njunctions. The trained CNN automatically detects the locations of GBs from\nAPT data. Those GBs are then subjected to compositional mapping and analysis,\nincluding revealing their in-plane chemical decoration patterns. We applied\nthis approach to experimentally obtained APT data sets pertaining to three case\nstudies, namely Ni-P, Pt-Au, and Al-Zn-Mg-Cu alloys. In the first case, we\nextracted GB-specific segregation features as a function of misorientation and\ncoincidence site lattice character. Secondly, we revealed interfacial excesses\nand in-plane chemical features that could not have been found by standard\ncompositional analyses. Lastly, we tracked the temporal evolution of chemical\ndecoration from early-stage solute GB segregation in the dilute limit to\ninterfacial phase separation, characterized by the evolution of complex\ncomposition patterns. This machine-learning-based approach provides\nquantitative, unbiased, and automated access to GB chemical analyses, serving\nas an enabling tool for new discoveries related to interface thermodynamics,\nkinetics, and the associated chemistry-structure-property relations.\n"} {"abstract": " The Fock space $\\mathcal{F}(\\mathbb{C}^n)$ is the space of holomorphic\nfunctions on $\\mathbb{C}^n$ that are square-integrable with respect to the\nGaussian measure on $\\mathbb{C}^n$. This space plays an important role in\nseveral subfields of analysis and representation theory. In particular, it has\nfor a long time been a model to study Toeplitz operators. Esmeral and Maximenko\nshowed in 2016 that radial Toeplitz operators on $\\mathcal{F}(\\mathbb{C})$\ngenerate a commutative $C^*$-algebra which is isometrically isomorphic to the\n$C^*$-algebra $C_{b,u}(\\mathbb{N}_0,\\rho_1)$. In this article, we extend the\nresult to $k$-quasi-radial symbols acting on the Fock space\n$\\mathcal{F}(\\mathbb{C}^n)$.
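For orientation, we recall the radial case on $\mathcal{F}(\mathbb{C})$ that is being generalized here; this is a standard computation (normalization conventions for the Gaussian measure vary by author):

% Radial Toeplitz operators on \mathcal{F}(\mathbb{C}) are diagonal in the
% normalized monomial basis e_n(z) = z^n/\sqrt{n!}; with the Gaussian
% probability measure d\mu(z) = \pi^{-1} e^{-|z|^2}\, dA(z), one finds
\[
  T_a e_n = \gamma_a(n)\, e_n, \qquad
  \gamma_a(n) = \frac{1}{n!} \int_0^\infty a\bigl(\sqrt{t}\bigr)\, t^n e^{-t}\, dt,
  \qquad n \in \mathbb{N}_0 .
\]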
We calculate the spectra of these Toeplitz\noperators and show that the set of all eigenvalue functions is dense in the\n$C^*$-algebra $C_{b,u}(\\mathbb{N}_0^k,\\rho_k)$ of bounded functions on\n$\\mathbb{N}_0^k$ which are uniformly continuous with respect to the square-root\nmetric. In fact, the $C^*$-algebra generated by Toeplitz operators with\nquasi-radial symbols is $C_{b,u}(\\mathbb{N}_0^k,\\rho_k)$.\n"} {"abstract": " We present optical follow-up imaging obtained with the Katzman Automatic\nImaging Telescope, Las Cumbres Observatory Global Telescope Network, Nickel\nTelescope, Swope Telescope, and Thacher Telescope of the LIGO/Virgo\ngravitational wave (GW) signal from the neutron star-black hole (NSBH) merger\nGW190814. We searched the GW190814 localization region (19 deg$^{2}$ for the\n90th percentile best localization), covering a total of 51 deg$^{2}$ and 94.6%\nof the two-dimensional localization region. Analyzing the properties of 189\ntransients that we consider candidate counterparts to the NSBH merger,\nincluding their localizations, discovery times from merger, optical spectra,\nlikely host-galaxy redshifts, and photometric evolution, we conclude that none\nof these objects are likely to be associated with GW190814. Based on this\nfinding, we consider the likely optical properties of an electromagnetic\ncounterpart to GW190814, including possible kilonovae and short gamma-ray burst\nafterglows. Using the joint limits from our follow-up imaging, we conclude that\na counterpart with an $r$-band decline rate of 0.68 mag day$^{-1}$, similar to\nthe kilonova AT 2017gfo, could peak at an absolute magnitude of at most $-17.8$\nmag (50% confidence). Our data are not constraining for ``red'' kilonovae and\nrule out ``blue'' kilonovae with $M>0.5 M_{\\odot}$ (30% confidence). We\nstrongly rule out all known types of short gamma-ray burst afterglows with\nviewing angles $<$17$^{\\circ}$ assuming an initial jet opening angle of\n$\\sim 5.2^{\\circ}$ and explosion energies and circumburst densities similar to\nafterglows explored in the literature. Finally, we explore the possibility that\nGW190814 merged in the disk of an active galactic nucleus, of which we find\nfour in the localization region, but we do not find any candidate counterparts\namong these sources.\n"} {"abstract": " Ultrafast lasers are ideal tools to process transparent materials because\nthey spatially confine the deposition of laser energy within the material's\nbulk via nonlinear photoionization processes. Nonlinear propagation and\nfilamentation were initially regarded as deleterious effects. However, over the\nlast decade they have turned out to be beneficial for controlling energy\ndeposition over long distances. These effects create very high aspect ratio\nstructures which have found a number of important applications, particularly\nfor glass separation with non-ablative techniques. This chapter reviews the\ndevelopments of in-volume ultrafast laser processing of transparent materials.\nWe discuss the basic physics of the processes, characterization methods, and the\nfilamentation of Gaussian and Bessel beams, and provide an overview of present\napplications.\n"} {"abstract": " Hyperspectral imaging at cryogenic temperatures is used to investigate\nexciton and trion propagation in MoSe$_2$ monolayers encapsulated with\nhexagonal boron nitride (hBN). Under a tightly focused, continuous-wave laser\nexcitation, the spatial distributions of neutral excitons and charged trions\nstrongly differ at high excitation densities.
Remarkably, in this regime the\ntrion distribution develops a halo shape, similar to that previously observed\nin WS$_2$ monolayers at room temperature and under pulsed excitation. In contrast,\nthe exciton distribution only shows moderate broadening without the\nappearance of a halo. Spatially and spectrally resolved luminescence spectra\nreveal the buildup of a significant temperature gradient at high excitation\npower, which is attributed to the energy relaxation of photoinduced hot\ncarriers. We show, via a numerical solution of the transport equations for\nexcitons and trions, that the halo can be interpreted as thermal drift of\ntrions due to a Seebeck term in the particle current. The model shows that the\ndifference between trion and exciton profiles is simply understood in terms of\nthe very different lifetimes of these two quasiparticles.\n"} {"abstract": " The conformational states of a semiflexible polymer enclosed in a volume\n$V:=\\ell^{3}$ are studied as stochastic realizations of paths using the\nstochastic curvature approach developed in [Phys. Rev. E 100, 012503 (2019)],\nin the regime where $3\\ell/\\ell_{p}> 1$, where $\\ell_{p}$ is the persistence\nlength. The cases of a semiflexible polymer enclosed in a cube and in a sphere\nare considered. In these cases, we explore the Spakowitz-Wang type polymer shape\ntransition, where the critical persistence length distinguishes between an\noscillating and a monotonic phase at the level of the mean-square end-to-end\ndistance. This shape transition provides evidence of a universal signature of\nthe behavior of a semiflexible polymer confined in a compact domain.\n"} {"abstract": " We construct a cosmological model from the insertion of the\nFriedmann-Lema\\^itre-Robertson-Walker metric into the field equations of the\n$f(R,L_m)$ gravity theory, with $R$ being the Ricci scalar and $L_m$ being the\nmatter lagrangian density. The formalism is developed for a particular\n$f(R,L_m)$ function, namely $R/16\\pi +(1+\\sigma R)L_{m}$, with $\\sigma$ being a\nconstant that carries the geometry-matter coupling. Our solutions are\nremarkably capable of evading the Big Bang singularity as well as predicting\ncosmic acceleration with no need for a cosmological constant, but simply as a\nconsequence of the geometry-matter coupling terms in the Friedmann-like\nequations.\n"} {"abstract": " Molecular science is governed by the dynamics of electrons, atomic nuclei,\nand their interaction with electromagnetic fields. A reliable physicochemical\nunderstanding of these processes is crucial for the design and synthesis of\nchemicals and materials of economic value. Although some problems in this field\nare adequately addressed by classical mechanics, many require an explicit\nquantum mechanical description. Such quantum problems, represented by an\nexponentially large wave function, should naturally benefit from quantum\ncomputation on a number of logical qubits that scales only linearly with system\nsize. In this perspective, we focus on the potential of quantum computing for\nsolving relevant problems in the molecular sciences -- molecular physics,\nchemistry, biochemistry, and materials science.\n"} {"abstract": " This research discusses multi-criteria decision making (MCDM) using Fuzzy-AHP\nmethods in tourism. The Fuzzy-AHP process ranks tourism trends based on data\nfrom social media. Social media is one of the channels with the largest\nsource of data input in determining tourism development.
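As background for the fuzzy extension, here is a minimal sketch of the classical AHP weighting step it builds on: criteria weights derived as the principal eigenvector of a pairwise comparison matrix. The three-criteria matrix below is made up for illustration.

# Sketch of classical AHP: weights from a pairwise comparison matrix.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # principal eigenpair
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized priority weights

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1)
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
print("weights:", np.round(w, 3), " CI:", round(CI, 4))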
The development uses\nsocial media interactions based on the facilities visited, including reviews,\nstories, likes, forums, blogs, and feedback. This experiment aims to prioritize\nfacilities that are trending in tourism. The priority ranking uses criteria\nweights and a ranking process. The highest-ranked attraction is the Park/Picnic\nArea, with a final weight of 0.6361. Fuzzy-AHP can rank optimally with an MSE\nvalue of $\\approx 0.0002$.\n"} {"abstract": " Despite their recent success on image denoising, the need for deep and\ncomplex architectures still hinders the practical usage of CNNs. Older but\ncomputationally more efficient methods such as BM3D remain a popular choice,\nespecially in resource-constrained scenarios. In this study, we aim to find out\nwhether compact neural networks can learn to produce competitive results as\ncompared to BM3D for AWGN image denoising. To this end, we configure networks\nwith only two hidden layers and employ different neuron models and layer widths\nfor comparing the performance with BM3D across different AWGN noise levels. Our\nresults conclusively show that the recently proposed self-organized variants of\noperational neural networks based on a generative neuron model (Self-ONNs) are\nnot only a better choice than CNNs, but also provide results competitive with\nBM3D, even significantly surpassing it at high noise levels.\n"} {"abstract": " This work investigates the feasibility of using input-output data-driven\ncontrol techniques for building control and their susceptibility to\ndata-poisoning techniques. The analysis is performed on a digital replica of\nthe KTH Live-in Lab, a non-linear validated model representing one of the KTH\nLive-in Lab building testbeds. This work is motivated by recent trends showing\na surge of interest in using data-based techniques to control cyber-physical\nsystems. We also analyze the susceptibility of these controllers to\ndata-poisoning methods, a particular type of machine learning threat geared\ntowards finding imperceptible attacks that can undermine the performance of the\nsystem under consideration. We consider Virtual Reference Feedback Tuning\n(VRFT), a popular data-driven control technique, and show its performance on\nthe KTH Live-in Lab digital replica. We then demonstrate how poisoning attacks\ncan be crafted and illustrate the impact of such attacks. Numerical experiments\nreveal the feasibility of using data-driven control methods for finding\nefficient control laws. However, a subtle change in the datasets can\nsignificantly deteriorate the performance of VRFT.\n"} {"abstract": " Scanning real-life scenes with modern registration devices typically gives\nincomplete point cloud representations, mostly due to the limitations of the\nscanning process and 3D occlusions. Therefore, completing such partial\nrepresentations remains a fundamental challenge of many computer vision\napplications. Most of the existing approaches aim to solve this problem by\nlearning to reconstruct individual 3D objects in a synthetic setup of an\nuncluttered environment, which is far from a real-life scenario. In this work,\nwe reformulate the problem of point cloud completion as an object\nhallucination task.
Thus, we introduce a novel autoencoder-based architecture\ncalled HyperPocket that disentangles latent representations and, as a result,\nenables the generation of multiple variants of the completed 3D point clouds.\nWe split point cloud processing into two disjoint data streams and leverage a\nhypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the\nmissing object parts. As a result, the generated point clouds are not only\nsmooth but also plausible and geometrically consistent with the scene. Our\nmethod offers performance competitive with the other state-of-the-art models,\nand it enables a plethora of novel applications.\n"} {"abstract": " Using five years of monitoring observations, we performed a blind search for\npulses from the rotating radio transient (RRAT) J0139+33 and PSR B0320+39.\nWithin the interval $\\pm 1.5^{m}$ around the time corresponding to the source\npassing through the meridian, we detected 39377 individual pulses for the\npulsar B0320+39 and 1013 pulses for RRAT J0139+33. The share of registered\npulses out of the total number of observed periods for the pulsar B0320+39 is\n74%, and for the transient J0139+33 it is 0.42%. The signal-to-noise ratio\n(S/N) for the strongest registered pulses is approximately equal to: S/N = 262\n(for B0320+39) and S/N = 154 (for J0139+33).\n Distributions of the number of detected pulses in S/N units for the pulsar\nand for the rotating transient are obtained. The distributions can be\napproximated by lognormal and power-law dependencies. For the pulsar B0320+39,\nthe dependence is lognormal, turning into a power-law dependence at high values\nof S/N, and for RRAT J0139+33, the distribution of pulses by energy is\ndescribed by a broken (bimodal) power-law dependence with exponents of about\n0.4 and 1.8 (S/N < 19 and S/N > 19, respectively).\n We have not detected regular (pulsar) emission of J0139+33. Analysis of the\nobtained data suggests that RRAT J0139+33 is a pulsar with giant pulses.\n"} {"abstract": " A Banach space X has the SHAI (surjective homomorphisms are injective)\nproperty provided that for every Banach space Y, every continuous surjective\nalgebra homomorphism from the bounded linear operators on X onto the bounded\nlinear operators on Y is injective. The main result gives a sufficient\ncondition for X to have the SHAI property. The condition is satisfied for $L^p\n(0, 1)$ for $1 < p < \\infty$, spaces with symmetric bases that have finite\ncotype, and the Schatten p-spaces for $1 < p < \\infty$.\n"} {"abstract": " This paper introduces our systems for all three subtasks of SemEval-2021 Task\n4: Reading Comprehension of Abstract Meaning. To help our model better\nrepresent and understand abstract concepts in natural language, we carefully\ndesign many simple and effective approaches adapted to the backbone model\n(RoBERTa). Specifically, we formalize the subtasks into the multiple-choice\nquestion answering format and add special tokens to abstract concepts, then,\nthe final prediction of question answering is considered as the result of the\nsubtasks. Additionally, we employ many finetuning tricks to improve the\nperformance. Experimental results show that our approaches achieve significant\nperformance compared with the baseline systems. Our approaches achieve eighth\nrank on subtask-1 and tenth rank on subtask-2.\n"} {"abstract": " How should we understand the social and political effects of the datafication\nof human life? This paper argues that the effects of data should be understood\nas a constitutive shift in social and political relations. 
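The lognormal-to-power-law behaviour reported for the pulse distributions above can be fitted in a few lines; a minimal sketch on synthetic S/N samples (the distribution parameters are invented for illustration and do not come from the observations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for a pulse S/N sample (parameters are invented).
snr = stats.lognorm.rvs(s=0.5, scale=10.0, size=5000, random_state=rng)

# Lognormal fit to the bulk of the distribution.
shape, loc, scale = stats.lognorm.fit(snr, floc=0)

# Power-law fit to the high-S/N tail via a straight line in log-log
# space: log N(>s) = c - alpha * log s corresponds to N(>s) ~ s^-alpha.
tail = np.sort(snr[snr > np.quantile(snr, 0.9)])
ccdf = 1.0 - np.arange(tail.size) / tail.size
slope, c = np.polyfit(np.log(tail), np.log(ccdf), 1)
print(f"lognormal shape = {shape:.2f}, tail exponent alpha = {-slope:.2f}")
```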
We explore how\ndatafication, or quantification of human and non-human factors into binary\ncode, affects the identity of individuals and groups. This fundamental shift\ngoes beyond economic and ethical concerns, which have been the focus of other\nefforts to explore the effects of datafication and AI. We highlight that\ntechnologies such as datafication and AI (and previously, the printing press)\nboth disrupted extant power arrangements, leading to decentralization, and\ntriggered a recentralization of power by new actors better adapted to\nleveraging the new technology. We use the analogy of the printing press to\nprovide a framework for understanding constitutive change. The printing press\nexample gives us more clarity on 1) what can happen when the medium of\ncommunication drastically alters how information is communicated and stored; 2)\nthe shift in power from state to private actors; and 3) the tension of\nsimultaneously connecting individuals while driving them towards narrower\ncommunities through algorithmic analyses of data.\n"} {"abstract": " The concept of an angle is one that often causes difficulties in metrology.\nThese are partly caused by a confusing mixture of several mathematical terms,\npartly by real mathematical difficulties and finally by imprecise terminology.\nThe purpose of this publication is to clarify misunderstandings and to explain\nwhy strict terminology is important. It will also be shown that most\nmisunderstandings regarding the `radian' can be avoided if some simple rules\nare obeyed.\n"} {"abstract": " We explore the parameter space of a U(1) extension of the standard model --\nalso called the super-weak model -- from the point of view of explaining the\nobserved dark matter energy density in the Universe. The new particle spectrum\ncontains a complex scalar singlet and three right-handed neutrinos, among which\nthe lightest one is the dark matter candidate. We explore both freeze-in and\nfreeze-out mechanisms of dark matter production. In both cases, we find regions\nin the plane of the super-weak coupling vs. the mass of the new gauge boson\nthat are not excluded by current experimental constraints. These regions are\ndistinct, and the one for freeze-out will be explored in searches for neutral\ngauge bosons in the near future.\n"} {"abstract": " This paper focuses on a core task in computational sustainability and\nstatistical ecology: species distribution modeling (SDM). In SDM, the\noccurrence pattern of a species on a landscape is predicted by environmental\nfeatures based on observations at a set of locations. At first, SDM may appear\nto be a binary classification problem, and one might be inclined to employ\nclassic tools (e.g., logistic regression, support vector machines, neural\nnetworks) to tackle it. However, wildlife surveys introduce structured noise\n(especially under-counting) in the species observations. If unaccounted for,\nthese observation errors systematically bias SDMs. To address the unique\nchallenges of SDM, this paper proposes a framework called StatEcoNet.\nSpecifically, this work employs a graphical generative model in statistical\necology to serve as the skeleton of the proposed computational framework and\ncarefully integrates neural networks under the framework. The advantages of\nStatEcoNet over related approaches are demonstrated on simulated datasets as\nwell as bird species data. 
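The structured observation noise discussed above is classically handled with site-occupancy models; a minimal sketch of such a likelihood (a standard construction from statistical ecology, not the StatEcoNet architecture itself), assuming J repeat visits per site:

```python
import numpy as np

def occupancy_nll(params, y):
    """Negative log-likelihood of a basic site-occupancy model.

    y: (n_sites, n_visits) binary detection histories.
    params: logits of occupancy probability psi and detection probability p.
    A site with no detections is either unoccupied, or occupied but missed
    on every visit -- this mixture is what corrects for under-counting.
    """
    psi = 1 / (1 + np.exp(-params[0]))
    p = 1 / (1 + np.exp(-params[1]))
    n_visits = y.shape[1]
    detected = y.sum(axis=1)
    # Probability of the observed history for an occupied site.
    occ = psi * p**detected * (1 - p)**(n_visits - detected)
    # All-zero histories pick up the extra (1 - psi) term.
    like = np.where(detected > 0, occ, occ + (1 - psi))
    return -np.sum(np.log(like))

# Toy usage with fabricated detection histories (illustrative only).
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.3, size=(50, 4))
print(occupancy_nll(np.array([0.0, 0.0]), y))
```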
Since SDMs are critical tools for ecological science\nand natural resource management, StatEcoNet may offer boosted computational and\nanalytical powers to a wide range of applications that have significant social\nimpacts, e.g., the study and conservation of threatened species.\n"} {"abstract": " We study phase contributions of wave functions that occur in the evolution of\nGaussian surface gravity water wave packets with nonzero initial momenta\npropagating in the presence and absence of an effective external linear\npotential. Our approach takes advantage of the fact that in contrast to matter\nwaves, water waves allow us to measure both their amplitudes and phases.\n"} {"abstract": " Non-Hermitian systems show a non-Hermitian skin effect, where the bulk states\nare localized at a boundary of the systems with open boundary conditions. In\nthis paper, we study the dependence of the localization length of the\neigenstates on system size in a specific non-Hermitian model with a critical\nnon-Hermitian skin effect, where the energy spectrum undergoes a discontinuous\ntransition in the thermodynamic limit. We analytically show that the\neigenstates exhibit remarkable localization, known as scale-free localization,\nwhere the localization length is proportional to the system size. Our result\ngives theoretical support for scale-free localization, which had been proposed\nonly numerically in previous works.\n"} {"abstract": " Recently, [8] has proposed that heterogeneity of infectiousness (and\nsusceptibility) across individuals in infectious diseases plays a major role in\naffecting the Herd Immunity Threshold (HIT). Such heterogeneity has been\nobserved in COVID-19 and is recognized as overdispersion (or\n\"super-spreading\"). The model of [8] suggests that super-spreaders contribute\nsignificantly to the effective reproduction factor, R, and that they are likely\nto get infected and become immune early in the process. Consequently, under R_0\n= 3 (attributed to COVID-19), the Herd Immunity Threshold (HIT) is as low as\n5%, in contrast to 67% according to the traditional models [1, 2, 4, 10]. This\nwork follows up on [8] and proposes that heterogeneity of infectiousness\n(susceptibility) has two \"faces\" whose mix dramatically affects the HIT: (1)\nPersonal-Trait-, and (2) Event-Based- Infectiousness (Susceptibility). The\nformer is a personal trait of specific individuals (super-spreaders) and is\nnullified once those individuals are immune (as in [8]). The latter is\nevent-based (e.g. cultural super-spreading events) and remains effective\nthroughout the process, even after the super-spreaders become immune. We extend\n[8]'s model to account for these two factors, analyze it and conclude that the\nHIT is very sensitive to the mix between (1) and (2), and under R_0 = 3 it can\nvary between 5% and 67%. Preliminary data from COVID-19 suggests that herd\nimmunity is not reached at 5%. We address operational aspects and analyze the\neffects of lockdown strategies on the spread of a disease. We find that herd\nimmunity (and HIT) is very sensitive to the lockdown type. While some lockdowns\npositively affect disease blocking and increase herd immunity, others have\nadverse effects and reduce it.\n"} {"abstract": " The quantum operator $\\hat{T}_3$, corresponding to the projection of the\ntoroidal moment on the $z$ axis, admits several self-adjoint extensions, when\ndefined on the whole $\\mathbb{R}^3$ space. 
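To make the HIT sensitivity discussed above concrete, a small numeric sketch: the homogeneous threshold 1 - 1/R_0 next to a widely used gamma-susceptibility approximation, HIT = 1 - (1/R_0)^(1/(1+cv^2)). The latter formula is a standard approximation from the heterogeneity literature, not the two-factor personal-trait/event-based model of the abstract above.

```python
# Homogeneous herd-immunity threshold versus a common approximation for
# gamma-distributed susceptibility with coefficient of variation cv
# (standard heterogeneity-literature formula; illustrative only).
R0 = 3.0
print(f"homogeneous HIT: {1 - 1 / R0:.0%}")   # 67% for R0 = 3

for cv in (0.5, 1.0, 2.0, 3.0):
    hit = 1 - (1 / R0) ** (1 / (1 + cv**2))
    print(f"cv = {cv}: HIT ~ {hit:.0%}")      # drops steeply with cv
```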
$\\hat{T}_3$ commutes with\n$\\hat{L}_3$ (the projection of the angular momentum operator on the $z$ axis)\nand they have a \\textit{natural set of coordinates} $(k,u,\\phi)$ where $\\phi$\nis the azimuthal angle. The second set of \\textit{natural coordinates} is\n$(k_1,k_2,u)$, where $k_1 = k\\cos\\phi$, $k_2 = k\\sin\\phi$. In both sets,\n$\\hat{T}_3 = -i\\hbar\\partial/\\partial u$, so any operator that is a function of\n$k$ and the partial derivatives with respect to the \\textit{natural variables}\n$(k, u, \\phi)$ commutes with $\\hat{T}_3$ and $\\hat{L}_3$. Similarly, operators\nthat are functions of $k_1$, $k_2$, and the partial derivatives with respect to\n$k_1$, $k_2$, and $u$ commute with $\\hat{T}_3$. Therefore, we introduce here\nthe operators $\\hat{p}_{k} \\equiv -i \\hbar \\partial/\\partial k$,\n$\\hat{p}^{(k1)} \\equiv -i \\hbar \\partial/\\partial k_1$, and $\\hat{p}^{(k2)}\n\\equiv -i \\hbar \\partial/\\partial k_2$ and express them in the $(x,y,z)$\ncoordinates. One may also invert the relations and write the typical operators,\nlike the momentum $\\hat{\\bf p} \\equiv -i\\hbar {\\bf \\nabla}$ or the kinetic\nenergy $\\hat{H}_0 \\equiv -\\hbar^2\\Delta/(2m)$, in terms of the \"toroidal\"\noperators $\\hat{T}_3$, $\\hat{p}^{(k)}$, $\\hat{p}^{(k1)}$, $\\hat{p}^{(k2)}$,\nand, eventually, $\\hat{L}_3$. The formalism may be applied to specific physical\nsystems, like nuclei, condensed matter systems, or metamaterials. We exemplify\nit by calculating the momentum operator and the free particle Hamiltonian in\nterms of \\textit{natural coordinates} in a thin torus, where the general\nrelations are considerably simplified.\n"} {"abstract": " Electronic states in the gap of a superconductor inherit intriguing many-body\nproperties from the superconductor. Here, we create these in-gap states by\nmanipulating Cr atomic chains on the $\\beta$-Bi$_2$Pd superconductor. We find\nthat the topological properties of the in-gap states can greatly vary depending\non the crafted spin chain. These systems make an ideal platform for non-trivial\ntopological phases because of the large atom-superconductor interactions and\nthe existence of a large Rashba coupling at the Bi-terminated surface. We study\ntwo spin chains, one with atoms two lattice parameters apart and one with atoms\n$\\sqrt{2}$ lattice parameters apart. Of these, only the second one is in a\ntopologically non-trivial phase, in correspondence with the spin interactions\nfor this geometry.\n"} {"abstract": " Using density functional theory combined with the nonequilibrium Green's\nfunction method, the transport properties of borophene-based nano gas sensors\nwith gold electrodes are calculated, and a comprehensive understanding of the\neffects of gas molecules, the MoS$_2$ substrate, and gold electrodes on the\ntransport properties of borophene is developed. Results show that\nborophene-based sensors can be used to detect and distinguish CO, NO, NO$_2$\nand NH$_3$ gas molecules, that the MoS$_2$ substrate leads to non-linear\nbehavior in the current-voltage characteristic, and that gold electrodes\nprovide charges to borophene and form a potential barrier, which reduces the\ncurrent values compared to those of systems without gold electrodes. 
Our studies not\nonly provide useful information on the computational design of borophene-based\ngas sensors, but also help understand the transport behaviors and underlying\nphysics of 2D metallic materials with metal electrodes.\n"} {"abstract": " RX J0123.4-7321 is a well-established Be star X-ray binary system (BeXRB) in\nthe Small Magellanic Cloud (SMC). As in many such systems, the variable X-ray\nemission is driven by the underlying behaviour of the mass donor Be star.\nPrevious work has shown that the optical and X-ray emission were characterised\nby regular outbursts at the proposed binary period of 119 d. However, around\nFebruary 2008 the optical behaviour changed substantially, with the previously\nregular optical outbursts ending. Reported here are new optical (OGLE) and\nX-ray (Swift) observations covering the period after 2008 which suggest an\nalmost total circumstellar disc loss followed by a gradual recovery. This\nindicates the probable transition of a Be star to a B star, and back again.\nHowever, at the time of the most recent OGLE data (March 2020) the\ncharacteristic periodic outbursts had yet to return to their earlier state,\nindicating that the disc still had some re-building yet to complete.\n"} {"abstract": " Convection has been discussed in the field of accretion discs for several\ndecades, both as a means of angular momentum transport and also because of its\nrole in controlling discs' vertical structure via heat transport. If the gas is\nsufficiently ionized and threaded by a weak magnetic field, convection might\ninteract in non-trivial ways with the magnetorotational instability (MRI).\nRecently, vertically stratified local simulations of the MRI have reported\nconsiderable variation in the angular momentum transport, as measured by the\nstress to thermal pressure ratio $\\alpha$, when convection is thought to be\npresent. Although MRI turbulence can act as a heat source for convection, it is\nnot clear how the instabilities will interact dynamically. Here we aim to\ninvestigate the interplay between the two instabilities in controlled numerical\nexperiments, and thus isolate the generic features of their interaction. We\nperform vertically stratified, 3D MHD shearing box simulations with a perfect\ngas equation of state with the conservative, finite-volume code PLUTO. We find\ntwo characteristic outcomes of the interaction between the two instabilities:\nstraight MRI and MRI/convective cycles, with the latter exhibiting alternating\nphases of convection-dominated flow (during which the turbulent transport is\nweak) and MRI-dominated flow. During the latter phase we find that $\\alpha$ is\nenhanced by nearly an order of magnitude, reaching peak values of $\\sim 0.08$.\nIn addition, we find that convection in the non-linear phase takes the form of\nlarge-scale and oscillatory convective cells. Convection can also help the MRI\npersist to lower Rm than it would otherwise do. Finally, we discuss how our\nresults help interpret simulations of dwarf novae.\n"} {"abstract": " In this paper we study Chow motives whose identity map is killed by a natural\nnumber. Examples of such objects were constructed by Gorchinskiy-Orlov. We\nintroduce various invariants of torsion motives, in particular, the $p$-level.\nWe show that this invariant bounds from below the dimension of the variety a\ntorsion motive $M$ is a direct summand of, and imposes restrictions on the\nmotivic and singular cohomology of $M$. 
We study in more detail the $p$-torsion\nmotives of surfaces, in particular, the Godeaux torsion motive. We show that\nsuch motives are in 1-to-1 correspondence with certain Rost cycle submodules of\nfree modules over $H^*_{et}$. This description is parallel to that of mod-$p$\nreduced motives of curves.\n"} {"abstract": " A very useful identity for Parseval frames for Hilbert spaces was obtained by\nBalan, Casazza, Edidin, and Kutyniok. In this paper, we obtain a similar\nidentity for Parseval p-approximate Schauder frames for Banach spaces which\nadmit a homogeneous semi-inner product in the sense of Lumer-Giles.\n"} {"abstract": " We construct the hydrodynamic theory of coherent collective motion\n(\"flocking\") at a solid-liquid interface. The polar order parameter and\nconcentration of a collection of \"active\" (self-propelled) particles at a\nplanar interface between a passive, isotropic bulk fluid and a solid surface\nare dynamically coupled to the bulk fluid. We find that such systems are\nstable, and have long-range orientational order, over a wide range of\nparameters. When stable, these systems exhibit \"giant number fluctuations\",\ni.e., large fluctuations of the number of active particles in a fixed large\narea. Specifically, these number fluctuations grow as the $3/4$th power of the\nmean number within the area. Stable systems also exhibit anomalously rapid\ndiffusion of tagged particles suspended in the passive fluid along any\ndirection in a plane parallel to the solid-liquid interface, whereas the\ndiffusivity along the direction perpendicular to the plane is non-anomalous. In\nother parameter regimes, the system becomes unstable.\n"} {"abstract": " As a fundamental physical process with many astrophysical implications, the\ndiffusion of cosmic rays (CRs) is determined by their interaction with\nmagnetohydrodynamic (MHD) turbulence. We consider the magnetic mirroring effect\narising from MHD turbulence on the diffusion of CRs. Due to the intrinsic\nsuperdiffusion of turbulent magnetic fields, CRs with large pitch angles that\nundergo mirror reflection, i.e., bouncing CRs, are not trapped between magnetic\nmirrors, but move diffusively along the turbulent magnetic field, leading to a\nnew type of parallel diffusion, i.e., mirror diffusion. This mirror diffusion\nis in general slower than the diffusion of non-bouncing CRs with small pitch\nangles that undergo gyroresonant scattering. The critical pitch angle at the\nbalance between magnetic mirroring and pitch-angle scattering is important for\ndetermining the diffusion coefficients of both bouncing and non-bouncing CRs\nand their scalings with the CR energy. We find non-universal energy scalings of\ndiffusion coefficients, depending on the properties of MHD turbulence.\n"} {"abstract": " Floquet engineering is the concept of tailoring a system by a periodic drive.\nIt has been very successful in opening new classes of Hamiltonians to the study\nwith ultracold atoms in optical lattices, such as artificial gauge fields,\ntopological band structures and density-dependent tunneling. Furthermore,\ndriven systems provide new physics without static counterpart such as anomalous\nFloquet topological insulators. 
In this review article, we provide an overview\nof the exciting developments in the field and discuss the current challenges\nand perspectives.\n"} {"abstract": " Convex clustering is an attractive clustering algorithm with favorable\nproperties such as efficiency and optimality owing to its convex formulation.\nIt is thought to generalize both k-means clustering and agglomerative\nclustering. However, it is not known whether convex clustering preserves\ndesirable properties of these algorithms. A common expectation is that convex\nclustering may learn difficult cluster types such as non-convex ones. Current\nunderstanding of convex clustering is limited to only consistency results on\nwell-separated clusters. We present a new understanding of its solutions. We\nprove that convex clustering can only learn convex clusters. We then show that\nthe clusters have disjoint bounding balls with significant gaps. We further\ncharacterize the solutions, regularization hyperparameters, inclusterable cases\nand consistency.\n"} {"abstract": " In this paper we derive sharp lower and upper bounds for the covariance of\ntwo bounded random variables when knowledge about their expected values,\nvariances or both is available. When only the expected values are known, our\nresult can be viewed as an extension of the Bhatia-Davis Inequality for\nvariances. We also provide a number of different ways to standardize\ncovariance. For a pair of binary random variables, one of these standardized\nmeasures of covariation agrees with a frequently used measure of dependence\nbetween genetic variants.\n"} {"abstract": " The production, application, and/or measurement of polarised X-/gamma rays\nare key to the fields of synchrotron science and X-/gamma-ray astronomy. The\ndesign, development and optimisation of experimental equipment utilised in\nthese fields typically relies on the use of Monte Carlo radiation transport\nmodelling toolkits such as Geant4. In this work the Geant4 \"G4LowEPPhysics\"\nelectromagnetic physics constructor has been reconfigured to offer a \"best set\"\nof electromagnetic physics models for studies exploring the transport of low\nenergy polarised X-/gamma rays. An overview of the physics models implemented\nin \"G4LowEPPhysics\", and its experimental validation against Compton X-ray\npolarimetry measurements at the BL38B1 beamline of the SPring-8 synchrotron\n(Sayo, Japan) is reported. \"G4LowEPPhysics\" is shown to be able to reproduce\nthe experimental results obtained at the BL38B1 beamline (SPring-8) to within a\nlevel of accuracy on the same order as Geant4's X-/gamma ray interaction\ncross-sectional data uncertainty (approximately $\\pm$ 5 \\%).\n"} {"abstract": " The overwhelming amount of biomedical scientific texts calls for the\ndevelopment of effective language models able to tackle a wide range of\nbiomedical natural language processing (NLP) tasks. The most recent dominant\napproaches are domain-specific models, initialized with general-domain textual\ndata and then trained on a variety of scientific corpora. However, it has been\nobserved that for specialized domains in which large corpora exist, training a\nmodel from scratch with just in-domain knowledge may yield better results.\nMoreover, the increasing focus on the compute costs for pre-training recently\nled to the design of more efficient architectures, such as ELECTRA. In this\npaper, we propose a pre-trained domain-specific language model, called\nELECTRAMed, suited for the biomedical field. 
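A quick numeric check related to the covariance-bound result above: combining the Bhatia-Davis inequality with Cauchy-Schwarz gives a simple upper bound from the means alone, verified here by Monte Carlo. This combination is generally weaker than the sharp bounds derived in that paper, and the beta-distributed test variables are arbitrary choices.

```python
import numpy as np

def bhatia_davis_var_bound(mu, lo, hi):
    """Bhatia-Davis inequality: Var(X) <= (hi - mu) * (mu - lo)."""
    return (hi - mu) * (mu - lo)

def cov_bound(mu_x, mu_y, lo=0.0, hi=1.0):
    # Cauchy-Schwarz + Bhatia-Davis: |Cov(X,Y)| <= sqrt(bound_X * bound_Y).
    return np.sqrt(bhatia_davis_var_bound(mu_x, lo, hi) *
                   bhatia_davis_var_bound(mu_y, lo, hi))

rng = np.random.default_rng(2)
x = rng.beta(2, 5, size=100_000)                      # bounded on [0, 1]
y = np.clip(x + rng.normal(0, 0.05, x.size), 0, 1)    # correlated partner
print(np.cov(x, y)[0, 1] <= cov_bound(x.mean(), y.mean()))  # True
```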
The novel approach inherits the\nlearning framework of the general-domain ELECTRA architecture, as well as its\ncomputational advantages. Experiments performed on benchmark datasets for\nseveral biomedical NLP tasks support the usefulness of ELECTRAMed, which sets\na new state-of-the-art result on the BC5CDR corpus for named entity\nrecognition, and provides the best outcome in 2 of the 5 runs of the 7th\nBioASQ-factoid Challenge for the question answering task.\n"} {"abstract": " Payment channel networks are a promising approach to improve the scalability\nof cryptocurrencies: they allow transactions to be performed in a peer-to-peer\nfashion, along multi-hop routes in the network, without requiring consensus on\nthe blockchain. However, during the discovery of cost-efficient routes for the\ntransaction, critical information may be revealed about the transacting\nentities.\n This paper initiates the study of privacy-preserving route discovery\nmechanisms for payment channel networks. In particular, we present LightPIR, an\napproach which allows a source to efficiently discover a shortest path to its\ndestination without revealing any information about the endpoints of the\ntransaction. The two main observations which allow for an efficient solution in\nLightPIR are that: (1) surprisingly, hub labelling algorithms - which were\ndeveloped to preprocess \"street network like\" graphs so one can later\nefficiently compute shortest paths - also work well for the graphs underlying\npayment channel networks, and that (2) hub labelling algorithms can be directly\ncombined with private information retrieval.\n LightPIR relies on a simple hub labelling heuristic on top of existing hub\nlabelling algorithms which leverages the specific topological features of\ncryptocurrency networks to further minimize storage and bandwidth overheads. In\na case study considering the Lightning network, we show that our approach is an\norder of magnitude more efficient compared to a privacy-preserving baseline\nbased on using private information retrieval on a database that stores all\npairs shortest paths.\n"} {"abstract": " As an essential characteristic of fractional calculus, the memory effect\nserves as one of the key factors for dealing with diverse practical issues, and\nhas thus received extensive attention since its introduction. By combining the\nfractional derivative with memory effects and grey modeling theory, this paper\naims to construct a unified framework for the commonly used fractional grey\nmodels already in place. In particular, by taking different kernel and\nnormalization functions, this framework can deduce some other new fractional\ngrey models. To further improve the prediction performance, four popular\nintelligent algorithms are employed to determine the emerging coefficients for\nthe UFGM(1,1) model. Two published cases are then utilized to verify the\nvalidity of the UFGM(1,1) model and explore the effects of the fractional\naccumulation order and initial value on the prediction accuracy, respectively.\nFinally, the model is also applied to two real examples so as to further\ndemonstrate its efficacy and show how to use the unified framework in practical\napplications.\n"} {"abstract": " B-splines are widely used in the fields of reverse engineering and\ncomputer-aided design, due to their superior properties. Traditional B-spline\nsurface interpolation algorithms usually assume regularity of the data\ndistribution. 
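The memory effect enters fractional grey models such as those above through the fractional accumulated generating operation (AGO); a minimal sketch of that operator, using the standard definition from the fractional grey modeling literature (the input series is made up for illustration):

```python
import numpy as np
from math import gamma

def fractional_accumulation(x, r):
    """r-order accumulated generating operation (AGO) used by fractional
    grey models: x_r[k] = sum_i Gamma(r+k-i) / (Gamma(k-i+1) Gamma(r)) * x[i],
    i.e. generalized binomial weights; r = 1 reduces to the ordinary
    cumulative sum, and 0 < r < 1 gives a decaying memory weighting."""
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(k + 1):
            coef = gamma(k - i + r) / (gamma(k - i + 1) * gamma(r))
            out[k] += coef * x[i]
    return out

data = [2.1, 2.4, 2.7, 3.1, 3.6]           # illustrative series
print(fractional_accumulation(data, 1.0))  # matches np.cumsum(data)
print(fractional_accumulation(data, 0.5))  # fractional memory weighting
```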
In this paper, we introduce a novel B-spline surface\ninterpolation algorithm: KPI, which can interpolate sparsely and non-uniformly\ndistributed data points. As a two-stage algorithm, our method generates the\ndataset out of the sparse data using Kriging, and uses the proposed KPI\n(Key-Point Interpolation) method to generate the control points. Our algorithm\ncan be extended to higher dimensional data interpolation, such as\nreconstructing dynamic surfaces. We apply the method to interpolating the\ntemperature of Shanxi Province. The generated dynamic surface accurately\ninterpolates the temperature data provided by the weather stations, and the\npreserved dynamic characteristics can be useful for meteorology studies.\n"} {"abstract": " Software projects are regularly updated with new functionality and bug fixes\nthrough so-called releases. In recent years, many software projects have been\nshifting to shorter release cycles and this can affect the bug handling\nactivity. Past research has focused on the impact of switching from traditional\nto rapid release cycles with respect to bug handling activity, but the effect\nof the rapid release cycle duration has not yet been studied. We empirically\ninvestigate releases of 420 open source projects with rapid release cycles to\nunderstand the effect of variable and rapid release cycle durations on bug\nhandling activity. We group the releases of these projects into five categories\nof release cycle durations. For each project, we investigate how the sequence\nof releases is related to bug handling activity metrics and we study the effect\nof the variability of cycle durations on bug fixing. Our results did not reveal\nany statistically significant difference for the studied bug handling activity\nmetrics in the presence of variable rapid release cycle durations. This\nsuggests that the duration of fast release cycles does not seem to impact bug\nhandling activity.\n"} {"abstract": " We derive major parts of the eigenvalue spectrum of the operators on the\nsquashed seven-sphere that appear in the compactification of eleven-dimensional\nsupergravity. These spectra determine the mass spectrum of the fields in\n$AdS_4$ and are important for the corresponding ${\\mathcal N} =1$\nsupermultiplet structure. This work is a continuation of the work in [1] where\nthe complete spectrum of irreducible isometry representations of the fields in\n$AdS_4$ was derived for this compactification. Some comments are also made\nconcerning the $G_2$ holonomy and its implications on the structure of the\noperator equations on the squashed seven-sphere.\n"} {"abstract": " We present a quantum error correcting code with dynamically generated logical\nqubits. When viewed as a subsystem code, the code has no logical qubits.\nNevertheless, our measurement patterns generate logical qubits, allowing the\ncode to act as a fault-tolerant quantum memory. Our particular code gives a\nmodel very similar to the two-dimensional toric code, but each measurement is a\ntwo-qubit Pauli measurement.\n"} {"abstract": " Dilatancy associated with fault slip produces a transient pore pressure drop\nwhich increases frictional strength. This effect is analysed in a steadily\npropagating rupture model that includes frictional weakening, slip-dependent\nfault dilation and fluid flow. Dilatancy is shown to increase the stress\nintensity factor required to propagate the rupture tip. 
With increasing rupture\nspeed, an undrained (strengthened) region develops near the tip and extends\nbeyond the frictionally weakened zone. Away from the undrained region, pore\nfluid diffusion gradually recharges the fault and strength returns to the\ndrained, weakened value. For sufficiently large rupture dimensions, the\ndilation-induced strength increase near the tip is equivalent to an increase in\ntoughness that is proportional to the square root of the rupture speed. In\ngeneral, dilation has the effect of increasing the stress required for rupture\ngrowth by decreasing the stress drop along the crack. Thermal pressurisation\nhas the potential to compensate for the dilatant strengthening effect, at the\nexpense of an increased heating rate, which might lead to premature frictional\nmelting. Using reasonable laboratory parameters, the dilatancy-toughening\neffect leads to rupture dynamics that is quantitatively consistent with the\ndynamics of observed slow slip events in subduction zones.\n"} {"abstract": " We develop a geometrical micro-local analysis of contact Anosov flows, such\nas geodesic flows on negatively curved manifolds. We use the method of\nwave-packet transform discussed in arXiv:1706.09307 and observe that the\ntransfer operator is well approximated (in the high frequency limit) by the\nquantization of an induced transfer operator acting on sections of some vector\nbundle on the trapped set. This gives a few important consequences: the\ndiscrete eigenvalues of the generator of transfer operators, called the Ruelle\nspectrum, are structured into vertical bands. If the right-most band is\nisolated from the others, most of the Ruelle spectrum in it concentrates along\na line parallel to the imaginary axis and, further, the density satisfies a\nWeyl law as the imaginary part tends to infinity. Some of these results were\nannounced in arXiv:1301.5525.\n"} {"abstract": " While sophisticated Visual Question Answering models have achieved remarkable\nsuccess, they tend to answer questions only according to superficial\ncorrelations between question and answer. Several recent approaches have been\ndeveloped to address this language priors problem. However, most of them\npredict the correct answer according to one best output without checking the\nauthenticity of answers. Besides, they only explore the interaction between\nimage and question, ignoring the semantics of candidate answers. In this paper,\nwe propose a select-and-rerank (SAR) progressive framework based on Visual\nEntailment. Specifically, we first select the candidate answers relevant to the\nquestion or the image, then we rerank the candidate answers by a visual\nentailment task, which verifies whether the image semantically entails the\nsynthetic statement of the question and each candidate answer. Experimental\nresults show the effectiveness of our proposed framework, which establishes a\nnew state-of-the-art accuracy on VQA-CP v2 with a 7.55% improvement.\n"} {"abstract": " The architecture of circuital quantum computers requires computing layers\ndevoted to compiling high-level quantum algorithms into lower-level circuits of\nquantum gates. The general problem of quantum compiling is to approximate any\nunitary transformation that describes the quantum computation as a sequence of\nelements selected from a finite base of universal quantum gates. 
The existence\nof an approximating sequence of one-qubit quantum gates is guaranteed by the\nSolovay-Kitaev theorem, which also yields sub-optimal algorithms for\nconstructing such sequences explicitly. Since a unitary transformation may\nrequire significantly different gate sequences, depending on the base\nconsidered, such a problem is of great complexity and does not admit an\nefficient approximating algorithm. Therefore, traditional approaches are\ntime-consuming, unsuitable for use during quantum computation. We exploit the\ndeep reinforcement learning method as an alternative strategy, which has a\nsignificantly different trade-off between search time and exploitation time.\nDeep reinforcement learning allows creating single-qubit operations in real\ntime, after an arbitrarily long training period during which a strategy for\ncreating sequences to approximate unitary operators is built. The deep\nreinforcement learning based compiling method allows for fast computation\ntimes, which could in principle be exploited for real-time quantum\ncompiling.\n"} {"abstract": " We solve the large deviations of the Kardar-Parisi-Zhang (KPZ) equation in\none dimension at short time by introducing an approach which combines field\ntheoretical, probabilistic and integrable techniques. We expand the program of\nthe weak noise theory, which maps the large deviations onto a non-linear\nhydrodynamic problem, and unveil its complete solvability through a connection\nto the integrability of the Zakharov-Shabat system. Exact solutions, depending\non the initial condition of the KPZ equation, are obtained using the inverse\nscattering method and a Fredholm determinant framework recently developed.\nThese results, explicit in the case of the droplet geometry, open the path to\nobtaining the complete large deviations for general initial conditions.\n"} {"abstract": " Grounding natural language instructions on the web to perform previously\nunseen tasks enables accessibility and automation. We introduce a task and\ndataset to train AI agents from open-domain, step-by-step instructions\noriginally written for people. We build RUSS (Rapid Universal Support Service)\nto tackle this problem. RUSS consists of two models: First, a BERT-LSTM with\npointers parses instructions to ThingTalk, a domain-specific language we design\nfor grounding natural language on the web. Then, a grounding model retrieves\nthe unique IDs of any webpage elements requested in ThingTalk. RUSS may\ninteract with the user through a dialogue (e.g. ask for an address) or execute\na web operation (e.g. click a button) inside the web runtime. To augment\ntraining, we synthesize natural language instructions mapped to ThingTalk. Our\ndataset consists of 80 different customer service problems from help websites,\nwith a total of 741 step-by-step instructions and their corresponding actions.\nRUSS achieves 76.7% end-to-end accuracy predicting agent actions from single\ninstructions. It outperforms state-of-the-art models that directly map\ninstructions to actions without ThingTalk. Our user study shows that RUSS is\npreferred by actual users over web navigation.\n"} {"abstract": " The goal of this paper is to open up a new research direction aimed at\nunderstanding the power of preprocessing in speeding up algorithms that solve\nNP-hard problems exactly. 
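For contrast with the learned compiler described above, a brute-force baseline that searches products of a two-gate base (H, T) for the closest approximation to a target single-qubit unitary makes the exponential search cost tangible; the base choice, depth cap, and distance measure here are illustrative assumptions, not the paper's method.

```python
import itertools
import numpy as np

# Universal single-qubit base: Hadamard and T gates.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
base = {"H": H, "T": T}

def distance(U, V):
    # Global-phase-invariant distance between 2x2 unitaries.
    return np.sqrt(abs(1 - abs(np.trace(U.conj().T @ V)) / 2))

def compile_brute_force(target, max_depth=8):
    """Exhaustively search gate words up to max_depth; cost grows as
    2^depth, which is what a trained policy is meant to sidestep."""
    best = (np.inf, "")
    for depth in range(1, max_depth + 1):
        for word in itertools.product(base, repeat=depth):
            U = np.eye(2)
            for g in word:
                U = base[g] @ U       # first letter acts first
            best = min(best, (distance(U, target), "".join(word)))
    return best

d, word = compile_brute_force(np.array([[1, 0], [0, 1j]]))  # S gate
print(f"{word}: distance {d:.2e}")   # finds 'TT', since S = T^2
```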
We explore this direction for the classic Feedback\nVertex Set problem on undirected graphs, leading to a new type of graph\nstructure called antler decomposition, which identifies vertices that belong to\nan optimal solution. It is an analogue of the celebrated crown decomposition\nwhich has been used for Vertex Cover. We develop the graph structure theory\naround such decompositions and design fixed-parameter tractable algorithms to\nfind them, parameterized by the number of vertices for which they witness\npresence in an optimal solution. This reduces the search space of\nfixed-parameter tractable algorithms parameterized by the solution size that\nsolve Feedback Vertex Set.\n"} {"abstract": " Experiments have shown that hepatitis C virus (HCV) infections in vitro\ndisseminate both distally via the release and diffusion of cell-free virus\nthrough the medium, and locally via direct, cell-to-cell transmission. To\ndetermine the relative contribution of each mode of infection to HCV\ndissemination, we developed an agent-based model (ABM) that explicitly\nincorporates both distal and local modes of infection. The ABM tracks the\nconcentration of extracellular infectious virus in the supernatant and the\nnumber of intracellular HCV RNA segments within each infected cell over the\ncourse of simulated in vitro HCV infections. Experimental data for in vitro HCV\ninfections conducted in the presence and absence of free-virus neutralizing\nantibodies were used to validate the ABM and constrain the value of its\nparameters. We found that direct, cell-to-cell infection accounts for 99%\n(84%$-$100%, 95% credible interval) of infection events, making it the dominant\nmode of HCV dissemination in vitro. Yet, when infection via the free-virus\nroute is blocked, a 57% reduction in the number of infection events at 72 hpi\nis observed experimentally, a result consistent with that found by our ABM.\nTaken together, these findings suggest that while HCV spread via cell-free\nvirus contributes little to the total number of infection events in vitro, it\nplays a critical role in enhancing cell-to-cell HCV dissemination by providing\naccess to distant, uninfected areas, away from the already established large\ninfection foci.\n"} {"abstract": " We investigate the second harmonic generation of a light field carrying\norbital angular momentum in a bulk $\\chi^{(2)}$ material. We show that due to\nconservation of energy and momentum, the frequency-doubled light beam has a\nmodified spatial distribution and mode characteristics. Through rigorous phase\nmatching conditions, we demonstrate efficient mode and frequency conversion\nbased on three-wave nonlinear optical mixing.\n"} {"abstract": " We develop two parallel machine-learning pipelines to estimate the\ncontribution of cosmic strings (CSs), conveniently encoded in their tension\n($G\\mu$), to the anisotropies of the cosmic microwave background radiation\nobserved by {\\it Planck}. The first approach is tree-based and feeds on certain\nmap features derived by image processing and statistical tools. The second uses\na convolutional neural network with the goal of exploring possible non-trivial\nfeatures of the CS imprints. The two pipelines are trained on {\\it Planck}\nsimulations and, when applied to the {\\it Planck} \\texttt{SMICA} map, yield the\n$3\\sigma$ upper bound of $G\\mu\\lesssim 8.6\\times 10^{-7}$. 
We also train and\napply the pipelines to make forecasts for futuristic CMB-S4-like surveys and\nconservatively find their minimum detectable tension to be $G\\mu_{\\rm min}\\sim\n1.9\\times 10^{-7}$.\n"} {"abstract": " We explore quantitative descriptors that herald when a many-particle system\nin $d$-dimensional Euclidean space $\\mathbb{R}^d$ approaches a hyperuniform\nstate as a function of the relevant control parameter. We establish\nquantitative criteria to ascertain the extent of hyperuniform and\nnonhyperuniform distance-scaling regimes in terms of the ratio $B/A$, where $A$\nis the \"volume\" coefficient and $B$ is the \"surface-area\" coefficient\nassociated with the local number variance $\\sigma^2(R)$ for a spherical window\nof radius $R$. To complement the known direct-space representation of the\ncoefficient $B$ in terms of the total correlation function $h({\\bf r})$, we\nderive its corresponding Fourier representation in terms of the structure\nfactor $S({\\bf k})$, which is especially useful when scattering information is\navailable experimentally or theoretically. We show that the free-volume theory\nof the pressure of equilibrium packings of identical hard spheres that approach\na strictly jammed state either along the stable crystal or metastable\ndisordered branch dictates that such end states be exactly hyperuniform. Using\nthe ratio $B/A$, the hyperuniformity index $H$ and the direct-correlation\nfunction length scale $\\xi_c$, we study three different exactly solvable models\nas a function of the relevant control parameter, either density or temperature,\nwith end states that are perfectly hyperuniform. We analyze equilibrium hard\nrods and \"sticky\" hard-sphere systems in arbitrary space dimension $d$ as a\nfunction of density. We also examine low-temperature excited states of\nmany-particle systems interacting with \"stealthy\" long-ranged pair interactions\nas the temperature tends to zero. The capacity to identify hyperuniform scaling\nregimes should be particularly useful in analyzing experimentally- or\ncomputationally-generated samples that are necessarily of finite size.\n"} {"abstract": " We develop an approach to choice principles and their contrapositive\nbar-induction principles as extensionality schemes connecting an \"intensional\"\nor \"effective\" view of respectively ill- and well-foundedness properties to an\n\"extensional\" or \"ideal\" view of these properties. 
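The window-variance diagnostics described above can be estimated directly from a point configuration; a hedged sketch in d = 2 that Monte Carlo samples sigma^2(R) and extracts the A and B coefficients by least squares. A Poisson pattern is used as a stand-in (its "volume" term dominates), whereas a hyperuniform configuration would drive A toward zero; window counts and radii are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n = 100.0, 20_000
pts = rng.uniform(0, L, size=(n, 2))   # Poisson stand-in configuration

def number_variance(R, trials=500):
    """Monte Carlo estimate of sigma^2(R) using random circular windows
    with periodic (minimum-image) distances."""
    centers = rng.uniform(0, L, size=(trials, 2))
    counts = np.empty(trials)
    for i, c in enumerate(centers):
        d = np.abs(pts - c)
        d = np.minimum(d, L - d)              # periodic wrap
        counts[i] = np.count_nonzero((d ** 2).sum(axis=1) < R ** 2)
    return counts.var()

radii = np.array([2.0, 3.0, 4.0, 6.0, 8.0])
sig2 = np.array([number_variance(R) for R in radii])

# Fit sigma^2(R) ~ A R^2 + B R (d = 2); the ratio B/A diagnoses how
# far the pattern is from the hyperuniform scaling regime.
M = np.column_stack([radii ** 2, radii])
A, B = np.linalg.lstsq(M, sig2, rcond=None)[0]
print(f"A = {A:.3f}, B = {B:.3f}, B/A = {B / A:.2f}")
```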
After classifying and\nanalysing the relations between different intensional definitions of\nill-foundedness and well-foundedness, we introduce, for a domain $A$, a\ncodomain $B$ and a \"filter\" $T$ on finite approximations of functions from $A$\nto $B$, a generalised form GDC$_{A,B,T}$ of the axiom of dependent choice and\ndually a generalised bar induction principle GBI$_{A,B,T}$ such that:\n GDC$_{A,B,T}$ intuitionistically captures the strength of\n $\\bullet$ the general axiom of choice expressed as $\\forall a\\,\\exists b\\,\nR(a, b) \\Rightarrow \\exists\\alpha\\,\\forall a\\, R(a,\\alpha(a))$ when $T$ is a\nfilter that derives point-wise from a relation $R$ on $A \\times B$ without\nintroducing further constraints,\n $\\bullet$ the Boolean Prime Filter Theorem / Ultrafilter Theorem if $B$ is\nthe two-element set $\\mathbb{B}$ (for a constructive definition of prime\nfilter),\n $\\bullet$ the axiom of dependent choice if $A = \\mathbb{N}$,\n $\\bullet$ Weak K{\\\"o}nig's Lemma if $A = \\mathbb{N}$ and $B = \\mathbb{B}$ (up\nto weak classical reasoning)\n GBI$_{A,B,T}$ intuitionistically captures the strength of\n $\\bullet$ G{\\\"o}del's completeness theorem in the form validity implies\nprovability for entailment relations if $B = \\mathbb{B}$,\n $\\bullet$ bar induction when $A = \\mathbb{N}$,\n $\\bullet$ the Weak Fan Theorem when $A = \\mathbb{N}$ and $B = \\mathbb{B}$.\n Contrastingly, even though GDC$_{A,B,T}$ and GBI$_{A,B,T}$ smoothly capture\nseveral variants of choice and bar induction, some instances are inconsistent,\ne.g. when $A$ is $\\mathbb{B}^\\mathbb{N}$ and $B$ is $\\mathbb{N}$.\n"} {"abstract": " The Cyber Science Lab (CSL) and Smart Cyber-Physical System (SCPS) Lab at the\nUniversity of Guelph conducted a market study of cybersecurity technology\nadoption and requirements for smart and precision farming in Canada. We\nconducted 17 stakeholder/key opinion leader interviews in Canada and the USA,\nas well as extensive secondary research, to complete this study. Each interview\ngenerally required 15-20 minutes to complete. Interviews were conducted using a\nclient-approved interview guide. Secondary and primary research focussed on the\nfollowing areas of investigation: market size and segmentation; market forecast\nand growth rate; competitive landscape; market challenges/barriers to entry;\nmarket trends/growth drivers; and adoption/commercialization of the\ntechnology.\n"} {"abstract": " AGBs and YSOs often share the same domains in IR color-magnitude or\ncolor-color diagrams, leading to potential misclassification. We extracted a\nlist of AGB interlopers from the published YSO catalogues using periodogram\nanalysis of NEOWISE time series data. YSO IR variability is typically\nstochastic and linked to episodic mass accretion. Furthermore, most variable\nYSOs are at an early evolutionary stage, with significant surrounding envelope\nand/or disk material. In contrast, AGBs are often identified by a well-defined\nsinusoidal variability with periods of a few hundred days. From our\nperiodogram analysis of all known low mass YSOs in the Gould Belt, we find 85\nAGB candidates, out of which 62 were previously classified as late-stage Class\nIII YSOs. Most of these new AGB candidates have similar IR colors to O-rich\nAGBs. We observed 73 of these AGB candidates in the H2O, CH3OH and SiO maser\nlines to further reveal their nature. 
The SiO maser emission was detected in 10\nsources, confirming them as AGBs since low mass YSOs, especially Class III\nYSOs, do not show such maser emission. The H2O and CH3OH maser lines were\ndetected in none of our targets.\n"} {"abstract": " In multi-component scalar dark matter scenarios, a single $Z_N$ ($N\\geq 4$)\nsymmetry may account for the stability of different dark matter particles. Here\nwe study the case where $N$ is even ($N=2n$) and two species, a complex scalar\nand a real scalar, contribute to the observed dark matter density. We perform a\nphenomenological analysis of three scenarios based on the $Z_4$ and $Z_6$\nsymmetries, characterizing their viable parameter spaces and analyzing their\ndetection prospects. Our results show that, thanks to the new interactions\nallowed by the $Z_{2n}$ symmetry, current experimental constraints can be\nsatisfied over a wider range of dark matter masses, and that these scenarios\nmay lead to observable signals in direct detection experiments. Finally, we\nargue that these three scenarios serve as prototypes for other two-component\n$Z_{2n}$ models with one complex and one real dark matter particle.\n"} {"abstract": " Astrophysical black holes are thought to be the Kerr black holes predicted by\ngeneral relativity, but macroscopic deviations from the Kerr solution can be\nexpected from a number of scenarios involving new physics. In Paper I, we\nstudied the reflection features in NuSTAR and XMM-Newton spectra of the\nsupermassive black hole at the center of the galaxy MCG-06-30-15 and we\nconstrained a set of deformation parameters proposed by Konoplya, Rezzolla &\nZhidenko (Phys. Rev. D93, 064015, 2016). In the present work, we analyze the\nX-ray data of a stellar-mass black hole within the same theoretical framework\nin order to probe a different curvature regime. We consider a NuSTAR\nobservation of the X-ray binary EXO 1846-031 during its outburst in 2019. As in\nthe case of Paper I, all our fits are consistent with the Kerr black hole\nhypothesis, but some deformation parameters cannot be constrained well.\n"} {"abstract": " Intuitively, one would expect the accuracy of a trained neural network's\nprediction on a test sample to correlate with how densely that sample is\nsurrounded by seen training samples in representation space. In this work we\nprovide theory and experiments that support this hypothesis. We propose an\nerror function for piecewise linear neural networks that takes a local region\nin the network's input space and outputs smooth empirical training error, which\nis an average of empirical training errors from other regions weighted by\nnetwork representation distance. A bound on the expected smooth error for each\nregion scales inversely with training sample density in representation space.\nEmpirically, we verify this bound is a strong predictor of the inaccuracy of\nthe network's prediction on test samples. For unseen test sets, including those\nwith out-of-distribution samples, ranking test samples by their local region's\nerror bound and discarding samples with the highest bounds raises prediction\naccuracy by up to 20% in absolute terms, on image classification datasets.\n"} {"abstract": " Current deep learning models for classification tasks in computer vision are\ntrained using mini-batches. In the present article, we take advantage of the\nrelationships between samples in a mini-batch, using graph neural networks to\naggregate information from similar images. 
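The periodogram screening used above to separate periodic AGB-like variability from stochastic YSO variability can be sketched with scipy; the irregularly sampled light curve below is synthetic, and its period, amplitude and noise level are invented for illustration.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

# Synthetic, irregularly sampled light curve: a sinusoid with a ~300 day
# period plus noise, mimicking AGB-like variability.
t = np.sort(rng.uniform(0, 2000, size=120))          # days
period_true = 300.0
mag = 0.5 * np.sin(2 * np.pi * t / period_true) + rng.normal(0, 0.1, t.size)

periods = np.linspace(50, 1000, 2000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(t, mag - mag.mean(), ang_freqs, normalize=True)

best = periods[np.argmax(power)]
print(f"best period: {best:.0f} days, peak power: {power.max():.2f}")
# A strong, isolated peak at a few hundred days flags an AGB candidate;
# stochastic YSO variability instead yields a low, broad periodogram.
```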
This helps mitigate the adverse\neffects of alterations to the input images on classification performance.\nDiverse experiments on image-based object and scene classification show that\nthis approach not only improves a classifier's performance but also increases\nits robustness to image perturbations and adversarial attacks. Further, we also\nshow that mini-batch graph neural networks can help to alleviate the problem of\nmode collapse in Generative Adversarial Networks.\n"} {"abstract": " Two main methods have been proposed to derive the acoustical radiation force\nand torque applied by an arbitrary acoustic field on a particle: The first one\nrelies on the plane wave angular spectrum decomposition of the incident field\n(see [Sapozhnikov and Bailey, J. Acoust. Soc. Am. 133, 661 (2013)] for the\nforce and [Gong and Baudoin, J. Acoust. Soc. Am. 148, 3131 (2020)] for the\ntorque), while the second one relies on the decomposition of the incident field\ninto a sum of spherical waves, the so-called multipole expansion (see [Silva,\nJ. Acoust. Soc. Am. 130, 3541 (2011)] and [Baresh et al., J. Acoust. Soc. Am.\n133, 25 (2013)] for the force, and [Silva et al., EPL 97, 54003 (2012)] and\n[Gong et al., Phys. Rev. Applied 11, 064022 (2019)] for the torque). In this\npaper, we formally establish the equivalence between the expressions obtained\nwith these two methods for both the force and torque.\n"} {"abstract": " B-doped $\\delta$-layers were fabricated in Si(100) using BCl$_{3}$ as a\ndopant precursor in ultrahigh vacuum. BCl$_{3}$ adsorbed readily at room\ntemperature, as revealed by scanning tunneling microscopy (STM) imaging.\nAnnealing at elevated temperatures facilitated B incorporation into the Si\nsubstrate. Secondary ion mass spectrometry (SIMS) depth profiling demonstrated\na peak B concentration $>$ 1.2(1) $\\times$ 10$^{21}$ cm$^{-3}$ with a total\nareal dose of 1.85(1) $\\times$ 10$^{14}$ cm$^{-2}$ resulting from a 30 L\nBCl$_{3}$ dose at 150 $^{\\circ}$C. Hall bar measurements of a similar sample\nwere performed at 3.0 K revealing a sheet resistance of $R_{\\mathrm{s}}$ = 1.91\nk$\\Omega\\square^{-1}$, a hole concentration of $n$ = 1.90 $\\times$ 10$^{14}$\ncm$^{-2}$ and a hole mobility of $\\mu$ = 38.0 cm$^{2}$V$^{-1}$s$^{-1}$ without\nperforming an incorporation anneal. Further, the conductivity of several\nB-doped $\\delta$-layers showed a logarithmic dependence on temperature,\nsuggestive of a two-dimensional system. Selective-area deposition of BCl$_{3}$\nwas also demonstrated using both H- and Cl-based monatomic resists. In\ncomparison to a dosed area on bare Si, adsorption selectivity ratios for H and\nCl resists were determined by SIMS to be 310(10):1 and 1529(5):1, respectively,\nfurther validating the use of BCl$_{3}$ as a dopant precursor for atomic\nprecision fabrication of acceptor-doped devices in Si.\n"} {"abstract": " This paper proposes a novel way to solve transient linear and non-linear\nsolid dynamics for compressible, nearly incompressible, and incompressible\nmaterials in the updated Lagrangian framework for tetrahedral unstructured\nfinite elements. It consists of a mixed formulation in both displacement and\npressure, where the momentum equation of the continuum is complemented with a\npressure equation that handles incompressibility inherently. It is obtained\nthrough the deviatoric and volumetric split of the stress, which enables us to\nsolve the problem in the incompressible limit. 
The Variational Multi-Scale\nmethod (VMS) is developed based on the orthogonal decomposition of the\nvariables, which damps out spurious pressure fields for piecewise linear\ntetrahedral elements. Various numerical examples are presented to assess the\nrobustness, accuracy and capabilities of our scheme in bending-dominated\nproblems, and for complex geometries.\n"} {"abstract": " Coronary artery disease (CAD) has posed a leading threat to the lives of\ncardiovascular disease patients worldwide for a long time. Therefore, automated\ndiagnosis of CAD has indispensable significance in clinical medicine. However,\nthe complexity of coronary artery plaques that cause CAD makes the automatic\ndetection of coronary artery stenosis in Coronary CT angiography (CCTA) a\ndifficult task. In this paper, we propose a Transformer network (TR-Net) for\nthe automatic detection of significant stenosis (i.e. luminal narrowing > 50%)\nwhile practically completing the computer-assisted diagnosis of CAD. The\nproposed TR-Net introduces a novel Transformer, and tightly combines\nconvolutional layers and Transformer encoders, allowing their advantages to be\ndemonstrated in the task. By analyzing semantic information sequences, TR-Net\ncan fully understand the relationship between image information in each\nposition of a multiplanar reformatted (MPR) image, and accurately detect\nsignificant stenosis based on both local and global information. We evaluate\nour TR-Net on a dataset of 76 patients annotated by experienced radiologists.\nExperimental results illustrate that our TR-Net has achieved better results in\nACC (0.92), Spec (0.96), PPV (0.84), F1 (0.79) and MCC (0.74) indicators\ncompared with the state-of-the-art methods. The source code is publicly\navailable from the link (https://github.com/XinghuaMa/TR-Net).\n"} {"abstract": " We introduce large-scale Augmented Granger Causality (lsAGC) as a method for\nconnectivity analysis in complex systems. The lsAGC algorithm combines\ndimension reduction with source time-series augmentation and uses predictive\ntime-series modeling for estimating directed causal relationships among\ntime-series. This method is a multivariate approach, since it is capable of\nidentifying the influence of each time-series on any other time-series in the\npresence of all other time-series of the underlying dynamic system. We\nquantitatively evaluate the performance of lsAGC on synthetic directional\ntime-series networks with known ground truth. As a reference method, we compare\nour results with cross-correlation, which is typically used as a standard\nmeasure of connectivity in the functional MRI (fMRI) literature. Using\nextensive simulations for a wide range of time-series lengths and two different\nsignal-to-noise ratios of 5 and 15 dB, lsAGC consistently outperforms\ncross-correlation at accurately detecting network connections, using Receiver\nOperator Characteristic Curve (ROC) analysis, across all tested time-series\nlengths and noise levels. In addition, as an outlook to possible clinical\napplication, we perform a preliminary qualitative analysis of connectivity\nmatrices for fMRI data of Autism Spectrum Disorder (ASD) patients and typical\ncontrols, using a subset of 59 subjects of the Autism Brain Imaging Data\nExchange II (ABIDE II) data repository. 
Our results suggest that lsAGC, by\nextracting sparse connectivity matrices, may be useful for network analysis in\ncomplex systems, and may be applicable to clinical fMRI analysis in future\nresearch, such as targeting disease-related classification or regression tasks\non clinical data.\n"} {"abstract": " Frustrated Mott insulators, such as the transition metal dichalcogenide\n1T-TaS$_{2}$, present an ideal platform for the experimental realization of the\ndisorder-induced insulator-metal transition. In this letter we present the\nfirst non-perturbative theoretical investigation of the disorder-induced\ninsulator-metal transition in copper (Cu) intercalated 1T-TaS$_{2}$, in the\nframework of the Anderson-Hubbard model on a triangular lattice. Based on the\nmagnetic, spectroscopic and transport signatures we map out the thermal phase\ndiagram of this system. Our results show that over a regime of moderate\ndisorder strength this material hosts an antiferromagnetic metal. The emergent\nmetal is a non-Fermi liquid, governed by resilient quasiparticles that survive\nas the relevant low energy excitations even after the breakdown of the Fermi\nliquid description. The system undergoes a crossover from a non-Fermi-liquid\nmetal to a bad metallic phase as a function of temperature. Our results on\nspectral line shape are found to be in excellent agreement with the\nexperimental observations on Cu intercalated 1T-TaS$_{2}$. The optical and\nspectroscopic signatures discussed in this letter are expected to serve as an\nimportant benchmark for future experiments on this and related classes of\nmaterials. The numerical technique discussed herein serves as a computational\nbreakthrough to address systems for which most of the existing methods fall\nshort.\n"} {"abstract": " In this work, we propose a Model Predictive Control (MPC)-based Reinforcement\nLearning (RL) method for Autonomous Surface Vehicles (ASVs). The objective is\nto find an optimal policy that minimizes the closed-loop cost of a\nsimplified freight mission, including collision-free path following, autonomous\ndocking, and a skillful transition between them. We use a parametrized\nMPC scheme to approximate the optimal policy, which considers\npath-following/docking costs and states (position, velocity)/inputs (thruster\nforce, angle) constraints. The Least Squares Temporal Difference (LSTD)-based\nDeterministic Policy Gradient (DPG) method is then applied to update the policy\nparameters. Our simulation results demonstrate that the proposed MPC-LSTD-based\nDPG method could improve the closed-loop performance during learning for the\nfreight mission problem of the ASV.\n"} {"abstract": " Pore structures and gas transport properties in porous separators for polymer\nelectrolyte fuel cells are evaluated both experimentally and through\nsimulations. In the experiments, the gas permeabilities of two porous samples,\na conventional sample and one with low electrical resistivity, are measured by\na capillary flow porometer, and the pore size distributions are evaluated with\nmercury porosimetry. Local pore structures are directly observed with micro\nX-ray computed tomography (CT). In the simulations, the effective diffusion\ncoefficients of oxygen and the air permeability in porous samples are\ncalculated using random walk Monte Carlo simulations and computational fluid\ndynamics (CFD) simulations, respectively, based on the X-ray CT images.
The\ncalculated porosities and air permeabilities of the porous samples are in good\nagreement with the experimental values. The simulation results also show that\nthe in-plane permeability is twice the through-plane permeability in the\nconventional sample, whereas it is slightly higher in the low-resistivity\nsample. The results of this study show that CFD simulation based on micro X-ray\nCT images makes it possible to evaluate anisotropic gas permeabilities in\nanisotropic porous media.\n"} {"abstract": " We describe how some problems (interpretability, lack of object-orientedness)\nof modern deep networks potentially could be solved by adapting a biologically\nplausible saccadic mechanism of perception. A sketch of such a saccadic\nvision model is proposed. Proof of concept experimental results are provided to\nsupport the proposed approach.\n"} {"abstract": " The knapsack problem for groups was introduced by Miasnikov, Nikolaev, and\nUshakov. It is defined for each finitely generated group $G$ and takes as input\ngroup elements $g_1,\\ldots,g_n,g\\in G$ and asks whether there are\n$x_1,\\ldots,x_n\\ge 0$ with $g_1^{x_1}\\cdots g_n^{x_n}=g$. We study the knapsack\nproblem for wreath products $G\\wr H$ of groups $G$ and $H$. Our main result is\na characterization of those wreath products $G\\wr H$ for which the knapsack\nproblem is decidable. The characterization is in terms of decidability\nproperties of the individual factors $G$ and $H$. To this end, we introduce two\ndecision problems, the intersection knapsack problem and its restriction, the\npositive intersection knapsack problem. Moreover, we apply our main result to\n$H_3(\\mathbb{Z})$, the discrete Heisenberg group, and to Baumslag-Solitar\ngroups $\\mathsf{BS}(1,q)$ for $q\\ge 1$. First, we show that the knapsack\nproblem is undecidable for $G\\wr H_3(\\mathbb{Z})$ for any $G\\ne 1$. This\nimplies that for $G\\ne 1$ and for infinite and virtually nilpotent groups $H$,\nthe knapsack problem for $G\\wr H$ is decidable if and only if $H$ is virtually\nabelian and solvability of systems of exponent equations is decidable for $G$.\nSecond, we show that the knapsack problem is decidable for\n$G\\wr\\mathsf{BS}(1,q)$ if and only if solvability of systems of exponent\nequations is decidable for $G$.\n"} {"abstract": " The first of a two-part series, this paper assumes a weak local energy decay\nestimate holds and proves that solutions to the linear wave equation in\n$\\mathbb R^{1+3}$ with variable coefficients, first-order terms, and a potential\ndecay at a rate depending on how rapidly the vector fields of the metric,\nfirst-order terms, and potential decay at spatial infinity. We prove results\nfor both stationary and nonstationary metrics. The proof uses local energy\ndecay to prove an initial decay rate, and then uses the one-dimensional\nreduction repeatedly to achieve the full decay rate.\n"} {"abstract": " The effect of coupling between pairing and quadrupole triaxial shape\nvibrations on the low-energy collective states of $\\gamma$-soft nuclei is\ninvestigated using a model based on the framework of nuclear energy density\nfunctionals (EDFs). Employing a constrained self-consistent mean-field (SCMF)\nmethod that uses universal EDFs and pairing interactions, potential energy\nsurfaces of characteristic $\\gamma$-soft Os and Pt nuclei with $A\\approx190$\nare calculated as functions of the pairing and triaxial quadrupole\ndeformations.
Collective spectroscopic properties are computed using a\nnumber-nonconserving interacting boson model (IBM) Hamiltonian, with parameters\ndetermined by mapping the SCMF energy surface onto the expectation value of the\nHamiltonian in the boson condensate state. It is shown that, by simultaneously\nconsidering both the shape and pairing collective degrees of freedom, the\nEDF-based IBM successfully reproduces data on collective structures based on\nlow-energy $0^{+}$ states, as well as $\\gamma$-vibrational bands.\n"} {"abstract": " We consider the capacity of entanglement in models related to\ngravitational phase transitions. The capacity is labeled by the replica\nparameter which plays a similar role to the inverse temperature in\nthermodynamics. In the end-of-the-world brane model of a radiating black hole\nthe capacity has a peak around the Page time indicating the phase transition\nbetween replica wormhole geometries of different types of topology. Similarly,\nin a moving mirror model describing Hawking radiation the capacity typically\nshows a discontinuity when the dominant saddle switches between two phases,\nwhich can be seen as a formation of island regions. In either case we find the\ncapacity can be an invaluable diagnostic for a black hole evaporation process.\n"} {"abstract": " The tension between inferences of the Hubble constant ($H_0$) is found in a large\narray of dataset combinations. Modification to the late expansion history is\nthe most direct solution to this discrepancy. In this work we examine the\nviability of restoring the cosmological concordance within the scenarios of\nlate dark energy. We explore two representative parameterizations: a novel\nversion of transitional dark energy (TDE) and modified emergent dark energy\n(MEDE). We find that the main anchors for the cosmic distance scale, namely the cosmic\nmicrowave background (CMB), baryon acoustic oscillation (BAO), and SNe Ia\ncalibrated by Cepheids, form an ``impossible trinity'', i.e., it is plausible to\nreconcile any two of them but unlikely to accommodate them all.\nParticularly, the tension between BAO and the calibrated SNe Ia cannot be\nreconciled within the scenarios of late dark energy. Nevertheless, we still\nfind positive evidence for the TDE model in the analysis of all dataset\ncombinations, while with the exclusion of the BOSS datasets, the tension with\nSH0ES drops from $3.1\\sigma$ to $1.1\\sigma$. For the MEDE model, the tension with\n$H_0$ is much alleviated with the exclusion of the SNe dataset. But unfortunately,\nin both TDE and MEDE scenarios, the $S_8$ tension is neither relieved nor\nexacerbated.\n"} {"abstract": " The Raman peak position and linewidth provide insight into phonon\nanharmonicity and electron-phonon interactions (EPI) in materials. For\nmonolayer graphene, prior first-principles calculations have yielded decreasing\nlinewidth with increasing temperature, which is opposite to measurement\nresults. Here, we explicitly consider four-phonon anharmonicity, phonon\nrenormalization, and electron-phonon coupling, and find all to be important to\nsuccessfully explain both the $G$ peak frequency shift and linewidths in our\nsuspended graphene sample over a wide temperature range.
Four-phonon scattering\ncontributes a prominent linewidth that increases with temperature, while the\ntemperature dependence from EPI is found to be reversed above a doping\nthreshold ($\\hbar\\omega_G/2$, with $\\omega_G$ being the frequency of the $G$\nphonon).\n"} {"abstract": " We propose a novel neural network module that transforms an existing\nsingle-frame semantic segmentation model into a video semantic segmentation\npipeline. In contrast to prior works, we strive towards a simple, fast, and\ngeneral module that can be integrated into virtually any single-frame\narchitecture. Our approach aggregates a rich representation of the semantic\ninformation in past frames into a memory module. Information stored in the\nmemory is then accessed through an attention mechanism. In contrast to previous\nmemory-based approaches, we propose a fast local attention layer, providing\ntemporal appearance cues in the local region of prior frames. We further fuse\nthese cues with an encoding of the current frame through a second\nattention-based module. The segmentation decoder processes the fused\nrepresentation to predict the final semantic segmentation. We integrate our\napproach into two popular semantic segmentation networks: ERFNet and PSPNet. We\nobserve an improvement in segmentation performance on Cityscapes of 1.7% and\n2.1% in mIoU, respectively, while increasing the inference time of ERFNet by only\n1.5 ms.\n"} {"abstract": " We study the problem of dynamically trading multiple futures whose underlying\nasset price follows a multiscale central tendency Ornstein-Uhlenbeck (MCTOU)\nmodel. Under this model, we derive the closed-form no-arbitrage prices for the\nfutures contracts. Applying a utility maximization approach, we solve for the\noptimal trading strategies under different portfolio configurations by\nexamining the associated system of Hamilton-Jacobi-Bellman (HJB) equations. The\noptimal strategies depend not only on the parameters of the underlying asset\nprice process but also on the risk premia embedded in the futures prices.\nNumerical examples are provided to illustrate the investor's optimal positions\nand optimal wealth over time.\n"} {"abstract": " Let $ K $ be a number field over $ \\mathbb{Q} $ and let $ a_K(m) $ denote the\nnumber of integral ideals of $ K $ of norm equal to $ m\\in\\mathbb{N} $. In this\npaper we obtain asymptotic formulae for sums of the form $ \\sum_{m\\leq X}\na^l_K(m) $, thereby generalizing the previous works on the problem. Previously\nsuch asymptotics were known only in the case when $ K $ is Galois or when $K$\nis a non-normal cubic extension and $ l=2,3 $. The present work subsumes both\nthese cases.\n"} {"abstract": " Automatic transcription of monophonic/polyphonic music is a challenging task\ndue to the lack of availability of large amounts of transcribed data. In this\npaper, we propose a data augmentation method that converts natural speech to\nsinging voice based on a vocoder-based speech synthesizer. This approach, called\nvoice to singing (V2S), performs the voice style conversion by modulating the\nF0 contour of the natural speech with that of a singing voice. The V2S\nmodel-based style transfer can generate good-quality singing voice, thereby enabling\nthe conversion of large corpora of natural speech to singing voice that is\nuseful in building an E2E lyrics transcription system.
In our experiments on\nmonophonic singing voice data, the V2S style transfer provides a significant\ngain (a relative improvement of 21%) for the E2E lyrics transcription system. We\nalso discuss additional components like transfer learning and lyrics-based\nlanguage modeling to improve the performance of the lyrics transcription\nsystem.\n"} {"abstract": " This paper provides a multivariate extension of Bertoin's pathwise\nconstruction of a L\\'evy process conditioned to stay positive/negative. The\nprocesses thus obtained, conditioned to stay in half-spaces, are closely related to\nthe original process on a compact time interval seen from its directional\nextremal points. In the case of a correlated Brownian motion the law of the\nconditioned process is obtained by a linear transformation of a standard\nBrownian motion and an independent Bessel-3 process. Further motivation is\nprovided by a limit theorem corresponding to zooming in on a L\\'evy process\nwith a Brownian part at the point of its directional infimum. Applications to\nzooming in at the point furthest from the origin are envisaged.\n"} {"abstract": " The US Census Bureau plans to protect the privacy of 2020 Census respondents\nthrough its Disclosure Avoidance System (DAS), which attempts to achieve\ndifferential privacy guarantees by adding noise to the Census microdata. By\napplying redistricting simulation and analysis methods to DAS-protected 2010\nCensus data, we find that the protected data are not of sufficient quality for\nredistricting purposes. We demonstrate that the injected noise makes it\nimpossible for states to accurately comply with the One Person, One Vote\nprinciple. Our analysis finds that the DAS-protected data are biased against\ncertain areas, depending on voter turnout and partisan and racial composition,\nand that these biases lead to large and unpredictable errors in the analysis of\npartisan and racial gerrymanders. Finally, we show that the DAS algorithm does\nnot universally protect respondent privacy. Based on the names and addresses of\nregistered voters, we are able to predict their race as accurately using the\nDAS-protected data as when using the 2010 Census data. Despite this, the\nDAS-protected data can still inaccurately estimate the number of\nmajority-minority districts. We conclude with recommendations for how the\nCensus Bureau should proceed with privacy protection for the 2020 Census.\n"} {"abstract": " For analytic functions $g$ on the unit disc with non-negative Maclaurin\ncoefficients, we describe the boundedness and compactness of the integral\noperator $T_g(f)(z)=\\int_0^zf(\\zeta)g'(\\zeta)\\,d\\zeta$ from a space $X$ of\nanalytic functions in the unit disc to $H^\\infty$, in terms of neat and useful\nconditions on the Maclaurin coefficients of $g$. The choices of $X$ that will\nbe considered contain the Hardy and the Hardy-Littlewood spaces, the\nDirichlet-type spaces $D^p_{p-1}$, as well as the classical Bloch and BMOA\nspaces.\n"} {"abstract": " We consider a multiphysics model for the flow of a Newtonian fluid coupled with\nthe Biot consolidation equations through an interface, and incorporating total\npressure as an unknown in the poroelastic region. A new mixed-primal finite\nelement scheme is proposed, solving for the pairs fluid velocity - pressure and\ndisplacement - total poroelastic pressure using Stokes-stable elements, and\nwhere the formulation does not require Lagrange multipliers to set up the usual\ntransmission conditions on the interface.
The stability and well-posedness of\nthe continuous and semi-discrete problems are analysed in detail. Our numerical\nstudy is framed in the context of applicative problems pertaining to\nheterogeneous geophysical flows and to eye poromechanics. For the latter, we\ninvestigate different interfacial flow regimes in Cartesian and axisymmetric\ncoordinates that could eventually help describe early morphologic changes\nassociated with glaucoma development in canine species.\n"} {"abstract": " In this work, we explore macroscopic transport phenomena associated with a\nrotational system in the presence of an external orthogonal electromagnetic\nfield. Simply based on the lowest Landau level approximation, we derive\nnontrivial expressions for the chiral density and various currents consistently by\nadopting a small angular velocity expansion or the Kubo formula. While the generation\nof the anomalous electric current is due to the pseudo gauge field effect of the\nspin-rotation coupling, the chiral density and current can be simply explained\nwith the help of Lorentz boosts. Finally, Lorentz covariant forms can be\nobtained by unifying our results and the magnetovorticity effect.\n"} {"abstract": " We find an explicit formula that produces inductively the elliptic stable\nenvelopes of an arbitrary Nakajima variety associated to a quiver Q from the\nones of those Nakajima varieties whose framing vectors are the fundamental\nvectors of the quiver Q, i.e. the dimension vectors with just one nonzero\nentry, equal to one. The result relies on abelianization of stable envelopes. As an\napplication, we combine our result with Smirnov's formula for the elliptic\nstable envelopes of the Hilbert scheme of points on the plane to produce the\nelliptic stable envelopes of the instanton moduli space.\n"} {"abstract": " Small-scale magnetic fields are not only the fundamental element of solar\nmagnetism, but also closely related to the structure of the solar atmosphere.\nObservations have shown that there is a ubiquitous tangled small-scale\nmagnetic field with a strength of 60 $\\sim$ 130\\,G in the canopy forming layer\nof the quiet solar photosphere. On the other hand, multi-dimensional MHD\nsimulations show that the convective overshooting expels the magnetic field to\nform the magnetic canopies at a height of about 500\\,km in the upper\nphotosphere. However, the distribution of such small-scale ``canopies\" in the\nsolar photosphere cannot be rigorously constrained by either observations or\nnumerical simulations. Based on standard stellar models, we identify that these\nmagnetic canopies can act as a global magnetic-arch splicing layer, and find\nthat the reflection of the solar p-mode oscillations at this magnetic-arch\nsplicing layer significantly reduces the discrepancy between\nthe observed and calculated p-mode frequencies. The location of the\nmagnetic-arch splicing layer is determined to be at a height of about 630\\,km, and\nthe inferred strength of the magnetic field is about 90\\,G. These features of\nthe magnetic-arch splicing layer derived independently in the present study are\nquantitatively in agreement with the presence of small-scale magnetic canopies\nas obtained by observations and 3-D MHD simulations.\n"} {"abstract": " In many real-world scenarios, the utility of a user is derived from a\nsingle execution of a policy.
In this case, to apply multi-objective\nreinforcement learning, the expected utility of the returns must be optimised.\nVarious scenarios exist where a user's preferences over objectives (also known\nas the utility function) are unknown or difficult to specify. In such\nscenarios, a set of optimal policies must be learned. However, settings where\nthe expected utility must be maximised have been largely overlooked by the\nmulti-objective reinforcement learning community and, as a consequence, a set\nof optimal solutions has yet to be defined. In this paper we address this\nchallenge by proposing first-order stochastic dominance as a criterion to build\nsolution sets to maximise expected utility. We also propose a new dominance\ncriterion, known as expected scalarised returns (ESR) dominance, that extends\nfirst-order stochastic dominance to allow a set of optimal policies to be\nlearned in practice. We then define a new solution concept called the ESR set,\nwhich is a set of policies that are ESR dominant. Finally, we define a new\nmulti-objective distributional tabular reinforcement learning (MOT-DRL)\nalgorithm to learn the ESR set in a multi-objective multi-armed bandit setting.\n"} {"abstract": " This paper presents a theoretical framework for the design and analysis of\ngradient descent-based algorithms for coverage control tasks involving robot\nswarms. We adopt a multiscale approach to analysis and design to ensure\nconsistency of the algorithms in the large-scale limit. First, we represent the\nmacroscopic configuration of the swarm as a probability measure and formulate\nthe macroscopic coverage task as the minimization of a convex objective\nfunction over probability measures. We then construct a macroscopic dynamics\nfor swarm coverage, which takes the form of a proximal descent scheme in the\n$L^2$-Wasserstein space. Our analysis exploits the generalized geodesic\nconvexity of the coverage objective function, proving convergence in the\n$L^2$-Wasserstein sense to the target probability measure. We then obtain a\nconsistent gradient descent algorithm in the Euclidean space that is\nimplementable by a finite collection of agents, via a \"variational\"\ndiscretization of the macroscopic coverage objective function. We establish the\nconvergence properties of the gradient descent and its behavior in the\ncontinuous-time and large-scale limits. Furthermore, we establish a connection\nwith well-known Lloyd-based algorithms, seen as a particular class of\nalgorithms within our framework, and demonstrate our results via numerical\nexperiments.\n"} {"abstract": " The higher-order generalized singular value decomposition (HO-GSVD) is a\nmatrix factorization technique that extends the GSVD to $N \\ge 2$ data\nmatrices, and can be used to identify shared subspaces in multiple large-scale\ndatasets with different row dimensions. The standard HO-GSVD factors $N$\nmatrices $A_i\\in\\mathbb{R}^{m_i\\times n}$ as $A_i=U_i\\Sigma_i V^\\text{T}$, but\nrequires that each of the matrices $A_i$ has full column rank. We propose a\nreformulation of the HO-GSVD that extends its applicability to rank-deficient\ndata matrices $A_i$. If the matrix of stacked $A_i$ has full rank, we show that\nthe properties of the original HO-GSVD extend to our reformulation. The HO-GSVD\ncaptures shared right singular vectors of the matrices $A_i$, and we show that\nour method also identifies directions that are unique to the image of a single\nmatrix. 
We also extend our results to the higher-order cosine-sine\ndecomposition (HO-CSD), which is closely related to the HO-GSVD. Our extension\nof the standard HO-GSVD allows its application to datasets with $m_i < n$, such\nas are encountered in bioinformatics, neuroscience, control theory or\nclassification problems.\n"} {"abstract": " Multi-frame human pose estimation in complicated situations is challenging.\nAlthough state-of-the-art human joint detectors have demonstrated remarkable\nresults for static images, their performance falls short when we apply these\nmodels to video sequences. Prevalent shortcomings include the failure to handle\nmotion blur, video defocus, or pose occlusions, arising from the inability to\ncapture the temporal dependency among video frames. On the other hand,\ndirectly employing conventional recurrent neural networks incurs empirical\ndifficulties in modeling spatial contexts, especially for dealing with pose\nocclusions. In this paper, we propose a novel multi-frame human pose estimation\nframework, leveraging abundant temporal cues between video frames to facilitate\nkeypoint detection. Three modular components are designed in our framework. A\nPose Temporal Merger encodes keypoint spatiotemporal context to generate\neffective searching scopes while a Pose Residual Fusion module computes\nweighted pose residuals in dual directions. These are then processed via our\nPose Correction Network for efficient refining of pose estimations. Our method\nranks No.1 in the Multi-frame Person Pose Estimation Challenge on the\nlarge-scale benchmark datasets PoseTrack2017 and PoseTrack2018. We have\nreleased our code, hoping to inspire future research.\n"} {"abstract": " Versions of the following problem appear in several topics such as Gamma\nKnife radiosurgery, studying objects with the X-ray transform, the 3SUM\nproblem, and $k$-linear degeneracy testing. Suppose there are $n$ points on\na plane whose specific locations are unknown. We are given all the lines that\ngo through the points with a given slope. We show that the minimum number of\nslopes needed, in general, to find all the point locations is $n+1$ and we\nprovide an algorithm to do so.\n"} {"abstract": " Speech disorders often occur at the early stage of Parkinson's disease (PD).\nThe speech impairments could be indicators of the disorder for early diagnosis,\nwhile motor symptoms are not yet obvious. In this study, we constructed a new\nspeech corpus of Mandarin Chinese and addressed the classification of patients with\nPD. We implemented classical machine learning methods with ranking algorithms\nfor feature selection, convolutional and recurrent deep networks, and an\nend-to-end system. Our classification accuracy significantly surpassed\nstate-of-the-art studies. The result suggests that free talk has stronger\nclassification power than standard speech tasks, which could help the design of\nfuture speech tasks for efficient early diagnosis of the disease. Based on\nexisting classification methods and our natural speech study, the automatic\ndetection of PD from daily conversation could be accessible to the majority of\nthe clinical population.\n"} {"abstract": " In this paper we consider convex co-compact subgroups of the projective\nlinear group. We prove that such a group is relatively hyperbolic with respect\nto a collection of virtually Abelian subgroups of rank two if and only if each\nopen face in the ideal boundary has dimension at most one.
We also introduce\nthe \"coarse Hilbert dimension\" of a subset of a convex set and use it to\ncharacterize when a naive convex co-compact subgroup is word hyperbolic or\nrelatively hyperbolic with respect to a collection of virtually Abelian\nsubgroups of rank two.\n"} {"abstract": " Let $I(G)^{[k]}$ denote the $k$th squarefree power of the edge ideal of $G$.\nWhen $G$ is a forest, we provide a sharp upper bound for the regularity of\n$I(G)^{[k]}$ in terms of the $k$-admissable matching number of $G$. For any\npositive integer $k$, we classify all forests $G$ such that $I(G)^{[k]}$ has\nlinear resolution. We also give a combinatorial formula for the regularity of\n$I(G)^{[2]}$ for any forest $G$.\n"} {"abstract": " Recently, deep learning approaches have become the main research frontier for\nbiological image reconstruction and enhancement problems thanks to their high\nperformance, along with their ultra-fast inference times. However, due to the\ndifficulty of obtaining matched reference data for supervised learning, there\nhas been increasing interest in unsupervised learning approaches that do not\nneed paired reference data. In particular, self-supervised learning and\ngenerative models have been successfully used for various biological imaging\napplications. In this paper, we overview these approaches from a coherent\nperspective in the context of classical inverse problems, and discuss their\napplications to biological imaging, including electron, fluorescence and\ndeconvolution microscopy, optical diffraction tomography and functional\nneuroimaging.\n"} {"abstract": " Given a graph with a source vertex $s$, the Single Source Replacement Paths\n(SSRP) problem is to compute, for every vertex $t$ and edge $e$, the length\n$d(s,t,e)$ of a shortest path from $s$ to $t$ that avoids $e$. A Single-Source\nDistance Sensitivity Oracle (Single-Source DSO) is a data structure that\nanswers queries of the form $(t,e)$ by returning the distance $d(s,t,e)$. We\nshow how to deterministically compress the output of the SSRP problem on\n$n$-vertex, $m$-edge graphs with integer edge weights in the range $[1,M]$ into\na Single-Source DSO of size $O(M^{1/2}n^{3/2})$ with query time\n$\\widetilde{O}(1)$. The space requirement is optimal (up to the word size) and\nour techniques can also handle vertex failures.\n Chechik and Cohen [SODA 2019] presented a combinatorial, randomized\n$\\widetilde{O}(m\\sqrt{n}+n^2)$ time SSRP algorithm for undirected and\nunweighted graphs. Grandoni and Vassilevska Williams [FOCS 2012, TALG 2020]\ngave an algebraic, randomized $\\widetilde{O}(Mn^\\omega)$ time SSRP algorithm\nfor graphs with integer edge weights in the range $[1,M]$, where $\\omega<2.373$\nis the matrix multiplication exponent. We derandomize both algorithms for\nundirected graphs in the same asymptotic running time and apply our compression\nto obtain deterministic Single-Source DSOs. The $\\widetilde{O}(m\\sqrt{n}+n^2)$\nand $\\widetilde{O}(Mn^\\omega)$ preprocessing times are polynomial improvements\nover previous $o(n^2)$-space oracles.\n On sparse graphs with $m=O(n^{5/4-\\varepsilon}/M^{7/4})$ edges, for any\nconstant $\\varepsilon > 0$, we reduce the preprocessing to randomized\n$\\widetilde{O}(M^{7/8}m^{1/2}n^{11/8})=O(n^{2-\\varepsilon/2})$ time. 
This is\nthe first truly subquadratic time algorithm for building Single-Source DSOs on\nsparse graphs.\n"} {"abstract": " End-to-end DNN architectures have pushed the state-of-the-art in speech\ntechnologies, as well as in other spheres of AI, leading researchers to train\nmore complex and deeper models. These improvements came at the cost of\ntransparency. DNNs are innately opaque and difficult to interpret. We no longer\nunderstand what features are learned, where they are preserved, and how they\ninter-operate. Such an analysis is important for better model understanding,\ndebugging, and ensuring fairness in ethical decision making. In this work, we\nanalyze the representations trained within deep speech models, towards the task\nof speaker recognition, dialect identification and reconstruction of masked\nsignals. We carry out a layer- and neuron-level analysis of the utterance-level\nrepresentations captured within pretrained speech models for speaker, language\nand channel properties. We study: is this information captured in the learned\nrepresentations? where is it preserved? how is it distributed? and can we\nidentify a minimal subset of the network that possesses this information? Using\ndiagnostic classifiers, we answer these questions. Our results reveal: (i)\nchannel and gender information is omnipresent and is redundantly distributed;\n(ii) complex properties such as dialectal information are encoded only in the\ntask-oriented pretrained network and are localised in the upper layers; (iii) a\nminimal subset of neurons can be extracted to encode the predefined property;\n(iv) salient neurons are sometimes shared between properties and can highlight the\npresence of biases in the network. Our cross-architectural comparison indicates\nthat (v) the pretrained models capture speaker-invariant information and (vi)\nthe pretrained CNN models are competitive with the Transformers for encoding\ninformation about the studied properties. To the best of our knowledge, this is\nthe first study to investigate neuron analysis on speech models.\n"} {"abstract": " AOSAT is a Python package for the analysis of single-conjugate adaptive\noptics (SCAO) simulation results. Python is widely used in the astronomical\ncommunity these days, and AOSAT may be used stand-alone, integrated into a\nsimulation environment, or can easily be extended according to a user's needs.\nStandalone operation requires the user to provide the residual wavefront frames\nprovided by the SCAO simulation package used, the aperture mask (pupil) used\nfor the simulation, and a custom setup file describing the simulation/analysis\nconfiguration. In its standard form, AOSAT's \"tearsheet\" functionality will\nthen run all standard analyzers, providing an informative plot collection on\nproperties such as the point-spread function (PSF) and its quality, residual\ntip-tilt, the impact of pupil fragmentation, residual optical aberration modes\nboth static and dynamic, the expected high-contrast performance of suitable\ninstrumentation with and without coronagraphs, and the power spectral density\nof residual wavefront errors.\n AOSAT fills the gap between the simple numerical outputs provided by most\nsimulation packages, and the full-scale deployment of instrument simulators and\ndata reduction suites operating on SCAO residual wavefronts.
It enables\ninstrument designers and end-users to quickly judge the impact of design or\nconfiguration decisions on the final performance of downstream\ninstrumentation.\n"} {"abstract": " It has recently been pointed out that Gaia is capable of detecting a\nstochastic gravitational wave background in the sensitivity band between the\nfrequencies of pulsar timing arrays and LISA. We argue that Gaia and THEIA have\ngreat potential for early universe cosmology, since such a frequency range is\nideal for probing phase transitions in asymmetric dark matter, SIMP models and the\ncosmological QCD transition. Furthermore, there is the potential for detecting\nprimordial black holes in the solar mass range produced during such an early\nuniverse transition and distinguishing them from those expected from the QCD\nepoch. Finally, we discuss the potential for Gaia and THEIA to probe\ntopological defects and the ability of Gaia to potentially shed light on the\nrecent NANOGrav results.\n"} {"abstract": " This paper studies the recovery of a joint piece-wise linear trend from a\ntime series using an L1 regularization approach, called L1 trend filtering (Kim,\nKoh and Boyd, 2009). We provide some sufficient conditions under which an L1\ntrend filter can be well-behaved in terms of mean estimation and change point\ndetection. The result is two-fold: for the mean estimation, an almost optimal\nconsistent rate is obtained; for the change point detection, the change in slope\ndirection can be recovered with high probability. In addition, we show that\nthe weak irrepresentable condition, a necessary condition for the LASSO model to be\nsign consistent (Zhao and Yu, 2006), is not necessary for consistent change\npoint detection. The performance of the L1 trend filter is evaluated by\nfinite-sample simulation studies.\n"} {"abstract": " It has recently been shown that superconductivity in magic-angle twisted\ntrilayer graphene survives up to in-plane magnetic fields that are well in excess\nof the Pauli limit, and much stronger than the in-plane critical magnetic\nfields of magic-angle twisted bilayer graphene. The difference is surprising\nbecause twisted bilayers and trilayers both support the magic-angle flat bands\nthought to be the fountainhead of twisted graphene superconductivity. We show\nhere that the difference in critical magnetic fields can be traced to a\n$\\mathcal{C}_2 \\mathcal{M}_{h}$ symmetry in trilayers that survives in-plane\nmagnetic fields, and also relative displacements between top and bottom layers\nthat are not under experimental control at present. A gate electric field\nbreaks the $\\mathcal{C}_2 \\mathcal{M}_{h}$ symmetry and therefore limits the\nin-plane critical magnetic field.\n"} {"abstract": " We analyze possibilities of second-order quantifier elimination for formulae\ncontaining parameters -- constants or functions. For this, we use a constraint\nresolution calculus obtained from specializing the hierarchical superposition\ncalculus. If saturation terminates, we analyze possibilities of obtaining\nweakest constraints on parameters which guarantee satisfiability. If the\nsaturation does not terminate, we identify situations in which finite\nrepresentations of infinite saturated sets exist. We identify situations in\nwhich entailment between formulae expressed using second-order quantification\ncan be effectively checked.
We illustrate the ideas on a series of examples\nfrom wireless network research.\n"} {"abstract": " In the context of supervised learning of a function by a Neural Network (NN),\nwe claim and empirically justify that an NN yields better results when the\ndistribution of the data set focuses on regions where the function to learn is\nsteeper. We first translate this assumption into a mathematically workable form\nusing a Taylor expansion. Then, theoretical derivations allow us to construct a\nmethodology that we call Variance Based Samples Weighting (VBSW). VBSW uses the\nlocal variance of the labels to weight the training points. This methodology is\ngeneral, scalable, cost-effective, and significantly increases the performance\nof a large class of NNs for various classification and regression tasks on\nimage, text and multivariate data. We highlight its benefits with experiments\ninvolving NNs from shallow linear models to ResNet or BERT.\n"} {"abstract": " Multiple transition phenomena in the divalent Eu compound EuAl$_4$ with the\ntetragonal structure were investigated via the single-crystal time-of-flight\nneutron Laue technique. At 30.0 K, below a charge-density-wave (CDW) transition\ntemperature of $T_{\\rm CDW}$ = 140 K, superlattice peaks emerge near nuclear\nBragg peaks described by an ordering vector $q_{\\rm CDW}$=(0 0 ${\\delta}_c$)\nwith ${\\delta}_c{\\sim}$0.19. In contrast, magnetic peaks appear at $q_2 =\n({\\delta}_2 {\\delta}_2 0)$ with ${\\delta}_2$ = 0.085 in a magnetically ordered\nphase at 13.5 K below $T_{\\rm N1}$ = 15.4 K. By further cooling to below\n$T_{\\rm N3}$ = 12.2 K, the magnetic ordering vector changes to $q_1 =\n({\\delta}_1 0 0)$ with ${\\delta}_1$ = 0.17 at 11.5 K and slightly shifts to\n${\\delta}_1$ = 0.194 at 4.3 K. No distinct change in the magnetic Bragg peak\nwas detected at $T_{\\rm N2}$=13.2 K and $T_{\\rm N4}$=10.0 K. The structural\nmodulation below $T_{\\rm CDW}$ with $q_{\\rm CDW}$ is characterized by the\nabsence of the superlattice peak along the (0 0 $l$) axis. As a similar CDW\ntransition was observed in SrAl$_4$, the structural modulation with $q_{\\rm\nCDW}$ could be mainly ascribed to the displacement of Al ions within the\ntetragonal $ab$-plane. The complex magnetic transitions are in stark contrast to the\nsimple collinear magnetic structure in isovalent EuGa$_4$. This could stem from\nthe different electronic structures associated with the CDW transition in the two compounds.\n"} {"abstract": " We propose a gauged $B-L$ extension of the standard model (SM) where light\nneutrinos are of Dirac type by virtue of tiny Yukawa couplings with the SM\nHiggs. To achieve leptogenesis, we include additional heavy Majorana fermions\nwithout introducing any $B-L$ violation by two units. An additional scalar\ndoublet with appropriate $B-L$ charge can allow heavy fermion coupling with the\nSM leptons so that out-of-equilibrium decay of the former can lead to the\ngeneration of a lepton asymmetry. Due to the $B-L$ gauge interactions of the\ndecaying fermion, the criteria of successful Dirac leptogenesis can also\nconstrain the gauge sector couplings so as to keep the corresponding washout\nprocesses under control. The same $B-L$ gauge sector parameter space can also\nbe constrained from dark matter requirements if the latter is assumed to be an\nSM singlet particle with non-zero $B-L$ charge. The same $B-L$ gauge\ninteractions also lead to additional thermalised relativistic degrees of\nfreedom $\\Delta N_{\\rm eff}$ from light Dirac neutrinos which are tightly\nconstrained by Planck 2018 data.
While parameter space satisfying the\ncriteria of successful low-scale Dirac leptogenesis, dark matter and $\\Delta\nN_{\\rm eff}$ remains even after incorporating the latest collider bounds, all the\ncurrently allowed parameters can be probed by future measurements of $\\Delta\nN_{\\rm eff}$.\n"} {"abstract": " In two-dimensional loop models, the scaling properties of critical random\ncurves are encoded in the correlators of connectivity operators. In the dense\nO($n$) loop model, any such operator is naturally associated to a standard\nmodule of the periodic Temperley-Lieb algebra. We introduce a new family of\nrepresentations of this algebra, with connectivity states that have two marked\npoints, and argue that they define the fusion of two standard modules. We\nobtain their decomposition into standard modules for generic values of the\nparameters, which in turn yields the structure of the operator product\nexpansion of connectivity operators.\n"} {"abstract": " We seek to investigate the scalability of neuromorphic computing for computer\nvision, with the objective of replicating non-neuromorphic performance on\ncomputer vision tasks while reducing power consumption. We convert the deep\nArtificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network\n(SNN) architecture using the Nengo framework. Both rate-based and spike-based\nmodels are trained and optimized for benchmarking performance and power, using\na modified version of the ISBI 2D EM Segmentation dataset consisting of\nmicroscope images of cells. We propose a partitioning method to optimize\ninter-chip communication to improve speed and energy efficiency when deploying\nmulti-chip networks on the Loihi neuromorphic chip. We explore the advantages\nof regularizing firing rates of Loihi neurons for ANN-to-SNN conversion with\nminimum accuracy loss and optimized energy consumption. We propose a\npercentile-based regularization loss function to keep the spiking rate of each neuron\nwithin a desired range. The SNN is converted directly from the corresponding\nANN, and demonstrates semantic segmentation similar to the ANN using the same\nnumber of neurons and weights. However, the neuromorphic implementation on the\nIntel Loihi neuromorphic chip is over 2x more energy-efficient than\nconventional hardware (CPU, GPU) when running online (one image at a time).\nThese power improvements are achieved without sacrificing the task performance\naccuracy of the network, and when all weights (Loihi, CPU, and GPU networks)\nare quantized to 8 bits.\n"} {"abstract": " We put forward the concept of work extraction from thermal noise by\nphase-sensitive (homodyne) measurements of the noisy input followed by\n(outcome-dependent) unitary manipulations of the post-measured state. For\noptimized measurements, noise input with more than one quantum on average is\nshown to yield heat-to-work conversion with efficiency and power that grow with\nthe mean number of input quanta, detector efficiency and its inverse\ntemperature. This protocol is shown to be advantageous compared to common\nmodels of information and heat engines.\n"} {"abstract": " Different regimes of entanglement growth under measurement have been\ndemonstrated for quantum many-body systems, with an entangling phase for low\nmeasurement rates and a disentangling phase for high rates (quantum Zeno\neffect). Here we study entanglement growth in a disordered Bose-Fermi mixture\nwith the bosons playing the role of the effective self-induced measurement for\nthe fermions.
Due to the interplay between the disorder and a non-Abelian\nsymmetry, the model features an entanglement growth resonance when the\nboson-fermion interaction strength is varied. With the addition of a magnetic\nfield, the model acquires a dynamical symmetry leading to experimentally\nmeasurable long-time local oscillations. At the entanglement growth resonance,\nwe demonstrate the emergence of the cleanest oscillations. Furthermore, we show\nthat this resonance is distinct from both noise-enhanced transport and a\nstandard stochastic resonance. Our work paves the way for experimental\nrealizations of self-induced correlated phases in multi-species systems.\n"} {"abstract": " Saturn's E ring consists of micron-sized particles launched from Enceladus by\nthat moon's geological activity. A variety of small-scale structures in the\nE-ring's brightness have been attributed to tendrils of material recently\nlaunched from Enceladus. However, one of these features occurs at a location\nwhere Enceladus' gravitational perturbations should concentrate background\nE-ring particles into structures known as satellite wakes. While satellite\nwakes have been observed previously in ring material drifting past other moons,\nthese E-ring structures would be the first examples of wakes involving\nparticles following horseshoe orbits near Enceladus' orbit. The predicted\nintensity of these wake signatures is particularly sensitive to the fraction of\nE-ring particles on orbits with low eccentricities and semi-major axes just\noutside of Enceladus' orbit, and so detailed analyses of these and other\nsmall-scale E-ring features should place strong constraints on the orbital\nproperties and evolution of E-ring particles.\n"} {"abstract": " In \\cite{BH20} an elegant choice-free construction of a canonical extension\nof a boolean algebra $B$ was given as the boolean algebra of regular open\nsubsets of the Alexandroff topology on the poset of proper filters of $B$. We\nmake this construction point-free by replacing the Alexandroff space of proper\nfilters of $B$ with the free frame $\\mathcal{L}$ generated by the bounded\nmeet-semilattice of all filters of $B$ (ordered by reverse inclusion) and prove\nthat the booleanization of $\\mathcal{L}$ is a canonical extension of $B$. Our\nmain result generalizes this approach to the category\n$\\boldsymbol{\\mathit{ba}\\ell}$ of bounded archimedean $\\ell$-algebras, thus\nyielding a point-free construction of canonical extensions in\n$\\boldsymbol{\\mathit{ba}\\ell}$. We conclude by showing that the algebra of\nnormal functions on the Alexandroff space of proper archimedean $\\ell$-ideals\nof $A$ is a canonical extension of $A\\in\\boldsymbol{\\mathit{ba}\\ell}$, thus\nproviding a generalization of the result of \\cite{BH20} to\n$\\boldsymbol{\\mathit{ba}\\ell}$.\n"} {"abstract": " We report on the measurement of inclusive charmless semileptonic B decays $B\n\\to X_{u} \\ell \\nu$. The analysis makes use of hadronic tagging and is\nperformed on the full data set of the Belle experiment comprising 772 million\n$B\\bar{B}$ pairs. In the proceedings, the preliminary results of measurements\nof partial branching fractions and the CKM matrix element $|V_{ub}|$ are\npresented.\n"} {"abstract": " Superconductivity and magnetism are generally incompatible because of the\nopposing requirements on electron spin alignment. When combined, they produce a\nmultitude of fascinating phenomena, including unconventional superconductivity\nand topological superconductivity.
The emergence of two-dimensional (2D) layered\nsuperconducting and magnetic materials that can form nanoscale junctions with\natomically sharp interfaces presents an ideal laboratory to explore new\nphenomena from coexisting superconductivity and magnetic ordering. Here we\nreport tunneling spectroscopy under an in-plane magnetic field of\nsuperconductor-ferromagnet-superconductor (S/F/S) tunnel junctions that are\nmade of the 2D Ising superconductor NbSe2 and the ferromagnetic insulator CrBr3. We\nobserve nearly 100% tunneling anisotropic magnetoresistance (AMR), that is, a\ndifference in tunnel resistance upon changing the magnetization direction from\nout-of-plane to in-plane. The giant tunneling AMR is induced by\nsuperconductivity, in particular, as a result of interfacial magnetic exchange\ncoupling and spin-dependent quasiparticle scattering. We also observe an\nintriguing magnetic hysteresis effect in the superconducting gap energy and\nquasiparticle scattering rate, with a critical temperature that is 2 K below the\nsuperconducting transition temperature. Our study paves the way for exploring\nsuperconducting spintronics and unconventional superconductivity in van der\nWaals heterostructures.\n"} {"abstract": " This paper explores how different ideas of racial equity in machine learning,\nin justice settings in particular, can present trade-offs that are difficult to\nsolve computationally. Machine learning is often used in justice settings to\ncreate risk assessments, which are used to determine interventions, resources,\nand punitive actions. Overall aspects and performance of these machine\nlearning-based tools, such as distributions of scores, outcome rates by levels,\nand the frequency of false positives and true positives, can be problematic\nwhen examined by racial group. Models that produce different distributions of\nscores or produce a different relationship between level and outcome are\nproblematic when those scores and levels are directly linked to the restriction\nof individual liberty and to the broader context of racial inequity. While\ncomputation can help highlight these aspects, data and computation are unlikely\nto solve them. This paper explores where values and mission might have to fill\nthe spaces computation leaves.\n"} {"abstract": " Affect recognition based on subjects' facial expressions has been a topic of\nmajor research in the attempt to generate machines that can understand the way\nsubjects feel, act and react. In the past, due to the unavailability of large\namounts of data captured in real-life situations, research has mainly focused\non controlled environments. However, recently, social media and platforms have\nbeen widely used. Moreover, deep learning has emerged as a means to solve\nvisual analysis and recognition problems. This paper exploits these advances\nand presents significant contributions for affect analysis and recognition\nin-the-wild. Affect analysis and recognition can be seen as a dual knowledge\ngeneration problem, involving: i) creation of new, large and rich in-the-wild\ndatabases and ii) design and training of novel deep neural architectures that\nare able to analyse affect over these databases and to successfully generalise\ntheir performance on other datasets. The paper focuses on large in-the-wild\ndatabases, i.e., Aff-Wild and Aff-Wild2 and presents the design of two classes\nof deep neural networks trained with these databases.
The first class refers to\nuni-task affect recognition, focusing on prediction of the valence and arousal\ndimensional variables. The second class refers to estimation of all main\nbehavior tasks, i.e. valence-arousal prediction; categorical emotion\nclassification into seven basic facial expressions; facial Action Unit detection.\nA novel multi-task and holistic framework is presented which is able to jointly\nlearn and effectively generalize and perform affect recognition over all\nexisting in-the-wild databases. Large experimental studies illustrate the\nachieved performance improvement over the existing state-of-the-art in affect\nrecognition.\n"} {"abstract": " The Segerdahl process (Segerdahl (1955)), characterized by exponential claims\nand affine drift, has drawn a considerable amount of interest -- see, for\nexample, Tichy (1984) and Avram and Usabel (2008) -- due to its economic interest\n(it is the simplest risk process which takes into account the effect of\ninterest rates). It is also the simplest non-Levy, non-diffusion example of a\nspectrally negative Markov risk model. Note that for both spectrally negative\nLevy and diffusion processes, first passage theories which are based on\nidentifying two basic monotone harmonic functions/martingales have been\ndeveloped. This means that for these processes many control problems involving\ndividends, capital injections, etc., may be solved explicitly once the two\nbasic functions have been obtained. Furthermore, extensions to general\nspectrally negative Markov processes are possible (Landriault et al. (2017);\nAvram et al. (2018); Avram and Goreac (2019); Avram et al. (2019b)).\nUnfortunately, methods for computing the basic functions are still lacking\noutside the Levy and diffusion classes, with the notable exception of the\nSegerdahl process, for which the ruin probability has been computed (Paulsen\nand Gjessing (1997)). However, there is a striking lack of numerical results in\nboth cases. This motivated us to review several approaches, with the purpose of\ndrawing attention to connections between them, and to underlying open problems.\n"} {"abstract": " Ultralight bosons are possible fundamental building blocks of nature, and\npromising dark matter candidates. They can trigger superradiant instabilities\nof spinning black holes (BHs) and form long-lived \"bosonic clouds\" that slowly\ndissipate energy through the emission of gravitational waves (GWs). Previous\nstudies constrained ultralight bosons by searching for the stochastic\ngravitational wave background (SGWB) emitted by these sources in LIGO data,\nfocusing on the most unstable dipolar and quadrupolar modes. Here we focus on\nscalar bosons and extend previous works by: (i) studying in detail the impact\nof higher modes in the SGWB; (ii) exploring the potential of future proposed\nground-based GW detectors, such as the Neutron Star Extreme Matter Observatory,\nthe Einstein Telescope and Cosmic Explorer, to detect this SGWB. We find that\nhigher modes largely dominate the SGWB for bosons with masses $\\gtrsim\n10^{-12}$ eV, which is particularly relevant for future GW detectors.
By\nestimating the signal-to-noise ratio of this SGWB, due to both stellar-origin\nBHs and a hypothetical population of primordial BHs, we find that future\nground-based GW detectors could observe or constrain bosons in the mass range\n$\\sim [7\\times 10^{-14}, 2\\times 10^{-11}]$ eV and significantly improve on\ncurrent and future constraints imposed by LIGO and Virgo observations.\n"} {"abstract": " Army cadets obtain occupations through a centralized process. Three\nobjectives -- increasing retention, aligning talent, and enhancing trust --\nhave guided reforms to this process since 2006. West Point's mechanism for the\nClass of 2020 exacerbated challenges implementing Army policy aims. We\nformulate these desiderata as axioms and study their implications theoretically\nand with administrative data. We show that the Army's objectives not only\ndetermine an allocation mechanism, but also a specific priority policy, a\nuniqueness result that integrates mechanism and priority design. These results\nled to a re-design of the mechanism, now adopted at both West Point and ROTC.\n"} {"abstract": " We associate a certain tensor product lattice to any primitive integer\nlattice and ask about its typical shape. These lattices are related to the\ntangent bundle of Grassmannians and their study is motivated by Peyre's\nprogramme on \"freeness\" for rational points of bounded height on Fano\nvarieties.\n"} {"abstract": " Static (DC) and dynamic (AC, at 14 MHz and 8 GHz) magnetic susceptibilities\nof single crystals of a ferromagnetic superconductor,\n$\\textrm{EuFe}_{2}(\\textrm{As}_{1-x}\\textrm{P}_{x})_{2}$ (x = 0.23), were\nmeasured in the pristine state and after different doses of 2.5 MeV electron or 3.5\nMeV proton irradiation. The superconducting transition temperature, $T_{c}(H)$,\nshows an extraordinarily large decrease. It starts at\n$T_{c}(H=0)\\approx24\\:\\textrm{K}$ in the pristine sample for both AC and DC\nmeasurements, but moves to almost half of that value after a moderate irradiation\ndose. Our results suggest that in\n$\\textrm{EuFe}_{2}(\\textrm{As}_{1-x}\\textrm{P}_{x})_{2}$ superconductivity is\naffected by local-moment ferromagnetism mostly via the spontaneous internal\nmagnetic fields induced by the FM subsystem. Another mechanism is revealed upon\nirradiation, where magnetic defects created in the ordered $\\text{Eu}^{2+}$ lattice\nact as efficient pair breakers, leading to a significant $T_{c}$ reduction upon\nirradiation compared to other 122 compounds. On the other hand, the exchange\ninteractions seem to be weakly screened by the superconducting phase, leading to\na modest increase of $T_{m}$ (less than 1 K) after the irradiation drives\n$T_{c}$ to below $T_{m}$. The results suggest that the FM and SC phases coexist\nmicroscopically in the same volume.\n"} {"abstract": " We propose a new network architecture, the Fractal Pyramid Networks (PFNs),\nfor pixel-wise prediction tasks as an alternative to the widely used\nencoder-decoder structure. In the encoder-decoder structure, the input is\nprocessed by an encoding-decoding pipeline that tries to obtain a semantic\nlarge-channel feature. In contrast, our proposed PFNs hold multiple\ninformation-processing pathways and encode the information into multiple separate\nsmall-channel features. On the task of self-supervised monocular depth\nestimation, even without ImageNet pretraining, our models can match or\noutperform the state-of-the-art methods on the KITTI dataset with far fewer\nparameters.
Moreover, the visual quality of the prediction is significantly\nimproved. The experiment of semantic segmentation provides evidence that the\nPFNs can be applied to other pixel-wise prediction tasks, and demonstrates that\nour models can catch more global structure information.\n"} {"abstract": " Black-box quantum state preparation is a fundamental primitive in quantum\nalgorithms. Starting from Grover, a series of techniques have been devised to\nreduce the complexity. In this work, we propose to perform black-box state\npreparation using the technique of linear combination of unitaries (LCU). We\nprovide two algorithms based on a different structure of LCU. Our algorithms\nimprove upon the existing best results by reducing the required additional\nqubits and Toffoli gates to 2log(n) and n, respectively, for bit precision\nn. We demonstrate the algorithms using the IBM Quantum Experience cloud\nservices. The further reduced complexity of the present algorithms brings the\nblack-box quantum state preparation closer to reality.\n"} {"abstract": " Three-dimensional topological insulators (TIs) host helical Dirac surface\nstates at the interface with a trivial insulator. In quasi-one-dimensional TI\nnanoribbon structures the wave function of surface charges extends\nphase-coherently along the perimeter of the nanoribbon, resulting in a\nquantization of transverse surface modes. Furthermore, as the inherent\nspin-momentum locking results in a Berry phase offset of $\\pi$ of\nself-interfering charge carriers an energy gap within the surface state\ndispersion appears and all states become spin-degenerate. We investigate and\ncompare the magnetic field dependent surface state dispersion in selectively\ndeposited Bi$_2$Te$_3$ TI micro- and nanoribbon structures by analysing the\ngate voltage dependent magnetoconductance at cryogenic temperatures. While in\nwide microribbon devices the field effect mainly changes the amount of bulk\ncharges close to the top surface we identify coherent transverse surface states\nalong the perimeter of the nanoribbon devices responding to a change in top\ngate potential. We quantify the energetic spacing in between these quantized\ntransverse subbands by using an electrostatic model that treats an initial\ndifference in charge carrier densities on the top and bottom surface as well as\nremaining bulk charges. In the gate voltage dependent transconductance we find\noscillations that change their relative phase by $\\pi$ at half-integer values\nof the magnetic flux quantum applied coaxial to the nanoribbon, which is a\nsignature for a magnetic flux dependent topological phase transition in narrow,\nselectively deposited TI nanoribbon devices.\n"} {"abstract": " Interference between light waves is one of the best-known phenomena in\nphysics and is widely used in modern optics, ranging from precise detection\nat the nanoscale to gravitational-wave observation. Akin to light, both\nclassical and quantum interferences between surface plasmon polaritons (SPPs)\nhave been demonstrated. However, the ability to actively control SPP interference\non a subcycle timescale (usually less than several femtoseconds in the visible\nrange) is still missing, which hinders the ultimate manipulation of SPP\ninterference on an ultrafast time scale. In this paper, the interference between\nSPPs launched by a hole dimer, which was excited by a grazing-incidence free\nelectron beam without direct contact, was manipulated through both propagation\nand initial phase difference control.
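For a rough feel of these two control knobs, consider a toy two-source model (our illustration, not the paper's simulation; the damping length and all parameter values are hypothetical):

```python
import numpy as np

# Toy model: two holes launch SPPs of unit amplitude; the detected signal
# oscillates with the propagation phase k*d (set by the hole separation d)
# plus an initial phase difference dphi (set by the excitation conditions).
def spp_interference(wavelength_spp, d, dphi, damping_length=10e-6):
    k = 2 * np.pi / wavelength_spp
    a = np.exp(-d / (2 * damping_length))   # hypothetical propagation loss
    return 1 + a**2 + 2 * a * np.cos(k * d + dphi)

wavelengths = np.linspace(500e-9, 800e-9, 301)
sparse = spp_interference(wavelengths, d=1e-6, dphi=0.0)
dense = spp_interference(wavelengths, d=3e-6, dphi=0.0)      # more orders in band
flipped = spp_interference(wavelengths, d=3e-6, dphi=np.pi)  # peaks -> valleys
```

In this caricature, increasing d packs more interference orders into a fixed wavelength window, while sweeping dphi at a fixed wavelength turns peaks into valleys, mirroring the two mechanisms described next.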
Particularly, using cathodoluminescence\nspectroscopy, the appearance of higher interference orders was observed\nthrough propagation phase control by increasing separation distances of the\ndimer. Meanwhile, the peak-valley-peak evolution at a certain wavelength\nthrough changing the accelerating voltages was observed, which originates from\nthe initial phase difference control of hole-launched SPPs. In particular, the\ntime resolution of this kind of control is shown to be in the ultrafast\nattosecond (as) region. Our work suggests that fast electron beams can be an\nefficient tool to control polariton interference on a subcycle temporal scale,\nwhich can be potentially used in ultrafast optical processing or sensing.\n"} {"abstract": " We investigate Bayesian predictive distributions for Wishart distributions\nunder the Kullback--Leibler divergence. We consider a recently introduced class\nof prior distributions, called the family of enriched standard conjugate prior\ndistributions, and compare the Bayesian predictive distributions based on these\nprior distributions. We study the performance of the Bayesian predictive\ndistribution based on the reference prior distribution in the family. We show\nthat there exists a prior distribution in the family that dominates the\nreference prior distribution.\n"} {"abstract": " The construction of an ontology of scientific knowledge objects, presented\nhere, is part of the development of an approach oriented towards the\nvisualization of scientific knowledge. It is motivated by the fact that the\nconcepts that are used to organize scientific knowledge (theorem, law,\nexperiment, proof, etc.) appear in existing ontologies but that none of these\nontologies is centered on this topic and presents them in a simple and easily\nunderstandable organization. This ontology has been constructed by 1) selecting\nconcepts that appear in high-level ontologies or in ontologies of knowledge\nobjects of specific fields and 2) interviewing scientists in different\nfields. We have aligned this ontology with some of the sources used, which has\nallowed us to verify its consistency with respect to them. The validation of\nthe ontology consists in using it to formalize knowledge from various sources,\nwhich we have begun to do in the field of physics.\n"} {"abstract": " We consider an input-constrained differential-drive robot with actuator\ndynamics. For this system, we establish asymptotic stability of the origin on\narbitrary compact, convex sets using Model Predictive Control (MPC) without\nstabilizing terminal conditions despite the presence of state constraints and\nactuator dynamics. We note that the problem without those two additional\ningredients was essentially solved beforehand, despite the fact that the\nlinearization is not stabilizable. We propose an approach successfully solving\nthe task at hand by combining the theory of barriers to characterize the\nviability kernel and an MPC framework based on so-called cost controllability.\nMoreover, we present a numerical case study to derive quantitative bounds on\nthe required length of the prediction horizon. To this end, we investigate the\nboundary of the viability kernel and a neighbourhood of the origin, i.e. the\nmost interesting areas.\n"} {"abstract": " Anomaly detection is a crucial and challenging subject that has been studied\nwithin diverse research areas.
In this work, we explore the task of log anomaly\ndetection (especially computer system logs and user behavior logs) by analyzing\nlogs' sequential information. We propose LAMA, a multi-head attention-based\nsequential model to process log streams as template activity (event) sequences.\nA next event prediction task is applied to train the model for anomaly\ndetection. Extensive empirical studies demonstrate that our new model\noutperforms existing log anomaly detection methods, including statistical and\ndeep learning methodologies, validating the effectiveness of our proposed\nmethod in learning sequence patterns of log data.\n"} {"abstract": " In this paper, we consider fully connected feed-forward deep neural networks\nwhere weights and biases are independent and identically distributed according\nto Gaussian distributions. Extending previous results (Matthews et al.,\n2018a;b; Yang, 2019) we adopt a function-space perspective, i.e. we look at\nneural networks as infinite-dimensional random elements on the input space\n$\\mathbb{R}^I$. Under suitable assumptions on the activation function we show\nthat: i) a network defines a continuous Gaussian process on the input space\n$\\mathbb{R}^I$; ii) a network with re-scaled weights converges weakly to a\ncontinuous Gaussian process in the large-width limit; iii) the limiting\nGaussian process has almost surely locally $\\gamma$-H\\"older continuous paths,\nfor $0 < \\gamma <1$. Our results contribute to recent theoretical studies on\nthe interplay between infinitely wide deep neural networks and Gaussian\nprocesses by establishing weak convergence in function-space with respect to a\nstronger metric.\n"} {"abstract": " In this article, we wish to establish some first order differential\nsubordination relations for certain Carath\\'{e}odory functions with nice\ngeometrical properties. Moreover, several implications are determined so that\nthe normalized analytic function belongs to various subclasses of starlike\nfunctions.\n"} {"abstract": " We study the Du Bois complex of a hypersurface $Z$ in a smooth complex\nalgebraic variety in terms of the minimal exponent $\\widetilde{\\alpha}(Z)$ and\ngive various applications. We show that if $\\widetilde{\\alpha}(Z)\\geq p+1$,\nthen the canonical morphism $\\Omega_Z^p\\to \\underline{\\Omega}_Z^p$ is an\nisomorphism. On the other hand, if $Z$ is singular and\n$\\widetilde{\\alpha}(Z)>p\\geq 2$, then ${\\mathcal\nH}^{p-1}(\\underline{\\Omega}_Z^{n-p})\\neq 0$.\n"} {"abstract": " We determine the asymptotic spreading speed of the solutions of a Fisher-KPP\nreaction-diffusion equation, starting from compactly supported initial data,\nwhen the diffusion coefficient is a fixed bounded monotone profile that is\nshifted at a given forcing speed and satisfies a general uniform ellipticity\ncondition. Depending on the monotonicity of the profile, we are able to\ncharacterize this spreading speed as a function of the forcing speed and the\ntwo linear spreading speeds associated with the asymptotic problems. Most\nnotably, when the profile of the diffusion coefficient is increasing, we show\nthat there is an intermediate range for the forcing speed where spreading\nactually occurs at a speed which is larger than the linear speed associated\nwith the homogeneous state around the position of the front.
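For concreteness, a minimal form of the problem just described might read (our notation; the authors' precise hypotheses on the reaction term and the diffusion profile may differ):

$$\partial_t u = \partial_x\big(a(x - ct)\,\partial_x u\big) + f(u), \qquad u(0,\cdot) \text{ compactly supported},$$

with $f$ of Fisher-KPP type (e.g. $f(u) = u(1-u)$) and $a$ a bounded monotone profile satisfying $0 < a_- \le a \le a_+$, shifted at the forcing speed $c$; under this divergence-form convention the two linear spreading speeds of the asymptotic problems are $c_\pm = 2\sqrt{a_\pm f'(0)}$.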
We complement our\nstudy with the construction of strictly monotone traveling front solutions with\nstrong exponential decay near the unstable state when the profile of the\ndiffusion coefficient is decreasing and in the regime where the forcing speed\nis precisely the selected spreading speed.\n"} {"abstract": " In this paper, we study the rainbow Erd\\H{o}s-Rothschild problem with respect\nto 3-term arithmetic progressions. We obtain the asymptotic number of\n$r$-colorings of $[n]$ without rainbow 3-term arithmetic progressions, and we\nshow that the typical colorings with this property are 2-colorings. We also\nprove that $[n]$ attains the maximum number of rainbow 3-term arithmetic\nprogression-free $r$-colorings among all subsets of $[n]$. Moreover, the exact\nnumber of rainbow 3-term arithmetic progression-free $r$-colorings of\n$\\mathbb{Z}_p$ is obtained, where $p$ is any prime and $\\mathbb{Z}_p$ is the\ncyclic group of order $p$.\n"} {"abstract": " In recent times, the Internet has been plagued by a tremendous amount of\nmisinformation. Online markets such as Amazon are also not free from\nmisinformation. In this work, we study the misinformation propagated to\nconsumers in the form of Amazon reviews. There exists a vast underground\nmarket where reviews by real Amazon users are purchased and sold. While such a\npractice violates Amazon's terms of service, we observe that there exists a\ncomplex network consisting of thousands of sellers and agents, who provide a\nrebate to consumers for leaving positive reviews for over $5000$ products. Based\non interviews with members involved in the reviews market, we understand the\nworking of this market, and the tactics used to avoid detection by Amazon. We\nalso present a set of recommendations of features that Amazon and similar\nonline markets can take into consideration to detect such reviews.\n"} {"abstract": " Spatially separated bodies in relative motion through vacuum experience a\ntiny friction force known as quantum friction. This force has so far eluded\nexperimental detection due to its small magnitude and short range. Quantitative\ndetails revealing traces of the quantum friction in the degradation of the\nquantum coherence of a particle are presented. Environmentally induced\ndecoherence for a particle sliding over a dielectric sheet can be decomposed\ninto contributions of different signatures: one solely induced by the\nelectromagnetic vacuum in presence of the dielectric and another induced by\nmotion. As the geometric phase has been proved to be a fruitful avenue of\ninvestigation to infer features of quantum systems, herein we propose to\nuse the accumulated geometric phase acquired by a particle as a quantum\nfriction sensor. Furthermore, an innovative experiment designed to track traces\nof quantum friction by measuring the velocity dependence of corrections to the\ngeometric phase and coherence is proposed. The experimentally viable scheme\npresented can spark renewed optimism for the detection of non-contact friction,\nwith the hope that this non-equilibrium phenomenon can be readily measured\nsoon.\n"} {"abstract": " Multicore fibers can be used for Radio over Fiber transmission of mm-wave\nsignals for phased array antennas in 5G networks. The inter-core skew of these\nfibers distorts the radiation pattern.
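The distortion mechanism can be illustrated with a generic uniform linear array model (our sketch with hypothetical values, not the system of the paper): at mm-wave carriers, picosecond-scale inter-core delay errors already translate into large phase errors across the array, which the method proposed next aims to avoid.

```python
import numpy as np

# Array factor of an N-element linear array fed through fibers with
# per-element delay errors (skew). At f = 28 GHz, a 20 ps skew alone
# contributes 2*pi*28e9*20e-12 ~ 3.5 rad of phase error on an element.
def array_factor(theta, skew_s, f=28e9, spacing_wl=0.5):
    n = np.arange(len(skew_s))
    geo = 2 * np.pi * spacing_wl * n[:, None] * np.sin(theta)[None, :]
    err = 2 * np.pi * f * np.asarray(skew_s)[:, None]
    return np.abs(np.exp(1j * (geo + err)).sum(axis=0))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
ideal = array_factor(theta, skew_s=np.zeros(8))
rng = np.random.default_rng(0)
skewed = array_factor(theta, skew_s=rng.uniform(0, 20e-12, 8))  # up to 20 ps
# 'skewed' shows a shifted main lobe and raised sidelobes relative to 'ideal'.
```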
We propose an efficient method to\ncompensate the differential delays, without full equalization of the\ntransmission path lengths, reducing the power loss and complexity.\n"} {"abstract": " The interplay between security and reliability is poorly understood. This\npaper shows how triple modular redundancy affects a side-channel attack (SCA).\nOur counterintuitive findings show that modular redundancy can increase SCA\nresiliency.\n"} {"abstract": " An intelligent reflecting surface (IRS) can greatly improve the channel\nquality over a frequency-flat channel, if it is configured to reflect the\nincident signal as a beam towards the receiver. However, the fundamental\nlimitations of the IRS technology become apparent over practical\nfrequency-selective channels, where the same configuration must be used over\nthe entire bandwidth. In this paper, we consider a wideband orthogonal\nfrequency-division multiplexing (OFDM) system that is supported by a fairly\nrealistic IRS setup with two unbalanced states per element and also mutual\ncoupling. We describe the simulation setup considered in the IEEE Signal\nProcessing Cup 2021, propose a low-complexity solution for channel estimation\nand IRS configuration, and evaluate it on that setup.\n"} {"abstract": " In this paper, we propose a novel DNN watermarking method that utilizes a\nlearnable image transformation method with a secret key. The proposed method\nembeds a watermark pattern in a model by using learnable transformed images and\nallows us to remotely verify the ownership of the model. As a result, it is\npiracy-resistant, so the original watermark cannot be overwritten by a pirated\nwatermark, and adding a new watermark decreases the model accuracy, unlike most\nof the existing DNN watermarking methods. In addition, it does not require a\nspecial pre-defined training set or trigger set. We empirically evaluated the\nproposed method on the CIFAR-10 dataset. The results show that it was resilient\nagainst fine-tuning and pruning attacks while maintaining a high\nwatermark-detection accuracy.\n"} {"abstract": " For a typical Scene Graph Generation (SGG) method, there is often a large gap\nbetween the performance of the predicates' head classes and tail classes. This\nphenomenon is mainly caused by the semantic overlap between different\npredicates as well as the long-tailed data distribution. In this paper, a\nPredicate Correlation Learning (PCL) method for SGG is proposed to address the\nabove two problems by taking the correlation between predicates into\nconsideration. To describe the semantic overlap between strongly correlated\npredicate classes, a Predicate Correlation Matrix (PCM) is defined to quantify\nthe relationship between predicate pairs, which is dynamically updated to\nremove the matrix's long-tailed bias. In addition, PCM is integrated into a\nPredicate Correlation Loss function ($L_{PC}$) to reduce discouraging gradients\nof unannotated classes. The proposed method is evaluated on the Visual Genome\nbenchmark, where the performance of the tail classes is significantly improved\nwhen built on the existing methods.\n"} {"abstract": " Developers in data science and other domains frequently use computational\nnotebooks to create exploratory analyses and prototype models. However, they\noften struggle to incorporate existing software engineering tooling into these\nnotebook-based workflows, leading to fragile development processes.
We\nintroduce Assembl\\'{e}, a new development environment for collaborative data\nscience projects, in which promising code fragments of data science pipelines\ncan be contributed as pull requests to an upstream repository entirely from\nwithin JupyterLab, abstracting away low-level version control tool usage. We\ndescribe the design and implementation of Assembl\\'{e} and report on a user\nstudy of 23 data scientists.\n"} {"abstract": " Predicting the permeability of porous media in saturated and partially\nsaturated conditions is of crucial importance in many geo-engineering areas,\nfrom water resources to vadose zone hydrology or contaminant transport\npredictions. Many models have been proposed in the literature to estimate the\npermeability from properties of the porous media such as porosity, grain size\nor pore size. In this study, we develop a model of the permeability for porous\nmedia saturated by one or two fluid phases with all physically-based parameters\nusing a fractal upscaling technique. The model is related to microstructural\nproperties of porous media such as fractal dimension for pore space, fractal\ndimension for tortuosity, porosity, maximum radius, ratio of minimum pore\nradius to maximum pore radius, water saturation and irreducible water\nsaturation. The model is favorably compared to existing and widely used models\nfrom the literature. Then, by comparing with published experimental data for both\nunconsolidated and consolidated samples, we show that the proposed model\nestimates the permeability from the medium properties very well.\n"} {"abstract": " This paper is one in a series that investigates topological measures on\nlocally compact spaces. A topological measure is a set function which is\nfinitely additive on the collection of open and compact sets, inner regular on\nopen sets, and outer regular on closed sets. We examine semisolid sets and give\na way of constructing topological measures from solid-set functions on locally\ncompact, connected, locally connected spaces. For compact spaces our approach\nproduces a simpler method than the current one. We give examples of finite and\ninfinite topological measures on locally compact spaces and present an easy way\nto generate topological measures on spaces whose one-point compactification has\ngenus 0. Results of this paper are necessary for various methods for\nconstructing topological measures, give additional properties of topological\nmeasures, and provide a tool for determining whether two topological measures\nor quasi-linear functionals are the same.\n"} {"abstract": " We propose a novel neural model compression strategy combining data\naugmentation, knowledge transfer, pruning, and quantization for device-robust\nacoustic scene classification (ASC). Specifically, we tackle the ASC task in a\nlow-resource environment leveraging a recently proposed advanced neural network\npruning mechanism, namely the Lottery Ticket Hypothesis (LTH), to find a\nsub-network neural model associated with a small number of non-zero model\nparameters. The effectiveness of LTH for low-complexity acoustic modeling is\nassessed by investigating various data augmentation and compression schemes,\nand we report an efficient joint framework for low-complexity multi-device ASC,\ncalled \\emph{Acoustic Lottery}. Acoustic Lottery could compress an ASC model up\nto $1/10^{4}$ and attain a superior performance (validation accuracy of 79.4%\nand Log loss of 0.64) compared to its uncompressed seed model.
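For orientation, a minimal sketch of LTH-style iterative magnitude pruning is given below (a generic PyTorch recipe, not the authors' exact pipeline; `train_fn` is a placeholder for a full training loop and the schedule is illustrative):

```python
import copy
import torch

def lottery_ticket(model, train_fn, rounds=5, prune_frac=0.2):
    # Train -> prune the smallest surviving weights -> rewind to the initial
    # weights -> repeat; the surviving sparse mask is the "winning ticket".
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                       # user-supplied training
        for n, p in model.named_parameters():
            alive = p.detach().abs()[masks[n] > 0]   # surviving weights, 1-D
            if alive.numel() == 0:
                continue
            k = max(1, int(prune_frac * alive.numel()))
            threshold = alive.kthvalue(k).values     # k-th smallest survivor
            masks[n][p.detach().abs() <= threshold] = 0.0
        model.load_state_dict(copy.deepcopy(init_state))  # rewind
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.mul_(masks[n])                     # zero out pruned weights
    return masks
```

Note that the $1/10^{4}$ compression quoted above also relies on the quantization, knowledge-transfer and augmentation components of the joint framework, not on pruning alone.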
All results\nreported in this work are based on a joint effort of four groups, namely\nGT-USTC-UKE-Tencent, aiming to address the \"Low-Complexity Acoustic Scene\nClassification (ASC) with Multiple Devices\" problem in the DCASE 2021 Challenge Task\n1a.\n"} {"abstract": " Models of the geomagnetic field rely on magnetic data of high spatial and\ntemporal resolution. The magnetic data from low-Earth orbit satellites of\ndedicated magnetic survey missions such as CHAMP and Swarm play a key role in\nthe construction of such models. Unfortunately, there are no magnetic data from\nsuch satellites after the end of CHAMP in 2010 and before the launch of Swarm\nin late 2013. This limits our ability to recover signals on timescales of 3\nyears and less during this gap period. The magnetic data from platform\nmagnetometers carried by satellites for navigational purposes may help address\nthis data gap, provided that they are carefully calibrated. Earlier studies have\ndemonstrated that platform magnetometer data can be calibrated using a fixed\nreference field model. However, this approach can lead to biased calibration\nparameters. An alternative has been developed in the form of a co-estimation\nscheme which consists of simultaneously estimating both the calibration\nparameters and a model of the internal geomagnetic field. Here, we develop a\nscheme, based on the CHAOS framework, that involves the co-estimation of a\ngeomagnetic field model along with calibration parameters of platform\nmagnetometers. Using our implementation, we are able to derive a geomagnetic\nfield model from 2008 to 2018 with satellite magnetic data from CHAMP and Swarm,\nsecular variation data from ground observatories, and platform magnetometer\ndata from CryoSat-2 and GRACE. Through experiments, we explore correlations\nbetween the estimates of the geomagnetic field and the calibration parameters,\nand suggest how these may be avoided. We find that platform magnetometer data\nprovide additional information on the secular acceleration, especially in the\nPacific during the gap period. This study adds to the evidence that it is\nbeneficial to use platform magnetometer data in geomagnetic field modeling.\n"} {"abstract": " We generalize the continuous observation privacy setting from Dwork et al.\n'10 and Chan et al. '11 by allowing each event in a stream to be a subset of\nsome (possibly unknown) universe of items. We design differentially private\n(DP) algorithms for histograms in several settings, including top-$k$\nselection, with privacy loss that scales with polylog$(T)$, where $T$ is the\nmaximum length of the input stream. We present a meta-algorithm that can use\nexisting one-shot top-$k$ DP algorithms as a subroutine to continuously release\nprivate histograms from a stream. Further, we present more practical DP\nalgorithms for two settings: 1) continuously releasing the top-$k$ counts from\na histogram over a known domain when an event can consist of an arbitrary\nnumber of items, and 2) continuously releasing histograms over an unknown\ndomain when an event has a limited number of items.\n"} {"abstract": " Electronic instabilities in transition metal compounds often spontaneously\nform orbital molecules, which consist of orbital-coupled metal ions at low\ntemperature. Recent local structural studies utilizing the pair distribution\nfunction revealed that preformed orbital molecules appear disordered even in\nthe high-temperature paramagnetic phase.
However, it is unclear whether\npreformed orbital molecules are dynamic or static. Here, we provide clear\nexperimental evidence of the slow dynamics of disordered orbital molecules\nrealized in the high-temperature paramagnetic phase of LiVS2, which exhibits\nvanadium trimerization upon cooling below 314 K. Unexpectedly, the preformed\norbital molecules appear as a disordered zigzag chain that fluctuates in both\ntime and space under electron irradiation. Our findings should advance studies\nof soft-matter physics realized in an inorganic material through disordered\norbital molecules.\n"} {"abstract": " In the classical survey (Chapter 16.2, {\\it Mathematics in industrial\nproblems}, Vol. 24, Springer-Verlag, New York, 1989), A. Friedman proposed an\nopen problem on the collision of two incompressible jets emerging from two\naxially symmetric nozzles. In this paper, we are concerned with the mathematical\ntheory of this collision problem, and establish the well-posedness theory on\nhydrodynamic impinging outgoing jets issuing from two coaxial axially symmetric\nnozzles. More precisely, we show that for any given mass fluxes $M_1>0$ and\n$M_2<0$ in the two nozzles, respectively, there exists an incompressible,\ninviscid impinging outgoing jet with contact discontinuity, which issues from\ntwo given semi-infinitely long axially symmetric nozzles and extends to\ninfinity. Moreover, the constant pressure free stream surfaces of the impinging\njet initiate smoothly from the mouths of the two nozzles and shrink to some\nasymptotic conical surface. There exists a smooth surface separating the two\nincompressible fluids and the contact discontinuity occurs on the surface.\nFurthermore, we show that there is no stagnation point in the flow field and\nits closure, except one point on the symmetry axis. Some asymptotic behavior\nof the impinging jet upstream and downstream, and geometric properties of the\nfree stream surfaces, are also obtained. The main results in this paper solve\nthe open problem on the collision of two incompressible axially symmetric jets\nin [24].\n"} {"abstract": " The existence of multiple datasets for sarcasm detection prompts us to apply\ntransfer learning to exploit their commonality. The adversarial neural transfer\n(ANT) framework utilizes multiple loss terms that encourage the source-domain\nand the target-domain feature distributions to be similar while optimizing for\ndomain-specific performance. However, these objectives may be in conflict,\nwhich can lead to optimization difficulties and sometimes diminished transfer.\nWe propose a generalized latent optimization strategy that allows different\nlosses to accommodate each other and improves training dynamics. The proposed\nmethod outperforms transfer learning and meta-learning baselines. In\nparticular, we achieve 10.02% absolute performance gain over the previous state\nof the art on the iSarcasm dataset.\n"} {"abstract": " This is the third series of the lab manuals for virtual teaching of\nintroductory physics classes. This covers fluids, waves, thermodynamics,\noptics, interference, photoelectric effect, atomic spectra, and radiation\nconcepts. A few of these labs can be used within Physics I and a few other labs\nwithin Physics II depending on the syllabi of Physics I and II classes. Virtual\nexperiments in this lab manual and our previous Physics I (arXiv.2012.09151)\nand Physics II (arXiv.2012.13278) lab manuals were designed for 2.45-hour-long\nlab classes (algebra-based and calculus-based).
However, all the virtual labs\nin these three series can be easily simplified to align with conceptual-type or\nshorter physics lab classes as desired. All the virtual experiments were\nbased on open education resource (OER) type simulations. Virtual experiments\nwere designed to simulate in-person physical laboratory experiments. Student\nlearning outcomes (understand, apply, analyze and evaluate) were studied through\ndetailed lab reports for each experiment and an end-of-semester written exam\nbased on the experiments. Special emphasis was given to studying the development\nof students' computational data analysis skills.\n"} {"abstract": " A pearl's distinguished beauty and toughness are attributable to the periodic\nstacking of aragonite tablets known as nacre. Nacre has naturally occurring\nmesoscale periodicity that remarkably arises in the absence of discrete\ntranslational symmetry. Gleaning the inspiring biomineral design of a pearl\nrequires quantifying its structural coherence and understanding the stochastic\nprocesses that influence formation. By characterizing the entire structure of\npearls (~3 mm) in cross-section at high resolution, we show nacre has\nmedium-range mesoscale periodicity. Self-correcting growth mechanisms actively\nremedy disorder and topological defects of the tablets and act as a\ncountervailing process to long-range disorder. Nacre has a correlation length\nof roughly 16 tablets (~5.5 um) despite persistent fluctuations and topological\ndefects. For longer distances (> 25 tablets, ~8.5 um), the frequency spectrum\nof nacre tablets follows f^(-1.5) behavior suggesting growth is coupled to\nexternal stochastic processes -- a universality found across disparate natural\nphenomena which now includes pearls.\n"} {"abstract": " Let $F$ be a field of characteristic $p$ and let $\\Omega^n(F)$ be the\n$F$-vector space of $n$-differential forms. In this work, we will study the\nannihilator of differential forms, give specific descriptions for special cases\nand show a connection between these annihilators and the kernels of the\nrestriction map $\\Omega^n(F) \\to \\Omega^n(E)$ for purely inseparable field\nextensions $E/F$.\n"} {"abstract": " Despite abundant negotiation strategies in the literature, the complexity of\nautomated negotiation forbids a single strategy from being dominant against all\nothers in different negotiation scenarios. To overcome this, one approach is to\nuse a mixture of experts, but at the same time, one problem of this method is the\nselection of experts, as this approach is limited by the competency of the\nexperts selected. Another problem with most negotiation strategies is their\nincapability of adapting to dynamic variation of the opponent's behaviour\nwithin a single negotiation session, resulting in poor performance. This work\nfocuses on both solving the problem of expert selection and adapting to the\nopponent's behaviour, with our Autonomous Negotiating Agent Framework. This\nframework allows real-time classification of the opponent's behaviour and provides\na mechanism to select, switch or combine strategies within a single negotiation\nsession. Additionally, our framework has a reviewer component which enables\nself-enhancement capability by deciding to include new strategies or replace\nold ones with better strategies periodically. We demonstrate an instance of our\nframework by implementing maximum entropy reinforcement learning based\nstrategies with a deep learning based opponent classifier.
Finally, we evaluate\nthe performance of our agent against state-of-the-art negotiators under varied\nnegotiation scenarios.\n"} {"abstract": " Software Transactional Memory (STM) algorithms provide programmers with a\nsynchronisation mechanism for concurrent access to shared variables. Basically,\nprogrammers can specify transactions (reading from and writing to shared state)\nwhich execute \"seemingly\" atomically. This property is captured in a correctness\ncriterion called opacity. For model checking opacity of an STM algorithm, we --\nin principle -- need to check opacity for all possible combinations of\ntransactions writing to and reading from potentially unboundedly many\nvariables.\nTo still apply automatic model checking techniques to opacity checking, a\nso-called small model theorem has been proven which states that model checking on\ntwo variables and two transactions is sufficient for correctness verification\nof STMs. In this paper, we take a fresh look at this small model theorem and\ninvestigate its applicability to opacity checking of STM algorithms.\n"} {"abstract": " The electronic nematic phase emerging with spontaneous rotation symmetry\nbreaking is a central issue of modern condensed matter physics. In particular,\nvarious nematic phases in iron-based superconductors and high-$T_{\\rm c}$\ncuprate superconductors have been extensively studied recently. Electric quadrupole\nmoments (EQMs) are one of the order parameters characterizing these nematic\nphases in a unified way, and elucidating EQMs is a key to understanding these\nnematic phases. However, the quantum-mechanical formulation of the EQMs in\ncrystals is a nontrivial issue because the position operators are non-periodic\nand unbound. Recently, the EQMs have been formulated by local thermodynamics,\nand such {\\it thermodynamic EQMs} may be used to characterize the fourfold\nrotation symmetry breaking in materials. In this paper, we calculate the\nthermodynamic EQMs in iron-based superconductors LaFeAsO and FeSe as well as a\ncuprate superconductor La$_2$CuO$_4$ by a first-principles calculation. We show\nthat owing to the orbital degeneracy the EQMs in iron-based superconductors are\nmainly determined by the geometric properties of wave functions. This result is\nin sharp contrast to the cuprate superconductor, in which the EQMs are\ndominated by distortion of the Fermi surface.\n"} {"abstract": " Escape from a potential well through an index-1 saddle can be widely found in\nsome important physical systems. Knowing the criteria and phase space geometry\nthat govern escape events plays an important role in making use of such\nphenomena, particularly when realistic frictional or dissipative forces are\npresent. We aim to extend the study of the escape dynamics around the saddle from\ntwo degrees of freedom to three degrees of freedom, presenting both a\nmethodology and phase space structures. Both the ideal conservative system and\na perturbed, dissipative system are considered. We define the five-dimensional\ntransition region, $\\mathcal{T}_h$, as the set of initial conditions of a given\ninitial energy $h$ for which the trajectories will escape from one side of the\nsaddle to another. Invariant manifold arguments demonstrate that in the\nsix-dimensional phase space, the boundary of the transition region, $\\partial\n\\mathcal{T}_h$, is topologically a four-dimensional hyper-cylinder in the\nconservative system, and a four-dimensional hyper-sphere in the dissipative\nsystem.
The transition region $\\mathcal{T}_h$ can be constructed by a solid\nthree-dimensional ellipsoid (solid three-dimensional cylinder) in the\nthree-dimensional configuration space, where at each point, there is a cone of\nvelocity -- the velocity directions leading to transition are given by cones,\nwith velocity magnitude given by the initial energy and the direction by two\nspherical angles with given limits. To illustrate our analysis, we consider an\nexample system which has two potential minima connected by an index-1 saddle.\n"} {"abstract": " Scale is often seen as a given, disturbing factor in many vision tasks.\nTreating it as such is one of the reasons why we need more data during learning. In\nrecent work scale equivariance was added to convolutional neural networks. It\nwas shown to be effective for a range of tasks. We aim for accurate\nscale-equivariant convolutional neural networks (SE-CNNs) applicable to\nproblems where high granularity of scale and small kernel sizes are required.\nCurrent SE-CNNs rely on weight sharing and kernel rescaling, the latter of\nwhich is accurate for integer scales only. To reach accurate scale\nequivariance, we derive general constraints under which scale-convolution\nremains equivariant to discrete rescaling. We find the exact solution for all\ncases where it exists, and compute the approximation for the rest. The discrete\nscale-convolution pays off, as demonstrated in a new state-of-the-art\nclassification on MNIST-scale and on STL-10 in the supervised learning setting.\nWith the same SE scheme, we also improve the computational effort of a\nscale-equivariant Siamese tracker on OTB-13.\n"} {"abstract": " Crowdsourcing is widely used to create data for common natural language\nunderstanding tasks. Despite the importance of these datasets for measuring and\nrefining model understanding of language, there has been little focus on the\ncrowdsourcing methods used for collecting the datasets. In this paper, we\ncompare the efficacy of interventions that have been proposed in prior work as\nways of improving data quality. We use multiple-choice question answering as a\ntestbed and run a randomized trial by assigning crowdworkers to write questions\nunder one of four different data collection protocols. We find that asking\nworkers to write explanations for their examples is an ineffective stand-alone\nstrategy for boosting NLU example difficulty. However, we find that training\ncrowdworkers, and then using an iterative process of collecting data, sending\nfeedback, and qualifying workers based on expert judgments is an effective\nmeans of collecting challenging data. But using crowdsourced, instead of expert\njudgments, to qualify workers and send feedback does not prove to be effective.\nWe observe that the data from the iterative protocol with expert assessments is\nmore challenging by several measures. Notably, the human--model gap on the\nunanimous agreement portion of this data is, on average, twice as large as the\ngap for the baseline protocol data.\n"} {"abstract": " We theoretically investigate pattern formation and nonlinear dynamics in an\narray of equally-coupled, optically driven, Kerr nonlinear microresonators. We\nshow that the nonlinear dynamics of the system can be associated with an\neffective two-dimensional space due to the multimode structure of each\nresonator. As a result, two fundamentally different dynamical regimes (elliptic\nand hyperbolic) arise at different regions of the hybridized dispersion\nsurface.
We demonstrate the formation of global nonlinear optical patterns in\nboth regimes which correspond to coherent optical frequency combs on the\nindividual resonator level. In addition, we show that the presence of an\nadditional dimension leads to the observation of wave collapse.\n"} {"abstract": " We apply the method of differential inequalities for the computation of upper\nbounds for the rate of convergence to the limiting regime for one specific\nclass of (in)homogeneous continuous-time Markov chains. To obtain these\nestimates, we investigate the corresponding forward system of Kolmogorov\ndifferential equations.\n"} {"abstract": " Recently, advances in film synthesis methods have enabled a study of\nextremely overdoped $La_{2-x}Sr_{x}CuO_{4}$. This has revealed a surprising\nbehavior of the superfluid density as a function of doping and temperature, the\nexplanation of which is vividly debated. One popular class of models posits\nelectronic phase separation, where the superconducting phase fraction decreases\nwith doping, while some competing phase (e.g. ferromagnetic) progressively\ntakes over. A problem with this scenario is that all the way up to the dome\nedge the superconducting transition remains sharp, according to mutual\ninductance measurements. However, the physically relevant scale is the Pearl\npenetration depth, $\\Lambda_{P}$, and this technique probes the sample on a\nlength scale $L$ that is much larger than $\\Lambda_{P}$. In the present paper,\nwe use local scanning SQUID measurements that probe the susceptibility of the\nsample on the scale $L << \\Lambda_{P}$. Our SQUID maps show uniform landscapes\nof susceptibility and excellent overall agreement of the local penetration\ndepth data with the bulk measurements. These results contribute an important\npiece to the puzzle of how high-temperature superconductivity vanishes on the\noverdoped side of the cuprates phase diagram.\n"} {"abstract": " Social media often acts as a breeding ground for different forms of offensive\ncontent. For low-resource languages like Tamil, the situation is more complex\ndue to the poor performance of multilingual or language-specific models and the\nlack of proper benchmark datasets. Based on the shared task Offensive\nLanguage Identification in Dravidian Languages at EACL 2021, we present an\nexhaustive exploration of different transformer models. We also provide a\ngenetic algorithm technique for ensembling different models. Our ensembled\nmodels trained separately for each language secured the first position in\nTamil, the second position in Kannada, and the first position in Malayalam\nsub-tasks. The models and code are provided.\n"} {"abstract": " Overnight, Apple has turned its hundreds-of-million-device ecosystem into the\nworld's largest crowd-sourced location tracking network called offline finding\n(OF). OF leverages online finder devices to detect the presence of missing\noffline devices using Bluetooth and report an approximate location back to the\nowner via the Internet. While OF is not the first system of its kind, it is the\nfirst to commit to strong privacy goals. In particular, OF aims to ensure\nfinder anonymity, untrackability of owner devices, and confidentiality of\nlocation reports. This paper presents the first comprehensive security and\nprivacy analysis of OF. To this end, we recover the specifications of the\nclosed-source OF protocols by means of reverse engineering.
We experimentally\nshow that unauthorized access to the location reports allows for accurate\ndevice tracking and retrieving a user's top locations with an error in the\norder of 10 meters in urban areas. While we find that OF's design achieves its\nprivacy goals, we discover two distinct design and implementation flaws that\ncan lead to a location correlation attack and unauthorized access to the\nlocation history of the past seven days, which could deanonymize users. Apple\nhas partially addressed the issues following our responsible disclosure.\nFinally, we make our research artifacts publicly available.\n"} {"abstract": " We develop a constitutive model allowing for the description of the rheology\nof two-dimensional soft dense suspensions above jamming. Starting from a\nstatistical description of the particle dynamics, we derive, using a set of\napproximations, a non-linear tensorial evolution equation linking the\ndeviatoric part of the stress tensor to the strain-rate and vorticity tensors.\nThe coefficients appearing in this equation can be expressed in terms of the\npacking fraction and of particle-level parameters. This constitutive equation\nrooted in the microscopic dynamics qualitatively reproduces a number of salient\nfeatures of the rheology of jammed soft suspensions, including the presence of\nyield stresses for the shear component of the stress and for the normal stress\ndifference. More complex protocols like the relaxation after a preshear are\nalso considered, showing a smaller stress after relaxation for a stronger\npreshear.\n"} {"abstract": " We study the MaxCut problem for graphs $G=(V,E)$. The problem is NP-hard;\nthere are two main approximation algorithms with theoretical guarantees: (1)\nthe Goemans \\& Williamson algorithm uses semi-definite programming to provide a\n0.878 MaxCut approximation (which, if the Unique Games Conjecture is true, is\nthe best that can be done in polynomial time) and (2) Trevisan proposed an\nalgorithm using spectral graph theory from which a 0.614 MaxCut approximation\ncan be obtained. We discuss a new approach using a specific quadratic program\nand prove that its solution can be used to obtain at least a 0.502 MaxCut\napproximation. The algorithm seems to perform well in practice.\n"} {"abstract": " In many applications, a large number of features are collected with the goal\nof identifying a few important ones. Sometimes, these features lie in a metric\nspace with a known distance matrix, which partially reflects their\nco-importance pattern. Proper use of the distance matrix will boost the power\nof identifying important features. Hence, we develop a new multiple testing\nframework named Distance Assisted Recursive Testing (DART). DART has two\nstages. In stage 1, we transform the distance matrix into an aggregation tree,\nwhere each node represents a set of features. In stage 2, based on the\naggregation tree, we set up dynamic node hypotheses and perform multiple\ntesting on the tree. All rejections are mapped back to the features. Under mild\nassumptions, the false discovery proportion of DART converges to the desired\nlevel with probability converging to one. We illustrate by theory and\nsimulations that DART has superior performance under various models compared to\nthe existing methods.
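A schematic of the two stages might look as follows (our simplification, assuming average-linkage hierarchical clustering for the aggregation tree and Fisher-combined p-values against a fixed threshold; the paper's actual dynamic error-control calibration is more involved):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import squareform
from scipy.stats import combine_pvalues

def dart_sketch(dist_matrix, pvals, alpha=0.05):
    # Stage 1: turn the feature-distance matrix into an aggregation tree.
    root = to_tree(linkage(squareform(dist_matrix), method="average"))
    # Stage 2: walk the tree top-down, descending only into rejected nodes;
    # each node is tested by combining the p-values of its leaf features.
    rejected, stack = [], [root]
    while stack:
        node = stack.pop()
        leaves = node.pre_order(lambda leaf: leaf.id)
        _, p = combine_pvalues(pvals[leaves], method="fisher")
        if p <= alpha:
            if node.is_leaf():
                rejected.append(node.id)  # maps back to a single feature
            else:
                stack += [node.left, node.right]
    return rejected

pvals = np.array([0.001, 0.002, 0.4, 0.6])
dist = np.array([[0.0, 1, 4, 4], [1, 0, 4, 4], [4, 4, 0, 1], [4, 4, 1, 0]])
print(dart_sketch(dist, pvals))  # only the two correlated small p-values survive
```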
We applied DART to a clinical trial in an allogeneic\nstem cell transplantation study to identify the gut microbiota whose abundance\nis impacted by after-transplant care.\n"} {"abstract": " Trusted AI literature to date has focused on the trust needs of users who\nknowingly interact with discrete AIs. Conspicuously absent from the literature\nis a rigorous treatment of public trust in AI. We argue that public distrust of\nAI originates from the under-development of a regulatory ecosystem that would\nguarantee the trustworthiness of the AIs that pervade society. Drawing from\nstructuration theory and literature on institutional trust, we offer a model of\npublic trust in AI that differs starkly from models driving Trusted AI efforts.\nThis model provides a theoretical scaffolding for Trusted AI research which\nunderscores the need to develop nothing less than a comprehensive and visibly\nfunctioning regulatory ecosystem. We elaborate on the pivotal role of externally\nauditable AI documentation within this model and the work to be done to ensure\nit is effective, and outline a number of actions that would promote public\ntrust in AI. We discuss how existing efforts to develop AI documentation within\norganizations -- both to inform potential adopters of AI components and support\nthe deliberations of risk and ethics review boards -- are necessary but\ninsufficient assurance of the trustworthiness of AI. We argue that being\naccountable to the public in ways that earn their trust, through elaborating\nrules for AI and developing resources for enforcing these rules, is what will\nultimately make AI trustworthy enough to be woven into the fabric of our\nsociety.\n"} {"abstract": " Consider two batches of independent or interdependent exponentiated\nlocation-scale distributed heterogeneous random variables. This article\ninvestigates ordering results for the second-order statistics from these\nbatches when a vector of parameters is switched to another vector of parameters\nin the specified model. Sufficient conditions for the usual stochastic order\nand the hazard rate order are derived. Some applications of the established\nresults are presented.\n"} {"abstract": " We determine the starspot detection rate in exoplanetary transit light curves\nfor M and K dwarf stars observed by the Transiting Exoplanet Survey Satellite\n(TESS) using various starspot filling factors and starspot distributions. We\nused $3.6\\times10^9$ simulations of planetary transits around spotted stars\nusing the transit-starspot model PRISM. The simulations cover a range of\nstarspot filling factors using one of three distributions: uniform,\npolar-biased, and mid-latitude. After construction of the stellar disc and\nstarspots, we checked the transit chord for starspots and examined the change in\nflux of each starspot to determine whether or not a starspot anomaly would be\ndetected. The results were then compared to predicted planetary detections for\nTESS. The results show that for the case of a uniform starspot distribution,\n$64\\pm9$ M dwarf and $23\\pm4$ K dwarf transit light curves observed by TESS\nwill contain a starspot anomaly. This reduces to $37\\pm6$ M dwarf and $12\\pm2$\nK dwarf light curves for a polar-biased distribution and $47\\pm7$ M dwarf and\n$21\\pm4$ K dwarf light curves for a mid-latitude distribution. Currently there\nare only 17 M dwarf and 10 K dwarf confirmed planetary systems from TESS, none\nof which are confirmed as showing starspot anomalies.
All three starspot\ndistributions can explain the current trend. However, with such a small sample,\na firm conclusion cannot be made at present. In the coming years when more\nTESS M and K dwarf exoplanetary systems have been detected and characterised,\nit will be possible to determine the dominant starspot distribution.\n"} {"abstract": " In this paper, we propose a novel data augmentation strategy named\nCut-Thumbnail, which aims to improve the shape bias of the network. We reduce an\nimage to a certain size and replace a random region of the original image\nwith the reduced image. The generated image not only retains most of the\noriginal image information but also has global information in the reduced\nimage. We call the reduced image the thumbnail. Furthermore, we find that the\nidea of thumbnail can be perfectly integrated with Mixed Sample Data\nAugmentation, so we put one image's thumbnail on another image while the ground\ntruth labels are also mixed, achieving strong results on various computer\nvision tasks. Extensive experiments show that Cut-Thumbnail works better than\nstate-of-the-art augmentation strategies across classification, fine-grained\nimage classification, and object detection. On ImageNet classification,\nResNet-50 architecture with our method achieves 79.21\\% accuracy, which is an\nimprovement of more than 2.8\\% over the baseline.\n"} {"abstract": " This essay summarizes the efforts required to build a program of a unified,\nlow-dimensional topology that allows characterizing all these flat space-times.\nSince spatiotemporal manifolds are topological spaces equipped with metrics,\ntheir properties are characterized by Clifford algebras in hypercomplex rings\nassociative with unity, so that Galileo's transformations are induced by a dual\nnumber; the Lorentz transformations, by a perplex number; and the Euclidean\ntransformations, by a complex number. This fact led us to establish an internal\nautomorphism in the ring of hybrid numbers that acts as a map of the manifolds\nand induces the space-time metric based on the quality (characteristic) of the\nassociated hypercomplex unit. From this automorphism, we built hybrid\ntrigonometric functions, which we call Poincar\\'e functions, which allowed us\nto deduce general properties of space-time, hyperbolic, parabolic and\nelliptical geometries and the groups SO(3), SO(4) and SO(1,3). This\napproach allows us to highlight the global properties of space-time, suggests\nmethods for geodynamic models and allows us to interpret anti-matter as matter\nin a Euclidean space-time where the nature of time is imaginary.\n"} {"abstract": " Global value chains (GVCs) are formed through value-added trade, and some\nregions promote economic integration by concluding regional trade agreements to\npromote these chains. However, there is no way to quantitatively assess the\nscope and extent of economic integration involving various sectors in multiple\ncountries. In this study, we used the World Input--Output Database to create a\ncross-border sector-wise trade in value-added network (international\nvalue-added network (IVAN)) covering the period 2000--2014 and evaluated\nit using network science methods. By applying Infomap to the IVAN, we\nconfirmed for the first time the existence of two regional communities: Europe\nand the Pacific Rim.
Helmholtz--Hodge decomposition was used to decompose the\nvalue flows within the region into potential and circular flows, and the annual\nevolution of the potential and circular relationships between countries and\nsectors was clarified. The circular flow component of the decomposition was\nused to define an economic integration index, and findings confirmed that the\ndegree of economic integration in Europe declined sharply after the economic\ncrisis in 2009 to a level lower than that in the Pacific Rim. The European\neconomic integration index recovered in 2011 but again fell below that of the\nPacific Rim in 2013. Moreover, sectoral analysis showed that the economic\nintegration index captured the effect of Russian mineral resources, free\nmovement of labor in Europe, and international division of labor in the Pacific\nRim, especially in GVCs for the manufacture of motor vehicles and high-tech\nproducts.\n"} {"abstract": " Unitarity demands that the black-hole final state (what remains inside the\nevent horizon at complete evaporation) must be unique. Assuming a UV theory\nwith infinitely many fields, we propose that the uniqueness of the final state\ncan be achieved via a mechanism analogous to the quantum-mechanical description\nof dissipation.\n"} {"abstract": " \\textit{Resolve} onboard the X-ray satellite XRISM is a cryogenic instrument\nwith an X-ray microcalorimeter in a Dewar. A lid partially transparent to\nX-rays (called the gate valve, or GV) is installed at the top of the Dewar along\nthe optical axis. Because observations will be made through the GV for the\nfirst few months, the X-ray transmission calibration of the GV is crucial for\ninitial scientific outcomes. We present the results of our ground calibration\ncampaign of the GV, which is composed of a Be window and a stainless steel\nmesh. For the stainless steel mesh, we measured its transmission using the\nX-ray beamline at ISAS. For the Be window, we used synchrotron facilities to\nmeasure the transmission and modeled the data with (i) photoelectric absorption\nand incoherent scattering of Be, (ii) photoelectric absorption of contaminants,\nand (iii) coherent scattering of Be changing at specific energies. We discuss\nthe physical interpretation of the transmission discontinuity caused by the\nBragg diffraction in polycrystalline Be, which we incorporated into our\ntransmission phenomenological model. We present the X-ray diffraction\nmeasurement on the sample to support our interpretation. The measurements and\nthe constructed model meet the calibration requirements of the GV. We also\nperformed a spectral fitting of the Crab nebula observed with Hitomi SXS and\nconfirmed improvements of the model parameters.\n"} {"abstract": " The uptake of machine learning (ML) approaches in the social and health\nsciences has been rather slow, and research using ML for social and health\nresearch questions remains fragmented. This may be due to the separate\ndevelopment of research in the computational/data versus social and health\nsciences as well as a lack of accessible overviews and adequate training in ML\ntechniques for non-data-science researchers. This paper provides a meta-mapping\nof research questions in the social and health sciences to appropriate ML\napproaches, by incorporating the necessary requirements of statistical analysis\nin these disciplines.
We map the established classification into description,\nprediction, and causal inference to common research goals, such as estimating\nprevalence of adverse health or social outcomes, predicting the risk of an\nevent, and identifying risk factors or causes of adverse outcomes. This\nmeta-mapping aims at overcoming disciplinary barriers and starting a fluid\ndialogue between researchers from the social and health sciences and\nmethodologically trained researchers. Such mapping may also help to fully\nexploit the benefits of ML while considering domain-specific aspects relevant\nto the social and health sciences, and hopefully contribute to the acceleration\nof the uptake of ML applications to advance both basic and applied social and\nhealth sciences research.\n"} {"abstract": " We develop a new three-dimensional time-dependent radiative transfer code,\nTRINITY (Time-dependent Radiative transfer In Near-Infrared TomographY), for\nin-vivo diffuse optical tomography (DOT). The simulation code is based on the\ndesign of long radiation rays connecting boundaries of a computational domain,\nwhich allows us to calculate light propagation with little numerical diffusion.\nWe parallelize the code with MPI using the domain decomposition technique and\nconfirm the high parallelization efficiency so that simulations with a spatial\nresolution of $\\sim 1~\\rm mm$ can be performed in practical time. As a first\napplication, we study the light propagation for a pulse collimated within\n$\\theta \\sim 15^\\circ$ in a phantom, which is a uniform medium made of\npolyurethane mimicking biological tissue. We show that the pulse spreads in all\nforward directions over $\\sim$ a few mm due to the multiple scattering process\nof photons. Our simulations successfully reproduce the time-resolved signals\nmeasured with eight detectors for the phantom. We also introduce the effects of\nreflection and refraction at the boundary of media with different refractive\nindices and demonstrate the faster propagation of photons in an air hole that is\nan analogue for the respiratory tract.\n"} {"abstract": " Topological phases of materials are characterized by topological invariants\nthat are conventionally calculated by different means according to the\ndimension and symmetry class of the system. For topological materials described\nby Dirac models, we introduce a wrapping number as a unified approach to obtain\nthe topological invariants in arbitrary dimensions and symmetry classes. Given\na unit vector that parametrizes the momentum-dependence of the Dirac model, the\nwrapping number describes the degree of the map from the Brillouin zone torus\nto the sphere formed by the unit vector that we call Dirac sphere. This method\nis gauge-invariant and originates from the intrinsic features of the Dirac\nmodel, and moreover places all known topological invariants, such as Chern\nnumber, winding number, Pfaffian, etc., on equal footing.\n"} {"abstract": " Recently, monolayer jacutingaite (Pt2HgSe3), a naturally occurring exfoliable\nmineral, discovered in Brazil in 2008, has been theoretically predicted as a\ncandidate quantum spin Hall system with a 0.5 eV band gap, while the bulk form\nis one of only a few known dual-topological insulators which may host different\nsurface states protected by symmetries. In this work, we systematically\ninvestigate both structure and electronic evolution of bulk Pt2HgSe3 under high\npressure up to 96 GPa.
The nontrivial topology persists up to the structural\nphase transition observed in the high-pressure regime. Interestingly, we found\nthat this phase transition is accompanied by the appearance of\nsuperconductivity at around 55 GPa and the critical transition temperature Tc\nincreases with applied pressure. Our results demonstrate that Pt2HgSe3 with\nnontrivial topology of electronic states displays new ground states upon\ncompression and raises its potential for application in next-generation\nspintronic devices.\n"} {"abstract": " We make two observations on the motion of coupled particles in a periodic\npotential. Coupled pendula, or the space-discretized sine-Gordon equation, are an\nexample of this problem. The linearized spectrum of the synchronous motion turns\nout to have a hidden asymptotic periodicity in its dependence on the energy;\nthis is the gist of the first observation. Our second observation is the\ndiscovery of a special property of the purely sinusoidal potentials: the\nlinearization around the synchronous solution is equivalent to the classical\nLam\'e equation. As a consequence, {\it all but one of the instability zones of the\nlinearized equation collapse to a point for the one-harmonic potentials}. This\nprovides a new example where Lam\'e's finite zone potential arises in the\nsimplest possible setting.\n"} {"abstract": " We present a multi-wavelength investigation of a C-class flaring activity\nthat occurred in the active region NOAA 12734 on 8 March 2019. The\ninvestigation utilises data from AIA and HMI on board the SDO and the\nUdaipur-CALLISTO solar radio spectrograph of the Physical Research Laboratory.\nThis low-intensity C1.3 event is characterised by typical features of a long\nduration event (LDE), viz. extended flare arcade, large-scale two-ribbon\nstructures and twin coronal dimmings. The eruptive event occurred in a coronal\nsigmoid and displayed two distinct stages of energy release, manifested in\nterms of temporal and spatial evolution. The formation of twin dimming regions\nis consistent with the eruption of a large flux rope with footpoints lying in\nthe western and eastern edges of the coronal sigmoid. The metric radio\nobservations obtained from Udaipur-CALLISTO reveal a broad-band\n($\approx$50-180 MHz), stationary plasma emission for $\approx$7 min during the\nsecond stage of the flaring activity that resembles a type IV radio burst. A\ntype III decametre-hectometre radio burst with a starting frequency of\n$\approx$2.5 MHz precedes the stationary type IV burst observed by\nUdaipur-CALLISTO by $\approx$5 min. The synthesis of multi-wavelength\nobservations and Non-Linear Force Free Field (NLFFF) coronal modelling together\nwith magnetic decay index analysis suggests that the sigmoid flux rope\nunderwent a zipping-like uprooting from its western to eastern footpoints in\nresponse to the overlying asymmetric magnetic field confinement. The\nasymmetrical eruption of the flux rope also accounts for the observed\nlarge-scale structures viz. apparent eastward shift of flare ribbons and post\nflare loops along the polarity inversion line (PIL), and provides evidence\nfor lateral progression of the magnetic reconnection site as the eruption proceeds.\n"} {"abstract": " Observations suggest that protoplanetary disks have moderate accretion rates\nonto the central young star, especially at early stages (e.g. HL Tau),\nindicating moderate disk turbulence. However, recent ALMA observations suggest\nthat dust is highly settled, implying weak turbulence. 
Motivated by this\ntension, we carry out 3D stratified local simulations of self-gravitating\ndisks, focusing on settling of dust particles in actively accreting disks. We\nfind that gravitationally unstable disks can have moderately high accretion\nrates while maintaining a relatively thin dust disk for two reasons. First,\naccretion stress from the self-gravitating spirals (self-gravity stress) can be\nstronger than the stress from turbulence (Reynolds stress) by a factor of 5-20.\nSecond, the strong gravity from the gas to the dust decreases the dust scale\nheight by another factor of $\sim 2$. Furthermore, the turbulence is slightly\nanisotropic, producing a larger Reynolds stress than the vertical dust\ndiffusion coefficient. Thus, gravitoturbulent disks have unusually high\nvertical Schmidt numbers ($Sc_z$) if we scale the total accretion stress with\nthe vertical diffusion coefficient (e.g. $Sc_z\sim$ 10-100). The reduction of\nthe dust scale height by the gas gravity should also operate in\ngravitationally stable disks ($Q>$1). Gravitational forces between particles\nbecome more relevant for the concentration of intermediate dust sizes, forming\ndense clouds of dust. After comparing with HL Tau observations, our results\nsuggest that self-gravity and gravity among different disk components could be\ncrucial for solving the conflict between protoplanetary disk accretion and\ndust settling, at least at the early stages.\n"} {"abstract": " Non-Hermiticity enriches the 10-fold Altland-Zirnbauer symmetry class into\nthe 38-fold symmetry class, where critical behavior of the Anderson transitions\n(ATs) has been extensively studied recently. Here, we establish a\ncorrespondence of the universality classes of the ATs between Hermitian and\nnon-Hermitian systems. We demonstrate that the critical exponents of the length\nscale in non-Hermitian systems coincide with the critical exponents in the\ncorresponding Hermitian systems with additional chiral symmetry. A remarkable\nconsequence is superuniversality, i.e., the ATs in some different symmetry\nclasses of non-Hermitian systems are characterized by the same critical\nexponent. In addition to the comparisons between the known critical exponents\nfor non-Hermitian systems and their Hermitian counterparts, we obtain the\ncritical exponents in symmetry classes AI, AII, AII$^{\dagger}$,\nCII$^{\dagger}$, and DIII in two and three dimensions. Our precise critical\nexponents not only confirm the correspondence, but also predict the unknown\ncritical exponents in Hermitian systems, paving the way to study the ATs of\nHermitian systems by the corresponding non-Hermitian systems.\n"} {"abstract": " We consider strongly convex-concave minimax problems in the federated\nsetting, where the communication constraint is the main bottleneck. When\nclients are arbitrarily heterogeneous, a simple Minibatch Mirror-prox achieves\nthe best performance. As the clients become more homogeneous, using multiple\nlocal gradient updates at the clients significantly improves upon Minibatch\nMirror-prox by communicating less frequently. Our goal is to design an\nalgorithm that can harness the benefit of similarity in the clients while\nrecovering the Minibatch Mirror-prox performance under arbitrary heterogeneity\n(up to log factors). We give the first federated minimax optimization algorithm\nthat achieves this goal. 
The main idea is to combine (i) SCAFFOLD (an algorithm\nthat performs variance reduction across clients for convex optimization) to\nerase the worst-case dependency on heterogeneity and (ii) Catalyst (a framework\nfor acceleration based on modifying the objective) to accelerate convergence\nwithout amplifying client drift. We prove that this algorithm achieves our\ngoal, and include experiments to validate the theory.\n"} {"abstract": " Thermal power generation plays a dominant role in the world's electricity\nsupply. It consumes large amounts of coal worldwide, and causes serious air\npollution. Optimizing the combustion efficiency of a thermal power generating\nunit (TPGU) is a highly challenging and critical task in the energy industry.\nWe develop a new data-driven AI system, namely DeepThermal, to optimize the\ncombustion control strategy for TPGUs. At its core is a new model-based\noffline reinforcement learning (RL) framework, called MORE, which leverages\nlogged historical operational data of a TPGU to solve a highly complex\nconstrained Markov decision process problem via purely offline training. MORE\naims at simultaneously improving the long-term reward (increasing combustion\nefficiency and reducing pollutant emission) and controlling operational risks\n(satisfaction of safety constraints). In DeepThermal, we first learn a data-driven\ncombustion process simulator from the offline dataset. The RL agent of MORE is\nthen trained by combining real historical data as well as carefully filtered\nand processed simulation data through a novel restrictive exploration scheme.\nDeepThermal has been successfully deployed in four large coal-fired thermal\npower plants in China. Real-world experiments show that DeepThermal effectively\nimproves the combustion efficiency of a TPGU. We also report and demonstrate\nthe superior performance of MORE by comparing it with state-of-the-art\nalgorithms on standard offline RL benchmarks. To the best of the authors'\nknowledge, DeepThermal is the first AI application that has been used to solve\nreal-world complex mission-critical control tasks using the offline RL\napproach.\n"} {"abstract": " Pushed by market forces, software development has become fast-paced. As a\nconsequence, modern development projects are assembled from 3rd-party\ncomponents. Security & privacy assurance techniques, once designed for large,\ncontrolled updates over months or years, must now cope with small, continuous\nchanges taking place within a week, and happening in sub-components that are\ncontrolled by third-party developers one might not even know existed. In\nthis paper, we aim to provide an overview of the current software security\napproaches and evaluate their appropriateness in the face of the changed nature\nof software development. Software security assurance could benefit from switching\nfrom a process-based to an artefact-based approach. Further, security\nevaluation might need to be more incremental, automated and decentralized. We\nbelieve this can be achieved by supporting mechanisms for lightweight and\nscalable screenings that are applicable to the entire population of software\ncomponents, even though there might be a price to pay.\n"} {"abstract": " We study the role of driving in an initial maximally entangled state evolving\nin the presence of a structured environment in weak and strong regimes. 
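Illustrative aside: the standard entanglement measure for two-qubit studies of this kind, the concurrence (used in what follows), can be computed for any two-qubit density matrix via Wootters' closed formula; a self-contained sketch (the Bell and Werner test states are our examples, not the paper's model):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho:
    C = max(0, l1 - l2 - l3 - l4), where l1 >= ... >= l4 are the square
    roots of the eigenvalues of rho (Y x Y) rho* (Y x Y)."""
    Y = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(Y, Y)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state |phi+> = (|00> + |11>)/sqrt(2): C = 1.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(phi, phi.conj())
print(concurrence(rho_bell))        # -> 1.0

# Mixing with white noise degrades the entanglement (Werner state).
p = 0.5
rho_werner = p * rho_bell + (1 - p) * np.eye(4) / 4
print(concurrence(rho_werner))      # -> 0.25, i.e. max(0, (3p - 1)/2)
```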
We\nfocus on the enhancement and degradation of maximal concurrence when the system\nis driven on and out of resonance for a general evolution, as well as the\neffect of adding a transverse coupling among the particles of the model. We\nfurther investigate the role of driving in the acquisition of a geometric phase\nfor the maximally entangled state. As the model studied herein can be used to\nmodel experimental situations such as hybrid quantum-classical systems feasible\nwith current technologies, this knowledge can aid the search for physical\nsetups that best retain quantum properties under dissipative dynamics.\n"} {"abstract": " Jacobi's triple product identity is proved from one of Euler's\n$q$-exponential functions in an elementary way.\n"} {"abstract": " The goal of regression is to recover an unknown underlying function that best\nlinks a set of predictors to an outcome from noisy observations. In\nnon-parametric regression, one assumes that the regression function belongs to\na pre-specified infinite-dimensional function space (the hypothesis space). In\nthe online setting, when the observations come in a stream, it is\ncomputationally preferable to iteratively update an estimate rather than\nrefit an entire model repeatedly. Inspired by nonparametric sieve\nestimation and stochastic approximation methods, we propose a sieve stochastic\ngradient descent estimator (Sieve-SGD) when the hypothesis space is a Sobolev\nellipsoid. We show that Sieve-SGD has rate-optimal MSE under a set of simple\nand direct conditions. We also show that the Sieve-SGD estimator can be\nconstructed with low time expense, and requires almost minimal memory usage\namong all statistically rate-optimal estimators, under some conditions on the\ndistribution of the predictors.\n"} {"abstract": " The objective of this study is to correlate the scaling factor of the\nStandard Model (SM)-like Higgs boson with the cross section ratio of the process\n$e^+ e^- \rightarrow hhf\overline{f}$ where $f\neq t$, normalized to SM\npredictions, in type I of the Two Higgs Doublet Model. All calculations have\nbeen performed at $\sqrt{s}=500$ GeV and $1 \leq \tan{\beta} \leq 30$ for\nmasses $m_H = m_A = m_{H^\pm} = 300$ GeV and $m_H=300$ GeV, $m_A=m_{H^\pm}=500$\nGeV. The working scenario is taken away from the alignment limit, that is,\n$s_{\beta-\alpha}= 0.98$ and $s_{\beta-\alpha}= 0.99,$ $0.995$, which enhances\nthe cross section, in places to a few times the SM prediction, due to resonant\neffects of the additional heavy neutral Higgs bosons. This shows that the\nenhancement in the cross section occurs on leaving the alignment limit, i.e.,\n$s_{\beta-\alpha}=1$, at which all the Higgs couplings to vector bosons and\nfermions take the same values as in the SM at tree level. A larger enhancement\nfactor is obtained at $s_{\beta-\alpha}= 0.98$ than at $s_{\beta-\alpha}= 0.99,$\n$0.995$. Furthermore, a decrease in the enhancement factor is observed for the\ncase $m_H=300$ GeV, $m_A=m_{H^\pm}=500$ GeV. The behavior of the scaling factor\nwith $\tan{\beta}$ is also studied, which shows that for large values of\n$\tan\beta$, the scaling factor becomes equal to $s_{\beta-\alpha}$. 
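Illustrative aside: the large-$\tan\beta$ behaviour just quoted follows from the textbook type-I coupling modifier $\kappa_f=\cos\alpha/\sin\beta = s_{\beta-\alpha}+c_{\beta-\alpha}/\tan\beta$; a quick numerical check of this generic relation (not the paper's full calculation):

```python
import numpy as np

# Type-I 2HDM fermionic coupling modifier: kappa_f = cos(alpha)/sin(beta).
# With s = sin(beta - alpha) fixed, kappa_f = s + sqrt(1 - s^2)/tan(beta),
# so kappa_f -> s_{beta-alpha} as tan(beta) grows.
s_ba = 0.98                          # sin(beta - alpha), as in the scan above
c_ba = np.sqrt(1 - s_ba ** 2)
for tan_beta in (1, 5, 10, 30):
    beta = np.arctan(tan_beta)
    alpha = beta - np.arcsin(s_ba)   # choose beta - alpha in the first quadrant
    kappa_exact = np.cos(alpha) / np.sin(beta)
    kappa_approx = s_ba + c_ba / tan_beta
    print(tan_beta, kappa_exact, kappa_approx)   # converges to 0.98
```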
Finally, a convincing correlation is\nachieved by taking into account the experimental and theoretical constraints,\ne.g., perturbative unitarity, vacuum stability and electroweak oblique\nparameters.\n"} {"abstract": " In this paper, we propose a polar-coding-based scheme for set reconciliation\nbetween two network nodes. The system is modeled as a well-known Slepian-Wolf\nsetting induced by a fixed number of deletions. The set reconciliation process\nis divided into two phases: 1) a deletion polar code is employed to help one\nnode to identify the possible deletion indices, whose number may be larger than the\nnumber of genuine deletions; 2) a lossless compression polar code is then\ndesigned to feed back those indices with minimum overhead. Our scheme can be\nviewed as a generalization of polar codes to some emerging network-based\napplications such as package synchronization in blockchains. Some\nconnections with existing schemes based on invertible Bloom lookup\ntables (IBLTs) and network coding are also observed and briefly discussed.\n"} {"abstract": " The two-dimensional regular and chaotic electro-convective flow states of a\ndielectric liquid between two infinite parallel planar electrodes are\ninvestigated using a two-relaxation-time lattice Boltzmann method. Positive\ncharges injected at the metallic planar electrode located at the bottom of the\ndielectric liquid layer are transported towards the grounded upper electrode by\nthe synergy of the flow and the electric field. The various flow states can be\ncharacterized by a non-dimensional parameter, the electric Rayleigh number.\nAs the electric Rayleigh number is gradually increased, the flow system sequentially\nevolves via quasi-periodic, periodic, and chaotic flow states with five\nidentified bifurcations. The turbulence kinetic energy spectrum is shown to\nfollow the -3 law as the flow approaches turbulence. The spectrum is found to\nfollow a -5 law when the flow is periodic.\n"} {"abstract": " We discuss possibilities to test physics beyond the Standard Model in\n$\vert\Delta c\vert=\vert\Delta u\vert= 1$ semileptonic, hadronic and missing\nenergy decay modes. Clean null test observables such as angular observables,\nCP-asymmetries and lepton universality tests are presented and\nmodel-independent correlations as well as details within flavorful,\nanomaly-free $Z^\prime$ models are worked out.\n"} {"abstract": " The axion-CMB scenario is an interesting possibility to explain the temperature\nanisotropy of the cosmic microwave background (CMB) by primordial fluctuations\nof the QCD axion \cite{Iso:2020pzv}. In this scenario, fluctuations of\nradiation are generated by an energy exchange between axions and radiation,\nwhich results in the correlation between the primordial axion fluctuations and\nthe CMB anisotropies. Consequently, cosmological observations stringently\nconstrain a model of the axion and the early history of the universe. In\nparticular, we need a large energy fraction $\Omega_A$ of the axion at the\nQCD phase transition, but it must become tiny in the present universe to\nsuppress the isocurvature power spectrum. One natural cosmological scenario\nto realize such a situation is thermal inflation, which can sufficiently\ndilute the axion abundance. Thermal inflation occurs in various models. 
In this\npaper, we focus on a classically conformal (CC) $B$-$L$ model with a QCD axion.\nIn this model, the early universe undergoes a long supercooling era of the\n$B$-$L$ and electroweak symmetries, and thermal inflation naturally occurs.\nThus it can be a good candidate for the axion-CMB scenario. But the axion\nabundance at the QCD transition is shown to be insufficient in the original CC\n$B$-$L$ model. To overcome this situation, we extend the model by introducing\n$N$ scalar fields $S$ (either massive or massless) and consider a novel\ncosmological history such that the $O(N)$ and the $B$-$L$ sectors evolve almost\nseparately in the early universe. We find that all the necessary conditions for\nthe axion-CMB scenario can be satisfied in some parameter regions for massless\n$S$ fields, typically $N\sim 10^{19}$ and the mass of the $B$-$L$ gauge boson\naround $5-10$ TeV.\n"} {"abstract": " We prove that every class of graphs $\mathscr C$ that is monadically stable\nand has bounded twin-width can be transduced from some class with bounded\nsparse twin-width. This generalizes analogous results for classes of bounded\nlinear cliquewidth and of bounded cliquewidth. It also implies that monadically\nstable classes of bounded twin-width are linearly $\chi$-bounded.\n"} {"abstract": " We present a transformer-based image anomaly detection and localization\nnetwork. Our proposed model is a combination of a reconstruction-based approach\nand patch embedding. The use of transformer networks helps to preserve the\nspatial information of the embedded patches, which are later processed by a\nGaussian mixture density network to localize the anomalous areas. In addition,\nwe also publish BTAD, a real-world industrial anomaly dataset. Our results are\ncompared with other state-of-the-art algorithms using publicly available\ndatasets like MNIST and MVTec.\n"} {"abstract": " Small-scale inhomogeneities in the baryon density around recombination have\nbeen proposed as a solution to the tension between local and global\ndeterminations of the Hubble constant. These baryon clumping models make\ndistinct predictions for the cosmic microwave background anisotropy power\nspectra on small angular scales. We use recent data from the Atacama Cosmology\nTelescope to test these predictions. No evidence for baryon clumping is found,\nassuming a range of parameterizations for time-independent baryon density\nprobability distribution functions. The inferred Hubble constant remains in\nsignificant tension with the SH0ES measurement.\n"} {"abstract": " The spontaneous onset of a low-temperature topologically ordered phase in a\n2-dimensional (2D) lattice model of uniaxial liquid crystal (LC) was debated\nextensively, pointing to a suspected underlying mechanism affecting the RG flow\nnear the topological fixed point. A recent MC study clarified that a prior\ncrossover leads to a transition to the nematic phase. The crossover was interpreted\nas due to the onset of a perturbing relevant scaling field originating from the\nextra spin degree of freedom. As a counterexample and in support of this\nhypothesis, we now consider V-shaped bent-core molecules with rigid rod-like\nsegments connected at an assigned angle. The two segments of the molecule\ninteract with the segments of all the nearest neighbours on a square lattice,\nprescribed by a biquadratic interaction. We compute equilibrium averages of\ndifferent observables with Monte Carlo techniques as a function of temperature\nand sample size. 
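Illustrative aside: equilibrium averages of this kind are typically estimated with Metropolis sampling; below is a generic toy for planar, headless lattice units coupled by a biquadratic nearest-neighbour term (our simplification, not the authors' code; temperature is in units of $J/k_B$):

```python
import numpy as np

rng = np.random.default_rng(0)
L, J = 16, 1.0
theta = rng.uniform(0, np.pi, (L, L))   # headless planar units (angles mod pi)

def site_energy(th, i, j):
    # Biquadratic coupling -J cos^2(dtheta) to the four nearest neighbours.
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e += -J * np.cos(th[i, j] - th[(i + di) % L, (j + dj) % L]) ** 2
    return e

def sweep(th, T):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        old, e_old = th[i, j], site_energy(th, i, j)
        th[i, j] = (old + rng.normal(0, 0.5)) % np.pi   # trial rotation
        dE = site_energy(th, i, j) - e_old
        if dE > 0 and rng.random() > np.exp(-dE / T):
            th[i, j] = old                               # Metropolis reject

for T in (0.2, 0.5, 1.0):
    for _ in range(200):                                 # re-equilibrate at each T
        sweep(theta, T)
    E = sum(site_energy(theta, i, j) for i in range(L) for j in range(L)) / 2
    print(T, E / L ** 2)                                 # energy per site vs T
```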
For the chosen molecular bend angle and symmetric\ninter-segment interaction between neighbouring molecules, the 2D system shows\ntwo transitions as a function of T: the higher one at T1 leads to a topological\nordering of defects associated with the major molecular axis without a\ncrossover, imparting uniaxial symmetry to the medium described by the first\nfundamental group of the order parameter space $\pi_{1}$= $Z_{2}$ (inversion\nsymmetry). The second at T2 leads to a medium displaying biaxial symmetry with\n$\pi_{1}$ = Q (quaternion group). The biaxial phase shows a self-similar\nmicroscopic structure with the three axes showing power law correlations with\nvanishing exponents as the temperature decreases.\n"} {"abstract": " WeChat is the largest social instant messaging platform in China, with 1.1\nbillion monthly active users. \"Top Stories\" is a novel friend-enhanced\nrecommendation engine in WeChat, in which users can read articles based on\nboth their own and their friends' preferences. Specifically, when a user\nreads an article by opening it, the \"click\" behavior is private. Moreover, if\nthe user clicks the \"wow\" button, (only) her/his direct connections will be\naware of this action/preference. Based on the unique WeChat data, we aim to\nunderstand user preferences and \"wow\" diffusion in Top Stories at different\nlevels. We have made some interesting discoveries. For instance, the \"wow\"\nprobability of one user is negatively correlated with the number of connected\ncomponents that are formed by her/his active friends, but the click probability\nis the opposite. We further study to what extent users' \"wow\" and click\nbehavior can be predicted from their social connections. To address this\nproblem, we present a hierarchical graph representation learning based model\nDiffuseGNN, which is capable of capturing the structure-based social\nobservations discovered above. Our experiments show that the proposed method\ncan significantly improve the prediction performance compared with alternative\nmethods.\n"} {"abstract": " In recent experiments, electromagnetically induced transparency (EIT) was\nobserved with giant atoms, but nothing unconventional was found in the\ntransmission spectra. In this letter, we show that unconventional EIT does\nexist in giant atoms, and indicate why it has not been observed so far.\nDifferent from existing works, this letter presents a consistent theory\nincluding a real space method and a time delayed master equation for observing\nunconventional EIT. We discover that this phenomenon is a quantum effect which\ncannot be correctly described in a semi-classical way as those in recent works.\nOur theory shows that it can be observed when the time delay between two\nneighboring coupling points is comparable to the relaxation time of the atom,\nwhich is crucial for a future experimental observation. This new phenomenon\nresults from the inherent non-locality of the giant atom, which physically forces\npropagating fields to be standing waves in space and the atom to exhibit\nretardation in time. Our theory establishes a framework for application of\nnonlocal systems to quantum information processing.\n"} {"abstract": " We provide a generalization of McMullen's algorithm to approximate the\nHausdorff dimension of the limit set for convex-cocompact subgroups of\nisometries of the Complex Hyperbolic Plane.\n"} {"abstract": " Bi$_2$O$_2$Se is a promising material for next-generation semiconducting\nelectronics. 
It exhibits premature metallicity on the introduction of a tiny\nnumber of electrons, the physics behind which remains elusive. Here we report\non transport and dielectric measurements in Bi$_2$O$_2$Se single crystals at\nvarious carrier densities. The temperature-dependent resistivity ($\rho$)\nindicates a smooth evolution from the semiconducting to the metallic state. The\ncritical concentration for the metal-insulator transition (MIT) to occur is\nextraordinarily low ($n_\textrm{c}\sim10^{16}$ cm$^{-3}$). The relative\npermittivity of the insulating sample is huge\n($\epsilon_\textrm{r}\approx155(10)$) and varies slowly with temperature.\nCombined with the light effective mass, a long effective Bohr radius\n($a_\textrm{B}^*\approx36(2)$ $\textrm{nm}$) is derived, which provides a\nreasonable interpretation of the premature metallicity according to Mott's\ncriterion for MITs. The high electron mobility ($\mu$) at low temperatures may\nresult from the screening of ionized scattering centers due to the huge\n$\epsilon_\textrm{r}$. Our findings shed light on the electron dynamics in\ntwo-dimensional (2D) Bi$_2$O$_2$Se devices.\n"} {"abstract": " Discrete Morse theory, a cell-complex analog of smooth Morse theory, has been\ndeveloped over the past few decades since its original formulation by Robin\nForman in 1998. In particular, discrete gradient vector fields on simplicial\ncomplexes capture important features of discrete Morse functions. We prove that\nthe characteristic polynomials of the Laplacian matrices of a simplicial\ncomplex are generating functions for discrete gradient vector fields of\ndiscrete Morse functions when the complex is either a graph or a triangulation\nof an orientable manifold. Furthermore, we provide a full characterization of\nthe correspondence between rooted forests in higher dimensions and discrete\ngradient vector fields.\n"} {"abstract": " Symbolic equations are at the core of scientific discovery. The task of\ndiscovering the underlying equation from a set of input-output pairs is called\nsymbolic regression. Traditionally, symbolic regression methods use\nhand-designed strategies that do not improve with experience. In this paper, we\nintroduce the first symbolic regression method that leverages large-scale\npre-training. We procedurally generate an unbounded set of equations, and\nsimultaneously pre-train a Transformer to predict the symbolic equation from a\ncorresponding set of input-output pairs. At test time, we query the model on a\nnew set of points and use its output to guide the search for the equation. We\nshow empirically that this approach can re-discover a set of well-known\nphysical equations, and that it improves over time with more data and compute.\n"} {"abstract": " Owing to the pandemic caused by the coronavirus disease of 2019 (COVID-19),\nseveral universities have closed their campuses to prevent the spread of\ninfection. Consequently, the university classes are being held over the\nInternet, and students attend these classes from their homes. While the\nCOVID-19 pandemic is expected to be prolonged, the online-centric lifestyle has\nraised concerns about secondary health issues caused by reduced physical\nactivity (PA). However, the actual status of PA among university students has\nnot yet been examined in Japan. 
Hence, in this study, we collected daily PA\ndata (including the data corresponding to the number of steps taken and the\ndata associated with six types of activities) by employing smartphones and\nthereby analyzed the changes in the PA of university students. The PA data were\ncollected over a period of ten weeks from 305 first-year university students\nwho were attending a mandatory class of physical education at the university.\nThe obtained results indicate that compared to the average number of steps\ntaken before the COVID-19 pandemic (6474.87 steps), the average number of steps\ntaken after the COVID-19 pandemic (3522.5 steps) has decreased by 45.6%.\nFurthermore, the decrease in commuting time (7 AM to 10 AM), classroom time,\nand extracurricular activity time (11 AM to 12 AM) has led to a decrease in PA\non weekdays owing to reduced unplanned exercise opportunities and has caused an\nincrease in the duration of being in the stationary state in the course of\ndaily life.\n"} {"abstract": " We consider a distributed function computation problem in which parties\nobserving noisy versions of a remote source facilitate the computation of a\nfunction of their observations at a fusion center through public communication.\nThe distributed function computation is subject to constraints, including not\nonly reliability and storage but also privacy and secrecy. Specifically, 1) the\nremote source should remain private from an eavesdropper and the fusion center,\nmeasured in terms of the information leaked about the remote source; 2) the\nfunction computed should remain secret from the eavesdropper, measured in terms\nof the information leaked about the arguments of the function, to ensure\nsecrecy regardless of the exact function used. We derive the exact rate regions\nfor lossless and lossy single-function computation and illustrate the lossy\nsingle-function computation rate region for an information bottleneck example,\nin which the optimal auxiliary random variables are characterized for\nbinary-input symmetric-output channels. We extend the approach to lossless and\nlossy asynchronous multiple-function computations with joint secrecy and\nprivacy constraints, in which case inner and outer bounds for the rate regions\ndiffering only in the Markov chain conditions imposed are characterized.\n"} {"abstract": " For a $d$-dimensional random vector $X$, let $p_{n, X}(\\theta)$ be the\nprobability that the convex hull of $n$ independent copies of $X$ contains a\ngiven point $\\theta$. We provide several sharp inequalities regarding $p_{n,\nX}(\\theta)$ and $N_X(\\theta)$ denoting the smallest $n$ for which $p_{n,\nX}(\\theta)\\ge1/2$. As a main result, we derive the totally general inequality\n$1/2 \\le \\alpha_X(\\theta)N_X(\\theta)\\le 3d + 1$, where $\\alpha_X(\\theta)$\n(a.k.a. the Tukey depth) is the minimum probability that $X$ is in a fixed\nclosed halfspace containing the point $\\theta$. We also show several\napplications of our general results: one is a moment-based bound on\n$N_X(\\mathbb{E}[X])$, which is an important quantity in randomized approaches\nto cubature construction or measure reduction problem. 
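Illustrative aside: for centrally symmetric $X$ the containment probability $p_{n,X}(\theta)$ at the centre of symmetry has a classical closed form (Wendel's theorem), which makes $N_X$ easy to tabulate and to check against the general bound $1/2 \le \alpha_X(\theta)N_X(\theta)\le 3d + 1$ with $\alpha_X(\theta)=1/2$; using Wendel's formula here is our illustration, not the paper's method:

```python
from math import comb

def p_contain(n, d):
    """Wendel's theorem: probability that the convex hull of n i.i.d. samples
    from a centrally symmetric distribution (in general position) contains
    the centre of symmetry, in dimension d."""
    if n <= d:
        return 0.0
    return 1.0 - sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

def N_X(d):
    """Smallest n with p_contain(n, d) >= 1/2."""
    n = d + 1
    while p_contain(n, d) < 0.5:
        n += 1
    return n

for d in (1, 2, 3, 5, 10):
    n = N_X(d)
    # Tukey depth at the centre is alpha = 1/2, so the bound reads N_X <= 6d + 2.
    print(d, n, round(p_contain(n, d), 3), n <= 6 * d + 2)
```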
Another application is\nthe determination of the canonical convex body included in a random convex\npolytope given by independent copies of $X$, where our combinatorial approach\nallows us to generalize existing results in the random matrix community\nsignificantly.\n"} {"abstract": " Finding exact circuit size is a notorious optimization problem in practice.\nWhereas modern computers and algorithmic techniques allow one to find a circuit of\nsize seven in the blink of an eye, it may take more than a week to search for a\ncircuit of size thirteen. One of the reasons for this behavior is that the\nsearch space is enormous: the number of circuits of size $s$ is\n$s^{\Theta(s)}$, the number of Boolean functions on $n$ variables is $2^{2^n}$.\n In this paper, we explore the following natural heuristic idea for decreasing\nthe size of a given circuit: go through all its subcircuits of moderate size\nand check whether any of them can be improved by reducing to SAT. This may be\nviewed as a local search approach: we search for a smaller circuit in a ball\naround a given circuit. We report the results of experiments with various\nsymmetric functions.\n"} {"abstract": " The orientational correlation scheme introduced earlier for tetrahedral\nmolecules is extended to classify orientational correlations\nbetween pairs of high-symmetry molecules. While in the original algorithm a\ngiven orientation of a pair of tetrahedral molecules is characterized\nunambiguously by the number of ligand atoms that can be found between two\nplanes that contain each centre and are perpendicular to the centre-centre\nconnecting line, in the generalized algorithm, the planes are replaced by\ncones, whose apex angles are set according to the symmetry of each molecule. To\ndemonstrate the applicability of the method, the octahedral-shaped SF$_6$\nmolecule is studied in a wide range of phases (gaseous, supercritical fluid,\nliquid and plastic crystalline) using classical molecular dynamics. By\nanalyzing the orientational correlations, a close-contact region in the first\ncoordination shell and a medium-range order behaviour are identified in the\nnon-crystalline phases. While the former is invariant to changes of the\ndensity, the latter shows longer-ranged correlations as the density is raised. In\nthe plastic crystalline state, fluorine atoms are oriented along the lattice\ndirections with higher probability. To test the method for icosahedral\nsymmetries, the crystalline structures of room-temperature C$_{60}$ are\ngenerated by three sets of potentials that produce different local\narrangements. The novel analysis provided quantitative results on the preferred\narrangements.\n"} {"abstract": " The oxDNA model of DNA has been applied widely to systems in biology,\nbiophysics and nanotechnology. It is currently available via two independent\nopen source packages. Here we present a set of clearly documented exemplar\nsimulations that simultaneously provide both an introduction to simulating the\nmodel, and a review of the model's fundamental properties. We outline how\nsimulation results can be interpreted in terms of -- and feed into our\nunderstanding of -- less detailed models that operate at larger length scales,\nand provide guidance on whether simulating a system with oxDNA is worthwhile.\n"} {"abstract": " We study the problem of list-decodable mean estimation, where an adversary\ncan corrupt a majority of the dataset. 
Specifically, we are given a set $T$ of\n$n$ points in $\mathbb{R}^d$ and a parameter $0< \alpha <\frac 1 2$ such that\nan $\alpha$-fraction of the points in $T$ are i.i.d. samples from a\nwell-behaved distribution $\mathcal{D}$ and the remaining $(1-\alpha)$-fraction\nare arbitrary. The goal is to output a small list of vectors, at least one of\nwhich is close to the mean of $\mathcal{D}$. We develop new algorithms for\nlist-decodable mean estimation, achieving nearly optimal statistical\nguarantees, with running time $O(n^{1 + \epsilon_0} d)$, for any fixed\n$\epsilon_0 > 0$. All prior algorithms for this problem had additional\npolynomial factors in $\frac 1 \alpha$. We leverage this result, together with\nadditional techniques, to obtain the first almost-linear time algorithms for\nclustering mixtures of $k$ separated well-behaved distributions,\nnearly matching the statistical guarantees of spectral methods. Prior\nclustering algorithms inherently relied on an application of $k$-PCA, thereby\nincurring runtimes of $\Omega(n d k)$. This marks the first runtime improvement\nfor this basic statistical problem in nearly two decades.\n The starting point of our approach is a novel and simpler near-linear time\nrobust mean estimation algorithm in the $\alpha \to 1$ regime, based on a\none-shot matrix multiplicative weights-inspired potential decrease. We\ncrucially leverage this new algorithmic framework in the context of the\niterative multi-filtering technique of Diakonikolas et al. '18, '20, providing\na method to simultaneously cluster and downsample points using one-dimensional\nprojections -- thus, bypassing the $k$-PCA subroutines required by prior\nalgorithms.\n"} {"abstract": " A number of intellectual goods produced by online communities - e.g. open\nsource software or knowledge bases like Wikipedia - are in daily use by a broad\naudience, and thus their quality impacts the public at large. Yet, it is still\nunclear what contributes to the effectiveness of such online peer production\nsystems: what conditions or social processes help them deliver quality\nproducts. Specifically, while co-contribution (i.e. bipartite networks) is\noften investigated in online collaboration, the role of interpersonal\ncommunication in the coordination of online peer production is much less\ninvestigated. To address this gap we have reconstructed networks of personal\ncommunication (direct messaging) between Wikipedia editors gathered in\nso-called Wikiprojects - teams of contributors who focus on articles within\nspecific topical areas. We found that effective projects exchange a larger volume\nof direct messages and that their communication structure allows for complex\ncoordination: for sharing of information locally through selective ties, and at\nthe same time globally across the whole group. To verify how these network\nmeasures relate to the subjective perception of the importance of group\ncommunication we conducted semi-structured interviews with members of selected\nprojects. Our interviewees used direct communication for providing feedback,\nfor maintaining close relations and for tapping into the social capital of the\nWikipedia community. Our results underscore the importance of communication\nstructure in online collaboration: online peer production communities rely on\ninterpersonal communication to coordinate their work and to maintain high\nlevels of engagement. 
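Illustrative aside: the two structural ingredients highlighted here, selective local ties and global reach across the group, correspond to standard graph statistics; a toy sketch on a hypothetical messaging network (names and volumes invented):

```python
import networkx as nx

# Toy direct-messaging graph of one project: edge weight = message volume.
G = nx.Graph()
G.add_weighted_edges_from([
    ("ann", "bob", 12), ("bob", "cat", 8), ("ann", "cat", 5),
    ("cat", "dan", 3), ("dan", "eve", 7), ("eve", "ann", 2),
])

volume = G.size(weight="weight")          # total message volume
clustering = nx.average_clustering(G)     # local, selective ties
efficiency = nx.global_efficiency(G)      # global reachability of the group
print(volume, clustering, efficiency)
```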
Design of platforms for such communities should allow for\nample group-level communication as well as for one-on-one interactions.\n"} {"abstract": " A phenomenological Hamiltonian of a closed (i.e., unitary) quantum system is\nassumed to have an $N$ by $N$ real-matrix form composed of an unperturbed\ndiagonal-matrix part $H^{(N)}_0$ and of a tridiagonal-matrix perturbation\n$\lambda\,W^{(N)}(\lambda)$. The requirement of the unitarity of the evolution\nof the system (i.e., of the diagonalizability and of the reality of the\nspectrum) restricts, naturally, the variability of the matrix elements to a\n\"physical\" domain ${\cal D}^{[N]} \subset \mathbb{R}^d$. We fix the unperturbed\nmatrix (simulating a non-equidistant, square-well-type unperturbed spectrum)\nand we only admit the maximally non-Hermitian antisymmetric-matrix\nperturbations. This yields the hiddenly Hermitian model with the measure of\nperturbation $\lambda$ and with the $d=N$ matrix elements which are, inside\n${\cal D}^{[N]}$, freely variable. Our aim is to describe the quantum\nphase-transition boundary $\partial {\cal D}^{[N]}$ (alias exceptional-point\nboundary) at which the unitarity of the system is lost. Our main attention is\npaid to the strong-coupling extremes of stability, i.e., to Kato's\nexceptional points of order $N$ (EPN) and to the (sharply spiked) shape of the\nboundary $\partial {\cal D}^{[N]}$ in their vicinity. The feasibility of our\nconstructions is based on the use of high-precision arithmetic in\ncombination with computer-assisted symbolic manipulations (including, in\nparticular, the Gr\"{o}bner basis elimination technique).\n"} {"abstract": " Stance detection concerns the classification of a writer's viewpoint towards\na target. There are different task variants, e.g., stance of a tweet vs. a full\narticle, or stance with respect to a claim vs. an (implicit) topic. Moreover,\ntask definitions vary, which includes the label inventory, the data collection,\nand the annotation protocol. All these aspects hinder cross-domain studies, as\nthey require changes to standard domain adaptation approaches. In this paper,\nwe perform an in-depth analysis of 16 stance detection datasets, and we explore\nthe possibility of cross-domain learning from them. Moreover, we propose an\nend-to-end unsupervised framework for out-of-domain prediction of unseen,\nuser-defined labels. In particular, we combine domain adaptation techniques\nsuch as mixture of experts and domain-adversarial training with label\nembeddings, and we demonstrate sizable performance gains over strong baselines,\nboth (i) in-domain, i.e., for seen targets, and (ii) out-of-domain, i.e., for\nunseen targets. Finally, we perform an exhaustive analysis of the cross-domain\nresults, and we highlight the important factors influencing the model\nperformance.\n"} {"abstract": " An inspiration at the origin of wavelet analysis (when Grossmann, Morlet,\nMeyer and collaborators were interacting and exploring versions of multiscale\nrepresentations) was provided by the analysis of holomorphic signals, for which\nthe images of the phase of Cauchy wavelets were remarkable in their ability to\nreveal intricate singularities or dynamic structures, such as instantaneous\nfrequency jumps in musical recordings. Our goal is to follow their seminal work\nand introduce recent developments in nonlinear analysis. 
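Illustrative aside: the key object in the developments sketched next is the Blaschke factor $B_a(z)=(z-a)/(1-\bar{a} z)$ with $|a|<1$, which is unimodular on the unit circle, so on the boundary only its phase varies, winding once per factor; a quick numerical check of these two standard facts:

```python
import numpy as np

def blaschke(z, a):
    """Single Blaschke factor B_a(z) = (z - a) / (1 - conj(a) z), |a| < 1."""
    return (z - a) / (1 - np.conj(a) * z)

a = 0.5 + 0.3j
t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
B = blaschke(np.exp(1j * t), a)          # values on the unit circle

print(np.allclose(np.abs(B), 1.0))       # unimodular on the boundary -> True

# The boundary phase is monotone and gains 2*pi per factor (winding number 1);
# products of factors add their windings, i.e. the "depth" of the composition.
phase = np.unwrap(np.angle(B))
print((phase[-1] - phase[0]) / (2 * np.pi))   # close to 1
```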
In particular, we\nsketch methods extending conventional Fourier analysis, exploiting both phase\nand amplitude of holomorphic functions. The Blaschke factors are a key\ningredient in building analytic tools, starting with the Malmquist--Takenaka\northonormal bases of the Hardy space, continuing with \"best\" adapted bases\nobtained through phase unwinding, and concluding with relations to composition\nof Blaschke products and their dynamics. We also remark that the phase of a\nBlaschke product is a one-layer neural net with arctan as an activation sigmoid\nand that the composition is a \"Deep Neural Net\" whose depth is the number of\ncompositions. Our results provide a rich library of related orthonormal\nbases.\n"} {"abstract": " In autonomous driving, perceiving the driving behaviors of surrounding agents\nis important for the ego-vehicle to make a reasonable decision. In this paper,\nwe propose a neural network model based on trajectory information for driving\nbehavior recognition. Unlike existing trajectory-based methods that recognize\nthe driving behavior using hand-crafted features or directly encoding the\ntrajectory, our model involves a Multi-Scale Convolutional Neural Network\n(MSCNN) module to automatically extract the high-level features which are\nsupposed to encode the rich spatial and temporal information. Given a\ntrajectory sequence of an agent as the input, firstly, the Bi-directional Long\nShort Term Memory (Bi-LSTM) module and the MSCNN module respectively process\nthe input, generating two features, and then the two features are fused to\nclassify the behavior of the agent. We evaluate the proposed model on the\npublic BLVD dataset, achieving a satisfying performance.\n"} {"abstract": " High load latency that results from deep cache hierarchies and relatively\nslow main memory is an important limiter of single-thread performance. Data\nprefetch helps reduce this latency by fetching data up the hierarchy before it\nis requested by load instructions. However, data prefetching has been shown to be\nimperfect in many situations. We propose cache-level prediction to complement\nprefetchers. Our method predicts which memory hierarchy level a load will\naccess, allowing memory loads to start earlier and thereby saving many\ncycles. The predictor provides high prediction accuracy at the cost of just one\ncycle of added latency to L1 misses. Experimental results show a speedup of 7.8\% on\ngeneric, graph, and HPC applications over a baseline with aggressive\nprefetchers.\n"} {"abstract": " Buckling strength estimation of architected materials has mainly been\nrestricted to load cases oriented along symmetry axes. However, realistic load\nscenarios normally exhibit more general stress distributions. In this paper we\npropose a simple yet accurate method to estimate the buckling strength of\nstretch-dominated lattice structures based on individual member analysis. As an\nintegral part of the method, the yield strength is also determined. This\nsimplified model is verified by rigorous numerical analysis. In particular, we\nefficiently compute the complete buckling strength surfaces of an orthotropic\nbulk-modulus-optimal plate lattice structure and isotropic stiffness-optimal\nplate and truss lattice structures subjected to rotated uni-axial loads, where\nthe ratio between the highest and lowest buckling strength is found to be 1.77,\n2.11 and 2.41, respectively. 
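Illustrative aside: the member-level estimate behind such individual member analysis is essentially an Euler buckling check on each strut; a generic sketch with invented dimensions (the material values and end-condition factor $k$ are assumptions, not the paper's data):

```python
import numpy as np

def euler_critical_load(E, I, L, k=1.0):
    """Euler buckling load of a strut: P_cr = pi^2 E I / (k L)^2,
    with k the effective-length factor set by the end conditions."""
    return np.pi ** 2 * E * I / (k * L) ** 2

# Illustrative solid circular strut in a truss lattice (SI units).
E = 70e9                        # Young's modulus, e.g. aluminium
r, L = 0.5e-3, 10e-3            # member radius and length
I = np.pi * r ** 4 / 4          # second moment of area of a solid circle
A = np.pi * r ** 2

P_cr = euler_critical_load(E, I, L, k=0.5)   # clamped-clamped ends: k = 0.5
# A member-wise strength estimate takes the minimum, over all members, of
# the load factor at which the member's axial force reaches P_cr.
print(P_cr, P_cr / A)           # critical load and critical axial stress
```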
For comparison, we also provide their yield\nstrength surfaces, where the corresponding ratios are 1.84, 1.16 and 1.79.\nFurthermore, we use the knowledge gained from the simplified model to create a\nnew configuration of the isotropic plate lattice structure with a more\nisotropic buckling strength surface and buckling strength ratio of 1.24,\nwithout deterioration of the stiffness or yield strength. The proposed method\nprovides a valuable tool to quickly estimate the microstructural buckling\nstrength of stretch-dominated lattice structures, especially for applications\nwhere the stress state is non-uniform such as infill in additive manufacturing.\n"} {"abstract": " We analyze the phase diagram of a topological insulator model including\nantiferromagnetic interactions in the form of an extended Su-Schrieffer-Heeger\nmodel. To this end, we employ a recently introduced operational definition of\ntopological order based on the ability of a system to perform topological error\ncorrection. We show that the necessary error correction statistics can be\nobtained efficiently using a Monte-Carlo sampling of a matrix product state\nrepresentation of the ground state wave function. Specifically, we identify two\ndistinct symmetry-protected topological phases corresponding to two different\nfully dimerized reference states. Finally, we extend the notion of error\ncorrection to the classification of thermodynamic phases exhibiting local order\nparameters, finding a topologically trivial antiferromagnetic phase for\nsufficiently strong interactions.\n"} {"abstract": " Cyber-Physical Systems (CPS) have made tremendous progress in recent years\nand disrupted many technical fields, such as smart industries, smart\nhealth and smart transportation, boosting national economies. However, CPS\nsecurity is still one of the concerns for wide adoption, owing to the high number of\ndevices connecting to the internet, and traditional security solutions may\nnot be suitable to protect against advanced, application-specific attacks. This\npaper presents a programmable device network layer architecture to combat\nattacks and enable efficient network monitoring in heterogeneous-environment CPS\napplications. We leverage Industrial control systems (ICS) to discuss the\nexisting issues, highlighting the importance of an advanced network layer for CPS.\nThe programmable data plane language P4 is introduced to detect the well-known\nHELLO flood attack with minimal effort at the network level, and is also used to\nhighlight potential security solutions.\n"} {"abstract": " News recommender systems are marked by a few unique challenges specific\nto the news domain. These challenges emerge from rapidly evolving readers'\ninterests over dynamically generated news items that continuously change over\ntime. News reading is also driven by a blend of a reader's long-term and\nshort-term interests. In addition, diversity is required in a news recommender\nsystem, not only to keep the reader engaged in the reading process but also to\nexpose them to different views and opinions. In this paper, we propose a deep\nneural network that jointly learns informative news and readers' interests within\na unified framework. We learn the news representation (features) from the\nheadlines, snippets (body) and taxonomy (category, subcategory) of news. 
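Illustrative aside: the attention mechanism invoked just below reduces, at its simplest, to softmax-weighted pooling of clicked-article embeddings; a minimal sketch with invented shapes and names (not the paper's architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, q):
    """Scaled dot-product attention: pool clicked-news vectors H
    (num_clicks x dim) against a query q (dim,) into one interest vector."""
    scores = H @ q / np.sqrt(H.shape[1])   # one score per clicked article
    w = softmax(scores)                    # attention weights, sum to 1
    return w @ H, w

rng = np.random.default_rng(1)
H = rng.normal(size=(7, 16))    # embeddings of 7 clicked articles
q = rng.normal(size=16)         # e.g. a candidate-news embedding as the query
interest, weights = attention_pool(H, q)
print(weights.round(3), interest.shape)
```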
We\nlearn a reader's long-term interests from the reader's click history,\nshort-term interests from the recent clicks via LSTMs and the diversified\nreader's interests through the attention mechanism. We also apply different\nlevels of attention to our model. We conduct extensive experiments on two news\ndatasets to demonstrate the effectiveness of our approach.\n"} {"abstract": " Two-dimensional (2D) ZrS2 monolayer (ML) has emerged as a promising candidate\nfor thermoelectric (TE) device applications due to its high TE figure of merit,\nwhich derives mainly from its inherently low lattice thermal conductivity.\nThis work investigates the effect of the lattice anharmonicity driven by\ntemperature-dependent phonon dispersions on thermal transport of ZrS2 ML. The\ncalculations are based on self-consistent phonon (SCP) theory, which yields\nthe thermodynamic parameters along with the lattice thermal conductivity. The\nhigher-order (quartic) force constants were extracted by using an efficient\ncompressive sensing lattice dynamics technique, which estimates the necessary\ndata with an emerging machine learning approach as an alternative to\ncomputationally expensive density functional theory calculations. Resolution of\nthe degeneracy and hardening of the vibrational frequencies of low-energy\noptical modes were predicted upon including the quartic anharmonicity. As\ncompared to the conventional Boltzmann transport equation (BTE) approach, the\nlattice thermal conductivity of the optimized ZrS2 ML unit cell within the SCP +\nBTE approach is found to be significantly enhanced (e.g., by 21% at 300 K).\nThis enhancement is due to the relatively lower phonon linewidths\ncontributed by the anharmonic frequency renormalization included in the SCP\ntheory. Mainly, the conventional BTE approach neglects the temperature\ndependence of the phonon frequencies due to the consideration of harmonic\nlattice dynamics and treats the normal process of three-phonon scattering\nincorrectly due to the use of quasi-particle lifetimes. These limitations are\naddressed in this work within the SCP + BTE approach, which signifies the\nvalidity and accuracy of this approach.\n"} {"abstract": " Let $R$ be a finite commutative ring with identity and $U(R)$ be its group of\nunits. In 2005, El-Kassar and Chehade presented a ring structure for $U(R)$ and\nas a consequence they generalized this group of units to the generalized group\nof units $U^{k}\left( R\right) $ defined iteratively as the group of units\nof $U^{k-1}(R)$, with $U^{1}\left( R\right) =U(R) $. In this paper, we examine\nthe structure of this group when $R=\mathbb{Z}_{n}.$ We find a decomposition\nof $U^{k}\left(\mathbb{Z}_{n}\right)$ as a direct product of cyclic groups for\nthe general case of any $k$, and we study when these groups are Boolean and\ntrivial. We also show that this decomposition structure is directly related to\nthe Pratt Tree primes.\n"} {"abstract": " Cluster-randomized experiments are widely used due to their logistical\nconvenience and policy relevance. To analyze them properly, we must address the\nfact that the treatment is assigned at the cluster level instead of the\nindividual level. Standard analytic strategies are regressions based on\nindividual data, cluster averages, and cluster totals, which differ when the\ncluster sizes vary. These methods are often motivated by models with strong and\nunverifiable assumptions, and the choice among them can be subjective. 
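Illustrative aside: the three regression strategies just listed can be compared directly on simulated data; a sketch of their difference-in-means versions under varying cluster sizes (simulated data and scalings are ours, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, tau = 40, 1.0
sizes = rng.integers(5, 80, n_clusters)                   # varying cluster sizes
Z = rng.permutation(np.repeat([0, 1], n_clusters // 2))   # cluster-level assignment

# Individual outcomes: cluster effect + treatment effect + noise.
y_bar, y_tot, y_ind, z_ind = [], [], [], []
for c in range(n_clusters):
    y = rng.normal(0, 1) + tau * Z[c] + rng.normal(0, 1, sizes[c])
    y_bar.append(y.mean()); y_tot.append(y.sum())
    y_ind.append(y); z_ind.append(np.full(sizes[c], Z[c]))
y_bar, y_tot = np.array(y_bar), np.array(y_tot)
y_ind, z_ind = np.concatenate(y_ind), np.concatenate(z_ind)

# 1) cluster averages: weights every cluster equally.
est_avg = y_bar[Z == 1].mean() - y_bar[Z == 0].mean()
# 2) individual data: implicitly weights clusters by size.
est_ind = y_ind[z_ind == 1].mean() - y_ind[z_ind == 0].mean()
# 3) cluster totals, scaled here by the mean cluster size.
est_tot = (y_tot[Z == 1].mean() - y_tot[Z == 0].mean()) / sizes.mean()
print(est_avg, est_ind, est_tot)
```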
Without\nany outcome modeling assumption, we evaluate these regression estimators and\nthe associated robust standard errors from a design-based perspective where\nonly the treatment assignment itself is random and controlled by the\nexperimenter. We demonstrate that regression based on cluster averages targets\na weighted average treatment effect, regression based on individual data is\nsuboptimal in terms of efficiency, and regression based on cluster totals is\nconsistent and more efficient with a large number of clusters. We highlight the\ncritical role of covariates in improving estimation efficiency, and illustrate\nthe efficiency gain via both simulation studies and data analysis. Moreover, we\nshow that the robust standard errors are convenient approximations to the true\nasymptotic standard errors under the design-based perspective. Our theory holds\neven when the outcome models are misspecified, so it is model-assisted rather\nthan model-based. We also extend the theory to a wider class of weighted\naverage treatment effects.\n"} {"abstract": " Face anti-spoofing approaches based on domain generalization (DG) have drawn\ngrowing attention due to their robustness in unseen scenarios. Previous\nmethods treat each sample from multiple domains indiscriminately during the\ntraining process, and endeavor to extract a common feature space to improve the\ngeneralization. However, due to complex and biased data distributions, directly\ntreating them equally will corrupt the generalization ability. To settle this\nissue, we propose a novel Dual Reweighting Domain Generalization (DRDG)\nframework which iteratively reweights the relative importance between samples\nto further improve the generalization. Concretely, a Sample Reweighting Module is\nfirst proposed to identify samples with relatively large domain bias, and\nreduce their impact on the overall optimization. Afterwards, a Feature\nReweighting Module is introduced to focus on these samples and extract more\ndomain-irrelevant features via a self-distilling mechanism. Combined with the\ndomain discriminator, the iteration of the two modules promotes the extraction\nof generalized features. Extensive experiments and visualizations are presented\nto demonstrate the effectiveness and interpretability of our method against the\nstate-of-the-art competitors.\n"} {"abstract": " Quantum channels in free space, an essential prerequisite for fundamental\ntests of quantum mechanics and quantum technologies in open space, have so far\nbeen based on direct line-of-sight because the predominant approaches for\nphoton encoding, including polarization and spatial modes, are not compatible\nwith randomly scattered photons. Here we demonstrate a novel approach to\ntransfer and recover quantum coherence from scattered, non-line-of-sight\nphotons analyzed in a multimode and imaging interferometer for time-bins,\ncombined with photon detection based on an 8x8 single-photon detector array. The\nobserved time-bin visibility for scattered photons remained at a high $95\%$\nover a wide scattering angle range of -45 degrees to +45 degrees, while the\nindividual pixels in the detector array resolve or track an image in its field\nof view of ca. 0.5 degrees. Using our method we demonstrate the viability of\ntwo novel applications. 
Firstly, using scattered photons as an indirect channel\nfor quantum communication, thereby enabling non-line-of-sight quantum\ncommunication with background suppression, and secondly, using the combined\narrival time and quantum coherence to enhance the contrast of low-light imaging\nand laser ranging under high background light. We believe our method will\ninstigate new lines of research and development on applying photon coherence\nfrom scattered signals to quantum sensing, imaging, and communication in\nfree-space environments.\n"} {"abstract": " Aharonov-Bohm interferometry is the most direct probe of anyonic statistics\nin the quantum Hall effect. The technique involves oscillations of the electric\ncurrent as a function of the magnetic field and is not applicable to Kitaev\nspin liquids and other systems without charged quasiparticles. Here, we\nestablish a novel protocol, involving heat transport, for revealing fractional\nstatistics even in the absence of charged excitations, as is the case in\nquantum spin liquids. Specifically, we demonstrate that heat transport in\nKitaev spin liquids through two distinct interferometer geometries, Fabry-Perot\nand Mach-Zehnder, exhibits drastically different behaviors. Therefore, we\npropose the use of heat transport interferometry as a probe of anyonic\nstatistics in charge insulators.\n"} {"abstract": " We consider 5d supersymmetric gauge theories with unitary groups in the\n$\Omega$-background and study codim-2/4 BPS defects supported on orthogonal\nplanes intersecting at the origin along a circle. The intersecting defects\narise upon implementing the most generic Higgsing (geometric transition) to the\nparent higher-dimensional theory, and they are described by pairs of 3d\nsupersymmetric gauge theories with unitary groups interacting through 1d matter\nat the intersection. We explore the relations between instanton and generalized\nvortex calculus, pointing out a duality between intersecting defects subject to\nthe $\Omega$-background and a deformation of supergroup gauge theories, the\nexact supergroup point being achieved in the self-dual or unrefined limit.\nEmbedding our setup into refined topological strings and in the simplest case\nwhen the parent 5d theory is Abelian, we are able to identify the supergroup\ntheory dual to the intersecting defects as the supergroup version of refined\nChern-Simons theory via open/closed duality. We also discuss the BPS/CFT side\nof the correspondence, finding an interesting large rank duality with\nsuper-instanton counting.\n"} {"abstract": " We designed, constructed and have been operating a system based on\nsingle-crystal synthetic diamond sensors to monitor the beam losses at the\ninteraction region of the SuperKEKB asymmetric-energy electron-positron\ncollider. The system records the radiation dose-rates at positions close to the\ninner detectors of the Belle II experiment, and protects both the detector and\naccelerator components against destructive beam losses by participating in the\nbeam-abort system. It also provides complementary information for the dedicated\nstudies of beam-related backgrounds. 
We describe the performance of the system\nduring the commissioning of the accelerator and during the first physics data\ntaking.\n"} {"abstract": " We study a class of convex-concave saddle-point problems of the form\n$\\min_x\\max_y \\langle Kx,y\\rangle+f_{\\cal{P}}(x)-h^\\ast(y)$ where $K$ is a\nlinear operator, $f_{\\cal{P}}$ is the sum of a convex function $f$ with a\nLipschitz-continuous gradient and the indicator function of a bounded convex\npolytope $\\cal{P}$, and $h^\\ast$ is a convex (possibly nonsmooth) function.\nSuch problem arises, for example, as a Lagrangian relaxation of various\ndiscrete optimization problems. Our main assumptions are the existence of an\nefficient linear minimization oracle ($lmo$) for $f_{\\cal{P}}$ and an efficient\nproximal map for $h^*$ which motivate the solution via a blend of proximal\nprimal-dual algorithms and Frank-Wolfe algorithms. In case $h^*$ is the\nindicator function of a linear constraint and function $f$ is quadratic, we\nshow a $O(1/n^2)$ convergence rate on the dual objective, requiring $O(n \\log\nn)$ calls of $lmo$. If the problem comes from the constrained optimization\nproblem $\\min_{x\\in\\mathbb R^d}\\{f_{\\cal{P}}(x)\\:|\\:Ax-b=0\\}$ then we\nadditionally get bound $O(1/n^2)$ both on the primal gap and on the\ninfeasibility gap. In the most general case, we show a $O(1/n)$ convergence\nrate of the primal-dual gap again requiring $O(n\\log n)$ calls of $lmo$. To the\nbest of our knowledge, this improves on the known convergence rates for the\nconsidered class of saddle-point problems. We show applications to labeling\nproblems frequently appearing in machine learning and computer vision.\n"} {"abstract": " We study the optical properties of the solar gravitational lens (SGL) while\ntreating the Sun as an extended, axisymmetric and rotating body. The\ngravitational field of the Sun is represented using a set of zonal harmonics.\nWe develop an analytical description of the intensity of light that is observed\nin the image plane in the strong interference region of a realistic SGL. This\nformalism makes it possible to model not only the point-spread function of\npoint sources, but also actual observables, images that form in the focal plane\nof an imaging telescope positioned in the image plane. Perturbations of the\nmonopole gravitational field of the Sun are dominated by the solar quadrupole\nmoment, which results in forming an astroid caustic on the image plane.\nConsequently, an imaging telescope placed inside the astroid caustic observes\nfour bright spots, forming the well-known pattern of an Einstein cross. The\nrelative intensities and positions of these spots change as the telescope is\nmoved in the image plane, with spots merging into bright arcs when the\ntelescope approaches the caustic boundary. Outside the astroid caustic, only\ntwo spots remain and the observed pattern eventually becomes indistinguishable\nfrom the imaging pattern of a monopole lens at greater distances from the\noptical axis. 
We present results from extensive numerical simulations, forming\nthe basis of our ongoing study of prospective exoplanet imaging with the SGL.\nThese results are also applicable to describe a large class of gravitational\nlensing scenarios involving axisymmetric lenses that can be represented using\nzonal harmonics.\n"} {"abstract": " Deep learning affords enormous opportunities to augment the armamentarium of\nbiomedical imaging, albeit its design and implementation have potential flaws.\nFundamentally, most deep learning models are driven entirely by data without\nconsideration of any prior knowledge, which dramatically increases the\ncomplexity of neural networks and limits the application scope and model\ngeneralizability. Here we establish a geometry-informed deep learning framework\nfor ultra-sparse 3D tomographic image reconstruction. We introduce a novel\nmechanism for integrating geometric priors of the imaging system. We\ndemonstrate that the seamless inclusion of known priors is essential to enhance\nthe performance of 3D volumetric computed tomography imaging with ultra-sparse\nsampling. The study opens new avenues for data-driven biomedical imaging and\npromises to provide substantially improved imaging tools for various clinical\nimaging and image-guided interventions.\n"} {"abstract": " We present an exploratory case study describing the design and realisation of\na ''pure mixed reality'' application in a museum setting, where we investigate\nthe potential of using Microsoft's HoloLens for object-centred museum\nmediation. Our prototype supports non-expert visitors observing a sculpture by\noffering interpretation that is linked to visual properties of the museum\nobject. The design and development of our research prototype is based on a\ntwo-stage visitor observation study and a formative study we conducted prior to\nthe design of the application. We present a summary of our findings from these\nstudies and explain how they have influenced our user-centred content creation\nand the interaction design of our prototype. We are specifically interested in\ninvestigating to what extent different constructs of initiative influence the\nlearning and user experience. Thus, we detail three modes of activity that we\nrealised in our prototype. Our case study is informed by research in the area\nof human-computer interaction, the humanities and museum practice. Accordingly,\nwe discuss core concepts, such as gaze-based interaction, object-centred\nlearning, presence, and modes of activity and guidance with a transdisciplinary\nperspective.\n"} {"abstract": " The computational power increases over the past decades havegreatly enhanced\nthe ability to simulate chemical reactions andunderstand ever more complex\ntransformations. Tensor contractions are the fundamental computational building\nblock of these simulations. These simulations have often been tied to one\nplatform and restricted in generality by the interface provided to the user.\nThe expanding prevalence of accelerators and researcher demands necessitate a\nmore general approach which is not tied to specific hardware or requires\ncontortion of algorithms to specific hardware platforms. In this paper we\npresent COMET, a domain-specific programming language and compiler\ninfrastructure for tensor contractions targeting heterogeneous accelerators. 
We\npresent a system of progressive lowering through multiple layers of abstraction\nand optimization that achieves up to 1.98X speedup for 30 tensor contractions\ncommonly used in computational chemistry and beyond.\n"} {"abstract": " The next generation of galaxy surveys will allow us to test some fundamental\naspects of the standard cosmological model, including the assumption of a\nminimal coupling between the components of the dark sector. In this paper, we\npresent the Javalambre Physics of the Accelerated Universe Astrophysical Survey\n(J-PAS) forecasts on a class of unified models where cold dark matter interacts\nwith a vacuum energy, considering future observations of baryon acoustic\noscillations, redshift-space distortions, and the matter power spectrum. After\nproviding a general framework to study the background and linear perturbations,\nwe focus on a concrete interacting model without momentum exchange by taking\ninto account the contribution of baryons. We compare the J-PAS results with\nthose expected for DESI and Euclid surveys and show that J-PAS is competitive\nto them, especially at low redshifts. Indeed, the predicted errors for the\ninteraction parameter, which measures the departure from a $\\Lambda$CDM model,\ncan be comparable to the actual errors derived from the current data of cosmic\nmicrowave background temperature anisotropies.\n"} {"abstract": " The scaling of different features of stream-wise normal stress profiles\n$\\langle uu\\rangle^+(y^+)$ in turbulent wall-bounded flows, in particular in\ntruly parallel flows, such as channel and pipe flows, is the subject of a long\nrunning debate. Particular points of contention are the scaling of the \"inner\"\nand \"outer\" peaks of $\\langle uu\\rangle^+$ at $y^+\\approxeq 15$ and $y^+\n=\\mathcal{O}(10^3)$, respectively, their infinite Reynolds number limit, and\nthe rate of logarithmic decay in the outer part of the flow. Inspired by the\nlandmark paper of Chen and Sreenivasan (2021), two terms of the inner\nasymptotic expansion of $\\langle uu\\rangle^+$ in the small parameter\n$Re_\\tau^{-1/4}$ are extracted for the first time from a set of direct\nnumerical simulations (DNS) of channel flow. This inner expansion is completed\nby a matching outer expansion, which not only fits the same set of channel DNS\nwithin 1.5\\% of the peak stress, but also provides a good match of laboratory\ndata in pipes and the near-wall part of boundary layers, up to the highest\n$Re_\\tau$'s of order $10^5$. The salient features of the new composite\nexpansion are first, an inner $\\langle uu\\rangle^+$ peak, which saturates at\n11.3 and decreases as $Re_\\tau^{-1/4}$, followed by a short \"wall loglaw\" with\na slope that becomes positive for $Re_\\tau \\gtrapprox 20'000$, leading up to an\nouter peak, and an outer logarithmic overlap with a negative slope continuously\ngoing to zero for $Re_\\tau \\to\\infty$.\n"} {"abstract": " In recent years, two-dimensional van der Waals materials have emerged as an\nimportant platform for the observation of long-range ferromagnetic order in\natomically thin layers. Although heterostructures of such materials can be\nconceived to harness and couple a wide range of magneto-optical and\nmagneto-electrical properties, technologically relevant applications require\nCurie temperatures at or above room-temperature and the ability to grow films\nover large areas. 
Here we demonstrate the large-area growth of single-crystal\nultrathin films of stoichiometric Fe5GeTe2 on an insulating substrate using\nmolecular beam epitaxy. Magnetic measurements show the persistence of soft\nferromagnetism up to room temperature, with a Curie temperature of 293 K, and a\nweak out-of-plane magnetocrystalline anisotropy. Surface, chemical, and\nstructural characterizations confirm the layer-by-layer growth, 5:1:2 Fe:Ge:Te\nstoichiometric elementary composition, and single crystalline character of the\nfilms.\n"} {"abstract": " \\textbf{Background} Hydrogels are crosslinked polymer networks that can\nabsorb and retain a large fraction of liquid. Near a critical sliding velocity,\nhydrogels pressed against smooth surfaces exhibit time-dependent frictional\nbehavior occurring over multiple timescales, yet the origin of these dynamics\nis unresolved. \\textbf{Objective} Here, we characterize this time-dependent\nregime and show that it is consistent with two distinct molecular processes:\nsliding-induced relaxation and quiescent recovery. \\textbf{Methods} Our\nexperiments use a custom pin-on-disk tribometer to examine poly(acrylic acid)\nhydrogels on smooth poly(methyl methacrylate) surfaces over a variety of\nsliding conditions, from minutes to hours. \\textbf{Results} We show that at a\nfixed sliding velocity, the friction coefficient decays exponentially and\nreaches a steady-state value. The time constant associated with this decay\nvaries exponentially with the sliding velocity, and is sensitive to any\nprecedent frictional shearing of the interface. This process is reversible;\nupon cessation of sliding, the friction coefficient recovers to its original\nstate. We also show that the initial direction of shear can be imprinted as an\nobservable \"memory\", and is visible after 24 hrs of repeated frictional\nshearing. \\textbf{Conclusions} We attribute this behavior to nanoscale\nextension and relaxation dynamics of the near-surface polymer network, leading\nto a model of frictional relaxation and recovery with two parallel timescales.\n"} {"abstract": " It is shown that the equalization of temperatures between our and mirror\nsectors occurs during one Hubble time due to microscopic black hole production\nand evaporation in particle collisions if the temperature of the Universe is\nnear the multidimensional Plank mass. This effect excludes the multidimensional\nPlanck masses smaller than the reheating temperature of the Universe\n($\\sim10^{13}$ GeV) in the mirror matter models, because the primordial\nnucleosynthesis theory requires that the temperature of the mirror world should\nbe lower than ours. In particular, the birth of microscopic black holes in the\nLHC is impossible if the dark matter of our Universe is represented by baryons\nof mirror matter. It excludes some of the possible coexisting options in\nparticle physics and cosmology. Multidimensional models with flat additional\ndimensions are already strongly constrained in maximum temperature due to the\neffect of Kaluza-Klein mode (KK-mode) overproduction. In these models, the\nreheating temperature should be significantly less than the multidimensional\nPlanck mass, so our restrictions in this case are not paramount. 
The new\nconstraints play a role in multidimensional models in which the spectrum of\nKK-modes does not lead to their overproduction in the early Universe, for\nexample, in theories with hyperbolic additional space.\n"} {"abstract": " Multilingual models have demonstrated impressive cross-lingual transfer\nperformance. However, test sets like XNLI are monolingual at the example level.\nIn multilingual communities, it is common for polyglots to code-mix when\nconversing with each other. Inspired by this phenomenon, we present two strong\nblack-box adversarial attacks (one word-level, one phrase-level) for\nmultilingual models that push their ability to handle code-mixed sentences to\nthe limit. The former uses bilingual dictionaries to propose perturbations and\ntranslations of the clean example for sense disambiguation. The latter directly\naligns the clean example with its translations before extracting phrases as\nperturbations. Our phrase-level attack has a success rate of 89.75% against\nXLM-R-large, bringing its average accuracy of 79.85 down to 8.18 on XNLI.\nFinally, we propose an efficient adversarial training scheme that trains in the\nsame number of steps as the original model and show that it improves model\naccuracy.\n"} {"abstract": " We present a statistical study of the largest bibliographic compilation of\nstellar and orbital parameters of W UMa stars derived by light curve synthesis\nwith Roche models. The compilation includes nearly 700 individually\ninvestigated objects from over 450 distinct publications. Almost 70% of this\nsample is comprised of stars observed in the last decade that have not been\nconsidered in previous statistical studies. We estimate the ages of the\ncataloged stars, model the distributions of their periods, mass ratios,\ntemperatures and other quantities, and compare them with the data from CRTS,\nLAMOST and Gaia archives. As only a small fraction of the sample has radial\nvelocity curves, we examine the reliability of the photometric mass ratios in\ntotally and partially eclipsing systems and find that totally eclipsing W UMa\nstars with photometric mass ratios have the same parameter distributions as\nthose with spectroscopic mass ratios. Most of the stars with reliable\nparameters have mass ratios below 0.5 and orbital periods shorter than 0.5\ndays. Stars with longer periods and temperatures above 7000 K stand out as\noutliers and shouldn't be labeled as W UMa binaries.\n The collected data is available as an online database at\nhttps://wumacat.aob.rs.\n"} {"abstract": " Among the models of disordered conduction and localization, models with $N$\norbitals per site are attractive both for their mathematical tractability and\nfor their physical realization in coupled disordered grains. However Wegner\nproved that there is no Anderson transition and no localized phase in the $N\n\\rightarrow \\infty$ limit, if the hopping constant $K$ is kept fixed. Here we\nshow that the localized phase is preserved in a different limit where $N$ is\ntaken to infinity and the hopping $K$ is simultaneously adjusted to keep $N \\,\nK$ constant. We support this conclusion with two arguments. The first is\nnumerical computations of the localization length showing that in the $N\n\\rightarrow \\infty$ limit the site-diagonal-disorder model possesses a\nlocalized phase if $N\\,K$ is kept constant, but does not possess that phase if\n$K$ is fixed. 
The second argument is a detailed analysis of the energy and\nlength scales in a functional integral representation of the gauge invariant\nmodel. The analysis shows that in the $K$ fixed limit the functional integral's\nspins do not exhibit long distance fluctuations, i.e. such fluctuations are\nmassive and therefore decay exponentially, which signals conduction. In\ncontrast the $N\\,K$ fixed limit preserves the massless character of certain\nspin fluctuations, allowing them to fluctuate over long distance scales and\ncause Anderson localization.\n"} {"abstract": " Mutation and drift play opposite roles in genetics. While mutation creates\ndiversity, drift can cause gene variants to disappear, especially when they are\nrare. In the absence of natural selection and migration, the balance between\nthe drift and mutation in a well-mixed population defines its diversity. The\nMoran model captures the effects of these two evolutionary forces and has a\ncounterpart in social dynamics, known as the Voter model with external opinion\ninfluencers. Two extreme outcomes of the Voter model dynamics are consensus and\ncoexistence of opinions, which correspond to low and high diversity in the\nMoran model. Here we use a Shannon's information-theoretic approach to\ncharacterize the smooth transition between the states of consensus and\ncoexistence of opinions in the Voter model. Mapping the Moran into the Voter\nmodel we extend the results to the mutation-drift balance and characterize the\ntransition between low and high diversity in finite populations. Describing the\npopulation as a network of connected individuals we show that the transition\nbetween the two regimes depends on the network topology of the population and\non the possible asymmetries in the mutation rates.\n"} {"abstract": " Granulation of quantum matter -- the formation of persistent small-scale\npatterns -- is realized in the images of quasi-one-dimensional Bose-Einstein\ncondensates perturbed by a periodically modulated interaction. Our present\nanalysis of a mean-field approximation suggests that granulation is caused by\nthe gradual transformation of phase undulations into density undulations. This\nis achieved by a suitably large modulation frequency, while for low enough\nfrequencies the system exhibits a quasi-adiabatic regime. We show that the\npersistence of granulation is a result of the irregular evolution of the phase\nof the wavefunction representing an irreversible process. Our model predictions\nagree with numerical solutions of the Schr\\\"odinger equation and experimental\nobservations. The numerical computations reveal the emergent many-body\ncorrelations behind these phenomena via the multi-configurational\ntime-dependent Hartree theory for bosons (MCTDHB).\n"} {"abstract": " Production high-performance computing systems continue to grow in complexity\nand size. As applications struggle to make use of increasingly heterogeneous\ncompute nodes, maintaining high efficiency (performance per watt) for the whole\nplatform becomes a challenge. Alongside the growing complexity of scientific\nworkloads, this extreme heterogeneity is also an opportunity: as applications\ndynamically undergo variations in workload, due to phases or data/compute\nmovement between devices, one can dynamically adjust power across compute\nelements to save energy without impacting performance. 
With an aim toward an\nautonomous and dynamic power management strategy for current and future HPC\narchitectures, this paper explores the use of control theory for the design of\na dynamic power regulation method. Structured as a feedback loop, our\napproach-which is novel in computing resource management-consists of\nperiodically monitoring application progress and choosing at runtime a suitable\npower cap for processors. Thanks to a preliminary offline identification\nprocess, we derive a model of the dynamics of the system and a\nproportional-integral (PI) controller. We evaluate our approach on top of an\nexisting resource management framework, the Argo Node Resource Manager,\ndeployed on several clusters of Grid'5000, using a standard memory-bound HPC\nbenchmark.\n"} {"abstract": " We present a data-driven optimization framework for redesigning police patrol\nzones in an urban environment. The objectives are to rebalance police workload\namong geographical areas and to reduce response time to emergency calls. We\ndevelop a stochastic model for police emergency response by integrating\nmultiple data sources, including police incidents reports, demographic surveys,\nand traffic data. Using this stochastic model, we optimize zone redesign plans\nusing mixed-integer linear programming. Our proposed design was implemented by\nthe Atlanta Police Department in March 2019. By analyzing data before and after\nthe zone redesign, we show that the new design has reduced the response time to\nhigh priority 911 calls by 5.8\\% and the imbalance of police workload among\ndifferent zones by 43\\%.\n"} {"abstract": " We present a novel class of projected methods, to perform statistical\nanalysis on a data set of probability distributions on the real line, with the\n2-Wasserstein metric. We focus in particular on Principal Component Analysis\n(PCA) and regression. To define these models, we exploit a representation of\nthe Wasserstein space closely related to its weak Riemannian structure, by\nmapping the data to a suitable linear space and using a metric projection\noperator to constrain the results in the Wasserstein space. By carefully\nchoosing the tangent point, we are able to derive fast empirical methods,\nexploiting a constrained B-spline approximation. As a byproduct of our\napproach, we are also able to derive faster routines for previous work on PCA\nfor distributions. By means of simulation studies, we compare our approaches to\npreviously proposed methods, showing that our projected PCA has similar\nperformance for a fraction of the computational cost and that the projected\nregression is extremely flexible even under misspecification. Several\ntheoretical properties of the models are investigated and asymptotic\nconsistency is proven. Two real world applications to Covid-19 mortality in the\nUS and wind speed forecasting are discussed.\n"} {"abstract": " Task-agnostic knowledge distillation, a teacher-student framework, has been\nproved effective for BERT compression. Although achieving promising results on\nNLP tasks, it requires enormous computational resources. In this paper, we\npropose Extract Then Distill (ETD), a generic and flexible strategy to reuse\nthe teacher's parameters for efficient and effective task-agnostic\ndistillation, which can be applied to students of any size. Specifically, we\nintroduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the\nteacher's parameters in a random manner and by following an importance metric\nrespectively. 
In this way, the student has already acquired some knowledge at\nthe beginning of the distillation process, which makes the distillation process\nconverge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark\nand SQuAD. The experimental results show that: (1) compared with the baseline\nwithout an ETD strategy, ETD can save 70\\% of computation cost. Moreover, it\nachieves better results than the baseline when using the same computing\nresource. (2) ETD is generic and has been proven effective for different\ndistillation methods (e.g., TinyBERT and MiniLM) and students of different\nsizes. The source code will be publicly available upon publication.\n"} {"abstract": " We present the results regarding the analysis of the fast X-ray/infrared (IR)\nvariability of the black-hole transient MAXI J1535$-$571. The data studied in\nthis work consist of two strictly simultaneous observations performed with\nXMM-Newton (X-rays: 0.7$-$10 keV), VLT/HAWK-I ($K_{\\rm s}$ band, 2.2 $\\mu$m)\nand VLT/VISIR ($M$ and $PAH2$_$2$ bands, 4.85 and 11.88 $\\mu$m respectively).\nThe cross-correlation function between the X-ray and near-IR light curves shows\na strong asymmetric anti-correlation dip at positive lags. We detect a near-IR\nQPO (2.5 $\\sigma$) at $2.07\\pm0.09$ Hz simultaneously with an X-ray QPO at\napproximately the same frequency ($f_0=2.25\\pm0.05$). From the cross-spectral\nanalysis a lag consistent with zero was measured between the two oscillations.\nWe also measure a significant correlation between the average near-IR and\nmid-IR fluxes during the second night, but find no correlation on short\ntimescales. We discuss these results in terms of the two main scenarios for\nfast IR variability (hot inflow and jet powered by internal shocks). In both\ncases, our preliminary modelling suggests the presence of a misalignment\nbetween disk and jet.\n"} {"abstract": " The paper contains an application of van Kampen theorem for groupoids for\ncomputation of homotopy types of certain class of non-compact foliated surfaces\nobtained by gluing at most countably many strips $\\mathbb{R}\\times(0,1)$ with\nboundary intervals in $\\mathbb{R}\\times\\{\\pm1\\}$ along some of those intervals.\n"} {"abstract": " We present randUBV, a randomized algorithm for matrix sketching based on the\nblock Lanzcos bidiagonalization process. Given a matrix $\\bf{A}$, it produces a\nlow-rank approximation of the form ${\\bf UBV}^T$, where $\\bf{U}$ and $\\bf{V}$\nhave orthonormal columns in exact arithmetic and $\\bf{B}$ is block bidiagonal.\nIn finite precision, the columns of both ${\\bf U}$ and ${\\bf V}$ will be close\nto orthonormal. Our algorithm is closely related to the randQB algorithms of\nYu, Gu, and Li (2018) in that the entries of $\\bf{B}$ are incrementally\ngenerated and the Frobenius norm approximation error may be efficiently\nestimated. Our algorithm is therefore suitable for the fixed-accuracy problem,\nand so is designed to terminate as soon as a user input error tolerance is\nreached. Numerical experiments suggest that the block Lanczos method is\ngenerally competitive with or superior to algorithms that use power iteration,\neven when $\\bf{A}$ has significant clusters of singular values.\n"} {"abstract": " We study the transport property of Gaussian measures on Sobolev spaces of\nperiodic functions under the dynamics of the one-dimensional cubic fractional\nnonlinear Schr\\\"{o}dinger equation. 
For the case of second-order dispersion or\ngreater, we establish an optimal regularity result for the quasi-invariance of\nthese Gaussian measures, following the approach by Debussche and Tsutsumi [15].\nMoreover, we obtain an explicit formula for the Radon-Nikodym derivative and,\nas a corollary, a formula for the two-point function arising in wave turbulence\ntheory. We also obtain improved regularity results in the weakly dispersive\ncase, extending those in [20]. Our proof combines the approach introduced by\nPlanchon, Tzvetkov and Visciglia [47] and that of Debussche and Tsutsumi [15].\n"} {"abstract": " We report on preliminary results of a statistical study of student\nperformance in more than a decade of calculus-based introductory physics\ncourses. Treating average homework and test grades as proxies for student\neffort and comprehension respectively, we plot comprehension versus effort in\nan academic version of the astronomical Hertzsprung-Russell diagram (which\nplots stellar luminosity versus temperature). We study the evolution of this\ndiagram with time, finding that the \"academic main sequence\" has begun to break\ndown in recent years as student achievement on tests has become decoupled from\nhomework grades. We present evidence that this breakdown is likely related to\nthe emergence of easily accessible online solutions to most textbook problems,\nand discuss possible responses and strategies for maintaining and enhancing\nstudent learning in the online era.\n"} {"abstract": " A method for creating a vision-and-language (V&L) model is to extend a\nlanguage model through structural modifications and V&L pre-training. Such an\nextension aims to make a V&L model inherit the capability of natural language\nunderstanding (NLU) from the original language model. To see how well this is\nachieved, we propose to evaluate V&L models using an NLU benchmark (GLUE). We\ncompare five V&L models, including single-stream and dual-stream models,\ntrained with the same pre-training. Dual-stream models, with their higher\nmodality independence achieved by approximately doubling the number of\nparameters, are expected to preserve the NLU capability better. Our main\nfinding is that the dual-stream scores are not much different than the\nsingle-stream scores, contrary to expectation. Further analysis shows that\npre-training causes the performance drop in NLU tasks with few exceptions.\nThese results suggest that adopting a single-stream structure and devising the\npre-training could be an effective method for improving the maintenance of\nlanguage knowledge in V&L extensions.\n"} {"abstract": " We address a state-of-the-art reinforcement learning (RL) control approach to\nautomatically configure robotic prosthesis impedance parameters to enable\nend-to-end, continuous locomotion intended for transfemoral amputee subjects.\nSpecifically, our actor-critic based RL provides tracking control of a robotic\nknee prosthesis to mimic the intact knee profile. This is a significant advance\nfrom our previous RL based automatic tuning of prosthesis control parameters\nwhich have centered on regulation control with a designer prescribed robotic\nknee profile as the target. In addition to presenting the complete tracking\ncontrol algorithm based on direct heuristic dynamic programming (dHDP), we\nprovide an analytical framework for the tracking controller with constrained\ninputs. 
We show that our proposed tracking control possesses several important\nproperties, such as weight convergence of the learning networks, Bellman\n(sub)optimality of the cost-to-go value function and control input, and\npractical stability of the human-robot system under input constraint. We\nfurther provide a systematic simulation of the proposed tracking control using\na realistic human-robot system simulator, the OpenSim, to emulate how the dHDP\nenables level ground walking, walking on different terrains and at different\npaces. These results show that our proposed dHDP based tracking control is not\nonly theoretically suitable, but also practically useful.\n"} {"abstract": " The high energy Operator Product Expansion for the product of two\nelectromagnetic currents is extended to the sub-eikonal level in a rigorous\nway. I calculate the impact factors for polarized and unpolarized structure\nfunctions, define new distribution functions, and derive the evolution\nequations for unpolarized and polarized structure functions in the flavor\nsinglet and non-singlet case.\n"} {"abstract": " We investigate the diurnal modulation of the event rate for dark matter\nscattering on solid targets arising from the directionally dependent defect\ncreation threshold energy. In particular, we quantify how this effect would\nhelp in separating dark matter signal from the neutrino background. We perform\na benchmark analysis for a germanium detector and compute how the reach of the\nexperiment is affected by including the timing information of the scattering\nevents. We observe that for light dark matter just above the detection\nthreshold the magnitude of the annual modulation is enhanced. In this mass\nrange using either the annual or diurnal modulation information provides a\nsimilar gain in the reach of the experiment, while the additional reach from\nusing both effects remains modest. Furthermore, we demonstrate that if the\nbackground contains a feature exhibiting an annual modulation similar to the\none observed by DAMA experiment, the diurnal modulation provides for an\nadditional handle to separate dark matter signal from the background.\n"} {"abstract": " Quasielastic scattering excitation function at large backward angle has been\nmeasured for the weakly bound system, $^{7}$Li+$^{159}$Tb at energies around\nthe Coulomb barrier. The corresponding quasielastic barrier distribution has\nbeen derived from the excitation function, both including and excluding the\n$\\alpha$-particles produced in the reaction. The centroid of the barrier\ndistribution obtained after inclusion of $\\alpha$-particles was found to be\nshifted higher in energy, compared to the distribution excluding the $\\alpha\n$-particles. The quasielastic data, excluding the $\\alpha$-particles, have been\nanalyzed in the framework of continuum discretized coupled channel\ncalculations. The quasielastic barrier distribution for $^{7}$Li+$^{159}$Tb,\nhas also been compared with the fusion barrier distribution for the system.\n"} {"abstract": " Rectification of interacting Brownian particles is investigated in a\ntwo-dimensional asymmetric channel in the presence of an external periodic\ndriving force. The periodic driving force can break the thermodynamic\nequilibrium and induces rectification of particles (or finite average\nvelocity). The spatial variation in the shape of the channel leads to entropic\nbarriers, which indeed control the rectification of particles. 
We find that by\nsimply tunning the driving frequency, driving amplitude, and shape of the\nasymmetric channel, the average velocity can be reversed. Moreover, a short\nrange interaction force between the particles further enhances the\nrectification of particles greatly. This interaction force is modeled as the\nlubrication interaction. Interestingly, it is observed that there exists a\ncharacteristic critical frequency $\\Omega_c$ below which the rectification of\nparticles greatly enhances in the positive direction with increasing the\ninteraction strength; whereas, for the frequency above this critical value, it\ngreatly enhances in the negative direction with increasing the interaction\nstrength. Further, there exists an optimal value of the asymmetric parameter of\nthe channel for which the rectification of interacting particles is maximum.\nThese findings are useful in sorting out the particles and understanding the\ndiffusive behavior of small particles or molecules in microfluidic channels,\nmembrane pores, etc.\n"} {"abstract": " Modern deep neural networks (DNNs) achieve highly accurate results for many\nrecognition tasks on overhead (e.g., satellite) imagery. One challenge however\nis visual domain shifts (i.e., statistical changes), which can cause the\naccuracy of DNNs to degrade substantially and unpredictably when tested on new\nsets of imagery. In this work we model domain shifts caused by variations in\nimaging hardware, lighting, and other conditions as non-linear pixel-wise\ntransformations; and we show that modern DNNs can become largely invariant to\nthese types of transformations, if provided with appropriate training data\naugmentation. In general, however, we do not know the transformation between\ntwo sets of imagery. To overcome this problem, we propose a simple real-time\nunsupervised training augmentation technique, termed randomized histogram\nmatching (RHM). We conduct experiments with two large public benchmark datasets\nfor building segmentation and find that RHM consistently yields comparable\nperformance to recent state-of-the-art unsupervised domain adaptation\napproaches despite being simpler and faster. RHM also offers substantially\nbetter performance than other comparably simple approaches that are widely-used\nin overhead imagery.\n"} {"abstract": " Feature-based dynamic pricing is an increasingly popular model of setting\nprices for highly differentiated products with applications in digital\nmarketing, online sales, real estate and so on. The problem was formally\nstudied as an online learning problem [Javanmard & Nazerzadeh, 2019] where a\nseller needs to propose prices on the fly for a sequence of $T$ products based\non their features $x$ while having a small regret relative to the best --\n\"omniscient\" -- pricing strategy she could have come up with in hindsight. We\nrevisit this problem and provide two algorithms (EMLP and ONSP) for stochastic\nand adversarial feature settings, respectively, and prove the optimal\n$O(d\\log{T})$ regret bounds for both. In comparison, the best existing results\nare $O\\left(\\min\\left\\{\\frac{1}{\\lambda_{\\min}^2}\\log{T},\n\\sqrt{T}\\right\\}\\right)$ and $O(T^{2/3})$ respectively, with $\\lambda_{\\min}$\nbeing the smallest eigenvalue of $\\mathbb{E}[xx^T]$ that could be arbitrarily\nclose to $0$. 
We also prove an $\\Omega(\\sqrt{T})$ information-theoretic lower\nbound for a slightly more general setting, which demonstrates that\n\"knowing-the-demand-curve\" leads to an exponential improvement in feature-based\ndynamic pricing.\n"} {"abstract": " The interplay of different electronic phases underlies the physics of\nunconventional superconductors. One of the most intriguing examples is a\nhigh-Tc superconductor FeTe1-xSex: it undergoes both a topological transition,\nlinked to the electronic band inversion, and an electronic nematic phase\ntransition, associated with rotation symmetry breaking, around the same\ncritical composition xc where superconducting Tc peaks. At this regime, nematic\nfluctuations and symmetry-breaking strain could have an enormous impact, but\nthis is yet to be fully explored. Using spectroscopic-imaging scanning\ntunneling microscopy, we study the electronic nematic transition in FeTe1-xSex\nas a function of composition. Near xc, we reveal the emergence of electronic\nnematicity in nanoscale regions. Interestingly, we discover that\nsuperconductivity is drastically suppressed in areas where static nematic order\nis the strongest. By analyzing atomic displacement in STM topographs, we find\nthat small anisotropic strain can give rise to these strongly nematic localized\nregions. Our experiments reveal a tendency of FeTe1-xSex near x~0.45 to form\npuddles hosting static nematic order, suggestive of nematic fluctuations pinned\nby structural inhomogeneity, and demonstrate a pronounced effect of anisotropic\nstrain on superconductivity in this regime.\n"} {"abstract": " The near vanishing of the cosmological constant is one of the most puzzling\nopen problems in theoretical physics. We consider a system, the so-called\nframid, that features a technically similar problem. Its stress-energy tensor\nhas a Lorentz-invariant expectation value on the ground state, yet there are no\nstandard, symmetry-based selection rules enforcing this, since the ground state\nspontaneously breaks boosts. We verify the Lorentz invariance of the\nexpectation value in question with explicit one-loop computations. These,\nhowever, yield the expected result only thanks to highly nontrivial\ncancellations, which are quite mysterious from the low-energy effective theory\nviewpoint.\n"} {"abstract": " We demonstrate experimentally a method of varying the degree of\ndirectionality in laser-induced molecular rotation. To control the ratio\nbetween the number of clockwise and counter-clockwise rotating molecules (with\nrespect to a fixed laboratory axis), we change the polarization ellipticity of\nthe laser field of an optical centrifuge. The experimental data, supported by\nthe numerical simulations, show that the degree of rotational directionality\ncan be varied in a continuous fashion between unidirectional and bidirectional\nrotation. The control can be executed with no significant loss in the total\nnumber of rotating molecules. The technique could be used for studying the\neffects of orientation of the molecular angular momentum on molecular\ncollisions and chemical reactions. 
It could also be utilized for controlling\nmagnetic and optical properties of gases, as well as for the enantioselective\ndetection of chiral molecules.\n"} {"abstract": " Based on a general transport theory for non-reciprocal non-Hermitian systems\nand a topological model that encompasses a wide range of previously studied\nmodels, we (i) provide conditions for effects such as reflectionless and\ntransparent transport, lasing, and coherent perfect absorption, (ii) identify\nwhich effects are compatible and linked with each other, and (iii) determine by\nwhich levers they can be tuned independently. For instance, the directed\namplification inherent in the non-Hermitian skin effect does not enter the\nspectral conditions for reflectionless transport, lasing, or coherent perfect\nabsorption, but allows to adjust the transparency of the system. In addition,\nin the topological model the conditions for reflectionless transport depend on\nthe topological phase, but those for coherent perfect absorption do not. This\nthen allows us to establish a number of distinct transport signatures of\nnon-Hermitian, nonreciprocal, and topological behaviour, in particular (I)\nreflectionless transport in a direction that depends on the topological phase,\n(II) invisibility coinciding with the skin-effect phase transition of\ntopological edge states, and (III) coherent perfect absorption in a system that\nis transparent when probed from one side.\n"} {"abstract": " We obtain an upper bound for the number of critical points of the systole\nfunction on $\\mathcal{M}_g$. Besides, we obtain an upper bound for the number\nof those critical points whose systole is smaller than a constant.\n"} {"abstract": " We study the multimessenger signals from the merger of a black hole with a\nmagnetized neutron star using resistive magnetohydrodynamics simulations\ncoupled to full general relativity. We focus on a case with a 5:1 mass ratio,\nwhere only a small amount of the neutron star matter remains post-merger, but\nwe nevertheless find that significant electromagnetic radiation can be powered\nby the interaction of the neutron star's magnetosphere with the black hole. In\nthe lead-up to merger, strong twisting of magnetic field lines from the\ninspiral leads to plasmoid emission and results in a luminosity in excess of\nthat expected from unipolar induction. We find that the strongest emission\noccurs shortly after merger during a transitory period in which magnetic loops\nform and escape the central region. The remaining magnetic field collimates\naround the spin axis of the remnant black hole before dissipating, an\nindication that, in more favorable scenarios (higher black hole spin/lower mass\nratio) with larger accretion disks, a jet would form.\n"} {"abstract": " As a unique perovskite transparent oxide semiconductor, high-mobility\nLa-doped BaSnO3 films have been successfully synthesized by molecular beam\nepitaxy and pulsed laser deposition. However, it remains a big challenge for\nmagnetron sputtering, a widely applied technique suitable for large-scale\nfabrication, to grow high-mobility La-doped BaSnO3 films. Here, we developed a\nmethod to synthesize high-mobility epitaxial La-doped BaSnO3 films (mobility up\nto 121 cm2V-1s-1 at the carrier density ~ 4.0 x 10^20 cm-3 at room temperature)\ndirectly on SrTiO3 single crystal substrates using high-pressure magnetron\nsputtering. 
The structural and electrical properties of the La-doped BaSnO3\nfilms were characterized by combined high-resolution X-ray diffraction, X-ray\nphotoemission spectroscopy, and temperature-dependent electrical transport\nmeasurements. The room temperature electron mobility of La-doped BaSnO3 films\nin this work is 2 to 4 times higher than the reported values of the films grown\nby magnetron sputtering. Moreover, in the high carrier density range (n > 3 x\n10^20 cm-3), the electron mobility value of 121 cm2V-1s-1 in our work is among\nthe highest values for all reported doped BaSnO3 films. It is revealed that\nhigh argon pressure during sputtering plays a vital role in stabilizing the\nfully relaxed films and inducing oxygen vacancies, which benefit the high\nmobility at room temperature. Our work provides an easy and economical way to\nmassively synthesize high-mobility transparent conducting films for transparent\nelectronics.\n"} {"abstract": " The 16-year old Blaise Pascal found a way to determine if 6 points lie on a\nconic using a straightedge. Nearly 400 years later, we develop a method that\nuses a straightedge to check whether 10 points lie on a plane cubic curve.\n"} {"abstract": " A compact analytic model is proposed to describe the combined orientation\npreference (OP) and ocular dominance (OD) features of simple cells and their\nlayout in the primary visual cortex (V1). This model consists of three parts:\n(i) an anisotropic Laplacian (AL) operator that represents the local neural\nsensitivity to the orientation of visual inputs; (ii) a receptive field (RF)\noperator that models the anisotropic spatial RF that projects to a given V1\ncell over scales of a few tenths of a millimeter and combines with the AL\noperator to give an overall OP operator; and (iii) a map that describes how the\nparameters of these operators vary approximately periodically across V1. The\nparameters of the proposed model maximize the neural response at a given OP\nwith an OP tuning curve fitted to experimental results. It is found that the\nanisotropy of the AL operator does not significantly affect OP selectivity,\nwhich is dominated by the RF anisotropy, consistent with Hubel and Wiesel's\noriginal conclusions that orientation tuning width of V1 simple cell is\ninversely related to the elongation of its RF. A simplified OP-OD map is then\nconstructed to describe the approximately periodic OP-OD structure of V1 in a\ncompact form. Specifically, the map is approximated by retaining its dominant\nspatial Fourier coefficients, which are shown to suffice to reconstruct the\noverall structure of the OP-OD map. This representation is a suitable form to\nanalyze observed maps compactly and to be used in neural field theory of V1.\nApplication to independently simulated V1 structures shows that observed\nirregularities in the map correspond to a spread of dominant coefficients in a\ncircle in Fourier space.\n"} {"abstract": " We propose a scheme comprising an array of anisotropic optical waveguides,\nembedded in a gas of cold atoms, which can be tuned from a Hermitian to an\nodd-PT -- symmetric configuration through the manipulation of control and\nassistant laser fields. We show that the system can be controlled by tuning\nintra -- and inter-cell coupling coefficients, enabling the creation of\ntopologically distinct phases and linear topological edge states. 
The waveguide\narray, characterized by a quadrimer primitive cell, allows for implementing\ntransitions between Hermitian and odd-PT -symmetric configurations, broken and\nunbroken PT -symmetric phases, topologically trivial and nontrivial phases, as\nwell as transitions between linear and nonlinear regimes. The introduced scheme\ngeneralizes the Rice-Mele Hamiltonian for a nonlinear non-Hermitian quadrimer\narray featuring odd-PT symmetry and makes accessible unique phenomena and\nfunctionalities that emerge from the interplay of non-Hermiticity, topology,\nand nonlinearity. We also show that in the presence of nonlinearity the system\nsustains nonlinear topological edge states bifurcating from the linear\ntopological edge states and the modes without linear limit. Each nonlinear mode\nrepresents a doublet of odd-PT -conjugate states. In the broken PT phase, the\nnonlinear edge states may be effectively stabilized when an additional\nabsorption is introduced into the system.\n"} {"abstract": " Universal register machine, a formal model of computation, can be emulated on\nthe array of the Game of Life, a two-dimensional cellular automaton. We perform\nspectral analysis on the computation dynamical process of the universal\nregister machine on the Game of Life. The array is divided into small sectors\nand the power spectrum is calculated from the evolution in each sector. The\npower spectrum can be classified into four categories by its shape; null, white\nnoise, sharp peaks, and power law. By representing the shape of power spectrum\nby a mark, we can visualize the activity of the sector during the computation\nprocess. For example, the track of pulse moving between components of the\nuniversal register machine and the position of frequently modified registers\ncan be identified. This method can expose the functional difference in each\nregion of computing machine.\n"} {"abstract": " In this note, we investigate a new model theoretical tree property, called\nthe antichain tree property (ATP). We develop combinatorial techniques for ATP.\nFirst, we show that ATP is always witnessed by a formula in a single free\nvariable, and for formulas, not having ATP is closed under disjunction. Second,\nwe show the equivalence of ATP and $k$-ATP, and provide a criterion for\ntheories to have not ATP (being NATP).\n Using these combinatorial observations, we find algebraic examples of ATP and\nNATP, including pure group, pure fields, and valued fields. More precisely, we\nprove Mekler's construction for groups, Chatzidakis-Ramsey's style criterion\nfor PAC fields, and the AKE-style principle for valued fields preserving NATP.\nAnd we give a construction of an antichain tree in the Skolem arithmetic and\natomless Boolean algebras.\n"} {"abstract": " We extend some classical results of Bousfield on homology localizations and\nnilpotent completions to a presentably symmetric monoidal stable\n$\\infty$-category $\\mathscr{M}$ admitting a multiplicative left-complete\n$t$-structure. If $E$ is a homotopy commutative algebra in $\\mathscr{M}$ we\nshow that $E$-nilpotent completion, $E$-localization, and a suitable formal\ncompletion agree on bounded below objects when $E$ satisfies some reasonable\nconditions.\n"} {"abstract": " In one-shot NAS, sub-networks need to be searched from the supernet to meet\ndifferent hardware constraints. However, the search cost is high and $N$ times\nof searches are needed for $N$ different constraints. 
In this work, we propose\na novel search strategy called architecture generator to search sub-networks by\ngenerating them, so that the search process can be much more efficient and\nflexible. With the trained architecture generator, given target hardware\nconstraints as the input, $N$ good architectures can be generated for $N$\nconstraints by just one forward pass without re-searching and supernet\nretraining. Moreover, we propose a novel single-path supernet, called unified\nsupernet, to further improve search efficiency and reduce GPU memory\nconsumption of the architecture generator. With the architecture generator and\nthe unified supernet, we propose a flexible and efficient one-shot NAS\nframework, called Searching by Generating NAS (SGNAS). With the pre-trained\nsupernt, the search time of SGNAS for $N$ different hardware constraints is\nonly 5 GPU hours, which is $4N$ times faster than previous SOTA single-path\nmethods. After training from scratch, the top1-accuracy of SGNAS on ImageNet is\n77.1%, which is comparable with the SOTAs. The code is available at:\nhttps://github.com/eric8607242/SGNAS.\n"} {"abstract": " We report here on the discovery with XMM-Newton of pulsations at 22 ms from\nthe central compact source associated with IKT16, a supernova remnant in the\nSmall Magellanic Cloud (SMC). The measured spin period and spin period\nderivative correspond to 21.7661076(2) ms and $2.9(3)\\times10^{-14}$\ns,s$^{-1}$, respectively. Assuming standard spin-down by magnetic dipole\nradiation, the spin-down power corresponds to $1.1\\times10^{38}$,erg,s$^{-1}$\nimplying a Crab-like pulsar. This makes it the most energetic pulsar discovered\nin the SMC so far and a close analogue of PSR J0537--6910, a Crab-like pulsar\nin the Large Magellanic Cloud. The characteristic age of the pulsar is 12 kyr.\nHaving for the first time a period measure for this source, we also searched\nfor the signal in archival data collected in radio with the Parkes telescope\nand in Gamma-rays with the Fermi/LAT, but no evidence for pulsation was found\nin these energy bands.\n"} {"abstract": " For a locally presentable abelian category $\\mathsf B$ with a projective\ngenerator, we construct the projective derived and contraderived model\nstructures on the category of complexes, proving in particular the existence of\nenough homotopy projective complexes of projective objects. We also show that\nthe derived category $\\mathsf D(\\mathsf B)$ is generated, as a triangulated\ncategory with coproducts, by the projective generator of $\\mathsf B$. For a\nGrothendieck abelian category $\\mathsf A$, we construct the injective derived\nand coderived model structures on complexes. Assuming Vopenka's principle, we\nprove that the derived category $\\mathsf D(\\mathsf A)$ is generated, as a\ntriangulated category with products, by the injective cogenerator of $\\mathsf\nA$. More generally, we define the notion of an exact category with an object\nsize function and prove that the derived category of any such exact category\nwith exact $\\kappa$-directed colimits of chains of admissible monomorphisms has\nHom sets. In particular, the derived category of any locally presentable\nabelian category has Hom sets.\n"} {"abstract": " The recent emergence of contrastive learning approaches facilitates the\nresearch on graph representation learning (GRL), introducing graph contrastive\nlearning (GCL) into the literature. 
These methods contrast semantically similar\nand dissimilar sample pairs to encode the semantics into node or graph\nembeddings. However, most existing works only performed model-level evaluation,\nand did not explore the combination space of modules for more comprehensive and\nsystematic studies. For effective module-level evaluation, we propose a\nframework that decomposes GCL models into four modules: (1) a sampler to\ngenerate anchor, positive and negative data samples (nodes or graphs); (2) an\nencoder and a readout function to get sample embeddings; (3) a discriminator to\nscore each sample pair (anchor-positive and anchor-negative); and (4) an\nestimator to define the loss function. Based on this framework, we conduct\ncontrolled experiments over a wide range of architectural designs and\nhyperparameter settings on node and graph classification tasks. Specifically,\nwe manage to quantify the impact of a single module, investigate the\ninteraction between modules, and compare the overall performance with current\nmodel architectures. Our key findings include a set of module-level guidelines\nfor GCL, e.g., simple samplers from LINE and DeepWalk are strong and robust; an\nMLP encoder associated with Sum readout could achieve competitive performance\non graph classification. Finally, we release our implementations and results as\nOpenGCL, a modularized toolkit that allows convenient reproduction, standard\nmodel and module evaluation, and easy extension.\n"} {"abstract": " With use of the U(1) quantum rotor method in the path integral effective\naction formulation, we have confirmed the mathematical similarity of the phase\nHamiltonian and of the extended Bose-Hubbard model with density-induced\ntunneling (DIT). Moreover, we have shown that the latter model can be mapped to\na pseudospin Hamiltonian that exhibits two coexisting (single-particle and\npair) superfluid phases. Phase separation of the two has also been confirmed,\ndetermining that there exists a range of coefficients in which only pair\ncondensation, and not single-particle superfluidity, is present. The DIT part\nsupports the coherence in the system at high densities and low temperatures,\nbut also has dissipative effects independent of the system's thermal\nproperties.\n"} {"abstract": " Recent advances in Named Entity Recognition (NER) show that document-level\ncontexts can significantly improve model performance. In many application\nscenarios, however, such contexts are not available. In this paper, we propose\nto find external contexts of a sentence by retrieving and selecting a set of\nsemantically relevant texts through a search engine, with the original sentence\nas the query. We find empirically that the contextual representations computed\non the retrieval-based input view, constructed through the concatenation of a\nsentence and its external contexts, can achieve significantly improved\nperformance compared to the original input view based only on the sentence.\nFurthermore, we can improve the model performance of both input views by\nCooperative Learning, a training method that encourages the two input views to\nproduce similar contextual representations or output label distributions.\nExperiments show that our approach can achieve new state-of-the-art performance\non 8 NER data sets across 5 domains.\n"} {"abstract": " Vertical Federated Learning (vFL) allows multiple parties that own different\nattributes (e.g. features and labels) of the same data entity (e.g. a person)\nto jointly train a model. 
To prepare the training data, vFL needs to identify\nthe common data entities shared by all parties. This is usually achieved by\nPrivate Set Intersection (PSI), which identifies the intersection of training\nsamples from all parties by using personally identifiable information (e.g.\nemail) as sample IDs to align data instances. As a result, PSI would make\nsample IDs of the intersection visible to all parties, and therefore each party\ncan learn that the data entities in the intersection also appear at the\nother parties, i.e. intersection membership. However, in many real-world\nprivacy-sensitive organizations, e.g. banks and hospitals, revealing membership\nof their data entities is prohibited. In this paper, we propose a vFL framework\nbased on Private Set Union (PSU) that allows each party to keep sensitive\nmembership information to itself. Instead of identifying the intersection of\nall training samples, our PSU protocol generates the union of samples as\ntraining instances. In addition, we propose strategies to generate synthetic\nfeatures and labels to handle samples that belong to the union but not the\nintersection. Through extensive experiments on two real-world datasets, we show\nour framework can protect the privacy of the intersection membership while\nmaintaining the model utility.\n"} {"abstract": " Electroencephalograms (EEG) are noninvasive measurement signals of electrical\nneuronal activity in the brain. One of the current major statistical challenges\nis formally measuring functional dependency between those complex signals. This\npaper proposes the spectral causality model (SCAU), a robust linear model,\nunder a causality paradigm, to reflect inter- and intra-frequency modulation\neffects that cannot be identified using other methods. SCAU inference is\nconducted with three main steps: (a) signal decomposition into frequency bins,\n(b) intermediate spectral band mapping, and (c) dependency modeling through\nfrequency-specific autoregressive models (VAR). We apply SCAU to study complex\ndependencies during visual and lexical fluency tasks (word generation and\nvisual fixation) in 26 participants' EEGs. We compared the connectivity\nnetworks estimated using SCAU with those of a VAR model. SCAU networks show a\nclear contrast for both stimuli, while the link magnitudes also showed low\nvariance in comparison with the VAR networks. Furthermore, SCAU dependency\nconnections were not only consistent with findings in the neuroscience\nliterature, but also provided further evidence on the directionality of the\nspatio-spectral dependencies, such as the delta-originated and theta-induced\nlinks in the fronto-temporal brain network.\n"} {"abstract": " Let $\Sigma$ be a closed Riemann surface, $h$ a positive smooth function on\n$\Sigma$, $\rho$ and $\alpha$ real numbers. In this paper, we study a\ngeneralized mean field equation \begin{align*}\n -\Delta u=\rho\left(\dfrac{he^u}{\int_\Sigma\nhe^u}-\dfrac{1}{\mathrm{Area}\left(\Sigma\right)}\right)+\alpha\left(u-\fint_{\Sigma}u\right),\n\end{align*} where $\Delta$ denotes the Laplace-Beltrami operator. We first\nderive a uniform bound for solutions when $\rho\in (8k\pi, 8(k+1)\pi)$ for some\nnon-negative integer $k\in \mathbb{N}$ and\n$\alpha\notin\mathrm{Spec}\left(-\Delta\right)\setminus\{0\}$. 
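Returning to the PSU-based vFL framework described above: the union construction with synthetic fills can be sketched in a few lines. This is a toy two-party illustration under assumed data; the actual PSU protocol, which cryptographically hides membership, is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-party setting: party A holds features, party B holds labels.
ids_a, ids_b = {"u1", "u2", "u3"}, {"u2", "u3", "u4"}
feats_a = {"u1": [0.1, 0.2], "u2": [0.3, 0.4], "u3": [0.5, 0.6]}
labels_b = {"u2": 1, "u3": 0, "u4": 1}

# PSU-style training set: the *union* of sample IDs, so neither party
# learns which of its records also appear on the other side.
union = sorted(ids_a | ids_b)

X, y = [], []
for uid in union:
    if uid in feats_a:
        X.append(feats_a[uid])
    else:  # B-only sample: A contributes synthetic features
        X.append(rng.normal(size=2).tolist())
    if uid in labels_b:
        y.append(labels_b[uid])
    else:  # A-only sample: B contributes a synthetic label
        y.append(int(rng.integers(2)))

print(np.array(X).shape, y)  # 4 training instances, membership hidden
```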
Then we obtain\nexistence results for $\alpha<\lambda_1\left(\Sigma\right)$ by using the\nLeray-Schauder degree theory and the minimax method, where\n$\lambda_1\left(\Sigma\right)$ is the first positive eigenvalue of $-\Delta$.\n"} {"abstract": " Argument mining is often addressed by a pipeline method where segmentation of\ntext into argumentative units is conducted first and followed by an argument\ncomponent identification task. In this research, we apply token-level\nclassification to identify claim and premise tokens from a new corpus of\nargumentative essays written by middle school students. To this end, we compare\na variety of state-of-the-art models such as discrete features and deep\nlearning architectures (e.g., BiLSTM networks and BERT-based architectures) to\nidentify the argument components. We demonstrate that a BERT-based multi-task\nlearning architecture (i.e., token and sentence level classification)\nadaptively pretrained on a relevant unlabeled dataset obtains the best results.\n"} {"abstract": " We discuss the essential spectrum of essentially self-adjoint elliptic\ndifferential operators of first order and of Laplace type operators on\nRiemannian vector bundles over geometrically finite orbifolds.\n"} {"abstract": " We analyze the SDSS data to classify the galaxies based on their colour using\na fuzzy set-theoretic method and quantify their environments using the local\ndimension. We find that the fraction of the green galaxies does not depend on\nthe environment and $10\%-20\%$ of the galaxies in each environment are in the\ngreen valley depending on the stellar mass range chosen. Approximately $10\%$\nof the green galaxies in each environment host an AGN. Combining data from the\nGalaxy Zoo, we find that $\sim 95\%$ of the green galaxies are spirals and\n$\sim 5\%$ are ellipticals in each environment. Only $\sim 8\%$ of green\ngalaxies exhibit signs of interactions and mergers, $\sim 1\%$ have a dominant\nbulge, and $\sim 6\%$ host a bar. We show that the stellar mass distributions\nfor the red and green galaxies are quite similar in each environment. Our\nanalysis suggests that the majority of the green galaxies must curtail their\nstar formation using physical mechanism(s) other than interactions, mergers,\nand those driven by bulge, bar and AGN activity. We speculate that these are\nthe massive galaxies that have grown only via smooth accretion and suppressed\nthe star formation primarily through mass-driven quenching. Using a\nKolmogorov-Smirnov test, we do not find any statistically significant\ndifference between the properties of green galaxies in different environments.\nWe conclude that the environmental factors play a minor role and the internal\nprocesses play the dominant role in quenching star formation in the green\nvalley galaxies.\n"} {"abstract": " Single sign-on authentication systems such as OAuth 2.0 are widely used in\nweb services. They allow users to use accounts registered with major identity\nproviders such as Google and Facebook to log in to multiple services (relying\nparties). These services can both identify users and access a subset of the\nuser's data stored with the provider. We empirically investigate the end-user\nprivacy implications of OAuth 2.0 implementations in the relying parties most\nvisited around the world. We collect data on the use of OAuth-based logins in\nthe Alexa Top 500 sites per country for five countries. 
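The argument mining abstract above frames claim/premise identification as token-level classification. A minimal sketch of that setup with HuggingFace transformers follows; the checkpoint and the three-way tag set are assumptions, and the untrained classification head produces meaningless tags until it is fine-tuned on labeled essays.

```python
# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-CLAIM", "B-PREMISE"]   # assumed tag set, not the paper's
name = "bert-base-uncased"               # stand-in; the paper also uses task-adaptive pretraining
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=len(LABELS))

sent = "School uniforms should be required because they reduce distraction."
enc = tok(sent, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # (1, seq_len, num_labels)
pred = logits.argmax(-1)[0]
for t, p in zip(tok.convert_ids_to_tokens(enc["input_ids"][0]), pred):
    print(f"{t:15s} {LABELS[int(p)]}")
```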
We categorize user data\nmade available by four identity providers (Google, Facebook, Apple and\nLinkedIn) and evaluate popular services accessing user data from the SSO\nplatforms of these providers. Many services allow users to choose from multiple\nlogin options (with different identity providers). Our results reveal that\nservices request different categories and amounts of personal data from\ndifferent providers, with at least one choice undeniably more\nprivacy-intrusive. These privacy choices (and their privacy implications) are\nlargely invisible to users. Based on our analysis, we also identify areas in which\nuser privacy could be improved and users could be helped to make informed decisions.\n"} {"abstract": " This study explores the Gaussian and the Lorentzian distributed spherically\nsymmetric wormhole solutions in $f(\tau, T)$ gravity. The basic idea of the\nGaussian and Lorentzian noncommutative geometries emerges as a physically\nacceptable and substantial notion in quantum physics. This idea of the\nnoncommutative geometries with both the Gaussian and Lorentzian distributions\nbecomes more striking when wormhole geometries in the modified theories of\ngravity are discussed. Here we consider a linear model within $f(\tau,T)$\ngravity to investigate traversable wormholes. In particular, we discuss the\npossible cases for the wormhole geometries using the Gaussian and the\nLorentzian noncommutative distributions to obtain the exact shape function for\nthem. By incorporating particular values of the unknown parameters\ninvolved, we discuss different properties of the new wormhole geometries\nexplored here. It is noted that the involved matter violates the weak energy\ncondition for both cases of the noncommutative geometries, whereas there is\na possibility for a physically viable wormhole solution. By analyzing the\nequilibrium condition, it is found that the acquired solutions are stable.\nFurthermore, we provide the embedding diagrams for wormhole structures under\nthe Gaussian and Lorentzian noncommutative frameworks. Moreover, we present a\ncritical analysis of the anisotropic pressure under the Gaussian and the\nLorentzian distributions.\n"} {"abstract": " We report test results searching for an effect of electrostatic charge on\nweight. For conducting test objects of mass of order 1 kilogram, we found no\neffect on weight, for potentials ranging from 10 V to 200 kV, corresponding to\ncharge states ranging from $10^{-9}$ to over $10^{-5}$ coulombs, and for both\npolarities, to within a measurement precision of 2 grams. While such a result\nmay not be unexpected, this is the first unipolar, high-voltage, meter-scale,\nstatic test for electro-gravitic effects reported in the literature. Our\ninvestigation was motivated by the search for possible coupling to a long-range\nscalar field that could surround the planet, yet go otherwise undetected. The\nlarge buoyancy force predicted within the classical Kaluza theory involving a\nlong-range scalar field is falsified by our results, and this appears to be the\nfirst such experimental test of the classical Kaluza theory in the weak-field\nregime, where it was otherwise thought identical with known physics. A\nparameterization is suggested to organize the variety of electro-gravitic\nexperiment designs.\n"} {"abstract": " Communication efficiency is a major bottleneck in the applications of\ndistributed networks. To address this bottleneck, quantized\ndistributed optimization has attracted a lot of attention. 
However, most of the\nexisting quantized distributed optimization algorithms can only converge\nsublinearly. To achieve linear convergence, this paper proposes a novel\nquantized distributed gradient tracking algorithm (Q-DGT) to minimize a finite\nsum of local objective functions over directed networks. Moreover, we\nexplicitly derive the update rule for the number of quantization levels, and\nprove that Q-DGT can converge linearly even when each exchanged variable is\nquantized to a single bit. Numerical results also confirm the efficiency of the\nproposed algorithm.\n"} {"abstract": " The multislice method, which simulates the propagation of the incident\nelectron wavefunction through a crystal, is a well-established method for\nanalyzing the multiple scattering effects that an electron beam may undergo.\nThe inclusion of magnetic effects into this method proves crucial for\nsimulating magnetic differential phase contrast images at atomic resolution,\nenhanced magnetic interaction of vortex beams with magnetic materials,\ncalculating magnetic Bragg spots, or searching for magnon signatures, to name a\nfew examples. Inclusion of magnetism poses novel challenges to the efficiency\nof the multislice method for larger systems, especially regarding the\nconsistent computation of magnetic vector potentials and magnetic fields over\nlarge supercells. We present in this work a tabulation of parameterized\nmagnetic values for the first three rows of transition metal elements, computed\nfrom atomic density functional theory calculations, allowing for the efficient\ncomputation of approximate magnetic vector fields across large crystals using\nonly structural information and magnetic moment sizes and directions.\nFerromagnetic bcc Fe and tetragonal FePt are chosen as examples in this work to\nshowcase the performance of the parameterization versus directly obtaining\nmagnetic vector fields from the unit cell spin density by density functional\ntheory calculations, both for the quantities themselves and the resulting\nmagnetic signal from their respective use in multislice calculations.\n"} {"abstract": " Recently, several experiments on La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) challenged\nthe Fermi liquid picture for overdoped cuprates, and stimulated intensive\ndebates [1]. In this work, we study the magnetotransport phenomena in such\nsystems based on the Fermi liquid assumption. The Hall coefficient $R_H$ and\nmagnetoresistivity $\rho_{xx}$ are investigated near the van Hove singularity\n$x_{\tiny\text{VHS}}\approx0.2$, across which the Fermi surface topology changes\nfrom hole- to electron-like. Our main findings are: (1) $R_H$ depends on the\nmagnetic field $B$ and drops from positive to negative values with increasing\n$B$ in the doping regime $x_{\tiny\text{VHS}} 0.045$ regardless of the geometric configuration.\n"} {"abstract": " We study the environmental dependence of ultralight scalar dark matter (DM)\nwith linear couplings to the standard model particles. The solution for the\nDM field turns out to be a sum of a cosmic harmonic oscillation term and a\nlocal exponential fluctuation term. The amplitude of the first term depends on\nthe local DM density and the mass of the DM field. The second term is induced\nby the local distribution of matter, such as the Earth. Then, we compute the\nphase shift induced by the DM field in atom interferometers (AIs), through\nsolving the trajectories of atoms. 
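On the Q-DGT abstract above: the kind of adaptive uniform quantizer such schemes exchange can be sketched generically. The scaling and level choices below are placeholders for illustration; the paper's actual update rule for the number of quantization levels is not reproduced here.

```python
import numpy as np

def quantize(x, levels, scale):
    """Uniform quantizer on [-scale, scale] with `levels` levels.
    With levels=2, each entry of x is encoded by a single bit."""
    step = 2 * scale / (levels - 1)
    q = np.clip(np.round((x + scale) / step), 0, levels - 1)
    return q * step - scale

rng = np.random.default_rng(1)
x = rng.normal(size=5)
for levels in (2, 4, 16):
    err = np.linalg.norm(x - quantize(x, levels, scale=3.0))
    print(levels, "levels -> error", round(float(err), 4))
```

The trade-off the algorithm navigates is visible even in this toy: fewer levels mean fewer communicated bits but larger quantization error per round.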
In particular, the AI signal for the violation\nof the weak equivalence principle (WEP) caused by the DM field is calculated.\nDepending on the values of the DM coupling parameters, contributions to the WEP\nviolation from the first and second terms of the DM field can be\ncomparable, or one can be larger than the other. Finally, we give some constraints on the\nDM coupling parameters using results from terrestrial atomic WEP tests.\n"} {"abstract": " With the rise and ever-increasing potential of deep learning techniques in\nrecent years, publicly available medical datasets have become a key factor in enabling\nreproducible development of diagnostic algorithms in the medical domain.\nMedical data contains sensitive patient-related information and is therefore\nusually anonymized by removing patient identifiers, e.g., patient names, before\npublication. To the best of our knowledge, we are the first to show that a\nwell-trained deep learning system is able to recover the patient identity from\nchest X-ray data. We demonstrate this using the publicly available large-scale\nChestX-ray14 dataset, a collection of 112,120 frontal-view chest X-ray images\nfrom 30,805 unique patients. Our verification system is able to identify\nwhether two frontal chest X-ray images are from the same person with an AUC of\n0.9940 and a classification accuracy of 95.55%. We further highlight that the\nproposed system is able to reveal the same person even ten or more years after\nthe initial scan. When pursuing a retrieval approach, we observe an mAP@R of\n0.9748 and a precision@1 of 0.9963. Furthermore, we achieve an AUC of up to\n0.9870 and a precision@1 of up to 0.9444 when evaluating our trained networks\non CheXpert and the COVID-19 Image Data Collection. Based on this high\nidentification rate, a potential attacker may leak patient-related information\nand additionally cross-reference images to obtain more information. Thus, there\nis a great risk of sensitive content falling into unauthorized hands or being\ndisseminated against the will of the concerned patients. Especially during the\nCOVID-19 pandemic, numerous chest X-ray datasets have been published to advance\nresearch. Therefore, such data may be vulnerable to potential attacks by deep\nlearning-based re-identification algorithms.\n"} {"abstract": " In normal times, it is assumed that financial institutions operating in\nnon-overlapping sectors have complementary and distinct outcomes, typically\nreflected in mostly uncorrelated outcomes and asset returns. Such is the\nreasoning behind common \"free lunches\" to be had in investing, like\ndiversifying assets across equity and bond sectors. Unfortunately, the\nrecurrence of crises like the Great Financial Crisis of 2007-2008 demonstrates\nthat such convenient assumptions often break down, with dramatic consequences\nfor all financial actors. In hindsight, the emergence of systemic risk (as\nexemplified by failure in one part of a system spreading to ostensibly\nunrelated parts of the system) has been explained by narratives such as\nderegulation and leverage. But can we diagnose and quantify the ongoing\nemergence of systemic risk in financial systems? In this study, we focus on two\npreviously documented measures of systemic risk that require only easily\navailable time series data (e.g., monthly asset returns): cross-correlation and\nprincipal component analysis. We apply these tests to daily and monthly returns\non hedge fund indexes and broad-based market indexes, and discuss their\nresults. 
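The two systemic-risk measures named in the preceding abstract, cross-correlation and principal component analysis, are simple enough to sketch end to end. The returns below are synthetic, constructed so that co-movement rises over time; the indicator definitions (mean pairwise correlation, variance share absorbed by the top principal components) follow the general approach, not necessarily the study's exact estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly returns for 8 "funds": a common factor whose weight
# creeps upward, mimicking rising co-movement ahead of a stress period.
T, N = 120, 8
common = rng.normal(size=(T, 1))
beta = np.linspace(0.2, 1.0, T)[:, None]
R = beta * common + 0.5 * rng.normal(size=(T, N))

def systemic_indicators(returns):
    C = np.corrcoef(returns, rowvar=False)
    iu = np.triu_indices_from(C, k=1)
    mean_corr = C[iu].mean()                  # measure 1: cross-correlation
    eig = np.linalg.eigvalsh(np.cov(returns, rowvar=False))[::-1]
    absorption = eig[:2].sum() / eig.sum()    # measure 2: top-2 PC share
    return mean_corr, absorption

early, late = R[:60], R[60:]
print("early:", systemic_indicators(early))
print("late: ", systemic_indicators(late))   # both rise as risk builds
```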
We hope that a frank discussion of these simple, non-parametric\nmeasures can help inform legislators, lawmakers, and financial actors of\npotential crises looming on the horizon.\n"} {"abstract": " We present a detailed description of the experiment realising for the first\ntime a protective measurement, a novel measurement protocol which combines weak\ninteractions with a ``protection mechanism'' preserving the measured state's\ncoherence during the whole measurement process. Furthermore, protective\nmeasurement allows finding the expectation value of an observable, i.e. an\ninherently statistical quantity, by measuring a single particle, without the\nneed for any statistics. This peculiar property, in sharp contrast with the\nframework of traditional (projective) quantum measurement, might constitute a\ngroundbreaking advance for several quantum-technology-related fields.\n"} {"abstract": " Temporal-Difference (TD) learning is a general and very useful tool for\nestimating the value function of a given policy, which in turn is required to\nfind good policies. Generally speaking, TD learning updates states whenever\nthey are visited. When the agent lands in a state, its value can be used to\ncompute the TD-error, which is then propagated to other states. However, it may\nbe interesting, when computing updates, to take into account other information\nthan whether a state is visited or not. For example, some states might be more\nimportant than others (such as states which are frequently seen in a successful\ntrajectory). Or, some states might have unreliable value estimates (for\nexample, due to partial observability or lack of data), making their values\nless desirable as targets. We propose an approach to re-weighting states used\nin TD updates, both when they are the input and when they provide the target\nfor the update. We prove that our approach converges with linear function\napproximation and illustrate its desirable empirical behaviour compared to\nother TD-style methods.\n"} {"abstract": " We extend the classical tracking-by-detection paradigm to the\ntracking-any-object (TAO) task. Solid detection results are first extracted from the TAO\ndataset. Some state-of-the-art techniques like \textbf{BA}lanced-\textbf{G}roup\n\textbf{S}oftmax (\textbf{BAGS}\cite{li2020overcoming}) and\nDetectoRS\cite{qiao2020detectors} are integrated during detection. Then we\nlearn appearance features to represent any object by training feature-learning\nnetworks. We ensemble several models to improve detection and\nfeature representation. Simple linking strategies based on the most similar appearance\nfeatures, together with a tracklet-level post-association module, are finally applied to\ngenerate the final tracking results. Our method is submitted as \textbf{AOA} on the\nchallenge website. Code is available at\nhttps://github.com/feiaxyt/Winner_ECCV20_TAO.\n"} {"abstract": " In this paper we study the set of prime ideals in vector lattices and how the\nproperties of the prime ideals structure the vector lattice in question. The\ndifferent properties that will be considered are, firstly, that all or none of\nthe prime ideals are order dense; secondly, that there are only finitely many\nprime ideals; thirdly, that every prime ideal is principal; and lastly, that\nevery ascending chain of prime ideals is stationary (a property that we refer\nto as prime Noetherian). 
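The TD abstract above proposes re-weighting states both as inputs and as targets of updates. A tabular TD(0) sketch of that idea follows; the specific weighting scheme (scaling the whole update by an input weight, shrinking the bootstrap term by a target-reliability weight) is an assumed instantiation for illustration, not the paper's proven algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

n_states, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n_states)
w_input = np.ones(n_states)    # importance of updating state s
w_target = np.ones(n_states)   # reliability of s' as a bootstrap target
w_target[4] = 0.2              # e.g. state 4 has an unreliable estimate

def td_update(s, r, s_next):
    # Re-weighted TD(0): scale the update by w_input[s], and shrink the
    # bootstrap term when the target state is deemed unreliable.
    target = r + gamma * w_target[s_next] * V[s_next]
    V[s] += alpha * w_input[s] * (target - V[s])

for _ in range(1000):          # toy experience on a 5-state chain
    s = int(rng.integers(n_states))
    s_next = min(s + 1, n_states - 1)
    td_update(s, r=1.0 if s_next == n_states - 1 else 0.0, s_next=s_next)

print(V.round(2))
```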
We also completely characterize the prime ideals in\nvector lattices of piecewise polynomials, which turns out to be an interesting\nclass of vector lattices for studying principal prime ideals and ascending\nchains of prime ideals.\n"} {"abstract": " For a connected smooth proper rigid space $X$ over a perfectoid field\nextension of $\mathbb{Q}_p$, we show that the Picard functor of\n$X^\diamondsuit$ defined on perfectoid test objects is the diamondification of\nthe rigid analytic Picard functor. In particular, it is represented by a rigid\ngroup variety if and only if the rigid analytic Picard functor is.\n As an application, we determine which line bundles are trivialized by\npro-finite-\'etale covers, and prove unconditionally that the associated\n\"topological torsion Picard functor\" is represented by a divisible analytic\ngroup. We use this to generalize and geometrize a construction of\nDeninger--Werner in the $p$-adic Simpson correspondence: There is an\nisomorphism of rigid analytic group varieties between the moduli space of\ncontinuous characters of $\pi_1(X,x)$ and that of pro-finite-\'etale Higgs line\nbundles on $X$.\n This article is part II of a series about line bundles on rigid spaces as\ndiamonds.\n"} {"abstract": " We construct a large family of normal $\kappa$-complete\n$\mathbb{R}_\kappa$-embeddable $\kappa^+$-Aronszajn trees which have no club\nisomorphic subtrees using an instance of the proxy principle of Brodsky-Rinot.\n"} {"abstract": " Recently, robotic cooking has emerged as a very promising field. To execute a\nrecipe, a robot has to recognize different objects and their states. In contrast\nto object recognition, state identification has received far less attention.\nIt is nevertheless important, because different recipes might require an object\nin different states. Moreover, robotic grasping depends on the state. Pretrained models\nusually perform very well on this type of task. Our challenge was to handle\nthis problem without using any pretrained model. In this paper, we\npropose a CNN and train it from scratch. The model is trained and tested on\nthe dataset from the cooking state recognition challenge. We have also evaluated\nthe performance of our network from various perspectives. Our model achieves\n65.8% accuracy on the unseen test dataset.\n"} {"abstract": " Robust and distributionally robust optimization are modeling paradigms for\ndecision-making under uncertainty where the uncertain parameters are only known\nto reside in an uncertainty set or are governed by any probability distribution\nfrom within an ambiguity set, respectively, and a decision is sought that\nminimizes a cost function under the most adverse outcome of the uncertainty. In\nthis paper, we develop a rigorous and general theory of robust and\ndistributionally robust nonlinear optimization using the language of convex\nanalysis. Our framework is based on a generalized\n`primal-worst-equals-dual-best' principle that establishes strong duality\nbetween a semi-infinite primal worst and a non-convex dual best formulation,\nboth of which admit finite convex reformulations. This principle offers an\nalternative formulation for robust optimization problems that may be\ncomputationally advantageous, and it obviates the need to mobilize the\nmachinery of abstract semi-infinite duality theory to prove strong duality in\ndistributionally robust optimization. 
We illustrate the modeling power of our\napproach through convex reformulations for distributionally robust optimization\nproblems whose ambiguity sets are defined through general optimal transport\ndistances, which generalize earlier results for Wasserstein ambiguity sets.\n"} {"abstract": " This work derives explicit series reversions for the solution of Calder\'on's\nproblem. The governing elliptic partial differential equation is\n$\nabla\cdot(A\nabla u)=0$ in a bounded Lipschitz domain and with a\nmatrix-valued coefficient. The corresponding forward map sends $A$ to a\nprojected version of a local Neumann-to-Dirichlet operator, allowing for the\nuse of partial boundary data and finitely many measurements. It is first shown\nthat the forward map is analytic, and subsequently reversions of its Taylor\nseries up to specified orders lead to a family of numerical methods for solving\nthe inverse problem with increasing accuracy. The convergence of these methods\nis shown under conditions that ensure the invertibility of the Fr\'echet\nderivative of the forward map. The introduced numerical methods are of the same\ncomputational complexity as solving the linearised inverse problem. The\nanalogous results are also presented for the smoothened complete electrode\nmodel.\n"} {"abstract": " Quantum technology is approaching a level of maturity, recently demonstrated\nin space-borne experiments and in-field measurements, which would allow for\nadoption by non-specialist users. Parallel advancements made in\nmicroprocessor-based electronics and database software can be combined to\ncreate robust, versatile and modular experimental monitoring systems. Here, we\ndescribe a monitoring network used across a number of cold atom laboratories\nwith a shared laser system. The ability to diagnose malfunction and unexpected or\nunintended behaviour, and to passively collect data for key experimental\nparameters, such as vacuum chamber pressure, laser beam power, or resistances\nof important conductors, significantly reduces debugging time. This allows for\nefficient control over a number of experiments and remote control when access\nis limited.\n"} {"abstract": " The Cherenkov Telescope Array (CTA) is an initiative that is currently\nbuilding the largest ground-based gamma-ray observatory to date. A Science\nAlert Generation (SAG) system, part of the Array Control and Data Acquisition\n(ACADA) system of the CTA Observatory, analyses the telescope data online -\narriving at an event rate of tens of kHz - to detect transient gamma-ray\nevents. The SAG system also performs an online data quality analysis to assess\nthe instruments' health during the data acquisition: this analysis is crucial\nto confirm good detections. A software library for the online data quality\nanalysis of CTA data, called rta-dq-lib and available in both Python and C++, has been proposed\nfor CTA. The Python version is dedicated to the rapid prototyping of data\nquality use cases. The C++ version is optimized for maximum performance. The\nlibrary allows the user to define, through XML configuration files, the format\nof the input data and, for each data field, which quality checks must be\nperformed and which types of aggregations and transformations must be applied.\nIt internally translates the XML configuration into a directed acyclic\ncomputational graph that encodes the dependencies of the computational tasks to\nbe performed. 
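A toy version of this XML-to-DAG translation can be written with the standard library alone. The tag and attribute names below are invented for illustration and are not the rta-dq-lib configuration format; the point is how a dependency-annotated config becomes a schedulable task graph.

```python
import xml.etree.ElementTree as ET
from graphlib import TopologicalSorter  # Python 3.9+

CONFIG = """
<pipeline>
  <task name="parse_event"/>
  <task name="rate_check"     depends="parse_event"/>
  <task name="pedestal_check" depends="parse_event"/>
  <task name="aggregate"      depends="rate_check pedestal_check"/>
</pipeline>
"""

# Map each task to the set of tasks it depends on.
graph = {}
for task in ET.fromstring(CONFIG):
    graph[task.get("name")] = set(task.get("depends", "").split())

ts = TopologicalSorter(graph)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())
    print("run in parallel:", ready)  # independent checks -> thread pool
    ts.done(*ready)
```

Tasks surfaced together by `get_ready()` have no mutual dependencies, which is what lets such a graph exploit thread-level parallelism.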
This model allows the library to easily take advantage of\nparallelization at the thread level, and the overall flexibility allows us to\ndevelop generic data quality analysis pipelines that could also be reused in\nother applications.\n"} {"abstract": " Transformer-based models (e.g., FusingTF) have recently been employed for\nelectrocardiogram (ECG) signal classification. However, the high-dimensional\nembedding obtained via 1-D convolution and positional encoding can lead to the\nloss of the signal's own temporal information and to a large number of training\nparameters. In this paper, we propose a new method for ECG classification,\ncalled the low-dimensional denoising embedding transformer (LDTF), which contains\ntwo components, i.e., low-dimensional denoising embedding (LDE) and transformer\nlearning. In the LDE component, a low-dimensional representation of the signal\nis obtained in the time-frequency domain while preserving its own temporal\ninformation. With the low-dimensional embedding, transformer learning\nis then used to obtain a deeper and narrower structure with fewer training\nparameters than FusingTF. Experiments conducted on the MIT-BIH\ndataset demonstrate the effectiveness and the superior performance of our\nproposed method, as compared with state-of-the-art methods.\n"} {"abstract": " The climate system is a complex, chaotic system with many degrees of freedom\nand variability on a vast range of temporal and spatial scales. Attaining a\ndeeper level of understanding of its dynamical processes is a scientific\nchallenge of great urgency, especially given the ongoing climate change and the\nevolving climate crisis. In statistical physics, complex, many-particle systems\nare studied successfully using the mathematical framework of Large Deviation\nTheory (LDT). A great potential exists for applying LDT to problems relevant\nfor geophysical fluid dynamics and climate science. In particular, LDT allows\nfor understanding the fundamental properties of persistent deviations of\nclimatic fields from the long-term averages and for associating them with\nlow-frequency, large-scale patterns of climatic variability. Additionally, LDT\ncan be used in conjunction with so-called rare events algorithms to explore\nrarely visited regions of the phase space and thus to study special dynamical\nconfigurations of the climate. These applications are of key importance to\nimprove our understanding of high-impact weather and climate events.\nFurthermore, LDT provides powerful tools for evaluating the probability of\nnoise-induced transitions between competing metastable states of the climate\nsystem or of its components. This is, in turn, essential for improving our\nunderstanding of the global stability properties of the climate system and of\nits predictability of the second kind in the sense of Lorenz. The goal of this\nreview is manifold. First, we want to provide an introduction to the derivation\nof large deviation laws in the context of stochastic processes. We then relate\nsuch results to the existing literature showing the current status of\napplications of LDT in climate science and geophysical fluid dynamics. Finally,\nwe propose some possible lines of future investigations.\n"} {"abstract": " We propose leveraging our proficiency for detecting Higgs resonances by using\nthe Higgs as a tagging object for new heavy physics. 
In particular, we argue\nthat searches for exotic Higgs production from decays of color-singlet fields\nwith electroweak charges could beat current searches at the LHC which look for\ntheir decays to vectors. As an example, we study the production and decay of\nvector-like leptons which admit Yukawa couplings with SM leptons. We find that\nbounds from Run 2 searches are consistent with anywhere from hundreds to many\nthousands of Higgses having been produced in their decays over the same period,\ndepending on the representation. Dedicated searches for these signatures may\nthus be able to significantly improve our reach at the electroweak energy\nfrontier.\n"} {"abstract": " In this paper, we study certifying the robustness of ReLU neural networks\nagainst adversarial input perturbations. To diminish the relaxation error\nsuffered by the popular linear programming (LP) and semidefinite programming\n(SDP) certification methods, we propose partitioning the input uncertainty set\nand solving the relaxations on each part separately. We show that this approach\nreduces relaxation error, and that the error is eliminated entirely upon\nperforming an LP relaxation with an intelligently designed partition. To scale\nthis approach to large networks, we consider coarser partitions that take the\nsame form as this motivating partition. We prove that computing such a\npartition that directly minimizes the LP relaxation error is NP-hard. By\ninstead minimizing the worst-case LP relaxation error, we develop a\ncomputationally tractable scheme with a closed-form optimal two-part partition.\nWe extend the analysis to the SDP, where the feasible set geometry is exploited\nto design a two-part partition that minimizes the worst-case SDP relaxation\nerror. Experiments on IRIS classifiers demonstrate significant reduction in\nrelaxation error, offering certificates that are otherwise void without\npartitioning. By independently increasing the input size and the number of\nlayers, we empirically illustrate under which regimes the partitioned LP and\nSDP are best applied.\n"} {"abstract": " Triangle counting is a building block for a wide range of graph applications.\nTraditional wisdom suggests that i) hashing is not suitable for triangle\ncounting, ii) edge-centric triangle counting beats vertex-centric design, and\niii) communication-free and workload-balanced graph partitioning is a grand\nchallenge for triangle counting. On the contrary, we advocate that i) hashing\ncan help the key operations for scalable triangle counting on Graphics\nProcessing Units (GPUs), i.e., list intersection and graph partitioning,\nii) vertex-centric design reduces both hash table construction cost and memory\nconsumption, which is limited on GPUs. In addition, iii) we exploit\ngraph- and workload-collaborative, hashing-based 2D partitioning to scale\nvertex-centric triangle counting over 1,000 GPUs with sustained scalability. In\nthis work, we present TRUST, which performs triangle counting with the hash\noperation and vertex-centric mechanism at the core. To the best of our\nknowledge, TRUST is the first work that achieves over one trillion Traversed\nEdges Per Second (TEPS) rate for triangle counting.\n"} {"abstract": " The rapid development of reliable Quantum Processing Units (QPU) opens up\nnovel computational opportunities for machine learning. Here, we introduce a\nprocedure for measuring the similarity between graph-structured data, based on\nthe time-evolution of a quantum system. 
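On the TRUST abstract above: the two ingredients it advocates, hashing for list intersection and a vertex-centric orientation of the work, can be shown in a small CPU toy. The GPU kernels and 2D partitioning machinery are out of scope; Python sets stand in for the hash tables.

```python
from collections import defaultdict

def count_triangles(edges):
    """Vertex-centric triangle counting with hash-based intersection."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = lambda x: (deg[x], x)          # orient low-rank -> high-rank
    adj = {x: set() for x in deg}         # hash-set adjacency per vertex
    for u, v in edges:
        a, b = (u, v) if rank(u) < rank(v) else (v, u)
        adj[a].add(b)
    # Each triangle is counted exactly once along its oriented edges.
    return sum(len(adj[u] & adj[v]) for u in adj for v in adj[u])

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # 4-clique
print(count_triangles(edges))  # -> 4
```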
By encoding the topology of the input\ngraph in the Hamiltonian of the system, the evolution produces measurement\nsamples that retain key features of the data. We study the procedure\nanalytically and illustrate its versatility in providing links to standard\nclassical approaches. We then show numerically that this scheme performs well\ncompared to standard graph kernels on typical benchmark datasets. Finally, we\nstudy the possibility of a concrete implementation on a realistic neutral-atom\nquantum processor.\n"} {"abstract": " We established a Spatio-Temporal Neural Network, namely STNN, to forecast the\nspread of the coronavirus COVID-19 outbreak worldwide in 2020. The basic\nstructure of STNN is similar to that of the Recurrent Neural Network (RNN),\nbut it incorporates not only temporal data but also spatial features. Two\nimproved STNN architectures, namely the STNN with Augmented Spatial States\n(STNN-A) and the STNN with Input Gate (STNN-I), are proposed, which ensure more\npredictability and flexibility. STNN and its variants can be trained using\nthe Stochastic Gradient Descent (SGD) algorithm and its improved variants (e.g.,\nAdam, AdaGrad and RMSProp). Our STNN models are compared with several classical\nepidemic prediction models, including the fully-connected neural network\n(BPNN), the recurrent neural network (RNN), classical curve-fitting\nmodels, and the SEIR dynamical system model. Numerical simulations\ndemonstrate that STNN models outperform many others by providing more accurate\nfitting and prediction, and by handling both spatial and temporal data.\n"} {"abstract": " Domain-specific neural network accelerators have seen growing interest in\nrecent years due to their improved energy efficiency and inference performance\ncompared to CPUs and GPUs. In this paper, we propose a novel cross-layer\noptimized neural network accelerator called CrossLight that leverages silicon\nphotonics. CrossLight includes device-level engineering for resilience to\nprocess variations and thermal crosstalk, circuit-level tuning enhancements for\ninference latency reduction, and architecture-level optimization to enable\nhigher resolution, better energy-efficiency, and improved throughput. On\naverage, CrossLight offers 9.5x lower energy-per-bit and 15.9x higher\nperformance-per-watt at 16-bit resolution than state-of-the-art photonic deep\nlearning accelerators.\n"} {"abstract": " The rise of digitization of cultural documents offers large-scale contents,\nopening the road for the development of AI systems in order to preserve, search,\nand deliver cultural heritage. To organize such cultural content also means to\nclassify it, a task that is very familiar to modern computer science.\nContextual information is often the key to structuring such real-world data, and\nwe propose to use it in the form of a knowledge graph. Such a knowledge graph,\ncombined with content analysis, enhances the notion of proximity between\nartworks, and thereby improves performance in classification tasks. In this\npaper, we propose a novel use of a knowledge graph that is constructed from\nannotated data and pseudo-labeled data. With label propagation, we boost\nartwork classification by training a model using a graph convolutional network,\nrelying on the relationships between entities of the knowledge graph. 
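The label-propagation component of the artwork-classification pipeline just described can be sketched with a normalized adjacency matrix and iterative spreading. This is plain label propagation on a toy graph, a simpler stand-in for the paper's graph convolutional network; the graph, labels, and mixing weight are arbitrary.

```python
import numpy as np

# Tiny knowledge graph: 5 entities, edges encode relatedness.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
A_hat = A + np.eye(5)                 # add self-loops
P = np.diag(1.0 / A_hat.sum(1)) @ A_hat  # row-normalized propagation

Y = np.zeros((5, 2))                  # 2 classes; nodes 0 and 4 labeled
Y[0, 0] = 1.0
Y[4, 1] = 1.0
F = Y.copy()
for _ in range(50):
    F = 0.8 * P @ F + 0.2 * Y         # keep pull toward known labels
print(F.argmax(1))                    # pseudo-labels for all entities
```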
Following\na transductive learning framework, our experiments show that relying on a\nknowledge graph modeling the relations between labeled data and unlabeled data\nallows us to achieve state-of-the-art results on multiple classification tasks on\na dataset of paintings, and on a dataset of Buddha statues. Additionally, we\nshow state-of-the-art results for the difficult case of dealing with unbalanced\ndata, with the limitation of disregarding classes with extremely low degrees in\nthe knowledge graph.\n"} {"abstract": " DeepOnets have recently been proposed as a framework for learning nonlinear\noperators mapping between infinite dimensional Banach spaces. We analyze\nDeepOnets and prove estimates on the resulting approximation and generalization\nerrors. In particular, we extend the universal approximation property of\nDeepOnets to include measurable mappings in non-compact spaces. By a\ndecomposition of the error into encoding, approximation and reconstruction\nerrors, we prove both lower and upper bounds on the total error, relating it to\nthe spectral decay properties of the covariance operators associated with the\nunderlying measures. We derive almost optimal error bounds with very general\naffine reconstructors and with random sensor locations, as well as bounds on the\ngeneralization error, using covering number arguments. We illustrate our\ngeneral framework with four prototypical examples of nonlinear operators,\nnamely those arising in a nonlinear forced ODE, an elliptic PDE with variable\ncoefficients, and nonlinear parabolic and hyperbolic PDEs. In all these\nexamples, we prove that DeepOnets break the curse of dimensionality, thus\ndemonstrating the efficient approximation of infinite-dimensional operators\nwith this machine learning framework.\n"} {"abstract": " Chaotic quantum systems with Lyapunov exponent $\lambda_\mathrm{L}$ obey an\nupper bound $\lambda_\mathrm{L}\leq 2\pi k_\mathrm{B}T/\hbar$ at temperature\n$T$, implying a divergence of the bound in the classical limit $\hbar\to 0$.\nFollowing this trend, does a quantum system necessarily become `more chaotic'\nwhen quantum fluctuations are reduced? We explore this question by computing\n$\lambda_\mathrm{L}(\hbar,T)$ in the quantum spherical $p$-spin glass model,\nwhere $\hbar$ can be continuously varied. We find that quantum fluctuations, in\ngeneral, make the paramagnetic phase less chaotic and the spin glass phase more\nchaotic. We show that the approach to the classical limit could be non-trivial,\nwith non-monotonic dependence of $\lambda_\mathrm{L}$ on $\hbar$ close to the\ndynamical glass transition temperature $T_d$. Our results in the classical\nlimit ($\hbar\to 0$) naturally describe chaos in super-cooled liquids in\nstructural glasses. We find a crossover from strong to weak chaos substantially\nabove $T_d$, concomitant with the onset of two-step glassy relaxation. We\nfurther show that $\lambda_\mathrm{L}\sim T^\alpha$, with the exponent $\alpha$\nvarying between 2 and 1 from the quantum to the classical limit, at low temperatures in\nthe spin glass phase. Our results reveal an intricate interplay between quantum\nfluctuations, glassy dynamics and chaos.\n"} {"abstract": " An exotic rotationally invariant harmonic oscillator (ERIHO) is constructed\nby applying a non-unitary isotropic conformal bridge transformation (CBT) to a\nfree planar particle. It is described by the isotropic harmonic oscillator\nHamiltonian supplemented by a Zeeman-type term with a real coupling constant\n$g$. 
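The DeepOnets referenced above follow the standard branch-trunk construction: a branch net encodes the input function sampled at fixed sensors, a trunk net encodes the query location, and the output is their inner product. Below is an untrained numpy skeleton of that forward pass; the widths, depths, and sensor grid are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
m, p = 20, 16                      # number of sensors, latent width
sensors = np.linspace(0, 1, m)

def mlp(sizes):
    Ws = [rng.normal(size=(a, b)) / np.sqrt(a)
          for a, b in zip(sizes, sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

branch = mlp([m, 64, p])           # encodes u evaluated at the sensors
trunk = mlp([1, 64, p])            # encodes the query location y

def deeponet(u_fn, y):
    """G(u)(y) ~ <branch(u|sensors), trunk(y)>, with untrained weights."""
    b = branch(u_fn(sensors)[None, :])     # (1, p)
    t = trunk(np.atleast_2d(y).T)          # (n_query, p)
    return (t * b).sum(axis=1)

print(deeponet(np.sin, np.array([0.1, 0.5, 0.9])))
```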
The model reveals the Euclidean ($|g|<1$) and Minkowskian ($|g|>1$) phases\nseparated by the phases $g=+1$ and $g=-1$ of the Landau problem in the\nsymmetric gauge with opposite orientation of the magnetic field. A hidden\nsymmetry emerges in the system at rational values of $g$. Its generators,\ntogether with the Hamiltonian and angular momentum produce non-linearly\ndeformed $\\mathfrak{u}(2)$ and $\\mathfrak{gl}(2,{\\mathbb R})$ algebras in the\ncases of $0<|g|<1$ and $\\infty>|g|>1$, which transmute one into another under\nthe inversion $g\\rightarrow -1/g$. Similarly, the true, $\\mathfrak{u}(2)$, and\nextended conformal, $\\mathfrak{gl}(2,{\\mathbb R})$, symmetries of the isotropic\nEuclidean oscillator ($g=0$) interchange their roles in the isotropic\nMinkowskian oscillator ($|g|=\\infty$), while two copies of the\n$\\mathfrak{gl}(2,{\\mathbb R})$ algebra of analogous symmetries mutually\ntransmute in Landau phases. We show that the ERIHO system is transformed by a\npeculiar unitary transformation into the anisotropic harmonic oscillator\ngenerated, in turn, by anisotropic CBT. The relationship between the ERIHO and\nthe subcritical phases of the harmonically extended Landau problem, as well as\nwith a plane isotropic harmonic oscillator in a uniformly rotating reference\nframe, is established.\n"} {"abstract": " We consider a sequence of variables having multinomial distribution with the\nnumber of trials corresponding to these variables being large and possibly\ndifferent. The multinomial probabilities of the categories are assumed to vary\nrandomly depending on batches. The proposed framework is interesting from the\nperspective of various applications in practice such as predicting the winner\nof an election, forecasting the market share of different brands etc. In this\nwork, first we derive sufficient conditions of asymptotic normality of the\nestimates of the multinomial cell probabilities, and corresponding suitable\ntransformations. Then, we consider a Bayesian setting to implement our model.\nWe consider hierarchical priors using multivariate normal and inverse Wishart\ndistributions, and establish the posterior consistency. Based on this result\nand following appropriate Gibbs sampling algorithms, we can infer about\naggregate data. The methodology is illustrated in detail with two real life\napplications, in the contexts of political election and sales forecasting.\nAdditional insights of effectiveness are also derived through a simulation\nstudy.\n"} {"abstract": " It is widely accepted that both backscattering and dissipation cannot occur\nin topological systems because of the topological protection. Here we show that\nthe thermal dissipation can occur in the quantum Hall (QH) regime in graphene\nin the presence of dissipation sources, although the Hall plateaus and the zero\nlongitudinal resistance still survive. Dissipation appears along the downstream\nchiral flow direction of the constriction in the Hall plateau regime, but it\noccurs mainly in the bulk in the Hall plateau transition regime. In addition,\ndissipation processes are accompanied with the evolution of the energy\ndistribution from non-equilibrium to equilibrium. 
This indicates that topology\nneither prohibits the appearance of dissipation nor forbids entropy from\nincreasing, which opens a new topic on dissipation in topological systems.\n"} {"abstract": " Attractor-based end-to-end diarization is achieving accuracy comparable to\ncarefully tuned conventional clustering-based methods on challenging\ndatasets. However, the main drawback is that it cannot deal with the case where\nthe number of speakers is larger than that observed during training. This is\nbecause its speaker counting relies on supervised learning. In this work, we\nintroduce an unsupervised clustering process embedded in the attractor-based\nend-to-end diarization. We first split a sequence of frame-wise embeddings into\nshort subsequences and then perform attractor-based diarization for each\nsubsequence. Given the subsequence-wise diarization results, the inter-subsequence\nspeaker correspondence is obtained by unsupervised clustering of the vectors\ncomputed from the attractors of all the subsequences. This makes it possible\nto produce diarization results for a large number of speakers for the whole\nrecording, even if the number of output speakers for each subsequence is\nlimited. Experimental results showed that our method could produce accurate\ndiarization results for an unseen number of speakers. Our method achieved\n11.84%, 28.33%, and 19.49% on the CALLHOME, DIHARD II, and DIHARD III datasets,\nrespectively, each of which is better than conventional end-to-end\ndiarization methods.\n"} {"abstract": " We study the effects of the flux configurations on the emergent Majorana\nfermions in the $S=1/2$ Kitaev model on a honeycomb lattice, where quantum\nspins are fractionalized into itinerant Majorana fermions and localized fluxes.\nA quantum spin liquid appears as the ground state of the Kitaev model in the\nflux-free sector, which has been intensively investigated so far. In this flux\nsector, the Majorana fermion system has linear dispersions and shows power-law\nbehavior in the Majorana correlations. On the other hand, periodically arranged\nflux configurations yield low-energy excitations in the Majorana fermion\nsystem, which are distinctly different from those in the flux-free state. We\nfind that one of the periodically arranged flux states results in a gapped\nMajorana dispersion and exponential decay of the Majorana correlations. The\nKitaev system with another flux configuration exhibits a semi-Dirac-like\ndispersion, leading to power-law decay with a smaller power than that in\nthe flux-free sector along symmetry axes. We also examine the effect of\nrandomness in the flux configurations and clarify that the Majorana density of\nstates is filled by increasing the flux density, while power-law decay in the\nMajorana correlations remains. The present results could be important for\ncontrolling the motion of Majorana fermions, which carry the spin excitations, in\nKitaev candidate materials.\n"} {"abstract": " Bouncing models are alternatives to inflationary cosmology that replace the\ninitial Big-Bang singularity by a `bouncing' phase. A deeper understanding of\nthe initial conditions of the universe, in these scenarios, requires knowledge\nof quantum aspects of bouncing models. In this work, we propose two classes of\nbouncing models that can be studied with great analytical ease and hence\nprovide a test-bed for investigating more profound problems in the quantum cosmology\nof bouncing universes. 
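On the diarization abstract above: the inter-subsequence speaker matching can be made concrete with a schematic in which per-subsequence attractor vectors are pooled and clustered without supervision. A greedy cosine-threshold clustering stands in for the actual method, and the attractors are faked from noisy speaker prototypes.

```python
import numpy as np

rng = np.random.default_rng(5)

def unit(v):
    return v / (np.linalg.norm(v) + 1e-9)

# Fake attractors: 3 subsequences, each seeing <= 2 of 3 true speakers.
speakers = [unit(rng.normal(size=16)) for _ in range(3)]
attractors = [(0, speakers[0]), (0, speakers[1]),  # (subseq id, vector)
              (1, speakers[1]), (1, speakers[2]),
              (2, speakers[0]), (2, speakers[2])]
attractors = [(s, unit(a + 0.1 * rng.normal(size=16)))
              for s, a in attractors]

# Greedy clustering by cosine similarity -> global speaker identities.
centroids, assign = [], []
for _, a in attractors:
    sims = [float(a @ c) for c in centroids]
    if sims and max(sims) > 0.8:
        assign.append(int(np.argmax(sims)))
    else:
        centroids.append(a)
        assign.append(len(centroids) - 1)

print(assign)  # 6 local attractors -> 3 global speakers, e.g. [0,1,1,2,0,2]
```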
Our model's two key ingredients enable us to do\nstraightforward analytical calculations: (i) a convenient parametrization of\nthe minisuperspace of FLRW spacetimes and (ii) two distinct choices of the\neffective perfect fluids that source the background geometry of the bouncing\nuniverse. We study the quantum cosmology of these models using both the\nWheeler-DeWitt equations and the path integral approach. In particular, we\nfound a bouncing-model analogue of the no-boundary wavefunction and presented a\nLorentzian path integral representation for the same. We also discuss the\nintroduction of real scalar perturbations.\n"} {"abstract": " In highway scenarios, an alert human driver will typically anticipate early\ncut-in/cut-out maneuvers of surrounding vehicles, mainly using visual cues.\nAutonomous vehicles must anticipate these situations at an early stage too, to\nincrease their safety and efficiency. In this work, lane-change recognition and\nprediction tasks are posed as video action recognition problems. Up to four\ndifferent two-stream-based approaches that have been successfully applied to\naddress human action recognition are adapted here, by stacking visual cues from\nforward-looking video cameras, to recognize and anticipate lane-changes of\ntarget vehicles. We study the influence of context and observation horizons on\nperformance, and different prediction horizons are analyzed. The different\nmodels are trained and evaluated using the PREVENTION dataset. The obtained\nresults clearly demonstrate the potential of these methodologies to serve as\nrobust predictors of future lane-changes of surrounding vehicles, achieving an\naccuracy higher than 90% for time horizons of 1-2 seconds.\n"} {"abstract": " The advent of large pre-trained language models has made it possible to make\nhigh-quality predictions on how to add or change a sentence in a document.\nHowever, the high branching factor inherent to text generation impedes the\nability of even the strongest language models to offer useful editing\nsuggestions at a more global or document level. We introduce a new task,\ndocument sketching, which involves generating entire draft documents for the\nwriter to review and revise. These drafts are built from sets of documents that\noverlap in form - sharing large segments of potentially reusable text - while\ndiverging in content. To support this task, we introduce a Wikipedia-based\ndataset of analogous documents and investigate the application of weakly\nsupervised methods, including use of a transformer-based mixture of experts,\ntogether with reinforcement learning. We report experiments using automated and\nhuman evaluation methods and discuss relative merits of these models.\n"} {"abstract": " Quasi-periodic changes of the paleointensity and geomagnetic polarity in the\nintervals of 170 Ma to the present time and of 550 Ma to the present time were\nstudied, respectively. It is revealed that the spectrum of the basic variations\nin the paleointensity and of the duration of the polar intervals is discrete\nand includes quasi-periodic oscillations with characteristic times of 15 Ma, 8\nMa, 5 Ma, and 3 Ma. The characteristic time of these quasi-periodic changes of\nthe geomagnetic field at the beginning and at the end of the Phanerozoic\ndiffered by no more than 10%. The spectral density of quasi-periodic variations\nof the geomagnetic field changed cyclically over geological time. 
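The discrete variation spectrum described in the paleointensity abstract above (quasi-periods near 15, 8, 5 and 3 Ma) is the kind of structure a simple periodogram exposes. The synthetic series below merely illustrates the analysis on made-up data; it is not the actual paleointensity record.

```python
import numpy as np

dt = 0.1                                  # sampling step, Ma
t = np.arange(0, 120, dt)                 # synthetic 120-Ma window
periods = [15.0, 8.0, 5.0, 3.0]           # quasi-periods quoted above
signal = sum(np.sin(2 * np.pi * t / P) for P in periods)
signal += 0.5 * np.random.default_rng(6).normal(size=t.size)

spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freq = np.fft.rfftfreq(t.size, d=dt)      # cycles per Ma

for k in spec.argsort()[-4:][::-1]:       # four strongest spectral peaks
    print(f"period ~ {1 / freq[k]:.1f} Ma")
```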
The relation\nbetween the behavior of the amplitude of paleointensity variations, the\nduration of the polar intervals, and their spectral density was shown.\nQuasi-periodic variations of the paleointensity (geomagnetic activity) had a\nrelatively high spectral density in the interval of (150 - 40) Ma (in the\nCretaceous - Early Paleogene). In this interval, both the amplitude of\npaleointensity variations and the duration of polar intervals increased. In the\nintervals of (170 - 150) Ma and of 30 Ma to the present, quasi-periodic\nvariation in the paleointensity was practically undetectable against the\nbackground of its noise variations. At the same time, the amplitude of the\npaleointensity variations and the duration of polar intervals decreased. An\nalternation of time intervals in which the paleointensity variations acquired\neither a quasi-periodic or a noise character took place during the geomagnetic\nhistory.\n"} {"abstract": " Object tracking has achieved significant progress over the past few years.\nHowever, state-of-the-art trackers have become increasingly heavy and expensive,\nwhich limits their deployment in resource-constrained applications. In this\nwork, we present LightTrack, which uses neural architecture search (NAS) to\ndesign more lightweight and efficient object trackers. Comprehensive\nexperiments show that our LightTrack is effective. It can find trackers that\nachieve superior performance compared to handcrafted SOTA trackers, such as\nSiamRPN++ and Ocean, while using many fewer model Flops and parameters.\nMoreover, when deployed on resource-constrained mobile chipsets, the discovered\ntrackers run much faster. For example, on a Snapdragon 845 Adreno GPU, LightTrack\nruns $12\times$ faster than Ocean, while using $13\times$ fewer parameters and\n$38\times$ fewer Flops. Such improvements might narrow the gap between academic\nmodels and industrial deployments in the object tracking task. LightTrack is\nreleased at https://github.com/researchmm/LightTrack.\n"} {"abstract": " Solar radio type II bursts serve as early indicators of incoming\ngeo-effective space weather events such as coronal mass ejections (CMEs). In\norder to investigate the origin of high-frequency type II bursts (HF type II\nbursts), we have identified 51 of them (among 180 type II bursts from SWPC\nreports) that were observed by ground-based Compound Astronomical Low-cost\nLow-frequency Instrument for Spectroscopy and Transportable Observatory\n(CALLISTO) spectrometers and whose upper-frequency cutoff (of either\nfundamental or harmonic emission) lies between 150 MHz and 450 MHz during\n2010-2019. We found that 60% of the HF type II bursts whose upper-frequency cutoff\nis $\geq$ 300 MHz originate from western longitudes. Further, our study finds\na good correlation ($\sim$ 0.73) between the average shock speed derived from\nthe radio dynamic spectra and the corresponding speed from CME data. Also, we\nfound that the analyzed HF type II bursts are associated with wide and fast CMEs\nlocated near the solar disk. In addition, we have analyzed the spatio-temporal\ncharacteristics of two of these high-frequency type II bursts and compared the\nestimates derived from radio observations with those derived from multi-spacecraft CME\nobservations from the SOHO/LASCO and STEREO coronagraphs.\n"} {"abstract": " We report on the design and full characterization of low-noise and\naffordable-cost Yb-doped double-clad fiber amplifiers operating at room\ntemperature in the near-infrared spectral region at a pulse repetition rate of\n160 MHz. 
Two different experimental configurations are discussed. In the first\none, a broadband seed radiation with a transform limited pulse duration of 71\nfs, an optical spectrum of 20 nm wide at around 1040 nm, and 20 mW average\npower is adopted. In the second configuration, the seed radiation is\nconstituted by stretched pulses with a time duration as long as 170 ps, with a\n5-nm narrow pulse spectrum centered at 1029 nm and 2 mW average input power. In\nboth cases we obtained transform limited pulse trains with an amplified output\npower exceeding 2 W. Furthermore, relative intensity noise measurements show\nthat no significant noise degradation occurs during the amplification process.\n"} {"abstract": " The mapper construction is a powerful tool from topological data analysis\nthat is designed for the analysis and visualization of multivariate data. In\nthis paper, we investigate a method for stitching a pair of univariate mappers\ntogether into a bivariate mapper, and study topological notions of information\ngains, referred to as topological gains, during such a process. We further\nprovide implementations that visualize such topological gains for mapper\ngraphs.\n"} {"abstract": " Artificial bacteria flagella (ABFs) are magnetic helical micro-swimmers that\ncan be remotely controlled via a uniform, rotating magnetic field. Previous\nstudies have used the heterogeneous response of microswimmers to external\nmagnetic fields for achieving independent control. Here we introduce analytical\nand reinforcement learning control strategies for path planning to a target by\nmultiple swimmers using a uniform magnetic field. The comparison of the two\nalgorithms shows the superiority of reinforcement learning in achieving minimal\ntravel time to a target. The results demonstrate, for the first time, the\neffective independent navigation of realistic micro-swimmers with a uniform\nmagnetic field in a viscous flow field.\n"} {"abstract": " When a chaotic, ergodic Hamiltonian system with $N$ degrees of freedom is\nsubject to sufficiently rapid periodic driving, its energy evolves diffusively.\nWe derive a Fokker-Planck equation that governs the evolution of the system's\nprobability distribution in energy space, and we provide explicit expressions\nfor the energy drift and diffusion rates. Our analysis suggests that the system\ngenerically relaxes to a long-lived \"prethermal\" state characterized by minimal\nenergy absorption, eventually followed by more rapid heating. When $N\\gg 1$,\nthe system ultimately absorbs energy indefinitely from the drive, or at least\nuntil an infinite temperature state is reached.\n"} {"abstract": " Understanding the drift motion and dynamical locking of crystalline clusters\non patterned substrates is important for the diffusion and manipulation of\nnano- and micro-scale objects on surfaces. In a previous work, we studied the\norientational and directional locking of colloidal two-dimensional clusters\nwith triangular structure driven across a triangular substrate lattice. Here we\nshow with experiments and simulations that such locking features arise for\nclusters with arbitrary lattice structure sliding across arbitrary regular\nsubstrates. Similar to triangular-triangular contacts, orientational and\ndirectional locking are strongly correlated via the real- and reciprocal-space\nmoir\\'e patterns of the contacting surfaces. 
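The Fokker-Planck description of driven chaotic systems in the abstract above corresponds, at trajectory level, to a Langevin simulation of the energy. The sketch below integrates such an energy process with Euler-Maruyama; the drift and diffusion functions are placeholders, since the paper derives their actual forms from the system's dynamics.

```python
import numpy as np

rng = np.random.default_rng(7)

def drift(E):      # placeholder energy drift rate g1(E)
    return 0.05 * E

def diffusion(E):  # placeholder energy diffusion rate g2(E) >= 0
    return 0.2 * E

dt, n_steps, n_traj = 1e-2, 2000, 500
E = np.ones(n_traj)                      # ensemble of initial energies
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_traj)
    E += drift(E) * dt + np.sqrt(2 * diffusion(E)) * dW
    E = np.maximum(E, 1e-6)              # keep energies physical

print("mean energy:", E.mean(), "spread:", E.std())  # diffusive heating
```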
Due to the different symmetries of\nthe surfaces in contact, however, the relation between the locking orientation\nand the locking direction becomes more complicated compared to interfaces\ncomposed of identical lattice symmetries. We provide a generalized formalism\nwhich describes the relation between the locking orientation and locking\ndirection for arbitrary lattice symmetries.\n"} {"abstract": " A unification of the exact solutions of the Klein--Gordon, Dirac, Maxwell,\nRarita--Schwinger and Einstein equations (for the massless field cases) is\npresented. The method is based on writing all of the relevant dynamical fields\nin terms of products and derivatives of pre--potential functions, which satisfy\nthe d'Alembert equation. The coupled equations satisfied by the pre--potentials are\nnon-linear. Remarkably, there are particular solutions of (gradient) orthogonal\npre--potentials that satisfy the usual wave equation which may be used to\nconstruct {\it{exact non--trivial solutions to Klein--Gordon, Dirac, Maxwell,\nRarita--Schwinger and (linearized and full) Einstein equations}}, thus giving\nrise to a unification of the solutions of all massless field equations for any\nspin. Some solutions written in terms of orthogonal pre--potentials are\npresented. Relations of this method to previously developed ones, as well as to\nother subjects in physics, are pointed out.\n"} {"abstract": " Soil carbon accounting and prediction play a key role in building decision\nsupport systems for land managers selling carbon credits, in the spirit of the\nParis and Kyoto protocol agreements. Land managers typically rely on\ncomputationally complex models fit using sparse datasets to make these\naccountings and predictions. The model complexity and sparsity of the data can\nlead to over-fitting, which yields inaccurate results when using new data or making\npredictions. Modellers address over-fitting by simplifying their models,\nneglecting some soil organic carbon (SOC) components. In this study, we\nintroduce two novel SOC models and a new RothC-like model and investigate how\nthe SOC components and complexity of the SOC models affect the SOC prediction\nin the presence of small and sparse time series data. We develop model\nselection methods that can identify the soil carbon model with the best\npredictive performance, in light of the available data. Through this analysis\nwe reveal that commonly used complex soil carbon models can over-fit in the\npresence of sparse time series data, and our simpler models can produce more\naccurate predictions.\n"} {"abstract": " The observation of the radioactively powered kilonova AT~2017gfo associated\nwith the gravitational-wave event GW170817 from a binary neutron star merger\nproves that these events are ideal sites for the production of heavy\n$r$-process elements. The gamma-ray photons produced by the radioactive decay\nof heavy elements are unique probes for the detailed nuclide compositions.\nBased on detailed $r$-process nucleosynthesis calculations and considering\nradiative transport calculations for the gamma-rays in different shells, we\nstudy the gamma-ray emission in a merger ejecta on a timescale of a few days.\nIt is found that the total gamma-ray energy generation rate evolution is\nroughly described by $\dot{E}\propto t^{-1.3}$. 
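As a rough numerical illustration of the scaling just quoted, the sketch below evaluates the $\dot{E}\propto t^{-1.3}$ power law together with the daughter activity of a generic two-step decay chain A $\rightarrow$ B $\rightarrow$ C (the Bateman solution), using the $^{132}$Te/$^{132}$I half-lives discussed next; the normalization and time grid are arbitrary assumptions, not values from the paper.

```python
# A rough illustration (not the paper's code): the quoted heating-rate
# scaling and the Bateman daughter activity for a chain A -> B -> C.
import numpy as np

LN2 = np.log(2.0)

def edot(t_day, e0=1.0):
    """Total gamma-ray energy generation rate, normalized at t = 1 day."""
    return e0 * t_day ** -1.3

def daughter_activity(t_day, n_a0, t_half_a, t_half_b):
    """Decay rate of B in A -> B -> C, from the Bateman equations."""
    la, lb = LN2 / t_half_a, LN2 / t_half_b
    n_b = n_a0 * la / (lb - la) * (np.exp(-la * t_day) - np.exp(-lb * t_day))
    return lb * n_b

t = np.linspace(0.5, 10.0, 20)                 # days after merger
act = daughter_activity(t, 1.0, 3.21, 0.10)    # 132Te -> 132I half-lives
print(edot(t)[0], act.max())
```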
For the dynamical ejecta with a\nlow electron fraction ($Y_{\rm e}\lesssim0.20$), the dominant contributors of\ngamma-ray energy are the nuclides around the second $r$-process peak\n($A\sim130$), and the decay chain of $^{132}$Te ($t_{1/2}=3.21$~days)\n$\rightarrow$ $^{132}$I ($t_{1/2}=0.10$~days) $\rightarrow$ $^{132}$Xe produces\ngamma-ray lines at $228$ keV, $668$ keV, and $773$ keV. For the case of a wind\nejecta with $Y_{\rm e}\gtrsim0.30$, the dominant contributors of gamma-ray\nenergy are the nuclides around the first $r$-process peak ($A\sim80$), and the\ndecay chain of $^{72}$Zn ($t_{1/2}=1.93$~days) $\rightarrow$ $^{72}$Ga\n($t_{1/2}=0.59$~days) $\rightarrow$ $^{72}$Ge produces gamma-ray lines at $145$\nkeV, $834$ keV, $2202$ keV, and $2508$ keV. The peak fluxes of these lines are\n$10^{-9}\sim 10^{-7}$~ph~cm$^{-2}$ s$^{-1}$, which are marginally detectable\nwith the next-generation MeV gamma-ray detector \emph{ETCC} if the source is at\na distance of $40$~Mpc.\n"} {"abstract": " This paper addresses reinforcement learning-based direct signal tracking\ncontrol with an objective of developing mathematically suitable and practically\nuseful design approaches. Specifically, we aim to provide reliable and\neasy-to-implement designs in order to reach reproducible neural network-based\nsolutions. Our proposed new design takes advantage of two control design\nframeworks: a reinforcement learning-based, data-driven approach to provide the\nneeded adaptation and (sub)optimality, and a backstepping-based approach to\nprovide a closed-loop system stability framework. We develop this work based on\nan established direct heuristic dynamic programming (dHDP) learning paradigm to\nperform online learning and adaptation and a backstepping design for a class of\nimportant nonlinear dynamics described as Euler-Lagrange systems. We provide a\ntheoretical guarantee for the stability of the overall dynamic system, weight\nconvergence of the approximating nonlinear neural networks, and the Bellman\n(sub)optimality of the resulting control policy. We use simulations to\ndemonstrate significantly improved design performance of the proposed approach\nover the original dHDP.\n"} {"abstract": " In this paper, we formalize precisely the sense in which the application of\na cellular automaton to a partial configuration is a natural extension of its local\ntransition function through the categorical notion of Kan extension. In fact,\nthe two possible ways to do such an extension and the ingredients involved in\ntheir definition are related through Kan extensions in many ways. These\nrelations provide additional links between computer science and category\ntheory, and also give a new point of view on the famous Curtis-Hedlund theorem\nof cellular automata from the extended topological point of view provided by\ncategory theory. No prior knowledge of category theory is assumed.\n"} {"abstract": " Emotion recognition from speech is a challenging task. Recent advances in\ndeep learning have established bi-directional recurrent neural networks (Bi-RNN)\nwith an attention mechanism as a standard method for speech emotion recognition,\nextracting and attending multi-modal features - audio and text - and then fusing\nthem for downstream emotion classification tasks. In this paper, we propose a\nsimple yet efficient neural network architecture to exploit both acoustic and\nlexical information from speech. 
The proposed framework uses multi-scale\nconvolutional layers (MSCNN) to obtain both audio and text hidden\nrepresentations. Then, a statistical pooling unit (SPU) is used to further\nextract the features in each modality. Besides, an attention module can be\nbuilt on top of the MSCNN-SPU (audio) and MSCNN (text) to further improve the\nperformance. Extensive experiments show that the proposed model outperforms\nprevious state-of-the-art methods on the IEMOCAP dataset with four emotion\ncategories (i.e., angry, happy, sad and neutral) in both weighted accuracy (WA)\nand unweighted accuracy (UA), with improvements of 5.0% and 5.2% respectively\nunder the ASR setting.\n"} {"abstract": " The asymptotic equivalence of canonical and microcanonical ensembles is a\ncentral concept in statistical physics, with important consequences for both\ntheoretical research and practical applications. However, this property breaks\ndown under certain circumstances. The most studied violation of ensemble\nequivalence requires phase transitions, in which case it has a `restricted'\n(i.e. confined to a certain region in parameter space) but `strong' (i.e.\ncharacterized by a difference between the entropies of the two ensembles that\nis of the same order as the entropies themselves) form. However, recent\nresearch on networks has shown that the presence of an extensive number of\nlocal constraints can lead to ensemble nonequivalence even in the absence of\nphase transitions. This occurs in a `weak' (i.e. leading to a subleading\nentropy difference) but remarkably `unrestricted' (i.e. valid in the entire\nparameter space) form. Here we look for more general manifestations of ensemble\nnonequivalence in arbitrary ensembles of matrices with given margins. These\nmodels have widespread applications in the study of spatially heterogeneous\nand/or temporally nonstationary systems, with consequences for the analysis of\nmultivariate financial and neural time-series, multi-platform social activity,\ngene expression profiles and other Big Data. We confirm that ensemble\nnonequivalence appears in `unrestricted' form throughout the entire parameter\nspace due to the extensivity of local constraints. Surprisingly, at the same\ntime it can also exhibit the `strong' form. This novel, simultaneously `strong\nand unrestricted' form of nonequivalence is very robust and imposes a\nprincipled choice of the ensemble. We calculate the proper mathematical\nquantities to be used in real-world applications.\n"} {"abstract": " Privacy-preserving genomic data sharing is prominent to increase the pace of\ngenomic research, and hence to pave the way towards personalized genomic\nmedicine. In this paper, we introduce ($\epsilon , T$)-dependent local\ndifferential privacy (LDP) for privacy-preserving sharing of correlated data\nand propose a genomic data sharing mechanism under this privacy definition. We\nfirst show that the original definition of LDP is not suitable for genomic data\nsharing, and then we propose a new mechanism to share genomic data. The\nproposed mechanism considers the correlations in data during data sharing,\neliminates statistically unlikely data values beforehand, and adjusts the\nprobability distributions for each shared data point accordingly. By doing so,\nwe show that we can prevent an attacker from inferring the correct values of the\nshared data points by utilizing the correlations in the data. 
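A minimal sketch of the flavor of such a mechanism is given below, assuming a randomized-response-style perturbation over genotype values {0, 1, 2} in which values deemed statistically unlikely from the correlations are pruned before the sharing probabilities are renormalized; this illustrates the idea only, not the paper's actual algorithm, and the `plausible` set is a stand-in for the correlation analysis.

```python
# A hypothetical sketch of correlation-aware randomized response: prune
# implausible genotype values first, then renormalize the LDP weights.
import math
import random

def perturb(true_value, plausible, epsilon):
    """Share a genotype under LDP, restricted to the plausible value set."""
    weights = {v: math.exp(epsilon) if v == true_value else 1.0
               for v in plausible}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for value, w in weights.items():
        r -= w
        if r <= 0.0:
            return value
    return true_value

# Example: correlation with a neighboring variant rules out genotype 2.
print(perturb(true_value=1, plausible=[0, 1], epsilon=1.0))
```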
By adjusting the\nprobability distributions of the shared states of each data point, we also\nimprove the utility of shared data for the data collector. Furthermore, we\ndevelop a greedy algorithm that strategically identifies the processing order\nof the shared data points with the aim of maximizing the utility of the shared\ndata. Considering the interdependent privacy risks while sharing genomic data,\nwe also analyze the information gain of an attacker about genomes of a donor's\nfamily members by observing perturbed data of the genome donor, and we propose a\nmechanism to select the privacy budget (i.e., the $\epsilon$ parameter of LDP) of\nthe donor by also considering the privacy preferences of her family members. Our\nevaluation results on a real-life genomic dataset show the superiority of the\nproposed mechanism compared to the randomized response mechanism (a widely used\ntechnique to achieve LDP).\n"} {"abstract": " In this note we study the contact geometry of symplectic divisors. We show\nthat the contact structure induced on the boundary of a divisor neighborhood is\ninvariant under toric and interior blow-ups and blow-downs. We also construct\nan open book decomposition on the boundary of a concave divisor neighborhood\nand apply it to the study of universally tight contact structures of contact\ntorus bundles.\n"} {"abstract": " We study the quantum Riemannian geometry of quantum projective spaces of any\ndimension. In particular, we compute the Riemann and Ricci tensors, using\npreviously introduced quantum metrics and quantum Levi-Civita connections. We\nshow that the Riemann tensor is a bimodule map and derive various consequences\nof this fact. We prove that the Ricci tensor is proportional to the quantum\nmetric, giving a quantum analogue of the Einstein condition, and compute the\ncorresponding scalar curvature. Along the way we also prove several results for\nvarious objects related to those mentioned here.\n"} {"abstract": " The hole probability, i.e., the probability that a region is void of\nparticles, is a benchmark of correlations in many body systems. We\nanalytically compute this probability $P(R)$ for a spherical region of radius $R$ in\nthe case of $N$ noninteracting fermions in their ground state in a\n$d$-dimensional trapping potential. Using a connection to the Laguerre-Wishart\nensembles of random matrices, we show that, for large $N$ and in the bulk of\nthe Fermi gas, $P(R)$ is described by a universal scaling function of $k_F R$,\nfor which we obtain an exact formula ($k_F$ being the local Fermi wave-vector).\nIt exhibits a super-exponential tail $P(R)\propto e^{- \kappa_d (k_F R)^{d+1}}$\nwhere $\kappa_d$ is a universal amplitude, in good agreement with existing\nnumerical simulations. When $R$ is of the order of the radius of the Fermi gas,\nthe hole probability is described by a large deviation form which is not\nuniversal and which we compute exactly for the harmonic potential. Similar\nresults also hold in momentum space.\n"} {"abstract": " We consider a high-Q microresonator with $\chi^{(2)}$ nonlinearity under\nconditions when the coupling rates between the sidebands around the pump and\nsecond harmonic exceed the damping rates, implying the strong coupling (SC)\nregime. Using the dressed-resonator approach we demonstrate that this regime\nleads to the dominance of the Hermitian part of the operator driving the\nside-band dynamics over its non-Hermitian part responsible for the parametric\ngain. 
This has allowed us to introduce and apply the cross-area concept of the\npolariton quasi-particles and define their effective masses in the context of\n$\chi^{(2)}$ ring-microresonators. We further use polaritons to predict the\nmodified spectral response of the resonator to a weak probe field, and to\nreveal splitting of the bare-resonator resonances, avoided crossings, and Rabi\ndynamics. The polariton basis also allows deriving a discrete sequence of the\nparametric thresholds for the generation of sidebands of different orders.\n"} {"abstract": " We use a Wigner distribution-like function based on the strong field\napproximation theory to obtain the time-energy distributions and the ionization\ntime distributions of electrons ionized by an XUV pulse alone and in the\npresence of an infrared (IR) pulse. In the case of a single XUV pulse, although\nthe overall shape of the ionization time distribution resembles the\nXUV-envelope, its detail shows dependence on the emission direction of the\nelectron and the carrier-envelope phase of the pulse, which mainly results from\nthe low-energy interference structure. It is further found that the electron\nfrom the counter-rotating term plays an important role in the interference. In\nthe case of the two-color pulse, both the time-energy distributions and the\nionization time distributions change with the varying IR field. Our analysis\ndemonstrates that the IR field not only modifies the final electron kinetic\nenergy but also changes the electron's emission time, which results from the\nchange of the electric field induced by the IR pulse. Moreover, the ionization\ntime distributions of the photoelectrons emitted from atoms with higher\nionization energy are also given, which show less impact of the IR field on the\nelectron dynamics.\n"} {"abstract": " Breathers are localized structures that undergo a periodic oscillation in\ntheir duration and amplitude. Optical microresonators, benefiting from their\nhigh quality factor, provide an ideal test bench for studying the breathing\nphenomena. In the monochromatically pumped microresonator system, intrinsic\nbreathing instabilities are widely observed in the form of temporal dissipative\nKerr solitons which only exist in the effectively red detuned regime. Here, we\npropose a novel bichromatic pumping scheme to create compulsive breathing\nmicrocombs via respectively distributing two pump lasers at the effectively\nblue and red detuned side of a single resonance. We experimentally discover the\nartificial cnoidal wave breathers and molecular crystal-like breathers in a\nchip-based silicon nitride microresonator, and theoretically describe their\nintriguing temporal dynamics based on the bichromatic pumping Lugiato-Lefever\nequation. In particular, the corresponding breathing microcombs exhibit diverse\ncomb line spacing ranging from 2 to 17 times the free spectral range of the\nresonator. Our discovery not only provides a simple and robust method to\nproduce microcombs with reconfigurable comb line spacing, but also reveals a\nnew type of breathing waves in driven dissipative nonlinear systems.\n"} {"abstract": " We derive general covariant coupled equations of QCD describing the\ntetraquark in terms of a mix of four-quark states $2q2\bar q$, and two-quark\nstates $q\bar q$. The coupling of $2q2\bar q$ to $q\bar q$ states is achieved\nby a simple contraction of a four-quark $q\bar q$-irreducible Green function\ndown to a two-quark $q\bar q$ Bethe-Salpeter kernel. 
The resulting tetraquark\nequations are expressed in an exact field theoretic form, and are in agreement\nwith those obtained previously by consideration of disconnected interactions;\nhowever, despite being more general, they have been derived here in a much\nsimpler and more transparent way.\n"} {"abstract": " We show how spectral submanifold theory can be used to construct\nreduced-order models for harmonically excited mechanical systems with internal\nresonances. Efficient calculations of periodic and quasi-periodic responses\nwith the reduced-order models are discussed in this paper and its companion,\nPart II, respectively. The dimension of a reduced-order model is determined by\nthe number of modes involved in the internal resonance, independently of the\ndimension of the full system. The periodic responses of the full system are\nobtained as equilibria of the reduced-order model on spectral submanifolds. The\nforced response curve of periodic orbits then becomes a manifold of equilibria,\nwhich can be easily extracted using parameter continuation. To demonstrate the\neffectiveness and efficiency of the reduction, we compute the forced response\ncurves of several high-dimensional nonlinear mechanical systems, including the\nfinite-element models of a von K\'arm\'an beam and a plate.\n"} {"abstract": " We report for the first time the occurrence of superconductivity in the\nquaternary silicide carbide YRe2SiC with Tc = 5.9 K. The emergence of\nsuperconductivity was confirmed by means of magnetic susceptibility, electrical\nresistivity, and heat capacity measurements. The presence of a well developed\nheat capacity feature at Tc confirms that superconductivity is a bulk\nphenomenon, while a second feature in the heat capacity near 0.5 Tc combined\nwith the unusual temperature dependence of the upper critical field Hc2(T)\nindicates the presence of a multiband superconducting state. Additionally, the\nlinear dependence of the lower critical field Hc1 with temperature resembles the\nbehavior found in compounds with unconventional pairing symmetry. Band\nstructure calculations reveal that YRe2SiC could harbor a non-trivial topological\nstate and that the low-energy states occupy multiple disconnected sheets at the\nFermi surface, with different degrees of hybridization, nesting, and screening\neffects, therefore making unconventional multiband superconductivity plausible.\n"} {"abstract": " We consider the dynamics of local entropy and nearest neighbor mutual\ninformation of a 1-D lattice of qubits via the repeated application of nearest\nneighbor CNOT quantum gates. This is a quantum version of a cellular automaton.\nWe analyze the entropy dynamics for different initial product states, for both\nopen and periodic boundary conditions, and we also consider the infinite-lattice\nthermodynamic limit. The dynamics gives rise to fractal\nbehavior, where we see the appearance of the Sierpinski triangle both for\nstates in the computational basis and for operator dynamics in the Heisenberg\npicture. In the thermodynamic limit, we see equilibration with a time\ndependence controlled by $\exp(-\alpha t^{h-1})$ where $h$ is the fractal\ndimension of the Sierpinski triangle, and $\alpha$ depends on the details of\nthe initial state. We also see log-periodic reductions in the one qubit entropy\nwhere the approach to equilibrium is only power law. 
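Since a CNOT acts classically on computational basis states, mapping (a, b) to (a, a XOR b), one sweep of nearest-neighbor CNOTs reduces to a linear automaton over bits. The sketch below uses one illustrative update ordering (an assumption, not necessarily the paper's exact circuit) and prints the Sierpinski-triangle space-time pattern.

```python
# A sketch of the classical action of nearest-neighbour CNOTs on a basis
# state: synchronous rule b_i <- b_i XOR b_{i-1}, seeded at the left edge.
def step(bits):
    return [b ^ (bits[i - 1] if i > 0 else 0) for i, b in enumerate(bits)]

cells = [0] * 16
cells[0] = 1
for _ in range(16):                   # rows give Pascal's triangle mod 2
    print("".join(".#"[b] for b in cells))
    cells = step(cells)
```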
For open boundary\nconditions we see time-periodic oscillations near the boundary, associated with\nsubalgebras of operators localized near the boundary that are mapped to\nthemselves by the dynamics.\n"} {"abstract": " In an article by Garc\'ia-Pintos et al. [Phys. Rev. Lett. 125, 040601 (2020)] the\nconnection between the charging power of a quantum battery and the fluctuations\nof a \"free energy operator\" whose expectation value characterizes the maximum\nextractable work of the battery is studied. The result of the closed-system\nanalysis shows that for a general charging process the battery will have a\nnonzero charging power if and only if the state of the battery is not an\neigenstate of the free energy operator. In this Comment, we point out a few\nmistakes in the analysis and obtain the correct bound on the charging power.\nConsequently, the result for closed-system dynamics is in general not correct.\n"} {"abstract": " In real-world multi-agent systems, agents with different capabilities may\njoin or leave without altering the team's overarching goals. Coordinating teams\nwith such dynamic composition is challenging: the optimal team strategy varies\nwith the composition. We propose COPA, a coach-player framework to tackle this\nproblem. We assume the coach has a global view of the environment and\ncoordinates the players, who only have partial views, by distributing\nindividual strategies. Specifically, we 1) adopt the attention mechanism for\nboth the coach and the players; 2) propose a variational objective to\nregularize learning; and 3) design an adaptive communication method to let the\ncoach decide when to communicate with the players. We validate our methods on a\nresource collection task, a rescue game, and the StarCraft micromanagement\ntasks. We demonstrate zero-shot generalization to new team compositions. Our\nmethod achieves comparable or better performance than the setting where all\nplayers have a full view of the environment. Moreover, we see that the\nperformance remains high even when the coach communicates as little as 13% of\nthe time using the adaptive communication strategy.\n"} {"abstract": " Let $H\subset G$ be semisimple Lie groups, $\Gamma\subset G$ a lattice and\n$K$ a compact subgroup of $G$. For $n \in \mathbb N$, let $\mathcal O_n$ be the\nprojection to $\Gamma \backslash G/K$ of a finite union of closed $H$-orbits in\n$\Gamma \backslash G$. In this very general context of homogeneous dynamics, we\nprove an equidistribution theorem for intersections of $\mathcal O_n$ with an\nanalytic subvariety $S$ of $G/K$ of complementary dimension: if $\mathcal O_n$\nis equidistributed in $\Gamma \backslash G/K$, then the signed intersection\nmeasure of $S \cap \mathcal O_n$ normalized by the volume of $\mathcal O_n$\nconverges to the restriction to $S$ of some $G$-invariant closed form on $G/K$.\nWe give general tools to determine this closed form and compute it in some\nexamples.\n As our main application, we prove that, if $\mathbb V$ is a polarized\nvariation of Hodge structure of weight $2$ and Hodge numbers $(q,p,q)$ over a\nbase $S$ of dimension $rq$, then the (non-exceptional) locus where the Picard\nrank is at least $r$ is equidistributed in $S$ with respect to the volume form\n$c_q^r$, where $c_q$ is the $q^{\textrm{th}}$ Chern form of the Hodge bundle.\nThis generalizes a previous work of the first author which treated the case\n$q=r=1$. 
We also prove an equidistribution theorem for certain families of CM\npoints in Shimura varieties, and another one for Hecke translates of a divisor\nin $\mathcal A_g$.\n"} {"abstract": " We consider the problem of binomiality of the steady state ideals of\nbiochemical reaction networks. We are interested in finding polynomial\nconditions on the parameters under which the steady state ideal of a chemical\nreaction network is binomial for every specialisation of the parameters. We approach the binomiality problem\nusing Comprehensive Gr\\\"obner systems. Considering rate constants as\nparameters, we compute comprehensive Gr\\\"obner systems for various reactions.\nIn particular, we make automatic computations on n-site phosphorylations and\nbiomodels from the Biomodels repository using the grobcov library of the\ncomputer algebra system Singular.\n"} {"abstract": " An important component of unsupervised learning by instance-based\ndiscrimination is a memory bank for storing a feature representation for each\ntraining sample in the dataset. In this paper, we introduce 3 improvements to\nthe vanilla memory bank-based formulation which bring massive accuracy gains:\n(a) Large mini-batch: we pull multiple augmentations for each sample within the\nsame batch and show that this leads to better models and enhanced memory bank\nupdates. (b) Consistency: we enforce the logits obtained by different\naugmentations of the same sample to be close without trying to enforce\ndiscrimination with respect to negative samples as proposed by previous\napproaches. (c) Hard negative mining: since instance discrimination is not\nmeaningful for samples that are too visually similar, we devise a novel nearest\nneighbour approach for improving the memory bank that gradually merges\nextremely similar data samples that were previously forced to be apart by the\ninstance level classification loss. Overall, our approach greatly improves the\nvanilla memory-bank based instance discrimination and outperforms all existing\nmethods for both seen and unseen testing categories with cosine similarity.\n"} {"abstract": " Conformal quantum mechanics has been proposed to be the CFT$_1$ dual to\nAdS$_2$. The $N$-point correlation functions that satisfy conformal constraints\nhave been constructed from a non-conformal vacuum and the insertion of a\nnon-primary operator. The main goal of this paper is to find an interpretation\nof this oddness. For this purpose, we study possible gravitational dual models\nand propose a two-dimensional dilaton gravity with a massless fermion for the\ndescription of conformal quantum mechanics. We find a universal correspondence\nbetween states in the conformal quantum mechanics model and two-dimensional\nspacetimes. Moreover, the solutions of the Dirac equation can be interpreted as\nzero modes of a Floquet-Dirac system. Within this system, the oddness of the\nnon-conformal vacuum and non-primary operator is elucidated. As a possible\napplication, we interpret the gauge symmetries of the Floquet-Dirac system as\nthe corresponding infinite symmetries of the Schr\\\"odinger equation which are\nconjectured to be related to higher spin symmetries.\n"} {"abstract": " We investigate the necessary conditions for two spacetimes, which are\nsolutions to the Einstein field equations with an anisotropic matter source, to\nbe related to each other by means of a conformal transformation. 
As a result,\nwe obtain that if one of these spacetimes is a generalization of the\nRobertson-Walker solution with vanishing acceleration and vorticity, then the\nother one has to be in this class as well, i.e. the conformal factor will be a\nconstant function on the hypersurfaces orthogonal to the fluid flow lines. The\nevolution equation for this function appears naturally as a direct consequence\nof the conformal transformation of the Ricci tensor.\n"} {"abstract": " A caveat to many applications of the current Deep Learning approach is the\nneed for large-scale data. One improvement suggested by Kolmogorov Complexity\nresults is to apply the minimum description length principle with\ncomputationally universal models. We study the potential gains in sample\nefficiency that this approach can bring in principle. We use polynomial-time\nTuring machines to represent computationally universal models and Boolean\ncircuits to represent Artificial Neural Networks (ANNs) acting on\nfinite-precision digits.\n Our analysis unravels direct links between our question and Computational\nComplexity results. We provide lower and upper bounds on the potential gains in\nsample efficiency from applying the MDL with Turing machines instead of ANNs.\nOur bounds depend on the bit-size of the input of the Boolean function to be\nlearned. Furthermore, we highlight close relationships between classical open\nproblems in Circuit Complexity and the tightness of these bounds.\n"} {"abstract": " Scrum, the most popular agile method and project management framework, is\nwidely reported to be used, adapted, misused, and abused in practice. However,\nnot much is known about how Scrum actually works in practice, and critically,\nwhere, when, how and why it diverges from Scrum by the book. Through a Grounded\nTheory study involving semi-structured interviews of 45 participants from 30\ncompanies and observations of five teams, we present our findings on how Scrum\nworks in practice as compared to how it is presented in its formative books. We\nidentify significant variations in these practices such as work breakdown,\nestimation, prioritization, assignment, the associated roles and artefacts, and\ndiscuss the underlying rationales driving the variations. Critically, we claim\nthat not all variations are process misuse/abuse and propose a nuanced\nclassification approach to understanding variations as standard, necessary,\ncontextual, and clear deviations for successful Scrum use and adaptation.\n"} {"abstract": " Axion-like particles (ALPs) are ubiquitous in models of new physics\nexplaining some of the most pressing puzzles of the Standard Model. However,\nuntil relatively recently, little attention has been paid to their interplay with\nflavour. In this work, we study in detail the phenomenology of ALPs that\nexclusively interact with up-type quarks at tree level, which arise in some\nwell-motivated ultra-violet completions such as QCD-like dark sectors or\nFroggatt-Nielsen type models of flavour. Our study is performed in the\nlow-energy effective theory to highlight the key features of these scenarios in\na model-independent way. We derive all the existing constraints on these models\nand demonstrate how upcoming experiments at fixed-target facilities and the LHC\ncan probe a vast region of the parameter space, which is currently not excluded\nby cosmological and astrophysical bounds. 
We also emphasize how a future\nmeasurement of the currently unavailable meson decay $D \to \pi +\n\rm{invisible}$ could complement these upcoming searches and help to probe a\nlarge unexplored region of their parameter space.\n"} {"abstract": " We construct analytical and regular solutions in four-dimensional General\nRelativity which represent multi-black hole systems immersed in external\ngravitational field configurations. The external field background is composed\nof an infinite multipolar expansion, which makes it possible to regularise the conical\nsingularities of an array of collinear static black holes. A stationary\nrotating generalisation is achieved by adding independent angular momenta and\nNUT parameters to each source of the binary configuration. Moreover, a charged\nextension of the binary black hole system at equilibrium is generated. Finally,\nwe show that the binary Majumdar-Papapetrou solution is consistently recovered\nin the vanishing external field limit. All of these solutions reach an\nequilibrium state due to the external gravitational field only, thereby avoiding\nthe presence of any string or strut defect.\n"} {"abstract": " The problem of covariance of physical quantities has not been solved\nfundamentally in the theory of relativity, which has caused a lot of confusion\nin the community; a typical example is the Gordon metric tensor, which was\ndeveloped almost a century ago, and has been widely used to describe the\nequivalent gravitational effect of moving media on light propagation,\npredicting the novel physics of optical black holes. In this paper, it is shown\nthat under Lorentz transformation, a covariant tensor satisfies three rules:\n(1) the tensor remains invariant in mathematical form in all inertial frames; (2)\nall elements of the tensor have the same physical definitions in all frames;\n(3) the tensor expression in one inertial frame does not include any physical\nquantities defined in other frames. The three rules constitute a criterion for\ntesting the covariance of physical laws, required by Einstein's principle of\nrelativity. The Gordon metric does not satisfy Rule (3), and its covariance cannot\nbe identified before a general refractive index is defined. Finally, it is also\nshown that in relativistic quantum mechanics, the Lorentz\ncovariance of the Dirac wave equation is not compatible with Einstein's mass-energy\nequivalence.\n"} {"abstract": " In this work, we present a coupled 3D-1D model of solid tumor growth within a\ndynamically changing vascular network to facilitate realistic simulations of\nangiogenesis. Additionally, the model includes erosion of the extracellular\nmatrix, interstitial flow, and coupled flow in blood vessels and tissue. We\nemploy continuum mixture theory with stochastic Cahn--Hilliard type phase-field\nmodels of tumor growth. The interstitial flow is governed by a mesoscale\nversion of Darcy's law. The flow in the blood vessels is controlled by\nPoiseuille flow, and Starling's law is applied to model the mass transfer in\nand out of blood vessels. The evolution of the network of blood vessels is\norchestrated by the concentration of the tumor angiogenesis factors (TAFs);\nblood vessels grow towards increasing TAF concentrations. This process is\nnot deterministic, allowing random growth of blood vessels; therefore, due\nto the coupling of nutrients in tissue and vessels, the growth of tumors is\nstochastic. We demonstrate the performance of the model by applying it to a\nvariety of scenarios. 
Numerical experiments illustrate the flexibility of the\nmodel and its ability to generate satellite tumors. Simulations of the effects\nof angiogenesis on tumor growth are presented as well as sample-independent\nfeatures of cancer.\n"} {"abstract": " The behavior of hot carriers in metal-halide perovskites (MHPs) presents a\nvaluable foundation for understanding the details of carrier-phonon coupling in\nthe materials as well as the prospective development of highly efficient hot\ncarrier and carrier multiplication solar cells. Whilst the carrier population\ndynamics during cooling have been intensely studied, the evolution of the hot\ncarrier properties, namely the hot carrier mobility, remains largely unexplored.\nTo address this, we introduce a novel ultrafast visible pump - infrared push -\nterahertz probe spectroscopy (PPP-THz) to monitor the real-time conductivity\ndynamics of cooling carriers in methylammonium lead iodide. We find a decrease\nin mobility upon optically depositing energy into the carriers, which is\ntypical of band transport. Surprisingly, the conductivity recovery dynamics are\nincommensurate with the intraband relaxation measured by an analogous\nexperiment with an infrared probe (PPP-IR), and exhibit a negligible\ndependence on the density of hot carriers. These results and the kinetic\nmodelling reveal the importance of highly-localized lattice heating on the\nmobility of the hot electronic states. This collective polaron-lattice\nphenomenon may contribute to the unusual photophysics observed in MHPs and\nshould be accounted for in devices that utilize hot carriers.\n"} {"abstract": " Balanced weighing matrices with parameters $$\n\left(1+18\cdot\frac{9^{m+1}-1}{8},9^{m+1},4\cdot 9^m\right), $$ for each\nnonzero integer $m$ are constructed. This is the first infinite class not\nbelonging to those with classical parameters. It is shown that any balanced\nweighing matrix is equivalent to a five-class association scheme.\n"} {"abstract": " Evaluating image generation models such as generative adversarial networks\n(GANs) is a challenging problem. A common approach is to compare the\ndistributions of the set of ground truth images and the set of generated test\nimages. The Frech\'et Inception distance is one of the most widely used metrics\nfor the evaluation of GANs, which assumes that the features from a trained\nInception model for a set of images follow a normal distribution. In this\npaper, we argue that this is an over-simplified assumption, which may lead to\nunreliable evaluation results, and more accurate density estimation can be\nachieved using a truncated generalized normal distribution. Based on this, we\npropose a novel metric for accurate evaluation of GANs, named TREND (TRuncated\ngEneralized Normal Density estimation of inception embeddings). We demonstrate\nthat our approach significantly reduces errors of density estimation, which\nconsequently eliminates the risk of faulty evaluation results. Furthermore, we\nshow that the proposed metric significantly improves robustness of evaluation\nresults against variation of the number of image samples.\n"} {"abstract": " Unconstrained handwriting recognition is an essential task in document\nanalysis. It is usually carried out in two steps. First, the document is\nsegmented into text lines. Second, an Optical Character Recognition model is\napplied on these line images. 
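For orientation, here is a minimal sketch of this conventional two-step pipeline, with line segmentation done by a horizontal projection profile; `recognize_line` is a hypothetical stand-in for any line-level OCR model and is not part of the system proposed next.

```python
# An illustrative two-step pipeline: projection-profile line segmentation
# followed by per-line recognition (recognize_line is hypothetical).
import numpy as np

def segment_lines(page, ink_threshold=0.5):
    """Return (start_row, end_row) spans of consecutive rows containing ink."""
    has_ink = (page < ink_threshold).sum(axis=1) > 0
    spans, start = [], None
    for row, ink in enumerate(has_ink):
        if ink and start is None:
            start = row
        elif not ink and start is not None:
            spans.append((start, row))
            start = None
    if start is not None:
        spans.append((start, len(has_ink)))
    return spans

# transcript = [recognize_line(page[a:b]) for a, b in segment_lines(page)]
```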
We propose the Simple Predict & Align Network: an\nend-to-end recurrence-free Fully Convolutional Network performing OCR at\nparagraph level without any prior segmentation stage. The framework is as\nsimple as the one used for the recognition of isolated lines, and we achieve\ncompetitive results on three popular datasets: RIMES, IAM and READ 2016. The\nproposed model does not require any dataset adaptation, it can be trained from\nscratch, without segmentation labels, and it does not require line breaks in\nthe transcription labels. Our code and trained model weights are available at\nhttps://github.com/FactoDeepLearning/SPAN.\n"} {"abstract": " Determining the maximum size of a $t$-intersecting code in $[m]^n$ was a\nlongstanding open problem of Frankl and F\"uredi, solved independently by\nAhlswede and Khachatrian and by Frankl and Tokushige. We extend their result to\nthe setting of forbidden intersections, by showing that for any $m>2$ and $n$\nlarge compared with $t$ (but not necessarily $m$), the same bound holds for\ncodes with the weaker property of being $(t-1)$-avoiding, i.e.\ having no two\nvectors that agree on exactly $t-1$ coordinates. Our proof proceeds via a junta\napproximation result of independent interest, which we prove via a development\nof our recent theory of global hypercontractivity: we show that any\n$(t-1)$-avoiding code is approximately contained in a $t$-intersecting junta (a\ncode where membership is determined by a constant number of co-ordinates). In\nparticular, when $t=1$ this gives an alternative proof of a recent result of\nEberhard, Kahn, Narayanan and Spirkl that symmetric intersecting codes in\n$[m]^n$ have size $o(m^n)$.\n"} {"abstract": " This paper is directed to the financial community and focuses on the\nfinancial risks associated with climate change. It specifically addresses the\nestimation of climate risk embedded within a bank loan portfolio. During the 21st\ncentury, man-made carbon dioxide emissions in the atmosphere will raise global\ntemperatures, resulting in severe and unpredictable physical damage across the\nglobe. Another uncertainty associated with climate, known as the energy\ntransition risk, comes from the unpredictable pace of political and legal\nactions to limit its impact. The Climate Extended Risk Model (CERM) adapts\nwell-known credit risk models. It proposes a method to calculate incremental credit\nlosses on a loan portfolio that are rooted in physical and transition risks.\nThe document provides a detailed description of the model hypotheses and steps.\nThis work was initiated by the association Green RWA (Risk Weighted Assets). It\nwas written in collaboration with Jean-Baptiste Gaudemet, Anne Gruz, and\nOlivier Vinciguerra (cerm@greenrwa.org), who contributed their financial and\nrisk expertise, taking care of its application to a pilot-portfolio. It extends\nthe model proposed in a first white paper published by Green RWA\n(https://www.greenrwa.org/).\n"} {"abstract": " Recent studies have suggested that low-energy cosmic rays (CRs) may be\naccelerated inside molecular clouds by the shocks associated with star\nformation. We use a Monte Carlo transport code to model the propagation of CRs\naccelerated by protostellar accretion shocks through protostellar cores. We\ncalculate the CR attenuation and energy losses and compute the resulting flux\nand ionization rate as a function of both radial distance from the protostar\nand angular position. 
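The transport idea can be caricatured as follows: march each CR outward through the core and integrate a continuous energy loss against the traversed column density. The sketch below is a toy version under assumed placeholder density and loss functions, not the paper's Monte Carlo code.

```python
# A toy continuous-slowing-down sketch of CR attenuation along a radial ray;
# the density profile and loss function are illustrative placeholders.
import numpy as np

def propagate(e0_ev, r_cm, n_of_r, loss_per_column):
    """Return the CR energy at each radius along a purely radial path."""
    e = np.empty_like(r_cm)
    e[0] = e0_ev
    for i in range(1, len(r_cm)):
        column = n_of_r(r_cm[i]) * (r_cm[i] - r_cm[i - 1])   # cm^-2
        e[i] = max(e[i - 1] - loss_per_column(e[i - 1]) * column, 0.0)
    return e

r = np.logspace(15.0, 17.5, 200)                   # radii in cm
n_h = lambda rc: 1e6 * (rc / 1e15) ** -1.5         # cm^-3 (placeholder)
loss = lambda e: 1e-15 * (e / 1e6) ** -0.5         # eV cm^2 (placeholder)
print(propagate(1e8, r, n_h, loss)[-1])            # energy at the outer edge
```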
We show that protostellar cores have non-uniform CR\nfluxes that produce a broad range of CR ionization rates, with the maximum\nvalue being up to two orders of magnitude higher than the radial average at a\ngiven distance. In particular, the CR flux is focused in the direction of the\noutflow cavity, creating a 'flashlight' effect and allowing CRs to leak out of\nthe core. The radially averaged ionization rates are less than the measured\nvalue for the Milky Way of $\zeta \approx 10^{-16} \rm s^{-1}$; however, within\n$r \approx 0.03$ pc from the protostar, the maximum ionization rates exceed\nthis value. We show that variation in the protostellar parameters, particularly\nin the accretion rate, may produce ionization rates that are a couple of orders\nof magnitude higher or lower than our fiducial values. Finally, we use a\nstatistical method to model unresolved sub-grid magnetic turbulence in the\ncore. We show that turbulence modifies the CR spectrum and increases the\nuniformity of the CR distribution but does not significantly affect the\nresulting ionization rates.\n"} {"abstract": " We analyze the collision of three identical spin-polarized fermions at zero\ncollision energy, assuming arbitrary finite-range potentials, and define the\ncorresponding three-body scattering hypervolume $D_F$. The scattering\nhypervolume $D$ was first defined for identical bosons in 2008 by one of us. It\nis the three-body analog of the two-body scattering length. We solve the\nthree-body Schr\"{o}dinger equation asymptotically when the three fermions are\nfar apart or one pair and the third fermion are far apart, deriving two\nasymptotic expansions of the wave function. Unlike the case of bosons for which\n$D$ has the dimension of length to the fourth power, here the $D_F$ we define\nhas the dimension of length to the eighth power. We then analyze the\ninteraction energy of three such fermions with momenta $\hbar\mathbf{k}_1$,\n$\hbar\mathbf{k}_2$ and $\hbar\mathbf{k}_3$ in a large periodic cubic box. The\nenergy shift due to $D_F$ is proportional to $D_F/\Omega^2$, where $\Omega$ is\nthe volume of the box. We also calculate the shifts of energy and pressure of\nspin-polarized Fermi gases due to a nonzero $D_F$ and the three-body\nrecombination rate of spin-polarized ultracold atomic Fermi gases at finite\ntemperatures.\n"} {"abstract": " We present a dual-pathway approach for recognizing fine-grained interactions\nfrom videos. We build on the success of prior dual-stream approaches, but make\na distinction between the static and dynamic representations of objects and\ntheir interactions explicit by introducing separate motion and object detection\npathways. Then, using our new Motion-Guided Attention Fusion module, we fuse\nthe bottom-up features in the motion pathway with features captured from object\ndetections to learn the temporal aspects of an action. We show that our\napproach can generalize across appearance effectively and recognize actions\nwhere an actor interacts with previously unseen objects. We validate our\napproach using the compositional action recognition task from the\nSomething-Something-v2 dataset where we outperform existing state-of-the-art\nmethods. We also show that our method can generalize well to real-world tasks\nby showing state-of-the-art performance on recognizing humans assembling\nvarious IKEA furniture on the IKEA-ASM dataset.\n"} {"abstract": " We introduce a new type of inversion-free feedforward hysteresis control with\nthe Preisach operator. 
The feedforward control has a high-gain integral loop\nstructure with the Preisach operator in negative feedback. This allows\nobtaining a dynamic quantity which corresponds to the inverse hysteresis\noutput, since the loop error tends towards zero for a sufficiently high\nfeedback gain. By analyzing the loop sensitivity function with hysteresis,\nwhich acts as a non-constant phase lag, we show the achievable bandwidth and\naccuracy of the proposed control. A remarkable fact is that the control bandwidth\nis theoretically infinite, provided the integral feedback loop with the Preisach\noperator can be implemented with a smooth hysteresis output. Numerical control\nexamples with the Preisach hysteresis model in differential form are shown and\ndiscussed in detail.\n"} {"abstract": " One of the most suitable methods for modeling fully dynamic earthquake cycle\nsimulations is the spectral boundary integral element method (sBIEM), which\ntakes advantage of the fast Fourier transform (FFT) to make a complex numerical\ndynamic rupture tractable. However, this method has the serious drawback of\nrequiring a flat fault geometry due to the FFT approach. Here we present an\nanalytical formulation that extends the sBIEM to a mildly non-planar fault. We\nstart from a regularized boundary element method and apply a small-slope\napproximation of the fault geometry. Making this assumption, it is possible to\nshow that the main effect of non-planar fault geometry is to change the normal\ntraction along the fault, which is controlled by the local curvature along the\nfault. We then convert this space--time boundary integral equation of the\nnormal traction into a spectral-time formulation and incorporate this change in\nnormal traction into the existing sBIEM methodology. This approach allows us to\nmodel fully dynamic seismic cycle simulations on non-planar faults in a\nparticularly efficient way. We then test this method against a regular boundary\nintegral element method for both rough-fault and seamount fault geometries, and\ndemonstrate that this sBIEM maintains the scaling between the fault geometry\nand slip distribution.\n"} {"abstract": " Field-effect transistors made of wide-bandgap semiconductors can operate at\nhigh voltages, temperatures and frequencies with low energy losses, and have\nbeen of increasing importance in power and high-frequency electronics. However,\nthe poor performance of p-channel transistors compared with that of n-channel\ntransistors has constrained the production of energy-efficient complementary\ncircuits with integrated n- and p-channel transistors. The p-type surface\nconductivity of hydrogen-terminated diamond offers great potential for solving\nthis problem, but surface transfer doping, which is commonly believed to be\nessential for generating the conductivity, limits the performance of\ntransistors made of hydrogen-terminated diamond because it requires the\npresence of ionized surface acceptors, which cause hole scattering. Here, we\nreport on the fabrication of a p-channel wide-bandgap heterojunction field-effect\ntransistor consisting of a hydrogen-terminated diamond channel and a hexagonal\nboron nitride ($h$-BN) gate insulator, without relying on surface transfer\ndoping. 
Despite its reduced density of surface acceptors, the transistor has\nthe lowest sheet resistance ($1.4$ k$\Omega$) and largest on-current ($1600$\nmA mm$^{-1}$) among p-channel wide-bandgap transistors, owing to the\nhighest hole mobility (room-temperature Hall mobility: $680$\ncm$^2$V$^{-1}$s$^{-1}$). Importantly, the transistor also shows normally-off\nbehavior, with a high on/off ratio exceeding $10^8$. These characteristics are\nsuited for low-loss switching and can be explained on the basis of standard\ntransport and transistor models. This new approach to making diamond\ntransistors paves the way to future wide-bandgap semiconductor electronics.\n"} {"abstract": " A plasmon is a collective excitation of electrons due to the Coulomb\ninteraction. Both plasmons and single-particle excitations (SPEs) are\neigenstates of bulk metallic systems and they are orthogonal to each other.\nHowever, in non-translationally symmetric systems such as nanostructures,\nplasmons and SPEs coherently interact. It has been well discussed that the\nplasmons and SPEs, respectively, can couple with the transverse (T) electric field\nin such systems, and also that they are coupled with each other via the\nlongitudinal (L) field. However, there has been a missing link in the previous\nstudies: the coherent coupling between the plasmons and SPEs mediated by the T\nfield. Herein, we develop a theoretical framework to describe the\nself-consistent relationship between plasmons and SPEs through both the L and T\nfields. The excitations are described in terms of the charge and current\ndensities in a constitutive equation with a nonlocal susceptibility, where the\ndensities include the L and T components. The electromagnetic fields\noriginating from the densities are described in terms of the Green's function\nin the Maxwell equations. The T field is generated from both densities, whereas\nthe L component is attributed to the charge density only. We introduce a\nfour-vector representation incorporating the vector and scalar potentials in\nthe Coulomb gauge, in which the T and L fields are separated explicitly. The\neigenvalues of the matrix for the self-consistent equations appear as the poles\nof the system excitations. The developed formulation enables us to approach\nunknown mechanisms for the enhancement of the coherent coupling between plasmons\nand the hot carriers generated by radiative fields.\n"} {"abstract": " Lin and Wang defined a model of random walks on knot diagrams and interpreted\nthe Alexander polynomials and the colored Jones polynomials as Ihara zeta\nfunctions, i.e., zeta functions defined by counting cycles on the knot diagram.\nUsing this explanation, they gave a more conceptual proof for the Melvin-Morton\nconjecture. In this paper, we give an analogous zeta function expression for\nthe twisted Alexander invariants.\n"} {"abstract": " Recent achievements in depth prediction from a single RGB image have powered\nthe new research area of combining convolutional neural networks (CNNs) with\nclassical simultaneous localization and mapping (SLAM) algorithms. The depth\nprediction from a CNN provides a reasonable initial point in the optimization\nprocess in the traditional SLAM algorithms, while the SLAM algorithms further\nimprove the CNN prediction online. However, most of the current CNN-SLAM\napproaches have only taken advantage of the depth prediction but not yet other\nproducts from a CNN. 
In this work, we explore the use of the outlier mask, a\nby-product from unsupervised learning of depth from video, as a prior in a\nclassical probability model for depth estimate fusion to step up the\noutlier-resistant tracking performance of a SLAM front-end. On the other hand,\nsome of the previous CNN-SLAM work builds on feature-based sparse SLAM methods,\nwasting the per-pixel dense prediction from a CNN. In contrast to these sparse\nmethods, we devise a dense CNN-assisted SLAM front-end that is implementable\nwith TensorFlow and evaluate it on both indoor and outdoor datasets.\n"} {"abstract": " The analytical expansion of\nmonotone (contractive) Riemannian metrics (also called quantum Fisher\ninformation(s)) in terms of moments of the dynamical structure factor (DSF)\nrelative to an original intensive observable, proposed in J. Math. Phys. v.57, 071903\n(2016), is reconsidered and extended. The\nnew approach through the DSF, which fully characterizes the set of monotone\nRiemannian metrics on the space of Gibbs thermal states, is utilized to obtain\nan extension of the spectral presentation obtained for the Bogoliubov-Kubo-Mori\nmetric (the generalized isothermal susceptibility) to the entire class of\nmonotone Riemannian metrics. The obtained spectral presentation is the main\npoint of our consideration. The latter allows us to present the one-to-one\ncorrespondence between monotone Riemannian metrics and operator monotone\nfunctions (which is a statement of the Petz theorem in quantum information\ntheory) in terms of linear response theory. We show that monotone\nRiemannian metrics can be determined from the analysis of the infinite chain of\nequations of motion of the retarded Green's functions. Inequalities between the\ndifferent metrics have been obtained as well. This demonstrates that the\nanalysis of information-theoretic problems can benefit from concepts of\nstatistical mechanics, and vice versa, in ways that might cross-fertilize or extend both\ndirections. We illustrate the presented approach on the calculation of the\nentire class of monotone (contractive) Riemannian metrics on the examples of\nsome simple but instructive systems employed in various physical problems.\n"} {"abstract": " Mobile apps are increasingly relying on high-throughput and low-latency\ncontent delivery, while the available bandwidth on wireless access links is\ninherently time-varying. The handoffs between base stations and access modes\ndue to user mobility present additional challenges to deliver a high level of\nuser Quality-of-Experience (QoE). The ability to predict the available\nbandwidth and the upcoming handoffs will give applications valuable leeway to\nmake proactive adjustments to avoid significant QoE degradation. In this paper,\nwe explore the possibility and accuracy of real-time mobile bandwidth and\nhandoff predictions in 4G/LTE and 5G networks. Towards this goal, we collect\nlong consecutive traces with rich bandwidth, channel, and context information\nfrom public transportation systems. We develop Recurrent Neural Network models\nto mine the temporal patterns of bandwidth evolution in fixed-route mobility\nscenarios. Our models consistently outperform the conventional univariate and\nmultivariate bandwidth prediction models. For 4G \& 5G co-existing networks, we\npropose a new problem of handoff prediction between 4G and 5G, which is\nimportant for low-latency applications like self-driving strategies in realistic\n5G scenarios. 
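Framed as supervised learning, handoff prediction can be sketched as below: slide a window over past link measurements and label whether a handoff occurs within a short horizon. The window length, horizon, feature choice, toy data and classifier here are illustrative assumptions, not the models developed in the paper.

```python
# An illustrative framing of handoff prediction as windowed classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_windows(features, handoff_flags, window=20, horizon=5):
    X, y = [], []
    for t in range(window, len(features) - horizon):
        X.append(features[t - window:t].ravel())           # past samples
        y.append(int(handoff_flags[t:t + horizon].any()))  # handoff soon?
    return np.array(X), np.array(y)

# Toy stand-in data: one feature channel (e.g., RSRP) and sparse handoffs.
rng = np.random.default_rng(0)
signal = rng.normal(size=(2000, 1)).cumsum(axis=0)
flags = rng.random(2000) < 0.01
X, y = make_windows(signal, flags)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```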
We develop classification- and regression-based prediction models,\nwhich achieve more than 80\% accuracy in predicting 4G and 5G handoffs on a\nrecent 5G dataset.\n"} {"abstract": " We verify the leading order term in the asymptotic expansion conjecture of\nthe relative Reshetikhin-Turaev invariants proposed in \cite{WY4} for all pairs\n$(M,L)$ satisfying the properties that $M\setminus L$ is homeomorphic to some\nfundamental shadow link complement and the 3-manifold $M$ is obtained by doing\nrational Dehn filling on some boundary components of the fundamental shadow\nlink complement, under the assumptions that the denominators of the surgery\ncoefficients are odd and the cone angles are sufficiently small. In particular,\nthe asymptotics of the invariants captures the complex volume and the twisted\nReidemeister torsion of the manifold $M\setminus L$ associated with the\nhyperbolic cone structure determined by the sequence of colorings of the framed\nlink $L$.\n"} {"abstract": " A Leavitt labelled path algebra over a commutative unital ring is associated\nwith a labelled space, generalizing Leavitt path algebras associated with\ngraphs and ultragraphs as well as torsion-free commutative algebras generated\nby idempotents. We show that Leavitt labelled path algebras can be realized as\npartial skew group rings, Steinberg algebras, and Cuntz-Pimsner algebras. Via\nthese realizations we obtain generalized uniqueness theorems, a description of\ndiagonal-preserving isomorphisms and we characterize simplicity of Leavitt\nlabelled path algebras. In addition, we prove that a large class of partial\nskew group rings can be realized as Leavitt labelled path algebras.\n"} {"abstract": " We establish the first tight lower bound of $\Omega(\log\log\kappa)$ on the\nquery complexity of sampling from the class of strongly log-concave and\nlog-smooth distributions with condition number $\kappa$ in one dimension.\nWhereas existing guarantees for MCMC-based algorithms scale polynomially in\n$\kappa$, we introduce a novel algorithm based on rejection sampling that\ncloses this doubly exponential gap.\n"} {"abstract": " In this paper, we consider gradient-type methods for convex positively\nhomogeneous optimization problems with relative accuracy. An analogue of the\naccelerated universal gradient-type method for positively homogeneous\noptimization problems with relative accuracy is investigated. The second\napproach is related to subgradient methods with B. T. Polyak stepsize. A result\non the linear convergence rate of some methods of this type with adaptive step\nadjustment is obtained for a class of non-smooth problems. A\ngeneralization to a special class of non-convex non-smooth problems is also\nconsidered.\n"} {"abstract": " We develop a deep convolutional neural network (CNN) to deal with the blur\nartifacts caused by the defocus of the camera using dual-pixel images.\nSpecifically, we develop a double attention network which consists of\nattentional encoders, triple locals and global local modules to effectively\nextract useful information from each image in the dual-pixel pair, select the\nuseful information from each image, and synthesize the final output image. We\ndemonstrate the effectiveness of the proposed deblurring algorithm in terms of\nboth qualitative and quantitative aspects by evaluating on the test set in the\nNTIRE 2021 Defocus Deblurring using Dual-pixel Images Challenge. 
The code and\ntrained models are available at https://github.com/tuvovan/ATTSF.\n"} {"abstract": " As hardware architectures are evolving in the push towards exascale,\ndeveloping Computational Science and Engineering (CSE) applications depends on\nperformance-portable approaches for sustainable software development. This\npaper describes one aspect of performance portability with respect to\ndeveloping a portable library of kernels that serves the needs of several CSE\napplications and software frameworks. We describe Kokkos Kernels, a library of\nkernels for sparse linear algebra, dense linear algebra and graph computations.\nWe describe the design principles of such a library and demonstrate portable\nperformance of the library using selected kernels. Specifically, we demonstrate\nthe performance of four sparse kernels, three dense batched kernels, two graph\nkernels and one team-level algorithm.\n"} {"abstract": " Recent breakthroughs in Neural Architecture Search (NAS) have extended the\nfield's research scope towards a broader range of vision tasks and more\ndiversified search spaces. While existing NAS methods mostly design\narchitectures on a single task, algorithms that look beyond single-task search\nare surging to pursue a more efficient and universal solution across various\ntasks. Many of them leverage transfer learning and seek to preserve, reuse, and\nrefine network design knowledge to achieve higher efficiency in future tasks.\nHowever, the enormous computational cost and experiment complexity of\ncross-task NAS impose barriers to valuable research in this direction. Existing\nNAS benchmarks all focus on one type of vision task, i.e., classification. In\nthis work, we propose TransNAS-Bench-101, a benchmark dataset containing\nnetwork performance across seven tasks, covering classification, regression,\npixel-level prediction, and self-supervised tasks. This diversity provides\nopportunities to transfer NAS methods among tasks and allows for more complex\ntransfer schemes to evolve. We explore two fundamentally different types of\nsearch space: cell-level search space and macro-level search space. With 7,352\nbackbones evaluated on seven tasks, 51,464 trained models with detailed\ntraining information are provided. With TransNAS-Bench-101, we hope to\nencourage the advent of exceptional NAS algorithms that raise cross-task search\nefficiency and generalizability to the next level. Our dataset file will be\navailable at Mindspore, VEGA.\n"} {"abstract": " In offline reinforcement learning, a policy needs to be learned from a single\npre-collected dataset. Typically, policies are thus regularized during training\nto behave similarly to the data-generating policy, by adding a penalty based on\na divergence between the action distributions of the generating and trained\npolicies.
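A minimal sketch of the divergence-penalized objective just described, for discrete actions and a KL penalty; the penalty form, coefficient, and behavior-policy estimate are illustrative assumptions, and methods in this area differ in the exact divergence used.

```python
import torch
import torch.nn.functional as F

def penalized_policy_loss(q_values, logits, behavior_logits, beta=0.1):
    """Maximize expected Q under the trained policy while penalizing
    KL(trained policy || behavior policy), averaged over a state batch."""
    log_pi = F.log_softmax(logits, dim=-1)           # trained policy
    log_mu = F.log_softmax(behavior_logits, dim=-1)  # estimated behavior policy
    pi = log_pi.exp()
    expected_q = (pi * q_values).sum(-1)             # E_pi[Q(s, a)]
    kl = (pi * (log_pi - log_mu)).sum(-1)            # divergence penalty
    return (-expected_q + beta * kl).mean()

# toy usage: batch of 4 states, 3 discrete actions
q = torch.randn(4, 3)
loss = penalized_policy_loss(q, torch.randn(4, 3), torch.randn(4, 3))
```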
We propose a new algorithm, which constrains the policy directly in its weight\nspace instead, and demonstrate its effectiveness in experiments.\n"} {"abstract": " We study the entanglement between soft and hard particles produced in generic\nscattering processes in QED. The reduced density matrix for the hard particles,\nobtained via tracing over the entire spectrum of soft photons, is shown to have\na large eigenvalue, which governs the behavior of the Renyi entropies and of\nthe non-analytic part of the entanglement entropy at low orders in perturbation\ntheory. The leading perturbative entanglement entropy is logarithmically IR\ndivergent. The coefficient of the IR divergence exhibits certain universality\nproperties, irrespective of the dressing of the asymptotic charged particles\nand the detailed properties of the initial state. In a certain kinematical\nlimit, the coefficient is proportional to the cusp anomalous dimension in QED.\nFor Fock basis computations associated with two-electron scattering, we derive\nan exact expression for the large eigenvalue of the density matrix in terms of\nhard scattering amplitudes, which is valid at any finite order in perturbation\ntheory. As a result, the IR logarithmic divergences appearing in the\nexpressions for the Renyi and entanglement entropies persist at any finite\norder of the perturbative expansion. To all orders, however, the IR logarithmic\ndivergences exponentiate, rendering the large eigenvalue of the density matrix\nIR finite. The all-orders Renyi entropies (per unit time, per particle flux),\nwhich are shown to be proportional to the total inclusive cross-section in the\ninitial state, are also free of IR divergences. The entanglement entropy, on\nthe other hand, retains non-analytic, logarithmic behavior with respect to the\nsize of the box (which provides the IR cutoff) even to all orders in\nperturbation theory.\n"} {"abstract": " We introduce a tamed exponential time integrator which exploits linear terms\nin both the drift and diffusion for Stochastic Differential Equations (SDEs)\nwith a one-sided globally Lipschitz drift term. Strong convergence of the\nproposed scheme is proved by exploiting the boundedness of the geometric\nBrownian motion (GBM), and we establish order 1 convergence for linear\ndiffusion terms. In our implementation we illustrate the efficiency of the\nproposed scheme compared to existing fixed-step methods and utilize it in an\nadaptive time-stepping scheme. Furthermore, we extend the method to nonlinear\ndiffusion terms and show that it remains competitive. The efficiency of these\nGBM-based approaches is illustrated by considering some well-known SDE models.\n"} {"abstract": " Language models like BERT and SpanBERT pretrained on open-domain data have\nobtained impressive gains on various NLP tasks. In this paper, we probe the\neffectiveness of domain-adaptive pretraining objectives on downstream tasks. In\nparticular, three objectives, including a novel objective focusing on modeling\npredicate-argument relations, are evaluated on two challenging dialogue\nunderstanding tasks. Experimental results demonstrate that domain-adaptive\npretraining with proper objectives can significantly improve the performance of\na strong baseline on these tasks, achieving new state-of-the-art performance.\n"} {"abstract": " Studies evaluating bikeability usually compute spatial indicators shaping\ncycling conditions and conflate them into a quantitative index.
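To make the linear/nonlinear splitting in the tamed exponential integrator abstract above concrete: for an SDE dX = (aX + f(X))dt + bX dW, one plausible step propagates the linear (GBM) part exactly and tames the nonlinearity; the taming form and coefficients here are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def tamed_exp_step(x, h, a, b, f, dW):
    """One illustrative step: exact GBM flow for the linear terms a*x dt + b*x dW,
    plus a tamed increment for the one-sided Lipschitz nonlinearity f."""
    gbm = np.exp((a - 0.5 * b**2) * h + b * dW)   # exact linear/GBM factor
    tamed = h * f(x) / (1.0 + h * abs(f(x)))      # taming controls superlinear growth
    return gbm * (x + tamed)

# toy run: dX = (X - X^3) dt + 0.5 X dW, split as a=1, b=0.5, f(x) = -x^3
h, x = 1e-3, 1.0
for _ in range(1000):
    x = tamed_exp_step(x, h, 1.0, 0.5, lambda y: -y**3, rng.normal(0.0, np.sqrt(h)))
```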
Much research\ninvolves site visits or conventional geospatial approaches, and few studies\nhave leveraged street view imagery (SVI) for conducting virtual audits. These\nhave assessed a limited range of aspects, and not all have been automated using\ncomputer vision (CV). Furthermore, studies have not yet thoroughly gauged the\nusability of these technologies. We investigate, with experiments at a fine\nspatial scale and across multiple geographies (Singapore and Tokyo), whether we\ncan use SVI and CV to assess bikeability comprehensively. Extending related\nwork, we develop an exhaustive index of bikeability composed of 34 indicators.\nThe results suggest that SVI and CV are adequate to evaluate bikeability in\ncities comprehensively. As they outperformed non-SVI counterparts by a wide\nmargin, SVI indicators are also found to be superior in assessing urban\nbikeability, and potentially can be used independently, replacing traditional\ntechniques. However, the paper exposes some limitations, suggesting that the\nbest way forward is combining both SVI and non-SVI approaches. The new\nbikeability index presents a contribution to transportation and urban\nanalytics, and it is scalable to assess cycling appeal widely.\n"} {"abstract": " Using hydrodynamical simulations, we study how well the underlying\ngravitational potential of a galaxy cluster can be modelled dynamically with\ndifferent types of tracers. In order to segregate different systematics and the\neffects of varying estimator performances, we first focus on applying a generic\nminimal-assumption method (oPDF) to model the simulated haloes using the full\n6-D phase-space information. We show that the halo mass and concentration can\nbe recovered in an ensemble-unbiased way, with a stochastic bias that varies\nfrom halo to halo, mostly reflecting deviations from steady state in the tracer\ndistribution. The typical systematic uncertainty is $\sim 0.17$ dex in the\nvirial mass and $\sim 0.17$ dex in the concentration as well when dark matter\nparticles are used as tracers. The dynamical state of satellite galaxies is\nclose to that of dark matter particles, while intracluster stars are further\nfrom a steady state, resulting in a $\sim$ 0.26 dex systematic uncertainty in\nmass. Compared with galactic haloes hosting Milky-Way-like galaxies, cluster\nhaloes show a larger stochastic bias in the recovered mass profiles. We also\ntest the accuracy of using intracluster gas as a dynamical tracer modelled\nthrough a generalised hydrostatic equilibrium equation, and find a systematic\nuncertainty in the estimated mass comparable to that using dark matter. Lastly,\nwe demonstrate that our conclusions are largely applicable to other\nsteady-state dynamical models including the spherical Jeans equation, by\nquantitatively segregating their statistical efficiencies and robustness to\nsystematics. We also estimate the limiting number of tracers that leads to the\nsystematics-dominated regime in each case.\n"} {"abstract": " In this paper, a novel intelligent reflecting surface (IRS)-assisted wireless\npowered communication network (WPCN) architecture is proposed for\npower-constrained Internet-of-Things (IoT) smart devices, where the IRS is\nexploited to improve the performance of the WPCN under imperfect channel state\ninformation (CSI).
We formulate a hybrid access point (HAP) transmit energy\nminimization problem by jointly optimizing the time allocation, HAP energy\nbeamforming, receiving beamforming, user transmit power allocation, IRS energy\nreflection coefficient and information reflection coefficient under the\nimperfect CSI and a non-linear energy harvesting model. On account of the high\ncoupling of the optimization variables, the formulated problem is a non-convex\noptimization problem that is difficult to solve directly. To address this\nchallenging problem, the alternating optimization (AO) technique is applied to\ndecouple the optimization variables. Specifically, through AO, the time\nallocation, HAP energy beamforming, receiving beamforming, user transmit power\nallocation, IRS energy reflection coefficient and information reflection\ncoefficient are divided into three sub-problems to be solved alternately.\nDifference-of-convex (DC) programming is used to handle the non-convex rank-one\nconstraints when solving for the IRS energy and information reflection\ncoefficients. Numerical simulations verify the superiority of the proposed\noptimization algorithm in decreasing HAP transmit energy compared with other\nbenchmark schemes.\n"} {"abstract": " Ranking the participants of a tournament has applications in voting, paired\ncomparisons analysis, sports and other domains. In this paper we introduce\nbipartite tournaments, which model situations in which two different kinds of\nentity compete indirectly via matches against players of the opposite kind;\nexamples include education (students/exam questions) and solo sports\n(golfers/courses). In particular, we look to find rankings via chain graphs,\nwhich correspond to bipartite tournaments in which the sets of adversaries\ndefeated by the players on one side are nested with respect to set inclusion.\nTournaments of this form have a natural and appealing ranking associated with\nthem. We apply chain editing -- finding the minimum number of edge changes\nrequired to form a chain graph -- as a new mechanism for tournament ranking.\nThe properties of these rankings are investigated in a probabilistic setting,\nwhere they arise as maximum likelihood estimators, and through the axiomatic\nmethod of social choice theory. Despite some nice properties, two problems\nremain: an important anonymity axiom is violated, and chain editing is NP-hard.\nWe address both issues by relaxing the minimisation constraint in chain\nediting, and characterise the resulting ranking methods via a greedy\napproximation algorithm.\n"} {"abstract": " Studies on the interplay between the charge order and the $d$-wave\nsuperconductivity in the copper-oxide high $T_{\rm c}$ superconductors are\nreviewed, with a special emphasis on explorations based on the unconventional\nconcept of electron fractionalization and its consequences, supported by\nsolutions of high-accuracy quantum many-body solvers. The severe competition\nbetween the superconducting states and the charge inhomogeneity, including the\ncharge/spin striped states revealed by the quantum many-body solvers, is first\naddressed for the Hubbard models and then for the {\it ab initio} Hamiltonians\nof the cuprates derived without adjustable parameters to represent the\nlow-energy physics of the cuprates.
The charge inhomogeneity and\nsuperconductivity are born out of the same mother, namely, the carrier\nattraction arising from the strong Coulomb repulsion near the Mott insulator\n(Mottness) and the accompanying electron fractionalization. The same mother\nmakes the severe competition of the two brothers inevitable. The electron\nfractionalization has remarkable consequences for the mechanism of the\nsuperconductivity. Recent explorations motivated by the concept of the\nfractionalization and its consequences for experimental observations in\nenergy-momentum resolved spectroscopic measurements, including angle-resolved\nphotoemission spectroscopy (ARPES) and resonant inelastic X-ray spectroscopy\n(RIXS), are overviewed, with a future vision for integrated spectroscopy to\naddress the long-standing difficulties in the cuprates as well as in other\nstrongly correlated matter in general.\n"} {"abstract": " In recent years, speech processing algorithms have seen tremendous progress\nprimarily due to the deep learning renaissance. This is especially true for\nspeech separation, where the time-domain audio separation network (TasNet) has\nled to significant improvements. However, for the related task of\nsingle-speaker speech enhancement, which is of obvious importance, it is yet\nunknown whether the TasNet architecture is equally successful. In this paper,\nwe show that TasNet improves the state of the art also for speech enhancement,\nand that the largest gains are achieved for modulated noise sources such as\nspeech. Furthermore, we show that TasNet learns an efficient inner-domain\nrepresentation, where target and noise signal components are highly separable.\nThis is especially true for noise in terms of interfering speech signals, which\nmight explain why TasNet performs so well on the separation task. Additionally,\nwe show that TasNet performs poorly for large frame hops and conjecture that\naliasing might be the main cause of this performance drop. Finally, we show\nthat TasNet consistently outperforms a state-of-the-art single-speaker speech\nenhancement system.\n"} {"abstract": " We show that any a-priori possible entropy value is realized by an ergodic\nIRS, in free groups and in SL2(Z). This is in stark contrast to what may happen\nin SLn(Z) for n>2, where only the trivial entropy values can be realized by\nergodic IRSs.\n"} {"abstract": " Intense laser-plasma interactions are an essential tool for the laboratory\nstudy of ion acceleration at a collisionless shock. With two-dimensional\nparticle-in-cell calculations of a multicomponent plasma, we observe two\nelectrostatic collisionless shocks at two distinct longitudinal positions when\ndriven with a linearly-polarized laser at normalized laser vector potential a0\nthat exceeds 10. Moreover, these shocks, associated with protons and carbon\nions, show a power-law dependence on a0 and accelerate ions to different\nvelocities in an expanding upstream with higher flux than in a single-component\nhydrogen or carbon plasma. This results from an electrostatic ion two-stream\ninstability caused by differences in the charge-to-mass ratio of different\nions. Particle acceleration in collisionless shocks in multicomponent plasmas\nis ubiquitous in space and astrophysics, and these calculations identify the\npossibility of studying these complex processes in the laboratory.\n"} {"abstract": " Computer science has grown rapidly since its inception in the 1950s, and the\npioneers in the field are celebrated annually by the A.M. Turing Award.
In this\npaper, we attempt to shed light on the path to becoming an influential computer\nscientist by examining the characteristics of the 72 Turing Award laureates. To\nachieve this goal, we build a comprehensive dataset of the Turing Award\nlaureates and analyze their characteristics, including their personal\ninformation, family background, academic background, and industry experience.\nThe FP-Growth algorithm is used for frequent feature mining. Logistic\nregression plots, pie charts, word clouds and maps are generated for each of\nthe interesting features to uncover insights regarding personal factors that\ndrive influential work in the field of computer science. In particular, we show\nthat the Turing Award laureates are most commonly white, male, married, United\nStates citizens, and holders of a PhD degree. Our results also show that the\nage at which the laureate won the award increases over the years; most of the\nTuring Award laureates did not major in computer science; birth order is\nstrongly related to the winners' success; and the number of citations is not as\nimportant as one would expect.\n"} {"abstract": " Online misinformation is a prevalent societal issue, with adversaries relying\non tools ranging from cheap fakes to sophisticated deep fakes. We are motivated\nby the threat scenario where an image is used out of context to support a\ncertain narrative. While some prior datasets for detecting image-text\ninconsistency generate samples via text manipulation, we propose a dataset\nwhere both image and text are unmanipulated but mismatched. We introduce\nseveral strategies for automatically retrieving convincing images for a given\ncaption, capturing cases with inconsistent entities or semantic context. Our\nlarge-scale automatically generated NewsCLIPpings Dataset: (1) demonstrates\nthat machine-driven image repurposing is now a realistic threat, and (2)\nprovides samples that represent challenging instances of mismatch between text\nand image in news that are able to mislead humans. We benchmark several\nstate-of-the-art multimodal models on our dataset and analyze their performance\nacross different pretraining domains and visual backbones.\n"} {"abstract": " We review the equation of state of QCD matter at finite densities. We discuss\nthe construction of the equation of state with net baryon number, electric\ncharge, and strangeness using the results of lattice QCD simulations and hadron\nresonance gas models. Its application to the hydrodynamic analyses of\nrelativistic nuclear collisions suggests that the interplay of multiple\nconserved charges is important in the quantitative understanding of the dense\nnuclear matter created at lower beam energies. Several different models of the\nQCD equation of state are discussed for comparison.\n"} {"abstract": " The existence of two novel hybrid two-dimensional (2D) monolayers, 2D B3C2P3\nand 2D B2C4P2, has been predicted based on density functional theory\ncalculations. It has been shown that these materials possess structural and\nthermodynamic stability. 2D B3C2P3 is a moderate-band-gap semiconductor, while\n2D B2C4P2 is a zero-band-gap semiconductor. It has also been shown that 2D\nB3C2P3 has a highly tunable band gap under the effect of strain and substrate\nengineering. Moreover, 2D B3C2P3 produces low barriers for the dissociation of\nwater and hydrogen molecules on its surface, and shows fast recovery after\ndesorption of the molecules.
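For the frequent-feature-mining step mentioned in the Turing Award study above, a sketch using the fpgrowth implementation from the mlxtend package; the package choice and the one-hot feature names are illustrative assumptions, and the toy rows do not reproduce the laureate dataset.

```python
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

# toy one-hot table of hypothetical laureate features (illustrative only)
df = pd.DataFrame(
    [[1, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 1], [1, 1, 1, 1]],
    columns=["phd", "us_citizen", "married", "cs_major"],
).astype(bool)

# itemsets present in at least half of the rows
frequent = fpgrowth(df, min_support=0.5, use_colnames=True)
print(frequent.sort_values("support", ascending=False))
```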
The novel materials can be fabricated by carbon\ndoping of boron phosphide, and directly by arc discharge and laser ablation and\nvaporization. Applications of 2D B3C2P3 in renewable energy and straintronic\nnanodevices have been proposed.\n"} {"abstract": " The generalization performance of a machine learning algorithm such as a\nneural network depends in a non-trivial way on the structure of the data\ndistribution. To analyze the influence of data structure on test loss dynamics,\nwe study an exactly solvable model of stochastic gradient descent (SGD) on\nmean square loss which predicts test loss when training on features with\narbitrary covariance structure. We solve the theory exactly for both Gaussian\nfeatures and arbitrary features, and we show that the simpler Gaussian model\naccurately predicts the test loss of nonlinear random-feature models and deep\nneural networks trained with SGD on real datasets such as MNIST and CIFAR-10.\nWe show that the optimal batch size at a fixed compute budget is typically\nsmall and depends on the feature correlation structure, demonstrating the\ncomputational benefits of SGD with small batch sizes. Lastly, we extend our\ntheory to the more usual setting of stochastic gradient descent on a fixed\nsubsampled training set, showing that both training and test error can be\naccurately predicted in our framework on real data.\n"} {"abstract": " As high-performance organic semiconductors, $\pi$-conjugated polymers have\nattracted much attention due to their appealing advantages, including low cost,\nsolution processability, mechanical flexibility, and tunable optoelectronic\nproperties. During the past several decades, great advances have been made in\npolymer-based OFETs with p-type, n-type or even ambipolar characteristics.\nThrough chemical modification and alignment optimization, many conjugated\npolymers have exhibited superior mobilities, some even larger than 10 cm$^2$\nV$^{-1}$ s$^{-1}$ in OFETs, which makes them very promising for applications in\norganic electronic devices. This review describes the recent progress of\nhigh-performance polymers used in OFETs from the aspects of molecular design\nand assembly strategy. Furthermore, the current challenges and outlook in the\ndesign and development of conjugated polymers are also discussed.\n"} {"abstract": " We consider convex, black-box objective functions with additive or\nmultiplicative noise with a high-dimensional parameter space and a data space\nof lower dimension, where gradients of the map exist but may be inaccessible.\nWe investigate Derivative-Free Optimization (DFO) in this setting and propose a\nnovel method, Active STARS (ASTARS), based on STARS (Chen and Wild, 2015) and\ndimension reduction in parameter space via Active Subspace (AS) methods\n(Constantine, 2015). STARS hyperparameters are inversely proportional to the\nknown dimension of the parameter space, resulting in heavy smoothing and small\nstep sizes for large dimensions. When possible, ASTARS leverages a\nlower-dimensional AS, defining a set of directions in parameter space causing\nthe majority of the variance in function values. ASTARS iterates are updated\nwith steps only taken in the AS, reducing the value of the objective function\nmore efficiently than STARS, which updates iterates in the full parameter\nspace.
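A minimal sketch of one ASTARS-style iteration under the description above: a forward-difference smoothing step whose random direction is confined to a known active subspace; the smoothing radius and step size below are illustrative stand-ins for the tuned STARS hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(2)

def astars_step(f, x, W, mu=1e-2, h=1e-3):
    """One illustrative ASTARS-style update: sample a direction in the active
    subspace spanned by the columns of W, estimate the directional derivative
    by a forward difference, and take a gradient-like step along it."""
    u = W @ rng.normal(size=W.shape[1])    # random direction inside the AS
    g = (f(x + mu * u) - f(x)) / mu        # noisy directional derivative
    return x - h * g * u

# toy use: a 50-D noisy function that varies only along 2 active directions
W, _ = np.linalg.qr(rng.normal(size=(50, 2)))
f = lambda x: float((W.T @ x) @ (W.T @ x)) + 1e-6 * rng.normal()
x = rng.normal(size=50)
for _ in range(500):
    x = astars_step(f, x, W)
```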
Computational costs\nmay be reduced further by learning ASTARS hyperparameters and the AS, reducing\nthe total evaluations of the objective function and eliminating the requirement\nthat the user specify hyperparameters, which may be unknown in our setting. We\ncall this method Fully Automated ASTARS (FAASTARS). We show that STARS and\nASTARS will both converge -- with a certain complexity -- even with inexact,\nestimated hyperparameters. We also find that FAASTARS converges with the use of\nestimated AS's and hyperparameters. We explore the effectiveness of ASTARS and\nFAASTARS in numerical examples that compare ASTARS and FAASTARS to STARS.\n"} {"abstract": " While several non-pharmacological measures have been implemented for a few\nmonths in an effort to slow the coronavirus disease (COVID-19) pandemic in the\nUnited States, the disease remains a danger in a number of counties as\nrestrictions are lifted to revive the economy. Making a trade-off between\neconomic recovery and infection control is a major challenge confronting many\nhard-hit counties. Understanding the transmission process and quantifying the\ncosts of local policies are essential to the task of tackling this challenge.\nHere, we investigate the dynamic contact patterns of the populations from\nanonymized, geo-localized mobility data and census and demographic data to\ncreate data-driven, agent-based contact networks. We then simulate the epidemic\nspread with a time-varying contagion model in ten large metropolitan counties\nin the United States and evaluate a combination of mobility reduction, mask\nuse, and reopening policies. We find that our model captures the\nspatial-temporal and heterogeneous case trajectory within various counties\nbased on dynamic population behaviors. Our results show that a decision-making\ntool that considers both the economic cost and infection outcomes of policies\ncan be informative in making decisions about local containment strategies for\nan optimal balance of economic slowdown and virus spread.\n"} {"abstract": " We propose a comprehensive field-based semianalytical method for designing\nfabrication-ready multifunctional periodic metasurfaces (MSs). Harnessing\nrecent work on multielement metagratings based on capacitively-loaded strips,\nwe have extended our previous meta-atom design formulation to generate\nrealistic substrate-supported printed-circuit-board layouts for anomalous\nrefraction MSs. Subsequently, we apply a greedy algorithm for iteratively\noptimizing individual scatterers across the entire macroperiod to achieve\nmultiple design goals for corresponding multiple incidence angles with a single\nMS structure. As verified with commercial solvers, the proposed semianalytical\nscheme, properly accounting for near-field coupling between the various\nscatterers, can reliably produce highly efficient multifunctional MSs on\ndemand, without requiring time-consuming full-wave optimization.\n"} {"abstract": " We consider the quasi-one-dimensional (quasi-1D) model of a sonic black hole\nin a dilute Bose-Einstein condensate. It is shown that an accurate treatment of\nthe dimensional reduction to quasi-1D leads to a finite condensate quantum\ndepletion even for axially infinite systems, and an intrinsic nonthermality of\nthe black hole radiation spectrum. By calculating the depletion, we derive a\n{\em first-order} many-body signature of the sonic horizon, represented by a\ndistinct peak in the depletion power spectrum.
This peak constitutes a readily\naccessible experimental tool to detect quantum sonic horizons even when a\nnegligible Hawking radiation flux is created by the black hole.\n"} {"abstract": " The study of strong-lensing systems conventionally involves constructing a\nmass distribution that can reproduce the observed multiply-imaging properties.\nSuch mass reconstructions are generically non-unique. Here, we present an\nalternative strategy: instead of modelling the mass distribution, we search\ncosmological galaxy-formation simulations for plausible matches. In this paper\nwe test the idea on seven well-studied lenses from the SLACS survey. For each\nof these, we first pre-select a few hundred galaxies from the EAGLE\nsimulations, using the expected Einstein radius as an initial criterion. Then,\nfor each of these pre-selected galaxies, we fit for the source light\ndistribution, while using MCMC for the placement and orientation of the lensing\ngalaxy, so as to reproduce the multiple images and arcs. The results indicate\nthat the strategy is feasible, and even yields relative posterior probabilities\nof two different galaxy-formation scenarios, though these are not statistically\nsignificant yet. Extensions to other observables, such as kinematics and\ncolours of the stellar population in the lensing galaxy, are straightforward in\nprinciple, though we have not attempted them yet. Scaling to arbitrarily large\nnumbers of lenses also appears feasible. This will be especially relevant for\nupcoming wide-field surveys, through which the number of galaxy lenses will\nrise possibly a hundredfold, which will overwhelm conventional modelling\nmethods.\n"} {"abstract": " In supervised learning for medical image analysis, sample selection\nmethodologies are fundamental to attain optimum system performance promptly and\nwith minimal expert interactions (e.g. label querying in an active learning\nsetup). In this paper we propose a novel sample selection methodology based on\ndeep features leveraging information contained in interpretability saliency\nmaps. In the absence of ground truth labels for informative samples, we use a\nnovel self-supervised learning-based approach for training a classifier that\nlearns to identify the most informative sample in a given batch of images. We\ndemonstrate the benefits of the proposed approach, termed\nInterpretability-Driven Sample Selection (IDEAL), in an active learning setup\naimed at lung disease classification and histopathology image segmentation. We\nanalyze three different approaches to determine sample informativeness from\ninterpretability saliency maps: (i) an observational model stemming from\nfindings on previous uncertainty-based sample selection approaches, (ii) a\nradiomics-based model, and (iii) a novel data-driven self-supervised approach.\nWe compare IDEAL to other baselines using the publicly available NIH chest\nX-ray dataset for lung disease classification, and a public histopathology\nsegmentation dataset (GLaS), demonstrating the potential of using\ninterpretability information for sample selection in active learning systems.\nResults show our proposed self-supervised approach outperforms other approaches\nin selecting informative samples, leading to state-of-the-art performance with\nfewer samples.\n"} {"abstract": " In order to remain competitive, Internet companies collect and analyse user\ndata for the purpose of improving user experiences.
Frequency estimation is a\nwidely used statistical tool which could potentially conflict with the relevant\nprivacy regulations. Privacy-preserving analytic methods based on differential\nprivacy have been proposed, which either require a large user base or a trusted\nserver, and hence may give big companies an unfair advantage while handicapping\nsmaller organizations in their growth opportunity. To address this issue, this\npaper proposes a fair privacy-preserving sampling-based frequency estimation\nmethod and provides a relation between its privacy guarantee, output accuracy,\nand number of participants. We design decentralized privacy-preserving\naggregation mechanisms using multi-party computation techniques and establish\nthat, for a limited number of participants and a fixed privacy level, our\nmechanisms perform better than those based on traditional perturbation methods,\nand hence provide smaller companies a fair growth opportunity. We further\npropose an architectural model to support weighted aggregation in order to\nachieve a higher-accuracy estimate catering for users with different privacy\nrequirements. Compared to unweighted aggregation, our method provides a more\naccurate estimate. Extensive experiments are conducted to show the\neffectiveness of the proposed methods.\n"} {"abstract": " We study directions along which the norms of vectors are preserved under a\nlinear map. In particular, we find families of matrices for which these\ndirections are determined by integer vectors. We consider the two-dimensional\ncase in detail, and also discuss the extension to the three-dimensional case.\n"} {"abstract": " Currently, soil-structure interaction in energy piles has not been understood\nthoroughly. One of the important underlying features is the effect of tip and\nhead restraints on displacement, strain and stress in energy piles. This study\ninvestigates the thermo-mechanical response of energy piles subjected to\ndifferent end restraints by using recently found analytical solutions, thus\nproviding a fundamental, rational, mechanics-based understanding. End\nrestraints are found to have a substantial effect on the thermo-mechanical\nresponse of energy piles, especially on thermal axial displacement and axial\nstress in the pile. The head restraint imposed by the interaction of an energy\npile with the superstructure led to a decrease in the magnitude of head\ndisplacement and an increase in axial stress, while decreasing the axial\nstrain. The impact of the head restraint was more pronounced in end-bearing\nthan in fully floating energy piles.\n"} {"abstract": " We model the 21cm power spectrum across the Cosmic Dawn and the Epoch of\nReionization (EoR) in fuzzy dark matter (FDM) cosmologies. The suppression of\nsmall mass halos in FDM models leads to a delay in the onset redshift of these\nepochs relative to cold dark matter (CDM) scenarios. This strongly impacts the\n21cm power spectrum and its redshift evolution. The 21cm power spectrum at a\ngiven stage of the EoR/Cosmic Dawn process is also modified: in general, the\namplitude of 21cm fluctuations is boosted by the enhanced bias factor of\ngalaxy-hosting halos in FDM. We forecast the prospects for discriminating\nbetween CDM and FDM with upcoming power spectrum measurements from HERA,\naccounting for degeneracies between astrophysical parameters and dark matter\nproperties.
If FDM constitutes the entirety of the dark matter and the FDM particle mass is\n$10^{-21}$ eV, HERA can determine the mass to within 20 percent at 2-sigma\nconfidence.\n"} {"abstract": " We evaluate in closed form several series involving products of Cauchy\nnumbers with other special numbers (harmonic, skew-harmonic, hyperharmonic, and\ncentral binomial). Similar results are obtained with series involving Stirling\nnumbers of the first kind. We focus on several particular cases which give new\nclosed forms for Euler sums of hyperharmonic numbers and products of\nhyperharmonic and harmonic numbers.\n"} {"abstract": " Anisotropic outgassing from comets exerts a torque sufficient to rapidly\nchange the angular momentum of the nucleus, potentially leading to rotational\ninstability. Here, we use empirical measures of spin changes in a sample of\ncomets to characterize the torques and to compare them with expectations from a\nsimple model. Both the data and the model show that the characteristic spin-up\ntimescale, $\tau_s$, is a strong function of nucleus radius, $r_n$.\nEmpirically, we find that the timescale for comets (most with perihelion 1 to 2\nAU and eccentricity $\sim$0.5) varies as $\tau_s \sim 100 r_n^{2}$, where $r_n$\nis expressed in kilometers and $\tau_s$ is in years. The fraction of the\nnucleus surface that is active varies as $f_A \sim 0.1 r_n^{-2}$. We find that\nthe median value of the dimensionless moment arm of the torque is $k_T$ = 0.007\n(i.e. $\sim$0.7\% of the escaping momentum torques the nucleus), with weak\n($<$3$\sigma$) evidence for a size dependence $k_T \sim 10^{-3} r_n^2$.\nSub-kilometer nuclei have spin-up timescales comparable to their orbital\nperiods, confirming that outgassing torques are quickly capable of driving\nsmall nuclei towards rotational disruption. Torque-induced rotational\ninstability likely accounts for the paucity of sub-kilometer short-period\ncometary nuclei, and for the pre-perihelion destruction of sungrazing comets.\nTorques from sustained outgassing on small active asteroids can rival YORP\ntorques, even for very small ($\lesssim$1 g s$^{-1}$) mass loss rates. Finally,\nwe highlight the important role played by observational biases in the measured\ndistributions of $\tau_s$, $f_A$ and $k_T$.\n"} {"abstract": " We study the stochastic bilinear minimax optimization problem, presenting an\nanalysis of the same-sample Stochastic ExtraGradient (SEG) method with constant\nstep size, and present variations of the method that yield favorable\nconvergence. In sharp contrast with the basic SEG method, whose last iterate\nonly contracts to a fixed neighborhood of the Nash equilibrium, SEG augmented\nwith iteration averaging provably converges to the Nash equilibrium under the\nsame standard settings, and such a rate is further improved by incorporating a\nscheduled restarting procedure. In the interpolation setting where noise\nvanishes at the Nash equilibrium, we achieve an optimal convergence rate up to\ntight constants. We present numerical experiments that validate our theoretical\nfindings and demonstrate the effectiveness of the SEG method when equipped with\niteration averaging and restarting.\n"} {"abstract": " The polarization properties of elastic electron scattering on H-like ions\nare investigated within the framework of relativistic QED theory. The\npolarization properties are determined by a combination of relativistic effects\nand spin exchange between the incident and bound electrons.
The scattering of a\npolarized electron on an initially unpolarized ion is fully described by five\nparameters. We study these parameters for non-resonant scattering, as well as\nin the vicinity of LL resonances, where scattering occurs through the formation\nand subsequent decay of intermediate autoionizing states. The study was carried\nout for ions from $\text{B}^{4+}$ to $\text{Xe}^{53+}$. Special attention was\npaid to the study of asymmetry in electron scattering.\n"} {"abstract": " Dynamical quantum phase transitions (DQPTs), which refer to criticality in\ntime of a quantum many-body system, have attracted much theoretical and\nexperimental research interest recently. Although DQPTs are defined and\nsignalled by the non-analyticities in the Loschmidt rate, their interrelation\nwith various correlation measures, such as the equilibrium order parameters of\nthe system, remains unclear. In this work, by considering the quench dynamics\nin an interacting topological model, we find that the equilibrium order\nparameters of the model in general exhibit signatures around the DQPT, in the\nshort-time regime. The first extrema of the equilibrium order parameters are\nconnected to the first Loschmidt rate peak. By studying the unequal-time\ntwo-point correlation, we also find that the correlation between the nearest\nneighbors decays while that with neighbors further away builds up as time grows\nin the non-interacting case, and upon the addition of repulsive intra-cell\ninteractions. On the other hand, the inter-cell interaction tends to suppress\nthe two-site correlations. These findings provide insights into the\ncharacteristics of the system around DQPTs, and pave the way to a better\nunderstanding of the dynamics in non-equilibrium quantum many-body systems.\n"} {"abstract": " Ensemble learning methods are designed to benefit from multiple learning\nalgorithms for better predictive performance. The tradeoff of this improved\nperformance is slower speed and larger size of ensemble learning systems\ncompared to single learning systems. In this paper, we present a novel approach\nto deal with this problem in Random Forest (RF), one of the most powerful\nensemble methods. The method is based on crossbreeding of the best tree\nbranches to improve the performance of RF in space and speed while maintaining\nperformance on the classification measures. The proposed approach has been\ntested on a group of synthetic and real datasets and compared to the standard\nRF approach. Several evaluations have been conducted to determine the effects\nof the Crossbred RF (CRF) on the accuracy and the number of trees in a forest.\nThe results show better performance of CRF compared to RF.\n"} {"abstract": " The structure of finite self-assembling systems depends sensitively on the\nnumber of constituent building blocks. Recently, it was demonstrated that hard\nsphere-like colloidal particles show a magic number effect when confined in\nspherical emulsion droplets. Geometric construction rules permit a few dozen\nmagic numbers that correspond to a discrete series of completely filled\nconcentric icosahedral shells. Here, we investigate the free energy landscape\nof these colloidal clusters as a function of the number of their constituent\nbuilding blocks for system sizes up to several thousand particles. We find that\nminima in the free energy landscape, arising from the presence of filled,\nconcentric shells, are significantly broadened.
In contrast to their atomic\nanalogues, colloidal clusters in spherical confinement can flexibly accommodate\nexcess colloids by ordering icosahedrally in the cluster center while changing\nthe structure near the cluster surface. In between these magic number regions,\nthe building blocks cannot arrange into filled shells. Instead, we observe that\ndefects accumulate in a single wedge and therefore only affect a few\ntetrahedral grains of the cluster. We predict the existence of this wedge by\nsimulation and confirm its presence in experiment using electron tomography.\nThe introduction of the wedge minimizes the free energy penalty by confining\ndefects to small regions within the cluster. In addition, the remaining ordered\ntetrahedral grains can relax internal strain by breaking icosahedral symmetry.\nOur findings demonstrate how multiple defect mechanisms collude to form the\ncomplex free energy landscape of hard sphere-like colloidal clusters.\n"} {"abstract": " We prove sufficient conditions for the convergence of a certain iterative\nprocess of order 2 for solving nonlinear functional equations, which does not\nrequire inverting the derivative. We translate and detail our results for a\nsystem of nonlinear equations, and apply them to a numerical example that\nillustrates our theorems.\n"} {"abstract": " A percolation transition (PT) denotes the formation of a macroscopic-scale\nlarge cluster, which exhibits a continuous transition. However, when the growth\nof large clusters is globally suppressed, the type of PT changes to a\ndiscontinuous transition for random networks. A question arises as to whether\nthe type of PT also changes for scale-free (SF) networks, because the existence\nof hubs incites the formation of a giant cluster. Here, we apply a global\nsuppression rule to the static model for SF networks, and investigate\nproperties of the PT. We find that even for SF networks with degree exponent $2\n< \lambda <3$, a hybrid PT occurs at a finite transition point $t_c$, which we\ncan control by the suppression strength. The order parameter jumps at $t_c^-$\nand exhibits critical behavior at $t_c^+$.\n"} {"abstract": " Clemm and Trebat-Leder (2014) proved that the number of quadratic number\nfields with absolute discriminant bounded by $x$ over which there exist\nelliptic curves with good reduction everywhere and rational $j$-invariant is\n$\gg x\log^{-1/2}(x)$. In this paper, we assume the $abc$-conjecture to show\nthe sharp asymptotic $\sim cx\log^{-1/2}(x)$ for this number, obtaining\nformulae for $c$ in both the real and imaginary cases. Our method has three\ningredients:\n (1) We make progress towards a conjecture of Granville: Given a fixed\nelliptic curve $E/\mathbb{Q}$ with short Weierstrass equation $y^2 = f(x)$ for\nreducible $f \in \mathbb{Z}[x]$, we show that the number of integers $d$, $|d|\n\leq D$, for which the quadratic twist $dy^2 = f(x)$ has an integral\nnon-$2$-torsion point is at most $D^{2/3+o(1)}$, assuming the $abc$-conjecture.\n (2) We apply the Selberg--Delange method to obtain a Tauberian theorem which\nallows us to count integers satisfying certain congruences while also being\ndivisible only by certain primes.\n (3) We show that for a polynomially sparse subset of the natural numbers, the\nnumber of pairs of elements with least common multiple at most $x$ is\n$O(x^{1-\epsilon})$ for some $\epsilon > 0$.
We also exhibit a matching lower\nbound.\n If instead of the $abc$-conjecture we assume a particular tail bound, we can\nprove all the aforementioned results and that the coefficient $c$ above is\ngreater in the real quadratic case than in the imaginary quadratic case, in\nagreement with an experimentally observed bias.\n"} {"abstract": " A key challenge towards the goal of multi-part assembly tasks is finding\nrobust sensorimotor control methods in the presence of uncertainty. In contrast\nto previous works that rely on a priori knowledge of whether two parts match,\nwe aim to learn this through physical interaction. We propose a hierarchical\napproach that enables a robot to autonomously assemble parts while being\nuncertain about part types and positions. In particular, our probabilistic\napproach learns a set of differentiable filters that leverage the tactile\nsensorimotor trace from failed assembly attempts to update its belief about\npart position and type. This enables a robot to overcome assembly failure. We\ndemonstrate the effectiveness of our approach on a set of object fitting tasks.\nThe experimental results indicate that our proposed approach achieves higher\nprecision in object position and type estimation, and accomplishes object\nfitting tasks faster than baselines.\n"} {"abstract": " The pairing of two electrons on a Fermi surface due to an infinitesimal\nattraction between them always results in a superconducting instability at zero\ntemperature ($T=0$). The equivalent question of pairing instability on a\nLuttinger surface (LS) -- a contour of zeros of the propagator -- instead leads\nto a quantum critical point (QCP) that separates a non-Fermi liquid (NFL) and\nsuperconductor. A surprising and little understood aspect of pair fluctuations\nat this QCP is that their thermodynamics maps to that of the Sachdev-Ye-Kitaev\n(SYK) model in the strong coupling limit. Here, we offer a simple justification\nfor this mapping by demonstrating that (i) LS models share the\nreparametrization symmetry of the $q\rightarrow \infty$ SYK model with $q$-body\ninteractions close to the LS, and (ii) the enforcement of gauge invariance\nresults in a $\frac{1}{\sqrt{\tau}}$ ($\tau\sim T^{-1}$) behavior of the\nfluctuation propagator near the QCP, as is a feature of the fundamental SYK\nfermion.\n"} {"abstract": " We independently determine the zero-point offset of the Gaia early Data\nRelease-3 (EDR3) parallaxes based on $\sim 110,000$ W Ursae Majoris (EW)-type\neclipsing binary systems. EWs cover almost the entire sky and are characterized\nby relatively complete coverage in magnitude and color. They are an excellent\nproxy for Galactic main-sequence stars. We derive a $W1$-band Period-Luminosity\nrelation with a distance accuracy of $7.4\%$, which we use to anchor the Gaia\nparallax zero-point. The final, global parallax offsets are $-28.6\pm0.6$\n$\mu$as and $-25.4\pm4.0$ $\mu$as (before correction) and $4.2\pm0.5$ $\mu$as\nand $4.6\pm3.7$ $\mu$as (after correction) for the five- and six-parameter\nsolutions, respectively. The total systematic uncertainty is $1.8$ $\mu$as. The\nspatial distribution of the parallax offsets shows that the bias in the\ncorrected Gaia EDR3 parallaxes is less than 10 $\mu$as across $40\%$ of the\nsky. Only $15\%$ of the sky is characterized by a parallax offset greater than\n30 $\mu$as.
Thus, we have provided independent evidence that the parallax\nzero-point correction provided by the Gaia team significantly reduces the\nprevailing bias. Combined with literature data, we find that the overall Gaia\nEDR3 parallax offsets for Galactic stars are $[-20, -30]$ $\mu$as and 4-10\n$\mu$as, respectively, before and after correction. For specific regions, an\nadditional deviation of about 10 $\mu$as is found.\n"} {"abstract": " The lattice Boltzmann method (LBM) has recently emerged as an efficient\nalternative to classical Navier-Stokes solvers. This is particularly true for\nhemodynamics in complex geometries. However, in its most basic formulation,\ni.e., with the so-called single relaxation time (SRT) collision operator, it\nhas been observed to have a limited stability domain in the Courant/Fourier\nspace, strongly constraining the minimum time-step and grid size. The\ndevelopment of improved collision models such as the multiple relaxation time\n(MRT) operator in central moments space has tremendously widened the stability\ndomain, while overcoming a number of other well-documented artifacts, therefore\nopening the door for simulations over a wider range of grid and time-step\nsizes. The present work focuses on implementing and validating a specific\ncollision operator, the central Hermite moments multiple relaxation time model\nwith the full expansion of the equilibrium distribution function, to simulate\nblood flows in intracranial aneurysms. The study further proceeds with a\nvalidation of the numerical model through different test-cases and against\nexperimental measurements obtained via stereoscopic particle image velocimetry\n(PIV) and phase-contrast magnetic resonance imaging (PC-MRI). For a\npatient-specific aneurysm, both PIV and PC-MRI agree fairly well with the\nsimulation. Finally, low-resolution simulations were shown to be able to\ncapture blood flow information with sufficient accuracy, as demonstrated\nthrough both qualitative and quantitative analysis of the flow field, while\nleading to strongly reduced computation times. For instance, in the case of the\npatient-specific configuration, increasing the grid size by a factor of two led\nto a reduction of computation time by a factor of 14, with very good similarity\nindices still ranging from 0.83 to 0.88.\n"} {"abstract": " We consider correlations, $p_{n,x}$, arising from measuring a maximally\nentangled state using $n$ measurements with two outcomes each, constructed from\n$n$ projections that add up to $xI$. We show that the correlations $p_{n,x}$\nrobustly self-test the underlying states and measurements. To achieve this, we\nlift the group-theoretic Gowers-Hatami based approach for proving robust\nself-tests to a more natural algebraic framework. A key step is to obtain an\nanalogue of the Gowers-Hatami theorem allowing one to perturb an "approximate"\nrepresentation of the relevant algebra to an exact one.\n For $n=4$, the correlations $p_{n,x}$ self-test the maximally entangled state\nof every odd dimension as well as 2-outcome projective measurements of\narbitrarily high rank. The only other family of constant-sized self-tests for\nstrategies of unbounded dimension is due to Fu (QIP 2020), who presents such\nself-tests for an infinite family of maximally entangled states with even local\ndimension.
Therefore, we are the first to exhibit a constant-sized self-test\nfor measurements of unbounded dimension as well as all maximally entangled\nstates with odd local dimension.\n"} {"abstract": " Strong lensing of gravitational waves (GWs) is attracting growing attention\nin the community. The event rates of GWs lensed by galaxies have been predicted\nin numerous papers, which used some approximations to evaluate the GW strains\ndetectable by a single detector. The joint detection of GW signals by a network\nof instruments will increase the ability to detect fainter and farther GW\nsignals, which could increase the detection rate of lensed GWs, especially for\nthe 3rd generation detectors, e.g., Einstein Telescope (ET) and Cosmic Explorer\n(CE). Moreover, realistic GW templates will improve the accuracy of the\nprediction. In this work, we consider the detection of galaxy-scale lensed GW\nevents under the 2nd, 2.5th, and 3rd generation detectors with the network\nscenarios and adopt realistic templates to simulate GW signals. Our forecast is\nbased on the Monte Carlo technique, which enables us to take Earth's rotation\ninto consideration. We find that the overall detection rate is improved,\nespecially for the 3rd generation detector scenarios. More precisely, it\nincreases by ~37% when adopting realistic templates and, under the network\ndetection strategy, increases by a further ~58% compared with adopting\nrealistic templates alone; we estimate that the 3rd generation GW detectors\nwill detect hundreds of lensed events per year. The effect of the Earth's\nrotation is weakened in the detector network strategy.\n"} {"abstract": " Multi-head attention has each of the attention heads collect salient\ninformation from different parts of an input sequence, making it a powerful\nmechanism for sequence modeling. Multilingual and multi-domain learning are\ncommon scenarios for sequence modeling, where the key challenge is to maximize\npositive transfer and mitigate negative transfer across languages and domains.\nIn this paper, we find that non-selective attention sharing is sub-optimal for\nachieving good generalization across all languages and domains. We further\npropose attention sharing strategies to facilitate parameter sharing and\nspecialization in multilingual and multi-domain sequence modeling. Our approach\nautomatically learns shared and specialized attention heads for different\nlanguages and domains to mitigate their interference. Evaluated in various\ntasks including speech recognition, text-to-text and speech-to-text\ntranslation, the proposed attention sharing strategies consistently bring gains\nto sequence models built upon multi-head attention. For speech-to-text\ntranslation, our approach yields an average of $+2.0$ BLEU over $13$ language\ndirections in the multilingual setting and $+2.0$ BLEU over $3$ domains in the\nmulti-domain setting.\n"} {"abstract": " We give a presentation in terms of generators and relations of the cohomology\nin degree zero of the Campos-Willwacher graph complexes associated to compact\norientable surfaces of genus $g$. The results carry a natural Lie algebra\nstructure, and for $g=1$ we recover Enriquez' elliptic\nGrothendieck-Teichm\"uller Lie algebra. In analogy to Willwacher's theorem\nrelating Kontsevich's graph complex to Drinfeld's Grothendieck-Teichm\"uller\nLie algebra, we call the results higher genus Grothendieck-Teichm\"uller Lie\nalgebras.
Moreover, we find that the graph cohomology vanishes in negative\ndegrees.\n"} {"abstract": " Social recommendation aims to fuse social links with user-item interactions\nto alleviate the cold-start problem for rating prediction. Recent developments\nin Graph Neural Networks (GNNs) motivate endeavors to design GNN-based social\nrecommendation frameworks that aggregate both social and user-item interaction\ninformation simultaneously. However, most existing methods neglect the social\ninconsistency problem, which intuitively suggests that social links are not\nnecessarily consistent with the rating prediction process. Social inconsistency\ncan be observed at both the context level and the relation level. Therefore, we\nintend to empower the GNN model with the ability to tackle the social\ninconsistency problem. We propose to sample consistent neighbors by relating\nthe sampling probability to consistency scores between neighbors. Besides, we\nemploy the relation attention mechanism to assign consistent relations high\nimportance factors for aggregation. Experiments on two real-world datasets\nverify the model's effectiveness.\n"} {"abstract": " The steering dynamics of re-configurable intelligent surfaces (RIS) have\nhoisted them to the front row of technologies that can be exploited to address\nskip-zones in wireless communication systems. They can enable a programmable\nwireless environment, turning it into a partially deterministic space that\nplays an active role in determining how wireless signals propagate. However,\nthe practical implementation of RIS-based communication systems may face\nchallenges such as noise generated by the RIS structure. Besides, the\ntransmitted signal may face a double-fading effect over the two portions of the\nchannel. This article tackles this double-fading problem in near-terrestrial\nfree-space optical (nT-FSO) communication systems using a RIS module based upon\nliquid-crystal (LC) on silicon (LCoS). A doped LC layer can directly amplify\nlight when placed in an external field. Leveraging this capacity of a doped LC,\nwe mitigate the double attenuation faced by the transmitted signal. We first\nrevisit the nT-FSO power loss scenario, then discuss the direct-light\namplification, and consider the system performance. Results show that, at an\nincidence angle of 51 degrees for the incoming light, the proposed LCoS design\nhas minimal RIS depth, implying less LC material. The performance results show\nthat the number of bits per unit bandwidth is upper-bounded and grows with the\nratio of the sub-link distances. Finally, we present and discuss open issues to\nenable new research opportunities towards the use of RIS and amplifying-RIS in\nnT-FSO systems.\n"} {"abstract": " In this paper, we propose a novel iterative encoding algorithm for DNA\nstorage to satisfy both the GC balance and run-length constraints using a\ngreedy algorithm. DNA strands with run-lengths of more than three and a GC\nbalance ratio far from 50\% are known to be prone to errors. The proposed\nencoding algorithm stores data at high information density with high\nflexibility of run-length at most $m$ and GC balance between $0.5\pm\alpha$ for\narbitrary $m$ and $\alpha$. More importantly, we propose a novel mapping method\nto reduce the average bit error compared to the randomly generated mapping\nmethod, using a greedy algorithm. The proposed algorithm is implemented through\niterative encoding, consisting of three main steps: randomization, M-ary\nmapping, and verification.
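A minimal sketch of the verification step just described: accept a candidate strand only if it meets the run-length and GC-balance constraints (the function and thresholds below are illustrative, not the paper's implementation).

```python
from itertools import groupby

def satisfies_constraints(strand, m=3, alpha=0.05):
    """Accept a strand only if every homopolymer run is at most m nucleotides
    long and the GC fraction lies within 0.5 +/- alpha."""
    max_run = max(len(list(run)) for _, run in groupby(strand))
    gc = (strand.count("G") + strand.count("C")) / len(strand)
    return max_run <= m and abs(gc - 0.5) <= alpha

print(satisfies_constraints("ACGTGCATGCAT"))  # True: runs of 1, GC ratio 0.5
print(satisfies_constraints("AAAAGCGC"))      # False: a run of four A's
```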
The proposed algorithm has an information density of\n1.8523 bits/nt in the case of $m=3$ and $\\alpha=0.05$. Also, the proposed\nalgorithm is robust to error propagation, since the average bit error caused by\na single-nt error is 2.3455 bits, a $20.5\\%$ reduction compared to the\nrandomized mapping.\n"} {"abstract": " Decentralized vehicle-to-everything (V2X) networks (i.e., Mode-4 C-V2X and\nMode 2a NR-V2X) rely on periodic Basic Safety Messages (BSMs) to disseminate\ntime-sensitive information (e.g., vehicle position) and have the potential to\nimprove on-road safety. For BSM scheduling, decentralized V2X networks utilize\nsensing-based semi-persistent scheduling (SPS), where vehicles sense radio\nresources and select suitable resources for BSM transmissions at prespecified\nperiodic intervals termed the Resource Reservation Interval (RRI). In this\npaper, we show that such BSM scheduling (with a fixed RRI) suffers from\nsevere under- and over-utilization of radio resources under varying vehicle\ntraffic scenarios, which severely compromises timely dissemination of BSMs and\nin turn leads to increased collision risks. To address this, we extend\nSPS to accommodate an adaptive RRI, termed SPS++ (see the toy sketch below).\nSpecifically, SPS++ allows\neach vehicle -- (i) to dynamically adjust the RRI based on the channel resource\navailability (by accounting for various vehicle traffic scenarios), and then,\n(ii) select suitable transmission opportunities for timely BSM transmissions at\nthe chosen RRI. Our experiments based on the Mode-4 C-V2X standard implemented\nusing the ns-3 simulator show that SPS++ outperforms SPS by at least $50\\%$ in\nterms of improved on-road safety performance, in all considered simulation\nscenarios.\n"} {"abstract": " Current gate-based quantum computers have the potential to provide a\ncomputational advantage if algorithms use quantum hardware efficiently. To make\ncombinatorial optimization more efficient, we introduce the Filtering\nVariational Quantum Eigensolver (F-VQE) which utilizes filtering operators to\nachieve faster and more reliable convergence to the optimal solution.\nAdditionally, we explore the use of causal cones to reduce the number of qubits\nrequired on a quantum computer. Using random weighted MaxCut problems, we\nnumerically analyze our methods and show that they perform better than the\noriginal VQE algorithm and the Quantum Approximate Optimization Algorithm\n(QAOA). We also demonstrate the experimental feasibility of our algorithms on a\nHoneywell trapped-ion quantum processor.\n"} {"abstract": " We study whether receiving advice from either a human or algorithmic advisor,\naccompanied by five types of Local and Global explanation labelings, has an\neffect on the readiness to adopt, willingness to pay, and trust in a financial\nAI consultant. We compare the differences over time and in various key\nsituations using a unique experimental framework where participants play a\nweb-based game with real monetary consequences. We observed that accuracy-based\nexplanations of the model in initial phases lead to higher adoption rates.\nWhen the performance of the model is immaculate, there is less importance\nassociated with the kind of explanation for adoption. Using more elaborate\nfeature-based or accuracy-based explanations helps substantially in reducing\nthe adoption drop upon model failure. Furthermore, using an autopilot increases\nadoption significantly. 
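Referring back to the V2X abstract above: a toy Python sketch of the adaptive-RRI idea behind SPS++. The busy-ratio thresholds and candidate RRI values here are hypothetical illustrations, not values from the paper or from the C-V2X standard.

```python
# Toy sketch of adaptive RRI selection (the SPS++ idea): pick the
# Resource Reservation Interval from the sensed channel load.
# Thresholds and RRI values below are illustrative assumptions only.
def select_rri(channel_busy_ratio: float) -> int:
    """Return an RRI in milliseconds based on sensed channel occupancy."""
    if channel_busy_ratio > 0.8:      # congested: reserve less often
        return 100
    elif channel_busy_ratio > 0.4:    # moderate load
        return 50
    else:                             # light load: reserve frequently
        return 20

for load in (0.2, 0.6, 0.9):
    print(f"busy ratio {load:.1f} -> RRI {select_rri(load)} ms")
```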
Participants assigned to AI-labeled advice with\nexplanations were willing to pay more for the advice than those given AI-labeled\nadvice with no explanation. These results add to the literature on the\nimportance of XAI for algorithmic adoption and trust.\n"} {"abstract": " The network-based model of social contagion has revolved around information\non local interactions; its central focus has been on network topological\nproperties shaping the local interactions and, ultimately, social contagion\noutcomes. We extend this approach by introducing information on the global\nstate, or global information, into the network-based model and analyzing how it\nalters social contagion dynamics in six different classes of networks: a\ntwo-dimensional square lattice, small-world networks, Erd\\H{o}s-R\\'{e}nyi\nnetworks, regular random networks, Holme-Kim networks, and Barab\\'{a}si-Albert\nnetworks. We find that there is an optimal amount of global information that\nminimizes the time to reach global cascades in highly clustered networks. We\nalso find that global information prolongs the time to hit the tipping point\nbut substantially compresses the time to reach global cascades thereafter, so\nthat the overall time to reach global cascades can even be shortened under\ncertain conditions. Finally, we show that random links substitute for global\ninformation in regulating the social contagion dynamics.\n"} {"abstract": " This paper provides a numerical framework for computing the achievable rate\nregion of a memoryless multiple access channel (MAC) with a continuous alphabet\nfrom data. In particular, we use recent results on variational lower bounds on\nmutual information and KL-divergence to compute the boundaries of the rate\nregion of the MAC using a set of functions parameterized by neural networks. Our\nmethod relies on a variational lower bound on KL-divergence and an upper bound\non KL-divergence based on the f-divergence inequalities. Unlike previous work,\nwhich computes an estimate of mutual information that is neither a lower nor\nan upper bound, our method estimates a lower bound on mutual information. Our\nnumerical results show that the proposed method provides tighter estimates\ncompared to the MINE-based estimator at large SNRs while being computationally\nmore efficient. Finally, we apply the proposed method to the optical intensity\nMAC and obtain a new achievable rate boundary tighter than prior works.\n"} {"abstract": " Measuring the electrophoretic mobility of molecules is a powerful\nexperimental approach for investigating biomolecular processes. A frequent\nchallenge in the context of single-particle measurements is throughput,\nlimiting the obtainable statistics. Here, we present a molecular force sensor\nand charge detector based on parallelised imaging and tracking of tethered\ndouble-stranded DNA functionalised with charged nanoparticles interacting with\nan externally applied electric field. Tracking the position of the tethered\nparticle with simultaneous nanometre precision and microsecond temporal\nresolution allows us to detect and quantify electrophoretic forces down to the\nsub-piconewton scale. Furthermore, we demonstrate that this approach is capable\nof detecting changes to the particle charge state, as induced by the addition\nof charged biomolecules or changes to pH. 
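To put the sub-piconewton scale from the force-sensor abstract above in perspective, a back-of-the-envelope estimate; the charge and field values are hypothetical, chosen only for illustration:

```latex
% Electrophoretic force on a particle carrying Z elementary charges
% in an applied field E:  F = qE = Z e E.
% For, say, Z = 100 and E = 10^4 V/m (illustrative values):
\[
  F = Z e E
    = 100 \times 1.602\times10^{-19}\,\mathrm{C} \times 10^{4}\,\mathrm{V/m}
    \approx 1.6\times10^{-13}\,\mathrm{N} \approx 0.16\,\mathrm{pN},
\]
% i.e. squarely in the sub-piconewton regime the abstract refers to.
```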
Our approach provides an alternative\nroute to studying structural and charge dynamics at the single-molecule level.\n"} {"abstract": " One of the main challenges in real-world reinforcement learning is to learn\nsuccessfully from limited training samples. We show that in certain settings,\nthe available data can be dramatically increased through a form of multi-task\nlearning, by exploiting an invariance property in the tasks. We provide a\ntheoretical performance bound for the gain in sample efficiency under this\nsetting. This motivates a new approach to multi-task learning, which involves\nthe design of an appropriate neural network architecture and a prioritized\ntask-sampling strategy. We demonstrate empirically the effectiveness of the\nproposed approach on two real-world sequential resource allocation tasks where\nthis invariance property occurs: financial portfolio optimization and meta\nfederated learning.\n"} {"abstract": " Graphs are a common model for complex relational data such as social networks\nand protein interactions, and such data can evolve over time (e.g., new\nfriendships) and be noisy (e.g., unmeasured interactions). Link prediction aims\nto predict future edges or infer missing edges in the graph, and has diverse\napplications in recommender systems, experimental design, and complex systems.\nEven though link prediction algorithms strongly depend on the set of edges in\nthe graph, existing approaches typically do not modify the graph topology to\nimprove performance. Here, we demonstrate how simply adding a set of edges,\nwhich we call a \\emph{proposal set}, to the graph as a pre-processing step can\nimprove the performance of several link prediction algorithms. The underlying\nidea is that if the edges in the proposal set generally align with the\nstructure of the graph, link prediction algorithms are further guided towards\npredicting the right edges; in other words, adding a proposal set of edges is a\nsignal-boosting pre-processing step. We show how to use existing link\nprediction algorithms to generate effective proposal sets and evaluate this\napproach on various synthetic and empirical datasets. We find that proposal\nsets meaningfully improve the accuracy of link prediction algorithms based on\nboth neighborhood heuristics and graph neural networks. Code is available at\n\\url{https://github.com/CUAI/Edge-Proposal-Sets}.\n"} {"abstract": " The TianQin space gravitational wave (GW) observatory will contain 3\ngeocentric, circularly orbiting spacecraft with an orbital radius of 10^5\nkm, to detect GWs in the milli-hertz frequency band. Each spacecraft pair\nwill establish a 1.7*10^5 km-long laser interferometer immersed in the solar\nwind and the magnetospheric plasmas to measure the phase deviations induced by\nthe GW. GW detection requires a high-precision measurement of the laser phase.\nThe cumulative effects of the long distance and the periodic oscillations of\nthe plasma density may induce additional phase noise. This paper aims to\nmodel the plasma-induced phase deviation of the inter-spacecraft laser signals,\nusing a realistic orbit simulator and the Space Weather Modeling Framework\n(SWMF) model. Preliminary results show that the plasma density oscillation can\ninduce phase deviations close to 2*10^-6 rad/Hz^1/2 or 0.3 pm/Hz^1/2 in the\nmilli-hertz frequency band, which is within the error budget assigned to the\ndisplacement noise of the interferometry. 
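For reference, the phase and displacement noise levels quoted in the TianQin abstract above are consistent under the usual phase-to-pathlength conversion, assuming a laser wavelength of roughly 1064 nm (our assumption; the abstract does not state the wavelength):

```latex
% Phase-to-displacement conversion for a laser link,
% \delta L = (\delta\phi / 2\pi)\,\lambda:
\[
  \delta L \simeq \frac{2\times10^{-6}\,\mathrm{rad}}{2\pi}
                  \times 1064\,\mathrm{nm}
           \approx 0.34\,\mathrm{pm},
\]
% matching the ~0.3 pm/Hz^{1/2} figure quoted alongside
% the 2*10^-6 rad/Hz^{1/2} phase deviation.
```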
The amplitude spectral densities of the\nphases along the three arms become more separated when the orbital plane is\nparallel to the Sun-Earth line or during a magnetic storm. Finally, the\ndependence of the phase deviations on the orbital radius is examined.\n"} {"abstract": " We elaborate on the role of higher-derivative curvature invariants as a\nquantum selection mechanism of regular spacetimes in the framework of the\nLorentzian path integral approach to quantum gravity. We show that for a large\nclass of prominent regular black hole metrics there are higher-derivative\ncurvature invariants which are singular. If such terms are included in the\naction, according to the finite action principle applied to a higher-derivative\ngravity model, not only singular spacetimes but also some of the regular ones\ndo not seem to contribute to the path integral.\n"} {"abstract": " This note provides a variational description of the mechanical effects of\nflexural stiffening of a 2D plate glued to an elastic-brittle or an\nelastic-plastic reinforcement. The reinforcement is assumed to be linear\nelastic outside possible free plastic yield lines or a free crack. Explicit Euler\nequations and a compliance identity are shown for the reinforcement of a 1D\nbeam.\n"} {"abstract": " The generation of mean flows is a long-standing issue in rotating fluids.\nMotivated by planetary objects, we consider here a rapidly rotating\nfluid-filled spheroid, which is subject to weak perturbations of either the\nboundary (e.g. tides) or the rotation vector (e.g. in direction by precession,\nor in magnitude by longitudinal librations). Using boundary-layer theory, we\ndetermine the mean zonal flows generated by nonlinear interactions within the\nviscous Ekman layer. These flows are of interest because they survive in the\nrelevant planetary regime of both vanishing forcings and viscous effects. We\nextend the theory to take into account (i) the combination of spatial and\ntemporal perturbations, providing new mechanically driven zonal flows (e.g.\ndriven by latitudinal librations), and (ii) the spheroidal geometry relevant\nfor planetary bodies. Wherever possible, our analytical predictions are\nvalidated with direct numerical simulations. The theoretical solutions are in\ngood quantitative agreement with the simulations, with expected discrepancies\n(zonal jets) in the presence of inertial waves generated at the critical\nlatitudes (as for precession). Moreover, we find that the mean zonal flows can\nbe strongly affected in spheroids. Guided by planetary applications, we also\nrevisit the scaling laws for the geostrophic shear layers at the critical\nlatitudes, and the influence of a solid inner core.\n"} {"abstract": " Assigning meaning to parts of image data is the goal of semantic image\nsegmentation. Machine learning methods, specifically supervised learning, are\ncommonly used in a variety of tasks formulated as semantic segmentation. One of\nthe major challenges in the supervised learning approaches is expressing and\ncollecting the rich knowledge that experts have with respect to the meaning\npresent in the image data. Towards this, typically a fixed set of labels is\nspecified and experts are tasked with annotating the pixels, patches or\nsegments in the images with the given labels. In general, however, the set of\nclasses does not fully capture the rich semantic information present in the\nimages. 
For example, in medical imaging such as histology images, the different\nparts of cells could be grouped and sub-grouped based on the expertise of the\npathologist.\n To achieve such a precise semantic representation of the concepts in the\nimage, we need access to the full depth of knowledge of the annotator. In this\nwork, we develop a novel approach to collect segmentation annotations from\nexperts based on psychometric testing. Our method consists of a psychometric\ntesting procedure, active query selection, query enhancement, and a deep metric\nlearning model to achieve a patch-level image embedding that allows for\nsemantic segmentation of images. We show the merits of our method with\nevaluations on synthetically generated, aerial and histology\nimages.\n"} {"abstract": " We report an infrared spectroscopy study of the axion topological insulator\ncandidate EuIn$_2$As$_2$ for which the Eu moments exhibit an A-type\nantiferromagnetic (AFM) order below $T_N \\simeq 18 \\mathrm{K}$. The low energy\nresponse is composed of a weak Drude peak at the origin, a pronounced\ninfrared-active phonon mode at 185 cm$^{-1}$ and a free carrier plasma edge\naround 600 cm$^{-1}$. The interband transitions start above 800 cm$^{-1}$ and\ngive rise to a series of weak absorption bands at 5\\,000 and 12\\,000 cm$^{-1}$\nand strong ones at 20\\,000, 27\\,500 and 32\\,000 cm$^{-1}$. The AFM transition\ngives rise to pronounced anomalies of the charge response in terms of a\ncusp-like maximum of the free carrier scattering rate around $T_N$ and large\nmagnetic splittings of the interband transitions at 5\\,000 and 12\\,000\ncm$^{-1}$. The phonon mode at 185 cm$^{-1}$ also has an anomalous temperature\ndependence around $T_N$, which suggests that it couples to the fluctuations of\nthe Eu spins. The combined data provide evidence for a strong interaction\namongst the charge, spin and lattice degrees of freedom.\n"} {"abstract": " Backtracking of RNA polymerase (RNAP) is an important pausing mechanism\nduring DNA transcription that is part of the error correction process that\nenhances transcription fidelity. We model the backtracking mechanism of RNA\npolymerase, which usually happens when the polymerase tries to incorporate a\nmismatched nucleotide triphosphate. Previous models have made simplifying\nassumptions such as neglecting the trailing polymerase behind the backtracking\npolymerase or assuming that the trailing polymerase is stationary. We derive\nexact analytic solutions of a stochastic model that includes locally\ninteracting RNAPs by explicitly showing how a trailing RNAP influences the\nprobability that an error is corrected or incorporated by the leading\nbacktracking RNAP. We also provide two related methods for computing the mean\ntimes to error correction or incorporation given an initial local RNAP\nconfiguration.\n"} {"abstract": " Let $G$ be any group and $k\\geq 1$ be an integer. The ordered\nconfiguration set of $k$ points in $G$ is given by the subset\n$F(G,k)=\\{(g_1,\\ldots,g_k)\\in G\\times \\cdots\\times G: g_i\\neq g_j \\text{ for }\ni\\neq j\\}\\subset G^k$. In this work, we will study the configuration set\n$F(G,k)$ in algebraic terms as a subset of the product $G^k=G\\times\n\\cdots\\times G$. 
As we will see, we develop practical tools for dealing with\nthe configuration set of $k$ points in $G$, which, to our knowledge, cannot be\nfound in the literature.\n"} {"abstract": " We propose Scale-aware AutoAug to learn data augmentation policies for object\ndetection. We define a new scale-aware search space, where both image- and\nbox-level augmentations are designed for maintaining scale invariance. Upon\nthis search space, we propose a new search metric, termed Pareto Scale Balance,\nto facilitate search with high efficiency. In experiments, Scale-aware AutoAug\nyields significant and consistent improvements on various object detectors\n(e.g., RetinaNet, Faster R-CNN, Mask R-CNN, and FCOS), even compared with\nstrong multi-scale training baselines. Our searched augmentation policies are\ntransferable to other datasets and box-level tasks beyond object detection\n(e.g., instance segmentation and keypoint estimation) to improve performance.\nThe search cost is much less than previous automated augmentation approaches\nfor object detection. It is notable that our searched policies have meaningful\npatterns, which intuitively provide valuable insight for human data\naugmentation design. Code and models will be available at\nhttps://github.com/Jia-Research-Lab/SA-AutoAug.\n"} {"abstract": " With the incoming 5G network, the ubiquitous Internet of Things (IoT) devices\ncan benefit our daily life, such as smart cameras, drones, etc. With the\nintroduction of the millimeter-wave band and the thriving number of IoT\ndevices, it is critical to design a new dynamic spectrum access (DSA) system to\ncoordinate the spectrum allocation across massive devices in 5G. In this paper,\nwe present Hermes, the first decentralized DSA system for massive device\ndeployment. Specifically, we propose an efficient multi-agent reinforcement\nlearning algorithm and introduce a novel shuffle mechanism, addressing the\ndrawbacks of collision and fairness in existing decentralized systems. We\nimplement Hermes in a 5G network via simulations. Extensive evaluations show that\nHermes significantly reduces collisions and improves fairness compared to the\nstate-of-the-art decentralized methods. Furthermore, Hermes is able to adapt to\nenvironmental changes within 0.5 seconds, showing its deployment\npracticability in the dynamic environment of 5G.\n"} {"abstract": " We report the first-ever calculation of the isovector flavor combination of\nthe chiral-odd twist-3 parton distribution $h_L(x)$ for the proton from lattice\nQCD. We employ gauge configurations with two degenerate light quarks, a strange and a\ncharm quark ($N_f=2+1+1$) of maximally twisted mass fermions with a clover\nimprovement. The lattice has a spatial extent of 3 fm and lattice spacing of\n0.093 fm. The values of the quark masses lead to a pion mass of $260$ MeV. We\nuse a source-sink time separation of 1.12 fm to control contamination from\nexcited states. Our calculation is based on the quasi-distribution approach,\nwith three values for the proton momentum: 0.83 GeV, 1.25 GeV, and 1.67 GeV.\nThe lattice data are renormalized non-perturbatively using the RI$'$ scheme,\nand the final result for $h_L(x)$ is presented in the $\\overline{\\rm MS}$\nscheme at the scale of 2 GeV. Furthermore, we compute in the same setup the\ntransversity distribution, $h_1(x)$, which allows us, in particular, to compare\n$h_L(x)$ to its Wandzura-Wilczek approximation. 
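For context, the Wandzura-Wilczek approximation mentioned in the lattice abstract above is conventionally written as the twist-2 part of $h_L(x)$ reconstructed from the transversity distribution; this is the standard form from the literature (conventions may differ in notation), not a formula taken from the paper itself:

```latex
% Wandzura-Wilczek-type approximation of the twist-3 distribution h_L(x)
% in terms of the transversity h_1(y):
\[
  h_L^{\mathrm{WW}}(x) \;=\; 2x \int_x^1 \frac{dy}{y^2}\, h_1(y),
\]
% i.e. h_L is approximated by its twist-2 piece, neglecting genuine
% twist-3 (quark-gluon correlation) contributions.
```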
We also combine results for the\nisovector and isoscalar flavor combinations to disentangle the individual quark\ncontributions for $h_1(x)$ and $h_L(x)$, and address the Wandzura-Wilczek\napproximation in that case as well.\n"} {"abstract": " We present an adaptation of the NPA hierarchy to the setting of synchronous\ncorrelation matrices. Our adaptation improves upon the original NPA hierarchy\nby using smaller certificates and fewer constraints, although it can only be\napplied to certify synchronous correlations. We recover characterizations for\nthe sets of synchronous quantum commuting and synchronous quantum correlations.\nFor applications, we show that the existence of symmetric informationally\ncomplete positive operator-valued measures and maximal sets of mutually\nunbiased bases can be verified or invalidated with only two certificates of our\nadapted NPA hierarchy.\n"} {"abstract": " Nonlinear phononics relies on the resonant optical excitation of\ninfrared-active lattice vibrations to coherently induce targeted structural\ndeformations in solids. This form of dynamical crystal-structure design has\nbeen applied to control the functional properties of many interesting systems,\nincluding magneto-resistive manganites, magnetic materials, superconductors,\nand ferroelectrics. However, phononics has so far been restricted to protocols\nin which structural deformations occur locally within the optically excited\nvolume, sometimes resulting in unwanted heating. Here, we extend nonlinear\nphononics to propagating polaritons, effectively separating in space the\noptical drive from the functional response. Mid-infrared optical pulses are\nused to resonantly drive an 18 THz phonon at the surface of ferroelectric\nLiNbO3. A time-resolved stimulated Raman scattering probe reveals that the\nferroelectric polarization is reduced over the entire 50 micron depth of the\nsample, far beyond the ~ micron depth of the evanescent phonon field. We\nattribute the bulk response of the ferroelectric polarization to the excitation\nof a propagating 2.5 THz soft-mode phonon-polariton. For the highest excitation\namplitudes, we reach a regime in which the polarization is reversed. In this\nnon-perturbative regime, we expect that the polariton model evolves into\nthat of a solitonic domain wall that propagates from the surface into the\nmaterial at near the speed of light.\n"} {"abstract": " About 5-8% of individuals over the age of 60 have dementia. With our\never-aging population this number is likely to increase, making dementia one of\nthe most important threats to public health in the 21st century. Given the\nphenotypic overlap of individual dementias, the diagnosis of dementia is a major\nclinical challenge, even with current gold standard diagnostic approaches.\nHowever, it has been shown that certain dementias exhibit specific structural\ncharacteristics in the brain. Progressive supranuclear palsy (PSP) and multiple\nsystem atrophy (MSA) are prototypical examples of this phenomenon, as they\noften present with characteristic brainstem atrophy. More detailed\ncharacterization of brain atrophy due to individual diseases is urgently\nrequired to select biomarkers and therapeutic targets that are meaningful to\neach disease. 
Here we present a joint multi-atlas-segmentation and\ndeep-learning-based segmentation method for fast and robust parcellation of the\nbrainstem into its four sub-structures, i.e., the midbrain, pons, medulla, and\nsuperior cerebellar peduncles (SCP), that in turn can provide detailed\nvolumetric information on the brainstem sub-structures affected in PSP and MSA.\nThe method may also benefit other neurodegenerative diseases, such as\nParkinson's disease, a condition which is often considered in the differential\ndiagnosis of PSP and MSA. Comparisons with state-of-the-art labeling techniques\nevaluated on ground-truth manual segmentations demonstrate that our method is\nsignificantly faster than prior methods and improves brainstem\nlabeling, indicating that this strategy may be a viable option to\nprovide a better characterization of the brainstem atrophy seen in PSP and MSA.\n"} {"abstract": " It is shown that relativistic invariance plays a key role in the study of\nintegrable systems. Using the relativistically invariant sine-Gordon equation,\nthe Tzitzeica equation, the Toda fields and the second heavenly equation as\ndual relations, some continuous and discrete integrable positive hierarchies\nsuch as the potential modified Korteweg-de Vries hierarchy, the potential\nFordy-Gibbons hierarchies, the potential dispersionless\nKadomtsev-Petviashvili-like (dKPL) hierarchy, the differential-difference dKPL\nhierarchy and the second heavenly hierarchies are converted to the integrable\nnegative hierarchies including the sG hierarchy and the Tzitzeica hierarchy,\nthe two-dimensional dispersionless Toda hierarchy, the two-dimensional Toda\nhierarchies and the negative heavenly hierarchy. In (1+1)-dimensional cases the\npositive/negative hierarchy dualities are guaranteed by the dualities between\nthe recursion operators and their inverses. In (2+1)-dimensional cases, the\npositive/negative hierarchy dualities are explicitly shown by using the formal\nseries symmetry approach, the mastersymmetry method and the relativistic\ninvariance of the duality relations. For the 4-dimensional heavenly system, the\nduality problem is studied first by the formal series symmetry approach. Two\nelegant commuting recursion operators of the heavenly equation appear naturally\nfrom the formal series symmetry approach so that the duality problem can also\nbe studied by means of the recursion operators.\n"} {"abstract": " Predictions of biodiversity trajectories under climate change are crucial in\norder to act effectively in maintaining the diversity of species. In many\necological applications, future predictions are made under various global\nwarming scenarios as described by a range of different climate models. The\noutputs of these various predictions call for a reliable interpretation. We\npropose an interpretable and flexible two-step methodology to measure the\nsimilarity between predicted species range maps and to cluster the future scenario\npredictions using a spectral clustering technique. We find that clustering\nbased on ecological impact (predicted species range maps) is mainly driven by\nthe amount of warming. We contrast this with clustering based only on predicted\nclimate features, which is driven mainly by climate models. The differences\nbetween these clusterings illustrate that it is crucial to incorporate\necological information to understand the relevant differences between climate\nmodels. 
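A minimal Python sketch of the two-step idea in the biodiversity abstract above: pairwise similarity between predicted (binary) species range maps, followed by spectral clustering of scenarios. The Jaccard similarity, the toy data, and the cluster count are our illustrative assumptions, not choices documented in the abstract.

```python
# Step 1: similarity between binary range maps; Step 2: spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# 6 hypothetical scenarios, each a flattened binary presence/absence map.
maps = rng.integers(0, 2, size=(6, 1000)).astype(bool)

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard similarity of two binary maps (intersection over union)."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

n = len(maps)
S = np.array([[jaccard(maps[i], maps[j]) for j in range(n)] for i in range(n)])

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels)  # cluster assignment of each scenario
```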
The findings of this work can be used to better synthesize the wide spectrum of\nforecasts that emerge when considering potential future biodiversity loss.\n"} {"abstract": " Short-period sub-Neptunes with substantial volatile envelopes are among the\nmost common types of known exoplanets. However, recent studies of the Kepler\npopulation have suggested a dearth of sub-Neptunes on highly irradiated orbits,\nwhere they are vulnerable to atmospheric photoevaporation. Physically, we\nexpect this "photoevaporation desert" to depend on the total lifetime X-ray and\nextreme ultraviolet flux, the main drivers of atmospheric escape. In this work,\nwe study the demographics of sub-Neptunes as a function of lifetime exposure to\nhigh energy radiation and host star mass. We find that for a given present day\ninsolation, planets orbiting a 0.3 $M_{sun}$ star experience $\\sim$100 $\\times$\nmore X-ray flux over their lifetimes versus a 1.2 $M_{sun}$ star. Defining the\nphotoevaporation desert as a region consistent with zero occurrence at 2\n$\\sigma$, the onset of the desert occurs for integrated X-ray fluxes greater\nthan 1.43 $\\times 10^{22}$ erg/cm$^2$ to 8.23 $\\times 10^{20}$ erg/cm$^2$ as a\nfunction of planetary radii for 1.8 -- 4 $R_{\\oplus}$. We also compare the\nlocation of the photoevaporation desert for different stellar types. We find\nmuch greater variability in the desert onset in bolometric flux space compared\nto integrated X-ray flux space, suggestive of photoevaporation driven by steady\nstate stellar X-ray emissions as the dominant control on desert location.\nFinally, we report tentative evidence for the sub-Neptune valley, first seen\naround Sun-like stars, for M and K dwarfs. The discovery of additional planets\naround low-mass stars from surveys such as the TESS mission will enable\ndetailed exploration of these trends.\n"} {"abstract": " The theoretical maximum efficiency of a solar cell is typically characterized\nby a detailed balance of optical absorption and emission for a semiconductor in\nthe limit of unity radiative efficiency and an ideal step-function response for\nthe density of states and absorbance at the semiconductor band edges, known as\nthe Shockley-Queisser limit. However, real materials have non-abrupt band\nedges, which are typically characterized by an exponential distribution of\nstates, known as an Urbach tail. We develop here a modified detailed balance\nlimit of solar cells with imperfect band edges, using optoelectronic\nreciprocity relations. We find that for semiconductors whose band edges are\nbroader than the thermal energy, kT, there is an effective renormalized bandgap\ngiven by the quasi-Fermi level splitting within the solar cell. This\nrenormalized bandgap creates a Stokes shift between the onset of the absorption\nand photoluminescence emission energies, which significantly reduces the\nmaximum achievable efficiency. The abruptness of the band edge density of\nstates therefore has important implications for the maximum achievable\nphotovoltaic efficiency.\n"} {"abstract": " We study relativistic hydrodynamics in the presence of a non-vanishing spin\nchemical potential. Using a variety of techniques we carry out an exhaustive\nanalysis, and identify the constitutive relations for the stress tensor and\nspin current in such a setup, allowing us to write the hydrodynamic equations\nof motion to second order in derivatives. 
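For orientation, the equations of motion in spin hydrodynamics are the conservation laws for energy-momentum and total angular momentum. The following is a commonly used form from the literature (conventions vary between papers, so this is a reference sketch rather than the paper's exact equations):

```latex
% Conservation laws underlying spin hydrodynamics: the spin current
% S^{\lambda\mu\nu} is sourced by the antisymmetric part of the
% stress tensor,
\[
  \partial_\mu T^{\mu\nu} = 0, \qquad
  \partial_\lambda S^{\lambda\mu\nu} = T^{\nu\mu} - T^{\mu\nu},
\]
% with constitutive relations expressing T and S in terms of temperature,
% fluid velocity, and the spin chemical potential, order by order in
% derivatives.
```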
We then solve the equations of motion\nin a perturbative setup and find surprisingly good agreement with measurements\nof global $\\Lambda$-hyperon polarization carried out at RHIC.\n"} {"abstract": " A monopolist wants to sell one item per period to a consumer with evolving\nand persistent private information. The seller sets a price each period\ndepending on the history so far, but cannot commit to future prices. We show\nthat, regardless of the degree of persistence, any equilibrium under a D1-style\nrefinement gives the seller revenue no higher than what she would get from\nposting all prices in advance.\n"} {"abstract": " Development of new artificial systems with unique characteristics is a very\nchallenging task. In this paper, the application of the hybrid super\nintelligence concept with object-process methodology to develop unique\nhigh-performance computational systems is considered. A methodological\napproach to designing new intelligent components for existing high-performance\ncomputing development systems is proposed, using the example of system requirements\ncreation for the \"MicroAI\" and \"Artificial Electronic\" systems.\n"} {"abstract": " We define a class of discrete operators that, in particular, include the\ndelta and nabla fractional operators.\n"} {"abstract": " Existing image segmentation networks mainly leverage large-scale labeled\ndatasets to attain high accuracy. However, labeling medical images is very\nexpensive since it requires sophisticated expert knowledge. Thus, it is more\ndesirable to employ only a small amount of labeled data in pursuing high segmentation\nperformance. In this paper, we develop a data augmentation method for one-shot\nbrain magnetic resonance imaging (MRI) image segmentation which exploits only\none labeled MRI image (named atlas) and a few unlabeled images. In particular,\nwe propose to learn the probability distributions of deformations (including\nshapes and intensities) of different unlabeled MRI images with respect to the\natlas via 3D variational autoencoders (VAEs). In this manner, our method is\nable to exploit the learned distributions of image deformations to generate new\nauthentic brain MRI images, and the number of generated samples will be\nsufficient to train a deep segmentation network. Furthermore, we introduce a\nnew standard segmentation benchmark to evaluate the generalization performance\nof a segmentation network through a cross-dataset setting (collected from\ndifferent sources). Extensive experiments demonstrate that our method\noutperforms the state-of-the-art one-shot medical segmentation methods. Our\ncode has been released at\nhttps://github.com/dyh127/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data.\n"} {"abstract": " We study the homogenized energy densities of periodic ferromagnetic Ising\nsystems. We prove that, for finite range interactions, the homogenized energy\ndensity, identifying the effective limit, is crystalline, i.e. its Wulff\ncrystal is a polytope, for which we can (exponentially) bound the number of\nvertices. This is achieved by deriving a dual representation of the energy\ndensity through a finite cell formula. 
This formula also allows easy numerical\ncomputations: we show a few experiments where we compute periodic patterns\nwhich minimize the anisotropy of the surface tension.\n"} {"abstract": " Classical two-sample permutation tests for equality of distributions have\nexact size in finite samples, but they fail to control size for testing\nequality of parameters that summarize each distribution. This paper proposes\npermutation tests for equality of parameters that are estimated at root-n or\nslower rates. Our general framework applies to both parametric and\nnonparametric models, with two samples or one sample split into two subsamples.\nOur tests have correct size asymptotically while preserving exact size in\nfinite samples when distributions are equal. They have no loss in\nlocal-asymptotic power compared to tests that use asymptotic critical values.\nWe propose confidence sets with correct coverage in large samples that also\nhave exact coverage in finite samples if distributions are equal up to a\ntransformation. We apply our theory to four commonly-used hypothesis tests of\nnonparametric functions evaluated at a point. Lastly, simulations show good\nfinite sample properties of our tests.\n"} {"abstract": " The private simultaneous messages model is a non-interactive version of\nmultiparty secure computation, which has been intensively studied to examine\nthe communication cost of secure computation. We consider its quantum\ncounterpart, the private simultaneous quantum messages (PSQM) model, and\nexamine the advantages of quantum communication and prior entanglement of this\nmodel. In the PSQM model, $k$ parties $P_1,\\ldots,P_k$ initially share a common\nrandom string (or entangled states in a stronger setting), and they have\nprivate classical inputs $x_1,\\ldots, x_k$. Every $P_i$ generates a quantum\nmessage from the private input $x_i$ and the shared random string (entangled\nstates), and then sends it to the referee $R$. Receiving the messages, $R$\ncomputes $F(x_1,\\ldots,x_k)$. Then, $R$ learns nothing except for\n$F(x_1,\\ldots,x_k)$ as the privacy condition. We obtain the following results\nfor this PSQM model. (1) We demonstrate that the privacy condition inevitably\nincreases the communication cost in the two-party PSQM model as well as in the\nclassical case presented by Applebaum, Holenstein, Mishra, and Shayevitz. In\nparticular, we prove a lower bound of $(3-o(1))n$ on the communication complexity\nof PSQM protocols with a shared random string for random Boolean functions of\n$2n$-bit input, which is larger than the trivial upper bound $2n$ of the\ncommunication complexity without the privacy condition. (2) We demonstrate a\nfactor two gap between the communication complexity of PSQM protocols with\nshared entangled states and with shared random strings by designing a\nmultiparty PSQM protocol with shared entangled states for a total function that\nextends the two-party equality function. (3) We demonstrate an exponential gap\nbetween the communication complexity of PSQM protocols with shared entangled\nstates and with shared random strings for a two-party partial function.\n"} {"abstract": " Low-Rank Parity-Check (LRPC) codes are a class of rank metric codes that have\nmany applications, specifically in network coding and cryptography. Recently,\nLRPC codes have been extended to Galois rings, which are a special case of\nfinite rings. 
In this paper, we first define LRPC codes over finite commutative\nlocal rings, which are the building blocks of finite rings, with an efficient decoder. We\nimprove the theoretical bound on the failure probability of the decoder. Then,\nwe extend the work to arbitrary finite commutative rings. Certain conditions\nare generally used to ensure the success of the decoder. Over finite fields,\none of these conditions is to choose a prime number as the extension degree of\nthe Galois field. We show that one can construct LRPC codes without this\ncondition on the degree of the Galois extension.\n"} {"abstract": " The isolated susceptibility $\\chi_{\\rm I}$ may be defined as a\n(non-thermodynamic) average over the canonical ensemble, but while it has often\nbeen discussed in the literature, it has not been clearly measured. Here, we\ndemonstrate an unambiguous measurement of $\\chi_{\\rm I}$ at avoided\nnuclear-electronic level crossings in a dilute spin ice system, containing\nwell-separated holmium ions. We show that $\\chi_{\\rm I}$ quantifies the\nsuperposition of quasi-classical spin states at these points, and is a direct\nmeasure of state concurrence and populations.\n"} {"abstract": " The econometric literature on program evaluation and optimal treatment choice\ntakes functionals of outcome distributions as target welfare, and ignores\nprogram impacts on unobserved utilities, including utilities of those whose\noutcomes may be unaffected by the intervention. We show that in the practically\nimportant setting of discrete choice, under general preference heterogeneity\nand income effects, the distribution of indirect utility is nonparametrically\nidentified from average demand. This enables cost-benefit analysis and\ntreatment targeting based on social welfare and planners' distributional\npreferences, while also allowing for general unobserved heterogeneity in\nindividual preferences. We demonstrate theoretical connections between\nutilitarian social welfare and Hicksian compensation. An empirical application\nillustrates our results.\n"} {"abstract": " We demonstrate that multiple higher-order topological transitions can be\ntriggered via the continuous change of the geometry in kagome photonic crystals\ncomposed of three dielectric rods. By tuning a single geometry parameter, the\nphotonic corner and edge states emerge or disappear with the higher-order\ntopological transitions. Two distinct higher-order topological insulator phases\nand a normal insulator phase are revealed. Their topological indices are\nobtained from symmetry representations. A photonic analog of fractional corner\ncharge is introduced to distinguish the two higher-order topological insulator\nphases. Our predictions can be readily realized and verified in configurable\ndielectric photonic crystals.\n"} {"abstract": " A prototype neutron detector has been created through modification of a\ncommercial non-volatile flash memory device. Studies are being performed to\nmodify this prototype into a purpose-built device with greater performance and\nfunctionality. This paper describes a demonstration of this technology using a\nthermal neutron beam produced by a TRIGA research reactor. With a 4x4 array of\n16 prototype devices, the full widths of the beam dimensions at half maximum\nare measured to be 2.2x2.1 cm2.\n"} {"abstract": " Mathematical modelling of ionic electrodiffusion and water movement is\nemerging as a powerful avenue of investigation to provide new physiological\ninsight into brain homeostasis. 
However, in order to provide solid answers and\nresolve controversies, the accuracy of the predictions is essential. Ionic\nelectrodiffusion models typically comprise non-trivial systems of non-linear\nand highly coupled partial and ordinary differential equations that govern\nphenomena on disparate time scales. Here, we study numerical challenges related\nto approximating these systems. We consider a homogenized model for\nelectrodiffusion and osmosis in brain tissue and present and evaluate different\nassociated finite element-based splitting schemes in terms of their numerical\nproperties, including accuracy, convergence, and computational efficiency for\nboth idealized scenarios and for the physiologically relevant setting of\ncortical spreading depression (CSD). We find that the schemes display optimal\nconvergence rates in space for problems with smooth manufactured solutions.\nHowever, the physiological CSD setting is challenging: we find that the\naccurate computation of CSD wave characteristics (wave speed and wave width)\nrequires very fine spatial and temporal resolution.\n"} {"abstract": " The Chermak-Delgado lattice of a finite group $G$ is a self-dual sublattice\nof the subgroup lattice of $G$. In this paper, we focus on finite groups whose\nChermak-Delgado lattice is a subgroup lattice of an elementary abelian\n$p$-group. We prove that such groups are nilpotent of class $2$. We also prove\nthat, for any elementary abelian $p$-group $E$, there exists a finite group $G$\nsuch that the Chermak-Delgado lattice of $G$ is a subgroup lattice of $E$.\n"} {"abstract": " Let $(X, H)$ be a polarized smooth projective algebraic surface and let $E$ be a\nglobally generated, stable vector bundle on $X$. Then the Syzygy bundle $M_E$\nassociated to it is defined as the kernel bundle corresponding to the\nevaluation map. In this article we will study the stability property of $M_E$\nwith respect to $H$.\n"} {"abstract": " Contagion arising from clustering of multiple time series like those among\nstock market indicators can further complicate the nature of volatility,\ncausing a parametric test (which relies on an asymptotic distribution) to suffer from\nissues of size and power. We propose a bootstrap-based test on volatility for\nmultiple time series, intended to account for the possible\npresence of contagion effects. While the test is fairly robust to distributional\nassumptions, it depends on the nature of volatility. The test is correctly\nsized even in cases where the time series are almost nonstationary. The test is\nalso powerful, especially when the time series are stationary in mean and\nvolatility is contained in only a few clusters. We illustrate the method on\nglobal stock price data.\n"} {"abstract": " The collateral choice option gives the collateral posting party the\nopportunity to switch between different collateral currencies, which is\nwell known to impact the asset price. Quantification of the option's value is\nof practical importance but remains challenging under the assumption of\nstochastic rates, as it is determined by an intractable distribution which\nrequires involved approximations. Indeed, many practitioners still rely on\ndeterministic spreads between the rates for valuation. We develop a scalable\nand stable stochastic model of the collateral spreads under the assumption of\nconditional independence. This allows for a common factor approximation which\nadmits analytical results from which further estimators are obtained. 
We show\nthat in modelling the spreads between collateral rates, a second-order model\nyields accurate results for the value of the collateral choice option. The\nmodel remains precise for a wide range of model parameters and is numerically\nefficient even for a large number of collateral currencies.\n"} {"abstract": " In this work we investigate neutron stars (NS) in $f(\\mathtt{R,L_m})$ theory\nof gravity for the case $f(\\mathtt{R,L_m}) = \\mathtt{R} + \\mathtt{L_m} +\n\\sigma\\mathtt{R}\\mathtt{L_m}$, where $\\mathtt{R}$ is the Ricci scalar and\n$\\mathtt{L_m}$ the Lagrangian matter density. In the term\n$\\sigma\\mathtt{R}\\mathtt{L_m}$, $\\sigma$ represents the coupling between the\ngravitational and particle fields. For the first time, the hydrostatic\nequilibrium equations of the theory are solved considering realistic equations\nof state, and the NS masses and radii obtained are subjected to joint constraints from\nmassive pulsars, the gravitational wave event GW170817 and the PSR\nJ0030+0451 mass-radius estimate from NASA's Neutron Star Interior Composition Explorer\n(${\\it NICER}$) data. We show that in this theory of gravity, the mass-radius\nresults can accommodate massive pulsars, while the general theory of relativity\ncan hardly do so. The theory can also explain the observed NS within the radius\nregion constrained by the GW170817 and PSR J0030+0451 observations for masses\naround $1.4~M_{\\odot}$.\n"} {"abstract": " Let $B\\subset A$ be a left or right bounded extension of finite dimensional\nalgebras. We use the Jacobi-Zariski long nearly exact sequence to show that $B$\nsatisfies Han's conjecture if and only if $A$ does, regardless of whether the extension\nsplits or not. We provide conditions ensuring that an extension by arrows and\nrelations is left or right bounded. Finally we give a structure result for\nextensions of an algebra given by a quiver and admissible relations, and\nexamples of non-split left or right bounded extensions.\n"} {"abstract": " The genuine concurrence is a standard quantifier of multipartite\nentanglement, whose detection and quantification still remain a difficult\nproblem from both the theoretical and experimental points of view. Although many\nefforts have been devoted toward the detection of multipartite entanglement\n(e.g., using entanglement witnesses), measuring the degree of multipartite\nentanglement, in general, requires some knowledge about the exact shape of the\ndensity matrix of the quantum state. An experimental reconstruction of such a\ndensity matrix can be done by full state tomography, which amounts to having the\ndistant parties share a common reference frame and well-calibrated devices.\nAlthough this assumption is typically made implicitly in theoretical works,\nestablishing a common reference frame, as well as aligning and calibrating\nmeasurement devices in experimental situations, are never trivial tasks. It is\ntherefore an interesting and important question whether the requirements of\nhaving a shared reference frame and calibrated devices can be relaxed. In this\nwork we study both theoretically and experimentally the genuine concurrence for\nthe generalized Greenberger-Horne-Zeilinger states under randomly chosen\nmeasurements on single qubits without a shared frame of reference and\ncalibrated devices. 
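For reference, writing the generalized GHZ states studied above as $\cos\theta\,|0\cdots0\rangle + \sin\theta\,|1\cdots1\rangle$ (a standard parametrization we assume here; the abstract does not fix one), the genuine concurrence of the pure state has a simple closed form:

```latex
% Genuine (multipartite) concurrence of a generalized GHZ state
% |psi> = cos(theta)|0...0> + sin(theta)|1...1>.
% Every bipartition A|B gives tr(rho_A^2) = cos^4(theta) + sin^4(theta),
% so the minimum over bipartitions is attained identically and
\[
  C_{\mathrm{gen}}(\psi)
  = \min_{A|B}\sqrt{2\left(1 - \mathrm{tr}\,\rho_A^2\right)}
  = |\sin 2\theta|.
\]
```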
We present the relation between genuine concurrence and\nthe so-called nonlocal volume, a recently introduced indicator of nonlocality.\n"} {"abstract": " This paper presents dEchorate: a new database of measured multichannel Room\nImpulse Responses (RIRs) including annotations of early echo timings and 3D\npositions of microphones, real sources and image sources under different wall\nconfigurations in a cuboid room. These data provide a tool for benchmarking\nrecent methods in echo-aware speech enhancement, room geometry estimation, RIR\nestimation, acoustic echo retrieval, microphone calibration, echo labeling and\nreflectors estimation. The database is accompanied by software utilities to\neasily access, manipulate and visualize the data, as well as by baseline methods\nfor echo-related tasks.\n"} {"abstract": " We present an X-ray analysis of the ejecta of the supernova remnant G350.1$-$0.3\nobserved with Chandra and Suzaku, clarifying the ejecta's kinematics over a\ndecade and obtaining a new observational clue to the origin of the\nasymmetric explosion. Two Chandra X-ray Observatory images taken in 2009 and\n2018 are analyzed with several methods, enabling us to measure the velocities\nin the plane of the sky. The maximum velocity is 4640$\\pm$290 km s$^{-1}$\n(0.218$\\pm$0.014 arcsec yr$^{-1}$) in the eastern region of the remnant. These\nfindings prompted us to scrutinize the Doppler effects in the spectra of the\nthermal emission; the velocities in the line-of-sight direction are\nestimated to be about a thousand km s$^{-1}$. The results are confirmed by analyzing\nthe spectra of Suzaku. Combining the proper motions and line-of-sight\nvelocities, the ejecta's three-dimensional velocities are $\\sim$3000-5000 km\ns$^{-1}$. The center of the explosion is more stringently constrained by\nfinding the optimal time to reproduce the observed spatial expansion. Our\nfindings that the age of the SNR is at most 655 years and that the\nCCO is observed as a point-source object against the SNR strengthen the\n'hydrodynamical kick' hypothesis for the origin of the remnant.\n"} {"abstract": " We present a derivation-based Atiyah sequence for noncommutative principal\nbundles. Along the way we treat the problem of deciding when a given\n*-automorphism on the quantum base space lifts to a *-automorphism on the\nquantum total space that commutes with the underlying structure group.\n"} {"abstract": " We formulate the Lagrangian of Newtonian cosmology, where the cosmological\nconstant is also introduced. Following the affine quantization procedure, the\nHamiltonian operator is derived. The wave functions of the Newtonian universe\nand the corresponding eigenvalues for the case of matter dominated by a\nnegative cosmological constant are given.\n"} {"abstract": " Learning effective representations in image-based environments is crucial for\nsample efficient Reinforcement Learning (RL). Unfortunately, in RL,\nrepresentation learning is confounded with the exploratory experience of the\nagent -- learning a useful representation requires diverse data, while\neffective exploration is only possible with coherent representations.\nFurthermore, we would like to learn representations that not only generalize\nacross tasks but also accelerate downstream exploration for efficient\ntask-specific training. To address these challenges we propose Proto-RL, a\nself-supervised framework that ties representation learning with exploration\nthrough prototypical representations. 
These prototypes simultaneously serve both as\na summary of the exploratory experience of an agent and as a basis\nfor representing observations. We pre-train these task-agnostic representations\nand prototypes on environments without downstream task information. This\nenables state-of-the-art downstream policy learning on a set of difficult\ncontinuous control tasks.\n"} {"abstract": " The paper discusses how robots enable occupant-safe continuous protection for\nstudents when schools reopen. Conventionally, fixed air filters are not used as\na key pandemic prevention method for public indoor spaces because they are\nunable to trap the airborne pathogens in time in the entire room. However, by\ncombining the mobility of a robot with air filtration, the efficacy of cleaning\nup the air around multiple people is greatly increased. A disinfection co-robot\nprototype is thus developed to provide continuous and occupant-friendly\nprotection to people gathering indoors, specifically for students in a\nclassroom scenario. In a static classroom with students sitting in a grid\npattern, the mobile robot is able to serve up to 14 students per cycle while\nreducing the worst-case pathogen dosage by 20%, and with higher robustness\ncompared to a static filter. The extent of robot protection is optimized by\ntuning the passing distance and speed, such that a robot is able to serve more\npeople given a threshold of worst-case dosage a person can receive.\n"} {"abstract": " We study orbit codes in the field extension ${\\mathbb F}_{q^n}$. First we\nshow that the automorphism group of a cyclic orbit code is contained in the\nnormalizer of the Singer subgroup if the orbit is generated by a subspace that\nis not contained in a proper subfield of ${\\mathbb F}_{q^n}$. We then\ngeneralize to orbits under the normalizer of the Singer subgroup. In that\nsituation some exceptional cases arise and some open cases remain. Finally we\ncharacterize linear isometries between such codes.\n"} {"abstract": " Optimal design of distributed decision policies can be a difficult task, as\nillustrated by the famous Witsenhausen counterexample. In this paper we\ncharacterize the optimal control designs for the vector-valued setting assuming\nthat it results in an internal state that can be described by a continuous\nrandom variable which has a probability density function. More specifically, we\nprovide a genie-aided outer bound that relies on our previous results for\nempirical coordination problems. This solution turns out to be not optimal in\ngeneral, since it consists of a time-sharing strategy between two linear\nschemes of specific power. It follows that the optimal decision strategy for\nthe original scalar Witsenhausen problem must lead to an internal state that\ncannot be described by a continuous random variable which has a probability\ndensity function.\n"} {"abstract": " A user generates n independent and identically distributed data random\nvariables with a probability mass function that must be guarded from a querier.\nThe querier must recover, with a prescribed accuracy, a given function of the\ndata from each of n independent and identically distributed query responses\nupon eliciting them from the user. The user chooses the data probability mass\nfunction and devises the random query responses to maximize distribution\nprivacy as gauged by the (Kullback-Leibler) divergence between the former and\nthe querier's best estimate of it based on the n query responses. 
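For reference, the privacy measure in the abstract above is presumably the standard Kullback-Leibler divergence between the true data distribution $P$ and the querier's estimate $\hat{P}$ (the standard definition, stated here for the reader's convenience):

```latex
% Kullback-Leibler divergence used as the distribution-privacy metric:
\[
  D\!\left(P \,\middle\|\, \hat{P}\right)
  = \sum_{x} P(x) \log \frac{P(x)}{\hat{P}(x)},
\]
% larger divergence means the querier's best estimate \hat{P} is further
% from the user's true distribution P, i.e. more privacy.
```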
Considering\nan arbitrary function, a basic achievable lower bound for distribution privacy\nis provided that does not depend on n and corresponds to worst-case privacy.\nWorst-case privacy equals the log-sum of the cardinalities of the inverse atoms under the\ngiven function, with the number of summands decreasing as the querier recovers\nthe function with improving accuracy. Next, upper (converse) and lower\n(achievability) bounds for distribution privacy, dependent on n, are developed.\nThe former improves upon worst-case privacy and the latter does so under\nsuitable assumptions; both converge to it as n grows. The converse and\nachievability proofs identify explicit strategies for the user and the querier.\n"} {"abstract": " We consider the problem of assigning agents to programs in the presence of\ntwo-sided preferences, commonly known as the Hospital Residents problem. In the\nstandard setting each program has a rigid upper quota which cannot be violated.\nMotivated by applications where quotas are governed by resource availability,\nwe propose and study the problem of computing optimal matchings with\ncost-controlled quotas -- denoted as the CCQ setting. In the CCQ setting we\nhave a cost associated with every program, which denotes the cost of matching a\nsingle agent to the program, and these costs control the quotas. Our goal is to\ncompute a matching that matches all agents, respects the preference lists of\nagents and programs and is optimal with respect to the cost criteria. We study\ntwo optimization problems with respect to the costs -- minimize the total cost\n(MINSUM) and minimize the maximum cost at a program (MINMAX). We show that\nthere is a sharp contrast in the complexity status of these two problems --\nMINMAX is polynomial time solvable whereas MINSUM is NP-hard and hard to\napproximate within a constant factor unless P = NP even under severe\nrestrictions. On the positive side, we present approximation algorithms for\nMINSUM for the general case and a special hard case. The special hard case is\ntheoretically challenging as well as practically motivated, and we present a\nLinear Programming based algorithm for this case. We also establish the\nconnection of our model with the stable extension problem in an apparently\ndifferent two-round setting of the stable matching problem [Gajulapalli et al.\nFSTTCS 2020]. We show that our results in the CCQ setting generalize the stable\nextension problem.\n"} {"abstract": " One-shot semantic image segmentation aims to segment the object regions for\nthe novel class with only one annotated image. Recent works adopt the episodic\ntraining strategy to mimic the expected situation at testing time. However,\nthese existing approaches simulate the test conditions too strictly during the\ntraining process, and thus cannot make full use of the given label information.\nBesides, these approaches mainly focus on the foreground-background target\nclass segmentation setting. They only utilize binary mask labels for training.\nIn this paper, we propose to leverage the multi-class label information during\nthe episodic training. It will encourage the network to generate more\nsemantically meaningful features for each category. After integrating the\ntarget class cues into the query features, we then propose a pyramid feature\nfusion module to mine the fused features for the final classifier. Furthermore,\nto take more advantage of the support image-mask pair, we propose a\nself-prototype guidance branch for support image segmentation. 
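The prototypes in the one-shot segmentation abstract above are typically obtained by masked average pooling over support features; a minimal PyTorch sketch of that common building block follows (our illustrative assumption; the paper's exact computation may differ):

```python
# Masked average pooling: a common way to extract a class prototype from
# a support feature map and its binary mask (illustrative sketch only).
import torch

def masked_average_pooling(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """features: (C, H, W) feature map; mask: (H, W) binary mask.
    Returns a (C,) prototype vector averaged over masked locations."""
    mask = mask.float()
    denom = mask.sum().clamp(min=1.0)              # avoid division by zero
    return (features * mask.unsqueeze(0)).sum(dim=(1, 2)) / denom

feats = torch.randn(256, 32, 32)                   # toy feature map
mask = (torch.rand(32, 32) > 0.5)                  # toy binary mask
prototype = masked_average_pooling(feats, mask)
print(prototype.shape)                             # torch.Size([256])
```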
It constrains\nthe network to generate more compact features and a robust prototype for\neach semantic class. For inference, we propose a fused prototype guidance\nbranch for the segmentation of the query image. Specifically, we leverage the\nprediction of the query image to extract the pseudo-prototype and combine it\nwith the initial prototype. Then we utilize the fused prototype to guide the\nfinal segmentation of the query image. Extensive experiments demonstrate the\nsuperiority of our proposed approach.\n"} {"abstract": " Digital content has grown dramatically in recent years, leading to\nincreased attention to copyright. Image watermarking has been considered one of\nthe most popular methods for copyright protection. With the recent advancements\nin applying deep neural networks in image processing, these networks have also\nbeen used in image watermarking. Robustness and imperceptibility are two\nchallenging requirements of watermarking methods, and the trade-off between\nthem must be satisfied. In this paper, we propose to use an end-to-end network\nfor watermarking. We use a convolutional neural network (CNN) to control the\nembedding strength based on the image content. Dynamic embedding helps the\nnetwork to have the lowest effect on the visual quality of the watermarked\nimage. Different image processing attacks are simulated as a network layer to\nimprove the robustness of the model. Our method is a blind watermarking\napproach that replicates the watermark string to create a matrix of the same\nsize as the input image. Instead of diffusing the watermark data into the input\nimage, we inject the data into the feature space and force the network to do\nthis in regions that increase the robustness against various attacks.\nExperimental results show the superiority of the proposed method in terms of\nimperceptibility and robustness compared to the state-of-the-art algorithms.\n"} {"abstract": " This paper examines a continuous time intertemporal consumption and portfolio\nchoice problem with a stochastic differential utility preference of Epstein-Zin\ntype for a robust investor, who worries about model misspecification and seeks\nrobust decision rules. We provide a verification theorem which formulates the\nHamilton-Jacobi-Bellman-Isaacs equation under a non-Lipschitz condition. Then,\nwith the verification theorem, the explicit closed-form optimal robust\nconsumption and portfolio solutions to a Heston model are given. Also, we\ncompare our robust solutions with the non-robust ones; the comparisons, shown\nin a few figures, accord with common intuition.\n"} {"abstract": " We study transmission in a system consisting of a curved graphene surface as\na circular arc (ripple) connected to two flat graphene sheets on the left and\nright sides. We introduce a mass term in the curved part and study the effect\nof the generated band gap in the spectrum on transport properties for\nspin-up/-down. The tunneling analysis allows us to find all transmission and\nreflection channels modeled by the band gap. The latter acts by decreasing the\nspin-up/-down transmissions while increasing those with opposite spin, both\nexhibiting bell-shaped behavior. We find resonances appearing in reflection\nwith the same spin; thus, backscattering with spin-up/-down is not null in the\nripple. We observe huge spatial shifts for the total conduction in our model,\nand the magnitudes of these shifts can be efficiently controlled by adjusting\nthe band gap.
This high-order tunability of\nthe tunneling effect can be used to design highly accurate devices based on\ngraphene.\n"} {"abstract": " The cells and their spatial patterns in the tumor microenvironment (TME) play\na key role in tumor evolution, and yet the latter remains an understudied topic\nin computational pathology. This study, to the best of our knowledge, is among\nthe first to hybridize local and global graph methods to profile orchestration\nand interaction of cellular components. To address the challenge in\nhematolymphoid cancers, where the cell classes in the TME may be unclear, we\nfirst implemented cell-level unsupervised learning and identified two new cell\nsubtypes. Local cell graphs or supercells were built for each image by\nconsidering the individual cell's geospatial location and class. Then, we\napplied supercell-level clustering and identified two new cell communities. In\nthe end, we built global graphs to abstract spatial interaction patterns and\nextract features for disease diagnosis. We evaluated the proposed algorithm on\nH&E slides of 60 hematolymphoid neoplasms and further compared it with three\ncell-level graph-based algorithms, including the global cell graph, cluster\ncell graph, and FLocK. The proposed algorithm achieved a mean diagnosis\naccuracy of 0.703 with the repeated 5-fold cross-validation scheme. In\nconclusion, our algorithm shows superior performance over the existing methods\nand can be potentially applied to other cancer types.\n"} {"abstract": " Research in the Vision and Language area encompasses challenging topics that\nseek to connect visual and textual information. When the visual information is\nrelated to videos, this takes us into Video-Text Research, which includes\nseveral challenging tasks such as video question answering, video summarization\nwith natural language, and video-to-text and text-to-video conversion. This\npaper reviews the video-to-text problem, in which the goal is to associate an\ninput video with its textual description. This association can be mainly made\nby retrieving the most relevant descriptions from a corpus or generating a new\none given a context video. These two ways represent essential tasks for the\nComputer Vision and Natural Language Processing communities, called the text\nretrieval from video task and the video captioning/description task. These two\ntasks are substantially more complex than predicting or retrieving a single\nsentence from an image. The spatiotemporal information present in videos\nintroduces diversity and complexity regarding the visual content and the\nstructure of associated language descriptions. This review categorizes and\ndescribes the state-of-the-art techniques for the video-to-text problem. It\ncovers the main video-to-text methods and the ways to evaluate their\nperformance. We analyze twenty-six benchmark datasets, showing their drawbacks\nand strengths for the problem requirements. We also show the progress that\nresearchers have made on each dataset, we cover the challenges in the field,\nand we discuss future research directions.\n"} {"abstract": " We study the static magnetic susceptibility $\chi(T, \mu)$ in $SU(2)$ lattice\ngauge theory with $N_f = 2$ light flavours of dynamical fermions at finite\nchemical potential $\mu$. Using linear response theory we find that $SU(2)$\ngauge theory exhibits paramagnetic behavior in both the high-temperature\ndeconfined regime and the low-temperature confining regime.
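[Editor's note] The expansion invoked in the next sentences is the Taylor series of the susceptibility in even powers of $\mu/T$; schematically, in our own notation and under the assumption of a regular expansion around $\mu = 0$,

$$\chi(T, \mu) \;=\; c_0(T) \,+\, c_2(T)\left(\frac{\mu}{T}\right)^{2} \,+\, c_4(T)\left(\frac{\mu}{T}\right)^{4} \,+\, \dots$$

where the coefficients $c_{2k}(T)$ depend only on temperature.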
The paramagnetic\nresponse becomes stronger at higher temperatures and larger values of the\nchemical potential. For our range of temperatures $0.727 \leq T/T_c \leq 2.67$,\nthe first coefficient of the expansion of $\chi(T, \mu)$ in even powers of\n$\mu/T$ around $\mu=0$ is close to that of free quarks and lies in the range\n$(2 \ldots 5) \cdot 10^{-3}$. The strongest paramagnetic response is found in\nthe diquark condensation phase at $\mu > m_{\pi}/2$.\n"} {"abstract": " We consider the problem of estimation and structure learning of high\ndimensional signals via a normal sequence model, where the underlying parameter\nvector is piecewise constant, or has a block structure. We develop a Bayesian\nfusion estimation method by using the Horseshoe prior to induce a strong\nshrinkage effect on successive differences in the mean parameters,\nsimultaneously imposing sufficient prior concentration for non-zero values of\nthe same. The proposed method thus facilitates consistent estimation and\nstructure recovery of the signal pieces. We provide theoretical justifications\nof our approach by deriving posterior convergence rates and establishing\nselection consistency under suitable assumptions. We also extend our proposed\nmethod to signal de-noising over arbitrary graphs and develop efficient\ncomputational methods along with providing theoretical guarantees. We\ndemonstrate the superior performance of the Horseshoe-based Bayesian fusion\nestimation method through extensive simulations and two real-life examples on\nsignal de-noising in biological and geophysical applications. We also\ndemonstrate the estimation performance of our method on a real-world large\nnetwork for the graph signal de-noising problem.\n"} {"abstract": " Let $G$ be a finite group and let ${\rm cd}(G)$ be the set of the degrees of\nthe complex irreducible characters of $G$. Also let ${\rm cod}(G)$ be the set\nof codegrees of the irreducible characters of $G$. The Taketa problem\nconjectures that if $G$ is solvable, then ${\rm dl}(G) \leq |{\rm cd}(G)|$,\nwhere ${\rm dl}(G)$ is the derived length of $G$. In this note, we show that\n${\rm dl}(G) \leq |{\rm cod}(G)|$ in some cases, and we conjecture that this\ninequality holds if $G$ is a finite solvable group.\n"} {"abstract": " In this article, we model the motion of small-scale eddies in Earth's lower\natmosphere as a compressible neutral fluid flow on a rotating sphere. To\njustify the model, we carried out a numerical computation of the thermodynamic\nand hydrodynamic properties of the viscous atmospheric motion in two dimensions\nusing Navier-Stokes dynamics, conservation of atmospheric energy, and the\ncontinuity equation. The dynamics of the atmosphere are governed by partial\ndifferential equations without any approximation and without considering a\nlatitude-dependent acceleration due to gravity. The governing equations were\nsolved numerically by the finite difference method, applying a horizontal\nair-mass-density perturbation to the atmosphere at a longitude of\n$5\Delta\lambda$. Based on this initial boundary condition, and taking\ntemperature-dependent transport coefficients into account, we obtain the\npropagation of each atmospheric parameter and present it graphically as a\nfunction of geometric position and time.
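[Editor's illustration] As a schematic of the finite-difference approach just described, the Python sketch below steps a one-dimensional advected density perturbation forward in time. It is a drastically reduced stand-in for the full compressible system; the grid size, advection speed, and time step are made up.

import numpy as np

nx, nt = 200, 400            # longitude grid points and time steps (illustrative)
dx, dt, u = 1.0, 0.1, 1.0    # grid spacing, time step, advection speed

rho = np.zeros(nx)
rho[5] = 1.0                 # air-mass-density perturbation at longitude 5*dlambda

for _ in range(nt):
    # first-order upwind step of the continuity equation d(rho)/dt = -u*d(rho)/dx,
    # periodic in longitude around the sphere
    rho -= u * dt / dx * (rho - np.roll(rho, 1))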
All of the\nparameters oscillate in time and exhibit the characteristics of atmospheric\nwaves.\n Finally, the effect of the Coriolis force on the resultant velocity is\ndiscussed by plotting contour lines of the resultant velocity for different\nmagnitudes of the Coriolis force; interesting wave phenomena are obtained for\nthe respective rotations of the Coriolis force.\n Keywords: Navier-Stokes equations; finite difference method; viscous\natmospheric motion; viscous dissipation; convective motion.\n"} {"abstract": " Microbiota profiles measure the structure of microbial communities in a\ndefined environment (known as microbiomes). In the past decade, microbiome\nresearch has focused on health applications, as a result of which the gut\nmicrobiome has been implicated in the development of a broad range of diseases\nsuch as obesity, inflammatory bowel disease, and major depressive disorder. A\nkey goal of many microbiome experiments is to characterise or describe the\nmicrobial community. High-throughput sequencing is used to generate microbiota\nprofiles, but data gathered via this method are extremely challenging to\nanalyse, as the data violate multiple strong assumptions of standard models.\nRough Set Theory (RST) has weak assumptions that are less likely to be\nviolated, and offers a range of attractive tools for extracting knowledge from\ncomplex data. In this paper we present the first application of RST for\ncharacterising microbiomes. We begin with a demonstrative benchmark microbiota\nprofile and extend the approach to gut microbiomes gathered from depressed\nsubjects to enable knowledge discovery. We find that RST is capable of\nexcellent characterisation of the gut microbiomes in depressed subjects and of\nidentifying previously undescribed alterations to the microbiome-gut-brain\naxis. An important aspect of the application of RST is that it provides a\npossible solution to an open research question regarding the search for an\noptimal normalisation approach for microbiome census data, as one does not\ncurrently exist.\n"} {"abstract": " Multistability is a common phenomenon which naturally occurs in complex\nnetworks. If coexisting attractors are numerous and their basins of attraction\nare complexly interwoven, the long-term response to a perturbation can be\nhighly uncertain. We examine the uncertainty in the outcome of perturbations to\nthe synchronous state in a Kuramoto-like representation of the British power\ngrid. Based on local basin landscapes which correspond to single-node\nperturbations, we demonstrate that the uncertainty shows strong spatial\nvariability. While perturbations at many nodes only allow for a few outcomes,\nother local landscapes show extreme complexity with more than a hundred basins.\nParticularly complex domains in the latter can be related to unstable invariant\nchaotic sets of saddle type. Most importantly, we show that the characteristic\ndynamics on these chaotic saddles can be associated with certain topological\nstructures of the network. We find that one particular tree-like substructure\nallows for the chaotic response to perturbations at nodes in the north of Great\nBritain. The interplay with other peripheral motifs increases the uncertainty\nin the system response even further.\n"} {"abstract": " Ginzburg algebras associated to triangulated surfaces provide a means to\ncategorify the cluster algebras of these surfaces.
As shown by Ivan Smith, the\nfinite derived category of such a Ginzburg algebra can be embedded into the\nFukaya category of the total space of a Lefschetz fibration over the surface.\nInspired by this perspective we provide a description of the full derived\ncategory in terms of a perverse schober. The main novelty is a gluing formalism\ndescribing the Ginzburg algebra as a colimit of certain local Ginzburg algebras\nassociated to discs. As a first application we give a new proof of the derived\ninvariance of these Ginzburg algebras under flips of an edge of the\ntriangulation. Finally, we note that the perverse schober as well as the\nresulting gluing construction can also be defined over the sphere spectrum.\n"} {"abstract": " Fourth-order interference is an information processing primitive for photonic\nquantum technologies. When used in conjunction with post-selection, it forms\nthe basis of photonic controlled logic gates and entangling measurements, and\ncan be used to produce quantum correlations. Here, using classical weak\ncoherent states as inputs, we study fourth-order interference in novel $4\n\times 4$ multi-port beam splitters built within multi-core optical fibers.\nUsing two mutually incoherent weak laser pulses as inputs, we observe\nhigh-quality fourth-order interference between photons from different cores, as\nwell as self-interference of a two-photon wavepacket. In addition, we show that\nquantum correlations, in the form of quantum discord, can be maximized by\ncontrolling the intensity ratio between the two input weak coherent states.\nThis should allow for the exploitation of quantum correlations in future\ntelecommunication networks.\n"} {"abstract": " Plant phenotyping, that is, the quantitative assessment of plant traits\nincluding growth, morphology, physiology, and yield, is a critical aspect of\nefficient and effective crop management. Currently, plant phenotyping is a\nmanually intensive and time-consuming process, which involves human operators\nmaking measurements in the field, based on visual estimates or using hand-held\ndevices. In this work, methods for automated grapevine phenotyping are\ndeveloped, aiming at canopy volume estimation and bunch detection and counting.\nIt is demonstrated that both measurements can be effectively performed in the\nfield using a consumer-grade depth camera mounted onboard an agricultural\nvehicle.\n"} {"abstract": " Unsupervised domain adaptation (UDA) methods for person re-identification\n(re-ID) aim at transferring re-ID knowledge from labeled source data to\nunlabeled target data. Although achieving great success, most of them only use\nlimited data from a single-source domain for model pre-training, making the\nrich labeled data insufficiently exploited. To make full use of the valuable\nlabeled data, we introduce the multi-source concept into the UDA person re-ID\nfield, where multiple source datasets are used during training. However,\nbecause of domain gaps, simply combining different datasets only brings limited\nimprovement. In this paper, we try to address this problem from two\nperspectives, i.e., the domain-specific view and the domain-fusion view. Two\nconstructive modules are proposed, and they are compatible with each other.\nFirst, a rectification domain-specific batch normalization (RDSBN) module is\nexplored to simultaneously reduce domain-specific characteristics and increase\nthe distinctiveness of person features.
Second, a graph convolutional network\n(GCN) based multi-domain information fusion (MDIF) module is developed, which\nminimizes domain distances by fusing features of different domains. The\nproposed method outperforms state-of-the-art UDA person re-ID methods by a\nlarge margin, and even achieves comparable performance to the supervised\napproaches without any post-processing techniques.\n"} {"abstract": " We provide high-precision predictions for muon-pair and tau-pair productions\nin a photon-photon collision by considering a complete set of one-loop-level\nscattering amplitudes, i.e., electroweak (EW) corrections together with soft\nand hard QED radiation. Accordingly, we present a detailed numerical discussion\nwith particular emphasis on the pure QED corrections as well as genuinely weak\ncorrections. The effects of angular and initial beam polarisation distributions\non production rates are also discussed. An improvement is observed by a factor\nof two with oppositely polarized photons. Our results indicate that the\none-loop EW radiative corrections enhance the Born cross section and the total\nrelative correction is typically about ten percent for both production\nchannels. It appears that the full EW corrections to $\\gamma \\gamma \\to \\ell^-\n\\ell^+$ are required to match a percent level accuracy.\n"} {"abstract": " The program-over-monoid model of computation originates with Barrington's\nproof that the model captures the complexity class $\\mathsf{NC^1}$. Here we\nmake progress in understanding the subtleties of the model. First, we identify\na new tameness condition on a class of monoids that entails a natural\ncharacterization of the regular languages recognizable by programs over monoids\nfrom the class. Second, we prove that the class known as $\\mathbf{DA}$\nsatisfies tameness and hence that the regular languages recognized by programs\nover monoids in $\\mathbf{DA}$ are precisely those recognizable in the classical\nsense by morphisms from $\\mathbf{QDA}$. Third, we show by contrast that the\nwell studied class of monoids called $\\mathbf{J}$ is not tame. Finally, we\nexhibit a program-length-based hierarchy within the class of languages\nrecognized by programs over monoids from $\\mathbf{DA}$.\n"} {"abstract": " We show that in analytic sub-Riemannian manifolds of rank 2 satisfying a\ncommutativity condition spiral-like curves are not length minimizing near the\ncenter of the spiral. The proof relies upon the delicate construction of a\ncompeting curve.\n"} {"abstract": " We introduce a thermodynamically consistent, minimal stochastic model for\ncomplementary logic gates built with field-effect transistors. We characterize\nthe performance of such gates with tools from information theory and study the\ninterplay between accuracy, speed, and dissipation of computations. With a few\nuniversal building blocks, such as the NOT and NAND gates, we are able to model\narbitrary combinatorial and sequential logic circuits, which are modularized to\nimplement computing tasks. We find generically that high accuracy can be\nachieved provided sufficient energy consumption and time to perform the\ncomputation. However, for low-energy computing, accuracy and speed are coupled\nin a way that depends on the device architecture and task. 
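[Editor's illustration] As a toy picture of the accuracy-dissipation coupling just described, the Monte Carlo sketch below simulates a noisy NOT gate whose bit-flip error decays with the energy spent per operation. This is not the authors' transistor-level model: the exponential error law p_err ~ exp(-E/kT) and all numbers are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

def noisy_not(bits, energy, kT=1.0):
    # error probability assumed to decay exponentially with the energy budget
    p_err = np.exp(-energy / kT)
    flips = rng.random(bits.shape) < p_err
    return np.where(flips, bits, 1 - bits)  # erroneous outputs keep the input

inputs = rng.integers(0, 2, size=100_000)
for energy in (1.0, 3.0, 10.0):
    out = noisy_not(inputs, energy)
    accuracy = np.mean(out == 1 - inputs)
    print(f"E = {energy:4.1f} kT -> accuracy = {accuracy:.4f}")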
Our work bridges the\ngap between the engineering of low dissipation digital devices and theoretical\ndevelopments in stochastic thermodynamics, and provides a platform to study\ndesign principles for low dissipation digital devices.\n"} {"abstract": " A low complexity frequency offset estimation algorithm based on all-phase FFT\nfor M-QAM is proposed. Compared with two-stage algorithms such as FFT+CZT and\nFFT+ZoomFFT, our algorithm can lower computational complexity by 73% and 30%\nrespectively, without loss of the estimation accuracy.\n"} {"abstract": " While self-supervised pretraining has proven beneficial for many computer\nvision tasks, it requires expensive and lengthy computation, large amounts of\ndata, and is sensitive to data augmentation. Prior work demonstrates that\nmodels pretrained on datasets dissimilar to their target data, such as chest\nX-ray models trained on ImageNet, underperform models trained from scratch.\nUsers that lack the resources to pretrain must use existing models with lower\nperformance. This paper explores Hierarchical PreTraining (HPT), which\ndecreases convergence time and improves accuracy by initializing the\npretraining process with an existing pretrained model. Through experimentation\non 16 diverse vision datasets, we show HPT converges up to 80x faster, improves\naccuracy across tasks, and improves the robustness of the self-supervised\npretraining process to changes in the image augmentation policy or amount of\npretraining data. Taken together, HPT provides a simple framework for obtaining\nbetter pretrained representations with fewer computational resources.\n"} {"abstract": " Currently, 1 in every 54 children is diagnosed with Autism Spectrum\nDisorder (ASD), a rate 178% higher than in 2000. An early diagnosis and\ntreatment can significantly increase the chances of going off the spectrum and\nmaking a full recovery. With a multitude of physical and behavioral tests for\nneurological and communication skills, diagnosing ASD is very complex,\nsubjective, time-consuming, and expensive. We hypothesize that the use of\nmachine learning analysis on facial features and social behavior can speed up\nthe diagnosis of ASD without compromising real-world performance. We propose to\ndevelop a hybrid architecture using both categorical data and image data to\nautomate traditional ASD pre-screening, which makes diagnosis a quicker and\neasier process. We created and tested a Logistic Regression model and a Linear\nSupport Vector Machine for Module 1, which classifies ADOS categorical data. A\nConvolutional Neural Network and a DenseNet network are used for module 2,\nwhich classifies video data. Finally, we combined the best performing models, a\nLinear SVM and DenseNet, using three data averaging strategies: a standard\naverage, an average weighted by the amount of training data, and an average\nweighted by the number of ASD patients in the training data, thereby increasing\naccuracy in clinical applications. The results we obtained support our\nhypothesis. Our novel architecture is able to effectively automate ASD\npre-screening with a maximum weighted accuracy of 84%.\n"} {"abstract": " With Deep Neural Networks (DNNs) as a powerful function approximator,\nDeep Reinforcement Learning (DRL) has been successfully demonstrated on robotic\ncontrol tasks.
Compared to DNNs with vanilla artificial neurons, the\nbiologically plausible Spiking Neural Network (SNN) contains a diverse\npopulation of spiking neurons, making it naturally powerful for state\nrepresentation with spatial and temporal information. Based on a hybrid\nlearning framework, where a spike actor-network infers actions from states and\na deep critic network evaluates the actor, we propose a Population-coding and\nDynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state\nrepresentation from two different scales: input coding and neuronal coding. For\ninput coding, we apply population coding with dynamically receptive fields to\ndirectly encode each input state component. For neuronal coding, we propose\ndifferent types of dynamic-neurons (containing 1st-order and 2nd-order neuronal\ndynamics) to describe much more complex neuronal dynamics. Finally, the PDSAN\nis trained in conjunction with deep critic networks using the Twin Delayed Deep\nDeterministic policy gradient algorithm (TD3-PDSAN). Extensive experimental\nresults show that our TD3-PDSAN model achieves better performance than\nstate-of-the-art models on four OpenAI gym benchmark tasks. It is an important\nattempt to improve RL with SNNs, moving toward effective computation that\nsatisfies biological plausibility.\n"} {"abstract": " Mode-locking operation and multimode instabilities in Terahertz (THz) quantum\ncascade lasers (QCLs) have been intensively investigated during the last\ndecade. These studies have unveiled a rich phenomenology, owing to the unique\nproperties of these lasers, in particular their ultrafast gain medium. Thanks\nto this, in QCLs a modulation of the intracavity field intensity gives rise to\na strong modulation of the population inversion, directly affecting the laser\ncurrent. In this work we show that this property can be used to study the\nreal-time dynamics of multimode THz QCLs, using a self-detection technique\ncombined with a 60GHz real-time oscilloscope. To demonstrate the potential of\nthis technique we investigate a free-running 4.2THz QCL, and observe a\nself-starting periodic modulation of the laser current, producing trains of\nregularly spaced, ~100ps-long pulses. Depending on the drive current we find\ntwo regimes of oscillation with dramatically different properties: a first\nregime at the fundamental repetition rate, characterised by large amplitude and\nphase noise, with coherence times of a few tens of periods; a much more regular\nsecond-harmonic-comb regime, with typical coherence times of ~10^5 oscillation\nperiods. We interpret these measurements using a set of effective semiconductor\nMaxwell-Bloch equations that qualitatively reproduce the fundamental features\nof the laser dynamics, indicating that the observed carrier-density and optical\npulses are in antiphase, and appear as a rather shallow modulation on top of a\ncontinuous wave background. Thanks to its simplicity and versatility, the\ndemonstrated technique is a powerful tool for the study of ultrafast dynamics\nin THz QCLs.\n"} {"abstract": " We study the renormalization of Entanglement Entropy in holographic CFTs dual\nto Lovelock gravity. It is known that the holographic EE in Lovelock gravity is\ngiven by the Jacobson-Myers (JM) functional. As usual, due to the divergent\nWeyl factor in the Fefferman-Graham expansion of the boundary metric for\nAsymptotically AdS spaces, this entropy functional is infinite.
By considering\nthe Kounterterm renormalization procedure, which utilizes extrinsic boundary\ncounterterms in order to renormalize the on-shell Lovelock gravity action for\nAAdS spacetimes, we propose a new renormalization prescription for the\nJacobson-Myers functional. We then explicitly show the cancellation of\ndivergences in the EE up to next-to-leading order in the holographic radial\ncoordinate, for the case of spherical entangling surfaces. Using this new\nrenormalization prescription, we directly find the $C$-function candidates for\nodd- and even-dimensional CFTs dual to Lovelock gravity. Our results illustrate\nthe notable improvement that the Kounterterm method affords over other\napproaches, as it is non-perturbative and does not require that the Lovelock\ntheory has limiting Einstein behavior.\n"} {"abstract": " We study the identification of linear systems with multiplicative noise from\nmultiple trajectory data. A least-squares algorithm, based on exploratory\ninputs, is proposed to simultaneously estimate the parameters of the nominal\nsystem and the covariance matrix of the multiplicative noise. The algorithm\ndoes not need prior knowledge of the noise or stability of the system, but\nrequires mild conditions on the inputs and a relatively small length for each\ntrajectory. Identifiability of the noise covariance matrix is studied, showing\nthat there exists an equivalent class of matrices that generate the same\nsecond-moment dynamic of system states. It is demonstrated how to obtain the\nequivalent class based on estimates of the noise covariance. Asymptotic\nconsistency of the algorithm is verified under sufficiently exciting inputs and\nsystem controllability conditions. Non-asymptotic estimation performance is\nalso analyzed under the assumption that system states and noise are bounded,\nproviding vanishing high-probability bounds as the number of trajectories grows\nto infinity. The results are illustrated by numerical simulations.\n"} {"abstract": " Yield farming has been an immensely popular activity for cryptocurrency\nholders since the explosion of Decentralized Finance (DeFi) in the summer of\n2020. In this Systematization of Knowledge (SoK), we study a general framework\nfor yield farming strategies with empirical analysis. First, we summarize the\nfundamentals of yield farming by focusing on the protocols and tokens used by\naggregators. We then examine the sources of yield and translate those into\nthree example yield farming strategies, followed by simulations of yield\nfarming performance based on these strategies. We further compare four major\nyield aggregators -- Idle, Pickle, Harvest and Yearn -- in the ecosystem, along\nwith brief introductions of others. We systematize their strategies and revenue\nmodels, and conduct an empirical analysis with on-chain data from example\nvaults, to find a plausible connection between data anomalies and historical\nevents. Finally, we discuss the benefits and risks of yield aggregators.\n"} {"abstract": " We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data\nautoencoders. To demonstrate its efficiency we learn to synthesize\nhigh-resolution point clouds of 10k points that densely describe the underlying\ngeometry of Computer Aided Design (CAD) models. Scanning artifacts, such as\nprotrusions, missing parts, smoothed edges and holes, inevitably appear in real\n3D scans of fabricated CAD objects.
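[Editor's illustration] Point-cloud autoencoders of this kind are commonly trained with a reconstruction loss such as the symmetric Chamfer distance; the abstract does not name the loss actually used, so the NumPy sketch below is illustrative only, with stand-in data.

import numpy as np

def chamfer_distance(a, b):
    # a: (N,3), b: (M,3) point clouds; mean nearest-neighbor distance, both ways
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

scan = np.random.rand(1024, 3)                   # stand-in for a sampled 3D scan
recon = scan + 0.01 * np.random.randn(1024, 3)   # stand-in for autoencoder output
print(chamfer_distance(scan, recon))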
Learning the original CAD model\nconstruction from a 3D scan requires a ground truth to be available together\nwith the corresponding 3D scan of an object. To close this gap, we introduce a\nnew dedicated dataset, the CC3D, containing 50k+ pairs of CAD models and their\ncorresponding 3D meshes. This dataset is used to learn a convolutional\nautoencoder for point clouds sampled from the 3D scan - CAD model pairs. The\nchallenges of this new dataset are demonstrated in comparison with other\ngenerative point cloud sampling models trained on ShapeNet. The CC3D\nautoencoder is efficient with respect to memory consumption and training time\nas compared to state-of-the-art models for 3D data generation.\n"} {"abstract": " We investigate quantitative aspects of the LEF property for subgroups of the\ntopological full group $[[ \sigma ]]$ of a two-sided minimal subshift over a\nfinite alphabet, measured via the LEF growth function. We show that the LEF\ngrowth of $[[ \sigma ]]^{\prime}$ may be bounded from above and below in terms\nof the recurrence function and the complexity function of the subshift,\nrespectively. As an application, we construct groups of previously unseen LEF\ngrowth types, and exhibit a continuum of finitely generated LEF groups which\nmay be distinguished from one another by their LEF growth.\n"} {"abstract": " Riordan arrays, denoted by pairs of generating functions (g(z), f(z)), are\ninfinite lower-triangular matrices that are used as combinatorial tools. In\nthis paper, we present Riordan and stochastic Riordan arrays that have\nconnections to the Fibonacci and modified Lucas numbers. Then, we present some\npseudo-involutions in the Riordan group that are based on constructions\nstarting with a certain generating function g(z). We also present a theorem\nthat shows how to construct pseudo-involutions in the Riordan group starting\nwith a certain generating function f(z) whose additive inverse has\ncompositional order 2. The theorem is then used to construct more\npseudo-involutions in the Riordan group where some arrays have connections to\nthe Fibonacci and modified Lucas numbers. A MATLAB algorithm for constructing\nthe pseudo-involutions is also given.\n"} {"abstract": " The measurement of bias in machine learning often focuses on model\nperformance across identity subgroups (such as man and woman) with respect to\nground-truth labels. However, these methods do not directly measure the\nassociations that a model may have learned, for example between labels and\nidentity subgroups. Further, measuring a model's bias requires a fully\nannotated evaluation dataset which may not be easily available in practice. We\npresent an elegant mathematical solution that tackles both issues\nsimultaneously, using image classification as a working example. By treating a\nclassification model's predictions for a given image as a set of labels\nanalogous to a bag of words, we rank the biases that a model has learned with\nrespect to different identity labels.
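[Editor's illustration] To sketch this ranking idea with the nPMI metric that the abstract goes on to recommend, the following Python snippet scores label-identity associations from co-occurrence counts; all counts here are invented for illustration.

import numpy as np

def npmi(n_joint, n_label, n_identity, n_total):
    # normalized pointwise mutual information: pmi / (-log p(label, identity)),
    # ranging from -1 (never co-occur) to 1 (always co-occur)
    p_xy = n_joint / n_total
    p_x, p_y = n_label / n_total, n_identity / n_total
    return np.log(p_xy / (p_x * p_y)) / -np.log(p_xy)

# invented counts: (co-occurrences with the identity, total label occurrences)
counts = {"lipstick": (250, 300), "laptop": (90, 400)}
n_identity, n_total = 5000, 10000   # images with the identity, total images
for label, (joint, total) in counts.items():
    print(label, round(npmi(joint, total, n_identity, n_total), 3))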
We use (man, woman) as a concrete example\nof an identity label set (although this set need not be binary), and present\nrankings for the labels that are most biased towards one identity or the other.\nWe demonstrate how the statistical properties of different association metrics\ncan lead to different rankings of the most \"gender biased\" labels, and conclude\nthat normalized pointwise mutual information (nPMI) is most useful in practice.\nFinally, we announce an open-sourced nPMI visualization tool using TensorBoard.\n"} {"abstract": " Usually, in mechanics, we obtain the trajectory of a particle in a given\nforce field by solving Newton's second law with chosen initial conditions. In\ncontrast, through our work here, we first demonstrate how one may analyse the\nbehaviour of a suitably defined family of trajectories of a given mechanical\nsystem. Such an approach leads us to develop a mechanics analog following the\nwell-known Raychaudhuri equation largely studied in Riemannian geometry and\ngeneral relativity. The idea of geodesic focusing, which is more familiar to a\nrelativist, appears to be analogous to the meeting of trajectories of a\nmechanical system within a finite time. Applying our general results to the\ncase of simple pendula, we obtain relevant quantitative consequences.\nThereafter, we set up and perform a straightforward experiment based on a\nsystem with two pendula. The experimental results on this system are found to\ntally well with our proposed theoretical model. In summary, the simple theory,\nas well as the related experiment, provides us with a way to understand the\nessence of a fairly involved concept in advanced physics from an elementary\nstandpoint.\n"} {"abstract": " Recently, it has become essential to search for and retrieve high-resolution\nimages easily and efficiently, owing to the swift growth of digital imagery.\nMany present annotation algorithms face a major challenge: the gulf between\nlow-level visual features and high-level image semantics, an issue known as the\nsemantic gap. This work uses the MPEG-7 standard to extract features from\nimages: the color features are extracted with the Scalable Color Descriptor\n(SCD) and the Color Layout Descriptor (CLD), whereas the texture features are\nextracted with the Edge Histogram Descriptor (EHD). Since the CLD produces a\nhigh-dimensional feature vector, it is reduced by Principal Component Analysis\n(PCA). The features extracted by these three descriptors are passed to the\nclassifiers (Naive Bayes and Decision Tree) for training, which then annotate\nthe query image. In this study the TU Darmstadt image bank was used. The\nresults of tests and a comparative performance evaluation indicated better\nprecision and execution time for Naive Bayes classification in comparison with\nDecision Tree classification.\n"} {"abstract": " According to the O'Nan--Scott Theorem, a finite primitive permutation group\neither preserves a structure of one of three types (affine space, Cartesian\nlattice, or diagonal semilattice), or is almost simple. However, diagonal\ngroups are a much larger class than those occurring in this theorem. For any\npositive integer $m$ and group $G$ (finite or infinite), there is a diagonal\nsemilattice, a sub-semilattice of the lattice of partitions of a set $\Omega$,\nwhose automorphism group is the corresponding diagonal group.
Moreover, there\nis a graph (the diagonal graph), bearing much the same relation to the diagonal\nsemilattice and group as the Hamming graph does to the Cartesian lattice and\nthe wreath product of symmetric groups.\n Our purpose here, after a brief introduction to this semilattice and graph,\nis to establish some properties of this graph. The diagonal graph\n$\Gamma_D(G,m)$ is a Cayley graph for the group~$G^m$, and so is\nvertex-transitive. We establish its clique number in general and its chromatic\nnumber in most cases, with a conjecture about the chromatic number in the\nremaining cases. We compute the spectrum of the adjacency matrix of the graph,\nusing a calculation of the M\\\"obius function of the diagonal semilattice. We\nalso compute some other graph parameters and symmetry properties of the graph.\n We believe that this family of graphs will play a significant role in\nalgebraic graph theory.\n"} {"abstract": " Many Riordan arrays play a significant role in algebraic combinatorics. We\nexplore the inversion of Riordan arrays in this context. We give a general\nconstruct for the inversion of a Riordan array, and study this in the case of\nvarious subgroups of the Riordan group. For instance, we show that the\ninversion of an ordinary Bell matrix is an exponential Riordan array in the\nassociated subgroup. Examples from combinatorics and algebraic combinatorics\nillustrate the usefulness of such inversions. We end with a brief look at the\ninversion of exponential Riordan arrays. A final example places Airey's\nconvergent factor in the context of a simple exponential Riordan array.\n"} {"abstract": " Unveiling point defect concentrations in transition metal oxide thin films is\nessential to understand and eventually control their functional properties,\nemployed in an increasing number of applications and devices. Despite this\nunquestionable interest, there is a lack of available experimental techniques\nable to estimate the defect chemistry and equilibrium constants in such oxides\nat intermediate-to-low temperatures. In this study, the defect chemistry of a\nrelevant material such as La1-xSrxFeO3-d (LSF) with x = 0.2, 0.4 and 0.5\n(LSF20, LSF40 and LSF50, respectively) is obtained by using a novel in situ\nspectroscopic ellipsometry approach applied to thin films. Through this\ntechnique, the concentration of holes in LSF is correlated to measured optical\nproperties, and its evolution with temperature and oxygen partial pressure is\ndetermined. In this way, a systematic description of defect chemistry in LSF\nthin films in the temperature range from 350 °C to 500 °C is obtained for the\nfirst time, which represents a step forward in the understanding of LSF20,\nLSF40 and LSF50 for emerging low-temperature applications.\n"} {"abstract": " We use a theorem of P. Berger and D. Turaev to construct an example of a\nFinsler geodesic flow on the 2-torus with a transverse section, such that its\nPoincar\'e return map has positive metric entropy.
The Finsler metric\ngenerating the flow can be chosen to be arbitrarily $C^\\infty$-close to a flat\nmetric.\n"} {"abstract": " In this paper, we characterize the performance of a three-dimensional (3D)\ntwo-hop cellular network in which terrestrial base stations (BSs) coexist with\nunmanned aerial vehicles (UAVs) to serve a set of ground user equipment (UE).\nIn particular, a UE connects either directly to its serving terrestrial BS by\nan access link or connects first to its serving UAV which is then wirelessly\nbackhauled to a terrestrial BS (joint access and backhaul). We consider\nrealistic antenna radiation patterns for both BSs and UAVs using practical\nmodels developed by the third generation partnership project (3GPP). We assume\na probabilistic channel model for the air-to-ground transmission, which\nincorporates both line-of-sight (LoS) and non-line-of-sight (NLoS) links.\nAssuming the max-power association policy, we study the performance of the\nnetwork in both amplify-and-forward (AF) and decode-and-forward (DF) relaying\nprotocols. Using tools from stochastic geometry, we analyze the joint\ndistribution of distance and zenith angle of the closest (and serving) UAV to\nthe origin in a 3D setting. Further, we identify and extensively study key\nmathematical constructs as the building blocks of characterizing the received\nsignal-to-interference-plus-noise ratio (SINR) distribution. Using these\nresults, we obtain exact mathematical expressions for the coverage probability\nin both AF and DF relaying protocols. Furthermore, considering the fact that\nbackhaul links could be quite weak because of the downtilted antennas at the\nBSs, we propose and analyze the addition of a directional uptilted antenna at\nthe BS that is solely used for backhaul purposes. The superiority of having\ndirectional antennas with wirelessly backhauled UAVs is further demonstrated\nvia simulation.\n"} {"abstract": " Edge computing has become one of the key enablers for ultra-reliable and\nlow-latency communications in the industrial Internet of Things in the fifth\ngeneration communication systems, and is also a promising technology in the\nfuture sixth generation communication systems. In this work, we consider the\napplication of edge computing to smart factories for mission-critical task\noffloading through wireless links. In such scenarios, although high end-to-end\ndelays from the generation to completion of tasks happen with low probability,\nthey may incur severe casualties and property loss, and should be seriously\ntreated. Inspired by the risk management theory widely used in finance, we\nadopt the Conditional Value at Risk to capture the tail of the delay\ndistribution. An upper bound of the Conditional Value at Risk is derived\nthrough analysis of the queues both at the devices and the edge computing\nservers. We aim to find out the optimal offloading policy taking into\nconsideration both the average and the worst case delay performance of the\nsystem. Given that the formulated optimization problem is a non-convex mixed\ninteger non-linear programming problem, a decomposition into sub-problems is\nperformed and a two-stage heuristic algorithm is proposed. Simulation results\nvalidate our analysis and indicate that the proposed algorithm can reduce the\nrisk in both the queuing and end-to-end delay.\n"} {"abstract": " For many years, the image databases used in steganalysis have been relatively\nsmall, i.e. about ten thousand images. 
This limits the diversity of images and\nthus prevents large-scale analysis of steganalysis algorithms.\n In this paper, we describe a large JPEG database composed of 2 million colour\nand grey-scale images. This database, named LSSD for Large Scale Steganalysis\nDatabase, was obtained thanks to the intensive use of \"controlled\" development\nprocedures. LSSD has been made publicly available, and we hope it will be used\nby the steganalysis community for large-scale experiments.\n We introduce the pipeline used for building various image database versions.\nWe detail the general methodology that can be used to redevelop the entire\ndatabase and further increase the diversity. We also discuss the computational\nand storage costs involved in developing the images.\n"} {"abstract": " Machine-Learning-as-a-Service providers expose machine learning (ML) models\nthrough application programming interfaces (APIs) to developers. Recent work\nhas shown that attackers can exploit these APIs to extract good approximations\nof such ML models, by querying them with samples of their choosing. We propose\nVarDetect, a stateful monitor that tracks the distribution of queries made by\nusers of such a service, to detect model extraction attacks. Harnessing the\nlatent distributions learned by a modified variational autoencoder, VarDetect\nrobustly separates three types of attacker samples from benign samples, and\nsuccessfully raises an alarm for each. Further, with VarDetect deployed as an\nautomated defense mechanism, the extracted substitute models are found to\nexhibit poor performance and transferability, as intended. Finally, we\ndemonstrate that even adaptive attackers with prior knowledge of the deployment\nof VarDetect are detected by it.\n"} {"abstract": " Fog computing can be used to offload computationally intensive tasks from\nbattery-powered Internet of Things (IoT) devices. Although it reduces the\nenergy required for computations in an IoT device, it uses energy for\ncommunications with the fog. This paper analyzes when the use of fog computing\nis more energy efficient than local computing. Detailed energy consumption\nmodels are built in both scenarios, with the focus set on the relation between\nenergy consumption and the distortion introduced by a Power Amplifier (PA).\nNumerical results show that task offloading to a fog is the most energy\nefficient for short, wideband links.\n"} {"abstract": " The geometric properties of sigma models with target space a Jacobi manifold\nare investigated. In their basic formulation, these are topological field\ntheories - recently introduced by the authors - which share and generalise\nrelevant features of Poisson sigma models, such as gauge invariance under\ndiffeomorphisms and finite dimension of the reduced phase space. After\nreviewing the main novelties and peculiarities of these models, we perform a\ndetailed analysis of constraints and ensuing gauge symmetries in the\nHamiltonian approach. Contact manifolds as well as locally conformal symplectic\nmanifolds are discussed, as main instances of Jacobi manifolds.\n"} {"abstract": " Using symmetrization techniques, we show that, for every $N \geq 2$, any\nsecond eigenfunction of the fractional Laplacian in the $N$-dimensional unit\nball with homogeneous Dirichlet conditions is nonradial, and hence its nodal\nset is an equatorial section of the ball.\n"} {"abstract": " In March 2020 the United Kingdom (UK) entered a nationwide lockdown period\ndue to the Covid-19 pandemic.
As a result, levels of nitrogen dioxide (NO2) in\nthe atmosphere dropped. In this work, we use 550,134 NO2 data points from 237\nstations in the UK to build a spatiotemporal Gaussian process capable of\npredicting NO2 levels across the entire UK. We integrate several covariate\ndatasets to enhance the model's ability to capture the complex spatiotemporal\ndynamics of NO2. Our numerical analyses show that, within two weeks of a UK\nlockdown being imposed, UK NO2 levels dropped 36.8%. Further, we show that as a\ndirect result of lockdown, NO2 levels were 29-38% lower than they would have\nbeen had no lockdown occurred. To accompany these numerical results, we provide\na software framework that allows practitioners to easily and efficiently fit\nsimilar models.\n"} {"abstract": " Locally-rotationally-symmetric Bianchi type-I viscous and non-viscous\ncosmological models are explored in general relativity (GR) and in f(R,T)\ngravity. Solutions are obtained by assuming that the expansion scalar is\nproportional to the shear scalar, which yields a constant value for the\ndeceleration parameter (q=2). Constraints are obtained by requiring the\nphysical viability of the solutions. A comparison is made between the viscous\nand non-viscous models, and between the models in GR and in f(R,T) gravity. The\nmetric potentials remain the same in GR and in f(R,T) gravity. Consequently,\nthe geometrical behavior of the $f(R,T)$ gravity models remains the same as\nthat of the models in GR. It is found that f(R,T) gravity or bulk viscosity\ndoes not affect the behavior of the effective matter, which acts as a stiff\nfluid in all models. The individual fluids have very rich behavior. In one of\nthe viscous models, the matter either follows a semi-realistic EoS or exhibits\na transition from stiff matter to phantom, depending on the values of the\nparameter. In another model, the matter describes radiation, dust,\nquintessence, phantom, and the cosmological constant for different values of\nthe parameter. In general, f(R,T) gravity diminishes the effect of bulk\nviscosity.\n"} {"abstract": " We propose TubeR: a simple solution for spatio-temporal video action\ndetection. Different from existing methods that depend on either an off-line\nactor detector or hand-designed actor-positional hypotheses like proposals or\nanchors, we propose to directly detect an action tubelet in a video by\nsimultaneously performing action localization and recognition from a single\nrepresentation. TubeR learns a set of tubelet-queries and utilizes a\ntubelet-attention module to model the dynamic spatio-temporal nature of a video\nclip, which effectively reinforces the model capacity compared to using\nactor-positional hypotheses in the spatio-temporal space. For videos containing\ntransitional states or scene changes, we propose a context aware classification\nhead to utilize short-term and long-term context to strengthen action\nclassification, and an action switch regression head for detecting the precise\ntemporal action extent. TubeR directly produces action tubelets with variable\nlengths and even maintains good results for long video clips. TubeR outperforms\nthe previous state-of-the-art on commonly used action detection datasets AVA,\nUCF101-24 and JHMDB51-21.\n"} {"abstract": " We prove the existence of an extremal function in the\nHardy-Littlewood-Sobolev inequality for the energy associated to a stable\noperator.
To this aim we obtain a concentration-compactness principle for\nstable processes in $\mathbb{R}^N$.\n"} {"abstract": " Smooth interfaces of topological systems are known to host massive surface\nstates along with the topologically protected chiral one. We show that in Weyl\nsemimetals these massive states, along with the chiral Fermi arc, strongly\nalter the form of the Fermi-arc plasmon. Most saliently, they yield further\ncollective plasmonic modes that are absent in conventional interfaces. The\nplasmon modes are completely anisotropic as a consequence of the underlying\nanisotropy in the surface model, and are expected to have a clear-cut\nexperimental signature, e.g. in electron-energy loss spectroscopy.\n"} {"abstract": " A two-class Processor-Sharing queue with one impatient class is studied.\nLocal exponential decay rates for its stationary distribution (N, M) are\nestablished in the heavy traffic regime where the arrival rate of impatient\ncustomers grows proportionally to a large factor A. This regime is\ncharacterized by two time-scales, so that no general Large Deviations result is\napplicable. In the framework of singular perturbation methods, we instead\nassume that an asymptotic expansion of the solution of associated Kolmogorov\nequations exists for large A and derive it in the form P(N = Ax, M = Ay) ~\ng(x,y)/A exp(-A H(x,y)) for x > 0 and y > 0 with explicit functions g and H.\n This result is then applied to the model of mobile networks proposed in a\nprevious work, which accounts for the spatial movement of users. We give\nfurther evidence of an unusual growth behavior in heavy traffic, in that the\nstationary mean queue lengths E(N') and E(M') of each customer class increase\nproportionally to E(N') ~ E(M') ~ -log(1-rho) as the system load rho tends to\n1, instead of the usual 1/(1-rho) growth behavior.\n"} {"abstract": " Loosely bound van der Waals dimers of lanthanide atoms, as might be obtained\nin ultracold atom experiments, are investigated. These molecules are known to\nexhibit a degree of quantum chaos, due to the strong anisotropic mixing of\ntheir angular spin and rotation degrees of freedom. Within a model of these\nmolecules, we identify different realms of this anisotropic mixing, depending\non whether the spin, the rotation, or both, are significantly mixed by the\nanisotropy. These realms are in turn generally correlated with the resulting\nmagnetic moments of the states.\n"} {"abstract": " We have investigated the structural, magnetic and dielectric properties of\nthe Pb-based langasite compound Pb$_3$TeMn$_3$P$_2$O$_{14}$ both experimentally\nand theoretically in the light of metal-oxygen covalency and the consequent\ngeneration of multiferroicity. It is known that large covalency between the Pb\n6$p$ and O 2$p$ orbitals plays an instrumental role in the stereochemical lone\npair activity of Pb. The same happens here, but a subtle structural phase\ntransition above room temperature changes the degree of such lone pair\nactivity, and the system becomes ferroelectric below 310 K. Interestingly, this\nstructural change also modulates the charge densities on the different\nconstituent atoms and consequently the overall magnetic response of the system,\nwhile keeping the global paramagnetic behavior of the compound intact.
This single origin of the modulation\nin polarity and paramagnetism inherently connects the two functionalities, and\nthe system exhibits multiferroicity at room temperature.\n"} {"abstract": " We present a series of models of three-dimensional rotation-symmetric fragile\ntopological insulators in class AI (time-reversal symmetric and spin-orbit-free\nsystems), which have gapless surface states protected by time-reversal ($T$)\nand $n$-fold rotation ($C_n$) symmetries ($n=2,4,6$). Our models are\ngeneralizations of Fu's model of a spinless topological crystalline insulator,\nin which orbital degrees of freedom play the role of pseudo-spins. We consider\na minimal surface Hamiltonian with $C_n$ symmetry in class AI and discuss\npossible symmetry-protected gapless surface states, i.e., a quadratic band\ntouching and multiple Dirac cones with linear dispersion. We characterize the\ntopological structure of bulk wave functions in terms of two kinds of\ntopological invariants obtained from Wilson loops: $\mathbb{Z}_2$ invariants\nprotected by $C_n$ ($n=4,6$) and time-reversal symmetries, and\n$C_2T$-symmetry-protected $\mathbb{Z}$ invariants (the Euler class) when the\nnumber of occupied bands is two. Accordingly, our models realize two kinds of\nfragile topological insulators. One is a fragile $\mathbb{Z}$ topological\ninsulator whose only nontrivial topological index is the Euler class that\nspecifies the number of surface Dirac cones. The other is a fragile\n$\mathbb{Z}_2$ topological insulator having gapless surface states with either\na quadratic band touching or four (six) Dirac cones, which are protected by\ntime-reversal and $C_4$ ($C_6$) symmetries. Finally, we discuss the instability\nof gapless surface states against the addition of $s$-orbital bands and\ndemonstrate that surface states are gapped out through hybridization with\nsurface-localized $s$-orbital bands.\n"} {"abstract": " Complexity of products, volatility in global markets, and the increasingly\nrapid pace of innovations may make it difficult to know how to approach\nchallenging situations in mechatronic design and production. Technical Debt\n(TD) is a metaphor that describes the practical bargain of exchanging\nshort-term benefits for long-term negative consequences. Oftentimes, the scope\nand impact of TD, as well as the cost of corrective measures, are\nunderestimated. Especially for mechatronic teams in the mechanical, electrical,\nand software disciplines, the adverse interdisciplinary ripple effects of TD\nincidents are passed on throughout the life cycle. The analysis of the first\ncomprehensive survey showed that not only do the TD types differ in\ncross-disciplinary comparisons, but different characteristics can also be\nobserved depending on whether a discipline is studied in isolation or in\ncombination with others. To validate the study results and to report on a\ngeneral awareness of TD in the disciplines, this follow-up study involves 15 of\nthe 50 experts of the predecessor study and reflects the frequency and impact\nof technical debt in industrial experts' daily work using a questionnaire.\nThese experts rate 14 TD types, 47 TD causes, and 33 TD symptoms in terms of\ntheir frequency and impact.
Detailed analyses reveal consistent\nresults for the most frequent TD types and causes, yet they show divergent\ncharacteristics in a deeper exploration of discipline-specific phenomena.\nThus, this study has the potential to set the foundations for future automated\nTD identification analyses in mechatronics.\n"} {"abstract": " Defect detection at commit check-in time prevents the introduction of defects\ninto software systems. Current defect detection approaches rely on metric-based\nmodels which are not very accurate and whose results are not directly useful\nfor developers. We propose a method to detect bug-inducing commits by comparing\nthe incoming changes with all past commits in the project, considering both\nthose that introduced defects and those that did not. Our method considers\nindividual changes in the commit separately, at the method-level granularity.\nDoing so helps developers as they are informed of specific methods that need\nfurther attention instead of being told that the entire commit is problematic.\nOur approach represents source code as abstract syntax trees and uses tree\nkernels to estimate the similarity of the code with previous commits. We\nexperiment with subtree kernels (STK), subset tree kernels (SSTK), or partial\ntree kernels (PTK). An incoming change is then classified using a K-NN\nclassifier on the past changes. We evaluate our approach on the BigCloneBench\nbenchmark and on the Technical Debt dataset, using the NiCad clone detector as\nthe baseline. Our experiments with the BigCloneBench benchmark show that the\ntree kernel approach can detect clones with a MAP comparable to that of NiCad.\nAlso, on defect detection with the Technical Debt dataset, tree kernels are at\nleast as effective as NiCad, with MRR, F-score, and Accuracy of 0.87, 0.80, and\n0.82, respectively.\n"} {"abstract": " Particle-In-Cell codes are widely used for plasma physics simulations. It is\noften the case that particles within a computational cell need to be split to\nimprove the statistics or, in the case of non-uniform meshes, to avoid the\ndevelopment of fictitious self-forces. Existing particle splitting methods are\nlargely empirical and their accuracy in preserving the distribution function\nhas not been evaluated in a quantitative way. Here we present a new method\nspecifically designed for codes using adaptive mesh refinement. Although we\npoint out that an exact, distribution-function-preserving method does exist, it\nrequires a large number of split particles and its practical use is limited. We\nderive instead a method that minimizes the cost function representing the\ndistance between the assignment function of the original particle and that of\nthe sum of split particles. Depending on the interpolation degree and the\ndimension of the problem, we provide tabulated results for the weights and\npositions of the split particles. This strategy incurs no overhead in\ncomputing time, and for a large enough number of split particles it\nasymptotically tends to the exact solution.\n"} {"abstract": " Lettericity is a graph parameter introduced by Petkov\v{s}ek in 2002 in order\nto study well-quasi-orderability under the induced subgraph relation. In the\nworld of permutations, geometric griddability was independently introduced in\n2013 by Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter, partly as an\nenumerative tool. Despite their independent origins, those two notions share a\nconnection: they highlight very similar structural features in their respective\nobjects.
The fact that those structural features arose separately on two\ndifferent occasions makes them very interesting to study in their own right.\n In the present paper, we explore the notion of lettericity through the lens\nof \"minimal obstructions\", i.e., minimal classes of graphs of unbounded\nlettericity, and identify an infinite collection of such classes. We also\ndiscover an intriguing structural hierarchy that arises in the study of\nlettericity and that of griddability.\n"} {"abstract": " Deuterated molecules are good tracers of the evolutionary stage of\nstar-forming cores. During the star formation process, deuterated molecules are\nexpected to be enhanced in cold, dense pre-stellar cores and to deplete after\nprotostellar birth. In this paper we study the deuteration fraction of\nformaldehyde in high-mass star-forming cores at different evolutionary stages\nto investigate whether the deuteration fraction of formaldehyde can be used as\nan evolutionary tracer. Using the APEX SEPIA Band 5 receiver, we extended our\npilot study of the $J$=3$\rightarrow$2 rotational lines of HDCO and D$_2$CO to\neleven high-mass star-forming regions that host objects at different\nevolutionary stages. High-resolution follow-up observations of eight objects in\nALMA Band 6 were performed to reveal the size of the H$_2$CO emission and to\ngive an estimate of the deuteration fractions HDCO/H$_2$CO and D$_2$CO/HDCO at\nscales of $\sim$6\" (0.04-0.15 pc at the distance of our targets). Our\nobservations show that singly and doubly deuterated H$_2$CO are detected\ntoward high-mass protostellar objects (HMPOs) and ultracompact HII regions\n(UCHII regions), and the deuteration fraction of H$_2$CO is found to decrease\nby an order of magnitude from the earlier HMPO phases to the latest\nevolutionary stage (UCHII), from $\sim$0.13 to $\sim$0.01. We have not detected\nHDCO and D$_2$CO emission from the youngest sources (high-mass starless cores,\nHMSCs). Our extended study supports the results of the previous pilot study:\nthe deuteration fraction of formaldehyde decreases with evolutionary stage, but\nhigher-sensitivity observations are needed to provide more stringent\nconstraints on the D/H ratio during the HMSC phase. The calculated upper limits\nfor the HMSC sources are high, so the trend between the HMSC and HMPO phases\ncannot be constrained.\n"} {"abstract": " In this paper we investigate multi-agent discrete-event systems with partial\nobservation. The agents can be divided into several groups in each of which the\nagents have similar (isomorphic) state transition structures, and thus can be\nrelabeled into the same template. Based on the template, a scalable supervisor\nwhose state size and computational cost are independent of the number of agents\nis designed for the case of partial observation. The scalable supervisor under\npartial observation does not need to be recomputed regardless of how many\nagents are added to or removed from the system. We generalize our earlier\nresults to partial observation by proposing sufficient conditions for safety\nand maximal permissiveness of the scalable least restrictive supervisor on the\ntemplate level. An example is provided to illustrate the proposed scalable\nsupervisory synthesis.\n"} {"abstract": " We study best approximations in Banach spaces via Birkhoff-James\northogonality of functionals.
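For orientation, recall the standard definition of Birkhoff-James orthogonality in a normed space (stated here for the reader's convenience, not quoted from the paper):

```latex
x \perp_{BJ} y
\quad \Longleftrightarrow \quad
\|x + \lambda y\| \ge \|x\| \ \text{ for all scalars } \lambda .
```

Its link to approximation is classical: $y_0 \in Y$ is a best approximation to $x$ out of a subspace $Y$ precisely when $x - y_0$ is Birkhoff-James orthogonal to every element of $Y$, which is why orthogonality techniques bear directly on best approximation problems.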
To exhibit the usefulness of Birkhoff-James\northogonality techniques in the study of best approximation problems, some\nalgorithms and distance formulae are presented. As an application of our study,\nwe obtain some crucial inequalities, which also strengthen the classical\nH\\\"{o}lder inequality. The relevance of the algorithms and the inequalities\nis discussed through concrete examples.\n"} {"abstract": " We present an alternative proof of the asymptotic freeness of independent sample\ncovariance matrices, when the dimension and the sample size grow at the same\nrate, by embedding these matrices into Wigner matrices of a larger order and\nusing the asymptotic freeness of independent Wigner and deterministic matrices.\n"} {"abstract": " Despite the structural resemblance of certain cuprate and nickelate parent\ncompounds, there is a striking spread of $T_c$ among such transition metal oxide\nsuperconductors. We adopt a minimal two-orbital $e_g$ model which covers\ncuprates and nickelate heterostructures in different parametric limits, and\nanalyse its superconducting instabilities. The joint consideration of\ninteractions, doping, Fermiology, and in particular the $e_g$ orbital splitting\nallows us to explain the strongly differing pairing propensities in cuprate and\nnickelate superconductors.\n"} {"abstract": " We use the plasma density based on measurements of the probe-to-spacecraft\npotential in combination with magnetic field measurements by MAG to study\nfields and density fluctuations in the solar wind observed by Solar Orbiter\nduring the first perihelion encounter ($\sim$0.5~AU away from the Sun). In\nparticular, we use the polarization of the wave magnetic field, the phase\nbetween the compressible magnetic field and density fluctuations and the\ncompressibility ratio (the ratio of the normalized density fluctuations to the\nnormalized compressible fluctuations of B) to characterize the observed waves\nand turbulence. We find that the density fluctuations are out of phase with the\ncompressible component of the magnetic fluctuations for intervals of turbulence,\nwhile they are in phase for the circularly polarized waves around the proton\ncyclotron frequency. We analyze in detail two specific events with the simultaneous\npresence of left- and right-handed waves at different frequencies. We compare\nthe observed wave properties to the predictions of a three-fluid (electrons, protons\nand alphas) model. We find a limit on the observed wavenumbers, $10^{-6} < k <\n7 \times 10^{-6}$~m$^{-1}$, which corresponds to wavelengths $7 \times 10^6\n>\lambda > 10^6$~m. We conclude that most likely both the left- and\nright-handed waves correspond to the low-wavenumber part (close to the cut-off\nat $\Omega_{c\mathrm{He}++}$) of the proton-band electromagnetic ion cyclotron\nbranch (left-handed waves in the plasma frame, confined to the frequency range\n$\Omega_{c\mathrm{He}++} < \omega < \Omega_{c\mathrm{H}+}$), propagating in the\noutward and inward directions, respectively. The fact that both wave\npolarizations are observed at the same time and the identified wave mode has a\nlow group velocity suggests that the double-banded events occur in the source\nregions of the waves.\n"} {"abstract": " Finitely many players gather information about an uncertain state before making\ndecisions.
Each player allocates his limited attention capacity between biased\nsources and the other players, and the resulting stochastic attention network\nfacilitates the transmission of information from primary sources to him either\ndirectly or indirectly through the other players. The scarcity of attention\nleads the player to focus on his own-biased source, resulting in occasional\ncross-cutting exposures but most of the time a reinforcement of his\npredisposition. It also limits his attention to like-minded friends who, by\nattending to the same primary source as his, serve as secondary sources in case\nthe information transmission from the primary source to him is disrupted. A\nmandate on impartial exposure to all biased sources disrupts echo chambers but\nentails ambiguous welfare consequences. Inside an echo chamber, even a small\namount of heterogeneity between players can generate fat-tailed distributions\nof public opinion, and factors affecting the visibility of sources and players\ncould have unintended consequences for public opinion and consumer welfare.\n"} {"abstract": " This report focuses on safety aspects of connected and automated vehicles\n(CAVs). The fundamental question to be answered is how CAVs can improve road\nusers' safety. Using advanced data mining and thematic text analytics tools,\nthe goal is to systematically synthesize studies related to Big Data for safety\nmonitoring and improvement. Within this domain, the report systematically\ncompares transportation-related Big Data initiatives nationally and\ninternationally and provides insights regarding the evolution of Big Data\nscience applications related to CAVs and new challenges. The objectives\naddressed are: 1) Creating a database of Big Data efforts by acquiring reports,\nwhite papers, and journal publications; 2) Applying text analytics tools to\nextract key concepts, and spot patterns and trends in Big Data initiatives;\n3) Understanding the evolution of CAV Big Data in the context of safety by\nquantifying granular taxonomies and modeling entity relations among contents in\nCAV Big Data research initiatives; and 4) Developing a foundation for exploring\nnew approaches to tracking and analyzing CAV Big Data and related innovations.\nThe study synthesizes and derives high-quality information from innovative\nresearch activities undertaken by various research entities through Big Data\ninitiatives. The results can provide a conceptual foundation for developing new\napproaches for guiding and tracking the safety implications of Big Data and\nrelated innovations.\n"} {"abstract": " The performance of recommender systems (RS) relies heavily on the amount of\ntraining data available. This poses a chicken-and-egg problem for early-stage\nproducts, whose amount of data, in turn, relies on the performance of their RS.\nOn the other hand, zero-shot learning promises some degree of generalization\nfrom an old dataset to an entirely new dataset. In this paper, we explore the\npossibility of zero-shot learning in RS. We develop an algorithm, dubbed\nZEro-Shot Recommenders (ZESRec), that is trained on an old dataset and\ngeneralizes to a new one where there are neither overlapping users nor\noverlapping items, a setting that contrasts with typical cross-domain RS, which\nhave either overlapping users or items.
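The zero-shot idea just described can be illustrated with a minimal sketch: items are indexed by embeddings of their text descriptions rather than by IDs, and a user is represented through the items they interacted with. Everything below (the toy encoder, shapes, and data) is invented for illustration and is not ZESRec's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(texts):
    # Stand-in for a pretrained text encoder; returns unit vectors so the
    # sketch runs without any external model.
    v = rng.normal(size=(len(texts), 8))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Items are indexed by natural-language descriptions, so items unseen
# during training still receive usable embeddings.
item_vecs = embed(["wireless noise-cancelling headphones",
                   "stainless steel chef's knife",
                   "trail running shoes"])

# A user is represented by the items they interacted with.
user_vec = item_vecs[[0, 2]].mean(axis=0)

scores = item_vecs @ user_vec      # dot-product relevance scores
print(np.argsort(-scores))         # ranking over all items, seen or unseen
```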
Different from the categorical item indices,\ni.e., item IDs, used in previous methods, ZESRec uses items' natural-language\ndescriptions (or description embeddings) as their continuous indices, and\ntherefore naturally generalizes to any unseen items. In terms of users, ZESRec\nbuilds upon recent advances in sequential RS to represent users using their\ninteractions with items, thereby generalizing to unseen users as well. We study\nthree pairs of real-world RS datasets and demonstrate that ZESRec can\nsuccessfully enable recommendations in such a zero-shot setting, opening up new\nopportunities for resolving the chicken-and-egg problem for data-scarce\nstartups or early-stage products.\n"} {"abstract": " We revisit gravitational particle production from the Stokes phenomenon\nviewpoint, which provides a systematic way to understand the asymptotic\nbehavior of mode functions in a time-dependent background. One purpose of this\nwork is to make the method more practical for the evaluation of\nnon-perturbative particle production rates. In particular, with several examples\nof time-dependent backgrounds, we introduce some approximation methods that\nmake the analysis more tractable. Specifically, we consider particle production\nin simple expanding backgrounds, preheating after $R^2$ inflation, and a\ntransition model with smoothly changing mass. As we find several technical\nissues in analyzing the Stokes phenomenon of each example, we discuss how to\nsimplify the problems while showing the accuracy of the analytic estimation under\nthe approximations we make.\n"} {"abstract": " We present new BVR-band photometric light curves of BO Aries obtained in 2020\nand combine them with the Transiting Exoplanet Survey Satellite (TESS) light\ncurves. We obtained times of minima based on Gaussian and Cauchy distributions\nand then applied the Markov chain Monte Carlo (MCMC) method to measure the\namount of uncertainty in our CCD photometry and TESS data. A new ephemeris of\nthe binary system was computed employing 204 times of minimum. The light curves\nwere analyzed using the Wilson-Devinney binary code combined with a Monte\nCarlo (MC) simulation. For this light curve solution, we considered a dark spot\non the primary component. We conclude that this binary is an A-type system with\na mass ratio of q=0.2074+-0.0001, an orbital inclination of i=82.18+-0.02 deg,\nand a fillout factor of f=75.7+-0.8%. Our results for the a(Rsun) and q\nparameters are consistent with the results of the Xu-Dong Zhang and Sheng-Bang\nQian (2020) model. The absolute parameters of the two components were\ncalculated and the distance of the binary system was estimated to be\n142+-9 pc.\n"} {"abstract": " For Dirichlet $L$-functions in $\mathbb{F}_q [T]$ we obtain a hybrid\nEuler-Hadamard product formula. We make a splitting conjecture, namely that the\n$2k$-th moment of the Dirichlet $L$-functions at $\frac{1}{2}$, averaged over\nprimitive characters of modulus $R$, is asymptotic (as $\mathrm{deg} R\n\longrightarrow \infty$) to the $2k$-th moment of the Euler product multiplied by\nthe $2k$-th moment of the Hadamard product. We explicitly obtain the main term\nof the $2k$-th moment of the Euler product, and we conjecture via random matrix\ntheory the main term of the $2k$-th moment of the Hadamard product. With the\nsplitting conjecture, this directly leads to a conjecture for the $2k$-th\nmoment of Dirichlet $L$-functions. Finally, we lend support to the splitting\nconjecture by proving the cases $k=1,2$.
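Schematically, the splitting conjecture just stated has the following shape, with $P_X$ the (truncated) Euler product, $Z_X$ the Hadamard product over zeros, and $\phi^*(R)$ the number of primitive characters modulo $R$; the precise normalizations are those of the paper, so this display is only a hedged paraphrase:

```latex
\frac{1}{\phi^*(R)} \sum^{*}_{\chi \bmod R} \bigl|L(\tfrac{1}{2},\chi)\bigr|^{2k}
\;\sim\;
\Biggl( \frac{1}{\phi^*(R)} \sum^{*}_{\chi \bmod R} \bigl|P_X(\tfrac{1}{2},\chi)\bigr|^{2k} \Biggr)
\Biggl( \frac{1}{\phi^*(R)} \sum^{*}_{\chi \bmod R} \bigl|Z_X(\tfrac{1}{2},\chi)\bigr|^{2k} \Biggr),
\qquad \mathrm{deg}\, R \to \infty .
```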
This work is the function field\nanalogue of the work of Bui and Keating. A notable difference in the function\nfield setting is that the Euler-Hadamard product formula is exact, in that\nthere is no error term.\n"} {"abstract": " Nonadiabatic holonomic quantum computation (NHQC) provides a method to\nimplement error-resilient gates and has attracted considerable attention\nrecently. Since it was proposed, three-level $\Lambda$ systems have become the\ntypical building block for NHQC, and a number of NHQC schemes have been\ndeveloped based on such systems. In this paper, we investigate the realization\nof NHQC beyond the standard three-level setting. The central idea of our\nproposal is to improve NHQC by enlarging the Hilbert space of the building\nblock system and letting it have a bipartite graph structure in order to ensure\npurely holonomic evolution. Our proposal not only improves conventional\nqubit-based NHQC by efficiently reducing its duration, but also provides\nimplementations of qudit-based NHQC. Therefore, our proposal provides a further\ndevelopment of NHQC that can contribute significantly to the physical\nrealization of efficient quantum information processors.\n"} {"abstract": " Bilateral trade, a fundamental topic in economics, models the problem of\nintermediating between two strategic agents, a seller and a buyer, willing to\ntrade a good for which they hold private valuations. Despite the simplicity of\nthis problem, a classical result by Myerson and Satterthwaite (1983) affirms\nthe impossibility of designing a mechanism which is simultaneously efficient,\nincentive compatible, individually rational, and budget balanced. This\nimpossibility result fostered an intense investigation of meaningful trade-offs\nbetween these desired properties. Much work has focused on approximately\nefficient fixed-price mechanisms, e.g., Blumrosen and Dobzinski (2014; 2016)\nand Colini-Baldeschi et al. (2016), which have been shown to fully characterize\nstrongly budget-balanced and ex-post individually rational direct revelation\nmechanisms. All these results, however, either assume some knowledge of the\npriors of the seller/buyer valuations, or black-box access to some samples of\nthe distributions, as in D{\\\"u}tting et al. (2021). In this paper, we cast for\nthe first time the bilateral trade problem in a regret minimization framework\nover rounds of seller/buyer interactions, with no prior knowledge of the\nprivate seller/buyer valuations. Our main contribution is a complete\ncharacterization of the regret regimes for fixed-price mechanisms with\ndifferent models of feedback and private valuations, using as benchmark the\nbest fixed price in hindsight. More precisely, we prove the following bounds on\nthe regret:\n $\bullet$ $\widetilde{\Theta}(\sqrt{T})$ for full feedback (i.e., direct\nrevelation mechanisms);\n $\bullet$ $\widetilde{\Theta}(T^{2/3})$ for realistic feedback (i.e.,\nposted-price mechanisms) and independent seller/buyer valuations with bounded\ndensities;\n $\bullet$ $\Theta(T)$ for realistic feedback and seller/buyer valuations with\nbounded densities;\n $\bullet$ $\Theta(T)$ for realistic feedback and independent seller/buyer\nvaluations;\n $\bullet$ $\Theta(T)$ for the adversarial setting.\n"} {"abstract": " Spontaneous synchronization is a general phenomenon in which a large\npopulation of coupled oscillators of diverse natural frequencies self-organizes\nto operate in unison.
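The paradigmatic minimal model of this effect, useful for fixing ideas (the collection below covers many richer settings), is the Kuramoto model of $N$ phase oscillators with natural frequencies $\omega_i$ and coupling $K$:

```latex
\dot{\theta}_i \;=\; \omega_i \;+\; \frac{K}{N} \sum_{j=1}^{N} \sin\!\left(\theta_j - \theta_i\right),
\qquad i = 1, \dots, N ,
```

with the order parameter $r e^{i\psi} = N^{-1}\sum_j e^{i\theta_j}$ measuring the degree of synchrony; for frequency distributions with finite spread, partial synchronization ($r>0$) sets in once $K$ exceeds a critical coupling.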
The phenomenon occurs in physical and biological systems\nover a wide range of spatial and temporal scales, e.g., in electrochemical and\nelectronic oscillators, Josephson junctions, laser arrays, animal flocking,\npedestrians on footbridges, audience clapping, etc. Besides the obvious\nnecessity of the synchronous firing of cardiac cells to keep the heart\nbeating, synchrony is desired in many man-made systems such as parallel\ncomputing and electrical power grids. Conversely, synchrony can also be\nhazardous, e.g., in neurons, leading to impaired brain function in Parkinson's\ndisease and epilepsy. Due to this wide range of applications, collective\nsynchrony in networks of oscillators has attracted the attention of physicists,\napplied mathematicians and researchers from many other fields. An essential\naspect of synchronizing systems is that long-range order naturally appears in\nthem, which raises the question of whether long-range interactions may be\nparticularly suited to synchronization. In this context, it is worth recalling\nthat long-range interacting systems required several adaptations of statistical\nmechanics \`a la Gibbs-Boltzmann in order to deal with the peculiarities of\nthese systems: negative specific heat, breaking of ergodicity or lack of\nextensivity. Synchrony, for its part, still lacks a theoretical framework\nemploying the tools of statistical mechanics. The present issue\npresents a collection of exciting recent theoretical developments in the field\nof synchronization and long-range interactions, in order to highlight the\nmutual progress of these twin areas.\n"} {"abstract": " We construct a toy model for the harmonic oscillator that is neither\nclassical nor quantum. The model features a discrete energy spectrum, a ground\nstate with sharp position and momentum, an eigenstate with a non-positive Wigner\nfunction, as well as a state that has tunneling properties. The underlying\nformalism exploits the fact that the Wigner-Weyl approach to quantum theory and the\nHamilton formalism in classical theory can be formulated in the same\noperational language, which we then use to construct generalized theories with\na well-defined phase space. The toy model demonstrates that operational theories\nare a viable alternative to operator-based approaches for building physical\ntheories.\n"} {"abstract": " Extending ideas of Lewin, Lieb and Seiringer (Phys Rev B, 100, 035127,\n(2019)), we present a modified \"floating crystal\" trial state for Jellium (also\nknown as the classical homogeneous electron gas) with density equal to a\ncharacteristic function. This allows us to show that three definitions of the\nJellium energy coincide in dimensions $d\geq 2$, thus extending the result of\nCotar and Petrache (arXiv: 1707.07664) and Lewin, Lieb and Seiringer (Phys Rev\nB, 100, 035127, (2019)) that the three definitions coincide in dimension $d\n\geq 3$. We show that the Jellium energy is also equivalent to a \"renormalized\nenergy\" studied in a series of papers by Serfaty and others, and thus, by work\nof B\'etermin and Sandier (Constr Approx, 47:39-74, (2018)), we relate the\nJellium energy to the order-$n$ term in the logarithmic energy of $n$ points on\nthe unit 2-sphere. We improve upon known lower bounds for this renormalized\nenergy. Additionally, we derive formulas for the Jellium energy of periodic\nconfigurations.\n"} {"abstract": " Automated planning enables robots to find plans to achieve complex,\nlong-horizon tasks, given a planning domain.
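For concreteness, a planning operator of the kind meant here (an action with associated preconditions and effects) might look as follows; this STRIPS-style sketch uses invented predicate names and is not taken from the paper's domain:

```python
# A single planning operator in STRIPS style: the action is applicable
# when all preconditions hold in the state, and applying it adds and
# deletes predicates.
pick_up = {
    "name": "pick_up(obj)",
    "preconditions": {"clear(obj)", "on_table(obj)", "hand_empty()"},
    "add_effects":   {"holding(obj)"},
    "del_effects":   {"on_table(obj)", "hand_empty()"},
}

state = {"clear(cube)", "on_table(cube)", "hand_empty()"}
ground = lambda preds: {p.replace("obj", "cube") for p in preds}

if ground(pick_up["preconditions"]) <= state:   # applicable in this state?
    state = (state - ground(pick_up["del_effects"])) | ground(pick_up["add_effects"])
print(state)   # {'clear(cube)', 'holding(cube)'}
```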
This planning domain consists of a\nlist of actions, with their associated preconditions and effects, and is\nusually manually defined by a human expert, which is very time-consuming or\neven infeasible. In this paper, we introduce a novel method for generating this\ndomain automatically from human demonstrations. First, we automatically segment\nand recognize the different observed actions from the human demonstrations. From\nthese demonstrations, the relevant preconditions and effects are obtained, and\nthe associated planning operators are generated. Finally, a sequence of actions\nthat satisfies a user-defined goal can be planned using a symbolic planner. The\ngenerated plan is executed in a simulated environment by the TIAGo robot. We\ntested our method on a dataset of 12 demonstrations collected from three\ndifferent participants. The results show that our method is able to generate\nexecutable plans from a single demonstration with a 92% success rate, and with\n100% success when the information from all demonstrations is included, even for\npreviously unknown stacking goals.\n"} {"abstract": " We investigate the upscaling of diffusive transport parameters as a function of\npore-scale material structure using a stochastic framework. We focus on the\nsub-REV (representative elementary volume) scale, where the complexity of\npore-space geometry leads to a significant scatter of transport observations.\nWe study a large data set of sub-REV measurements of porosity and transport\nability, the latter being a dimensionless parameter representing the ratio of\ndiffusive flow through the porous volume to that through an empty volume. We\ncharacterize transport ability by probability distribution functions (PDFs)\nconditioned on porosity, capturing the effect of pore-structure differences\namong samples. We then investigate domain-size effects and predict the REV\nscale. While the scatter in porosity observations decreases linearly with\nincreasing sample size, the observed scatter in transport ability converges\ntowards a constant value larger than zero. Our results confirm that differences\nin pore-structure topology impact transport parameters at all scales.\nConsequently, the use of PDFs to describe the relationship of effective\ntransport coefficients to porosity is advantageous over deterministic\nsemi-empirical functions. We discuss the consequences and advocate the use of\nPDFs for effective parameters in both continuum equations and the data\ninterpretation of experimental or computational work. We believe that the\npresented statistics-based upscaling technique for sub-REV microscopy data\nprovides a new tool for understanding, describing and predicting the macroscopic\ntransport behavior of micro-porous media.\n"} {"abstract": " The linear noise approximation models the random fluctuations from the mean\nfield model of a chemical reaction that unfolds near the thermodynamic limit.\nSpecifically, the fluctuations obey a linear Langevin equation up to order\n$\Omega^{-1/2}$, where $\Omega$ is the size of the chemical system (usually the\nvolume). In the presence of disparate timescales, the linear noise\napproximation admits a quasi-steady-state reduction referred to as the\n\textit{slow scale} linear noise approximation. However, the slow scale linear\nnoise approximation has only been derived for fast/slow systems that are in\nTikhonov standard form. In this work, we derive, for the first time, the slow\nscale linear noise approximation directly from Fenichel theory, without the\nneed for a priori scaling and dimensional analysis.
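In generic notation (not necessarily the paper's), the LNA writes the state as the mean-field solution plus scaled fluctuations and evolves the fluctuations linearly:

```latex
\mathbf{x}(t) \;=\; \boldsymbol{\phi}(t) + \Omega^{-1/2}\,\boldsymbol{\xi}(t),
\qquad
\mathrm{d}\boldsymbol{\xi} \;=\; J\!\left(\boldsymbol{\phi}\right)\boldsymbol{\xi}\,\mathrm{d}t
\;+\; B\!\left(\boldsymbol{\phi}\right)\mathrm{d}\mathbf{W}_t ,
```

where $J$ is the Jacobian of the macroscopic rate equations and $B B^{\mathsf{T}}$ is the diffusion matrix of the underlying jump process; the slow-scale reduction discussed above eliminates the fast components of $\boldsymbol{\xi}$ on the slow manifold.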
In so doing, we also apply, for the\nfirst time, the slow scale linear noise approximation to fast/slow systems that\nare not in standard form. Furthermore, we demonstrate how a priori scaling\nanalysis can in some cases distort the structure of the mean field singular\nperturbation, and how this distortion can lead to an incomplete reduction of the\nlinear Langevin equation. In its entirety, our paper not only presents new\nmathematical results in mathematical biology and physical chemistry, it also\nunifies several important research results in deterministic and stochastic\nchemical kinetics.\n"} {"abstract": " Multilingual transformers (XLM, mT5) have been shown to have remarkable\ntransfer skills in zero-shot settings. Most transfer studies, however, rely on\nautomatically translated resources (XNLI, XQuAD), making it hard to discern the\nparticular linguistic knowledge that is being transferred, and the role of\nexpert-annotated monolingual datasets when developing task-specific models. We\ninvestigate the cross-lingual transfer abilities of XLM-R for Chinese and\nEnglish natural language inference (NLI), with a focus on the recent\nlarge-scale Chinese dataset OCNLI. To better understand linguistic transfer, we\ncreated 4 categories of challenge and adversarial tasks (totaling 17 new\ndatasets) for Chinese that build on several well-known resources for English\n(e.g., HANS, NLI stress-tests). We find that cross-lingual models trained on\nEnglish NLI do transfer well across our Chinese tasks (e.g., in 3/4 of our\nchallenge categories, they perform as well as or better than the best monolingual\nmodels, even on 3/5 uniquely Chinese linguistic phenomena such as idioms and\npro-drop). These results, however, come with important caveats: cross-lingual\nmodels often perform best when trained on a mixture of English and high-quality\nmonolingual NLI data (OCNLI), and are often hindered by automatically\ntranslated resources (XNLI-zh). For many phenomena, all models continue to\nstruggle, highlighting the need for our new diagnostics to help benchmark\nChinese and cross-lingual models. All new datasets and code are released at\nhttps://github.com/huhailinguist/ChineseNLIProbing.\n"} {"abstract": " For lower-arm amputees, robotic prosthetic hands offer the promise of regaining\nthe capability to perform fine object manipulation in activities of daily\nliving. Current control methods based on physiological signals such as EEG and\nEMG are prone to poor inference outcomes due to motion artifacts, variability\nof skin-electrode junction impedance over time, muscle fatigue, and other\nfactors. Visual evidence is also susceptible to its own artifacts, most often\ndue to object occlusion, lighting changes, variable shapes of objects depending\non view angle, among other factors. Multimodal evidence fusion using\nphysiological and vision sensor measurements is a natural approach due to the\ncomplementary strengths of these modalities.\n In this paper, we present a Bayesian evidence fusion framework for grasp\nintent inference using eye-view video, gaze, and EMG from the forearm, processed\nby neural network models. We analyze individual and fused performance as a\nfunction of time as the hand approaches the object to grasp it. For this\npurpose, we have also developed novel data processing and augmentation\ntechniques to train the neural network components.
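A minimal sketch of this kind of Bayesian evidence fusion, assuming conditionally independent modalities and a uniform prior (all numbers are invented; this is not the paper's network or data):

```python
import numpy as np

# Per-modality class posteriors over 3 candidate grasp types, as produced
# by separate neural networks (values made up for illustration).
p_emg    = np.array([0.5, 0.3, 0.2])
p_vision = np.array([0.6, 0.1, 0.3])
prior    = np.array([1/3, 1/3, 1/3])

# Naive-Bayes fusion in log space: posteriors are combined assuming the
# modalities are conditionally independent given the grasp class.
log_post = np.log(p_emg) + np.log(p_vision) - np.log(prior)
post = np.exp(log_post - log_post.max())   # stabilize, then normalize
post /= post.sum()

print(post.round(3), "-> fused grasp estimate:", int(post.argmax()))
```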
Our experimental data analyses\ndemonstrate that EMG and visual evidence show complementary strengths, and as a\nconsequence, fusion of multimodal evidence can outperform each individual\nevidence modality at any given time. Specifically, the results indicate that, on\naverage, fusion improves the instantaneous upcoming grasp type classification\naccuracy while in the reaching phase by 13.66% and 14.8%, relative to EMG and\nvisual evidence individually. An overall fusion accuracy of 95.3% among 13\nlabels (compared to a chance level of 7.7%) is achieved, and a more detailed\nanalysis indicates that the correct grasp is inferred sufficiently early and\nwith high confidence compared to the top contender, in order to allow\nsuccessful robot actuation to close the loop.\n"} {"abstract": " The grade of clear cell renal cell carcinoma (ccRCC) is a critical prognostic\nfactor, making ccRCC nuclei grading a crucial task in RCC pathology analysis.\nComputer-aided nuclei grading aims to improve pathologists' work efficiency\nwhile reducing their misdiagnosis rate by automatically identifying the grades\nof tumor nuclei within histopathological images. Such a task requires precise\nsegmentation and accurate classification of the nuclei. However, most of the\nexisting nuclei segmentation and classification methods cannot handle the\ninter-class similarity property of nuclei grading and thus cannot be directly\napplied to the ccRCC grading task. In this paper, we propose a Composite\nHigh-Resolution Network for ccRCC nuclei grading. Specifically, we propose a\nsegmentation network called W-Net that can separate clustered nuclei. Then, we\nrecast the fine-grained classification of nuclei as two cross-category\nclassification tasks, based on two high-resolution feature extractors (HRFEs)\nwhich are proposed for learning these two tasks. The two HRFEs share the same\nbackbone encoder with W-Net via a composite connection so that meaningful\nfeatures for the segmentation task can be inherited for the classification\ntask. Lastly, a head-fusion block is applied to generate the predicted label of\neach nucleus. Furthermore, we introduce a dataset for ccRCC nuclei grading,\ncontaining 1000 image patches with 70945 annotated nuclei. We demonstrate that\nour proposed method achieves state-of-the-art performance compared to existing\nmethods on this large ccRCC grading dataset.\n"} {"abstract": " Heavy-ion collisions provide a unique opportunity to study the nature of\nX(3872) compared with electron-positron and proton-proton (antiproton)\ncollisions. With the abundant charm pairs produced in heavy-ion collisions, the\nproduction of multicharm hadrons and molecules can be enhanced by the\ncombination of charm and anticharm quarks in the medium. We investigate the\ncentrality and momentum dependence of X(3872) production in heavy-ion collisions\nvia the Langevin equation and an instant coalescence model (LICM). When X(3872)\nis treated as a compact tetraquark state, the tetraquarks are produced via the\ncoalescence of heavy and light quarks near the quantum chromodynamic (QCD)\nphase transition due to the restoration of the heavy quark potential at\n$T\rightarrow T_c$.
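Schematically, instant-coalescence yields in such models take the form of a phase-space overlap (degeneracy and normalization factors omitted; this generic form is stated for orientation, not quoted from the paper):

```latex
N \;\propto\; \int \mathrm{d}^3 r_1\, \mathrm{d}^3 p_1\, \mathrm{d}^3 r_2\, \mathrm{d}^3 p_2\;
f_1(\mathbf{r}_1, \mathbf{p}_1)\, f_2(\mathbf{r}_2, \mathbf{p}_2)\,
W(\Delta\mathbf{r}, \Delta\mathbf{p}) ,
```

where $f_{1,2}$ are the constituent phase-space distributions (here supplied by the Langevin evolution) and $W$ is the Wigner function of the formed state, whose width encodes its size: a compact tetraquark and a loosely bound molecule therefore weight the same relative coordinates $\Delta\mathbf{r}, \Delta\mathbf{p}$ very differently.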
In\nthe molecular scenario, loosely bound X(3872) is produced via the coalescence\nof $D^0$-$\bar D^{*0}$ mesons in a hadronic medium after kinetic freeze-out.\nThe phase-space distributions of the charm quarks and D mesons in the bulk medium\nare studied with the Langevin equation, while the coalescence probability\nbetween constituent particles is controlled by the Wigner function, which\nencodes the internal structure of the formed particle.\n First, we employ the LICM to explain both $D^0$ and $J/\psi$ production as a\nbenchmark. Then, we give predictions regarding X(3872) production. We find that\nthe total yield of tetraquarks is several times larger than the molecular\nproduction in Pb-Pb collisions. Although the geometric size of the molecule is\nhuge, the coalescence probability is small due to strict constraints on the\nrelative momentum between $D^0$ and $\bar D^{*0}$ in the molecular Wigner\nfunction, which significantly suppresses the molecular yield.\n"} {"abstract": " With a model of a geometric theory in an arbitrary topos, we associate a site\nobtained by endowing a category of generalized elements of the model with a\nGrothendieck topology, which we call the antecedent topology. Then we show that\nthe associated sheaf topos, which we call the over-topos at the given model,\nadmits a canonical totally connected morphism to the given base topos and\nsatisfies a universal property generalizing that of the colocalization of a\ntopos at a point. We first treat the case of the base topos of sets, where\nglobal elements are sufficient to describe our site of definition; in this\ncontext, we also introduce a geometric theory classified by the over-topos,\nwhose models can be identified with the model homomorphisms towards the\n(internalizations of the) model. Then we formulate and prove the general\nstatement over an arbitrary topos, which involves the stack of generalized\nelements of the model. Lastly, we investigate the geometric and 2-categorical\naspects of the over-topos construction, exhibiting it as a bilimit in the\nbicategory of Grothendieck toposes.\n"} {"abstract": " Let F be a non-Archimedean locally compact field of residue characteristic p,\nlet G be an inner form of GL(n,F) with n>0, and let l be a prime number\ndifferent from p. We describe the block decomposition of the category of finite\nlength smooth representations of G with coefficients in an algebraically closed\nfield of characteristic l. Unlike the case of complex representations of an\narbitrary p-adic reductive group and that of l-modular representations of\nGL(n,F), several non-isomorphic supercuspidal supports may correspond to the\nsame block. We describe the (finitely many) supercuspidal supports\ncorresponding to a given block. We also prove that a supercuspidal block is\nequivalent to the principal block (that is, the one which contains the trivial\ncharacter) of the multiplicative group of a suitable division algebra,\nand we determine those irreducible representations having a nontrivial\nextension with a given supercuspidal representation of G.\n"} {"abstract": " Unmanned aerial vehicles (UAVs) are expected to be an integral part of\nwireless networks, and determining collision-free trajectories for multiple\nUAVs while satisfying requirements of connectivity with ground base stations\n(GBSs) is a challenging task.
In this paper, we first reformulate the multi-UAV\ntrajectory optimization problem with collision avoidance and wireless\nconnectivity constraints as a sequential decision making problem in the\ndiscrete time domain. We then propose a decentralized deep reinforcement\nlearning approach to solve the problem. More specifically, a value network is\ndeveloped to encode the expected time to destination given the agent's joint\nstate (including the agent's information, the nearby agents' observable\ninformation, and the locations of the nearby GBSs). A\nsignal-to-interference-plus-noise ratio (SINR)-prediction neural network is\nalso designed, using accumulated SINR measurements obtained when interacting\nwith the cellular network, to map the GBSs' locations into the SINR levels in\norder to predict the UAV's SINR. Numerical results show that with the value\nnetwork and SINR-prediction network, real-time navigation for multiple UAVs can\nbe efficiently performed in various environments with a high success rate.\n"} {"abstract": " We report on the construction of a compact vacuum chamber for spectroscopy of\nultracold thulium and the trapping of up to 13 million atoms. Compactness is\nachieved by obviating a classical Zeeman slower section and placing an atomic\noven close to a magneto-optical trap (MOT), specifically at a distance of 11\ncm. In this configuration, we significantly increased the solid angle of the\natomic beam that is addressed by the MOT laser beams, and reached 1 million\natoms loaded directly into the MOT. By exploiting Zeeman-like deceleration of\natoms with an additional laser beam, we increased the number of trapped atoms\nto 6 million. Then we gained an extra factor of 2 by tailoring the MOT magnetic\nfield gradient with an additional small magnetic coil. The demonstrated results\nshow the great promise of the developed setup for realizing a compact\nhigh-performance optical atomic clock based on thulium atoms.\n"} {"abstract": " Among the challenges in discriminating between theoretical approaches to the\nglass transition is obtaining suitable data. In particular, particle-resolved\ndata in liquids supercooled past the mode-coupling crossover have until\nrecently been hard to obtain. Here we combine nanoparticle-resolved\nexperiments and GPU simulation data, which address this, and investigate the\npredictions of differing theoretical approaches. We find support for both\ndynamic facilitation and thermodynamic approaches. In particular, excitations,\nthe elementary units of relaxation in dynamic facilitation theory, follow the\npredicted scaling behaviour, and the properties of cooperatively rearranging\nregions (CRRs) are consistent with RFOT theory. At weak supercooling there is\nlimited correlation between particles identified in excitations and CRRs, but\nthis increases very substantially at deep supercooling. We identify a timescale\nrelated to the CRRs which is coupled to the structural relaxation time and thus\ndecoupled from the excitation timescale, which remains microscopic.\n"} {"abstract": " At first glance, the theory of computation relies on potential infinity and\nan organization aimed at solving a problem. In this respect it is like\nMendeleev's theory of chemistry.
Its theoretical development also reiterates that\nof this scientific theory: it makes use of doubly negated propositions, and its\nreasoning proceeds through ad absurdum proofs; a final, universal predicate of\nequivalence of all definitions of a computation is translated into an equality\npredicate, and at the same time intuitionist logic is translated into classical\nlogic. Yet the last step of this development of the current theory includes both\na misleading notion of a thesis and intuitive notions (e.g., the partial\ncomputable function, as stressed by some scholars). A program for a rational\nreconstruction of the theory, following the theoretical development of the\nabove-mentioned theories, is sketched.\n"} {"abstract": " Gaussian graphical models (GGM) are widely used to estimate network\nstructures in many applications ranging from biology to finance. In practice,\ndata is often corrupted by latent confounders which bias inference of the\nunderlying true graphical structure. In this paper, we compare and contrast two\nstrategies for inference in graphical models with latent confounders: Gaussian\ngraphical models with latent variables (LVGGM) and PCA-based removal of\nconfounding (PCA+GGM). While these two approaches have similar goals, they are\nmotivated by different assumptions about confounding. In this paper, we explore\nthe connection between these two approaches and propose a new method, which\ncombines the strengths of these two approaches. We prove the consistency and\nconvergence rate for the PCA-based method and use these results to provide\nguidance about when to use each method. We demonstrate the effectiveness of our\nmethodology using both simulations and two real-world applications.\n"} {"abstract": " We consider the model of a data broker selling information to a single\nagent to maximize his revenue. The agent has a private valuation for the\nadditional information, and upon receiving the signal from the data broker, the\nagent can conduct her own experiment to refine her posterior belief on the\nstates with additional costs. In this paper, we show that in the optimal\nmechanism, the agent has no incentive to acquire any additional costly\ninformation in equilibrium. Still, the ability to acquire additional\ninformation distorts the incentives of the agent, and reduces the optimal\nrevenue of the data broker. Moreover, we characterize the optimal mechanism\nwhen the valuation function of the agent is separable. The optimal mechanism in\ngeneral may be complex and contain a continuum of menu entries. However, we\nshow that posting a deterministic price for revealing the states is optimal\nwhen the prior distribution is sufficiently informative or the cost of\nacquiring additional information is sufficiently high, and obtains at least\nhalf of the optimal revenue for arbitrary prior and cost functions.\n"} {"abstract": " Why does the vaccination rate remain low, even in countries where\nlong-established immunization programs exist and vaccines are provided for\nfree? We study this low-vaccination paradox in the context of India, which\ncontributes the largest pool of under-vaccinated children in the world and\nabout one-third of all vaccine-preventable deaths globally.
We explore the\nimportance of historical events in shaping current vaccination practices.\nCombining historical records with survey datasets, we examine the Indian\ngovernment's forced sterilization policy implemented in 1976-77 and find that\ngreater exposure to forced sterilization has had a large negative effect on the\ncurrent vaccination completion rate. We explore the mechanism behind this\neffect and find that institutional delivery and antenatal care are low in\nstates where policy exposure was high. Finally, we examine the consequences of\nlower vaccination, suggesting that child mortality is currently high in states\nwith greater sterilization exposure. Together, the evidence suggests that\ngovernment policies implemented in the past can have persistent adverse impacts\non the demand for health-seeking behavior, even when the resulting burden is\nexceedingly high.\n"} {"abstract": " Accurate and robust prediction of a patient's response to drug treatments is\ncritical for developing precision medicine. However, it is often difficult to\nobtain a sufficient amount of coherent drug response data from patients\ndirectly for training a generalized machine learning model. Although the\nutilization of rich cell line data provides an alternative solution, it is\nchallenging to transfer the knowledge obtained from cell lines to patients due\nto various confounding factors. Few existing transfer learning methods can\nreliably disentangle common intrinsic biological signals from confounding\nfactors in the cell line and patient data. In this paper, we develop a Coherent\nDeconfounding Autoencoder (CODE-AE) that can extract both common biological\nsignals shared by incoherent samples and private representations unique to each\ndata set, transfer knowledge learned from cell line data to tissue data, and\nseparate confounding factors from them. Extensive studies on multiple data sets\ndemonstrate that CODE-AE significantly improves the accuracy and robustness\nover state-of-the-art methods in both predicting patient drug response and\nde-confounding biological signals. Thus, CODE-AE provides a useful framework to\ntake advantage of in vitro omics data for developing generalized patient\npredictive models. The source code is available at\nhttps://github.com/XieResearchGroup/CODE-AE.\n"} {"abstract": " In [M. Zych et al., Nat. Commun. 2, 505 (2011)], the authors predicted that\nthe interferometric visibility is affected by a gravitational field in a way that\ncannot be explained without the general relativistic notion of proper time. In\nthis work, we take a different route and start by deriving the same effect using\nthe unitary representation of the local Lorentz transformation in the Newtonian\nlimit. In addition, we show that the effect of gravity on the interferometric\nvisibility persists in different spacetime geometries. However, the\ninfluence is not necessarily due to the notion of proper time. For instance, by\nconstructing an `astronomical' Mach-Zehnder interferometer in the Schwarzschild\nspacetime, the influence on the interferometric visibility can be due to\nanother general relativistic effect, the geodetic precession.
Besides, by using\nthe unitary representation of the local Lorentz transformation, we show that\nthis behavior of the interferometric visibility is general for an arbitrary\nspacetime, provided that we restrict the motion of the quanton to a\ntwo-dimensional spatial plane.\n"} {"abstract": " We define a universal state sum construction which specializes to most\npreviously known state sums (Turaev-Viro, Dijkgraaf-Witten, Crane-Yetter,\nDouglas-Reutter, Witten-Reshetikhin-Turaev surgery formula, Brown-Arf). The\ninput data for the state sum is an n-category satisfying various conditions,\nincluding finiteness, semisimplicity and n-pivotality. From this n-category one\nconstructs an n+1-dimensional TQFT, and applying the TQFT gluing rules to a\nhandle decomposition of an n+1-manifold produces the state sum.\n"} {"abstract": " Transition metal dichalcogenide heterobilayers offer attractive opportunities\nto realize lattices of interacting bosons with several degrees of freedom. Such\nheterobilayers can feature moir\'e patterns that modulate their electronic band\nstructure, leading to spatial confinement of single interlayer excitons (IXs)\nthat act as quantum emitters with $C_3$ symmetry. However, the narrow emission\nlinewidths of the quantum emitters contrast with the broad ensemble IX emission\nobserved in nominally identical heterobilayers, opening a debate regarding the\norigin of IX emission. Here we report the continuous evolution from a few\ntrapped IXs to an ensemble of IXs with both triplet and singlet spin\nconfigurations in a gate-tunable $2H$-MoSe$_2$/WSe$_2$ heterobilayer. We\nobserve signatures of dipolar interactions in the IX ensemble regime which,\nwhen combined with magneto-optical spectroscopy, reveal that the narrow\nquantum-dot-like and broad ensemble emission originate from IXs trapped in\nmoir\'e potentials with the same atomic registry. Finally, electron doping\nleads to the formation of three different species of localised negative trions\nwith contrasting spin-valley configurations, among which we observe both\nintervalley and intravalley IX trions with spin-triplet optical transitions.\nOur results identify the origin of IX emission in MoSe$_2$/WSe$_2$\nheterobilayers and highlight the important role of exciton-exciton interactions\nand Fermi-level control in these highly tunable quantum materials.\n"} {"abstract": " Multifractal detrended fluctuation analysis (MFDFA) has become a central\nmethod to characterise the variability and uncertainty in empirical time series.\nExtracting the fluctuations on different temporal scales allows quantifying the\nstrength and correlations in the underlying stochastic properties, their\nscaling behaviour, as well as the level of fractality. Several extensions of\nthe fundamental method have been developed over the years, vastly enhancing the\napplicability of MFDFA, e.g., empirical mode decomposition for the study of\nlong-range correlations and persistence. In this article we introduce an\nefficient, easy-to-use python library for MFDFA, incorporating the most common\nextensions and making the most of multi-threaded processing for very fast\ncalculations.\n"} {"abstract": " The Imry-Ma phenomenon, predicted in 1975 by Imry and Ma and rigorously\nestablished in 1989 by Aizenman and Wehr, states that first-order phase\ntransitions of low-dimensional spin systems are `rounded' by the addition of a\nquenched random field to the quantity undergoing the transition.
The phenomenon\napplies to a wide class of spin systems in dimensions $d\le 2$ and to spin\nsystems possessing a continuous symmetry in dimensions $d\le 4$.\n This work provides quantitative estimates for the Imry-Ma phenomenon: In a\ncubic domain of side length $L$, we study the effect of the boundary conditions\non the spatial and thermal average of the quantity coupled to the random field.\nWe show that the boundary effect diminishes at least as fast as an inverse\npower of $\log\log L$ for general two-dimensional spin systems and for\nfour-dimensional spin systems with continuous symmetry, and at least as fast as\nan inverse power of $L$ for two- and three-dimensional spin systems with\ncontinuous symmetry. Specific models of interest for the obtained results\ninclude the two-dimensional random-field $q$-state Potts and Edwards-Anderson\nspin glass models, and the $d$-dimensional random-field spin $O(n)$ models\n($n\ge 2$) in dimensions $d\le 4$.\n"} {"abstract": " We study the problem of fairly allocating indivisible items to agents with\ndifferent entitlements, which captures, for example, the distribution of\nministries among political parties in a coalition government. Our focus is on\npicking sequences derived from common apportionment methods, including five\ntraditional divisor methods and the quota method. We paint a complete picture\nof these methods in relation to known envy-freeness and proportionality\nrelaxations for indivisible items, as well as monotonicity properties with\nrespect to the resource, population, and weights. In addition, we provide\ncharacterizations of the picking sequences satisfying each of the fairness\nnotions, and show that the well-studied maximum Nash welfare solution fails\nresource- and population-monotonicity even in the unweighted setting. Our\nresults serve as an argument in favor of using picking sequences in weighted\nfair division problems.\n"} {"abstract": " In a surprising recent work, Lemke Oliver and Soundararajan noticed how\nexperimental data exhibit erratic distributions for consecutive pairs of\nprimes in arithmetic progressions, and proposed a heuristic model based on the\nHardy--Littlewood conjectures containing a large secondary term, which fits the\ndata very well. In this paper, we study consecutive pairs of sums of squares in\narithmetic progressions, and develop a similar heuristic model based on the\nHardy--Littlewood conjecture for sums of squares, which also explains the biases\nin the experimental data. In the process, we prove several results related to\naverages of the Hardy--Littlewood constant in the context of sums of two\nsquares.\n"} {"abstract": " We investigate the problem of jointly testing multiple hypotheses and\nestimating a random parameter of the underlying distribution in a sequential\nsetup. The aim is to jointly infer the true hypothesis and the true parameter\nwhile using on average as few samples as possible and keeping the detection and\nestimation errors below predefined levels. Based on mild assumptions on the\nunderlying model, we propose an asymptotically optimal procedure, i.e., a\nprocedure that becomes optimal when the tolerated detection and estimation\nerror levels tend to zero. The implementation of the resulting asymptotically\noptimal stopping rule is computationally cheap and, hence, applicable to\nhigh-dimensional data.
We further propose a projected quasi-Newton method to\noptimally choose the coefficients that parameterize the instantaneous cost\nfunction such that the constraints are fulfilled with equality. The proposed\ntheory is validated by numerical examples.\n"} {"abstract": " Background: Most of the stars in the Universe will end their evolution by\nlosing their envelope during the thermally pulsing asymptotic giant branch\n(TP-AGB) phase, enriching the interstellar medium of galaxies with heavy\nelements, partially condensed into dust grains formed in their extended\ncircumstellar envelopes. Among these stars, carbon-rich TP-AGB stars (C-stars)\nare particularly relevant for the chemical enrichment of galaxies. Here we\ninvestigate the role of metallicity in the dust formation process from a\ntheoretical viewpoint. Methods: We coupled an up-to-date description of dust\ngrowth and dust-driven wind, which included the time-averaged effect of shocks,\nwith FRUITY stellar evolutionary tracks. We compared our predictions with\nobservations of C-stars in our Galaxy, in the Magellanic Clouds (LMC and SMC)\nand in the Galactic Halo, characterised by metallicities between solar and 1/10\nof solar. Results: Our models explain the variation of the gas and dust\ncontent around C-stars derived from the IRS Spitzer spectra. The wind speed of\nthe C-stars at varying metallicity is well reproduced by our description. We\npredict the wind speed at metallicities down to 1/10 of solar over a wide range\nof mass-loss rates.\n"} {"abstract": " Lower-limb prosthesis wearers are more prone to falls than non-amputees.\nPowered prostheses can reduce the instability associated with passive\nprostheses. While shown to be more stable in practice, powered prostheses\ngenerally use model-independent control methods that lack formal guarantees of\nstability and rely on heuristic tuning. Recent work overcame one of the\nlimitations of model-based prosthesis control by developing a class of stable\nprosthesis subsystem controllers that are independent of the human model, except\nfor its interaction forces with the prosthesis. Our work realizes the first\nmodel-dependent prosthesis controller that uses in-the-loop, on-board, real-time\nforce sensing at the interface between the human and the prosthesis and at the\nground, resulting in stable human-prosthesis walking and increasing the\nvalidity of our formal guarantees of stability. Experimental results\ndemonstrate that the controller using force sensors outperforms the controller\nwithout force sensors, with better and more consistent tracking performance\nacross 4 types of terrain.\n"} {"abstract": " In this paper, we apply a Bayesian perspective to the sampling of\nalternatives for multinomial logit (MNL) and mixed multinomial logit (MMNL)\nmodels. A sampling of alternatives reduces the computational challenge of\nevaluating the denominator of the logit choice probability for large choice\nsets by using only a smaller subset of sampled alternatives including the\nchosen alternative. To correct for the resulting overestimation of the choice\nprobability, a correction factor has to be applied. McFadden (1978) proposes a\ncorrection factor to the utility of each alternative, which is based on the\nprobability of sampling the smaller subset of alternatives given that the\nalternative is chosen. McFadden's correction factor ensures consistency of\nparameter estimates under a wide range of sampling protocols.
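For intuition, the correction enters the sampled likelihood as an additive $\ln q$ term on each sampled utility. The sketch below is purely illustrative (uniform sampling, invented data) and is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrected_sampled_logprob(v, chosen, sample_size):
    """Log choice probability of `chosen` computed on a uniform random
    subset of the full choice set, with a McFadden-style ln(q) correction
    added to each sampled utility. Under uniform conditioning q is the
    same for all alternatives, so the correction cancels in the softmax;
    it is kept explicit to show where a non-uniform protocol would enter."""
    J = len(v)
    others = [j for j in range(J) if j != chosen]
    sampled = [chosen] + list(rng.choice(others, sample_size - 1, replace=False))
    ln_q = np.log(np.full(sample_size, sample_size / J))  # uniform protocol
    u = v[np.array(sampled)] + ln_q
    return u[0] - np.log(np.exp(u).sum())  # corrected conditional logit

v = rng.normal(size=1000)  # made-up systematic utilities for 1000 alternatives
print(corrected_sampled_logprob(v, chosen=3, sample_size=50))
```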
A special sampling protocol\ndiscussed by McFadden is uniform conditioning, which assigns the same sampling\nprobability, and therefore the same correction factor, to each alternative in\nthe sampled choice set. Since the same constant is added to each alternative,\nthe correction factor cancels out, but consistent estimates are still obtained.\nBayesian estimation is focused on describing the full posterior distributions\nof the parameters of interest instead of the consistency of their point\nestimates. We theoretically show that uniform conditioning is sufficient to\nminimise the loss of information from a sampling of alternatives on the\nparameters of interest over the full posterior distribution in Bayesian MNL\nmodels. Minimum loss of information is, however, not guaranteed for other\nsampling protocols. This result extends to Bayesian MMNL models estimated using\nthe principle of data augmentation. The application of uniform conditioning, a\nmore restrictive sampling protocol, is thus sufficient in a Bayesian estimation\ncontext to achieve finite sample properties of MNL and MMNL parameter\nestimates.\n"} {"abstract": " A decision maker is choosing between an active action (e.g., purchasing a\nhouse, or investing in a certain stock) and a passive action. The payoff of the\nactive action depends on the buyer's private type and also on an unknown state\nof nature. An information seller can design experiments to reveal information\nabout the realized state to the decision maker, and would like to maximize\nprofit from selling such information. We characterize, in closed form, the\nrevenue-optimal information selling mechanism for the seller. After eliciting\nthe buyer's type, the optimal mechanism charges the buyer an upfront payment\nand then simply reveals whether the realized state exceeds a certain threshold\nor not. The optimal mechanism features both price discrimination and\ninformation discrimination. The special buyer type who is a priori indifferent\nbetween the active and passive actions benefits the most from participating in\nthe mechanism.\n"} {"abstract": " Two-particle angular correlations are measured in high-multiplicity\nproton-proton collisions at $\sqrt{s} =13$ TeV by the ALICE Collaboration. The\nyields of particle pairs at short range ($\Delta\eta \sim 0$) and long range ($1.6\n< |\Delta\eta| < 1.8$) in pseudorapidity are extracted on the near side\n($\Delta\varphi \sim 0$). They are reported as a function of transverse\nmomentum ($p_{\mathrm T}$) in the range $1