text |
---|
Reinforcement Learning",3,"['Excited to release our new multi-agent RL paper showing that when agents receive a social reward for having causal influence over other agents, it leads to enhanced cooperation and emergent communication <LINK>', '@daniel_bilar @hardmaru Hey, could you point me to the paper you mean? From what I remember of reading Social Physics, it mainly looked at idea flow in human social networks, not building this into agents. Very curious about this!', ""@daniel_bilar @hardmaru Oh awesome, yes, I think Sandy's work has made a lot of interesting contributions to understanding human social influence.""]",18,10,588 |
65,86,1273893100574789633,1120626288807428096,Jonathan Scarlett,"New group testing paper (RANDOM 2020) putting the ""non-"" in ""non-adaptive binary splitting"", with O(k log n) tests and decoding time: <LINK> (1/2) <LINK> See also <LINK> for concurrent work by @cheraghchi & V. Nakos introducing the same idea, and applying it in additional contexts including heavy hitters & compressed sensing (2/2)",https://arxiv.org/abs/2006.10268,"In this paper, we consider the problem of noiseless non-adaptive group testing under the for-each recovery guarantee, also known as probabilistic group testing. In the case of $n$ items and $k$ defectives, we provide an algorithm attaining high-probability recovery with $O(k \log n)$ scaling in both the number of tests and runtime, improving on the best known $O(k^2 \log k \cdot \log n)$ runtime previously available for any algorithm that only uses $O(k \log n)$ tests. Our algorithm bears resemblance to Hwang's adaptive generalized binary splitting algorithm (Hwang, 1972); we recursively work with groups of items of geometrically vanishing sizes, while maintaining a list of ""possibly defective"" groups and circumventing the need for adaptivity. While the most basic form of our algorithm requires $\Omega(n)$ storage, we also provide a low-storage variant based on hashing, with similar recovery guarantees. ",A Fast Binary Splitting Approach to Non-Adaptive Group Testing,2,"['New group testing paper (RANDOM 2020) putting the ""non-"" in ""non-adaptive binary splitting"", with O(k log n) tests and decoding time: <LINK> (1/2) <LINK>', 'See also https://t.co/KS8PtiLrkK for concurrent work by @cheraghchi & V. Nakos introducing the same idea, and applying it in additional contexts including heavy hitters & compressed sensing (2/2)']",20,06,332 |
66,56,1254853966942240768,384104802,Matthew Petroff,"I have a new paper out, ""Full-sky Cosmic Microwave Background Foreground Cleaning Using Machine Learning""! It uses a neural network trained on simulations to produce a foreground-cleaned CMB temperature map from separate frequency maps. <LINK> 1/4 <LINK> Crucially, it also produces an error estimate. This is important for scientific applications but is less common in many areas of machine learning. 2/4 <LINK> It can also be used to help improve foreground simulations by comparing the residual on different simulations with the residual on observations when compared to an external foreground cleaning method as well as by comparing the error estimate. 3/4 It also works up to around \ell = 900. Thanks to my co-authors, @AddisonGraeme, Chuck, and Janet for their insights on foregrounds and simulations and for helping to guide this research and turn it into a polished manuscript. 4/4 <LINK>",https://arxiv.org/abs/2004.11507,"In order to extract cosmological information from observations of the millimeter and submillimeter sky, foreground components must first be removed to produce an estimate of the cosmic microwave background (CMB). We developed a machine-learning approach for doing so for full-sky temperature maps of the millimeter and submillimeter sky. We constructed a Bayesian spherical convolutional neural network architecture to produce a model that captures both spectral and morphological aspects of the foregrounds. Additionally, the model outputs a per-pixel error estimate that incorporates both statistical and model uncertainties. The model was then trained using simulations that incorporated knowledge of these foreground components that was available at the time of the launch of the Planck satellite. On simulated maps, the CMB is recovered with a mean absolute difference of $<4\mu$K over the full sky after masking map pixels with a predicted standard error of $>50\mu$K; the angular power spectrum is also accurately recovered. Once validated with the simulations, this model was applied to Planck temperature observations from its 70GHz through 857GHz channels to produce a foreground-cleaned CMB map at a Healpix map resolution of NSIDE=512. Furthermore, we demonstrate the utility of the technique for evaluating how well different simulations match observations, particularly in regard to the modeling of thermal dust. ","Full-sky Cosmic Microwave Background Foreground Cleaning Using Machine Learning",4,"['I have a new paper out, ""Full-sky Cosmic Microwave Background Foreground Cleaning Using Machine Learning""! It uses a neural network trained on simulations to produce a foreground-cleaned CMB temperature map from separate frequency maps. <LINK> 1/4 <LINK>', 'Crucially, it also produces an error estimate. This is important for scientific applications but is less common in many areas of machine learning. 2/4 https://t.co/gc2yDQwOUj', 'It can also be used to help improve foreground simulations by comparing the residual on different simulations with the residual on observations when compared to an external foreground cleaning method as well as by comparing the error estimate. 3/4', 'It also works up to around \\ell = 900. Thanks to my co-authors, @AddisonGraeme, Chuck, and Janet for their insights on foregrounds and simulations and for helping to guide this research and turn it into a polished manuscript. 4/4 https://t.co/9hbuTyBd4k']",20,04,897 |
67,89,1359558443443568642,1359421250431516683,Dr. Eva Laplace,"1/6 New paper thread! <LINK> Summary: Not only the surface properties but also the pre-supernova core structures of massive single and binary-stripped stars are systematically different, even when considering the same core mass! <LINK> 2/6 We show the pre-supernova core composition with new diagrams that bring into focus three distinct regions I) a He-rich layer II) an O/Ne-rich layer III) an iron-rich core. Binary-stripped stars contain a gradient of C/O/Ne around II that is not present in single stars. <LINK> 3/6 We find that due to this layer, massive binary-stripped stars contain systematically higher masses of carbon at the end of their lives than single stars with the same helium core mass. This is very exciting because the nucleosynthesis from these stars may be different! 4/6 This layer originates from a distinct behavior during core helium burning. The convective He-burning cores of single stars grow in mass while they recede for binaries due to wind mass loss, leaving behind a C/O/Ne layer. <LINK> 5/6 We find that binary-stripped stars have systematically different density structures from single stars. These are tied to how differently they burn. These differences in the core structures also have implications for how explodable these stars are! 6/6 For more of these cool diagrams, stay tuned for my next paper on TULIPS: the Tool for Understanding the Lives, Interiors, and Physics of Stars 🌷",https://arxiv.org/abs/2102.05036,"The majority of massive stars live in binary or multiple systems and will interact during their lifetimes, which helps to explain the observed diversity of core-collapse supernovae. Donor stars in binary systems can lose most of their hydrogen-rich envelopes through mass transfer, which not only affects the surface properties, but also the core structure. However, most calculations of the core-collapse properties of massive stars rely on single-star models. We present a systematic study of the difference between the pre-supernova structures of single stars and stars of the same initial mass (11 - 21\Msun) that have been stripped due to stable post-main sequence mass transfer at solar metallicity. We present the pre-supernova core composition with novel diagrams that give an intuitive representation of the isotope distribution. As shown in previous studies, at the edge of the carbon-oxygen core, the binary-stripped star models contain an extended gradient of carbon, oxygen, and neon. This layer originates from the receding of the convective helium core during core helium burning in binary-stripped stars, which does not occur in single-star models. We find that this same evolutionary phase leads to systematic differences in the final density and nuclear energy generation profiles. Binary-stripped star models have systematically higher total masses of carbon at the moment of core collapse compared to single star models, which likely results in systematically different supernova yields. In about half of our models, the silicon-burning and oxygen-rich layers merge after core silicon burning. We discuss the implications of our findings for the explodability, supernova observations, and nucleosynthesis from these stars. Our models will be publicly available and can be readily used as input for supernova simulations. [Abridged] ","Different to the core: the pre-supernova structures of massive single and binary-stripped stars",6,"['1/6 New paper thread! <LINK> \n\nSummary: Not only the surface properties but also the pre-supernova core structures of massive single and binary-stripped stars are systematically different, even when considering the same core mass! <LINK>', '2/6 We show the pre-supernova core composition with new diagrams that bring into focus three distinct regions I) a He-rich layer II) an O/Ne-rich layer III) an iron-rich core. Binary-stripped stars contain a gradient of C/O/Ne around II that is not present in single stars. https://t.co/1wIEJqfTun', '3/6 We find that due to this layer, massive binary-stripped stars contain systematically higher masses of carbon at the end of their lives than single stars with the same helium core mass. This is very exciting because the nucleosynthesis from these stars may be different!', '4/6 This layer originates from a distinct behavior during core helium burning. The convective He-burning cores of single stars grow in mass while they recede for binaries due to wind mass loss, leaving behind a C/O/Ne layer. https://t.co/VaGDPy0XIo', '5/6 We find that binary-stripped stars have systematically different density structures from single stars. These are tied to how differently they burn. These differences in the core structures also have implications for how explodable these stars are!', '6/6 For more of these cool diagrams, stay tuned for my next paper on TULIPS: the Tool for Understanding the Lives, Interiors, and Physics of Stars 🌷']",21,02,1424 |
68,10,1455351686109622272,803882049484398592,Takuya Yoshioka,"Check our new paper on VarArray, a new ""array-geometry-agnostic"" speech separation model. Unlike conventional practices, we did end-to-end evaluation with different meeting corpora, public (AMI) and private, showing real WER impacts. Paper: <LINK> <LINK> Also, in another paper from our summer intern Zhuohuang Zhang, we applied all-neural beamforming (with modifications) to address the overlapped speech problem in the real meeting transcription task. Paper: <LINK> <LINK>",https://arxiv.org/abs/2110.05745,"Continuous speech separation using a microphone array was shown to be promising in dealing with the speech overlap problem in natural conversation transcription. This paper proposes VarArray, an array-geometry-agnostic speech separation neural network model. The proposed model is applicable to any number of microphones without retraining while leveraging the nonlinear correlation between the input channels. The proposed method adapts different elements that were proposed before separately, including transform-average-concatenate, conformer speech separation, and inter-channel phase differences, and combines them in an efficient and cohesive way. Large-scale evaluation was performed with two real meeting transcription tasks by using a fully developed transcription system requiring no prior knowledge such as reference segmentations, which allowed us to measure the impact that the continuous speech separation system could have in realistic settings. The proposed model outperformed a previous approach to array-geometry-agnostic modeling for all of the geometry configurations considered, achieving asclite-based speaker-agnostic word error rates of 17.5% and 20.4% for the AMI development and evaluation sets, respectively, in the end-to-end setting using no ground-truth segmentations. ",VarArray: Array-Geometry-Agnostic Continuous Speech Separation,2,"['Check our new paper on VarArray, a new ""array-geometry-agnostic"" speech separation model. Unlike conventional practices, we did end-to-end evaluation with different meeting corpora, public (AMI) and private, showing real WER impacts. \n\nPaper: <LINK> <LINK>', 'Also, in another paper from our summer intern Zhuohuang Zhang, we applied all-neural beamforming (with modifications) to address the overlapped speech problem in the real meeting transcription task. \n\nPaper: https://t.co/CTzN2iKSvT https://t.co/lIGV526Msl']",21,10,476 |
69,35,890388521709588481,2210861,Jacob Andreas,"New paper: discovering analogs of logical structure in RNN representations <LINK>. Or, Davidson in Vector Space @Smerity Oof sorry that was rude. abs indeed > PDF. At least the link was sufficiently interact that you can tell. @Smerity other embarrassments: I just noticed that someone named ""Israel Ramat Gan"" is listed as @omerlevy_ 's coauthor in the bib",https://arxiv.org/abs/1707.08139,"We investigate the compositional structure of message vectors computed by a deep network trained on a communication game. By comparing truth-conditional representations of encoder-produced message vectors to human-produced referring expressions, we are able to identify aligned (vector, utterance) pairs with the same meaning. We then search for structured relationships among these aligned pairs to discover simple vector space transformations corresponding to negation, conjunction, and disjunction. Our results suggest that neural representations are capable of spontaneously developing a ""syntax"" with functional analogues to qualitative properties of natural language. ",Analogs of Linguistic Structure in Deep Representations,4,"['New paper: discovering analogs of logical structure in RNN representations <LINK>.', 'Or, Davidson in Vector Space', '@Smerity Oof sorry that was rude. abs indeed > PDF. At least the link was sufficiently interact that you can tell.', '@Smerity other embarrassments: I just noticed that someone named ""Israel Ramat Gan"" is listed as @omerlevy_ \'s coauthor in the bib']",17,07,360 |
70,20,1432602596154433542,2322575761,Prof Roberto Trotta,"If you were to bet, what odds would you get: - against the accelerated expansion of the universe? (A: 1100:1) - against the existence of a preferred direction in the expansion? (A: 900:1) Our new constraints from supernovae type Ia in today's paper: <LINK> Huge congrats to lead author and @ImperialAstro PhD student Wahid Rahman, and thanks to our peculiar velocity collaborators and experts @SuprantaSB & Mike Hudson! @pietro_berkes Consider that the highest level (""decisive"") in the Jeffreys' scale of evidence is ln(BF)=5, or odds of ~ 150:1. Evidence accumulates linearly in the posterior/prior width for the simpler model (as opposed to exponentially against a it), so 1000:1 is pretty strong IMHO. @pietro_berkes Also, this paper uses a model that's an evolution of the Bayesian hierarchical model we wrote down together when I visited you in Boston, sometimes in the Late Middle Age 🤣 @pietro_berkes One problem?! I got half a dozen! Expect an email soon 😃 @pietro_berkes #DreamTeam 🤓",https://arxiv.org/abs/2108.12497,"We re-examine the contentious question of constraints on anisotropic expansion from Type Ia supernovae (SNIa) in the light of a novel determination of peculiar velocities, which are crucial to test isotropy with supernovae out to distances $\lesssim 200/h$ Mpc. We re-analyze the Joint Light-Curve Analysis (JLA) Supernovae (SNe) data, improving on previous treatments of peculiar velocity corrections and their uncertainties (both statistical and systematic) by adopting state-of-the-art flow models constrained independently via the 2M$++$ galaxy redshift compilation. We also introduce a novel procedure to account for colour-based selection effects, and adjust the redshift of low-$z$ SNe self-consistently in the light of our improved peculiar velocity model. We adopt the Bayesian hierarchical model \texttt{BAHAMAS} to constrain a dipole in the distance modulus in the context of the $\Lambda$CDM model and the deceleration parameter in a phenomenological Cosmographic expansion. We do not find any evidence for anisotropic expansion, and place a tight upper bound on the amplitude of a dipole, $|D_\mu| < 5.93 \times 10^{-4}$ (95\% credible interval) in a $\Lambda$CDM setting, and $|D_{q_0}| < 6.29 \times 10^{-2}$ in the Cosmographic expansion approach. Using Bayesian model comparison, we obtain posterior odds in excess of 900:1 (640:1) against a constant-in-redshift dipole for $\Lambda$CDM (the Cosmographic expansion). In the isotropic case, an accelerating universe is favoured with odds of $\sim 1100:1$ with respect to a decelerating one. ",New Constraints on Anisotropic Expansion from Supernovae Type Ia,6,"[""If you were to bet, what odds would you get: \n- against the accelerated expansion of the universe? (A: 1100:1) \n- against the existence of a preferred direction in the expansion? (A: 900:1) \nOur new constraints from supernovae type Ia in today's paper: \n<LINK>"", 'Huge congrats to lead author and @ImperialAstro PhD student Wahid Rahman, and thanks to our peculiar velocity collaborators and experts @SuprantaSB & Mike Hudson!', '@pietro_berkes Consider that the highest level (""decisive"") in the Jeffreys\' scale of evidence is ln(BF)=5, or odds of ~ 150:1. Evidence accumulates linearly in the posterior/prior width for the simpler model (as opposed to exponentially against a it), so 1000:1 is pretty strong IMHO.', ""@pietro_berkes Also, this paper uses a model that's an evolution of the Bayesian hierarchical model we wrote down together when I visited you in Boston, sometimes in the Late Middle Age 🤣"", '@pietro_berkes One problem?! I got half a dozen! Expect an email soon 😃', '@pietro_berkes #DreamTeam 🤓']",21,08,993 |
71,272,1327425773611876353,1096659448657932288,Yuki Kawana,"My first paper in the Ph.D. course has been accepted to NeurIPS 2020. We propose a highly expressive 3D representation for primitive decomposition task, which simultaneously has differentiable explicit and implicit shape representations. <LINK> <LINK>",https://arxiv.org/abs/2010.11248,"Reconstructing 3D objects from 2D images is a fundamental task in computer vision. Accurate structured reconstruction by parsimonious and semantic primitive representation further broadens its application. When reconstructing a target shape with multiple primitives, it is preferable that one can instantly access the union of basic properties of the shape such as collective volume and surface, treating the primitives as if they are one single shape. This becomes possible by primitive representation with unified implicit and explicit representations. However, primitive representations in current approaches do not satisfy all of the above requirements at the same time. To solve this problem, we propose a novel primitive representation named neural star domain (NSD) that learns primitive shapes in the star domain. We show that NSD is a universal approximator of the star domain and is not only parsimonious and semantic but also an implicit and explicit shape representation. We demonstrate that our approach outperforms existing methods in image reconstruction tasks, semantic capabilities, and speed and quality of sampling high-resolution meshes. ",Neural Star Domain as Primitive Representation,1,"['My first paper in the Ph.D. course has been accepted to NeurIPS 2020. We propose a highly expressive 3D representation for primitive decomposition task, which simultaneously has differentiable explicit and implicit shape representations. <LINK> <LINK>']",20,10,251 |
72,247,1271086751700877313,175184725,Brendan O'Donoghue,"""Stochastic matrix games with bandit feedback"", with Tor Lattimore and @IanOsband, wherein we generalize stochastic multi-armed bandits to two-player zero-sum matrix games, I was surprised to find out that Thompson sampling provably fails in this context: <LINK>",https://arxiv.org/abs/2006.05145,"We study a version of the classical zero-sum matrix game with unknown payoff matrix and bandit feedback, where the players only observe each others actions and a noisy payoff. This generalizes the usual matrix game, where the payoff matrix is known to the players. Despite numerous applications, this problem has received relatively little attention. Although adversarial bandit algorithms achieve low regret, they do not exploit the matrix structure and perform poorly relative to the new algorithms. The main contributions are regret analyses of variants of UCB and K-learning that hold for any opponent, e.g., even when the opponent adversarially plays the best-response to the learner's mixed strategy. Along the way, we show that Thompson fails catastrophically in this setting and provide empirical comparison to existing algorithms. ",Matrix games with bandit feedback,1,"['""Stochastic matrix games with bandit feedback"", with Tor Lattimore and @IanOsband, wherein we generalize stochastic multi-armed bandits to two-player zero-sum matrix games, I was surprised to find out that Thompson sampling provably fails in this context: <LINK>']",20,06,262 |
73,122,1292854888968474625,373525906,Weijie Su,"New interpretation of the *double descent* phenomenon: noise in features is ubiquitous, and we show using a random feature model that noise can lead to benign overfitting. Paper: <LINK>. w/ Zhu Li and Dino Sejdinovic. <LINK> @2prime_PKU Thanks for the reference! Will look into it. @roydanroy @jeffNegrea @KDziugaite Thanks for the reference. Very related",https://arxiv.org/abs/2008.02901,"Modern machine learning often operates in the regime where the number of parameters is much higher than the number of data points, with zero training loss and yet good generalization, thereby contradicting the classical bias-variance trade-off. This \textit{benign overfitting} phenomenon has recently been characterized using so called \textit{double descent} curves where the risk undergoes another descent (in addition to the classical U-shaped learning curve when the number of parameters is small) as we increase the number of parameters beyond a certain threshold. In this paper, we examine the conditions under which \textit{Benign Overfitting} occurs in the random feature (RF) models, i.e. in a two-layer neural network with fixed first layer weights. We adopt a new view of random feature and show that \textit{benign overfitting} arises due to the noise which resides in such features (the noise may already be present in the data and propagate to the features or it may be added by the user to the features directly) and plays an important implicit regularization role in the phenomenon. ",Benign Overfitting and Noisy Features,3,"['New interpretation of the *double descent* phenomenon: noise in features is ubiquitous, and we show using a random feature model that noise can lead to benign overfitting. Paper: <LINK>. w/ Zhu Li and Dino Sejdinovic. <LINK>', '@2prime_PKU Thanks for the reference! Will look into it.', '@roydanroy @jeffNegrea @KDziugaite Thanks for the reference. Very related']",20,08,355 |
74,12,1333730907900010499,909073399770738694,Giulia De Rosi,"Our new paper is online: <LINK> Exotic liquids recently emerged from ultracold atomic gases. Their temperature is nK and they are 100 million times less dense than water. Their existence is a pure quantum effect. Their thermal effects were unknown until our work <LINK> We have predicted two thermal mechanisms driving the liquid-gas transition: the dynamical instability and the evaporation. We have provided the phase diagram suggesting the realization of the liquid by cooling the gas. We have computed the thermodynamic quantities of the liquid. We have proposed novel and precise methods to measure the temperature in quantum liquids: 1) the strong dependence of the critical temperature on the interactions which can be finely tuned, 2) the thermodynamic quantities of the liquids suggest future in-situ measurements.",https://arxiv.org/abs/2011.14353,"We study the low-temperature thermodynamics of weakly-interacting uniform liquids in one-dimensional attractive Bose-Bose mixtures.~The Bogoliubov approach is used to simultaneously describe quantum and thermal fluctuations. First, we investigate in detail two different thermal mechanisms driving the liquid-to-gas transition, the dynamical instability and the evaporation, and we draw the phase diagram. Then, we compute the main thermodynamic quantities of the liquid, such as the chemical potential, the Tan's contact, the adiabatic sound velocity and the specific heat at constant volume. The strong dependence of the thermodynamic quantities on the temperature may be used as a precise temperature probe for experiments on quantum liquids. ","Thermal instability, evaporation and thermodynamics of one-dimensional liquids in weakly-interacting Bose-Bose mixtures",3,"['Our new paper is online: <LINK>\n\nExotic liquids recently emerged from ultracold atomic gases. Their temperature is nK and they are 100 million times less dense than water. Their existence is a pure quantum effect. Their thermal effects were unknown until our work <LINK>', 'We have predicted two thermal mechanisms driving the liquid-gas transition: the dynamical instability and the evaporation. We have provided the phase diagram suggesting the realization of the liquid by cooling the gas. We have computed the thermodynamic quantities of the liquid.', 'We have proposed novel and precise methods to measure the temperature in quantum liquids: \n\n1) the strong dependence of the critical temperature on the interactions which can be finely tuned, \n\n2) the thermodynamic quantities of the liquids suggest future in-situ measurements.']",20,11,825 |
75,183,1519239388458369024,894875488094760960,Andrea Dittadi,"Excited to share our new study on object-centric learning! <LINK> In this work led by @oneapra (w @OleWinther1) we look for architectural inductive biases that may help scale unsupervised object-centric representation learning to more complex images. 1/ <LINK> We add complex textures to standard multi-object datasets, and train state-of-the-art models. The added textures highlight opposite failure modes: (1) In some cases, segmentation is mostly color-based and practically ignores objects (see input image & reconstructed slots). 2/ <LINK> (2) In other cases, the objects are segmented relatively well but the texture details are largely disregarded. 3/ <LINK> We modify the “failure mode 1” models to sacrifice reconstruction accuracy in favor of smoother segmentation, but do not obtain substantial improvements in object separation or representation usefulness. These models tend to have a bias towards color-based segmentation. 4/ <LINK> Similarly, we modify “failure mode 2” models to improve reconstructions. Interestingly, we observe that hyper-segmentation still does not occur – these models can still separate objects correctly, even as their reconstruction quality improves. 5/ <LINK> Main conclusion: methods that use a single module to reconstruct both shape (alpha-blending masks) and appearance of each object tend to learn more useful representations and separate objects more easily. Check out the paper for further interesting experiments and results! 6/6",http://arxiv.org/abs/2204.08479,"Understanding which inductive biases could be helpful for the unsupervised learning of object-centric representations of natural scenes is challenging. We use neural style transfer to generate datasets where objects have complex textures while still retaining ground-truth annotations. We find that methods that use a single module to reconstruct both the shape and visual appearance of each object learn more useful representations and achieve better object separation. In addition, we observe that adjusting the latent space size is not sufficient to improve segmentation performance. Finally, the downstream usefulness of the representations is significantly more strongly correlated with segmentation quality than with reconstruction accuracy. ","Inductive Biases for Object-Centric Representations in the Presence of Complex Textures",6,"['Excited to share our new study on object-centric learning! <LINK>\n\nIn this work led by @oneapra (w @OleWinther1) we look for architectural inductive biases that may help scale unsupervised object-centric representation learning to more complex images.\n\n1/ <LINK>', 'We add complex textures to standard multi-object datasets, and train state-of-the-art models.\n\nThe added textures highlight opposite failure modes:\n\n(1) In some cases, segmentation is mostly color-based and practically ignores objects (see input image & reconstructed slots).\n\n2/ https://t.co/kKUyp47lDz', '(2) In other cases, the objects are segmented relatively well but the texture details are largely disregarded.\n\n3/ https://t.co/kMUa8xi3TI', 'We modify the “failure mode 1” models to sacrifice reconstruction accuracy in favor of smoother segmentation, but do not obtain substantial improvements in object separation or representation usefulness. These models tend to have a bias towards color-based segmentation.\n\n4/ https://t.co/ilZieeKKU2', 'Similarly, we modify “failure mode 2” models to improve reconstructions. Interestingly, we observe that hyper-segmentation still does not occur – these models can still separate objects correctly, even as their reconstruction quality improves.\n\n5/ https://t.co/EupAB1220t', 'Main conclusion: methods that use a single module to reconstruct both shape (alpha-blending masks) and appearance of each object tend to learn more useful representations and separate objects more easily.\n\nCheck out the paper for further interesting experiments and results!\n\n6/6']",22,04,1478 |
76,87,1514862957238407176,2856538378,Karel Van Acoleyen,"New paper out on the arxiv, with Daan Maertens and Nick Bultinck. Hawking radiation as quench dynamics from hopping Hamiltonians that interface modes with opposite chirality. (GR is beautiful, but you do not need it to understand the Hawking effect.) <LINK>",https://arxiv.org/abs/2204.06583,"We construct two free fermion lattice models exhibiting Hawking pair creation. Specifically, we consider the simplest case of a d=1+1 massless Dirac fermion, for which the Hawking effect can be understood in terms of a quench of the uniform vacuum state with a non-uniform Hamiltonian that interfaces modes with opposite chirality. For both our models we find that additional modes arising from the lattice discretization play a crucial role, as they provide the bulk reservoir for the Hawking radiation: the Hawking pairs emerge from fermions deep inside the Fermi sea scattering off the effective black hole horizon. Our first model combines local hopping dynamics with a translation over one lattice site, and we find the resulting Floquet dynamics to realize a causal horizon, with fermions scattering from the region outside the horizon. For our second model, which relies on a purely local hopping Hamiltonian, we find the fermions to scatter from the inside. In both cases, for Hawking temperatures up to the inverse lattice spacing we numerically find the resulting Hawking spectrum to be in perfect agreement with the Fermi-Dirac quantum field theory prediction. ",Hawking radiation on the lattice as universal (Floquet) quench dynamics,1,"['New paper out on the arxiv, with Daan Maertens and Nick Bultinck. Hawking radiation as quench dynamics from hopping Hamiltonians that interface modes with opposite chirality. (GR is beautiful, but you do not need it to understand the Hawking effect.)\n\n<LINK>']",22,04,257 |
77,59,1149549986092163072,972555245179064320,Jordy de Vries,"Very happy with a new paper today, but it is a rather technical one <LINK> . Anomalous dimensions play a big role in renormalization-group equations that determine how coupling constants depend on the energy scale where they are probed. We used a method, developed for the QCD beta function, to get higher-loop anomalous dimensions of Standard Model EFT operators. As an example, we calculated 2 and 3 loop anomalous dimension of the so- called CP-odd Weinberg operator, where only one-loop results were known. The calculation seemed impossible to me (10^4 highly nontrivial diagrams) but the algorithm developed by my collaborators did the trick. The method can be extended to high-loop renormalization of a much larger class of effective operators. We found that 2- and 3-loop results are rather big, and that perturbation theory can fail already for not-that-small values of the QCD coupling. Also the 3-loop result has a piece that also appears in the 4-loop beta function. Pretty interesting, but we don’t know why.... We started this work some years ago when we were all at Nikhef in Amsterdam. Now 3 of us are elsewhere in the US, Scotland, and Switzerland. So it’s extra nice the calculation was completed.",https://arxiv.org/abs/1907.04923,"We apply a fully automated extension of the $R^*$-operation capable of calculating higher-loop anomalous dimensions of n-point Green's functions of arbitrary, possibly non-renormalisable, local Quantum Field Theories. We focus on the case of the CP-violating Weinberg operator of the Standard Model Effective Field Theory whose anomalous dimension is so far known only at one loop. We calculate the two-loop anomalous dimension in full QCD and the three-loop anomalous dimensions in the limit of pure Yang-Mills theory. We find sizeable two-loop and large three-loop corrections, due to the appearance of a new quartic group invariant. We discuss phenomenological implications for electric dipole moments and future applications of the method. ","Two- and three-loop anomalous dimensions of Weinberg's dimension-six CP-odd gluonic operator",5,"['Very happy with a new paper today, but it is a rather technical one <LINK> . Anomalous dimensions play a big role in renormalization-group equations that determine how coupling constants depend on the energy scale where they are probed.', 'We used a method, developed for the QCD beta function, to get higher-loop anomalous dimensions of Standard Model EFT operators. As an example, we calculated 2 and 3 loop anomalous dimension of the so- called CP-odd Weinberg operator, where only one-loop results were known.', 'The calculation seemed impossible to me (10^4 highly nontrivial diagrams) but the algorithm developed by my collaborators did the trick. The method can be extended to high-loop renormalization of a much larger class of effective operators.', 'We found that 2- and 3-loop results are rather big, and that perturbation theory can fail already for not-that-small values of the QCD coupling. Also the 3-loop result has a piece that also appears in the 4-loop beta function. Pretty interesting, but we don’t know why....', 'We started this work some years ago when we were all at Nikhef in Amsterdam. Now 3 of us are elsewhere in the US, Scotland, and Switzerland. So it’s extra nice the calculation was completed.']",19,07,1214 |
78,82,1439872546745327617,27522184,Jessica May Hislop,"Check out our new paper on the challenge of simulating the star cluster population of dwarf galaxies in simulations! The key points are summarised in the thread below 👇<LINK> 1/7 Simulations of galaxy formation use subgrid models for star formation. But as we go to higher resolution simulations and start to resolve individual stars, we need to make sure we still agree with observations 2/7 So in this study we looked at varying the star formation efficiency (SFE), that is, how much of the star forming gas is actually turned into stars. Turns out varying the SFE makes a huge difference to the distribution of stars 3/7 <LINK> Varying the SFE doesn’t make any major differences to the star formation rate or the outflow rate but it does make a big difference to the star clusters 4/7 Low SFE produces many tightly bound clusters with a high cluster formation efficiency (CFE) and high SFE produces fewer loosely bound clusters with a much lower CFE. Confusingly, no SFE reproduced what is observed in the Universe perfectly 5/7 <LINK> This motivates us to now investigate alternatives to this particular subgrid model because it seems it doesn’t really work that well as we go to higher resolution simulations 6/7 Tl;dr: We ran simulations of isolated galaxies and found that varying the star formation efficiency makes very little difference to the global properties but makes a huge difference to the star cluster populations produced 7/7",https://arxiv.org/abs/2109.08160,"We present results on the star cluster properties from a series of high resolution smoothed particles hydrodynamics (SPH) simulations of isolated dwarf galaxies as part of the GRIFFIN project. The simulations at sub-parsec spatial resolution and a minimum particle mass of 4 $\mathrm{M_\odot}$ incorporate non-equilibrium heating, cooling and chemistry processes, and realise individual massive stars. All the simulations follow feedback channels of massive stars that include the interstellar-radiation field, that is variable in space and time, the radiation input by photo-ionisation and supernova explosions. Varying the star formation efficiency per free-fall time in the range $\epsilon_\mathrm{ff}$ = 0.2 - 50$\%$ neither changes the star formation rates nor the outflow rates. While the environmental densities at star formation change significantly with $\epsilon_\mathrm{ff}$, the ambient densities of supernovae are independent of $\epsilon_\mathrm{ff}$ indicating a decoupling of the two processes. At low $\epsilon_\mathrm{ff}$, more massive, and increasingly more bound star clusters are formed, which are typically not destroyed. With increasing $\epsilon_\mathrm{ff}$ there is a trend for shallower cluster mass functions and the cluster formation efficiency $\Gamma$ for young bound clusters decreases from $50 \%$ to $\sim 1 \%$ showing evidence for cluster disruption. However, none of our simulations form low mass ($< 10^3$ $\mathrm{M_\odot}$) clusters with structural properties in perfect agreement with observations. Traditional star formation models used in galaxy formation simulations based on local free-fall times might therefore not be able to capture low mass star cluster properties without significant fine-tuning. ","The challenge of simulating the star cluster population of dwarf galaxies with resolved interstellar medium",7,"['Check out our new paper on the challenge of simulating the star cluster population of dwarf galaxies in simulations! The key points are summarised in the thread below 👇<LINK> 1/7', 'Simulations of galaxy formation use subgrid models for star formation. But as we go to higher resolution simulations and start to resolve individual stars, we need to make sure we still agree with observations 2/7', 'So in this study we looked at varying the star formation efficiency (SFE), that is, how much of the star forming gas is actually turned into stars. Turns out varying the SFE makes a huge difference to the distribution of stars 3/7 https://t.co/7FlGXaXtE4', 'Varying the SFE doesn’t make any major differences to the star formation rate or the outflow rate but it does make a big difference to the star clusters 4/7', 'Low SFE produces many tightly bound clusters with a high cluster formation efficiency (CFE) and high SFE produces fewer loosely bound clusters with a much lower CFE. Confusingly, no SFE reproduced what is observed in the Universe perfectly 5/7 https://t.co/pG4Z1lJXzD', 'This motivates us to now investigate alternatives to this particular subgrid model because it seems it doesn’t really work that well as we go to higher resolution simulations 6/7', 'Tl;dr: We ran simulations of isolated galaxies and found that varying the star formation efficiency makes very little difference to the global properties but makes a huge difference to the star cluster populations produced 7/7']",21,09,1444 |
79,175,1364127006435213314,786855300322172928,Alkistis Pourtsidou,"Paper alert! In <LINK>, led by @CunningtonSD and @CatAstro_Phy, we present a simulations and modelling study of the HI intensity mapping bispectrum, including nasty observational effects from foregrounds with polarisation leakage + a beam with sidelobes. <LINK> And here's how the covariance looks including these effects <LINK>",https://arxiv.org/abs/2102.11153,"The bispectrum is a 3-point statistic with the potential to provide additional information beyond power spectra analyses of survey datasets. Radio telescopes which broadly survey the 21cm emission from neutral hydrogen (HI) are a promising way to probe LSS and in this work we present an investigation into the HI intensity mapping (IM) bispectrum using simulations. We present a model of the redshift space HI IM bispectrum including observational effects from the radio telescope beam and 21cm foreground contamination. We validate our modelling prescriptions with measurements from robust IM simulations, inclusive of these observational effects. Our foreground simulations include polarisation leakage, on which we use a Principal Component Analysis cleaning method. We also investigate the effects from a non-Gaussian beam including side-lobes. For a MeerKAT-like single-dish IM survey at $z=0.39$, we find that foreground removal causes a 8% reduction in the equilateral bispectrum's signal-to-noise ratio $S/N$, whereas the beam reduces it by 62%. We find our models perform well, generally providing $\chi^2_\text{dof}\sim 1$, indicating a good fit to the data. Whilst our focus is on post-reionisation, single-dish IM, our modelling of observational effects, especially foreground removal, can also be relevant to interferometers and reionisation studies. ",The HI intensity mapping bispectrum including observational effects,2,"['Paper alert! In <LINK>, led by @CunningtonSD and @CatAstro_Phy, we present a simulations and modelling study of the HI intensity mapping bispectrum, including nasty observational effects from foregrounds with polarisation leakage + a beam with sidelobes. <LINK>', ""And here's how the covariance looks including these effects https://t.co/Jk8SyDUOTT""]",21,02,328 |
80,54,1262645744676331521,40639812,Colin Cotter,"New paper by my freshly-viva'd student @aa_bock on the ArXiV, on finite element discretisation of image metamorphosis (a way of transforming from one image to another that combines transport with local changes that can change image topology). <LINK> @utropstegn @aa_bock Thanks Marie! Hope you are getting on ok! @utropstegn @aa_bock oh and Andreas is a fabulous computational scientist and numerical analyst. I’m sure he’d be keen to hear of any opportunities in Oslo.",https://arxiv.org/abs/2005.08743,We study the problem of registering images. The framework we use is metamorphosis and we construct a variational Eulerian space-time setting and pose the registration problem as an infinite-dimensional optimisation problem. The geodesic equations correspond to a system of advection and continuity equations and are solved analytically. Well-posedness of a primal conforming finite element method is established and its convergence is investigated numerically. This provides a discrete forward operator for the matching parameterized by a space-time velocity field. We propose a gradient descent method on this control variable and show several promising numerical results for this approach. ,Space-time metamorphosis,3,"[""New paper by my freshly-viva'd student @aa_bock \non the ArXiV, on finite element discretisation of image metamorphosis (a way of transforming from one image to another that combines transport with local changes that can change image topology).\n<LINK>"", '@utropstegn @aa_bock Thanks Marie! Hope you are getting on ok!', '@utropstegn @aa_bock oh and Andreas is a fabulous computational scientist and numerical analyst. I’m sure he’d be keen to hear of any opportunities in Oslo.']",20,05,469 |
81,34,1385607983284080644,367022967,Sharief Hendricks,New Paper: Automated Tackle Injury Risk Assessment in Contact-Based Sports -- A Rugby Union Example with @UnitAfrican accepted in @IEEEXplore Computer Society Conference on Computer Vision and Pattern Recognition Workshops <LINK> #DeepLearning 🏉🔬 <LINK> <LINK>,https://arxiv.org/abs/2104.10916,"Video analysis in tackle-collision based sports is highly subjective and exposed to bias, which is inherent in human observation, especially under time constraints. This limitation of match analysis in tackle-collision based sports can be seen as an opportunity for computer vision applications. Objectively tracking, detecting and recognising an athlete's movements and actions during match play from a distance using video, along with our improved understanding of injury aetiology and skill execution will enhance our understanding how injury occurs, assist match day injury management, reduce referee subjectivity. In this paper, we present a system of objectively evaluating in-game tackle risk in rugby union matches. First, a ball detection model is trained using the You Only Look Once (YOLO) framework, these detections are then tracked by a Kalman Filter (KF). Following this, a separate YOLO model is used to detect persons/players within a tackle segment and then the ball-carrier and tackler are identified. Subsequently, we utilize OpenPose to determine the pose of ball-carrier and tackle, the relative pose of these is then used to evaluate the risk of the tackle. We tested the system on a diverse collection of rugby tackles and achieved an evaluation accuracy of 62.50%. These results will enable referees in tackle-contact based sports to make more subjective decisions, ultimately making these sports safer. ","Automated Tackle Injury Risk Assessment in Contact-Based Sports -- A Rugby Union Example",2,"['New Paper: Automated Tackle Injury Risk Assessment in Contact-Based Sports -- A Rugby Union Example with @UnitAfrican accepted in @IEEEXplore Computer Society Conference on Computer Vision and Pattern Recognition Workshops <LINK> #DeepLearning 🏉🔬 <LINK>', 'https://t.co/NPOwLJMaDZ']",21,04,260 |
82,65,1061907026165534720,721931072,Shimon Whiteson,"Are you interested in variational reinforcement learning but find existing formalisms confusing or just plain wrong? Check out our new paper on VIREL, a new theoretically grounded variational framework for RL. <LINK> @mattfellowsoxcs Applying EM to our framework induces actor-critic methods in which the E-step = policy improvement and the M-step = policy evaluation. In high-dimensional tasks, this approach outperforms soft actor-critic.",https://arxiv.org/abs/1811.01132,"Applying probabilistic models to reinforcement learning (RL) enables the application of powerful optimisation tools such as variational inference to RL. However, existing inference frameworks and their algorithms pose significant challenges for learning optimal policies, e.g., the absence of mode capturing behaviour in pseudo-likelihood methods and difficulties learning deterministic policies in maximum entropy RL based approaches. We propose VIREL, a novel, theoretically grounded probabilistic inference framework for RL that utilises a parametrised action-value function to summarise future dynamics of the underlying MDP. This gives VIREL a mode-seeking form of KL divergence, the ability to learn deterministic optimal polices naturally from inference and the ability to optimise value functions and policies in separate, iterative steps. In applying variational expectation-maximisation to VIREL we thus show that the actor-critic algorithm can be reduced to expectation-maximisation, with policy improvement equivalent to an E-step and policy evaluation to an M-step. We then derive a family of actor-critic methods from VIREL, including a scheme for adaptive exploration. Finally, we demonstrate that actor-critic algorithms from this family outperform state-of-the-art methods based on soft value functions in several domains. ",VIREL: A Variational Inference Framework for Reinforcement Learning,2,"['Are you interested in variational reinforcement learning but find existing formalisms confusing or just plain wrong? Check out our new paper on VIREL, a new theoretically grounded variational framework for RL. <LINK> @mattfellowsoxcs', 'Applying EM to our framework induces actor-critic methods in which the E-step = policy improvement and the M-step = policy evaluation. In high-dimensional tasks, this approach outperforms soft actor-critic.']",18,11,440 |
83,25,1256017442389798912,1012125662117851136,Edward Kennedy,Very excited to share new work on flexible estimation of heterogeneous effects: <LINK> 1st part is more practical & gives flexible error bds. 2nd part is more theoretical & tries to find best possible error. Might’ve learned more in this than any other paper <LINK> (Though a lot of the new stuff I learned didn’t actually make it into the paper... stay tuned),https://arxiv.org/abs/2004.14497,"Heterogeneous effect estimation plays a crucial role in causal inference, with applications across medicine and social science. Many methods for estimating conditional average treatment effects (CATEs) have been proposed in recent years, but there are important theoretical gaps in understanding if and when such methods are optimal. This is especially true when the CATE has nontrivial structure (e.g., smoothness or sparsity). Our work contributes in several main ways. First, we study a two-stage doubly robust CATE estimator and give a generic model-free error bound, which, despite its generality, yields sharper results than those in the current literature. We apply the bound to derive error rates in nonparametric models with smoothness or sparsity, and give sufficient conditions for oracle efficiency. Underlying our error bound is a general oracle inequality for regression with estimated or imputed outcomes, which is of independent interest; this is the second main contribution. The third contribution is aimed at understanding the fundamental statistical limits of CATE estimation. To that end, we propose and study a local polynomial adaptation of double-residual regression. We show that this estimator can be oracle efficient under even weaker conditions, if used with a specialized form of sample splitting and careful choices of tuning parameters. These are the weakest conditions currently found in the literature, and we conjecture that they are minimal in a minimax sense. We go on to give error bounds in the non-trivial regime where oracle rates cannot be achieved. Some finite-sample properties are explored with simulations. ",Optimal doubly robust estimation of heterogeneous causal effects,2,"['Very excited to share new work on flexible estimation of heterogeneous effects:\n\n<LINK>\n\n1st part is more practical & gives flexible error bds. 2nd part is more theoretical & tries to find best possible error.\n\nMight’ve learned more in this than any other paper <LINK>', '(Though a lot of the new stuff I learned didn’t actually make it into the paper... stay tuned)']",20,04,360 |
84,28,1277856196444200960,1177063549606203394,Tommi Tenkanen,"A new #paper out! The title is ""The First Three Seconds: a Review of Possible Expansion Histories of the Early Universe"" and a #preprint can be found here: <LINK> 1/n <LINK> As we say in the abstract, ""While the abundance of light elements indicates that the Universe was radiation dominated during Big Bang Nucleosynthesis (BBN), there is scant evidence that the Universe was radiation dominated prior to BBN."" 2/ ""It is therefore possible that the cosmological history was more complicated, with deviations from the standard radiation domination during the earliest epochs."" 3/n In the paper we reviewed various possible causes and consequences of deviations from this ""radiation domination"" in the early Universe and the known constraints on them. 4/n In particular, we reviewed several interesting proposals regarding the generation of #DarkMatter, matter-#antimatter asymmetry, #GravitationalWaves, primordial #BlackHole's, and #microhalo's. We hope that the review helps in guiding further research on these topics! 5/n Again, a #preprint is available here: <LINK>. Finally, big thanks to all my co-authors including @carambolos, @ktfreese, @DanHooperAstro, @GordanKrnjaic, @VivPoulin, @160GHz, and @gswatson! 6/6",https://arxiv.org/abs/2006.16182,"It is commonly assumed that the energy density of the Universe was dominated by radiation between reheating after inflation and the onset of matter domination 54,000 years later. While the abundance of light elements indicates that the Universe was radiation dominated during Big Bang Nucleosynthesis (BBN), there is scant evidence that the Universe was radiation dominated prior to BBN. It is therefore possible that the cosmological history was more complicated, with deviations from the standard radiation domination during the earliest epochs. Indeed, several interesting proposals regarding various topics such as the generation of dark matter, matter-antimatter asymmetry, gravitational waves, primordial black holes, or microhalos during a nonstandard expansion phase have been recently made. In this paper, we review various possible causes and consequences of deviations from radiation domination in the early Universe - taking place either before or after BBN - and the constraints on them, as they have been discussed in the literature during the recent years. ","The First Three Seconds: a Review of Possible Expansion Histories of the Early Universe",6,"['A new #paper out! The title is ""The First Three Seconds: a Review of Possible Expansion Histories of the Early Universe"" and a #preprint can be found here: <LINK> 1/n <LINK>', 'As we say in the abstract, ""While the abundance of light elements indicates that the Universe was radiation dominated during Big Bang Nucleosynthesis (BBN), there is scant evidence that the Universe was radiation dominated prior to BBN."" 2/', '""It is therefore possible that the cosmological history was more complicated, with deviations from the standard radiation domination during the earliest epochs."" 3/n', 'In the paper we reviewed various possible causes and consequences of deviations from this ""radiation domination"" in the early Universe and the known constraints on them. 4/n', ""In particular, we reviewed several interesting proposals regarding the generation of #DarkMatter, matter-#antimatter asymmetry, #GravitationalWaves, primordial #BlackHole's, and #microhalo's. We hope that the review helps in guiding further research on these topics! 5/n"", 'Again, a #preprint is available here: https://t.co/GyQdnDl327. Finally, big thanks to all my co-authors including @carambolos, @ktfreese, @DanHooperAstro, @GordanKrnjaic, @VivPoulin, @160GHz, and @gswatson! 6/6']",20,06,1219 |
85,69,1493483808036950018,199249094,Paul Villoutreix,"New paper out! We started with the question: how does the structure of the lymph node affects T-cells exploration? We ended up with a new approach for the study of random walks (RW) on large networks and used it on an actual lymph node. 1/n <LINK> <LINK> RW on networks are a widely used to model search strategies, transportation problems, or disease propagation. In this model, random walkers hop from node to node while choosing with an uniform probability the edges on which to travel. The structure of the underlying network, such as its degree distribution and connectivity pattern, will thus determine how the RW evolves over time. Can this connectivity pattern favour the exploration of some nodes over others? We propose a general framework to find network heterogeneities, which we define as connectivity patterns that affect the RW. We propose to characterize and measure these heterogeneities by i) ranking nodes and ii) detecting communities in a way that is RW interpretable. <LINK> Moreover, we propose iii) an approximation to accurately and efficiently compute these quantities on large networks. We first applied our method on toy models, in particular, we showed its efficiency at contrasting two highly similar networks (identical degree distribution, same number of nodes) <LINK> Moreover, we showed that none of our computations are redundant with previous centralities or random walk based measures (such as the global mean first passage time) <LINK> Finally, we applied our methodology to characterize an actual lymph node obtained from Kelch et al. 2019. It's a large network containing about 200,000 nodes. So we had to use our approximation which appeared very accurate. <LINK> Our analysis suggests that the lymph node conduit network structure is highly homogeneous (a bit like a foam) and therefore promotes a uniform exploration of space by T-cells! We developed an interactive visualisation platform if you want to explore these results <LINK> The credit of this work goes to Solène Song (@SongSolene), Malek Senoussi (@PtyFilou ) and Paul Escande! <LINK>",https://arxiv.org/abs/2202.06729,"Random walks on networks are widely used to model stochastic processes such as search strategies, transportation problems or disease propagation. A prominent biological example of search by random walkers on a network is the guiding of naive T cells by the lymphatic conduits network in the lymph node. Motivated by this case study, we propose a general framework to find network heterogeneities, which we define as connectivity patterns that affect the random walk. We propose to characterize and measure these heterogeneities by i) ranking nodes and ii) detecting communities in a way that is interpretable in terms of random walk, moreover, we propose iii) an approximation to accurately and efficiently compute these quantities on large networks. The ranking parameter we propose is the probability of presence field, and the community detection method adapts previously defined diffusion coordinates. In addition, we propose an interactive data visualization platform to follow the dynamics of the random walks and their characteristics on our datasets, and a ready-to-use pipeline for other datasets upon download. We first showcase the properties of our method on toy models. We highlight this way the efficiency of our methods at contrasting two highly similar networks (identical degree distribution, same number of nodes). 
Moreover, we show numerically that the ranking and communities defined in this way are not redundant with any other classical methods (centralities, global mean first passage, louvain, node2vec). We then use our methods to characterize the lymph node conduits network. We show that the lymph node conduits network appears homogeneous and therefore has a global structure that promotes a uniform exploration of space by T-cells. ","Random walk informed community detection reveals heterogeneities in |
large networks",11,"['New paper out! \nWe started with the question: how does the structure of the lymph node affects T-cells exploration? \nWe ended up with a new approach for the study of random walks (RW) on large networks and used it on an actual lymph node. 1/n\n<LINK> <LINK>', 'RW on networks are a widely used to model search strategies, transportation problems, or disease propagation. In this model, random walkers hop from node to node while choosing with an uniform probability the edges on which to travel.', 'The structure of the underlying network, such as its degree distribution and connectivity pattern, will thus determine how the RW evolves over time. Can this connectivity pattern favour the exploration of some nodes over others?', 'We propose a general framework to find network heterogeneities, which we define as connectivity patterns that affect the RW. We propose to characterize and measure these heterogeneities by i) ranking nodes and ii) detecting communities in a way that is RW interpretable. https://t.co/QwrqB7tYfr', 'Moreover, we propose iii) an approximation to accurately and efficiently compute these quantities on large networks.', 'We first applied our method on toy models, in particular, we showed its efficiency at contrasting two highly similar networks (identical degree distribution, same number of nodes) https://t.co/3Wy2HV8qgz', 'Moreover, we showed that none of our computations are redundant with previous centralities or random walk based measures (such as the global mean first passage time) https://t.co/4Ee4hUgjxm', ""Finally, we applied our methodology to characterize an actual lymph node obtained from Kelch et al. 2019. It's a large network containing about 200,000 nodes. So we had to use our approximation which appeared very accurate. https://t.co/AbMAhy0voG"", 'Our analysis suggests that the lymph node conduit network structure is highly homogeneous (a bit like a foam) and therefore promotes a uniform exploration of space by T-cells!', 'We developed an interactive visualisation platform if you want to explore these results \nhttps://t.co/KQKXgTUjaz', 'The credit of this work goes to Solène Song (@SongSolene), Malek Senoussi (@PtyFilou ) and Paul Escande! https://t.co/prATrApFTs']",22,02,2087 |
86,180,1453986675625975810,321943790,Mikko J.S. Auvinen,"A year ago, we set to create the most accurate #indoor flow model in the world in order to understand and control #airborne transmission of #coronavirus. Now this exceptional study is documented and submitted for peer-review. Our preprint is published at <LINK>",http://arxiv.org/abs/2110.14348,"High-resolution large-eddy simulation (LES) is exploited to study indoor air turbulence and its effect on the dispersion of respiratory virus-laden aerosols and subsequent transmission risks. The methodology is applied to assess two dissimilar approaches to reduce transmission risks: a strategy to augment the indoor ventilation capacity with portable air purifiers and a strategy to utilize partitioning by exploiting portable space dividers. To substantiate the physical relevance of the LES model, a set of experimental aerosol concentration measurements are carried out, and their results are used for validating the LES model results. The obtained LES dispersion results are subjected to pathogen exposure and infection probability analysis. Wells-Riley probability model is extended to rely on realistic time- and space-dependent concentration fields to yield time- and space-dependent infection probability fields. The use of air purifiers leads to greater reduction in absolute risks compared to the analytical Wells-Riley model, which fails to predict the original risk level. However, the two models do agree on the relative risk reduction. The spatial partitioning strategy is demonstrated to have an undesirable effect when employed without other measures. The partitioning approach may yield positive results when employed together with targeted air purifier units. The LES-based results are examined in juxtaposition with the classical Wells-Riley model, which is shown to significantly underestimate the infection probability, highlighting the importance of employing accurate indoor turbulence modeling when evaluating different risk-reduction strategies. ","High-resolution large-eddy simulation of indoor turbulence and its |
effect on airborne transmission of respiratory pathogens; model validation |
and infection probability analysis",1,"['A year ago, we set to create the most accurate #indoor flow model in the world in order to understand and control #airborne transmission of #coronavirus. Now this exceptional study is documented and submitted for peer-review. Our preprint is published at <LINK>']",21,10,261 |
87,61,1470958559827529740,108681761,William Keel,"Paper day! Accepted for MNRAS: two new distant AGN-ionized clouds, at least one from faded AGN. Paper w/@AlexeiMoiseev1 et al. 128 galaxies in TELPERION survey imaged @SARA_Obs, followup with 1.5-2.5-6m telescopes in Russia. <LINK> <LINK> I finally found some of these clouds myself! However, Dutch colleagues will quickly note that ""Keel voorwerp"" would be an inappropriately confusing way to refer to them (albeit pretty amusing).",https://arxiv.org/abs/2112.07084,"We present a narrowband [O III] imaging survey of 111 AGN hosts and 17 merging-galaxy systems, in search of distant extended emission-line regions (EELRs) around AGN (either extant or faded). Our data reach deeper than detection from the broadband SDSS data, and cover a wider field than some early emission-line surveys used to study extended structure around AGN. Spectroscopic followup confirms two new distant AGN-ionized clouds, in the merging systems NGC 235 and NGC 5514, projected at 26 and 75 kpc from the nuclei (respectively). We also recover the previously-known region in NGC 7252. These results strengthen the connection between EELRs and tidal features; kinematically quiescent distant EELRs are virtually always photoionized tidal debris. We see them in ~10% of the galaxies in our sample with tidal tails. Energy budgets suggest that the AGN in NGC 5514 has faded by >3 times during the extra light-travel time ~250,000 years from the nucleus to the cloud and then to the observer; strong shock emission in outflows masks the optical signature of the AGN. For NGC 235 our data are consistent with but do not unequivocally require variation over ~85,000 years. In addition to these very distant ionized clouds, luminous and extensive line emission within four galaxies - IC 1481, ESO 362-G08, NGC 5514, and NGC 7679. IC 1481 shows apparent ionization cones, a rare combination with its LINER AGN spectrum. In NGC 5514, we measure a 7-kpc shell expanding at ~370 km/s west of the nucleus. ","The TELPERION Survey for Distant [O III] Clouds around Luminous and |
Hibernating AGN",2,"['Paper day! Accepted for MNRAS: two new distant AGN-ionized clouds, at least one from faded AGN. Paper w/@AlexeiMoiseev1 et al. 128 galaxies in TELPERION survey imaged @SARA_Obs, followup with 1.5-2.5-6m telescopes in Russia. \n<LINK> <LINK>', 'I finally found some of these clouds myself! However, Dutch colleagues will quickly note that ""Keel voorwerp"" would be an inappropriately confusing way to refer to them (albeit pretty amusing).']",21,12,432 |
88,39,1508890083792986126,377049708,Eddie Lee,"New paper with co-authors @ChrisKempes and Geoffrey West on providing a birds-eye-view formalization of the dynamics of #innovation and #obsolescence. Now on the arXiv. @CSHVienna @sfiscience <LINK> Understanding the dynamics and structure of innovation and obsolescence has been a subject of considerable interest across many domains. Innovation and obsolescence describes dynamics of ever-churning and adapting social and biological systems from the development of economic markets and scientific progress to biological evolution. The shared aspect of this picture is that agents destroy and extend the “idea lattice” in which they live, finding new possibilities and rendering old solutions irrelevant. We focus on this aspect with a simple model to study the central relationship between the rates at which replicating agents discover new ideas and at which old ideas are rendered obsolete. When these rates are equal, the space of the possible (e.g. markets, technologies, mutations) remains finite. A positive or negative difference distinguishes flourishing, ever-expanding spaces from Schumpeterian dystopias in which obsolescence causes the system to collapse. We map the phase space in terms of the rates at which new agents enter, replicate, and die. When we extend our model to higher dimensional graphs, cooperative agents, or inverted, obsolescence-driven innovation, we find that the essential features of the model are preserved. We predict variation in the density profile of agents along the spectrum of new to old such as a drop in density close to both frontiers. When comparing our model to data, we discover that the density reveals a follow-the-leader dynamic in firm cost efficiency and biological evolution, whereas scientific progress reflects consensus that waits on old ideas to go obsolete. We show how the fundamental forces of innovation and obsolescence provide a unifying perspective on complex systems that may help us understand, harness, and shape their collective outcomes.",http://arxiv.org/abs/2203.14611,"Innovation and obsolescence describes dynamics of ever-churning and adapting social and biological systems from the development of economic markets and scientific progress to biological evolution. The shared aspect of this picture is that agents destroy and extend the ""idea lattice"" in which they live, finding new possibilities and rendering old solutions irrelevant. We focus on this aspect with a simple model to study the central relationship between the rates at which replicating agents discover new ideas and at which old ideas are rendered obsolete. When these rates are equal, the space of the possible (e.g. ideas, markets, technologies, mutations) remains finite. A positive or negative difference distinguishes flourishing, ever-expanding idea lattices from Schumpeterian dystopias in which obsolescence causes the system to collapse. We map the phase space in terms of the rates at which new agents enter, replicate, and die. When we extend our model to higher dimensional graphs, cooperative agents, or inverted, obsolescence-driven innovation, we find that the essential features of the model are preserved. In all cases, we predict variation in the density profile of agents along the spectrum of new to old such as a drop in density close to both frontiers. 
When comparing our model to data, we discover that the density reveals a follow-the-leader dynamic in firm cost efficiency and biological evolution, whereas scientific progress reflects consensus that waits on old ideas to go obsolete. We show how the fundamental forces of innovation and obsolescence provide a unifying perspective on complex systems that may help us understand, harness, and shape their collective outcomes. ","Idea engines: A unified theory of innovation and obsolescence from |
markets and genetic evolution to science",10,"['New paper with co-authors @ChrisKempes and Geoffrey West on providing a birds-eye-view formalization of the dynamics of #innovation and #obsolescence. Now on the arXiv. @CSHVienna @sfiscience \n\n<LINK>', 'Understanding the dynamics and structure of innovation and obsolescence has been a subject of considerable interest across many domains.', 'Innovation and obsolescence describes dynamics of ever-churning and adapting social and biological systems from the development of economic markets and scientific progress to biological evolution.', 'The shared aspect of this picture is that agents destroy and extend the “idea lattice” in which they live, finding new possibilities and rendering old solutions irrelevant.', 'We focus on this aspect with a simple model to study the central relationship between the rates at which replicating agents discover new ideas and at which old ideas are rendered obsolete.', 'When these rates are equal, the space of the possible (e.g. markets, technologies, mutations) remains finite. A positive or negative difference distinguishes flourishing, ever-expanding spaces from Schumpeterian dystopias in which obsolescence causes the system to collapse.', 'We map the phase space in terms of the rates at which new agents enter, replicate, and die. When we extend our model to higher dimensional graphs, cooperative agents, or inverted, obsolescence-driven innovation, we find that the essential features of the model are preserved.', 'We predict variation in the density profile of agents along the spectrum of new to old such as a drop in density close to both frontiers.', 'When comparing our model to data, we discover that the density reveals a follow-the-leader dynamic in firm cost efficiency and biological evolution, whereas scientific progress reflects consensus that waits on old ideas to go obsolete.', 'We show how the fundamental forces of innovation and obsolescence provide a unifying perspective on complex systems that may help us understand, harness, and shape their collective outcomes.']",22,03,2011 |
89,88,1227792851213393921,223440240,Nathan Kallus,"Masa and I just posted a new paper on *efficient* off-policy policy gradients: <LINK>. We establish a lower bound on how well one can estimate policy gradients and develop an algo that achieves this bound & exhibits 3-way double robustness. ☘️ 1/n <LINK> <LINK> We show (theoretically and empirically) how gradient ascent using our new off-policy policy gradients translates to better off-policy learning that can overcome the curse of horizon. This is crucial for reliably applying off-policy RL in practice. ⛑️ 2/n Connections to Double Reinforcement Learning: While our previous work (<LINK>, <LINK>) shows how structure like Makovianness can significantly improve off-policy eval, this new work shows how this translates to off-policy *learning* 🧐 n/n And fyi, for those interested in policy gradients and in case you haven't seen it yet, @lilianweng has a super helpful and succinct reference on the zoo of policy gradient algos: <LINK> 🐒🦁🐍🐯",https://arxiv.org/abs/2002.04014,"Policy gradient methods in reinforcement learning update policy parameters by taking steps in the direction of an estimated gradient of policy value. In this paper, we consider the statistically efficient estimation of policy gradients from off-policy data, where the estimation is particularly non-trivial. We derive the asymptotic lower bound on the feasible mean-squared error in both Markov and non-Markov decision processes and show that existing estimators fail to achieve it in general settings. We propose a meta-algorithm that achieves the lower bound without any parametric assumptions and exhibits a unique 3-way double robustness property. We discuss how to estimate nuisances that the algorithm relies on. Finally, we establish guarantees on the rate at which we approach a stationary point when we take steps in the direction of our new estimated policy gradient. ",Statistically Efficient Off-Policy Policy Gradients,4,"['Masa and I just posted a new paper on *efficient* off-policy policy gradients: <LINK>. We establish a lower bound on how well one can estimate policy gradients and develop an algo that achieves this bound & exhibits 3-way double robustness. ☘️ 1/n <LINK> <LINK>', 'We show (theoretically and empirically) how gradient ascent using our new off-policy policy gradients translates to better off-policy learning that can overcome the curse of horizon. This is crucial for reliably applying off-policy RL in practice. ⛑️ 2/n', 'Connections to Double Reinforcement Learning: While our previous work (https://t.co/4CFbV6r75R, https://t.co/LpSXwyTFxL) shows how structure like Makovianness can significantly improve off-policy eval, this new work shows how this translates to off-policy *learning* 🧐 n/n', ""And fyi, for those interested in policy gradients and in case you haven't seen it yet, @lilianweng has a super helpful and succinct reference on the zoo of policy gradient algos: https://t.co/bjP7wQc8g6 🐒🦁🐍🐯""]",20,02,946 |
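As a point of reference for the record above, a minimal sketch of the plain importance-sampling policy-gradient surrogate for a one-step (bandit-style) problem; the paper's statistically efficient estimator adds Q-function and density-ratio control variates on top of this, which are omitted here, and all tensor names are generic.

```python
import torch

def is_pg_surrogate(logp_theta, logp_behavior, rewards):
    """Surrogate loss whose gradient is the plain importance-sampling
    policy-gradient estimate  E[ (pi_theta / mu) * r * grad log pi_theta ].

    logp_theta:    log pi_theta(a|s) for the logged (s, a), requires grad.
    logp_behavior: log mu(a|s) under the behavior (logging) policy, no grad.
    rewards:       observed rewards for the logged actions.
    """
    weights = (logp_theta.detach() - logp_behavior).exp()  # importance ratios
    return -(weights * rewards * logp_theta).mean()        # minimize the negative
```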
90,80,1516271391787536386,714535792366981121,Chris,"Hi folks, we have a new submitted paper! In short, we estimate scaleheights and ages of low-mass stars and brown dwarfs in deep fields. We find that M dwarfs are older than L dwarfs as a population, in agreement with models. <LINK> <LINK> We also make predictions for a similar survey with JWST called PASSAGES. With the upcoming Nancy Grace Roman Telescope and the Vera Rubin Observatory, we will be able to see these tiny ""stars"" throughout the Galaxy. The goal is to start doing galactic archeology with brown dwarfs and low-mass stars and to learn something potentially new about how our galaxy formed and evolved. There are significant systematic uncertainties in evolutionary models and age-velocity dispersion relations in addition to limitations due to small sample sizes, which will be solved, in part, by the next generation of surveys. Major thanks to my co-authors for being awesome ! @ChihChunHsu @browndwarfs @philosicist @astro_daniella @RobTejada42 + others",https://arxiv.org/abs/2204.07621,"Ultracool dwarfs represent a significant proportion of stars in the Milky Way,and deep samples of these sources have the potential to constrain the formation history and evolution of low-mass objects in the Galaxy. Until recently, spectral samples have been limited to the local volume (d<100 pc). Here, we analyze a sample of 164 spectroscopically-characterized ultracool dwarfs identified by Aganze et al. (2022) in the Hubble Space Telescope WFC3 Infrared Spectroscopic Parallel (WISP) Survey and 3D-HST. We model the observed luminosity function using population simulations to place constraints on scaleheights, vertical velocity dispersions and population ages as a function of spectral type. Our star counts are consistent with a power-law mass function and constant star formation history for ultracool dwarfs, with vertical scaleheights 249$_{-61}^{+48}$ pc for late M dwarfs, 153$_{-30}^{+56}$ pc for L dwarfs, and 175$_{-56}^{+149}$ pc for T dwarfs. Using spatial and velocity dispersion relations, these scaleheights correspond to disk population ages of 3.6$_{-1.0}^{+0.8}$ for late M dwarfs, 2.1$_{-0.5}^{+0.9}$ Gyr for L dwarfs, and 2.4$_{-0.8}^{+2.4}$ Gyr for T dwarfs, which are consistent with prior simulations that predict that L-type dwarfs are on average a younger and less dispersed population. There is an additional 1-2 Gyr systematic uncertainty on these ages due to variances in age-velocity relations. We use our population simulations to predict the UCD yield in the JWST PASSAGES survey, a similar and deeper survey to WISPS and 3D-HST, and find that it will produce a comparably-sized UCD sample, albeit dominated by thick disk and halo sources. ","Beyond the Local Volume II: Population Scaleheights and Ages of |
Ultracool Dwarfs in Deep HST/WFC3 Parallel Fields",5,"['Hi folks, we have a new submitted paper! In short, we estimate scaleheights and ages of low-mass stars and brown dwarfs in deep fields. We find that M dwarfs are older than L dwarfs as a population, in agreement with models. <LINK> <LINK>', 'We also make predictions for a similar survey with JWST called PASSAGES. With the upcoming Nancy Grace Roman Telescope and the Vera Rubin Observatory, we will be able to see these tiny ""stars"" throughout the Galaxy.', 'The goal is to start doing galactic archeology with brown dwarfs and low-mass stars and to learn something potentially new about how our galaxy formed and evolved.', 'There are significant systematic uncertainties in evolutionary models and age-velocity dispersion relations in addition to limitations due to small sample sizes, which will be solved, in part, by the next generation of surveys.', 'Major thanks to my co-authors for being awesome ! @ChihChunHsu @browndwarfs @philosicist @astro_daniella @RobTejada42 + others']",22,04,973 |
91,137,1379475290154356738,55348425,Ajay Jain,"Check out our new paper - we put NeRF on a diet! Given just 1 to 8 images, DietNeRF renders consistent novel views of an object using prior knowledge from large visual encoders like CLIP ViT. <LINK> <LINK> w/ Matthew Tancik, @pabbeel 1/ Given photos of a scene from many viewpoints, NeRF learns a volumetric representation that can be rendered from novel perspectives. NeRF works well given ~20-100 photos, but often not with a few (8, on left). DietNeRF adds an auxiliary loss that removes most artifacts (right). 2/ <LINK> The core problem is that NeRF computes loss in pixel space, so renderings need to align pixel-wise with an observation. However, *a bulldozer is a bulldozer from any viewpoint*: images from different viewpoints share high-level semantic properties like object identity. 3/ <LINK> Our DietNeRF regularizes the NeRF scene representation with a semantic consistency loss, computed in *a feature space*. This allows us to compare renderings from arbitrary poses. We use pre-trained CLIP and ImageNet Vision Transformers in experiments. 4/ <LINK> pixelNeRF tackled the same problem by training NeRF on multiple similar scenes. This allows generalization to new scenes with only a few views. Using our loss, ""DietPixelNeRF"" synthesizes novel views with higher perceptual quality from *only a single monocular photo*. <LINK> Our paper has details and bonus results, including extrapolation to completely unseen regions and tips for making this fast. Watch our video explanation for a quick overview: <LINK> 6/6 cc @_parasj @AravSrinivas @akanazawa @pathak2206 @aditij @alexyu00 @jon_barron @_pratul_ - thank you for feedback along the way!",https://arxiv.org/abs/2104.00677,"We present DietNeRF, a 3D neural scene representation estimated from a few images. Neural Radiance Fields (NeRF) learn a continuous volumetric representation of a scene through multi-view consistency, and can be rendered from novel viewpoints by ray casting. While NeRF has an impressive ability to reconstruct geometry and fine details given many images, up to 100 for challenging 360{\deg} scenes, it often finds a degenerate solution to its image reconstruction objective when only a few input views are available. To improve few-shot quality, we propose DietNeRF. We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. DietNeRF is trained on individual scenes to (1) correctly render given input views from the same pose, and (2) match high-level semantic attributes across different, random poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse single-view, 2D photographs mined from the web with natural language supervision. In experiments, DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions. ",Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis,7,"['Check out our new paper - we put NeRF on a diet! 
Given just 1 to 8 images, DietNeRF renders consistent novel views of an object using prior knowledge from large visual encoders like CLIP ViT.\n \n<LINK>\n<LINK>\nw/ Matthew Tancik, @pabbeel 1/', 'Given photos of a scene from many viewpoints, NeRF learns a volumetric representation that can be rendered from novel perspectives. NeRF works well given ~20-100 photos, but often not with a few (8, on left). DietNeRF adds an auxiliary loss that removes most artifacts (right). 2/ https://t.co/C0fhAQRQCs', 'The core problem is that NeRF computes loss in pixel space, so renderings need to align pixel-wise with an observation. However, *a bulldozer is a bulldozer from any viewpoint*: images from different viewpoints share high-level semantic properties like object identity. 3/ https://t.co/uboZq3DY8X', 'Our DietNeRF regularizes the NeRF scene representation with a semantic consistency loss, computed in *a feature space*. This allows us to compare renderings from arbitrary poses. We use pre-trained CLIP and ImageNet Vision Transformers in experiments. 4/ https://t.co/EfgtLqnOsZ', 'pixelNeRF tackled the same problem by training NeRF on multiple similar scenes. This allows generalization to new scenes with only a few views. Using our loss, ""DietPixelNeRF"" synthesizes novel views with higher perceptual quality from *only a single monocular photo*. https://t.co/35tLQt4I2H', 'Our paper has details and bonus results, including extrapolation to completely unseen regions and tips for making this fast. Watch our video explanation for a quick overview: https://t.co/DzWKXUCLFD 6/6', 'cc @_parasj @AravSrinivas @akanazawa @pathak2206 @aditij @alexyu00 @jon_barron @_pratul_ - thank you for feedback along the way!']",21,04,1658 |
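A minimal sketch of the feature-space consistency idea described above, assuming only that a frozen image encoder (e.g., a CLIP or ImageNet Vision Transformer) is available as a callable; the actual DietNeRF loss may differ in which embeddings, weighting, and gradient flow it uses.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(encoder, rendered, observed):
    """Feature-space consistency between NeRF renderings and observed views.

    encoder:  frozen image encoder mapping [B, 3, H, W] -> [B, D].
    rendered: images rendered from arbitrary poses (requires grad).
    observed: the few available input photographs (paired with `rendered`).
    """
    with torch.no_grad():
        target = F.normalize(encoder(observed), dim=-1)
    pred = F.normalize(encoder(rendered), dim=-1)
    # Maximize cosine similarity of the embeddings instead of pixel-wise error.
    return 1.0 - (pred * target).sum(dim=-1).mean()
```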
92,136,1500934607935520768,1171264866113470470,Yago Ph,New paper with awesome guys @manibrata_sen and @sjtwitt_1! :) <LINK> The neutronization burst from a future galactic Supernova has the potential to provide a wealth of information on neutrino magnetic moments. The neutrino transition magnetic moments which can be explored are an order to several orders of magnitude better than the current limits!,https://arxiv.org/abs/2203.01950,"A core-collapse supernova (SN) offers an excellent astrophysical laboratory to test non-zero neutrino magnetic moments. In particular, the neutronization burst phase, which lasts for few tens of milliseconds post-bounce, is dominated by electron neutrinos and can offer exceptional discovery potential for transition magnetic moments. We simulate the neutrino spectra from the burst phase in forthcoming neutrino experiments like the Deep Underground Neutrino Experiment (DUNE), and the Hyper-Kamiokande (HK), by taking into account spin-flavour conversions of SN neutrinos, caused by interactions with ambient magnetic fields. We find that the neutrino transition magnetic moments which can be explored by these experiments for a galactic SN are an order to several orders of magnitude better than the current terrestrial and astrophysical limits. Additionally, we also discuss how this realization might shed light on three important neutrino properties: (a) the Dirac/Majorana nature, (b) the neutrino mass ordering, and (c) the neutrino mass-generation mechanism. ","Exploiting a future galactic supernova to probe neutrino magnetic |
moments",2,"['New paper with awesome guys @manibrata_sen and @sjtwitt_1! :)\n\n<LINK>', 'The neutronization burst from a future galactic Supernova has the potential to provide a wealth of information on neutrino magnetic moments. The neutrino transition magnetic moments which can be explored are an order to several orders of magnitude better than the current limits!']",22,03,348 |
93,45,1309147235339448320,979379437069271043,Pedro Machado,"New paper today with Guillermo, Ivan, @yuberfpg, Darío and Salva! Awesome team! We look at what physics a Skipper CCD detector near a nuclear reactor could do and how things like backgrounds, uncertainties, quenching, ..., would affect the sensitivity. <LINK> <LINK>",https://arxiv.org/abs/2009.10741,"We analyze in detail the physics potential of an experiment like the one recently proposed by the vIOLETA collaboration: a kilogram-scale Skipper CCD detector deployed 12 meters away from a commercial nuclear reactor core. This experiment would be able to detect coherent elastic neutrino nucleus scattering from reactor neutrinos, capitalizing on the exceptionally low ionization energy threshold of Skipper CCDs. To estimate the physics reach, we elect the measurement of the weak mixing angle as a case study. We choose a realistic benchmark experimental setup and perform variations on this benchmark to understand the role of quenching factor and its systematic uncertainties,background rate and spectral shape, total exposure, and reactor antineutrino flux uncertainty. We take full advantage of the reactor flux measurement of the Daya Bay collaboration to perform a data driven analysis which is, up to a certain extent, independent of the theoretical uncertainties on the reactor antineutrino flux. We show that, under reasonable assumptions, this experimental setup may provide a competitive measurement of the weak mixing angle at few MeV scale with neutrino-nucleus scattering. ","The physics potential of a reactor neutrino experiment with Skipper |
CCDs: Measuring the weak mixing angle",1,"['New paper today with Guillermo, Ivan, @yuberfpg, Darío and Salva! Awesome team!\nWe look at what physics a Skipper CCD detector near a nuclear reactor could do and how things like backgrounds, uncertainties, quenching, ..., would affect the sensitivity.\n<LINK> <LINK>']",20,09,266 |
94,2,1513175376700350474,807052015285207040,Michal Zajaček,A new paper on the identification of mid-infrared sources in the Galactic center led by Harshitha Bhat- we identified stellar sources and their proper motions as well as a coherent motion of compact dust clumps along the minispiral #astronomy #sgrastar <LINK> <LINK>,https://arxiv.org/abs/2203.16727,"Mid-Infrared (MIR) images of the Galactic center show extended gas and dust features along with bright IRS sources. Some of these dust features are a part of ionized clumpy streamers orbiting Sgr~A*, known as the mini-spiral. We present their proper motions over 12 year time period and report their flux densities in $N$-band filters {and derive their spectral indices}. The observations were carried out by VISIR at ESO VLT. High-pass filtering led to the detection of several resolved filaments and clumps along the mini-spiral. Each source was fit by a 2-D Gaussian profile to determine the offsets and aperture sizes. We perform aperture photometry to extract fluxes in two different bands. We present the proper motions of the largest consistent set of resolved and reliably determined sources. In addition to stellar orbital motions, we identify a stream-like motion of extended clumps along the mini-spiral. We also detect MIR counterparts of the radio tail components of the IRS7 source. They show a clear kinematical deviation with respect to the star. They likely represent Kelvin-Helmholtz instabilities formed downstream in the shocked stellar wind. We also analyze the shape and the orientation of the extended late-type IRS3 star that is consistent with the ALMA sub-mm detection of the source. Its puffed-up envelope with the radius of $\sim 2\times 10^6\,R_{\odot}$ could be the result of the red-giant collision with a nuclear jet, which was followed by the tidal prolongation along the orbit. ",Mid-Infrared studies of dusty sources in the Galactic Center,1,['A new paper on the identification of mid-infrared sources in the Galactic center led by Harshitha Bhat- we identified stellar sources and their proper motions as well as a coherent motion of compact dust clumps along the minispiral #astronomy #sgrastar\n<LINK> <LINK>'],22,03,266 |
95,33,1364480362458583042,802543221943439360,Andrea Caputo,New paper out! <LINK> We study the impact of the plasma around BHs on the superradiance for dark photons. We notice that it is possible -- in the presence of kinetic mixing -- that superradiance is shut down before extracting a sizable spin energy from the BH. <LINK>,https://arxiv.org/abs/2102.11280,"Black hole superradiance is a powerful tool in the search for ultra-light bosons. Constraints on the existence of such particles have been derived from the observation of highly spinning black holes, absence of continuous gravitational-wave signals, and of the associated stochastic background. However, these constraints are only strictly speaking valid in the limit where the boson's interactions can be neglected. In this work we investigate the extent to which the superradiant growth of an ultra-light dark photon can be quenched via scattering processes with ambient electrons. For dark photon masses $m_{\gamma^\prime} \gtrsim 10^{-17}\,{\rm eV}$, and for reasonable values of the ambient electron number density, we find superradiance can be quenched prior to extracting a significant fraction of the black-hole spin. For sufficiently large $m_{\gamma^\prime}$ and small electron number densities, the in-medium suppression of the kinetic mixing can be efficiently removed, and quenching occurs for mixings $\chi_0 \gtrsim \mathcal{O}(10^{-8})$; at low masses, however, in-medium effects strongly inhibit otherwise efficient scattering processes from dissipating energy. Intriguingly, this quenching leads to a time- and energy-oscillating electromagnetic signature, with luminosities potentially extending up to $\sim 10^{57}\,{\rm erg / s}$, suggesting that such events should be detectable with existing telescopes. As a byproduct we also show that superradiance cannot be used to constrain a small mass for the Standard Model photon. ",Electromagnetic Signatures of Dark Photon Superradiance,1,['New paper out! <LINK>\nWe study the impact of the plasma around BHs on the superradiance for dark photons. We notice that it is possible -- in the presence of kinetic mixing -- that superradiance is shut down before extracting a sizable spin energy from the BH. <LINK>'],21,02,267 |
96,217,1313483090891862016,1244087220,Andrew Beam,"Are we making meaningful progress on machine learning for EHR data? New preprint with @DavidRBellamy and Leo Celi tries to answer this question through the lens of benchmarks and the answer is, unfortunately, ""probably not"" Paper: <LINK> Some highlights 👇 <LINK> 2/ The main problem with measuring progress is lack of standardization, so we looked for studies using common ""benchmark tasks"" Based on our definition, we found only one benchmark that has received significant attention (>200 citations, ~20 papers reporting results) <LINK> 3/ The benchmark has 4 sub prediction tasks: mortality, LOS, phenotype, and decompensation. We found there has been no significant progress over time (trend line for each task ~0, p-val non-significant) since the benchmark was introduced ~3 years ago <LINK> 4/ Many of these papers use common baselines (e.g. logistic regression, LSTM, etc), so we did a meta-analysis to see if any of these model classes are better on average Compared to LR, the neural models were only better on phenotyping and decomp, and only then by a small margin <LINK> 5/ So what does all this mean? Benchmark tasks are our best bet for community engagement and progress, but not alI prediction tasks are created equal. In medicine, some outcomes like mortality actually involve quite a bit of discretion about when to withdraw care. 6/ This means we should think carefully about what tasks are the right ones for benchmarks, because simple models might already be close to Bayes error rates for things like mortality. The label has to be ""hard enough"" but not ""too hard"" for it to be a good benchmark task. 7/ In summary, if we want to have an ""Imagenet moment"" in medicine, we have to have an equivalent benchmark, and currently it seems pretty clear that we do not. We hope the analysis and discussion in this paper will help clarify considerations for future benchmark tasks. <LINK> @erikrtn Agree with all of the above! That's why it probably doesn't make a good benchmark task if you're trying to catalyze and measure model development progress!",https://arxiv.org/abs/2010.01149,"The Large Scale Visual Recognition Challenge based on the well-known Imagenet dataset catalyzed an intense flurry of progress in computer vision. Benchmark tasks have propelled other sub-fields of machine learning forward at an equally impressive pace, but in healthcare it has primarily been image processing tasks, such as in dermatology and radiology, that have experienced similar benchmark-driven progress. In the present study, we performed a comprehensive review of benchmarks in medical machine learning for structured data, identifying one based on the Medical Information Mart for Intensive Care (MIMIC-III) that allows the first direct comparison of predictive performance and thus the evaluation of progress on four clinical prediction tasks: mortality, length of stay, phenotyping, and patient decompensation. We find that little meaningful progress has been made over a 3 year period on these tasks, despite significant community engagement. Through our meta-analysis, we find that the performance of deep recurrent models is only superior to logistic regression on certain tasks. We conclude with a synthesis of these results, possible explanations, and a list of desirable qualities for future benchmarks in medical machine learning. ","Evaluating Progress on Machine Learning for Longitudinal Electronic |
Healthcare Data",8,"['Are we making meaningful progress on machine learning for EHR data?\n\nNew preprint with @DavidRBellamy and Leo Celi tries to answer this question through the lens of benchmarks and the answer is, unfortunately, ""probably not""\n\nPaper: <LINK>\n\nSome highlights 👇 <LINK>', '2/ The main problem with measuring progress is lack of standardization, so we looked for studies using common ""benchmark tasks""\n\nBased on our definition, we found only one benchmark that has received significant attention (>200 citations, ~20 papers reporting results) https://t.co/xHdbVV9LQH', '3/ The benchmark has 4 sub prediction tasks: mortality, LOS, phenotype, and decompensation. \n\nWe found there has been no significant progress over time (trend line for each task ~0, p-val non-significant) since the benchmark was introduced ~3 years ago https://t.co/nDwVpQy9yE', '4/ Many of these papers use common baselines (e.g. logistic regression, LSTM, etc), so we did a meta-analysis to see if any of these model classes are better on average\n\nCompared to LR, the neural models were only better on phenotyping and decomp, and only then by a small margin https://t.co/SUTQUMBlgK', '5/ So what does all this mean? Benchmark tasks are our best bet for community engagement and progress, but not alI prediction tasks are created equal. In medicine, some outcomes like mortality actually involve quite a bit of discretion about when to withdraw care.', '6/ This means we should think carefully about what tasks are the right ones for benchmarks, because simple models might already be close to Bayes error rates for things like mortality. \n\nThe label has to be ""hard enough"" but not ""too hard"" for it to be a good benchmark task.', '7/ In summary, if we want to have an ""Imagenet moment"" in medicine, we have to have an equivalent benchmark, and currently it seems pretty clear that we do not.\n\nWe hope the analysis and discussion in this paper will help clarify considerations for future benchmark tasks. https://t.co/uciRb2Xaxx', ""@erikrtn Agree with all of the above! That's why it probably doesn't make a good benchmark task if you're trying to catalyze and measure model development progress!""]",20,10,2069 |
97,217,1375247172732604422,1169068112177745922,Alexis Plascencia,"One more paper with @fileviez and @clamurgal 😀 <LINK> We study the mechanism of Leptogenesis in theories where Baryon and Lepton number are promoted to local gauge symmetries 1/n <LINK> We numerically solved the Boltzmann equations including the effects of the process N N <-> Z_L <-> f bar(f) and depending on how large is the ratio g_L/M_ZL this new interaction can quickly bring the right-handed neutrino into thermal equilibrium 2/n <LINK> If this new gauge interaction is too large it keeps the right-handed neutrinos in thermal equilibrium and suppresses the final asymmetry. Thus, we find a lower bound on the symmetry breaking scale for U(1)_L of M_ZL/g_L > 10^10 GeV in order to have successful leptogenesis 3/n <LINK> the spontaneous breaking of a U(1) at such high temperatures leads to the formation of cosmic strings that radiate gravitational waves that could be probed by future Laser Interferometers such as LISA 4/n Furthermore, in this scenario the ‘t Hooft operator associated with the sphaleron effects is different from the SM since it needs to preserve the U(1)_B gauge symmetry: 5/n <LINK> and hence, sphaleron processes can transfer a lepton asymmetry and a dark matter asymmetry into a baryon asymmetry. 6/n The theory has an automatic dark matter that is predicted from the cancellation of gauge anomalies. Namely, in the theory with 6 new representation, the DM candidate is generically a Dirac fermion (chi) 7/n <LINK> Then, the question arises: How can we generate a dark matter asymmetry? Well, it turns out that by just adding a new complex scalar phi the theory nicely accommodates the mechanism proposed in <LINK> 8/n <LINK> In this mechanism, the out-of-equilibrium decays of N1 -> phi DM and N1 -> phi* bar(DM) can also generate a dark matter asymmetry. Then, the lepton and dark matter asymmetries are partially converted into a baryon asymmetry via sphaleron processes 9/n <LINK> Thus, theories with local Baryon and Lepton number can explain the baryon asymmetry, dark matter and neutrino masses 🙂 10/n",https://arxiv.org/abs/2103.13397,"In order to address the baryon asymmetry in the Universe one needs to understand the origin of baryon (B) and lepton (L) number violation. In this article, we discuss the mechanism of baryogenesis via leptogenesis to explain the matter-antimatter asymmetry in theories with spontaneous breaking of baryon and lepton number. In this context, a lepton asymmetry is generated through the out-of-equilibrium decays of right-handed neutrinos at the high-scale, while local baryon number must be broken below the multi-TeV scale to satisfy the cosmological bounds on the dark matter relic density. We demonstrate how the lepton asymmetry generated via leptogenesis can be converted in two different ways: a) in the theory predicting Majorana dark matter the lepton asymmetry is converted into a baryon asymmetry, and b) in the theory with Dirac dark matter the decays of right-handed neutrinos can generate lepton and dark matter asymmetries that are then partially converted into a baryon asymmetry. Consequently, we show how to explain the matter-antimatter asymmetry, the dark matter relic density and neutrino masses in theories for local baryon and lepton number. 
",Baryogenesis via Leptogenesis: Spontaneous B and L Violation,10,"['One more paper with @fileviez and @clamurgal 😀 <LINK>\n \nWe study the mechanism of Leptogenesis in theories where Baryon and Lepton number are promoted to local gauge symmetries 1/n <LINK>', 'We numerically solved the Boltzmann equations including the effects of the process N N <-> Z_L <-> f bar(f) and depending on how large is the ratio g_L/M_ZL this new interaction can quickly bring the right-handed neutrino into thermal equilibrium 2/n https://t.co/uOz8TYMUto', 'If this new gauge interaction is too large it keeps the right-handed neutrinos in thermal equilibrium and suppresses the final asymmetry. Thus, we find a lower bound on the symmetry breaking scale for U(1)_L of \n\nM_ZL/g_L > 10^10 GeV \n\nin order to have successful leptogenesis 3/n https://t.co/QXKeiapvRK', 'the spontaneous breaking of a U(1) at such high temperatures leads to the formation of cosmic strings that radiate gravitational waves that could be probed by future Laser Interferometers such as LISA 4/n', 'Furthermore, in this scenario the ‘t Hooft operator associated with the sphaleron effects is different from the SM since it needs to preserve the U(1)_B gauge symmetry: 5/n https://t.co/VrBkXRXQSp', 'and hence, sphaleron processes can transfer a lepton asymmetry and a dark matter asymmetry into a baryon asymmetry. 6/n', 'The theory has an automatic dark matter that is predicted from the cancellation of gauge anomalies. Namely, in the theory with 6 new representation, the DM candidate is generically a Dirac fermion (chi) 7/n https://t.co/q8vpS4bcez', 'Then, the question arises: How can we generate a dark matter asymmetry? \n \nWell, it turns out that by just adding a new complex scalar phi the theory nicely accommodates the mechanism proposed in https://t.co/ELGNx0n7bt 8/n https://t.co/Tf7a5tVyGD', 'In this mechanism, the out-of-equilibrium decays of N1 -> phi DM and N1 -> phi* bar(DM) can also generate a dark matter asymmetry. Then, the lepton and dark matter asymmetries are partially converted into a baryon asymmetry via sphaleron processes 9/n https://t.co/eDn7LjCnkK', 'Thus, theories with local Baryon and Lepton number can explain the baryon asymmetry, dark matter and neutrino masses 🙂 10/n']",21,03,2066 |
98,9,1521541543676444672,232287209,Dallas Card,"In a new paper with Junshen Chen (@cab1n_) and Dan @Jurafsky (to appear in Findings of ACL), we focus on domain adaptation for off-the-shelf models (e.g., lexicons or cloud APIs), which often don't provide full access to training data or model parameters. <LINK> We suggest that even when data cannot be shared, model producers should try to facilitate adaptation by model consumers. How can this be done? We present two lightweight techniques for this purpose, both of which can be used by model consumers without additional training. In brief, model producers can incorporate domain specific biases and/or domain specific feature normalization during training, with parameters which can then be easily replaced by model consumers when applying models in a new domain. On four multi-domain text classification datasets, we find significant improvements in out-of-domain accuracy when used in combination with either linear or contextual embedding models. Experiments also estimate the typical drop in accuracy of base models across domains (3-10%). <LINK> Fine-tuning CE models to the new domain is still best where possible, but requires model access and sufficient computational resources (GPUs). Especially for CSS practitioners employing off-the-shelf models, lightweight adaptation may be preferable, and should be encouraged. As an aside, we also compare against several sentiment lexicons, and find they are no better than a basic bag-of-words logistic regression model (trained on a handful of sentiment analysis datasets) in terms of out-of-domain accuracy, when adapting models to the new domain. <LINK> Replication code is available here: <LINK> And more details in this blog post: <LINK>",https://arxiv.org/abs/2204.14213,"Off-the-shelf models are widely used by computational social science researchers to measure properties of text, such as sentiment. However, without access to source data it is difficult to account for domain shift, which represents a threat to validity. Here, we treat domain adaptation as a modular process that involves separate model producers and model consumers, and show how they can independently cooperate to facilitate more accurate measurements of text. We introduce two lightweight techniques for this scenario, and demonstrate that they reliably increase out-of-domain accuracy on four multi-domain text classification datasets when used with linear and contextual embedding models. We conclude with recommendations for model producers and consumers, and release models and replication code to accompany this paper. ",Modular Domain Adaptation,8,"[""In a new paper with Junshen Chen (@cab1n_) and Dan @Jurafsky (to appear in Findings of ACL), we focus on domain adaptation for off-the-shelf models (e.g., lexicons or cloud APIs), which often don't provide full access to training data or model parameters. <LINK>"", 'We suggest that even when data cannot be shared, model producers should try to facilitate adaptation by model consumers. How can this be done? 
We present two lightweight techniques for this purpose, both of which can be used by model consumers without additional training.', 'In brief, model producers can incorporate domain specific biases and/or domain specific feature normalization during training, with parameters which can then be easily replaced by model consumers when applying models in a new domain.', 'On four multi-domain text classification datasets, we find significant improvements in out-of-domain accuracy when used in combination with either linear or contextual embedding models. Experiments also estimate the typical drop in accuracy of base models across domains (3-10%). https://t.co/JRbT1UIrnn', 'Fine-tuning CE models to the new domain is still best where possible, but requires model access and sufficient computational resources (GPUs). Especially for CSS practitioners employing off-the-shelf models, lightweight adaptation may be preferable, and should be encouraged.', 'As an aside, we also compare against several sentiment lexicons, and find they are no better than a basic bag-of-words logistic regression model (trained on a handful of sentiment analysis datasets) in terms of out-of-domain accuracy, when adapting models to the new domain. https://t.co/fWh7gTFdqH', 'Replication code is available here: https://t.co/BTuemReUhW', 'And more details in this blog post: https://t.co/3FmmenRlfD']",22,04,1700 |
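A minimal sketch of the general idea in the record above — the producer ships shared weights while the consumer swaps in domain-specific intercept and feature-normalization statistics for a new domain; the names and exact parameterization are illustrative, not the paper's.

```python
import numpy as np

def predict_proba(x, shared_weights, domain_bias, domain_mean, domain_std):
    """Linear text classifier with swappable domain-specific parts.

    The model producer ships `shared_weights`; the model consumer estimates
    `domain_mean`, `domain_std` (e.g., from unlabeled target-domain features)
    and `domain_bias` for the new domain, without retraining the weights.
    """
    z = (x - domain_mean) / domain_std          # domain-specific normalization
    logits = z @ shared_weights + domain_bias   # domain-specific intercept
    return 1.0 / (1.0 + np.exp(-logits))
```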
99,152,1410843177058127875,96253726,Navin Sridhar,"New paper! <LINK> The coronal plasmas of black holes are subject to inverse-Compton cooling by the soft ~blackbody photons from the accretion disk. What powers the hard, non-thermal X-rays from BH coronae despite this radiative cooling is an unsolved mystery... In this work, we show that the bulk motion of plasmoid chains—resulting from the reconnection of magnetic field lines anchored onto the accretion disk—can Compton up-scatter the soft disk photons into a hard, non-thermal spectrum. *This works despite a cooled-down coronal plasma* <LINK> We demonstrate this using first-principle PIC simulations of relativistic reconnection in pair plasmas for different levels of magnetization (σ) and seed photon density. The spectrum of bulk motions resembles a ~100 keV Maxwellian, and barely depends on the above two parameters. We perform Monte-Carlo calculations of the radiative transfer of seed photons through the reconnection layer laden with particles, whose momenta are obtained from PIC simulations: the σ=10 model describes the BH ""hard state"" spectrum remarkably well, across 3 decades in energy! <LINK> We welcome you to our paper—available on arXiv (<LINK>)—for a detailed dive into our calculations, assumptions, and results. Stay tuned to the second part of this series, where we will be modifying the composition of the corona! All comments appreciated! <LINK>",https://arxiv.org/abs/2107.00263,"We perform two-dimensional particle-in-cell simulations of reconnection in magnetically dominated electron-positron plasmas subject to strong Compton cooling. We vary the magnetization $\sigma\gg1$, defined as the ratio of magnetic tension to plasma inertia, and the strength of cooling losses. Magnetic reconnection under such conditions can operate in magnetically dominated coronae around accreting black holes, which produce hard X-rays through Comptonization of seed soft photons. We find that the particle energy spectrum is dominated by a peak at mildly relativistic energies, which results from bulk motions of cooled plasmoids. The peak has a quasi-Maxwellian shape with an effective temperature of $\sim 100$ keV, which depends only weakly on the flow magnetization and the strength of radiative cooling. The mean bulk energy of the reconnected plasma is roughly independent of $\sigma$, whereas the variance is larger for higher magnetizations. The spectra also display a high-energy tail, which receives $\sim 25$% of the dissipated reconnection power for $\sigma=10$ and $\sim 40$% for $\sigma=40$. We complement our particle-in-cell studies with a Monte-Carlo simulation of the transfer of seed soft photons through the reconnection layer, and find the escaping X-ray spectrum. The simulation demonstrates that Comptonization is dominated by the bulk motions in the chain of Compton-cooled plasmoids and, for $\sigma\sim 10$, yields a spectrum consistent with the typical hard state of accreting black holes. ","Comptonization by Reconnection Plasmoids in Black Hole Coronae I: |
Magnetically Dominated Pair Plasma",5,"['New paper!\n<LINK>\nThe coronal plasmas of black holes are subject to inverse-Compton cooling by the soft ~blackbody photons from the accretion disk. What powers the hard, non-thermal X-rays from BH coronae despite this radiative cooling is an unsolved mystery...', 'In this work, we show that the bulk motion of plasmoid chains—resulting from the reconnection of magnetic field lines anchored onto the accretion disk—can Compton up-scatter the soft disk photons into a hard, non-thermal spectrum. *This works despite a cooled-down coronal plasma* https://t.co/jQy5UDZsT1', 'We demonstrate this using first-principle PIC simulations of relativistic reconnection in pair plasmas for different levels of magnetization (σ) and seed photon density. The spectrum of bulk motions resembles a ~100 keV Maxwellian, and barely depends on the above two parameters.', 'We perform Monte-Carlo calculations of the radiative transfer of seed photons through the reconnection layer laden with particles, whose momenta are obtained from PIC simulations: the σ=10 model describes the BH ""hard state"" spectrum remarkably well, across 3 decades in energy! https://t.co/T1OGjAq48Q', 'We welcome you to our paper—available on arXiv (https://t.co/8Z3diNKhOF)—for a detailed dive into our calculations, assumptions, and results. Stay tuned to the second part of this series, where we will be modifying the composition of the corona! All comments appreciated! https://t.co/6qkp5SaFcp']",21,07,1377 |
100,48,1040106477359427584,2969696397,Ion Nechita,New paper <LINK> with Andreas Bluhm: compatibility of quantum measurements is equivalent to inclusion of the matrix jewel. A new application of algebraic convexity (and of the theory of free spectrahedra) to quantum information - or is it the other way around ???,https://arxiv.org/abs/1809.04514,"In this work, we establish the connection between the study of free spectrahedra and the compatibility of quantum measurements with an arbitrary number of outcomes. This generalizes previous results by the authors for measurements with two outcomes. Free spectrahedra arise from matricial relaxations of linear matrix inequalities. A particular free spectrahedron which we define in this work is the matrix jewel. We find that the compatibility of arbitrary measurements corresponds to the inclusion of the matrix jewel into a free spectrahedron defined by the effect operators of the measurements under study. We subsequently use this connection to bound the set of (asymmetric) inclusion constants for the matrix jewel using results from quantum information theory and symmetrization. The latter translate to new lower bounds on the compatibility of quantum measurements. Among the techniques we employ are approximate quantum cloning and mutually unbiased bases. ","Compatibility of quantum measurements and inclusion constants for the |
matrix jewel",1,['New paper <LINK> with Andreas Bluhm: compatibility of quantum measurements is equivalent to inclusion of the matrix jewel. A new application of algebraic convexity (and of the theory of free spectrahedra) to quantum information - or is it the other way around ???'],18,09,263 |
101,139,1427851262523940864,826222576754057216,Aaron Parsons,"Our next big HERA paper is out! Interpreting our groundbreaking limits, we find evidence for early heating in the universe, most likely from galaxies that are brighter in X-ray than they are today. <LINK> Because our measurements come from a later epoch in the evolution of the universe, we cannot confirm or refute a cosmological interpretation of the EDGES feature seen at z~18, but with HERA’s upgraded receivers, we may soon be able to address this directly. This work is the culmination of an immense amount of effort by many dedicated scientists. As a collaboration, we have begun listing author contributions to recognize the efforts of our junior colleagues who have poured their careers into this. More groups should do this.",http://arxiv.org/abs/2108.07282,"Recently, the Hydrogen Epoch of Reionization Array (HERA) collaboration has produced the experiment's first upper limits on the power spectrum of 21-cm fluctuations at z~8 and 10. Here, we use several independent theoretical models to infer constraints on the intergalactic medium (IGM) and galaxies during the epoch of reionization (EoR) from these limits. We find that the IGM must have been heated above the adiabatic cooling threshold by z~8, independent of uncertainties about the IGM ionization state and the nature of the radio background. Combining HERA limits with galaxy and EoR observations constrains the spin temperature of the z~8 neutral IGM to 27 K < T_S < 630 K (2.3 K < T_S < 640 K) at 68% (95%) confidence. They therefore also place a lower bound on X-ray heating, a previously unconstrained aspects of early galaxies. For example, if the CMB dominates the z~8 radio background, the new HERA limits imply that the first galaxies produced X-rays more efficiently than local ones (with soft band X-ray luminosities per star formation rate constrained to L_X/SFR = { 10^40.2, 10^41.9 } erg/s/(M_sun/yr) at 68% confidence), consistent with expectations of X-ray binaries in low-metallicity environments. The z~10 limits require even earlier heating if dark-matter interactions (e.g., through millicharges) cool down the hydrogen gas. Using a model in which an extra radio background is produced by galaxies, we rule out (at 95% confidence) the combination of high radio and low X-ray luminosities of L_{r,\nu}/SFR > 3.9 x 10^24 W/Hz/(M_sun/yr) and L_X/SFR<10^40 erg/s/(M_sun/yr). The new HERA upper limits neither support nor disfavor a cosmological interpretation of the recent EDGES detection. The analysis framework described here provides a foundation for the interpretation of future HERA results. ","HERA Phase I Limits on the Cosmic 21-cm Signal: Constraints on |
Astrophysics and Cosmology During the Epoch of Reionization",3,"['Our next big HERA paper is out! Interpreting our groundbreaking limits, we find evidence for early heating in the universe, most likely from galaxies that are brighter in X-ray than they are today. <LINK>', 'Because our measurements come from a later epoch in the evolution of the universe, we cannot confirm or refute a cosmological interpretation of the EDGES feature seen at z~18, but with HERA’s upgraded receivers, we may soon be able to address this directly.', 'This work is the culmination of an immense amount of effort by many dedicated scientists. As a collaboration, we have begun listing author contributions to recognize the efforts of our junior colleagues who have poured their careers into this. More groups should do this.']",21,08,734 |
102,151,1511181276094648324,1508991637850107910,Nhat Ho,"I would like to share a new paper that @khainb12 and I just finished on revising the usage of sliced Wasserstein distance for ML and DL applications with images (e.g., GAN, Domain Adaptation, etc.): <LINK> The idea here is that sliced Wasserstein only defines for data in vector form. When apply to images, which are in tensor form, we usually vectorized the images and then project them to a one-dimensional subspace via Radon transform. The vectorization step can be sub-optimal. To address the issue of vectorization, we incorporate convolution operators (with stride, dilation, non-linear activation,etc.) into the Radon transform of images. It not only can keep the spatial structure of images but also greatly save memory (with far fewer parameters).",https://arxiv.org/abs/2204.01188,"The conventional sliced Wasserstein is defined between two probability measures that have realizations as vectors. When comparing two probability measures over images, practitioners first need to vectorize images and then project them to one-dimensional space by using matrix multiplication between the sample matrix and the projection matrix. After that, the sliced Wasserstein is evaluated by averaging the two corresponding one-dimensional projected probability measures. However, this approach has two limitations. The first limitation is that the spatial structure of images is not captured efficiently by the vectorization step; therefore, the later slicing process becomes harder to gather the discrepancy information. The second limitation is memory inefficiency since each slicing direction is a vector that has the same dimension as the images. To address these limitations, we propose novel slicing methods for sliced Wasserstein between probability measures over images that are based on the convolution operators. We derive convolution sliced Wasserstein (CSW) and its variants via incorporating stride, dilation, and non-linear activation function into the convolution operators. We investigate the metricity of CSW as well as its sample complexity, its computational complexity, and its connection to conventional sliced Wasserstein distances. Finally, we demonstrate the favorable performance of CSW over the conventional sliced Wasserstein in comparing probability measures over images and in training deep generative modeling on images. ","Revisiting Sliced Wasserstein on Images: From Vectorization to |
Convolution",3,"['I would like to share a new paper that @khainb12 and I just finished on revising the usage of sliced Wasserstein distance for ML and DL applications with images (e.g., GAN, Domain Adaptation, etc.): <LINK>', 'The idea here is that sliced Wasserstein only defines for data in vector form. When apply to images, which are in tensor form, we usually vectorized the images and then project them to a one-dimensional subspace via Radon transform. The vectorization step can be sub-optimal.', 'To address the issue of vectorization, we incorporate convolution operators (with stride, dilation, non-linear activation,etc.) into the Radon transform of images. It not only can keep the spatial structure of images but also greatly save memory (with far fewer parameters).']",22,04,756 |
103,51,829291524659740673,526115229,Kevin Heng,"Our new study on a hidden demon in transmission spectra. Frankly, we are dismayed by what we found: <LINK> @planetremco Mike Line just mentioned it too, but our study goes deeper and provides a formula to cleanly understand these degeneracies. @planetremco Also, this was noticed numerically by Benneke & Seager (2012). One only realizes the full extent of the problem analytically... @planetremco I agree. Call it a literature review oversight. @planetremco I know for a fact it does, c.f. Jaemin Lee. One of the key points is that R0 and P0 cannot be treated as independent fit params @planetremco Read the paper. You will be surprised. :) @planetremco Hence the use of the terms ""revisited"" and ""unresolved"". No one is claiming it's new. We're shining a different light on it. @ExoSing @planetremco @DrJoVian Refs are easy to fix. The novelty of our approach is in clarifying the formalism and unification of details.",https://arxiv.org/abs/1702.02051,"The computation of transmission spectra is a central ingredient in the study of exoplanetary atmospheres. First, we revisit the theory of transmission spectra, unifying ideas from several workers in the literature. Transmission spectra lack an absolute normalization due to the a priori unknown value of a reference transit radius, which is tied to an unknown reference pressure. We show that there is a degeneracy between the uncertainty in the transit radius, the assumed value of the reference pressure (typically set to 10 bar) and the inferred value of the water abundance when interpreting a WFC3 transmission spectrum. Second, we show that the transmission spectra of isothermal atmospheres are nearly isobaric. We validate the isothermal, isobaric analytical formula for the transmission spectrum against full numerical calculations and show that the typical errors are ~0.1% (~10 ppm) within the WFC3 range of wavelengths for temperatures of 1500 K (or higher). Third, we generalize the previous expression for the transit radius to include a small temperature gradient. Finally, we analyze the measured WFC3 transmission spectrum of WASP-12b and demonstrate that we obtain consistent results with the retrieval approach of Kreidberg et al. (2015) if the reference transit radius and reference pressure are fixed to assumed values. The unknown functional relationship between the reference transit radius and reference pressure implies that it is the product of the water abundance and reference pressure that is being retrieved from the data, and not just the water abundance alone. This degeneracy leads to a limitation on how accurately we may extract molecular abundances from transmission spectra using WFC3 data alone. Finally, we compare our study to that of Griffith (2014) and discuss why the degeneracy was missed in previous retrieval studies. [abridged] ","The theory of transmission spectra revisited: a semi-analytical method |
for interpreting WFC3 data and an unresolved challenge",8,"['Our new study on a hidden demon in transmission spectra. Frankly, we are dismayed by what we found:\n<LINK>', '@planetremco Mike Line just mentioned it too, but our study goes deeper and provides a formula to cleanly understand these degeneracies.', '@planetremco Also, this was noticed numerically by Benneke & Seager (2012). One only realizes the full extent of the problem analytically...', '@planetremco I agree. Call it a literature review oversight.', '@planetremco I know for a fact it does, c.f. Jaemin Lee. One of the key points is that R0 and P0 cannot be treated as independent fit params', '@planetremco Read the paper. You will be surprised. :)', '@planetremco Hence the use of the terms ""revisited"" and ""unresolved"". No one is claiming it\'s new. We\'re shining a different light on it.', '@ExoSing @planetremco @DrJoVian Refs are easy to fix. The novelty of our approach is in clarifying the formalism and unification of details.']",17,02,920 |
104,115,1010244714484850688,68493084,sadia afroz,"Our new paper explores blocking beyond censorship: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor’s country, and blocking due to security concerns: <LINK> (will be presented at #foci18 workshop @USENIXSecurity)",https://arxiv.org/abs/1806.00459,"This paper examines different reasons the websites may vary in their availability by location. Prior works on availability mostly focus on censorship by nation states. We look at three forms of server-side blocking: blocking visitors from the EU to avoid GDPR compliance, blocking based upon the visitor's country, and blocking due to security concerns. We argue that these and other forms of blocking warrant more research. ","A Bestiary of Blocking: The Motivations and Modes behind Website |
Unavailability",1,"['Our new paper explores blocking beyond censorship: blocking\nvisitors from the EU to avoid GDPR compliance, blocking\nbased upon the visitor’s country, and blocking due to\nsecurity concerns: <LINK> (will be presented at #foci18 workshop @USENIXSecurity)']",18,06,251 |
105,16,1222833648988065792,2902687319,Mike Hudson,"We have new paper led by Tianyi Yang, with @nafshordi, on the luminous content and structure of filaments between massive haloes. Mass-to-light ratios of filaments similar to the cosmic average. <LINK> <LINK> @nafshordi And (what a coincidence!) I will present this at #LocalWeb20 tomorrow ;)",https://arxiv.org/abs/2001.10943v1,"The cold dark matter model predicts that dark matter haloes are connected by filaments. Direct measurements of the masses and structure of these filaments are difficult, but recently several studies have detected these dark-matter-dominated filaments using weak lensing. Here we study the efficiency of galaxy formation within the filaments by measuring their total mass-to-light ratios and stellar mass fractions. Specifically, we stack pairs of Luminous Red Galaxies (LRGs) with a typical separation on the sky of $8 h^{-1}$ Mpc. We stack background galaxy shapes around pairs to obtain mass maps through weak lensing, and we stack galaxies from the Sloan Digital Sky Survey (SDSS) to obtain maps of light and stellar mass. To isolate the signal from the filament, we construct two matched catalogues of physical and non-physical (projected) LRG pairs, with the same distributions of redshift and separation. We then subtract the two stacked maps. Using LRG pair samples from the BOSS survey at two different redshifts, we find that the evolution of the mass in filament is consistent with the predictions from perturbation theory. The filaments are not entirely dark: their mass-to-light ratios ($M/L = 351\pm87$ in solar units in the $r$-band) and stellar mass fractions ($M_{\rm stellar}/M = 0.0073\pm0.0020$) are consistent with the cosmic values (and with their redshift evolutions) ",] How dark are filaments in the cosmic web?,2,"['We have new paper led by Tianyi Yang, with @nafshordi, on the luminous content and structure of filaments between massive haloes. Mass-to-light ratios of filaments similar to the cosmic average. <LINK> <LINK>', '@nafshordi And (what a coincidence!) I will present this at #LocalWeb20 tomorrow ;)']",20,01,292 |
106,224,1511624651126067202,890517217850294273,Pepa Atanasova,"📜 Happy to share that our work on #factchecking with insufficient evidence has been accepted to TACL! We propose a new diagnostic dataset, SufficientFacts, and a novel data augmentation strategy for contrastive self-learning of missing evidence. <LINK> #NLProc <LINK> Work with my amazing co-authors @IAugenstein, Jakob, and Christina.",https://arxiv.org/abs/2204.02007,"Automating the fact checking (FC) process relies on information obtained from external sources. In this work, we posit that it is crucial for FC models to make veracity predictions only when there is sufficient evidence and otherwise indicate when it is not enough. To this end, we are the first to study what information FC models consider sufficient by introducing a novel task and advancing it with three main contributions. First, we conduct an in-depth empirical analysis of the task with a new fluency-preserving method for omitting information from the evidence at the constituent and sentence level. We identify when models consider the remaining evidence (in)sufficient for FC, based on three trained models with different Transformer architectures and three FC datasets. Second, we ask annotators whether the omitted evidence was important for FC, resulting in a novel diagnostic dataset, SufficientFacts, for FC with omitted evidence. We find that models are least successful in detecting missing evidence when adverbial modifiers are omitted (21% accuracy), whereas it is easiest for omitted date modifiers (63% accuracy). Finally, we propose a novel data augmentation strategy for contrastive self-learning of missing evidence by employing the proposed omission method combined with tri-training. It improves performance for Evidence Sufficiency Prediction by up to 17.8 F1 score, which in turn improves FC performance by up to 2.6 F1 score. ",Fact Checking with Insufficient Evidence,2,"['📜 Happy to share that our work on #factchecking with insufficient evidence has been accepted to TACL! We propose a new diagnostic dataset, SufficientFacts, and a novel data augmentation strategy for contrastive self-learning of missing evidence.\n<LINK> #NLProc <LINK>', 'Work with my amazing co-authors @IAugenstein, Jakob, and Christina.']",22,04,335 |
107,26,1310877005072736257,205969794,Léa Steinacker,In our new paper on the spread of COVID-19 misinformation on 🇩🇪-Twitter we show that 🔸partisan accounts contribute relatively more to conspiratorial narratives 🔸bots don't significantly influence the spread of misinformation in 🇩🇪-Twitter (1.31%) <LINK> <LINK>,https://arxiv.org/abs/2009.12905,"In late 2019, the gravest pandemic in a century began spreading across the world. A state of uncertainty related to what has become known as SARS-CoV-2 has since fueled conspiracy narratives on social media about the origin, transmission and medical treatment of and vaccination against the resulting disease, COVID-19. Using social media intelligence to monitor and understand the proliferation of conspiracy narratives is one way to analyze the distribution of misinformation on the pandemic. We analyzed more than 9.5M German language tweets about COVID-19. The results show that only about 0.6% of all those tweets deal with conspiracy theory narratives. We also found that the political orientation of users correlates with the volume of content users contribute to the dissemination of conspiracy narratives, implying that partisan communicators have a higher motivation to take part in conspiratorial discussions on Twitter. Finally, we showed that contrary to other studies, automated accounts do not significantly influence the spread of misinformation in the German speaking Twitter sphere. They only represent about 1.31% of all conspiracy-related activities in our database. ","COVID-19's (mis)information ecosystem on Twitter: How partisanship |
boosts the spread of conspiracy narratives on German speaking Twitter",1,"[""In our new paper on the spread of COVID-19 misinformation on 🇩🇪-Twitter we show that \n\n🔸partisan accounts contribute relatively more to conspiratorial narratives\n\n🔸bots don't significantly influence the spread of misinformation in 🇩🇪-Twitter (1.31%)\n\n<LINK> <LINK>""]",20,09,261 |
108,189,1435634029504589824,921385730,Marshall Burke,"Check out our new paper (preprint aka working paper) using satellites and ML to measure the impact of electrification on economic livelihoods, led by @NWRAT with colleagues at @atlasai_co . Quick 🧵 <LINK> Billions of $ going into expansion of energy grid across emerging markets, but lack of consensus on what this expansion means for livelihoods. Challenging research question: grid placement not random, hard to measure livelihood outcomes at scale. 2/n To solve measurement prob, we use daytime satellite imagery (Landsat) and deep learning to measure evolution of local-level asset wealth over two decades across Uganda. Model works well: high predictive performance in handful of locations/yrs where we have ground data. 3/n We pair sat-based estimates of wealth with constructed dataset of grid expansion. Then use new-ish ML estimators (matrix completion) to help deal with potentially endogenous grid placement. Show multiple tests to convince ourselves (refs) that it might have worked. 4/n Critically, we show that livelihood prediction step cannot be divorced from causal inference step. Standard loss fnc in deep learning models generates output with too low variance in this setting; this attenuates causal estimates. Custom loss fnc in step 1 model fixes it. 5/n We find large positive effects of electrification on community-level wealth in Uganda: +0.2sd in the 3-4 years after grid access. Seems good. We believe suite of methods we use could be useful for large-scale impact evaluation in a lot of settings. /n",https://arxiv.org/abs/2109.02890,"In many regions of the world, sparse data on key economic outcomes inhibits the development, targeting, and evaluation of public policy. We demonstrate how advancements in satellite imagery and machine learning can help ameliorate these data and inference challenges. In the context of an expansion of the electrical grid across Uganda, we show how a combination of satellite imagery and computer vision can be used to develop local-level livelihood measurements appropriate for inferring the causal impact of electricity access on livelihoods. We then show how ML-based inference techniques deliver more reliable estimates of the causal impact of electrification than traditional alternatives when applied to these data. We estimate that grid access improves village-level asset wealth in rural Uganda by 0.17 standard deviations, more than doubling the growth rate over our study period relative to untreated areas. Our results provide country-scale evidence on the impact of a key infrastructure investment, and provide a low-cost, generalizable approach to future policy evaluation in data sparse environments. ","Using Satellite Imagery and Machine Learning to Estimate the Livelihood |
Impact of Electricity Access",6,"['Check out our new paper (preprint aka working paper) using satellites and ML to measure the impact of electrification on economic livelihoods, led by @NWRAT with colleagues at @atlasai_co . Quick 🧵 <LINK>', 'Billions of $ going into expansion of energy grid across emerging markets, but lack of consensus on what this expansion means for livelihoods. Challenging research question: grid placement not random, hard to measure livelihood outcomes at scale. 2/n', 'To solve measurement prob, we use daytime satellite imagery (Landsat) and deep learning to measure evolution of local-level asset wealth over two decades across Uganda. Model works well: high predictive performance in handful of locations/yrs where we have ground data. 3/n', 'We pair sat-based estimates of wealth with constructed dataset of grid expansion. Then use new-ish ML estimators (matrix completion) to help deal with potentially endogenous grid placement. Show multiple tests to convince ourselves (refs) that it might have worked. 4/n', 'Critically, we show that livelihood prediction step cannot be divorced from causal inference step. Standard loss fnc in deep learning models generates output with too low variance in this setting; this attenuates causal estimates. Custom loss fnc in step 1 model fixes it. 5/n', 'We find large positive effects of electrification on community-level wealth in Uganda: +0.2sd in the 3-4 years after grid access. Seems good. We believe suite of methods we use could be useful for large-scale impact evaluation in a lot of settings. /n']",21,09,1528 |
109,28,1288795332999086080,1439446945,Lav Varshney,"New manuscript on active sampling for generating high-quality text from language models which may work better than top-k and nucleus sampling @SFResearch @ECEILLINOIS @CSL_Illinois blog: <LINK> paper: <LINK> code: <LINK> <LINK> Nice work from @sourya_basu, Sachin, and @StrongDuality",https://arxiv.org/abs/2007.14966,"Neural text decoding is important for generating high-quality texts using language models. To generate high-quality text, popular decoding algorithms like top-k, top-p (nucleus), and temperature-based sampling truncate or distort the unreliable low probability tail of the language model. Though these methods generate high-quality text after parameter tuning, they are ad hoc. Not much is known about the control they provide over the statistics of the output, which is important since recent reports show text quality is highest for a specific range of likelihoods. Here, first we provide a theoretical analysis of perplexity in top-k, top-p, and temperature sampling, finding that cross-entropy behaves approximately linearly as a function of p in top-p sampling whereas it is a nonlinear function of k in top-k sampling, under Zipfian statistics. We use this analysis to design a feedback-based adaptive top-k text decoding algorithm called mirostat that generates text (of any length) with a predetermined value of perplexity, and thereby high-quality text without any tuning. Experiments show that for low values of k and p in top-k and top-p sampling, perplexity drops significantly with generated text length, which is also correlated with excessive repetitions in the text (the boredom trap). On the other hand, for large values of k and p, we find that perplexity increases with generated text length, which is correlated with incoherence in the text (confusion trap). Mirostat avoids both traps: experiments show that cross-entropy has a near-linear relation with repetition in generated text. This relation is almost independent of the sampling method but slightly dependent on the model used. Hence, for a given language model, control over perplexity also gives control over repetitions. Experiments with human raters for fluency, coherence, and quality further verify our findings. ","Mirostat: A Neural Text Decoding Algorithm that Directly Controls |
Perplexity",2,"['New manuscript on active sampling for generating high-quality text from language models which may work better than top-k and nucleus sampling @SFResearch @ECEILLINOIS @CSL_Illinois \n\nblog: <LINK>\npaper: <LINK>\ncode: <LINK> <LINK>', 'Nice work from @sourya_basu, Sachin, and @StrongDuality']",20,07,284 |
110,105,1225803379957522432,1211580702480748544,Andrea Idini,"New Paper is out: <LINK> The idea is to use the Levy-Lieb formulation of DFT, and use Self Consistent Green's function calculations from first principle to derive a possible first principle functional. *Spoiler: doesn't work. We know from Hohenberg-Kohn... 1/6 ... that a universal functional, that can reproduce in a one-body fashion the true ground state density and energy of a correlated many-body system, exists. To find it is a different story, and literally a trillion dollar question. 2/6 Finding the universal density functional is a Quantum Merlin-Arthur problem. So a problem that can be solved in decent time only having a quantum oracle (Merlin) that can perfectly and immediately answer questions or a smart classical Arthur. We try our best using 3/6 Using Self-Consistent Green's function to do a proper simulation from first principle of the nuclear state using Chiral interaction and density functional generators. That's our Merlin. The constrained DFT then tries to reproduce the one-body densities and energies. 4/6 The results of this ambitious work present us with mixed feeling: we did not get nice results in finite nuclei that would have been so epic. However we identified problems in the ab initio calculation, and the Skyrme generators used, opening the door for more studies. 5/6 We tried to cheat the Quantum Merlin Arthur and it costed us years and several bones. But I learned a lot from it. Thanks Gianluca for this bold attempt in cracking this impossible problem. @jacekdob512 @AleStyle81 @NucTheorySurrey 6/6",https://arxiv.org/abs/2002.01903,"We present the first application of a new approach, proposed in [Journal of Physics G: Nuclear and Particle Physics, 43, 04LT01 (2016)] to derive coupling constants of the Skyrme energy density functional (EDF) from ab initio Hamiltonian. By perturbing the ab initio Hamiltonian with several functional generators defining the Skyrme EDF, we create a set of metadata that is then used to constrain the coupling constants of the functional. We use statistical analysis to obtain such an ab initio-equivalent Skyrme EDF. We find that the resulting functional describes properties of atomic nuclei and infinite nuclear matter quite poorly. This may point out to the necessity of building up the ab initio-equivalent functionals from more sophisticated generators. However, we also indicate that the current precision of the ab initio calculations may be insufficient for deriving meaningful nuclear EDFs. ","Model nuclear energy density functionals derived from ab initio |
calculations",6,"[""New Paper is out:\n<LINK>\n\nThe idea is to use the Levy-Lieb formulation of DFT, and use Self Consistent Green's function calculations from first principle to derive a possible first principle functional.\n\n*Spoiler: doesn't work.\n\nWe know from Hohenberg-Kohn...\n1/6"", '... that a universal functional, that can reproduce in a one-body fashion the true ground state density and energy of a correlated many-body system, exists.\n\nTo find it is a different story, and literally a trillion dollar question.\n\n2/6', 'Finding the universal density functional is a Quantum Merlin-Arthur problem. So a problem that can be solved in decent time only having a quantum oracle (Merlin) that can perfectly and immediately answer questions or a smart classical Arthur.\n\nWe try our best using \n\n3/6', ""Using Self-Consistent Green's function to do a proper simulation from first principle of the nuclear state using Chiral interaction and density functional generators. That's our Merlin. \nThe constrained DFT then tries to reproduce the one-body densities and energies.\n4/6"", 'The results of this ambitious work present us with mixed feeling: we did not get nice results in finite nuclei that would have been so epic.\nHowever we identified problems in the ab initio calculation, and the Skyrme generators used, opening the door for more studies.\n\n5/6', 'We tried to cheat the Quantum Merlin Arthur and it costed us years and several bones. But I learned a lot from it.\n\nThanks Gianluca for this bold attempt in cracking this impossible problem.\n\n@jacekdob512 @AleStyle81 @NucTheorySurrey \n\n6/6']",20,02,1547 |
111,181,1438457730621128710,2546561779,Masoud,"Check our new paper at #EMNLP2021 ""Graph Algorithms for Multiparallel Word Alignment"". We get better results compared to bilingual aligners even when we use noisy translations. Joint work: @imani_ayyoob @lksenel @PDufter @HinrichSchuetze PDF: <LINK> #NLProc",https://arxiv.org/abs/2109.06283,"With the advent of end-to-end deep learning approaches in machine translation, interest in word alignments initially decreased; however, they have again become a focus of research more recently. Alignments are useful for typological research, transferring formatting like markup to translated texts, and can be used in the decoding of machine translation systems. At the same time, massively multilingual processing is becoming an important NLP scenario, and pretrained language and machine translation models that are truly multilingual are proposed. However, most alignment algorithms rely on bitexts only and do not leverage the fact that many parallel corpora are multiparallel. In this work, we exploit the multiparallelity of corpora by representing an initial set of bilingual alignments as a graph and then predicting additional edges in the graph. We present two graph algorithms for edge prediction: one inspired by recommender systems and one based on network link prediction. Our experimental results show absolute improvements in $F_1$ of up to 28% over the baseline bilingual word aligner in different datasets. ",Graph Algorithms for Multiparallel Word Alignment,1,"['Check our new paper at #EMNLP2021 ""Graph Algorithms for Multiparallel Word Alignment"". We get better results compared to bilingual aligners even when we use noisy translations.\nJoint work: @imani_ayyoob\n@lksenel @PDufter @HinrichSchuetze\nPDF: <LINK>\n#NLProc']",21,09,257 |
112,24,988545226510757888,865254426101067778,Mohit Iyyer,"new #NAACL2018 paper out on controllable paraphrasing w/ John Wieting, @kevingimpel @LukeZettlemoyer <LINK> our model generates paraphrases with user-specified syntactic forms. we use it to generate (and make models more robust to) adversarial examples! #NLProc <LINK>",https://arxiv.org/abs/1804.06059,"We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations show that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) ""fool"" pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data. ","Adversarial Example Generation with Syntactically Controlled Paraphrase |
Networks",1,"['new #NAACL2018 paper out on controllable paraphrasing w/ John Wieting, @kevingimpel @LukeZettlemoyer <LINK> our model generates paraphrases with user-specified syntactic forms. we use it to generate (and make models more robust to) adversarial examples! #NLProc <LINK>']",18,04,268 |
113,109,1197121963757576197,454838126,Jos de Bruijne,"""Novel constraints on the particle nature of dark matter from stellar streams"" <LINK> ""[We find] a 95% lower limit on the mass of warm dark matter thermal relics mWDM>4.6 keV; adding dwarf satellite counts strengthens this to mWDM>6.3 keV"" #GaiaMission #GaiaDR2 <LINK>",https://arxiv.org/abs/1911.02662,"New data from the $\textit{Gaia}$ satellite, when combined with accurate photometry from the Pan-STARRS survey, allow us to accurately estimate the properties of the GD-1 stream. Here, we analyze the stellar density perturbations in the GD-1 stream and show that they cannot be due to known baryonic structures like giant molecular clouds, globular clusters, or the Milky Way's bar or spiral arms. A joint analysis of the GD-1 and Pal 5 streams instead requires a population of dark substructures with masses $\approx 10^{7}$ to $10^9 \ M_{\rm{\odot}}$. We infer a total abundance of dark subhalos normalised to standard cold dark matter $n_{\rm sub}/n_{\rm sub, CDM} = 0.4 ^{+0.3}_{-0.2}$ ($68 \%$), which corresponds to a mass fraction contained in the subhalos $f_{\rm{sub}} = 0.14 ^{+0.11}_{-0.07} \%$, compatible with the predictions of hydrodynamical simulation of cold dark matter with baryons. ","Evidence of a population of dark subhalos from Gaia and Pan-STARRS |
observations of the GD-1 stream",1,"['""Novel constraints on the particle nature of dark matter from stellar streams"" <LINK> ""[We find] a 95% lower limit on the mass of warm dark matter thermal relics mWDM>4.6 keV; adding dwarf satellite counts strengthens this to mWDM>6.3 keV"" #GaiaMission #GaiaDR2 <LINK>']",19,11,274 |
114,199,1286035276427599886,114636884,Sagar,"Glad to share the final part of my Ph.D. as a preprint. In this work we propose a new method for conducting triadic motif census, which can be centred around a particular node in a conversation thread. This results in a much richer variety of triads. <LINK> We call these variants as Anchored Triads. We find that some of these triads are significantly more prevalent in supportive conversations about suicide (r/SuicideWatch). We hypothesize that this may point to a signature of social support in online forums. We investigate the significance of this finding by comparing against generic Reddit threads. Joint work with the amazing @rina_dutta , @__sumithra__ , and @nishanthsastry",https://arxiv.org/abs/2007.10159,"Platforms like Reddit and Twitter offer internet users an opportunity to talk about diverse issues, including those pertaining to physical and mental health. Some of these forums also function as a safe space for severely distressed mental health patients to get social support from peers. The online community platform Reddit's SuicideWatch is one example of an online forum dedicated specifically to people who suffer from suicidal thoughts, or who are concerned about people who might be at risk. It remains to be seen if these forums can be used to understand and model the nature of online social support, not least because of the noisy and informal nature of conversations. Moreover, understanding how a community of volunteering peers react to calls for help in cases of suicidal posts, would help to devise better tools for online mitigation of such episodes. In this paper, we propose an approach to characterise conversations in online forums. Using data from the SuicideWatch subreddit as a case study, we propose metrics at a macroscopic level -- measuring the structure of the entire conversation as a whole. We also develop a framework to measure structures in supportive conversations at a mesoscopic level -- measuring interactions with the immediate neighbours of the person in distress. We statistically show through comparison with baseline conversations from random Reddit threads that certain macro and meso-scale structures in an online conversation exhibit signatures of social support, and are particularly over-expressed in SuicideWatch conversations. ","Analysing Meso and Macro conversation structures in an online suicide |
support forum",3,"['Glad to share the final part of my Ph.D. as a preprint. In this work we propose a new method for conducting triadic motif census, which can be centred around a particular node in a conversation thread. This results in a much richer variety of triads. \n<LINK>', 'We call these variants as Anchored Triads. We find that some of these triads are significantly more prevalent in supportive conversations about suicide (r/SuicideWatch). We hypothesize that this may point to a signature of social support in online forums.', 'We investigate the significance of this finding by comparing against generic Reddit threads. \n\nJoint work with the amazing @rina_dutta , @__sumithra__ , and @nishanthsastry']",20,07,685 |
115,122,1447496520975138817,761263821881221120,Dr. Roberta Amato,"My latest paper!! New experimental data on the scattering of low-energy protons from a sample of the optics of the future ESA X-ray mission @AthenaXobs. These data are the key to minimise the bkg and achieve the ambitious goals of the mission <LINK> This work was a team effort and I personally want to thank my colleagues from @uni_tue and my two supervisors prof. A. Santangelo and T. Mineo. This paper was written, submitted, reviewed, and approved during the pandemic. Here is my message to all the people out there struggling because of covid: Hold on! Better times always come! 💪🏼 #astronomy #research #WomenInSTEM",https://arxiv.org/abs/2110.04122,"Soft protons are a potential threat for X-ray missions using grazing incidence optics, as once focused onto the detectors they can contribute to increase the background and possibly induce radiation damage as well. The assessment of these undesired effects is especially relevant for the future ESA X-ray mission Athena, due to its large collecting area. To prevent degradation of the instrumental performance, which ultimately could compromise some of the scientific goals of the mission, the adoption of ad-hoc magnetic diverters is envisaged. Dedicated laboratory measurements are fundamental to understand the mechanisms of proton forward scattering, validate the application of the existing physical models to the Athena case and support the design of the diverters. In this paper we report on scattering efficiency measurements of soft protons impinging at grazing incidence onto a Silicon Pore Optics sample, conducted in the framework of the EXACRAD project. Measurements were taken at two different energies, ~470 keV and ~170 keV, and at four different scattering angles between 0.6 deg and 1.2 deg. The results are generally consistent with previous measurements conducted on eROSITA mirror samples, and as expected the peak of the scattering efficiency is found around the angle of specular reflection. ","Scattering efficiencies measurements of soft protons at grazing |
incidence from an Athena Silicon Pore Optics sample",3,"['My latest paper!! New experimental data on the scattering of low-energy protons from a sample of the optics of the future ESA X-ray mission @AthenaXobs. These data are the key to minimise the bkg and achieve the ambitious goals of the mission\n<LINK>', 'This work was a team effort and I personally want to thank my colleagues from @uni_tue and my two supervisors prof. A. Santangelo and T. Mineo.', 'This paper was written, submitted, reviewed, and approved during the pandemic. Here is my message to all the people out there struggling because of covid: Hold on! Better times always come! 💪🏼\n#astronomy #research #WomenInSTEM']",21,10,620 |
116,66,1216775386601861121,28734416,Sebastian Risi,"Happy our new paper ""Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples"" w/ @chelmyers, @efreed52, @larispardo, @anushayfurqan, and @jichenz is now on arXiv: <LINK> @ITUkbh @DrexelUniv <LINK> Traditionally, non-experts have little control in uncovering potential social bias in the algorithms that may impact their lives. We present a preliminary design for an interactive visualization tool, to reveal biases in neural networks. The tool combines counterfactual examples and clustering of the networks' activation patterns to empower non-experts to detect bias. For most of us, this is the first foray into fair AI methods, so any pointers to other relevant work or suggestions are greatly appreciated. @drscotthawley @mahimapushkarna Thanks! And we'll have a look at that tool.",https://arxiv.org/abs/2001.02271,"AI algorithms are not immune to biases. Traditionally, non-experts have little control in uncovering potential social bias (e.g., gender bias) in the algorithms that may impact their lives. We present a preliminary design for an interactive visualization tool CEB to reveal biases in a commonly used AI method, Neural Networks (NN). CEB combines counterfactual examples and abstraction of an NN decision process to empower non-experts to detect bias. This paper presents the design of CEB and initial findings of an expert panel (n=6) with AI, HCI, and Social science experts. ","Revealing Neural Network Bias to Non-Experts Through Interactive |
Counterfactual Examples",4,"['Happy our new paper ""Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples"" w/ @chelmyers, @efreed52, @larispardo, @anushayfurqan, and @jichenz is now on arXiv: <LINK> @ITUkbh @DrexelUniv <LINK>', 'Traditionally, non-experts have little control in uncovering potential social bias in the algorithms that may impact their lives. We present a preliminary design for an interactive visualization tool, to reveal biases in neural networks.', ""The tool combines counterfactual examples and clustering of the networks' activation patterns to empower non-experts to detect bias. For most of us, this is the first foray\xa0into fair AI methods, so any pointers to other relevant work or suggestions are greatly appreciated."", ""@drscotthawley @mahimapushkarna Thanks! And we'll have a look at that tool.""]",20,01,817 |
117,155,1459888302438367241,12309242,Onur Mutlu,"""Uncovering In-DRAM #RowHammer Protection Mechanisms: A New Methodology, Custom RowHammer Patterns, and Implications"", MICRO'21 talk, Hasan Hassan. Tomorrow 6:30pm Zurich time. Youtube: <LINK> Paper: <LINK> @SAFARI_ETH_CMU @ETH_en @CSatETH This work introduces a rigorous methodology for uncovering the operational principles & details of RowHammer protection mechanisms employed in modern DDR4 DRAM chips. We show that one can use this methodology to circumvent existing protections & cause many more RowHammer bitflips Our methodology, U-TRR (Uncovering TRR) can help enable fundamentally-secure and robust solutions to RowHammer. Collaborative research with @kavehrazavi and @vvdveen. Full paper: <LINK> Full Talk Slides (PDF): <LINK>",https://arxiv.org/abs/2110.10603,"The RowHammer vulnerability in DRAM is a critical threat to system security. To protect against RowHammer, vendors commit to security-through-obscurity: modern DRAM chips rely on undocumented, proprietary, on-die mitigations, commonly known as Target Row Refresh (TRR). At a high level, TRR detects and refreshes potential RowHammer-victim rows, but its exact implementations are not openly disclosed. Security guarantees of TRR mechanisms cannot be easily studied due to their proprietary nature. To assess the security guarantees of recent DRAM chips, we present Uncovering TRR (U-TRR), an experimental methodology to analyze in-DRAM TRR implementations. U-TRR is based on the new observation that data retention failures in DRAM enable a side channel that leaks information on how TRR refreshes potential victim rows. U-TRR allows us to (i) understand how logical DRAM rows are laid out physically in silicon; (ii) study undocumented on-die TRR mechanisms; and (iii) combine (i) and (ii) to evaluate the RowHammer security guarantees of modern DRAM chips. We show how U-TRR allows us to craft RowHammer access patterns that successfully circumvent the TRR mechanisms employed in 45 DRAM modules of the three major DRAM vendors. We find that the DRAM modules we analyze are vulnerable to RowHammer, having bit flips in up to 99.9% of all DRAM rows. ","Uncovering In-DRAM RowHammer Protection Mechanisms: A New Methodology, |
Custom RowHammer Patterns, and Implications",3,"['""Uncovering In-DRAM #RowHammer Protection Mechanisms: A New Methodology, Custom RowHammer Patterns, and Implications"", MICRO\'21 talk, Hasan Hassan. \n\nTomorrow 6:30pm Zurich time. \n\nYoutube: <LINK>\n\nPaper: <LINK>\n\n@SAFARI_ETH_CMU @ETH_en @CSatETH', 'This work introduces a rigorous methodology for uncovering the operational principles & details of RowHammer protection mechanisms employed in modern DDR4 DRAM chips. We show that one can use this methodology to circumvent existing protections & cause many more RowHammer bitflips', 'Our methodology, U-TRR (Uncovering TRR) can help enable fundamentally-secure and robust solutions to RowHammer. \n\nCollaborative research with @kavehrazavi and @vvdveen.\n\nFull paper: https://t.co/94xVVr8ZLM \n\nFull Talk Slides (PDF): https://t.co/ImY6SPPF7r']",21,10,741 |
118,20,1233394656584454144,162293874,Jeff Clune,"Here is a new talk entirely on ANML (a Neuromodulated Meta-Learning Algorithm) @reworkAI ANML meta-learns to reduce catastrophic forgetting, and can learn at least 600 tasks sequentially! <LINK> paper: Learning to Continually Learn (<LINK>) #AI <LINK> @reworkAI Work done with a great team: Shawn Beaulieu (first author), Lapo Frati, @ThomasMiconi, Joel Lehman ( @joelbot3000), @kenneth0stanley, and Nick Cheney. @QuantRob @reworkAI You can do ANML with any optimization algorithm if you like, including a genetic algorithm. But a GA is likely less efficient for supervised learning tasks.",https://arxiv.org/abs/2002.09571,"Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network also thus indirectly controls selective plasticity (i.e. the backward pass of) the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates). ",Learning to Continually Learn,3,"['Here is a new talk entirely on ANML (a Neuromodulated Meta-Learning Algorithm) @reworkAI ANML meta-learns to reduce catastrophic forgetting, and can learn at least 600 tasks sequentially!\n<LINK> paper: Learning to Continually Learn (<LINK>) #AI <LINK>', '@reworkAI Work done with a great team: Shawn Beaulieu (first author), Lapo Frati, @ThomasMiconi, Joel Lehman (\n@joelbot3000), @kenneth0stanley, and Nick Cheney.', '@QuantRob @reworkAI You can do ANML with any optimization algorithm if you like, including a genetic algorithm. But a GA is likely less efficient for supervised learning tasks.']",20,02,589 |
119,163,1378058929079345155,128608141,Thamme Gowda,"Check out our new work; collaboration w/ Zhao Zhang, @chrismattmann, @jonathanmay: Paper: <LINK> TL;DR: Introducing our tools for machine translation; showing their use by creating a model that translates 500 languages to English! Demo: <LINK> Our tools are: 1. MTData: downloads datasets from many sources <LINK> 2. NLCodec: for byte-pair-encoding, dataset storage+retrieval on large datasets (using @ApacheSpark) <LINK> 3. RTG: our NMT toolkit based on @PyTorch <LINK> use cases for pre-trained MT models: the web and social media analysis and retrieval beyond English. Docker images are available so you may use them for translating *many* langs to English. <LINK> Also being integrated to @ApacheTika <LINK> An additional use case (also one of my favorite areas of research) is improving the translation of low-resource languages. We have been impressed with the fine-tuning of our model in low resource settings. <LINK> <LINK>",https://arxiv.org/abs/2104.00290,"While there are more than 7000 languages in the world, most translation research efforts have targeted a few high-resource languages. Commercial translation systems support only one hundred languages or fewer, and do not make these models available for transfer to low resource languages. In this work, we present useful tools for machine translation research: MTData, NLCodec, and RTG. We demonstrate their usefulness by creating a multilingual neural machine translation model capable of translating from 500 source languages to English. We make this multilingual model readily downloadable and usable as a service, or as a parent model for transfer-learning to even lower-resource languages. ","Many-to-English Machine Translation Tools, Data, and Pretrained Models",4,"['Check out our new work; collaboration w/ Zhao Zhang, @chrismattmann, @jonathanmay:\nPaper: <LINK> \nTL;DR: Introducing our tools for machine translation; showing their use by creating a model that translates 500 languages to English!\nDemo: <LINK>', 'Our tools are:\n1. MTData: downloads datasets from many sources https://t.co/Srrkv0XLDw \n2. NLCodec: for byte-pair-encoding, dataset storage+retrieval on large datasets (using @ApacheSpark) https://t.co/Z6tZtCzJkx \n3. RTG: our NMT toolkit based on @PyTorch https://t.co/wHXLQ6Odr2', 'use cases for pre-trained MT models: the web and social media analysis and retrieval beyond English. Docker images are available so you may use them for translating *many* langs to English.\nhttps://t.co/OTqX6U4DYd \n\nAlso being integrated to @ApacheTika https://t.co/1L9YvjFzMS', 'An additional use case (also one of my favorite areas of research) is improving the translation of low-resource languages. We have been impressed with the fine-tuning of our model in low resource settings. \nhttps://t.co/7JlSY5aIdv https://t.co/9RYuhA5Zof']",21,04,932 |
120,76,940571109770088448,907232486735958018,Jaki Noronha-Hostler,We have predictions for the recent @CERN @uslhc XeXe run comparing both spherical and deformed Xenon. We find that the deformation plays a role in central collisions elliptical flow. <LINK> @CERN @uslhc For these calculations we used a relativistic viscous hydrodynamic model with event-by-event TRENTO initial conditions. We also found that multi-particle cumulants differed more from their eccentricities in XeXe collisions (compared to PbPb collisions). @CERN @uslhc The reason XeXe was interesting to compare to PbPb is that we're trying to understand how system size affects the Quark Gluon Plasma. Xe has a smaller nucleus (129 vs 208 from Pb) so it produces a smaller droplet of Quark Gluon Plasma.,https://arxiv.org/abs/1711.08499,"We argue that relativistic hydrodynamics is able to make robust predictions for soft particle production in Xe+Xe collisions at the CERN Large Hadron Collider (LHC). The change of system size from Pb+Pb to Xe+Xe provides a unique opportunity to test the scaling laws inherent to fluid dynamics. Using event-by-event hydrodynamic simulations, we make quantitative predictions for several observables: mean transverse momentum, anisotropic flow coefficients, and their fluctuations. Results are shown as function of collision centrality. ",Hydrodynamic predictions for 5.44 TeV Xe+Xe collisions,3,"['We have predictions for the recent @CERN @uslhc XeXe run comparing both spherical and deformed Xenon. We find that the deformation plays a role in central collisions elliptical flow.\n<LINK>', '@CERN @uslhc For these calculations we used a relativistic viscous hydrodynamic model with event-by-event TRENTO initial conditions. We also found that multi-particle cumulants differed more from their eccentricities in XeXe collisions (compared to PbPb collisions).', ""@CERN @uslhc The reason XeXe was interesting to compare to PbPb is that we're trying to understand how system size affects the Quark Gluon Plasma. Xe has a smaller nucleus (129 vs 208 from Pb) so it produces a smaller droplet of Quark Gluon Plasma.""]",17,11,705 |
121,9,1356255004416487429,2302304521,Dr Andra Stroe 🏳️🌈🇷🇴,"I am very happy to share my new paper with @victor_savu, introducing GLEAM (Galaxy Line Emission & Absorption Modeling), a Python tool we wrote for fitting Gaussian models to emission and absorption lines in large samples of 1D extragalactic spectra. <LINK> 1/3 With GLEAM, you can uniformly process a variety of spectra, including galaxies and AGN, in a wide range of instrument setups and signal-to-noise regimes. With a goal to enable reproducible workflows for users, we designed GLEAM to be easy to install and use. 2/3 Check out the paper on ArXiv and contribute to our open-source project on Github! <LINK> 3/3 Also, this is @victor_savu's first refereed paper! I am very thankful to @victor_savu for the expertise on all things code development. Software developer turned astronomer? 4/3 @franco_vazza @victor_savu I worked through a number of possible acronyms and this one just fit the best. I do love your LOFAR acronym! @MarculewiczM @victor_savu Really glad to hear that you found it useful! Keep in touch if you have any questions or feedback!",https://arxiv.org/abs/2101.12231,"We present GLEAM (Galaxy Line Emission & Absorption Modeling), a Python tool for fitting Gaussian models to emission and absorption lines in large samples of 1D extragalactic spectra. GLEAM is tailored to work well in batch mode without much human interaction. With GLEAM, users can uniformly process a variety of spectra, including galaxies and active galactic nuclei, in a wide range of instrument setups and signal-to-noise regimes. GLEAM also takes advantage of multiprocessing capabilities to process spectra in parallel. With the goal of enabling reproducible workflows for its users, GLEAM employs a small number of input files, including a central, user-friendly configuration in which fitting constraints can be defined for groups of spectra and overrides can be specified for edge cases. For each spectrum, GLEAM produces a table containing measurements and error bars for the detected spectral lines and continuum, and upper limits for non-detections. For visual inspection and publishing, GLEAM can also produce plots of the data with fitted lines overlaid. In the present paper, we describe GLEAM's main features, the necessary inputs, expected outputs, and some example applications, including thorough tests on a large sample of optical/infra-red multi-object spectroscopic observations and integral field spectroscopic data. gleam is developed as an open-source project hosted at this https URL and welcomes community contributions. ",GLEAM: Galaxy Line Emission & Absorption Modeling,6,"['I am very happy to share my new paper with @victor_savu, introducing GLEAM (Galaxy Line Emission & Absorption Modeling), a Python tool we wrote for fitting Gaussian models to emission and absorption lines in large samples of 1D extragalactic spectra. <LINK> 1/3', 'With GLEAM, you can uniformly process a variety of spectra, including galaxies and AGN, in a wide range of instrument setups and signal-to-noise regimes. With a goal to enable reproducible workflows for users, we designed GLEAM to be easy to install and use. 2/3', 'Check out the paper on ArXiv and contribute to our open-source project on Github! https://t.co/OK2XT9T6gO 3/3', ""Also, this is @victor_savu's first refereed paper! I am very thankful to @victor_savu for the expertise on all things code development. Software developer turned astronomer? 
4/3"", '@franco_vazza @victor_savu I worked through a number of possible acronyms and this one just fit the best. I do love your LOFAR acronym!', '@MarculewiczM @victor_savu Really glad to hear that you found it useful! Keep in touch if you have any questions or feedback!']",21,01,1057 |
122,127,1257586464872955905,633108360,Matteo Cadeddu,"Phase II started in Italy just yesterday. Since we are following strictly the rules we decided that our new paper can finally ""go out""! Physics results from the first #COHERENT observation of #CEνNS in Ar and combination with CsI. #COVID19 <LINK> @COHERENT_NUS <LINK>",https://arxiv.org/abs/2005.01645,"We present the results on the radius of the neutron distribution in $^{40}\text{Ar}$, on the low-energy value of the weak mixing angle, and on the electromagnetic properties of neutrinos obtained from the analysis of the coherent neutrino-nucleus elastic scattering data in argon recently published by the COHERENT collaboration, taking into account proper radiative corrections. We present also the results of the combined analysis of the COHERENT argon and cesium-iodide data for the determination of the low-energy value of the weak mixing angle and the electromagnetic properties of neutrinos. In particular, the COHERENT argon data allow us to improve significantly the only existing laboratory bounds on the electric charge $q_{\mu\mu}$ of the muon neutrino and on the transition electric charge $q_{\mu\tau}$. ","Physics results from the first COHERENT observation of CE$\nu$NS in |
argon and their combination with cesium-iodide data",1,"['Phase II started in Italy just yesterday. Since we are following strictly the rules we decided that our new paper can finally ""go out""! Physics results from the first #COHERENT observation of #CEνNS in Ar and combination with CsI. #COVID19 \n<LINK>\n\n@COHERENT_NUS <LINK>']",20,05,267 |
123,8,1366218659379613700,97939183,Yuandong Tian,"Our AIStats'21 paper ""Understanding Robustness in Teacher-Student Setting: A New Perspective"" is on arXiv now: <LINK>. <LINK> 1/ Assuming that the ground truth labels are the outputs of a hidden fixed teacher network, we study what vulnerability a student network could have. Extending our previous work (<LINK>), we build our empirical model of learned student weight as the following: <LINK> 2/ Here, w_j is the ""ground-truth"" teacher weight, plus two additional residual terms (in-plane eps_in and out-plane eps_out) that trigger adversarial samples. Here ""plane"" refers to input data subspace X. Our theorems relate these residual terms to input data distribution. 3/ Empirically, the robustness of a student model is indeed highly correlated with the two residual terms in standard/adversarial training, etc. Furthermore, adversarial training and data augmentation leads to smaller residual terms by increasing the rank of the data subspace.",https://arxiv.org/abs/2102.13170,"Adversarial examples have appeared as a ubiquitous property of machine learning models where bounded adversarial perturbation could mislead the models to make arbitrarily incorrect predictions. Such examples provide a way to assess the robustness of machine learning models as well as a proxy for understanding the model training process. Extensive studies try to explain the existence of adversarial examples and provide ways to improve model robustness (e.g. adversarial training). While they mostly focus on models trained on datasets with predefined labels, we leverage the teacher-student framework and assume a teacher model, or oracle, to provide the labels for given instances. We extend Tian (2019) in the case of low-rank input data and show that student specialization (trained student neuron is highly correlated with certain teacher neuron at the same layer) still happens within the input subspace, but the teacher and student nodes could differ wildly out of the data subspace, which we conjecture leads to adversarial examples. Extensive experiments show that student specialization correlates strongly with model robustness in different scenarios, including student trained via standard training, adversarial training, confidence-calibrated adversarial training, and training with robust feature dataset. Our studies could shed light on the future exploration about adversarial examples, and enhancing model robustness via principled data augmentation. ",Understanding Robustness in Teacher-Student Setting: A New Perspective,4,"['Our AIStats\'21 paper ""Understanding Robustness in Teacher-Student Setting: A New Perspective"" is on arXiv now: <LINK>. <LINK>', '1/ Assuming that the ground truth labels are the outputs of a hidden fixed teacher network, we study what vulnerability a student network could have. Extending our previous work (https://t.co/Pm9pbXPvGD), we build our empirical model of learned student weight as the following: https://t.co/FIiQ2bswxl', '2/ Here, w_j is the ""ground-truth"" teacher weight, plus two additional residual terms (in-plane eps_in and out-plane eps_out) that trigger adversarial samples. Here ""plane"" refers to input data subspace X. Our theorems relate these residual terms to input data distribution.', '3/ Empirically, the robustness of a student model is indeed highly correlated with the two residual terms in standard/adversarial training, etc. 
Furthermore, adversarial training and data augmentation lead to smaller residual terms by increasing the rank of the data subspace.']",21,02,946 |
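The decomposition described in the thread above (points 2/ and 3/), where the learned student weight equals the teacher weight w_j plus an in-plane residual eps_in inside the input data subspace X and an out-of-plane residual eps_out orthogonal to it, can be illustrated numerically. The paper's exact empirical model is given in a figure that is not reproduced here, so the projection-based sketch below, including all variable names and noise levels, is an assumption for illustration only.

```python
# Illustrative sketch only (not the paper's code): split a student weight into a
# teacher weight plus in-plane / out-of-plane residuals w.r.t. a low-rank data subspace X.
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 8                                  # ambient input dimension, rank of the data subspace

X = rng.standard_normal((d, r))               # columns span the (low-rank) data subspace
Q, _ = np.linalg.qr(X)                        # orthonormal basis of that subspace
P_in = Q @ Q.T                                # projector onto the data subspace ("in-plane")
P_out = np.eye(d) - P_in                      # projector onto its orthogonal complement

w_teacher = rng.standard_normal(d)            # hypothetical "ground-truth" teacher weight w_j
w_student = w_teacher + 0.05 * rng.standard_normal(d)   # toy stand-in for a trained student weight

residual = w_student - w_teacher
eps_in = P_in @ residual                      # in-plane residual: visible to in-subspace inputs
eps_out = P_out @ residual                    # out-of-plane residual: only excited off the data subspace

print("||eps_in|| :", np.linalg.norm(eps_in))
print("||eps_out||:", np.linalg.norm(eps_out))
```

The thread's empirical claim is then that larger residuals (especially the out-of-plane part) go hand in hand with adversarial vulnerability, and that adversarial training or data augmentation shrinks them by raising the rank r of the data subspace.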
124,192,1372902714464739329,839948365622300672,Juri Smirnov 🌻,"New Paper(s) on the exciting topic of dark #baryons! With P. Asadi, E. D. Kramer, @EKuflik, Gr. W. Ridgway and T.R. Slatyer, thank you all for a fun project! The idea: <LINK> The details: <LINK> Quick video Summary: <LINK> In a nutshell: 1) In the heavy #quark regime the #phasetransition is the same as in pure Yang-Mills theories and so it is 1. Order. It proceeds by #bubble nucleation and growth. The quarks, that are frozen out at this point, are squeezed in pockets of the wrong vacuum. <LINK> 2) This results in a re-coupling of interactions and a second stage of massive quark and anti-quark depletion. However, in each pocket, just accidentally there will be quarks or anti-quarks that will not have a partner antiparticle to annihilate with ;-( <LINK> 3) At the end of the day the correct #darkmatter abundance is reproduced at very large dark matter masses. So the new dark matter forming baryons could have significant interactions with the standard model. It’s intriguing to think of new #experiments that will look for those! <LINK> I will discuss the work in our informal Lunch Seminar at 11:45 EST. If you are interested, come and hang out with us @osuccapp details: <LINK>",https://arxiv.org/abs/2103.09822,"We study the effect of a first-order phase transition in a confining $SU(N)$ dark sector with heavy dark quarks. The baryons of this sector are the dark matter candidate. During the confinement phase transition the heavy quarks are trapped inside isolated, contracting pockets of the deconfined phase, giving rise to a second stage of annihilation that dramatically suppresses the dark quark abundance. The surviving abundance is determined by the local accidental asymmetry in each pocket. The correct dark matter abundance is obtained for $\mathcal{O}(1-100)$ PeV dark quarks, above the usual unitarity bound. ",Accidentally Asymmetric Dark Matter,5,"['New Paper(s) on the exciting topic of dark #baryons! \n\nWith P. Asadi, E. D. Kramer, @EKuflik, Gr. W. Ridgway and T.R. Slatyer, thank you all for a fun project! \n\nThe idea: <LINK>\n\nThe details: <LINK>\n\nQuick video Summary: <LINK>', 'In a nutshell:\n\n1) In the heavy #quark regime the #phasetransition is the same as in pure Yang-Mills theories and so it is 1. Order.\nIt proceeds by #bubble nucleation and growth. The quarks, that are frozen out at this point, are squeezed in pockets of the wrong vacuum. https://t.co/D4C3RoIloc', '2) This results in a re-coupling of interactions and a second stage of massive quark and anti-quark depletion. However, in each pocket, just accidentally there will be quarks or anti-quarks that will not have a partner antiparticle to annihilate with ;-( https://t.co/4Fjy8A0fHR', '3) At the end of the day the correct #darkmatter abundance is reproduced at very large dark matter masses. So the new dark matter forming baryons could have significant interactions with the standard model. It’s intriguing to think of new #experiments that will look for those! https://t.co/laZYN59R6e', 'I will discuss the work in our informal Lunch Seminar at 11:45 EST.\nIf you are interested, come and hang out with us @osuccapp \ndetails: https://t.co/VBFiyku47C']",21,03,1192 |
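The "accidental asymmetry" mechanism in point 2) of the thread, where each deconfined pocket traps a fluctuating number of heavy quarks and antiquarks and only the unpaired excess survives the second annihilation stage, can be mimicked with a toy Monte Carlo. This is purely illustrative: the pocket count and mean occupancy below are invented numbers, not values from the paper.

```python
# Toy Monte Carlo of the "accidental asymmetry" left over in deconfined pockets.
# All numbers are illustrative and are not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_pockets = 100_000       # number of shrinking deconfined-phase pockets (hypothetical)
mean_quarks = 1_000.0     # mean number of heavy quarks (and of antiquarks) per pocket (hypothetical)

n_q = rng.poisson(mean_quarks, n_pockets)       # quarks trapped in each pocket
n_qbar = rng.poisson(mean_quarks, n_pockets)    # antiquarks trapped in each pocket

survivors = np.abs(n_q - n_qbar)                # unpaired excess that cannot annihilate away
fraction = survivors.mean() / (n_q + n_qbar).mean()

# The difference of two Poisson(mu) counts has standard deviation sqrt(2*mu), so the
# surviving fraction scales like 1/sqrt(mu): larger pockets mean stronger suppression.
expected = np.sqrt(2 / np.pi) / np.sqrt(2 * mean_quarks)
print(f"mean survivors per pocket: {survivors.mean():.1f}")
print(f"surviving fraction: {fraction:.2e} (Gaussian estimate {expected:.2e})")
```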
125,191,1326067640306192384,2401274228,tvayer,Happy to share the manuscript of my PhD thesis! <LINK> Contains detailed version of my works past 3yea. New results in Ch4 about Gromov-Wasserstein: we study the problem of finding an optimal Monge map + closed form linear Gromov Monge between Gaussian.,https://arxiv.org/abs/2011.04447,"Optimal Transport is a theory that allows to define geometrical notions of distance between probability distributions and to find correspondences, relationships, between sets of points. Many machine learning applications are derived from this theory, at the frontier between mathematics and optimization. This thesis proposes to study the complex scenario in which the different data belong to incomparable spaces. In particular we address the following questions: how to define and apply Optimal Transport between graphs, between structured data? How can it be adapted when the data are varied and not embedded in the same metric space? This thesis proposes a set of Optimal Transport tools for these different cases. An important part is notably devoted to the study of the Gromov-Wasserstein distance whose properties allow to define interesting transport problems on incomparable spaces. More broadly, we analyze the mathematical properties of the various proposed tools, we establish algorithmic solutions to compute them and we study their applicability in numerous machine learning scenarii which cover, in particular, classification, simplification, partitioning of structured data, as well as heterogeneous domain adaptation. ",A contribution to Optimal Transport on incomparable spaces,1,['Happy to share the manuscript of my PhD thesis! <LINK>\nContains detailed version of my works past 3yea. New results in Ch4 about Gromov-Wasserstein: we study the problem of finding an optimal Monge map + closed form linear Gromov Monge between Gaussian.'],20,11,253 |
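For readers who want to try Gromov-Wasserstein numerically, here is a minimal sketch with the POT library (pip install pot). It computes a GW coupling between two point clouds living in incomparable spaces (R^2 and R^3) from their intra-domain distance matrices; it is a generic illustration, not the thesis's closed-form Gaussian / linear Gromov-Monge result.

```python
# Minimal Gromov-Wasserstein sketch with POT (pip install pot); generic illustration only.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
xs = rng.standard_normal((30, 2))     # source samples in R^2
xt = rng.standard_normal((40, 3))     # target samples in R^3 (an incomparable space)

C1 = ot.dist(xs, xs)                  # pairwise intra-source distances
C2 = ot.dist(xt, xt)                  # pairwise intra-target distances
C1 /= C1.max()
C2 /= C2.max()

p = ot.unif(len(xs))                  # uniform weights on source points
q = ot.unif(len(xt))                  # uniform weights on target points

# GW only compares distances *within* each space, so no cross-space metric is needed.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
print("coupling shape:", T.shape)     # (30, 40): soft correspondences between the two clouds
print("row marginals ok:", np.allclose(T.sum(axis=1), p))
```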
126,139,1511606594034970628,1063135448493686785,Sergio Llana,"IS IT WORTH THE EFFORT? New paper to understand players and teams’ high-intensity efforts. Estimation of the impact of a player’s run on his team’s possession value. Developed with 500+ matches of tracking data of the big-5 European competitions. Link: <LINK> <LINK> We detect the players’ runs from tracking data and we contextualize them thanks to different layers of tactical concepts to better understand high-intensity efforts. Some of the concepts taken into account are the possession phases, defensive lines, and the attack/defense types. <LINK> Do teams with higher possession percentages run less? Do all players run in the same way? We create team and player profiles that mix tactical and physical metrics derived from HI runs. As an example, we discuss the relationship between more ball possession and running less. <LINK> How much does a player depend on his speed to add value? We present examples of how different strikers tend to do on-ball actions at different speeds. But we highlight those who rely on their HI runs to add value from those who are consistent in adding value at any speed. <LINK> What is the impact of the HI runs on the possession value? We aim to distinguish the players who run more from the ones who run better by estimating the impact of their high-intensity runs on the increase in their team's possession value (via EPV). <LINK> We see this work as a new path of research thanks to the impact on the different profiles of experts in a soccer club. From players and coaches to scouts and even physical coaches. Co-authors: @BorjaBurriel @PauMadrero @JaviOnData Data: @SkillCorner @StatsBomb",https://arxiv.org/abs/2204.02313,"We present a framework that gives a deep insight into the link between physical and technical-tactical aspects of soccer and it allows associating physical performance with value generation thanks to a top-down approach. First, we estimate physical indicators from tracking data. Then, we contextualize each player's run to understand better the purpose and circumstances in which it is done, adding a new dimension to the creation of team and player profiles. Finally, we assess the value-added by off-ball high-intensity runs by linking with a possession-value model. This novel approach allows answering practical questions from very different profiles of practitioners within a soccer club, from analysts, coaches, and scouts to physical coaches and readaptation physiotherapists. ","Is it worth the effort? Understanding and contextualizing physical |
metrics in soccer",6,"['IS IT WORTH THE EFFORT?\n\nNew paper to understand players and teams’ high-intensity efforts.\nEstimation of the impact of a player’s run on his team’s possession value.\nDeveloped with 500+ matches of tracking data of the big-5 European competitions.\n\nLink: <LINK> <LINK>', 'We detect the players’ runs from tracking data and we contextualize them thanks to different layers of tactical concepts to better understand high-intensity efforts. Some of the concepts taken into account are the possession phases, defensive lines, and the attack/defense types. https://t.co/hkQrq4Q5j8', 'Do teams with higher possession percentages run less? Do all players run in the same way?\n\nWe create team and player profiles that mix tactical and physical metrics derived from HI runs. As an example, we discuss the relationship between more ball possession and running less. https://t.co/tGlKpE6Pe5', 'How much does a player depend on his speed to add value?\n\nWe present examples of how different strikers tend to do on-ball actions at different speeds. But we highlight those who rely on their HI runs to add value from those who are consistent in adding value at any speed. https://t.co/cMbp4uEJGi', ""What is the impact of the HI runs on the possession value?\n\nWe aim to distinguish the players who run more from the ones who run better by estimating the impact of their high-intensity runs on the increase in their team's possession value (via EPV). https://t.co/ag0x5NRfTW"", 'We see this work as a new path of research thanks to the impact on the different profiles of experts in a soccer club. From players and coaches to scouts and even physical coaches.\n\nCo-authors: @BorjaBurriel @PauMadrero @JaviOnData\nData: @SkillCorner @StatsBomb']",22,04,1633 |
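The first step described in the thread, detecting a player's high-intensity runs from tracking data, essentially reduces to thresholding the player's speed and keeping sufficiently long bursts. The sketch below shows that idea on a single (x, y) trajectory; the 25 Hz sampling rate, the 5.5 m/s threshold and the minimum duration are common choices assumed here, not the paper's exact definitions.

```python
# Sketch: flag one player's high-intensity runs from positional tracking data.
# Sampling rate, speed threshold and minimum duration are assumptions, not the paper's values.
import numpy as np

FPS = 25.0             # tracking frames per second (assumed)
HI_SPEED = 5.5         # high-intensity threshold in m/s, roughly 19.8 km/h (assumed)
MIN_DURATION_S = 0.7   # discard bursts shorter than this (assumed)

def high_intensity_runs(xy):
    """Return (start_frame, end_frame) pairs where the speed stays above HI_SPEED."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * FPS   # metres/frame -> m/s
    fast = speed > HI_SPEED
    runs, start = [], None
    for i, flag in enumerate(fast):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / FPS >= MIN_DURATION_S:
                runs.append((start, i))
            start = None
    if start is not None and (len(fast) - start) / FPS >= MIN_DURATION_S:
        runs.append((start, len(fast)))
    return runs

# Example: a player jogging at 2 m/s, sprinting at 7 m/s between seconds 2 and 4, then jogging.
t = np.arange(0, 6, 1 / FPS)
vx = np.where((t > 2) & (t < 4), 7.0, 2.0)
xy = np.column_stack([np.cumsum(vx) / FPS, np.zeros_like(t)])
print(high_intensity_runs(xy))   # -> [(50, 99)]
```

Contextualising those runs (possession phase, defensive line attacked, attack type) and weighting them with a possession-value model such as EPV is what the paper layers on top of this basic detection step.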
127,203,1505928221673545728,991687128387149824,Ajad Chhatkuli,"Our ICLR 2022 work. We propose quite a different (old) perspective to boundary detection by using a vector field label representation and its vector divergence: <LINK> Label representations can affect learning more than we think, as seen in implicit functions.",https://arxiv.org/abs/2203.08795,"Boundaries are among the primary visual cues used by human and computer vision systems. One of the key problems in boundary detection is the label representation, which typically leads to class imbalance and, as a consequence, to thick boundaries that require non-differential post-processing steps to be thinned. In this paper, we re-interpret boundaries as 1-D surfaces and formulate a one-to-one vector transform function that allows for training of boundary prediction completely avoiding the class imbalance issue. Specifically, we define the boundary representation at any point as the unit vector pointing to the closest boundary surface. Our problem formulation leads to the estimation of direction as well as richer contextual information of the boundary, and, if desired, the availability of zero-pixel thin boundaries also at training time. Our method uses no hyper-parameter in the training loss and a fixed stable hyper-parameter at inference. We provide theoretical justification/discussions of the vector transform representation. We evaluate the proposed loss method using a standard architecture and show the excellent performance over other losses and representations on several datasets. ",Zero Pixel Directional Boundary by Vector Transform,1,"['Our ICLR 2022 work. We propose quite a different (old) perspective to boundary detection by using a vector field label representation and its vector divergence:\n<LINK>\nLabel representations can affect learning more than we think, as seen in implicit functions.']",22,03,260 |
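The label representation described in the abstract, each pixel carrying the unit vector that points to its closest boundary pixel, can be built directly with SciPy's Euclidean distance transform. The sketch below is a generic implementation of that idea, not the authors' code.

```python
# Sketch: build a "vector transform" target from a binary boundary mask, where each
# pixel stores the unit vector pointing to its nearest boundary pixel. Generic, not the authors' code.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_vector_field(boundary):
    """boundary: HxW boolean mask, True on boundary pixels. Returns an HxWx2 field of unit vectors."""
    # distance_transform_edt measures distance to the nearest zero, so invert the mask.
    _, nearest = distance_transform_edt(~boundary, return_indices=True)
    rows, cols = np.indices(boundary.shape)
    vec = np.stack([nearest[0] - rows, nearest[1] - cols], axis=-1).astype(float)
    norm = np.maximum(np.linalg.norm(vec, axis=-1, keepdims=True), 1e-12)
    return vec / norm   # boundary pixels point to themselves, so their vector stays (0, 0)

mask = np.zeros((5, 5), dtype=bool)
mask[2, :] = True                  # a horizontal boundary along row 2
field = boundary_vector_field(mask)
print(field[0, 3])                 # [1. 0.]: points towards row 2 (increasing row index)
print(field[4, 1])                 # [-1. 0.]: points towards row 2 (decreasing row index)
```

Because every pixel gets a meaningful regression target under this representation, the class imbalance between boundary and non-boundary pixels that the abstract highlights no longer arises.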
128,44,1508718414055059458,1271852576296906755,Ashley Chrimes,"New paper on arxiv today! We find NIR emission at the location of 6 Galactic magnetars for the first time, and show that some known NIR counterparts are highly variable. We also discuss the nature of the NIR emission - more to come on that, so stay tuned! <LINK>",https://arxiv.org/abs/2203.14947,"We report the discovery of six new magnetar counterpart candidates from deep near-infrared Hubble Space Telescope imaging. The new candidates are among a sample of nineteen magnetars for which we present HST data obtained between 2018-2020. We confirm the variability of previously established near-infrared counterparts, and newly identify candidates for PSRJ1622-4950, SwiftJ1822.3-1606, CXOUJ171405.7-381031, SwiftJ1833-0832, SwiftJ1834.9-0846 and AXJ1818.8-1559 based on their proximity to X-ray localisations. The new candidates are compared with the existing counterpart population in terms of their colours, magnitudes, and near-infrared to X-ray spectral indices. We find two candidates for AXJ1818.8-1559 which are both consistent with previously established counterparts. The other new candidates are likely to be chance alignments, or otherwise have a different origin for their near-infrared emission not previously seen in magnetar counterparts. Further observations and studies of these candidates are needed to firmly establish their nature. ","New candidates for magnetar counterparts from a deep search with the |