multi-player games",3,"[""New paper with Sarath Pattathil &amp; Costis Daskalakis: <LINK>.\nWe answer the question: at what rate can players' actions converge to equilibrium if each plays according a no-regret algorithm in a smooth monotone game?"", 'We study the no-regret Optimistic Gradient (OG) algorithm, and show that its T-th iterate converges at a rate of 1/sqrt(T). We also prove a matching lower bound.\nPrevious work established either rates for on-average convergence or showed last-iterate convergence but without rates', 'To prove our upper bound we introduce a potential function that depends on the global structure of the game -- we call it an adaptive potential function. ""System-augmentation"" type approaches previously used for showing convergence of OG (w/o rates) seem not to work.']",20,10,764
390,119,1422690450905780227,88627644,Ray Norris,"After 13 years of hard work, we've published the first deep survey from Evolutionary Map of the Universe. Lots of new science! Many thanks to all the team who contributed so much to this paper, and to all the amazing CSIRO engineers who built ASKAP. <LINK> @astroduff Many thanks Alan! Yes it’s all paying off now!",http://arxiv.org/abs/2108.00569,"We present the data and initial results from the first Pilot Survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers 270 \sqdeg of an area covered by the Dark Energy Survey, reaching a depth of 25--30 \ujybm\ rms at a spatial resolution of $\sim$ 11--18 arcsec, resulting in a catalogue of $\sim$ 220,000 sources, of which $\sim$ 180,000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface-brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. ",The Evolutionary Map of the Universe Pilot Survey,2,"[""After 13 years of hard work, we've published the first deep survey from Evolutionary Map of the Universe. Lots of new science! Many thanks to all the team who contributed so much to this paper, and to all the amazing CSIRO engineers who built ASKAP. <LINK>"", '@astroduff Many thanks Alan! Yes it’s all paying off now!']",21,08,314
391,213,1250371864251285506,702919640,Subir Sachdev,"We propose that ""the time reparameterization soft mode offers a promising and generic mechanism for resolving the strange metal puzzle"". The resistivity exponent is pinned at 1 by the `graviton', and can't vary with details, as in previous proposals. <LINK>",https://arxiv.org/abs/2004.05182,"The most puzzling aspect of the 'strange metal' behavior of correlated electron compounds is that the linear in temperature resistivity often extends down to low temperatures, lower than natural microscopic energy scales. We consider recently proposed deconfined critical points (or phases) in models of electrons in large dimension lattices with random nearest-neighbor exchange interactions. The criticality is in the class of Sachdev-Ye-Kitaev models, and exhibits a time reparameterization soft mode representing gravity in dual holographic theories. We compute the low temperature resistivity in a large $M$ limit of models with SU($M$) spin symmetry, and find that the dominant temperature dependence arises from this soft mode. The resistivity is linear in temperature down to zero temperature at the critical point, with a co-efficient universally proportional to the product of the residual resistivity and the co-efficient of the linear in temperature specific heat. We argue that the time reparameterization soft mode offers a promising and generic mechanism for resolving the strange metal puzzle. ","Linear in temperature resistivity in the limit of zero temperature from
the time reparameterization soft mode",1,"['We propose that ""the time reparameterization soft mode offers a promising and generic mechanism for resolving the strange metal puzzle"". The resistivity exponent is pinned at 1 by the `graviton\', and can\'t vary with details, as in previous proposals. <LINK>']",20,04,257
392,127,1179409574815830016,1177063549606203394,Tommi Tenkanen,"A new paper out: <LINK> If cosmic inflation lasted for a long time, then the observed length scales originate from modes smaller than the Planck length. This has serious consequences on dark matter which has its origins in quantum fluctuations during inflation.",https://arxiv.org/abs/1910.00521,"If the inflationary phase lasted longer than the minimal period, the length scales observed today originate from modes that were smaller than the Planck length during inflation. It was recently argued that this ""trans-Planckian problem"" can never arise in a consistent string theory framework, which places a stringent constraint on the energy scale of inflation, $V^{1/4}\lesssim 10^9$ GeV. In this paper, we show that this requirement corresponds to a very small Hubble scale during inflation, $H_{\rm inf}\lesssim 1$ GeV, and therefore has serious consequences on scenarios where the dark matter density was generated by amplification of quantum fluctuations during inflation. We also present a class of inflationary models which both satisfy the above limit for the scale of inflation and are in perfect agreement with observational data. ","Trans-Planckian Censorship, Inflation and Dark Matter",1,"['A new paper out: <LINK> \nIf cosmic inflation lasted for a long time, then the observed length scales originate from modes smaller than the Planck length. This has serious consequences on dark matter which has its origins in quantum fluctuations during inflation.']",19,10,261
393,160,1481370712795496452,256513537,Dr Chiara Mingarelli,"The International Pulsar Timing Array has just published our first gravitational-wave search on DR2! While DR2 contains @NANOGrav's 9-yr data, combined with older EPTA and PPTA data, we still find a correlated red noise signal (like NG's 12.5yr result!) <LINK> <LINK> Using @jacaseyclyde's latest results, we can also interpret this signal in terms of the minimum black hole mass and number density of black holes contributing to a potential gravitational-wave background signal, as well as the volume of the background. <LINK> What about more recent data? Never fear, we have also compared this IPTA result to the latest PTA data, and it looks great! The next big step is combining some new datasets for an IPTA DR3. This huge effort is being led by Deborah Good, a @FlatironCCA and @UConn postdoc. <LINK>",https://arxiv.org/abs/2201.03980,"We searched for an isotropic stochastic gravitational wave background in the second data release of the International Pulsar Timing Array, a global collaboration synthesizing decadal-length pulsar-timing campaigns in North America, Europe, and Australia. In our reference search for a power law strain spectrum of the form $h_c = A(f/1\,\mathrm{yr}^{-1})^{\alpha}$, we found strong evidence for a spectrally-similar low-frequency stochastic process of amplitude $A = 3.8^{+6.3}_{-2.5}\times10^{-15}$ and spectral index $\alpha = -0.5 \pm 0.5$, where the uncertainties represent 95\% credible regions, using information from the auto- and cross-correlation terms between the pulsars in the array. For a spectral index of $\alpha = -2/3$, as expected from a population of inspiralling supermassive black hole binaries, the recovered amplitude is $A = 2.8^{+1.2}_{-0.8}\times10^{-15}$. Nonetheless, no significant evidence of the Hellings-Downs correlations that would indicate a gravitational-wave origin was found. We also analyzed the constituent data from the individual pulsar timing arrays in a consistent way, and clearly demonstrate that the combined international data set is more sensitive. Furthermore, we demonstrate that this combined data set produces comparable constraints to recent single-array data sets which have more data than the constituent parts of the combination. Future international data releases will deliver increased sensitivity to gravitational wave radiation, and significantly increase the detection probability. ","The International Pulsar Timing Array second data release: Search for an
isotropic Gravitational Wave Background",3,"[""The International Pulsar Timing Array has just published our first gravitational-wave search on DR2! While DR2 contains @NANOGrav's 9-yr data, combined with older EPTA and PPTA data, we still find a correlated red noise signal (like NG's 12.5yr result!) <LINK> <LINK>"", ""Using @jacaseyclyde's latest results, we can also interpret this signal in terms of the minimum black hole mass and number density of black holes contributing to a potential gravitational-wave background signal, as well as the volume of the background. https://t.co/Z8yiIqmEgX"", 'What about more recent data? Never fear, we have also compared this IPTA result to the latest PTA data, and it looks great!\n\nThe next big step is combining some new datasets for an IPTA DR3. This huge effort is being led by Deborah Good, a @FlatironCCA and @UConn postdoc. https://t.co/oqZeVA5xDb']",22,01,806
394,105,1392322029030973440,3250001664,Colin Hill,"New paper led by Leander Thiele + @yilun_guan, w/ A. Kosowsky + @DavidSpergel: <LINK> We show that small-scale baryon clumping models (e.g., primordial magnetic fields) aiming to resolve the H0 tension are highly constrained by @ACT_Pol data (+ @Planck) #H0 #CMB Key result in Fig. 2: <LINK> @nu_phases Planck does do quite well on its own. ACT's most important role here is as an independent test of the model; ACT's constraining power comes from very different range in TT+TE+EE than Planck. If the model were true, one would expect shift in ACT params w.r.t. Planck. But not seen @nu_phases (and yes, adding ACT does also further cut the high-H0 tail of the posteriors)",https://arxiv.org/abs/2105.03003,"Small-scale inhomogeneities in the baryon density around recombination have been proposed as a solution to the tension between local and global determinations of the Hubble constant. These baryon clumping models make distinct predictions for the cosmic microwave background anisotropy power spectra on small angular scales. We use recent data from the Atacama Cosmology Telescope to test these predictions. No evidence for baryon clumping is found, assuming a range of parameterizations for time-independent baryon density probability distribution functions. The inferred Hubble constant remains in significant tension with the SH0ES measurement. ","Can small-scale baryon inhomogeneities resolve the Hubble tension? An
investigation with ACT DR4",4,"['New paper led by Leander Thiele + @yilun_guan, w/ A. Kosowsky + @DavidSpergel: <LINK> We show that small-scale baryon clumping models (e.g., primordial magnetic fields) aiming to resolve the H0 tension are highly constrained by @ACT_Pol data (+ @Planck) #H0 #CMB', 'Key result in Fig. 2: https://t.co/ykUj0NRtuD', ""@nu_phases Planck does do quite well on its own. ACT's most important role here is as an independent test of the model; ACT's constraining power comes from very different range in TT+TE+EE than Planck. If the model were true, one would expect shift in ACT params w.r.t. Planck. But not seen"", '@nu_phases (and yes, adding ACT does also further cut the high-H0 tail of the posteriors)']",21,05,672
395,75,1394614280478240771,23980621,"Brett Morris, PhD","New @ESA_CHEOPS GO program paper by me, @KevinHeng1, @42Lendl and more: hunting for planetesimals orbiting white dwarfs! <LINK> <LINK> Back in 2019, @KevinHeng1 and I started talking about our mutual interest in white dwarfs. White dwarfs are the white-hot embers of stellar cores that remain into the stellar afterlife, after a star swells into a giant and blows off its outer layers. One of my favorite parts of this job is using astrophysics as a time machine. By looking at young stars we can ""go back in time"" to see what the young Sun might have been like. But we can also use this trick to see the future: the Sun one day will leave behind a white dwarf. <LINK> Since the stars which become white dwarfs often have planets, we can use space telescopes like @ESA_CHEOPS to take precise photometry to hunt for transits of the planetary remnants orbiting white dwarfs, and foretell our Solar System's fate. <LINK> We observed six white dwarfs for 24 hours each, taking images of the WDs once per minute. @ESA_CHEOPS measured the brightness of the white dwarfs precisely enough to see changes of only two parts per thousand. And, we found... *drumroll please*... <LINK> ...nothing! The white dwarfs didn't show any transit events, which could mean one of a few things: either they don't have transiting material on short orbital periods of a few hours, that material is quite small, or that material is inclined so we can't see it. We would have been most sensitive to material orbiting on periods of 3-8 hours, with significant drops in detection efficiency near multiples of the @ESA_CHEOPS orbital period of 100 minutes... <LINK> ...and we would have been sensitive to incredibly small planetesimals, with great detection efficiency over objects 1000 km or larger: about half the size of the Earth's Moon! 🌒 <LINK> Though we didn't catch anything, we're super thrilled and grateful for the opportunity to go fishing with this exquisite photometric machine. And of course, lots more @ESA_CHEOPS results are in the pipeline from the GTO side, so get ready, here they come! <LINK> @dstndstn @ESA_CHEOPS @KevinHeng1 @42Lendl I love this rendering too 😅",https://arxiv.org/abs/2105.07987,"White dwarf spectroscopy shows that nearly half of white dwarf atmospheres contain metals that must have been accreted from planetary material that survived the red giant phases of stellar evolution. We can use metal pollution in white dwarf atmospheres as flags, signalling recent accretion, in order to prioritize an efficient sample of white dwarfs to search for transiting material. We present a search for planetesimals orbiting six nearby white dwarfs with the CHEOPS spacecraft. The targets are relatively faint for CHEOPS, $11$ mag $< G < 12.8$ mag. We use aperture photometry data products from the CHEOPS mission as well as custom PSF photometry to search for periodic variations in flux due to transiting planetesimals. We detect no significant variations in flux that cannot be attributed to spacecraft systematics, despite reaching a photometric precision of $<2$ ppt in 60 s exposures on each target. We simulate observations to show that the small survey is sensitive primarily to Moon-sized transiting objects with periods $3$ hr $< P < 10$ hr, with radii $R \gtrsim 1000$ km. 
",A CHEOPS White Dwarf Transit Search,10,"['New @ESA_CHEOPS GO program paper by me, @KevinHeng1, @42Lendl and more: hunting for planetesimals orbiting white dwarfs!\n\n<LINK> <LINK>', 'Back in 2019, @KevinHeng1 and I started talking about our mutual interest in white dwarfs. White dwarfs are the white-hot embers of stellar cores that remain into the stellar afterlife, after a star swells into a giant and blows off its outer layers.', 'One of my favorite parts of this job is using astrophysics as a time machine. By looking at young stars we can ""go back in time"" to see what the young Sun might have been like. But we can also use this trick to see the future: the Sun one day will leave behind a white dwarf. https://t.co/da9ZkY7cEx', ""Since the stars which become white dwarfs often have planets, we can use space telescopes like @ESA_CHEOPS to take precise photometry to hunt for transits of the planetary remnants orbiting white dwarfs, and foretell our Solar System's fate. https://t.co/2AiF72gu5r"", 'We observed six white dwarfs for 24 hours each, taking images of the WDs once per minute. @ESA_CHEOPS measured the brightness of the white dwarfs precisely enough to see changes of only two parts per thousand. And, we found... *drumroll please*... https://t.co/ArOWIPyVQZ', ""...nothing! The white dwarfs didn't show any transit events, which could mean one of a few things: either they don't have transiting material on short orbital periods of a few hours, that material is quite small, or that material is inclined so we can't see it."", 'We would have been most sensitive to material orbiting on periods of 3-8 hours, with significant drops in detection efficiency near multiples of the @ESA_CHEOPS orbital period of 100 minutes... https://t.co/f9gMLYHb83', ""...and we would have been sensitive to incredibly small planetesimals, with great detection efficiency over objects 1000 km or larger: about half the size of the Earth's Moon! 🌒 https://t.co/3bKOOCnBZW"", ""Though we didn't catch anything, we're super thrilled and grateful for the opportunity to go fishing with this exquisite photometric machine. \n\nAnd of course, lots more @ESA_CHEOPS results are in the pipeline from the GTO side, so get ready, here they come! https://t.co/vEdtpVsLJk"", '@dstndstn @ESA_CHEOPS @KevinHeng1 @42Lendl I love this rendering too 😅']",21,05,2155
396,1,1357666238131023872,326864247,Amanda Clare,"In @jamesravey's new paper (accepted to #EACL2021 <LINK>) we look at how to find words/phrases/entities that refer to the same thing in two different documents. For example, a news article that reports on findings from a published science paper. This is a hard problem, for humans and for automation. We scratched our heads many times while hand-annotating the corpus (available from <LINK>). News language is different to science paper language, and terms have subtle but important semantics. Vector space embeddings that allow semantically related entities to be neighbours, don't provide good enough separation to distinguish whether they're co-referent or not. But we'll need this to determine how science is reported in the news: which parts and how it's slanted. This work was done with @xrysoflhs @ArieCattan and Ido Dagan and you can find more in @jamesravey's thread here: <LINK>",https://arxiv.org/abs/2101.12637,"Cross-document co-reference resolution (CDCR) is the task of identifying and linking mentions to entities and concepts across many text documents. Current state-of-the-art models for this task assume that all documents are of the same type (e.g. news articles) or fall under the same theme. However, it is also desirable to perform CDCR across different domains (type or theme). A particular use case we focus on in this paper is the resolution of entities mentioned across scientific work and newspaper articles that discuss them. Identifying the same entities and corresponding concepts in both scientific articles and news can help scientists understand how their work is represented in mainstream media. We propose a new task and English language dataset for cross-document cross-domain co-reference resolution (CD$^2$CR). The task aims to identify links between entities across heterogeneous document types. We show that in this cross-domain, cross-document setting, existing CDCR models do not perform well and we provide a baseline model that outperforms current state-of-the-art CDCR models on CD$^2$CR. Our data set, annotation tool and guidelines as well as our model for cross-document cross-domain co-reference are all supplied as open access open source resources. ",CD2CR: Co-reference Resolution Across Documents and Domains,4,"[""In @jamesravey's new paper (accepted to #EACL2021 <LINK>) we look at how to find words/phrases/entities that refer to the same thing in two different documents. For example, a news article that reports on findings from a published science paper."", 'This is a hard problem, for humans and for automation. We scratched our heads many times while hand-annotating the corpus (available from https://t.co/hXKytYqsmS). News language is different to science paper language, and terms have subtle but important semantics.', ""Vector space embeddings that allow semantically related entities to be neighbours, don't provide good enough separation to distinguish whether they're co-referent or not. But we'll need this to determine how science is reported in the news: which parts and how it's slanted."", ""This work was done with @xrysoflhs @ArieCattan and Ido Dagan and you can find more in @jamesravey's thread here: https://t.co/OyfjmhyXIs""]",21,01,888
397,179,1478243699771412481,1411871581,Huaxiu Yao,"Super-excited to share our new work about out-of-distribution robustness (<LINK>). We propose a simple mixup-based method to learn invariant functions via selective augmentation. Motivated by mixup, our method encourages learning invariant functions and cancel out domain-related information by (1) interpolating samples with the same label but different domains; (2) interpolating samples with the same domain but different labels. <LINK> Our method is easy to implement and well-suited to domain shifts and subpopulation shifts. The results are cool in nine benchmarks in domain shifts (left figure) and subpopulation shifts (right figure) <LINK> We also qualitatively show that our method leads to stronger invariant functions <LINK> Under a linear setting, we finally provide a theoretical analysis of the phenomena distilled from the empirical study and show our method leads to smaller worst-domain error compared with ERM and vanilla mixup. a wonderful collaboration w/ @__YuWang__, Sai Li, @zlj11112222, @liang_weixin, @james_y_zou, @chelseabfinn",https://arxiv.org/abs/2201.00299,"Machine learning algorithms typically assume that training and test examples are drawn from the same distribution. However, distribution shift is a common problem in real-world applications and can cause models to perform dramatically worse at test time. In this paper, we specifically consider the problems of subpopulation shifts (e.g., imbalanced data) and domain shifts. While prior works often seek to explicitly regularize internal representations or predictors of the model to be domain invariant, we instead aim to learn invariant predictors without restricting the model's internal representations or predictors. This leads to a simple mixup-based technique which learns invariant predictors via selective augmentation called LISA. LISA selectively interpolates samples either with the same labels but different domains or with the same domain but different labels. Empirically, we study the effectiveness of LISA on nine benchmarks ranging from subpopulation shifts to domain shifts, and we find that LISA consistently outperforms other state-of-the-art methods and leads to more invariant predictors. We further analyze a linear setting and theoretically show how LISA leads to a smaller worst-group error. ",Improving Out-of-Distribution Robustness via Selective Augmentation,6,"['Super-excited to share our new work about out-of-distribution robustness (<LINK>). \n\nWe propose a simple mixup-based method to learn invariant functions via selective augmentation.', 'Motivated by mixup, our method encourages learning invariant functions and cancel out domain-related information by\n\n(1) interpolating samples with the same label but different domains;\n\n(2) interpolating samples with the same domain but different labels. 
https://t.co/hJXGweTJwA', 'Our method is easy to implement and well-suited to domain shifts and subpopulation shifts.\n\nThe results are cool in nine benchmarks in domain shifts (left figure) and subpopulation shifts (right figure) https://t.co/0pLuhWzzmk', 'We also qualitatively show that our method leads to stronger invariant functions https://t.co/UtB7mg43vR', 'Under a linear setting, we finally provide a theoretical analysis of the phenomena distilled from the empirical study and show our method leads to smaller worst-domain error compared with ERM and vanilla mixup.', 'a wonderful collaboration w/ @__YuWang__, Sai Li, @zlj11112222, @liang_weixin, @james_y_zou, @chelseabfinn']",22,01,1055
398,4,1477851677831331840,974769773539155968,Daniel Baumann,"New paper on the ""Ionization of Gravitational Atoms"" with Giovanni Maria Tomaselli (a fantastic new PhD student), John Stout and @gfbertone, building on earlier work with @horngsheng and @RAPortoUY. <LINK> A short summary follows 👇🏼 Gravitational atoms are clouds of ultralight bosons generated by superradiance around rotating black holes. (The mathematical structure is almost the same as for the hydrogen atom.) <LINK> With Chia and Porto, we studied what would happen when such atoms are part of binary systems. We found that the gravitational perturbation due to the companion can induce resonant transition between the different states of the cloud. (A bit like Rabi oscillations for atoms.) <LINK> In the new paper, we study the transitions from bound to unbound states, which ionize the atom. (A bit like the photoelectric effect.) <LINK> We find that the effect can be large, with the power emitted in scalar waves overwhelming the power lost due to GW emission. The signal contains sharp features which arise when the bound state begins to resonate with the continuum. <LINK> Using the conservation of energy and angular momentum, we computed the backreaction on the orbital dynamics. We show that ionization (and accretion) shorten the merger time significantly. <LINK> The mass of the cloud and the companion develop a significant time dependence. <LINK> The sharp features in the ionized power lead to kinks in the frequency evolution of the GW signals. <LINK> More work needs to be done to develop waveforms for these systems that can be used in future data analysis. (We hope some of our friends working on GW data analysis may be inspired to look into this.) [All figures were made by John Stout.]",https://arxiv.org/abs/2112.14777,"Superradiant instabilities may create clouds of ultralight bosons around rotating black holes, forming so-called ""gravitational atoms."" It was recently shown that the presence of a binary companion can induce resonant transitions between bound states of these clouds, whose backreaction on the binary's orbit leads to characteristic signatures in the emitted gravitational waves. In this work, we show that the interaction with the companion can also trigger transitions from bound to unbound states of the cloud -- a process that we refer to as ""ionization"" in analogy with the photoelectric effect in atomic physics. The orbital energy lost in the process overwhelms the losses due to gravitational wave emission and contains sharp features carrying information about the energy spectrum of the cloud. Moreover, we also show that if the companion is a black hole, then the part of the cloud impinging on the event horizon will be absorbed. This ""accretion"" leads to a significant increase of the companion's mass, which alters the dynamical evolution and ensuing waveform of the binary. We argue that a combined treatment of resonances, ionization, and accretion is crucial to discover and characterize gravitational atoms with upcoming gravitational wave detectors. ",Ionization of Gravitational Atoms,10,"['New paper on the ""Ionization of Gravitational Atoms"" with Giovanni Maria Tomaselli (a fantastic new PhD student), John Stout and @gfbertone, building on earlier work with @horngsheng and @RAPortoUY.\n<LINK> A short summary follows 👇🏼', 'Gravitational atoms are clouds of ultralight bosons generated by superradiance around rotating black holes.\n(The mathematical structure is almost the same as for the hydrogen atom.) 
https://t.co/i1cQmS6OKA', 'With Chia and Porto, we studied what would happen when such atoms are part of binary systems.\nWe found that the gravitational perturbation due to the companion can induce resonant transition between the different states of the cloud. (A bit like Rabi oscillations for atoms.) https://t.co/C9zMF21g36', 'In the new paper, we study the transitions from bound to unbound states, which ionize the atom. \n(A bit like the photoelectric effect.) https://t.co/fljRTVFc1z', 'We find that the effect can be large, with the power emitted in scalar waves overwhelming the power lost due to GW emission. The signal contains sharp features which arise when the bound state begins to resonate with the continuum. https://t.co/1lwuO4UZ5K', 'Using the conservation of energy and angular momentum, we computed the backreaction on the orbital dynamics.\nWe show that ionization (and accretion) shorten the merger time significantly. https://t.co/uOUhv3uXPR', 'The mass of the cloud and the companion develop a significant time dependence. https://t.co/LmWjLQlQUj', 'The sharp features in the ionized power lead to kinks in the frequency evolution of the GW signals. https://t.co/6Tm5W4DrQJ', 'More work needs to be done to develop waveforms for these systems that can be used in future data analysis. (We hope some of our friends working on GW data analysis may be inspired to look into this.)', '[All figures were made by John Stout.]']",21,12,1713
399,123,1437343941691588608,1415338317764349959,Nuno Guerreiro,"I am happy to announce our new #EMNLP2021 paper ""SPECTRA: Sparse Structured Text Rationalization"" w/ @andre_t_martins . 📰 Paper: <LINK> @andre_t_martins Most work on selective rationalization focus on highlights extraction. Commonly, rationales are modeled as stochastic binary masks, requiring sampling-based gradient estimators, which complicates training and requires careful hyperparameter tuning. [1/n] Sparse attention mechanisms are a deterministic alternative, but they lack a way to regularize the rationale extraction. In our work, we present a framework for deterministic rationale extraction via constrained inference on a factor graph, forming a differentiable layer. [2/n] By leveraging LP-SparseMAP (@vnfrombucharest 🙌) we are able to extract differently constrained structured explanations, such as sparse alignments between two documents. [3/n] Finally, we also provide a comparative study between stochastic 🎲 and deterministic 🎯 methods for rationale extraction for both highlights and matchings extraction. We also share the code for our library for selective rationalization of highlights and matchings. 💻️Code: <LINK>",https://arxiv.org/abs/2109.04552,"Selective rationalization aims to produce decisions along with rationales (e.g., text highlights or word alignments between two sentences). Commonly, rationales are modeled as stochastic binary masks, requiring sampling-based gradient estimators, which complicates training and requires careful hyperparameter tuning. Sparse attention mechanisms are a deterministic alternative, but they lack a way to regularize the rationale extraction (e.g., to control the sparsity of a text highlight or the number of alignments). In this paper, we present a unified framework for deterministic extraction of structured explanations via constrained inference on a factor graph, forming a differentiable layer. Our approach greatly eases training and rationale regularization, generally outperforming previous work on what comes to performance and plausibility of the extracted rationales. We further provide a comparative study of stochastic and deterministic methods for rationale extraction for classification and natural language inference tasks, jointly assessing their predictive power, quality of the explanations, and model variability. ",SPECTRA: Sparse Structured Text Rationalization,6,"['I am happy to announce our new #EMNLP2021 paper ""SPECTRA: Sparse Structured Text Rationalization"" w/ @andre_t_martins .\n\n📰 Paper: <LINK>', '@andre_t_martins Most work on selective rationalization focus on highlights extraction. Commonly, rationales are modeled as stochastic binary masks, requiring sampling-based gradient estimators, which complicates training and requires careful hyperparameter tuning. [1/n]', 'Sparse attention mechanisms are a deterministic alternative, but they lack a way to regularize the rationale extraction. In our work, we present a framework for deterministic rationale extraction via constrained inference on a factor graph, forming a differentiable layer. [2/n]', 'By leveraging LP-SparseMAP (@vnfrombucharest 🙌) we are able to extract differently constrained structured explanations, such as sparse alignments between two documents. 
[3/n]', 'Finally, we also provide a comparative study between stochastic 🎲 and deterministic 🎯 methods for rationale extraction for both highlights and matchings extraction.', 'We also share the code for our library for selective rationalization of highlights and matchings.\n\n💻️Code: https://t.co/4rkd4d0BTU']",21,09,1139
400,88,1405416030055370753,804069495253962752,David Martínez Delgado,"In this new paper, we report the discovery of three faint dwarf galaxies found by @Giusepp39181130 around the spiral NGC 253. This is the first paper of our project to search for a satellite plane in the Sculptor group in collaboration with @8minutesold (<LINK>). <LINK>",http://arxiv.org/abs/2106.08868,"In the last years, a new generation of large-scale imaging surveys have probed wide field regions around some nearby galaxies at unprecedented low surface brightness regime (~28.0-29.0 mag arcsec^-2). This offers a chance of discovering very faint dwarf satellites by means of visual inspection of these public deep images. We report the first results of a systematic survey of faint dwarf spheroidal galaxies in the vicinity of the bright late-type spiral NGC 253 galaxy by means of a visual inspection of the images taken by the Dark Energy Survey. Three new dwarf galaxies have been discovered in the vicinity of the brightest member of the Sculptor filament, the late-type spiral NGC 253. Assuming they are companions of NGC 253, their total absolute V-magnitudes fall in the -7 to -9 mag range, which is typical for dwarf satellites in the local Universe. The central surface brightness tend to be extremely low for all the discovered dwarfs and fall roughly in the range of 25-26 mag arcsec^-2 in g-band. Using known data on distances and velocities of galaxies, we estimate the total virial mass of the NGC 253 group to be 8 x 10^11 Mo, which gives the virial radius R_200 = 186 kpc and the turn-around radius of 706 kpc. We also discuss the possible existence of a spatially flattened and velocity-correlated satellite system around NGC 253. This large-scale structure is orientated almost edge-on to line of sight. The possible plane of satellites is only 31 kpc thick with the minor-to-major axis ratio of 0.14. Four out of five galaxies with measured velocities follow a common velocity trend similar to those observed in the planes of satellites around the Andromeda and Centaurus A galaxies. However, the small number of galaxies with known velocities prevents to reach a definitive conclusion about the formation scenario of the structure and its possible relation to the surrounding cosmic web. ","Tracing satellite planes in the Sculptor group: I. Discovery of three
faint dwarf galaxies around NGC 253",1,"['In this new paper, we report the discovery of three faint dwarf galaxies found by @Giusepp39181130 around the spiral NGC 253. This is the first paper of our project to search for a satellite plane in the Sculptor group in collaboration with @8minutesold (<LINK>). <LINK>']",21,06,270
401,108,1290807436643901447,20703003,Peter B Denton,"Nu paper with the team at BNL, Julia and Rebekah: CP-Violating Neutrino Non-Standard Interactions in Long-Baseline-Accelerator Data <LINK> NOvA and T2K disagree in a weird way. The significance isn't yet high (~2sig), what if it was due to new physics? 1/7 Long-baseline neutrinos are a great place to probe all the oscillation parameters as well as new physics. @novaexperiment and @Tokai2Kamioka just released updated data and it was weird. They both prefer the normal ordering, but combined they prefer the inverted. 2/7 <LINK> But even in the inverted ordering things aren't a great fit. No one has asked what kind of new physics this data could be pointing to. So a few weeks after Neutrino, we dove in! (Figure borrowed from the FNAL team with every intention of returning: <LINK>) 3/7 <LINK> Since the matter effect is bigger at NOvA than at T2K, NSIs will do the trick. The experiments measure nus/nubars separately at different rates, so the NSI had better violate CP. In fact, there's a simple relationship between what's measured and the phase of the new physics 4/7 <LINK> You can also estimate the size of the NSI the same way to be ~0.2 (orange is preferred relative to the SM, gray is disfavored, dark gray is disfavored at &gt;90%). The NOvA and T2K data are pointing to a region that is right at the edge of IceCube's constraint (also COHERENT). 5/7 <LINK> I'm not saying that this is new physics, but if it were, it would mean that not only is there (fairly large) CP violation in the lepton mass matrix, but also a new neutrino interaction that *maximally* violates CP. 6/7 Given that the quark matrix violates CP only a tiny bit, the strong interaction seems to conserve CP while the weak interaction maximally violates CP: understanding CP violation in neutrinos is vital to guide our understanding of what the heck is going on with CP. 7/7",https://arxiv.org/abs/2008.01110,"Neutrino oscillations in matter provide a unique probe of new physics. Leveraging the advent of neutrino appearance data from NOvA and T2K in recent years, we investigate the presence of CP-violating neutrino non-standard interactions in the oscillation data. We first show how to very simply approximate the expected NSI parameters to resolve differences between two long-baseline appearance experiments analytically. Then, by combining recent NOvA and T2K data, we find a tantalizing hint of CP-violating NSI preferring a new complex phase that is close to maximal: $\phi_{e\mu}$ or $\phi_{e\tau}\approx3\pi/2$ with $|\epsilon_{e\mu}|$ or $|\epsilon_{e\tau}|\sim0.2$. We then compare the results from long-baseline data to constraints from IceCube and COHERENT. ","CP-Violating Neutrino Non-Standard Interactions in
Long-Baseline-Accelerator Data",7,"[""Nu paper with the team at BNL, Julia and Rebekah:\n\nCP-Violating Neutrino Non-Standard Interactions in Long-Baseline-Accelerator Data\n\n<LINK>\n\nNOvA and T2K disagree in a weird way. The significance isn't yet high (~2sig), what if it was due to new physics? 1/7"", 'Long-baseline neutrinos are a great place to probe all the oscillation parameters as well as new physics. @novaexperiment and @Tokai2Kamioka just released updated data and it was weird. They both prefer the normal ordering, but combined they prefer the inverted. 2/7 https://t.co/ERO9C6bEgK', ""But even in the inverted ordering things aren't a great fit.\n\nNo one has asked what kind of new physics this data could be pointing to. So a few weeks after Neutrino, we dove in!\n\n(Figure borrowed from the FNAL team with every intention of returning: https://t.co/qolzWxUxbw) 3/7 https://t.co/ySxV4fbShL"", ""Since the matter effect is bigger at NOvA than at T2K, NSIs will do the trick. The experiments measure nus/nubars separately at different rates, so the NSI had better violate CP. \n\nIn fact, there's a simple relationship between what's measured and the phase of the new physics 4/7 https://t.co/BiOVun5fgS"", ""You can also estimate the size of the NSI the same way to be ~0.2 (orange is preferred relative to the SM, gray is disfavored, dark gray is disfavored at &gt;90%).\n\nThe NOvA and T2K data are pointing to a region that is right at the edge of IceCube's constraint (also COHERENT). 5/7 https://t.co/5CqPQpdbgA"", ""I'm not saying that this is new physics, but if it were, it would mean that not only is there (fairly large) CP violation in the lepton mass matrix, but also a new neutrino interaction that *maximally* violates CP. 6/7"", 'Given that the quark matrix violates CP only a tiny bit, the strong interaction seems to conserve CP while the weak interaction maximally violates CP: understanding CP violation in neutrinos is vital to guide our understanding of what the heck is going on with CP. 7/7']",20,08,1862
402,174,1316083890604388353,1271552707464032256,Shunyu Yao,"Happy to announce our new #emnlp2020 paper “Keep CALM and Explore: Language Models for Action Generation in Text-based Games” is online! w/ Rohan, @mhauskn, @karthik_r_n arxiv: <LINK> code: <LINK> more below (1/n) In text-based games, players receive text observation & scalar reward, and issue text actions. For the observation below, what actions would you try? While most previous RL models use a valid-action handicap, we show a data-driven language modeling approach to this problem. <LINK> Our Contextual Action Language Model (CALM) learns to produce actions conditional on game context (previous & current observation, previous action). To play a new game, we simply trains an RL agent (DRRN) with CALM generating top-k actions as a reduced action space at each state. <LINK> For training, we collect a new ClubFloyd dataset, containing 200k+ human gameplay context-action pairs from 500+ games. They are noisy, non-optimal (in terms of scoring), but diverse and rich in human commonsense. For CALM training we use both n-gram and GPT-2. <LINK> We test on Jericho games which are **ALL UNSEEN** during CALM training. Turns out DRRN + CALM (GPT-2) surpasses all other no-handicap models, and more surprisingly, surpass DRRN + handicap action space on 8/28 games. More results and analysis in paper. Code and Dataset in repo. <LINK> Some interesting thoughts at the interaction of language and RL, e.g. how to/what aspects of functional use of language can be learned from LM pertaining? can be transferred across envs./tasks? does RL action space come from env. or agent? Hope you enjoy the fun paper!",https://arxiv.org/abs/2010.02903,"Text-based games present a unique challenge for autonomous agents to operate in natural language and handle enormous action spaces. In this paper, we propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state. Our key insight is to train language models on human gameplay, where people demonstrate linguistic priors and a general game sense for promising actions conditioned on game history. We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards. We evaluate our approach using the Jericho benchmark, on games unseen by CALM during training. Our method obtains a 69% relative improvement in average game score over the previous state-of-the-art model. Surprisingly, on half of these games, CALM is competitive with or better than other models that have access to ground truth admissible actions. Code and data are available at this https URL ","Keep CALM and Explore: Language Models for Action Generation in
Text-based Games",6,"['Happy to announce our new #emnlp2020 paper “Keep CALM and Explore: Language Models for Action Generation in Text-based Games” is online! w/ Rohan, @mhauskn, @karthik_r_n\narxiv: <LINK>\ncode: <LINK>\nmore below (1/n)', 'In text-based games, players receive text observation &amp; scalar reward, and issue text actions. For the observation below, what actions would you try? While most previous RL models use a valid-action handicap, we show a data-driven language modeling approach to this problem. https://t.co/by2amequaG', 'Our Contextual Action Language Model (CALM) learns to produce actions conditional on game context (previous &amp; current observation, previous action). To play a new game, we simply trains an RL agent (DRRN) with CALM generating top-k actions as a reduced action space at each state. https://t.co/FqW4TvW14L', 'For training, we collect a new ClubFloyd dataset, containing 200k+ human gameplay context-action pairs from 500+ games. They are noisy, non-optimal (in terms of scoring), but diverse and rich in human commonsense. For CALM training we use both n-gram and GPT-2. https://t.co/C8VVBT4Qex', 'We test on Jericho games which are **ALL UNSEEN** during CALM training. Turns out DRRN + CALM (GPT-2) surpasses all other no-handicap models, and more surprisingly, surpass DRRN + handicap action space on 8/28 games. More results and analysis in paper. Code and Dataset in repo. https://t.co/WG9TGFsdjK', 'Some interesting thoughts at the interaction of language and RL, e.g. how to/what aspects of functional use of language can be learned from LM pertaining? can be transferred across envs./tasks? does RL action space come from env. or agent? Hope you enjoy the fun paper!']",20,10,1608
403,9,1311470502838333440,326843207,Yuta Notsu,"Our new paper is accepted !! ""Time-resolved spectroscopy and photometry of an M dwarf flare star YZ Canis Minoris with OISTER and TESS: Blue asymmetry in Hα line during the non-white light flare"" Maehara, Notsu, Namekata, Honda, Kowalski, et al. <LINK> We present the results from spectroscopic and photometric observations of the M-type flare star YZ CMi during the Transiting Exoplanet Survey Satellite (TESS) observation period. 3 flares are detected, and one of them shows “blue asymmetry” of H-alpha line. <LINK> Blue asymmetries can be used for discussing stellar mass ejections. The estimated mass is comparable to expectations from the empirical relation between the flare X-ray energy and mass of upward-moving material for stellar flares and solar CMEs. <LINK> In contrast, the estimated kinetic energy is roughly 2 orders of magnitude smaller than that expected from the relation between flare X-ray energy and kinetic energy for solar CMEs. This could be understood by the difference in the velocity between CMEs and prominence eruptions. Also discussions on flare frequency, duration, and rotation from TESS data <LINK>",https://arxiv.org/abs/2009.14412,"In this paper, we present the results from spectroscopic and photometric observations of the M-type flare star YZ CMi in the framework of the Optical and Infrared Synergetic Telescopes for Education and Research (OISTER) collaborations during the Transiting Exoplanet Survey Satellite (TESS) observation period. We detected 145 white-light flares from the TESS light curve and 4 H$\alpha$ flares from the OISTER observations performed between 2019-01-16 and 2019-01-18. Among them, 3 H$\alpha$ flares were associated with white-light flares. However, one of them did not show clear brightening in continuum; during this flare, the H$\alpha$ line exhibited blue-asymmetry which has lasted for $\sim 60$ min. The line of sight velocity of the blue-shifted component is $-80$ - $-100$ km s$^{-1}$. This suggests that there can be upward flows of chromospheric cool plasma even without detectable red/NIR continuum brightening. By assuming that the blue-asymmetry in H$\alpha$ line was caused by a prominence eruption on YZ CMi, we estimated the mass and kinetic energy of the upward-moving material to be $10^{16}$ - $10^{18}$ g and $10^{29.5}$ - $10^{31.5}$ erg, respectively. The estimated mass is comparable to expectations from the empirical relation between the flare X-ray energy and mass of upward-moving material for stellar flares and solar CMEs. In contrast, the estimated kinetic energy for the non-white-light flare on YZ CMi is roughly $2$ orders of magnitude smaller than that expected from the relation between flare X-ray energy and kinetic energy for solar CMEs. This could be understood by the difference in the velocity between CMEs and prominence eruptions. ","Time-resolved spectroscopy and photometry of an M dwarf flare star YZ
Canis Minoris with OISTER and TESS: Blue asymmetry in H$\alpha$ line during
the non-white light flare",5,"['Our new paper is accepted !!\n \n""Time-resolved spectroscopy and photometry of an M dwarf flare star YZ Canis Minoris with OISTER and TESS: Blue asymmetry in Hα line during the non-white light flare""\n\nMaehara, Notsu, Namekata, Honda, Kowalski, et al. \n<LINK>', 'We present the results from spectroscopic and photometric observations of the \nM-type flare star YZ CMi during the Transiting Exoplanet Survey Satellite (TESS) observation period. \n\n3 flares are detected, and one of them shows “blue asymmetry” of H-alpha line. https://t.co/4H3un4cCJa', 'Blue asymmetries can be used for discussing stellar mass ejections.\n\nThe estimated mass is comparable to expectations from the empirical relation between the flare X-ray energy and mass of upward-moving material for stellar flares and solar CMEs. https://t.co/MfJy08VSfp', 'In contrast, the estimated kinetic energy is roughly 2 orders of magnitude smaller than that expected from the relation between flare X-ray energy and kinetic energy for solar CMEs. This could be understood by the difference in the velocity between CMEs and prominence eruptions.', 'Also discussions on flare frequency, duration, and rotation from TESS data https://t.co/ppgjFB2qHF']",20,09,1134
404,8,1400253722073079808,2909643908,Dan Zhang,"I wrote a paper with a few Google colleagues about FAST, a new technique to build new specialized ML hardware accelerators able to improve computer vision inference performance by up to 6x relative to TPU-v3! <LINK> (1/5) FAST extends previous work by expanding the design exploration space to up to 10^2300, covering not just the hardware datapath, but also software scheduling and compiler decisions including padding and fusion. Fusion is key since it addresses memory bandwidth bottlenecks. (2/5) Accelerating ML inference is important because fast ML inference latency/throughput is required to launch models in production at scale. If an application is sufficiently important and with enough volume, it can make sense to customize a chip for this purpose. (3/5) A key benefit of the work is that our specialized accelerators can still run other ML workloads - just not as efficiently. Relative to building a chip that can only handle a single workload, this gives engineers the flexibility to still change the model in production. (4/5) Finally, techniques like this are important because they can be used to accelerate the chip design process from years to potentially months. By increasing the workload set targeted by FAST, FAST can also be used to automatically design general-purpose ML accelerators. (5/5) TLDR: we made specialized hardware chips that can do machine learning faster :) @selynnasun Thanks! We're super excited about the work!",https://arxiv.org/abs/2105.12842,"The rapidly-changing deep learning landscape presents a unique opportunity for building inference accelerators optimized for specific datacenter-scale workloads. We propose Full-stack Accelerator Search Technique (FAST), a hardware accelerator search framework that defines a broad optimization environment covering key design decisions within the hardware-software stack, including hardware datapath, software scheduling, and compiler passes such as operation fusion and tensor padding. In this paper, we analyze bottlenecks in state-of-the-art vision and natural language processing (NLP) models, including EfficientNet and BERT, and use FAST to design accelerators capable of addressing these bottlenecks. FAST-generated accelerators optimized for single workloads improve Perf/TDP by 3.7x on average across all benchmarks compared to TPU-v3. A FAST-generated accelerator optimized for serving a suite of workloads improves Perf/TDP by 2.4x on average compared to TPU-v3. Our return on investment analysis shows that FAST-generated accelerators can potentially be practical for moderate-sized datacenter deployments. ","A Full-Stack Search Technique for Domain Optimized Deep Learning
Accelerators",7,"['I wrote a paper with a few Google colleagues about FAST, a new technique to build new specialized ML hardware accelerators able to improve computer vision inference performance by up to 6x relative to TPU-v3!\n\n<LINK> (1/5)', 'FAST extends previous work by expanding the design exploration space to up to 10^2300, covering not just the hardware datapath, but also software scheduling and compiler decisions including padding and fusion. Fusion is key since it addresses memory bandwidth bottlenecks. (2/5)', 'Accelerating ML inference is important because fast ML inference latency/throughput is required to launch models in production at scale. If an application is sufficiently important and with enough volume, it can make sense to customize a chip for this purpose. (3/5)', 'A key benefit of the work is that our specialized accelerators can still run other ML workloads - just not as efficiently. Relative to building a chip that can only handle a single workload, this gives engineers the flexibility to still change the model in production. (4/5)', 'Finally, techniques like this are important because they can be used to accelerate the chip design process from years to potentially months. By increasing the workload set targeted by FAST, FAST can also be used to automatically design general-purpose ML accelerators. (5/5)', 'TLDR: we made specialized hardware chips that can do machine learning faster :)', ""@selynnasun Thanks! We're super excited about the work!""]",21,05,1453
405,156,1229945825784143872,972555245179064320,Jordy de Vries,"Very happy this is out! <LINK>. My first paper with a great grad student, Guanghui Zhou, in my group UMass. We study the role of sterile neutrinos in neutrinoless double beta decay. Neutrinos with masses in the MeV-GeV range are particularly tricky to include.",https://arxiv.org/abs/2002.07182,"We investigate neutrinoless double beta decay ($0\nu\beta\beta$) in the presence of sterile neutrinos with Majorana mass terms. These gauge-singlet fields are allowed to interact with Standard-Model (SM) fields via renormalizable Yukawa couplings as well as higher-dimensional gauge-invariant operators up to dimension seven in the Standard Model Effective Field Theory extended with sterile neutrinos. At the GeV scale, we use Chiral effective field theory involving sterile neutrinos to connect the operators at the level of quarks and gluons to hadronic interactions involving pions and nucleons. This allows us to derive an expression for $0\nu\beta\beta$ rates for various isotopes in terms of phase-space factors, hadronic low-energy constants, nuclear matrix elements, the neutrino masses, and the Wilson coefficients of higher-dimensional operators. The needed hadronic low-energy constants and nuclear matrix elements depend on the neutrino masses, for which we obtain interpolation formulae grounded in QCD and chiral perturbation theory that improve existing formulae that are only valid in a small regime of neutrino masses. The resulting framework can be used directly to assess the impact of $0\nu\beta\beta$ experiments on scenarios with light sterile neutrinos and should prove useful in global analyses of sterile-neutrino searches. We perform several phenomenological studies of $0\nu\beta\beta$ in the presence of sterile neutrinos with and without higher-dimensional operators. We find that non-standard interactions involving sterile neutrinos have a dramatic impact on $0\nu\beta\beta$ phenomenology, and next-generation experiments can probe such interactions up to scales of $\mathcal O(100)$ TeV. ","Sterile neutrinos and neutrinoless double beta decay in effective field
theory",1,"['Very happy this is out! <LINK>. My first paper with a great grad student, Guanghui Zhou, in my group UMass. We study the role of sterile neutrinos in neutrinoless double beta decay. Neutrinos with masses in the MeV-GeV range are particularly tricky to include.']",20,02,260
406,183,1324787707684352000,1176230196736737281,Aniruddh Raghu,"Teaching with Commentaries: <LINK> We study the use of commentaries, metalearned auxiliary information, to improve neural network training and provide insights. With @maithra_raghu, @skornblith, @DavidDuvenaud, @geoffreyhinton Thread⬇️ <LINK> We define commentaries as functions of a task/dataset that are learned by optimizing a network’s validation loss. They can be used to improve the training of new models and to understand aspects of the network training process. <LINK> We first learn commentaries that encode a weight for each training example at each training iteration. These example weighting curricula commentaries capture intuitive structure, lead to speedups in network training, and can improve performance on few-shot learning problems. <LINK> We then investigate commentaries that define a label-dependent data augmentation policy, where images are blended together with a proportion based on their labels. The learned commentaries are interpretable and can improve model performance. <LINK> Finally, we explore applying commentaries to identify salient image regions by using them to parameterize image attention masks. On a variety of datasets, the learned masks capture important regions and can improve network robustness to spurious background correlations. <LINK>",https://arxiv.org/abs/2011.03037,"Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, learned meta-information helpful for training on a particular task. We present gradient-based methods to learn commentaries, leveraging recent work on implicit differentiation for scalability. We explore diverse applications of commentaries, from weighting training examples, to parameterising label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process. We also observe that commentaries generalise: they can be reused when training new models to obtain performance benefits, suggesting a use-case where commentaries are stored with a dataset and leveraged in future for improved model training. ",Teaching with Commentaries,5,"['Teaching with Commentaries: <LINK>\n\nWe study the use of commentaries, metalearned auxiliary information, to improve neural network training and provide insights.\n\nWith @maithra_raghu, @skornblith, @DavidDuvenaud, @geoffreyhinton\n\nThread⬇️ <LINK>', 'We define commentaries as functions of a task/dataset that are learned by optimizing a network’s validation loss. They can be used to improve the training of new models and to understand aspects of the network training process. https://t.co/4yPwsCGcDu', 'We first learn commentaries that encode a weight for each training example at each training iteration. These example weighting curricula commentaries capture intuitive structure, lead to speedups in network training, and can improve performance on few-shot learning problems. 
https://t.co/iQFbdWGObV', 'We then investigate commentaries that define a label-dependent data augmentation policy, where images are blended together with a proportion based on their labels. The learned commentaries are interpretable and can improve model performance. https://t.co/zZKkWTYi0H', 'Finally, we explore applying commentaries to identify salient image regions by using them to parameterize image attention masks. On a variety of datasets, the learned masks capture important regions and can improve network robustness to spurious background correlations. https://t.co/o0UHhUwzDF']",20,11,1287
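A minimal sketch of the example-weighting commentaries described in entry 406 above, written from scratch for illustration rather than taken from the authors' release: it unrolls a single inner SGD step on a weighted training loss and backpropagates a validation loss into one learnable weight per training example. The toy data, the single-step unrolling (the paper relies on implicit differentiation for scalability), and every variable name here are my own choices.

```python
# Illustrative sketch only (not the paper's code): learn per-example weights
# ("commentaries") for a tiny linear classifier by unrolling one inner SGD step
# and backpropagating the validation loss into the weights.
import torch

torch.manual_seed(0)

# Toy data: 2-D binary classification; the first 8 training labels are corrupted.
X_tr = torch.randn(64, 2)
y_tr = (X_tr[:, 0] > 0).float()
y_tr[:8] = 1.0 - y_tr[:8]
X_va = torch.randn(64, 2)
y_va = (X_va[:, 0] > 0).float()

w = torch.zeros(2, requires_grad=True)            # model parameters
commentary = torch.zeros(64, requires_grad=True)  # one logit per training example


def bce(logits, y):
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y, reduction="none")


meta_opt = torch.optim.Adam([commentary], lr=0.1)
inner_lr = 0.5

for step in range(200):
    weights = torch.sigmoid(commentary)                    # per-example weights in (0, 1)
    train_loss = (weights * bce(X_tr @ w, y_tr)).mean()
    # One unrolled inner step; create_graph=True lets gradients reach `commentary`.
    (g,) = torch.autograd.grad(train_loss, w, create_graph=True)
    w_new = w - inner_lr * g
    val_loss = bce(X_va @ w_new, y_va).mean()              # outer (meta) objective
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()
    w = w_new.detach().requires_grad_(True)                # commit the inner update

print("mean weight on corrupted examples:", torch.sigmoid(commentary[:8]).mean().item())
print("mean weight on clean examples:    ", torch.sigmoid(commentary[8:]).mean().item())
```

With the deliberately corrupted labels in the toy set, the learned weights on the mislabelled examples should drift below those on the clean ones, which is the qualitative behaviour an example-weighting commentary is meant to capture.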
407,30,1265081568529448969,15327263,Carl-Johan Haster,"New paper led by Yiwen Huang (grad student @MITKavli & @MIT_Physics) where we've looked at how well we should trust future analyses of gravitational waves from Neutron Star-Black Hole binaries. <LINK> In addition to Yiwen and myself, @sasomao, @vijay_x_varma (from @Caltech, and soon @Cornell) , @FrancoisFoucart (from @UofNH ) and @sylvia_bisco have contributed. We find, not too surprisingly, that one has to be quite careful with these NSBH-signals. But as long as you've chosen a reasonable model to do your analysis with (ie. one that's constructed to describe NSBH systems), the potential model systematics shouldn't be too bad.",https://arxiv.org/abs/2005.11850,"Gravitational waves emitted by neutron star black hole mergers encode key properties of neutron stars - such as their size, maximum mass and spins - and black holes. However, the presence of matter and the high mass ratio makes generating long and accurate waveforms from these systems hard to do with numerical relativity, and not much is known about systematic uncertainties due to waveform modeling. We simulate gravitational waves from neutron star black hole mergers by hybridizing numerical relativity waveforms produced with the SpEC code with a recent numerical relativity surrogate NRHybSur3dq8Tidal. These signals are analyzed using a range of available waveform families, and statistical and systematic errors are reported. We find that at a network signal-to-noise ratio (SNR) of 30, statistical uncertainties are usually larger than systematic offsets, while at an SNR of 70 the two become comparable. The individual black hole and neutron star masses, as well as the mass ratios, are typically measured very precisely, though not always accurately at high SNR. At a SNR of 30 the neutron star tidal deformability can only be bound from above, while for louder sources it can be measured and constrained away from zero. All neutron stars in our simulations are non-spinning, but in no case we can constrain the neutron star spin to be smaller than $\sim0.4$ (90% credible interval). Waveform families whose late inspiral has been tuned specifically for neutron star black hole signals typically yield the most accurate characterization of the source parameters. Their measurements are in tension with those obtained using waveform families tuned against binary neutron stars, even for mass ratios that could be relevant for both binary neutron stars and neutron star black holes mergers. ","Statistical and systematic uncertainties in extracting the source
properties of neutron star - black hole binaries with gravitational waves",3,"[""New paper led by Yiwen Huang (grad student @MITKavli &amp; @MIT_Physics) where we've looked at how well we should trust future analyses of gravitational waves from Neutron Star-Black Hole binaries. <LINK>"", 'In addition to Yiwen and myself, @sasomao, @vijay_x_varma (from @Caltech, and soon @Cornell) , @FrancoisFoucart (from @UofNH ) and @sylvia_bisco have contributed.', ""We find, not too surprisingly, that one has to be quite careful with these NSBH-signals. But as long as you've chosen a reasonable model to do your analysis with (ie. one that's constructed to describe NSBH systems), the potential model systematics shouldn't be too bad.""]",20,05,634
408,18,1498467101010505728,1138666325897752576,Kimon Fountoulakis,"New paper ""Graph Attention Retrospective"". One of the most popular type of models is graph attention networks. These models were introduced to allow a node to aggregate information from the features of neighbor nodes in a non-uniform way <LINK> <LINK> in contrast to simple graph convolution which does not distinguish the neighbors of a node. In this paper, we study theoretically this expected behaviour of graph attention networks. We prove multiple results on the performance of the graph attention mechanism for the problem of node classification for a contextual stochastic block model. Here the features of the nodes are obtained from a mixture of Gaussians and the edges from a stochastic block model where the features and the edges are coupled in a natural way. We show that in an “easy” regime, where the distance between the means of the Gaussians is large enough, graph attention maintains the weights of intra-class edges and significantly reduces the weights of the inter-class edges. <LINK> This implies perfect node classification independent of the weights of inter-class edges. However, a classical argument shows that in the “easy” regime, the graph is not needed at all to classify the data with high probability. (in the fig. q is the inter-edge prob) <LINK> In the “hard” regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges. <LINK> This is joint work with Amit Levi, @shenghao_yang, @aseemrb and Aukosh Jagannath. Also since you reached here, peace🕊️.",https://arxiv.org/abs/2202.13060,"Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics. One of the most popular type of models is graph attention networks. These models were introduced to allow a node to aggregate information from the features of neighbor nodes in a non-uniform way in contrast to simple graph convolution which does not distinguish the neighbors of a node. In this paper, we study theoretically this expected behaviour of graph attention networks. We prove multiple results on the performance of the graph attention mechanism for the problem of node classification for a contextual stochastic block model. Here the features of the nodes are obtained from a mixture of Gaussians and the edges from a stochastic block model where the features and the edges are coupled in a natural way. First, we show that in an ""easy"" regime, where the distance between the means of the Gaussians is large enough, graph attention maintains the weights of intra-class edges and significantly reduces the weights of the inter-class edges. As a corollary, we show that this implies perfect node classification independent of the weights of inter-class edges. However, a classical argument shows that in the ""easy"" regime, the graph is not needed at all to classify the data with high probability. In the ""hard"" regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges. We evaluate our theoretical results on synthetic and real-world data. ",Graph Attention Retrospective,7,"['New paper ""Graph Attention Retrospective"". One of the most popular type of models is graph attention networks. 
These models were introduced to allow a node to aggregate information from the features of neighbor nodes in a non-uniform way <LINK> <LINK>', 'in contrast to simple graph convolution which does not distinguish the neighbors of a node. In this paper, we study theoretically this expected behaviour of graph attention networks. We prove multiple results on the performance of the graph attention mechanism for the problem', 'of node classification for a contextual stochastic block model. Here the features of the nodes are obtained from a mixture of Gaussians and the edges from a stochastic block model where the features and the edges are coupled in a natural way.', 'We show that in an “easy” regime, where the distance between the means of the Gaussians is large enough, graph attention maintains the weights of intra-class edges and significantly reduces the weights of the inter-class edges. https://t.co/SeNQYwhFo4', 'This implies perfect node classification independent of the weights of inter-class edges. However, a classical argument shows that in the “easy” regime, the graph is not needed at all to classify the data with high probability. (in the fig. q is the inter-edge prob) https://t.co/fls3GN8k8M', 'In the “hard” regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges. https://t.co/pb9zkISdeN', 'This is joint work with Amit Levi, @shenghao_yang, @aseemrb and Aukosh Jagannath. Also since you reached here, peace🕊️.']",22,02,1525
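A rough numerical companion to the contextual stochastic block model setting summarised in entry 408 above. This is my own toy code, not the paper's experiments, and the "attention" score is simply a product of feature projections onto the known class direction rather than a learned layer; it samples a two-class CSBM and compares the attention mass placed on intra- versus inter-class edges when the Gaussian means are well separated.

```python
# Illustrative sketch only: sample a two-class contextual stochastic block model
# and measure where a simple softmax attention head puts its mass.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 8
p, q = 0.5, 0.3                       # intra- / inter-class edge probabilities
labels = rng.integers(0, 2, size=n)
mu = np.zeros(d)
mu[0] = 2.0                           # separation of the Gaussian means ("easy"-regime knob)
X = rng.normal(size=(n, d)) + np.where(labels[:, None] == 1, mu, -mu)

# Undirected SBM adjacency matrix.
same = labels[:, None] == labels[None, :]
upper = np.triu(rng.random((n, n)) < np.where(same, p, q), 1)
A = (upper | upper.T).astype(float)

# One attention head: logit_ij = <x_i, u><x_j, u> with u the (known) class direction.
# A real GAT learns this score from data; the fixed projection is only for illustration.
proj = X @ (mu / np.linalg.norm(mu))
logits = proj[:, None] * proj[None, :]
att = np.where(A > 0, np.exp(logits), 0.0)
att = att / (att.sum(axis=1, keepdims=True) + 1e-12)

intra = att[(A > 0) & same].sum() / n
inter = att[(A > 0) & ~same].sum() / n
print(f"average attention mass per node on intra-class edges: {intra:.3f}")
print(f"average attention mass per node on inter-class edges: {inter:.3f}")
```

In this well-separated regime the printed intra-class mass should sit near 1 and the inter-class mass near 0, echoing the "easy"-regime claim above; shrinking mu[0] toward 0 drives the two numbers together, which is the flavour of the "hard" regime.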
409,89,1202270193080078336,922847904058011649,Romy Rodríguez,"New paper from the KELT survey!!! Here we present a substellar companion and an ultra-hot Jupiter, both orbiting early A-stars which were observed by @NASA_TESS this year! <LINK> @NASA_TESS With zero-albedo equilibrium temperatures of ~2300K (KELT-25b) and ~2400 K (KELT-26b), both companions join are among the hottest exoplanets known @NASA_TESS These planets are also both highly inflated, making them excellent targets for follow-up detailed atmospheric characterization @NASA_TESS We see a subtle asymmetry in the light curve of KELT-26, which, combined with the observation of a polar orbit & a low visini of the host star, leads us to think it is a signature of gravity darkening <LINK> @NASA_TESS This effect likely not observable from the ground, which is why we only see it in the TESS light curve!",https://arxiv.org/abs/1912.01017,"We present the discoveries of KELT-25b (TIC 65412605, TOI-626.01) and KELT-26b (TIC 160708862, TOI-1337.01), two transiting companions orbiting relatively bright, early A-stars. The transit signals were initially detected by the KELT survey, and subsequently confirmed by \textit{TESS} photometry. KELT-25b is on a 4.40-day orbit around the V = 9.66 star CD-24 5016 ($T_{\rm eff} = 8280^{+440}_{-180}$ K, $M_{\star}$ = $2.18^{+0.12}_{-0.11}$ $M_{\odot}$), while KELT-26b is on a 3.34-day orbit around the V = 9.95 star HD 134004 ($T_{\rm eff}$ =$8640^{+500}_{-240}$ K, $M_{\star}$ = $1.93^{+0.14}_{-0.16}$ $M_{\odot}$), which is likely an Am star. We have confirmed the sub-stellar nature of both companions through detailed characterization of each system using ground-based and \textit{TESS} photometry, radial velocity measurements, Doppler Tomography, and high-resolution imaging. For KELT-25, we determine a companion radius of $R_{\rm P}$ = $1.64^{+0.039}_{-0.043}$ $R_{\rm J}$, and a 3-sigma upper limit on the companion's mass of $\sim64~M_{\rm J}$. For KELT-26b, we infer a planetary mass and radius of $M_{\rm P}$ = $1.41^{+0.43}_{-0.51}$ $M_{\rm J}$ and $R_{\rm P}$ = $1.940^{+0.060}_{-0.058}$ $R_{\rm J}$. From Doppler Tomographic observations, we find KELT-26b to reside in a highly misaligned orbit. This conclusion is weakly corroborated by a subtle asymmetry in the transit light curve from the \textit{TESS} data. KELT-25b appears to be in a well-aligned, prograde orbit, and the system is likely a member of a cluster or moving group. ","KELT-25b and KELT-26b: A Hot Jupiter and a Substellar Companion
Transiting Young A-stars Observed by TESS",5,"['New paper from the KELT survey!!! Here we present a substellar companion and an ultra-hot Jupiter, both orbiting early A-stars which were observed by @NASA_TESS this year! \n<LINK>', '@NASA_TESS With zero-albedo equilibrium temperatures of ~2300K (KELT-25b) and ~2400 K (KELT-26b), both companions join are among the hottest exoplanets known', '@NASA_TESS These planets are also both highly inflated, making them excellent targets for follow-up detailed atmospheric characterization', '@NASA_TESS We see a subtle asymmetry in the light curve of KELT-26, which, combined with the observation of a polar orbit &amp; a low visini of the host star, leads us to think it is a signature of gravity darkening https://t.co/jGiAkhMo0R', '@NASA_TESS This effect likely not observable from the ground, which is why we only see it in the TESS light curve!']",19,12,808
410,51,1471206237899411458,1326946405378764801,Mohan Sarovar,"New paper on the arXiv today: ""Quantum simulation of weak-field light-matter interactions"" <LINK> 1/3 We construct an approach to simulate response functions of materials interacting with quantum fields. The key innovation is a way to simulate the effect of continuum fields with a small, discrete number of controllable bosonic modes. 2/3 Any trapped-ion or circuit-QED groups interested in an experimental demonstration? 3/3",https://arxiv.org/abs/2112.07177,"Simulation of the interaction of light with matter, including at the few-photon level, is important for understanding the optical and optoelectronic properties of materials, and for modeling next-generation non-linear spectroscopies that use entangled light. At the few-photon level the quantum properties of the electromagnetic field must be accounted for with a quantized treatment of the field, and then such simulations quickly become intractable, especially if the matter subsystem must be modeled with a large number of degrees of freedom, as can be required to accurately capture many-body effects and quantum noise sources. Motivated by this we develop a quantum simulation framework for simulating such light-matter interactions on platforms with controllable bosonic degrees of freedom, such as vibrational modes in the trapped ion platform. The key innovation in our work is a scheme for simulating interactions with a continuum field using only a few discrete bosonic modes, which is enabled by a Green's function (response function) formalism. We develop the simulation approach, sketch how the simulation can be performed using trapped ions, and then illustrate the method with numerical examples. Our work expands the reach of quantum simulation to important light-matter interaction models and illustrates the advantages of extracting dynamical quantities such as response functions from quantum simulations. ",Quantum simulation of weak-field light-matter interactions,3,"['New paper on the arXiv today:\n""Quantum simulation of weak-field light-matter interactions""\n\n<LINK>\n\n1/3', 'We construct an approach to simulate response functions of materials interacting with quantum fields. The key innovation is a way to simulate the effect of continuum fields with a small, discrete number of controllable bosonic modes.\n \n2/3', 'Any trapped-ion or circuit-QED groups interested in an experimental demonstration?\n\n3/3']",21,12,427
411,111,1196490342369046529,118321407,Julien Barrier,"We studied rhomboedral #graphite, and found: · gapped bulk states · electron transport dominated by surface states · it allows observation of the #Quantum Hall Effect · phase transition between gappless and gapped phases · giant Berry curvature Preprint: <LINK>",https://arxiv.org/abs/1911.04565,"Of the two stable forms of graphite, hexagonal (HG) and rhombohedral (RG), the former is more common and has been studied extensively. RG is less stable, which so far precluded its detailed investigation, despite many theoretical predictions about the abundance of exotic interaction-induced physics. Advances in van der Waals heterostructure technology have now allowed us to make high-quality RG films up to 50 graphene layers thick and study their transport properties. We find that the bulk electronic states in such RG are gapped and, at low temperatures, electron transport is dominated by surface states. Because of topological protection, the surface states are robust and of high quality, allowing the observation of the quantum Hall effect, where RG exhibits phase transitions between gapless semimetallic phase and gapped quantum spin Hall phase with giant Berry curvature. An energy gap can also be opened in the surface states by breaking their inversion symmetry via applying a perpendicular electric field. Moreover, in RG films thinner than 4 nm, a gap is present even without an external electric field. This spontaneous gap opening shows pronounced hysteresis and other signatures characteristic of electronic phase separation, which we attribute to emergence of strongly-correlated electronic surface states. ","Electronic phase separation in topological surface states of
rhombohedral graphite",1,"['We studied rhomboedral #graphite, and found:\n· gapped bulk states\n· electron transport dominated by surface states\n· it allows observation of the #Quantum Hall Effect\n· phase transition between gappless and gapped phases\n· giant Berry curvature\nPreprint: <LINK>']",19,11,261
412,26,1497038810357674011,1556664198,Kyle Cranmer,"The saga continues… new paper using AI /ML for theoretical nuclear physics with the crew at MIT and @DeepMind. “Flow-based sampling in the lattice Schwinger model at criticality” 🧵 <LINK> <LINK> We bring together two main threads of recent research. The first has to do with incorporating the symmetries found in fundamental particle physics (Lie groups) into a type of deep generative model called normalizing flows <LINK> The second is described in the thread below, where we figured out how to incorporate fermions (eg matter particles like electrons and quarks) <LINK> Here we bring the two ingredients together to describe a simple model for how matter interacts with a force carrier known as the Schwander Model <LINK> <LINK> Fermions introduce “long range correlations” and there are different “modes” to the distribution characterized by a topological charge. It is hard for (Hamiltonian) Monte Carlo to transition between these sectors, but the flow-based sampling is orders of magnitude more efficient! <LINK> As always, it’s been a privilege to work with so many awesome people. I continue to learn a lot about physics & ML. I don’t have handles for everyone, but a shout-out to @DaniloJRezende @sracaniere @msalbergo @Julian_Urban @iaifi_news @NYUPhysics @NYUDataScience @DeepMind Oh yeah, and don’t forget <LINK>",https://arxiv.org/abs/2202.11712,"Recent results suggest that flow-based algorithms may provide efficient sampling of field distributions for lattice field theory applications, such as studies of quantum chromodynamics and the Schwinger model. In this work, we provide a numerical demonstration of robust flow-based sampling in the Schwinger model at the critical value of the fermion mass. In contrast, at the same parameters, conventional methods fail to sample all parts of configuration space, leading to severely underestimated uncertainties. ",Flow-based sampling in the lattice Schwinger model at criticality,7,"['The saga continues… new paper using AI /ML for theoretical nuclear physics with the crew at MIT and @DeepMind. \n“Flow-based sampling in the lattice Schwinger model at criticality”\n🧵 \n\n<LINK> <LINK>', 'We bring together two main threads of recent research. The first has to do with incorporating the symmetries found in fundamental particle physics (Lie groups) into a type of deep generative model called normalizing flows https://t.co/NHVCWgzBCp', 'The second is described in the thread below, where we figured out how to incorporate fermions (eg matter particles like electrons and quarks) https://t.co/JY1KHDFLya', 'Here we bring the two ingredients together to describe a simple model for how matter interacts with a force carrier known as the Schwander Model \nhttps://t.co/7ZJDVjjolD https://t.co/2nJ1s7nFAz', 'Fermions introduce “long range correlations” and there are different “modes” to the distribution characterized by a topological charge. It is hard for (Hamiltonian) Monte Carlo to transition between these sectors, but the flow-based sampling is orders of magnitude more efficient! https://t.co/kfK6CmpoRu', 'As always, it’s been a privilege to work with so many awesome people. I continue to learn a lot about physics &amp; ML. I don’t have handles for everyone, but a shout-out to @DaniloJRezende @sracaniere @msalbergo @Julian_Urban @iaifi_news @NYUPhysics @NYUDataScience @DeepMind', 'Oh yeah, and don’t forget https://t.co/ms77K6ENej']",22,02,1326
413,58,1149646895377076224,806058672619212800,Guillaume Lample,"Our new paper: Large Memory Layers with Product Keys <LINK> We created a key-value memory layer that can increase model capacity for a negligible computational cost. A 12-layer transformer with a memory outperforms a 24-layer transformer, and is 2x faster! 1/2 <LINK> The memory is based on a product-key parametrization that enables fast and exact nearest neighbor search over millions of values. It improves model performance without using larger dimension or adding more layers that would slow the model. @alexsablay @LudovicDenoyer @hjegou 2/2 @arankomatsuzaki Yes, all models were trained in the same setting, with the same number of iterations. With more iterations, results would have been even more favorable to memory augmented models, as the sparse updates in the memory require more iterations to fully converge.",https://arxiv.org/abs/1907.05242,"This paper introduces a structured memory which can be easily integrated into a neural network. The memory is very large by design and significantly increases the capacity of the architecture, by up to a billion parameters with a negligible computational overhead. Its design and access pattern is based on product keys, which enable fast and exact nearest neighbor search. The ability to increase the number of parameters while keeping the same computational budget lets the overall system strike a better trade-off between prediction accuracy and computation efficiency both at training and test time. This memory layer allows us to tackle very large scale language modeling tasks. In our experiments we consider a dataset with up to 30 billion words, and we plug our memory layer in a state-of-the-art transformer-based architecture. In particular, we found that a memory augmented model with only 12 layers outperforms a baseline transformer model with 24 layers, while being twice faster at inference time. We release our code for reproducibility purposes. ",Large Memory Layers with Product Keys,3,"['Our new paper: Large Memory Layers with Product Keys <LINK>\nWe created a key-value memory layer that can increase model capacity for a negligible computational cost. A 12-layer transformer with a memory outperforms a 24-layer transformer, and is 2x faster! 1/2 <LINK>', 'The memory is based on a product-key parametrization that enables fast and exact nearest neighbor search over millions of values. It improves model performance without using larger dimension or adding more layers that would slow the model. @alexsablay @LudovicDenoyer @hjegou 2/2', '@arankomatsuzaki Yes, all models were trained in the same setting, with the same number of iterations. With more iterations, results would have been even more favorable to memory augmented models, as the sparse updates in the memory require more iterations to fully converge.']",19,07,823
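The product-key trick behind entry 413 fits in a few lines. The sketch below is a minimal reimplementation of the idea rather than the released code, with arbitrary sizes and tensor names: the query is split into two halves, each half is scored against a small set of sub-keys, and the Cartesian product of the two per-half top-k lists gives an exact top-k over |K1| x |K2| implicit keys without ever materialising them.

```python
# Illustrative sketch only: exact top-k retrieval over |K1| x |K2| implicit
# "product keys" by combining two per-half top-k searches.
import torch

torch.manual_seed(0)
d, half, n_sub, k = 64, 32, 128, 4          # 128 x 128 = 16,384 implicit memory slots
v_dim = 16

K1 = torch.randn(n_sub, half)               # sub-keys scored against the first half of the query
K2 = torch.randn(n_sub, half)               # sub-keys scored against the second half
values = torch.randn(n_sub * n_sub, v_dim)  # one value vector per implicit slot


def memory_lookup(q):
    q1, q2 = q[:half], q[half:]
    s1, i1 = (K1 @ q1).topk(k)              # top-k over the first sub-key set
    s2, i2 = (K2 @ q2).topk(k)              # top-k over the second sub-key set
    # Scores of all k*k candidate product keys; exact because the full score
    # decomposes as <K1[i], q1> + <K2[j], q2>.
    cand_scores = (s1[:, None] + s2[None, :]).reshape(-1)
    cand_index = (i1[:, None] * n_sub + i2[None, :]).reshape(-1)
    best_scores, best = cand_scores.topk(k)
    weights = torch.softmax(best_scores, dim=0)
    return weights @ values[cand_index[best]]   # sparse weighted read of k values


out = memory_lookup(torch.randn(d))
print(out.shape)                            # torch.Size([16])
```

Because a product key's score decomposes additively over the two halves, any key in the global top-k must have each half among the corresponding per-half top-k, so the k*k candidate set provably contains the true top-k: the search is exact as well as fast.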
414,17,1244969215813201921,382269545,Jason Lyall,"In non-pandemic news, we have a new working paper about how to do causal inference when you have spatio-temporal data (which is nearly everyone doing microlevel studies). Application is to US airstrikes & effects on insurgent violence in Iraq. It's here: <LINK> <LINK> @fhollenbach Thanks! It’s the first in a series, so would love feedback @notanastronomer Ha! I’d be happy too, but I’m not sure this one’s a political science paper! @notanastronomer It's a really cool method with lots and lots of applications. If you've got feedback, let us know. We're planning a series of papers on the topic.",https://arxiv.org/abs/2003.13555,"Many causal processes have spatial and temporal dimensions. Yet the classic causal inference framework is not directly applicable when the treatment and outcome variables are generated by spatio-temporal processes with an infinite number of possible event locations. We extend the potential outcomes framework to these settings by formulating the treatment point process as a stochastic intervention. Our causal estimands include the expected number of outcome events in a specified area under a particular stochastic treatment assignment strategy. We develop methodology that allows for arbitrary patterns of spatial spillover and temporal carryover effects. Using martingale theory, we show that the proposed estimator is consistent and asymptotically normal as the number of time periods increases, even when the propensity score is estimated. We propose a sensitivity analysis for the possible existence of unmeasured confounders, and extend it to the H\'ajek estimator. Simulation studies are conducted to examine the estimators' finite sample performance. Finally, we use the proposed methods to estimate the effects of American airstrikes on insurgent violence in Iraq from February 2007 to July 2008. We find that increasing the average number of daily airstrikes for up to one month results in more insurgent attacks across Iraq and within Baghdad. We also find evidence that airstrikes can displace attacks from Baghdad to new locations up to 400 kilometers away. ","Causal Inference with Spatio-temporal Data: Estimating the Effects of
Airstrikes on Insurgent Violence in Iraq",4,"[""In non-pandemic news, we have a new working paper about how to do causal inference when you have spatio-temporal data (which is nearly everyone doing microlevel studies). Application is to US airstrikes &amp; effects on insurgent violence in Iraq. It's here: <LINK> <LINK>"", '@fhollenbach Thanks! It’s the first in a series, so would love feedback', '@notanastronomer Ha! I’d be happy too, but I’m not sure this one’s a political science paper!', ""@notanastronomer It's a really cool method with lots and lots of applications. If you've got feedback, let us know. We're planning a series of papers on the topic.""]",20,03,598
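To make the estimand in entry 414 concrete, here is a deliberately over-simplified, purely temporal sketch. It is my own toy, not the paper's estimator, which handles spatial point patterns, carryover effects, and estimated propensity scores; all rates and the outcome model below are invented. Daily strike counts are treated as Poisson, and a Hajek-style inverse-probability-weighted average re-weights observed days to estimate the expected number of attacks under a counterfactual strike intensity.

```python
# Illustrative sketch only: a Hajek-style IPW estimate of mean daily attacks
# under a counterfactual (higher) strike rate, with everything made Poisson.
import math
import numpy as np

rng = np.random.default_rng(1)
T = 500
observed_rate = 2.0          # strikes/day under the observed ("propensity") process
counterfactual_rate = 4.0    # strategy whose effect we want to estimate
strikes = rng.poisson(observed_rate, size=T)
attacks = rng.poisson(5.0 + 0.5 * strikes)      # toy outcome process


def log_poisson_pmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)


# Importance weights: counterfactual likelihood over observed likelihood.
w = np.array([math.exp(log_poisson_pmf(k, counterfactual_rate) - log_poisson_pmf(k, observed_rate))
              for k in strikes])
hajek = float(np.sum(w * attacks) / np.sum(w))
print(f"observed mean attacks/day:             {attacks.mean():.2f}")
print(f"IPW estimate under higher strike rate: {hajek:.2f}")
```

Under this toy outcome model the weighted estimate should land roughly near 5 + 0.5*4 = 7 attacks per day, versus about 6 in the observed data, illustrating how the re-weighting targets the counterfactual strategy rather than the observed one.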
415,22,1001896436194213894,2932678322,Keaton Bell,"Our new paper uses nearly 200 hours of @mcdonaldobs 2.1m time to find and characterize pulsating stars among the enigmatic ""sdAs""---stars with spectroscopic gravities too strong for the main sequence and temperatures too low for normal subdwarfs. <LINK> Seven out of 23 targets show clear variability in their time series photometry. With dominant pulsation periods spanning 4.6 minutes to 12.3 hours, the asteroseismic evidence supports that the ""sdAs"" comprise multiple subpopulations. <LINK> An orbital period of 6.4 hours from follow-up spectroscopy establishes that SDSS J1618+3854 is a pulsating extremely low-mass white dwarf, while SDSS J0756+5027 may be a new low-mass RR Lyrae variable like OGLE-BLG-RRLYR-02792. <LINK> <LINK> We are releasing all of the light curves of the new variable stars to facilitate better follow-up analyses.",https://arxiv.org/abs/1805.11129,"Context. The nature of the recently identified ""sdA"" spectroscopic class of star is not well understood. The thousands of known sdAs have H-dominated spectra, spectroscopic surface gravities intermediate to main sequence stars and isolated white dwarfs, and effective temperatures below the lower limit for He-burning subdwarfs. Most are likely products of binary stellar evolution, whether extremely low-mass white dwarfs and their precursors, or blue stragglers in the halo. Aims. Stellar eigenfrequencies revealed through time series photometry of pulsating stars sensitively probe stellar structural properties. The properties of pulsations exhibited by any sdA stars would contribute importantly to our developing understanding of this class. Methods. We extend our photometric campaign to discover pulsating extremely low-mass white dwarfs from McDonald Observatory to target sdA stars classified from SDSS spectra. We also obtain follow-up time series spectroscopy to search for binary signatures from four new pulsators. Results. Out of 23 sdA stars observed, we clearly detect stellar pulsations in seven. Dominant pulsation periods range from 4.6 minutes to 12.3 hours, with most on ~hour timescales. We argue specific classifications for some of the new variables, identifying both compact and likely main sequence dwarf pulsators, along with a candidate low-mass RR Lyrae star. Conclusions. With dominant pulsation periods spanning orders of magnitude, the pulsational evidence supports the emerging narrative that the sdA class consists of multiple stellar populations. Since multiple types of sdA exhibit stellar pulsations, follow-up asteroseismic analysis can be used to probe the precise evolutionary natures and stellar structures of these individual subpopulations. ","The McDonald Observatory search for pulsating sdA stars: asteroseismic
support for multiple populations",4,"['Our new paper uses nearly 200 hours of @mcdonaldobs 2.1m time to find and characterize pulsating stars among the enigmatic ""sdAs""---stars with spectroscopic gravities too strong for the main sequence and temperatures too low for normal subdwarfs. <LINK>', 'Seven out of 23 targets show clear variability in their time series photometry. With dominant pulsation periods spanning 4.6 minutes to 12.3 hours, the asteroseismic evidence supports that the ""sdAs"" comprise multiple subpopulations. https://t.co/6RPdQQEcN8', 'An orbital period of 6.4 hours from follow-up spectroscopy establishes that SDSS J1618+3854 is a pulsating extremely low-mass white dwarf, while SDSS J0756+5027 may be a new low-mass RR Lyrae variable like OGLE-BLG-RRLYR-02792. https://t.co/HQVxGqEUXx https://t.co/bT6f0ntFKd', 'We are releasing all of the light curves of the new variable stars to facilitate better follow-up analyses.']",18,05,844
416,224,1415593113582768129,903175505377198080,alessioferrari,"After three years and three iterations of search and code, we have finalised our mapping study on #formal methods in #railways with M. ter Beek. Time for holiday now. Here is the preprint: <LINK> Part of @S2R_ASTRail and @4securail projects by @Shift2Rail_JU <LINK>",https://arxiv.org/abs/2107.05413,"Formal methods are mathematically-based techniques for the rigorous development of software-intensive systems. The railway signaling domain is a field in which formal methods have traditionally been applied, with several success stories. This article reports on a mapping study that surveys the landscape of research on applications of formal methods to the development of railway systems. Our main results are as follows: (i) we identify a total of 328 primary studies relevant to our scope published between 1989 and 2020, of which 44% published during the last 5 years and 24% involving industry; (ii) the majority of studies are evaluated through Examples (41%) and Experience Reports (38%), while full-fledged Case Studies are limited (1.5%); (iii) Model checking is the most commonly adopted technique (47%), followed by simulation (27%) and theorem proving (19.5%); (iv) the dominant languages are UML (18%) and B (15%), while frequently used tools are ProB (9%), NuSMV (8%) and UPPAAL (7%); however, a diverse landscape of languages and tools is employed; (v) the majority of systems are interlocking products (40%), followed by models of high-level control logic (27%); (vi) most of the studies focus on the Architecture (66%) and Detailed Design (45%) development phases. Based on these findings, we highlight current research gaps and expected actions. In particular, the need to focus on more empirically sound research methods, such as Case Studies and Controlled Experiments, and to lower the degree of abstraction, by applying formal methods and tools to development phases that are closer to software development. Our study contributes with an empirically based perspective on the future of research and practice in formal methods applications for railways. ",Formal Methods in Railways: a Systematic Mapping Study,1,"['After three years and three iterations of search and code, we have finalised our mapping study on #formal methods in #railways with M. ter Beek. Time for holiday now.\nHere is the preprint: <LINK>\n\nPart of @S2R_ASTRail and @4securail projects by @Shift2Rail_JU <LINK>']",21,07,265
417,123,1455990822537764872,69202541,Jonathan Le Roux,"New paper out: ""Sequence Transduction with Graph-based Supervision"", w/ N. Moritz, T. Hori, and S. Watanabe @shinjiw_at_cmu. We use our GTC loss to explore variations on RNN-T alignment rules & show a CTC-like transducer system outperforms standard RNN-T. <LINK>",https://arxiv.org/abs/2111.01272,"The recurrent neural network transducer (RNN-T) objective plays a major role in building today's best automatic speech recognition (ASR) systems for production. Similarly to the connectionist temporal classification (CTC) objective, the RNN-T loss uses specific rules that define how a set of alignments is generated to form a lattice for the full-sum training. However, it is yet largely unknown if these rules are optimal and do lead to the best possible ASR results. In this work, we present a new transducer objective function that generalizes the RNN-T loss to accept a graph representation of the labels, thus providing a flexible and efficient framework to manipulate training lattices, e.g., for studying different transition rules, implementing different transducer losses, or restricting alignments. We demonstrate that transducer-based ASR with CTC-like lattice achieves better results compared to standard RNN-T, while also ensuring a strictly monotonic alignment, which will allow better optimization of the decoding procedure. For example, the proposed CTC-like transducer achieves an improvement of 4.8% on the test-other condition of LibriSpeech relative to an equivalent RNN-T based system. ",Sequence Transduction with Graph-based Supervision,1,"['New paper out: ""Sequence Transduction with Graph-based Supervision"", w/ N. Moritz, T. Hori, and S. Watanabe @shinjiw_at_cmu. We use our GTC loss to explore variations on RNN-T alignment rules &amp; show a CTC-like transducer system outperforms standard RNN-T.\n<LINK>']",21,11,262
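A small illustration of the kind of label graph such a loss can consume (my own sketch, not the GTC implementation referenced in entry 417): the function below builds the standard CTC-style lattice for a label sequence, with blank and label nodes, self-loops, advance edges, and blank-skips between distinct labels, which is one topology a graph-based transducer loss can be trained against.

```python
# Illustrative sketch only: the CTC-style lattice over blank/label nodes that a
# graph-based transducer loss could accept as its supervision graph.
def ctc_label_graph(labels, blank="<b>"):
    nodes = [blank]
    for lab in labels:                       # interleave blanks: <b> l1 <b> l2 ... <b>
        nodes += [lab, blank]
    edges = []
    for i, sym in enumerate(nodes):
        edges.append((i, i))                 # self-loop: emit the same symbol again
        if i + 1 < len(nodes):
            edges.append((i, i + 1))         # advance to the next node
        if i + 2 < len(nodes) and sym != blank and nodes[i + 2] != sym:
            edges.append((i, i + 2))         # skip the blank between two different labels
    return nodes, edges


nodes, edges = ctc_label_graph(["c", "a", "t"])
print(nodes)        # ['<b>', 'c', '<b>', 'a', '<b>', 't', '<b>']
print(len(edges))   # 15 allowed transitions in this supervision lattice
```

Swapping in a different node and edge set is what lets the same loss express alternative alignment rules, which is the flexibility the abstract above describes.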
418,36,1309417647403139074,2886658437,Sean Raymond,"Born Eccentric! Our new paper -- led by Matt Clement -- shows that Jupiter and Saturn's orbits may have been non-circular from the start. The evidence is indirect: simulations of the Solar System's instability more easily match present-day orbits. See <LINK> This image explains the broader context (I'm planning on hanging a high-res version of this poster in our kitchen) <LINK> Authors: Matt Clement, Nate Kaib, Rogerio Deienno, John Chambers, Andre Izidoro and myself (I might have got the order wrong, but Matt Clement is the driver of this train) Let me motivate our paper a little more clearly. The gaseous disk phase (which only lasts a few million years) was pretty busy: all of the giant planets formed and migrated around. At the end of this phase they were probably in a chain of orbital resonances. To date it has generally been thought that Jupiter and Saturn were trapped in 3:2 resonance when the gas disk dissipated. The next big event for the giant planets was their big dynamical instability (the ""Nice model""). Starting with Jup and Sat in 3:2 resonance, the instability only matches Jup's current orbit a small fraction of the time (Nesvorny & Morbidelli 2012) BUT: hydro simulations find that Jup and Sat could have been trapped in 2:1 resonance during the gas disk phase, not 3:2 Pierens et al (2014) even found that Jup and Sat in 2:1 resonance end up with eccentric orbits when the gas disk goes away.... (FYI Jupiter and Saturn can still follow a Grand Tack migration in the 2:1 resonance) <LINK> Finally, the punchline: And now, we (Clement et al) find that the odds of the planets ending up on their current orbits after the instability are quite a big higher if Jup and Sat started in the 2:1 resonance with modestly-eccentric orbits. <LINK>",https://arxiv.org/abs/2009.11323,"An episode of dynamical instability is thought to have sculpted the orbital structure of the outer solar system. When modeling this instability, a key constraint comes from Jupiter's fifth eccentric mode (quantified by its amplitude M55), which is an important driver of the solar system's secular evolution. Starting from commonly-assumed near-circular orbits, the present-day giant planets' architecture lies at the limit of numerically generated systems, and M55 is rarely excited to its true value. Here we perform a dynamical analysis of a large batch of artificially triggered instabilities, and test a variety of configurations for the giant planets' primordial orbits. In addition to more standard setups, and motivated by the results of modern hydrodynamical simulations of the giant planets' evolution within the primordial gaseous disk, we consider the possibility that Jupiter and Saturn emerged from the nebular gas locked in 2:1 resonance with non-zero eccentricities. We show that, in such a scenario, the modern Jupiter-Saturn system represents a typical simulation outcome, and M55 is commonly matched. Furthermore, we show that Uranus and Neptune's final orbits are determined by a combination of the mass in the primordial Kuiper belt and that of an ejected ice giant. ","Born eccentric: constraints on Jupiter and Saturn's pre-instability
orbits",8,"[""Born Eccentric!\n\nOur new paper -- led by Matt Clement -- shows that Jupiter and Saturn's orbits may have been non-circular from the start. \n\nThe evidence is indirect: simulations of the Solar System's instability more easily match present-day orbits.\n\nSee <LINK>"", ""This image explains the broader context\n\n(I'm planning on hanging a high-res version of this poster in our kitchen) https://t.co/VMRT3T3TER"", 'Authors: Matt Clement, Nate Kaib, Rogerio Deienno, John Chambers, Andre Izidoro and myself \n\n(I might have got the order wrong, but Matt Clement is the driver of this train)', 'Let me motivate our paper a little more clearly.\n\nThe gaseous disk phase (which only lasts a few million years) was pretty busy: all of the giant planets formed and migrated around.\n\nAt the end of this phase they were probably in a chain of orbital resonances.', 'To date it has generally been thought that Jupiter and Saturn were trapped in 3:2 resonance when the gas disk dissipated. \n\nThe next big event for the giant planets was their big dynamical instability (the ""Nice model"").', ""Starting with Jup and Sat in 3:2 resonance, the instability only matches Jup's current orbit a small fraction of the time (Nesvorny &amp; Morbidelli 2012)\n\nBUT: hydro simulations find that Jup and Sat could have been trapped in 2:1 resonance during the gas disk phase, not 3:2"", 'Pierens et al (2014) even found that Jup and Sat in 2:1 resonance end up with eccentric orbits when the gas disk goes away....\n\n(FYI Jupiter and Saturn can still follow a Grand Tack migration in the 2:1 resonance)\n\nhttps://t.co/ivO4mpSmxV', 'Finally, the punchline: \n\nAnd now, we (Clement et al) find that the odds of the planets ending up on their current orbits after the instability are quite a big higher if Jup and Sat started in the 2:1 resonance with modestly-eccentric orbits.\n\nhttps://t.co/t5fQYSOXts']",20,09,1774
419,43,1321495999853006849,1135159531913338880,Ruari Mackenzie,"Well I'm a day late, but here's my new paper on Giant Lyman alpha Nebulae around faint quasars. In these objects we can observe the circumgalactic media of massive galaxies in emission. <LINK> <LINK> We studied a sample of much fainter quasars with MUSE than previous surveys, to see how the luminosity of the quasar impacts the surrounding nebula. With our very large dynamic range (~7 magnitudes) we can clearly see luminosity dependence. Nebulae around faint quasars (blue) have lower surface-brightness when compared to those around more luminous QSOs (red). <LINK> It's also curious that the nebular brightness is much more tightly correlated to the quasar Lyman alpha, compare to the UV continuum shown here. If you're interested please have a look at the paper.",https://arxiv.org/abs/2010.12589,"We present the results from a MUSE survey of twelve $z\simeq3.15$ quasars, which were selected to be much fainter (20<i<23) than in previous studies of Giant Ly$\alpha$ Nebulae around the brightest quasars (16.6<i<18.7). We detect HI Ly$\alpha$ nebulae around 100% of our target quasars, with emission extending to scales of at least 60 physical kpc, and up to 190 pkpc. We explore correlations between properties of the nebulae and their host quasars, with the goal of connecting variations in the properties of the illuminating QSO to the response in nebular emission. We show that the surface brightness profiles of the nebulae are similar to those of nebulae around bright quasars, but with a lower normalization. Our targeted quasars are on average 3.7 magnitudes (~30 times) fainter in UV continuum than our bright reference sample, and yet the nebulae around them are only 4.3 times fainter in mean Ly$\alpha$ surface brightness, measured between 20 and 50 pkpc. We find significant correlations between the surface brightness of the nebula and the luminosity of the quasar in both UV continuum and Ly$\alpha$. The latter can be interpreted as evidence for a substantial contribution from unresolved inner parts of the nebulae to the narrow components seen in the Ly$\alpha$ lines of some of our faint quasars, possibly from the inner CGM or from the host galaxy's ISM. ",Revealing the Impact of Quasar Luminosity on Giant Ly$\alpha$ Nebulae,4,"[""Well I'm a day late, but here's my new paper on Giant Lyman alpha Nebulae around faint quasars. In these objects we can observe the circumgalactic media of massive galaxies in emission.\n\n<LINK> <LINK>"", 'We studied a sample of much fainter quasars with MUSE than previous surveys, to see how the luminosity of the quasar impacts the surrounding nebula.', 'With our very large dynamic range (~7 magnitudes) we can clearly see luminosity dependence. Nebulae around faint quasars (blue) have lower surface-brightness when compared to those around more luminous QSOs (red). https://t.co/YOHiTz38jD', ""It's also curious that the nebular brightness is much more tightly correlated to the quasar Lyman alpha, compare to the UV continuum shown here. If you're interested please have a look at the paper.""]",20,10,768
420,104,1258404137659744258,70874545,Josh Lothringer,"New paper on the arXiv today in which we explore the short-wavelength transit spectra of hot and ultra-hot Jupiters! <LINK> (just submitted) <LINK> We show that the high transit depths found in some ultra-hot Jupiters can be interpreted as absorption by a plethora of species guaranteed to be present if near chemical equilibrium. Like WASP-12b for example: <LINK> We also explore how the UV-optical transit depths should vary between Teq = ~800-4000 K, with and without the rainout of condensates: <LINK> We're just beginning to see some of these species in both low and high-resolution observations so that's very exciting. This paper was inspired by @Guangwei_Fu's great work on WASP-76b, finding H2O, TiO, and probably a bunch of the species mentioned above: <LINK>",https://arxiv.org/abs/2005.02528,"The low-resolution transmission spectra of ultra-hot Jupiters observed shortward of 0.5 microns indicate strong absorption at short-wavelengths. Previous explanations have included scattering, photochemistry, escaping metals, and disequilibrium chemistry. In this Letter, we show that slopes and features shortward of 0.5 microns can be caused by opacity not commonly considered in atmosphere models of exoplanets but guaranteed to be present if conditions are near chemical equilibrium including Fe I, Fe II, Ti I, Ni I, Ca I, Ca II, and SiO. Even relatively trace species (e.g., Cr I and Mn I) can contribute through strong lines in the UV and blue-optical. Using the PHOENIX atmosphere model, we describe how the short-wavelength transit spectrum varies with equilibrium temperature between 1000 K and 4000 K, as well as the effect that the rainout of condensates has at these wavelengths. We define two spectral indices to quantify the strength of the NUV and blue absorption compared to that in the red-optical, finding that the NUV transit depth will significantly exceed the transit depth from Rayleigh scattering alone for all hot Jupiters down to around 1000 K. In the blue-optical, hot Jupiters warmer than 2000 K will have transit depths larger than that from Rayleigh scattering, but below 2000 K, Rayleigh scattering can dominate, if present. We further show that these spectral indices may be used to trace the effects of rainout. We then compare our simulated transit spectra to existing observations of WASP-12b, WASP-33b, WASP-76b, and WASP-121b. ","UV Exoplanet Transmission Spectral Features as Probes of Metals and
Rainout",4,"['New paper on the arXiv today in which we explore the short-wavelength transit spectra of hot and ultra-hot Jupiters! <LINK> (just submitted) <LINK>', 'We show that the high transit depths found in some ultra-hot Jupiters can be interpreted as absorption by a plethora of species guaranteed to be present if near chemical equilibrium. Like WASP-12b for example: https://t.co/AmT0B6MI0Y', 'We also explore how the UV-optical transit depths should vary between Teq = ~800-4000 K, with and without the rainout of condensates: https://t.co/xiYCnwNUkx', ""We're just beginning to see some of these species in both low and high-resolution observations so that's very exciting. This paper was inspired by @Guangwei_Fu's great work on WASP-76b, finding H2O, TiO, and probably a bunch of the species mentioned above: https://t.co/3hl2GDQuQw""]",20,05,769
421,107,1283586543685267456,1172321171670360064,Suraj Nair,"Can we learn dynamics models that are conditioned on goals, and only model goal-relevant quantities? We explore this question in our new work Goal-Aware Prediction, to appear at #ICML2020 at 7 AM/6 PM PDT tomorrow w/ @chelseabfinn @silviocinguetta Paper: <LINK> <LINK>",https://arxiv.org/abs/2007.07170,"Learned dynamics models combined with both planning and policy learning algorithms have shown promise in enabling artificial agents to learn to perform many diverse tasks with limited supervision. However, one of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model (future state reconstruction), and that of the downstream planner or policy (completing a specified task). This issue is exacerbated by vision-based control tasks in diverse real-world environments, where the complexity of the real world dwarfs model capacity. In this paper, we propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space, resulting in a learning objective that more closely matches the downstream task. Further, we do so in an entirely self-supervised manner, without the need for a reward function or image labels. We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning. ",Goal-Aware Prediction: Learning to Model What Matters,1,"['Can we learn dynamics models that are conditioned on goals, and only model goal-relevant quantities? We explore this question in our new work Goal-Aware Prediction, to appear at #ICML2020 at 7 AM/6 PM PDT tomorrow\nw/ @chelseabfinn @silviocinguetta \n\nPaper: <LINK> <LINK>']",20,07,269
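A rough sketch of the goal-conditioned modelling idea in entry 421, based on my reading rather than the authors' released code; the dimensions, the random "transitions", and the choice of "goal minus next state" as the target are illustrative stand-ins. The model takes the goal as an input and is trained to predict only a goal-relative quantity instead of reconstructing the full next observation.

```python
# Illustrative sketch only: a forward model conditioned on a goal that predicts
# a goal-relative target instead of reconstructing the next state.
import torch
import torch.nn as nn

class GoalAwareDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim * 2 + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action, goal):
        # Predict goal - next_state: "what is still missing to reach the goal".
        return self.net(torch.cat([state, action, goal], dim=-1))

state_dim, action_dim = 6, 2
model = GoalAwareDynamics(state_dim, action_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    s = torch.randn(32, state_dim)                   # stand-ins for a real replay buffer
    a = torch.randn(32, action_dim)
    s_next = s + 0.1 * torch.randn(32, state_dim)    # fake environment step
    g = torch.randn(32, state_dim)                   # relabelled goals
    loss = ((model(s, a, g) - (g - s_next)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```

Conditioning the prediction target on the goal is the part that matters here: the model is free to ignore parts of the scene that are irrelevant to reaching that particular goal.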
422,138,1281418430063796224,1077995761487568896,Jon Miller,"New paper day! <LINK> Using @NASANuSTAR, graduate student Paul Draghis has measured the highest black hole spin that I've yet seen (highest ever?) in his very first paper. The orientation of the black hole is also measured very precisely. <LINK> @agniva_rc @NASANuSTAR Not in this case. You must know the mass and distance very well. Radio observations could, in principle, determine a parallax distance. But the line-of-sight obscuration is too high to determine a mass via radial velocity curves.",https://arxiv.org/abs/2007.04324,"The black hole candidate EXO 1846-031 underwent an outburst in 2019, after at least 25 years in quiescence. We observed the system using \textit{NuSTAR} on August 3rd, 2019. The 3--79 keV spectrum shows strong relativistic reflection features. Our baseline model gives a nearly maximal black hole spin value of $a=0.997_{-0.002}^{+0.001}$ ($1\sigma$ statistical errors). This high value nominally excludes the possibility of the central engine harboring a neutron star. Using several models, we test the robustness of our measurement to assumptions about the density of the accretion disk, the nature of the corona, the choice of disk continuum model, and addition of reflection from the outer regions of the accretion disk. All tested models agree on a very high black hole spin value and a high value for the inclination of the inner accretion disk of $\theta\approx73^\circ$. We discuss the implications of this spin measurement in the population of stellar mass black holes with known spins, including LIGO events. ",A New Spin on an Old Black Hole: NuSTAR Spectroscopy of EXO 1846-031,2,"[""New paper day!\n<LINK>\nUsing @NASANuSTAR, graduate student Paul Draghis has measured the highest black hole spin that I've yet seen (highest ever?) in his very first paper. The orientation of the black hole is also measured very precisely. <LINK>"", '@agniva_rc @NASANuSTAR Not in this case. You must know the mass and distance very well. Radio observations could, in principle, determine a parallax distance. But the line-of-sight obscuration is too high to determine a mass via radial velocity curves.']",20,07,498
423,87,1405434239978459140,216729597,Marcel S. Pawlowski,"New paper on the arXiv today, lead by @astro_delgado with @Javanmardi_B and @Giusepp39181130. We looked at the dwarf galaxy system surrounding the Sculptor galaxy NGC 253, motivated by the discovery of three new dwarfs in the area. <LINK> <LINK> <LINK> Many of the dwarfs around NGC 253 are distributed along a preferred direction, towards the north of the galaxy. Five of these galaxies have measured velocities, and four of them follow a common trend. Could this be an edge-on satellite plane? That's maybe rotating? <LINK> Looking at the 3D position of confirmed galaxies around NGC 253 reveals their distribution is oriented edge-on to us, and is indeed more flattened than typical. Not highly significant (too few objects with good distance), but enough to motivate further attention and follow-up. <LINK> Interestingly, the orientation also aligns with the supergalactic plane and the local tidal tensor of the surrounding larger-scale structure, following the preference @satellitegalaxy found for such dwarf galaxy structures. <LINK> So, is this a satellite plane? Possible. To be certain we need more & better distance measurements, also for the new objects. And more spectroscopic velocities to judge the kinematic correlation. Given the extent of 600 kpc, it might also be related to a cosmic filament or sheet. Or, and this is could be the most exciting possibility, are we maybe witnessing a satellite plane in formation from preferred accretion directions of dwarfs along of the cosmic web?",https://arxiv.org/abs/2106.08868,"In the last years, a new generation of large-scale imaging surveys have probed wide field regions around some nearby galaxies at unprecedented low surface brightness regime (~28.0-29.0 mag arcsec^-2). This offers a chance of discovering very faint dwarf satellites by means of visual inspection of these public deep images. We report the first results of a systematic survey of faint dwarf spheroidal galaxies in the vicinity of the bright late-type spiral NGC 253 galaxy by means of a visual inspection of the images taken by the Dark Energy Survey. Three new dwarf galaxies have been discovered in the vicinity of the brightest member of the Sculptor filament, the late-type spiral NGC 253. Assuming they are companions of NGC 253, their total absolute V-magnitudes fall in the -7 to -9 mag range, which is typical for dwarf satellites in the local Universe. The central surface brightness tend to be extremely low for all the discovered dwarfs and fall roughly in the range of 25-26 mag arcsec^-2 in g-band. Using known data on distances and velocities of galaxies, we estimate the total virial mass of the NGC 253 group to be 8 x 10^11 Mo, which gives the virial radius R_200 = 186 kpc and the turn-around radius of 706 kpc. We also discuss the possible existence of a spatially flattened and velocity-correlated satellite system around NGC 253. This large-scale structure is orientated almost edge-on to line of sight. The possible plane of satellites is only 31 kpc thick with the minor-to-major axis ratio of 0.14. Four out of five galaxies with measured velocities follow a common velocity trend similar to those observed in the planes of satellites around the Andromeda and Centaurus A galaxies. However, the small number of galaxies with known velocities prevents to reach a definitive conclusion about the formation scenario of the structure and its possible relation to the surrounding cosmic web. 
","Tracing satellite planes in the Sculptor group: I. Discovery of three
faint dwarf galaxies around NGC 253",6,"['New paper on the arXiv today, lead by @astro_delgado with @Javanmardi_B and @Giusepp39181130. We looked at the dwarf galaxy system surrounding the Sculptor galaxy NGC 253, motivated by the discovery of three new dwarfs in the area.\n\n<LINK> <LINK> <LINK>', ""Many of the dwarfs around NGC 253 are distributed along a preferred direction, towards the north of the galaxy. Five of these galaxies have measured velocities, and four of them follow a common trend. Could this be an edge-on satellite plane? That's maybe rotating? https://t.co/rNyVBcnE1g"", 'Looking at the 3D position of confirmed galaxies around NGC 253 reveals their distribution is oriented edge-on to us, and is indeed more flattened than typical. Not highly significant (too few objects with good distance), but enough to motivate further attention and follow-up. https://t.co/WPyrnEHeOe', 'Interestingly, the orientation also aligns with the supergalactic plane and the local tidal tensor of the surrounding larger-scale structure, following the preference @satellitegalaxy found for such dwarf galaxy structures. https://t.co/IqenOHVb9A', 'So, is this a satellite plane? Possible. To be certain we need more &amp; better distance measurements, also for the new objects. And more spectroscopic velocities to judge the kinematic correlation. Given the extent of 600 kpc, it might also be related to a cosmic filament or sheet.', 'Or, and this is could be the most exciting possibility, are we maybe witnessing a satellite plane in formation from preferred accretion directions of dwarfs along of the cosmic web?']",21,06,1504
424,170,1412693915787055104,1277975431723900936,Laura Rogers,"New Co-Author paper: ""Infrared Excesses around Bright White Dwarfs from Gaia and unWISE. II"" <LINK> Studies identifying and characterising infrared radiation around white dwarf stars can reveal valuable insights into white dwarf planetary systems and stellar companions. In paper I (S. Xu, 2020) 188 infrared excess white dwarf candidates were discovered using Gaia and unWISE In paper II, follow up photometric observations of these candidates were taken with Spitzer (3.6 and 4.5 micron) and Gemini (JHK) to confirm the infrared excess and the nature of the excess The infrared excess is confirmed for 61 white dwarfs, and it is likely 10 of which are stellar companions. Check out table A2 (appendix) for all confirmed infrared excess white dwarfs in this study",https://arxiv.org/abs/2107.01221,"Infrared excesses around white dwarf stars indicate the presence of various astrophysical objects of interest, including companions and debris disks. In this second paper of a series, we present follow-up observations of infrared excess candidates from Gaia and unWISE discussed in the first paper, Paper I. We report space-based infrared photometry at 3.6 and 4.5 micron for 174 white dwarfs from the Spitzer Space Telescope and ground-based near-infrared J, H, and K photometry of 235 white dwarfs from Gemini Observatory with significant overlap between Spitzer and Gemini observations. This data is used to confirm or rule-out the observed unWISE infrared excess. From the unWISE-selected candidate sample, the most promising infrared excess sample comes from both colour and flux excess, which has a Spitzer confirmation rate of 95%. We also discuss a method to distinguish infrared excess caused by stellar or sub-stellar companions from potential dust disks. In total, we confirm the infrared excess around 62 white dwarfs, 10 of which are likely to be stellar companions. The remaining 52 bright white dwarf with infrared excess beyond two microns has the potential to double the known sample of white dwarfs with dusty exoplanetary debris disks. Follow-up high-resolution spectroscopic studies of a fraction of confirmed excess white dwarfs in this sample have discovered emission from gaseous dust disks. Additional investigations will be able to expand the parameter space from which dust disks around white dwarfs are found. ",Infrared Excesses around Bright White Dwarfs from Gaia and unWISE. II,4,"['New Co-Author paper: ""Infrared Excesses around Bright White Dwarfs from Gaia and unWISE. II"" \n<LINK>', 'Studies identifying and characterising infrared radiation around white dwarf stars can reveal valuable insights into white dwarf planetary systems and stellar companions. In paper I (S. Xu, 2020) 188 infrared excess white dwarf candidates were discovered using Gaia and unWISE', 'In paper II, follow up photometric observations of these candidates were taken with Spitzer (3.6 and 4.5 micron) and Gemini (JHK) to confirm the infrared excess and the nature of the excess', 'The infrared excess is confirmed for 61 white dwarfs, and it is likely 10 of which are stellar companions. Check out table A2 (appendix) for all confirmed infrared excess white dwarfs in this study']",21,07,764
425,98,1234927029193187331,1405259995,Adam Elmachtoub,"Excited about our new paper where we use decision trees for predicting cost vectors to optimization problems! We show our SPO Trees lead to higher quality decisions than CART, and require significantly fewer leaves to do so(more interpretable)! <LINK> Work with former PhD student Ryan McNellis (now at @amazon) and former Columbia undergrad Jason Liang (now at @ORCenter)",https://arxiv.org/abs/2003.00360,"We consider the use of decision trees for decision-making problems under the predict-then-optimize framework. That is, we would like to first use a decision tree to predict unknown input parameters of an optimization problem, and then make decisions by solving the optimization problem using the predicted parameters. A natural loss function in this framework is to measure the suboptimality of the decisions induced by the predicted input parameters, as opposed to measuring loss using input parameter prediction error. This natural loss function is known in the literature as the Smart Predict-then-Optimize (SPO) loss, and we propose a tractable methodology called SPO Trees (SPOTs) for training decision trees under this loss. SPOTs benefit from the interpretability of decision trees, providing an interpretable segmentation of contextual features into groups with distinct optimal solutions to the optimization problem of interest. We conduct several numerical experiments on synthetic and real data including the prediction of travel times for shortest path problems and predicting click probabilities for news article recommendation. We demonstrate on these datasets that SPOTs simultaneously provide higher quality decisions and significantly lower model complexity than other machine learning approaches (e.g., CART) trained to minimize prediction error. ","Decision Trees for Decision-Making under the Predict-then-Optimize
Framework",2,"['Excited about our new paper where we use decision trees for predicting cost vectors to optimization problems! We show our SPO Trees lead to higher quality decisions than CART, and require significantly fewer leaves to do so(more interpretable)! <LINK>', 'Work with former PhD student Ryan McNellis (now at @amazon) and former Columbia undergrad Jason Liang (now at @ORCenter)']",20,03,372
426,170,1512334358497083396,38941661,Daniel Stilck França,"New paper out with Cambyse Rouze, Giacomo de Palma and Milad Marvian! <LINK> The paper is about concentration inequalities for the outputs of shallow and noisy quantum circuits. We obtain our results using quantum optimal transport techniques. 1/n But the title has variational quantum algorithms in it. So what is the connection between these topics? A simple way to see the connection is to consider GHZ states on n qubits when measured in the computational basis. 2/n Now consider the observable \sum_iZ_i. Its expectation value for the GHZ is 0, but you will either observe the outcomes -n or n. That is, the outputs DO NOT concentrate around the mean. And any family of circuits that produces outputs that concentrate cannot produce GHZs. 3/n So concentration inequalities can be used to prove barriers to prepare states. In the paper we prove that constant depth circuits produce outputs that concentrate around the mean, improving and simplifying previous results in the literature on limitations of shallow circuits. 4/n In the noisy setting, i.e. circuits with a constant density of errors, we show Gaussian concentration inequalities. In short, these show that if the local noise rate is p, at depth ~1/p, probability of having an output that deviates significantly 5/n from one would obtain when measuring a trivial product state is exponentially small in system size. This strengthens the results I previously obtained with @RaulGarciaPatr1. There we arrived at similar conclusions, but only in expectation! 6/n Hopefully our relatively simple proofs can convince more people to take a closer look at quantum optimal techniques! I certainly believe they provide an elegant framework to study shallow and noisy quantum circuits! 7/n And I would also like to thank my collaborators for this fun project! It was particularly nice to finally write a paper with Giacomo, with whom I overlapped for a while at @QMATH_KU! 8/8",https://arxiv.org/abs/2204.03455,"The impressive progress in quantum hardware of the last years has raised the interest of the quantum computing community in harvesting the computational power of such devices. However, in the absence of error correction, these devices can only reliably implement very shallow circuits or comparatively deeper circuits at the expense of a nontrivial density of errors. In this work, we obtain extremely tight limitation bounds for standard NISQ proposals in both the noisy and noiseless regimes, with or without error-mitigation tools. The bounds limit the performance of both circuit model algorithms, such as QAOA, and also continuous-time algorithms, such as quantum annealing. In the noisy regime with local depolarizing noise $p$, we prove that at depths $L=\cO(p^{-1})$ it is exponentially unlikely that the outcome of a noisy quantum circuit outperforms efficient classical algorithms for combinatorial optimization problems like Max-Cut. Although previous results already showed that classical algorithms outperform noisy quantum circuits at constant depth, these results only held for the expectation value of the output. Our results are based on newly developed quantum entropic and concentration inequalities, which constitute a homogeneous toolkit of theoretical methods from the quantum theory of optimal mass transport whose potential usefulness goes beyond the study of variational quantum algorithms. ","Limitations of variational quantum algorithms: a quantum optimal
transport approach",8,"['New paper out with Cambyse Rouze, Giacomo de Palma and Milad Marvian! <LINK>\nThe paper is about concentration inequalities for the outputs of shallow and noisy quantum circuits. We obtain our results using quantum optimal transport techniques. 1/n', 'But the title has variational quantum algorithms in it. So what is the connection between these topics? A simple way to see the connection is to consider GHZ states on n qubits when measured in the computational basis. 2/n', 'Now consider the observable \\sum_iZ_i. Its expectation value for the GHZ is 0, but you will either observe the outcomes -n or n. That is, the outputs DO NOT concentrate around the mean. And any family of circuits that produces outputs that concentrate cannot produce GHZs. 3/n', 'So concentration inequalities can be used to prove barriers to prepare states. In the paper we prove that constant depth circuits produce outputs that concentrate around the mean, improving and simplifying previous results in the literature on limitations of shallow circuits. 4/n', 'In the noisy setting, i.e. circuits with a constant density of errors, we show Gaussian concentration inequalities. In short, these show that if the local noise rate is p, at depth ~1/p, probability of having an output that deviates significantly 5/n', 'from one would obtain when measuring a trivial product state is exponentially small in system size. This strengthens the results I previously obtained with @RaulGarciaPatr1. There we arrived at similar conclusions, but only in expectation! 6/n', 'Hopefully our relatively simple proofs can convince more people to take a closer look at quantum optimal techniques! I certainly believe they provide an elegant framework to study shallow and noisy quantum circuits! 7/n', 'And I would also like to thank my collaborators for this fun project! It was particularly nice to finally write a paper with Giacomo, with whom I overlapped for a while at @QMATH_KU! 8/8']",22,04,1930
427,128,1318188318958456834,2972152960,Renato Negrinho,"New EMNLP findings paper #emnlp #emnlp2020 . Paper: <LINK> Code: <LINK> We empirically study beam-aware training algorithms instantiated through a meta-algorithm and evaluate them on supertagging. <LINK> We find that beam-aware training yields large performance improvements when the model must rely on the beam to manage uncertainty effectively, e.g., must make decisions with incomplete information. Beam-aware training uses beam search for both training and decoding and therefore models do not suffer from exposure bias and learn to exploit the beam. This contrasts with the usual approach of training on maximum likelihood and decoding with beam search. The goal of this paper is to better understand the impact of beam-aware training, i.e., under what conditions would we see performance improvements and what design aspects would be most important. We used a meta-algorithm for beam-aware training proposed in our previous work (<LINK>), which identifies several design dimensions: beam size, data collection strategy, and loss function. We have found that beam-aware training yields large improvements when the model can’t encode the complete sequence before starting prediction. In this case, beam-aware training yielded a model that did a better job managing uncertainty about future predictions. The standard approach of maximum likelihood training and post-hoc beam search failed to reach the same performances (~10 absolute perf. points in some cases). We also have useful observations about design choices to train the models stably and achieve high performances. Personal take: I believe that beam-aware training will be most impactful in settings where the instance for which we have to generate predictions is not fully available and partial predictions must be made with incomplete information, or in cases where .. actions taken affect information gathered, e.g., when traversing a graph incrementally. I’m also hopeful about the ability of these models to address pathologies that have been identified with the typical use of beam search for decoding after maximum likelihood training.",https://arxiv.org/abs/2010.04980,"Structured prediction is often approached by training a locally normalized model with maximum likelihood and decoding approximately with beam search. This approach leads to mismatches as, during training, the model is not exposed to its mistakes and does not use beam search. Beam-aware training aims to address these problems, but unfortunately, it is not yet widely used due to a lack of understanding about how it impacts performance, when it is most useful, and whether it is stable. Recently, Negrinho et al. (2018) proposed a meta-algorithm that captures beam-aware training algorithms and suggests new ones, but unfortunately did not provide empirical results. In this paper, we begin an empirical investigation: we train the supertagging model of Vaswani et al. (2016) and a simpler model with instantiations of the meta-algorithm. We explore the influence of various design choices and make recommendations for choosing them. We observe that beam-aware training improves performance for both models, with large improvements for the simpler model which must effectively manage uncertainty during decoding. Our results suggest that a model must be learned with search to maximize its effectiveness. 
",An Empirical Investigation of Beam-Aware Training in Supertagging,9,"['New EMNLP findings paper #emnlp #emnlp2020 .\n\nPaper: <LINK>\nCode: <LINK>\n\nWe empirically study beam-aware training algorithms instantiated through a meta-algorithm and evaluate them on supertagging. <LINK>', 'We find that beam-aware training yields large performance improvements when the model must rely on the beam to manage uncertainty effectively, e.g., must make decisions with incomplete information.', 'Beam-aware training uses beam search for both training and decoding and therefore models do not suffer from exposure bias and learn to exploit the beam. This contrasts with the usual approach of training on maximum likelihood and decoding with beam search.', 'The goal of this paper is to better understand the impact of beam-aware training, i.e., under what conditions would we see performance improvements and what design aspects would be most important.', 'We used a meta-algorithm for beam-aware training proposed in our previous work (https://t.co/39WMuxTzPD), which identifies several design dimensions: beam size, data collection strategy, and loss function.', 'We have found that beam-aware training yields large improvements when the model can’t encode the complete sequence before starting prediction. In this case, beam-aware training yielded a model that did a better job managing uncertainty about future predictions.', 'The standard approach of maximum likelihood training and post-hoc beam search failed to reach the same performances (~10 absolute perf. points in some cases). We also have useful observations about design choices to train the models stably and achieve high performances.', 'Personal take: I believe that beam-aware training will be most impactful in settings where the instance for which we have to generate predictions is not fully available and partial predictions must be made with incomplete information, or in cases where ..', 'actions taken affect information gathered, e.g., when traversing a graph incrementally. I’m also hopeful about the ability of these models to address pathologies that have been identified with the typical use of beam search for decoding after maximum likelihood training.']",20,10,2105
428,198,1254804653683834882,2252332513,Duncan Ralph,"Do you want to find high affinity antibodies in B cell receptor deep sequencing data? We have methods for you! <LINK> with @ematsen @petrelharp @ematsen There's definitely been people doing sequencing, and I'd expect to start seeing papers soon. Hopefully the data gets posted promptly to @bcorrie's awesome public ireceptor database <LINK>",https://arxiv.org/abs/2004.11868,"We are frequently faced with a large collection of antibodies, and want to select those with highest affinity for their cognate antigen. When developing a first-line therapeutic for a novel pathogen, for instance, we might look for such antibodies in patients that have recovered. There exist effective experimental methods of accomplishing this, such as cell sorting and baiting; however they are time consuming and expensive. Next generation sequencing of B cell receptor (BCR) repertoires offers an additional source of sequences that could be tapped if we had a reliable method of selecting those coding for the best antibodies. In this paper we introduce a method that uses evolutionary information from the family of related sequences that share a naive ancestor to predict the affinity of each resulting antibody for its antigen. When combined with information on the identity of the antigen, this method should provide a source of effective new antibodies. We also introduce a method for a related task: given an antibody of interest and its inferred ancestral lineage, which branches in the tree are likely to harbor key affinity-increasing mutations? These methods are implemented as part of continuing development of the partis BCR inference package, available at this https URL ",Using B cell receptor lineage structures to predict affinity,2,"['Do you want to find high affinity antibodies in B cell receptor deep sequencing data? We have methods for you!\n\n<LINK>\n\nwith @ematsen', ""@petrelharp @ematsen There's definitely been people doing sequencing, and I'd expect to start seeing papers soon. Hopefully the data gets posted promptly to @bcorrie's awesome public ireceptor database https://t.co/SSZ4dbxi87""]",20,04,340
429,30,1509235465538478080,1465098219847831562,Pablo Villanueva Domingo,"Glad to announce our new paper with @vmmunoza, @carambolos and Sergio Palomares-Ruiz, on constraining the abundance of primordial black holes using neutrino fluxes at Super Kamiokande and future neutrino detectors. <LINK> If primordial black holes exist, they would emit neutrinos via Hawking evaporation, which could be detected at Earth. Here we use current SK data to bound their abundance, and study how these limits could be improved in future neutrino telescopes such as Hyper Kamiokande.",https://arxiv.org/abs/2203.14979,"Primordial black holes (PBHs) formed in the early Universe are sources of neutrinos emitted via Hawking radiation. Such astrophysical neutrinos could be detected at Earth and constraints on the abundance of comet-mass PBHs could be derived from the null observation of this neutrino flux. Here, we consider non-rotating PBHs and improve constraints using Super-Kamiokande neutrino data, as well as we perform forecasts for next-generation neutrino (Hyper-Kamiokande, JUNO, DUNE) and dark matter (DARWIN, ARGO) detectors, which we compare. For PBHs less massive than $\sim \textrm{few} \times 10^{14}$ g, PBHs would have already evaporated by now, whereas more massive PBHs would still be present and would constitute a fraction of the dark matter of the Universe. We consider monochromatic and extended (log-normal) mass distributions, and a PBH mass range spanning from $10^{12}$ g to $\sim 10^{16}$ g. Finally, we also compare our results with previous ones in the literature. ","Current and future neutrino limits on the abundance of primordial black
holes",2,"['Glad to announce our new paper with @vmmunoza, @carambolos and Sergio Palomares-Ruiz, on constraining the abundance of primordial black holes using neutrino fluxes at Super Kamiokande and future neutrino detectors.\n\n<LINK>', 'If primordial black holes exist, they would emit neutrinos via Hawking evaporation, which could be detected at Earth. Here we use current SK data to bound their abundance, and study how these limits could be improved in future neutrino telescopes such as Hyper Kamiokande.']",22,03,494
430,40,1108942932466270209,77317235,Antonio Ortega,"Check out our new tool to visualize Graph Fourier Transforms. Paper: <LINK> Code: <LINK> Makes it easier to visualize localization, sampling, filtering #graphsignalprocessing <LINK>",https://arxiv.org/abs/1903.08827,"Recent progress in graph signal processing (GSP) has addressed a number of problems, including sampling and filtering. Proposed methods have focused on generic graphs and defined signals with certain characteristics, e.g., bandlimited signals, based on the graph Fourier transform (GFT). However, the effect of GFT properties (e.g., vertex localization) on the behavior of such methods is not as well understood. In this paper, we propose novel GFT visualization tools and provide some examples to illustrate certain GFT properties and their impact on sampling or wavelet transforms. ","What's in a frequency: new tools for graph Fourier Transform
visualization",2,"['Check out our new tool to visualize Graph Fourier Transforms. Paper: <LINK> Code: <LINK> Makes it easier to visualize localization, sampling, filtering #graphsignalprocessing <LINK>', 'This GFT corresponds to a graph with 3 clusters, separated by thick vertical lines. The largest frequencies only appear in two of the clusters. Red circles indicate the order of sampling.']",19,03,369
431,141,1363171994565509121,1004816650628227073,César A. Uribe,🚨🚨🚨 New paper alert! The first ever official paper from my Lab is out!🦾🦾🥳 @RiceECE @RiceEngineering Amazing work of my student @mttoghani. We show non-asymptotic concentration bounds for social learning with very efficient communication of beliefs. <LINK> @AnaMaPorras @RiceECE @RiceEngineering @mttoghani Thank you! We always have that conversation with @SandraAriassu about how different the time scales are in our fields.,https://arxiv.org/abs/2102.07767,"We study the problem of distributed cooperative learning, where a group of agents seeks to agree on a set of hypotheses that best describes a sequence of private observations. In the scenario where the set of hypotheses is large, we propose a belief update rule where agents share compressed (either sparse or quantized) beliefs with an arbitrary positive compression rate. Our algorithm leverages a unified communication rule that enables agents to access wide-ranging compression operators as black-box modules. We prove the almost sure asymptotic exponential convergence of beliefs around the set of optimal hypotheses. Additionally, we show a non-asymptotic, explicit, and linear concentration rate in probability of the beliefs on the optimal hypothesis set. We provide numerical experiments to illustrate the communication benefits of our method. The simulation results show that the number of transmitted bits can be reduced to 5-10% of the non-compressed method in the studied scenarios. ","Communication-efficient Distributed Cooperative Learning with Compressed
Beliefs",2,"['🚨🚨🚨 New paper alert! The first ever official paper from my Lab is out!🦾🦾🥳 @RiceECE @RiceEngineering Amazing work of my student @mttoghani. We show non-asymptotic concentration bounds for social learning with very efficient communication of beliefs. <LINK>', '@AnaMaPorras @RiceECE @RiceEngineering @mttoghani Gracias! Siempre tenemos esa conversación con @SandraAriassu sobre lo diferentes que son las escalas de tiempo en nuestras áreas.']",21,02,435
432,215,1389869832338817024,2209253130,Vito Dichio,"On ArXiv now the output of a long work and a true friendship more than a collaboration - which started about 1y ago, in Stockholm. We play with Statistical Genetics, do calculations, find results, write appendices, discuss things. #biophysics #genetics <LINK>",https://arxiv.org/abs/2105.01428,"This review is about statistical genetics, an interdisciplinary topic between Statistical Physics and Population Biology. The focus is on the phase of Quasi-Linkage Equilibrium (QLE). Our first objective is to clarify under which conditions the QLE phase can be expected to hold in population biology, and how parameters describing a QLE phase relate to underlying population dynamics. Our second objective is to clarify how the stability of the QLE phase is lost. The QLE state was studied at the global genome scale by Neher \& Shraiman (2011): what we will refer to as the Kimura-Neher-Shraiman (KNS) theory describes a population evolving due to the mutations, recombination, genetic drift, natural selection (pairwise epistatic fitness). The main conclusion of KNS is that QLE phase exists at sufficiently high recombination rate (r) with respect to the variability in selection strength (fitness). Combining the results of the KNS theory with the techniques of the Direct Coupling Analysis (DCA), we show that in QLE epistatic fitness can be inferred from the knowledge of the (dynamical) distribution of genotypes in a population. We further consider evolution of a population at higher selection strength with respect to recombination and mutation parameters. We identify a new bi-stable phase which we call the Non-Random Coexistence (NRC) phase where variability persist in the population without either fixating or disappearing. We also identify an intermediate region in the parameter space where a finite population jumps stochastically between a QLE-like state and NRC-like behaviour. ","Statistical Genetics and Direct Coupling Analysis in and out of
Quasi-Linkage Equilibrium",1,"['On ArXiv now the output of a long work and a true friendship more than a collaboration - which started about 1y ago, in Stockholm. \nWe play with Statistical Genetics, do calculations, find results, write appendices, discuss things.\n#biophysics #genetics \n<LINK>']",21,05,259
433,14,1167058977953390593,701638582587371520,Alejandro Perdomo-Ortiz,Fresh out of the oven! :).Check out our new paper comparing #quantum and classical #MachineLearning models when modeling probability distributions constructed from subsets of the stock market. A benchmark readily implementable in ion-trap quantum computers <LINK> <LINK>,https://arxiv.org/abs/1908.10778,"Although several models have been proposed towards assisting machine learning (ML) tasks with quantum computers, a direct comparison of the expressive power and efficiency of classical versus quantum models for datasets originating from real-world applications is one of the key milestones towards a quantum ready era. Here, we take a first step towards addressing this challenge by performing a comparison of the widely used classical ML models known as restricted Boltzmann machines (RBMs), against a recently proposed quantum model, now known as quantum circuit Born machines (QCBMs). Both models address the same hard tasks in unsupervised generative modeling, with QCBMs exploiting the probabilistic nature of quantum mechanics and a candidate for near-term quantum computers, as experimentally demonstrated in three different quantum hardware architectures to date. To address the question of the performance of the quantum model on real-world classical data sets, we construct scenarios from a probabilistic version out of the well-known portfolio optimization problem in finance, by using time-series pricing data from asset subsets of the S\&P500 stock market index. It is remarkable to find that, under the same number of resources in terms of parameters for both classical and quantum models, the quantum models seem to have superior performance on typical instances when compared with the canonical training of the RBMs. Our simulations are grounded on a hardware efficient realization of the QCBMs on ion-trap quantum computers, by using their native gate sets, and therefore readily implementable in near-term quantum devices. ","Classical versus Quantum Models in Machine Learning: Insights from a
Finance Application",1,['Fresh out of the oven! :).Check out our new paper comparing #quantum and classical #MachineLearning models when modeling probability distributions constructed from subsets of the stock market. A benchmark readily implementable in ion-trap quantum computers <LINK> <LINK>'],19,08,270
434,155,1501393377610407946,1055883888159932416,Jason Lee,"What is the analog of ERM for offline RL? We propose primal dual regularized offline rl (PRO-RL), which has many of the properties that makes ERM so successful. <LINK> 1) For general function classes on the value/density ratios, the algorithm allows for agnostic learning, 2) does not require completeness (closure under Bellman operator), and 3) can compete with the best covered policy. <LINK> With Wenhao Zhan (<LINK>) , Baihe Huang, Audrey Huang, and @nanjiang_cs @yuxiangw_cs So my viewpoint is more about 'what is the right loss' so that it has all the benefits erm in stat learning has (agnostic, any func class). If you do ope for all \pi in the FA setting, almost certainly you will have an assumption that is for all \pi Blah needs to hold. @yuxiangw_cs I am also abusing the term erm here. All the losses are empirical risks, but the question is more which risk to use",https://arxiv.org/abs/2202.04634,"Sample-efficiency guarantees for offline reinforcement learning (RL) often rely on strong assumptions on both the function classes (e.g., Bellman-completeness) and the data coverage (e.g., all-policy concentrability). Despite the recent efforts on relaxing these assumptions, existing works are only able to relax one of the two factors, leaving the strong assumption on the other factor intact. As an important open problem, can we achieve sample-efficient offline RL with weak assumptions on both factors? In this paper we answer the question in the positive. We analyze a simple algorithm based on the primal-dual formulation of MDPs, where the dual variables (discounted occupancy) are modeled using a density-ratio function against offline data. With proper regularization, we show that the algorithm enjoys polynomial sample complexity, under only realizability and single-policy concentrability. We also provide alternative analyses based on different assumptions to shed light on the nature of primal-dual algorithms for offline RL. ","Offline Reinforcement Learning with Realizability and Single-policy
Concentrability",5,"['What is the analog of ERM for offline RL? We propose primal dual regularized offline rl (PRO-RL), which has many of the properties that makes ERM so successful. <LINK>', '1) For general function classes on the value/density ratios, the algorithm allows for agnostic learning, 2) does not require completeness (closure under Bellman operator), and 3) can compete with the best covered policy. https://t.co/JaRkGjMfyD', 'With Wenhao Zhan (https://t.co/bi5IdnhFNb) , Baihe Huang, Audrey Huang, and @nanjiang_cs', ""@yuxiangw_cs So my viewpoint is more about 'what is the right loss' so that it has all the benefits erm in stat learning has (agnostic, any func class). If you do ope for all \\pi in the FA setting, almost certainly you will have an assumption that is for all \\pi Blah needs to hold."", '@yuxiangw_cs I am also abusing the term erm here. All the losses are empirical risks, but the question is more which risk to use']",22,02,879
435,188,1336132443317866497,1167063592941891585,Bonaventure Dossou,"We propose an AI-based approach using CNNs to detect Pneumonia from Chest X-rays. Our model achieves high accuracy (91, 04), F1-Score of 97 and AUC 88,04. We also address technical, legal, ethical, and logistical issues, with a blueprint of possible solutions. <LINK> <LINK> CC: @jacobs_bremen @NanaYaaSally Thanks @NanaYaaSally",https://arxiv.org/abs/2012.03487,"Each year, over 2.5 million people, most of them in developed countries, die from pneumonia [1]. Since many studies have proved pneumonia is successfully treatable when timely and correctly diagnosed, many of diagnosis aids have been developed, with AI-based methods achieving high accuracies [2]. However, currently, the usage of AI in pneumonia detection is limited, in particular, due to challenges in generalizing a locally achieved result. In this report, we propose a roadmap for creating and integrating a system that attempts to solve this challenge. We also address various technical, legal, ethical, and logistical issues, with a blueprint of possible solutions. ",An Approach to Intelligent Pneumonia Detection and Integration,3,"['We propose an AI-based approach using CNNs to detect Pneumonia from Chest X-rays. Our model achieves high accuracy (91, 04), F1-Score of 97 and AUC 88,04. We also address technical, legal, ethical, and logistical issues, with a blueprint of possible solutions.\n\n<LINK> <LINK>', 'CC: @jacobs_bremen', '@NanaYaaSally Thanks @NanaYaaSally']",20,12,318
436,207,1337417757109968897,1119248934629773313,Javad Shabani,Not every superconducting transition is a proof that material stays superconducting at milliKelvin. Our new study in Gallium doped Silicon superconductivity shows earlier studies have residual conductivity at mK. But we can fix it. Here is how: <LINK>,https://arxiv.org/abs/2012.04748,"Hyperdoping with gallium (Ga) has been established as a route to observe superconductivity in silicon (Si). The relatively large critical temperatures (T$_{\rm c}$) and magnetic fields (B$_{\rm c}$) make this phase attractive for cryogenic circuit applications, particularly for scalable hybrid superconductor--semiconductor platforms. However, the robustness of Si:Ga superconductivity at millikelvin temperatures is yet to be evaluated. Here, we report the presence of a reentrant resistive transition below T$_{\rm c}$ for Si:Ga whose strength strongly depends on the distribution of the Ga clusters that precipitate in the implanted Si after annealing. By monitoring the reentrant resistance over a wide parameter space of implantation energies and fluences, we determine conditions that significantly improve the coherent coupling of Ga clusters, therefore, eliminating the reentrant transition even at temperatures as low as 20~mK. ","Tailoring Superconducting Phases Observed in Hyperdoped Si:Ga for
Cryogenic Circuit Applications",1,['Not every superconducting transition is a proof that material stays superconducting at milliKelvin. Our new study in Gallium doped Silicon superconductivity shows earlier studies have residual conductivity at mK. But we can fix it. Here is how: <LINK>'],20,12,251
437,89,1425432307981295619,56081214,Jan Witowski,"Today, we release an open-source *meta-repository* for breast cancer mammography classifiers! In a new paper, we use it to evaluate 5 SOTA models on 5 various datasets from around the world. Preprint is now live at: <LINK>, and a thread below: <LINK> If you own a mammography dataset, meta-repository makes it trivial to evaluate multiple SOTA models on your data. On the other hand, if you have a great breast cancer classifier and you want to prove that it's the best, simply follow the instructions in our repo and make a PR! As of now, there are 5 models ready to use on any mammography dataset. With the use of our meta-repository, we evaluated those models on 5 datasets from around the world. 3 are public (DDSM, INBreast, CCMD) and the remaining 2 (NYU) are available for evaluation upon request. <LINK> As expected, models perform better for test sets drawn from the same distribution as the training data. Also, did we mention that you can compare your model performance to human experts? We use our 2020 reader study to allow *anyone* compare their model to radiologists! <LINK> Meta-repository is now live at <LINK>. If you use it to evaluate your data or want to add a model, please send us a message! We’re happy to help and committed to making science more transparent. This was a collaborative effort of many people brought together by @NYUImaging, @cai2r, @NYUDataScience, @NYUAbuDhabi, @JagiellonskiUni. S/o to @kjgeras, @kchonyc, @jchledowski, @farahshamout. Cc: @DrLukeOR, @NAWA_Poland & @gra_ze, @ngsinformatics, @yindalon, @StanfordHAI, @MazurowskiPhD",https://arxiv.org/abs/2108.04800,"Artificial intelligence (AI) is showing promise in improving clinical diagnosis. In breast cancer screening, recent studies show that AI has the potential to improve early cancer diagnosis and reduce unnecessary workup. As the number of proposed models and their complexity grows, it is becoming increasingly difficult to re-implement them. To enable reproducibility of research and to enable comparison between different methods, we release a meta-repository containing models for classification of screening mammograms. This meta-repository creates a framework that enables the evaluation of AI models on any screening mammography data set. At its inception, our meta-repository contains five state-of-the-art models with open-source implementations and cross-platform compatibility. We compare their performance on seven international data sets. Our framework has a flexible design that can be generalized to other medical image analysis tasks. The meta-repository is available at this https URL ",Meta-repository of screening mammography classifiers,7,"['Today, we release an open-source *meta-repository* for breast cancer mammography classifiers! In a new paper, we use it to evaluate 5 SOTA models on 5 various datasets from around the world. Preprint is now live at: <LINK>, and a thread below: <LINK>', ""If you own a mammography dataset, meta-repository makes it trivial to evaluate multiple SOTA models on your data.\nOn the other hand, if you have a great breast cancer classifier and you want to prove that it's the best, simply follow the instructions in our repo and make a PR!"", 'As of now, there are 5 models ready to use on any mammography dataset.\nWith the use of our meta-repository, we evaluated those models on 5 datasets from around the world. 3 are public (DDSM, INBreast, CCMD) and the remaining 2 (NYU) are available for evaluation upon request. 
https://t.co/rSL7sLjT4U', 'As expected, models perform better for test sets drawn from the same distribution as the training data.\nAlso, did we mention that you can compare your model performance to human experts? We use our 2020 reader study to allow *anyone* compare their model to radiologists! https://t.co/LjgHhYWENu', 'Meta-repository is now live at https://t.co/Cytm21Kq3o. If you use it to evaluate your data or want to add a model, please send us a message! We’re happy to help and committed to making science more transparent.', 'This was a collaborative effort of many people brought together by @NYUImaging, @cai2r, @NYUDataScience, @NYUAbuDhabi, @JagiellonskiUni. S/o to @kjgeras, @kchonyc, @jchledowski, @farahshamout.', 'Cc: @DrLukeOR, @NAWA_Poland &amp; @gra_ze, @ngsinformatics, @yindalon, @StanfordHAI, @MazurowskiPhD']",21,08,1573
438,85,1438884403221049349,1069608017841262593,Joseph Ortiz,"Excited to share new work entitled: Incremental Abstraction in Distributed Probabilistic SLAM Graphs. Work with: @talfanevans, @SucarEdgar, @AjdDavison Project page: <LINK> Paper: <LINK> Video: <LINK> Takeaway 1: We abstract the SLAM factor graph into a more compact + semantic graph using network predictions that are accepted / rejected through inference. Takeaway 2: We use GBP (<LINK>) for distributed inference on a graph processor (@graphcoreai IPU). We make GBP work for dynamic graphs and show for complex graphs with many different types of factors, GBP is very efficient and faster than the Ceres Solver!",https://arxiv.org/abs/2109.06241,"Scene graphs represent the key components of a scene in a compact and semantically rich way, but are difficult to build during incremental SLAM operation because of the challenges of robustly identifying abstract scene elements and optimising continually changing, complex graphs. We present a distributed, graph-based SLAM framework for incrementally building scene graphs based on two novel components. First, we propose an incremental abstraction framework in which a neural network proposes abstract scene elements that are incorporated into the factor graph of a feature-based monocular SLAM system. Scene elements are confirmed or rejected through optimisation and incrementally replace the points yielding a more dense, semantic and compact representation. Second, enabled by our novel routing procedure, we use Gaussian Belief Propagation (GBP) for distributed inference on a graph processor. The time per iteration of GBP is structure-agnostic and we demonstrate the speed advantages over direct methods for inference of heterogeneous factor graphs. We run our system on real indoor datasets using planar abstractions and recover the major planes with significant compression. ",Incremental Abstraction in Distributed Probabilistic SLAM Graphs,3,"['Excited to share new work entitled: Incremental Abstraction in Distributed Probabilistic SLAM Graphs. \n\nWork with: @talfanevans, @SucarEdgar, @AjdDavison \n\nProject page: <LINK>\nPaper: <LINK>\nVideo: <LINK>', 'Takeaway 1:\nWe abstract the SLAM factor graph into a more compact + semantic graph using network predictions that are accepted / rejected through inference.', 'Takeaway 2:\nWe use GBP (https://t.co/ymjBGC5cnG) for distributed inference on a graph processor (@graphcoreai IPU). We make GBP work for dynamic graphs and show for complex graphs with many different types of factors, GBP is very efficient and faster than the Ceres Solver!']",21,09,616
439,84,1491894215734743040,66175375,Jason Wang,"New KPIC paper out showing the ability to measure composition of a benchmark brown dwarf companion from high resolution spectra! The inferred C and O abundances are mostly consistent with its host star, which is good. Led by fellow JWang astronomer Ji Wang <LINK>",https://arxiv.org/abs/2202.02477,"A benchmark brown dwarf (BD) is a BD whose properties (e.g., mass and chemical composition) are precisely and independently measured. Benchmark BDs are valuable in testing theoretical evolutionary tracks, spectral synthesis, and atmospheric retrievals for sub-stellar objects. Here, we report results of atmospheric retrieval on a synthetic spectrum and a benchmark BD -- HR 7672~B -- with \petit. First, we test the retrieval framework on a synthetic PHOENIX BT-Settl spectrum with a solar composition. We show that the retrieved C and O abundances are consistent with solar values, but the retrieved C/O is overestimated by 0.13-0.18, which is $\sim$4 times higher than the formal error bar. Second, we perform retrieval on HR 7672~B using high spectral resolution data (R=35,000) from the Keck Planet Imager and Characterizer (KPIC) and near infrared photometry. We retrieve [C/H], [O/H], and C/O to be $-0.24\pm0.05$, $-0.19\pm0.04$, and $0.52\pm0.02$. These values are consistent with those of HR 7672~A within 1.5-$\sigma$. As such, HR 7672~B is among only a few benchmark BDs (along with Gl 570~D and HD 3651~B) that have been demonstrated to have consistent elemental abundances with their primary stars. Our work provides a practical procedure of testing and performing atmospheric retrieval, and sheds light on potential systematics of future retrievals using high- and low-resolution data. ","Retrieving the C and O Abundances of HR 7672~AB: a Solar-Type Primary
Star with a Benchmark Brown Dwarf",1,"['New KPIC paper out showing the ability to measure composition of a benchmark brown dwarf companion from high resolution spectra! The inferred C and O abundances are mostly consistent with its host star, which is good. Led by fellow JWang astronomer Ji Wang <LINK>']",22,02,263
440,120,1249742288571400197,15254510,Tobias Weyand,Excited to announce our CVPR'20 paper on the Google Landmarks Dataset v2: <LINK> GLDv2 is a new benchmark for image retrieval and instance-level object recognition with 5M images and over 200k classes and a special focus on application-relevant challenges. <LINK>,https://arxiv.org/abs/2004.01804,"While image retrieval and instance recognition techniques are progressing rapidly, there is a need for challenging datasets to accurately measure their performance -- while posing novel challenges that are relevant for practical applications. We introduce the Google Landmarks Dataset v2 (GLDv2), a new benchmark for large-scale, fine-grained instance recognition and image retrieval in the domain of human-made and natural landmarks. GLDv2 is the largest such dataset to date by a large margin, including over 5M images and 200k distinct instance labels. Its test set consists of 118k images with ground truth annotations for both the retrieval and recognition tasks. The ground truth construction involved over 800 hours of human annotator work. Our new dataset has several challenging properties inspired by real world applications that previous datasets did not consider: An extremely long-tailed class distribution, a large fraction of out-of-domain test photos and large intra-class variability. The dataset is sourced from Wikimedia Commons, the world's largest crowdsourced collection of landmark photos. We provide baseline results for both recognition and retrieval tasks based on state-of-the-art methods as well as competitive results from a public challenge. We further demonstrate the suitability of the dataset for transfer learning by showing that image embeddings trained on it achieve competitive retrieval performance on independent datasets. The dataset images, ground-truth and metric scoring code are available at this https URL ","Google Landmarks Dataset v2 -- A Large-Scale Benchmark for
Instance-Level Recognition and Retrieval",1,"[""Excited to announce our CVPR'20 paper on the Google Landmarks Dataset v2: <LINK>\n\nGLDv2 is a new benchmark for image retrieval and instance-level object recognition with 5M images and over 200k classes and a special focus on application-relevant challenges. <LINK>""]",20,04,263
441,105,1371745025638416384,953655661430222848,Moritz Schauer,"New paper <LINK> on arXiv ""Sticky PDMP samplers for sparse and local inference problems"" by @jbierkens @SebastianoGraz3 @MeulenFrank and me. <LINK> The sampler moves like e.g. the Zig-Zag but ""sticks"" to the coordinate axes... to sample statistical models with spike and slab priors for high-dimensional variable selection <LINK> We can combine spike and slab priors with a Gaussian smoothing prior to filter out noise and solve the combinatorial problem of selecting the right roughly 100k background pixels in this structured sparse problem <LINK> A sampler trajectory for this example shows the mixing and convergence <LINK>",https://arxiv.org/abs/2103.08478,"We construct a new class of efficient Monte Carlo methods based on continuous-time piecewise deterministic Markov processes (PDMPs) suitable for inference in high dimensional sparse models, i.e. models for which there is prior knowledge that many coordinates are likely to be exactly $0$. This is achieved with the fairly simple idea of endowing existing PDMP samplers with 'sticky' coordinate axes, coordinate planes etc. Upon hitting those subspaces, an event is triggered during which the process sticks to the subspace, this way spending some time in a sub-model. This results in non-reversible jumps between different (sub-)models. While we show that PDMP samplers in general can be made sticky, we mainly focus on the Zig-Zag sampler. The computational efficiency of our method (and implementation) is established through numerical experiments where both the sample size and the dimension of the parameter space are large. ",Sticky PDMP samplers for sparse and local inference problems,4,"['New paper <LINK> on arXiv ""Sticky PDMP samplers for sparse and local inference problems"" by @jbierkens @SebastianoGraz3 @MeulenFrank and me. <LINK>', 'The sampler moves like e.g. the Zig-Zag but ""sticks"" to the coordinate axes... to sample statistical models with spike and slab priors for high-dimensional variable selection https://t.co/OKOmVtxgrB', 'We can combine spike and slab priors with a Gaussian smoothing prior to filter out noise and solve the combinatorial problem of selecting the right roughly 100k background pixels in this structured sparse problem https://t.co/WmJP4KHPOO', 'A sampler trajectory for this example shows the mixing and convergence https://t.co/tnuzNNXpsF']",21,03,632
442,156,1463682412056125446,805911864123211776,Keshav Motwani,"I am very excited to share my first statistics paper, with @aaron_molstad and @rbacher! We propose a method to fit a classification model using multiple datasets with response categories at different resolutions, as is common in single-cell datasets <LINK> <LINK> The process of manual cell type annotation in single-cell genomics is time-intensive and subjective, which has led to different studies describing cell types with labels of varying degrees of resolution. For example, if a cell is truly of cell type “effector memory CD4+ T cell”, it is equally valid, though less informative, to call it a “memory CD4+ T cell”, a “CD4+ T cell”, or a “T cell.” Depending on the investigator’s specific research interests and availability of external data (e.g. protein expression), it is common to see all of these labels across published datasets. Despite the existence of hundreds of publicly available datasets with expertly annotated cell types, existing methods are limited in their ability to integrate a wide-array of datasets due to varying label resolution. In our paper, we propose a new classification method which will allow investigators to use many datasets jointly to train a unified classification model without loss of information by utilizing the known relationships between labels. Our method allows one to make cell type predictions/annotations at the finest resolution labels allowed by the union of all datasets’ labels. We applied the method to ten single-cell genomics datasets from peripheral blood annotated at varying resolutions, as depicted in this figure. <LINK> This method could be used in other contexts as well. For example, in one dataset, we may know whether a patient is a control, disease subtype A, or disease subtype B, whereas in another dataset, we only know if a patient is a control or has the disease. We may want to combine both datasets for fitting the model for increased sample size, but still be able to predict subtypes on a new data point. Our method would allow for this. Check out the software here: <LINK> and a user guide here: <LINK>",https://arxiv.org/abs/2111.12149,"Categorizing individual cells into one of many known cell type categories, also known as cell type annotation, is a critical step in the analysis of single-cell genomics data. The current process of annotation is time-intensive and subjective, which has led to different studies describing cell types with labels of varying degrees of resolution. While supervised learning approaches have provided automated solutions to annotation, there remains a significant challenge in fitting a unified model for multiple datasets with inconsistent labels. In this article, we propose a new multinomial logistic regression estimator which can be used to model cell type probabilities by integrating multiple datasets with labels of varying resolution. To compute our estimator, we solve a nonconvex optimization problem using a blockwise proximal gradient descent algorithm. We show through simulation studies that our approach estimates cell type probabilities more accurately than competitors in a wide variety of scenarios. We apply our method to ten single-cell RNA-seq datasets and demonstrate its utility in predicting fine resolution cell type labels on unlabeled data as well as refining cell type labels on data with existing coarse resolution annotations. 
An R package implementing the method is available at this https URL and the collection of datasets we analyze is available at this https URL ","Binned multinomial logistic regression for integrative cell type
annotation",11,"['I am very excited to share my first statistics paper, with @aaron_molstad and @rbacher! We propose a method to fit a classification model using multiple datasets with response categories at different resolutions, as is common in single-cell datasets <LINK> <LINK>', 'The process of manual cell type annotation in single-cell genomics is time-intensive and subjective, which has led to different studies describing cell types with labels of varying degrees of resolution.', 'For example, if a cell is truly of cell type “effector memory CD4+ T cell”, it is equally valid, though less informative, to call it a “memory CD4+ T cell”, a “CD4+ T cell”, or a “T cell.”', 'Depending on the investigator’s specific research interests and availability of external data (e.g. protein expression), it is common to see all of these labels across published datasets.', 'Despite the existence of hundreds of publicly available datasets with expertly annotated cell types, existing methods are limited in their ability to integrate a wide-array of datasets due to varying label resolution.', 'In our paper, we propose a new classification method which will allow investigators to use many datasets jointly to train a unified classification model without loss of information by utilizing the known relationships between labels.', 'Our method allows one to make cell type predictions/annotations at the finest resolution labels allowed by the union of all datasets’ labels.', 'We applied the method to ten single-cell genomics datasets from peripheral blood annotated at varying resolutions, as depicted in this figure. https://t.co/VbCtqeKziu', 'This method could be used in other contexts as well. For example, in one dataset, we may know whether a patient is a control, disease subtype A, or disease subtype B, whereas in another dataset, we only know if a patient is a control or has the disease.', 'We may want to combine both datasets for fitting the model for increased sample size, but still be able to predict subtypes on a new data point. Our method would allow for this.', 'Check out the software here: https://t.co/VWQyJ6YRIB and a user guide here: https://t.co/0OU4tLGZ65']",21,11,2086
443,96,1113469610051932160,28889546,Dr. Qusai Al Shidi,My paper is on Arxiv. A new 2fluid collisional mhd model for the sun's chromosphere. <LINK> @brewstronomy I got that elevator pitch version prepared depending on understanding and you'll get to that point too for any of your papers. Unfortunately I'm unsure twitter character limit is conducive.. I should try.,https://arxiv.org/abs/1904.01572,"The sun's chromosphere is a highly dynamic, partially-ionized region where spicules (hot jets of plasma) form. Here we present a two-fluid MHD model to study the chromosphere, which includes ion-neutral interaction and frictional heating. Our simulation recovers a magnetic canopy shape that forms quickly, but is also quickly disrupted by the formation of a jet. Our simulation produces a shock self-consistently, where the jet is driven by the frictional heating, which is much greater than the ohmic heating. Thus, our simulation demonstrates that the jet could be driven purely by thermal effects due to ion-neutral collisions and not by magnetic reconnection. We plan to improve the model to include photo-chemical effects and radiation. ","Time-Dependent Two-Fluid Magnetohydrodynamic Model and Simulation of the
Chromosphere",2,"[""My paper is on Arxiv. A new 2fluid collisional mhd model for the sun's chromosphere.\n<LINK>"", ""@brewstronomy I got that elevator pitch version prepared depending on understanding and you'll get to that point too for any of your papers. Unfortunately I'm unsure twitter character limit is condusive.. I should try.""]",19,04,310
444,133,1379737640388472834,14327235,Jack Valmadre,"New paper on arxiv! Local metrics for multi-object tracking <LINK> We propose metrics for MOT where tracks only need to be correct within a finite temporal horizon. Varying the horizon reveals the distance at which trackers make association errors 1/n A difficult question in MOT is how to measure accuracy when some degree of association error is tolerable. Various association metrics have been proposed over the years (MOTA, HOTA, mutual info, Rand index, ...) and each defines an implicit trade-off with detection 2/n Local metrics provide an alternative mechanism to explicitly specify the trade-off between detection and association. Instead of choosing an association metric and combining it with a detection metric, benchmarks can just choose one (or more) temporal horizon(s) 3/n We further propose a decomposition of tracker errors into detection (false-negative and false-positive) and association (split and merge). Together with the temporal horizon, these constitute valuable tools for analysing and comparing MOT algorithms 4/n Shown here: Average Local Tracking Accuracy (ALTA) as a function of temporal horizon for state-of-the-art trackers on MOT17. The metric ranges from pure detection (horizon &lt; 1 frame) to strict tracking (horizon &gt; seq len). Asterisks denote the use of a custom detector 5/n <LINK>",https://arxiv.org/abs/2104.02631,"This paper introduces temporally local metrics for Multi-Object Tracking. These metrics are obtained by restricting existing metrics based on track matching to a finite temporal horizon, and provide new insight into the ability of trackers to maintain identity over time. Moreover, the horizon parameter offers a novel, meaningful mechanism by which to define the relative importance of detection and association, a common dilemma in applications where imperfect association is tolerable. It is shown that the historical Average Tracking Accuracy (ATA) metric exhibits superior sensitivity to association, enabling its proposed local variant, ALTA, to capture a wide range of characteristics. In particular, ALTA is better equipped to identify advances in association independent of detection. The paper further presents an error decomposition for ATA that reveals the impact of four distinct error types and is equally applicable to ALTA. The diagnostic capabilities of ALTA are demonstrated on the MOT 2017 and Waymo Open Dataset benchmarks. ",Local Metrics for Multi-Object Tracking,5,"['New paper on arxiv!\nLocal metrics for multi-object tracking\n<LINK>\n\nWe propose metrics for MOT where tracks only need to be correct within a finite temporal horizon. Varying the horizon reveals the distance at which trackers make association errors\n\n1/n', 'A difficult question in MOT is how to measure accuracy when some degree of association error is tolerable. Various association metrics have been proposed over the years (MOTA, HOTA, mutual info, Rand index, ...) and each defines an implicit trade-off with detection\n\n2/n', 'Local metrics provide an alternative mechanism to explicitly specify the trade-off between detection and association. Instead of choosing an association metric and combining it with a detection metric, benchmarks can just choose one (or more) temporal horizon(s)\n\n3/n', 'We further propose a decomposition of tracker errors into detection (false-negative and false-positive) and association (split and merge). 
Together with the temporal horizon, these constitute valuable tools for analysing and comparing MOT algorithms\n\n4/n', 'Shown here: Average Local Tracking Accuracy (ALTA) as a function of temporal horizon for state-of-the-art trackers on MOT17. The metric ranges from pure detection (horizon &lt; 1 frame) to strict tracking (horizon &gt; seq len). Asterisks denote the use of a custom detector\n\n5/n https://t.co/aVWbRoqvjD']",21,04,1328
445,104,1112884635682320384,2930561996,Foivos Diakogiannis,"From astronomy to computer vision: new paper on semantic segmentation (includes multi-tasking as well). If you are into this type of problems, you will enjoy the discussion on the Generalized Dice loss. Code and blog will follow. With @cwperth <LINK> <LINK> From the paper: ""not all loss functions are created equal"". Gradient flow and Laplacian operator on a 2D toy problem for a set of losses is intuitive for their performance (rightmost column is our choice). <LINK>",https://arxiv.org/abs/1904.00592,"Scene understanding of high resolution aerial images is of great importance for the task of automated monitoring in various remote sensing applications. Due to the large within-class and small between-class variance in pixel values of objects of interest, this remains a challenging task. In recent years, deep convolutional neural networks have started being used in remote sensing applications and demonstrate state of the art performance for pixel level classification of objects. \textcolor{black}{Here we propose a reliable framework for performant results for the task of semantic segmentation of monotemporal very high resolution aerial images. Our framework consists of a novel deep learning architecture, ResUNet-a, and a novel loss function based on the Dice loss. ResUNet-a uses a UNet encoder/decoder backbone, in combination with residual connections, atrous convolutions, pyramid scene parsing pooling and multi-tasking inference. ResUNet-a infers sequentially the boundary of the objects, the distance transform of the segmentation mask, the segmentation mask and a colored reconstruction of the input. Each of the tasks is conditioned on the inference of the previous ones, thus establishing a conditioned relationship between the various tasks, as this is described through the architecture's computation graph. We analyse the performance of several flavours of the Generalized Dice loss for semantic segmentation, and we introduce a novel variant loss function for semantic segmentation of objects that has excellent convergence properties and behaves well even under the presence of highly imbalanced classes.} The performance of our modeling framework is evaluated on the ISPRS 2D Potsdam dataset. Results show state-of-the-art performance with an average F1 score of 92.9\% over all classes for our best model. ","ResUNet-a: a deep learning framework for semantic segmentation of
remotely sensed data",2,"['From astronomy to computer vision: new paper on semantic segmentation (includes multi-tasking as well). If you are into this type of problems, you will enjoy the discussion on the Generalized Dice loss. Code and blog will follow. \n\nWith @cwperth \n<LINK> <LINK>', 'From the paper: ""not all loss functions are created equal"". Gradient flow and Laplacian operator on a 2D toy problem for a set of losses is intuitive for their performance (rightmost column is our choice). https://t.co/4omZ0tEFSS']",19,04,471
446,109,1336048085840097282,4365927557,Dr. Jake Turner 🌅,"**New 1st Author Paper*** ""Decaying Orbit of the Hot Jupiter WASP-12b: Confirmation with TESS Observations"" Paper can be found here: <LINK> Co-authors: @AstroAndrew123 @DrRayJay THREAD/ @CSInst @CornellAstro @Cornell @NASA_TESS #Exoplanets #Astronomy @NASA <LINK> WASP-12b is an ultra-hot Jupiter (~2600 K) that has been studied extensively since 2009. It is thought to have clouds made of corundum. Also, the atmosphere of the planet is escaping & will completely evaporate in 300 Myr. But what about its orbit? 1/ <LINK> Maciejewski et al. 2016 (<LINK>) were the first to report evidence of a decreasing orbital period for WASP-12b. Follow-up observations by Yee et al. 2020 (<LINK>) determined conclusively that the orbit of WASP-12b was decaying. 2/ <LINK> Observations of orbital decay on close-in planets enhances our understanding of the hot Jupiter population and their evolution 3/ <LINK> Motivated by these indications of WASP-12b’s changing period, my team (@AstroAndrew123, @DrRayJay) decided to use observations from @NASA’s @NASA_TESS to verify its orbital decay, and derive updated orbital parameters and planetary properties. 4/ <LINK> We modeled each individual transit (the phase folded light curve is below). We also detected the occultation of the planet but only after binning the entire @NASA_TESS data These data were modeled with the EXOplanet Modeling Package (EXOMOP) developed in my PhD 5/ <LINK> We found that the timing data from the transits & occultation from @NASA_TESS confirmed the decaying orbit of WASP-12b! We compared the decaying orbit model with an apsidal precession model and the decaying orbit model fit the data much better. 6/ <LINK> Our finding indicates an orbital decay lifetime of 2.9 Myr, shorter than the estimated mass-loss timescale of 300 Myr. We also find a modified tidal quality factor of Q’⋆ = 1.39×10^5. The cause of such a low tidal quality factor requires additional theoretical work 7/ <LINK> Our study highlights the power of long-term high-precision ground & space-based transit & occultation observations for understanding the orbital evolution of close-in giant planets. It will be interesting to see what other discoveries @NASA_TESS will make in the future 8/8 @decaelus @NASA_TESS Thanks @decaelus! @wilson_cauley @NASA_TESS Thanks! @PlavchanPeter @decaelus @NASA_TESS Thanks Peter!",https://arxiv.org/abs/2012.02211,"Theory suggests that the orbits of some close-in giant planets should decay due to tidal interactions with their host stars. To date, WASP-12b is the only hot Jupiter reported to have a decaying orbit, at a rate of 29$\pm$2 msec year$^{-1}$. We analyzed data from NASA's Transiting Exoplanet Survey Satellite (TESS) to verify that WASP-12b's orbit is indeed changing. We find that the TESS transit and occultation data are consistent with a decaying orbit with an updated period of 1.091420090$\pm$0.000000041 days and a decay rate of 32.53$\pm$1.62 msec year$^{-1}$. We find an orbital decay timescale of $\tau$ = P/$|\dot P|$ = 2.90$\pm$0.14 Myr. If the observed decay results from tidal dissipation, the modified tidal quality factor is Q'$_{\star}$ = 1.39$\pm$0.15 $\times 10^{5}$, which falls at the lower end of values derived for binary star systems and hot Jupiters. Our result highlights the power of space-based photometry for investigating the orbital evolution of short-period exoplanets. ","Decaying Orbit of the Hot Jupiter WASP-12b: Confirmation with TESS
Observations",12,"['**New 1st Author Paper***\n\n""Decaying Orbit of the Hot Jupiter WASP-12b: Confirmation with TESS Observations"" \n\nPaper can be found here: <LINK>\n\nCo-authors: @AstroAndrew123 @DrRayJay \n\nTHREAD/\n@CSInst @CornellAstro @Cornell @NASA_TESS \n#Exoplanets #Astronomy @NASA <LINK>', 'WASP-12b is an ultra-hot Jupiter (~2600 K) that has been studied extensively since 2009. It is thought to have clouds made of corundum. Also, the atmosphere of the planet is escaping &amp; will completely evaporate in 300 Myr. \n\nBut what about its orbit? \n1/ https://t.co/QNqnh6d2Lq', 'Maciejewski et al. 2016 (https://t.co/uWu5gge7ob) were the first to report evidence of a decreasing orbital period for WASP-12b. \n\nFollow-up observations by Yee et al. 2020\n(https://t.co/6bPk0hiZbe) determined conclusively that the orbit of WASP-12b was decaying. \n\n2/ https://t.co/PZyBYC8BYj', 'Observations of orbital decay on close-in planets enhances our understanding of the hot Jupiter population and their evolution \n\n3/ https://t.co/ZqgJSEvJA5', 'Motivated by these indications of WASP-12b’s changing period, my team (@AstroAndrew123, @DrRayJay) decided to use observations from @NASA’s @NASA_TESS to verify its orbital decay, and derive updated orbital parameters and planetary properties. \n\n4/ https://t.co/QxLoEKm93c', 'We modeled each individual transit (the phase folded light curve is below). We also detected the occultation of the planet but only after binning the entire @NASA_TESS data \n\nThese data were modeled with the EXOplanet Modeling Package (EXOMOP) developed in my PhD\n\n5/ https://t.co/TKKGKPx2UO', 'We found that the timing data from the transits &amp; occultation from @NASA_TESS confirmed the decaying orbit of WASP-12b! \n\nWe compared the decaying orbit model with an apsidal precession model and the decaying orbit model fit the data much better. \n\n6/ https://t.co/NRbp7Nbamx', 'Our finding indicates an orbital decay lifetime of 2.9 Myr, shorter than the estimated mass-loss timescale of 300 Myr. \n\nWe also find a modified tidal quality factor of Q’⋆ = 1.39×10^5. The cause of such a low tidal quality factor requires additional theoretical work\n\n7/ https://t.co/DkGOqDdtoD', 'Our study highlights the power of long-term high-precision ground &amp; space-based transit &amp; occultation observations for understanding the orbital evolution of close-in giant planets.\n\nIt will be interesting to see what other discoveries @NASA_TESS will make in the future \n\n8/8', '@decaelus @NASA_TESS Thanks @decaelus!', '@wilson_cauley @NASA_TESS Thanks!', '@PlavchanPeter @decaelus @NASA_TESS Thanks Peter!']",20,12,2363
447,193,1451620449054433283,2577596593,Chelsea Finn,"Large language models (LLMs) often make mistakes that are difficult to correct. We study the problem of quickly editing these models: Paper: <LINK> Code: <LINK> w/ @_eric_mitchell_, C. Lin, @ABosselut, @chrmanning thread 🧵👇 <LINK> We assume a pre-trained model & a dataset that covers many possible model edits Then, we meta-train a model editor that predicts a model update that: - edits the model - otherwise keeps the model behavior the same (2/4) <LINK> You can train model editors for massive models (e.g. GPT-J, T5-11B) in &lt;1 day on a single GPU. Edits with the resulting model editor are extremely fast, with edit success rate of 80-90%. (3/4) <LINK> While I think that MEND makes significant progress on the problem of model editing, still lots of open challenges including: - deeper edit generalization - making many model edits It was also a lot of fun to collaborate with @stanfordnlp! (4/4) @jackclarkSF @chrmanning @_eric_mitchell_ @ABosselut Thanks Jack!",https://arxiv.org/abs/2110.11309,"While large pre-trained models have enabled impressive results on a variety of downstream tasks, the largest existing models still make errors, and even accurate predictions may become outdated over time. Because detecting all such failures at training time is impossible, enabling both developers and end users of such models to correct inaccurate outputs while leaving the model otherwise intact is desirable. However, the distributed, black-box nature of the representations learned by large neural networks makes producing such targeted edits difficult. If presented with only a single problematic input and new desired output, fine-tuning approaches tend to overfit; other editing algorithms are either computationally infeasible or simply ineffective when applied to very large models. To enable easy post-hoc editing at scale, we propose Model Editor Networks with Gradient Decomposition (MEND), a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model. MEND learns to transform the gradient obtained by standard fine-tuning, using a low-rank decomposition of the gradient to make the parameterization of this transformation tractable. MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models; once trained MEND enables rapid application of new edits to the pre-trained model. Our experiments with T5, GPT, BERT, and BART models show that MEND is the only approach to model editing that produces effective edits for models with tens of millions to over 10 billion parameters. Implementation available at this https URL ",Fast Model Editing at Scale,5,"['Large language models (LLMs) often make mistakes that are difficult to correct.\n\nWe study the problem of quickly editing these models:\nPaper: <LINK>\nCode: <LINK>\n\nw/ @_eric_mitchell_, C. Lin, @ABosselut, @chrmanning\n\nthread 🧵👇 <LINK>', 'We assume a pre-trained model &amp; a dataset that covers many possible model edits\n \nThen, we meta-train a model editor that predicts a model update that:\n- edits the model\n- otherwise keeps the model behavior the same\n\n(2/4) https://t.co/N8ZT3KE93o', 'You can train model editors for massive models (e.g. 
GPT-J, T5-11B) in &lt;1 day on a single GPU.\n\nEdits with the resulting model editor are extremely fast, with edit success rate of 80-90%.\n(3/4) https://t.co/YGWT2jdALy', 'While I think that MEND makes significant progress on the problem of model editing, still lots of open challenges including:\n- deeper edit generalization\n- making many model edits\n\nIt was also a lot of fun to collaborate with @stanfordnlp!\n(4/4)', '@jackclarkSF @chrmanning @_eric_mitchell_ @ABosselut Thanks Jack!']",21,10,972
448,30,1332377870480662528,108977933,David Rothschild 🇺🇦,"Let me emphasize 3 findings of this new working paper <LINK> (with a bunch of co-authors in quoted tweet) (1) News is still a relatively small part of YouTube, but YouTube is huge, news is growing, and right-wing news in particular is growing very fast. <LINK> (2) Conditional on consuming right-wing news on YouTube, people are watching a lot in each setting & coming back regularly (3) Recommendation engines not key driver of right-wing news: people click on links from right-wing online sites, emails, etc. or they are search for it. Facebook gets all the attention, because their leadership keeps going pubic with their support of various right-wing initiatives, but if you worry about people living in extreme right-wing content YouTube's explosive growth in right-wing content & consumption way more concerning. @James_Monroe_17 @dukeblue24 Mainstream media has a strong pro-Republican bias driven by falsely equating the validity & quality of the two major parties. Social Media generally reflects the market they have created, which (while generally reflecting MSM) tends towards more extreme content (both sides). Censorship is *never* the answer: We need to shift objectives of mainstream media away from false equivalency to objectivity, regulate social media with push towards transparency (exact opposite of what most people are pushing for), and Democrats should embrace partisan media. Fact Republicans have massive media empires is a asymmetric pull on selection & framing of all news, which Democrats need to counter, or any other changes will be relatively futile. Republican partisan media has no journalistic standards, but Democratic partisan media can be held to the highest standards: just have no need to balance their lineup with Republicans and temper their advice to appear neutral.",https://arxiv.org/abs/2011.12843,"Although it is under-studied relative to other social media platforms, YouTube is arguably the largest and most engaging online media consumption platform in the world. Recently, YouTube's scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical anti-woke channels, both of which have been claimed to direct attention to radical political content. Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior, on and off YouTube, from January 2016 through December 2019. Using a labeled set of political news channels, we find that news consumption on YouTube is dominated by mainstream and largely centrist sources. Consumers of far-right content, while more engaged than average, represent a small and stable percentage of news consumers. However, consumption of anti-woke content, defined in terms of its opposition to progressive intellectual and political agendas, grew steadily in popularity and is correlated with consumption of far-right content off-platform. We find no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right. Rather, consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole. 
",Examining the consumption of radical content on YouTube,7,"['Let me emphasize 3 findings of this new working paper <LINK> (with a bunch of co-authors in quoted tweet) (1) News is still a relatively small part of YouTube, but YouTube is huge, news is growing, and right-wing news in particular is growing very fast. <LINK>', '(2) Conditional on consuming right-wing news on YouTube, people are watching a lot in each setting &amp; coming back regularly (3) Recommendation engines not key driver of right-wing news: people click on links from right-wing online sites, emails, etc. or they are search for it.', ""Facebook gets all the attention, because their leadership keeps going pubic with their support of various right-wing initiatives, but if you worry about people living in extreme right-wing content YouTube's explosive growth in right-wing content &amp; consumption way more concerning."", '@James_Monroe_17 @dukeblue24 Mainstream media has a strong pro-Republican bias driven by falsely equating the validity &amp; quality of the two major parties. Social Media generally reflects the market they have created, which (while generally reflecting MSM) tends towards more extreme content (both sides).', 'Censorship is *never* the answer: We need to shift objectives of mainstream media away from false equivalency to objectivity, regulate social media with push towards transparency (exact opposite of what most people are pushing for), and Democrats should embrace partisan media.', 'Fact Republicans have massive media empires is a asymmetric pull on selection &amp; framing of all news, which Democrats need to counter, or any other changes will be relatively futile.', 'Republican partisan media has no journalistic standards, but Democratic partisan media can be held to the highest standards: just have no need to balance their lineup with Republicans and temper their advice to appear neutral.']",20,11,1810
449,30,1188469688579350533,2423179856,Edward Raff,"Would a File by Any Other Name Seem as Malicious? New @BoozAllen paper with @AndreNguyen16 & @AaronSantMiller accepted to @IEEEorg #BigData conference! There are a number of cases where you might not have an EXE file, but want to know if it was malicious. <LINK> <LINK> Turns out, you can get surprisingly far by just looking at file names, with an AUC=0.98 on the @EndgameInc Ember2017 corpus! LightGBM is domain knowledge features, CharCNN our approach just using the file name. Why does this even work? <LINK> Malware authors are generally lazy. If we look at clusters in CharCNN file name space, we can find a number of naming patterns used: pretending to be a backup, random tokens, copying common app name (e.g, iTunes), or even lazily just calling themselves Ransomware. <LINK> We did a lot of checking to try and avoid as much potential label leakage, and to check that we are more than just a naive black list on file names - which would fail to classify most files. File name based classification isn't a panacea, but a helpful approach. Check it out! <LINK>",https://arxiv.org/abs/1910.04753,"Successful malware attacks on information technology systems can cause millions of dollars in damage, the exposure of sensitive and private information, and the irreversible destruction of data. Anti-virus systems that analyze a file's contents use a combination of static and dynamic analysis to detect and remove/remediate such malware. However, examining a file's entire contents is not always possible in practice, as the volume and velocity of incoming data may be too high, or access to the underlying file contents may be restricted or unavailable. If it were possible to obtain estimates of a file's relative likelihood of being malicious without looking at the file contents, we could better prioritize file processing order and aid analysts in situations where a file is unavailable. In this work, we demonstrate that file names can contain information predictive of the presence of malware in a file. In particular, we show the effectiveness of a character-level convolutional neural network at predicting malware status using file names on Endgame's EMBER malware detection benchmark dataset. ",Would a File by Any Other Name Seem as Malicious?,4,"['Would a File by Any Other Name Seem as Malicious? New @BoozAllen paper with @AndreNguyen16 &amp; @AaronSantMiller accepted to @IEEEorg #BigData conference! There are a number of cases where you might not have an EXE file, but want to know if it was malicious. <LINK> <LINK>', 'Turns out, you can get surprisingly far by just looking at file names, with an AUC=0.98 on the @EndgameInc Ember2017 corpus! LightGBM is domain knowledge features, CharCNN our approach just using the file name. Why does this even work? https://t.co/s1QlMTgBCQ', 'Malware authors are generally lazy. If we look at clusters in CharCNN file name space, we can find a number of naming patterns used: pretending to be a backup, random tokens, copying common app name (e.g, iTunes), or even lazily just calling themselves Ransomware. https://t.co/oE822eoVFP', ""We did a lot of checking to try and avoid as much potential label leakage, and to check that we are more than just a naive black list on file names - which would fail to classify most files. \n\nFile name based classification isn't a panacea, but a helpful approach. Check it out! https://t.co/xHir6O5WxQ""]",19,10,1069
450,96,1425165149904809987,318652707,Fangzhou Jiang (Arthur),"Check out our new paper on SIDM subhalo s. We compare the effects of dark ram-pressure and tides on satellite evolution, and show that the central density of compact satellites can be used to constrain cross-section parameters, in a proof-of-concept way. <LINK>",https://arxiv.org/abs/2108.03243,"Dark matter self interactions can leave distinctive signatures on the properties of satellite galaxies around Milky Way--like hosts through their impact on tidal stripping, ram pressure, and gravothermal collapse. We delineate the regions of self-interacting dark matter parameter space---specified by interaction cross section and a velocity scale---where each of these effects dominates, and show how the relative mass loss depends on the satellite's initial mass, density profile and orbit. We obtain novel, conservative constraints in this parameter space using Milky Way satellite galaxies with notably high central densities and small pericenter distances. Our results for self-interacting dark matter models, in combination with constraints from clusters of galaxies, favor velocity-dependent cross sections that lead to gravothermal core collapse in the densest satellites. ","Orbital Evolution of Satellite Galaxies in Self-Interacting Dark Matter
Models",1,"['Check out our new paper on SIDM subhalo s. We compare the effects of dark ram-pressure and tides on satellite evolution, and show that the central density of compact satellites can be used to constrain cross-section parameters, in a proof-of-concept way. <LINK>']",21,08,261
451,31,1020106065059328000,837133583558987776,Colin Raffel,"New paper w/ @D_Berthelot_ML Aurko Roy and @goodfellow_ian where we propose an adversarial regularizer for improving interpolation in autoencoders and measure whether it also improves representation learning performance. Paper <LINK>, code <LINK> <LINK>",http://arxiv.org/abs/1807.07543,"Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can ""interpolate"": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations. ","Understanding and Improving Interpolation in Autoencoders via an
Adversarial Regularizer",1,"['New paper w/ @D_Berthelot_ML Aurko Roy and @goodfellow_ian where we propose an adversarial regularizer for improving interpolation in autoencoders and measure whether it also improves representation learning performance. Paper <LINK>, code <LINK> <LINK>']",18,07,253
452,42,1298018355513659392,536866317,Hernan Garcia,"New paper with Nick Lammers, @yangjoonkim @jiaxi_zhao94! Transcriptional bursts occur on times scales of minutes to hours, but transcription factor binding lasts &lt;5s. We review several molecular models that can reconcile these dissimilar timescales. <LINK> <LINK>",http://arxiv.org/abs/2008.09225,"Eukaryotic transcription generally occurs in bursts of activity lasting minutes to hours; however, state-of-the-art measurements have revealed that many of the molecular processes that underlie bursting, such as transcription factor binding to DNA, unfold on timescales of seconds. This temporal disconnect lies at the heart of a broader challenge in physical biology of predicting transcriptional outcomes and cellular decision-making from the dynamics of underlying molecular processes. Here, we review how new dynamical information about the processes underlying transcriptional control can be combined with theoretical models that predict not only averaged transcriptional dynamics, but also their variability, to formulate testable hypotheses about the molecular mechanisms underlying transcriptional bursting and control. ","A matter of time: Using dynamics and theory to uncover mechanisms of
transcriptional bursting",1,"['New paper with Nick Lammers, @yangjoonkim @jiaxi_zhao94!\nTranscriptional bursts occur on times scales of minutes to hours, but transcription factor binding lasts &lt;5s. We review several molecular models that can reconcile these dissimilar timescales.\n<LINK> <LINK>']",20,08,266