Per-column statistics for the dataset (row index runs 0–217; for the two string columns, min/max are string lengths; "B" abbreviates billions and "k" thousands):

| column | dtype | min | max |
| --- | --- | --- | --- |
| Unnamed: 0 | int64 | 0 | 217 |
| id | int64 | 1,526,373,200B | 1,546,707,910B |
| tweet_text | string | 76 | 140 |
| paper_reference | string | 20 | 113 |
| like_count | int64 | 8 | 2.72k |
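These statistics can be reproduced directly from the dump. A minimal sketch, assuming the table is stored as `paper_tweets.csv` (a hypothetical filename) with the five columns above:

```python
import pandas as pd

# Load the dump; "paper_tweets.csv" is a hypothetical filename.
# Tweet IDs exceed float64's 53-bit integer precision, which is why the
# ids in the rows below end in trailing zeros; read them as strings to
# avoid further rounding if the file still holds full-precision values.
df = pd.read_csv("paper_tweets.csv", dtype={"id": "string"})

# Reproduce the per-column statistics shown above.
print(df["Unnamed: 0"].agg(["min", "max"]))                 # 0, 217
print(df["tweet_text"].str.len().agg(["min", "max"]))       # 76, 140
print(df["paper_reference"].str.len().agg(["min", "max"]))  # 20, 113
print(df["like_count"].agg(["min", "max"]))                 # 8, ~2.72k
```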
Rows 100–199:

| Unnamed: 0 | id | tweet_text | paper_reference | like_count |
| --- | --- | --- | --- | --- |
| 100 | 1,536,493,418,305,704,000 | How Much is Enough? A Study on Diffusion Times in Score-based Generative Models abs: https://t.co/qFEZBDrdrq https://t.co/iBlNs4iNE2 | How Much is Enough? A Study on Diffusion Times in Score-based Generative Models | 60 |
| 101 | 1,536,491,133,513,130,000 | Meta Optimal Transport abs: https://t.co/UKdYXKA8Vd github: https://t.co/xb9FVcim7g Meta OT models surpass the sta… https://t.co/OlfwZIC52r | Meta Optimal Transport | 67 |
| 102 | 1,535,656,084,488,192,000 | Neural Prompt Search abs: https://t.co/wZTUHIcqdv github: https://t.co/vnYEMBrKzt view existing parameter-efficien… https://t.co/pLvxNt84gV | Neural Prompt Search | 174 |
| 103 | 1,535,521,674,233,319,400 | Deep Surrogate Assisted Generation of Environments abs: https://t.co/1RYhxJ71tt project page:… https://t.co/5MuAOKIePA | Deep Surrogate Assisted Generation of Environments | 58 |
| 104 | 1,535,521,046,257,975,300 | Deep Hierarchical Planning from Pixels abs: https://t.co/xXBDevsRnK project page: https://t.co/LoNsGVecaR https://t.co/K7RKIq2hBT | Deep Hierarchical Planning from Pixels | 101 |
| 105 | 1,535,506,620,624,642,000 | VN-Transformer: Rotation-Equivariant Attention for Vector Neurons abs: https://t.co/OkS58YpYq8 https://t.co/ailLjhzsqa | VN-Transformer: Rotation-Equivariant Attention for Vector Neurons | 144 |
| 106 | 1,535,469,100,436,271,000 | Factuality Enhanced Language Models for Open-Ended Text Generation abs: https://t.co/YX83NnfpMU factual-nucleus sa… https://t.co/suFGgO8Ajv | Factuality Enhanced Language Models for Open-Ended Text Generation | 31 |
| 107 | 1,535,449,832,332,177,400 | Unveiling Transformers with LEGO: a synthetic reasoning task abs: https://t.co/FCnAD9AjMY https://t.co/LsUblvE3Ig | Unveiling Transformers with LEGO: a synthetic reasoning task | 77 |
| 108 | 1,535,392,356,068,892,700 | BigVGAN: A Universal Neural Vocoder with Large-Scale Training abs: https://t.co/4NRS1WBePa project page:… https://t.co/rpuKyOEGMH | BigVGAN: A Universal Neural Vocoder with Large-Scale Training | 170 |
| 109 | 1,535,069,067,052,195,800 | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models abs:… https://t.co/v2aIh9B5H2 | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models | 158 |
| 110 | 1,535,067,850,435,600,400 | Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer abs: https://t.co/0s94Tbwh3q propose i… https://t.co/lQZWEHXeRI | Draft-and-Revise: Effective Image Generation with Contextual RQ-Transformer | 52 |
| 111 | 1,535,066,703,075,352,600 | VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution abs:… https://t.co/UKXo53aomf | VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution | 146 |
| 112 | 1,535,061,799,975,919,600 | Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem abs:… https://t.co/fUyM4hz22a | Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem | 48 |
| 113 | 1,535,026,713,100,537,900 | Sparse Fusion Mixture-of-Experts are Domain Generalizable Learners abs: https://t.co/koYO5SuiDQ github:… https://t.co/1xMmVzboCC | Sparse Fusion Mixture-of-Experts are Domain Generalizable Learners | 70 |
| 114 | 1,534,712,305,790,894,000 | STable: Table Generation Framework for Encoder-Decoder Models abs: https://t.co/P8GcsztVFp https://t.co/lJnhODKXyn | STable: Table Generation Framework for Encoder-Decoder Models | 32 |
| 115 | 1,534,702,470,202,630,100 | Neural Diffusion Processes abs: https://t.co/do2pFgpRWY empirically show that NDPs are able to capture functional… https://t.co/Fx5BFrA9qQ | Neural Diffusion Processes | 229 |
| 116 | 1,534,701,793,183,252,500 | Patch-based Object-centric Transformers for Efficient Video Generation abs: https://t.co/oeAa0hiBqZ project page:… https://t.co/qCoaulnDfS | Patch-based Object-centric Transformers for Efficient Video Generation | 30 |
| 117 | 1,534,700,653,628,764,200 | Accelerating Score-based Generative Models for High-Resolution Image Synthesis abs: https://t.co/rC90ydANVJ project… https://t.co/5reyDDPyBN | Accelerating Score-based Generative Models for High-Resolution Image Synthesis | 69 |
| 118 | 1,534,476,660,355,043,300 | On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning abs: https://t.co/1gEuTB7Sf1 multi-task pre… https://t.co/zx8QDoxq2l | On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning | 39 |
| 119 | 1,534,465,882,512,146,400 | Few-Shot Learning by Dimensionality Reduction in Gradient Space abs: https://t.co/IMwlsW0r5V introduce SubGD, a no… https://t.co/YltxH8mUtF | Few-Shot Learning by Dimensionality Reduction in Gradient Space | 204 |
| 120 | 1,534,376,291,453,083,600 | DETR++: Taming Your Multi-Scale Detection Transformer abs: https://t.co/kOQ5V4vC3C DETR++, a new architecture that… https://t.co/i7qtSX9eA3 | DETR++: Taming Your Multi-Scale Detection Transformer | 85 |
| 121 | 1,534,347,375,128,547,300 | Intra-agent speech permits zero-shot task acquisition abs: https://t.co/2yVGA91kSA with ~ 150 additional image cap… https://t.co/DtBczvw7lh | Intra-agent speech permits zero-shot task acquisition | 60 |
| 122 | 1,534,343,347,334,176,800 | Universal Speech Enhancement with Score-based Diffusion abs: https://t.co/jv1rQ14Do4 project page:… https://t.co/UMEE3irGWN | Universal Speech Enhancement with Score-based Diffusion | 125 |
| 123 | 1,534,341,405,920,870,400 | Generating Long Videos of Dynamic Scenes abs: https://t.co/SjMCJub1RO project page: https://t.co/c97Jcf3lcC presen… https://t.co/jgcfMwGMo6 | Generating Long Videos of Dynamic Scenes | 336 |
| 124 | 1,533,997,063,951,765,500 | Zero-Shot Voice Conditioning for Denoising Diffusion TTS Models abs: https://t.co/iTfFppABzr method requires a sho… https://t.co/GALvAsiQ0J | Zero-Shot Voice Conditioning for Denoising Diffusion TTS Models | 89 |
| 125 | 1,533,996,337,557,020,700 | Drawing out of Distribution with Neuro-Symbolic Generative Models abs: https://t.co/PcRRRLIVyV DooD trained on MNI… https://t.co/h28KgM3m3k | Drawing out of Distribution with Neuro-Symbolic Generative Models | 39 |
| 126 | 1,533,993,050,627,776,500 | Separable Self-attention for Mobile Vision Transformers abs: https://t.co/Xj1aZMucFe With ~ 3M parameters, MobileV… https://t.co/LTag2ck7Ew | Separable Self-attention for Mobile Vision Transformers | 89 |
| 127 | 1,533,989,659,017,199,600 | Extreme Compression for Pre-trained Transformers Made Simple and Efficient abs: https://t.co/7epbwDmV31 https://t.co/n9nppcTgGJ | Extreme Compression for Pre-trained Transformers Made Simple and Efficient | 84 |
| 128 | 1,533,988,146,102,288,400 | On the duality between contrastive and non-contrastive self-supervised learning abs: https://t.co/O2GdHjqiTz https://t.co/nUibodNE9M | On the duality between contrastive and non-contrastive self-supervised learning | 83 |
| 129 | 1,533,982,101,653,098,500 | ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers abs:… https://t.co/tQuBWS3uaH | ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers | 25 |
| 130 | 1,533,980,842,867,015,700 | Torsional Diffusion for Molecular Conformer Generation abs: https://t.co/VfhEdlJLd7 github: https://t.co/DYpXh7NbKe https://t.co/khz3yO5FFZ | Torsional Diffusion for Molecular Conformer Generation | 24 |
| 131 | 1,533,980,437,114,232,800 | Blended Latent Diffusion abs: https://t.co/5K8QQnlQfz project page: https://t.co/ztlJtR4Sio present an accelerated… https://t.co/qzrdUJc4i9 | Blended Latent Diffusion | 55 |
| 132 | 1,533,979,552,761,913,300 | Diffusion-GAN: Training GANs with Diffusion abs: https://t.co/rxRpORfP5U DiffusionGAN can provide stable and data-… https://t.co/ScQTvm3XaA | Diffusion-GAN: Training GANs with Diffusion | 237 |
| 133 | 1,533,676,404,063,232,000 | Beyond Tabula Rasa: Reincarnating Reinforcement Learning abs: https://t.co/r8TcfqPyIs https://t.co/qSO5K11vYB | Beyond Tabula Rasa: Reincarnating Reinforcement Learning | 34 |
| 134 | 1,533,649,732,345,778,200 | Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information abs:… https://t.co/3mGwmSsO6M | Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information | 17 |
| 135 | 1,533,634,419,986,153,500 | Positive Unlabeled Contrastive Learning abs: https://t.co/LC33ii48Q6 https://t.co/eWLoasRamS | Positive Unlabeled Contrastive Learning | 67 |
| 136 | 1,533,633,258,545,610,800 | Reinforcement Learning with Neural Radiance Fields abs: https://t.co/8ESw75I2N9 project page:… https://t.co/DQrpZ5dyrb | Reinforcement Learning with Neural Radiance Fields | 131 |
| 137 | 1,533,619,945,996,697,600 | Compositional Visual Generation with Composable Diffusion Models abs: https://t.co/FEKYaDOlwf project page:… https://t.co/qvaTyuj3un | Compositional Visual Generation with Composable Diffusion Models | 122 |
| 138 | 1,533,611,409,069,711,400 | Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules abs:… https://t.co/rQTNT4yfcB | Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules | 40 |
| 139 | 1,532,729,442,321,170,400 | Deep Learning on Implicit Neural Datasets abs: https://t.co/nPGleDBRSq introduce the INR-Net, the first general fr… https://t.co/i1xT7bLhSN | Deep Learning on Implicit Neural Datasets | 81 |
| 140 | 1,532,726,423,697,465,300 | SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners abs: https://t.co/SIR2ufE89J github:… https://t.co/tZoNFvtDFQ | SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners | 178 |
| 141 | 1,532,558,380,119,752,700 | DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks abs:… https://t.co/dHBUdpmqm9 | DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks | 31 |
| 142 | 1,532,554,016,072,376,300 | Cascaded Video Generation for Videos In-the-Wild abs: https://t.co/wDkiRCEWXN https://t.co/GJSVK80qC0 | Cascaded Video Generation for Videos In-the-Wild | 57 |
| 143 | 1,532,547,568,567,300,000 | Finding the Right Recipe for Low Resource Domain Adaptation in Neural Machine Translation abs:… https://t.co/FAEEhSyQpY | Finding the Right Recipe for Low Resource Domain Adaptation in Neural Machine Translation | 12 |
| 144 | 1,532,540,853,071,265,800 | BayesFormer: Transformer with Uncertainty Estimation abs: https://t.co/0OqGgau2D2 introduce BayesFormer, a Transfo… https://t.co/znYfXmUPpJ | BayesFormer: Transformer with Uncertainty Estimation | 188 |
| 145 | 1,532,539,121,662,574,600 | Improving Diffusion Models for Inverse Problems using Manifold Constraints abs: https://t.co/Mt78QlNgZZ https://t.co/d6T7XFkqf1 | Improving Diffusion Models for Inverse Problems using Manifold Constraints | 115 |
| 146 | 1,532,538,212,438,130,700 | DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps abs:… https://t.co/PBn2cEeEle | DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps | 93 |
| 147 | 1,532,201,565,167,267,800 | Hopular: Modern Hopfield Networks for Tabular Data abs: https://t.co/O5h6GYoGZd github: https://t.co/kztLUsmzMY pro… https://t.co/xqlUFoil7K | Hopular: Modern Hopfield Networks for Tabular Data | 485 |
| 148 | 1,532,173,830,428,442,600 | PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs abs: https://t.co/MdoshW31xe gith… https://t.co/d0PWKpIufP | PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs | 121 |
| 149 | 1,532,162,242,715,721,700 | Elucidating the Design Space of Diffusion-Based Generative Models abs: https://t.co/WtodJSq1wa improve efficiency… https://t.co/Fp84kzysBZ | Elucidating the Design Space of Diffusion-Based Generative Models | 257 |
| 150 | 1,531,810,146,178,957,300 | Chefs' Random Tables: Non-Trigonometric Random Features abs: https://t.co/qrt5BnhG2g https://t.co/AuWq9HKnl5 | Chefs' Random Tables: Non-Trigonometric Random Features | 19 |
| 151 | 1,531,802,121,280,147,500 | Few-Shot Diffusion Models abs: https://t.co/Oz75eOx0Ue At test time, the model is able to generate samples from pr… https://t.co/qw3Wdivfks | Few-Shot Diffusion Models | 114 |
| 152 | 1,531,798,720,550,953,000 | SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections abs: https://t.co/eviBoaJ1Zw… https://t.co/XsdD2CSafR | SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections | 148 |
| 153 | 1,531,484,127,177,937,000 | Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning abs:… https://t.co/yafGze7shH | Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning | 36 |
| 154 | 1,531,466,054,492,364,800 | Dataset Condensation via Efficient Synthetic-Data Parameterization abs: https://t.co/IA66WHQQCH github:… https://t.co/PuBEVyx5EK | Dataset Condensation via Efficient Synthetic-Data Parameterization | 110 |
| 155 | 1,531,465,172,262,568,000 | Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors abs: https://t.co/25EYR1yE1A pro… https://t.co/qdqxXZtyYx | Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors | 56 |
| 156 | 1,531,460,153,152,786,400 | Teaching Models to Express Their Uncertainty in Words abs: https://t.co/rKcZNhBLt5 GPT-3 model can learn to expres… https://t.co/Z3YCzXqaMX | Teaching Models to Express Their Uncertainty in Words | 163 |
| 157 | 1,531,454,478,968,406,000 | Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning abs:… https://t.co/U47eMKEmf3 | Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning | 36 |
| 158 | 1,531,451,492,120,535,000 | Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers abs:… https://t.co/Ar0fNxMRi9 | Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers | 28 |
| 159 | 1,531,445,364,217,237,500 | Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models abs: https://t.co/myWID3paI2 https://t.co/S0WUP71wz8 | Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models | 66 |
| 160 | 1,531,444,059,780,309,000 | Neural Volumetric Object Selection abs: https://t.co/ZLiJ5iBZzQ project page: https://t.co/YGsNO14XK7 https://t.co/4twrRcyExx | Neural Volumetric Object Selection | 97 |
| 161 | 1,531,442,002,814,025,700 | Multi-Game Decision Transformers abs: https://t.co/5JtgTx3B49 project page: https://t.co/rKk7h7wLga a single trans… https://t.co/zcJXA5tDhR | Multi-Game Decision Transformers | 105 |
| 162 | 1,531,440,090,161,025,000 | Diffusion-LM Improves Controllable Text Generation abs: https://t.co/YYVX2fuWrM Diffusion-LM iteratively denoises… https://t.co/1pJ5djHV9T | Diffusion-LM Improves Controllable Text Generation | 145 |
| 163 | 1,531,176,037,400,338,400 | MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control abs: https://t.co/HpRvGT2UDz project… https://t.co/6noxiVtz85 | MyoSuite -- A contact-rich simulation suite for musculoskeletal motor control | 47 |
| 164 | 1,531,174,102,572,191,700 | Neural Basis Models for Interpretability abs: https://t.co/u0G7oK87X4 https://t.co/ML7UCNPDkP | Neural Basis Models for Interpretability | 55 |
| 165 | 1,531,173,694,214,656,000 | Scalable Interpretability via Polynomials abs: https://t.co/EKZDra09oM https://t.co/XyIoQHWftG | Scalable Interpretability via Polynomials | 32 |
| 166 | 1,531,173,081,393,336,300 | Sharpness-Aware Training for Free abs: https://t.co/R6SSrWAjL2 https://t.co/alHDGt3zQo | Sharpness-Aware Training for Free | 155 |
| 167 | 1,531,165,352,037,691,400 | Global Normalization for Streaming Speech Recognition in a Modular Framework abs: https://t.co/OfIb7wiVkx demonstr… https://t.co/0iVBVXVBBs | Global Normalization for Streaming Speech Recognition in a Modular Framework | 21 |
| 168 | 1,531,104,909,927,628,800 | Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions abs: https://t.co/gVXiOx5Df3 https://t.co/eufEJbHHRr | Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions | 47 |
| 169 | 1,531,100,741,166,833,700 | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness abs: https://t.co/3aHeecihur an IO-awa… https://t.co/GoJsOKYEgt | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | 233 |
| 170 | 1,531,098,962,932,945,000 | Contrastive Siamese Network for Semi-supervised Speech Recognition abs: https://t.co/SL374ByjZO experiments show t… https://t.co/efVonWBQC5 | Contrastive Siamese Network for Semi-supervised Speech Recognition | 71 |
| 171 | 1,531,096,569,365,282,800 | X-ViT: High Performance Linear Vision Transformer without Softmax abs: https://t.co/A6HZ2vXKDB https://t.co/kArY0Tm4VE | X-ViT: High Performance Linear Vision Transformer without Softmax | 120 |
| 172 | 1,531,093,245,308,059,600 | Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval transformer… https://t.co/OSLGlyUNqb | Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval | 12 |
| 173 | 1,531,092,289,090,736,000 | Quark: Controllable Text Generation with Reinforced Unlearning abs: https://t.co/OmS9AqhC7d introduce Quantized Re… https://t.co/M4DHSUpwF3 | Quark: Controllable Text Generation with Reinforced Unlearning | 144 |
| 174 | 1,531,091,654,567,919,600 | Training and Inference on Any-Order Autoregressive Models the Right Way abs: https://t.co/G8DNeKtoJK leads to impr… https://t.co/JjXafy7iAu | Training and Inference on Any-Order Autoregressive Models the Right Way | 22 |
| 175 | 1,531,090,584,231,891,000 | Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation abs:… https://t.co/binMlc2scV | Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation | 52 |
| 176 | 1,531,089,687,263,293,400 | Maximum Likelihood Training of Implicit Nonlinear Diffusion Models abs: https://t.co/U2YtYUURqH https://t.co/lw7hcspT7o | Maximum Likelihood Training of Implicit Nonlinear Diffusion Models | 110 |
| 177 | 1,531,088,458,839,740,400 | Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters a… https://t.co/e1H5ZyvcQg | Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters | 20 |
| 178 | 1,531,086,920,461,308,000 | Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures abs:… https://t.co/7DWwix1kP1 | Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures | 81 |
| 179 | 1,531,017,163,284,394,000 | CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers github: https://t.co/1JuOHU7puc https://t.co/Wilcq2Xxb9 | CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers | 1,498 |
| 180 | 1,530,278,551,676,657,700 | Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality abs: https://t.co/swtjYLryr5 https://t.co/Ny4wTtkaAI | Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality | 31 |
| 181 | 1,530,029,153,101,168,600 | Towards Learning Universal Hyperparameter Optimizers with Transformers abs: https://t.co/yON7zKZCRy extensive expe… https://t.co/UWv7nrCmhF | Towards Learning Universal Hyperparameter Optimizers with Transformers | 129 |
| 182 | 1,530,028,097,692,647,400 | BiT: Robustly Binarized Multi-distilled Transformer abs: https://t.co/buQ40Vo9ee https://t.co/Q8iyC2Auql | BiT: Robustly Binarized Multi-distilled Transformer | 37 |
| 183 | 1,530,018,008,667,660,300 | Evaluating Multimodal Interactive Agents abs: https://t.co/CtrOihrZBZ https://t.co/sThFVydSUZ | Evaluating Multimodal Interactive Agents | 23 |
| 184 | 1,530,013,711,645,253,600 | Matryoshka Representations for Adaptive Deployment abs: https://t.co/KkqN7sxmnN flexibility within the learned Mat… https://t.co/RYra48uEKN | Matryoshka Representations for Adaptive Deployment | 69 |
| 185 | 1,530,010,193,836,245,000 | Green Hierarchical Vision Transformer for Masked Image Modeling abs: https://t.co/r4Y9LfE4HC github:… https://t.co/o7ZihujhkM | Green Hierarchical Vision Transformer for Masked Image Modeling | 26 |
| 186 | 1,529,673,576,835,698,700 | Inception Transformer abs: https://t.co/EoPDBOafSS iFormer-S hits the top-1 accuracy of 83.4% on ImageNet-1K, much… https://t.co/24J3SnTBdm | Inception Transformer | 117 |
| 187 | 1,529,640,184,081,535,000 | FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech abs: https://t.co/IABvUreqHv https://t.co/iUUzNPaPFp | FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech | 30 |
| 188 | 1,529,637,573,462,831,000 | Autoformalization with Large Language Models abs: https://t.co/SoGYXkMGhV methodology results in a new state-of-th… https://t.co/pTxpC00QFC | Autoformalization with Large Language Models | 24 |
| 189 | 1,529,630,110,885,851,100 | AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models abs: https://t.co/aD0daO7HEa By… https://t.co/NW3DbOJdwH | AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models | 64 |
| 190 | 1,529,625,016,471,634,000 | An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems abs:… https://t.co/gks4xeDd22 | An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems | 10 |
| 191 | 1,529,341,790,335,246,300 | Policy Compliance Detection via Expression Tree Inference abs: https://t.co/Ic7Wm852Qz https://t.co/4RtEnug1RD | Policy Compliance Detection via Expression Tree Inference | 8 |
| 192 | 1,529,309,686,318,653,400 | History Compression via Language Models in Reinforcement Learning abs: https://t.co/N1smkJUAW9 https://t.co/4v1an4CkTU | History Compression via Language Models in Reinforcement Learning | 85 |
| 193 | 1,529,303,237,572,034,600 | On the Role of Bidirectionality in Language Model Pre-Training abs: https://t.co/fG2SbUhB1W propose a new framewor… https://t.co/Gc40i0zyeV | On the Role of Bidirectionality in Language Model Pre-Training | 26 |
| 194 | 1,529,301,315,221,917,700 | Large Language Models are Zero-Shot Reasoners abs: https://t.co/GgdLms77wF LLMs are decent zero-shot reasoners by… https://t.co/PTH6QpdSo2 | Large Language Models are Zero-Shot Reasoners | 85 |
| 195 | 1,529,278,657,856,000,000 | Naive Few-Shot Learning: Sequence Consistency Evaluation abs: https://t.co/ySAzuujz2O https://t.co/aVVLHJdBUC | Naive Few-Shot Learning: Sequence Consistency Evaluation | 19 |
| 196 | 1,529,075,001,256,824,800 | All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass abs:… https://t.co/fcPGWaFEk5 | All Birds with One Stone: Multi-task Text Classification for Efficient Inference with One Forward Pass | 12 |
| 197 | 1,529,071,850,860,454,000 | StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models abs:… https://t.co/MDT1Bxw9by | StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models | 20 |
| 198 | 1,528,909,940,324,192,300 | Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods abs:… https://t.co/B65LGrnCLg | Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods | 38 |
| 199 | 1,528,907,841,335,066,600 | Flexible Diffusion Modeling of Long Videos abs: https://t.co/Cx1BUqA7zM demonstrate improved video modeling over p… https://t.co/Y15RoaMAFg | Flexible Diffusion Modeling of Long Videos | 84 |
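Each `tweet_text` cell follows the same pattern: the paper title, then labelled t.co shortlinks ("abs:", "github:", "project page:"), then an optional truncated summary ending in "…" plus a trailing media link. A minimal sketch of pulling the labelled links out of a row; the helper name and regex are my own, and links whose label was cut off by the 140-character truncation (e.g. "project page:…") are necessarily missed because the URL itself was elided:

```python
import re

# Matches "abs: https://t.co/XXXX", "github: ...", "project page: ...".
# Labels ending in the truncation ellipsis carry no URL and are skipped.
LINK_RE = re.compile(r"(abs|github|project page): (https://t\.co/\w+)")

def extract_links(tweet_text: str) -> dict[str, str]:
    """Map each labelled shortlink in a tweet to its t.co URL."""
    return dict(LINK_RE.findall(tweet_text))

# Row 101 from the table above.
text = ("Meta Optimal Transport abs: https://t.co/UKdYXKA8Vd "
        "github: https://t.co/xb9FVcim7g Meta OT models surpass the sta… "
        "https://t.co/OlfwZIC52r")
print(extract_links(text))
# {'abs': 'https://t.co/UKdYXKA8Vd', 'github': 'https://t.co/xb9FVcim7g'}
```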