Delete Tweets-from-AK.csv
Tweets-from-AK.csv (DELETED): +0 -419
@@ -1,419 +0,0 @@
|
|
1 |
-
,Unnamed: 0,id,tweet_text,paper_reference,total_likes,uuid
|
2 |
-
0,0,1541238366599012355,"HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction
|
3 |
-
abs: https://t.co/fSVklQH3H4
|
4 |
-
gi… https://t.co/38aK0bOtoh",HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction,77,b04965e6-a9bb-591f-8f8a-1adcb2c8dc39
|
5 |
-
1,1,1541226747533922308,"PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction
|
6 |
-
abs: https://t.co/yXdFTqRWF3
|
7 |
-
|
8 |
-
dataset… https://t.co/ZDNMPI2NVR",PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction,51,4b166dbe-d99d-5091-abdd-95b83330ed3a
|
9 |
-
2,2,1541224802425442305,"RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.
|
10 |
-
|
11 |
-
Dual-key multimodal backdoors for…","RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.",0,98123fde-012f-5ff3-8b50-881449dac91a
|
12 |
-
3,3,1541222358735790082,"Text-Driven Stylization of Video Objects
|
13 |
-
abs: https://t.co/dQps6x2n65
|
14 |
-
project page: https://t.co/Ycsjsus0y6
|
15 |
-
|
16 |
-
TL;DR:… https://t.co/l9v0AGY7Ks",Text-Driven Stylization of Video Objects,70,6ed955c6-506a-5343-9be4-2c0afae02eef
|
17 |
-
4,4,1541219433259175937,"Megapixel Image Generation with Step-Unrolled Denoising Autoencoders
|
18 |
-
abs: https://t.co/6fX9PseXBT
|
19 |
-
|
20 |
-
obtain FID score… https://t.co/HPodJ8xzPx",Megapixel Image Generation with Step-Unrolled Denoising Autoencoders,94,c8691da2-158a-5ed6-8537-0e6f140801f2
|
21 |
-
5,5,1541125242118078465,"RT @dasayan05: #CVPR2022 summary:
|
22 |
-
1. Boiling temperature at NOLA
|
23 |
-
2. Reading NeRF posters
|
24 |
-
3. Searching for @ak92501
|
25 |
-
4. Reading more NeRF po…",RT @dasayan05: #CVPR2022 summary:,0,a6c4fc8f-6950-51de-a9ae-2c519c465071
|
26 |
-
6,6,1541101988125048838,"The @CVPR event on @huggingface is ending on June 30th (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid","The @CVPR event on @huggingface is ending on June 30th (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid",37,a9f96b98-dd44-5216-ab0d-dbfc6b262edf
|
27 |
-
7,7,1540790151273517056,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,63,e99caacd-6c45-5906-bd9f-b79e62f25963
|
28 |
-
8,8,1540760803900530691,"RT @zhengzhongtu: Already back in Austin now!
|
29 |
-
|
30 |
-
Finally caught up with @ak92501 the Arxiv robot on the last day of CVPR~ https://t.co/9hFLvt…",RT @zhengzhongtu: Already back in Austin now!,0,e4d80b30-151e-51b5-9f4f-18a3b82718e6
|
31 |
-
9,9,1540531617609011200,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,0,0159d6c7-973f-5e7a-a9a0-d195d0ea6fe2
|
32 |
-
10,10,1540526641264353283,"RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022
|
33 |
-
|
34 |
-
Aftern…",RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022,0,7fef88f7-411d-5669-b42d-bf5fc7f9b58b
|
35 |
-
11,11,1540518390904807424,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,0,52524d6e-10dc-5261-aa36-8b2efcbaa5f0
|
36 |
-
12,12,1540514393653395457,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,0,91c274f2-9a0d-5ce6-ac3d-7529f452df21
|
37 |
-
13,13,1540498719245746178,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022 today to talk about where today's advanced visio…,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022 today to talk about where today's advanced visio…,0,0ff1e264-520d-543a-87dd-181a491e667e
|
38 |
-
14,14,1540496892018188289,"WALT: Watch And Learn 2D amodal representation from Time-lapse imagery
|
39 |
-
paper: https://t.co/8GHgNUGdi6
|
40 |
-
project page:… https://t.co/5YSt8ydEu0",WALT: Watch And Learn 2D amodal representation from Time-lapse imagery,64,23986425-d3a5-5e13-8bab-299745777a8d
|
41 |
-
15,15,1540492673039187969,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,0,c15b38c9-9a3e-543c-a703-dd742f25b4d5
|
42 |
-
16,16,1540451974797316096,@mervenoyann Happy birthday! 🎈🎉 🎁,@mervenoyann Happy birthday! 🎈🎉 🎁,4,db680066-c83d-5ed7-89a4-1d79466ea62d
|
43 |
-
17,17,1540439841007083520,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,0,cadb7952-2bba-5609-88d4-8e47ec4e7920
|
44 |
-
18,18,1540422370153881601,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,0,35140057-a2a4-5adb-a500-46f8ed8b66a9
|
45 |
-
19,19,1540407710038065152,"RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer","RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer",0,66e549b7-01e2-5d07-98d5-430f74d8d3b2
|
46 |
-
20,20,1540383826630909953,"RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E
|
47 |
-
✅Massive benchmark: 1000s of tasks in…",RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E,0,292c8e99-2378-55aa-83d8-350e0ac3f1cc
|
48 |
-
21,21,1540367998745206784,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,0,0e3b230a-0509-55d8-96a0-9875f387a2be
|
49 |
-
22,22,1540353957289234432,will be here until 11,will be here until 11,8,4c507660-a83b-55c0-9b2b-83eccb07723d
|
50 |
-
23,23,1540350076274593794,"RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV","RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV",0,a1b9b633-da11-58be-b1a9-5cfa2848f186
|
51 |
-
24,24,1540349713953595393,"RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and have a chat!…","RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and have a chat!…",0,c2708a8b-120a-56f5-a30d-990048af87cc
|
52 |
-
25,25,1540349465265061889,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,0,e7263999-68b6-5a23-b530-af25b7efd632
|
53 |
-
26,26,1540347498606346245,"RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ","RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ",0,ce1ae2d5-3454-5952-97ff-36ff935bcfe9
|
54 |
-
27,27,1540336050488446977,"Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ","Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ",29,33677b87-bc8d-5ff6-9a25-fe60225e4bf0
|
55 |
-
28,28,1540320889753030661,"RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…","RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…",0,ed2305ae-e8f9-5387-b860-3d80ae6c02f7
|
56 |
-
29,29,1540320736971300871,"RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…","RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…",0,604ed872-ae2d-5d91-8e3e-572f3a3aaaa5
|
57 |
-
30,30,1540306609594826753,"RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…","RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…",0,8f8173d9-2f8d-5636-a693-24d9f79ba651
|
58 |
-
31,31,1540197464543838208,"""New York City, oil painting"" - CogView2
|
59 |
-
demo: https://t.co/KgWC23knx7 https://t.co/28oJbeDKsm","""New York City, oil painting"" - CogView2",18,36eb8d4d-b854-51f1-9fdf-3735964225d5
|
60 |
-
32,32,1540187756164423687,"RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…","RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…",0,3493b6ca-f84b-56a9-97cc-c0bd1c46c4c0
|
61 |
-
33,33,1540184734390706176,"Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision
|
62 |
-
abs: https://t.co/NO2vzfdYdS https://t.co/WoN73BzgeQ",Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision,65,f413ea13-fcd9-5b44-9d22-1fa1f7b063a5
|
63 |
-
34,34,1540180978425073664,"BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation
|
64 |
-
abs: https://t.co/qnxAmRVP71
|
65 |
-
|
66 |
-
present Bla… https://t.co/w4Zi72blos",BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation,81,f468d924-d23b-56c2-b90f-3d1cf4b45337
|
67 |
-
35,35,1540176838017916933,"Offline RL for Natural Language Generation with Implicit Language Q Learning
|
68 |
-
abs: https://t.co/wYTtUgdryZ
|
69 |
-
project p… https://t.co/xS8JCODxwP",Offline RL for Natural Language Generation with Implicit Language Q Learning,40,8828c9d6-ed76-5c09-bf64-ba9e9cd90896
|
70 |
-
36,36,1540173636774002688,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,12,facb7618-55ca-5c30-9cba-fd567b6c0611
|
71 |
-
37,37,1540173392996958209,"GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
|
72 |
-
abs: https://t.co/ayJI8xXVL2
|
73 |
-
|
74 |
-
GODEL outperforms sota pre-t… https://t.co/eUfnl7dszD",GODEL: Large-Scale Pre-Training for Goal-Directed Dialog,40,96f3de0e-6412-5434-b406-67ef3352ab85
|
75 |
-
38,38,1540166602364174338,RT @victormustar: « A lion man is typing in the office » CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,RT @victormustar: « A lion man is typing in the office » CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,0,9ebacb89-40ab-52b3-93a2-9054611d8f55
|
76 |
-
39,39,1540166227162812421,"Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤
|
77 |
-
abs:… https://t.co/acdjzVMMU3",Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤,35,681046ff-9129-5ade-b11c-769864e02184
|
78 |
-
40,40,1540161095930880001,"MaskViT: Masked Visual Pre-Training for Video Prediction
|
79 |
-
abs: https://t.co/uhMEB6ashb
|
80 |
-
project page:… https://t.co/gbnxrCxUrc",MaskViT: Masked Visual Pre-Training for Video Prediction,144,c13d0b5d-1ca3-57b6-a23f-8586bca44928
|
81 |
-
41,41,1540156319923060736,"The ArtBench Dataset: Benchmarking Generative Models with Artworks
|
82 |
-
abs: https://t.co/Zzq0A2i5ob
|
83 |
-
github:… https://t.co/SfQlvTLrk3",The ArtBench Dataset: Benchmarking Generative Models with Artworks,177,7c411b5e-9d3f-50b5-9c28-62096e41c4ed
|
84 |
-
42,42,1540151560939921409,"RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…","RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…",0,f825aafe-6696-5121-b263-6b2c408b7f43
|
85 |
-
43,43,1540138378498383873,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,45,f2b4caea-61c3-5bed-8ce7-d8b9d16e129e
|
86 |
-
44,44,1540136841155907585,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,90,3593855a-6557-5736-8cab-172c6987f949
|
87 |
-
45,45,1540134704057294848,"EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
|
88 |
-
abs: https://t.co/qzJtFOGuNK
|
89 |
-
project page:… https://t.co/drOF3x8DLH",EventNeRF: Neural Radiance Fields from a Single Colour Event Camera,160,36392431-d554-5385-b876-7bc6e1cb26b3
|
90 |
-
46,46,1540114214756536320,RT @elliottszwu: .@ak92501 is real! Come to hall C!,RT @elliottszwu: .@ak92501 is real! Come to hall C!,0,7e645493-0898-5501-8155-e8578b4f5224
|
91 |
-
47,47,1540109042584064001,"@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs","@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs",17,14dc6a81-0491-5683-baaf-7582a61c5798
|
92 |
-
48,48,1540101501456187395,"RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…","RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…",0,883e0a9c-e3b3-5f9c-8073-2913cbbb99ec
|
93 |
-
49,49,1540098318029692928,"RT @gclue_akira: CogView2のWebデモ
|
94 |
-
https://t.co/OVu6EE6YQD
|
95 |
-
|
96 |
-
https://t.co/kUtxCq4EqV",RT @gclue_akira: CogView2のWebデモ,0,44b1d52f-cb65-59c3-a00a-a9f9a6b92247
|
97 |
-
50,50,1540078626745589761,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,0,f428abba-f3c6-50d1-ace0-b15fe2b42d8a
|
98 |
-
51,51,1540073247177408516,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,0,6768f5a2-051e-54ea-ad74-832847c693cf
|
99 |
-
52,52,1540043756216492035,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,21,c8f4ed2e-397e-5644-a4ee-8b41a90a6de2
|
100 |
-
53,53,1540035360860045312,https://t.co/qTaxrKwP7R,https://t.co/qTaxrKwP7R,10,62d770e4-1e11-5556-8a43-d5fec06b97fa
|
101 |
-
54,54,1540033980128436226,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,119,a5a4ee27-4652-5f7d-9e4d-652a965d288e
|
102 |
-
55,55,1540032783023849473,RT @elliottszwu: How can we find @ak92501 @CVPR?,RT @elliottszwu: How can we find @ak92501 @CVPR?,0,97f29d4d-a3a3-5aa4-a883-87b799d604d2
|
103 |
-
56,56,1540028949920710657,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,0,dc9e84f6-774e-53fc-833f-a683841deef6
|
104 |
-
57,57,1539985557937340418,"RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey
|
105 |
-
And this one for Winoground:
|
106 |
-
https://t.co/rU3Gf2ZOwz
|
107 |
-
Loading FLA…",RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey,0,0b1b11cd-c728-515b-967a-d0df61b8ed7c
|
108 |
-
58,58,1539982089113767936,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR! We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR! We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,0,0463a67b-30ae-56d5-b7c8-65c01be01d7f
|
109 |
-
59,59,1539961370971541505,"RT @yangtao_wang: #CVPR2022 23/6
|
110 |
-
Welcome to our poster ""TokenCut: Self-Supervised Transformers for Unsupervised Object Discovery Using Norm…",RT @yangtao_wang: #CVPR2022 23/6,0,083fc808-0906-5c2e-abd2-0d4c1603a9e2
|
111 |
-
60,60,1539820424376320000,"Multimodal Colored Point Cloud to Image Alignment
|
112 |
-
paper: https://t.co/YD9bnByUYx
|
113 |
-
colab: https://t.co/vwGwlrWZhg https://t.co/zE5z2gnzdb",Multimodal Colored Point Cloud to Image Alignment,35,43ee290a-b01b-5a38-a99b-1afb62a7193a
|
114 |
-
61,61,1539811680359796739,"TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning
|
115 |
-
abs:… https://t.co/UArbr7zhRE",TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning,83,18c2f394-3c7e-519c-9232-7a4470c7868f
|
116 |
-
62,62,1539809856168890368,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,8,08c02838-0ff8-5ad7-9ac9-66bac02971eb
|
117 |
-
63,63,1539809066033487872,"BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing
|
118 |
-
abs: https://t.co/mi3tdM4hjU https://t.co/C5sOd9hwUk",BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing,13,14888a48-5f16-5cb9-9a0d-9c0563de121e
|
119 |
-
64,64,1539806514466144257,"Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals
|
120 |
-
abs: https://t.co/oFcSQlgsX8
|
121 |
-
project page:… https://t.co/xfYJtJWIpQ",Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals,239,fb0e3f8f-605c-5f9f-be82-41ed661e8bbf
|
122 |
-
65,65,1539794210190155778,"Jointist: Joint Learning for Multi-instrument Transcription and Its Applications
|
123 |
-
abs: https://t.co/xeuPUBcr01
|
124 |
-
proje… https://t.co/QmyCioKviJ",Jointist: Joint Learning for Multi-instrument Transcription and Its Applications,17,235f1dc8-1eea-5918-b2e1-eac7572df017
|
125 |
-
66,66,1539782468504412160,"Towards Robust Blind Face Restoration with Codebook Lookup Transformer
|
126 |
-
abs: https://t.co/NNhj6EhwIP
|
127 |
-
project page:… https://t.co/3lkIhDyh6P",Towards Robust Blind Face Restoration with Codebook Lookup Transformer,96,c8771d1b-14d9-550a-87a1-cf0a56a02a84
|
128 |
-
67,67,1539780412297330689,"GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
|
129 |
-
abs: https://t.co/pKS5mgoDkG
|
130 |
-
|
131 |
-
GEMv2 supports 40 docum… https://t.co/qMitHzTlO0",GEMv2: Multilingual NLG Benchmarking in a Single Line of Code,17,22276c6f-08f9-5944-bcd2-81e6bf89fd72
|
132 |
-
68,68,1539779702306603008,"Questions Are All You Need to Train a Dense Passage Retriever
|
133 |
-
abs: https://t.co/qdSmN5pe7a
|
134 |
-
|
135 |
-
a novel approach to tra… https://t.co/NKgAHWaLsh",Questions Are All You Need to Train a Dense Passage Retriever,57,a192fb22-6740-5029-948d-cc2bad74db31
|
136 |
-
69,69,1539777865688010753,"reStructured Pre-training
|
137 |
-
abs: https://t.co/mYm7qbt59N https://t.co/O5T3tSY4PL",reStructured Pre-training,31,85105cfe-bec4-5f56-971f-98d24a8063fd
|
138 |
-
70,70,1539756137070878721,"RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff","RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff",0,d62149c8-71b3-5c0e-a3c8-acd70b6675a2
|
139 |
-
71,71,1539755999065772034,"RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL","RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL",0,df08d167-1bd8-55a3-b20f-6763dd47aa7f
|
140 |
-
72,72,1539751376263192577,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface to take AI based computer vision to the next level. I will continue t…,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface to take AI based computer vision to the next level. I will continue t…,0,c5a21254-71c0-557d-84d0-e075d9bee976
|
141 |
-
73,73,1539749459915149313,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,23,73702180-6d2d-5a7f-9983-6ec8607fa214
|
142 |
-
74,74,1539736626087206913,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,0,61491fee-d69e-5dae-b94c-180f4ddd68d7
|
143 |
-
75,75,1539728223638097920,"RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!
|
144 |
-
If you would like to meet for coffee or if…",RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!,0,dee4df3e-dd90-5855-9b7d-b2280889fd38
|
145 |
-
76,76,1539711494522392577,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted multitask #benchmark #AI agents explore procedurally generated #3D w…,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted multitask #benchmark #AI agents explore procedurally generated #3D w…,0,5cb3eaa8-5b22-5842-8d28-a2831327fb27
|
146 |
-
77,77,1539705700347219975,@RealGilbaz @DatagenTech Sure will visit,@RealGilbaz @DatagenTech Sure will visit,1,87a82bba-b18b-58dd-b2bd-4619c102dedb
|
147 |
-
78,78,1539689285137432578,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,0,363e1fb0-2c44-591a-a856-6cfcb9866cf0
|
148 |
-
79,79,1539672920456298498,"Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
|
149 |
-
paper: https://t.co/NKkTeHttLd
|
150 |
-
project page… https://t.co/CcKxsWPmjR",Scaling Autoregressive Models for Content-Rich Text-to-Image Generation,134,10d8e7e4-e125-58c9-9551-52c3ee0d6024
|
151 |
-
80,80,1539672517903847425,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,0,beb25716-f8dd-5ac2-a35a-18a7e0994d85
|
152 |
-
81,81,1539665352258625537,"Check out Talking Face Generation with Multilingual TTS at @CVPR and try out the live @Gradio Demo
|
153 |
-
|
154 |
-
online… https://t.co/mCj9bIMB5u",Check out Talking Face Generation with Multilingual TTS at @CVPR and try out the live @Gradio Demo,18,67792738-0179-57d6-9454-98e5a81453f2
|
155 |
-
82,82,1539638155111956480,"RT @abidlabs: Slides for my @CVPR 2022 talk:
|
156 |
-
|
157 |
-
""Papers and Code Aren't Enough: Why Demos are Critical to ML Research and How to Build Them""…",RT @abidlabs: Slides for my @CVPR 2022 talk: ,0,0acf2a93-9318-5bd8-8359-4984b002720d
|
158 |
-
83,83,1539622527890333697,"RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022
|
159 |
-
|
160 |
-
Demo link for automatic sign language recognition: https://t.co…",RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022 ,0,a3b84142-7bfd-53d9-9880-bb744115a507
|
161 |
-
84,84,1539614419541528578,"RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!
|
162 |
-
|
163 |
-
Location: Halls B2-C
|
164 |
-
Poster number: 183b
|
165 |
-
Time: 6/22 (We…",RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!,0,01c70902-bebc-5728-a2aa-ffd0fc494aaa
|
166 |
-
85,85,1539612340718637057,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,0,ae342967-157a-5a54-bec1-83c7f47d8fab
|
167 |
-
86,86,1539480179151712256,"Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding
|
168 |
-
abs: https://t.co/Bq3GUQywPV https://t.co/iLTaoXm0yC",Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding,65,e679e61e-a009-574f-bea2-02690256db1a
|
169 |
-
87,87,1539473926778236934,"RT @zhanghe920312: Thanks @ak92501 for sharing.
|
170 |
-
Our poster session happening on Thursday Morning at @CVPR. Feel free to check out our…",RT @zhanghe920312: Thanks @ak92501 for sharing. ,0,19b8791a-82e2-54d5-bdb0-1483885e9e6d
|
171 |
-
88,88,1539473873816719360,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,0,e5bbc5bf-1a5a-5727-bfcc-1775ef1f9c27
|
172 |
-
89,89,1539460213211910150,"EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine
|
173 |
-
abs: https://t.co/F4XkHLRxPi
|
174 |
-
github:… https://t.co/JiwSuMdkZH",EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine,32,4317be1f-25d3-5778-9ddf-9f2c7ed44956
|
175 |
-
90,90,1539459120667021312,"EpiGRAF: Rethinking training of 3D GANs
|
176 |
-
abs: https://t.co/RcY2vQr0NH
|
177 |
-
project page: https://t.co/kuXPKA00bZ https://t.co/CVCsseAS21",EpiGRAF: Rethinking training of 3D GANs,142,acd00791-fd31-55f3-a25d-777153b901c8
|
178 |
-
91,91,1539453554578055168,"Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors
|
179 |
-
abs:… https://t.co/noluSxtqzu",Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors,71,2c6a2f46-907d-598d-bc9d-71d8d326865f
|
180 |
-
92,92,1539451329034297349,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,0,3ca74f2f-f913-5d54-b6ae-be56bdb405f0
|
181 |
-
93,93,1539442569733718016,"GAN2X: Non-Lambertian Inverse Rendering of Image GANs
|
182 |
-
abs: https://t.co/ziYgRUK2Sr
|
183 |
-
project page:… https://t.co/rLK6Qp9by0",GAN2X: Non-Lambertian Inverse Rendering of Image GANs,182,5eee0865-3eef-5e7f-8e4d-555ca08738e1
|
184 |
-
94,94,1539435374103220226,"Global Context Vision Transformers
|
185 |
-
abs: https://t.co/d6go0yv7fu
|
186 |
-
github: https://t.co/rUYFs09ReC
|
187 |
-
|
188 |
-
On ImageNet-1K dat… https://t.co/HJnw5wclQV",Global Context Vision Transformers,87,02939ee0-163d-59b9-a896-e0b63cfee862
|
189 |
-
95,95,1539434284213227528,"M&M Mix: A Multimodal Multiview Transformer Ensemble
|
190 |
-
abs: https://t.co/jQEZR3WCY4 https://t.co/8LZDCG0ePF",M&M Mix: A Multimodal Multiview Transformer Ensemble,39,f4d9fbfc-24ec-547d-b66a-28079c596a60
|
191 |
-
96,96,1539431648374099968,"CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation
|
192 |
-
abs: https://t.co/yy78osDplK
|
193 |
-
|
194 |
-
CMTDeepLab improv… https://t.co/zCvYqSLp3G",CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation,26,b5033382-9d21-53ce-b630-b4e3a1146d51
|
195 |
-
97,97,1539425826177007616,"nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models
|
196 |
-
abs:… https://t.co/13fwAaXIn3",nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models,84,3686fa2d-1003-5130-a1a9-6f0a4e63df4d
|
197 |
-
98,98,1539423930984931329,"Temporally Consistent Semantic Video Editing
|
198 |
-
abs: https://t.co/sg1dRt2xkw
|
199 |
-
project page: https://t.co/PyZKnxUQko https://t.co/1Az9nG5ccH",Temporally Consistent Semantic Video Editing,93,6e34a7cd-9970-58c0-a006-084ef6d2947a
|
200 |
-
99,99,1539421251076247554,"(Certified!!) Adversarial Robustness for Free!
|
201 |
-
abs: https://t.co/NTU6lioyII
|
202 |
-
|
203 |
-
show how to achieve sota certified adv… https://t.co/2VW1CDARya",(Certified!!) Adversarial Robustness for Free!,39,3c64bce0-4f00-54bc-a9fb-a2402a364b87
|
204 |
-
100,100,1539419136467554305,"DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection
|
205 |
-
abs: https://t.co/rXx4npbY5G https://t.co/QBHP494eSn",DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection,143,df2ad546-a4f0-51ac-b38c-88216742e553
|
206 |
-
101,101,1539379827966459904,"paper: https://t.co/cm0NWvfHVO
|
207 |
-
poster: https://t.co/cyLKrP84wD https://t.co/8iW8nEYdUi",paper: https://t.co/cm0NWvfHVO,4,73abe0ce-d97c-5d7c-bee5-b8e6e6fe6a17
|
208 |
-
102,102,1539379340324048898,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,17,77d0745d-c3a1-5248-81de-8cdc02bed84a
103,103,1539355589159026689,"GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction
abs:… https://t.co/ztR7AnAQHl","GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction",32,f2cd1fff-21e4-581f-a7fa-850997197b7f
104,104,1539322541482860545,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,0,98de7712-1e55-55f7-a774-3b00ec9edbae
105,105,1539304673211031554,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,10,dddd9632-2f62-529d-aa08-fcb37c695039
106,106,1539302809404952577,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,0,d9bf4821-ec3d-5359-962f-d5ff4b0c48cb
107,107,1539291146710654976,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,41,ee3d5236-fccc-5ca1-bc10-ed5cb324dde0
108,108,1539260231062065154,"RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…","RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…",0,3dc5f44e-8666-58db-bc76-a455210e8891
109,109,1539256590737580034,"RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)

“ensembling the weights of the zero-sho…","RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)",0,06111f84-55d6-56de-8b7d-698385f2a1e4
110,110,1539246900020449281,"RT @humphrey_shi: Last Minute UPDATE:
Our Invited Talk about ML Demos @ Hall B1 will be 1-1:30PM instead due to a scheduling conflict. @CVP…",RT @humphrey_shi: Last Minute UPDATE:,0,daa63f9d-c771-52fe-9a75-12b643d6c0f1
111,111,1539113571388366849,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,23,3428207e-bf16-539d-bee7-481226dfcb16
112,112,1539111398437011460,"RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp","RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp",0,9e6e8030-1b13-50e4-9f68-f84759a4769d
113,113,1539093616886534146,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,0,6666f368-0968-5880-b34a-cf8d3de58b35
114,114,1539076449788997632,"A Closer Look at Smoothness in Domain Adversarial Training
abs: https://t.co/GgKE9695vj
github:… https://t.co/33MX6TZhjt",A Closer Look at Smoothness in Domain Adversarial Training,96,32e1b97d-7003-598d-92e7-0ceb44416cc9
115,115,1539066735965380608,"a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/ieg4Xlfnu0",a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022,121,619a5b3a-5ec8-5ff7-b0b1-5070a7c17694
116,116,1539058707643961345,"Holiday at arXiv, underway 🔧, I can sleep today
status: https://t.co/JEXsWfngyb https://t.co/rVve6lNLfB","Holiday at arXiv, underway 🔧, I can sleep today",58,7636baec-e2ba-510c-90e1-8992a8ec0f7e
117,117,1538970393859526656,"Day 2 at @CVPR 2022

Join the CVPR event on @huggingface to build @Gradio demos for CVPR papers here:… https://t.co/ekTNYuUkCQ",Day 2 at @CVPR 2022,47,71cc5dc6-a767-5334-951f-ef6ae8936459
118,118,1538765711169966080,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,2,164696f9-9de4-57df-b939-8dd7e23d8d8f
119,119,1538764856991547393,https://t.co/UjLVdJKjDt,https://t.co/UjLVdJKjDt,12,608a3c70-9b91-59ad-82d2-30ebcd75dbc2
120,120,1538757119796715520,https://t.co/ghtd6xHQ7c,https://t.co/ghtd6xHQ7c,4,94456986-ee75-50f3-8434-c724d8e33743
121,121,1538756244298661889,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,5,46c64717-ad5a-5bf5-8273-e5588aa0ee1b
122,122,1538754677466087424,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,66,37813542-0dca-5a8a-b2a2-b69c2d45583f
123,123,1538734927604338688,"a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/dF6Y2s4H5d",a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022,20,7f34517b-4494-54ec-9087-49910dc3dc10
124,124,1538731091175038977,"RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…","RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…",0,5919125a-fe24-541c-959d-393aae3cf8b0
125,125,1538719219818409994,"TAVA: Template-free Animatable Volumetric Actors
abs: https://t.co/lJ2C6e1VpG
project page: https://t.co/lpUgeGI7CX https://t.co/D62WYod4by",TAVA: Template-free Animatable Volumetric Actors,71,7099c1e0-efdc-54e4-93b7-b6ecd3612deb
126,126,1538716898015293440,"RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…","RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…",0,15f175fc-9690-5d13-a2ea-114d8a2e74bd
127,127,1538710356444471296,"Fast Finite Width Neural Tangent Kernel
abs: https://t.co/iY1lFoYMjA https://t.co/hWzzcCd5OZ",Fast Finite Width Neural Tangent Kernel,22,6c61704f-9bf3-5251-ba56-032e2561d8ee
128,128,1538706936211951617,"What do navigation agents learn about their environment?
abs: https://t.co/eXelV0REgZ
github:… https://t.co/TGSzEQ1v1c",What do navigation agents learn about their environment?,36,f14111ed-16d8-5461-80f2-1d57b198248b
129,129,1538700561800912896,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,0,6820d696-1207-5f5d-b2a3-1e300a8e6129
130,130,1538698653493338114,"Bootstrapped Transformer for Offline Reinforcement Learning
abs: https://t.co/YiEY3uiTgL https://t.co/yle4hPgMmf",Bootstrapped Transformer for Offline Reinforcement Learning,136,7110587b-e023-511f-81a8-648b5ac25565
131,131,1538695806311665665,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,0,6d16bb82-189e-56df-a05d-907690ec8db9
132,132,1538695457550921728,"Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning
abs:… https://t.co/uLQLmf4l3M",Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning,41,3743a65a-6869-528e-a7d9-aa502935b7f6
133,133,1538694061531533313,"Evolution through Large Models
abs: https://t.co/2B0yygTiWa

pursues the insight that large language models trained… https://t.co/tfvNrHbTYG",Evolution through Large Models,97,99e0d2cb-a972-51c9-87f9-cbb71166eebd
134,134,1538692524830769152,"MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
abs: https://t.co/etfGL1xnum
project pa… https://t.co/Fv1aLuEJSV",MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge,262,81c216e1-2508-52ab-b2ee-38b30cc35f92
135,135,1538689482534309890,"EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
abs:… https://t.co/GfAeLP6iAD","EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes",105,a75ccaac-5bc1-5384-92cb-59207d99a4ef
136,136,1538687423722541056,"Lossy Compression with Gaussian Diffusion
abs: https://t.co/tw5YiZAN3B

implement a proof of concept and find that… https://t.co/4nvLjhIX4e",Lossy Compression with Gaussian Diffusion,102,733e8fc5-b7c8-56aa-b8c6-9d06d7fe7135
137,137,1538686489491648514,"NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates
abs: https://t.co/4S8sBXq6Ko

a diffu… https://t.co/xd3eQ0ApQJ",NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates,85,476fbdc8-a847-5b17-9532-698ccb88b9a7
138,138,1538685207385079809,"Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
abs: https://t.co/ydrEo1SVh9
project page:… https://t.co/4LgYqVNenf","Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks",177,0fc2d79b-a331-5b1b-80e3-9805ba6c1358
139,139,1538685023708127238,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,0,d8b85fe3-e2aa-52f9-80fa-10ecf946fead
140,140,1538642504609832960,"RT @gclue_akira: I shared #CogView2 colab working.

https://t.co/jwFBWFCSos

@ak92501",RT @gclue_akira: I shared #CogView2 colab working.,0,af4f9a79-868f-5d64-bec6-6af60009446f
141,141,1538593847764197386,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,222,f8444d03-2a4d-5283-ac07-cd61aaa8128c
142,142,1538558197459460096,"RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…","RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…",0,892805cc-c5d0-571f-8841-3ba335035073
143,143,1538347108671049728,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,0,51b27e05-a0a6-597e-a4c0-831b34c198ea
144,144,1538331269863510017,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,326,57116c35-e49b-50b9-b36f-df793733eb60
145,145,1538211869017653249,"RT @keunwoochoi: https://t.co/wEZo4Sxn0Q

AI Song Contest 2022 - the finalists 🔥🔥🔥",RT @keunwoochoi: https://t.co/wEZo4Sxn0Q,0,78a31a7e-ddca-50bc-a5ba-53192c4428a1
146,146,1538200789243596800,"RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!","RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!",0,0802b100-6787-5170-86f8-e2ca30ad1e34
147,147,1538200381863481344,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,21,ba3b9707-7ffa-5376-840f-302816944395
148,148,1538026339747307521,"RT @weichiuma: Can you match images with little or no overlaps?

Humans can🧠but most existing methods fail😰

Our #CVPR2022 paper shoots c…",RT @weichiuma: Can you match images with little or no overlaps?,0,8b1e6e51-e2ab-5715-8476-fb783e9e53ce
149,149,1538019922667659265,"RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…","RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…",0,12cc27f2-c3d6-57cb-a1f4-3206d6b6870c
150,150,1538006265363738625,"iBoot: Image-bootstrapped Self-Supervised Video Representation Learning
abs: https://t.co/dkZUd4QC81 https://t.co/pJFpxd7ckU",iBoot: Image-bootstrapped Self-Supervised Video Representation Learning,72,64ea809e-f2be-5c3c-9c83-4127d5554ba6
151,151,1538002482088931331,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,38,efbdd8e7-4dea-5bd4-a670-465dbc927e3d
152,152,1538000649933115393,"Neural Scene Representation for Locomotion on Structured Terrain
abs: https://t.co/68xY622f4w https://t.co/W3wTYp31f6",Neural Scene Representation for Locomotion on Structured Terrain,82,8fe160ce-5952-5549-abfc-21af16476fe9
153,153,1537998346350043137,"Disentangling visual and written concepts in CLIP
abs: https://t.co/VsyuDV4HNI
project page: https://t.co/2hTQnhR2o1 https://t.co/LbWpnpTTHT",Disentangling visual and written concepts in CLIP,93,8c273076-e27a-517a-8c7e-9d958b3b607c
154,154,1537992206987845638,dalle2 - a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,dalle2 - a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,221,0265a65e-e20e-56a1-b7f0-3d600942d861
155,155,1537989713256099848,"a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/P8xkisydJQ",a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022,10,c789699b-87c7-5c04-a3ec-dc1a7b315b6e
156,156,1537972518438379520,"RT @imisra_: Why train separate models for visual modalities?

Following up on our Omnivore work: We train a single model on images, videos…",RT @imisra_: Why train separate models for visual modalities?,0,f38d4bf7-85a2-5ce6-98b3-2af28e39b14c
157,157,1537924151389736961,"Programmatic Concept Learning for Human Motion Description and Synthesis
paper: https://t.co/Qemk23gUHX
project pag… https://t.co/ImHeYQC5vj",Programmatic Concept Learning for Human Motion Description and Synthesis,59,3ab3a352-202f-531f-9ee4-dd82a1861caa
158,158,1537825873931472898,"RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!

Our goal is to make it machine lea…",RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!,0,94a6c40a-4f4c-5539-9cef-47801cda2203
159,159,1537818135444828160,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,17,b39ba39f-f784-59ec-904c-d0acd4747835
160,160,1537817765213519873,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,0,7d0dc440-ffe9-510e-9e9e-d200b238bedd
161,161,1537796080238305280,"RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…","RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…",0,40d16c23-e81a-5cf5-abd6-7c1fe3ddb68d
162,162,1537790206946181120,"RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃
@harlan_hong
You may also tr…",RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃,0,4ab2441b-ef66-517e-8ca2-a46a69d16c76
163,163,1537778006302793728,"RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx

Omnivore meets MAE! OmniMAE is a single mod…",RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx ,0,de1c2056-b2b1-5c81-a2e1-b9522b386fc4
164,164,1537777742590230528,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,0,dd31b3e1-cea1-5166-8984-5170e44bf712
165,165,1537775332316614656,"RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓
A really cool HF Space with good explanations and some nice e…",RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓,0,9c391a25-c414-5a8e-b739-27e6cd6abc8e
166,166,1537688195206418433,"Virtual Correspondence: Humans as a Cue for Extreme-View Geometry
abs: https://t.co/hAx8x4rnIO
project page:… https://t.co/z19LsVo2qX",Virtual Correspondence: Humans as a Cue for Extreme-View Geometry,195,87cdfca3-b1ea-5bc6-b639-fd77c6f4583e
167,167,1537685927505678337,"Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
abs:… https://t.co/n02uqo0cb2",Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning,167,2773c03c-d793-5e44-bd80-90ec7205b49f
168,168,1537650506683801601,"GateHUB: Gated History Unit with Background Suppression for Online Action Detection
abs: https://t.co/3DqwFesEZi https://t.co/t1Pcz09AUR",GateHUB: Gated History Unit with Background Suppression for Online Action Detection,24,8ffec35f-18c3-524d-90fd-f2fb36ce4206
169,169,1537640654968324099,"Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing
abs: https://t.co/9tpvhXuaRw
project page:… https://t.co/XxpZg5PGke",Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing,72,6ed8fbe8-21a7-5535-a9ea-70bd9583501f
170,170,1537639309888610305,"Realistic One-shot Mesh-based Head Avatars
abs: https://t.co/aETolvwoiH
project page: https://t.co/rTTLG67oPy https://t.co/C8aUN3VS37",Realistic One-shot Mesh-based Head Avatars,562,861c16ed-ecf4-5f49-ac5b-7d1565adf2a8
171,171,1537637590274277376,"MoDi: Unconditional Motion Synthesis from Diverse Data
abs: https://t.co/YBV9jSUemo https://t.co/o1uvG18RSk",MoDi: Unconditional Motion Synthesis from Diverse Data,70,c4249cfa-d77f-51df-9227-5d795af232ae
172,172,1537630146244517889,"OmniMAE: Single Model Masked Pretraining on Images and Videos
abs: https://t.co/j9a3imUEJ6

single pretrained model… https://t.co/OiR2pY5emm",OmniMAE: Single Model Masked Pretraining on Images and Videos,144,b83bfcfa-6ab9-5c4b-b3c8-aa10bff96c03
173,173,1537626871319470080,"FWD: Real-time Novel View Synthesis with Forward Warping and Depth
abs: https://t.co/hbo0vxrlDd

propose a generali… https://t.co/etVCe4HPI9",FWD: Real-time Novel View Synthesis with Forward Warping and Depth,37,e71bebff-36c2-5a41-aeb7-be101a7510bf
174,174,1537622879386456064,"SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
abs: https://t.co/0MkpFJiUzM

using spars… https://t.co/x1Hvgf13qE",SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos,54,97325783-f5c4-5cce-b965-909537c630ee
175,175,1537621348339572736,"BYOL-Explore: Exploration by Bootstrapped Prediction
abs: https://t.co/xXQtolzjlP

BYOL-Explore achieves superhuman… https://t.co/uZvAbVd1Bb",BYOL-Explore: Exploration by Bootstrapped Prediction,79,9ad49f10-88ca-5bfd-af26-6e3cb9ba7773
176,176,1537618457365303296,"Know your audience: specializing grounded language models with the game of Dixit
abs: https://t.co/T8d5ir8LDQ https://t.co/zSk5oR2F9D",Know your audience: specializing grounded language models with the game of Dixit,39,702a8439-d0bc-5ca9-9bc0-08cc09d8fd01
177,177,1537616695749230592,"Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
abs: https://t.co/JVutpfCfIq

pro… https://t.co/8nvWHPxXYm",Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models,11,2cd01de2-7379-5a43-936e-5459f584f381
178,178,1537615160172589056,"GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples
abs: https://t.co/XRTTRbABXG… https://t.co/2ewOJYVqTC",GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples,360,42f48fd5-a756-5720-92b7-332df0af3d0a
179,179,1537613030225240066,"Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation
abs: https://t.co/RBbFId9jPF

On dance-to… https://t.co/IrXLM4bPcQ",Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation,68,408fecf3-f842-59cd-bc30-2181b96dd749
180,180,1537593193407053826,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,16,4b911258-886d-5620-a5d2-e6f2c2bddedf
181,181,1537586831310602240,"RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !

Demo: https://t.co/qyqmbg4eIC

and do join the CVPR 2022 organization…",RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !,0,e8617523-a281-5341-ac9f-20c21515451d
182,182,1537568313504681986,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,0,940cfd96-0205-5baf-a776-da89f5825910
183,183,1537546603262787584,"RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!","RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!",0,e02907e6-4bd3-52fd-a89e-1ef70b9ef685
184,184,1537539330901782528,"RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…","RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…",0,3064a89d-80de-51b5-9823-70b0c5be51fc
185,185,1537489260126904322,"a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/lEM23bNPL0",a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022,26,f3eec8cc-9927-571a-b89c-7ab945eb5a47
186,186,1537478059154079751,"RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…","RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…",0,18525cbf-b7b0-5e7b-9791-1258a44f53fa
187,187,1537477283409272836,"RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg

Openreview: https://t.co/wV…",RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg ,0,8a542466-526f-5a5a-ae3a-1b80c10e7808
188,188,1537460438463651842,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,0,ac48c094-6490-5d07-92d0-052eb46d8521
189,189,1537460412937019396,"RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp

(and thanks @ak92501 for sharing)",RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp,0,830b5ead-a469-50fa-b405-de9f123a5c0c
190,190,1537446428259233792,"RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!

This research is a collaborative effort w/ @…","RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!",0,4bb9aa10-fe7a-56ee-8261-30150a38688c
191,191,1537324192978419713,"AVATAR: Unconstrained Audiovisual Speech Recognition
abs: https://t.co/ZXdnRJppOk https://t.co/OTcPmcNM9E",AVATAR: Unconstrained Audiovisual Speech Recognition,30,3b235495-c750-573d-85b5-cbabd1967057
192,192,1537323042380124160,"VCT: A Video Compression Transformer
abs: https://t.co/llH1L1ooKa

presented an elegantly simple transformer-based… https://t.co/ErovCWVDg3",VCT: A Video Compression Transformer,68,0c12b360-fe72-5fb6-992b-e514ff8982ea
193,193,1537319908920393729,"It’s Time for Artistic Correspondence in Music and Video
abs: https://t.co/BKyP9MErgw
project page:… https://t.co/NYbUVqPTFo",It’s Time for Artistic Correspondence in Music and Video,58,a901406d-a86b-5698-9ef0-62f01eeb2356
194,194,1537316756880072705,"PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos
abs:… https://t.co/TpuSD4Ybkd",PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos,763,57e608f4-1699-5eb4-bf14-585406aebb20
195,195,1537315443932815360,"LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection
abs:… https://t.co/tRCXSz3kxE",LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection,33,1b510513-ebf3-5fb3-94ca-55cdb64a1300
196,196,1537314480056672258,"Contrastive Learning as Goal-Conditioned Reinforcement Learning
abs: https://t.co/6dv7PNn0qq
project page:… https://t.co/vRSdekL9If",Contrastive Learning as Goal-Conditioned Reinforcement Learning,77,f52db2d6-17c6-575e-8ebf-27cf2ac49fb5
197,197,1537312940956712961,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,0,8ed9e034-fc54-5c0b-8252-6777e6c14b51
198,198,1537301855595790337,"LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling
abs:https://t.co/RGQy8Vv1LG https://t.co/G1bdakn5Pr",LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling,42,e43c7e16-cc3e-5902-b5bd-492e84c6ea74
199,199,1537288570880368640,"Masked Siamese ConvNets
abs: https://t.co/YMG1O1ZZ5N https://t.co/LCVqVvFNfR",Masked Siamese ConvNets,83,6e6b2363-2b37-5b02-8345-e7ca8b8f3971