Upload Tweets-from-AK.csv
Tweets-from-AK.csv +419 -0
ADDED
@@ -0,0 +1,419 @@
1 |
+
,id,tweet_text,paper_reference,total_likes
|
2 |
+
0,1541238366599012355,"HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction
|
3 |
+
abs: https://t.co/fSVklQH3H4
|
4 |
+
gi… https://t.co/38aK0bOtoh",HM3D-ABO: A Photo-realistic Dataset for Object-centric Multi-view 3D Reconstruction,77
|
5 |
+
1,1541226747533922308,"PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction
|
6 |
+
abs: https://t.co/yXdFTqRWF3
|
7 |
+
|
8 |
+
dataset… https://t.co/ZDNMPI2NVR",PSP: Million-level Protein Sequence Dataset for Protein Structure Prediction,51
|
9 |
+
2,1541224802425442305,"RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.
|
10 |
+
|
11 |
+
Dual-key multimodal backdoors for…","RT @aerinykim: Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022.",0
|
12 |
+
3,1541222358735790082,"Text-Driven Stylization of Video Objects
|
13 |
+
abs: https://t.co/dQps6x2n65
|
14 |
+
project page: https://t.co/Ycsjsus0y6
|
15 |
+
|
16 |
+
TL;DR:… https://t.co/l9v0AGY7Ks",Text-Driven Stylization of Video Objects,70
|
17 |
+
4,1541219433259175937,"Megapixel Image Generation with Step-Unrolled Denoising Autoencoders
|
18 |
+
abs: https://t.co/6fX9PseXBT
|
19 |
+
|
20 |
+
obtain FID score… https://t.co/HPodJ8xzPx",Megapixel Image Generation with Step-Unrolled Denoising Autoencoders,94
|
21 |
+
5,1541125242118078465,"RT @dasayan05: #CVPR2022 summary:
|
22 |
+
1. Boiling temperature at NOLA
|
23 |
+
2. Reading NeRF posters
|
24 |
+
3. Searching for @ak92501
|
25 |
+
4. Reading more NeRF po…",RT @dasayan05: #CVPR2022 summary:,0
|
26 |
+
6,1541101988125048838,"The @CVPR event on @huggingface is ending on June 30th (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid","The @CVPR event on @huggingface is ending on June 30th (AOE Time Zone), 118 team members and 25 @Gradio demos have… https://t.co/dS8GWnOvid",37
|
27 |
+
7,1540790151273517056,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,github: https://t.co/nw8tY5xWN3 https://t.co/VmCO75ftIQ,63
|
28 |
+
8,1540760803900530691,"RT @zhengzhongtu: Already back in Austin now!
|
29 |
+
|
30 |
+
Finally caught up with @ak92501 the Arxiv robot on the last day of CVPR~ https://t.co/9hFLvt…",RT @zhengzhongtu: Already back in Austin now!,0
|
31 |
+
9,1540531617609011200,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,RT @saihv: @sitzikbs @CSProfKGD @ak92501 #6 seems interesting.. https://t.co/7PIEQOraSz,0
|
32 |
+
10,1540526641264353283,"RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022
|
33 |
+
|
34 |
+
Aftern…",RT @MatthewWalmer: Today we’re presenting our poster for “Dual Key Multimodal Backdoors for Visual Question Answering” at #cvpr2022,0
|
35 |
+
11,1540518390904807424,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,RT @sitzikbs: @WaltonStevenj @ak92501 @CSProfKGD Wow! Same thing happned to me! https://t.co/SndtMVGdkd,0
|
36 |
+
12,1540514393653395457,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,RT @WaltonStevenj: @CSProfKGD @ak92501 I tried to get a picture but this happened https://t.co/LFqqqwfwGl,0
|
37 |
+
13,1540498719245746178,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022 today to talk about where today's advanced visio…,RT @apsdehal: Come stop by at our WinoGround poster during afternoon session at #CVPR2022 today to talk about where today's advanced visio…,0
|
38 |
+
14,1540496892018188289,"WALT: Watch And Learn 2D amodal representation from Time-lapse imagery
|
39 |
+
paper: https://t.co/8GHgNUGdi6
|
40 |
+
project page:… https://t.co/5YSt8ydEu0",WALT: Watch And Learn 2D amodal representation from Time-lapse imagery,64
|
41 |
+
15,1540492673039187969,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,RT @CSProfKGD: FUN FACT: @ak92501 spends 4-5 hours each night sifting through the arXiv feed and posting.,0
|
42 |
+
16,1540451974797316096,@mervenoyann Happy birthday! 🎈🎉 🎁,@mervenoyann Happy birthday! 🎈🎉 🎁,4
|
43 |
+
17,1540439841007083520,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,RT @shahrukh_athar: Really excited to present RigNeRF today at Poster Session 4.2 of #CVPR2022 (@CVPR)!! Drop by PosterID 161b to discuss R…,0
|
44 |
+
18,1540422370153881601,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,RT @jw2yang4ai: We are at 46b to present our UniCL/mini-Florence! https://t.co/U5nvHiO4bR,0
|
45 |
+
19,1540407710038065152,"RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer","RT @sitzikbs: OK, @ak92501 just stopped by our poster. Officially, not a bot. https://t.co/tSljzLLjer",0
|
46 |
+
20,1540383826630909953,"RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E
|
47 |
+
✅Massive benchmark: 1000s of tasks in…",RT @DrJimFan: Introducing MineDojo for building open-ended generalist agents! https://t.co/PmOCWz6T5E,0
|
48 |
+
21,1540367998745206784,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,RT @YiwuZhong: #CVPR2022 We just released a web demo for RegionCLIP (https://t.co/rGvI5L9tXN). The pre-trained RegionCLIP demonstrates inte…,0
|
49 |
+
22,1540353957289234432,will be here until 11,will be here until 11,8
|
50 |
+
23,1540350076274593794,"RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV","RT @karol_majek: @PDillis @ak92501 Real, 3 instances, they balance the load https://t.co/eMMYwmS3xV",0
|
51 |
+
24,1540349713953595393,"RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and have a chat!…","RT @Jerry_XU_Jiarui: 🥰This morning 10:00AM-12:30PM at #CVPR2022, I will present GroupViT at poster 208a. Please come by and have a chat!…",0
|
52 |
+
25,1540349465265061889,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,RT @CSProfKGD: Got an autograph 🤩 #CVPR2022 https://t.co/897WuqIdM4,0
|
53 |
+
26,1540347498606346245,"RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ","RT @jw2yang4ai: If you are interested, just stop at our RegionCLIP poster detected by our RegionCLIP model. https://t.co/Qnc71nMGuZ",0
|
54 |
+
27,1540336050488446977,"Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ","Sitting at tables on the other side of coffee shop next to door and between cafe, wearing a red shirt https://t.co/EgkMDHNvyQ",29
|
55 |
+
28,1540320889753030661,"RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…","RT @sitzikbs: Are you still at #CVPR2022 ? Come chat with us at the last poster session (4.2). @ChaminHewa and I will be at poster 61b, 14:…",0
|
56 |
+
29,1540320736971300871,"RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…","RT @confusezius: If contrastive learning and language is something that sounds interesting, drop by at this mornings oral (or poster) sessi…",0
|
57 |
+
30,1540306609594826753,"RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…","RT @jw2yang4ai: If you are there, please try our CVPR 2022 work RegionCLIP demo! You can feed any queries to localize the fine-grained obje…",0
|
58 |
+
31,1540197464543838208,"""New York City, oil painting"" - CogView2
|
59 |
+
demo: https://t.co/KgWC23knx7 https://t.co/28oJbeDKsm","""New York City, oil painting"" - CogView2",18
|
60 |
+
32,1540187756164423687,"RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…","RT @Zhao_Running: Our #INTERSPEECH paper introduces Radio2Speech, a #wirelesssensing system that recovers high quality speech via RF signal…",0
|
61 |
+
33,1540184734390706176,"Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision
|
62 |
+
abs: https://t.co/NO2vzfdYdS https://t.co/WoN73BzgeQ",Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision,65
|
63 |
+
34,1540180978425073664,"BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation
|
64 |
+
abs: https://t.co/qnxAmRVP71
|
65 |
+
|
66 |
+
present Bla… https://t.co/w4Zi72blos",BlazePose GHUM Holistic: Real-time 3D Human Landmarks and Pose Estimation,81
|
67 |
+
35,1540176838017916933,"Offline RL for Natural Language Generation with Implicit Language Q Learning
|
68 |
+
abs: https://t.co/wYTtUgdryZ
|
69 |
+
project p… https://t.co/xS8JCODxwP",Offline RL for Natural Language Generation with Implicit Language Q Learning,40
|
70 |
+
36,1540173636774002688,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,github: https://t.co/Nu0jgZ3qKo https://t.co/cnG50SKwpf,12
|
71 |
+
37,1540173392996958209,"GODEL: Large-Scale Pre-Training for Goal-Directed Dialog
|
72 |
+
abs: https://t.co/ayJI8xXVL2
|
73 |
+
|
74 |
+
GODEL outperforms sota pre-t… https://t.co/eUfnl7dszD",GODEL: Large-Scale Pre-Training for Goal-Directed Dialog,40
|
75 |
+
38,1540166602364174338,RT @victormustar: « A lion man is typing in the office » CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,RT @victormustar: « A lion man is typing in the office » CogView2 demo is nice 😅 https://t.co/6ZTomM8NBs https://t.co/4wnutOZASQ,0
|
76 |
+
39,1540166227162812421,"Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤
|
77 |
+
abs:… https://t.co/acdjzVMMU3",Adversarial Multi-Task Learning for Disentangling Timbre and Pitch in Singing Voice Synthesis 🎤🎤,35
|
78 |
+
40,1540161095930880001,"MaskViT: Masked Visual Pre-Training for Video Prediction
|
79 |
+
abs: https://t.co/uhMEB6ashb
|
80 |
+
project page:… https://t.co/gbnxrCxUrc",MaskViT: Masked Visual Pre-Training for Video Prediction,144
|
81 |
+
41,1540156319923060736,"The ArtBench Dataset: Benchmarking Generative Models with Artworks
|
82 |
+
abs: https://t.co/Zzq0A2i5ob
|
83 |
+
github:… https://t.co/SfQlvTLrk3",The ArtBench Dataset: Benchmarking Generative Models with Artworks,177
|
84 |
+
42,1540151560939921409,"RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…","RT @ccloy: We cast blind 😀 restoration as a code prediction task, and exploit global compositions and long-range dependencies of low-qualit…",0
|
85 |
+
43,1540138378498383873,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,a @Gradio Demo for RegionCLIP: Region-based Language-Image Pretraining on @huggingface Spaces for @CVPR 2022 by… https://t.co/XZCASqN208,45
|
86 |
+
44,1540136841155907585,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,I will be near the coffee shop outside Hall C tomorrow if anyone wants to meet up after 9 am at CVPR,90
|
87 |
+
45,1540134704057294848,"EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
|
88 |
+
abs: https://t.co/qzJtFOGuNK
|
89 |
+
project page:… https://t.co/drOF3x8DLH",EventNeRF: Neural Radiance Fields from a Single Colour Event Camera,160
|
90 |
+
46,1540114214756536320,RT @elliottszwu: .@ak92501 is real! Come to hall C!,RT @elliottszwu: .@ak92501 is real! Come to hall C!,0
|
91 |
+
47,1540109042584064001,"@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs","@CSProfKGD @elliottszwu @CVPR thanks, would also be great to meet, sent a dm, also I am at the coffee shop outside… https://t.co/j3i3h6Bbfs",17
|
92 |
+
48,1540101501456187395,"RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…","RT @hyungjin_chung: For those interested diffusion models and inverse problems, come check out our poster on 174a #CVPR2022 ! Joint work wi…",0
|
93 |
+
49,1540098318029692928,"RT @gclue_akira: CogView2のWebデモ
|
94 |
+
https://t.co/OVu6EE6YQD
|
95 |
+
|
96 |
+
https://t.co/kUtxCq4EqV",RT @gclue_akira: CogView2のWebデモ,0
|
97 |
+
50,1540078626745589761,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,RT @cyrilzakka: Was working on something very similar but never got the chance to publish due to finals and graduation. Still a WIP but I'v…,0
|
98 |
+
51,1540073247177408516,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,RT @ducha_aiki: #CVPR2022 https://t.co/6NU0e5LA16,0
|
99 |
+
52,1540043756216492035,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,@elliottszwu @CVPR I will be around in the poster session today in the exhibits hall,21
|
100 |
+
53,1540035360860045312,https://t.co/qTaxrKwP7R,https://t.co/qTaxrKwP7R,10
|
101 |
+
54,1540033980128436226,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,a @Gradio Demo for CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers on… https://t.co/qQF0GG5cxR,119
|
102 |
+
55,1540032783023849473,RT @elliottszwu: How can we find @ak92501 @CVPR?,RT @elliottszwu: How can we find @ak92501 @CVPR?,0
|
103 |
+
56,1540028949920710657,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,RT @jeffclune: Introducing Video PreTraining (VPT): it learns complex behaviors by watching (pretraining on) vast amounts of online videos.…,0
|
104 |
+
57,1539985557937340418,"RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey
|
105 |
+
And this one for Winoground:
|
106 |
+
https://t.co/rU3Gf2ZOwz
|
107 |
+
Loading FLA…",RT @douwekiela: Check out these FLAVA-based demos: https://t.co/VmnTJwIGey,0
|
108 |
+
58,1539982089113767936,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR! We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,RT @lidaiqing: Excited to share BigDatasetGAN @CVPR! We are able to synthesize ImageNet with pixel-wise labels using as few as 5 annotatio…,0
|
109 |
+
59,1539961370971541505,"RT @yangtao_wang: #CVPR2022 23/6
|
110 |
+
Welcome to our poster ""TokenCut: Self-Supervised Transformers for Unsupervised Object Discovery Using Norm…",RT @yangtao_wang: #CVPR2022 23/6,0
|
111 |
+
60,1539820424376320000,"Multimodal Colored Point Cloud to Image Alignment
|
112 |
+
paper: https://t.co/YD9bnByUYx
|
113 |
+
colab: https://t.co/vwGwlrWZhg https://t.co/zE5z2gnzdb",Multimodal Colored Point Cloud to Image Alignment,35
|
114 |
+
61,1539811680359796739,"TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning
|
115 |
+
abs:… https://t.co/UArbr7zhRE",TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning,83
|
116 |
+
62,1539809856168890368,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,proposed system Qin achieves 40 points higher than the average scores made by students and 15 points higher than GP… https://t.co/bAiPTd9WlF,8
|
117 |
+
63,1539809066033487872,"BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing
|
118 |
+
abs: https://t.co/mi3tdM4hjU https://t.co/C5sOd9hwUk",BenchCLAMP: A Benchmark for Evaluating Language Models on Semantic Parsing,13
|
119 |
+
64,1539806514466144257,"Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals
|
120 |
+
abs: https://t.co/oFcSQlgsX8
|
121 |
+
project page:… https://t.co/xfYJtJWIpQ",Radio2Speech: High Quality Speech Recovery from Radio Frequency Signals,239
|
122 |
+
65,1539794210190155778,"Jointist: Joint Learning for Multi-instrument Transcription and Its Applications
|
123 |
+
abs: https://t.co/xeuPUBcr01
|
124 |
+
proje… https://t.co/QmyCioKviJ",Jointist: Joint Learning for Multi-instrument Transcription and Its Applications,17
|
125 |
+
66,1539782468504412160,"Towards Robust Blind Face Restoration with Codebook Lookup Transformer
|
126 |
+
abs: https://t.co/NNhj6EhwIP
|
127 |
+
project page:… https://t.co/3lkIhDyh6P",Towards Robust Blind Face Restoration with Codebook Lookup Transformer,96
|
128 |
+
67,1539780412297330689,"GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
|
129 |
+
abs: https://t.co/pKS5mgoDkG
|
130 |
+
|
131 |
+
GEMv2 supports 40 docum… https://t.co/qMitHzTlO0",GEMv2: Multilingual NLG Benchmarking in a Single Line of Code,17
|
132 |
+
68,1539779702306603008,"Questions Are All You Need to Train a Dense Passage Retriever
|
133 |
+
abs: https://t.co/qdSmN5pe7a
|
134 |
+
|
135 |
+
a novel approach to tra… https://t.co/NKgAHWaLsh",Questions Are All You Need to Train a Dense Passage Retriever,57
|
136 |
+
69,1539777865688010753,"reStructured Pre-training
|
137 |
+
abs: https://t.co/mYm7qbt59N https://t.co/O5T3tSY4PL",reStructured Pre-training,31
|
138 |
+
70,1539756137070878721,"RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff","RT @earthcurated: Gausdal, Norway ✨ https://t.co/tCYoryrbff",0
|
139 |
+
71,1539755999065772034,"RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL","RT @earthcurated: Tuscany, Italy 🇮🇹 https://t.co/tswGswZcJL",0
|
140 |
+
72,1539751376263192577,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface to take AI based computer vision to the next level. I will continue t…,RT @wightmanr: I’m excited to announce that I’ve joined @huggingface to take AI based computer vision to the next level. I will continue t…,0
|
141 |
+
73,1539749459915149313,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,a @Gradio Demo for FLAVA: A Foundation Language And Vision Alignment Model on @huggingface Spaces for @CVPR 2022 by… https://t.co/fxXcV0KZkQ,23
|
142 |
+
74,1539736626087206913,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,RT @imtiazprio: Catch us at the #CVPR2022 Oral Session 3.1.1 at 8:30 am Thursday and Poster Session 10:30 am right after!!,0
|
143 |
+
75,1539728223638097920,"RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!
|
144 |
+
If you would like to meet for coffee or if…",RT @Sa_9810: It was really great to see everyone today at the poster session. Thanks for coming!,0
|
145 |
+
76,1539711494522392577,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted multitask #benchmark #AI agents explore procedurally generated #3D w…,RT @AnimaAnandkumar: Minedojo is largest open-ended language-prompted multitask #benchmark #AI agents explore procedurally generated #3D w…,0
|
146 |
+
77,1539705700347219975,@RealGilbaz @DatagenTech Sure will visit,@RealGilbaz @DatagenTech Sure will visit,1
|
147 |
+
78,1539689285137432578,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,RT @ducha_aiki: #CVPR2022 https://t.co/xRaw8ulZi6,0
|
148 |
+
79,1539672920456298498,"Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
|
149 |
+
paper: https://t.co/NKkTeHttLd
|
150 |
+
project page… https://t.co/CcKxsWPmjR",Scaling Autoregressive Models for Content-Rich Text-to-Image Generation,134
|
151 |
+
80,1539672517903847425,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,RT @victormustar: Looking for inspiration? https://t.co/0pyZ02Xxu6 is full of awesome ML demos 🤩 https://t.co/F3eYSZAC3x,0
|
152 |
+
81,1539665352258625537,"Check out Talking Face Generation with Multilingual TTS at @CVPR and try out the live @Gradio Demo
|
153 |
+
|
154 |
+
online… https://t.co/mCj9bIMB5u",Check out Talking Face Generation with Multilingual TTS at @CVPR and try out the live @Gradio Demo,18
|
155 |
+
82,1539638155111956480,"RT @abidlabs: Slides for my @CVPR 2022 talk:
|
156 |
+
|
157 |
+
""Papers and Code Aren't Enough: Why Demos are Critical to ML Research and How to Build Them""…",RT @abidlabs: Slides for my @CVPR 2022 talk: ,0
|
158 |
+
83,1539622527890333697,"RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022
|
159 |
+
|
160 |
+
Demo link for automatic sign language recognition: https://t.co…",RT @Gradio: 🔥 Exciting to see live *physical* @Gradio demos at #CVPR2022 ,0
|
161 |
+
84,1539614419541528578,"RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!
|
162 |
+
|
163 |
+
Location: Halls B2-C
|
164 |
+
Poster number: 183b
|
165 |
+
Time: 6/22 (We…",RT @zsoltkira: @ak92501 Thanks @ak92501! The poster at #CVPR202 for this is today!,0
|
166 |
+
85,1539612340718637057,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,RT @Jimantha: To all the CVPR-heads out there -- check out @KaiZhang9546's work on inverse rendering in this morning's oral session! Religh…,0
|
167 |
+
86,1539480179151712256,"Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding
|
168 |
+
abs: https://t.co/Bq3GUQywPV https://t.co/iLTaoXm0yC",Intra-Instance VICReg: Bag of Self-Supervised Image Patch Embedding,65
|
169 |
+
87,1539473926778236934,"RT @zhanghe920312: Thanks @ak92501 for sharing.
|
170 |
+
Our poster session happening on Thursday Morning at @CVPR. Feel free to check out our…",RT @zhanghe920312: Thanks @ak92501 for sharing. ,0
|
171 |
+
88,1539473873816719360,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,RT @zengxianyu18: Thanks for sharing our work😀 I will be presenting SketchEdit @CVPR 2022. If you are interested in our work or just want t…,0
|
172 |
+
89,1539460213211910150,"EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine
|
173 |
+
abs: https://t.co/F4XkHLRxPi
|
174 |
+
github:… https://t.co/JiwSuMdkZH",EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine,32
|
175 |
+
90,1539459120667021312,"EpiGRAF: Rethinking training of 3D GANs
|
176 |
+
abs: https://t.co/RcY2vQr0NH
|
177 |
+
project page: https://t.co/kuXPKA00bZ https://t.co/CVCsseAS21",EpiGRAF: Rethinking training of 3D GANs,142
|
178 |
+
91,1539453554578055168,"Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors
|
179 |
+
abs:… https://t.co/noluSxtqzu",Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors,71
|
180 |
+
92,1539451329034297349,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,RT @ahatamiz1: Please check out our new paper which introduces a new vision transformer model dubbed as GC ViT !,0
|
181 |
+
93,1539442569733718016,"GAN2X: Non-Lambertian Inverse Rendering of Image GANs
|
182 |
+
abs: https://t.co/ziYgRUK2Sr
|
183 |
+
project page:… https://t.co/rLK6Qp9by0",GAN2X: Non-Lambertian Inverse Rendering of Image GANs,182
|
184 |
+
94,1539435374103220226,"Global Context Vision Transformers
|
185 |
+
abs: https://t.co/d6go0yv7fu
|
186 |
+
github: https://t.co/rUYFs09ReC
|
187 |
+
|
188 |
+
On ImageNet-1K dat… https://t.co/HJnw5wclQV",Global Context Vision Transformers,87
|
189 |
+
95,1539434284213227528,"M&M Mix: A Multimodal Multiview Transformer Ensemble
|
190 |
+
abs: https://t.co/jQEZR3WCY4 https://t.co/8LZDCG0ePF",M&M Mix: A Multimodal Multiview Transformer Ensemble,39
|
191 |
+
96,1539431648374099968,"CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation
|
192 |
+
abs: https://t.co/yy78osDplK
|
193 |
+
|
194 |
+
CMTDeepLab improv… https://t.co/zCvYqSLp3G",CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation,26
|
195 |
+
97,1539425826177007616,"nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models
|
196 |
+
abs:… https://t.co/13fwAaXIn3",nuQmm: Quantized MatMul for Efficient Inference of Large-Scale Generative Language Models,84
|
197 |
+
98,1539423930984931329,"Temporally Consistent Semantic Video Editing
|
198 |
+
abs: https://t.co/sg1dRt2xkw
|
199 |
+
project page: https://t.co/PyZKnxUQko https://t.co/1Az9nG5ccH",Temporally Consistent Semantic Video Editing,93
|
200 |
+
99,1539421251076247554,"(Certified!!) Adversarial Robustness for Free!
|
201 |
+
abs: https://t.co/NTU6lioyII
|
202 |
+
|
203 |
+
show how to achieve sota certified adv… https://t.co/2VW1CDARya",(Certified!!) Adversarial Robustness for Free!,39
|
204 |
+
100,1539419136467554305,"DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection
|
205 |
+
abs: https://t.co/rXx4npbY5G https://t.co/QBHP494eSn",DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection,143
|
206 |
+
101,1539379827966459904,"paper: https://t.co/cm0NWvfHVO
|
207 |
+
poster: https://t.co/cyLKrP84wD https://t.co/8iW8nEYdUi",paper: https://t.co/cm0NWvfHVO,4
|
208 |
+
102,1539379340324048898,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,a @Gradio Demo for SPOTER + Media Pipe: Combining Efficient and Precise Sign Language Recognition on @huggingface S… https://t.co/wg6qExJtL3,17
|
209 |
+
103,1539355589159026689,"GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction
|
210 |
+
abs:… https://t.co/ztR7AnAQHl","GlideNet: Global, Local and Intrinsic based Dense Embedding NETwork for Multi-category Attributes Prediction",32
|
211 |
+
104,1539322541482860545,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,RT @SaurabhBanga4: @ak92501 @CVPR @Gradio @abidlabs @huggingface https://t.co/9KxGEaHp0J,0
|
212 |
+
105,1539304673211031554,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,Starting in 10 minutes @CVPR https://t.co/tAppaZFKep,10
|
213 |
+
106,1539302809404952577,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,RT @ak92501: Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How to Build The…,0
|
214 |
+
107,1539291146710654976,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,Come see the talk today at @CVPR for Papers and Code Aren’t Enough: Why Demos are Critical to ML Research and How t… https://t.co/rmjCWbTxJH,41
|
215 |
+
108,1539260231062065154,"RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…","RT @mattjr97: I somehow didn’t see this until today. Whomever is at CVPR, swing by the poster tomorrow afternoon, I’d love to answer any qu…",0
|
216 |
+
109,1539256590737580034,"RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)
|
217 |
+
|
218 |
+
“ensembling the weights of the zero-sho…","RT @permutans: Best paper shortlisted at CVPR’22 (U. Washington, OpenAI, Google Brain, Columbia U)",0
|
219 |
+
110,1539246900020449281,"RT @humphrey_shi: Last Minute UPDATE:
|
220 |
+
Our Invited Talk about ML Demos @ Hall B1 will be 1-1:30PM instead due to a scheduling conflict. @CVP…",RT @humphrey_shi: Last Minute UPDATE:,0
|
221 |
+
111,1539113571388366849,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy I… https://t.co/9i8574hPgN,23
|
222 |
+
112,1539111398437011460,"RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp","RT @yan_xg: Code/pretained model is released, please have a try! 😁https://t.co/iAW5MlgDcp",0
|
223 |
+
113,1539093616886534146,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,RT @humphrey_shi: Come join us tmr/Tue 10am - 5pm @CVPR to check out in-person Demos at the Demo Area. (also online 27/7 ones at https://t.…,0
|
224 |
+
114,1539076449788997632,"A Closer Look at Smoothness in Domain Adversarial Training
|
225 |
+
abs: https://t.co/GgKE9695vj
|
226 |
+
github:… https://t.co/33MX6TZhjt",A Closer Look at Smoothness in Domain Adversarial Training,96
|
227 |
+
115,1539066735965380608,"a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022
|
228 |
+
|
229 |
+
demo:… https://t.co/ieg4Xlfnu0",a @Gradio Demo for Thin-Plate Spline Motion Model for Image Animation on @huggingface Spaces for @CVPR 2022,121
|
230 |
+
116,1539058707643961345,"Holiday at arXiv, underway 🔧, I can sleep today
|
231 |
+
status: https://t.co/JEXsWfngyb https://t.co/rVve6lNLfB","Holiday at arXiv, underway 🔧, I can sleep today",58
|
232 |
+
117,1538970393859526656,"Day 2 at @CVPR 2022

Join the CVPR event on @huggingface to build @Gradio demos for CVPR papers here:… https://t.co/ekTNYuUkCQ",Day 2 at @CVPR 2022,47
118,1538765711169966080,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,@_arohan_ there is already a queue 😄 https://t.co/3ggYefcjMI,2
119,1538764856991547393,https://t.co/UjLVdJKjDt,https://t.co/UjLVdJKjDt,12
120,1538757119796715520,https://t.co/ghtd6xHQ7c,https://t.co/ghtd6xHQ7c,4
121,1538756244298661889,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,temporary link: https://t.co/fHFgtTir64 https://t.co/9Qbwr3mUwu,5
122,1538754677466087424,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,WIP @Gradio Demo for CogView2 https://t.co/hPmcvwjLsk,66
123,1538734927604338688,"a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/dF6Y2s4H5d",a @Gradio Demo for V-Doc : Visual questions answers with Documents on @huggingface Spaces for @CVPR 2022,20
124,1538731091175038977,"RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…","RT @Seungu_Han: Our paper ""NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates"" got accepted to Interspeech 2022…",0
125,1538719219818409994,"TAVA: Template-free Animatable Volumetric Actors
abs: https://t.co/lJ2C6e1VpG
project page: https://t.co/lpUgeGI7CX https://t.co/D62WYod4by",TAVA: Template-free Animatable Volumetric Actors,71
126,1538716898015293440,"RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…","RT @yilin_sung: Excited to participate in my first in-person @CVPR to present VL-Adapter, that benchmarks different parameter-efficient tra…",0
127,1538710356444471296,"Fast Finite Width Neural Tangent Kernel
abs: https://t.co/iY1lFoYMjA https://t.co/hWzzcCd5OZ",Fast Finite Width Neural Tangent Kernel,22
128,1538706936211951617,"What do navigation agents learn about their environment?
abs: https://t.co/eXelV0REgZ
github:… https://t.co/TGSzEQ1v1c",What do navigation agents learn about their environment?,36
129,1538700561800912896,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,RT @DrJimFan: @ak92501 Thank you so much AK for posting our work 🥰! What an honor! I’m the first author of MineDojo. We will have an announ…,0
130,1538698653493338114,"Bootstrapped Transformer for Offline Reinforcement Learning
abs: https://t.co/YiEY3uiTgL https://t.co/yle4hPgMmf",Bootstrapped Transformer for Offline Reinforcement Learning,136
131,1538695806311665665,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,RT @mark_riedl: MineDojo: a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse op…,0
132,1538695457550921728,"Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning
abs:… https://t.co/uLQLmf4l3M",Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning,41
133,1538694061531533313,"Evolution through Large Models
abs: https://t.co/2B0yygTiWa

pursues the insight that large language models trained… https://t.co/tfvNrHbTYG",Evolution through Large Models,97
134,1538692524830769152,"MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
abs: https://t.co/etfGL1xnum
project pa… https://t.co/Fv1aLuEJSV",MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge,262
135,1538689482534309890,"EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
abs:… https://t.co/GfAeLP6iAD","EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes",105
136,1538687423722541056,"Lossy Compression with Gaussian Diffusion
abs: https://t.co/tw5YiZAN3B

implement a proof of concept and find that… https://t.co/4nvLjhIX4e",Lossy Compression with Gaussian Diffusion,102
137,1538686489491648514,"NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates
abs: https://t.co/4S8sBXq6Ko

a diffu… https://t.co/xd3eQ0ApQJ",NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates,85
138,1538685207385079809,"Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
abs: https://t.co/ydrEo1SVh9
project page:… https://t.co/4LgYqVNenf","Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks",177
139,1538685023708127238,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,RT @phiyodr: Check out our work/demo for the #VizWiz workshop at #CVPR2022,0
140,1538642504609832960,"RT @gclue_akira: I shared #CogView2 colab working.

https://t.co/jwFBWFCSos

@ak92501",RT @gclue_akira: I shared #CogView2 colab working.,0
141,1538593847764197386,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,Made it to @CVPR 2022 https://t.co/alBnBYHmnT,222
142,1538558197459460096,"RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…","RT @mitts1910: Excited to share our #CVPR2022 paper, a collaboration of @Microsoft & @RITtigers, that achieves SOTA on Online Action Detect…",0
143,1538347108671049728,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,RT @gowthami_s: I will be in person at #CVPR22 to discuss our paper on understanding model reproducibility! Drop by and say hi if you are a…,0
144,1538331269863510017,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boun… https://t.co/oqjzwd8h3E,326
145,1538211869017653249,"RT @keunwoochoi: https://t.co/wEZo4Sxn0Q

AI Song Contest 2022 - the finalists 🔥🔥🔥",RT @keunwoochoi: https://t.co/wEZo4Sxn0Q,0
146,1538200789243596800,"RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!","RT @_tingliu: See you at Poster Session 3.2 on Thursday June 23, 2:30 - 5pm at #CVPR2022!",0
147,1538200381863481344,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,submit @Gradio demos for CVPR papers by joining the organization on @huggingface here: https://t.co/sNaZf2ztdy https://t.co/jc7VX1Hekd,21
148,1538026339747307521,"RT @weichiuma: Can you match images with little or no overlaps?

Humans can🧠but most existing methods fail😰

Our #CVPR2022 paper shoots c…",RT @weichiuma: Can you match images with little or no overlaps?,0
149,1538019922667659265,"RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…","RT @humphrey_shi: AI Research is empowering the world, and DEMO is a best way to showcase this power. Besides in-person Demos, we invite @C…",0
150,1538006265363738625,"iBoot: Image-bootstrapped Self-Supervised Video Representation Learning
abs: https://t.co/dkZUd4QC81 https://t.co/pJFpxd7ckU",iBoot: Image-bootstrapped Self-Supervised Video Representation Learning,72
151,1538002482088931331,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,dalle2 - robot reading arxiv papers on a laptop at midnight on a small desk with a lamp turn on and a full coffee m… https://t.co/sg2WIavOZn,38
152,1538000649933115393,"Neural Scene Representation for Locomotion on Structured Terrain
abs: https://t.co/68xY622f4w https://t.co/W3wTYp31f6",Neural Scene Representation for Locomotion on Structured Terrain,82
153,1537998346350043137,"Disentangling visual and written concepts in CLIP
abs: https://t.co/VsyuDV4HNI
project page: https://t.co/2hTQnhR2o1 https://t.co/LbWpnpTTHT",Disentangling visual and written concepts in CLIP,93
154,1537992206987845638,dalle2 - a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,dalle2 - a digital art piece of a robot reading arxiv papers at midnight on a small desk with a lamp turn on and a… https://t.co/V7tHDksfFX,221
155,1537989713256099848,"a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/P8xkisydJQ",a @Gradio Demo for It's About Time: Analog Clock Reading in the Wild on @huggingface Spaces for @CVPR 2022,10
156,1537972518438379520,"RT @imisra_: Why train separate models for visual modalities?

Following up on our Omnivore work: We train a single model on images, videos…",RT @imisra_: Why train separate models for visual modalities?,0
157,1537924151389736961,"Programmatic Concept Learning for Human Motion Description and Synthesis
paper: https://t.co/Qemk23gUHX
project pag… https://t.co/ImHeYQC5vj",Programmatic Concept Learning for Human Motion Description and Synthesis,59
158,1537825873931472898,"RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!

Our goal is to make it machine lea…",RT @abidlabs: Excited to announce the 2022 @CVPR-@Gradio competition ahead of the conference next week!,0
159,1537818135444828160,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,a @Gradio Demo for Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model on @huggingface Spaces for… https://t.co/tpSavhBA9G,17
160,1537817765213519873,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,RT @taesiri: @ak92501 @Gradio @huggingface @CVPR Neat! 😄 https://t.co/R6vy3QXcfB,0
161,1537796080238305280,"RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…","RT @armandjoulin: Thanks @ak92501 for sharing our work! Masked Autoencoders are insanely easy to use. You can throw any data at them, and t…",0
162,1537790206946181120,"RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃
@harlan_hong
You may also tr…",RT @danxuhk: Please check our paper and project for talking head video generation at the incoming CVPR 22 😃😃😃,0
163,1537778006302793728,"RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx

Omnivore meets MAE! OmniMAE is a single mod…",RT @_rohitgirdhar_: Excited to share the next evolution of Omnivore: https://t.co/SikzTdVIgx ,0
164,1537777742590230528,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,RT @CVPR: The papers to be presented will be listed here: https://t.co/IZfETICs8J https://t.co/dcRQ1BayrT,0
165,1537775332316614656,"RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓
A really cool HF Space with good explanations and some nice e…",RT @victormustar: 🚪Can you tell if a Neural Net contains a Backdoor Attack? 🤓,0
166,1537688195206418433,"Virtual Correspondence: Humans as a Cue for Extreme-View Geometry
abs: https://t.co/hAx8x4rnIO
project page:… https://t.co/z19LsVo2qX",Virtual Correspondence: Humans as a Cue for Extreme-View Geometry,195
167,1537685927505678337,"Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning
abs:… https://t.co/n02uqo0cb2",Beyond Supervised vs. Unsupervised: Representative Benchmarking and Analysis of Image Representation Learning,167
168,1537650506683801601,"GateHUB: Gated History Unit with Background Suppression for Online Action Detection
abs: https://t.co/3DqwFesEZi https://t.co/t1Pcz09AUR",GateHUB: Gated History Unit with Background Suppression for Online Action Detection,24
169,1537640654968324099,"Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing
abs: https://t.co/9tpvhXuaRw
project page:… https://t.co/XxpZg5PGke",Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing,72
170,1537639309888610305,"Realistic One-shot Mesh-based Head Avatars
abs: https://t.co/aETolvwoiH
project page: https://t.co/rTTLG67oPy https://t.co/C8aUN3VS37",Realistic One-shot Mesh-based Head Avatars,562
171,1537637590274277376,"MoDi: Unconditional Motion Synthesis from Diverse Data
abs: https://t.co/YBV9jSUemo https://t.co/o1uvG18RSk",MoDi: Unconditional Motion Synthesis from Diverse Data,70
172,1537630146244517889,"OmniMAE: Single Model Masked Pretraining on Images and Videos
abs: https://t.co/j9a3imUEJ6

single pretrained model… https://t.co/OiR2pY5emm",OmniMAE: Single Model Masked Pretraining on Images and Videos,144
173,1537626871319470080,"FWD: Real-time Novel View Synthesis with Forward Warping and Depth
abs: https://t.co/hbo0vxrlDd

propose a generali… https://t.co/etVCe4HPI9",FWD: Real-time Novel View Synthesis with Forward Warping and Depth,37
174,1537622879386456064,"SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos
abs: https://t.co/0MkpFJiUzM

using spars… https://t.co/x1Hvgf13qE",SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos,54
175,1537621348339572736,"BYOL-Explore: Exploration by Bootstrapped Prediction
abs: https://t.co/xXQtolzjlP

BYOL-Explore achieves superhuman… https://t.co/uZvAbVd1Bb",BYOL-Explore: Exploration by Bootstrapped Prediction,79
176,1537618457365303296,"Know your audience: specializing grounded language models with the game of Dixit
abs: https://t.co/T8d5ir8LDQ https://t.co/zSk5oR2F9D",Know your audience: specializing grounded language models with the game of Dixit,39
177,1537616695749230592,"Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
abs: https://t.co/JVutpfCfIq

pro… https://t.co/8nvWHPxXYm",Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models,11
178,1537615160172589056,"GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples
abs: https://t.co/XRTTRbABXG… https://t.co/2ewOJYVqTC",GoodBye WaveNet -- A Language Model for Raw Audio with Context of 1/2 Million Samples,360
179,1537613030225240066,"Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation
abs: https://t.co/RBbFId9jPF

On dance-to… https://t.co/IrXLM4bPcQ",Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation,68
180,1537593193407053826,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,a @Gradio Demo for Dual-Key Multimodal Backdoors for Visual Question Answering on @huggingface Spaces for @CVPR 202… https://t.co/g0MakJAhtz,16
181,1537586831310602240,"RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !

Demo: https://t.co/qyqmbg4eIC

and do join the CVPR 2022 organization…",RT @chaaarig: Also have a try at our demo on @Gradio/@huggingface !,0
182,1537568313504681986,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,RT @jw2yang4ai: We added a heat map visualization for our demo. It can somehow segment the concepts you are querying. Try it out.,0
183,1537546603262787584,"RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!","RT @gadelha_m: Always nice to see the work in AK’s feed! Congrats, @YimingXie4!",0
184,1537539330901782528,"RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…","RT @MatthewWalmer: Can you tell if a Neural Net contains a Backdoor Attack? Try this demo for ""Dual-Key Multimodal Backdoors for Visual Que…",0
185,1537489260126904322,"a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022

demo:… https://t.co/lEM23bNPL0",a @Gradio Demo for Bamboo_ViT-B16 for Image Recognition on @huggingface Spaces for @CVPR 2022,26
186,1537478059154079751,"RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…","RT @K_S_Schwarz: Sparse voxel grids have proven super useful for speeding up novel view synthesis. Inspired by this, our latest work uses a…",0
187,1537477283409272836,"RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg

Openreview: https://t.co/wV…",RT @skamalas: TLDR is now accepted at the Transactions of Machine Learning Research (TMLR) journal - @TmlrOrg ,0
188,1537460438463651842,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,RT @yilin_sung: Do you still get Out-of-Memory error even when you've saved >95% params w. adapter/prompt-tuning? Try Ladder Side-Tuning (L…,0
189,1537460412937019396,"RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp

(and thanks @ak92501 for sharing)",RT @yilin_sung: All our code is available at https://t.co/gTrTXtEodS. Feel free to check it out. @uncnlp,0
190,1537446428259233792,"RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!

This research is a collaborative effort w/ @…","RT @roeiherzig: Thanks for featuring our work @ak92501! For more info, please visit our page!",0
191,1537324192978419713,"AVATAR: Unconstrained Audiovisual Speech Recognition
abs: https://t.co/ZXdnRJppOk https://t.co/OTcPmcNM9E",AVATAR: Unconstrained Audiovisual Speech Recognition,30
192,1537323042380124160,"VCT: A Video Compression Transformer
abs: https://t.co/llH1L1ooKa

presented an elegantly simple transformer-based… https://t.co/ErovCWVDg3",VCT: A Video Compression Transformer,68
193,1537319908920393729,"It’s Time for Artistic Correspondence in Music and Video
abs: https://t.co/BKyP9MErgw
project page:… https://t.co/NYbUVqPTFo",It’s Time for Artistic Correspondence in Music and Video,58
194,1537316756880072705,"PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos
abs:… https://t.co/TpuSD4Ybkd",PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos,763
195,1537315443932815360,"LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection
abs:… https://t.co/tRCXSz3kxE",LET-3D-AP: Longitudinal Error Tolerant 3D Average Precision for Camera-Only 3D Detection,33
196,1537314480056672258,"Contrastive Learning as Goal-Conditioned Reinforcement Learning
abs: https://t.co/6dv7PNn0qq
project page:… https://t.co/vRSdekL9If",Contrastive Learning as Goal-Conditioned Reinforcement Learning,77
197,1537312940956712961,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,RT @ashkamath20: Presenting FIBER (Fusion In-the-Backbone transformER) a novel V&L architecture w/ deep multi-modal fusion + a new pre-trai…,0
198,1537301855595790337,"LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling
abs:https://t.co/RGQy8Vv1LG https://t.co/G1bdakn5Pr",LAVENDER: Unifying Video-Language Understanding as Masked Language Modeling,42
199,1537288570880368640,"Masked Siamese ConvNets
abs: https://t.co/YMG1O1ZZ5N https://t.co/LCVqVvFNfR",Masked Siamese ConvNets,83