Columns:
- context: string, lengths 368–421
- authors_citing: string, lengths 9–106
- title_cited: string, lengths 33–128
- authors_cited: string, lengths 15–219
- label: string, 3 classes (p / o / n)
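A minimal sketch of how one record of this dataset could be represented and tallied in Python. The `CitationRecord` class name is hypothetical (not part of the dataset); the field names mirror the five columns above, and the two sample rows are taken verbatim from the records below.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class CitationRecord:
    context: str          # truncated citation context (368-421 chars in this dump)
    authors_citing: str   # comma-separated citing authors
    title_cited: str      # title of the cited paper
    authors_cited: str    # comma-separated cited authors
    label: str            # one of the 3 classes: "p", "o", "n"

# Two illustrative rows from the dump (contexts abbreviated).
rows = [
    CitationRecord(
        context="state-of-the-art approaches project 2D detections to 3D grids ...",
        authors_citing="Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang",
        title_cited="Cross View Fusion for 3D Human Pose Estimation",
        authors_cited="Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng",
        label="p",
    ),
    CitationRecord(
        context="we use the data of four cameras to train and test our models ...",
        authors_citing="Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang",
        title_cited="Cross View Fusion for 3D Human Pose Estimation",
        authors_cited="Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng",
        label="o",
    ),
]

# Class distribution over the loaded rows.
label_counts = Counter(r.label for r in rows)
print(label_counts)
```

This keeps each row self-describing, so the label distribution or per-paper citation counts can be computed with a one-line `Counter`.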
te-of-the-art multi-camera 3D pose estimation algorithms tend to be computationally expensive because they rely on deep networks that operate on volumetric grids , or volumetric Pictorial Structures , to combine features coming from different views in accordance with epipolar geometry. Figure 1. Overview of 3D pose estimation from multi-view images. The state-of-the-art approaches proje
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
p
state-of-the-art approaches project 2D detections to 3D grids and reason jointly across views through computationally intensive volumetric convolutional neural networks or Pictorial Structures (PSM) . This yields accurate predictions but is computationally expensive. We design a lightweight architecture that predicts 2D joint locations from a learned camera-independent representation of 3D pose
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
p
t multi-view inputs. In particular, proposes to concatenate together pre-computed 2D detections and pass them as input to a fully connected network to predict global 3D joint coordinates. Similarly, refines 2D heatmap detections jointly by using a fully connected layer before aggregating them on 3D volumes. Although, similar to our proposed approach, these methods fuse information from different
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
o
across embeddings {z_i}_{i=1}^n, by concatenating features from different views and processing them through convolutional layers into view-dependent features, similar in spirit to the recent models . In Section 4 we refer to this general approach as Fusion. Although computationally lightweight and effective, we argue that this approach is limited for two reasons: (1) it does not make use of kno
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
n
style has three videos of the same action of which 1 and 2 are used for training and 3 for testing. This setup allows for testing on unseen and seen subjects but always unseen performances. Following , we use the data of four cameras to train and test our models. However, to illustrate the generalization ability of our approach to new camera settings, we propose an experiment where we train on
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
o
prove our accuracy by 3%. This is not surprising as we optimize directly for the target metric when training our network. Our best performing model outperforms the state-of-the-art volumetric model of by ∼ 5%. Note that their method lifts 2D detections to 3D using Recurrent Pictorial Structures (RPSM), which uses a pre-defined skeleton as a strong prior to lift 2D heatmaps to 3D detections. Our
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
n
y ∼ 15%, which is significant and indicates the superiority of our fusion technique. Similar to what was observed in Section 4.3, our best performing method is even superior to the off-line volumetric approach of , which uses a strong bone-length prior (Qiu et al. Fusion + RPSM). Our method outperforms all other multi-view approaches by a large margin. Note that in this setting we cannot compare to , as the
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
o
iority of our fusion technique. Similar to what was observed in Section 4.3, our best performing method is even superior to the off-line volumetric approach of , which uses a strong bone-length prior (Qiu et al. Fusion + RPSM). Our method outperforms all other multi-view approaches by a large margin. Note that in this setting we cannot compare to , as they do not report results without using additional data
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Cross View Fusion for 3D Human Pose Estimation
Haibo Qiu,Chunyu Wang,Jingdong Wang,Naiyan Wang,Wenjun Zeng
n
e-of-the-art multi-camera 3D pose estimation algorithms tend to be computationally expensive because they rely on deep networks that operate on volumetric grids , or volumetric Pictorial Structures , to combine features coming from different views in accordance with epipolar geometry. Figure 1. Overview of 3D pose estimation from multi-view images. The state-of-the-art approaches projec
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
p
tate-of-the-art approaches project 2D detections to 3D grids and reason jointly across views through computationally intensive volumetric convolutional neural networks or Pictorial Structures (PSM) . This yields accurate predictions but is computationally expensive. We design a lightweight architecture that predicts 2D joint locations from a learned camera-independent representation of 3D pose
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
p
s advantages e.g., ambiguities arising due to body joint occlusions as well as foreshortening or motion blur can be resolved by utilizing information from other views. There have been only a few works that utilize multi-view data to learn monocular 3D pose estimation models. While the approaches need extrinsic camera calibration, require at least some part of their training data to be labell
Umar Iqbal,Pavlo Molchanov,Jan Kautz
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
o
tion blur can be resolved by utilizing information from other views. There have been only a few works that utilize multi-view data to learn monocular 3D pose estimation models. While the approaches need extrinsic camera calibration, require at least some part of their training data to be labelled with ground-truth 3D poses. Both of these requirements are, however, very hard to acquire for un
Umar Iqbal,Pavlo Molchanov,Jan Kautz
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
o
ervised methods do not require paired 2D-3D data and only use weak supervision in the form of motion-capture data , images/videos with 2D annotations , collections of 2D poses , or multi-view images . Our approach also lies in this paradigm and learns to estimate 3D poses from unlabeled multi-view data. In , a probabilistic 3D pose model learned using motion-capture data is integrated into a mu
Umar Iqbal,Pavlo Molchanov,Jan Kautz
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
o
tween plausible and implausible poses. In , non-rigid structure from motion is used to learn a 3D pose estimator from videos with 2D pose annotations. The closest to our work are the approaches of in that they also use unlabeled multi-view data for training. The approach of , however, requires calibrated camera views that are very hard to acquire in unconstrained environments. The approach
Umar Iqbal,Pavlo Molchanov,Jan Kautz
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
o
s used to learn a 3D pose estimator from videos with 2D pose annotations. The closest to our work are the approaches of in that they also use unlabeled multi-view data for training. The approach of , however, requires calibrated camera views that are very hard to acquire in unconstrained environments. The approach estimates 2D poses from multi-view images and reconstructs corresponding 3D pose
Umar Iqbal,Pavlo Molchanov,Jan Kautz
Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations
Georgios Pavlakos,Xiaowei Zhou,Konstantinos G. Derpanis,Kostas Daniilidis
o
on from multi-view images. The state-of-the-art approaches project 2D detections to 3D grids and reason jointly across views through computationally intensive volumetric convolutional neural networks or Pictorial Structures (PSM) . This yields accurate predictions but is computationally expensive. We design a lightweight architecture that predicts 2D joint locations from a learned camera-indepe
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Learnable Triangulation of Human Pose
Karim Iskakov,Egor Burkov,Victor Lempitsky,Yury Malkov
p
torial Structures aggregation to estimate 3D poses. Similarly, proposes to use Recurrent Pictorial Structures to efficiently refine 3D pose estimations step by step. Improving upon these approaches, projects 2D heatmaps to a 3D volume using a differentiable model and regresses the estimated root-centered 3D pose through a learnable 3D convolutional neural network. This allows them to train thei
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Learnable Triangulation of Human Pose
Karim Iskakov,Egor Burkov,Victor Lempitsky,Yury Malkov
o
etric of , which uses a strong bone-length prior (Qiu et al. Fusion + RPSM). Our method outperforms all other multi-view approaches by a large margin. Note that in this setting we cannot compare to , as they do not report results without using additional data. Table 4 . Additional training data setup. We compare our method to the state-of-the-art approaches in terms of performance, inference ti
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Learnable Triangulation of Human Pose
Karim Iskakov,Egor Burkov,Victor Lempitsky,Yury Malkov
o
remarked in the previous section, our method also outperforms . The gap, however, is somewhat larger in this case (∼ 20%). Our approach also outperforms the triangulation baseline of (Iskakov et al. Algebraic), indicating that our fusion technique is effective in reasoning about multi-view input images. Finally, we observe that our method reaches accuracy comparable to the volumetric approach of
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Learnable Triangulation of Human Pose
Karim Iskakov,Egor Burkov,Victor Lempitsky,Yury Malkov
o
n across embeddings {z_i}_{i=1}^n, by concatenating features from different views and processing them through convolutional layers into view-dependent features, similar in spirit to the recent models . In Section 4 we refer to this general approach as Fusion. Although computationally lightweight and effective, we argue that this approach is limited for two reasons: (1) it does not make use of kn
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
A generalizable approach for multi-view 3D human pose regression
Abdolrahim Kadkhodamohammadi,Nicolas Padoy
n
tion of which 1 and 2 are used for training and 3 for testing. This setup allows for testing on unseen and seen subjects but always unseen performances. Following , we use the data of four cameras to train and test our models. However, to illustrate the generalization ability of our approach to new camera settings, we propose an experiment where we train on cameras and test on unseen camer
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
a. To the best of our knowledge there are no OoD-detection methods which are usable and have been investigated on graph-based data. Since human skeleton graphs can be easily generated from RGB images , , depth data , and even RF-signals , the representation of the dynamics of human actions can be captured without the high computational cost of optical flow or problems regarding poor visual con
Jens Bayer,David Munch,Michael Arens
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
ta is one way to solve action recognition tasks. Another strategy uses skeleton data, which can be extracted by a 2D or 3D pose estimator such as Stacked Hourglass Networks , PersonLab , or OpenPose . The extracted landmarks can be seen as human joints and form the nodes of a skeleton graph ( Figure 2 ). Based upon a time series of this graph input data, there are several ways to recogniz
Jens Bayer,David Munch,Michael Arens
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
graph skeleton data. Figure 5 shows the basic pipeline, which is similar to the one presented in . Given a video input, single frames are extracted and analyzed by a 2D pose estimator (e.g. OpenPose ). The resulting sequence of skeleton data is then propagated through a graph CNN (e.g. ST-GCN ) resulting in a regularized high-level representation of the input data. Based on this extracted high-
Jens Bayer,David Munch,Michael Arens
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
corresponding keypoint-based pose P ∈ R^{18×H×W} of I, an 18-channel heat map that encodes the locations of 18 joints of a human body, can be automatically extracted via an existing pose estimation method . During training, a target pose P_t and a source person image I_s are fed into the generator and a synthesized image I_g following the appearance of I_s but under the pose P_t will be challenged for
Yifang Men,Yiming Mao,Yuning Jiang,Wei-Ying Ma,Zhouhui Lian
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
ease to handle them. In fact, keypoint-based methods have been crucial to the success of many vision applications. A few examples include: 3D reconstruction , registration , human body pose , recognition , and generation . That being said, many keypoints are defined manually, while considering their semantic locations such as facial landmarks and human body joints, to serve and sim
Clara Fernandez-Labrador,Ajad Chhatkuli,Danda Pani Paudel,Jose J. Guerrero,C'edric Demonceaux,Luc Van Gool
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
p
efers to the segmentation of a human image into multiple parts with fine-grained semantics. These have been used in many tasks such as human behaviour analysis and person re-identification . Cao et al. proposed a part affinity field (PAF) based method for localizing the human landmarks, where PAF is a non-parametric representation that learns to associate body parts of the person in the image. Pres
Debapriya Roy,Sanchayan Santra,Bhabatosh Chanda
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
based method for localizing the human landmarks, where PAF is a non-parametric representation that learns to associate body parts of the person in the image. The present work uses the method presented in for localizing the human landmarks of the model and the person. This facilitates efficient alignment of the model cloth, which is extracted from the model image using , a human parsing approach
Debapriya Roy,Sanchayan Santra,Bhabatosh Chanda
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
h annotations. Before applying our algorithm, we prepare various inputs by some existing methods. These include densepose representation (by ), human segmentation (by ) and human pose estimates (by ) of the images. Due to our self-supervised training strategy our method does not require any train-test split of the dataset. Hence, the entire dataset is used for training. During testing we random
Debapriya Roy,Sanchayan Santra,Bhabatosh Chanda
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
e is also a wide range of applications. Computer vision, in particular, abounds with tasks of a matching flavor: optical flow , person re-identification , stereo matching , pose estimation , object tracking , to name just a few. Matching problems are also relevant in a variety of scientific disciplines including biology , language processing , bioinformatics , correspondence prob
Michal Rol'inek,Paul Swoboda,Dominik Zietlow,Anselm Paulus,V'it Musil,Georg Martius
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
lips in 400 classes collected from YouTube. The original Kinetics does not contain joint information, so Yan et al. estimate the 18 2D joint coordinates and confidence per person using the OpenPose toolbox. The OpenPose toolbox is publicly available. The released dataset is divided into training sets (240,000 clips) and test sets (20,000 clips). The clips have 300 frames and each frame contain
Yuya Obinata,Takuma Yamamoto
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
o
s keypoints , , . Bottom-up methods have reversed order of steps: the first step is to locate all the keypoints in an image and then to group these keypoints according to the person they belong to and . Recently, researchers also tried to find the whole body estimation using only a single network , which improves the performance drastically compared to the well-known OpenPose . The model us
Seyed Yahya Nikouei,Yu Chen,Alexander Aved,Erik Blasch
Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields
Zhe Cao,Tomas Simon,Shih-En Wei,Yaser Sheikh
p
el in , we consider a setting in which we exploit additional training data. We adopt the same pre-training strategy as , that is, we pretrain a monocular pose estimation network on the COCO dataset , and fine-tune jointly on the Human3.6M and MPII datasets. We then simply use these pre-trained weights to initialize our network. We also report results for , which trains its detector jointly on MPI
Edoardo Remelli,Shangchen Han,Sina Honari,Pascal Fua,Robert Wang
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
r method consistently outperforms . Table 9 shows the comparison with the Faster R-CNN variant of . The first set of columns in Table 9 shows the results on PASCAL VOC and the next two on MS COCO dataset. We show the results on different incremental settings ('a + b' columns, where a is the set of base classes and b are the incremental classes added to the detector trained on a). The detector
K J Joseph,Jathushan Rajasegaran,Salman Khan,Fahad Shahbaz Khan,Vineeth Balasubramanian,Ling Shao
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
ple, as shown in Fig. 1 , images in object recognition datasets (e.g. ImageNet ) often contain a single object, usually from a closeup view, whereas scenes in object detection datasets (e.g. MS COCO ) have multiple objects. Due to this, object characteristics in the two types of datasets might be different. For example, objects are often smaller in detection datasets compared to recognition data
Ali Borji
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
for visual objects of various scales. For both the one-stage methods and two-stage methods, detectors based on CNN with FPN achieve better results on the large-scale natural object detection dataset COCO . The structure of standard FPN takes the last residual layer from the 4 stages of the backbone as input and then goes through a top-down pathway to construct 4 feature layers at different scales. Th
Qilei Chen,Ping Liu,Jing Ni,Yu Cao,Benyuan Liu,Honggang Zhang
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
p
res massive amounts of training samples. There have been a range of natural imagery datasets of extremely large volume in the computer vision community, such as ImageNet , Open Images , and COCO . For instance, more than 14 million images have been hand-annotated by the ImageNet project. There have also been a number of datasets extracted from optical satellite or aerial images, although the
Sheng Sun,Armando Marino,Wenze Shui,Zhongwen Hu
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
. Typically, captioning is approached as a translation task, mapping an input image to a sequence of words. In most existing work, learning is fully supervised, using existing data sets of "natural images/videos" with associated descriptions. However, when faced with drastically different visual content, e.g., remote sensing or surveillance videos, and a different type of descript
Davis Gilton,Ruotian Luo,Rebecca Willett,Greg Shakhnarovich
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
rences. First, we use a real-fake discriminator instead of a retrieval-based discriminator. Second, the datasets/tasks are different. Our datasets are more under-annotated and out-of-domain than COCO , a large natural image dataset that can benefit easily from pretrained vision networks. Note that, despite the similar technique, our focus in this paper is not to propose a new semi-supervised tec
Davis Gilton,Ruotian Luo,Rebecca Willett,Greg Shakhnarovich
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
ion tasks such as image classification , object detection , or semantic segmentation . One key to the success of these approaches is the availability of massively labeled datasets such as ImageNet or COCO . Unfortunately, annotating data at this scale is expensive and not always feasible, depending on the task at hand. Improving the generalization capabilities of deep neural networks and removing the
Nikita Dvornik,Cordelia Schmid,Julien Mairal
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
p
multiple datasets with different data distributions. It includes ImageNet , Omniglot , Aircraft , CU-Birds , Describable Textures , Quick Draw , Fungi , VGG-Flower , Traffic Sign and MSCOCO . A short description of each dataset is contained in Appendix. Traffic Sign and MSCOCO datasets are reserved for testing only, while all other datasets have their corresponding train, val and test s
Nikita Dvornik,Cordelia Schmid,Julien Mairal
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
encoder, as shown in Figure 3 . The values of parameters in the original encoder are learnable while those in the VGG encoder are fixed. Since the fixed VGG network is pretrained on the COCO dataset and it has seen many images with various textures, it has a global property and strong generalization ability for in-the-wild textures. But unlike the typical style transfer task requiring only a r
Yifang Men,Yiming Mao,Yuning Jiang,Wei-Ying Ma,Zhouhui Lian
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
h tasks have evolved, approaches have become more robust and scalable and are starting to "solve" early datasets. Moreover, while increasingly largescale classification datasets like ImageNet , COCO and OpenImages have established themselves as standard benchmarks, image retrieval is still commonly evaluated on very small datasets. For example, the original Oxford5k and Paris6k datasets that
Tobias Weyand,Andre Araujo,Bingyi Cao,Jack Sim
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
are HOIs. To evaluate PaStaNet, we perform image-based HOI recognition on HICO . HICO has 38,116 and 9,658 images in train and test sets and 600 HOIs composed of 117 verbs and 80 COCO objects . Each image has an image-level label which is the aggregation over all HOIs in an image and does not contain any instance boxes. Modes. We first pre-train Activity2Vec with PaSta labels, then fine-t
Yong-Lu Li,Liang Xu,Xinpeng Liu,Xijie Huang,Yue Xu,Shiyi Wang,Hao-Shu Fang,Ze Ma,Mingyang Chen,Cewu Lu
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
the ImageNet and Activity2Vec is used as a pretrained knowledge engine to promote other tasks. V-COCO. V-COCO contains 10,346 images and instance boxes. It has 29 action categories and the 80 COCO objects . For a fair comparison, we exclude the images of V-COCO and corresponding PaSta labels in PaStaNet, and use the remaining data (109K images) for pre-training. We use SGD with 0.9 momentum and cosine deca
Yong-Lu Li,Liang Xu,Xinpeng Liu,Xijie Huang,Yue Xu,Shiyi Wang,Hao-Shu Fang,Ze Ma,Mingyang Chen,Cewu Lu
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
nal image to efficient low dimensional features, and the latter use this low dimensional features to generate captions. However, most of the image captioning models have mostly been trained on MSCOCO or Pascal-VOC (Everingham et al.) , which consists of 80 and 20 object classes respectively. All the images are captioned taking into consideration only these classes. Thus, even though current model
Pranav Agarwal,Alejandro Betancourt,Vana Panagiotou,Natalia D'iaz-Rodr'iguez
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
coder then generates each word by taking into consideration the relative importance of each spatial region. Most approaches follow this methodology and train the models using datasets such as MS-COCO , PASCAL-VOC (Everingham et al.) and Flickr 30k to name a few. These datasets have millions of images with human labelled captions for a predefined number of object categories. COCO dataset has capt
Pranav Agarwal,Alejandro Betancourt,Vana Panagiotou,Natalia D'iaz-Rodr'iguez
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
ons, to facilitate the linking of different knowledge sources. 2 https://verbs.colorado.edu/verb-index/vn3.3 Visual Genome (VG) includes natural images from the intersection of YFCC100M and MS-COCO . Scenes are annotated with regions enclosing each object. Each region is annotated with: (i) the object class label, (ii) a textual description of the region content, and, optionally, (iii) addition
Agnese Chiatti,Enrico Motta,Enrico Daga
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
detection models. We adopt a person-detection model trained with Detectron . It is a Faster R-CNN with a ResNeXt-101-FPN backbone. It is pre-trained on ImageNet and the COCO human keypoint images . We fine-tune this detector on AVA for person (actor) detection. The person detector produces 93.9 AP@50 on the AVA validation set. Then, the region proposals for action detection are detected perso
Christoph Feichtenhofer
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
mark. For both, we generate per-frame mask proposals M ∈ {m_1, ..., m_n} for all the objects in a video using a ResNet-101 based Mask R-CNN . To ensure a fair , and augmented images from COCO , as well as the Pascal-VOC dataset for 120k iterations. This network is initialized with weights from a model trained for image instance segmentation on COCO. We use SGD with a momentum of 0.9 and an i
Ali Athar,Sabarinath Mahadevan,Aljovsa Ovsep,Laura Leal-Taix'e,Bastian Leibe
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
detects objects along with the instance segmentation, which the latter takes in to estimate the 6D pose. We finetune the Mask R-CNN with the YCB-Video Dataset with the pretrained Microsoft COCO model . We use the publicly available DenseFusion weights and implementation without fine-tuning. The frontend in our implementation selects every 10th camera frame as a keyframe. The camera visual odometry
Zhiqiang Sui,Haonan Chang,Ning Xu,Odest Chadwicke Jenkins
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
Context-Aware Emotion Recognition (CAER) dataset is a collection of video-clips from TV shows with 7 discrete emotion annotations. EMOTIC dataset is a collection of images from datasets like MSCOCO and ADE20K along with images downloaded from web searches. The dataset is a collection of 23,571 images, with about 34,320 people annotated for 26 discrete emotion classes. We have summarised and
Trisha Mittal,Pooja Guhan,Uttaran Bhattacharya,Rohan Chandra,Aniket Bera,Dinesh Manocha
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
ning pose estimation and part segmentation. We use an Adam optimizer with a learning rate of 3×10^-4, and the batch size is 32. In the experiments, we first pre-train our Pose-Part Network using MSCOCO and Pascal-Person-Parts to ensure reasonable performance for pose estimation and part segmentation. Next, we train our full model using the UP-3D and Human3.6M datasets to learn mesh reconstruction.
Kevin Lin,Lijuan Wang,Ying Jin,Zicheng Liu,Ming-Ting Sun
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Doll'ar
o
and the design of the loss function. For small datasets like MNIST and CIFAR-10 , shallow architectures such as AlexNet and CNN-F are widely used, while for complex datasets like NUS-WIDE and COCO , deeper architectures such as VGG and ResNet50 are needed. The intuition of the loss function design is to maintain similarity, such as minimizing the gap between the similarity in the original spa
Xiao Luo,Chong Chen,Huasong Zhong,Hao Zhang,Minghua Deng,Jianqiang Huang,Xiansheng Hua
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Dollár
o
ages containing categories such as sky, trees, and sea without manual inspection. There are 9,649 training images and 943 validation images. -COCO-Stuff has the same number of images as the COCO dataset , but augments COCO by adding dense pixel-wise stuff annotations. It has 118,000 training images and 5,000 validation images with 182 semantic classes. -Cityscapes dataset is a widely used dataset f
Zhentao Tan,Dongdong Chen,Qi Chu,Menglei Chai,Jing Liao,Mingming He,Lu Yuan,Nenghai Yu
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Dollár
p
ning set of Pascal VOC 2012 and ImageNet ) for supervised learning only, denoted as CCNs, and semi-supervised learning with an additional subset of trainval35k with overlapping classes of the COCO dataset , denoted as CCNs*. The evaluation is done on the val set of Pascal 3D+ using the Average Precision (AP) metric and Average Viewpoint Precision (AVP) , where we focus on the AVP24 metric. Furthermore, we
Sunghun Joung,Seungryong Kim,Hanjae Kim,Minsu Kim,Ig-Jae Kim,Junghyun Cho,Kwanghoon Sohn
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Dollár
o
. 1 . Indeed, modern computer vision systems are almost all built upon backbone deep neural networks (e.g., ResNet or Faster R-CNN ) pre-trained on large-scale datasets (e.g., ImageNet and MS-COCO ). The pre-training not only speeds up the training, but also provides a powerful feature extractor for down-stream tasks. As shown in Fig. 1(a) , the backbone network will represent the image featur
Xu Yang,Hanwang Zhang,Jianfei Cai
Microsoft COCO: Common Objects in Context
Tsung-Yi Lin,Michael Maire,Serge Belongie,Lubomir Bourdev,Ross Girshick,James Hays,Pietro Perona,Deva Ramanan,C. Lawrence Zitnick,Piotr Dollár
p
LSTM and were one of the first to show the success of neural networks for this problem, with applications to task-oriented dialogue. Since then, some works have focused on alternative architectures - generate text by conditioning language models on tables, while propose to explicitly model entities present in the structured data. The findings of the E2E challenge show that standard se
Mihir Kale,Scott Roy
Table-to-text Generation by Structure-aware Seq2seq Learning
Tianyu Liu,Kexiang Wang,Lei Sha,Baobao Chang,Zhifang Sui
o
e unsupervised pre-training + fine-tuning paradigm has been shown to be remarkably effective, leading to improvements in NLP tasks like classification, question answering and spoken language understanding . Results for generation tasks like summarization are also positive, albeit less dramatic. propose the MASS technique and obtain state-of-the-art results for summarization and unsupervised machine t
Mihir Kale,Scott Roy
Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents
Aditya Siddhant,Anuj Goyal,Angeliki Metallinou
p
ding to improvements in NLP tasks like classification, question answering and spoken language understanding . Results for generation tasks like summarization are also positive, albeit less dramatic. propose the MASS technique and obtain state-of-the-art results for summarization and unsupervised machine translation. show that denoising autoencoders can be leveraged for unsupervised language gen
Mihir Kale,Scott Roy
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
p
T models on a combination of languages can lead to surprisingly effective crosslingual performance on NLU tasks, without using any parallel data. Of the myriad unsupervised techniques, we choose MASS for our baseline since it has been shown to outperform other alternatives like BERT, left-to-right language models and denoising autoencoders for language generation tasks. We first train an unsupervi
Mihir Kale,Scott Roy
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
o
ample. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title (Cao et al., 2018b,a) . These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the tra
Di Jin,Zhijing Jin,Joey Tianyi Zhou,Lisa Orii,Peter Szolovits
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
o
ts of a 6-layer encoder E(·; θ_E) and a 6-layer decoder G(·; θ_G) with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model . MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-
Di Jin,Zhijing Jin,Joey Tianyi Zhou,Lisa Orii,Peter Szolovits
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
o
r method tries to pre-train a SEQ2SEQ Transformer with its encoder and decoder parameters shared. In contrast, we pre-train a SEQ2SEQ Transformer with separate parameters for the encoder and decoder. proposed a method to pre-train a SEQ2SEQ Transformer by masking a span of text and then predicting the original text with masked tokens at other positions. Their pretraining task is similar to our Ma
Yanyan Zou,Xingxing Zhang,Wei Lu,Furu Wei,Ming Zhou
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
o
uage modeling . BERT pre-trains a large Transformer at the masked-language modeling task. There have been numerous extensions to BERT. For example, MASS and UniLM extend BERT to generation tasks by adding auto-regressive generative training objectives. ERNIE and SpanBERT mask out contiguous sequences of tokens for improved span representations. This
Kevin Clark,Minh-Thang Luong,Quoc V. Le,Christopher D. Manning
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song,Xu Tan,Tao Qin,Jianfeng Lu,Tie-Yan Liu
o
or generation tasks like summarization are also positive, albeit less dramatic. propose the MASS technique and obtain state-of-the-art results for summarization and unsupervised machine translation. show that denoising autoencoders can be leveraged for unsupervised language generation from structured data. cast data-to-text as text-to-text generation and show that finetuning GPT language models
Mihir Kale,Scott Roy
Unsupervised Natural Language Generation with Denoising Autoencoders
Markus Freitag,Scott Roy
p
d obtain state-of-the-art results for summarization and unsupervised machine translation. show that denoising autoencoders can be leveraged for unsupervised language generation from structured data. cast data-to-text as text-to-text generation and show that finetuning GPT language models can lead to performance competitive with architectures developed specifically for data-to-text. use language m
Mihir Kale,Scott Roy
Hello, It's GPT-2 -- How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems
Paweł Budzianowski,Ivan Vulić
p
state and pruning the agent's incremental, word-level generation actions to those leading to syntactically correct word sequences. While outperforming end-to-end dialogue models on bAbI Dialog Tasks in the extreme zero-shot case , this method inherited the limitations of the dialogue grammar; specifically, it is limited to a single closed domain until a wide-coverage grammar is available.
Igor Shalyminov,Alessandro Sordoni,Adam Atkinson,Hannes Schulz
Hello, It's GPT-2 -- How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems
Paweł Budzianowski,Ivan Vulić
n
., 2017) . Since the English dataset was automatically created by crawling and aligning sports score boxes and summaries, large parts of the text in the RotoWire dataset are not grounded in the data. find that techniques such as multilingual training, back-translation, etc. can help improve data-to-text performance in data-scarce scenarios. Our focus is on NMT-based transfer learning and it can b
Mihir Kale,Scott Roy
Findings of the Third Workshop on Neural Generation and Translation
Hiroaki Hayashi,Yusuke Oda,Alexandra Birch,Ioannis Konstas,Andrew Finch,Minh-Thang Luong,Graham Neubig,Katsuhito Sudoh
o
where the structured data is flattened into a plain string consisting of a series of intents and slot key-value pairs. More exotic architectures have been suggested in prior work, but the findings of show that simple seq2seq models are competitive alternatives, while being simpler to implement. Secondly, the transformer architecture is state-of-the-art for NMT. Thirdly, keeping the pre-train and
Mihir Kale,Scott Roy
Findings of the E2E NLG Challenge
Ondřej Dušek,Jekaterina Novikova,Verena Rieser
p
ugh continuous quantum transitions, from the disorder phase to the order phase. Its scaling predictions have been confirmed by experiments for various physically interesting systems, see e.g. Refs. [19] . KZ-like protocols have been largely employed to investigate the critical dynamics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may
Davide Rossini,Ettore Vicari
Spontaneous creation of Kibble-Zurek solitons in a Bose-Einstein condensate
Giacomo Lamporesi,Simone Donadello,Simone Serafini,Franco Dalfovo,Gabriele Ferrari
o
gh continuous quantum transitions, from the disorder phase to the order phase. Its scaling predictions have been confirmed by experiments for various physically interesting systems, see e.g. Refs. [19] . KZ-like protocols have been largely employed to investigate the critical dynamics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may
Davide Rossini,Ettore Vicari
Simulating the Kibble-Zurek mechanism of the Ising model with a superconducting qubit system
Ming Gong,Xueda Wen,Guozhu Sun,Dan-Wei Zhang,Dong Lan,Yu Zhou,Yunyi Fan,Yuhao Liu,Xinsheng Tan,Haifeng Yu,Yang Yu,Shi-Liang Zhu,Siyuan Han,Peiheng Wu
o
tinuous quantum transitions, from the disorder phase to the order phase. Its scaling predictions have been confirmed by experiments for various physically interesting systems, see e.g. Refs. [19] . KZ-like protocols have been largely employed to investigate the critical dynamics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead t
Davide Rossini,Ettore Vicari
Quantum Kibble-Zurek mechanism and critical dynamics on a programmable Rydberg simulator
Alexander Keesling,Ahmed Omran,Harry Levine,Hannes Bernien,Hannes Pichler,Soonwon Choi,Rhine Samajdar,Sylvain Schwartz,Pietro Silvi,Subir Sachdev,Peter Zoller,Manuel Endres,Markus Greiner,Vladan Vuletic,Mikhail D. Lukin
o
physically interesting systems, see e.g. Refs. [19] . KZ-like protocols have been largely employed to investigate the critical dynamics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower qu
Davide Rossini,Ettore Vicari
Dynamics of a Quantum Phase Transition and Relaxation to a Steady State
Jacek Dziarmaga
o
mics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in K
Davide Rossini,Ettore Vicari
Robustness of adiabatic passage through a quantum phase transition
Andrea Fubini,Giuseppe Falci,Andreas Osterloh
o
ics of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ
Davide Rossini,Ettore Vicari
Adiabatic dynamics in open quantum critical many-body systems
Dario Patane,Alessandro Silva,Luigi Amico,Rosario Fazio,Giuseppe E. Santoro
o
cs of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ
Davide Rossini,Ettore Vicari
Adiabatic dynamics of a quantum critical system coupled to an environment: Scaling and kinetic equation approaches
Dario Patane,Alessandro Silva,Luigi Amico,Rosario Fazio,Giuseppe E. Santoro
o
s of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ p
Davide Rossini,Ettore Vicari
Quantum Kibble-Zurek physics in the presence of spatially-correlated dissipation
P. Nalbach,Smitha Vishveshwara,Aashish A. Clerk
o
of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ pr
Davide Rossini,Ettore Vicari
Anti-Kibble-Zurek Behavior in Crossing the Quantum Critical Point of a Thermally Isolated System Driven by a Noisy Control Field
Anirban Dutta,Armin Rahmani,Adolfo del Campo
o
of closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ pro
Davide Rossini,Ettore Vicari
Anti-Kibble-Zurek behavior of a noisy transverse-field XY chain and its quantum simulation with two-level systems
Zhi-Peng Gao,Dan-Wei Zhang,Yang Yu,Shi-Liang Zhu
o
f closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ prot
Davide Rossini,Ettore Vicari
Dissipation in adiabatic quantum computers: Lessons from an exactly solvable model
Maximilian Keck,Simone Montangero,Giuseppe E. Santoro,Rosario Fazio,Davide Rossini
o
closed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ proto
Davide Rossini,Ettore Vicari
Quantum annealing via environment-mediated quantum diffusion
Vadim N. Smelyanskiy,Davide Venturelli,Alejandro Perdomo-Ortiz,Sergey Knysh,Mark I. Dykman
o
losed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ protoco
Davide Rossini,Ettore Vicari
Spontaneous symmetry breaking induced by quantum monitoring
Luis Pedro García-Pintos,Diego Tielas,Adolfo del Campo
o
osed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ protocol
Davide Rossini,Ettore Vicari
Universal anti-Kibble-Zurek scaling in fully-connected systems
Ricardo Puebla,Andrea Smirne,Susana F. Huelga,Martin B. Plenio
o
sed systems, subject to unitary time evolutions only . The open nature of quantum systems, however, may lead to a departure from the dynamic scaling behavior predicted for the isolated case . In particular, it has been observed that slower quenches in open systems, or subject to noisy controls, may generate an overabundance of defects when approaching the adiabatic limit in KZ protocols
Davide Rossini,Ettore Vicari
Work statistics across a quantum phase transition
Zhaoyu Fei,Nahuel Freitas,Vasco Cavina,H. T. Quan,Massimiliano Esposito
o
system. We focus on a class of dissipative mechanisms whose dynamics can be reliably described through a Lindblad master equation governing the time evolution of the density matrix of the system . We argue that, in the presence of weak dissipation, the dynamics of many-body systems may still develop a scaling behavior under KZ protocols (i.e., slow changes of one Hamiltonian parameter across
Davide Rossini,Ettore Vicari
Keldysh Field Theory for Driven Open Quantum Systems
L. M. Sieberer,M. Buchhold,S. Diehl
o
dy systems may still develop a scaling behavior under KZ protocols (i.e., slow changes of one Hamiltonian parameter across its critical value), thus extending the dynamic KZ scaling of closed systems . Its main features, in the presence of weak dissipation, are still controlled by the universality class of the quantum transition, provided the system-environment interaction strength is suitably tu
Davide Rossini,Ettore Vicari
The Kibble-Zurek Problem: Universality and the Scaling Limit
Anushya Chandran,Amir Erez,Steven S. Gubser,S. L. Sondhi
o
ough a Lindblad master equation governing the time evolution of the density matrix of the open system. The perturbation arising from the dissipation turns out to be relevant at the quantum transition . This implies that open systems cannot develop asymptotic dynamic scaling behaviors controlled by the universality class of the quantum transition when keeping the dissipation decay rate u finite a
Davide Rossini,Ettore Vicari
Competing coherent and dissipative dynamics close to quantum criticality
Davide Nigro,Davide Rossini,Ettore Vicari
o
ugh a Lindblad master equation governing the time evolution of the density matrix of the open system. The perturbation arising from the dissipation turns out to be relevant at the quantum transition . This implies that open systems cannot develop asymptotic dynamic scaling behaviors controlled by the universality class of the quantum transition when keeping the dissipation decay rate u finite an
Davide Rossini,Ettore Vicari
Scaling behavior of the stationary states arising from dissipation at continuous quantum transitions
Davide Rossini,Ettore Vicari
o
eavily on the quantity and quality of data and there is some evidence that the performance on some computer vision tasks (e.g. image classification) keeps improving at least up to billions of samples . The pedestrian detection research community has in recent years published increasingly bigger and more challenging datasets to advance the field. Although the size of these datasets has increase
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
Exploring the Limits of Weakly Supervised Pretraining
Dhruv Mahajan,Ross Girshick,Vignesh Ramanathan,Kaiming He,Manohar Paluri,Yixuan Li,Ashwin Bharambe,Laurens van der Maaten
p
d learning techniques , all of which train on a set of highly-curated, well-balanced data: ImageNet . Scaling up single-image techniques to larger, less-curated datasets like Instagram-1B has not provided large improvements in performance . There is only so much that can be learned from a single image: no amount of artificial augmentation can show a new view of an object or what migh
Daniel Gordon,Kiana Ehsani,Dieter Fox,Ali Farhadi
Exploring the Limits of Weakly Supervised Pretraining
Dhruv Mahajan,Ross Girshick,Vignesh Ramanathan,Kaiming He,Manohar Paluri,Yixuan Li,Ashwin Bharambe,Laurens van der Maaten
o
.g. image classification) keeps improving at least up to billions of samples . The pedestrian detection research community has in recent years published increasingly bigger and more challenging datasets to advance the field. Although the size of these datasets has increased by several orders of magnitude, the data still remains one of the major bottlenecks in the performance of these methods
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
n
oposed Histogram of Oriented Gradients (HOG) feature descriptor for representing pedestrians. Dollár et al. proposed ACF, where the key idea was to use features across multiple channels. Similarly, used filtered channel features and low-level visual features along with spatial pooling respectively for pedestrian detection. These earlier works focused more on feature descriptors and mostly us
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o
g large-scale images for the autonomous driving systems. However, in the last decade several datasets have been proposed from the context of autonomous driving such as KITTI , Caltech , CityPersons and ECP . Typically these datasets are captured by a vehicle-mounted camera navigating through crowded scenarios. These datasets have been used by several methods with Caltech and CityPersons b
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o
s and ECP . Typically these datasets are captured by a vehicle-mounted camera navigating through crowded scenarios. These datasets have been used by several methods with Caltech and CityPersons being the most established benchmarks in this domain. However, the Caltech and CityPersons datasets are monotonous in nature and they lack diverse scenarios (contain only street view images). Recently,
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o
ating through crowded scenarios. These datasets have been used by several methods with Caltech and CityPersons being the most established benchmarks in this domain. However, the Caltech and CityPersons datasets are monotonous in nature and they lack diverse scenarios (contain only street view images). Recently, the ECP dataset, which is an order of magnitude larger than CityPersons, has been proposed.
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o
and CityPersons datasets are monotonous in nature and they lack diverse scenarios (contain only street view images). Recently, the ECP dataset, which is an order of magnitude larger than CityPersons, has been proposed. ECP is much bigger and more diverse, since it contains images from all seasons in several different countries, in both daytime and nighttime. However, despite its large scale, ECP
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
n
cle in Los Angeles, USA. All experiments on Caltech are conducted using new annotations provided by . Table 1 . Evaluation protocol. Following the widely accepted protocol of Caltech , CityPersons and ECP , the detection performance is evaluated using the log-average miss rate over False Positives Per Image (FPPI) ranging in [10⁻², 10⁰], denoted by MR₋₂. We evaluate and compare all methods
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
p
e how well state-of-the-art pedestrian detectors generalize to different datasets, we performed cross-dataset evaluation of three state-of-the-art pedestrian detectors and our baseline on the CityPersons and Caltech datasets. We evaluated the recently proposed CSP , ALFNet and FRCNN (tailored for pedestrian detection) . Furthermore, we added along with our baseline, Faster R-CNN , without "bells a
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o
s, we performed cross-dataset evaluation of three state-of-the-art pedestrian detectors and our baseline on the CityPersons and Caltech datasets. We evaluated the recently proposed CSP , ALFNet and FRCNN (tailored for pedestrian detection) . Furthermore, we added along with our baseline, Faster R-CNN , without "bells and whistles". We present results for Caltech and CityPersons in Table 6 , respect
Irtiza Hasan,Shengcai Liao,Jinpeng Li,Saad Ullah Akram,Ling Shao
CityPersons: A Diverse Dataset for Pedestrian Detection
Shanshan Zhang,Rodrigo Benenson,Bernt Schiele
o

This dataset contains manual sentiment annotations of citation contexts from scientific papers, with the labels "p," "n," and "o" denoting positive, negative, and other, respectively. The dataset comprises 100 records and was created to evaluate the performance of the sci-sentiment-classify model: https://huggingface.co/puzzz21/sci-sentiment-classify.
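A record from this dataset can be sketched as below. This is a minimal illustration, not the official loading code: the field names (`context`, `authors_citing`, `title_cited`, `authors_cited`, `label`) mirror the columns shown above but are assumptions about the exact schema, and the example record is abridged from the first entry.

```python
# Illustrative sketch of one record and the p/n/o label mapping.
# Field names are assumed from the columns above; the real schema may differ.
LABELS = {"p": "positive", "n": "negative", "o": "other"}

record = {
    "context": "This yields accurate predictions but is computationally expensive.",
    "authors_citing": "Edoardo Remelli, Shangchen Han, Sina Honari, Pascal Fua, Robert Wang",
    "title_cited": "Cross View Fusion for 3D Human Pose Estimation",
    "authors_cited": "Haibo Qiu, Chunyu Wang, Jingdong Wang, Naiyan Wang, Wenjun Zeng",
    "label": "p",
}

def expand_label(rec: dict) -> str:
    """Map a record's one-letter label to its full sentiment name."""
    return LABELS[rec["label"]]

def label_distribution(records: list) -> dict:
    """Count how many records carry each sentiment label."""
    counts = {name: 0 for name in LABELS.values()}
    for rec in records:
        counts[expand_label(rec)] += 1
    return counts

print(expand_label(record))          # positive
print(label_distribution([record]))  # {'positive': 1, 'negative': 0, 'other': 0}
```

To score the `puzzz21/sci-sentiment-classify` model against these gold labels, one would load the 100 records (for example with the `datasets` library), run the model on each `context`, and compare its predictions to `label`; that loop is omitted here because it requires downloading the hosted model.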
