_id (string, 36 chars) · text (string, 200 to 328k chars) · label (5 classes)
114f38b3-2792-4224-bd93-79ab5b38c86e
With DONeRF, we are the first to render large-scale computer graphics scenes from a compact neural representation in real time. Additionally, DONeRF is significantly faster to train. We focus on static synthetic scenes and consider dynamic scenes and animations orthogonal to our work. Still, DONeRF can directly be used as a compact backdrop for distant parts of a game scene, or in VR and AR, where an environment map does not offer the required parallax for a stereo stimulus. Our source code and datasets are available at https://depthoraclenerf.github.io/. <FIGURE>
i
77741a5c-15a2-4cdb-9e68-dae487f5e755
To evaluate DONeRF, we focus on three competing requirements of (neural) scene representations: quality of the generated images, efficiency of the image generation, and compactness of the representation. Clearly tradeoffs between them are possible, but an ideal representation should generate high-quality outputs in real time, while being compact and extensible, e.g., for streaming dynamic scenes. We compare against NeRF, NSVF, LLFF, and NeX to evaluate methods that choose different tradeoffs among our three goals.
m
5730aea3-841f-4afe-a81c-1c3caf63e7ea
These methods capture a mix between being strictly MLP-based (NeRF), using explicit structures together with an MLP (NSVF, NeX), and using a mostly image-based representation (LLFF). For NeX, we include an additional variant that does not bake radiance coefficients and neural basis functions into an MPI, but recomputes those via MLP inference at test time (NeX-MLP). For NSVF, we run a grid search and evaluate three representative variants (NSVF-small, NSVF-medium and NSVF-large) that capture the lowest memory footprint, the best quality-speed tradeoff, and the best quality, respectively. Furthermore, we include a variant of NeRF that uses our log+warp sampling to show the effect of the sampling strategy in isolation (NeRF (log+warp)). See the Appendix for details about the methods.
m
c5ac1bc7-dc82-4969-8c66-c9941481ae3e
We analyze the ability to extract novel high-quality views for generated content where reference depth maps are available during training. As an additional proof-of-concept, we extract estimated depth maps from a densely sampled NeRF for each scene, and use these depth maps to train our depth oracles, showcasing a solution for scenes without available ground truth depth. We evaluate quality by computing PSNR and FLIP against ground truth renderings, efficiency as FLOP per pixel and compactness by total storage cost for the representation. For all methods, images are downsampled to a resolution of \(400\times 400\) to speed up training. <TABLE><FIGURE>
m
70e2d6a9-979b-4657-97bc-be4fa6aeb972
Video semantic segmentation is a compute-intensive vision task. It aims at classifying pixels in video frames into semantic classes. This task often has real-time or even faster than real-time requirements in data-center applications, in order to process hours-long videos in a much shorter time. The ever increasing video resolution in both spatial and temporal dimensions makes real-time processing even more challenging.
i
e2881c23-589e-4cf2-92ce-0fb50aeaef53
A simple approach to video segmentation is to adopt image-based processing; that is, to process individual video frames independently using an off-the-shelf image segmentation network. This approach was once considered prohibitively expensive, given that most image segmentation networks, like DeepLabv3+ [1]}, usually optimized the feature extraction for segmentation accuracy rather than throughput.
i
ca8e4978-08e6-4e32-8ee8-9a0b6df9ae42
To address the prolonged feature extraction, several fast video segmentation frameworks have been proposed. One common recipe is to propagate features of a few selected keyframes, in order to conserve computation for feature extraction on subsequent non-keyframes [1]}, [2]}. This is motivated by the high correlation between consecutive video frames. Because video content evolves over time, some works [3]}, [4]} additionally introduce lightweight feature extraction for non-keyframes, followed by fusion of spatio-temporal features, to cope with scene changes or dis-occlusion. Others propose adaptive keyframe selection [7]}, [4]} or bi-directional feature propagation [9]} to alleviate error propagation. Meanwhile, Xu et al. [7]} perform adaptive inference for cropped regions with scene changes.
i
b91047fa-eb59-4761-9cc1-7ac2c3238fa1
However, recent advances in fast image segmentation [1]}, [2]}, [3]} make video-based solutions less attractive. For example, BiSeNet [1]} and SwiftNet [2]} can now process high-definition (\(2048 \times 1024\) ) videos at 40 to 60 frames per second on modern graphics processing units (GPUs) while achieving reasonably good segmentation accuracy. A question that arises is whether video-based approaches can benefit from these advanced image segmentation networks. The answer relies crucially on how the following challenges are addressed. First, the increasing spatial resolution of videos may render optical flow estimation too expensive. Second, the excessive number of channels in feature space may make the propagation of features time-consuming. Third, the feature extraction for non-keyframes has to be even more lightweight. Lastly, errors resulting from imperfect flow estimation and feature extraction may be propagated along the temporal dimension.
i
ac4e28d5-d487-42ce-83fa-0da090dff091
To tackle these issues, we propose a simple yet efficient propagation framework, termed GSVNet, for fast semantic segmentation on video. Our contributions include: (1) to conserve computation for temporal propagation, we perform lightweight flow estimation in 1/8-downscaled image space for warping in segmentation output space; and (2) to mitigate propagation error and enable lightweight feature extraction on non-keyframes, we introduce a guided spatially-varying convolution for fusing segmentations derived from the previous and current frames.
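To make the propagation step concrete, the following is a minimal sketch, assuming PyTorch, of warping a 1/8-resolution segmentation map with a flow field of the same resolution via grid_sample. The tensor shapes, the flow channel order (x then y), and the function name are illustrative assumptions, not the GSVNet implementation.

import torch
import torch.nn.functional as F

def warp_segmentation(seg_prev, flow):
    """Warp the previous frame's segmentation scores (B, C, H/8, W/8) with a
    flow field (B, 2, H/8, W/8) whose two channels are x- and y-displacements
    (an assumed convention) mapping current-frame pixels to previous-frame pixels."""
    _, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2, H, W), x first
    coords = base.unsqueeze(0) + flow                              # absolute sampling locations
    # Normalize to [-1, 1] as required by grid_sample.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                           # (B, H, W, 2)
    return F.grid_sample(seg_prev, grid, mode="bilinear", align_corners=True)

# Propagate 19 Cityscapes class scores at 1/8 of a 1024x2048 frame.
seg_prev = torch.randn(1, 19, 128, 256)
flow = torch.randn(1, 2, 128, 256)
seg_warped = warp_segmentation(seg_prev, flow)

A guided spatially-varying convolution would then fuse seg_warped with the segmentation derived from the current frame.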
i
b5ac6c09-200b-48cb-83d2-f333d5ca4a7d
Experimental results on Cityscapes [1]} show that when working with BiSeNet [2]} and SwiftNet [3]}, our scheme can process up to 142 high-definition (\(2048 \times 1024\) ) video frames per second on a GTX 1080Ti with \(71.8\%\) accuracy in terms of Mean Intersection over Union (mIoU). It achieves the state-of-the-art accuracy-throughput trade-off on video segmentation and can readily work with any off-the-shelf fast image segmentation network.
i
944887af-4dd1-4a2d-8e9f-f72a6224e9bb
Image Semantic Segmentation: Image semantic segmentation models [1]}, [2]} have achieved great success in segmentation accuracy by incorporating sophisticated feature extractors and task decoders [1]}, [2]}. To achieve better accuracy-throughput trade-offs, recent research [5]}, [6]}, [7]} has focused on making feature extractors lightweight and less sensitive to changes in input resolution. To this end, Yu et al. [6]} introduce a cost-effective feature extractor that includes a spatial path for preserving spatial details and a context path for capturing contextual information over a wide receptive field. In another attempt, Orsic et al. [9]} take advantage of transfer learning by using an encoder pre-trained on ImageNet and adopting a simple upsampling decoder with lateral connections.
w
260122e0-b906-443d-b083-82b9cc1e369e
Video Semantic Segmentation: Efficient video segmentation is another active research area. Unlike images, consecutive video frames usually have a high correlation or similarity. To conserve computation for feature extraction, several works leverage the temporal correlation between video frames to reuse features of selected keyframes on non-keyframes. Shelhamer et al. [1]} employ features at different stages of the network from previous frames. For feature propagation, Zhu et al. [2]} use optical flow estimated by a flow network, while Li et al. [3]} adopt spatially variant convolution. Some works [4]}, [3]} additionally introduce lightweight feature extraction for non-keyframes together with spatio-temporal feature fusion, to reduce error propagation arising from scene changes or warping errors. However, the emergence of lightweight image segmentation calls for a careful rethink of these strategies. Specifically, the feature extraction on non-keyframes must be even more lightweight and the extracted features must be used to their fullest potential to mitigate error propagation. <FIGURE><FIGURE>
w
1dad8ceb-8b95-480f-bc66-afc1f650f492
This work addresses the problem of efficient semantic segmentation on video. Given an input video consisting of a set \(\lbrace I_t\rbrace _{t=0}^{N-1}\) of video frames, each being of dimension \(3 \times H \times W\) , our task is to predict for every video frame \(I_{t}\) its downscaled, semantic segmentation \(\hat{S}_t \in \mathbb {R}^{C \times H/8 \times W/8}\) with \(C\) classes, aiming to strike a good balance between accuracy and throughput. Following common practice, the final segmentation is obtained by upsampling \(\hat{S}_t\) to the full resolution, where the segmentation accuracy is measured.
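As a small illustration of this formulation, the following sketch (assuming PyTorch) takes a placeholder \(\hat{S}_t\) at 1/8 resolution and upsamples it to the full resolution where accuracy is measured; the class count and image size follow typical Cityscapes settings and are assumptions here.

import torch
import torch.nn.functional as F

# Placeholder \hat{S}_t: class scores at 1/8 resolution (C = 19 classes assumed).
s_hat = torch.randn(1, 19, 128, 256)
H, W = 1024, 2048                      # assumed full resolution

# Upsample to full resolution and take the per-pixel argmax as the final label map.
s_full = F.interpolate(s_hat, size=(H, W), mode="bilinear", align_corners=False)
labels = s_full.argmax(dim=1)          # (1, H, W)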
m
d2f30642-a5af-4f36-aa36-02d18b779f75
Accuracy-throughput Trade-off: Table REF compares the accuracy-throughput trade-offs of the competing methods. As shown, at around 130 FPS, Ours-BN-R18 (\(l=4\) ) outperforms BiSeNet-R18 with input size 0.5 by 2.3% mIoU. Likewise, Ours-SN-R18 (\(l=4\) ) surpasses SwiftNet-R18 with input size 0.5 by 3.1% mIoU. Compared with video-based methods such as [1]}, [2]}, [3]}, our scheme achieves much higher FPS and mIoU. In particular, Ours-SN-R18 (\(l=2\) ) considerably outpaces [4]}, [5]}, which also target high throughput, in FPS at the cost of a modest drop in mIoU. Results on Camvid [6]} (Table REF ) show that our method runs faster than the image-based schemes [7]} while achieving higher or comparable mIoU. It is to be noted that the other video-based schemes can hardly compete with ours in FPS, although [5]} has higher mIoU due to the use of better backbones.
r
c3579d0f-9a1f-421b-a8b4-31d87615aa93
Network Parameters and FLOPS: Table REF compares the number of network parameters and FLOPS. As can be seen, our scheme introduces 1.6M additional network parameters for segmenting non-keyframes, representing less than \(3.3\%\) additional overhead relative to image-based segmentation with SwiftNet-R18 or BiSeNet-R18. It achieves the lowest FLOPS, with an FPS of 142 and 71.8\(\%\) average mIoU. In particular, the FLOPS of our scheme varies with the keyframe interval. On average, a non-keyframe requires 2.8G FLOPS, as compared to 58.5G FLOPS for processing a keyframe with SwiftNet-R18 (0.75). As such, the higher the keyframe interval, the lower the FLOPS. <TABLE><TABLE><TABLE><FIGURE><TABLE><FIGURE>
r
98072b48-695c-4060-b7a1-4215023f893f
This paper presents a simple propagation framework for efficient video segmentation. We show that it is more cost-effective to perform warping in segmentation output space than in feature space. This also allows the propagation error to be minimized at each time step by our guided spatially-varying convolution. Our scheme has the striking feature of being able to work with any off-the-shelf fast image segmentation network to further video segmentation.
d
7ca615db-7dda-4a4d-884c-343094b9562e
We have always wondered why people have the opinions they have. Why did mediaeval Europeans think their souls would pass through purgatory after they died [1]}? Why did the Germans before the war think that Jews posed a mortal threat to their nation? [2]} Why do some people today think that passenger aircraft spray us with poisonous chemicals [3]}? Where do these spontaneous clusters of consensus originate? From a distance of a few generations, a few hundred kilometers, or a few social classes they look almost incredible. Nevertheless, they are real and sometimes they matter more than governments, parliaments, and constitutions [4]}.
i
e1d1e14b-a75e-4ab4-9aec-35141a992699
Let me tell you how I come to an opinion. I think today what I thought yesterday, just a little bit more. That is if I haven't talked about the matter with anyone and if I haven't received any new information. Without any interaction with the world, my opinions tend to strengthen themselves. (My wife says only men work this way, and that is the problem.)
m
02bedc20-8c55-4805-9384-dc6b0ca9fd08
Let us assume that my opinion on a certain matter (e.g. Brexit) lies somewhere between “absolutely nay” (\(x=-1\) ) and “aye by all means” (\(x=+1\) ). The self-amplification of the opinion may be modelled by the relation \(x_{t+1} = \sqrt[3]{x_t},\)
m
e605d85f-4a42-4bc7-adab-842867ed8674
where \(x_t\) denotes my opinion yesterday and \(x_{t+1}\) my opinion today. The choice of the cube root is not essential; any odd extension of an increasing concave function \(f:[0,1] \rightarrow [0,1]\) would work. Figure REF shows the behaviour of a single opinion repeatedly amplified through function (REF ). In a hypothetical world where no one interacts with anyone else and no one receives any new information, all opinions slowly drift to an extreme. Those who started with a slightly positive opinion \((x_0>0),\) will end up as extremists \((x_\infty =1).\) Those who started with a slightly negative opinion \((x_0<0)\) will also end up extreme \((x_\infty = -1).\) Only the phlegmatic \((x_0=0)\) will stay phlegmatic forever \((x_\infty =0).\) <FIGURE>
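A few lines of Python (an illustrative sketch, not taken from the paper's code) reproduce this drift to the extremes:

import numpy as np

def drift(x0, steps=100):
    """Iterate x_{t+1} = cbrt(x_t); any odd, increasing, concave-on-[0,1] map behaves similarly."""
    x = x0
    for _ in range(steps):
        x = np.cbrt(x)
    return x

for x0 in (-0.3, 0.0, 0.05, 0.9):
    print(x0, "->", round(drift(x0), 4))
# Opinions with x0 > 0 drift to +1, those with x0 < 0 drift to -1, and x0 = 0 stays at 0.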
m
df1b4f52-015e-4731-9626-3ec2cbad9b97
In the real world, however, we are inter-connected by a web of friendship, hostility, media, and social networks where we exchange information and opinions. Whenever I read something about Brexit in the newspaper, see something on TV, or talk to friends or foes, I correct my opinion according to what I learn. This mechanism may be captured by the relation \(x_{t+1} = \frac{ y_t^1 + y_t^2 + \dots + y_t^N}{N}\)
m
25b9dda1-6264-4177-ac74-a2b12eb0c4f5
where \(x_{t+1}\) denotes my opinion today and \(y_t^i\) denotes the opinion of my \(i\) -th source yesterday. This relation describes a completely affectable person whose opinion today is a simple arithmetic mean of the opinions of his friends yesterday – the dream of a marketing specialist come true.
m
cc4ad4cf-d873-4a8c-8c6a-facafac4758a
A more realistic mechanism for the spreading of opinions in a society may be a combination of the two approaches described above. Most classical approaches do not include opinion drift (REF ) (see  [1]} for a recent review); however, we will show that it is a key factor which makes the model non-linear and thus interesting.
m
105d78b5-beb7-44d3-babf-927408529c65
Let us assume a group of \(N\) individuals (vertices/nodes) inter-connected by certain links (edges). The links are described by the adjacency matrix \(A,\) the elements of which are either zeroes or ones, \(a_{ij} ={\left\lbrace \begin{array}{ll}1, & \text{if nodes}\ i\ \text{and}\ j\ \text{are connected by an edge,}\\0, & \text{if nodes}\ i\ \text{and}\ j\ \text{are not connected}.\end{array}\right.}\)
m
e4d81342-f437-429d-b4a8-2ca541a8b4fa
For the sake of simplicity, we assume that the graph is unweighted (all the links are positive with a unit weight) and undirected \((a_{ij} = a_{ji}).\) It is straightforward to include directional edges (The Economist influences me and not vice versa), more or less important edges (my wife has more influence on me than my cat), and edges with negative weights (it is a matter of honour not to align my opinion with Russia Today).
m
96622169-60f8-4cf7-b824-c4548aa41628
Let us focus on a certain matter that may be subject to an opinion. The value \(x_t^i \in [-1,1]\) denotes the opinion of individual \(i\) at time \(t.\) The combination of the mechanisms (REF ) and (REF ) reads \(x_{t+1}^i = \alpha \, \sqrt[3]{x_t^i} + \frac{1-\alpha }{k_i} \, \sum _{j=1}^N a_{ij} x_t^j,\) where \(k_i\) denotes the degree of node \(i.\)
m
08a1f655-07a9-4d37-b18b-cad021b8638e
The term \((1-\alpha ) \in [0,1]\) captures the “affectability” of the system. For \(\alpha \) close to one, mechanism (REF ) prevails, while for \(\alpha \) close to zero mechanism (REF ) prevails. For simplicity, let us assume that all individuals have the same affectability.
m
e26ef14f-ab71-403c-9c8a-1c51bf1d5fe0
The model is so simple that it contains only two features to investigate: (1) the affectability \(\alpha \) , and (2) the type of connectivity of the network given by the adjacency matrix \(A\) . The behaviour of the model naturally also depends on the initial condition, which will be sampled randomly from a uniform distribution on \([-1,1]\) .
r
4a11ad75-419a-4486-8944-79340fa7eb39
It has been shown [1]} that the human interaction network can be captured by what are called scale-free models. Scale-free networks are characterized by a vast majority of vertices of low degree and a few vertices with a very large degree (hubs). More precisely, the frequency of vertices of degree \(k\) decreases as \(k^{-\gamma },\) where \(\gamma \) usually lies between 2 and 3. We chose such a scale-free model (Barabási-Albert model [1]}) for the simulation of opinion spreading for various values of \(\alpha .\) We generated a scale-free network by means of the NetworkX package [3]}. The network generation process started with a small network of \(m < N\) nodes. Nodes were added until the desired network size was reached. The probability of the connection of a new node \(A\) to an already existing node \(B\) was proportional to the degree of \(B.\) See Figure REF for an example of a simulated scale-free network. <FIGURE>
r
54279143-cce8-4408-a8d2-42d13c36c45b
Let us now simulate the spreading of opinions by means of equation (REF ) on a scale-free network with the parameters \(N=1000\) and \(m=2\) (a network five times larger than the one in Figure REF ). The source code of the simulation is available at https://github.com/DostalJ/OpinionModel. The value of \(\alpha \) is increased from 0 to 1 with step \(0.02.\) For each value of \(\alpha ,\) 50 simulations of the model (REF ) are performed, each time with a different initial condition randomly sampled from a uniform distribution on \([-1,1].\) The simulation is terminated after an equilibrium has been reached. In the final equilibrium state, the mean opinion is computed as \( P = \frac{1}{N} \sum _{i=1}^N x_\infty ^i \)
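The following sketch, assuming NumPy and NetworkX, condenses the simulation just described: a Barabási-Albert graph, random initial opinions, and repeated application of the update rule. The fixed number of iterations stands in for the equilibrium check, and the full code is in the linked repository.

import numpy as np
import networkx as nx

def simulate(alpha, N=1000, m=2, steps=2000, seed=0):
    """Run x_{t+1}^i = alpha * cbrt(x_t^i) + (1 - alpha)/k_i * sum_j a_ij x_t^j
    on a Barabasi-Albert scale-free graph and return the mean opinion P and its spread."""
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(N, m, seed=seed)
    A = nx.to_numpy_array(G)                    # adjacency matrix a_ij
    k = A.sum(axis=1)                           # node degrees k_i (always >= m)
    x = rng.uniform(-1.0, 1.0, size=N)          # random initial opinions
    for _ in range(steps):                      # iterate until (approximately) equilibrated
        x = alpha * np.cbrt(x) + (1.0 - alpha) * (A @ x) / k
    return x.mean(), x.std()

for alpha in (0.1, 0.65, 0.8):
    P, V = simulate(alpha)
    print(f"alpha={alpha:.2f}  P={P:+.2f}  std={V:.2f}")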
r
6c8e2b89-704f-4e56-81de-4bfd3fa4aeab
For each simulation, the mean equilibrium opinion \(P\) is depicted by a black cross in Figure REF . Its standard deviation is plotted as an orange dot. Altogether, \(50 \times 50\) simulations are performed, so Figure REF contains 2500 crosses and the same number of dots. For example, the simulations for \(\alpha = 0.1\) yield all the crosses either at \(P=1\) or \(P=-1,\) with the standard deviation always at zero. This means that all the simulations ended with everyone taking the same extreme opinion, either \(x=-1\) or \(x=1,\) with no variation; this is an absolute consensus on an extreme. However, looking e.g. at \(\alpha = 0.8,\) a different picture appears. The crosses showing the mean equilibrium opinion \(P\) form a line segment between \(P \sim -0.1\) and \(P \sim 0.1,\) while the standard deviation stays around \(V=0.6\) for all the simulation runs. This means that the system always stabilized in a plurality of non-extreme opinions. There may be individuals taking an extreme opinion \(x=+1\) or \(x=-1;\) however, most of the nodes are moderate. The fractions of positive and negative opinions in the equilibrium are similar, thus \(P \sim 0\) . <FIGURE>
r
82b0957a-6c90-412b-92c1-2359dc7ce7bc
The most remarkable region in Figure REF lies around \(\alpha = 0.65,\) where there is a transition between the absolute consensus on an extreme and plurality of opinions. A small change in \(\alpha \) (e.g. from \(0.67\) to \(0.62\) ) may lead to the sudden emergence of an absolute consensus on an extreme without any change in the structure of the network.
r
7530dfd0-e047-4715-bc79-86a25cdc31c7
If there is anything realistic about this toy model, we should wonder which processes may alter the value of \(\alpha .\) Education is an obvious candidate. Socrates' famous quote “Scio me nihil scire” (which probably comes neither from Socrates nor in this form) [1]} reminds us that the more we know, the more nuanced the opinions we form are and the less convinced about them we are. Let us assume that education makes one more attentive to the opinions of others and thus increases the affectability of the system (and consequently reduces \(\alpha \) ). This assumption is very natural – much research shows a correlation between lower education and extreme opinions [2]}. Suddenly, we face a fascinating emergent behaviour of the system: as the level of education in a society grows, affectability increases and the society may quite suddenly switch from a healthy plurality of moderate opinions into the state of an absolute consensus on an extreme. This transition is very counter-intuitive. On the level of an individual, higher affectability makes one more attentive to the opinions of others. However, on the level of the society, the effect of higher affectability is quite the opposite in a rather dramatic way.
d
0652ee81-6840-4dc7-bbbe-1129a90d7578
Naturally, one may disagree with the assumptions of the toy model, and doubt the interpretation of its parameters. However, there is at least one reason to give it a second thought: many historians keep wondering why people in pre-war Germany – probably the most educated society in the world at that time [1]} – succumbed to mass psychosis and decided to exterminate many of their neighbours. As this toy model shows, emergent effects in complex networks may be very counter-intuitive.
d
d9892311-7f9d-4aac-b97a-5f5027a1ac45
Recent developments in self-supervised learning have shown that it is possible to learn high-level representations of object categories from unlabeled images [1]}, [2]}, [3]}, [4]}, [5]}, phonetic information from speech [6]}, [7]} and language understanding from raw text [8]}, [9]}. The most studied benchmark in self-supervised learning is ImageNet [10]}, where representations learned from unlabeled images can surpass supervised representations, both in terms of their data-efficiency and transfer-learning performance [11]}, [12]}. <FIGURE>
i
c654a491-de61-4885-9951-36f1b91bfda9
One of the caveats with self-supervised learning on ImageNet is that it is not completely “self-supervised”. The training set of ImageNet, on which the representations are learned, is heavily curated and required extensive human effort to create [1]}. In particular, ImageNet contains many fine-grained classes (such as subtly different dog breeds), each one containing roughly the same number of images. While this consistency may facilitate the learning of high-level visual representations, limiting self-supervised learning to such curated datasets risks biasing its development towards methods which require this consistency, limiting their applicability to more diverse downstream tasks and to larger datasets for pre-training.
i
1d37cf10-34db-49ee-ab6c-8f16b764089e
In this paper we assess how well recent self-supervised learning methods perform on downstream tasks (including ImageNet) when they are pre-trained on significantly less curated datasets, such as YFCC100M [1]}. We observe a notable drop in performance of over 9% Top-1 accuracy (from 74.3% to 65.3%) for a ResNet50 model trained with the current state-of-the-art in self-supervised learning.
i
c6f5829c-3b23-481a-b0c1-069fdb5c0d7e
We hypothesize that this curation gap is due to the heavy-tailed nature of images collected in the wild, which present much more diverse content, breaking the global consistency exploited in previous datasets. We test this hypothesis with a new method, Divide and Contrast (DnC), which attempts to recover local consistency in subsets of the larger, uncurated dataset, such that self-supervised learning methods can learn high-level features that are specific to each subset. We find that such semantically coherent subsets can be straightforwardly obtained by clustering the representations of standard self-supervised models.
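A minimal sketch of this partitioning step, assuming scikit-learn and placeholder embeddings; the cluster count of 5 follows the schedules reported later, while the feature dimensions and k-means settings are illustrative assumptions rather than the exact DnC procedure.

import numpy as np
from sklearn.cluster import KMeans

def partition_dataset(features, n_clusters=5, seed=0):
    """Cluster self-supervised representations and return, for each image,
    the index of the subset (expert shard) it is assigned to.
    features: (num_images, dim) array of embeddings from a base SSL model."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(features)

# Usage with placeholder embeddings: each cluster becomes the training set of one expert.
features = np.random.randn(10000, 2048).astype(np.float32)   # e.g. ResNet-50 features
subset_id = partition_dataset(features, n_clusters=5)
print("images per expert:", np.bincount(subset_id, minlength=5))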
i
85ce26ee-8767-4a63-bc11-ff62f916a88a
Divide and Contrast (DnC) proceeds by training individual “expert” models on each subset and distilling them into a single model. As a result, DnC can be used in combination with any self-supervised learning technique, and requires the same amount of computation, as each expert is trained for significantly less time. Finally, this computation is trivially parallelized, allowing it to be scaled to massive datasets.
i
576b61b0-5566-469b-bba1-96003bb96909
The remainder of this paper is structured as follows. We first review related work in self-supervised learning. We then present a new, stronger baseline (MoCLR) which improves over current contrastive methods, matching the performance of the current state of the art (BYOL [1]}). Next we present the main method, Divide and Contrast, and show how this model can be used together with any SSL method. In the experiments we evaluate the different hypotheses that support DnC, and compare its ability to learn from uncurated datasets with that of existing methods.
i
0d454882-4e5c-41f7-962c-161410f95924
Recent self-supervised representation learning generally includes three types of methods: generative models that directly model the data distribution, pretext tasks that are manually designed according to the data, and contrastive learning that contrasts positive pairs with negative pairs.
w
6b73d055-59d4-48e6-9e8c-df96dd1ad715
Generative models. While the primary goal of generative models such as GAN [1]}, [2]} or VAE [3]} is to model the data distribution (e.g., sample new data or estimate likelihood), the encoder network can also extract good representations [4]}. Recent state of the art generative models for representation learning include BiGAN [5]} and BigBiGAN [6]}, which learn a bidirectional mapping between the latent codes and the images, and iGPT [7]} which trains an autoregressive model on raw pixels.
w
152790e1-ac20-4d46-a742-45b8ec2cc9da
Pretext tasks. Good representations may also be learned by solving various pretext tasks. Examples include denoising [1]}, relative patch prediction [2]}, image inpainting [3]}, noise prediction [4]}, colorization [5]}, [6]}, [7]}, Jigsaw [8]}, exemplar modeling [9]}, motion segmentation [10]}, image transformation prediction [11]}, [12]}, tracking [13]}, or even the combination of multiple tasks [14]}. Another line of methods generates pseudo labels by clustering features [15]}, [16]}, [17]}, [18]}, [19]}. Most recently, SeLa [20]} jointly clusters images and balances the clusters. SwAV [21]} learns representations by having different views of the same image assigned to the same cluster. Another work [22]} directly optimizes the transferability of representations by integrating clustering with meta-learning.
w
915d2c2d-137a-4b7d-85fd-a8f171958005
Contrastive learning. Contrastive learning is a widely-used generic method. The loss function for contrastive learning has evolved from early margin-based binary classification [1]}, to triplet loss [2]}, and to recent k-pair loss [3]}, [4]}. The core idea lying at the heart of the recent series of self-supervised contrastive learning methods [5]}, [4]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]} is to maximize the agreement between two “views” of the same image while repulsing “views” from different images. Such views can be created by color decomposition [8]}, patch cropping [4]}, [7]}, [10]}, data augmentation [13]}, [16]}, [25]}, or image segmentation [26]}, [27]}, [28]}. Indeed, contrastive learning is very general such that it can be easily adapted to different data types. Examples include different frames of video [4]}, [30]}, [31]}, [32]}, [33]}, [34]}, point clouds [35]}, multiple sensory data [36]}, [37]}, [38]}, text and its context [39]}, [40]}, [41]}, [42]}, or video and language [43]}, [44]}, [45]}. A set of other work [46]}, [15]}, [48]}, [49]}, [50]}, [51]}, [52]} focuses on providing empirical and theoretical understanding of contrastive learning. Recently a non-contrastive method BYOL [53]} applies a momentum-encoder to one view and predicts its output from the other, inspired by bootstrapping RL [54]}. Finally, contrastive learning has also been applied to supervised image classification [55]}, image translation [56]}, knowledge distillation [57]}, [58]}, and adversarial learning [59]}.
w
3ee5dad7-f2ca-484e-8fd2-ca33ee54a171
This paper is also related to knowledge distillation [1]}. In [1]}, several expert models were also trained in parallel on a large scale dataset, and then distilled into a single model. While labels are assumed available in [1]} to partition the dataset and distill into a single model, we are dealing with self-supervised learning without supervision. Our distillation procedure is also inspired by FitNet [4]}.
w
935948b4-d9c5-48f2-ad59-9715f8d050c8
Lastly, while self-supervised representation learning on uncurated datasets is largely unexplored, there are a few prior attempts [1]}, [2]}. In [1]}, clustering is applied to generate training targets, and in order to capture the long-tailed distribution of images in the uncurated YFCC100m [4]}, a hierarchical formulation is proposed. The work of [2]} benchmarked pretext-based self-supervised methods in a large scale setting, e.g., Jigsaw, colorization and rotation prediction, and found that these pretext tasks are not `hard' enough to take full advantage of large scale data. Concurrent work SEER [6]} directly scales up SwAV with larger models and datasets.
w
bbbbb750-7de9-4299-8e6d-768f91d71b1f
Datasets. We consider two large-scale uncurated datasets. The first is a private dataset of roughly 300 million images (JFT-300M [1]}). For the second dataset we use YFCC100M [2]}, a public dataset of 95M Flickr images available under the Creative Commons license. Figures REF and REF show a visual comparison between images from ImageNet and YFCC100M. ImageNet images often contain the object or animal of interest in the center of the image. ImageNet also does not have a long-tailed distribution (e.g., power law) over object-classes but only considers a specific set of 1000 different classes, which are (roughly) equally represented in the dataset. As a result, specific objects or animals (e.g., common tench, Bedlington terrier, ...) are over-represented compared to more typically occurring scenes such as human faces and landscapes (which are better represented in YFCC100M). <FIGURE>
m
2d541675-1553-4807-908c-59590cc0c1b1
Settings. ResNet-50 [1]} is used in all experiments, unless noted otherwise. For ease of comparison, we report the computational footprint of all experiments in ImageNet-epoch equivalents (1 “epoch” \(=\) \(1281167/\text{batch\_size}\) iterations). More implementation and optimization details are included in the Appendix. <TABLE>
m
4e93e3c4-bfc1-419d-8c9e-8bb7df5307f4
DnC Schedules. Table REF shows three training schedules with different numbers of epochs. For example, in the schedule of 3,000 epochs, we first train the base model for 1,000 epochs, after which we cluster the samples into 5 groups. The 5 experts are trained in parallel on these subsets. We use 1,500 epochs in total, spread out over the experts according to the number of images in each cluster (300 on average per expert). The distillation model is then trained for 500 epochs. See Section  for analysis of run time.
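A small sketch of one way the 1,500 expert epochs could be split in proportion to cluster sizes; the cluster sizes below are made up for illustration and the exact allocation used in the paper may differ.

def expert_epochs(cluster_sizes, total_epochs=1500):
    """Split the expert training budget (in ImageNet-epoch equivalents)
    proportionally to cluster size, so larger subsets get more epochs."""
    total = sum(cluster_sizes)
    return [round(total_epochs * s / total) for s in cluster_sizes]

# Five clusters of a ~95M-image dataset (illustrative sizes):
print(expert_epochs([30e6, 25e6, 20e6, 12e6, 8e6]))  # [474, 395, 316, 189, 126]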
m
8ad5ade4-c7d5-41b8-bc84-9f2aa95c066c
In this paper we have studied how state-of-the-art self-supervised learning methods perform when they are pretrained on uncurated data – datasets that did not require human annotations or labels to create – as a step towards fully self-supervised learning. We have observed that current methods suffer from a large drop in performance, of up to 9%, when pre-trained on these uncurated datasets. To alleviate this issue, we have proposed Divide and Contrast (DnC), which requires a few simple changes to existing self-supervised learning methods, and which largely outperforms state-of-the-art SSL methods on uncurated datasets, as well as achieving similar or better performance on ImageNet. We hope this work draws more attention to uncurated datasets as a benchmark for self-supervised learning.
d
7ec52dca-299e-4dc3-bd91-f02109fb4a35
Acknowledgements. We are grateful to Florent Altché, Bilal Piot, Jean-Bastien Grill, Elena Buchatskaya, and Florian Strub for significant help with reproducing BYOL results; Jeffrey De Fauw for providing the initial code base for SimCLR; Carl Doersch, Lucas Beyer, Phillip Isola, and Oriol Vinyals for valuable feedback on the manuscript.
d
e118260f-01cd-4355-b801-00f9547c3a8d
WARNING: This paper contains text excerpts and words that are offensive in nature. Offensive and impolite language is pervasive in social media posts, motivating a number of studies on automatically detecting the various types of offensive content (e.g. aggression [1]}, [2]}, cyber-bullying [3]}, hate speech [4]}, etc.). Most previous work has focused on classifying full instances (e.g. posts, comments, documents) as offensive vs. not offensive, while the identification of the particular spans that make a text offensive has been mostly neglected.
i
6c890f37-8b49-443f-920d-c584e0a7004e
Identifying offensive spans in texts is the goal of the ongoing SemEval-2021 Task 5: Toxic Spans Detection (the competition web page is available at https://sites.google.com/view/toxicspans). The organizers of this task argue that highlighting toxic spans in texts helps assist human moderators (e.g., news portal moderators) and that this can be a first step in semi-automated content moderation. Finally, as we demonstrate in this paper, addressing offensive spans in texts makes the output of offensive language detection systems more interpretable, thus allowing a more detailed linguistic analysis of predictions and improving the quality of such systems.
i
8adff244-d303-43bf-a2a9-b3d168ab5ae5
With these important points in mind, we developed MUDES: Multilingual Detection of Offensive Spans. MUDES is a multilingual framework for offensive language detection focusing on text spans. The main contributions of this paper are the following:
i
ab7fcad9-2ec7-4df6-83c3-e8a749154024
We introduce MUDES, a new Python-based framework to identify offensive spans with state-of-the-art performance. We release four pre-trained offensive language identification models: en-base and en-large, which are capable of identifying offensive spans in English text, and Multilingual-base and Multilingual-large, which are able to recognise offensive spans in languages other than English. We release a Python Application Programming Interface (API) for developers who are interested in training more models and performing inference at the code level. For general users and non-programmers, we release a user-friendly web-based User Interface (UI), which provides the functionality to input a text in multiple languages and to identify the offensive spans in that text.
i
a05eb976-b106-4d95-9daf-6371c8873cbc
Early approaches to offensive language identification relied first on traditional machine learning classifiers [1]} and later on neural networks combined with word embeddings [2]}. Transformer-based models like BERT [3]} and ELMo [4]} have recently been applied to offensive language detection, achieving very competitive scores in recent SemEval competitions such as HatEval [5]} and OffensEval [6]}.
w
130ccdbb-031d-4be0-ba9a-7bdf68ff5e51
In terms of languages, the majority of studies on this topic deal with English [1]}, [2]}, [3]}, [4]} due to the wide availability of language resources such as corpora and pre-trained models. In recent years, several studies have been published on identifying offensive content in other languages such as Arabic [5]}, Dutch [6]}, French [7]}, Greek [8]}, Italian [9]}, Portuguese [10]}, and Turkish [11]}. Most of these studies have created new datasets and resources for these languages, opening avenues for multilingual models such as those presented by Ranasinghe et al. (2020). However, all studies presented in this section focused on classifying full texts, as discussed in the Introduction. The objective of MUDES is to fill this gap and perform span-level offensive language identification.
w
2da1f420-99b8-426e-94fb-b2eed21c683b
The main motivation behind this methodology is the recent success that transformer models have had in various NLP tasks [1]}, including offensive language identification [2]}, [3]}, [4]}. Most of these transformer-based approaches take the final hidden state of the first token ([CLS]) from the transformer as the representation of the whole sequence, and a simple softmax classifier is added on top of the transformer model to predict the probability of a class label [5]}. However, as previously mentioned, these models classify whole comments or documents and do not identify the spans that make a text offensive. Since the objective of this task is to identify offensive spans rather than classifying the whole comment, we followed a different architecture.
m
50d533d5-86ad-48de-b2b1-3ca23b387107
As shown in Figure REF , the complete architecture contains two main parts: Language Modeling (LM) and Token Classification (TC). In the LM part, we use a pre-trained transformer model and retrain it on the TSDTrain dataset using Masked Language Modeling (MLM), saving the model weights. In the second part of the architecture, we use the saved model from the LM part and perform token classification. We add a token-level classifier on top of the transformer model as shown in Figure REF . The token-level classifier is a linear layer that takes the last hidden state of the sequence as input and produces a label for each token as output. In this case each token can take one of two labels: offensive or not offensive. We list the training configurations in the Appendix.
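A hedged sketch of the token-classification part using the Hugging Face Transformers API; the checkpoint name, label mapping, and example text are placeholders rather than the released MUDES models, and the MLM retraining stage is only indicated in a comment.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Stage 2 (token classification) sketch; stage 1 would first retrain the same checkpoint
# with masked language modeling on TSDTrain before loading it here.
model_name = "bert-base-cased"                      # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)
# label 0 = not offensive, label 1 = offensive (illustrative mapping)

text = "an example sentence to tag"
enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True)
offsets = enc.pop("offset_mapping")                 # maps each token back to character spans
with torch.no_grad():
    logits = model(**enc).logits                    # (1, seq_len, 2)
pred = logits.argmax(dim=-1)[0]

# Character spans predicted offensive (the head is untrained here, so output is arbitrary).
spans = [tuple(o.tolist()) for o, p in zip(offsets[0], pred) if p == 1]
print(spans)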
m
d397017e-06a5-4e68-8d5e-735119683c27
We experimented with several popular transformer models such as BERT [1]}, XLNet [2]}, ALBERT [3]}, RoBERTa [4]}, and SpanBERT [5]}. From the pre-trained transformer models we selected, we grouped the large models and the base models separately in order to release two English models: a large model, en-large, which is more accurate but less efficient in terms of space and time, and a base model, en-base, which is efficient but somewhat less accurate than en-large.
m
616db246-54ed-479e-8600-eb0a65022ae6
This paper introduced MUDES: Multilingual Detection of Offensive Spans. We evaluated MUDES on the recent SemEval-2021 Toxic Spans Detection dataset. Our results show that MUDES outperforms the strong baselines of the competition. Furthermore, we show that once MUDES is trained on English data using state-of-the-art cross-lingual transformer models, it is capable of detecting offensive spans in other languages. With MUDES, we release a Python library, four pre-trained models and a user interface. We show that MUDES is efficient enough for real-time scenarios even in a non-GPU environment. In future work, we would like to further evaluate MUDES on other datasets. Finally, we would like to implement a flexible multitask architecture capable of detecting offenses both at the span and at the post level.
d
38314db0-4db5-47fa-ac40-2c1b472c361d
Consider a prediction task where the goal is to take a set of features about the world as input and predict an outcome of interest. A typical machine learning approach to such a task is to attempt to select a model with low (generalization) loss for the problem at hand. If such a model is applied directly to the prediction task, it will minimize expected loss.
i
629b8815-0905-4730-b713-0ba55a97b5d2
However, this standard approach does not necessarily reflect the way that machine learning tools are actually implemented. Often, algorithmic predictions are presented to humans, who then make a final decision by additionally relying on their own expertise [1]}, [2]}, [3]}, [4]}. For example, consider a doctor looking at a medical record and trying to make a determination of whether disease is present. An algorithmic prediction based on the record may be useful, but it almost certainly will not be the sole factor influencing the doctor's diagnosis. For example, the doctor may have access to different data, such as conversations with the patient. The doctor may also have access to different knowledge, such as distilled expertise from years of practice. The doctor's decision will be a function of the algorithm's prediction, as well as their own inherent belief. Note that the doctor's decision-making may be imperfect, such as relying on their own judgement when the algorithm may have better performance. A successful outcome occurs when the combined system (the doctor using algorithmic output) has low loss, not when the algorithm alone has low loss. Figure REF illustrates three scenarios where a combined human-algorithm system could have differing levels of loss.
i
4fc03d88-780b-4edf-97cc-658d642d3adf
In particular, one especially valuable goal is complementarity (or complementary performance). Complementarity (originally defined in [1]}) is achieved whenever the combined human-algorithm system has strictly lower expected loss than either the human or the algorithm alone (Figure REF ). Complementarity is not necessary for a combined system to be deemed successful: for example, a combined system that does better than the human alone, but not necessarily better than the algorithm alone, would still reflect an improvement from a human-alone status-quo. However, complementarity creates the strongest incentive for adoption of a combined human-algorithm system, which is why it is the focus of our analysis.
i
8c748c43-b0e6-43c0-9bee-17414415ad17
Contributions: At a high level, we address the following problems: (i) How do we formally and tractably model human-algorithm collaborative systems? (ii) When can human-algorithm collaborative systems produce higher accuracy than either the human or algorithm alone? (iii) What are the fairness implications of such collaborative systems?
i
08f7f276-68f3-45eb-92c4-0ebe0c05cbda
The contributions of this work are three-fold. First, in Section , we introduce a simple theoretical framework for analyzing human-algorithm collaboration, and demonstrate the richness of this framework by showing that it can encapsulate models from previous works analyzing human decision-making. In Section , we provide a simple, concrete motivating example using this framework that illustrates the core results of this paper.
i
2f079a16-bdb2-4b5d-abbb-e597a17f7e72
Next, in Section , we use this approach to analyze complementarity. First, we present several impossibility results that characterize regimes in which human-algorithm collaboration can never achieve complementarity. We then give concrete conditions for when complementarity can be achieved. In particular, our results suggest that complementarity is easier to achieve when loss rates are highly variable: when the unaided human (or algorithm) has very low loss on some inputs and very high loss on others. Disparate levels of loss raise issues of fairness, which we turn to next.
i
d24afcf5-f93a-4cf5-84fd-cf4f1a473538
In Section we conclude our analysis by examining the fairness impacts of complementarity. The variability in loss rates implied by our results has implications for fairness, since types of inputs with very high error rates may correspond to protected attributes, such as race, gender, or ethnicity. To investigate this concern, we propose and analyze three types of fairness relating to human-algorithm systems, giving conditions for when they can and cannot be achieved. One of our main results shows that when complementarity is achieved, at least one group does worse in the combined system than under the human-only status quo. Additionally, we give a simple condition under which the combined human-algorithm system guarantees that the loss disparity between different protected groups will not increase.
i
5897da59-4da5-4b0f-bd27-2384054efe1e
Our model considers a prediction task: given some element \(x \in \mathcal {X}\) , make a prediction \(y \in \mathcal {Y}\) that minimizes some loss function \(\mathcal {L}\) , with loss bounded below by 0. This loss could reflect the error rates of any type of learning problem—for example, regression and classification tasks could both be represented by this loss function. We model the input space \(\mathcal {X}\) as being made up of \(N\) discrete regimes: all inputs within the same regime are identical from the perspective of algorithmic and human loss. Note that this is without loss of generality, given that \(N\) could be arbitrarily large. We denote the probability of seeing regime \(i\) by \(p_i\) , with \(\sum _{i =1}^{N}p_i = 1\) .
m
6191f9a3-ba6d-4a71-9918-25a015b99f29
An algorithm, which for each regime in the input space \(x_i \in \mathcal {X}\) makes a prediction \(\hat{y}_i^a\) with some loss rate \(a_i\) . The average loss is given by \(\sum _{i =1}^{N}p_i \cdot a_i = A\) . We can write \(a_i = A+ \delta _{ai}\) , with \(\sum _{i=1}^{N} p_i \cdot \delta _{ai} = 0\) . The term \(\delta _{ai}\) represents how much \(a_i\) varies (differs from the average loss \(A\) ). An unaided human, which similarly for each regime in the input space \(x_i \in \mathcal {X}\) makes some prediction \(\hat{y}_{i}^h\) . The average loss of the human is given by \(\sum _{i =1}^{N}p_i \cdot h_i = H\) . Similarly, we write \(h_i = H+ \delta _{hi}\) , with \(\sum _{i=1}^{N} p_i \cdot \delta _{hi} = 0\) . Finally, some combiner (a human using algorithmic input) \(g(\hat{y}_{i}^a, \hat{y}_{i}^h)\) , which takes predictions given by the algorithm and unaided human and returns a combined prediction, \(\hat{y}_i^c\) . The combining function reflects human decision-making: it could select the algorithm's prediction, the unaided human's prediction, or interpolate between the two of them. We could also view this as a (loss) combining function \(c(a_i, h_i)\) that takes the algorithmic loss and human loss on a particular instance and returns some combined loss.
m
5a17dbb4-ad1c-4524-b0f7-dc86b6cd3903
In general, we may not have control over all (or even any) of these components. For example, the combining function reflects human judgement, which typically can't be directly manipulated. A primary goal of our analyses is to determine when a human-algorithm system displays complementarity, defined in Definition REF below.
m
623320db-2231-46bc-b05d-2d16e244ccc7
Definition 1 (From [1]}) A human-algorithm system displays complementary performance when the combined system has (strictly) lower loss than either the human or algorithm: \(\sum _{i=1}^{N}p_i \cdot c(a_i, h_i) < \min \left( \sum _{i =1}^{N}p_i \cdot a_i,\sum _{i =1}^{N}p_i \cdot h_i \right)= \min (A, H)\)
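A small numerical sketch of this definition: it computes A, H, and the combined expected loss for made-up regime probabilities and loss rates, with an idealized combiner that always picks the better predictor per regime. The human and algorithm err on different inputs here, so complementarity holds; all numbers are illustrative.

def expected_loss(p, losses):
    return sum(pi * li for pi, li in zip(p, losses))

def complementary(p, a, h, combine):
    """Check Definition 1: combined expected loss strictly below min(A, H)."""
    A = expected_loss(p, a)
    H = expected_loss(p, h)
    C = expected_loss(p, [combine(ai, hi) for ai, hi in zip(a, h)])
    return C < min(A, H), (A, H, C)

def picks_better(ai, hi):
    return min(ai, hi)          # idealized combiner: defer to whoever is better per regime

# Two regimes where the human and the algorithm err on different inputs.
p = [0.5, 0.5]
a = [0.4, 0.0]                  # algorithm loss per regime
h = [0.0, 0.4]                  # human loss per regime
print(complementary(p, a, h, picks_better))   # (True, (0.2, 0.2, 0.0))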
m
ac43b075-e819-4969-958f-74fbe488a5b5
Neural networks have been successfully applied to a broad range of applications and are now increasingly employed to gain new insights into complex and demanding science problems [1]}. Many science problems, in the context of deep learning, are treated as regression problems where obtaining the best model fit is a key measure of success. The best model fit is typically the result of applying an optimisation method that minimises a loss function. Improvements in the model fit are often a key criterion used to establish that a new model delivers improved performance over prior modelling approaches.
i
77d3d079-5d07-4fd6-8c31-8d4f12fd9a1d
Training neural networks usually continues until the optimizer achieves a minimum value where the loss function no longer decreases or where a comparison of the loss achieved on the training and test data indicates that over-fitting is taking place. The optimizer may not succeed and cannot guarantee that the global minimum has been achieved. The selection and application of the optimizer therefore warrants careful consideration.
i
c202c435-2a1e-4289-883d-88613d32da3a
Neural networks are considered to be universal approximators [1]}. While this property gives neural networks incredible flexibility, as evidenced by their application to a very wide range of problems, it is not straightforward to determine an appropriate neural network architecture and the corresponding set of model weights required to achieve a useful approximator. In this paper we examine the challenge of finding the optimal weights for a given neural network architecture by comparing the performance of several well known optimisation algorithms: Adam [2]}, LM [3]} [4]}, BFGS [5]} and L-BFGS [6]}. We apply these algorithms to a well known but challenging problem where the frequency is fixed and the amplitude varies. We also investigate the process of model optimisation by studying the fit achieved in incremental steps.
i
6afd3ef1-bb7d-46bd-9821-97dc22cf5f36
Finally, the results presented here have significant practical implications. This study was motivated by earlier work in which we examined applying machine learning (ML) to detecting faults in fielded machinery using unsupervised learning [1]} [2]}, where the function approximation and the choice of optimizer were key to delivering breakthrough results.
i
901e6655-edd9-4a8c-9726-92e46d790d17
There are many quasi-Newton based optimisation algorithms available for training neural networks. All algorithms are a compromise between accuracy, robustness, computational efficiency, memory usage, scalability and flexibility. For our study we have selected four very well known optimisation algorithms that have seen widespread application, briefly summarized as follows.
m
2c06b426-c94f-4705-b0fb-634287a8b48e
We compared a range of well known optimizers applied to fitting neural networks with a small to medium number of weights. We studied the performance of Adam, the Levenberg-Marquardt (LM) algorithm, BFGS and L-BFGS and concluded that the LM optimiser delivered significant advantages over the other methods, including orders of magnitude improvement in the MSE values, rapid optimisation and straightforward application as no hyperparameter tuning was required.
d
ec90a8cd-022e-4ea0-b6e1-386e5ffae290
Using these optimizers we fit the function \(y = \mathrm{sinc}(10x)\) with a neural network with a few parameters. This function has a variable amplitude and a constant frequency. We observed that as the model fit progressed, the higher-amplitude components of the function were fit first. We found that only the LM optimiser could fit this function well across the full range of amplitudes.
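A minimal sketch, assuming SciPy, of this experiment in miniature: a one-hidden-layer network fit to y = sinc(10x), with scipy.optimize.least_squares and method="lm" standing in for the Levenberg-Marquardt optimizer. The network size, sampling grid, and initialization are illustrative assumptions, and the paper's experiments also use the other optimizers and larger models.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sinc(10 * x)                      # NumPy's sinc is the normalized sin(pi*z)/(pi*z)

H = 20                                    # hidden units (illustrative)
def unpack(w):
    w1 = w[:H].reshape(H, 1); b1 = w[H:2*H]
    w2 = w[2*H:3*H];          b2 = w[3*H]
    return w1, b1, w2, b2

def model(w, x):
    w1, b1, w2, b2 = unpack(w)
    hidden = np.tanh(w1 @ x[None, :] + b1[:, None])   # (H, n)
    return w2 @ hidden + b2

def residuals(w):
    return model(w, x) - y

w0 = 0.5 * rng.standard_normal(3 * H + 1)
fit = least_squares(residuals, w0, method="lm")        # Levenberg-Marquardt
print("MSE:", np.mean(fit.fun ** 2))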
d
12b9b1ee-48a2-4463-9da1-1ab4abcc2985
We have also demonstrated the usefulness of the LM optimiser for physics-informed neural networks (PINNs) by solving the Burgers equation to a higher accuracy than has previously been achieved. Combined with the observation that the LM method can achieve an accurate fit where other optimizers cannot, the LM method is likely to be very useful for PINNs and for the broader class of NN models with a small to medium number of parameters.
d
484648ac-bc11-47c3-952d-dcafb1657f46
It would likely benefit the broader ML community if the LM and BFGS optimizers included in could be made available without additional wrapping in TensorFlow. This is recommended as there are a large number of potential applications that would benefit significantly from the ready availability of these optimizers.
d
d70d62fd-eeba-400d-92b6-68e30368708a
Video frame interpolation creates non-existent intermediate frames of the input video while keeping the newly generated video spatially and temporally continuous and visually pleasing. The technique has been studied widely and has become a hot research topic in the video processing community. Its applications include frame rate up-conversion [1]}, [2]}, novel view synthesis [3]}, and inter prediction in video coding [4]}, [5]}.
i
956066d0-f453-465e-9f75-9c23a14d64ec
Conventional frame interpolation estimates the optical flow of the input frames first, then infers the optical flow at the intermediate time-step, and finally warps the input pixels to the target ones under the guidance of the optical flow [1]}, [2]}. This kind of method relies heavily on optical flow estimation, so its performance is unstable when there are large motions, for which the optical flow estimation is usually inaccurate. Furthermore, conventional optical flow estimation can be time-consuming, so the complexity of these methods is usually high.
i
f6e6f7ea-69eb-459b-b3c5-b2bcda479ef2
Nowadays, convolutional neural networks (CNN) have been applied to synthesizing the intermediate frames, achieving promising performance in visual quality and time efficiency. Existing methods can be classified into three branches: 1) direct generation [1]}, which takes the original frames as input and directly predicts the intermediate frames; 2) flow-guided methods [2]}, [3]}, [4]}, [5]}, which follow the align-and-synthesis paradigm; and 3) adaptive kernel based methods [6]}, [7]}, [8]}, which adopt a flow-free pipeline where the convolution kernels are learned by passing the original frames through a CNN.
i
ab5c6aa5-307b-4d0e-a338-89c144bee8d3
All previous methods face several neglected issues. 1) It is a dilemma whether to adopt optical flow or not: optical flow estimation is an effective way of modeling motion, but it is not robust when large motions are involved. 2) Due to fixed model parameters, existing methods can only interpolate at the single time-step adopted in the training stage, e.g. 1/2. 3) Because all modules use fixed parameters, they are not adaptive enough to handle different contents and motion conditions accurately and robustly.
i
fe0f1baa-f7d1-4ed0-91cc-2461af9d1eba
Recently, meta-learning has been introduced to increase a model's adaptivity by adjusting the model based on the testing conditions and the content of the input images/videos for many computer vision and image processing tasks [1]}, [2]}, [3]}, [4]}, [5]}. These works motivate us to address the above-mentioned issues via meta-learning.
i
188223e6-7b60-4d04-b985-5ee8d43b9633
In our work, we still follow the align-and-synthesis paradigm and aim to realize time-arbitrary video frame interpolation with accurate and robust modeling of motions. First, to achieve time-arbitrary video frame interpolation, we build a meta-learned frame interpolation module that takes both the optical flow and the time-step as input to adaptively generate the convolutional kernels used to produce the predicted intermediate frame. Introducing meta-learned adaptive convolutions also makes our model more adaptive to different motion contexts and contents. Second, a meta-learned flow refinement module is introduced to improve the accuracy of the optical flow estimation based on the down-sampled version of the input frames. As shown in the visual and quantitative results, the proposed method outperforms state-of-the-art methods and provides a better solution for arbitrary-time video frame interpolation.
i
41b740f7-6841-4aad-a480-ae868e03b0d1
1) Datasets and Metrics. We train our proposed video frame interpolation method on the Vimeo90K dataset [1]} and validate on UCF101 [2]}, Vimeo90K [1]} and the Middlebury benchmark [4]}. Following the setting in [4]}, we also report the Interpolation Error (IE) on the Middlebury benchmark.
m
6940c2c9-4bde-46df-938a-5c943438b37c
2) Training Strategies and Hyper-Parameter Setting. We train the network for 20 epochs with a mini-batch size of 2. The initial learning rate is set to 0.001 with a reduce-on-plateau strategy. The Adam [1]} optimizer is used to update the network parameters, with \(\beta _1=0.9,\beta _2=0.999\) .
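A sketch, assuming PyTorch, of this training configuration; the network and data pipeline are placeholders, and the reduce-on-plateau factor and patience are assumptions since only the strategy itself is stated here.

import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Placeholder network; the real model is the meta-learned interpolation framework.
net = torch.nn.Conv2d(6, 3, kernel_size=3, padding=1)
optimizer = Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Factor and patience are assumed values for the reduce-on-plateau strategy.
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=2)

for epoch in range(20):                      # 20 epochs, mini-batch size 2
    # ... iterate over Vimeo90K triplets here, compute the loss, and step the optimizer ...
    val_loss = 0.0                           # validation loss computed after each epoch
    scheduler.step(val_loss)                 # lower the learning rate when it plateaus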
m
88cfe267-baf2-473c-a1be-a52feb0a2b6e
3) Evaluation. We compare the performance of our approach against several state-of-the-art methods quantitatively and qualitatively. The results are shown in Table REF and Fig. REF . Our method is denoted as MIN. As we can see in Table REF , our method outperforms almost all the state-of-the-art methods in all metrics.
m
854c4cc5-bf2d-4fa9-ab81-fcceca426e39
Motion-Aware Meta-Learned Frame Prediction. We denote by MIN-Base the model that directly feed-forwards the two concatenated coarse generation results through the RDN. Meanwhile, we also compare against the result of another kernel-based method, SepConv [1]}.
m
bf1415b9-85b6-4ead-83b7-48e987531c70
Content-Aware Meta-Learned Flow Refinement. We then perform an ablation study on the content-aware meta-learned flow refinement module. We denote the model without the proposed optical flow refinement module as MIN-UNR, and the model that only uses the reconstruction loss \(L_r\) in the training loss as MIN-UNC. Besides, we visualize the optical flow to show the effectiveness of this module in Fig. REF . In general, both of our last two versions can capture the motions of foreground objects well.
m
1e2540f1-a939-4d6c-9d55-22fed29ad146
In this paper, we develop a dual meta-learned frame interpolation framework that is capable of synthesizing the intermediate frame at an arbitrary intermediate time-step. First, we create a content-aware meta-learned flow refinement module that takes the down-sampled version of the input frames as input to improve the accuracy and robustness of the optical flow estimation. Second, a motion-aware meta-learned frame interpolation module generates the convolutional kernels used to synthesize the interpolated frame based on the refined optical flows and the time-step. Extensive qualitative and quantitative evaluations demonstrate the superiority of our method and the effectiveness of each of its components.
d
056e065b-6e8e-4545-969a-c865d7d70eac
Pretrained language models [1]}, [2]}, [3]} have demonstrated strong empirical performance not only within a language but also across languages. Language models pretrained with a mix of monolingual corpora, such as multilingual BERT, exhibit a decent zero-shot cross-lingual transfer capability, i.e., a model fine-tuned in a single source language (L1) can solve the task in another language (L2) [4]}, [5]}. Surprisingly, the transfer happens without lexical overlaps between L1 and L2 [6]}, [7]} or even without joint pretraining [8]}: an encoder only pretrained on L1 can be transferred to L2 without any parameter updates. These results suggest that, whether the encoder is trained on single or multiple languages, it learns some transferable knowledge about language.
i
55a6a09b-8f57-42a9-b9c5-c74d558e6899
However, the characteristics of such transferable knowledge are still underexplored. Recent studies with the probing methodology [1]}, [2]} have revealed that multilingual BERT captures language-independent linguistic structures such as universal dependency relations [3]} and subjecthood [4]}, but it remains unknown whether learning such linguistic properties actually contributes to the performance, and whether there exists more abstract knowledge transferred across languages.
i
9aefbca2-74fb-400b-8ca8-68b2b0aa8ec4
In this study, we try to shed light on these questions with the framework of the Test for Inductive Bias via Language Model Transfer [1]}, focusing on designing artificial languages with natural-language-like structural properties (Figure REF ). We pretrain encoders with artificial languages and transfer the encoders to natural language tasks with their parameters frozen. This enables us to see how learning the specific structural properties of the artificial language affects the downstream performance.
i
1f3b9973-8e79-49f5-be87-e2b791d6de83
Specifically, we explore whether it is beneficial for the encoder to know the following two characteristics of natural language: word distributions and latent dependency structures. We design artificial languages that represent such characteristics and perform an extensive study with different encoder architectures (LSTM and Transformer) and pretraining objectives (causal and masked language modeling).
i
bc30d1ba-7e7f-4b8f-8c65-6d0f70fbc53f
We start by complementing the study in [1]}. We train LSTM and Transformer encoders with the sentence-level causal language modeling task and evaluate the encoders in English. We show that an artificial language that models simple statistical dependency within a sentence provides decent transferable knowledge on natural language modeling. Furthermore, we find that the inductive bias of a nesting head-to-tail dependency structure is more useful than a flat one. We then proceed to investigate transfer learning in masked language modeling [2]}, one of the currently dominant pretraining paradigms. We evaluate pretrained Transformer encoders with dependency parsing and confirm that the nesting dependency structure is important for learning the structure of natural language. We hypothesize that the transfer performance of pretrained encoders is related to the way the encoder preserves the input contextual information in the output vectors. We perform a probing experiment and find that the artificial language with the nesting dependency structure trains encoders to encode information about adjacent tokens into the output vector of each token. We conclude this paper with the hypothesis that a part of the transferable knowledge in language models could be explained by knowledge of the position-aware context dependence of language.
i
9c891845-4ab2-4d01-838f-46ab0bf851cb
We provide two baseline models to compare with pretrained encoders: one trained on the L2 training corpus from scratch and one trained with frozen random weights in the encoder. For each configuration, we pretrain three encoders with different random seeds, and for each encoder we fine-tune three models, which results in nine models in total. We summarize the average scores and standard deviations in Figure REF .
r
d11ee16a-6e39-4c0a-a18d-3a8a75bf397f
The Transformer encoder is more flexible than LSTM. We start by discussing overall trends. We observe that the Transformer encoders give lower perplexity scores compared to LSTM regardless of the pretraining language. This tendency is in line with the observations on the surprisingly good transferability of pretrained Transformer encoders to other languages [1]}, or even other modalities [2]}, [3]}. We think that this is because Transformer encoders are better at aggregating and preserving the context information at each time step, as we will see in sec:probing, presumably because the Transformer architecture has self-attention and residual connections.
r