_id
stringlengths
36
36
text
stringlengths
200
328k
label
stringclasses
5 values
8017f571-8c92-49db-ae50-9ccb16239fe2
Next, we build a theoretical model for the recommended algorithm and operating model. Figure REF shows ledger state machine. The central bank and privileged institutions would operate the state machine together. This model provides a formal description of finite state machine M = (S, V, t). The state machine can describe any state \(s \in S\) for every moment. It can read input token \(\tau \in V\) and proceed to the next state by different transitions t(s, \(\tau \) ). There are two types of transactions. Single-shard transactions will only need consensus in the retail consensus network, while cross-shard transactions need the central bank participant. Leaders are privileged institutions in the retail consensus network. The central bank is the leader node in the wholesale consensus network. <FIGURE>
m
b7c8b1b6-64bc-43c0-818e-b10cc62a3302
Assuming an initial machine state \(s_{0}\) , a token-based sharding system would distribute transactions to different leaders. The central bank institutions could provide a uniform system interface to the public to handle transactions and process them based on token-based sharding. In a single-shard transaction, the shard leader checks transaction signatures and input tokens \(\tau _{0}\) . After verifying the signature, the tokens become locked. If involved tokens have not previously been locked, the output becomes \(\tau _{locked}\) and the machine moves to the locked state \(s_{1}\) . Otherwise, it will exit ( a rolled-back transaction to the initial state). Next, the leader verifies \(\tau _{locked}\) available in its ledger. If \(\tau _{locked}\) are available, the output token will be \(\tau _{verified}\) , and the machine moves to the verified state \(s_{2}\) . Otherwise, it will exit. Finally, the leader writes the transaction with inputs and outputs. If successful, the output will be \(\tau _{output}\) , and the machine moves into the state \(s_{3}\) . Otherwise, it will exit. These steps ensure the token is recorded on the ledger before noticing clients.
m
dbb7afc4-8011-4c10-9e00-f4d7dc59cf46
Figure REF shows data model of leaders' ledgers. Transaction records are subject to the leader's ledger. Once a leader updates its ledger, the transaction becomes legal and immutable. If traders want to revert the transaction, they must initiate a new transaction to return the token.
m
39e259de-0529-4887-a364-546008a2305d
Definition 1 (Ledger State) The ledger state of the regulated cryptocurrency is defined as a directed graph D=<V(D), E(D), \(\varphi \) > that the elements of V(D) are vertices (tokens) and the elements of E(D) are edges (token flow) in the model. \(\varphi \) is ordered mapping from token set V to token flow set E. <FIGURE>
m
a2bcb8bb-3003-49c0-8cbd-2dca72148485
Definition 2 (UTXO) \(UTXO=\lbrace \tau |\tau \in V(D) \wedge d_{G}^{+}(\tau )=0\rbrace \) . UTXO is an unspent transaction token set. In the model, an unspent token means there is no edge coming from it (the out-degree of it is 0).
m
dd24db89-3469-4ca6-942b-f05aec01a153
Definition 3 (Transaction Graph) A transaction graph is a directed graph TD=<V(TD), E(TD), \(\varphi \) >. In a transaction graph, the in-degree of an input token is 0 and the out-degree of an output token is 0. Leader's ledger will be updated when a transaction is finished by the state machine (\(\forall \) x. \(\forall \) \(\tau \) .(Tx(\(\tau \) ,x) \(\Rightarrow \) D = D + TD)).
m
2a660194-ebf2-4341-bbcf-3e88ef731663
Figure REF shows all types of transaction graph. The Initial Issuance Transaction can only be initiated by the currency issuer (Central Bank), who can generate a new token from the genesis point. The Final Redemption Transaction has to be checked by the central bank (transaction receiver). Besides, our model needs the central bank to carry out real-time Initial Issuance Transactions and Final Redemption Transactions. Other types of transactions can be regulated in a deferred manner for central banks, which reduces the central bank's performance stress. A valid transaction in in figure REF would produce new tokens. <FIGURE>
m
ca258d3a-b28a-470c-aee2-c7c16506d89a
We can use the following mathematical expressions to show token flow. Here we add an assumption that the leader nodes are non-faulty (H) and would follow the model procedures. Faulty nodes may behave arbitrarily and be vulnerable to inside and outside attacks. With non-faulty nodes, we can ensure tokens are recorded in the ledger by every transaction:
m
58f0d128-fc36-4ea0-8cbe-0b0bd090b792
The state machine ensures that every transaction is valid. \(\forall \tau . \, (H \)\(\wedge \)\(p(\tau ) \Rightarrow p(r(\tau ))\) means that if input \(\tau \) has been recorded in the ledger, a valid transaction graph using \(\tau \) as inputs and received token \(r(\tau )\) and change token \(c(\tau )\) as outputs will be added to the ledger by a non-faulty leader(H). If the change is 0, then \(c(\tau ) = null\) and p(c(\(\tau \) )) means no token recorded. Note that the expressions above show the token flow rather than the transaction.
m
e7e040d4-d330-4355-ba9e-6faf53c148a1
\(\forall \tau . \, (H \)\(\wedge \)\(p(\tau ) \Rightarrow p(f(\tau )))\) means non-faulty leaders(H) would ensure that output tokens in a cross-shard transaction would be recorded in another shard's ledger. Cross-shard transactions would not happen in our model if we split one transaction into several concurrent sub-transactions. However, cross-shard transactions would match some business scenarios more. For example, CBDC users may pay tokens in different ledgers and achieve an atomic transaction. Alternatively, CBDC designers want to control the number of tokens and consolidate tokens from different ledgers to one new token.
m
a19dca6c-ab0c-43ba-acf4-62c0b97b58f4
Figure REF shows a client has different tokens allocated in three different shards and uses them to initiate a transaction. The transaction first turns the input tokens to the endpoint via the Final Redemption Transaction and issues new tokens in the new shard via the Initial Issuance Transaction. In a CBDC system, the central bank is responsible for issuing and redeeming tokens, regulating both transactions and ensuring that output tokens are recorded in the new shard. <FIGURE>
m
38d40faf-f5a3-458b-be33-290e29600253
Our paper proposes a CBDC framework (CEV Framework), including an evaluation sub-framework and a verification sub-framework to design central bank digital currency. This work is of significant importance to the evolution of CBDC and proposes an original approach. We provide a holistic solution for CBDC designers who can have a method to analyze potential solutions according to the economic and regulatory conditions of their jurisdictions.
d
532a2311-0b28-4aa1-a30c-cf479f19c782
To the best of our knowledge, we are the first to propose a framework to analyze CBDC related technical solutions by splitting consensus algorithms into different components and proposing operating models to solve CBDC related issues. Most importantly, we build a verification sub-framework to prove the feasibility of the recommended algorithms and operating models with rigorous and professional mathematical proofs. Besides, our framework would not bring any new issues.
d
3c644961-dcff-437d-9a50-87269c010a26
Our framework could be continuously updated and improved by iterating with the workflow. In addition, there are diverse central bank digital currency projects worldwide. These projects can leverage our framework to design the consensus algorithms better and adopt reasonable operating models. To handle the future need for regulated cryptocurrency design, our framework needs further refinement in practice. We have listed all considerations in CBDC in the CEV framework. The main future work is to include more dimensions and solutions into the framework and simplify the process of using this framework.
d
7f304efb-4f43-4362-b513-703b41ec3c38
Generative models allow to represent high-dimensional behaviour patterns (sequences of action-perception pairs) in much lower-dimensional latent space. However, these representations are far from unique. It is convenient for analysis and theoretical arguments about generalization capabilities if encoded points exhibit some regularities. The current paper focuses on the issue of efficient encoding of motion primitives. It is known that PCA analysis of human movements leads to conclusion that most of the variance is explained by few components as described in , which confirms the old idea of the pioneer of kinesiology field of study Nikolai Bernstein. Bernstein proposed that human motion despite being high dimensional process can be described using points of low dimension. In other words, motion primitives can be represented in a compact manner using small finite number of variables for encoding. These variables are responsible to flexible adaptations to variations of related features such as positions of target objects, size and shape of objects, initial position of a manipulator, etc.
i
3203f38f-4e07-4040-9072-d2c7613c8917
Motion primitives is an indispensable idea both in robotic and human behaviour modeling. They provide modularity in construction of complex interactions with an environment. Primitives are considered as a minimal set of reusable patterns to be combined to generate diverse patterns. The importance of motion primitive for robotic behaviour design is discussed in . In their work authors specify two types of motion primitives: discrete and cyclic, which correspond to a fixed point and a limit cycle in dynamic systems, respectively. And each of them can be represented by a single point in a parameter space of dynamic systems.
i
0eda2c3d-8f49-4ee2-89bf-20cbec86044a
There are many approaches to learning behavioural or motion primitives. One of them is described in : direct modelling differential equations for discrete and oscillating patterns with variable parameters tuned by reinforcement learning. Similar approach is taken in with the emphasis on two types of primitives. However the training is goal-oriented in both cases. The reward function is designed to ensure specific dynamic properties of a trajectory and ensure reaching certain final state. In reality, however, there are many constraints regarding interaction with objects in an environment in certain way which are hard to take into account when hand-designing reward functions are used. It is theoretically possible to extract those constraints automatically via supervised learning through imitation of recorded trajectories, if the data is quite plentiful. For example, in autoencoder is used to create a generative model for multimodal primitives.
i
2841b055-8b04-4839-a624-4fbab1db43a6
We consider supervised learning scenario where every motion has finite encoding and can be regenerated using this encoding and shared generative model implemented as a Recurrent Neural Network. In our previous work we demonstrated that explicit embedding of hypersurfaces corresponding to each motion primitive in shared latent space enhances inter-primitive generalization capacity. However, this approach requires to manually label learning data, which may be infeasible when dealing with large datasets. In this paper we address the issue of finding suitable latent representation of sensory-motor data which can be automatically clustered for each corresponding primitive.
i
53ec1a55-5213-4f47-8e19-f881e142563c
First thing to notice is high correlation between motor and sensory (typically visual) information. Information contained in a sequence of joint angles of a robotic manipulator allows us to partially generate visual sequence. For example, the manipulator is reaching for an object and then pushing it in some direction. Sequence of joint angles in this movement provides information about location of the object at any moment of time. Therefore by having appropriate encoding of kinematics data we can reconstruct sensory information with some precision. From these consideration, we assume that Kinematics data can be used to initialize values of latent variables for each sample prior to learning weights of RNN for generative model.
i
d34600fc-3b27-4007-bbe9-20b5e1b004a4
Clustering motion primitives is an intricate problem. Motions corresponding to different primitives could be located very close to each other in trajectory space. For example, let us consider cases of a robotic manipulator reaching for grasping and reaching for simply touching an object in the same location in space. Grasping and touching is an example of different primitives. Specific motions within each primitive are determined by location of the object in space. Hence, each primitive is a low-dimensional manifold in trajectory space. We can exploit an assumption about structure of these manifolds for clustering. Namely, they are close to linear, or at least could be embedded in low-dimensional linear subspaces. Linear subspace clustering has been a prominent topic in recent years.
i
2ec27829-a367-40b3-a8b6-de9135142fb5
However, even if we drop the assumption about linearity, we can linearly project low-dimensional manifolds from trajectory space into parameter space without any overlapping of primitives. In authors use projection of trajectory data into dominant principal components for further probabilistic modeling. Corollary of Whitney embedding theorem described in tells us that almost all linear projections will have the required property. Thus, a random linear projection will suffice. Moreover, we show that random projection is also robust enough if the parameter space has sufficient dimension.
i
8edd354b-0a38-4f9c-a46a-e8d355ee1fa2
There are two hypothesis we test: (i) motion primitives form close to linear manifolds in trajectory space and (ii) random projection of motion data to latent space and consecutive learning the generative model keeps linear manifolds at a degree enough for subspace clustering, which ensures robustness and separation of encodings for different primitives. To test the first hypothesis we will analyse generated joint-angle trajectories of humanoid robot obtained via mapping motion-capture data. For the second hypothesis we design artificial data with plentiful amount of trajectories to test generalisation capacity of the model. Two types of generalisation are tested: intra- and inter-primitive generalisation. The former is the ability of the model to generate unseen samples from known primitives and the later is the ability to quickly learn new primitives.
i
a68b54b7-6fde-4948-8a5e-d4c481a81187
We prepare two experiments to attest hypothesis stated in the introduction. First is to apply subspace clustering algorithm for joint angle sequences of robotic arms generated with demonstration by human via motion capture. Second is to do the same with artificially generated data for robotic manipulator interacting with an object, and since this data is much more plentiful, do training of the RNN generative model with train/test split of the data and then apply subspace clustering algorithm for learned latent encodings. Furthermore, in the second experiment we compare quality of intra-primitive generalisation for different modes of initialization of latent variables as well as inter-primitive generalisation. Generalisation is the ability of a trained model to encode new samples with little or no change to its parameters. Intra-primitive generalisation is about the encoding of samples belonging to known primitives, inter-primitive generalisation is the capability to encode entirely new primitives.
r
94ae7a3a-4abc-4a40-8569-1a93085444e4
In this paper we investigated structure of robotic motion primitives in trajectory space and ways of efficient encoding of that structure. Distinctive feature of each primitive is that the set of all motions belonging to this primitive lies on a low-dimensional manifold embedded in trajectory space, which was confirmed by experiment in which we were able to reconstruct artificially generated robotic motions from random linear projection of its motor trajectory data using RNN model. Moreover, these manifolds are close enough to affine subspaces, which enables us to use subspace clustering algorithm for labeling a collection of motion in unsupervised manner. This claim comports with clustering results of data obtained by motion-capture device. Another assumption is correlation of visual information with motor commands. In our experiments we showed that only slight correction to initial values of latent variables obtained by random linear projection is required to minimize combined loss function for motor and visual data. Last thing to show was random linear projections do not disturb affine subspace clusters of trajectory space, it is still possible to do subspace clustering of projected data for sufficiently large dimension of latent space. Clustering results for latent encodings show sufficient precision to support this claim. Initialization of the latent variable by random linear projections improves intra- and inter-primitive generalization capabilities compared to conventional initialization methods.
d
e82d5d04-db02-4e1b-a6b0-1f5deba11d33
To model more complex behaviours composed of many consecutive primitives, instead of using a single latent vector \(\mathbf {z}\) of fixed dimension usually a sequence of such vectors is used. For the future research we are planning to extend latent variable initialization algorithm by random linear projections to a sequence of latent vectors. It can be done via one-dimensional convolution through time of a random linear projection with long motor sequence. There are some challenges such as primitives might have different number of timesteps and not clear borderline between primitives.
d
f80139dd-888c-4596-ba4b-9179b4ae3537
Another point is visual perception information is not used to initialize encodings. The problem here is only a small portion of each perceived image is relevant to motion, the rest is a background noise which will clutter random projection. Some attention mechanism will potentially alleviate the problem. Vision information is highly correlated with motor commands, but there is some part of it which is independent, such as colors of the objects the robot is interacting with.
d
826d4ffb-8413-4f54-8a11-49a5c8ff4951
Conventional tabular reinforcement learning is bottle-necked by the curse of dimensionality for practical applications. The number of parameters that needs to be trained grows exponentially with respect to the size of states and actions. In order to make reinforcement learning practically tractable, one line of research is hierarchical reinforcement learning (HRL), which develops principled ways of temporal and state abstraction to reduce the dimensionality for sequential decision making.
i
85cc6bca-660e-4f7b-8ff4-791c90761a37
The basic idea of temporal abstraction is to develop macro-actions that take several steps to terminate before returning. Usually good macro-actions aim to solve sub-goals, so that multiple macro-actions divide difficult tasks into several simpler ones. In addition, state abstraction tries to reduce the dimensionality by removing irrelevant state variables for decision making, reducing the cardinality of state space and helping in tackling over-fitting. These two techniques lead to natural hierarchical control architecture, which intuitively resembles how humans solve complex tasks.
i
7a1a1fa4-ceb6-484a-a5cd-ed64406178a1
Another area of research closely related to our work is batch reinforcement learning. Batch reinforcement learning aims to learn the best policy from a fixed set of prior-known samples. Compared to on-policy algorithms, batch reinforcement learning enjoys stability and data-efficiency. More importantly, it allows to apply reinforcement learning in a practical problem that is expensive in collecting new samples, such as education, spoken dialog system and medical systems. Well-known algorithms in batch reinforcement learning include Least Square Policy Iteration (LSPI) [1]}, Fitted Q iteration (FQI) [2]}, Neural Fitted Q Iteration (NFQ) [3]} and etc.
i
1da4efe0-caf1-4b30-b548-8131b26d3810
There are three major approaches developed relatively independently [1]}, aiming to formalize the idea of abstraction into reinforcement learning. The three approaches are: 1) the option framework [2]}, 2) Hierarchies of Abstract Machines (HAMs) [3]} and 3) MAXQ framework [4]}.
w
d72f240f-a3e1-4aaf-b4cd-09643988e0a3
Under the option framework, the developers augment the original action set by options, which are macro actions that have their own predefined policy, termination state and active state. Sutton et al have shown that such a system is a semi-Markov Decision Process (SMDP), which converges to a unique hierarchical optimal solution using a modified Q-learning algorithm. For HAM framework, rather than giving out the entire policy of these macro actions, developers just need to specify a partial program that specifies a part of the policy. Using HAMQ learning [1]}, HAM can also converge to a hierarchical optimal solution.
w
47300cba-01e7-464f-a11d-f2ab8fe279a4
At last, MAXQ framework provides an elegant formulation that decomposes the original MDP into several subroutines in a hierarchy and the algorithm can learn policies recursively for all the subroutines. Therefore, in the MAXQ framework, there is no need to specify the policy for any macro-actions. However, Dietterich shows that it can only achieve recursive optimal solution, which in the extreme case, can be arbitrarily worse than the hierarchical optimal solution.
w
4efc3a16-10dc-413d-9fa9-5d276529e780
All of the above work assumes that the agent can interact with the world while learning. However, in real-world applications that needs HRL, it is usually very expensive to collect data and terrible failures are not allowed on operation. This forbids the usage of online learning algorithms that could potentially preform horribly in the early learning stage. To our best knowledge, there is little prior work [1]} in developing batch learning algorithms that allow a hierarchical SMDP to be trained from an existing data set collected from a stochastic behavior policy. We believe that such algorithms are valuable for applying HRL in complex practical domains.
w
dbf2b379-3a4d-40e5-960a-f811ef43b95c
We applied our algorithm to the Taxi domain described in [1]}. This is a simple grid world that contains a taxi, a passenger, and four specially-designated locations labeled R, G, B, and Y. In the starting state, the taxi is in a randomly-chosen cell of the grid, and the passenger is at one of the four special locations. The passenger has a desired destination that he/she wishes to reach, and the job of the taxi is to go to the passenger, pick him/her up, go to the passenger's destination, and drop the passenger. The taxi has six primitive actions available to it: move one step to one of the four directions (north, south, east and west), pick up the passenger and put down the passenger. To make the task more difficult, the move actions are not deterministic, so that it has \(20\%\) chance of moving in one of the other directions. Also, every move in the grid will cost \(-1\) reward. Attempting to pick up or drop passenger at wrong location will cause \(-10\) reward. At last, successfully finish the task has 20 reward. The grid is described in figure REF . Therefore, there are 4 possible state for the destination, 5 possible state for the passenger (4 location and 5 is on the car), 25 possible locations, which results into \(500*6=3000\) parameters in the Q-table that needs to be learned. We denote the state variable as \([dest, pass, x, y]\) for later discussion.
m
9b02b02a-7b4b-4f5d-8a93-201ac3c6c7ad
The dataset for each run were collected in advance by choosing actions uniformly at random with different sizes. We evaluate the performance of algorithms by running greedy execution for 100 times to obtain average discounted return at every 5000 new samples and up to \(60,000\) samples. We repeat the experiments for 5 times to evaluate the influence of different sample distribution. The discounting factor is set to be \(0.99\) . <FIGURE><FIGURE>
m
4d9090da-0cde-4d12-a7d2-fd0a44fe511d
We conducted three sets of experiments: 1) comparison of HQI with flat Q-value Iteration and the effect of state abstraction. 2) learning polices for different DAGs from the same dataset and 3) learning policy using Fitted-HQI with Random Forest as the function approximator.
r
813b96d3-659e-48a5-a182-691d942d2cd3
The first experiment compares HQI against flat Q-value Iteration (FQI). Also, as pointed out in [1]}, state abstraction is essential for MAXQ to have fast learning speed compared to flat Q learning. As a result, we manually conduct state abstraction for each subtask in DAG 1. However, different from the aggressive state abstraction described in [1]}, where every subtask and child pair has a different set of state variables, we only conduct a simple state abstraction at subtask level, i.e. all children of a subtask has the same state abstraction. The final state abstraction is listed in Table REF . As described above, we run 5 independent runs with different random samples of different sizes, we report the mean average discounted return over five runs in Figure REF , as well as the best average discounted reward of the five runs in Figure REF . <TABLE>
r
75312a87-d06d-4b78-8bac-49e00eca1e8c
Results show that both HQI with and without state abstraction consistently outperforms the FQI when there is limited training data. When the dataset is large enough, they all converge to the same optimal performance, which is around \(1.0\) . We also notice that, occasionally, HQI with state abstraction can learn the optimal performance state abstraction with very limited samples, i.e 5000 samples. This demonstrates that with proper hierarchy constraints and good behavioral policy, HQI can generalize much faster than FQI. Moreover, even the HQI without state abstraction consistently outperforms FQI in terms of sample efficiency. This is different from the behavior of the on-policy MAXQ-Q algorithm reported in [1]}, which needs state abstraction in order to learn faster than Q-learning. We argue that HQI without state abstraction is more sample efficient than FQI for the following reasons: 1) HQI uses all applicable primitive samples to update the Q-table for every subtask while MAXQ-Q only updates for the subtask that executes that particular action. 2) Upper level subtask in MAXQ-Q needs to wait for its children gradually converges to their greedy optimal policy before it can have have a good estimate of \(P(s^{\prime }, N|s, u)\) while HQI does not have this limitation. <FIGURE><FIGURE>
r
47f565f6-6c4e-4555-85e2-0377a2baa599
The second experiment is running HQI on different variations of hierarchical decomposition of the original MDP. Figure REF and Figure REF show two different valid DAGs that could also solve the original MDP. Figure REF demonstrates that with sufficient data all three DAG converge to their recursive optimal solution, which confirms that HQI is able converge for different hierarchies. In terms of sample efficiency, three structures demonstrate slight different behavior. We can notice that DAG 2 learns particularly slower than the other two. We argue that this is because of poor decomposition of the original MDP. Based on the problem settings, pick and drop are all risky actions (illegal execution lead to \(-10\) reward), while in DAG 2 these two actions are mixed with low-cost move actions while the other two DAGs isolated them in a higher level of decision making. Therefore, designing good hierarchy is crucial to obtain performance gain versus flat RL approaches. This emphasizes the importance of the off-policy nature of HQI, which allows developers to experiment with different DAG structures without collecting new samples. How to effectively evaluate the performance of particular hierarchical decomposition without using a simulator is a part of our future research. <FIGURE><FIGURE>
r
e4f2c165-cd26-4721-8e1f-e4a4e31c9303
The last experiment utilizes Random Forests as the function approximator to model the Q-value function in DAG 1. The main purpose is to demonstrate the convergence of Fitted HQI. For each subtask \(O_i\) the Q-value function \(Q_i(s, u)\) is modelled by a random forest with \([dest, pass, x, y]\) as the input feature. Since \(dest\) and \(pass\) are categorical variables, we represent them as a one-hot vector, which transforms the state variable into a 11 dimension vector (4d for destination, 5d for passenger and 2d for the \(x,y\) coordinate). We report the mean average discounted rewards over 5 independent runs with different random samples of different sizes. Figure REF shows that Fitted-HQI achieves similar performance compared to Tabular HQI. <FIGURE>
r
95451c05-818e-4f09-91b9-a5d1ca7a0554
In this paper, we introduced an off-policy batch learning algorithm for hierarchical RL. We showed that it is possible to blindly collect data using a random flat policy. Then, we use this data to learn different structures that the data collection were not aware of. Our experiments on the Taxi domain show that it converges faster than FQI to the optimal policy. It also shows that different DAG structures are able to learn from this flat data, with different speeds. Every DAG structure has its own number of parameters, which suggests a possible line for research to try to minimize the number of parameters in the hierarchy. Other future work include comparing different feature selections techniques for Fitted-SQI and applying the algorithm to large-scale and complex domains.
d
31707091-e429-4736-9c2e-573a136c0e74
Speech-to-text translation (ST) aims at translating acoustic speech signals into text in a foreign language, which has wide applications including voice assistants, translation for multinational video conferences, and so on. Traditional ST methods usually combine automatic speech recognition (ASR) and machine translation (MT) in a cascaded manner [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, which might suffer from error propagation and high latency. To break this bottleneck, end-to-end ST systems attracted much attention recently [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, which learn a unified model to generate translations from speech directly. Some recent work has shown great potential for end-to-end speech translation, even surpassing traditional cascaded systems [14]}, [15]}. <FIGURE>
i
3eedc112-5765-4ea9-bbb1-014bcf2dc97b
As a cross-modal task, a major challenge in training an end-to-end ST model is the representation discrepancy across modalities, which means there is a modality gap between speech representations and text embeddings, as shown in the left sub-figure of Figure REF . Existing approaches often adopt a sophisticated MT model to help the training of ST, with some techniques like pretraining [1]}, [2]}, [3]}, multi-task learning [2]}, [5]}, [6]} and knowledge distillation [7]}, [8]}, [9]}, [6]}. Although these methods have achieved impressive improvements in ST task, these methods are not necessarily the best way to leverage the MT knowledge. Considering that during training, the input of the translation module only include speech sequences or text sequences, the lack of multimodal contexts makes it difficult for the ST model to learn from the MT model. Inspired by recent studies on some cross-lingual [11]}, [12]}, [13]} and cross-modal [14]}, [15]}, [16]} tasks, we suggest that building a shared semantic space between speech and text, as illustrated in the right sub-figure of Figure REF , has the potential to benefit the most from the MT model.
i
5081c78c-06c7-4191-916a-0f6c5df09b02
In this paper, we propose the Speech-TExt Manifold Mixup (STEMM) method to bridge the modality gap between text and speech. In order to calibrate the cross-modal representation discrepancy, we mix up the speech and text representation as the input and keep the target sequence unchanged. Specifically, STEMM is a self-learning framework, which takes both the speech representation and the mixed representation as parallel inputs to the translation model, and regularizes their output predictions. Experimental results show that our method achieves promising performance on the benchmark dataset MuST-C [1]}, and even outperforms a strong cascaded baseline. Furthermore, we found that our STEMM could effectively alleviate the cross-modal representation discrepancy, and project two modalities into a shared space.
i
6aff7c06-8fd8-468c-b54f-e5adc0e22540
In this section, we will begin with the basic problem formulation (Section REF ) and introduce the model architecture (Section REF ). Then, we introduce our proposed Speech-TExt Manifold Mixup (STEMM) in Section REF . Finally, we introduce our proposed self-learning framework with STEMM in Section REF and present two mixup ratio strategies in Section REF . Figure REF illustrates the overview of our proposed method.
m
42b1dfcb-1e79-456b-a803-5c7d4fceab4e
End-to-end ST  To overcome the error propagation and high latency in the cascaded ST systems, [1]}, [2]} proved the potential of end-to-end ST without intermediate transcription, which has attracted much attention in recent years [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}. Since it is difficult to train an end-to-end ST model directly, some training techniques like pretraining [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, multi-task learning [20]}, [21]}, [22]}, [23]}, [24]}, curriculum learning [25]}, [26]}, and meta-learning [27]} have been applied. To overcome the scarcity of ST data, [28]}, [29]}, [30]} proposed to generate synthesized data based on ASR and MT corpora. To overcome the modality gap, [31]}, [32]}, [19]} further encode acoustic states which are more adaptive to the decoder. Previous works have mentioned that the modality gap between speech and text is one of the obstacles in the speech translation task, and to overcome such gap, one branch of the works [34]}, [35]}, [19]} introduced a second encoder based on the conventional encoder-decoder model, to extract semantic information of speech and text. Recently, [31]} built a shared semantic projection module that simulates the human brain, while in this work, we explored how to construct an intermediate state of the two modalities via the recent mixup method (i.e. Speech-TExt Manifold Mixup) to narrow such gap. Note that our work is orthogonal with [23]}'s study in training procedure of end-to-end ST model.
w
91691e15-2d84-43fe-a15c-1f743a98100b
Mixup  Our work is inspired by the mixup strategy. [1]} first proposed mixup as a data augmentation method to improve the robustness and the generalization of the model, where additional data are constructed as the linear interpolation of two random examples and their labels at the surface level. [2]} extended the surface-level mixup to the hidden representation by constructing manifold mixup interpolations. Recent work has introduced mixup on machine translation [3]}, [4]}, [5]}, [6]}, sentence classification [7]}, [8]}, [9]}, multilingual understanding [10]}, and speech recognition [11]}, [12]}, [13]}, [14]}, and obtained enhancements. Our approach is the first to introduce the idea of manifold mixup to the speech translation task with two modalities, speech, and text.
w
e995e955-2523-4c4f-b0a7-a8ffb12a22cb
In this paper, we propose a Speech-TExt Manifold Mixup (STEMM) method to mix up the speech representation sequences and word embedding sequences. Based on STEMM, we adopt a self-learning framework, which learns the translation of unimodal speech sequences and multimodal mixed sequences in parallel, and regularizes their output predictions. Experiments and analysis demonstrate the effectiveness of our proposed method, which can alleviate the cross-modal representation discrepancy to some extent and improve the performance of ST. In the future, we will explore how to further eliminate this discrepancy and fill the cross-modal transfer gap for ST.
d
99a5cc02-851e-4eb8-8db1-b9d8b3ec15fb
Traditionally, Spoken Language Understanding (SLU) uses a pipeline transcribing audio into text using Automatic Speech Recognition (ASR), which is then mapped into a semantic structure via Natural Language Understanding (NLU). However, this modular approach is prone to error propagation from noisy ASR transcriptions, and ASR in turn is not able to disambiguate based on semantic information. End-to-end (E2E) approaches on the other hand, can benefit from joint modelling. One of the main bottlenecks for building E2E-SLU systems, however, is the lack of large and diverse datasets of audio inputs paired with corresponding semantic structures. Publicly available datasets to date are limited in terms of lexical and semantic richness , number of vocalizations [1]}, domain coverage [2]}, [3]} and semantic contexts [4]}, . In this paper, we present the Spoken Language Understanding Resource Package (SLURP), a publicly available multi-domain dataset for E2E-SLU, which is substantially bigger and more diverse than existing SLU datasets. SLURP is a collection of 72k audio recordings of single turn user interactions with a home assistant, annotated with three levels of semantics: Scenario, Action and Entities, as in Fig. REF , including over 18 different scenarios, with 46 defined actions and 55 different entity types as listed on https://github.com/pswietojanski/slurp.Note that Action & Entities are also referred to as `Intent'. Entities consist of `Tags' and `Fillers', aka. `Slots' and 'Values'. <FIGURE>
i
2543b152-3c88-4e45-b72d-91ea2f8acf57
In order to further support SLU development, we propose SLU-F1, a new metric for entity prediction, which is specifically designed to assess error propagation in structured E2E-SLU tasks. This metric has 3 main advantages over the commonly used accuracy/F1 metric, aimed at supporting SLU developers: First, it computes a distribution rather than a single score. This distribution is (1) inspectable and interpretable by system developers, and (2) can be converted into a confidence score which can be used in the system logic (akin to previously available ASR confidence scores). Finally, the distribution reflects errors introduced by ASR and their impact on NLU and thus (3) gives an indication of the scope of improvement that can be gained by E2E approaches. Using this metric, we evaluate 4 baseline systems that represent competitive pipeline approaches, i.e. 2 state-of-the-art NLU systems and 2 ASR engines. We conduct a detailed error analysis of cases where E2E could have made a difference, i.e. error propagation and semantic disambiguation.
i
8015f3a1-721f-410c-8f34-5563151411b5
The first corpora containing both audio and semantic annotation reach as far back as the Air Travel Information System (ATIS) corpus [1]} and the Switchboard-DAMSL Labeling Project . However, it was not until recently when the first E2E approaches to SLU were introduced , [2]}. Since then, one of the main research questions is how to overcome data sparsity by e.g. using transfer learning , , or pre-training . Here, we present a new corpus, SLURP, which is considerably bigger than previously available corpora. In particular, we directly compare our dataset to the two biggest E2E-SLU datasets for the English language: The Snips benchmark [3]} and the Fluent Speech Command (FSC) corpus . With respect to these resources, SLURP contains 6 times more sentences than Snips, 2.5 times more audio examples than FSC, while covering 9 times more domains and being on average 10 times lexically richer than both FSC and Snips, see Section REF . SLURP represents the first E2E-SLU corpus of this size for the English language. The only existing comparable project is represented by the CASTLU dataset for Chinese Mandarin.
w
f0fe2335-0a46-4caf-8da3-adbd9375728f
We now establish the performance of different baseline systems on the SLURP corpus. As demonstrated in Section 3.1, SLURP is linguistically more diverse than previous datasets, and therefore more challenging for SLU. We first provide an evaluation of two ASR baselines to show the complexity of the acoustic dimension. We then evaluate the semantic dimension, by testing the corpus against state-of-the-art NLU systems. We finally combine ASR and NLU, implementing several SLU pipelines.
m
3bf67538-0dfc-42f5-8f38-bc21462b9158
Note that so far, the direct comparison of E2E-SLU with pipeline approaches are mainly limited to baselines developed on the same dataset, e.g. a multistage neural model in which the two stages that correspond to ASR and NLU are trained independently, but using the same training data [1]}, [2]}. We follow a different approach, which, as we argue, is closer to the real-life application scenario: We use competitive ASR systems and state-of-the-art NLU systems.
m
079c54c6-4e51-423e-8ac3-80ef6c303c8a
SLURP is not only bigger, but also a magnitude more challenging than previous datasets. The purpose of this new data release is not to provide yet another benchmark dataset, but to provide a use-case inspired new challenge, which is currently beyond the capabilities of SOTA E2E approaches (due to scalability, lack of data efficiency, etc.).
d
838a7928-7046-4cef-909e-f94eece64963
We have tested several SOTA E2E-SLU systems on SLURP, including which produces SOTA results on the FSC corpus. However, re-training these models on this more complex domain did not converge or result in meaningful outputs. Note that these models were developed to solve much easier tasks (e.g. a single domain). Developing an appropriate model architecture is left for future work. For this reason, in this work we focus on benchmarking existing approaches.
d
f1467e73-1d21-42f9-89b0-d9baf7bc2ccd
We show that SOTA modular approaches are able to provide a strong baseline for this challenging data, which has yet to be met by SOTA E2E systems. We also argue that our modular baseline is closer to how real-world applications build SLU systems, nevertheless often overlooked when testing E2E systems. As such, we consider our SOTA modular baseline a major novel contribution.
d
793fd114-2b36-4b31-9a31-bf7e7e3c9ade
In this paper, we present SLURP, a new resource package for SLU. First, we present a novel dataset, which is substantially bigger than other publicly available resources. We show that this dataset is also more challenging by first conducting a linguistic analysis, and then demonstrating the reduced performance of state-of-the-art ASR and NLU systems. Second, we propose the new SLU-F1 metric for evaluating entity prediction in SLU tasks. In a detailed error analysis we demonstrate that the distribution of this metric can be inspected by system developers to identify error types and system weaknesses. Finally, we analyse the performance of two state-of-the-art NLU systems on ASR data. We find that a sequential decoding approach for SLU, which starts from the more abstract notion of scenario and action produces better results for entity tagging, than an approach which works bottom up, i.e. starting from the entities. Our error analysis suggests that this is due to the former approach being able to better account for noise by priming entity tagging, which is a more challenging task than scenario or action recognition.
d
cef138b2-9438-4299-ad90-47723746c554
In future work, we hope that SLURP will be a valuable resource for developing E2E-SLU systems, as well as more traditional pipeline approaches to SLU. The next step is to extend SLURP with spontaneous speech, which would again increase its complexity, but also move it one step closer to real-life applications.
d
8686478c-72fa-434f-9ce0-7477eb8c1b79
Representative sampling of collaborative filtering (CF) data is a crucial problem from numerous stand-points and is generally performed at various levels: (1) mining hard-negatives while training complex recommendation algorithms over massive datasets [1]}, [2]}; (2) down-sampling the item-space to estimate expensive ranking metrics [3]}; and (3) sub-sampling the entire dataset for reasons like easy-sharing, fast-experimentation, mitigating the significant environmental footprint of training resource-hungry machine learning models [4]}, [5]}, [6]}, etc. In this paper, we are interested in finding a sub-sample of a dataset which has minimal effects on model utility evaluation i.e. an algorithm performing well on the sub-sample should also perform well on the original dataset.
i
62221f5a-45e2-4322-b671-9e6fd9102951
Preserving exactly the same levels of performance on sub-sampled data over metrics like MSE, AUC, etc. is a very challenging problem. However, a simpler albeit useful problem is accurately preserving the ranking or relative performance of different algorithms on sub-sampled data. For e.g., a sampling scheme that has a very low bias but high variance in preserving metric performance values has a lesser utility than a different sampling scheme with high amounts of bias but low variance, since the overall algorithm ranking is still preserved.
i
6b836bbe-402c-4e29-b4eb-e94085ce2f25
Performing careless and ad-hoc sampling such as randomly removing interactions, or making dense subsets by removing users/items with few interactions [1]} can have adverse downstream repercussions. For e.g., sampling only the head-portion of a dataset, from a fairness and inclusion perspective, is inherently biased against minority-groups and benchmarking algorithms on this biased data is highly likely to propagate the original sampling biases. On the other hand, from an entirely performance view-point, accurately retaining the relative performance of different recommendation algorithms on much smaller sub-samples is a challenging research problem in itself.
i
455eaf80-02f8-4ebe-81f3-c6d832eba4a9
Two prominent directions towards representative sampling of CF data are: (1) designing principled sampling strategies, especially for user-item interaction data; and (2) analyzing the performance of different sampling strategies, in order to better grasp which sampling scheme works “better” for which type of data. In this paper, we explore both of these directions through the lens of expediting the recommendation algorithm development cycle, by:
i
1aa8683a-979d-4e97-8a21-b3cea1b72c65
Characterizing the efficacy of sixteen different sampling schemes in accurately benchmarking various kinds of recommendation algorithms on smaller sub-samples. Proposing a data-specific sampling strategy, SVP-CF, which can dynamically sample the “toughest” portion of a CF dataset. SVP-CF is also specifically designed to handle the inherent data heterogeneity and missing-not-at-random properties in user-item interaction data.
i
8c9a7dfd-134b-4ad5-91f8-7acadd4db4f1
Ultimately, our experiments reveal that SVP-CF outperforms all other sampling strategies and can accurately benchmark recommendation algorithms with roughly \(50\%\) of the original data leading to roughly \(1.8\times \) time speedup.
i
7a759ebd-ff40-4b14-b08a-5a656b1f61ba
paragraph4 2mm plus1ex minus.2ex -0.7em Sampling CF data. Sampling in CF-data has been a popular choice for three major scenarios. Most prominently, sampling is used for mining hard-negatives while training recommendation algorithms. Some popular approaches include randomly sampling negative interactions; using the graph-structure to find the hardest negatives [1]}, [2]}; and ad-hoc techniques like similarity search [3]}, stratified sampling [4]}, etc. On the other hand, sampling is also generally employed for evaluating recommendation algorithms by estimating expensive to compute metrics like Recall, nDCG, etc. [5]}. Finally, sampling is also used to create smaller sub-samples of the entire data for reasons like fast experimentation, benchmarking different algorithms, privacy concerns, etc. However, the consequences of different sampling strategies on any of these downstream applications is under-studied, and is the main research interest of this paper.
w
9bf6cfae-39b1-4438-b85c-6731e4a98f6d
paragraph4 2mm plus1ex minus.2ex -0.7em Coreset selection. Closest to our work, a coreset is loosely defined to be a subset of the data-points that maintain a similar “quality” as the full dataset for subsequent model training. Submodular approaches try to optimize a function \(f : \mathbf {V} \mapsto \mathcal {R}_+\) which measures the utility of a subset \(\mathbf {V} \subseteq \mathbf {X}\) , and use these estimated functions as a proxy to select the best data subset [1]}. More recent works treat coreset selection as a bi-level optimization problem [2]}, [3]} and directly optimize for the best possible subset for the downstream task. Selection-via-proxy [4]} is another technique which employs an inexpensive base-model as a proxy to tag the importance of each data-point. Note however that all of the discussed coreset selection approaches were designed primarily for classification data, whereas adapting them for interaction data is non-trivial because of: (1) the inherent data heterogeneity; (2) wide range of metrics to evaluate the utility of a recommendation algorithm; and (3) the prevalent missing-data characteristics in user-item interaction data.
w
1c7ccb15-56cc-463f-b396-e86575bf9c62
paragraph4 2mm plus1ex minus.2ex -0.7em Datasets. We use six public CF datasets with varying sizes, sparsity patterns, etc. We use three different subsets (Magazine, Luxury, and Video-games) of the Amazon review datasets [1]}, along with the Movielens-100k [2]}, BeerAdvocate [3]}, and GoodReads Comics [4]} datasets. We simulate all three feedback scenarios (feedbacktypes) for each dataset via different pre-processing strategies. For explicit and implicit feedback, we follow a randomized 80/10/10 train-test-validation split for each user's consumption history, and make use of the leave-one-last [5]} strategy for the sequential feedback task. In pursuit of following the least restrictive data pre-processing [6]}, we only weed out the users that have lesser than 3 interactions in total. Please check appendixexpdetails for further information about data statistics and training details.
m
d609f102-d42d-4b96-9e21-f87851eb35ac
paragraph4 2mm plus1ex minus.2ex -0.7em How do different sampling strategies compare to each other? \(\Psi \) -values for all sampling schemes on all datasets can be found in psiresults. Even though there are only six datasets under consideration, there are a few prominent patterns. First, the average \(\Psi \) for most sampling schemes is around \(0.4\) , which implies a statistically significant correlation between the ranking of algorithms on the full vs. sub-sampled datasets. Next, SVP-CF generally outperforms all commonly used sampling strategies by some margin in retaining the ranking of different recommendation algorithms. Finally, strategies that discard the tail-part of a dataset (head-user, centrality-based) are the worst performing overall, which supports the recent warnings against dense sampling of data [1]}. <TABLE>
m
77d726ab-a2d1-4d05-8048-cbb158302176
paragraph4 2mm plus1ex minus.2ex -0.7em How much data to sample? Since \(\Psi \) is averaged over all \(p \in \lbrace 80, 60, 40, 20, 10, 1 \rbrace \) % data samples, to better understand a reasonable amount of data to sample, we stratify \(\Psi \) according to each value of \(p\) and note the average Kendall's Tau. As we observe from percentsamplingvstau, there is a steady increase in the performance measure when more data is retained. Next, despite the results being averaged over sixteen different sampling strategies, \(50-60\%\) of the data seems enough for gauging the algorithm order.
m
6422f699-d477-4972-bcfa-114a7163f52d
paragraph4 2mm plus1ex minus.2ex -0.7em How does the relative performance of algorithms change as a function of sampling rate? In an attempt to better understand the impact of sampling on different recommendation algorithms used in this study (algorithms), we visualize the probability of an algorithm moving in the overall method ranking with data sampling. We estimate the aforementioned probability using Maximum-Likelihood-Estimation (MLE) on the experiments run in computing \(\Psi (\mathcal {D}, s)\) . Formally, given a recommendation algorithm \(r\) , CF-scenario \(f\) , and data sampling percent \(p\) : \(P_{\mathit {MLE}}(r ~|~ f, p) = \lambda \cdot \sum _{\mathcal {D}} \sum _{s} \sum _{m} 0.5 + \frac{\mathcal {R}_{f, m}(r) - \mathcal {R}_{f, m}^{s, p}(r)}{2 \cdot (n-1)}\)
m
400de626-f64d-4a36-9c4f-e6d90ae78f99
where \(\lambda \) is an appropriate normalizing constant, and \(n\) represents the total number of recommendation algorithms. A heatmap visualizing \(P_{\mathit {MLE}}\) for all recommendation algorithms and CF-scenarios is shown in percentsamplingvsmethod. We observe that simpler methods like Bias-only and PopRec are most probable to move upwards in the ranking order with extreme sampling whereas parameter-heavy algorithms like SASRec, SVAE, MVAE, etc. tend to move downwards. <FIGURE>
m
b575c996-d303-4a7c-b1c0-a9a26abcd1f2
paragraph4 2mm plus1ex minus.2ex -0.7em Are different metrics affected equally by sampling? To better understand how the different implicit and sequential feedback metrics (feedbacktypes) are affected by sampling, we visualize the average Kendall's Tau for all sampling strategies (except SVP-CF for brevity) and all % data sampling choices separately over each metric in metriccorrelation. We observe a steady decrease in the model quality across accuracy metrics and sampling schemes. This is in agreement with the analysis from percentsamplingvstau. Next, most sampling schemes follow a similar downwards trend in performance for all three metrics with AUC being slightly less and nDCG being slightly more affected by sampling. <FIGURE><FIGURE>
m
6d5d3588-7b90-4350-8bf3-f9ec21ae857e
In this work, we characterized the performance of various sampling strategies for the task of accurately retaining the relative performance of different recommendation algorithms. We also proposed a novel sampling scheme, SVP-CF, which is better than commonly used strategies and can confidently gauge the best performing algorithm with only half of the initial dataset. An interesting research direction in addition to more representative sampling of CF-data is analyzing the fairness and privacy aspects of training algorithms on sub-sampled data, that we delay for future work.
d
c49d677b-2f42-41dc-805d-ceec4468c2c6
Many modern programming systems, such as JavaScript engines that are running our web browsers, use just-in-time (JIT) compilers to improve performance. Examples include Google Chrome, Microsoft Edge, Apple Safari, and Mozilla Firefox, which are used by 2.65 billion, 600 million, 446 million, and 220 million, respectively [1]}. JIT compiler bugs can lead to exploitable security vulnerabilities [2]}, [3]}, [4]}, [5]}, [6]}. Such a bug in Google Chrome could be used to hijack passwords and to navigate to other sites and execute malicious programs, as reported by the Microsoft Offensive Security Research team (CVE-2017-5121 [2]}). Thus, the ability to quickly analyze, localize and fix JIT compiler problems is important. However, existing work and available tools focus on static code [8]}, [9]}, [10]}, and so they are not suitable for developers in debugging JIT compilers, which generates code at run-time. Additionally, the size and complexity of JIT-based systems [11]} combined with the dynamic nature of JIT compiler optimizations, make it challenging to analyze and locate bugs quickly. For example, Google V8 has more than 2,000 source files and more than 1 million lines of code.
i
720834fa-e1d0-4cd0-b5d6-53a606d09b9c
Traditional debuggers rely on text even though the main feature of a JIT compiler is building a graph-like structure to translate bytecode into optimized machine code. With this in mind, we propose a new debugging tool, which visualizes the JIT compiler's intermediate representation (IR). Our approach uses IR identification and generation techniques described by Lim and Debray [1]}, where the compiler-related half of the visualization tool's pipeline are described in detail. In this paper we focus on the visualization half, which includes: merging multiple IR graphs into a single graph, simplifying the merged graph, converting the simplified graph into a hypergraph, simplifying the hypergraph, and visualizing the hypergraph using a metro map metaphor. Visualizing the JIT compiler's IR allows us to answer questions such as:
i
f2c9586a-7d9d-4d7a-86df-2dbc9a52655d
Which optimization phases are likely to be buggy? Related Work: There are many methods and tools for debugging static code compilers and optimized code, but little on using the intermediate representation and visualizing it to show the explicit information about the compilation and optimization processes. Google V8's Turbolizer [1]}, [2]} is one of very few IR visualization tools. It shows the final IR graph after each optimization process and provides interactive features to view the control-flow graphs for each optimization phase. Although Turbolizer provides some information about the IR nodes and their relationships, it does not provide enough information about the optimization process and cannot answer several of our initial set of questions. Dux et al. [3]} visualize dynamically modified code at run-time with call graphs and control-flow graphs by showing the graph changes with animation, allowing end-to-end play, pause, and forward/backward step-by-step animation. CFGExplorer [4]} visualizes the control-flow graph of a program to represent the program structure for dynamic binary analysis. It provides interactive features allowing developers to find specific memory addresses, loops, and functions to analyze the system. CcNav [5]} analyzes and visualizes a C++ compiler's optimization process with a call graph, control-flow graph, and loop hierarchies. Control-flow graphs and call graphs are popular in program analysis, especially for analyzing static code. However, they are different from dynamically generated IR graphs. Tools for visualizing and interacting with control-flow graphs and call graphs (such as those above) are not sufficient for visualizing the IR graph as, e.g., they cannot capture the optimization phases. Background: We briefly introduce several concepts relevant to JIT compilers. Interpreter: a computer program that converts input source code into bytecode and executes it without compiling it into a machine code [6]}. Bytecode: instructions generated from input source code by an interpreter; bytecode is portable, unlike compiled programs, and used in many modern languages and systems, such as JavaScript, Python, and Java [7]}. Instruction-level Trace: a file that holds all the instructions that a programming system, such as a JIT compiler, has generated and executed at run-time. The instructions are in a machine-level code with symbol information (e.g., function names) and are used for performance analysis and debugging. Just-in-Time (JIT) compiler: a program that turns bytecode into instructions that are sent to a computer's processor, to improve performance [8]}; see Fig. REF (a) for an example of JIT compiler in Google's V8 pipeline. Optimized code: machine code generated from bytecode by a JIT compiler that can be directly executed by a processor. Intermediate Representation (IR): a type of graph also known as sea-of-nodes [9]}, [10]}, [11]}. Unlike other graphs used in program analysis, such as control-flow or data-flow graphs which have specific types of nodes, nodes in the sea-of-nodes graph represent different types: from scalar values and arithmetic operators to variables and control-flow nodes and function entry nodes. Similarly, edges represent different relationships (e.g., semantic and syntax relationships). Optimization: adding, removing, and merging nodes and edges in the graph during execution. In a single JIT compilation, the compiler executes several different optimization phases (inlining, loop peeling, constant propagation) to generate efficient machine code, which modify the IR graph and correspond to new hyperedges (the set of all nodes generated or optimized in this phase); see Fig. REF (b) for an example of constant propagation. Proof-of-Concept Program: an input program that is used to trigger the buggy behavior in the JIT compiler, i.e., a valid program (without any bugs) which when run can reveal bugs in the JIT compiler. In our experiment, we are targeting JavaScript engine V8, so the PoC is a JavaScript program. <FIGURE>Visualizing the Intermediate Representation Our approach for capturing and visualizing the IR of a JIT compiler below uses compiler-related steps 1-4 [12]}, and steps 5-9 are described in brief below. Modify the input program, \(P_0\) , to create similar programs, \(\lbrace P_1,...P_N\rbrace \) , by generating the abstract syntax tree for \(P_0\) and then randomly modifying nodes in the tree with allowable edits (passing semantic/syntactic checks). The newly created programs either still contain the code that triggers a bug in the JIT compiler, or the buggy code is replaced and no bug is triggered. In the first case, the execution output of the optimized code is different from the interpreted code (as with \(P_0\) ). Run each program \(P_i\) and collect the instruction-level traces. Analyze traces to check if \(P_i\) triggers a bug in the JIT compiler and to identify \(P_i\) 's IR and the optimization phases executed while optimizing \(P_i\) . Select candidate hyperedges, suspected to be buggy, from the information gathered in step 3. Merge all selected candidate hyperedges into the original IR from \(P_0\) . Simplify the merged IR by reducing the number of nodes and edges. Convert the simplified graph into a hypergraph by extracting the hyperedges from step 4 and analyzing each node's optimization status. Simplify the hypergraph by reducing the number of hyperedges and nodes. Visualize the simplified hypergraph with MetroSets [13]}. Intermediate Representation Recall that the intermediate representation (IR) of a JIT compiler is a sea-of-nodes graph that the compiler generates at the beginning of its execution by parsing the bytecode and optimizing it with several optimization phases. Formally, the IR is a simple, undirected graph \(G = (V, E)\) , where \(V\) represents the nodes optimized by the JIT compiler and \(E\) contains pairs of nodes connected by different relationships (e.g., semantic and syntax relationships, such as math expressions). By keeping track of the optimization information for each node we construct the hypergraph \(H = (V, S)\) from \(G\) , where \(V\) is a set of nodes optimized by the JIT compiler and each hyperedge in \(S\) represents an optimization phase. Two important node features are phases and opcodes. Phases are the optimization phases where a node was generated and optimized (and which later correspond to hyperedges). Opcodes represent node operations (e.g., add, sub, return). A node also has two different attribute groups: (1) basic, such as a node id, address, list of neighbors, opcode, and IR ID; and (2) optimization, such as hyperedge (phase) ID, generated hyperedge name, and optimized hyperedge names. Note that a node is generated at one hyperedge, but can be present in multiple different hyperedges, due to different optimization phases. Recall that given one JavaScript code we generate \(N\) similar versions to see if any of them trigger bugs. We generate the IRs for all of these versions (typically about 20). In the real-world examples we work with, each such IR graph has about 300-500 nodes and 30-40 optimization phase executions. Merging Intermediate Representation Hyperedges We now merge the \(N\) similar but different intermediate representations into one single graph. There are two main reasons to do this. First, we want to see the differences among the graphs in one single view. Second, by comparing hyperedges from a buggy program IR to hyperedges from a non-buggy program IR, we can find differences in some hyperedges due to different optimizations, and thus find the bug. Consider, for example, a hyperedge \(\alpha \) in both buggy and non-buggy program IRs and suppose that an additional node (the result of incorrect optimization) makes a buggy program's \(\alpha \) different from the non-buggy program's \(\alpha \) . A merged hyperedge will show this additional node, and its attributes will identify the buggy IR. A developer can now see that there was an optimization difference in \(\alpha \) and find the bug. <FIGURE>Let \(R_0\) be the IR from the original program and \(\lbrace R^{\prime }_1,...,R^{\prime }_N\rbrace \) the IRs from the modified programs. Let \(\lbrace r^{\prime }_1,...,r^{\prime }_n\rbrace \) be sub-IRs, where \(r^{\prime }_i\) is a subgraph of \(R^{\prime }_i\) when \(R^{\prime }_i\ne R_0\) , i.e., \(r^{\prime }_i \subseteq R^{\prime }_i\) , and \(n\) is the number of IRs different from \(R_0\) (\(n \le N\) ). Each \(r^{\prime }_i\) holds buggy candidate hyperedges: \(R^{\prime }_i\) hyperedges are different from \(R_0\) 's hyperedges. We traverse all sub-IRs, comparing each to \(R_0\) , and update the merged IR; see Algorithm 1 in [14]} for detail. Intermediate Representation Simplification Although the resulting merged graph may be useful for debugging, its complexity makes it difficult for developers to use; see Fig. REF (a). Therefore, we simplify the graph, convert it into a hypergraph, and simplify the hypergraph (hopefully without losing much information in these simplifications). The main goal is to end up with an interactive visualization that allows developers to debug. Reducing the IR Graph: We remove dead nodes (nodes with no adjacent edges) as they are not translated into machine code and do not affect other nodes. We then identify nodes that can be merged without losing important information. A pair of nodes is merged if they have the same opcode, the same optimization information, belong to the same IR (which can be identified by the IR id attribute), and share the same neighbors; see Algorithm 2 in [14]} for detail. Reducing the IR Hypergraph: We convert the simplified graph \(G=(V,E)\) into a hypergraph \(H=(V,S)\) , by extracting hyperedges based on the optimization phases; see Algorithm 3 in [14]}. Recall that a node \(v\) generated in phase/hyperedge \(\alpha \) and optimized in phases/hyperedges \(\phi \) and \(\gamma \) now belongs to all three hyperedges. We reduce hypergraph \(H\) by merging suitable pairs of hyperedges. Different nodes can have the same hyperedge names as attributes, but different hyperedge IDs, as IDs are assigned based on the execution order. Therefore, we merge hyperedges with the same name into a single hyperedge while assigning a new unique identifier generated from the original IDs. We use ID concatenation to obtain unique identifiers. Consider two hyperedges A and B executed twice in the order shown in Fig. REF (b). We use the order to create unique IDs by merging the 4 hyperedges into 2 hyperedges and assigning new IDs, generated by concatenating two IDs delimited with a special character `@'; see Algorithm 4 in [14]}. This reduces the number of hyperedges but increases the number of nodes in each hyperedge. Next, we traverse each hyperedge \(s\in S\) , and we use node opcodes to see if they can be merged; see Algorithm 5 and Table 1 in [14]} for more details and results. Visualizing the Hypergraph with MetroSets MetroSets [13]} uses the metro map metaphor to visualize medium-size hypergraphs. It clearly shows the relationships between hyperedges, which in our case captures the relationships among the optimizations. MetroSets provides simple and intuitive interactions that make it possible to quickly identify hyperedges (metro lines) that contain suspicious nodes (metro stations), or hyperedges that intersect with a particular suspicious hyperedge. Each node in the MetroSet map is labeled with its unique ID (representing the node generation timeline). The attributes shown when hovering over a node are phase, opcode, address, graph ID, and phase ID. A phase attribute tells the user where the node was generated and it is useful when nodes belong to multiple sets. A developer can distinguish the phase that generated a node and phases where it was optimized. Evaluation We work with Google's JavaScript engine and its JIT compiler, using a dynamic analysis tool built on top of Intel's Pin software [20]} to collect instruction-level traces, XED [21]} for instruction decoding [21]}, esprima-python [23]} to generate the syntax-tree from JavaScript code, and escodegen [24]} to regenerate JavaScript from the syntax-tree. Our data comes from the Chromium bug report site; see [12]} for details. We can identify the bugs in all listed bug reports, including Chromium bug report 5129. This version of the compiler has a bug in the \(\it EarlyOptimization\) phase. We generate 19 additional modified JavaScript programs from the original and run all 20. The instruction traces are used to generate the IR graph shown in Fig. REF (a) and our visualization is shown in Fig. REF . We can now attempt to answer some of the questions from Sec. . “What optimizations took place to generate the machine code?” The map and the “Key to Lines" legend show all optimization phases. “What is the relationship among the optimization phases?” We can examine the corresponding lines and use the interactive exploration modes (intersection, union, complement, etc.) to see the relationships among the phases. “Which optimization phase was most active?” We can visually identify the longest line, or hover over each line and see the number of nodes in it; see Figure 9 in [14]} for an example of the most active optimization phase. <FIGURE>“What optimizations affected a specific node" We can hover over the node of interest, which grays out the lines that don't contain the node. We can then examine each of the corresponding lines and look at the displayed node attributes. “Which optimization phases are likely to be buggy?” One natural way to do this is to find parts that differ in the IR graphs with the bug and those without. In other words, a program is buggy because either it has additional optimizations or missing optimizations, and this information is captured in the IRs. Any line that has many non-original IRs represents a significant difference between buggy and non-buggy programs. In this case study, the majority of nodes (9 out of 11) in the EarlyOptimization line are from different IRs, indicating a difference in optimization between buggy and non-buggy programs; see the full paper [14]} for more examples. Our prototype is available at https://hlim1.github.io/JITCompilerIRViz/. Acknowledgements This research was supported in part by the National Science Foundation under grants CNS-1908313 and DMS-1839274.
i
0b310f51-2e56-428f-998e-d01726f5ceb3
We work with Google's JavaScript engine and its JIT compiler, using a dynamic analysis tool built on top of Intel's Pin software [1]} to collect instruction-level traces, XED [2]} for instruction decoding [2]}, esprima-python [4]} to generate the syntax-tree from JavaScript code, and escodegen [5]} to regenerate JavaScript from the syntax-tree. Our data comes from the Chromium bug report site; see [6]} for details. We can identify the bugs in all listed bug reports, including Chromium bug report 5129. This version of the compiler has a bug in the \(\it EarlyOptimization\) phase. We generate 19 additional modified JavaScript programs from the original and run all 20. The instruction traces are used to generate the IR graph shown in Fig. REF (a) and our visualization is shown in Fig. REF . We can now attempt to answer some of the questions from Sec. .
m
540701da-7f20-432e-91d6-f363149de7c5
“What is the relationship among the optimization phases?” We can examine the corresponding lines and use the interactive exploration modes (intersection, union, complement, etc.) to see the relationships among the phases.
m
11b40a76-58f4-4628-813b-fbbc5235d974
“Which optimization phase was most active?” We can visually identify the longest line, or hover over each line and see the number of nodes in it; see Figure 9 in [1]} for an example of the most active optimization phase. <FIGURE>
m
91f00203-077b-4308-8dd3-98ddde3999cf
“What optimizations affected a specific node" We can hover over the node of interest, which grays out the lines that don't contain the node. We can then examine each of the corresponding lines and look at the displayed node attributes.
m
682d297d-196b-4c24-9b0d-da832fae2565
“Which optimization phases are likely to be buggy?” One natural way to do this is to find parts that differ in the IR graphs with the bug and those without. In other words, a program is buggy because either it has additional optimizations or missing optimizations, and this information is captured in the IRs. Any line that has many non-original IRs represents a significant difference between buggy and non-buggy programs. In this case study, the majority of nodes (9 out of 11) in the EarlyOptimization line are from different IRs, indicating a difference in optimization between buggy and non-buggy programs; see the full paper [1]} for more examples.
m
9ff8f991-2468-4a4f-86eb-8dde3a785497
The Argyris finite element is one of the first finite elements, cf. [1]}. It is a \(C^1\) -\(P_5^{(2)}\) finite element on triangular grids. Here \(C^1\) -\(P_5^{(2)}\) denotes the space of globally \(C^1\) and locally piecewise polynomials of degree 5 on 2 dimensional triangular grids. In general \(C^m\) -\(P_{k}^{(n)}\) stands for the space of globally \(C^m\) (\(m\ge 1\) ) and locally piecewise \(n\) -dimensional polynomials of degree \(k\) on \(n\) -dimensional simplicial grids. It is straightforward to extend the Argyris finite element to \(C^1\) -\(P_{k}^{(2)}\) (\(k> 5\) ) finite elements, as follows. We introduce \((k-5)\) function values at \((k-5)\) internal points on each edge; we introduce (additional) first-order normal derivatives at \((k-4)\) internal points each edge; and we introduce additional function values at \(\dim P_{k-6}^{(2)}\) internal points on the triangle.
i
3022386c-f53c-422d-a9ee-bd22d6191ad5
In 1970, Bramble and Zlámal [1]} and Ženíšek [2]} extended the above \(C^1\) -\(P_{k}^{(2)}\) finite element to \(C^m\) -\(P_{4m+1}^{(2)}\) finite elements for all \(m\ge 1\) . In fact, Ženíšek [2]} defined all \(C^m\) -\(P_{k}^{(2)}\) finite elements for \(k\ge 4m+1\) . We have a perfect partition of index in two space-dimensions. That is,
i
1fcc8e7a-4bc0-4126-a44a-5627cc92763c
we require \(2m\) th order continuity at each vertex of the triangulation, and the degrees of freedom are exactly the function value, the two first derivatives, and up to the \(2m+1\) \(2m\) th derivatives at each vertex; we require \(m\) th order continuity on each edge of the triangulation, and the degrees of freedom are exactly the function value at \((k-4m-1)\) internal points, the first normal derivatives at \((k-4m)\) internal points, and up to the \(m\) th normal derivatives at \((k-3m-1)\) internal points inside each edge; the degrees of freedom are exactly the function value at \(\dim P_{k-3m-3}^{(2)}\) internal points inside each triangle.
i
d78dd1f7-8938-4120-94b7-623b50ea115f
For \(C^m\) -\(P_{4m+2}^{(2)}\) finite elements, the above sets of index form exactly seven triangles, three at three vertices, three at three edges, and one at the center of the triangle. Such a nice partition of index does not exists in three and higher dimensions.
i
177a603a-87bb-4934-b678-3a1da47d9d86
The first \(C^1\) element in 3D was constructed by Ženíšek in 1973 [1]}, a \(C^1\) -\(P_9^{(3)}\) finite element. By avoiding high-order derivatives in the degrees of freedom of Ženíšek [1]}, the author extended this finite element to all \(C^1\) -\(P_k^{(3)}\) (\(k\ge 9\) ) finite elements [3]} in 2009. Also Ženíšek extended the \(C^1\) -\(P_9^{(3)}\) finite element to \(C^m\) -\(P_{8m+1}^{(3)}\) in 1974 [4]}. Again high-order derivatives (above the continuity order) were used as degrees of freedom in [4]}. The author could not extend the 3D \(C^m\) Ženíšek finite element to all \(C^m\) -\(P_{k}^{(3)}\) (\(k\ge 8m+1\) ) finite elements, but constructed a family of \(C^2\) -\(P_{k}^{(3)}\) (\(k\ge 17\) ) finite elements using the continuity-equal order derivatives in [6]}, in 2016. Also the author defined a family of \(C^1\) -\(P_{k}^{(4)}\) (\(k\ge 17\) ) finite elements in [6]}.
i
7c2c2159-9381-41c5-aeda-c6a2be9f4b77
For \(C^m\) -\(P_{k}^{(n)}\) (\(k\ge 2^n m+1\) ) finite elements on \(n\) (\(n\ge 3\) ) space-dimensional simplicial grids, by the author's limited knowledge, Alfeld, Schumaker and Sirvent are the first to introduce the distance concept to low-dimensional simplex and to define recursively the index for the nodal basis (degrees of freedom), cf. Equation (36) in [1]}. As claimed by [1]}, it is very difficult to find explicit definitions of these index sets for general or given \(m\) , \(n\) and \(k\) .
i
9c6a31fc-6c0d-4026-b978-5ecb1d0b00d7
Similar to [1]}, [2]} and [3]} studied the index partition recently. But no closed formula is obtained for the index sets for general \(C^m\) -\(P_{k}^{(n)}\) finite elements. For example, [2]} obtained the index sets (degrees of freedom) for the \(C^4\) -\(P_{33}^{(3)}\) finite elements on tetrahedral grids.
i
069a87f0-aa69-46e3-bdfd-819bf057cf49
Mathematically it is a challenge to find explicit definitions of basis functions for general \(C^m\) -\(P_{k}^{(n)}\) finite elements. For so many years we could not even complete the work in 3D, except for \(m=1\) ([1]}) and \(m=2\) ([2]}). In this work, we give explicitly the index set of the nodal basis for all \(C^m\) -\(P_{k}^{(3)}\) (\(k\ge 2^3m+1\) ) and \(C^m\) -\(P_{k}^{(4)}\) (\(k\ge 2^4m+1\) ) finite elements, on tetrahedral grids and four-dimensional simplicial grids, respectively. We did not use any formula of [3]}, [4]}, or [5]}, in giving the closed formula on the index of the nodal bases of \(C^m\) -\(P_{k}^{(3)}\) and \(C^m\) -\(P_{k}^{(4)}\) finite elements. We prove the uni-solvency and the \(C^m\) continuity of the constructed \(C^m\) -\(P_{k}^{(3)}\) and \(C^m\) -\(P_{k}^{(4)}\) finite element spaces.
i
498fd564-92e4-4cec-a5dd-a8774597eefb
We also provide a computer code which produces the index sets for any given \(n\) , \(m\) and \(k\) . The computer code is listed in Section . The computer code follows the continuity requirements and goes exhaustively and non-overlappingly from nodal indices on lower dimensional simplicial faces to those on higher ones. The computer code does not solve this mathematical problem, but does give explicit indices for practical computation need. With the computer output, we study the patterns of overlapping index so that we can give explicit nodal basis definitions in 3D and 4D. The computer code verifies the index sets, for some low \(n\) 's, \(m\) 's and \(k\) 's, constructed in this manuscript. In particular, it verifies the indices of the \(C^4\) -\(P_{33}^{(3)}\) finite element in [1]}.
i
af1d96fc-90e8-4ed1-a4b1-2c4dc10f243c
Recent progress in deep learning owes much of its success to novel network architectures for efficient processing of large datasets. One example for image, video, and audio data is the convolutional neural network (CNN) [1]}. CNNs efficiently process high-dimensional data using two key principles: translation equivariance and parameter sharing. The convolutional layers in CNNs are translation equivariant, so when the input to these layers is shifted in space, the extracted features also get shifted. Translation equivariance encodes the prior that translating an input does not change its labels into the neural network and helps efficiently extract features from raw data. Parameter sharing then not only efficiently uses parameters, but also helps induce translation equivariance in CNNs. This work was funded in part by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network and the National Science Foundation Grant CCF-1717530.
i
6697756b-3df6-43b5-9526-9d9a116d3409
The idea of equivariance using convolutions and parameter sharing has been generalized to general group symmetries [1]}, [2]}, [3]}. These group-equivariant networks use efficient parameter sharing techniques to encode symmetries in data beyond translation, such as rotations and flips, as priors. Much further research has focused on inducing equivariances for different group symmetries and data types such as [4]} for spherical symmetries, [5]}, [6]} for scale-equivariance, [7]}, [8]}, [9]} for Lie groups, and [10]}, [11]}, [12]} for group-equivariance within attention mechanisms.
i
f8aa5316-d5fe-45af-b829-5433e36d2ac2
But all these works assume knowledge of symmetries in the data is known a priori. Very recently, [1]} proposed to learn appropriate parameter sharing from the data itself using meta-learning techniques; [2]} proposed using \(L\) -conv layers to automatically search and approximate group convolutions. <FIGURE>
i
3c7f186b-2f17-4205-bb14-7da02c57209b
Here, we propose autoequivariant networks (AEN) that automatically induce group equivariance from a reduced search space using deep Q-learning, building on new group theory results that we prove. Compared to [1]}, we are able to restrict to a much smaller search space of relevant symmetries by proving an equivalence relation between parameter sharing schemes using large groups and its subgroups that it can be constructed from, as illustrated in Fig. REF (technical details deferred to Sec. ). Further, this property proved in Sec.  leads to faster construction of equivariant neural networks, as discussed in Sec. . Unlike [2]}, we focus on exact symmetries formed by combining several smaller symmetries. The overall performance of a network is a function of not only the symmetry in its parameters but also on the number of parameters and features in them. Hence, when the group symmetries are large, then equivariant networks constructed using parameter sharing with fixed number of parameters might have too many features or if we fix the number of features, the number of parameters might be too few. This issue with group equivariant networks was identified by [3]} and reiterated by [4]}, [5]}, which limits their application and makes it difficult to choose the right balance between equivariance, number of parameters, and number of features needed to design high-performing equivariant neural networks with reasonable number of parameters and features. We mitigate this issue by letting our search algorithm find the right balance automatically.
i
bfd899ee-5098-4969-80c6-2023dee11dcc
Our contributions are summarized as follows. Sec.  proves that a neural network is equivariant with respect to a set of group symmetries if and only if it is equivariant to any group symmetry constructed using semidirect products from them. Using this result, Sec. REF provides an efficient algorithm to induce equivariance for large symmetry groups in multilayered perceptrons (MLPs). Sec. REF describes our deep Q-learning algorithm for equivariant network architecture search with reduced search space. Sec.  develops and releases new group-augmented datasets, G-MNIST and G-Fashion-MNIST, which augment the standard MNIST and Fashion-MNIST datasets using several large symmetry groups. These are used to evaluate AENs and also provide a standard benchmark for further research. Sec.  also uses deep Q-learning for searching group equivariant CNNs (GCNNs) of different sizes, group equivariances, and trained using different augmentations on several real datasets like CIFAR10, SVHN, RotMNIST, ASL, EMNIST, and KMNIST. We find that the top-performing GCNNs have several group equivariances in them.
i
52bdaa59-3b9e-4802-9721-893cbc5e18f2
We conduct two sets of experiments in Sec. REF and REF for group equivariant MLPs and CNNs respectively. The dataset construction for Sec. REF is described in Sec.  in the supplementary material using various group transformations on MNIST and Fashion-MNIST datasets. We report the performance of group equivariant MLPs with fixed number of features and varying number of parameters, where we use one or more groups from the same set of group transformations for inducing equivariance as used for dataset construction. Group equivariances are induced in these networks using deep Q-learning without knowledge of transformations present in the datasets, but only the possible set of groups that can be used to construct it. For a set of symmetries, a large number of groups can be constructed from their semidirect products by changing the automorphism and components involved in the product. These experiments show that in general, using the same group for inducing equivariance actually in the dataset tends to perform well. But, cases where number of group symmetries are too many, our search algorithm does not choose MLPs with all the relevant symmetries, rather finds a balance between symmetry and parameters. In Sec. REF , our deep Q-learning algorithm chooses appropriate equivariances, augmentations, and channel sizes in GCNNs to maximize their performance on various image datasets. Code is available at https://github.com/basusourya/autoequivariant_networks.
m
2c53f297-06e4-4029-92d0-2b7b61070d8d
Object detection is a key task in machine vision, and involves both localizing objects in an image and classifying them into categories. Achieving high detection accuracy typically requires training the models with large datasets. However, such datasets are expensive to collect since they require manual annotation of multiple bounding boxes per image, while unlabeled images are easy to collect and require no manual annotation. Recently, there has been a growing interest in learning self-supervised representations, which substantially reduce the need for labeled data [1]}, [2]}, [3]}, [4]}. These self-supervised representations are learned in a pretraining stage on large-scale datasets like ImageNet [5]}, and they have led to increased performance for a range of perception tasks [6]} including object detection — even outperformed a supervised pretraining counterparts.
i
34c4fc0a-5e8b-4751-b78b-8cdc5049830c
Despite this recent progress, we argue that current approaches are limited in their ability to learn good representations for object detection, as they do not focus on learning to detect objects. Most past works (e.g., MoCo [1]} and SwAV [2]}) focus on learning only part of the detection architecture, which is usually a subnetwork of the detector (e.g., a convolutional network like ResNet [3]}). Learning a backbone on its own is not enough for a detection model to succeed. While the recent UP-DETR [4]} work trains a full detection architecture, it learns to detect random patches in an image and is therefore not geared towards detection of actual objects.
i
d6725003-2878-4541-be62-d5a55849901d
Our approach to the problem is different and is based on the observation that learning good detectors requires learning to detect objects in the pretraining stage. To accomplish this, we present a new framework called “DEtection with TRansformers based on Region priors”, or DETReg. DETReg can be used to train a detector on unlabeled data by introducing two key pretraining tasks: “Object Localization Task” and the “Object Embedding Task”. The goal of the first is to train the model to localize objects, regardless of their categories. However, learning to localize objects is not enough, and detectors must also classify objects. Towards this end, we introduce the “Object Embedding Task”, which is geared towards understanding the categories of objects in the image. Inspired by the simplicity of recent transformers for object detection [1]}, [2]}, we choose to base our approach on the Deformable DETR [2]} architecture, which simplifies the implementation and is fast to train.
i
9246bbc4-1313-4181-8d69-7918bc9575c5
But how can we learn to localize objects from unlabeled data? Luckily, the machine vision community has worked extensively on the problem of region proposals, and there are effective methods like Selective Search [1]} to produce category-agnostic region proposals at a high recall, off-the-shelf, and without the need for training. The key idea in Selective Search is that objects exhibit certain structural properties (continuity, hierarchy, edges), and fairly simple programmatic (i.e., not trained) procedures can leverage these cues to extract object proposals. As we show here, these classic algorithms can be effectively used for unsupervised learning of detectors. Similarly, our “Object Embedding Task” is based on the recent success of self-supervised methods in learning visual representations from unlabeled data [2]}, [3]}, [4]}. In these works, the key idea is to encourage learning of visual representations that are not sensitive to transformations that preserve object categories, such as translation or mild cropping. We use one such method, SwAV [2]}, to obtain the embeddings of potential objects, and use them to supervise our DETReg object embeddings during pretraining.
i