_id: string (36 characters) · text: string (200 to 328k characters) · label: string (5 classes)
55fba79c-47ba-47b1-91e6-5219478575fe
To address this issue, we present an automated multiple-choice question generation system with a focus on educational text. Taking the course text as input, the system creates question–answer pairs together with additional incorrect options (distractors). It is well suited for a classroom setting, and the generated questions can also be used for self-assessment and for knowledge gap detection, thus allowing instructors to adapt their course material accordingly. It can also be applied in industry, e.g., to produce questions to enhance the onboarding process, to enrich the contents of massive open online courses (MOOCs), or to generate data to train question–answering systems [1]} or chatbots [2]}.
i
26d6d7d5-b3fe-43b2-8459-bd714ca9a4b3
While Question Generation is not as popular as the related task of Question Answering, there has been a steady increase in the number of publications in this area in recent years [1]}, [2]}. Traditionally, rules and templates have been used to generate questions [3]}; however, with the rise in popularity of deep neural networks, there was a shift towards using recurrent encoder–decoder architectures [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]} and large-scale Transformers [11]}, [12]}, [13]}, [14]}, [15]}.
w
ee132241-7c39-43b8-807e-2f8804728e9b
The task is often formulated as one of generating a question given a target answer and a document as input. Datasets such as SQuAD1.1 [1]} and NewsQA [2]} are most commonly used for training, and the results are typically evaluated using measures such as BLEU [3]}, ROUGE [4]}, and METEOR [5]}. Note that this task formulation requires the target answer to be provided beforehand, which may not be practical in real-world situations. To overcome this limitation, some systems extract all nouns and named entities from the input text as target answers, while other systems train a classifier to label all word \(n\) -grams from the text and to pick the ones with the highest probability of being answers [6]}. To create context-related wrong options (i.e., distractors), typically the RACE dataset [7]} has been used along with beam search [8]}, [9]}, [10]}. Note that MOOCs pose additional challenges as they often cover specialized content that goes beyond knowledge found in Wikipedia, and can be offered in many languages; there are some open datasets that offer such questions in English [11]}, [12]}, [7]}, [14]}, [15]} and in other languages [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}.
w
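To make the evaluation setup described above concrete, here is a minimal sketch of scoring a generated question against a reference with sentence-level BLEU using NLTK; the example strings and the smoothing choice are illustrative assumptions, not part of the cited work.

```python
# Minimal sketch: sentence-level BLEU between a generated question and a
# reference question (example strings are hypothetical).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "what is the time complexity of binary search".split()
candidate = "what is the complexity of binary search".split()

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
smooth = SmoothingFunction().method1
print(f"BLEU: {sentence_bleu([reference], candidate, smoothing_function=smooth):.3f}")
```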
ce68b068-be11-45af-8679-4f74a2248980
Various practical systems have been developed for question generation. WebExperimenter [1]} generates Cloze-style questions for English proficiency testing. AnswerQuest [2]} generates questions for better use in Question Answering systems, and SQUASH [3]} decomposes larger articles into paragraphs and generates a text comprehension question for each one; however, both systems lack the ability to generate distractors. There are also online services tailored to teachers. For example, Quillionz [4]} takes longer educational texts and generates questions according to a user-selected domain, while Questgen [5]} can work with texts up to 500 words long. While these systems offer useful question recommendations, they also require paid licenses. Our Leaf system offers a similar functionality, but is free and open-source, and can generate high-quality distractors. It is trained on publicly available data, and we are releasing our training scripts, thus allowing anybody to adapt the system to their own data.
w
c8dac0cc-b05e-4e11-9a8e-1a7797489807
We presented Leaf, a system to generate multiple-choice questions from text. The system can be used both in the classroom and in an industrial setting to detect knowledge gaps or as a self-assessment tool; it could also be integrated as part of other systems. With the aim to enable a better educational process, especially in the context of MOOCs, we open-source the project, including all training scripts and documentation.
d
89090f72-67dd-4ffb-9c3d-a37c85b1a93b
In future work, we plan to experiment with a variety of larger pre-trained Transformers as the underlying model. We further plan to train on additional data. Given the lack of datasets created specifically for the task of Question Generation, we plan to produce a new dataset by using Leaf in real university courses and then collecting and manually curating the question–answer pairs Leaf generates over time.
d
8ac5cab1-0376-46db-baab-327c72076499
As the importance of electrical energy storage continues to grow and the number of Li-ion batteries keeps increasing, this technology is becoming ever more mature. The sheer number of Li-ion batteries being produced is leading to significant price reductions ([1]}), due primarily to economies of scale, meaning that any new battery chemistry hoping to be commercialised will have to overcome increasingly steep economic barriers before entering the mass market. As such, at least in the near future, the next wave of battery innovation is likely to come from optimising existing technology, rather than from the introduction of radically new chemistries. Control engineering will play a pivotal role in this optimisation.
i
cbbbe833-8e31-486c-b591-82fd436e1afe
Electrochemical models are important tools for optimising battery use, both for control purposes within a battery management system ([1]}) and for design. These models provide a rich description of the battery's response in terms of snapshots of its internal electrochemical state; however, their accuracy critically relies upon the electrochemical parameter values. Obtaining accurate estimates of these parameters has thus emerged as a crucial research topic in recent years. Unfortunately, the relative complexity of electrochemical models makes estimating their parameter values difficult. This difficulty was illustrated in [2]}, where it took approximately three weeks’ worth of computation on five quad-core Intel Q8200 computers to estimate the parameters of the benchmark Doyle-Fuller-Newman (DFN) model ([3]}). To bring some clarity to this problem, [4]} recently approached the parameter estimation problem from a different perspective by performing a structural identifiability analysis of the simplified single particle model (SPM), determining the six unique parameters that fully describe the SPM's response. In this way, the well-posedness of the parameter estimation problem for this simplified model could be established. This result built upon existing literature in this area, including [5]}, where sensitivity functions for a polynomial approximation of the single particle model with electrolyte (SPMe) were obtained, [6]}, which considered experiment design for ranking the sensitivity of each electrochemical parameter, [7]}, where analytic sensitivity functions for the SPM's parameters were derived from Padé approximations of the impedance, and [8]}, which analysed the structural identifiability of Randles circuit battery models.
i
cda91e4b-5b56-4a46-86c2-a2c6ec8c9175
Motivated by [1]}, this paper develops a framework towards a structural identifiability analysis of the DFN electrochemical Li-ion battery model ([2]}). The DFN model is widely considered a benchmark micro-scale model for Li-ion batteries from which simplifications like the SPM ([3]}, [4]}) and the SPMe ([5]}, [6]}, [7]}) are derived. However, estimating its parameter values in a methodical way is challenging ([8]}). This paper details preliminary results on this problem by performing a structural identifiability analysis of a decoupled and linearised form of the DFN model, which can be considered as the SPMe with added double-layer effects.
i
403eddab-e56b-4881-8888-d1a8790cdc29
To perform the structural identifiability analysis, the DFN model was first simplified into a linearised form with dynamics decomposed into three elements: solid-state diffusion, bulk electrolyte mobility and the charge transfer resistance caused by the relaxation of the overpotential. This simplification enables a tractable analysis, with the generalised impedance functions developed in [1]} currently being too complex for determining structural identifiability. It is shown that this decoupled DFN model is uniquely parametrised from current/voltage data by 21 parameters formed from combinations of the electrochemical parameters (see Table REF ). It is hoped that the results of this work will provide the theoretical underpinning behind a generalised parameter estimation method for the DFN model that does not rely upon substantial a priori knowledge of the cell's makeup. The need for accurate, and recursive, estimates of the DFN model's parameters is expected to grow still further as the importance of fast charging ([2]}) and battery design becomes ever clearer. For these applications, the SPM provides neither sufficient accuracy nor richness.
i
33397a47-ba61-4b10-8414-349733c0bcc5
A structural identifiability analysis of a decoupled and linearised Doyle-Fuller-Newman Li-ion battery electrochemical model was performed. It was shown that the model is structurally identifiable from a group of 21 parameters (composed of electrochemical quantities like the conductivities and lengths), with these parameters uniquely characterising the impedance function of this model. The parameter estimation problem for this model is therefore well-posed with respect to these groups of parameters. Future work will aim to exploit this result to develop an algorithm to recursively estimate the parameter values for pseudo-2D battery models from generic data.
d
5c474c6e-ac8c-4022-ad57-e3ac7d793c78
Oblivious subspace embeddings (OSEs) were introduced by Sarlos [1]} to solve linear algebra problems more quickly than traditional methods. An OSE is a distribution of matrices \(S \in {\mathbb {R}}^{m \times n}\) with \(m \ll n\) such that, for any \(d\) -dimensional subspace \(U \subset {\mathbb {R}}^n\) , with “high” probability \(S\) preserves the norm of every vector in the subspace. OSEs are a generalization of the classic Johnson-Lindenstrauss lemma from vectors to subspaces. Formally, we require that with probability \(1-\delta \) , \(\Vert Sx\Vert _2 = (1 \pm \varepsilon ) \Vert x\Vert _2\) for all \(x \in U\) .
i
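As a concrete illustration of the definition above, the following sketch (assuming numpy and arbitrarily chosen dimensions) draws a Gaussian sketching matrix \(S\) and empirically checks how well it preserves the norms of vectors in a random \(d\) -dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 2000, 10, 0.25
m = int(d / eps**2)                      # m = O(d / eps^2) suffices for Gaussian S

# Orthonormal basis of a random d-dimensional subspace U of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))

# Gaussian sketching matrix, scaled so that E[||Sx||^2] = ||x||^2.
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Distortion ||Sx|| / ||x|| over random vectors x in the subspace.
X = U @ rng.standard_normal((d, 1000))
ratios = np.linalg.norm(S @ X, axis=0) / np.linalg.norm(X, axis=0)
print(ratios.min(), ratios.max())        # should lie roughly within 1 +/- eps
```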
3dda9ba0-fb11-477f-bdb8-40593328adc6
A major application of OSEs is to regression. The regression problem is, given \(b \in {\mathbb {R}}^n\) and \(A \in {\mathbb {R}}^{n \times d}\) for \(n \ge d\) , to solve for \(x^* = \operatornamewithlimits{arg\,min}_{x \in \mathbb {R}^d} \Vert Ax-b\Vert _2\)
i
b6a4b4a4-f78d-47af-ad4d-c0f2e3cdea19
Because \(A\) is a “tall” matrix with more rows than columns, the system is overdetermined and there is likely no solution to \(Ax = b\) , but regression will find the closest point to \(b\) in the space spanned by \(A\) . The classic answer to regression is to use the Moore-Penrose pseudoinverse: \(x^* = A^\dagger b\) where \(A^\dagger = (A^\top A)^{-1}A^\top \)
i
0b01fd62-2bd1-450d-80c2-125067962bd0
is the “pseudoinverse” of \(A\) (assuming \(A\) has full column rank, which we will typically do for simplicity). This classic solution takes \(O(nd^{\omega - 1} + d^{\omega })\) time, where \(\omega < 2.373\) is the matrix multiplication constant [1]}, [2]}, [3]}: \(nd^{\omega -1}\) time to compute \(A^\top A\) and \(d^{\omega }\) time to compute the inverse.
i
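A small numeric sketch of the classic solution just described: forming \(x^* = (A^\top A)^{-1}A^\top b\) via the normal equations and checking it against numpy's least-squares routine; the problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Classic solution via the normal equations: x* = (A^T A)^{-1} A^T b.
x_star = np.linalg.solve(A.T @ A, A.T @ b)

# Sanity check against the library least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_star, x_lstsq))
```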
9cbc06ab-59dd-4d57-a0a7-378c13621ed8
One can instead solve \(x^{\prime } = \operatornamewithlimits{arg\,min}_{x \in \mathbb {R}^d} \Vert SAx-Sb\Vert _2\) for an OSE \(S\) on \(d+1\) -dimensional spaces. This replaces the \(n\times d\) regression problem with an \(m \times d\) problem, which can be solved more quickly since \(m \ll n\) . Because \(Ax - b\) lies in the \(d+1\) -dimensional space spanned by \(b\) and the columns of \(A\) , with high probability \(S\) preserves the norm of \(Ax - b\) to within \(1 \pm \varepsilon \) for all \(x\) . Thus, \(\Vert Ax^{\prime }-b\Vert _2 \le \frac{1+\varepsilon }{1-\varepsilon } \Vert Ax^*-b\Vert _2.\)
i
f72d8de1-21c9-48d6-89da-e3a1ad859c1d
That is, \(S\) produces a solution \(x^{\prime }\) which preserves the cost of the regression problem. The running time for this method depends on (1) the reduced dimension \(m\) and (2) the time it takes to multiply \(S\) by \(A\) . We can compute these for “standard” OSE types:
i
9a4a04cb-2f59-4855-8bf1-de4ed542e24d
If \(S\) has i.i.d. Gaussian entries, then \(m = O(d/\varepsilon ^2)\) is sufficient (and in fact, \(m \ge d/\epsilon ^2\) is required [1]}). However, computing \(SA\) takes \(O(mnd) =O(nd^2/\varepsilon ^2)\) time, which is worse than solving the original regression problem (one can speed this up using fast matrix multiplication, though it is still worse than solving the original problem). If \(S\) is a subsampled randomized Hadamard transform (SRHT) matrix with random sign flips (see Theorem 2.4 in [2]} for a survey, and also see [3]} which gives a recent improvement) then \(m\) increases to \(\widetilde{O}(d/\varepsilon ^2 \cdot \log n)\) , where \(\widetilde{O}(f) = f{\mathrm {poly}}(\log (f))\) . But now, we can compute \(SA\) using the fast Hadamard transform in \(O(nd\log n)\) time. This makes the overall regression problem take \(O(nd\log n + d^\omega /\varepsilon ^2)\) time. If \(S\) is a random sparse matrix with random signs (the “Count-Sketch” matrix), then \(m = d^{1 + \gamma }/\varepsilon ^2\) suffices for \(\gamma > 0\) a decreasing function of the sparsity [4]}, [5]}, [6]}, [7]}, [8]}. (The definition of a Count-Sketch matrix is, for any \(s\ge 1\) , \(S_{i,j}\in \lbrace 0, -1/\sqrt{s}, 1/\sqrt{s} \rbrace \) , \(\forall i\in [m], j\in [n]\) and the column sparsity of matrix \(S\) is \(s\) . Independently in each column \(s\) positions are chosen uniformly at random without replacement, and each chosen position is set to \(-1/\sqrt{s}\) with probability \(1/2\) , and \(+1/\sqrt{s}\) with probability \(1/2\) .) Sparse OSEs can benefit from the sparsity of \(A\) , allowing for a running time of \(\widetilde{O}(\operatorname{nnz}(A)) + \widetilde{O}(d^\omega /\varepsilon ^2)\) , where \(\operatorname{nnz}(A)\) denotes the number of non-zeros in \(A\) .
i
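Following the Count-Sketch definition quoted above, the sketch below builds such a matrix with column sparsity \(s\) (as a dense array, purely for illustration; practical implementations exploit sparsity) and plugs it into the sketch-and-solve recipe from earlier; the parameter values are arbitrary choices.

```python
import numpy as np

def count_sketch(m, n, s, rng):
    """Count-Sketch matrix: each column has s entries equal to +/- 1/sqrt(s),
    placed uniformly at random without replacement, with random signs."""
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        S[rows, j] = signs / np.sqrt(s)
    return S

rng = np.random.default_rng(2)
n, d, m, s = 5000, 20, 400, 2
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = count_sketch(m, n, s, rng)
# Sketch-and-solve: solve the smaller m x d regression problem.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Cost ratio ||Ax' - b|| / ||Ax* - b||; should be close to 1.
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```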
ed6dda62-0559-4390-96f1-13a3a811cd8e
Due to the limitations of computing resources, two-grid finite element methods/nonlinear Galerkin schemes [1]}, [2]} and domain decomposition methods [3]} are nowadays popular and powerful tools for numerical simulations of linear and nonlinear PDEs. For two-grid/two-level post-processing schemes for incompressible flow, we refer to [4]}, [5]}, [6]}, [7]}, [8]}, [9]} and the references therein for details.
i
978b3ec1-b740-40ec-b9c9-f09ae28804d7
In the past decades, a local and parallel two-grid finite element method for elliptic boundary value problems was initially proposed in [1]}. The scheme first solves the elliptic equation on a coarse mesh to get an initial lower-frequency guess of the solution. Then the whole computational domain is divided into a series of disjoint subdomains \(\lbrace D_j\rbrace \) and the driving term of the error equation, namely the residual term, is split into several parts defined only on these small subdomains. Finally, the global error equation can be transformed into a series of subproblems with local driving terms. Since the higher-frequency part of the solution to each subproblem decays very fast away from the support of the local driving term, by suitably expanding each \(D_j\) to the domain \(\Omega _j\supset D_j\) and imposing the homogeneous boundary condition on \(\partial \Omega _j\) , each subproblem can be approximated in a localized version defined in the corresponding expanded domain \(\Omega _j\) . The most attractive feature of the scheme is that no communication is required between local fine grid subproblems, which makes the scheme a highly effective parallel scheme. Such a local and parallel two-grid scheme can be found in [2]} and has been extended to the Stokes equations in [3]}. Error estimates derived in [1]}, [3]} show that the approximate solutions in such schemes can reach the optimal convergence orders in both \(H^1\) and \(L^2\) norms.
i
ea38f847-77d4-49b8-ab5d-1e4c2b8dd395
However, according to [1]}, the error estimates are limited by the use of the superapproximation property of finite element spaces, which makes the error constant appearing in [2]}, [3]} of the form \(O(t^{-1})\) , where \(t\) denotes the distance between \(\partial D_j\) and \(\partial \Omega _j\) . To obtain the optimal error orders, usually \(t=O(1)\) is required, which means the distance between \(\partial \Omega _j\) and \(\partial D_j\) is almost a constant. Therefore one cannot expect \(\Omega _j\) to be arbitrarily small. This prevents the corresponding scheme from being used in large parallel computer systems. In the previously mentioned local and parallel schemes, the computational results for each \(\Omega _j\) are usually discarded outside \(D_j\) and simply pasted together to form the final approximation, in which discontinuities may appear along the boundaries of different \(D_j\) . Instead, based on the method of partition of unity [4]}, a local and parallel two-grid scheme was proposed for second order linear elliptic equations [5]}, [6]} and has also been extended to the Stokes equations and Navier-Stokes equations [7]}, [8]}. Although the partition of unity method can guarantee that the global approximation is continuous, the use of the superposition principle leads to the crucial requirement that the distance \(t\) should be \(O(1)\) . To overcome these defects, some research on linear elliptic equations has been done by the first author and his collaborator in [9]} using an iterative method.
i
2fbe2c3a-2f90-44dd-9929-13e489dbc409
In this paper, based on the basic idea presented in [1]}, [2]}, and as an important extension of the idea in [3]}, we construct a local and parallel two-grid iterative scheme for solving the Stokes equations, in which the scale of each subproblem is \(O(H)\) , requiring only that \(\mbox{diam}(D_j)=O(H)\) and \(t=O(H)\) (much smaller than \(O(1)\) ). Since there is only a small overlap between each two adjacent subproblems as \(H\) tends to zero, the scale of each subproblem can be arbitrarily small, of order \(O(H)\) ; that is the main reason why we call the scheme an expandable local and parallel two-grid scheme. Meanwhile, in each two-grid iteration cycle, to guarantee a better \(L^2\) error estimate, we adopt a coarse grid correction. Another main contribution of this paper is that, to obtain a globally continuous velocity in \(\Omega \) , we use the principle of superposition based on a partition of unity to generate a series of local and independent subproblems. In particular, for patches of given size, to obtain an approximate accuracy similar to that of the standard Galerkin method on the fine mesh, we carry out rigorous analysis and, through the a priori error estimate of the scheme, show that only a small number of iterations, of order \(O(|\ln H|^2)\) in 2-D and \(O(|\ln H|)\) in 3-D, respectively, is needed. A similar technique has been successfully applied to adaptive schemes with some a posteriori error estimates in [4]}, [5]}.
i
422cef69-9c1c-4f10-93f3-c46b87d7a71c
The remainder of this paper is organized as follows. In Section 2, we introduce the model problem and some preliminary material. In Section 3, we present our expandable local and parallel scheme for the Stokes equations. The a priori error estimates of the scheme are derived and the suggested iterative scheme is then presented in Section 4. Some numerical experiments, including 2-D and 3-D examples, are carried out on a parallel computer system to support our theoretical analysis in Section 5. Finally, we give some conclusions in Section 6.
i
eec9bb5a-6a33-46e5-af55-c4c4745e9620
In this paper, we have designed an expandable local and parallel two-grid finite element iterative scheme based on the superposition principle for the Stokes problem. The optimal convergence orders of the scheme are analyzed and obtained within suitably many two-grid iterations, while numerical tests in 2D and 3D are carried out to show the flexibility and high efficiency of the scheme. The extension of the scheme to time-dependent or nonlinear problems, e.g., the Navier-Stokes equations, will be our future work.
d
3dfc255b-c49e-4354-abd7-4882a06d819d
The film industry is one that stands out in every aspect; it is a world of its own. Our motivation for this paper comes from the desire to provide a prediction model for producers to get an idea of the commercial viability of their proposed movie. Joe Swanberg said in 2016, "The only way you’re ever going to make any money is if you’re investing in your own movies". Before movie producers finalize their decision, they have to ensure that their investment is sound and understand how they will see a return on that investment – this is where our model enters the movie world. Our main contributions in solving this problem can be summarized as follows:
i
34829c63-d2ad-4d4e-8ab5-aa05e890d1ef
We investigated a wide range of features that are likely to be associated with the commercial success of a movie. Unlike other available studies, we have incorporated many novel features such as publicity, release date, and movie cast & crew. We spent considerable time on feature engineering to better understand what factors make a movie financially lucrative. We extracted 11 different groups of features and built a random forest (RF) model to predict whether the return on investment (ROI) for the movie will be above or below the median. After training the RF, we identified the relative importance of each individual feature and of groups of features.
i
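The modelling step described above can be sketched roughly as follows with scikit-learn; the feature names, synthetic data, and split are hypothetical placeholders rather than the authors' actual pipeline.

```python
# Rough sketch: a random forest predicting whether a movie's ROI is above or
# below the median, then inspecting feature importances. Features and data
# here are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "budget": rng.lognormal(17, 1, 1000),
    "publicity_score": rng.uniform(0, 1, 1000),    # hypothetical feature
    "release_month": rng.integers(1, 13, 1000),    # hypothetical feature
    "cast_popularity": rng.uniform(0, 100, 1000),  # hypothetical feature
    "roi": rng.normal(1.0, 0.8, 1000),
})
df["label"] = (df["roi"] > df["roi"].median()).astype(int)  # above/below median ROI

X = df.drop(columns=["roi", "label"])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["label"], test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", rf.score(X_te, y_te))
print(dict(zip(X.columns, rf.feature_importances_.round(3))))
```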
bf6006b4-e5f3-41c3-bdaf-4ba864eecc1a
The remainder of the paper is structured as follows. In Section , we describe our research methodology. In Section , we present our findings, followed by Section , where we discuss the relationship between a few important features and ROI. In Section , we identify threats to validity, and finally, in Section , we conclude the paper and suggest the scope for future work.
i
a5b05a45-7ce9-4cc1-a653-066463f12485
Prize Collecting Steiner Tree (PCST) refers to a wide class of combinatorial optimization problems, involving variants of the Steiner tree and traveling salesman problems, with many practical applications in computer and telecommunication networks, VLSI design, computational geometry, wireless mesh networks, and cancer genome studies [1]}, [2]}, [3]}, [4]}, [5]}.
i
f02eaa2a-0f4e-40a8-a562-857b2e293ceb
In general, we are given a (directed) graph \(G\) , two functions modelling costs and prizes (or penalties) associated with the edges and/or the nodes of the graph, and we want to find a connected subgraph of \(G\) (usually a tree or an out-tree) \(T\) which optimizes an objective function defined as a combination of its costs and prizes and/or is subject to some constraints on its cost and prize. Casting suitable constraints and objective functions gives rise to different problems. For example, in budgeted problems, we are given a budget \(B\) and we require that the cost of \(T\) is no more than \(B\) and its prize is maximized. In quota problems, we require the prize of \(T\) to be at least some quota \(Q\) and its cost to be minimum. Additional constraints can be imposed; for example, in rooted variants we are given a specific node, called the root, which is required to be part of \(T\) and to reach all the nodes in \(T\) , while in Steiner tree problems \(T\) must include a specified set of nodes called terminals.
i
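To make the budgeted and quota variants concrete, here is a small sketch (using networkx, with made-up node costs and prizes) that checks whether a candidate node set induces a tree containing the root and evaluates its cost against a budget \(B\) and its prize against a quota \(Q\) .

```python
import networkx as nx

# Toy undirected graph with node costs and prizes (values are made up).
G = nx.Graph()
for v, cost, prize in [("r", 0, 0), ("a", 2, 5), ("b", 3, 4), ("c", 1, 7)]:
    G.add_node(v, cost=cost, prize=prize)
G.add_edges_from([("r", "a"), ("a", "b"), ("a", "c"), ("b", "c")])

def evaluate(G, nodes, root="r"):
    """Cost and prize of the node set, plus whether it induces a tree
    containing the root (the structure required of a solution T)."""
    T = G.subgraph(nodes)
    ok = root in nodes and nx.is_tree(T)
    cost = sum(G.nodes[v]["cost"] for v in nodes)
    prize = sum(G.nodes[v]["prize"] for v in nodes)
    return ok, cost, prize

B, Q = 5, 10
ok, cost, prize = evaluate(G, {"r", "a", "c"})
print(ok, "budget feasible:", cost <= B, "quota met:", prize >= Q)
```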
afc097cf-176d-4e46-8346-3f49d62d7057
While there is a vast literature providing approximation algorithms for many variants of PCST on undirected graphs where the prize function is additive, e.g. [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, the case of directed graphs or monotone submodular prize functions received less attention [9]}, [10]}, [11]}, [12]}, [13]}.
i
17634730-2c15-490a-94f7-87e0beef2ee4
In this paper, we consider node-weighted Steiner tree problems, that is, both costs and prizes are associated with the nodes of the graph, and we investigate two relevant settings: (i) the underlying graph is directed and the prize function is additive, and (ii) the underlying graph is undirected and the prize function is monotone and submodular. In both settings, we consider budgeted and quota problems. For the first setting we also study the minimum-cost Steiner tree problem. We consider the more general rooted variant of all these problems. For each of the above two settings, we introduce a new technique, relying on flow-based linear programming relaxations, which allows us to find trees or out-trees with a good trade-off between cost and prize. Casting suitable values of quota and budget and applying new and known tree trimming procedures, we can achieve good bicriteria approximation ratios for all the above problems.
i
cacb3445-7372-40c7-9aac-52b2dcd0ea7d
We obtained very simple polynomial time approximation algorithms for some budgeted and quota variants of node-weighted Steiner tree problems in two scenarios: (i) directed graphs with additive prizes, and (ii) undirected graphs with submodular prizes. The key insight behind our algorithms for the first scenario was to carefully select a subset of vertices as terminals in a fractional solution returned by the standard flow-based LPs and to use some flow properties to find a small hitting set through which all the chosen terminals can be reached from the root vertex. The key idea of our algorithms for the second scenario was to use the submodular flow problem and to introduce new LPs for our problems.
d
c4a4afa7-7b23-46c5-90a7-00c3740ebd5f
To the best of our knowledge, our techniques yield the first polynomial time (bicriteria) approximation algorithms for these problems (except DSteinerT) in terms of the number of vertices \(n\) . Furthermore, we believe that our introduced LPs can be utilized for some other Steiner problems in which, for example, other constraints can be added to (REF ) and (REF ).
d
c644ba6f-e2a2-4eb6-9c6e-e312cdd44678
A natural open question asks to improve the approximation guarantees or prove that the current guarantees are the best possible using the flow-based LPs. By the result of Bateni, Hajiaghayi and Liaghat [1]}, we know that the integrality gap of the flow-based LP for B-DRAT is infinite. This implies that, using this LP, we should work on improving the approximation factors for the budgeted problems while violating the budget constraint. Also, Li and Laekhanukit [2]} showed that the integrality gap of the flow-based LP for DSteinerT is polynomial in the number of vertices \(n\) . This means that using this LP, one needs to work on the possibility of achieving an approximation algorithm with the factor \(O(n^{\varepsilon })\) for DSteinerT, where \(0 <\varepsilon < 1/2\) . Another interesting future work would be the possibility of extending our techniques for the second scenario to directed graphs. Appendix: An equivalent formulation of flow constraints. All our sets of constraints and linear programs have an exponential number of variables. However, they can be solved in polynomial time as we only need to find, independently for any \(v \in V\setminus \lbrace r\rbrace \) , a flow from \(r\) to \(v\) of value \(x_v\) that does not exceed the capacity \(x_w\) (and \(nx_w\) in (REF )), for each vertex \(w \in V\setminus \lbrace v\rbrace \) . Indeed, taking the example of (REF ), the flow variables appear only in constraints () and (), while the quota and budget constraints only depend on the capacity variables. Therefore, we can replace the flow variables and constraints () and () with an alternative formulation of flow variables and constraints in such a way that for any assignment \(x\) of capacity variables, there exists a feasible assignment of flow variables if and only if there exists a feasible assignment of the alternative flow variables. The two following sets of constraints, where \(x\) gives capacity values, are equivalent in this sense. \(& \\\sum _{P \in \mathcal {P}_v}f^v_P &= x_v,&& \forall v \in V\setminus \lbrace r\rbrace \\\sum _{P \in \mathcal {P}_v:w \in P} f^v_{P}&\le x_w,&& \forall v\in V\setminus \lbrace r\rbrace \text{ and }\forall w \in V\setminus \lbrace v\rbrace \\0 \le f^v_P &\le 1, &&\forall v\in V\setminus \lbrace r\rbrace , P \in \mathcal {P}_v\) \(& \\\sum _{w\in V} f^v_{wv} &= x_v,&& \forall v \in V\setminus \lbrace r\rbrace \\\sum _{u\in V} f^v_{wu} &\le x_w,&& \forall v\in V\setminus \lbrace r\rbrace \text{ and }\forall w \in V\setminus \lbrace v\rbrace \\\sum _{u \in V} f^v_{wu}&=\sum _{u \in V} f^v_{uw}&& \forall v\in V\setminus \lbrace r\rbrace \text{ and }\forall w \in V\setminus \lbrace r, v\rbrace \\\ 0 \le &f^v_{wu} \le 1, && \forall v\in V\setminus \lbrace r\rbrace \text{ and }\forall w,u\in V\) In both formulations, the values of \(x\) are fixed, while \(f_P^v\) , for each \(v\in V\setminus \lbrace r\rbrace \) , and \(f^v_{wu}\) , for each \(v\in V\setminus \lbrace r\rbrace \) and \(w,u \in V\setminus \lbrace v\rbrace \) , are the flow variables for (REF ) and (REF ), respectively. In (REF ), for any \(v \in V \setminus \lbrace r\rbrace \) , \(r\) has to send \(x_v\) units of commodity \(v\) to every vertex \(v\) and \(f^v_{wu}\) is the flow of commodity \(v\) on the directed edge \((w, u)\) . The constraints in (REF ) are as follows. Constraint () ensures that the amount of flow entering each vertex \(v \in V\setminus \lbrace r\rbrace \) should be equal to \(x_v\) .
Constraints () and () formulate the standard flow-constraint encoding of the connectivity constraint, in which, in a flow from \(r\) to \(v\) , \(x_u\) is the capacity of each vertex \(u \in V \setminus \lbrace v\rbrace \) and \(f^v_{wu}\) is the flow of commodity \(v\) on the directed edge \((w,u)\) . It is easy to see that, given an assignment of capacity variables \(x\) , there exists an assignment of variables \(f^v_P\) , for each \(v\in V\setminus \lbrace r\rbrace \) and \(P \in \mathcal {P}_v\) , that satisfies constraints (REF ) if and only if there exists an assignment of variables \(f^v_{wu}\) , for each \(v\in V\setminus \lbrace r\rbrace \) and \(w,u \in V\setminus \lbrace v\rbrace \) , that satisfies constraints (REF ). In fact, both conditions are satisfied if and only if it is possible, for each \(v\in V\setminus \lbrace r\rbrace \) , to send \(x_v\) units of flow from \(r\) to \(v\) satisfying the node capacities defined by \(x\) . In our approximation algorithms, we will use only the capacity variables, which we can compute by replacing (REF ) with (REF ) in the respective linear program and hence solving linear programs with a polynomial number of variables. Proof of Claim REF. For each element \(i\in V^{\prime }\) , we define a counter \(c_i\) of the number of sets which \(i\) belongs to, \(c_i:=|\lbrace j~:~ i\in X^{\prime }_j\rbrace |\) . We initialize \(X^{\prime }\) to \(\emptyset \) and iterate the following greedy steps until \(X^{\prime }\) hits all the subsets of \(\Sigma \) : (1) select the element \(i\) that maximizes \(c_i\) ; (2) add \(i\) to \(X^{\prime }\) ; (3) update the counters of all elements that belong to a set to which \(i\) also belongs. The above algorithm runs for at most \(N\) iterations since at least one subset is covered in each iteration. Moreover, each iteration requires polynomial time in \(N\) and \(M\) . For \(k\ge 0\) , let \(N_k\) be the number of sets not covered by \(X^{\prime }\) after \(k\) iterations of the above algorithm. We have \(N_0=N\) and \(N_{|X^{\prime }|} =0\) . Let \(i\) be the element selected at iteration \(k\ge 1\) . At the beginning of iteration \(k\) , before adding \(i\) to \(X^{\prime }\) , we have that \(\sum _{\ell :c_\ell >0} c_\ell \ge R \cdot N_{k-1},\) and since \(|\lbrace \ell :c_\ell >0\rbrace | \le M-k+1\) , by an averaging argument, we must have \(c_i\ge \frac{R N_{k-1}}{M-k+1}\) . It follows that: \(N_k =N_{k-1}-c_i\le \left( 1 - \frac{R}{M-k+1}\right) N_{k-1}\le N_0 \prod _{\ell =0}^{k-1} \left( 1 - \frac{R}{M-\ell }\right) = N \prod _{\ell =0}^{k-1} \left( 1 - \frac{R}{M-\ell }\right) < N \left( 1 - \frac{R}{M}\right)^{k}\le N e^{-Rk/M}~,\) where the second inequality is due to \(k-1\) recursions on \(N_{k-1}\) and the last one is due to \(1-x\le e^{-x}\) , for any \(x\ge 0\) . For \(k=\frac{M}{R}\ln {N}\) we have \(N_k< 1\) , which means \(N_k=0\) and hence \(|X^{\prime }|\le \frac{M}{R}\ln {N}\) .
d
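The greedy procedure described in the proof sketch above can be written down directly; the following Python sketch follows steps (1)–(3), repeatedly picking the element that hits the most still-uncovered subsets (the toy instance is made up).

```python
def greedy_hitting_set(subsets):
    """Greedy hitting set: repeatedly pick the element contained in the most
    still-uncovered subsets, as in steps (1)-(3) of the proof sketch."""
    uncovered = [set(s) for s in subsets]
    hitting = set()
    while uncovered:
        # Counter c_i: number of uncovered subsets containing element i.
        counts = {}
        for s in uncovered:
            for i in s:
                counts[i] = counts.get(i, 0) + 1
        best = max(counts, key=counts.get)                    # step (1)
        hitting.add(best)                                     # step (2)
        uncovered = [s for s in uncovered if best not in s]   # step (3)
    return hitting

# Toy set family Sigma over a small ground set.
sigma = [{1, 2}, {2, 3}, {3, 4}, {1, 4}, {2, 4}]
print(greedy_hitting_set(sigma))
```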
d0424e34-09c8-4351-876e-bf146ee07b78
A universal quantum computer should be able to perform arbitrary computations on a quantum system. It is common to break down a given computation into a sequence of elementary gates, each of which can be implemented with low cost on an experimental architecture. However, given an abstract representation of the desired computation, such as a unitary matrix, it is in general difficult and time consuming to find a low-cost circuit implementing it. Here, we introduce an open source Mathematica package, UniversalQCompiler (see our webpage for a reference to the github repository and the documentation: http://www-users.york.ac.uk/~rc973/UniversalQCompiler.html), that allows for automation of the compiling process on a small number of qubits. The package requires an existing Mathematica package, QI (https://github.com/rogercolbeck/QI), which can easily handle common computations in quantum information theory, such as partial traces over various qubits or the Schmidt decomposition. Since the code is provided for Mathematica, our packages are well adapted for analytic calculations and can be used alongside the library of mathematical tools provided by Mathematica. Together, these constitute a powerful set of tools for analysing protocols in quantum information theory and then compiling the computations into circuits that can finally be run on an experimental architecture, such as IBM Q Experience (see Figure REF for an overview). UniversalQCompiler focuses on the compilation process, and performs a few basic simplifications on the resulting quantum circuit. Hence, one might want to put the gate sequences obtained from UniversalQCompiler into either a source-to-source compiler or a transpiler (see for example [1]}, [2]}, [3]}) in order to optimize the gate count of the circuits further or to map them to different hardware, which may have restrictions on the qubit-connectivity [4]}, [5]}, [6]}, [7]}, [8]}. <FIGURE>
i
a2a26ca5-8ac5-4032-b56a-e4bdc90db4bb
The package UniversalQCompiler provides code for all the decompositions described in [1]}, which are near optimal in the required number of gates for generic computations in the quantum circuit model (in fact, the achieved C-not counts differ by a constant factor of about two from a theoretical lower bound given in [1]}). Note that our decompositions may not lead to optimal gate counts for computations of a special form lying in a set of measure zero (see [1]} for the details), as for example for a unitary that corresponds to the circuit performed for Shor's algorithm [4]}. Hence, to optimize the gate counts when decomposing operations of certain special forms, such as diagonal gates, multi-controlled single-qubit gates and uniformly-controlled gates, we provide separate commands. In addition, we provide methods for analyzing, simplifying and manipulating gate sequences. Outputs are given in a bespoke gate list format, and can be exported as graphics, or to LaTeX using the format of Q-circuit [5]}.
i
14e6c1c7-98ac-4d69-94ed-1a4f2845be23
UniversalQCompiler is intended to be an academic software library that focuses on simplicity and adaptability of the code; it was not our focus to optimize the (classical) run time of the decomposition methods (the theoretical decompositions mainly focused on minimizing the C-not count). Detailed documentation and an example notebook are published together with our code and should help the user to get started quickly. The aim of this paper is to give an overview of the package UniversalQCompiler and to provide some theoretical background about the decomposition methods that it uses. A separate manual provided with the package gives more details.
i
f752b6cc-ee49-424f-9bbe-82e3474e776d
We work with the universal gate library consisting of arbitrary single-qubit rotations and C-not gates (we also explain how to convert gate sequences from this universal set to another that comprises single-qubit rotations and Mølmer-Sørensen gates (see Appendix ), which are common on experimental architectures with trapped ions). UniversalQCompiler decomposes different classes of quantum operations into sequences of these elementary gates keeping the required number of gates as small as possible.
i
8c299e7f-59e5-4899-8d3b-e391f4ffa9fc
In Section , we describe how to use UniversalQCompiler to decompose arbitrary isometries from \(m\) to \(n\ge m\) qubits describing the most general evolution that a closed quantum system can undergo. Mathematically, an isometry from \(m\) to \(n\) qubits is an inner-product preserving transformation that maps from a Hilbert space of dimension \(2^m\) to one of dimension \(2^n\) . Physically, such an isometry can be thought of as the introduction of \(n-m\) ancilla qubits in a fixed state (conventionally \(\left| 0 \right>\) ) followed by a general \(n\) -qubit unitary on the \(m\) input qubits and ancilla qubits. Unitaries and state preparation on \(n\) qubits are two important special cases of isometries from \(m\) to \(n\) qubits, where \(m=n\) and \(m=0\) , respectively.
i
b89c2d41-3d39-4a81-a0ab-d20f7c119eea
In Section , we consider the decomposition of quantum channels from \(m\) to \(n\) qubits (no longer restricting to \(m\le n\) ). A quantum channel describes the most general evolution an open quantum system (i.e., a quantum system that may interact with its environment) can undergo. Mathematically, a quantum channel is a completely positive trace-preserving map from the space of density operators on \(m\) qubits to the space of density operators on \(n\) qubits. UniversalQCompiler takes a mathematical description of such a quantum channel (which can be supplied in Kraus representation or as a Choi state) and returns a gate sequence that implements the channel (in general after tracing out some qubits at the end of the circuit). The decomposition is nearly optimal for generic channels working in the quantum circuit model [1]}. However, working in more general models would allow further reductions in the number of gates [2]}. We plan to implement code for the decompositions described in [2]} in the future. For an overview of possible applications of implementing channels, see [4]}.
i
be5e747b-9417-4615-bba5-04e14dfeba81
In Section , we describe how to implement arbitrary POVMs on \(m\) qubits describing the most general measurements that can be performed on a quantum system. Similarly to the case of channels, working in generalized models can reduce the gate count further [1]}, and we plan to implement these in a future version. See also [2]} for an application of UniversalQCompiler for synthesis of POVMs.
i
66d4f162-8388-4cd8-a0dc-7a3ae47b43e3
In Section , we extend from POVMs to quantum instruments. These can be thought of as the most general type of quantum measurement where we care about the post-measurement state (in contrast to a POVM where we only care about the distribution over the classical outcomes). Our decompositions for these are based on those used for channels, and again could be improved using additional methods from [1]}.
i
6aca25ee-6f17-464f-88e1-f47dd6a0d63a
Finally, in Section , we explain how to automatically translate our circuits to the open quantum assembly language (OpenQASM) [1]}, which allows our package to interface with other quantum software packages. <TABLE>
i
50ce2567-2383-4958-9bf9-c9e7b508f837
Nowadays, computer networks play an essential role in our world economy and society; we cannot imagine a moment without being connected to a network. Many Internet of Things (IoT) systems are also built on networks [1]}. Modern power systems, for homes and other settings, are connected to computer networks so that power flow can be controlled by the power grid company [2]}[3]}. On the other hand, today’s modern vehicles embed a number of computing devices called Electronic Control Units (ECUs), which communicate with each other over in-vehicle networks and the Internet to facilitate advanced automotive applications like auto-driving services and many more [4]}. With the development of network techniques and science and technology, the IT industry has expanded greatly. Almost all organizations, such as governments, enterprises, and banking systems, as well as personal users, are becoming increasingly dependent on computer networks. Such services demand high attention to data security and integrity. Additionally, the network architecture used worldwide has inherent architectural defects at its different layers [5]}. Our network architecture and protocols were built in the early days of networking, when no one thought about the security holes that we are facing now. There are intruders [6]} (inside or outside), hackers, spammers, and many more, who are constantly trying to break or crack our network systems and protocols to gain access with criminal intent.
i
c377b371-4d7d-4806-b117-4c708e973b82
As our network systems exchange packets [1]} at their different layers, all network activity is transmitted on a packet basis. In general, the incoming and outgoing activity of these data packets is known as network traffic. To monitor or detect any unusual activity on a computer network, the approach of classifying network traffic is widely used. Simply put, if we can distinguish normal network traffic from all other traffic, then network attacks can be stopped beforehand. This is the key idea of any intrusion detection and prevention system: it classifies and distinguishes normal traffic from all others. Traditional network traffic classification is done with a misuse-based detection approach [2]} [3]}, where network attacks are identified using predefined attack signatures. There are some generic algorithms for it, like [4]}[5]}[6]}. Although this traditional approach is quite good and detects known attacks effectively, it has remarkable disadvantages. It is effective only for known attacks and does nothing for unknown attacks. As attackers build new attack tools and methods, most likely on a daily basis, it requires continuous updates of its attack-signature dataset to stay up to date. This means that a new attack cannot be handled before new signatures are deployed, which is a truly dangerous and expensive security hole. Some attacks, like zero-day exploits and worms, use polymorphism, which delays the generation of attack signatures [7]} [8]} [9]}. All of these factors put a question mark on traditional traffic classification. To fill that gap, a new anomaly-based traffic classification has come into the mainstream, which includes deep neural networks. According to some cyber-security experts, new attacks are just variants of known attacks, and by the similarity of their parameters it is possible to detect these variations [10]}. Deep neural networks do an excellent job in this type of scenario. Another good thing about this approach is that it yields more than 90% accuracy and also covers new types of attacks.
i
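A minimal sketch of the learning-based classification idea described above, using a small multilayer perceptron on synthetic flow features; the feature set, data, and architecture are illustrative assumptions and do not reproduce any specific cited system.

```python
# Minimal sketch: learning-based traffic classification on synthetic flow
# features (feature set and data are illustrative, not from a real trace).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 4000
# Hypothetical per-flow features: duration, bytes, packets, mean inter-arrival time.
normal = rng.normal([1.0, 500, 10, 0.05], [0.3, 150, 3, 0.02], size=(n, 4))
attack = rng.normal([0.2, 5000, 200, 0.001], [0.1, 1000, 50, 0.001], size=(n, 4))
X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * n)          # 0 = normal traffic, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```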
46a1473e-ab54-49ff-92ad-26c63522fd86
These are the most popular techniques and methods for classifying network traffic with the help of neural networks. The data used for training and testing was obtained either from real traffic, as in [1]}, or from simulated traffic, as in [2]}. Different researchers used different network architectures in their designs, but most of them used a single hidden layer and obtained good results. Some used plain neural networks with 1 to 7 hidden layers, some additionally used RPROP [3]}[4]}[5]}, some used convolutional neural networks [2]}, and some used recurrent neural networks [7]}. They used MATLAB, PlaNet, OPNET, NeuralWorks simulators, and other individually developed tools. Overall, their accuracy reaches up to 100% [8]} on their test sets. Although it introduces some training overhead, in every case the neural network produces remarkable results. In real scenarios, these models can reduce classification, and more specifically detection, time. Despite all this, the application of neural networks to computer network traffic classification is an ongoing area and has so far been limited to academic research [9]}. This field is worth more work toward higher accuracy and precision.
d
4ff37614-494f-4705-ac45-1615b9fcc370
Dialogue state tracking (DST) is an important module in many task-oriented dialogue systems. The goal of this module is to extract users' intentions at each turn in the dialogue as represented in the slot values of a predefined schema. Collecting and annotating turn-level dialogue states is notoriously hard and expensive [1]}. Also, in commercial applications, it is common to extend the schema and incorporate new domains. Thus, it is important to develop DST learning strategies that are flexible and scalable, in addition to requiring less data.
i
8a9ee317-b87f-4121-a083-3fdbae7377b2
Many previous studies have explored few-shot DST [1]}, [2]}, [3]}, [4]}, [5]}, [6]}. However, they have certain limitations. First, all of these works are based on finetuning pretrained language models. For each new slot or domain added to the schema, or even new examples added, the models need to be trained again, which is computationally expensive and makes the system less flexible. We also need to retrain these systems if we want to update their behavior after deployment. Second, to achieve reasonable performance in few-shot settings, these models often rely on labeled data from other tasks or extra knowledge. For example, TransferQA [3]} relies on a large amount of QA data, SGPDST [8]} leverages natural language descriptions of schema, and PPTOD [5]} uses \(\sim \) 2M dialogue utterances.
i
39ae4dc1-6a00-4b08-a53f-d2c2f45c48e7
To address the above challenges, we propose the IC-DST model to solve the DST problem with the in-context learning paradigm [1]}, in which a large language model makes predictions based on retrieved exemplars from a small set of labeled training data. A key motivation behind this framework is that it requires no finetuning (i.e., no parameter updates), which makes in-context learning models flexible and scalable in that they can handle queries in a new domain via the exemplar retrieval process without re-training. This enables developers to quickly prototype DST systems in new domains and rapidly leverage new collected data. Moreover, compared to traditional few-shot finetuning approaches, in-context learning allows us to rapidly control the behavior of the DST system and correct its errors by simply updating in-context examples without re-training. This approach has proven to be successful in semantic parsing  [2]}, [3]}, especially in few-shot scenarios. However, these studies focus on sentence-level tasks. DST requires understanding a two-speaker multi-sentence history, presenting new challenges for in-context learning.
i
190284b7-2f3c-4400-b41d-7861b0b8d35f
To solve the length challenge of encoding long dialogue history, we rethink the dialogue representation in prompts. We also propose an efficient way to learn a similarity score for dialogue contexts that is trained to match similarity based on dialogue state labels. This allows us to retrieve related exemplars that are more relevant to a particular test context. The proposed dialogue retriever and the in-context learning framework can be applied to various kinds of dialogue tasks, not restricted to DST. In addition, we propose to reformulate DST as a text-to-SQL task. This allows us to leverage the large language models pretrained with code: Codex [1]} and GPT-Neo [2]}.
i
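To illustrate the in-context learning setup described above, here is a rough sketch of assembling a text-to-SQL prompt from retrieved exemplars; the table schema, exemplar format, and retrieval step are simplified stand-ins rather than the paper's exact pipeline, and the completed prompt would then be sent to a code-pretrained language model.

```python
# Rough sketch of prompt assembly for in-context DST as text-to-SQL.
# Schema, exemplars, and retrieval are simplified stand-ins.

SCHEMA = "CREATE TABLE hotel(name text, area text, pricerange text, stars int)"

def format_example(dialogue_context, sql):
    return f"-- dialogue: {dialogue_context}\n{sql}"

def build_prompt(retrieved_exemplars, test_context):
    parts = [SCHEMA]
    for ctx, sql in retrieved_exemplars:                   # nearest labeled examples
        parts.append(format_example(ctx, sql))
    parts.append(f"-- dialogue: {test_context}\nSELECT")   # the model completes the SQL
    return "\n\n".join(parts)

exemplars = [
    ("[user] I need a cheap hotel in the north.",
     "SELECT * FROM hotel WHERE pricerange = 'cheap' AND area = 'north'"),
]
prompt = build_prompt(exemplars, "[user] Find me a 4 star hotel in the centre.")
print(prompt)
# The completion returned by the language model would be parsed back into
# slot-value pairs to form the predicted dialogue state.
```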
3a51c56a-91b8-4c83-98b9-087eb93a81bd
To our knowledge, we are the first to successfully apply in-context learning for DST, building on a text-to-SQL approach. To extend in-context learning to dialogues, we introduce an efficient representation for the dialogue history and a new objective for dialogue retriever design. Our system achieves a new state of the art on MultiWOZ 2.1 and 2.4 in few-shot settings, specifically when using 1-10% of the training data. We also conduct extensive analyses to study what works and what does not for dialogue with in-context learning, including but not limited to, how much training data is needed, what the best unit for exemplar retrieval is, what the best output representation for dialogue states is, and the common error types for in-context learning approaches.
i
1803e888-e5b2-4ae3-8bd6-4058a0e71398
The standard joint goal accuracy (JGA) is used as the evaluation metric. It treats a prediction as correct only if for every domain all slots exactly match the ground-truth values. We also report the \(F_1\) on slots for analysis.
m
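A small sketch of how joint goal accuracy and slot \(F_1\) can be computed from predicted and gold dialogue states represented as slot-value dictionaries; this is one straightforward reading of the metrics, not necessarily the exact evaluation script.

```python
def joint_goal_accuracy(preds, golds):
    """Fraction of turns whose predicted state matches the gold state exactly."""
    return sum(1 for p, g in zip(preds, golds) if p == g) / len(golds)

def slot_f1(preds, golds):
    """Micro-averaged F1 over (slot, value) pairs across all turns."""
    tp = fp = fn = 0
    for p, g in zip(preds, golds):
        p_items, g_items = set(p.items()), set(g.items())
        tp += len(p_items & g_items)
        fp += len(p_items - g_items)
        fn += len(g_items - p_items)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

golds = [{"hotel-area": "north", "hotel-stars": "4"}, {"taxi-destination": "museum"}]
preds = [{"hotel-area": "north", "hotel-stars": "4"}, {"taxi-destination": "station"}]
print(joint_goal_accuracy(preds, golds), slot_f1(preds, golds))
```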
351c1e3f-d269-42f0-849d-3eb8d43c1500
To better understand the effectiveness of our proposed methods, we provide detailed analysis in this section. All experiments are conducted on MultiWOZ 2.4 development set in 5% few-shot setting. <TABLE>
m
6a519ced-d689-492c-a002-c0496c37468d
We successfully apply in-context learning to dialogue state tracking by introducing a new approach to representing dialogue context, a novel objective for retriever training, and by reformulating DST as a text-to-SQL task. On MultiWOZ 2.1 and 2.4, our system achieves a new state of the art in few-shot settings. We also study in detail the contribution of each design decision. Future work may apply this in-context learning framework to a wider range of dialogue tasks.
d
343f02b9-03d7-4e1e-a5f9-451312f62f59
Recently, random matrix theory has become one of the most exciting fields in probability theory, and has been applied to problems in physics [1]}, high-dimensional statistics [2]}, wireless communications [3]}, finance [4]}, etc. The Tracy-Widom distributions, or, more generally, the distributions of the \(k\) -th largest level at the soft edge scaling limit of Gaussian ensembles, are some of the most important distributions in random matrix theory, and their numerical evaluation is a subject of great practical importance. There are generally two ways of calculating the distributions to high accuracy numerically: one, using the Painlevé representation of the distribution to reduce the calculation to solving a nonlinear ordinary differential equation (ODE) numerically [5]}, and the other, using the determinantal representation of the distribution to reduce the calculation to an eigenproblem involving an integral operator [6]}.
i
0f5f3b43-0491-48c3-a812-3bb8c9e445c8
In the celebrated work [1]}, the Tracy-Widom distribution for the Gaussian unitary ensemble (GUE) was shown to be representable as an integral of a solution to a certain nonlinear ODE called the Painlevé II equation. This nonlinear ODE can be solved to relative accuracy numerically, but achieving relative accuracy is extremely expensive, since it generally requires multi-precision arithmetic [2]}. In addition, the extension of the ODE approach to the computation of the \(k\) -th largest level at the soft edge scaling limit of Gaussian ensembles is not straightforward, as it requires deep analytic knowledge for deriving connection formulas [3]}, [4]}.
i
0cc9482c-39c7-4741-84be-faacd421cb88
On the other hand, the method based on the Fredholm determinantal representation uses the fact that the cumulative distribution function (CDF) of the \(k\) -th largest level at the soft edge scaling limit of the Gaussian unitary ensemble can be written in the following form: \(F_2(k;s)= \sum _{j=0}^{k-1} \frac{(-1)^j}{j!} \frac{\partial ^j}{\partial z^j} \det \big (I - z\mathcal {K}|_{L^2[s,\infty )}\big ) \Bigr |_{z=1},\)
i
d88975ec-70a1-47bc-bf11-86bdce149017
where \(\text{Ai}(x)\) is the Airy function of the first kind (see [1]}, [2]} for the derivations). We also note that there exist similar Fredholm determinantal representations for the cases of the Gaussian orthogonal ensemble (GOE) and Gaussian symplectic ensemble (GSE) (see Section REF ). The cumulative distribution function and the probability density function (PDF) of the distribution can be computed using the eigendecomposition of the so-called Airy integral operator \(\mathcal {T}_s|_{L^2[0,\infty )}\) , where \(\mathcal {T}_s[f](x)=\int _0^\infty \text{Ai}(x+y+s)f(y) \,\mathrm {d}y\) . This is because \(\mathcal {K}|_{L^2[s,\infty )} = \mathcal {G}_s^2\) , where \(\mathcal {G}_s[f](x)=\int _s^\infty \text{Ai}(x+y-s)f(y) \,\mathrm {d}y\) , and \(\mathcal {T}_s\) shares the same eigenvalues and eigenfunctions (up to a translation) with \(\mathcal {G}_s|_{L^2[s,\infty )}\) . If the eigenvalues of the integral operator \(\mathcal {T}_s\) are computed directly, they can be known only to absolute precision since \(\mathcal {T}_s\) is a compact integral operator. Furthermore, the number of degrees of freedom required to discretize \(\mathcal {T}_s\) increases when the kernel is oscillatory (as \(s\rightarrow -\infty \) ).
i
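For comparison with the direct approach mentioned above (which yields only absolute accuracy), the following sketch discretizes the Airy integral operator \(\mathcal {T}_s\) with Gauss–Legendre quadrature on a truncated interval and computes a few of its largest eigenvalues numerically; the truncation length and node count are ad hoc choices.

```python
import numpy as np
from scipy.special import airy
from scipy.linalg import eigh

def airy_operator_eigs(s, L=30.0, n=200):
    """Nystrom discretization of T_s[f](x) = int_0^inf Ai(x+y+s) f(y) dy,
    truncated to [0, L] with Gauss-Legendre quadrature (ad hoc choices)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * L * (nodes + 1.0)                          # map nodes from [-1, 1] to [0, L]
    w = 0.5 * L * weights
    Ai = airy(x[:, None] + x[None, :] + s)[0]            # Ai(x_i + x_j + s)
    K = np.sqrt(w)[:, None] * Ai * np.sqrt(w)[None, :]   # symmetrized kernel
    return np.sort(eigh(K, eigvals_only=True))[::-1]

# A few of the largest eigenvalues of T_0 (absolute accuracy only, as noted above).
print(airy_operator_eigs(s=0.0)[:5])
```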
1d6ba943-20a6-4655-ab2c-fa83a9db3268
In this manuscript, we present a new method for computing the eigendecomposition of the Airy integral operator \(\mathcal {T}_s\) . It exploits the remarkable fact that the Airy integral operator admits a commuting differential operator, which shares the same eigenfunctions (see, for example, [1]}, [2]}). In our method, we compute the spectrum and the eigenfunctions of the differential operator by computing the eigenvalues and eigenvectors of a banded eigenproblem. Since the eigenproblem is banded, the eigendecomposition can be done very quickly in \(\mathcal {O}(n^2)\) operations, and the eigenvalues and eigenvectors can be computed to entry-wise full relative precision. Finally, we use the computed eigenfunctions to recover the spectrum of the integral operator \(\mathcal {T}_s\) , also to full relative precision. As a direct application, our method computes the distributions of the \(k\) -th largest level at the soft edge scaling limit of Gaussian ensembles to full relative precision rapidly everywhere except in the left tail (the left tail is computed to absolute precision). We note that several other integral operators admitting commuting differential operators have been studied numerically from the same point of view as this manuscript (see, for example, [3]}, [4]}).
i
c2f334ff-5af1-4573-82f3-66477cb2fb61
Integral operators like \(\mathcal {T}_s\) , which admit commuting differential operators, are known as bispectral operators (see, for example, [1]}). One famous example of a bispectral operator is the truncated Fourier transform, which was investigated by Slepian and his collaborators in the 60's [2]}; its eigenfunctions are known as prolate spheroidal wavefunctions. We note that, unlike prolates, the eigenfunctions of the operator \(\mathcal {T}_s\) are relatively unexamined: “In the case of the Airy kernel, the differential equation did not receive much attention and its solutions are not known” (see Section 24.2 in [3]}). In this manuscript, we also characterize these previously unstudied eigenfunctions, and describe their extremal properties in relation to an uncertainty principle involving the Airy transform.
i
e527f98a-0cf8-4eb0-8c3e-195599184c8d
Finally, we note that the Airy integral operator \(\mathcal {T}_s\) is rather universal. For example, in Section REF , we describe an application to optics. In that section, we use the eigenfunctions of the Airy integral operator to compute a finite-energy Airy beam that is optimal, in the sense that the beam is both maximally concentrated, and maximally non-diffracting and self-accelerating.
i
556918a0-aad1-4d49-919f-5ce7d1ecf1c3
We implemented our algorithm in FORTRAN 77, and compiled it using Lahey/Fujitsu Fortran 95 Express, Release L6.20e. For the timing experiments, the Fortran codes were compiled using the Intel Fortran Compiler, version 2021.2.0, with the -fast flag. We conducted all experiments on a ThinkPad laptop, with 16GB of RAM and an Intel Core i7-10510U CPU.
m
5d05b7e9-1103-4570-bffb-6fbcb1168d25
In this manuscript, we present a numerical algorithm for rapidly evaluating the eigendecomposition of the Airy integral operator \(\mathcal {T}_c\) , defined in (REF ). Our method computes the eigenvalues \(\lambda _{j,c}\) of \(\mathcal {T}_c\) to full relative accuracy, and computes the eigenfunctions \(\psi _{j,c}\) of \(\mathcal {T}_c\) and \(\mathcal {L}_c\) in the form of an expansion (REF ) in scaled Laguerre functions, where the expansion coefficients are also computed to full relative accuracy. In addition, we characterize the previously unstudied eigenfunctions of the Airy integral operator, and describe their extremal properties in relation to an uncertainty principle involving the Airy transform.
d
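The expansion in scaled Laguerre functions mentioned above (the paper's equation (REF), not reproduced here) can be evaluated along the lines of the following sketch. The specific form \(\sum _k c_k e^{-sx/2} L_k(sx)\) and the scale parameter are assumptions for illustration; the paper's own scaling and normalization may differ.

```python
import numpy as np
from scipy.special import eval_laguerre

def evaluate_laguerre_expansion(coeffs, x, scale=1.0):
    """Evaluate sum_k c_k * exp(-s*x/2) * L_k(s*x) at the points x.

    This generic 'scaled Laguerre function' form is assumed for illustration;
    the exact expansion used in the manuscript is given by its eq. (REF).
    """
    x = np.asarray(x, dtype=float)
    sx = scale * x
    total = np.zeros_like(x)
    for k, c in enumerate(coeffs):
        total += c * eval_laguerre(k, sx)
    return np.exp(-0.5 * sx) * total

coeffs = [1.0, -0.3, 0.05]               # hypothetical expansion coefficients
print(evaluate_laguerre_expansion(coeffs, np.linspace(0.0, 10.0, 5)))
```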
4215e8f9-117b-47e4-ace2-3effbc540859
We also describe two applications. First, we show that this algorithm can be used to rapidly evaluate the distributions of the \(k\) -th largest level at the soft edge scaling limit of Gaussian ensembles to full relative precision everywhere except in the left tail (the left tail is computed to absolute precision). Second, we show that the eigenfunctions of the Airy integral operator can be used to construct a finite-energy Airy beam that is optimal, in the sense that the beam is both maximally concentrated, and maximally non-diffracting and self-accelerating.
d
ada56039-23be-4176-bc69-5614cedb5ad0
Nowadays, in many safety-critical systems, which are prevalent, e.g., in smart grids [1]} and the automotive industry [2]}, a catastrophic accident may happen due to the coincidence of sudden events and/or failures of specific subsystem components. These undesirable accidents may result in loss of profits and, in some cases, severe fatalities. Therefore, the central question in many critical systems, where safety is of the utmost importance, is to identify the possible consequences on the entire system given that one or more components could fail at the subsystem level. For that purpose, safety design engineers perform a detailed Cause-Consequence Diagram (CCD) [3]} reliability analysis to identify the subsystem events that prevent the entire system from functioning as desired. This approach models the causes of component failures and their consequences on the entire system using Fault Tree (FT) [4]} and Event Tree (ET) [5]} dependability modeling techniques.
i
d70febbf-d7fc-45de-b5e1-0ca56091e329
FTs mainly provide a graphical model for analyzing the factors causing a system failure upon their occurrence. FTs are generally classified into two categories: Static Fault Trees (SFT) and Dynamic Fault Trees (DFT) [1]}. SFTs and DFTs allow safety analysts to capture the static/dynamic failure characteristics of systems in a very effective manner using logic gates, such as OR, AND, NOT, Priority-AND (PAND) and SPare (SP) [2]}. However, the FT technique is incapable of identifying the possible consequences resulting from an undesirable failure on the entire system. ETs provide risk analysis with all possible system-level operating states that can occur in the system, i.e., success and failure, exactly one of which will occur [3]}. However, these two modeling techniques are limited to analyzing either the failure causes of a critical system (FTs) or the cascading dependencies of system-level components (ETs), respectively.
i
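For concreteness, the static FT gates mentioned above reduce, under the usual assumption of independent basic events, to simple probability algebra; the sketch below illustrates that arithmetic and is not taken from the cited formalizations.

```python
from functools import reduce

def and_gate(probs):
    """AND gate: the output fails only if all (independent) inputs fail."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """OR gate: the output fails if at least one (independent) input fails."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

def not_gate(p):
    """NOT gate: probability that the input event does not occur."""
    return 1.0 - p

# Example: a subsystem fails if component A fails AND (B fails OR C fails).
p_a, p_b, p_c = 0.01, 0.02, 0.03
print(and_gate([p_a, or_gate([p_b, p_c])]))
```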
a9801502-20d8-486e-a782-7560f7f3db85
There exist some techniques that have been developed for subsystem-level reliability analysis of safety-critical systems. For instance, Papadopoulos et al. in [1]} have developed a software tool called HiP-HOPS (Hierarchically Performed Hazard Origin & Propagation Studies) [2]} for subsystem-level failure analysis to overcome classical manual failure analysis of complex systems and prevent human errors. HiP-HOPS can automatically generate the subsystem-level FT and perform Failure Mode, Effects, and Criticality Analysis (FMECA) from a given system model, where each system component is associated with its failure rate or failure probability [1]}. Currently, HiP-HOPS lacks the modeling of multi-state system components and also cannot provide generic mathematical expressions that can be used to predict the reliability of a critical system based on any probabilistic distribution [4]}. Similarly, Jahanian in [5]} has proposed a new technique called Failure Mode Reasoning (FMR) for identifying and quantifying the failure modes of safety-critical systems at the subsystem level. However, according to Jahanian [6]}, the soundness of the FMR approach still needs to be proven mathematically.
i
ee92d26e-2e59-4bc7-ab93-d6ed978c8468
On the other hand, CCD analysis typically uses FTs to analyze failures at the subsystem or component level, combined with an ET diagram to integrate their cascading failure dependencies at the system level. CCDs are categorized into two general methods for linking the ET with the FTs [1]}: (1) a small ET diagram with large subsystem-level FTs; (2) a large ET diagram with small subsystem-level FTs. The former, with a small ET and large subsystem-level FTs, is the most commonly used for the probabilistic safety assessment of industrial applications (e.g., in [2]}). There are four main steps involved in CCD analysis [3]}: (1) Component failure events: identify the causes of each component failure associated with their different modes of operation; (2) Construction of a complete CCD: construct a CCD model using its basic blocks, i.e., Decision box, Consequence path and Consequence box; (3) Reduction: removal of unnecessary decision boxes based on the system functional behavior to obtain a minimal CCD; and lastly (4) Probabilistic analysis: evaluating the probabilities of CCD paths describing the occurrence of a sequence of events.
i
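Step (4) above amounts to multiplying, along each consequence path, the probability of the outcome selected at every decision box (the failure probability typically comes from the corresponding subsystem FT). The following sketch illustrates this computation under an independence assumption; it is only an illustration, not the HOL4 formalization.

```python
def decision_box(p_fail):
    """A CCD decision box: (P(subsystem operates), P(subsystem fails))."""
    return 1.0 - p_fail, p_fail

def consequence_path_probability(subsystem_failure_probs, outcomes):
    """Probability of one CCD consequence path.

    subsystem_failure_probs : failure probability of each subsystem on the path
                              (e.g., obtained from its fault tree).
    outcomes                : 'ok' or 'fail' chosen at each decision box.
    Assumes independent subsystem events.
    """
    prob = 1.0
    for p_fail, outcome in zip(subsystem_failure_probs, outcomes):
        p_ok, p_f = decision_box(p_fail)
        prob *= p_f if outcome == "fail" else p_ok
    return prob

# Example path: the first subsystem operates and the second one fails.
print(consequence_path_probability([0.05, 0.10], ["ok", "fail"]))  # 0.95 * 0.10
```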
f7f51eb0-336d-414f-ab38-fef00de46eb7
Traditionally, CCD subsystem-level reliability analysis is carried out using paper-and-pencil-based approaches to analyze safety-critical systems, such as high-integrity protection systems (HIPS) [1]} and nuclear power plants [2]}, or using computer simulation tools based on the Monte Carlo approach, as in [3]}. A major limitation of both of the above approaches is the possibility of introducing inaccuracies into the CCD analysis, either due to human fallibility or due to the approximation errors of the numerical methods and pseudo-random numbers used in the simulation tools. Moreover, simulation tools do not provide the mathematical expressions that can be used to predict the reliability of a given system based on any probabilistic distributions and failure rates.
i
47bf023a-5740-4b1a-a982-375e152036ff
A safer way is to substitute the error-prone informal reasoning of CCD analysis with formal generic mathematical proofs, as recommended by safety standards such as IEC 61850 [1]}, EN 50128 [2]} and ISO 26262 [3]}. In this work, we propose to use formal techniques based on theorem proving for the formal CCD-based reliability analysis of safety-critical systems, which provides us the ability to obtain a verified subsystem-level failure/operating consequence expression. Theorem proving is a formal verification technique [4]} used for conducting the proofs of mathematical theorems within a computerized proof tool. In particular, we use HOL4 [5]}, which is an interactive theorem prover capable of verifying a wide range of mathematical expressions constructed in higher-order logic (HOL). For this purpose, we endeavor to formalize the above-mentioned four steps of CCD analysis using the HOL4 proof assistant. To demonstrate the practical effectiveness of the proposed CCD formalization, we conduct the formal CCD analysis of an IEEE 39-bus electrical power network system. Subsequently, we formally determine a commonly used metric, namely the Forced Outage Rate (\(\mathcal {FOR}\) ), which determines the capacity outage or unavailability of the power generation units [6]}. Also, we evaluate the System Average Interruption Duration Index (\(\mathcal {SAIDI}\) ), which describes the average duration of interruptions for each customer in a power network [6]}.
i
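For reference, the sketch below computes \(\mathcal {FOR}\) and \(\mathcal {SAIDI}\) from their textbook definitions (forced outage hours over total hours, and customer-weighted interruption duration over total customers, respectively); the exact expressions verified in this work are the HOL4 theorems, and the numbers below are purely illustrative.

```python
def forced_outage_rate(forced_outage_hours, in_service_hours):
    """Textbook FOR: fraction of time a generation unit is unavailable
    due to forced outages."""
    return forced_outage_hours / (forced_outage_hours + in_service_hours)

def saidi(interruption_durations, customers_affected, total_customers):
    """Textbook SAIDI: total customer interruption duration divided by the
    total number of customers served."""
    customer_minutes = sum(d * n for d, n in
                           zip(interruption_durations, customers_affected))
    return customer_minutes / total_customers

# Hypothetical values, for illustration only.
print(forced_outage_rate(forced_outage_hours=200.0, in_service_hours=8000.0))
print(saidi(interruption_durations=[90.0, 30.0],   # minutes
            customers_affected=[1200, 400],
            total_customers=10000))
```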
2dd07418-290e-4f6f-b241-7105b2798916
\(\bullet \) Formalization of the CCD basic constructors, such as Decision box, Consequence path and Consequence box, that can be used to build an arbitrary level of CCDs \(\bullet \) Enabling the formal reduction of CCDs, which can remove unnecessary decision boxes from a given CCD model, a feature not available in other existing approaches \(\bullet \) Providing reasoning support for the formal probabilistic analysis of scalable CCD consequence paths with newly proposed mathematical formulations \(\bullet \) Application to a real-world IEEE 39-bus electrical power network system and verification of its reliability indices \(\mathcal {FOR}\) and \(\mathcal {SAIDI}\) \(\bullet \) Development of a Standard Meta Language (SML) function that can numerically compute reliability values from the verified expressions of \(\mathcal {FOR}\) and \(\mathcal {SAIDI}\) \(\bullet \) Comparison of our formal CCD reliability assessment with the corresponding results obtained from MATLAB MCS and other well-known approaches
i
f35a9bea-9fe9-4177-aefe-7b0aa6ff1613
The rest of the report is organized as follows: In Section , we present the related literature review. In Section , we describe the preliminaries to facilitate the understanding of the rest of the report. Section  presents the proposed formalization of CCD and its formal probabilistic properties. In Section , we describe the formal CCD analysis of an electrical network system and the evaluation of its reliability indices \(\mathcal {FOR}\) and \(\mathcal {SAIDI}\) . Lastly, Section  concludes the report.
i
e6462cda-3d92-4537-9f4b-f03c17619711
Only a few works have previously considered using formal techniques [1]} to model and analyze CCDs. For instance, Ortmeier et al. in [2]} developed a framework for Deductive Cause-Consequence Analysis (DCCA) using the SMV model checker [3]} to verify the CCD proof obligations. However, according to the authors [2]}, there is a problem in showing the completeness of DCCA due to the exponential growth of the number of proof obligations for complex systems, which requires cumbersome proof efforts. To overcome the above-mentioned limitations, a more practical way is to verify generic mathematical formulations that can perform \(\mathcal {N}\) -level CCD reliability analysis for real-world systems within a sound environment. Higher-Order Logic (HOL) [5]} is a good candidate formalism for achieving this goal.
w
6949c6a2-9d3e-4297-923c-8ba8b59843af
Prior to our work, there were two notable projects for building frameworks to formally analyze dependability models using HOL4 theorem proving [1]}. For instance, HOL4 has been previously used by Ahmad et al. in [2]} to formalize SFTs. The SFT formalization includes a new datatype consisting of AND, OR and NOT FT gates [3]} to analyze the factors causing a static system failure. Furthermore, Elderhalli et al. in [4]} formalized DFTs in the HOL4 theorem prover, which can be used to conduct formal dynamic failure analysis. Similarly, we have defined in [5]} a new EVENT_TREE datatype to model and analyze all possible system-level success and failure relationships. All these formalizations can formally analyze either static/dynamic system failures or the cascading dependencies of system-level components, but not both. On the other hand, CCDs have the capability to use SFTs/DFTs for analyzing static/dynamic failures at the subsystem level and to analyze their cascading dependencies at the system level using ETs. For that purpose, in this work, we provide new formulations that mathematically model the graphical diagrams of CCDs and perform the subsystem-level reliability analysis of highly critical systems. Moreover, our proposed formulations support the modeling of multi-state system components and are based on any given probabilistic distribution and failure rates, which makes our proposed work the first of its kind. In order to check the correctness of the proposed equations, we verified them within the sound environment of HOL4.
w
258992be-9893-4a89-ab86-0b9da4f76ad3
In this work, we developed a formal approach for Cause-Consequence Diagrams (CCD), which enables safety engineers to perform \(\mathcal {N}\) -level CCD analysis of safety-critical systems within the sound environment of the HOL4 theorem prover. Our proposed approach provides new CCD mathematical formulations, whose correctness was verified in the HOL4 theorem prover. These formulations are capable of performing CCD analysis of multi-state system components and are based on any given probabilistic distribution and failure rates. These features are not available in any other existing approach for subsystem-level reliability analysis. The proposed formalization is limited to performing CCD-based reliability analysis at the subsystem level that integrates static dependability analysis. However, this formalization is generic and can be extended to perform dynamic failure analysis of dynamic subsystems where no dependencies exist across different subsystems. We demonstrated the practical effectiveness of the proposed CCD formalization by performing the formal CCD step-analysis of a standard IEEE 39-bus electrical power network system and also formally verified the power plants' Forced Outage Rate (\(\mathcal {FOR}\) ) and the System Average Interruption Duration Index (\(\mathcal {SAIDI}\) ). Finally, we compared the \(\mathcal {FOR}\) and \(\mathcal {SAIDI}\) results obtained from our formal CCD-based reliability analysis with the corresponding ones obtained using MATLAB based on Monte Carlo Simulation (MCS), the HiP-HOPS software tool, and the Failure Mode Reasoning (FMR) approach. As future work, we plan to integrate Reliability Block Diagrams (RBDs) [1]} as reliability functions in the CCD analysis, which will enable us to analyze hierarchical systems with different component success configurations, based on our CCD formalization in the HOL4 theorem prover.
d
4490c94e-b269-4132-b47f-d651c20afc26
Data assimilation (DA) combines noisy data, usually from instrumental uncertainty or issues with scale, with models that are imperfect, due to simplifying assumptions or other inaccuracies, to improve predictions about the past, current, or future state of a system. Typical DA techniques require a prior uncertainty and use the likelihood of observations to calculate the posterior distribution. This Bayesian context provides not only predictions but also quantification of the uncertainty in these predictions. DA originated with numerical weather prediction and is now employed in many scientific and engineering disciplines. With infinite-dimensional models such as partial differential equations (PDEs), there are often features that require fine spatial resolution to obtain accurate approximations of model solutions. An alternative to uniform fine meshes is to employ meshes that are coarse in parts of the spatial domain but fine in other parts of the domain. When the regions requiring finer resolution change with time, time-dependent adaptive meshes are advantageous. In the context of data assimilation, employing time-dependent adaptive meshes also has the potential to reduce the dimension needed for good approximation of model solutions.
i
0c6083d1-dab6-4db8-a15a-99aaa00f6fd8
Combining DA with adaptive moving mesh techniques presents both opportunities and challenges. Many DA techniques employ an ensemble of solutions to make predictions and estimate the uncertainty in the predictions. This introduces a key choice for implementation: either each ensemble solution may evolve on its own independent adaptive mesh, or the ensemble time evolution can happen on a mesh that is common to all ensemble members. In the first case, the mesh for each ensemble member may be tailored to each ensemble solution. However, when incorporating a DA scheme, this presents the challenge of combining ensemble solutions that are supported on different spatial meshes. In the second case, one needs to combine the features of the different ensemble solutions to determine a mesh that provides, on average, good approximation for all ensemble members. In addition, the relative position of the ensemble mesh(es) with respect to potentially time-dependent observation locations may also motivate the choice of ensemble mesh(es).
i
bbfadb16-8416-450f-b2a9-a996a747d09c
Our contribution in this paper is to develop a framework and techniques to utilize adaptive meshing techniques for data assimilation with application to PDE models in one and higher space dimensions. This framework allows each ensemble member to evolve on its own independent mesh and presents an adaptive common mesh for the DA update. An adaptive mesh, while not usually uniform in the standard Euclidean metric, can be viewed as uniform in some other metric. This metric is defined by a positive-definite matrix valued monitor function, also called a metric tensor, which controls mesh movement for the ensemble members and is also used to define the new adaptive common mesh. The computation of the metric tensor easily generalizes to higher spatial dimensions. Using an adaptive common mesh based on metric tensors of the ensemble meshes provides a means for determining a mesh that is common to all ensemble members while providing good approximation properties for the ensemble solutions.
i
95633345-6d28-4038-a536-7dca57d0140c
The adaptive common mesh is formed through the metric tensor intersection of the metric tensors that monitor the movement of the ensemble meshes. Geometrically, the metric tensor intersection corresponds to circumscribing an ellipsoid on the corresponding element of each of the two meshes and then finding an ellipsoid that resides in the geometric intersection of the first two ellipsoids. The new element given by that intersecting ellipsoid is the result of the metric tensor intersection. This procedure must be done pairwise, so the resulting ellipsoid is not necessarily maximal. However, a greedy algorithm can be used to find an ordering that seeks to maximize the resulting ellipsoid.
i
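The pairwise intersection described above can be computed by diagonalizing one metric tensor in the geometry of the other and keeping the more restrictive eigenvalue in each principal direction; this is the standard construction used in anisotropic mesh adaptation, and the sketch below is offered under the assumption that the paper uses an equivalent formulation. The left-to-right fold stands in for the greedy ordering mentioned above.

```python
import numpy as np
from functools import reduce
from scipy.linalg import sqrtm

def metric_intersection(M1, M2):
    """Pairwise intersection of two SPD metric tensors.

    Diagonalize M2 in the geometry of M1 and take, in each principal
    direction, the larger eigenvalue (the more restrictive metric); the
    resulting ellipsoid lies inside both metric ellipsoids.
    """
    M1_half = np.real(sqrtm(M1))
    M1_half_inv = np.linalg.inv(M1_half)
    N = M1_half_inv @ M2 @ M1_half_inv      # symmetric positive definite
    lam, R = np.linalg.eigh(N)
    lam = np.maximum(lam, 1.0)
    return M1_half @ R @ np.diag(lam) @ R.T @ M1_half

def intersect_all(metrics):
    """Fold the pairwise intersection over a list of ensemble metric tensors
    (the paper uses a greedy ordering; a simple left-to-right fold is shown)."""
    return reduce(metric_intersection, metrics)

# Two anisotropic 2D metrics, fine in orthogonal directions.
M1 = np.diag([100.0, 1.0])
M2 = np.diag([1.0, 100.0])
print(intersect_all([M1, M2]))   # approximately diag([100, 100])
```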
21b4b384-ce03-4285-bf68-ac231386ad1c
The metric tensor intersection of the ensemble member meshes forms a common mesh that supports all ensemble members with high accuracy. However, the mesh points of this common mesh may not align with the observation locations. If the observation locations do not coincide with nodes in the common mesh, then the location mismatch results in errors from interpolating the observations to the common mesh. This is especially relevant in the case of time-dependent observation locations. With field alignment [1]}, a variational DA scheme is developed that assimilates both the alignment and amplitude of observations simultaneously. In addition, a two-step approach is developed, where first the locations of the state variables are adjusted to better match the observations. Rather than assume that the observations occur at the same spatial discretization used for the numerical solution, a vector of displacement is employed so that the (interpolated) numerical solution at adjusted nodes is obtained to maximize the posterior distribution of the numerical solution with displacement given the observations. The DA then proceeds with traditional correction based upon the amplitude of the errors of the numerical solution at the adjusted discretization.
i
4797b5d5-95e0-4cb1-9630-c81b2405a250
The mismatch of observation locations and nodal positions of the common mesh presents another potential opportunity for DA on adaptive moving meshes: the meshes can adapt to concentrate near or align with the observation locations. An observation mesh can be formed by associating a metric tensor with the location of the (potentially time-dependent) observations. Intersecting the common mesh from the ensemble members with this observation mesh provides a new common mesh that is concentrated near observation locations. Interpolation error can have a significant effect on the accuracy of DA schemes, and concentrating the mesh near observation locations reduces the interpolation required, thereby improving the performance of the DA algorithms. Of course, a fixed common mesh can also be used to concentrate the mesh near fixed observation locations, but this approach has the benefit of adapting easily to time-dependent observation locations.
i
83c2be05-8490-4432-9eb3-8db0d9aaaed8
In addition, we develop spatially and temporally dynamic localization schemes, based upon the metric tensor(s) corresponding to the adaptive common mesh. Localization improves DA procedures by ensuring that observations only affect nearby points. Broadly speaking, localization schemes fall into two categories: domain localization and covariance (or R) localization. Domain localization schemes define a spatial radius and use that to define which mesh points are affected by a given observation. Covariance localization schemes use a correlation function to modify the covariance matrix that is used in the DA update, so that the covariance between an observation and the solution values decays to zero as the distance between the observation and the solution values increases.
i
077f13bf-81b0-4633-8b1e-1eec46669431
We develop a domain localization scheme that employs the metric tensor. A drawback of employing a fixed, uniform radius of influence for the observations is that the localization scheme may not be effective if there is a steep gradient in the solution. One could predetermine the location of the gradient and adjust the localization scheme accordingly, but if the regions of large gradient are time-dependent, this will usually result in the tuned localization parameter being quite small. However, since the metric tensor provides information about the dynamics of the ensemble solution, it can be used to define an adaptive localization scheme where the localization radius can vary in time and space.
i
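One way to read the scheme described above is as domain localization with distances measured in the metric of the common mesh, so that the effective physical radius shrinks automatically where the metric (and hence the mesh density) is large. The sketch below is my own rendering of that idea; the paper's precise rule for deriving the local radius from the metric tensor may differ.

```python
import numpy as np

def metric_distance(x, y, M):
    """Distance between points x and y measured in the SPD metric M."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

def local_observation_mask(mesh_points, metrics, obs_points, radius):
    """Domain localization: mark, for each mesh point, the observations lying
    within `radius` of it, with distance measured in that point's metric."""
    mask = np.zeros((len(mesh_points), len(obs_points)), dtype=bool)
    for i, (x, M) in enumerate(zip(mesh_points, metrics)):
        for j, y in enumerate(obs_points):
            mask[i, j] = metric_distance(x, y, M) <= radius
    return mask

# 1D illustration: a large metric value near a sharp front, small elsewhere.
mesh = [[0.1], [0.5], [0.9]]
metrics = [np.array([[1.0]]), np.array([[100.0]]), np.array([[1.0]])]
obs = [[0.45], [0.95]]
print(local_observation_mask(mesh, metrics, obs, radius=0.3))
```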
e06b0e60-1349-4634-84f1-f8e133e465b6
One benefit of using an adaptive moving mesh is that fewer mesh points can be used while still maintaining the same accuracy. An adaptive time-dependent common mesh allows for fewer nodes in the common mesh compared to a fixed, fine common mesh, increasing the efficiency of the linear algebra, e.g., when updating the mean and covariance with an ensemble Kalman filter. An efficient implementation of the Ensemble Kalman Filter (EnKF) requires \(\mathcal {O}((D+N_e)D^2 + (M+D)N_e^2)\) flops when \(D\ll M\) or, more generally (for example when \(D\approx M\) ), \(\mathcal {O}((M+D+N_e)N_e^2)\) flops (see, e.g., [1]}), where \(D\) is the dimension of the observation space, \(M\) is the dimension of the discretized dynamical system, and \(N_e\) is the number of ensemble members. In large scale geophysical applications we typically desire \(N_e\approx 20\) (in general, \(N_e\) should be roughly the number of positive and neutral Lyapunov exponents). A reduction in \(M\) based on using fewer mesh points while maintaining or enhancing accuracy results in improved efficiency.
i
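For reference, a minimal stochastic (perturbed-observation) EnKF analysis step with the dimensions used above (\(M\) state variables, \(D\) observations, \(N_e\) ensemble members) is sketched below; this is the textbook update with a linear observation operator, not the implementation used in this work.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF update.

    X : (M, Ne) forecast ensemble      y : (D,) observations
    H : (D, M) observation operator    R : (D, D) observation-error covariance
    Returns the (M, Ne) analysis ensemble.
    """
    M, Ne = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observed anomalies
    PHt = A @ HA.T / (Ne - 1)                        # P H^T
    S = HA @ HA.T / (Ne - 1) + R                     # H P H^T + R
    K = PHt @ np.linalg.inv(S)                       # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, Ne).T
    return X + K @ (Y - HX)

rng = np.random.default_rng(0)
M, D, Ne = 50, 5, 20
X = rng.normal(size=(M, Ne))
H = np.zeros((D, M)); H[np.arange(D), np.arange(0, M, M // D)] = 1.0
R = 0.1 * np.eye(D)
y = rng.normal(size=D)
print(enkf_analysis(X, y, H, R, rng).shape)          # (50, 20)
```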
5bfa3782-1ecd-4d35-9ee8-fb3fda34b979
There are several recent works on integrating adaptive spatial meshing techniques with DA, although most of the focus has been on PDE models in one space dimension. These include methods that evolve meshes based on the solution of a differential equation, methods in which meshes are updated statically based upon interpolation, and remeshing techniques that add or subtract mesh points as the solution structure changes. In [1]}, the movement of the mesh nodes was determined by the solution of moving mesh differential equations that are coupled to the discretized PDE. The state variables of the PDE were then augmented with the positions of the nodes and incorporated into a DA scheme. The test problem consisted of a two-dimensional ice sheet assumed to be radially symmetric; therefore, it reduced to a problem with one spatial dimension. In [2]} and [3]}, common meshes were developed by combining the ensemble meshes through interpolation. This allowed for the update of the mean and covariance for Kalman filter based DA techniques while allowing each ensemble member to evolve on its own independent mesh. That is, at each observational timestep, the ensemble members were interpolated to the common mesh, updated with the DA analysis, and then interpolated back to their respective meshes. Specifically, a uniform, non-conservative mesh was used in [2]}, with Lagrangian observations in one spatial dimension. Higher spatial dimensions were used in [3]}, with a fixed common mesh refined near observation locations. [6]} uses the same 1D non-conservative adaptive meshing scheme as in [2]} and extends this approach through the use of an adaptive common mesh, where, like in [1]}, the state vector is augmented with the node locations.
i
bd08b669-14d3-44a0-b4f6-0548f91f0983
The outline of this paper is as follows. Background of data assimilation and adaptive moving mesh techniques is given in Section . This includes the framework we develop to include equations describing mesh movement within a DA framework. The development of adaptive meshing techniques for DA is detailed in Section . Metric tensors are introduced and their connection to non-uniform meshes is discussed. Techniques for combining meshes based on metric tensor intersection and for concentrating ensemble mesh(es) near observation locations are developed. The details of our implementation are in Section . This includes the discontinuous Galerkin discretization we employ and the specific metric tensor formulation we use to adaptively evolve the ensemble mesh(es). The details of our experimental setup and numerical results for both 1D and 2D inviscid Burgers equations are presented in Section .
i
fd55727c-ea52-44d9-809c-71a8c7394360
The following presents the application of these methods to the one and two dimensional inviscid Burgers equations. We generate synthetic observations by sampling from a truth run, obtained by solving this equation on an adaptive moving mesh. The ensemble members are initialized as perturbations of the initial conditions. Efficacy of the DA scheme is measured by the root mean squared error (RMSE), which is calculated as \(\text{RMSE} = \frac{1}{\sqrt{M}} \Vert u^{\text{truth}} - \bar{u}\Vert _2,\)
r
69a53b4b-e49a-451d-9ff1-45a606bd9db6
where \(\bar{u}\) is the analysis mean. A DA procedure is generally considered stable if its asymptotic behavior is on the order of the square root of the norm of the observation error. The RMSE in the experiments that follow is averaged over 10 runs.
r
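A direct transcription of the RMSE definition above (the averaging over the 10 runs is omitted):

```python
import numpy as np

def rmse(u_truth, u_analysis_mean):
    """RMSE = ||u_truth - u_bar||_2 / sqrt(M), with M the state dimension."""
    diff = np.asarray(u_truth, dtype=float) - np.asarray(u_analysis_mean, dtype=float)
    return np.linalg.norm(diff) / np.sqrt(diff.size)

print(rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```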
47332d31-b635-47eb-ab8f-9e901ecd5da6
Through the use of an adaptive common mesh, we develop an ensemble based DA scheme where each of the ensemble members evolve independently on their own adaptive meshes. At each observational timestep, the ensemble members are interpolated to the adaptive common mesh, updated according to the DA scheme, and then interpolated back to their individual meshes.
d
4f0453c8-ed89-4fbc-bea2-e963c3666954
We follow the MMPDE adaptive meshing strategy where the mesh of each ensemble member is determined by a matrix-valued monitor function, also called a metric tensor, so that the mesh is viewed as uniform in that metric. At each observational timestep, an adaptive common mesh is calculated. There are several choices for this common mesh. One choice, \(\mathbb {M}^m\) , is obtained by taking the intersection of the ensemble members' metric tensors. This results in a common mesh that in some sense satisfies all of the ensemble members. Another option is to concentrate the common mesh near observation locations or observation trajectories. Concentrating the mesh near the observation locations reduces the amount of interpolation error at each observational timestep. A third choice is to intersect \(\mathbb {M}^m\) with \(\mathbb {M}^O\) . Using the observational mesh \(\mathbb {M}^O\) in a DA scheme reduces the transient time in converging to the asymptotic behavior, but regardless of which is employed as the common mesh, all choices \(\mathbb {M}^m\) , \(\mathbb {M}^O\) , and \(\mathbb {M}^m\cap \mathbb {M}^O\) produce stable results. The efficacy of several techniques developed in this work is illustrated using sharp interface problems, in particular 1D and 2D inviscid Burgers equations, under a discontinuous Galerkin discretization.
d
7a70265f-9dc8-43f8-8650-48c24e793c0e
We develop a new adaptive localization algorithm based on the metric tensor of the common mesh \(\mathbb {M}^m\) . The MT adaptive localization scheme uses the metric tensor to define a domain localization strategy that is dynamically updated in time and space. For the 1D and 2D inviscid Burgers equations, the MT localization scheme compared favorably with the GC localization schemes. One of the benefits of the MT localization scheme is that it is robust with respect to the tuning parameters and requires less precise tuning than GC localization in either the model space or in the observation space.
d
b437a6de-7600-45bb-aed9-f84c94a8deaa
The interpolation that is used at each observational timestep can have a significant impact on the performance of the DA scheme. Using a DG discretization for the PDE together with a DG-based interpolant allows the ensemble members to maintain the advantages of DG discretization independent of their supporting mesh. For the 1D inviscid Burgers problem, there was no significant difference in RMSE between linear and DG interpolation. For the 2D inviscid Burgers equation, however, using a DG-based interpolation scheme improved the overall performance of the DA scheme as compared to linear interpolation.
d
69452cc6-36c2-47e3-a22f-4908bcb6970a
This metric tensor approach to data assimilation on adaptive moving meshes, as well as the MT localization scheme, is applicable in higher spatial dimensions. There are several interesting avenues for further investigation. These include the development of adaptive meshes in which a single (average) mesh supports all ensemble members over an observation cycle, further development of adaptive meshing techniques to minimize error due to uncertainties in the location of observations, and the development of goal-oriented meshing functionals specifically designed to increase the skill of the data assimilation scheme.
d
1bf24067-ca32-4bc8-a314-6fb72c063145
Much of the motivation of model-based reinforcement learning (RL) derives from the potential utility of learned models for downstream tasks, like prediction [1]}, [2]}, planning [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, and counterfactual reasoning [9]}, [10]}. Whether such models are learned from data, or created from domain knowledge, there is an implicit assumption that an agent's world model [11]}, [12]}, [13]} is a forward model for predicting future states. While a perfect forward model will undoubtedly deliver great utility, such models are difficult to create, so much of the research has focused either on dealing with the uncertainties of forward models [14]}, [15]}, [13]}, or on improving their prediction accuracy [17]}, [10]}. While progress has been made with current approaches, it is not clear that models trained explicitly to perform forward prediction are the only possible or even desirable solution.
i
f5f10404-2a7b-4ca2-86d4-dd127423ad79
We hypothesize that explicit forward prediction is not required to learn useful models of the world, and that prediction may arise as an emergent property if it is useful for an agent to perform its task. To encourage prediction to emerge, we introduce a constraint to our agent: at each timestep, the agent is only allowed to observe its environment with some probability \(p\) . To cope with this constraint, we give our agent an internal model that takes as input both the previous observation and action, and it generates a new observation as an output. Crucially, the input observation to the model will be the ground truth only with probability \(p\) , while the input observation will be its previously generated one with probability \(1-p\) . The agent's policy will act on this internal observation without knowing whether it is real, or generated by its internal model. In this work, we investigate to what extent world models trained with policy gradients behave like forward predictive models, by restricting the agent's ability to observe its environment.
i
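The observation constraint described above can be written as the following rollout loop. A classic gym-style `env.step` interface is assumed, and `policy` and `world_model` stand in for the learned components; this is a sketch of the setup, not the authors' code.

```python
import numpy as np

def rollout_with_observational_dropout(env, policy, world_model, p, rng,
                                       max_steps=1000):
    """Roll out one episode in which the real observation is shown to the
    policy only with probability p; otherwise the policy receives the
    observation generated by its internal world model, without knowing
    which of the two it got."""
    obs = env.reset()                    # the first observation is real
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        real_obs, reward, done, _ = env.step(action)
        total_reward += reward
        # The model predicts from whatever observation the policy just used,
        # whether that observation was real or previously generated.
        generated_obs = world_model(obs, action)
        obs = real_obs if rng.random() < p else generated_obs
        if done:
            break
    return total_reward
```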
b53d0ac0-4bf1-4fe7-8f73-68cab8232504
By jointly learning both the policy and the model to perform well on the given task, we can directly optimize the model without ever explicitly optimizing for forward prediction. This allows the model to focus on generating any “predictions” that are useful for the policy to perform well on the task, even if they are not realistic. The models that emerge under our constraints capture the essence of what the agent needs to see from the world. We conduct various experiments to show, under certain conditions, that the models learn to behave like imperfect forward predictors. We demonstrate that these models can be used to generate environments that do not follow the rules that govern the actual environment, but nonetheless can be used to teach the agent important skills needed in the actual environment. We also examine the role of inductive biases in the world model, and show that the architecture of the model plays a role not only in performance, but also in interpretability.
i
53f3774a-1e81-48b7-a765-00ef6fe67bd2
One promising reason to learn models of the world is to accelerate the learning of policies using these models. These works obtain experience from the real environment and fit a model directly to this data. Some of the earliest works leverage simple model parameterizations – e.g., learnable parameters for system identification [1]}. Recently, there has been large interest in using more flexible parameterizations in the form of function approximators. The earliest work we are aware of that uses feed-forward neural networks as predictive models for tasks is [2]}. To model time dependence, recurrent neural networks were introduced in [3]}. Recently, as our modeling abilities have increased, there has been renewed interest in directly modeling pixels [4]}, [5]}, [6]}, [7]}. [8]} modify the loss function used to generate more realistic predictions. [9]} propose a stochastic model which learns to predict the next frame in a sequence, whereas [10]} employ a different parameterization involving predicting pixel movement as opposed to directly predicting pixels. [11]} employ flow-based tractable density models to learn models, and [12]} leverages a VAE-RNN architecture to learn an embedding of pixel data across time. [7]} propose to learn a latent space, and learn forward dynamics in this latent space. Other methods utilize probabilistic dynamics models which allow for better planning in the face of uncertainty [14]}, [15]}. Presaging much of this work is [16]}, which learns a model that can predict environment state over multiple timescales via imagined rollouts.
w
a3039d16-4556-498f-bdb0-8115f031fa29
As both predictive modeling and control improves there has been a large number of successes leveraging learned predictive models in Atari [1]}, [2]} and robotics [3]}. Unlike our work, all of these methods leverage transitions to learn an explicit dynamics model. Despite advances in forward predictive modeling, the application of such models is limited to relatively simple domains where models perform well.
w
243831a4-c938-4fbb-918a-911c532f7ab1
Errors in the world model compound and cause issues when used for control [1]}, [2]}. [3]}, similar to our work, directly optimizes the dynamics model against the loss by differentiating through a planning procedure, and [4]} proposes a similar idea of improving the internal model using an RNN, although the RNN world model is initially trained to perform forward prediction. In this work we structure our learning problem so that a model of the world emerges as a result of solving a given task. This notion of emergent behavior has been explored in a number of different areas and is broadly called “representation learning” [5]}. Early work on autoencoders leverages reconstruction-based losses to learn meaningful features [6]}, [7]}. Follow-up work focuses on learning “disentangled” representations by enforcing more structure in the learning procedure [8]}, [9]}. Self-supervised approaches construct other learning problems, e.g., solving a jigsaw puzzle [10]}, or leveraging temporal structure [11]}, [12]}. Alternative setups, closer to our own, specify a particular learning problem and observe that solving it leads to interesting learned behavior (e.g., grid cells) [13]}, [14]}. In the context of learning models, [15]} construct a locally linear latent space where planning can then be performed.
w
77fa6b26-51ae-40f0-95c4-e3984d31dd9e
The force driving model improvement in our work is black-box optimization. In an effort to emulate nature, evolutionary algorithms were proposed [1]}, [2]}, [3]}, [4]}, [5]}. These algorithms are robust and will adapt to constraints such as ours while still solving the given task [6]}, [7]}. Recently, reinforcement learning has emerged as a promising framework to tackle optimization by leveraging the sequential nature of the world for increased efficiency [8]}, [9]}, [10]}, [11]}, [12]}. The exact type of optimization is of less importance to us in this work, and thus we choose to use a simple population-based optimization algorithm [13]} with connections to evolution strategies [14]}, [15]}, [16]}.
w
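The population-based optimizer referred to above is specified only by its citations; as a generic illustration of this family of methods, the sketch below implements a simple evolution-strategy loop with rank-based fitness shaping (an assumption on my part, not the algorithm of [13]}).

```python
import numpy as np

def simple_evolution_strategy(fitness_fn, dim, pop_size=64, sigma=0.1,
                              lr=0.05, iters=300, seed=0):
    """Minimal ES loop: perturb the current parameter vector with Gaussian
    noise, evaluate fitness, and move the mean toward better samples using
    rank-normalized fitness weights."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(iters):
        noise = rng.normal(size=(pop_size, dim))
        fitness = np.array([fitness_fn(theta + sigma * n) for n in noise])
        ranks = fitness.argsort().argsort().astype(float)   # 0 = worst sample
        weights = ranks / (pop_size - 1) - 0.5               # in [-0.5, 0.5]
        theta = theta + lr / (pop_size * sigma) * (noise.T @ weights)
        # In the setting above, fitness_fn would run episodes with the policy
        # and world model parameterized by theta under the observation
        # constraint, and return the cumulative reward.
    return theta

# Toy check: maximize -||theta - 3||^2 in five dimensions.
print(simple_evolution_strategy(lambda th: -np.sum((th - 3.0) ** 2), dim=5))
```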