_id: string, length 36
text: string, 200 to 328k characters
label: string, 5 classes
06cfd0ef-89f1-4096-b80d-c9f5af19b9c1
Deep learning models for chest X-ray interpretation have high potential for social impact by aiding clinicians in their workflow and increasing access to radiology expertise worldwide [1]}, [2]}. Transfer learning using pretrained ImageNet [3]} models has been the standard approach for developing models not only on chest X-rays [4]}, [5]}, [6]} but also for many other medical imaging modalities [7]}, [8]}, [9]}, [10]}, [11]}. This transfer assumes that architectures which perform better on ImageNet also perform better on the target medical task, and that pretrained weights boost performance on that task. However, there has not been a systematic investigation of how ImageNet architectures and weights relate to performance on downstream medical tasks.
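To make the transfer setup concrete, the following is a minimal sketch of the standard recipe described above, assuming a torchvision DenseNet-121 backbone and a hypothetical 5-finding, multi-label CheXpert-style target task; data loading and the training loop are omitted.

import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 5  # hypothetical number of target labels

# Load ImageNet-pretrained weights, then replace the classifier head for the medical task.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

# Multi-label chest X-ray classification is commonly trained with a sigmoid/BCE objective.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()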
i
ddf24095-ff69-409e-9371-10a16af70430
For models without pretraining and models with pretraining, we find no relationship between ImageNet performance and CheXpert performance (Spearman \(\rho = 0.08\) , \(\rho = 0.06\) respectively). This finding suggests that architecture improvements on ImageNet may not lead to improvements on medical imaging tasks. For models without pretraining, we find that within an architecture family, the largest and smallest models have small differences (ResNet 0.005, DenseNet 0.003, EfficientNet 0.004) in CheXpert AUC, but different model families have larger differences in AUC (\(>0.006\) ). This finding suggests that the choice of model family influences performance more than size within a family for medical imaging tasks. We observe that ImageNet pretraining yields a statistically significant boost in performance (average boost of 0.016 AUC) across architectures, with a higher boost for smaller architectures (Spearman \(\rho =-0.72\) with number of parameters). This finding supports the ImageNet pretraining paradigm for medical imaging tasks, especially for smaller models. We find that by truncating final blocks of pretrained models, we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. This finding suggests model truncation may be a simple method to yield lighter pretrained models by preserving architecture design features while reducing model size.
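As an illustration of the truncation idea, the sketch below drops the final dense block of an ImageNet-pretrained DenseNet-121 and attaches a fresh head; the exact truncation points used in the study are not reproduced here, so the layer names and head design are only assumptions.

import torch
import torch.nn as nn
from torchvision import models

full = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# densenet121.features is a Sequential; keep everything before the final dense block.
kept = []
for name, module in full.features.named_children():
    if name == "denseblock4":
        break
    kept.append(module)
trunk = nn.Sequential(*kept)

# Probe the channel count of the truncated trunk to size the new classifier.
with torch.no_grad():
    channels = trunk(torch.zeros(1, 3, 224, 224)).shape[1]

head = nn.Sequential(nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 5))
truncated_model = nn.Sequential(trunk, head)  # fewer parameters, pretrained early blocks kept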
i
15c7fa92-deb5-4bcf-af86-b98b18818e99
Our study, to the best of our knowledge, contributes the first systematic investigation of the performance and efficiency of ImageNet architectures and weights for chest X-ray interpretation. Our investigation and findings may be further validated on other datasets and medical imaging tasks.
i
50a5f560-2ead-478d-bc35-9544959dcbd4
Photo retouching is often desirable as it improves the aesthetic quality of photographs by eliminating imperfections and highlighting subjects of interest. Even with significant progress in digital photography owing to advancements in camera sensors and image processing algorithms, professional retouches via manual adjustments are still needed to achieve a desired look. These artistic edits require considerable manual effort as they consist of global adjustments, such as brightening and contrast enhancement, as well as fine edits applied to local regions. Professionals spend a great deal of time producing such retouches, which motivates us to automatically mimic a specific style or type of retouch.
i
2537d5e0-f1e8-41b3-ae9d-dbca26d4597e
The development of automatic photo retouching tools can be helpful for both novice users and experts as it offers a basis for a professional retouching style. However, automating detailed edits of professionals is challenging as their editing pipelines are spatially varying, context-aware, and highly nonlinear, containing per-pixel adjustments. Recent learning-based methods address this complexity in image-to-image translation by proposing local context-aware methods, such as pixel-adaptive neural network architectures, learning parameters of local filters, or multi-stream models to extract global and local features separately. However, these data-driven methods require a large dataset of matching example image pairs to capture context-aware mappings. Even then, the mappings are sensitive to segmentation errors, unseen semantic regions, and image content.
i
cbd8ec64-366f-4486-a49d-bd24482a2e88
Motivated by the gap between manual and automatic enhancement, we propose a novel photo retouching technique that can learn global and local adjustments from just a single example image pair. Our method thus sidesteps the need for large datasets, which are very difficult to obtain for the detail retouching task. We allow users to choose one example before-after pair from which our technique learns the underlying retouching style. Subsequently, we can apply the retouching edit to a different input image.
i
983d11c0-f20f-47dc-860f-aa37f35d54cc
We assume that example and input images share similar local content. The user can thus decide on the semantics of the example and input photos and the structural changes to be transferred. This is easy for humans and practical for many scenarios, e.g. face edits transferred to faces. Our method then handles the difficult part for humans: capturing how fine details change in an edit and applying those automatically to a new image. The method can further be combined with brushes if fully automatic transfers are not desired.
i
f60e60ec-7a25-442a-a949-170522775aa0
We achieve this by defining the retouching problem as a map given by spatio-spectral, patch-space, neural-field-based transformation blending. This representation is primarily inspired by professional detail retouching pipelines, as we elaborate on in Section . Our map representation is composed of learned patch maps at multiple scales, i.e. frequency bands. Each of these maps is represented by a number of transformation matrices blended with patch-adaptive weights that are represented as neural fields. We jointly optimize the transformation matrices and corresponding weights for each band. This representation captures edits to details better than previous techniques while remaining generalizable to new images. It is also simple enough to be extended in many different ways in future work.
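The following toy sketch illustrates the general mechanism of blending a small set of learnable transformation matrices with patch-adaptive weights predicted by a coordinate MLP (a neural field) for one frequency band; it illustrates the idea rather than the exact architecture, and all sizes are made up.

import torch
import torch.nn as nn

class BlendedPatchMap(nn.Module):
    def __init__(self, num_transforms=8, coord_dim=2):
        super().__init__()
        # K learnable 3x4 affine color transforms (3x3 matrix plus bias) for this band.
        self.transforms = nn.Parameter(0.01 * torch.randn(num_transforms, 3, 4))
        # Neural field: maps patch coordinates to blending weights for the K transforms.
        self.field = nn.Sequential(
            nn.Linear(coord_dim, 64), nn.ReLU(),
            nn.Linear(64, num_transforms), nn.Softmax(dim=-1))

    def forward(self, coords, rgb):
        # coords: (N, 2) patch positions; rgb: (N, 3) patch colors in one frequency band.
        w = self.field(coords)                                      # (N, K) patch-adaptive weights
        hom = torch.cat([rgb, torch.ones_like(rgb[:, :1])], dim=1)  # (N, 4) homogeneous color
        per_t = torch.einsum("kij,nj->nki", self.transforms, hom)   # (N, K, 3) each transform applied
        return (w.unsqueeze(-1) * per_t).sum(dim=1)                 # (N, 3) blended output

# The matrices and the field would be optimized jointly against the single before/after pair, per band.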
i
94e8978f-9d26-48b0-ad7b-bc2ee0729621
A novel patch-space image map representation that blends transformation matrices with neural-field weights. A one-shot detail retouching algorithm that transfers detail edits to new images based on a single before-after image pair.
i
d3a6996c-2ed1-4aa2-85d2-a74d816215d5
Photo retouching has been explored in the image processing and computer vision communities under different domains, such as photo enhancement and image-to-image translation. Below, we first discuss recent methods for photo enhancement and then image-to-image map definitions, with the main focus on learning-based methods.
w
bfb7e28a-d9db-46be-bdb0-22b8485775d1
We tested our technique on a diverse range of before-after pairs, including, but not limited to, face images from the FFHQ dataset. We focus on human portraits and face retouching in our experiments as they are arguably the most common and prioritized types of photos for retouching. We illustrate that our technique provides visually pleasing results for various retouching edits and accurately captures image processing filters. <FIGURE>
r
ed2d38b6-a1d4-4e95-927f-0265d4a2af07
Human faces pose a particular challenge for our technique. However, our model can still capture highly nonlinear retouching edits and generalizes well to different types of faces, view directions, and lighting conditions, as illustrated in Figures REF, REF, and REF, as well as Figures REF and REF.
r
caec2301-0af9-45e3-b7e6-cecc357748fb
The example pairs in Figures REF, REF, REF and REF were generated by brushing onto the skin with artist-created brushes, eye sharpening (last pair in Figure REF), and further brightness/contrast adjustments. These brushes first decompose the skin into a detail and a base layer, typically with frequency decomposition, alter the detail layer, and blend it with the base layer. They differ in (1) how they decompose the skin into the layers, i.e., what frequencies are in each layer, and (2) how they edit and blend each layer with different opacity values. This variation creates retouching nuances, as shown in Figure REF. Our method can still accurately capture such slight differences in styles. <FIGURE>
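For illustration, the sketch below shows the generic base/detail brush pipeline described above: a low-frequency base layer obtained by blurring, a high-frequency detail residual that is attenuated, and an opacity blend back into the original. The blur radius, detail gain and opacity are arbitrary placeholder values.

import numpy as np
from scipy.ndimage import gaussian_filter

def skin_smoothing_brush(image, sigma=4.0, detail_gain=0.4, opacity=0.8):
    """image: float array in [0, 1] of shape (H, W, 3)."""
    base = gaussian_filter(image, sigma=(sigma, sigma, 0))  # low-frequency base layer
    detail = image - base                                   # high-frequency detail layer
    edited = base + detail_gain * detail                    # attenuate pores/wrinkles in the detail layer
    return (1.0 - opacity) * image + opacity * edited       # blend the edit back with an opacity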
r
bebbd18a-6d6a-4051-ad97-a6a6a0136d51
In all our experiments, intricate details of the desired retouching, such as small-scale skin texture, eye, and facial hair details, and global features, such as overall lighting and tone, are accurately reproduced. It is interesting to observe that the glamour implied by, e.g., the example retouchings in Figures REF , and REF (last example pair) is transferred from the example pair very accurately without causing an artificial look. Also, such a learned retouching generalizes well to faces with different view directions and lighting conditions, as illustrated in Figure REF and REF . Zooming into the skin reveals that pores and wrinkles are minimized, and the blemishes and discoloring of the skin are eliminated. At the same time, depending on the retouching edit, eyes are more highlighted or preserved, and delicate features such as hair are preserved well (Figures REF , REF ).
r
ad6070f5-b368-4173-a1ed-a715b828f4de
In summary, our technique efficiently edits such intricate details, due to the significantly distinct local statistics of the skin at multiple scales, without affecting overlaying structures thanks to its spatially-varying nature and frequency decomposition. <FIGURE>
r
ae1a0346-5cf1-449d-a4e8-65840af76db3
We presented a neural field based technique for example-based automatic retouching of images. By formulating the transfer problem in the patch space, we showed that blending multiple transformation matrices with patch-adaptive weights can be utilized to learn an accurate and generalizable map. This allowed us to use images of different scenes, people, views, and environmental conditions as the example pair and input. We illustrated the technique's utility on various retouching examples. We believe that our image map representation can be helpful in many other image processing tasks.
d
4dc624f5-03a6-49d8-a40d-1e256fc50630
Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi. Morbi auctor lorem non justo. Nam lacus libero, pretium at, lobortis vitae, ultricies et, tellus. Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet magna, vitae ornare odio metus a mi. Morbi ac orci et nisl hendrerit mollis. Suspendisse ut massa. Cras nec ante. Pellentesque a nulla. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Aliquam tincidunt urna. Nulla ullamcorper vestibulum turpis. Pellentesque cursus luctus mauris.
i
f12aaa84-1149-4b60-89c4-aeca44c130f3
Nulla malesuada porttitor diam. Donec felis erat, congue non, volutpat at, tincidunt tristique, libero. Vivamus viverra fermentum felis. Donec nonummy pellentesque ante. Phasellus adipiscing semper elit. Proin fermentum massa ac quam. Sed diam turpis, molestie vitae, placerat a, molestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend at, accumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna. Nunc eleifend consequat lorem. Sed lacinia nulla vitae enim. Pellentesque tincidunt purus vel magna. Integer non enim. Praesent euismod nunc eu purus. Donec bibendum quam in tellus. Nullam cursus pulvinar lectus. Donec et mi. Nam vulputate metus eu enim. Vestibulum pellentesque felis eu massa.
i
edcfc002-de7a-4f99-a4a5-051f0453fbfd
Modern high-performance SAT solvers quickly solve large satisfiability instances that occur in practice. If the instance is satisfiable, then the SAT solver can provide a witness in the form of a satisfying truth assignment, which can be checked independently.
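The independent check of a satisfying assignment is simple; a minimal sketch, assuming the usual DIMACS-style convention of representing literals as nonzero integers:

def check_model(clauses, assignment):
    """clauses: list of lists of nonzero ints (DIMACS-style literals);
    assignment: dict mapping variable -> bool. Every clause needs at least one true literal."""
    def literal_true(lit):
        return assignment[abs(lit)] == (lit > 0)
    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

# Example: (x1 or not x2) and (x2 or x3) is satisfied by x1=True, x2=False, x3=True.
assert check_model([[1, -2], [2, 3]], {1: True, 2: False, 3: True})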
i
a1acdae3-5e57-4caa-b285-faf72aa3a10e
If the instance is unsatisfiable, the situation is less clear. Brummayer and others [1]} have shown using fuzz testing that in 2010 many state-of-the-art SAT solvers contained bugs, including soundness bugs. Since 2016, in order to mitigate this issue, the annual SAT competition requires solvers competing in the main track to output UNSAT certificates [2]}; these certificates are independently checked in order to ensure soundness.
i
76cfd664-74b6-454a-ab4e-b715299b99cd
These certificates could be exponentially large, and the SAT solver might not even be able to output them due to various resource constraints. In that case, the implementation of the SAT solver itself must be trusted not to contain bugs. However, typical high-performance SAT solvers contain data structures and algorithms complex enough to allow for subtle programming errors.
i
da5a620b-adcb-4d0d-8889-a61af451e0be
To handle these potential issues, we propose a verified SAT solver using the Dafny system [1]}. Dafny is a high-level imperative language with support for object oriented features. It features methods with preconditions, postconditions and invariants, which are checked at compilation time by relying on the Z3 SMT solver [2]}. If a postcondition cannot be established (either due to a timeout or due to the fact that it does not hold), compilation fails. Therefore, we can place a high degree of trust in a program verified using the Dafny system.
i
a6a95316-e210-4444-b190-db3fa594ce7b
restart strategy. In addition, careful engineering of the implementation is required for high performance. The first three items are usually referred to as the DPLL algorithm [1]}, [2]}, and all items together are the core of the state-of-the-art CDCL algorithm [3]}, [4]}. We have implemented and verified in Dafny the first three items, constituting the DPLL algorithm, and we leave the other items for future work. We implement the MOMS variable ordering heuristic [5]}. We note that our Dafny solver is computer checked for soundness, completeness and termination. We assume that the input is already in CNF form. The parser, which reads a file in the well-known DIMACS format, is also written in Dafny and hence verified against, e.g., out of bounds errors. However, there is no specification for the parser. Our work is part of the larger trend towards producing more trustworthy software artifacts, ranging among certified compilers [6]}, system software [7]}, [8]}, [9]}, [10]}, or logic [11]}. The main conceptual difference to previous work on verified or certified SAT solvers is that we propose to check directly the imperative algorithm using deductive verification, instead of, e.g., verifying functional code and relying on a refinement mechanism to extract imperative code, which could hurt performance. Structure. In Section , we briefly go over the DPLL algorithm, as presented in the literature. In Section , we present our verified implementation in Dafny of the algorithm. We start by presenting the main data structures and their invariants (Section REF ). We continue with the operations supported by the data structures in Section REF . Finally, in Section REF , we present the implementation of the core DPLL algorithm, together with the verified guarantees that it provides. In Section , we benchmark the performance of our solver. In Section , we discuss related work. We conclude in Section . We also discuss the main challenge in verifying our implementation of DPLL, along with some methodological tricks that we have used to make the verification effort tractable. Contributions. We present the first (to our knowledge) assertional proof of the DPLL algorithm. The implementation is competitive in running time with an equivalent C++ solver. Comparison with the workshop version. This paper is a revised extended version of our previous work [12]} published in EPTCS. We feature an improved presentation, additional explanations and a benchmark of the performance of our solver. In addition, the solver improvements over the workshop version are: The new implementation features machine integers, which improve performance approximately 10 times in our tests. Going to machine integers from unbounded integers requires proving upper bounds on indices throughout the code. The new implementation features mutable data structures for identifying unit clauses. Our previous approach used Dafny sequences (seq), which are immutable and cause a performance drawback because they are updated frequently. The new mutable data structures make the solver significantly faster, but they are more difficult to reason about and verify. We implement and verify the MOMS variable ordering heuristic. We also improve the methodology of our verification approach and in particular we significantly reduce verification time. By carefully specifying invariants and separating concerns in the implementation, the verification time is now approximately 13 minutes for the entire project. 
In contrast, in our previous implementation, one method (setLiteral) took approximately 10 minutes to verify on its own (the entire project used to take about 2 hours to verify in its entirety). We benchmark our Dafny implementation against similar DPLL implementations written in C# and C++ and we show it is competitive in terms of performance. The Davis-Putnam-Logemann-Loveland Algorithm The DPLL procedure is an optimization of backtracking search. The main improvement is called unit propagation. A unit clause has the property that its literals are all false in the current assignment, except one, which has no value yet. If this literal would be set to false, the clause would not be satisfied; therefore, the literal must necessarily be true for the entire formula to be true. This process of identifying unit clauses and setting the unknown literal to true is called unit propagation. We consider a formula with 7 variables and 5 clauses: \((x_{1} \vee x_{2} \vee x_{3}) \wedge \) \((\lnot x_{1} \vee \lnot x_{2}) \wedge \) \((x_{2} \vee \lnot x_{3}) \wedge \) \((x_{2} \vee x_{4} \vee x_{5}) \wedge \) \((x_{5} \vee x_{6} \vee x_{7})\) The formula is satisfiable, as witnessed by the truth assignment (true, false, false, true, true, false, true). Algorithm  describes the DPLL procedure [13]} that we implement and verify, presented slightly differently in order to match our implementation more closely: mainFunction funoneDPLL-recursive \(F\) , \(\textit {tau}\) InputinputOutputoutput SAT/UNSAT, depending on where there exists an assignment extending \(\textit {tau}\) that satisfies \(F\) \(\exists \) unit clause \(\in F\) \(\ell \leftarrow \) the unset literal from the unit clause \(\textit {tau} \leftarrow \textit {tau}[\ell := true]\) F contains the empty clauseUNSAT F has no clauses left Output \(\textit {tau}\) SAT \(\ell \leftarrow \) some unset literal \(\textit {DPLL-recursive}(F, \textit {tau}[\ell := \textit {true}]) = \textit {SAT}\)\(\textit {SAT}\) \(\textit {DPLL-recursive}(F, \textit {tau}[\ell := \textit {false}])\) We describe how the algorithm works on this example: first, the algorithm chooses the literal \(x_{1}\) and sets it to true (arbitrarily; if true would not work out, then the algorithm would backtrack here and try false). At the next step, it finds that the second clause is unit and sets \(\lnot x_{2}\) to true, which makes the third clause unit, so \(\lnot x_{3}\) is set to true. After unit propagation, the next clause not yet satisfied is the fourth one, and the first unset literal is \(x_{4}\) . At the branching step, \(x_{4}\) is assigned to true. Furthermore, only one clause is not satisfied yet, and the next decision is to choose \(x_{5}\) and set it to true, which makes the formula satisfied, even if \(x_{6}\) and \(x_{7}\) are not set yet. Next, we recall some well-known terminology in SAT solvers. Choosing and assigning an unset literal to true or false is called a branching step or a decision. Every time the algorithm makes a decision, the decision level is incremented by one and some more literals are assigned to true or false by unit propagation. The trace of assignments is split into layers, one layer per decision. Multiple literals can be set at the same decision level (the decision literal, and the literals assigned by unit propagation). Every time the algorithm backtracks it must revert an entire layer of assignments. A possible assignments trace corresponding to Example  is shown in Figure REF . 
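For readers who prefer executable pseudocode, here is a compact, unoptimized Python rendering of the recursive DPLL procedure with unit propagation sketched above, using DIMACS-style integer literals; it mirrors the high-level algorithm only, not the verified Dafny data structures.

def dpll(clauses, tau=None):
    tau = dict(tau or {})                                    # partial assignment: variable -> bool

    def value(lit):                                          # literal value under tau: True/False/None
        v = tau.get(abs(lit))
        return None if v is None else (v == (lit > 0))

    # Unit propagation: repeatedly set the single unset literal of any all-but-one-false clause.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            vals = [value(l) for l in clause]
            if any(v is True for v in vals):
                continue                                     # clause already satisfied
            unset = [l for l, v in zip(clause, vals) if v is None]
            if not unset:
                return None                                  # empty clause under tau: conflict
            if len(unset) == 1:
                tau[abs(unset[0])] = unset[0] > 0
                changed = True

    unset_vars = {abs(l) for c in clauses for l in c} - set(tau)
    if not unset_vars:
        return tau                                           # every clause has a true literal
    x = min(unset_vars)                                      # naive branching; MOMS would pick here
    for val in (True, False):
        result = dpll(clauses, {**tau, x: val})
        if result is not None:
            return result
    return None

# The 7-variable example formula from the text; it is satisfiable.
example = [[1, 2, 3], [-1, -2], [2, -3], [2, 4, 5], [5, 6, 7]]
assert dpll(example) is not None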
<FIGURE> A Verified Implementation of the DPLL Algorithm In this section, we present the main ingredients of our verified solver. The full source code, along with instruction on how to compile it and reproduce our benchmarks, can be found at https://github.com/andricicezar/sat-solver-dafny-v2. Data Structures We first discuss the data structures for representing the formula, for quickly identifying unit clauses and for recalling the current truth assignment. Representing the CNF formula The main class in our Dafny development is Formula, which extends DataStructures (Figure REF ). This class is instantiated with the number of propositional variables (variablesCount) and with the clauses of the formula to be checked for satisfiability. Propositional variables are represented by values between 0 and \(\texttt {variablesCount}-1\) , positive literals are represented by values between 1 and variablesCount, and negative integers between \(-1\) and \(-\)variablesCount represent negative literals. Variables and literals are represented by values of type Int32.t, which we define to model machine integers and which is extracted to int. <FIGURE>Clauses are sequences of literals and the entire formula is represented by a sequence of clauses (var clauses : seq< seq<Int32.t> >). Using sequences for clause (sequences are immutable in Dafny) has no significant performance impact, since they are set at the beginning once and never changed. Representing the current assignment and the assignments trace The member variable decisionLevel recalls the current decision level, which has an initial value of \(-1\) . The assignments trace is represented at computation time by using the arrays traceVariable, traceValue, traceDLStart and traceDLEnd and at verification time also by the ghost construct assignmentsTrace (see Figure REF ). The arrays traceVariable and traceValue have the same actual length. They recall, in order, all variables that have been set so far, together with their value. The arrays traceDLStart and traceDLEnd recall at what index in traceVariable and traceValue each decision layer starts and ends, respectively. The ghost construct assignmentsTrace recalls the same information as a set of (variable, value) pairs. This set is used for the convenience of specifying some of the methods and it only lives at verification time; it is erased before running time and therefore it entails no performance penalty. Note that traceVariable, traceValue, traceDLStart and traceDLEnd are arrays, and they are extracted to C# as such. Therefore, lookups and updates in these arrays take constant time. The link between the ghost construct assignmentsTrace and its imperative counterparts (traceVariable, traceValue, traceDLStart and traceDLEnd) is computer checked as the following class invariant: (decisionLevel >= 0 ==> ( (forall i :: 0 <= i < traceDLEnd[decisionLevel] ==> (traceVariable[i], traceValue[i]) in assignmentsTrace) (forall x :: x in assignmentsTrace ==> ( exists i :: 0 <= i < traceDLEnd[decisionLevel] (traceVariable[i], traceValue[i]) == x)))) The array truthAssignment is indexed from 0 to \(\texttt {variablesCount}-1\) and it recalls the current truth assignment. The value \(\texttt {truthAssignment}[v]\) is \(-1\) if the propositional variable \(v\) is unset, 0 if \(v\) is false, and 1 if \(v\) is true. At the beginning, it is initialized to \(-1\) at all indices. 
The following class invariant describing the expected link between the assignments trace and the current truth assignment is computer checked: truthAssignment.Length == variablesCount (forall i :: 0 <= i < variablesCount ==> -1 <= truthAssignment[i] <= 1) (forall i :: 0 <= i < variablesCount truthAssignment[i] != -1 ==> (i, truthAssignment[i]) in assignmentsTrace) (forall i :: 0 <= i < variablesCount truthAssignment[i] == -1 ==> (i, false) !in assignmentsTrace (i, true) !in assignmentsTrace) Note that the invariant makes use of the ghost construct assignmentsTrace for brevity. Quickly identifying unit clauses The array trueLiteralsCount (falseLiteralsCount) is used to recall how many literals in each clause are currently true (resp. false). They are indexed from 0 to \(|\texttt {clauses}|-1\) . The value \(\texttt {trueLiteralsCount}[i]\) denotes the number of literals set to true in \(\texttt {clauses}[i]\) and \(\texttt {falseLiteralsCount}[i]\) the number of false literals in \(\texttt {clauses}[i]\) . These are used to quickly identify which clauses are satisfied, which clauses are unit or which clauses are false. For example, to check whether \(\texttt {clauses}[i]\) is satisfied, we simply evaluate \(\texttt {trueLiteralsCount}[i] > 0\) . The following class invariant involving these arrays is computer checked: |trueLiteralsCount| == |clauses| forall i :: 0 <= i < |clauses| ==> 0 <= trueLiteralsCount[i] == countTrueLiterals(truthAssignment, clauses[i]) and analougously for falseLiteralsCount. Note that countTrueLiterals is a function (not a method, hence it is used for specification only) that actually computes the number of true literals by walking through all literals in the respective clause. In order to quickly update trueLiteralsCount and falseLiteralsCount when a new literal is (un)set, we use positiveLiteralsToClauses and negativeLiteralsToClauses. These are arrays indexed from 0 to \(\texttt {variablesCount}-1\) . The first array contains the indices of the clauses in which a given variable occurs. The second array contains the indices of the clauses in which the negation of the given variable occurs. They provably satisfy the following invariant: |positiveLiteralsToClauses| == variablesCount ( forall variable :: 0 <= variable < |positiveLiteralsToClauses| ==> ghost var s := positiveLiteralsToClauses[variable]; ... (forall clauseIndex :: clauseIndex in s ==> variable+1 in clauses[clauseIndex]) (forall clauseIndex :: 0 <= clauseIndex < |clauses| clauseIndex !in s ==> variable+1 !in clauses[clauseIndex])) (analogously for \(\texttt {negativeLiteralsToClauses}\) ). To represent class invariants, Dafny encourages a methodology of defining a class predicate valid. In our development, valid consists of the conjunction of the above invariants, plus several other lower-level predicates that we omit for brevity. The predicate valid is used as a precondition and postcondition for all class methods, and therefore plays the role of a class invariant. This way, it is guaranteed that the data structures are consistent. Verified Operations over the Data Structures From the initial (valid) state, we allow one of these four actions: increase the decision level, set a variable, set a literal and perform unit propagation, and revert the assignments done on the last decision level. Each of the actions is implemented as a method and we show that these four methods preserve the data structure invariants above. 
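As an illustration of this bookkeeping, here is an unverified Python counterpart of the counters and occurrence lists described above, showing why setting a variable only touches the clauses that mention it; the field names loosely mirror the Dafny ones, but the code is only a sketch.

class ClauseCounters:
    def __init__(self, clauses, variables_count):
        self.clauses = clauses
        self.true_count = [0] * len(clauses)                  # trueLiteralsCount
        self.false_count = [0] * len(clauses)                 # falseLiteralsCount
        self.pos_occ = [[] for _ in range(variables_count)]   # positiveLiteralsToClauses
        self.neg_occ = [[] for _ in range(variables_count)]   # negativeLiteralsToClauses
        for ci, clause in enumerate(clauses):
            for lit in clause:
                (self.pos_occ if lit > 0 else self.neg_occ)[abs(lit) - 1].append(ci)

    def set_variable(self, var, value):
        # Only clauses containing var (positively or negatively) need their counters updated.
        for ci in self.pos_occ[var]:
            (self.true_count if value else self.false_count)[ci] += 1
        for ci in self.neg_occ[var]:
            (self.false_count if value else self.true_count)[ci] += 1

    def is_unit(self, ci):
        # Unit clause: no true literal yet and exactly one literal still unset.
        return (self.true_count[ci] == 0
                and self.false_count[ci] == len(self.clauses[ci]) - 1)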
The Method increaseDecisionLevel This method increments the decision level by one and creates a new layer. The method guarantees that the new state is valid, and nothing else changes. Its signature and its specification are: method increaseDecisionLevel() requires validVariablesCount(); requires validAssignmentTrace(); requires decisionLevel < variablesCount - 1; requires decisionLevel >= 0 ==> traceDLStart[decisionLevel] < traceDLEnd[decisionLevel]; modifies `decisionLevel, traceDLStart, traceDLEnd; ensures decisionLevel == old(decisionLevel) + 1; ensures validAssignmentTrace(); ensures traceDLStart[decisionLevel] == traceDLEnd[decisionLevel]; ensures getDecisionLevel(decisionLevel) == ; ensures forall i :: 0 <= i < decisionLevel ==> old(getDecisionLevel(i)) == getDecisionLevel(i); The predicates validVariablesCount and validAssignmentTrace are used as conjuncts in the class invariant. The function getDecisionLevel returns all assignments at a given decision level as a set. The Method setVariable This method takes a variable that is not yet set and it updates is value. Because the trace of assignments and truthAssignment are changed, trueLiteralsCount and falseLiteralsCount have to be updated. We use the arrays positiveLiteralsToClauses and negativeLiteralsToClauses to efficiently update them, and prove that the clauses that are not mentioned in these arrays are not impacted. The signature of setVariable and its specification are: method setVariable(variable : Int32.t, value : bool) requires valid(); requires validVariable(variable); requires truthAssignment[variable] == -1; requires 0 <= decisionLevel; modifies truthAssignment, traceVariable, traceValue, traceDLEnd, `assignmentsTrace, trueLiteralsCount, falseLiteralsCount; ensures valid(); ensures traceDLStart[decisionLevel] < traceDLEnd[decisionLevel]; ensures traceVariable[traceDLEnd[decisionLevel]-1] == variable; ensures traceValue[traceDLEnd[decisionLevel]-1] == value; // post conditions that ensure that only a position of the arrays // has been updated. ensures value == false ==> old(truthAssignment[..])[variable := 0] == truthAssignment[..]; ensures value == true ==> old(truthAssignment[..])[variable := 1] == truthAssignment[..]; ensures forall i :: 0 <= i < variablesCount i != decisionLevel ==> traceDLEnd[i] == old(traceDLEnd[i]); ensures forall i :: 0 <= i < variablesCount i != old(traceDLEnd[decisionLevel]) ==> traceVariable[i] == old(traceVariable[i]) traceValue[i] == old(traceValue[i]); ensures forall x :: 0 <= x < old(traceDLEnd[decisionLevel]) ==> traceVariable[x] == old(traceVariable[x]); ensures forall i :: 0 <= i < decisionLevel ==> old(getDecisionLevel(i)) == getDecisionLevel(i); ensures assignmentsTrace == old(assignmentsTrace) + (variable, value) ; ensures countUnsetVariables(truthAssignment[..]) + 1 == old(countUnsetVariables(truthAssignment[..])); The Method setLiteral This method uses setVariable as a primitive, so the preconditions and postconditions are similar. The main difference is that after it makes the first update, it also performs unit propagation, possibly recursively. This means that it calls setLiteral again with new values. So, at the end of a call, truthAssignment might change at several positions. To prove termination, we use as a variant the number of unset variables, which provably decreases at every recursive step. 
Its signature and its specification are: method setLiteral(literal : Int32.t, value : bool) requires valid(); requires validLiteral(literal); requires getLiteralValue(truthAssignment[..], literal) == -1; requires 0 <= decisionLevel; modifies truthAssignment, trueLiteralsCount, falseLiteralsCount, traceDLEnd, traceValue, traceVariable, `assignmentsTrace; ensures valid(); ensures traceDLStart[decisionLevel] < traceDLEnd[decisionLevel]; ensures forall x :: 0 <= x < old(traceDLEnd[decisionLevel]) ==> traceVariable[x] == old(traceVariable[x]); ensures assignmentsTrace == old(assignmentsTrace) + getDecisionLevel(decisionLevel); ensures forall i :: 0 <= i < decisionLevel ==> old(getDecisionLevel(i)) == getDecisionLevel(i); ensures countUnsetVariables(truthAssignment[..]) < old(countUnsetVariables(truthAssignment[..])); ensures ( ghost var (variable, val) := convertLVtoVI(literal, value); isSatisfiableExtend(old(truthAssignment[..])[variable as int := val]) <==> isSatisfiableExtend(truthAssignment[..]) ); decreases countUnsetVariables(truthAssignment[..]), 0; In the code above, the function \(\texttt {getLiteralValue}(\texttt {tau}, \ell )\) returns the value of the literal \(\ell \) in the truth assignment \(\texttt {tau}\) . Note that the variable truthAssignment is an array, while truthAssignment[..] converts the array to a sequence. The sequence (immutable) is used to represent truth assignments at specification level. The Method revertLastDecisionLevel This method reverts the assignments from in the last layer by changing the value of the respective literals to \(-1\) . The proof of this method requires several helper proofs that confirm that the data structures are updated correctly. To quickly update trueLiteralsCount and falseLiteralsCount, we again use the two arrays positiveLiteralsToClauses and negativeLiteralsToClauses. As part of postcondition, we prove that the literals not on the last decision level remain unchanged: method revertLastDecisionLevel() requires valid(); requires 0 <= decisionLevel; modifies `assignmentsTrace, `decisionLevel, truthAssignment, trueLiteralsCount, falseLiteralsCount, traceDLEnd; ensures decisionLevel == old(decisionLevel) - 1; ensures assignmentsTrace == old(assignmentsTrace) - old(getDecisionLevel(decisionLevel)); ensures valid(); ensures forall i :: 0 <= i <= decisionLevel ==> old(getDecisionLevel(i)) == getDecisionLevel(i); ensures decisionLevel > -1 ==> traceDLStart[decisionLevel] < traceDLEnd[decisionLevel]; Proof of Functional Correctness for the Main Algorithm The entry point called to solve the SAT instance is solve: method solve() returns (result : SATUNSAT) requires formula.valid(); requires formula.decisionLevel > -1 ==> formula.traceDLStart[formula.decisionLevel] < formula.traceDLEnd[formula.decisionLevel]; modifies formula.truthAssignment, formula.traceVariable, formula.traceValue, formula.traceDLStart, formula.traceDLEnd, formula`decisionLevel, formula`assignmentsTrace, formula.trueLiteralsCount, formula.falseLiteralsCount; ensures formula.valid(); ensures old(formula.decisionLevel) == formula.decisionLevel; ensures old(formula.assignmentsTrace) == formula.assignmentsTrace; ensures forall i :: 0 <= i <= formula.decisionLevel ==> old(formula.getDecisionLevel(i)) == formula.getDecisionLevel(i); ensures formula.decisionLevel > -1 ==> formula.traceDLStart[formula.decisionLevel] < formula.traceDLEnd[formula.decisionLevel]; ensures result.SAT? 
==> formula.validValuesTruthAssignment(result.tau); ensures formula.countUnsetVariables(formula.truthAssignment[..]) == formula.countUnsetVariables(old(formula.truthAssignment[..])); ensures result.SAT? ==> formula.isSatisfiableExtend(formula.truthAssignment[..]); ensures result.UNSAT? ==> !formula.isSatisfiableExtend(formula.truthAssignment[..]); decreases formula.countUnsetVariables(formula.truthAssignment[..]), 1; It implements the \(\textit {DPLL-procedure}\) given in Algorithm  using recursion. However, for efficiency, the data structures are kept in the instance of a class instead of being passed as arguments. The most important postconditions stating the functional correctness are: if it returns SAT then the current truthAssignment can be extended to satisfy the formula, and if returns UNSAT it means that no truth assignment extending the current truthAssignment satisfies it. We use the predicate \(\texttt {isSatisfiableExtend}(\texttt {tau}, \texttt {clauses})\) , which tests whether there exists a complete assignment that extends the partial truth assignment \(\texttt {tau}\) and that satisfies the formula. We also show as a postcondition that solve ends in the same state as where it starts. This means that we chose to undo the changes even if we find a solution. Otherwise, the preconditions and postconditions for solve would need to change accordingly and become more verbose and less elegant. For simplicity, we chose to revert to the initial state every time. A flowchart that shows graphically the main flow of the solve method, together with the most important statements that hold after each line, is presented in Figure REF . <FIGURE>Once a literal is chosen, the updates to the data structures are delegated to the step method. This removes some duplication in the code, but it also makes the verification take less time. The preconditions and postconditions of step are the same as in solve, but taking into account that step additionally takes as arguments a literal and a desired value for this literal. The method step calls setLiteral to set the literal and perform unit propagation, and then calls solve recursively: method step(literal : Int32.t, value : bool) returns (result : SATUNSAT) requires formula.valid(); requires formula.decisionLevel < formula.variablesCount - 1; requires formula.decisionLevel > -1 ==> formula.traceDLStart[formula.decisionLevel] < formula.traceDLEnd[formula.decisionLevel]; requires !formula.hasEmptyClause(); requires !formula.isEmpty(); requires formula.validLiteral(literal); requires formula.getLiteralValue(formula.truthAssignment[..], literal) == -1; modifies formula.truthAssignment, formula.traceVariable, formula.traceValue, formula.traceDLStart, formula.traceDLEnd, formula`decisionLevel, formula`assignmentsTrace, formula.trueLiteralsCount, formula.falseLiteralsCount; ensures formula.valid(); ensures old(formula.decisionLevel) == formula.decisionLevel; ensures old(formula.assignmentsTrace) == formula.assignmentsTrace; ensures forall i :: 0 <= i <= formula.decisionLevel ==> old(formula.getDecisionLevel(i)) == formula.getDecisionLevel(i); ensures formula.decisionLevel > -1 ==> formula.traceDLStart[formula.decisionLevel] < formula.traceDLEnd[formula.decisionLevel]; ensures result.SAT? ==> formula.validValuesTruthAssignment(result.tau); ensures result.SAT? ==> ( var (variable, val) := formula.convertLVtoVI(literal, value); formula.isSatisfiableExtend(formula.truthAssignment[..][variable := val])); ensures result.UNSAT? 
==> ( var (variable, val) := formula.convertLVtoVI(literal, value); !formula.isSatisfiableExtend(formula.truthAssignment[..][variable := val])); ensures formula.countUnsetVariables(formula.truthAssignment[..]) == formula.countUnsetVariables(old(formula.truthAssignment[..])); decreases formula.countUnsetVariables(formula.truthAssignment[..]), 0; Benchmarks Dafny code can be extracted to C#, and then compiled and executed as regular C# code. In this section, we present the results obtained by benchmarking the C# code extracted from our verified solver (we refer to this code as the Dafny solver) to see how it performs against other solvers. Benchmark used. For benchmarking, we use some of the tests in SATLIB - Benchmark Problemshttps://www.cs.ubc.ca/~hoos/SATLIB/benchm.html. We select the sets uf100, uuf100 up to uf200, uuf200. These sets of SAT problems all contain instances in 3-CNF, with uf denoting satisfiable instances and uuf denoting unsatisfiable instances. The numbers in the names (e.g., 100, 200) denote the number of propositional variables. The number of clauses in each set is chosen such that the problems sit at the satisfiability threshold [14]}. We choose these sets of SAT instances because they are small enough for DPLL to solve in reasonable time, but big enough so that the search dominates the execution time (and not, e.g., reading the input). Benchmarking methodology. We run the tests using the benchmarking framework BenchExec [15]}, a solution that reliably measures and limits resource usage of the benchmarked toolhttps://github.com/sosy-lab/benchexec. We used BenchExec to limit resource usage to set the following for each run: time limit to 5000s, memory limit to 1024 MB, CPU core limit to 1. We used a Intel Core i7-9700K CPU @ 3.60GHz machine (cores: 4, threads: 8, frequency: 4900 MHz, Turbo Boost: enabled; RAM: 8290 MB, Operating System Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic, Dafny 2.3.0.10506, Mono JIT compiler 6.8.0.105, G++ 7.5.0). Benchmark 1. We first check whether the extracted code has any added overhead compared to a implementation written directly in C#. For this purpose, we write in C# a solver implementing the same algorithm and data structures as the Dafny solver. We find that there is a negligible overhead coming from the method we use to read files in Dafny, and not from the extraction process itself. In our results, the reading and parsing of the input file in Dafny takes at least twice as long as in C#. On small inputs, the C# solver therefore outperforms the Dafny solver. On larger inputs, the performance is the same. Benchmark 2. The language C# is not popular in SAT solving, with C++ being the language of choice because of performance. Therefore, we implement the same DPLL algorithm directly in C++. We benchmark our verified Dafny solver against the C++ implementation. The results show that the (unverified) C++ solver is approximately twice as fast on large tests as our verified Dafny solver. Benchmark 3. To put the performance of our verified Dafny solver into context, we also benchmark against the solver MiniSat http://minisat.se/ (with the default settings). As MiniSat implements the full CDCL algorithm, which can be exponentially faster than DPLL, it outperforms our solver significantly. However, the correctness guarantee offered by our verified solver is higher than the unverified C code of MiniSat. In Table REF , we summarize the running times of all solvers on the respective sets of tests. 
We report the average running time, the standard deviation and the sum over all running times for SAT instances in each particular set of tests. <TABLE>Figures REF and REF present the running times (log scale) of all four solvers mentioned above on all SAT instances in the uf200 and uuf200 sets, respectively. The running times are sorted by the time it takes for the Dafny solver to finish. <FIGURE><FIGURE>We conclude that our verified Dafny solver is competitive with an equivalent implementation in C++ (it is only two times slower), but the correctness guarantee offered by our verified solver makes it significantly more trustworthy. Related Work The SAT solver versat [16]} was implemented and verified in the Guru programming language using dependent types. As our solver, it also implements efficient data structures. However, it relies on a translation to C where data structures are implemented imperatively by using reference counting and a statically enforced read/write discipline. Unlike our approach, the solver is only verified to be sound: if it produces an UNSAT answer, then the input formula truly is unsatisfiable. However, termination and completeness (if the solver produces SAT, then the formula truly is satisfiable) are not verified. Another small difference is the verification guarantee: versat is verified to output UNSAT only if a resolution proof of the empty clause exists, while in our approach we use a semantic criterion: our solver always terminates and produces UNSAT only if there is no satisfying model of the input formula. Of course, in the case of propositional logic these criteria are equivalent and therefore this difference is mostly a matter of implementation. Unlike our solver, some checks are not proved statically and must be checked dynamically, so they could be a source of incompleteness. An advantage of versat over our approach is that is implements more optimizations, like conflict analysis and clause learning, which enable it to be more competitive in terms of running time. Blanchette and others [11]} present a certified SAT solving framework verified in the Isabelle/HOL proof assistant. The proof effort is part of the Isabelle Formalization of Logic project. The framework is based on refinement: at the highest level sit several calculi like CDCL and DPLL, which are formally proved. Depending on the strategy, the calculi are also shown to be terminating. The calculi are shown to be refined by a functional program. Finally, at the lowest level is an imperative implementation in Standard ML, which is shown to be a refinement of the functional implementation. Emphasis is also placed on meta-theoretical consideration. The final solver can still two orders of magnitude slower than a state-of-the-art C solver and therefore additional optimizations [18]} are desirable. In contrast, in our own work we do not investigate any meta-theoretical properties of the DPLL/CDCL frameworks; we simply concentrate on obtaining a verified SAT solver. We investigate to what extent directly proving the imperative algorithm is possible in an auto-active manner. A key challenge is that the verification of Dafny code may take a lot of time in certain cases and we have to optimize our code for verification time as well. Another SAT solver verified in Isabelle/HOL, is by Marić [19]}. In contrast to previous formalization, the verification methodology is not based on refinement. Instead, the Hoare triples associated to the solver pseudo-code are verified in Isabelle/HOL. 
In subsequent work [20]}, Marić and Janičić prove in Isabelle the functional correctness of a SAT solver represented as an abstract transition system. Another formalization of a SAT solver (extended with linear arithmetic) is by Lescuyer [21]}, who verifies a DPLL-based decision procedure for propositional logic in Coq and exposes it as a reflexive tactic. Finally, a decision procedure based on DPLL is also verified by Shankar and Vaucher [22]} in the PVS system. For the proof, they rely on subtyping and dependent types. Berger et al. have used the Minlog proof assistant to extract a certified SAT solver [23]}. For these last approaches, performance considerations seem to be secondary. Conclusion and Further Work We have developed a formally verified implementation of the DPLL algorithm in the Dafny programming language. Our implementation is competitive in terms of execution time, but it is also trustworthy: all specifications are computer checked by the Dafny system. Other approaches to SAT solvers that rely on type checkers [11]} are arguably even more trustworthy, since they are verified by a software system satisfying the de Bruijn criterion. However, we believe that our approach can strike a good balance between efficiency and trustworthiness of the final product. Our implementation incorporates data structures to quickly identify unit clauses and perform unit propagation. The formalization consists of around 3088 lines of Dafny code, including the parser. The code was written by the first author in approximately one year and a half of part time work. The author also learned Dafny during that time. The ratio between lines of proof and lines of code is approximately 4/1. Table REF contains a summary of our verified solver in numbers. <TABLE>In addition to coming up with the right invariants, the main challenge in the development of the verified solver is the large amount of time required by the Dafny system to discharge the verification conditions. In order to minimize this verification time, we develop and use the following development/verification methodology: Avoid nested loops in methods. Nested loops usually require duplicating invariants, which decreases elegance and increases verification time. An example of applying this tip is the revertLastDecisionLevel method (in the file solver/formula.dfy), whose purpose is to backtrack to the previous decision level. The code of the method is currently very simple: it calls removeLastVariable repeatedly in a while loop. However, because it is so simple, it is tempting to inline removeLastVariable – this would lead to a significant increase in verification time. In the same spirit, avoid multiple quantifications in specifications. We have found it useful, whenever having a specification of the form forall x :: forall y :: P(x, y), to try to extract the subformula forall y :: P(x, y) as a separate predicate of x. This helps in two distinct ways: it forces the programmer to name the subformula, thereby clearing their thought process and making their intention more clear, and it enables the Z3 pattern-based quantifier instantiation to perform better [25]}. Use very small methods. We find that it is better to extract as a method even code that is only a few lines of code long. In a usual programming language, such methods would be inlined (by the programmer). In our development, it is not unusual for such methods (with very few lines of code) to require many more helper annotations (invariants, helper assertions, etc.) 
and take significant time to verify. Use minimal modifies clauses in methods and reads clauses in functions. In particular, we make extensive use of the less well-known backtick operator in Dafny. During development, use Dafny to verify only the lemma/method currently being worked on. Run Dafny on the entire project at the end. To force Dafny to check only one method, we use the -proc command line switch. Finally, we have found that using the rather nice Z3 axiom profiler [26]} to optimize verification time does not scale well to projects the size of our solver. Our project shows that it is possible to obtain a fully verified SAT solver written in assertional style, solver that is competitive in terms of running time with similar solvers written in non-verifiable languages. However, our experience with the verified implementation of the solver is that it currently takes significant effort and expertise to achieve this. We consider that three directions of action for the development of Dafny (and other similar auto-active verification tools) would be beneficial in order to improve this situation: Improve verification time of individual methods/lemmas, Make failures of verification obligations to check more explainable, and Devise a method better than asserts to guide the verifier manually. As future work, we would like to verify an implementation of the full CDCL algorithm, thereby obtaining a verified solver that is competitive against state-of-the-art SAT solvers. In order to upgrade to a competitive CDCL solver, we need to modify the algorithm to implement a back-jumping and clause learning strategy, but also implement the two watched literals data structure [13]}, which becomes more important for performance when the number of clauses grows. Acknowledgments This work was supported by a grant of the Alexandru Ioan Cuza University of Iaşi, within the Research Grants program UAIC Grant, code GI-UAIC-2018-07.
i
adbac909-931e-4f38-bf6e-c1e8f9b488e2
The SAT solver versat [1]} was implemented and verified in the Guru programming language using dependent types. Like our solver, it also implements efficient data structures. However, it relies on a translation to C where data structures are implemented imperatively by using reference counting and a statically enforced read/write discipline. Unlike our approach, the solver is only verified to be sound: if it produces an UNSAT answer, then the input formula truly is unsatisfiable. However, termination and completeness (if the solver produces SAT, then the formula truly is satisfiable) are not verified. Another small difference is the verification guarantee: versat is verified to output UNSAT only if a resolution proof of the empty clause exists, while in our approach we use a semantic criterion: our solver always terminates and produces UNSAT only if there is no satisfying model of the input formula. Of course, in the case of propositional logic these criteria are equivalent and therefore this difference is mostly a matter of implementation. Unlike in our solver, some checks are not proved statically and must be checked dynamically, so they could be a source of incompleteness. An advantage of versat over our approach is that it implements more optimizations, like conflict analysis and clause learning, which enable it to be more competitive in terms of running time. Blanchette and others [2]} present a certified SAT solving framework verified in the Isabelle/HOL proof assistant. The proof effort is part of the Isabelle Formalization of Logic project. The framework is based on refinement: at the highest level sit several calculi like CDCL and DPLL, which are formally proved. Depending on the strategy, the calculi are also shown to be terminating. The calculi are shown to be refined by a functional program. Finally, at the lowest level is an imperative implementation in Standard ML, which is shown to be a refinement of the functional implementation. Emphasis is also placed on meta-theoretical considerations. The final solver can still be two orders of magnitude slower than a state-of-the-art C solver and therefore additional optimizations [3]} are desirable. In contrast, in our own work we do not investigate any meta-theoretical properties of the DPLL/CDCL frameworks; we simply concentrate on obtaining a verified SAT solver. We investigate to what extent directly proving the imperative algorithm is possible in an auto-active manner. A key challenge is that the verification of Dafny code may take a lot of time in certain cases and we have to optimize our code for verification time as well. Another SAT solver verified in Isabelle/HOL is by Marić [4]}. In contrast to the previous formalization, the verification methodology is not based on refinement. Instead, the Hoare triples associated to the solver pseudo-code are verified in Isabelle/HOL. In subsequent work [5]}, Marić and Janičić prove in Isabelle the functional correctness of a SAT solver represented as an abstract transition system. Another formalization of a SAT solver (extended with linear arithmetic) is by Lescuyer [6]}, who verifies a DPLL-based decision procedure for propositional logic in Coq and exposes it as a reflexive tactic. Finally, a decision procedure based on DPLL is also verified by Shankar and Vaucher [7]} in the PVS system. For the proof, they rely on subtyping and dependent types. Berger et al. have used the Minlog proof assistant to extract a certified SAT solver [8]}.
For these last approaches, performance considerations seem to be secondary.
w
ac96480f-fc32-4e7b-b3c5-c11739ce7e03
Let us start with a strong statement: the fake news phenomenon is currently a big problem for societies, nations and individual citizens. Fake news has already plagued democratic elections, damaged the reputations of individuals and organizations, and negatively impacted citizens during the COVID-19 pandemic (e.g., fake news on alleged medicines in the US or in Brazil). It is clear we need agile and reliable solutions to fight and counter the fake news problem. Therefore, this article provides a critical scrutiny of the present state of knowledge in fake news detection, both to show possible solutions and to motivate future research in this domain.
i
891df5a5-8841-4ca3-8b9b-4412198e8458
Fake news is a tough challenge to overcome, however there are some efforts from the Machine Learning (ML) community to stand up to this harmful phenomenon. In this mapping study, we present such efforts, solutions and ideas. As it is presented in Fig. REF , fake news detection may be performed by analysing several types of digital content such as images, text and network data, as well as the author/source reputation. <FIGURE>
i
f316891a-7759-4151-9e7f-de2c5d489aba
This survey is not the first one in the domain of fake news. Another major comprehensive work addressing the ways to approach fake news detection (mainly text analysis-based) and mainstream fake news datasets is [1]}. According to it, the state-of-the-art approaches for this kind of analysis may be classified into five general groups with methods relying upon: (i) linguistic features, (ii) deception modelling, (iii) clustering, (iv) predictive modelling and (v) content cues. With regard to the text characteristics, style-based and pattern-based detection methods are also presented in [2]}. Those methods rely on the analysis of specific language attributes and the language structure. The analyzed attributes found by the authors of the survey include such features as: quantity of the language elements (e.g. verbs, nouns, sentences, paragraphs), statistical assessment of language complexity, uncertainty (e.g. number of quantifiers, generalizations, question marks in the text), subjectivity, non-immediacy (such as the count of rhetorical questions or passive voice), sentiment, diversity, informality and specificity of the analyzed text. Paper [3]} surveys several approaches to assessing fake news, which stem from two primary groups: linguistic cue approaches (applying ML) as well as network analysis approaches.
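As a toy illustration of the kind of shallow, language-level attributes listed above (element counts, complexity, uncertainty, diversity), the sketch below computes a handful of them with plain string processing; the surveyed systems use far richer feature sets and proper NLP tooling, and the marker list here is invented.

import re

UNCERTAINTY_MARKERS = {"may", "might", "could", "reportedly", "allegedly", "some", "many"}

def shallow_text_features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "num_words": len(words),
        "num_sentences": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # crude complexity proxy
        "lexical_diversity": len(set(words)) / max(len(words), 1),   # type-token ratio
        "question_marks": text.count("?"),                           # one non-immediacy/uncertainty cue
        "uncertainty_terms": sum(w in UNCERTAINTY_MARKERS for w in words),
    }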
i
f5969cc3-2d4f-4043-aa2a-9d7621f76ec2
Yet another category of solutions is network-based analysis. In [1]}, two distinct categories are mentioned: (i) social network behavior analysis to authenticate the news publisher's social media identity and to verify their trustworthiness and (ii) scalable computational fact-checking methods based on knowledge networks. Beside text-based and network-based analysis, some other approaches are reviewed. For example, [2]} attempts to survey identification and mitigation techniques in combating fake news and discusses feedback-based identification approaches.
i
136a76a8-e8c3-4164-ab92-74f13eaf23ff
Crowd-signal based methods are also reported in [1]}, while content propagation modelling for fake news detection purposes, alongside credibility assessment methods, is discussed in [2]}. Such credibility-based approaches are categorized here into four groups: evaluation of news headlines, news source, news comments and news spreaders/re-publishers. In addition, in some surveys, content-based approaches using non-text analysis are discussed. The most common ones are based on image analysis [3]}, [4]}.
i
661995a7-4e94-44ca-949b-9898672318eb
Complementary to the mentioned surveys, the present paper is unique in taking a very different angle on fake news detection methods (focusing on advanced ML approaches). Moreover, in addition to overviewing current methods, we propose our own analysis criterion and categorization. We also suggest expanding the context of methods applicable to such a task and describe the datasets, initiatives and current projects, as well as the future challenges.
i
3d20d882-4dc1-4533-bec1-d11e0f7718c7
The remainder of the paper is structured as follows: in Section 1, previous surveys are reviewed and the historical evolution of fake news, its current impact, and the problem of defining it are presented. In Section 2, we present current activities to address the fake news detection problem as well as technological and educational actions. Section 3 constitutes the in-depth systematic mapping of ML-based fake news detection methods focused on the analysis of text, images, network data and reputation. Section 4 describes the relevant datasets used nowadays. In the final part of the paper, we present the most pressing challenges in this domain and draw the main conclusions.
i
65674fca-31b1-4450-b0a7-107d90b97762
A function \(f:[n] \rightarrow \mathbb {R}\) is convex if \(f(x) - f(x-1) \le f(x+1) - f(x)\) for all \(x \in \lbrace 2,3,\dots ,n-1\rbrace \) . Convexity of functions is a natural and interesting property. Given oracle access to a function \(f\) , an \(\varepsilon \) -tester for convexity has to decide with high constant probability whether \(f\) is a convex function or whether every convex function evaluates differently from \(f\) on at least \(\varepsilon n\) domain points, where \(\varepsilon \in (0,1)\) . Parnas, Ron, and Rubinfeld [1]} gave an \(\varepsilon \) -tester for convexity that has query complexity \(O(\frac{\log n}{\varepsilon })\) . Blais, Raskhodnikova, and Yaroslavtsev [2]} showed that this bound is tight for constant \(\varepsilon \) for nonadaptive algorithms (the queries of a nonadaptive algorithm do not depend on the answers to previous queries; the algorithm is adaptive otherwise). An improved upper bound of \(O(\frac{\log (\varepsilon n)}{\varepsilon })\) was shown by Ben-Eliezer [3]} in a work on the more general question of testing local properties. Recently, Belovs, Blais and Bommireddi [4]} complemented this result by showing a tight lower bound of \(\Omega (\frac{\log (\varepsilon n)}{\varepsilon })\) .
i
40c500cd-9b93-4b89-822c-8973ea741619
In this work, we further investigate and develop new insights into this well-studied problem, thereby showing that there is still some way to go towards a full understanding of testing convexity of functions \(f:[n] \rightarrow \mathbb {R}\) . We show that the number of distinct discrete derivatives \(s\) , as opposed to the input size \(n\) , is the right input parameter to express the complexity of convexity testing, where a discrete derivative is a value of the form \(f(x+1) - f(x)\) for \(x \in [n-1]\) . Specifically, we design a nonadaptive convexity tester with query complexity \(O(\frac{\log s}{\varepsilon })\) , and complement it with a nearly matching lower bound of \(\Omega (\frac{\log (\varepsilon s)}{\varepsilon })\) . Our work is motivated by the work of Pallavoor, Raskhodnikova and Varma [1]} who introduced the notion of parameterization in the setting of sublinear algorithms.
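To make the parameter \(s\) concrete, the following Python sketch (our own illustration, not the sublinear tester itself, which inspects only \(O(\frac{\log s}{\varepsilon })\) points) reads the full truth table of \(f\) on \([n]\) as a list, checks convexity via the discrete-derivative definition, and computes \(s\) ; the function names are ours.

def discrete_derivatives(f):
    # Discrete derivatives f(x+1) - f(x) for x = 1, ..., n-1; f is a list of n reals.
    return [f[i + 1] - f[i] for i in range(len(f) - 1)]

def is_convex(f):
    # f is convex iff its discrete derivatives are nondecreasing.
    d = discrete_derivatives(f)
    return all(d[i] <= d[i + 1] for i in range(len(d) - 1))

def num_distinct_derivatives(f):
    # The parameter s: the number of distinct discrete derivatives.
    return len(set(discrete_derivatives(f)))

f = [0, 1, 3, 6, 10, 15]   # derivatives 1, 2, 3, 4, 5: convex, s = 5
g = [0, 2, 3, 3, 4, 6]     # derivatives 2, 1, 0, 1, 2: not convex, s = 3
print(is_convex(f), num_distinct_derivatives(f))   # True 5
print(is_convex(g), num_distinct_derivatives(g))   # False 3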
i
f3394edb-302e-40ba-abb6-4eeeeb7a1e03
Our results bring out the fine-grained complexity of the problem of convexity testing. In particular, \(s \le n\) always holds, and therefore our tester is at least as efficient as state-of-the-art convexity testers. Furthermore, the parameterization that we introduce enables us to circumvent the worst-case lower bounds expressed in terms of the input size \(n\) and obtain more efficient algorithms when \(s \ll n\) .
i
f4df1257-7ff6-45c6-826a-a7d59f9a6573
The study of property testing was initiated by Rubinfeld and Sudan [1]} and Goldreich, Goldwasser and Ron [2]}. The first example where parameterization has helped in the design of efficient testers is the work of Jha and Raskhodnikova [3]} on testing the Lipschitz property. A systematic study of parameterization in sublinear-time algorithms was initiated by Pallavoor, Raskhodnikova and Varma [4]} and studied further by [5]}, [6]}, [7]}, [8]}.
w
41609a8d-676b-4d68-8598-d072064656d2
In this work, we are concerned only with convexity of real-valued functions over a \(1D\) domain. We would like to note that not much is known about testing convexity of functions over higher dimensional domains. One possible reason behind this could be the following: there is no single definition of discrete convexity for real-valued functions of multiple variables. For a good overview of this topic, we refer interested readers to the textbook by Murota on discrete convex analysis [1]}. Ben-Eliezer [2]}, in his work on local properties, studied the problem of testing convexity of functions of the form \(f:[n]^2 \rightarrow \mathbb {R}\) and designed a nonadaptive tester with query complexity \(O(n)\) . Later, Belovs, Blais and Bommireddi [3]} showed a nonadaptive query lower bound of \(\Omega (\frac{n}{d})^{\frac{d}{2}}\) for testing convexity of real-valued functions over \([n]^d\) . For functions of the form \(f:[3] \times [n] \rightarrow \mathbb {R}\) , they design an adaptive tester with query complexity \(O(\log ^2 n)\) and show that the complexity of nonadaptive testing is \(O(\sqrt{n})\) .
w
cf3a6c7f-e717-4d9d-b8c2-28f23a535e89
Numerous works have applied statistical language models (LMs) to source code to help tackle important tasks in software engineering, including code completion [1]}, program repair [2]}, and many others [3]}. As in modeling natural language, creating an appropriate vocabulary is a crucial prerequisite [4]}. However, when writing source code, software developers can create arbitrary identifiers, which often contain multiple words, e.g., addItemsToList. Due to this fundamental fact, models of code often end up with an extremely sparse vocabulary containing many rare words when processing code corpora. Training models with such a sparse (and typically large) vocabulary is ineffective, and the obtained models often have poor performance [4]}. In addition, if identifiers are not observed in the vocabulary, the model cannot handle them, which is known as the notorious out-of-vocabulary (OOV) problem.
i
190fedfe-871a-4795-ad35-e7f4e1b8eda0
Currently, open-vocabulary methods like the Byte-Pair Encoding (BPE) algorithm [1]} are widely used in modeling natural languages and achieve promising results in practice. These methods can solve the OOV problem while customizing the size of the vocabulary. Inspired by such success, Karampatsis et al. [2]} first applied the BPE algorithm to construct a vocabulary from source code and showed that open-vocabulary LMs have outstanding performance on the code completion task. However, BPE selects the most frequent sub-words into the vocabulary, and this frequency-based approach often fails to capture the semantics and intentions of identifier names when choosing sub-words. Although developers can create identifiers at will, they usually follow certain naming conventions that make identifiers meaningful, legible and easy to understand, either in camelCase or in snake_case [3]}. For example, the method name getListener follows the camelCase convention, and a programmer can easily infer that this method can be used to get a Listener object. At the same time, in our preliminary study, the BPE algorithm represents it as three sub-words: get, List and ener, of which the latter two do not reflect the semantics developers try to convey.
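For illustration, a convention-aware splitter of the kind discussed below can be sketched in a few lines of Python; this is a simplified stand-in for dedicated identifier splitting tools, and the regular expression is our own approximation of camelCase/snake_case splitting.

import re

# Match runs of capitals (e.g. "XML"), capitalized words, lowercase runs, or digits.
_CAMEL = re.compile(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+")

def split_identifier(name):
    # Split on underscores first (snake_case), then on camelCase boundaries.
    parts = []
    for chunk in name.split("_"):
        parts.extend(_CAMEL.findall(chunk))
    return parts

print(split_identifier("getListener"))     # ['get', 'Listener']
print(split_identifier("addItemsToList"))  # ['add', 'Items', 'To', 'List']
print(split_identifier("parse_XML_file"))  # ['parse', 'XML', 'file']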
i
c41b72cb-7893-42da-85d4-aef1f50ba362
In order to empower the BPE algorithm with the ability to better capture semantics when splitting words, an intuitive preprocessing strategy is to split compound identifiers into several words that convey meaning, a technique known as identifier splitting. Prior research has demonstrated that various information retrieval models for program comprehension tasks benefit from identifier splitting, e.g., feature-related code localization [1]}, code reuse [2]}. However, it is unclear whether this empirical conclusion carries over to modern LMs of code, as no such work has demonstrated it. As stated above, open-vocabulary LMs assist many software engineering tasks effectively but are weak in capturing the semantics of identifiers when creating the vocabulary. Thus, it is imperative to clarify whether we can improve performance by a considerable margin by combining identifier splitting with the BPE algorithm.
i
a6e338a7-b3a8-4c24-872b-a0c782aca629
In this paper, we investigate the potential benefits of splitting identifiers in open-vocabulary LMs of code. Specifically, we adopt the same LMs presented by Karampatsis et al. [1]}, which are the first to adopt the BPE algorithm in code modeling. To achieve the goal, we propose to apply identifier splitting in two stages of open-vocabulary LMs: vocabulary construction and model input processing. Furthermore, we propose two different preprocessing strategies in these stages to apply identifier splitting techniques: (1) simple strategy: we split all identifiers in the vocabulary construction stage and apply identifier splitting before BPE in the model input processing stage; (2) hybrid strategy: in the vocabulary construction stage, we first split all identifiers and then merge them with the original corpora. In the model input processing stage, we apply identifier splitting only when BPE fails to tokenize identifiers in their original forms. We train LMs under these different settings and evaluate them on the code completion task, as in [1]}, to show the effectiveness of identifier splitting in open-vocabulary LMs.
i
06898891-a5f8-4337-8e10-f6835223f13d
We perform experiments on the C language dataset released by Karampatsis et al. [1]}. We evaluate the cross entropy of LMs, and use Mean Reciprocal Rank (MRR) to measure the performance of LMs on the code completion task. Furthermore, we also obtain the MRR and recall at rank 10 (R@10) on predicting identifier tokens (excluding keywords, punctuations, etc.). The experimental results show that simply adding identifier splitting to the preprocessing procedure does not suffice; it degrades MRR by \(0.46\%\) and \(5.68\%\) on predicting all tokens and identifiers, respectively. At the same time, the hybrid strategy is more effective for open-vocabulary LMs, outperforming the LMs with the original setting by \(6.23\%\) in terms of MRR on predicting identifier tokens. The improvements of \(2.02\%\) in LM entropy and \(3.68\%\) in terms of R@10 on predicting identifiers also confirm the hybrid strategy's effectiveness. The results highlight that identifier splitting can be combined with open-vocabulary methods to enhance the performance of language models of source code.
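For reference, the two ranking metrics can be computed as in the sketch below (our own illustration); ranks holds the 1-based position of the ground-truth token in each ranked prediction list (None if absent), and whether reciprocal ranks are truncated at rank 10 is an assumption of this sketch.

def mrr(ranks, cutoff=10):
    # Mean reciprocal rank; a ground truth outside the cutoff contributes 0.
    return sum(1.0 / r for r in ranks if r is not None and r <= cutoff) / len(ranks)

def recall_at(ranks, k=10):
    # R@k: fraction of predictions whose ground truth appears in the top k.
    return sum(1 for r in ranks if r is not None and r <= k) / len(ranks)

ranks = [1, 3, None, 2, 12]
print(mrr(ranks))        # (1 + 1/3 + 0 + 1/2 + 0) / 5 ≈ 0.367
print(recall_at(ranks))  # 3 / 5 = 0.6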
i
1134e3fd-16fc-4821-98d5-45dfc87b672f
The rest of this paper is organized as follows. Section  briefly describes the background of this paper. In Section , we elaborate on our methodology for applying identifier splitting in the code completion pipeline. We describe the experimental settings and present the results of our experiments in Section . Section  discusses related work. Finally, we conclude the paper and present future work in Section .
i
f37e14bb-4119-469f-8f0e-560bad7cd7a3
Figure REF presents an overview of how we apply identifier splitting in the open-vocabulary language models. It shows that identifier splitting can be used in both vocabulary construction and model inputs processing procedures before training the language models. We elaborate the motivation and processing details as follows.
m
a88cb37f-db2c-49b7-890f-bb303751fe61
Vocabulary Construction. As shown in Figure REF , we apply the BPE algorithm on the input code corpora and output a vocabulary with a predefined size. The vocabulary will be used in the tokenization of model inputs later.
m
4b6c12c1-cd54-4206-8924-fda85659dfe6
As introduced in Section , the BPE algorithm builds a vocabulary based on the frequency of pairs appearing in training corpora. Prior works sample some software projects as a corpus to create a vocabulary, and then use the vocabulary to tokenize other mutually exclusive projects [1]}. We note that many complex identifiers are usually project-specific or even file-specific. Researchers have shown that code has a high degree of localness, where identifiers are repeated often within close distance while they are rarely used in other projects [2]}. This means that the vocabulary created on one corpus may contain many complex compounds (as they appear very frequently in the corpus). However, those complex compounds are likely to be rare in other corpora. For a vocabulary of limited size, these rare compounds take up space that could otherwise be used for sub-words applicable in the BPE procedure on other corpora, reducing the efficiency of the vocabulary. This motivates us to apply identifier splitting techniques to the corpus.
m
fd964920-4c70-4214-a59e-05c0f94e20b1
Identifier splitting can decompose complex identifiers into several sub-words. Karampatsis et al. [1]} find that, on a corpus of around \(11.6\) million unique tokens, the number of unique tokens decreases dramatically by around \(90\%\) after splitting identifiers. This implies that although the complex compounds are usually project-specific, the subunits that make up these compounds are highly repetitive across different projects. Instead of creating a vocabulary that contains many complex words, we propose to construct a vocabulary based on the corpus after identifier splitting (Split Corpora as in Figure REF ).
m
c03e4781-6e8a-471d-a6e5-f4dae5cd68ea
We do not claim that using identifier splitting to process the corpus is always beneficial. Integrating identifier splitting into vocabulary creation may have negative impacts. The vocabulary created from the split corpora may need multiple sub-words to represent common compounds shared across different projects, e.g., popular API names. Splitting such common compounds may increase the length of the tokenized sequence and make it harder to relate the current prediction to the past context of inputs [1]}. Thus, to tackle this issue, we propose another strategy that merges the original and split corpora and builds a vocabulary from the combined corpora (Merged Corpora as in Figure REF ).
m
cd44a446-97ba-4c74-abef-34e789847022
Model Inputs Processing. As shown in Figure REF , the model input processing part takes input tokens and processes them into lists of sub-tokens for language model training. To handle an input token that is not in the vocabulary, we use the BPE algorithm to decompose this input into a list of sub-tokens and then feed these sub-tokens into the model. As stated in Section , the vocabulary created by the BPE algorithm is an ordered list of sub-words. When BPE decomposes input tokens, it follows the same order as recorded in the vocabulary. We take the identifier getCategory as an example. If we directly apply BPE (using a vocabulary of \(10,000\) words) to this identifier, we get the following three sub-tokens: getC, ateg and ory, which obviously break the original semantics of the identifier name. Although the word Category is in the vocabulary, its position (\(9096\) th) is close to the end. When BPE traverses the vocabulary, it encounters and creates the sub-token getC (\(1383\) rd) much earlier. If we first split this identifier into get and Category, and then apply BPE (using the same vocabulary) on the two words, we still get get and Category. Identifier splitting can utilize the semantic information conveyed by naming conventions and prevent less meaningful sub-tokens (e.g., getC and ateg) from being created. This observation inspires us to apply identifier splitting before BPE.
m
b0fa485f-9245-4ce9-8394-a742f539408d
Splitting model inputs may also have negative impacts. For instance, identifiers (e.g., types of exceptions or methods like toString) can be shared across different projects, especially in object-oriented programming languages like Java. Such identifiers frequently appear in the corpus and are consequently included in the vocabulary. They can be compactly represented using only one token, while identifier splitting would force them to be represented using multiple sub-tokens. As a result, we use a hybrid strategy to mitigate such negative impacts. More specifically, we first apply BPE to an identifier. If the tokenized result is identical to the original identifier, we directly feed it into the model. Otherwise, we apply identifier splitting first and feed the resulting parts to BPE.
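A minimal Python sketch of this hybrid rule is shown below; bpe_encode stands for a trained BPE segmenter and split_identifier for an identifier splitting tool, both passed in as callables since the exact interfaces of our implementation are not spelled out here.

def hybrid_tokenize(token, bpe_encode, split_identifier):
    # Keep tokens that BPE already represents as a single unit;
    # otherwise split by naming convention first, then BPE each part.
    pieces = bpe_encode(token)
    if pieces == [token]:
        return pieces
    subtokens = []
    for part in split_identifier(token):
        subtokens.extend(bpe_encode(part))
    return subtokens

# With a vocabulary containing "toString" but not "getCategory":
#   hybrid_tokenize("toString", ...)    -> ["toString"]         (kept compact)
#   hybrid_tokenize("getCategory", ...) -> ["get", "Category"]  (instead of getC/ateg/ory)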
m
736b73b1-a27c-4bd2-bfb4-a17cdedeac98
Considering the above, we combine the different operations in vocabulary construction and model input processing and propose the following three settings, as shown in Figure REF , to explore the effectiveness of identifier splitting in open-vocabulary LMs:
m
94782cce-7f24-4bbe-b8cd-7188d0537b7e
(1) Original: using the BPE algorithm to create a vocabulary directly and then using the vocabulary to tokenize corpora as input; no identifier splitting is applied in this setting. (2) Simple strategy: splitting all identifiers in the corpora first, then using BPE to construct a vocabulary, and splitting all identifiers in model inputs. (3) Hybrid strategy: splitting identifiers and merging them with the original corpora for BPE vocabulary construction, and splitting identifiers in model inputs only when BPE fails to tokenize them in their original forms.
m
63f0a724-0efc-4d41-b9c9-2627a4cc01b9
We compare the performance of language models with different strategies. We denote the LMs with original settings, LMs with simple strategy and LMs with hybrid strategy as OriLMs, SSLMs and HSLMs, respectively.
r
068480f6-783c-4443-9f92-4b2698eacf53
Table REF shows the performance of different LMs. We find that SSLMs perform the worst among the three models. Although the entropy of SSLMs is slightly improved in comparison to OriLMs, SSLMs degrade the MRR on predicting all tokens by \(0.46\%\) , which reflects that the model's performance is not as good as that of the OriLMs. For predicting identifier tokens, we observe a bigger performance gap between OriLMs and SSLMs. In terms of R@10 and MRR on predicting identifiers, OriLMs outperform SSLMs by \(3.44\%\) and \(5.68\%\) , respectively. These results indicate that the simple strategy is not adequate for open-vocabulary LMs and even hurts the entropy and performance of LMs on the code completion task in most cases.
r
42f944a3-1e29-4a53-ab7c-a2966c701d43
However, we observe that HSLMs outperform OriLMs by up to \(0.98\%\) in terms of MRR on predicting all tokens. The models' entropy decreases by \(2.02\%\) , showing that the hybrid strategy boosts open-vocabulary LMs. Compared with the results on all-token prediction, HSLMs outperform OriLMs by a larger margin on identifier prediction: the results are boosted by \(3.68\%\) and \(6.23\%\) in terms of R@10 and MRR, respectively. These results demonstrate that combining identifier splitting with the BPE algorithm can improve the performance of open-vocabulary LMs; in particular, the improved performance on identifier prediction reveals that the LMs with the hybrid strategy can better synthesize identifiers from sub-words.
r
f6839fb7-9faa-4155-99c3-97d351023974
In summary, by following the hybrid strategy, identifier splitting can boost the performance of open-vocabulary LMs of code. The performance of open-vocabulary LMs can be improved by \(0.98\%\) and \(6.23\%\) in terms of MRR on predicting all tokens and identifiers, respectively. The entropy of LMs and R@10 on predicting identifiers can also be improved by \(2.02\%\) and \(3.68\%\) , respectively.
r
55571487-1ac3-45c0-91ab-a82230ee6709
In the literature, prior studies have been interested in employing language models on source code to assist software development [1]}, [2]}. However, the identifiers with complex names in source code make these models suffer from the out-of-vocabulary (OOV) problem. Increasing the size of a vocabulary has a limited effect on addressing the problem and makes models harder to scale [3]}. Recently, researchers have applied open-vocabulary methods to code modeling. Karampatsis et al. [4]} are the first to investigate whether open-vocabulary methods can improve the performance of code completion. They trained a GRU-based language model with the BPE algorithm and showed that the model performance increases over closed-vocabulary models across three datasets.
w
2d4572df-3478-45af-952f-d660581003d3
At the same time, numerous identifier splitting techniques have been proposed, and several studies have empirically compared them [1]}, [2]}. Some previous works also try to segment identifiers by naming conventions [3]}, [4]} in vocabulary construction, but no previous work utilizes advanced identifier splitting techniques and combines them with existing open-vocabulary algorithms.
w
4263b889-a43b-42d6-be4d-568ba5481690
In this paper, we investigate the benefit of identifier splitting techniques for code modeling. We propose two strategies to combine identifier splitting with the Byte-Pair Encoding (BPE) algorithm. We train open-vocabulary models with the different strategies and compare their performance on the C language dataset. The evaluation results show that splitting identifiers improves the performance of open-vocabulary models under a hybrid strategy, which can improve LMs by \(6.23\%\) in terms of MRR on predicting identifiers. The entropy of LMs and R@10 on predicting identifiers are also improved by \(2.02\%\) and \(3.68\%\) , respectively. Our study confirms the potential benefits of identifier splitting methods for open-vocabulary language models of C code.
d
1f3a7076-b8c4-4024-a253-31a25642e833
In the future, we plan to validate our findings on more programming languages beyond C, e.g., Java and Python. Also, we are interested in considering more models with different architectures, e.g., Transformer-based models, which have recently drawn researchers' attention. Besides, we plan to investigate whether advanced LMs can boost more code modeling-based tasks, such as code clone detection [1]}, bug localization [2]}, [3]}, and code search [4]}.
d
9c0c1889-7b97-4300-b4cd-8ef8870fc163
Let \(M_{n}(n\geqslant 2)\) be the space of \(n \times n\) complex matrices. Let \(\sigma _{i}\) \((i=1, \cdots , n)\) be the singular values of \(A \in M_{n}\) which is nonsingular and suppose that \(\sigma _{1} \geqslant \sigma _{2} \geqslant \cdots \geqslant \sigma _{n-1} \geqslant \sigma _{n} > 0\) . For \(A=\left[a_{i j}\right] \in M_{n}\) , the Frobenius norm of \(A\) is defined by \(\Vert A\Vert _{F}=\left(\sum _{i, j=1}^{n}\left|a_{i j}\right|^{2}\right)^{1 / 2}={\rm tr}\left( A^{H} A\right)^{\frac{1}{2}}\)
i
ec0c2502-f5f8-4aaf-b4ac-db2d5688ad75
where \(A^{H}\) is the conjugate transpose of \(A\) . The relationship between the Frobenius norm and singular values is \(\Vert A\Vert _{F}^{2}=\sigma _{1}^{2}+\sigma _{2}^{2}+\cdots +\sigma _{n}^{2}\)
i
b907347b-1622-4706-8cd4-c27affccabfd
It is well known that lower bounds for the smallest singular value \(\sigma _{n}\) of a nonsingular matrix \(A \in M_{n}\) have many potential theoretical and practical applications [1]}, [2]}. Yu and Gu [3]} obtained a lower bound for \(\sigma _{n}\) as follows: \(\sigma _{n} \geqslant |\det A|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}}\right)^{(n-1) / 2}=l>0\)
i
0f71812f-e131-423f-ad39-0212bf434370
The above inequality is also shown in [1]}. In [2]}, Zou improved the above inequality by showing that \(\sigma _{n } \geqslant |{\det } A|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-l^2}\right)^{(n-1) / 2}=l_0\)
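Both bounds are easy to check numerically; the following NumPy sketch (our own illustration) computes \(l\) and \(l_{0}\) for a random nonsingular matrix and verifies \(\sigma _{n} \geqslant l_{0} \geqslant l\) .

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))             # almost surely nonsingular

sigma = np.linalg.svd(A, compute_uv=False)  # singular values in descending order
fro2 = np.linalg.norm(A, "fro") ** 2        # equals sum of sigma_i^2
det_abs = abs(np.linalg.det(A))

l = det_abs * ((n - 1) / fro2) ** ((n - 1) / 2)
l0 = det_abs * ((n - 1) / (fro2 - l ** 2)) ** ((n - 1) / 2)

print(sigma[-1], l0, l)                     # sigma_n >= l0 >= l > 0
assert sigma[-1] >= l0 >= l > 0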
i
5ffd5aeb-4a16-4f25-a59c-6fd615cc9921
In [1]}, Lin and Xie improve the lower bound for the smallest singular value of matrices by showing the following: let \(a\) be the smallest positive solution to the equation \(x^{2}\left(\Vert A\Vert _{F}^{2}-x^{2}\right)^{n-1}=|{\det } A|^{2}(n-1)^{n-1} .\)
i
30007b4a-8fe9-4b6e-a766-7d4a04264683
Then \(\sigma _{n} \geqslant a>l_{0}\) . Under certain conditions, \(\sigma _{n}=a\) holds; however, in many cases it does not. We give a condition under which \(\sigma _{n}=a\) in Proposition REF .
i
2081376d-cc8d-4176-95a6-33968f6d79cd
Let \({\begin{array}{c}l=|\operatorname{det} A|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}}\right)^{(n-1) / 2}, \\l_{0}=|\operatorname{det} A|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-l^{2}}\right)^{(n-1) / 2} .\end{array}}\)
r
16e5a210-51be-401e-b161-92f02833343c
Since \(\Vert A\Vert _{F}^{2}=\sigma _{1}^{2}+\sigma _{2}^{2}+\cdots +\sigma _{n}^{2}\) and \(|\operatorname{det} A|^{2}=\sigma _{1}^{2} \sigma _{2}^{2} \cdots \sigma _{n}^{2}\) , \(\sigma _{n}\) satisfies the equation \(x^{2}\left(\Vert A\Vert _{F}^{2}-x^{2}\right)^{n-1}=|\operatorname{det} A|^{2}(n-1)^{n-1}\) if and only if \(\sigma _{n}^{2}\left(\sigma _{1}^{2}+\sigma _{2}^{2}+\cdots +\sigma _{n-1}^{2}\right)^{n-1}=\sigma _{1}^{2} \sigma _{2}^{2} \cdots \sigma _{n}^{2}(n-1)^{n-1}\) , that is, if and only if \(\left(\frac{\sigma _{1}^{2}+\sigma _{2}^{2}+\cdots +\sigma _{n-1}^{2}}{n-1}\right)^{n-1}=\sigma _{1}^{2} \sigma _{2}^{2} \cdots \sigma _{n-1}^{2} .\)
r
17b741f9-6ceb-45d0-bd97-002e427a988e
Obviously, we can see that \(f\left(\sigma _{n}\right)=0\) and \(f(0)=-\sigma _{n}^{2}<0\) . Next, we prove that \(f(x)\) is a strictly increasing function on \(\left[0, \sigma _{n}\right]\) . Taking the derivative of \(f(x)\) , we get \(f^{\prime }(x)=\left(1+\frac{\sigma _{n}^{2}-x^{2}}{c^{2}(n-1)}\right)^{n-2} \frac{2 x}{c^{2}(n-1)}\left(c^{2}(n-1)+\sigma _{n}^{2}-n x^{2}\right) .\)
r
d9df516e-8b75-457b-95a2-f9c58e660f7e
For \(x_{0} \in \left(0, \sigma _{n}\right)\) , we have \(f^{\prime }\left(x_{0}\right)>0\) . Therefore \(f(x)\) is a strictly increasing function on \(\left[0, \sigma _{n}\right]\) and \(\sigma _{n}\) is the smallest positive zero point of \(f(x)=x^{2}\left(1+\frac{\sigma _{n}^{2}-x^{2}}{c^{2}(n-1)}\right)^{n-1}-\sigma _{n}^{2}\)
r
04fc8bdc-c980-4a18-b80a-b8300dc14445
Under the above special condition, \(a\) equals \(\sigma _{n}\) , but this does not hold in general. Next, we give our main theorem: an iterative formula whose iterates increase monotonically towards the smallest singular value.
r
0654a037-03c5-449b-9d6a-37f243c7506d
Theorem 1 Let \(A \in M_{n}\) be nonsingular, \(0<a_{1} \leqslant \sigma _{n}\) , and \(a_{k+1}=\left(a_{k}^{2}+\left|\operatorname{det}\left(a_{k}^{2} I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) a_{k}^{2}}\right)^{n-1}\right)^{1 / 2}, k=1,2, \cdots .\) Then \(0<a_{k} \leqslant a_{k+1} \leqslant \sigma _{n}\) for all \(k\) , and \(\lim _{k \rightarrow \infty } a_{k}=\sigma _{n}\) .
r
d23ff8e6-420b-4b84-b33f-a36d2c899a67
Let \(0 \leqslant \lambda <\sigma _{n}^{2}\) , by the arithmetic-geometric mean inequality, we have \(\left(\sigma _{1}^{2}-\lambda \right)\left(\sigma _{2}^{2}-\lambda \right) \cdots \left(\sigma _{n-1}^{2}-\lambda \right) \leqslant \left(\frac{\sigma _{1}^{2}+\cdots +\sigma _{n-1}^{2}-(n-1) \lambda }{n-1}\right)^{n-1}\)
r
c7e3d027-3b14-4933-af2a-2e6b039a516b
Since \(\begin{aligned}\left(\sigma _{1}^{2}-\lambda \right)\left(\sigma _{2}^{2}-\lambda \right) \cdots \left(\sigma _{n-1}^{2}-\lambda \right) &=\frac{\left(\sigma _{1}^{2}-\lambda \right)\left(\sigma _{2}^{2}-\lambda \right) \cdots \left(\sigma _{n}^{2}-\lambda \right)}{\sigma _{n}^{2}-\lambda } \\&=\frac{\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|}{\sigma _{n}^{2}-\lambda }\end{aligned}\)
r
36d8a75f-de7c-4fc3-a8a3-ad8b59e433e9
we have \(\frac{\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|}{\sigma _{n}^{2}-\lambda } \leqslant \left(\frac{\sigma _{1}^{2}+\cdots +\sigma _{n-1}^{2}-(n-1) \lambda }{n-1}\right)^{n-1}\) \({\begin{array}{c}\sigma _{n}^{2} \geqslant \lambda +\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\sigma _{1}^{2}+\cdots +\sigma _{n-1}^{2}-(n-1) \lambda }\right)^{n-1} \\\sigma _{n} \geqslant \left(\lambda +\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\sigma _{1}^{2}+\cdots +\sigma _{n-1}^{2}+\sigma _{n}^{2}-(n-1) \lambda }\right)^{n-1}\right)^{1 / 2} \\\sigma _{n} \geqslant \left(\lambda +\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) \lambda }\right)^{n-1}\right)^{1 / 2}\end{array}}\)
r
a89ab216-5040-4bb2-9d44-416eb599d4b0
Let \(\lambda \rightarrow \sigma _{n}^{2-}\) (\(\lambda \) tends to \(\sigma _{n}^{2}\) from the left). We get that the above inequality is also true for \(\lambda =\sigma _{n}^{2}\) . Therefore, for \(0 \leqslant \lambda \leqslant \sigma _{n}^{2}\) , we have \(\sigma _{n} \geqslant \left(\lambda +\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) \lambda }\right)^{n-1}\right)^{1 / 2}\)
r
5e92106e-9098-46b4-8063-0b531b77173a
We have \(\sigma _{n} \geqslant a_{1}>0\) . In equation REF , let \(\lambda =a_{1}^{2}\) , we have \(\sigma _{n} \geqslant a_{2}=\left(a_{1}^{2}+\left|\operatorname{det}\left(a_{1}^{2} I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) a_{1}^{2}}\right)^{n-1}\right)^{1 / 2} \geqslant a_{1}>0\)
r
b40f5fa5-a4d7-4421-b0fd-348f2f04fa77
Assume that our claim is true for \(k=m\) , that is \(\sigma _{n} \geqslant a_{m+1} \geqslant a_{m}>0\) . Now we consider the case when \(k=m+1\) . In equation REF , let \(\lambda =a_{m+1}^{2}\) , we have \(\sigma _{n} \geqslant a_{m+2}=\left(a_{m+1}^{2}+\left|\operatorname{det}\left(a_{m+1}^{2} I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) a_{m+1}^{2}}\right)^{n-1}\right)^{1 / 2} \geqslant a_{m+1}>0\)
r
a3903ab5-c1d8-49d8-9d93-06e95597e167
Hence \(\sigma _{n} \geqslant a_{m+2} \geqslant a_{m+1}>0\) . This proves \(\sigma _{n} \geqslant a_{k+1} \geqslant a_{k}>0\) for \(k=1,2, \cdots \) . By the well-known monotone convergence theorem, \(\lim _{k \rightarrow \infty } a_{k}\) exists. Let \(\lim _{k \rightarrow \infty } a_{k}= \sigma \) ; then \(\sigma =\left(\sigma ^{2}+\left|\operatorname{det}\left(\sigma ^{2} I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) \sigma ^{2}}\right)^{n-1}\right)^{1 / 2} .\)
r
6b33f9c9-cda3-4a60-9a17-5d957dd023b1
We get that \(\sigma ^{2}\) is an eigenvalue of \(A^{H} A\) . Since \(\sigma _{n}^{2}\) is the smallest eigenvalue of \(A^{H} A\) , we have \(\sigma ^{2} \geqslant \sigma _{n}^{2}\) . According to the definition of \(\sigma \) , we have \(\sigma \leqslant \sigma _{n}\) . Therefore, \(\sigma ^{2} \leqslant \sigma _{n}^{2}\) and we get \(\sigma =\sigma _{n}\) . Hence \(\lim _{k \rightarrow +\infty } a_{k}=\sigma _{n}\) .
r
0155a13b-c268-4e8d-b2e3-72355759da76
From Theorem REF , we can see that as long as there is a lower bound of \(\sigma _{n}\) (we set it to \(b\) ) and let \(a_{1}=b\) in Theorem REF , we can get a better lower bound than \(b\) . For example, we bring the lower bound of Lin and Xie [1]} into our Theorem REF to obtain the following results.
r
05af16cf-dfcc-4b9c-915e-7f3ed01c0ef4
Let \(a_{1}=a\) and \(a_{k+1}=\left(a_{k}^{2}+\left|\operatorname{det}\left(a_{k}^{2} I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{\Vert A\Vert _{F}^{2}-(n-1) a_{k}^{2}}\right)^{n-1}\right)^{1 / 2}, k=1,2, \cdots \)
r
53715545-65e6-4671-910a-90c7d6aecfe1
We give the incremental iterative formula for the smallest singular value: \(a_{k+1}=\left(a_k^2+|\det (a_k^2 I_n-A^HA)|\left(\frac{n-1}{\Vert A\Vert _F^2-(n -1) a_k^2}\right)^{n-1}\right)^{1/2}, k=1,2,\cdots .\)
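As a numerical illustration of Theorem REF (our own NumPy sketch), starting from any lower bound, e.g., \(a_{1}=l\) , the iterates increase towards \(\sigma _{n}\) :

import numpy as np

def smallest_sv_iteration(A, a1, steps=50):
    # a_{k+1} = sqrt(a_k^2 + |det(a_k^2 I - A^H A)| * ((n-1)/(||A||_F^2 - (n-1) a_k^2))^(n-1))
    n = A.shape[0]
    G = A.conj().T @ A
    fro2 = np.linalg.norm(A, "fro") ** 2
    a = a1
    for _ in range(steps):
        det_term = abs(np.linalg.det(a ** 2 * np.eye(n) - G))
        a = np.sqrt(a ** 2 + det_term * ((n - 1) / (fro2 - (n - 1) * a ** 2)) ** (n - 1))
    return a

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
n = A.shape[0]
a1 = abs(np.linalg.det(A)) * ((n - 1) / np.linalg.norm(A, "fro") ** 2) ** ((n - 1) / 2)  # the bound l
print(smallest_sv_iteration(A, a1), np.linalg.svd(A, compute_uv=False)[-1])  # approaches sigma_n from below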
r
d4980255-f8f8-4f64-9220-2f5e0032a4c8
Theorem 2 Let \(A \in M_{n}\) be nonsingular and \(a_1\geqslant \sigma _1\) . Assume \(a_{k+1}=\left(a_k^2-|\det (a_k^2 I_n-A^HA)|\left(\frac{n-1}{(n+1) a_k^2-\Vert A\Vert _F^2}\right)^{n-1}\right)^{1/2}, k=1,2,\cdots .\) Then \(\sigma _1 \leqslant a_{k+1} \leqslant a_{k}\) for all \(k\) , and \(\lim _{k\rightarrow \infty } a_{k}=\sigma _1\) .
r
90341a86-cbd7-4490-882f-714c6aa5d4fb
Let \(\lambda >\sigma _1^2\) . By the arithmetic-geometric mean inequality, we can get \(\left(\lambda -\sigma _{2}^{2}\right)\left(\lambda -\sigma _{3}^{2}\right) \cdots \left(\lambda -\sigma _{n}^ 2\right) \leqslant \left(\frac{(n-1) \lambda -(\sigma _2^2 +\cdots +\sigma _{n}^{2})}{n-1}\right)^{n -1}.\)
r
b57abec1-5cff-4b63-a3b8-fda9fd7e8ed3
Since \(\begin{aligned}\left(\lambda -\sigma _{2}^{2}\right)\left(\lambda -\sigma _{3}^{2}\right) \cdots \left(\lambda -\sigma _{n}^ 2\right)&=\frac{\left(\lambda -\sigma _{1}^{2}\right)\left(\lambda -\sigma _{2}^{2}\right) \cdots \left(\lambda -\sigma _{n}^2\right)}{\lambda -\sigma _1^2}\\&=\frac{|\det (\lambda I_n-A^HA)|}{\lambda -\sigma _1^2}\end{aligned},\)
r
0be31e36-81f9-42e9-a29c-c2fcbfea8e83
we can get \(\frac{|\det (\lambda I_n-A^HA)|}{\lambda -\sigma _1^2}\leqslant \left(\frac{(n-1) \lambda -(\sigma _2^2 +\cdots + \sigma _{n}^{2})}{n-1}\right)^{n-1},\) \(\begin{aligned}\sigma _{1}^{2} &\leqslant \lambda -\left|\operatorname{det}\left( \lambda I_{n}-A^{H} A\right)\right| \left(\frac{n-1}{( n-1) \lambda +\sigma _{1}^{2}-\Vert A\Vert _{F}^{2}}\right)^{n-1} \\& \leqslant \lambda -\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{(n+1) \lambda -\Vert A\Vert _{F}^{2}}\right)^{n-1}\end{aligned}\) \(\sigma _1\leqslant \left(\lambda -\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{(n+ 1) \lambda -\Vert A\Vert _{F}^{2}}\right)^{n-1}\right)^{1/2}\)
r
4d462adb-047d-4c82-a6e4-aeb5619832e6
Let \(\lambda \rightarrow {\sigma _1^2}^+\) (\(\lambda \) tends to \(\sigma _1^2\) from the right). We get that the above inequality is also true for \(\lambda =\sigma _1^2\) . So, for \(\lambda \geqslant \sigma _1^2\) , we have \(\sigma _1\leqslant \left(\lambda -\left|\operatorname{det}\left(\lambda I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{(n+ 1) \lambda -\Vert A\Vert _{F}^{2}}\right)^{n-1}\right)^{1/2}\)
r
138d63b7-2dfd-4430-9a92-ec5154ca886a
For \(k=1\) , \(\sigma _1\leqslant a_1\) holds by assumption. In equation (REF ), taking \(\lambda =a_1^2\) , we can get \(\sigma _1\leqslant a_2=\left(a_1^2-\left|\operatorname{det}\left(a_1^2 I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{(n+1) a_1^2-\Vert A\Vert _{F}^{2}}\right)^{n-1}\right)^{1/2}\leqslant a_1\)
r
9d7e9119-be0f-4130-8f46-45ef9a9ad269
Suppose our conclusion holds for \(k=m\) , that is, \(\sigma _{1} \leqslant a_{m+1} \leqslant a_{m}\) . Now we consider the case of \(k=m+1\) . In equation (REF ), let \(\lambda =a_{m+1}^2\) ; we can get \(\sigma _1\leqslant a_{m+2}=\left(a_{m+1}^2-\left|\operatorname{det}\left(a_{m+1}^2 I_{n}-A^{H} A\right)\right|\left(\frac{n-1}{(n+1) a_{m+1}^2-\Vert A\Vert _{F}^{2}}\right)^{n -1}\right)^{1/2}\leqslant a_{m+1}\)
r
d7731a2c-2680-49c8-b363-db5c8643439a
This proves that \(\sigma _1\leqslant a_{k+1}\leqslant a_{k}\) for \(k=1,2,\cdots \) . By the monotone convergence theorem, \(\lim _{k\rightarrow \infty }a_k\) exists. Let \(\lim _{k\rightarrow \infty }a_k=\sigma \) ; then \(\sigma =\left(\sigma ^2-|\det (\sigma ^2 I_n-A^HA)|\left(\frac{n-1}{(n+1) \sigma ^2-\Vert A \Vert _F^2}\right)^{n-1}\right)^{1/2}\)
r
9174c6c2-07b0-4dd0-8fe6-2981eb683f94
So \(\sigma ^2\) is an eigenvalue of \(A^HA\) . Because \(\sigma _1^2\) is the largest eigenvalue of \(A^HA\) , we have \(\sigma ^2\leqslant \sigma _1^2\) . According to the definition of \(\sigma \) , we have \(\sigma \geqslant \sigma _1\) , so \(\sigma ^2\geqslant \sigma _1^2\) . We end up with \(\sigma =\sigma _1\) . So \(\lim _{k\rightarrow +\infty }a_k=\sigma _1\) . The largest singular value has an obvious upper bound: since \(\sigma _{1}^2\leqslant \sigma _{1}^2+\cdots +\sigma _n^2=\Vert A\Vert _F^2\) , we have \(\sigma _{1}\leqslant \Vert A\Vert _F \) . We give an iterative formula for decreasing convergence to the largest singular value: \(a_{k+1}=\left(a_k^2-|\det (a_k^2 I_n-A^HA)|\left(\frac{n-1}{(n+1) a_k^2-\Vert A\Vert _F^2}\right)^{n-1}\right)^{1/2}, k=1,2,\cdots .\)
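Symmetrically, Theorem REF can be run numerically from \(a_{1}=\Vert A\Vert _{F}\) ; again a NumPy sketch of our own:

import numpy as np

def largest_sv_iteration(A, steps=50):
    # a_{k+1} = sqrt(a_k^2 - |det(a_k^2 I - A^H A)| * ((n-1)/((n+1) a_k^2 - ||A||_F^2))^(n-1)),
    # started from a_1 = ||A||_F >= sigma_1.
    n = A.shape[0]
    G = A.conj().T @ A
    fro2 = np.linalg.norm(A, "fro") ** 2
    a = np.sqrt(fro2)
    for _ in range(steps):
        det_term = abs(np.linalg.det(a ** 2 * np.eye(n) - G))
        a = np.sqrt(a ** 2 - det_term * ((n - 1) / ((n + 1) * a ** 2 - fro2)) ** (n - 1))
    return a

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
print(largest_sv_iteration(A), np.linalg.svd(A, compute_uv=False)[0])  # approaches sigma_1 from above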
r
92276d75-1070-4624-9210-2fb53e58fbd3
Machine learning has become the backbone of information technology and is widely used in various domains such as retail, medical diagnosis, and transportation. The performance of an ML model heavily depends on its training data. In real world applications, often the data is not stored in a central database or file system, but spreads over different data silos. For example, for drug risk prediction, the features reside in datasets collected from clinics, hospitals, pharmacies, and laboratories [1]}; to train a model for keyboard prediction, data from millions of phones needs to be accessed [2]}.
i
c49cfecb-ce23-40e3-9764-d13505419fbf
The data management systems that handle data silos are data integration (DI) systems. A data integration system empowers the interoperability among multiple, heterogeneous sources, and provides a unified view for users. One of the key ingredients of data integration is to describe the sources and their relationships [1]}. Such information is two-fold: mappings between different source schemata, i.e., schema matching and mapping [2]}, [3]}; linkages between data instances, i.e., data matching (also known as record linkage or entity resolution) [4]}. However, the end goal of a data integration system is to answer queries or transform data over silos. It does not directly support machine learning applications. Thus, today people solve the silos problem with DI systems and ML tools separately, as shown in the following example.
i
ea99f71b-bad2-424c-bdf7-428b3815434c
Running example. Consider the feature augmentation example in Fig. REF . A data scientist tries to build a classification model to predict the Titanic passenger survival rate. She starts from a base table \(S_1\) (s,n,se,a), which has the label column \(s\ (survived) \) , and feature columns \(se\ (sex)\) and \(a\ (age)\) . To improve model accuracy, she discovers a second source table \(S_2\) (s,n,se,p,d) in Fig. REF b. It brings a new feature column \(p\ (parch)\) , which is the number of companion parents/children traveling with the specific passenger. The label column and the selected feature columns constitute the schema of the table for downstream ML models, i.e., T(s,se,a,p), which we refer to as the target table schema or mediated schema.
i
8a45dcd8-68d2-4af1-98ed-015f0c2e344f
Problems with the separation of DI and ML. As shown in Fig. REF c-d, to use the data from the two tables \(S_1\) and \(S_2\) , the data scientist needs to rely on a data integration system, or manually find the schema mapping and data matching between the two given tables. We elaborate on schema mappings in Sec. REF . Then a data integration system can integrate these source tables by merging the mapped columns and linked entities (i.e., matched rows). Finally, it materializes the data instances of the target table \(T\) and exports it to downstream ML applications. Such a process usually involves massive manual work and computation overhead, e.g., joining tables. Meanwhile, it demands that the user be equipped with knowledge of the data sources, the principles of data integration, and/or familiarity with DI tools, e.g., Talend (https://www.talend.com/products/integrate-data/) and Informatica (https://www.informatica.com/products/data-integration.html). In other words, there is great potential to utilize DI techniques to reduce the human burden and automate ML pipelines. <TABLE>
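To make the manual integration step concrete, the following pandas sketch shows one way \(T\) (s,se,a,p) could be materialized from \(S_1\) and \(S_2\) ; the column names follow the running example, while the join key (the passenger name \(n\) ) and the sample rows are illustrative assumptions of ours, not the actual dataset.

import pandas as pd

# Toy instances of the two source tables from the running example.
S1 = pd.DataFrame({"s": [1, 0], "n": ["Allen, Miss. E.", "Braund, Mr. O."],
                   "se": ["female", "male"], "a": [29.0, 22.0]})
S2 = pd.DataFrame({"s": [1, 0], "n": ["Allen, Miss. E.", "Braund, Mr. O."],
                   "se": ["female", "male"], "p": [0, 1], "d": ["C", "S"]})

# Schema mapping: keep s, se, a from S1 and the new feature p from S2.
# Data matching: a simple equi-join on the shared column n stands in for entity resolution.
T = S1.merge(S2[["n", "p"]], on="n", how="inner")[["s", "se", "a", "p"]]
print(T)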
i
adc7f720-83e2-4a01-b033-87ad21e29da2
Research vision & question. Having thrived since the 1980s, data integration is a well-studied topic with matured logic-based theoretical frameworks, techniques, and systems [1]}, [2]}, [3]}. Our long-term vision is to bridge the two worlds, data integration and machine learning. More specifically, we foresee novel systems that integrate DI techniques (schema matching, schema mapping, entity linkage, query reformulation, etc.) deeply into the ML pipeline (e.g., feature selection and augmentation, model training, model debugging) while also expanding to a broad range of ML philosophies such as federated learning [4]}, meta-learning [5]}, reinforcement learning [6]}. As the foundation of these systems, we might also need to reconcile or even rebuild the logic-based DI theoretical framework. As a starting point towards these ambitious goals, in this paper we discuss the following question:
i
c910336a-b9f0-42cb-b313-843733233f3c
When we focus on ML applications with data spread across silos, they share a problem setting similar to that of data integration systems. The core of the problem is to understand and describe the sources. The difference, however, is that DI systems are designed to answer queries or transform data (e.g., Fig. REF d), not for model training and inference. Thus, in this work, we first investigate data source relationships in ML applications, and examine their expressiveness with the well-known logical formalisms of data integration. We propose novel representations such that we can transform the DI information into the same mathematical building blocks as machine learning, i.e., matrices and vectors. This opens new avenues for optimizing and automating ML applications, and for building new systems.
i
f8c45d9b-9bc3-466f-8442-11fa9eb24bf7
Formalized DI scenarios for ML. Given ML applications, in particular feature augmentation and federated learning, we categorize and formalize their typical dataset relationships with first-order schema mapping languages, cf. Table REF (Sec. ). Matrix-based representations of dataset relationships. To seamlessly support ML, we define three types of matrices, which capture column matching, row matching, and redundancies between data sources and the target table (Sec. ). New opportunities to automate and optimize ML. With our matrix-based representations, we highlight new opportunities for linear algebra rewriting in model factorization, and for feature engineering and model training in federated learning (Sec. ). System. We demonstrate how our vision enables a next-generation data science platform, Amalur, supporting an end-to-end, scalable machine learning pipeline over data silos (Sec. ).
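As a toy illustration of the flavor of these representations (the precise definitions appear later; the 0/1 encoding below is only our simplified sketch), a column-matching matrix for the running example can map the columns of \(S_2\) to the target schema \(T\) :

import numpy as np

target_cols = ["s", "se", "a", "p"]            # target schema T(s, se, a, p)
source_cols = ["s", "n", "se", "p", "d"]       # source schema S2(s, n, se, p, d)

# M[i, j] = 1 iff the j-th column of S2 maps to the i-th column of T.
M = np.array([[1 if src == tgt else 0 for src in source_cols] for tgt in target_cols])
print(M)
# [[1 0 0 0 0]
#  [0 0 1 0 0]
#  [0 0 0 0 0]   <- "a" has no counterpart in S2
#  [0 0 0 1 0]]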
i
a0b120eb-8c12-4a08-b55c-36d374df0ea4
Machine learning has been applied to improve key operations of data integration such as schema matching [1]}, [2]}, data matching [3]}, [4]}, [5]}, [2]}. However, except for data cleaning [7]}, [8]}, [9]}, little has been discussed in terms of using the key DI operations to facilitate machine learning [10]}, which is the goal of this paper. The group of works most related to ours is in-DB machine learning.
w
22f3d016-1279-4aab-a5cf-84feab309308
In-DB machine learning. Integration of machine learning in relational databases has become one of the most important database research topics [1]}. A common solution is to link ML algorithms with databases via carefully crafted user-defined functions (UDFs). Another way is to extend the data types for vector/matrix and SQL queries for ML models [2]}, [3]}. Recent systems propose new intermediate data representations [4]}, [5]} or even novel tensor-based data models [6]}. A more fundamental line of research is to support the formal languages of both worlds, i.e., relational algebra and linear algebra [7]}, [8]}, [9]}, [10]}, [11]}. Notably, the above approaches mostly propose a representation as the data abstraction. In contrast, in data integration it is important to understand data sources and their relationships. Thus, although we have also applied matrix-based data abstraction (\(D_1\) , \(D_2\) in Fig. REF b), our main contribution, i.e., the three proposed matrices, is a representation of dataset relationships. These capture the matching and redundancy of columns and rows across different data sources.
w