_id: string (36 characters) · text: string (200–328k characters) · label: string (5 classes)
9c5b23e5-9d7b-4f9c-a229-b6281ee8d60f
This evaluation is based on articles selected within the time frame from July 2011 to January 2020, together with the information in the book released in 2015 by Professor Haas in collaboration with Svilen Dimitrov, entitled “Principles of LED Light Communications: Towards Networked Li-Fi" [1]}.
i
196b10b8-6a72-4b67-a846-c04f69e21b11
Three research databases were used in this search to quantify the research effort in the area we are addressing. Four terms were used in the searches: Li-Fi, Wi-Fi, 5G, and efficiency. As filters, we kept only works of type “Article", published between July 2011 and January 2020, whose content contains the terms cited above. The databases used were the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Google Scholar, and the Institute of Electrical and Electronics Engineers (IEEE). <TABLE>
w
a876680e-ed0c-4825-9717-43f6c9671ad4
In the CAPES database, searching for the term “Li-Fi" alone generated 1,986 results. Adding the term “Wi-Fi" to the search, in order to retrieve works that compare the two technologies, reduced this number to 85. Then “5G" was added to return more recent articles that already connect Li-Fi with a future that has not yet been experienced, resulting in 65 results. Finally, the term “Efficiency" was added in an attempt to locate research that quantifies results in a real environment. This is shown in Table REF .
w
8e1aa872-a9b1-4358-9cf8-0cbdcc4b175d
The same process was repeated in the Google Scholar and IEEE databases, with Google Scholar presenting more promising results, presumably because of its broader search engine. See Tables REF and REF . <TABLE><TABLE>
w
c67271f7-b2f0-4a31-b6ac-8af12decd911
Given the very strong influence of the creator of the technology, Haas, we allowed ourselves to search other article databases specifically for this author. His book released in 2015, “Principles of LED Light Communications: Towards Networked Li-Fi" [1]}, was also used. Therefore, the ScienceDirect research database was added to our data set. In the following paragraphs, the articles and other references used are listed.
w
858fb215-4088-4859-8b30-61f8ab301ab4
The article by Khandal [1]} is an analysis of the state of the technology before the launch of Professor Haas' book and follows a path similar to the one taken in this research. Since it dates from 2014, it offers a limited view of the progress made in this area, so we intend to update the information contained therein using Haas' article [2]} and his book [3]}. After it, Bao et al. [4]} review the key technologies for realizing Li-Fi and present the state of the art on each aspect. Later, Wu et al. [5]} discussed the differences between homogeneous and heterogeneous networks regarding access point selection (APS), and proposed a two-stage APS method for hybrid Li-Fi/Wi-Fi networks.
w
d7f01323-0b70-41a3-aaac-aefd866bb7c5
The book written by Haas and Dimitrov [1]} has played a key role in this research, since it allowed us to build the fundamental concepts needed for an initial understanding of the resources used in this technology. It also provided information on the evolution of the technology, since it can be compared with the article released by Professor Haas in 2018.
w
6ef13859-282c-4a26-a0e1-62d0c68dec62
The article published by Professor Haas in 2018 [1]} addresses a more current view than the first article previously cited and also explains why Li-Fi is considered a fifth-generation technology. We intend to compare its results with those obtained by Khandal and Jain in 2014. Subsequently, Islim et al. [2]} review the modulation techniques suitable for Li-Fi, including those that explore the time, frequency, and color domains.
w
e5b19693-fc1c-4118-8eb1-44139fa63d8a
Finally, the last selected article was A Review Paper on Li-Fi Technology [1]}. This article was produced in 2020, the year in which this research is being conducted, and therefore provides an analysis of the latest updates to this technology; we therefore intend to analyze it against the work of Haas [2]}.
w
db12aec7-caff-4b86-a367-3c52a71ea88e
Given the above information together with the tabulated data, we conclude that the subject of this research has not yet been widely explored and, therefore, there is still a wide range of research that can be carried out in the area.
w
a7e522f6-be38-4b56-96e9-2b7e95e14f63
This study was conducted as scientific research on a worldwide basis, applying a bibliometric analysis based on the terms considered central to our theme. Its foundation thus rests on ideas and assumptions taken from conferences, articles, and books that are important for the construction of the concepts of this work.
m
47e59346-b3a8-4f8c-90cb-5149353e32b4
The observation method used was the conceptual-analytic one, since concepts and ideas from other authors that are similar to our objectives are used, and new content is added to them to construct a scientific analysis of the object of study.
m
a905c056-3afa-4f4c-aa46-a1c4439910ce
The comparison and discussion of the results found will be carried out by means of explanatory research, giving us more freedom in an analysis that traverses several communication properties and allows us to assume more than one position on the subject during the analysis.
m
81d438f1-5826-44ce-979f-3da4f295d4d3
According to Haas [1]}, the central idea of a Li-Fi wireless network is to complement radio-based heterogeneous wireless networks, thus relieving the radio spectrum and its data traffic load. In this way, the author already points to a partial answer to the objective of this research, since he leads the reader to conclude that this technology will always need support, serving only as the “end" of the network.
r
8c063b61-fe4e-442c-ba02-317d19896ed9
As Haas notes at the beginning of his book, Li-Fi is still just a technology that supports Wi-Fi, and it still has a long way to go to replace it. The hardware complexity required to achieve good multi-user access and data rates is a barrier to entry for ordinary consumers.
d
15f3acc2-1368-4be6-becd-5f5867dc204b
The point-to-point topology implies that multiple spaces in the same building require prior preparation of the network layout. Without that, it becomes necessary to acquire multiple devices to make signal redirection possible. This entails heavy use of hardware which, given the need for the methodologies applied to Li-Fi to evolve, can quickly become obsolete, leading to an accumulation of electronic waste.
d
4fce4ef5-536a-4006-accf-8918396fabe8
Based on the selected articles, the technology has not yet undergone many changes since the regulation and creation of a protocol for VLC communication. The same problems are still perceived, but some research results adding solutions to these problems are already visible.
d
ca944348-a77f-470e-8e61-8233ec65dfaf
Based on the results and tests cited by Haas, there is a strong expectation that the market will receive more prototypes and companies focused on this segment, such as pureLiFi. The technology existing today is still heavily affected by interference between its communication peripherals and by noise caused by the particular characteristics of sunlight.
d
49b4d37f-6244-4254-bf38-1e2138eb9e8d
With the consolidation of this technology, it will become possible to use it in places where Wi-Fi cannot be used. Security is another benefit, since the light used to transmit data does not penetrate walls. On highways, for traffic control applications, cars equipped with LED lights can communicate with each other and anticipate accidents.
d
febf398a-0935-4f8e-98ea-7159617b3b19
Therefore, the key driver in the search for solutions to Li-Fi's problems and for its implementation, the lack of radio spectrum, seems to intensify automatically with the passage of time. Haas already made alarming predictions of increased data use in 2015, and today it is a reality. We have devices that demand communication with other devices in the environment, make queries in the cloud, and still run advanced artificial intelligence algorithms. This is a perfect environment for Li-Fi to act in and for its potential to be explored.
d
4cb08378-2f76-416b-aad8-2a9e931b3599
After reading the book, it is clear that there is still a large number of topics to be addressed within this technology. What stood out in terms of the need for study are the areas of signal modulation and multi-user access. Therefore, themes directed at discussing or solving these deficiencies would be highly relevant to the communication area.
d
65eb4dde-e2ce-4be2-83ec-16a88cb62cff
Despite the fast advancement of text classification technologies, most text classification models are trained on and applied to a relatively small number of categories. Popular benchmark datasets contain from two up to tens of categories, such as the SST2 dataset for sentiment classification (2 categories) [1]}, the AG news dataset (4 categories) [2]} and the 20 Newsgroups dataset [3]} for topic classification.
i
998c8f9e-54bb-4a90-83eb-7cf031ce47f5
In the meantime, industrial applications often involve fine-grained classification with a large number of categories. For example, Walmart built a hybrid classifier to categorize products into 5000+ product categories [1]}, and Yahoo built a contextual advertising classifier with a taxonomy of around 6000 categories [2]}. Unfortunately, both systems require a huge human effort to compose and maintain rules and keywords. Readers can neither reproduce these systems, nor are the systems or data available for comparison.
i
fc6a11e1-b761-4ea4-97fc-220f544b3472
In this work, we focus on the application of contextual advertising [1]}, which allows advertisers to target the context most relevant to their ads. However, we cannot fully utilize its power unless we can target the page content using fine-grained categories, e.g., “coupé” vs. “hatchback” instead of “automotive” vs. “sport”. This motivates a classification taxonomy with both high coverage and high granularity. The commonly used contextual taxonomy introduced by the Interactive Advertising Bureau (IAB) contains 23 coarse-grained categories and 355 fine-grained categories https://www.iab.com/guidelines/taxonomy/. Figure REF shows a snippet of the taxonomy. <FIGURE>
i
ae49a356-4f78-447b-941e-65e78e01c037
Large online encyclopedias, such as Wikipedia, contain an updated account of almost all topics. Therefore, we ask an essential question: can we bootstrap a text classifier with hundreds of categories from Wikipedia without any manual labeling?
i
8f93eb56-aca1-4a38-819b-e6c75756f4c6
We tap on and extend previous work on Wikipedia content analysis [1]} to automatically label Wikipedia articles related to each category in our taxonomy by Wikipedia category graph traversal. We then train classification models with the labeled Wikipedia articles. We compare our method with various learning-based and keyword-based baselines and obtain a competitive performance.
i
225b6cbd-9b17-48fa-af1f-00da34f624df
We propose wiki2cat, a simple framework using Wikipedia to bootstrap text categorizers. We first map the target taxonomy to corresponding Wikipedia categories (briefed in Section REF ). We then traverse the Wikipedia category graph to automatically label Wikipedia articles (Section REF ). Finally, we induce a classifier from the labeled Wikipedia articles (Section REF ). Figure REF overviews the end-to-end process of building classifiers under the wiki2cat framework.
m
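To make the automatic labeling step concrete, here is a minimal sketch of labeling Wikipedia articles by breadth-first traversal of the category graph; it is our illustration rather than the authors' code, and the inputs `seed_categories`, `category_children`, and `category_articles` are hypothetical pre-extracted mappings (e.g., built from a Wikipedia dump).

```python
from collections import deque

def label_articles(seed_categories, category_children, category_articles, max_depth=3):
    """Assign taxonomy labels to Wikipedia articles via category-graph traversal.

    seed_categories: taxonomy label -> list of Wikipedia root categories
    category_children: Wikipedia category -> subcategories
    category_articles: Wikipedia category -> articles filed under it
    Returns: article id -> set of taxonomy labels
    """
    labels = {}
    for taxonomy_label, roots in seed_categories.items():
        visited = set(roots)
        queue = deque((root, 0) for root in roots)
        while queue:
            category, depth = queue.popleft()
            # Articles filed under a reachable category inherit the taxonomy label.
            for article in category_articles.get(category, ()):
                labels.setdefault(article, set()).add(taxonomy_label)
            if depth < max_depth:
                for child in category_children.get(category, ()):
                    if child not in visited:  # the category graph contains cycles
                        visited.add(child)
                        queue.append((child, depth + 1))
    return labels
```

The labeled articles can then be fed to any standard text classifier; a depth cap is one simple way to keep the traversal from drifting into unrelated categories.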
b6756c2b-a5a1-4185-9cdf-2b080670b702
We introduced wiki2cat, a simple framework to bootstrap large-scale fine-grained text classifiers from Wikipedia without having to label any document manually. The method was benchmarked on both coarse-grained and fine-grained contextual advertising datasets and achieved competitive performance against various baselines. It performed especially well on fine-grained classification, which both is more challenging and requires more manual labeling in a fully-supervised setting. As an ongoing effort, we are exploring using unlabeled in-domain documents for domain adaptation to achieve better accuracy.
d
bde54b75-b558-4559-bd7b-52b4f78f4ee0
In knowledge representation, ontologies are an important means for injecting domain knowledge into an application. In the context of databases, they give rise to ontology-mediated queries (OMQs), which enrich a traditional database query such as a conjunctive query (CQ) with an ontology. OMQs aim at querying incomplete data, using the domain knowledge provided by the ontology to derive additional answers. In addition, they may enrich the vocabulary available for query formulation with relation symbols that are not used explicitly in the data. Popular choices for the ontology language include (restricted forms of) tuple-generating dependencies (TGDs), also dubbed existential rules [1]} and Datalog\(^\pm \) [2]}, as well as various description logics [3]}.
i
c179c8d5-b076-44e5-8073-4b5a45ff1d67
The complexity of evaluating OMQs has been the subject of intense study, with a focus on single-testing as the mode of query evaluation: given an ontology-mediated query (OMQ) \(Q\) , a database \(D\) , and a candidate answer \(\bar{a}\) , decide whether \(\bar{a} \in Q(D)\) [1]}, [2]}, [3]}, [4]}. In many applications, however, it is not realistic to assume that a candidate answer is available. This has led database theoreticians and practitioners to investigate more relevant modes of query evaluation such as enumeration: given \(Q\) and \(D\) , generate all answers in \(Q(D)\) , one after the other and without repetition.
i
7ac68356-bb0a-4305-a3b7-57c4e7d60bd0
The first main aim of this paper is to initiate a study of efficiently enumerating answers to OMQs. We consider enumeration algorithms that have a preprocessing phase in which data structures are built that are used in the subsequent enumeration phase to produce the actual output. With `efficient enumeration', we mean that preprocessing may only take time linear in \(||D||\) while the delay between two answers must be constant, that is, independent of \(D\) . One may or may not impose the additional requirement that, in the enumeration phase, the algorithm may consume only a constant amount of memory on top of the data structures precomputed in the preprocessing phase. We refer to the resulting enumeration complexity classes as \({DelayC}_{{lin}}\) and CD\(\circ \) Lin, the former admitting unrestricted (polynomial) memory consumption; the use of these names in the literature is not consistent, and we follow [1]}, [2]}. Without ontologies, answer enumeration in CD\(\circ \) Lin and in \({DelayC}_{{lin}}\) has received significant attention [3]}, [4]}, [5]}, [2]}, [7]}, [8]}, [1]}, [10]}, [11]}, see also the survey [12]}. A landmark result is that a CQ \(q(\bar{x})\) admits enumeration in CD\(\circ \) Lin if it is acyclic and free-connex acyclic, where the former means that \(q\) has a join tree and the latter that the extension of \(q\) with an atom \(R(\bar{x})\) that `guards' the answer variables is acyclic [4]}. Partially matching lower bounds pertain to self-join free CQs [4]}, [15]}.
i
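For readers who want a hands-on feel for these acyclicity notions, the following sketch (a generic GYO-style test, not code from the paper) checks whether a CQ, viewed as the hypergraph of its atoms' variable sets, is acyclic, and checks free-connex acyclicity by re-testing after adding an atom over the answer variables.

```python
def is_acyclic(atoms):
    """GYO ear removal on a hypergraph given as a list of variable sets."""
    edges = [set(a) for a in atoms]
    changed = True
    while changed:
        changed = False
        # Rule 1: delete variables occurring in at most one edge.
        counts = {}
        for e in edges:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        for e in edges:
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        # Rule 2: delete an edge contained in another edge.
        for i, e in enumerate(edges):
            if any(i != j and e <= f for j, f in enumerate(edges)):
                edges.pop(i)
                changed = True
                break
    return len(edges) <= 1  # fully reducible iff the hypergraph is acyclic

def is_free_connex_acyclic(atoms, answer_vars):
    """q is free-connex acyclic iff q plus an atom guarding the answer variables is acyclic."""
    return is_acyclic(list(atoms) + [set(answer_vars)])

# q(x,z) = R(x,y), S(y,z) is acyclic but not free-connex acyclic:
print(is_acyclic([{"x", "y"}, {"y", "z"}]))                          # True
print(is_free_connex_acyclic([{"x", "y"}, {"y", "z"}], {"x", "z"}))  # False
```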
61ed0cfe-dbfb-465c-9813-65321279e54e
The second aim of this paper is to introduce a novel notion of partial answers to OMQs. In the traditional certain answers, \(\bar{a} \in Q(D)\) if and only if \(\bar{a}\) is a tuple of constants from \(D\) such that \(\bar{a} \in Q(I)\) for every model \(I\) of \(D\) and the ontology used in \(Q\) . In contrast, a partial answer may contain, apart from constants from \(D\) , also the wildcard symbol `\(\ast \) ' to indicate a constant that we know must exist, but whose identity is unknown. Such labeled nulls may be introduced by existential quantifiers in the ontology. To avoid redundancy as in the partial answers \((a,\ast )\) and \((a,b)\) , we are interested in minimal partial answers that cannot be `improved' by replacing a wildcard with a constant from \(D\) while still remaining a partial answer. The following simple example illustrates that minimal partial answers may provide useful information that is not provided by the traditional answers, from now on called complete answers. Consider the ontology that contains \(\begin{array}{rcl}{Researcher}(x) &\rightarrow & \exists y \, {HasOffice}(x,y)\\{HasOffice}(x,y)&\rightarrow & {Office}(y) \\{Office}(x) &\rightarrow & \exists y \, {InBuilding}(x,y), \end{array}\)
i
8b069eed-7be6-47ad-965a-df5567359d0a
and the CQ \( q(x_1,x_2,x_3) = {HasOffice}(x_1,x_2) \wedge {InBuilding}(x_2,x_3)\) giving rise to the OMQ \(Q(x_1,x_2,x_3)\) . Take the following database \(D\) : \(\begin{array}{@{}c@{}}\begin{array}{lll}{Researcher}({mary}) &{Researcher}({john}) &{Researcher}({mike})\end{array} \\\begin{array}{ll}{HasOffice}({mary},{room1}) &{HasOffice}({john},{room4})\end{array} \\{InBuilding}({room1},{main1})\end{array}\) Here, the only complete answer is \(({mary},{room1},{main1})\) , whereas \(({john},{room4},\ast )\) and \(({mike},\ast ,\ast )\) are additional minimal partial answers: the ontology guarantees that john's office is in some building and that mike has an office located in some building, but the identity of these objects is unknown.
i
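To make the example above tangible, here is a small self-contained sketch (ours, not the paper's enumeration algorithm) that chases the example database with the three rules and reads the answers of \(q(x_1,x_2,x_3)\) off the chase; labelled nulls are displayed as `*`, so tuples containing `*` are partial answers, and in this particular example they are already minimal.

```python
import itertools

db = {
    "Researcher": {("mary",), ("john",), ("mike",)},
    "HasOffice": {("mary", "room1"), ("john", "room4")},
    "InBuilding": {("room1", "main1")},
    "Office": set(),
}

_fresh = itertools.count()

def new_null():
    return f"_n{next(_fresh)}"  # labelled null introduced by an existential rule

def chase(db):
    """Restricted chase with: Researcher(x) -> Ey HasOffice(x,y);
       HasOffice(x,y) -> Office(y);  Office(x) -> Ey InBuilding(x,y)."""
    changed = True
    while changed:
        changed = False
        for (x,) in list(db["Researcher"]):
            if not any(s == x for (s, _) in db["HasOffice"]):
                db["HasOffice"].add((x, new_null())); changed = True
        for (_, y) in list(db["HasOffice"]):
            if (y,) not in db["Office"]:
                db["Office"].add((y,)); changed = True
        for (x,) in list(db["Office"]):
            if not any(s == x for (s, _) in db["InBuilding"]):
                db["InBuilding"].add((x, new_null())); changed = True
    return db

def answers(db):
    """q(x1,x2,x3) = HasOffice(x1,x2) & InBuilding(x2,x3), nulls printed as '*'."""
    star = lambda c: "*" if c.startswith("_n") else c
    return {(star(x1), star(x2), star(x3))
            for (x1, x2) in db["HasOffice"]
            for (y, x3) in db["InBuilding"] if x2 == y}

print(sorted(answers(chase(db))))
# [('john', 'room4', '*'), ('mary', 'room1', 'main1'), ('mike', '*', '*')]
```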
b03ed836-37c5-406a-99f4-5849fd77f26c
We also introduce and study minimal partial answers with multiple wildcards \(\ast _1,\ast _2,\dots \) . Distinct occurrences of the same wildcard in an answer indicate the same null, while different wildcards may or may not correspond to different nulls. Multiple wildcards may thus be viewed as adding equality on wildcards, but not inequality. We note that there are certain similarities between minimal partial answers to OMQs and answers to SPARQL queries with the `optional' operator [1]}, [2]}, but also many dissimilarities.
i
86b7d786-0c46-410b-a479-92b0b0cd2573
The third aim of this paper is to study two problems for OMQs that are closely related to constant delay enumeration: single-testing in linear time (in data complexity) and all-testing in CD\(\circ \) Lin or \({DelayC}_{{lin}}\) . Note that for Boolean queries, single-testing in linear time coincides with enumeration in CD\(\circ \) Lin and in \({DelayC}_{{lin}}\) . An all-testing algorithm has a preprocessing phase followed by a testing phase where it repeatedly receives candidate answers \(\bar{a}\) and returns `yes' or `no' depending on whether \(\bar{a} \in Q(D)\)  [1]}. All-testing in \({DelayC}_{{lin}}\) grants preprocessing time \(O(||D||)\) while the time spent per test must be independent of \(D\) , and all-testing in CD\(\circ \) Lin is defined accordingly.
i
4fd9a300-31af-4a67-920b-e8b959c616e3
An ontology-mediated query takes the form \(Q(\bar{x})=(\mathcal {O},\mathcal {S},q)\) where \(\mathcal {O}\) is an ontology, \(\mathcal {S}\) a schema for the databases on which \(Q\) is evaluated, and \(q(\bar{x})\) a conjunctive query. In this paper, we consider ontologies that are sets of guarded tuple-generating dependencies (TGDs) or formulated in the description logic \(\mathcal {ELI}\) . We remind the reader that a TGD takes the form \(\forall \bar{x} \forall \bar{y} \, \big (\phi (\bar{x},\bar{y}) \rightarrow \exists \bar{z} \, \psi (\bar{x},\bar{z})\big ) \) where \(\phi \) and \(\psi \) are CQs, and that it is guarded if \(\phi \) has an atom that mentions all variables from \(\bar{x}\) and \(\bar{y}\) . Up to normalization, an \(\mathcal {ELI}\) -ontology may be viewed as a finite set of guarded TGDs of a restricted form, using in particular only unary and binary relation symbols. Both guarded TGDs and \(\mathcal {ELI}\) are natural and popular choices for the ontology language [1]}, [2]}, [3]}. We use \(({G},{CQ})\) to denote the language of all OMQs that use a set of guarded TGDs as the ontology and a CQ as the actual query, and likewise for \(({ELI},{CQ})\) and \(\mathcal {ELI}\) -ontologies.
i
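For concreteness, here is a small illustrative pair of TGDs (ours, not from the paper): the first is guarded because one body atom mentions all universally quantified variables, while the second is not, since no single body atom contains x, y, and z.

```latex
% guarded: Supervises(x,y) mentions both x and y
\forall x \forall y \,\bigl(\mathit{Supervises}(x,y) \wedge \mathit{Prof}(x)
    \rightarrow \exists z\, \mathit{MemberOf}(y,z)\bigr)
% not guarded: no body atom mentions x, y and z together
\forall x \forall y \forall z \,\bigl(\mathit{Cites}(x,y) \wedge \mathit{Cites}(y,z)
    \rightarrow \exists w\, \mathit{SameArea}(x,z,w)\bigr)
```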
abbe0c0a-02eb-4251-9e04-ab4c80bc1ff5
We next summarize our results. In Section , we start with showing that in \(({G},{CQ})\) , single-testing complete answers is in linear time for OMQs that are weakly acyclic. A CQ is weakly acyclic if it is acyclic after replacing the answer variables with constants and an OMQ is weakly acyclic if the CQ in it is; in what follows, we lift other properties of CQs to OMQs in the same way without further notice. Our proof relies on the construction of a `query-directed' fragment of the chase and a reduction to the generation of minimal models of propositional Horn formulas. We also give a lower bound for OMQs from \(({ELI},{CQ})\) that are self-join free: every such OMQ that admits single-testing in linear time is weakly acyclic unless the triangle conjecture from fine-grained complexity theory fails. This generalizes a result for the case of CQs without ontologies [1]}. We observe that it is not easily possible to replace \({ELI}\) by \({G}\) in our lower bound as this would allow us to remove also `self-join free' while it is open whether this is possible even in the case without ontologies. We also show that single-testing minimal partial answers with a single wildcard is in linear time for OMQs from \(({G},{CQ})\) that are acyclic and that the same is true for multiple wildcards and acyclic OMQs from \(({ELI},{CQ})\) . We also observe that these (stronger) requirements cannot easily be relaxed.
i
9cd07eee-d415-495b-972e-c6f62edee394
In Section , we turn to enumeration and all-testing of complete answers. We first show that in \(({G},{CQ})\) , enumerating complete answers is in CD\(\circ \) Lin for OMQs that are acyclic and free-connex acyclic, while all-testing complete answers is in CD\(\circ \) Lin for OMQs that are free-connex acyclic (but not necessarily acyclic). The proof again uses the careful chase construction and a reduction to the case without ontologies. The lower bound for single-testing conditional on the triangle conjecture can be adapted to enumeration, with `not weakly acyclic' replaced by `not acyclic'. For enumeration, it thus remains to consider OMQs that are acyclic, but not free-connex acyclic. We show that for every self-join free OMQ from \(({ELI},{CQ})\) that is acyclic, connected, and admits enumeration in CD\(\circ \) Lin, the query is free-connex acyclic, unless sparse Boolean matrix multiplication (BMM) is possible in time linear in the size of the input plus the size of the output; this would imply a considerable advance in algorithm theory and currently seems to be out of reach. We also show that it is not possible to drop the requirement that the query is connected, which is not present in the corresponding lower bound for the case without ontologies [1]}, [2]}. We prove a similar lower bound for all-testing complete answers, subject to a condition regarding non-sparse BMM. All mentioned lower bounds also apply to both kinds of partial answers.
i
f0e0b257-8193-4e13-8f8f-ee38b917d4a4
In Section , we then prove that enumerating minimal partial answers with a single wildcard is in \({DelayC}_{{lin}}\) for OMQs from \(({G},{CQ})\) that are acyclic and free-connex acyclic. This is one of the main results of this paper, based on a non-trivial enumeration algorithm. Here, we only highlight two of its features. First, the algorithm precomputes certain data structures that describe `excursions' that a homomorphism from \(q\) into the chase of \(D\) with \(\mathcal {O}\) may make into the parts of the chase that have been generated by the existential quantifiers in the ontology. And second, it involves subtle sorting and pruning techniques to ensure that only minimal partial answers are output. We also observe that all-testing minimal partial answers is less well-behaved than enumeration, as there is an OMQ \(Q \in ({ELI},{CQ})\) that is acyclic and free-connex acyclic, but for which all-testing is not in CD\(\circ \) Lin unless the triangle conjecture fails.
i
e5f9d68e-8186-4e25-b4c2-c06f90a5bcad
Finally, Section  extends the upper bound from Section  to minimal partial answers with multiple wildcards. We first show that all-testing (not necessarily minimal!) partial answers with multiple wildcards is in \({DelayC}_{{lin}}\) for OMQs that are acyclic and free-connex acyclic, and then reduce enumeration of minimal partial answers with multiple wildcards to this, combined with the enumeration algorithm of minimal partial answers with a single wildcard obtained in the previous section.
i
fc09e7ed-0061-4a18-941d-8f0558c5138e
As future work, it would be interesting to consider as the ontology language also description logics with functional roles such as \(\mathcal {ELIF}\) ; there should be a close connection to enumeration of answers to CQs in the presence of functional dependencies [1]}. A much more daring extension would be to \(({G},{UCQ})\) or even to \(({FG},{(U)CQ})\) where \({UCQ}\) denotes unions of CQs and \({FG}\) denotes frontier-guarded TGDs. Note, however, that enumeration in CD\(\circ \) Lin of answers to UCQs is not fully understood even in the case without ontologies [2]}. Another interesting question is whether the enumeration problems placed in \({DelayC}_{{lin}}\) in the current paper actually fall within CD\(\circ \) Lin, that is, whether the use of a polynomial amount of memory in the enumeration phase can be avoided.
d
e8439ef0-cf4e-4990-a64c-0ddef2b9a9b6
The nonconforming rotated \(Q_1\) tetrahedron is derived from the nonconforming hexahedral element proposed by Rannacher and Turek [1]} and was applied to linear and nonlinear elasticity problems in [2]}, [3]}, [4]}. It has properties similar to the hexahedral element, with improved bending behaviour in elasticity, compared to the \(P^1\) tetrahedron, and it allows for diagonal mass lumping for explicit time–stepping in dynamic problems.
i
518db373-fdf8-42b1-8cc6-9831daea9f77
In this paper, we investigate its properties as a Stokes element, and show that in combination with linear, continuous pressures, it is inf–sup stable. In a sense it is thus a reduced Taylor–Hood [1]} element with fewer degrees of freedom. It is one of the lowest order elements that is stable for Stokes, and, unlike some other low order non–conforming elements [2]}, [3]}, it fulfills Korn's inequality and can thus handle the strain form of Stokes, cf. [4]}.
i
cf3bae8e-63e3-4450-af3e-ed808a97b4a4
An outline of the paper is as follows: in Section we recall the rotated \(Q_1\) element; in Section 3 we apply it to the Stokes equations and prove stability and convergence; in Section we give some numerical examples to show the properties of the approximation.
i
e93e57fe-4b7e-40bd-bd43-8c48ee26739b
Making neural networks robust against adversarial examples [1]}, [2]}, or proving that they are robust, has been the focus of many works. Robustification methods can be split into two main categories. First, empirical defenses empirically improve the resilience of neural networks but offer no theoretical guarantee. The only known consistently successful empirical defense technique is adversarial training [3]}. Second, certified defense techniques try to provide a guarantee that no adversary exists in a certain neighborhood around a given input [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}. Despite the progress of these methods, the certified regions are still meaningless compared with human perception.
i
16c5b2b7-a42d-473c-b225-1a12a65dea61
In this paper we focus on the Randomized Smoothing (RS) technique [1]}, [2]} which, despite its simplicity, has proved to be the state of the art for certifying neural networks on small neighborhoods of inputs. Given a base classifier \(F\) , RS constructs a new classifier \(G = F * \mathcal {N}(0,\sigma ^2I_d)\) . The smoothed classifier is certifiably robust: \(G(x) = G(x+\delta )\) for all \(\delta \) such that \(||\delta ||_2 < R(x,\sigma )\) with \(R(x,\sigma )=\frac{\sigma }{2}\left(\Phi ^{-1}(p_A) - \Phi ^{-1}(p_B)\right)\)
i
5930ce5b-836d-4ca5-ada2-2e27c5603952
where \(p_A = \mathbb {E}_Z[F_{c_A}(x+Z)] = G(x)_{c_A}\) is the probability of the top class \(c_A\) , \(p_B = \max _{c \ne c_A} \mathbb {E}_Z[F_{c}(x+Z)] = G(x)_{c_B}\) is the probability of the runner-up class \(c_B\) , \(Z \sim \mathcal {N}(0,\sigma ^2I_d)\) and \(\Phi ^{-1}\) is the inverse cumulative distribution function of a standard Gaussian distribution. Several extensions and improvements of RS have been proposed subsequently, for instance by adversarially training the base classifier [1]}, by adding regularization [2]}, through general adjustments to training routines [2]}, and by extending RS to more general distributions beyond the Gaussian case [4]}.
i
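As a concrete reference point, the radius formula above can be evaluated as follows; this is a minimal sketch assuming \(p_A\) and \(p_B\) have already been estimated (in practice, confidence bounds from Monte-Carlo sampling are used instead of the raw estimates).

```python
from scipy.stats import norm

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """l2 radius certified by the Gaussian-smoothed classifier G = F * N(0, sigma^2 I_d).

    p_a: probability (or lower confidence bound) of the top class c_A under noise
    p_b: probability (or upper confidence bound) of the runner-up class c_B
    """
    if p_a <= p_b:
        return 0.0  # no certificate (the smoothed classifier abstains)
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# Example: p_a = 0.9, p_b = 0.05, sigma = 0.25  ->  radius ~ 0.37
print(certified_radius(0.9, 0.05, 0.25))
```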
09bf7108-f57d-456f-855c-8507b146f401
One limitation of the original RS work [1]} is that it only certifies isotropic regions. Properly extending this approach to anisotropic domains is a first important challenge. Enlarging the certified regions as much as possible is a second ongoing challenge for all the proposed approaches. To generalise RS beyond fixed-variance noise, [2]} proposed data-dependent randomised smoothing (DDRS), which additionally maximises the radius (REF ) over the parameter \(\sigma \) . This idea was improved in [3]}, which proposes ANCER, an anisotropic diagonal certification method that performs sample-wise (\(i.e.\) per sample in the test set) region volume maximization. They generalized (REF ) to ellipsoid regions, \(i.e.\) they proved that for \(\Sigma \) a non-degenerate covariance matrix, \(G(x) = G(x+\delta )\) for all \(\delta \) such that \(||\delta ||_{\Sigma ,2} < r(x,\Sigma )\) with \(r(x,\Sigma )=\Phi ^{-1}(p_A) - \Phi ^{-1}(p_B)\)
i
7887dbb8-7273-4b1f-8700-7c508a541417
Motivation and contribution. The main motivation of our work is that ANCER is not resilient to rotations of the input data. Indeed, in a configuration where the input data are concentrated around one canonical axis, ANCER may give good results. However, rotating all the points will dramatically impact the method, since it always seeks axes parallel to the canonical ones, which will not be optimal after rotation. We propose an approach based on information geometry which fixes this issue by design, as depicted in Figure REF . To summarize our approach: we consider a general Gaussian noise \(\mathcal {N}(0,C)\) with covariance \(C\) and, for each input \(x\) , compute the corresponding certification radius \(R(x,C)\) . Next, we maximize \(R(x,C)\) over \(C\) on the manifold of covariance matrices through Riemannian optimisation, and finally smooth the original classifier with the noise \(\mathcal {N}(0,C)\) . This methodology generalises ANCER, which assumes that \(C\) is a diagonal matrix and optimizes its diagonal in Euclidean space. Through experiments on MNIST [1]}, we show that our algorithm outperforms previous works [2]}, [3]} and obtains new state-of-the-art certified accuracy for the DDRS technique.
i
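To illustrate the Riemannian update underlying this approach, here is a minimal sketch of one gradient-ascent step on the manifold of SPD covariance matrices using the affine-invariant exponential map; it is our illustration, the gradient oracle `euclidean_grad_radius` is hypothetical, and the actual method estimates the radius and its gradient by Monte-Carlo sampling.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def spd_exp(base, tangent):
    """Affine-invariant exponential map: Exp_P(V) = P^{1/2} expm(P^{-1/2} V P^{-1/2}) P^{1/2}."""
    p_half = np.real(sqrtm(base))
    p_inv_half = np.linalg.inv(p_half)
    return p_half @ expm(p_inv_half @ tangent @ p_inv_half) @ p_half

def riemannian_ascent_step(cov, euclidean_grad, lr):
    """One ascent step for maximizing R(x, C) over SPD matrices C.
    Under the affine-invariant metric the Riemannian gradient is C sym(grad) C,
    and the update moves along the corresponding geodesic."""
    sym_grad = 0.5 * (euclidean_grad + euclidean_grad.T)
    return spd_exp(cov, lr * (cov @ sym_grad @ cov))

# usage sketch (hypothetical oracle euclidean_grad_radius):
# C = sigma ** 2 * np.eye(d)
# for _ in range(K):
#     C = riemannian_ascent_step(C, euclidean_grad_radius(x, C), lr=gamma)
```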
c4ffc912-69e4-4b2f-a5c5-e51350334443
Organisation of the paper. In Section , we review existing DDRS techniques. In Section , we present our approach, named Riemannian DDRS. In Section , we evaluate our method experimentally and show its advantage over the state of the art. Finally, in Section , we discuss limitations of our method and of RS techniques in general and outline some challenges related to them.
i
3bbefba0-500c-4fa5-bfc0-0364694213a8
Notations. We denote by \(F : \mathbb {R}^d \longrightarrow \mathcal {P}(\mathcal {Y})\) a base classifier typically a neural network, where \(\mathcal {P}(\mathcal {Y})\) is a probability simplex over \(K\) classes. \(\Phi ^{-1}\) is the inverse cumulative distribution function of a standard Gaussian distribution. The \(\ell _p\) -ball is defined with respect to the \(||.||_p\) norm (\(p \ge 1\) ) and the \(\ell _p^A\) -ellipsoid is defined with respect to \(||.||_{A,p}\) . For \(p \in \lbrace 1,2\rbrace \) , \(||.||_{A,p}\) is the composite norm defined with respect to a positive definite matrix \(A\) and a vector \(u\) as \(||A^{-1/p}u||_p\) .
i
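As a small sanity check on the notation, the composite norm can be computed as below (our sketch, using an eigendecomposition to form the matrix power of a positive definite A).

```python
import numpy as np

def composite_norm(a: np.ndarray, u: np.ndarray, p: int) -> float:
    """||u||_{A,p} = || A^{-1/p} u ||_p for positive definite A and p in {1, 2}."""
    w, v = np.linalg.eigh(a)                      # A = V diag(w) V^T with w > 0
    a_inv_root = v @ np.diag(w ** (-1.0 / p)) @ v.T
    return float(np.linalg.norm(a_inv_root @ u, ord=p))
```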
ca70339e-5be3-4310-8305-20a97f1f0f03
In this section, we empirically test the performance of our RDDRS framework for neural network certification and show that it achieves state-of-the-art performance in terms of certified regions on the well-known MNIST dataset. The methods used for comparison are randomized smoothing (RS) as it appeared in [1]}, DDRS [2]} and ANCER [3]}. For evaluation, the same procedure used in [1]}, [5]}, [2]}, [3]} is followed. We train a classical convolutional neural network on MNIST using the Gaussian data augmentation proposed by [1]} with \(\sigma \) varying in \(\lbrace 0.12,0.25,0.5\rbrace \) and then perform the certification on the whole MNIST test set. To compare the different tested methods, we plot the approximate certified accuracy curve as in previous works. This curve is computed for many radii \(R\) and corresponds, for each \(R\) , to the certified accuracy given by the fraction of test-set images which \(G\) classifies correctly and certifies robust with a radius \(R^{\prime }\ge R\) . During certification by the state-of-the-art methods, we use the same \(\sigma \) as for training. To run these methods, we closely follow the recommendations of the respective papers. The DDRS optimization of (REF ) is done using gradient ascent, a learning rate \(\alpha = 0.0001\) , an initial \(\sigma _0 = \sigma \) and a number of samples \(n=100\) for the Monte-Carlo estimation of \(p_A\) at each iteration. Note that, following the previous works, there is no need to estimate \(p_B\) even if the number of classes is greater than 2 (see [1]}). The number of gradient ascent iterations for DDRS is taken from \(K=100\) to \(K=1500\) with a step of 100, and the \(\sigma \) providing the best certified radius is saved. For ANCER, we solved (REF ) with different values of the learning rate \(\alpha \in \lbrace 0.04, 0.4\rbrace \) , a penalization parameter \(\kappa = 2\) , the same number of iterations \(K=100\) , and estimate \(p_A\) with \(n=100\) . Our method RDDRS is run with different values of the learning rate \(\gamma _t \in \lbrace 0.5, 1.25\rbrace \) , \(\kappa = 10^{-6}\) , the same number of iterations \(K=100\) and a number of samples \(n=20000\) to estimate \(\mathbb {E}[A]\) . To calculate the exponential map on \(\mathcal {S}_d^{++}\) , we use the Geomstats library [10]}, [11]}. After obtaining a first trained model for three values of \(\sigma \) as previously described, we run the four methods RS, DDRS, ANCER and RDDRS and report the final certified accuracy. The obtained results are plotted in Figure REF and commented on in what follows.
m
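For reference, a minimal sketch of the Monte-Carlo estimate of \(p_A\) used inside these per-sample optimisations (the `model` is a hypothetical PyTorch classifier returning logits; the final certification step additionally replaces the raw estimate by a lower confidence bound, as in the original RS procedure).

```python
import torch

def estimate_p_a(model, x, sigma, n=100, num_classes=10, batch_size=100):
    """Estimate p_A, the probability of the most frequent class of F(x + Z)
    under Z ~ N(0, sigma^2 I), by sampling n noisy copies of the input x."""
    model.eval()
    counts = torch.zeros(num_classes, dtype=torch.long)
    remaining = n
    with torch.no_grad():
        while remaining > 0:
            b = min(batch_size, remaining)
            noise = sigma * torch.randn((b, *x.shape), device=x.device)
            preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
            counts += torch.bincount(preds.cpu(), minlength=num_classes)
            remaining -= b
    return counts.max().item() / n
```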
3e931722-630f-4e4e-88f4-a4498b83b411
Comments. The plots of the certified accuracy obtained for the four methods and the three values of \(\sigma \) demonstrate that our RDDRS method globally outperforms DDRS and ANCER for almost all proxy radii and for all values of \(\sigma \) . The difference becomes more pronounced in our favour when \(R\) gets bigger, \(i.e.\) \(R>3\) , and also when \(\sigma \) increases. Only for \(\sigma =0.12\) and small radii \(R\) is our method slightly worse than ANCER. Moreover, we observe that none of the certification methods except ours can certify a good percentage of the test set at large radii. All of this illustrates that our method has significantly pushed forward data-dependent smoothing techniques.
m
bf14ab5e-5bec-4671-99ea-b5e6d7585629
Discussion on the parameter \(\kappa \) . [1]} proposed to penalize (REF ) for empirical considerations. They set the default value of \(\kappa \) to 2. For our formulation (REF ), we have conducted several runs of RDDRS with different values of \(\kappa \) and have concluded that this factor does not have a significant effect on the method's convergence. The lower bound condition \(\lambda _i^x \ge \sigma _x^*\) has however a more significant effect on that convergence.
m
dc5d73ae-3fda-429d-8ebc-36c1965ab55d
Runtime discussion. Our method is slower than the other methods. This essentially comes from the computation of \(\mathbb {E}[A]\) , for which we considered a larger number of samples, \(n=20000\) . Indeed, despite this value being relatively high, it allows our results to be very stable: different runs with the same \(n\) give almost the same performance.
m
6553966e-6fb7-499c-ac9c-90b80a0f4afd
Scalability challenges. Computing the exponential map becomes costly for high-dimensional matrices. We believe that, on datasets such as CIFAR10 [1]}, these computations are feasible and our method is still applicable with reasonable computational resources. However, extending the approach to high-dimensional data such as IMAGENET [2]} seems very challenging. Some geometric approximations of the exponential map could be useful for that.
m
e55ef81b-770e-4305-b234-0d2b8028b698
Nonlinear dynamical systems with non-integrable differential constraints, the so-called nonholonomic systems, have been attracting many researchers and engineers for the last three decades. A theorem in [1]} established the challenging, negative fact that no smooth time-invariant feedback control law can stabilize nonholonomic systems. The applications include various types of robotic vehicles and manipulation. Some of them have often been used as benchmark platforms to demonstrate the performance of a proposed controller, not only for a control problem of a single robotic system but also for a distributed control problem of multiagent robotic systems.
i
090e9cd9-af71-4606-aa18-786ffe3c59b8
A V/STOL aircraft without gravity ([1]}), an underactuated manipulator ([2]}), and an underactuated hovercraft ([3]}) belong to a class of dynamic nonholonomic systems which are subject to acceleration constraints. The mathematical representation of these systems can be transformed to the second-order chained form by a coordinate and input transformation. The second-order chained form is a canonical form for dynamic nonholonomic systems.
i
ef54184e-9919-4dae-bf81-b8416da4f04e
Several control approaches to the second-order chained-form system have been developed so far. Most of them focus on circumventing the theorem of [1]}. [2]} and [3]} exploit discontinuity in their stabilizing controllers; [4]} and [5]} reduce the control problem to a trajectory tracking problem. Other than those, [6]} and [7]} consider a motion planning problem (in other words, a feedforward control problem).
i
b776b8a7-0562-4a0b-815b-23e00d01ef3c
For motion planning of the second-order chained form system, this paper presents a novel control approach based on switching a state. The second-order chained form system is divided into three subsystems. Two of them are so-called double integrators; the other subsystem is a nonlinear system depending on one of the double integrators. In other words, the input matrix of the latter subsystem depends on a single state of the double integrators. The double integrator is linearly controllable, which makes it possible to switch the value of the position state in order to modify the nature of the nonlinear subsystem. Steering this value to one modifies the nonlinear subsystem into a double integrator; steering it to zero modifies the nonlinear subsystem into a linear autonomous system. This property is the basis of the proposed control approach, which is composed of such state switching together with sinusoidal control inputs. Its effectiveness is validated by a simulation result.
i
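For reference, the second-order chained form is commonly written as \(\ddot{x}_1=u_1\) , \(\ddot{x}_2=u_2\) , \(\ddot{x}_3=x_2u_1\) , which matches the decomposition described above: the third subsystem's input matrix depends on the state \(x_2\) . The following sketch (our illustration, not the paper's controller) integrates this form numerically and shows the effect of holding \(x_2\) at one, where \(\ddot{x}_3=u_1\) behaves as a double integrator; holding \(x_2\) at zero instead gives \(\ddot{x}_3=0\) , a linear autonomous system.

```python
import numpy as np

def simulate(u1, u2, state0, dt=1e-3, t_end=10.0):
    """Explicit-Euler integration of the second-order chained form
       x1'' = u1(t), x2'' = u2(t), x3'' = x2 * u1(t).
       state = (x1, x2, x3, v1, v2, v3)."""
    state = np.array(state0, dtype=float)
    traj = [state.copy()]
    for k in range(int(t_end / dt)):
        t = k * dt
        x2 = state[1]
        acc = np.array([u1(t), u2(t), x2 * u1(t)])
        state[:3] += dt * state[3:]   # positions
        state[3:] += dt * acc         # velocities
        traj.append(state.copy())
    return np.array(traj)

# Hold x2 = 1 and drive x1 with a sinusoid: x3 then follows the double integrator x3'' = u1.
traj = simulate(u1=lambda t: np.sin(t), u2=lambda t: 0.0,
                state0=(0.0, 1.0, 0.0, 0.0, 0.0, 0.0))
print(traj[-1][:3])
```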
0d5c84ae-f6e0-425a-9472-6dfadeffbc16
For a motion planning problem of the second-order chained form system, this paper has proposed a state-switching control approach based on subsystem decomposition. The subsystem decomposition divides the second-order chained form system into three subsystems. One of the subsystems has an input matrix that depends on a state of another subsystem. Switching that state between one and zero modifies the nature of the associated subsystem. This is the key point of the proposed control approach. The effectiveness of the proposed approach was shown by the simulation result.
d
8ef97593-3bb7-4ce1-8cc6-a5b2ca295f3d
Future work includes the following: to compare the proposed approach with other related ones; to investigate further properties of the proposed approach; and to extend the second-order chained form system to the higher-order one.
d
02b52ee6-4561-4350-8098-71dfbf7105ce
The combinatorics of planar maps (i.e., planar multigraphs endowed with an embedding on the sphere) has been a very active research topic ever since the early works of W.T. Tutte [1]}. In the last few years, after tremendous progresses on the enumerative and probabilistic theory of maps [2]}, [3]}, [4]}, [5]}, the focus has started to shift to planar maps endowed with constrained orientations. Indeed constrained orientations capture a rich variety of models [6]}, [7]} with connections to (among other) graph drawing [8]}, [9]}, pattern-avoiding permutations [10]}, [11]}, [12]}, Liouville quantum gravity [13]}, or theoretical physics [14]}. From an enumerative perspective, these new families of maps are expected to depart (e.g. [15]}, [16]}) from the usual algebraic generating function pattern followed by many families of planar maps with local constraints [17]}. From a probabilistic point of view, they lead to new models of random graphs and surfaces, as opposed to the universal Brownian map limit capturing earlier models. Both phenomena are first witnessed by the appearance of new critical exponents \(\alpha \ne 5/2\) in the generic \(\gamma ^nn^{-\alpha }\) asymptotic formulas for the number of maps of size \(n\) .
i
4d88bfe1-5bcd-458a-a5dd-477044e200ac
A fruitful approach to oriented planar maps is through bijections (e.g. [1]}) with walks with a specific step-set in the quadrant, or in a cone, up to shear transformations. We rely here on a recent such bijection [2]} that encodes plane bipolar orientations by certain quadrant walks called tandem walks, and that was recently used in the article [3]} to obtain counting formulas for plane bipolar orientations with control on the face-degrees: we show in Section  that it can be furthermore adapted to other models by introducing properly chosen weights. Building on these specializations, in Section  we obtain exact enumeration results for plane bipolar posets and transversal structures. In particular we show that the number \(b_n\) of plane bipolar posets on \(n+2\) vertices is equal to the number of plane permutations of size \(n\) introduced in [4]} and recently further studied in [5]}, and that a reduction to small-steps quadrant walks models (which makes coefficient computation faster) can be performed for the number \(e_n\) of plane bipolar posets with \(n\) edges and the number \(t_n\) of transversal structures on \(n+2\) vertices. In Section  we then obtain asymptotic formulas for the coefficients \(b_n,e_n,t_n\) all of the form \(c\gamma ^nn^{-\alpha }\) with \(c>0\) and with \(\gamma ,\alpha \ne 5/2\) explicit. Using the approach of [6]} we then deduce from these estimates that the generating functions for \(e_n\) and \(t_n\) are not D-finite. Finally in Section  we provide a direct bijection between plane permutations of size \(n\) and plane bipolar posets with \(n+2\) vertices, which is similar to the one [7]} between Baxter permutations and plane bipolar orientations.
i
0c935f46-1356-4ad3-a8b0-850c91fd175d
A plethora of routing techniques supplement the most frequently used fastest and shortest route criteria. Based on the seminal work of Golledge in 1995  [1]}, Johnson et al.  [2]} categorizes these alternative routing techniques into the positive, negative, topological and personalized. The positive category encompasses routes that are the most appealing or attractive. One such technique is described by Runge et al.  [3]}. It is capable of generating scenic routes via a classification of Google Street View images. Similarly, Ali et al.  [4]} present an exploration-based route planner that learns from the routes commonly taken by photographers. As a result, it generates aesthetically-pleasing routes for city exploration and/or photography sessions. Routes that include the least number of unfavorable or adverse conditions to a particular user are classified as negative routes, not because the routes themselves are negative but because routing decisions are made on the basis of discernible negatives. Shah et al.  [5]} demonstrated an algorithm for generating safe routes, deduced from the reported levels of crime in any given locality. Li et al.  [6]} created a routing service to be used in cases of natural and man-made disasters. It first optimizes on measures of survivability, and only then on travel time. Topological routes give higher preference to factors such as simplicity or efficiency. Duckham and Kulik  [7]} proposed a routing algorithm for simple routes that are easy to describe and execute, while Ganti et al.  [8]} presented a fuel-efficient routing algorithm. Among all approaches to routing, only personalized routes take into account the preferences of individual users. Letchner et al. [9]} introduced the concept of an individual inefficiency ratio (r) – which is the degree to which a particular user deviates from the fastest route – to fulfil personal preferences. It is based upon a large dataset of personal GPS traces. This technique suggests routes by optimizing for the preferred ratio “r” of the user. Delling et al. [10]} proposed a routing algorithm that creates routes based upon a user's preferences for speed level, type of road, and number and type of turns. While there has been extensive research into simple routing strategies, as well as recent research into personalized routing, we argue that the diverse needs of users cannot be satisfied with routing techniques that exploit only the passive preferences of the user. Users of non-autonomous vehicles can always optimize the suggested route based upon their current needs, but such interventions become problematic in the context of autonomous vehicles where active control is passed from the user to the vehicle itself. Hence, we propose a routing approach that considers these active elements and improves upon them.
w
5560b2c7-1465-4fa3-8c10-daff32ecc842
Machine Learning allows organisations and researchers to build predictive models from sets of data. In certain circumstances, these predictive models have to be trained on data which is sensitive in nature. For example, models in healthcare settings often have to be trained on patients' data, which is private and confidential. This leads to two problems: firstly, private data is more difficult to acquire because data owners may be reluctant to give away their data or to give consent for their data to be used for machine learning purposes. Secondly, the organisation training the model must keep hold of this data, which introduces security risks and considerations.
i
3108a844-d065-4acf-a9c7-42e567ae4159
Federated Learning [1]} is a technique allowing data owners to contribute their data to the development of machine learning models without revealing it. Broadly speaking, there are two main scenarios in which Federated Learning is useful.
i
17c54f0c-a832-4c76-9162-dfcf53dc16c5
In the Crowdsource Setting, an organisation or team of researchers wish to produce a machine learning model and decide on a set of model hyperparameters and a training protocol. They do not own enough data to train the model themselves, so must draw on the data from multiple outside sources. An example of this scenario would be Google, who train the language model behind Google Keyboard using textual input from users [1]}.
i
e299c900-2f44-44e3-bf8f-f91fc85759c6
In the Consortium Setting, multiple organisations and/or individual data owners wish to combine data to train a model that performs better than any model they could train with only their own data. They collectively agree on a set of model hyperparameters and a training protocol. They also agree to share ownership of the resulting model, either by a pre-determined split or based on the value of their individual contributions. MELLODDY [1]} is an example of this scenario.
i
5a5a642f-c4f5-45f4-bcd0-883bea472617
MNIST [1]} is a labelled dataset of handwritten digits. All images are monochrome and \(28 \times 28\) pixels. The dataset is split into a Train Set of 60,000 images, and a Test Set of 10,000 images.
m
fd15b7b1-eaa0-4c95-a120-d41a63e3d598
In this section we run a series of experiments using MNIST, each with both protocols, as a validation of our contributivity metric. We present a small selection of results for illustrative purposes in the interest of space. The full set of results will be made available in the 2CP repository. For Crowdsource, the entire Test Set is used as the holdout test set (belonging to Alice), while the Train Set is split between trainer clients (Bob et al) in various ways. For Consortium, the Train Set is split between trainer clients in the same way, while the Test Set is not used. In all tests, the trainers ran 5 training rounds in which they trained the model using their whole dataset for 1 epoch, with a batch size of 32 and a learning rate of 0.01. In almost all tests, the final global model would achieve an accuracy rate of at least 0.9 on the test set.
m
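For context, the round structure underlying these runs is plain federated averaging with the stated hyperparameters; the sketch below is our illustration (a hypothetical `make_model` factory, local SGD, unweighted averaging of float-only state dicts), whereas the real experiments exchange updates through the 2CP protocols rather than a local loop.

```python
import copy
import torch
from torch.utils.data import DataLoader

def local_update(model, dataset, lr=0.01, epochs=1, batch_size=32):
    """One trainer's round: fine-tune a copy of the current global model on its own data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in DataLoader(dataset, batch_size=batch_size, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_averaging(make_model, trainer_datasets, rounds=5):
    """Plain FedAvg: in each round, average the trainers' updated weights (unweighted,
    assuming a model whose state dict contains only floating-point tensors)."""
    global_model = make_model()
    for _ in range(rounds):
        updates = [local_update(global_model, d) for d in trainer_datasets]
        avg = {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}
        global_model.load_state_dict(avg)
    return global_model
```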
f55cfa70-7018-42f1-94ce-23520c47bdfa
In our experiments, almost all tokens are given in the first few rounds. This is likely because our models converge almost fully within two rounds. We also observe very little difference in the relative shares of tokens given to clients between training rounds. A concern expressed about step-by-step evaluation is that scores might fluctuate wildly between rounds [1]}, but we have not observed this behaviour.
m
d6638b11-8e46-4d06-b48b-ac4fca5c22e5
In MNIST Test A, we split the Train Set randomly and equally between the trainer clients. One therefore expects that each trainer has similar, IID datasets with similar utilities. We vary the number of trainer clients between 2 and 7. As expected, in all cases the token count is split almost equally between clients, reflecting their equal contributions. Remarkably, the Consortium Protocol produces near identical results to Crowdsource at each round, despite its lack of access to Alice's test set. Figure REF shows one such example. <FIGURE>
m
463e926c-7d96-4045-9a6e-64f10336a448
In MNIST Test B, the Train Set is split randomly, but in varying ratios. This experiment is intended to isolate the effect of having a smaller or larger dataset on the final token count. As expected, figure REF shows that larger datasets are better rewarded. In all such experiments, the results from the Crowdsource and Consortium Protocols match each other closely. <FIGURE><FIGURE>
m
71e2b91b-a2b9-4ea1-9846-0b85cb89987a
Finally, MNIST Test C splits the Train Set as in A, but the trainers each replace a proportion \(p\) of their labels with a random one, simulating a label flipping attack to try poisoning the model. One therefore hopes that clients with higher \(p\) values receive fewer tokens. In Figure REF , we can see that for Crowdsource, more label flipping does lead to fewer tokens, as it holds up less favourably to the unmodified holdout set. However, Figure REF also shows that for Consortium, a flip probability \(0<p<0.5\) actually increases eventual token shares at the expense of unmodified trainers. This quickly drops off to zero for \(p>0.5\) . Among our experiments, this effect is greatest when there are many flipped labels overall, and \(0.2<p<0.3\) is rewarded most. In these scenarios, these clients essentially amount to a Sybil attack for untargeted model poisoning [1]}. The interpretation is that they have succeeded in diverting the model parameters to a sub-optimal local minimum, where unmodified trainers are punished for attempting to bring the model parameters away. Note that the early training rounds match the token shares from Crowdsource, which supports this theory. The Consortium measure should however be adjusted to avoid these situations.
m
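For completeness, the label-flipping perturbation used in Test C can be reproduced with a few lines; this is our sketch of the described setup, where each trainer replaces a proportion \(p\) of its labels with a uniformly random class.

```python
import random

def flip_labels(labels, p, num_classes=10, seed=0):
    """Replace each label with a uniformly random class with probability p."""
    rng = random.Random(seed)
    return [rng.randrange(num_classes) if rng.random() < p else y for y in labels]

# e.g. a trainer with p = 0.3 poisons roughly 30% of its MNIST labels
poisoned = flip_labels(list(range(10)) * 3, p=0.3)
```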
17373452-2470-4562-8527-a5de41d4b676
This final experiment highlights the limitations of Federated Averaging. The Consortium protocol would thus benefit from being used in conjunction with a robust aggregation mechanism dismissing malicious updates as in [1]}.
m
12a2eb64-8b85-4eb1-b4fa-7c957a4f2962
In this paper, we have introduced and implemented two protocols to calculate the contributivity of each participant's data set in a Federated Learning network. Both protocols are decentralised and maximise the fairness in the calculation.
d
a05ba5e1-a6d9-45c1-81fb-f4688e1395fb
Our experiments showed that the contributivity scores resulting from the Crowdsource Protocol were sound when computed for the MNIST dataset. Clients with larger or higher-quality datasets were rewarded with higher shares in the final model. Clients with less to contribute, but whose contributions are still positive, are still rewarded for their participation. This result was obtained with a high-quality holdout test set. The Consortium Protocol correctly rewarded larger datasets with higher scores, but our experiments revealed situations where low-quality datasets were given higher scores than perfect ones.
d
2ab0340a-6a3c-425a-93bd-b92126c98e63
At the time of writing, our protocols and 2CP are still under active development. It will be interesting to evaluate the two protocols when trainers have deliberately different and unique distributions of data, instead of being randomly sampled. Similarly, we want to understand the impact of privacy-preserving mechanisms such as Differential Privacy on the quality of the model updates, and hence on the contributivity scores and shares in the final model.
d
0e2ea6a0-82e4-4f3a-bbc5-c5084712e87f
2CP does not yet support verified evaluation and verified training as described in this paper. The contributivity scheme only produces fair results if all clients follow the protocol honestly, or semi-honestly (honest but curious). Malicious clients can freely push bad and/or fake model updates, making the training process highly vulnerable to both untargeted and targeted model poisoning attacks [1]}. Future work will consequently focus on designing a verifiable evaluation and training procedure, as well as designing a penalty scheme to use in conjunction. From a technical standpoint, we aim to optimise the 2CP framework for gas costs and scalability to make it suitable for public deployment on Ethereum and IPFS.
d
c3447977-752f-40a7-b525-f954509e550d
We are witnessing a continuous and indeed accelerating move from decision support systems that are based on explicit rules conceived by domain experts (so called expert systems or knowledge-based systems) to systems whose behaviors can be traced back to an innumerable amount of rules that have been automatically learnt on the basis of correlative and statistical analyses of large quantities of data: this is the shift from symbolic AI systems to sub-symbolic ones, which has made the black-box nature of these latter systems an object of a lively and widespread debate in both technological and philosophical contexts [1]}. The main assumption motivating this debate is that making subsymbolic systems explainable to human decision makers makes them better and more acceptable tools and supports.
i
88842428-c2ab-45cf-9b0c-6121e9f0beb9
This assumption is widely accepted [1]}, [2]}, [3]}, although there are a few scattered voices against it (see e.g. [4]}, [5]}, [6]}, [7]}): for instance, explanations were found to increase complacency towards the machine advice [8]}, increase automation bias [9]}, [10]} as well as groundlessly increase confidence in one's own decision [11]}, [12]}. Understanding or participating to this debate, which characterizes the scientific community that recognizes itself in the expression “explainable AI” and in the acronym “XAI”, is difficult for the seemingly disarming heterogeneity of definitions of explanation, and the variety of characteristics that are associated with “good explanations”, or of the systems that generate them [13]}.
i
45ed5f9e-e42c-42d3-ad34-8586bf648ace
In what follows, we adopt the simplifying approach recently proposed in [1]}, where explanation is defined as the meta output (that is an output that describes, enriches or complements, another main output) of an XAI systems. In this perspective, good explanations are those that make the XAI system more usable, and therefore a useful support. The reference to usability suggests that we can assess explanations (and explainability) on different levels, by addressing complementary questions, such as: do explanations make the socio-technical, decision-making setting more effective, in that they help decision makers commit fewer errors? Do they make it more efficient, by making decisions easier and faster, or just requiring fewer resources? And lastly, but not least, do they make users more satisfied with the advice received, possibly because they have understood it more, and this made them more confident about their final say?
While some studies [1]} have already considered the psychometric dimension of user satisfaction (see, e.g., the concept of causability [2]}, related to the role of explanations in making advice more understandable from a causal point of view), here we focus on effectiveness (i.e., accuracy) and on cognitive dimensions other than understandability, both with regard to the support (e.g., trust and utility) and to the explanations received. In fact, explanations can be either clear or ambiguous (cf. comprehensibility); either tautological and placebic [3]} or instructive (cf. informativeness); either pertinent or off-topic (cf. pertinence); and, as obvious as it may seem, either correct or wrong, as any AI output can be. Therefore, otherwise good explanations (that is, persuasive, reassuring, comprehensible ones) could even mislead their target users: this is the so-called white-box paradox, which we have already begun investigating in previous empirical studies [4]}, [5]}. Thus, investigating whether and to what extent users find explanations “good” (in the next section we will make this term operationally clear) amounts to investigating possible determinants of machine influence (also called dominance), automation bias, and other negative effects that the output of decision support systems can have on decision performance and practices.
To investigate how human decision makers perceive explanations, we designed and conducted a questionnaire-based experiment involving 44 readers of varying expertise and competence in cardiology (namely, 25 residents and 19 specialists) from the Medicine School of the University Hospital of Siena (Italy), in an AI-supported ECG reading task not connected to their daily care. The readers were invited to classify and annotate 20 ECG cases, previously selected by a cardiologist from a random set of cases extracted from the ECG Wave-Maven repository (https://ecg.bidmc.harvard.edu/maven/mavenmain.asp) on the basis of their complexity (recorded in the above repository), so as to have a balanced dataset in terms of case type and difficulty. The study participants had to provide their diagnoses both with and without the support of a simulated AI system, according to an asynchronous Wizard-of-Oz protocol [1]}: the support of the AI system included both a proposed diagnosis and a textual explanation backing it. The experiment was performed by means of a web-based questionnaire set up through the LimeSurvey platform (version 3.23), to which the readers had been individually invited by personal email.
The ECG readers were randomly divided into two groups, equivalent in terms of expertise, which were meant to interact with the AI system differently (see Fig. REF ); in doing so, we could comparatively evaluate potential differences between a human-first and an AI-first configuration. In both groups, the first question of the questionnaire asked the readers to self-assess their trust in AI-based diagnostic support systems for ECG reading. The same question was also repeated at the end of the questionnaire, to evaluate potential differences in trust caused by the interaction with the AI system. <FIGURE>
For each ECG case, the readers in the human-first group were first shown the ECG trace together with a brief case description, and were asked to provide an initial diagnosis (in free-text format). Once this diagnosis had been recorded, the respondents were shown the diagnosis proposed by the AI; after having considered this advice, they could revise their initial diagnosis; they were then shown the textual explanation (motivating the AI advice) and asked to provide their final diagnosis in light of this additional information. In contrast, the participants enrolled in the AI-first group were shown the AI-proposed diagnosis together with the ECG trace and case description; only afterwards were they asked to provide their own diagnosis in light of this advice only. Finally, these ECG readers were shown the textual explanation, and asked whether they wanted to revise their initial diagnosis or confirm it.
For each textual explanation, we asked the participants to rate its quality in terms of comprehensibility, appropriateness and utility, in this order (so as to reflect a natural sequence through perception, interpretation and action). In particular, while comprehensibility and utility were considered self-explanatory terms, we pointed out in a written comment that appropriateness, for our research aims, would combine the respondents' perception of pertinence and correctness (that is, that dimension would reflect the extent to which “the explanation had something to do with the given advice” and, with regard to the latter, “it was plausible and correct”). In other words, we asked the participants to judge the quality of the explanation with respect to the advice and with respect to the case at hand by means of two different constructs (appropriateness and utility, respectively).
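To summarise the two per-case flows in one place, here is a minimal sketch listing the step sequences as plain data; the step labels are illustrative names of ours (HD1 is assumed to correspond to the unaided initial diagnosis referenced later when stratifying by baseline accuracy), not identifiers from the actual LimeSurvey questionnaire.

```python
# Illustrative per-case step sequences for the two interaction protocols.
# Labels such as "HD1" are assumptions for exposition (HD1 = unaided initial
# diagnosis), not identifiers from the actual LimeSurvey questionnaire.

HUMAN_FIRST_FLOW = [
    ("show",   "ECG trace and brief case description"),
    ("answer", "HD1: initial diagnosis, unaided (free text)"),
    ("show",   "AI-proposed diagnosis"),
    ("answer", "HD2: diagnosis possibly revised in light of the AI advice"),
    ("show",   "textual explanation motivating the AI advice"),
    ("answer", "HD3: final diagnosis"),
    ("answer", "ratings: comprehensibility, appropriateness, utility"),
]

AI_FIRST_FLOW = [
    ("show",   "ECG trace, case description and AI-proposed diagnosis"),
    ("answer", "AI1: initial diagnosis, already informed by the AI advice"),
    ("show",   "textual explanation motivating the AI advice"),
    ("answer", "AI2: final diagnosis (confirm or revise)"),
    ("answer", "ratings: comprehensibility, appropriateness, utility"),
]
```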
The accuracy of the simulated AI, that is, the proportion of correct diagnostic advice, was 70% with respect to the ECG Wave-Maven gold standard. To avoid negative priming, and hence avoid fostering unnecessary distrust in the AI, the first five cases of the questionnaire shown to the participants were all associated with a correct diagnosis and a correct explanation from the XAI support. Although the participants had been told that the explanations were automatically generated by the AI system, like the diagnostic advice, they had actually been prepared by a cardiologist: in particular, 40% of the explanations were incorrect or not completely pertinent to the cases. More precisely, for the 5 cases classified as `simple', all explanations were correct; for the 9 cases of medium complexity, 4 explanations were wrong; and for the remaining 6 cases denoted as `difficult', 4 explanations were wrong.
We addressed the following research questions:

RQ1a: Does the readers' expertise have any effect in terms of basal trust, difference in trust, or final trust? RQ1b: Does the interaction protocol have any effect on the difference in trust, or on final trust?

RQ2: Is there any difference or correlation between the three investigated psychometric dimensions (i.e., comprehensibility, appropriateness, utility)? A positive answer to this question would justify the use of a latent quality construct (defined as the average of the psychometric dimensions) to simplify the treatment of the other research questions.

RQ3: Do the readers' expertise, their diagnostic ability (i.e., accuracy), and the adopted interaction protocol have any effect in terms of differences in perceived explanation quality? Similarly, does the perceived explanation quality correlate with basal or final trust? In regard to diagnostic ability, we stratified readers according to whether their baseline accuracy (cf. HD1, see Fig. REF ) was higher or lower than the median.

RQ4: Is there any correlation between the explanations' perceived quality and the readers' susceptibility to technological dominance [1]}, [2]}? Although technological dominance is a multi-factorial concept, for our practical aims we express it as the rate of decision change due to exposure to the output of the AI system (a minimal computational sketch follows this list). Moreover, we distinguish between positive dominance, when changes occur from initially wrong decisions (e.g., diagnoses) to eventually correct ones, and negative dominance for the dual case, in which the AI support misleads decision makers.

RQ5: Finally, does the correctness of the explanation make any difference in terms of either perceived explanation quality or influence (i.e., dominance)?
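As referenced in RQ4, the following sketch shows how the dominance rates could be operationalised from per-case records; the field names and the example data are illustrative, not the study's actual schema.

```python
# Minimal sketch of dominance as the rate of decision change after exposure
# to the AI output; field names and example data are illustrative only.

def dominance_rates(records):
    """records: list of dicts with boolean fields
    'changed'          -- the reader revised the diagnosis after the AI output,
    'initial_correct'  -- the pre-AI diagnosis was correct,
    'final_correct'    -- the post-AI diagnosis was correct."""
    n = len(records)
    changed = [r for r in records if r["changed"]]
    positive = [r for r in changed if not r["initial_correct"] and r["final_correct"]]
    negative = [r for r in changed if r["initial_correct"] and not r["final_correct"]]
    return {
        "dominance": len(changed) / n,            # overall rate of decision change
        "positive_dominance": len(positive) / n,  # wrong -> correct after the AI output
        "negative_dominance": len(negative) / n,  # correct -> wrong after the AI output
    }

# Hypothetical per-case outcomes for a single reader:
cases = [
    {"changed": True,  "initial_correct": False, "final_correct": True},
    {"changed": True,  "initial_correct": True,  "final_correct": False},
    {"changed": False, "initial_correct": True,  "final_correct": True},
    {"changed": False, "initial_correct": False, "final_correct": False},
]
print(dominance_rates(cases))
# {'dominance': 0.5, 'positive_dominance': 0.25, 'negative_dominance': 0.25}
```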
The above-mentioned research questions were evaluated by means of a statistical hypothesis testing approach. In particular, correlations were evaluated by means of the Spearman \(\rho \) (and the associated p-values), so as to properly account for monotone relationships between ordinal and continuous variables. In regard to research questions 1 and 3, on the other hand, paired comparisons were performed by applying the Wilcoxon signed-rank test, while unpaired comparisons were performed by applying the Mann-Whitney U test. In both cases, effect sizes were evaluated through the rank-biserial correlation (RBC). In all cases, to control the false discovery rate due to multiple hypothesis testing, we adjusted the observed p-values using the Benjamini-Hochberg procedure. Significance was evaluated at the 95% confidence level.
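For readers who want to reproduce this kind of analysis, the sketch below shows the pipeline with standard SciPy and statsmodels calls; the arrays are random placeholders, not the study data, and the variable names are our own assumptions.

```python
# Sketch of the analysis pipeline using standard SciPy / statsmodels calls.
# The arrays below are random placeholders, not the study data.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
initial_trust = rng.integers(1, 6, size=44).astype(float)
final_trust = rng.integers(1, 6, size=44).astype(float)
quality = rng.integers(1, 6, size=44).astype(float)   # per-reader mean perceived quality
is_novice = rng.integers(0, 2, size=44).astype(bool)  # stratification variable

p_values = []

# Monotone association between ordinal/continuous variables: Spearman rho.
rho, p_corr = stats.spearmanr(quality, final_trust)
p_values.append(p_corr)

# Paired comparison (e.g. initial vs final trust): Wilcoxon signed-rank test,
# with the matched-pairs rank-biserial correlation as effect size.
w_stat, p_wilcoxon = stats.wilcoxon(initial_trust, final_trust)
nz = (final_trust - initial_trust)[final_trust != initial_trust]
ranks = stats.rankdata(np.abs(nz))
rbc_paired = (ranks[nz > 0].sum() - ranks[nz < 0].sum()) / ranks.sum()
p_values.append(p_wilcoxon)

# Unpaired comparison (e.g. novices vs experts): Mann-Whitney U test,
# with RBC = 1 - 2U / (n1 * n2) (the sign depends on the group order).
u_stat, p_mw = stats.mannwhitneyu(quality[is_novice], quality[~is_novice])
rbc_unpaired = 1 - 2 * u_stat / (is_novice.sum() * (~is_novice).sum())
p_values.append(p_mw)

# Benjamini-Hochberg adjustment to control the false discovery rate,
# with significance evaluated at the 95% confidence level.
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(dict(zip(["spearman", "wilcoxon", "mann-whitney"], p_adjusted)), reject)
```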
After having closed the survey, we collected a total of 1352 responses from the 44 ECG readers involved, of which 21 had been enrolled in the human-first protocol and the remaining 23 in the AI-first protocol.
The difference between initial and final trust was significant for the novice readers (adjusted p: .004, RBC: 0.92), but not for the expert ones (adjusted p: .407, RBC: 0.35). Furthermore, even though the difference in initial trust between novice and expert readers was not significant (adjusted p: .439, RBC: 0.15), the difference in final trust was instead significant (adjusted p: .009, RBC: 0.54), with the novice readers reporting a higher final trust on average than the expert ones. In regard to the stratification by interaction protocol, the difference between initial and final trust was significant for the human-first cohort (adjusted p: .016, RBC: 0.81) but not so for the AI-first cohort (adjusted p: .078, RBC: 0.60), even though the effect size was large. Differences between the two cohorts were not significant, neither in terms of initial trust (adjusted p: .090, RBC: 0.33) nor in terms of final trust (adjusted p: .787, RBC: 0.05).
The correlations between the three psychometric dimensions is reported in Figure REF . The three dimensions were all strongly correlated with each other (appropriateness vs comprehensibility, \(\rho \) : .86; appropriateness vs utility, \(\rho \) : .82; comprehensibility vs utility, \(\rho \) : .80), and all of the correlations were significant (adjusted p-values \(< .001\) ). Furthermore, the internal consistency of the questionnaire, in regard to the psychometric items (i.e. appropriateness, comprehensibility and utility), was very high (Cronbach \(\alpha \) : .93). This result suggests that the three psychometric dimensions can be associated with an aggregated quality construct (defined as the average between appropriateness, comprehensibility and utility), which we will consider in what follows. <FIGURE>
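For reference, the internal-consistency check and the aggregated quality construct could be computed as in the following sketch; the ratings matrix is a placeholder, not the study's responses, and the rating scale shown is an assumption.

```python
# Minimal sketch of Cronbach's alpha and the aggregated quality construct;
# the ratings below are placeholder data, not the study's responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array with one row per response and one column per item
    (here: appropriateness, comprehensibility, utility)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

ratings = np.array([  # columns: appropriateness, comprehensibility, utility
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])

alpha = cronbach_alpha(ratings)   # internal consistency of the three items
quality = ratings.mean(axis=1)    # latent "quality" score per response
print(round(alpha, 2), quality)
```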
The results regarding the differences in the perceived quality of the explanations, stratified by either expertise, interaction protocol or readers' baseline accuracy, are reported in Figure REF . <FIGURE>
The difference in explanations' quality between the human-first and AI-first interaction protocols (adjusted p: .981, RBC: 0.01) was not significant and associated with a negligible effect. Even though the difference in explanations' quality with respect to the readers' baseline accuracy was similarly non-significant (adjusted p: .155), the relationship between the two variables was associated with a medium-to-large effect size (0.36). By contrast, the difference in explanations' quality between novice and expert readers (adjusted p: .012, RBC: 0.51) was significant and associated with a large effect size. The correlations between the explanations' perceived quality and initial and final trust are reported in Figure REF . Explanations' quality was weakly correlated with the readers' basal trust in AI-based support systems (Spearman \(\rho \) : .27, adjusted p: .086), but significantly and strongly correlated with final trust (Spearman \(\rho \) : .71, adjusted p \(< .001\) ). <FIGURE>
The correlations between dominance (distinguishing between positive and negative dominance) and the perceived explanations' quality are reported in Figure REF . Quality was moderately-to-strongly and significantly correlated with dominance (Spearman \(\rho \) : .57, adjusted p: .007) and also with the positive component of dominance (Spearman \(\rho \) : .52, adjusted p: .025), while it was only moderately, and not significantly, correlated with negative dominance (Spearman \(\rho \) : .39, adjusted p: .077). <FIGURE>
Finally, the relationship between the correctness of the explanations, their perceived quality, and their dominance is depicted in Figure REF . Both the average quality and the dominance increased when the explanations were correct, as compared to when they were wrong: while the observed differences were not significant (quality, adjusted p: .399, RBC: 0.15; dominance, adjusted p: .126, RBC: 0.33), the effects were small-to-medium (for quality) and medium (for dominance). <FIGURE>
To discuss the results of the experiment, we follow the order of presentation of the research questions in Section . Thus, in regard to RQ1, we point out that we did not observe any significant effect of the interaction protocol on initial trust. This result is not unexpected and was in fact desirable: indeed, readers were randomly assigned to one of the two considered protocols. Interestingly, though, the interaction protocol seemingly did not have an effect on final trust either, despite the fact that the interaction protocols had a significant effect in terms of overall accuracy of the hybrid human-AI team [1]}. In this sense, it appears that even though the order of the AI intervention was certainly impactful with regard to the overall diagnostic performance (an effect that is not entirely trivial and calls for further study to understand its causes), it might be perceived by the user as a secondary element compared to other trust-inducing or trust-hindering factors. Nonetheless, both interaction protocols had a large effect on the trust difference: in particular, the human-first protocol led to a significant increase from initial to final trust. A possible explanation for this difference can be found in the accuracy level of the AI system, which in this user study was 70%, i.e. well above the average accuracy of the readers, and thus likely to lead to a positive interaction for the study participants and hence an increased sense of trust in the decision support. More interestingly, even though no significant difference was detected in initial trust between expert and novice readers, the novice readers reported both a significant increase in trust and a significantly higher final trust than the expert readers. This is in line with previous studies in the field of human-AI interaction [2]}, [3]}, [4]}, [5]}, which showed how novice readers were more prone to accept the support of an AI-based system and to appreciate its output. An explanation for this widely reported observation can be traced back to the literature on the Theory of Technological Dominance (TTD) [6]}, where a previous finding by Noga and Arnold [7]} identified user expertise as one of the main determinants of dominance and reliance, of which trust is in turn a determinant. While a tenet of TTD holds that decision aids are especially beneficial to professionals thanks to a bias mitigation effect [8]}, the study by Jensen et al. [9]} displayed a diverging beneficial effect of decision support, with novices benefiting more than experts who, in turn, often discounted the aid's support. This is in line with our findings, which suggest that more experienced decision makers may be less favourably impacted by such systems, possibly due to a lower level of familiarity with, or a stronger prejudice against, the machine (see also [10]}).
In regard to RQ2, we briefly note that the three psychometric dimensions that we investigated were indeed strongly correlated with each other. This result, while interesting, is not totally unexpected since, intuitively, an appropriate and comprehensible explanation is likely to also be found useful; conversely, for an explanation to be useful it should at least be comprehensible. Notably, the observed value of the Cronbach \(\alpha \) was higher than Nunnally's reliability threshold for applied studies (i.e. .8, see [1]}): thus, the internal consistency of our test was sufficiently high to guarantee its reliability, but not so high as to suggest redundancy and hence undermine its validity [2]}. In particular, we believe that these results justify the aggregation of the three psychometric dimensions into a latent quality construct, which was then considered in the statistical analysis.
d