Columns: _id (string, 36 characters); text (string, 200 to 328k characters); label (string, 5 classes)
b11ed480-188f-42d2-aa2d-2c33cf81930e
As a second contribution, we provide two uniformly processed, cross-organism, homologue gene-matched datasets to the community. While our work is principally a proof of concept, we believe our findings and insights indicate that environment-based invariant machine learning approaches will have significant utility for medical data challenges, making this a fruitful research avenue for future work.
d
46920e2e-78dd-406b-b4e5-c1d48df4ea65
Ever since computers were introduced into our daily lives, digital games have radically transformed the way people spend their leisure time. New and accessible technology, such as mobile devices, and the possibility for developers to publish apps to public and centralized stores (like the App Store, Play Store, and Windows Phone Store) have created new opportunities for people to access videogames. The videogame industry has become the fastest growing leisure market, despite the recent global recession [1].
i
aefc042b-3280-4823-8f79-94e58b5ff70f
Like in any other business, the main focus of game developers is the number of consumers who use their product and those consumers' satisfaction. Game developers want to attract as many users as possible and keep them engaged with their product for as long as possible. Therefore, they need to understand what triggers consumers' engagement and leads them to spend time and money on their games, and they need to use this information as early as possible in the videogame's design and development process.
i
6dc777bb-54e0-4b54-b8cf-02c5f818c885
In this paper we present a study of user longevity and engagement in mobile games, and propose a model that analyzes the notion of daily playing time in order to evaluate and maximize a game's longevity and engagement. We start from some hypotheses about the time users spend playing mobile games, and we build a model that connects different time parameters. Our model suggests how to size these parameters in order to maximize the total time spent playing the game. We do not address how to improve players' engagement with the game in order to affect their session time parameters, since this depends heavily on each individual game. Our model can be used to validate a game design early during development (providing direct feedback) or to evaluate existing games.
i
073910ff-e645-4403-b39f-590a15e2ae46
In our work we have considered Multiplayer Sports Management Games (MSMGs), because they present the following features: a synchronous multiplayer online world that progresses even when the player is not actively engaged with the game, and a major game cycle (the season) that spans several days and weeks. Some examples of MSMGs are: Top Eleven (http://www.topeleven.com/), Hattrick (https://www.hattrick.org/), Online Soccer Manager (http://en.onlinesoccermanager.com/), GOAL 15: Be A Football Manager (http://www.goal-games.de/en/home), and Top Race Manager (http://topracemanager.com/).
i
c988c03e-e0b7-4fc7-b25a-a06dd498427a
The rest of the paper is organized as follows. The next section reports on related work. Then, we introduce the hypotheses we adopt in our work and the parameters our model exploits. After that, we present the proposed model to address longevity and engagement in mobile games. Finally, we conclude the paper and sketch some future work.
i
82e46162-c817-4c76-8f39-2e42a2ce3840
Existing literature has tried to identify a set of tools and features that successful games share. This issue has been approached both from a social/psychological and from a technical point of view: with respect to the former, the main issue concerns identifying the psychological elements that make a game enjoyable; from the technical point of view, the goal is to develop a game with the set of features determined by such psychological elements. According to Connolly et al. [1], “computer games build on theories of motivation, constructivism, situated learning, cognitive apprenticeship, problem-based learning, and learning by doing”. Griffiths & Davies [2] suggest that computer games incorporate features that have compelling, or even addictive, qualities.
w
a328ddf9-f107-4bad-a278-0eb5c7d5bab1
A deep analysis of motivation was performed by Deci & Ryan [1], who distinguish between intrinsic and extrinsic motivation. Intrinsically motivated behaviors are self-rewarding, while extrinsically motivated behaviors are usually triggered by the desire for some external reward, such as money, praise or recognition from others.
w
9f361e12-61e2-40e3-925c-52b0326997de
Malone and Lepper [1] argued that intrinsic motivation prevails in designing engaging games and suggested that intrinsic motivation is driven by four individual factors (challenge, fantasy, curiosity, and control) as well as three interpersonal factors (cooperation, competition, and recognition). Extrinsically motivated behaviors are extremely important in managerial multi-player games, as the user plays against other users, not machines. Users who already know each other (e.g., through connections on social networks) often play together or against one another in multi-player games, and this increases the extrinsic motivation. Gajadhar, de Kort and IJsselsteijn [2] performed an experiment involving 86 players and found that playing with others contributed to the players' involvement in the game. They also concluded that player involvement is not necessarily impaired by the presence of others. To achieve the goal of higher player involvement, our investigation focused on how games were designed to be enjoyable, what elements appealed to the players, and what encouraged them to continue playing. According to Prensky [3], technology has allowed a more immersive game-play experience thanks to the introduction of augmented reality gaming, which extends learning and enjoyment when the natural scene of the game is enhanced by the addition of digital objects.
w
3dbebbba-8f6a-4409-be75-3ec73dcedd89
S. de Freitas [1] suggests that two different methods can be used to increase player involvement and enjoyment. The first method is to ensure that the player has the feeling of being inside the game, while being able to play with other participants, either competing or collaborating. The second is to provide a clearly articulated set of goals and sub-goals, such as completing a level, obtaining intermediate awards, etc. In managerial games, short-term, mid-term and long-term goals must be connected with a set of rewards that reinforces players' engagement and self-awareness [2].
w
8ff3eaec-d92f-4ffb-ad4e-fdde389bf8b2
Thus, although several authors stress the importance of psychological elements such as rewards and social interaction, there has been little concrete research on the practical aspects of the industry, for instance consumer retention; most of the literature in this regard has predominantly addressed psychological concepts and theories. We have not found any academic papers that specifically address estimating the time users spend on games, how to increase the time and money they spend in a game, or frameworks that allow a simple validation of game designs early in the development process. Considering the current market growth, though, we may well conclude that this will change soon.
w
e3bf128e-46b2-4b45-9dc1-196239cd87b7
In this paper we have proposed a model to evaluate MSMGs and to model the different playing-time parameters that characterize them in order to maximize longevity and player engagement. We have considered Multiplayer Sports Management Games (MSMGs) because they exhibit features and dynamics that require players to play consistently across several days and weeks to experience progress, and because they feature a synchronous in-game world that progresses independently of any single player's interaction with the game. Designing MSMGs to obtain time parameters as suggested by our model will help game developers achieve the appropriate involvement from users, increasing the game's longevity and cumulative playing time (which usually correlates with economic return for the game developers).
d
a2ffa873-7b2f-4421-9fb1-72c3400e13ec
Future work will deepen the study of specific research topics. First, we will evaluate the standard deviation of \(T_M\) (i.e., the total time a player will play a specific game over their lifetime) in order to understand how significant this value is. Second, we will study a clearer definition of \(t_c\), i.e., the average minimum daily playing time necessary for the player to experience sufficient reward from the game and to develop an affection for it. Third, even though MSMGs are representative of several kinds of games, we will evaluate the applicability of our framework to other kinds of games.
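To make the role of these parameters concrete, the following minimal sketch simulates a population of players and estimates the mean and standard deviation of \(T_M\). The retention rule (a player quits once their daily playing time falls below \(t_c\), or on a random churn event) and all numeric values are illustrative assumptions, not the paper's actual model.

```python
import random

def total_playing_time(t_c, mean_daily, sd_daily, churn_prob, max_days=365):
    """Hypothetical sketch: accumulate one player's total playing time T_M.
    The player plays day by day and quits when the daily playing time falls
    below the threshold t_c, or when a random churn event occurs."""
    total = 0.0
    for _ in range(max_days):
        daily = max(0.0, random.gauss(mean_daily, sd_daily))
        if daily < t_c or random.random() < churn_prob:
            break
        total += daily
    return total

samples = [total_playing_time(t_c=10, mean_daily=20, sd_daily=5, churn_prob=0.01)
           for _ in range(10_000)]
mean_tm = sum(samples) / len(samples)
sd_tm = (sum((s - mean_tm) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"estimated T_M (minutes): mean={mean_tm:.0f}, sd={sd_tm:.0f}")
```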
d
da498222-314d-4f0e-ac0c-f3a6d8f6b764
Natural Language Generation (NLG) is concerned with the generation of natural language text from non-linguistic input [1]. One step in a classic generation pipeline [2] is Referring Expression Generation (REG; see [3] for an overview). REG has important practical value for commercial natural language generation [4], computer vision [5], and robotics [6], for example. It has also been used as a tool to understand human language use [7]. REG comprises two different problems. One is to find a set of attributes that single out a referent from a set (also called one-shot REG). The other is to generate referring expressions (REs) that refer to a referent at different points in a discourse [8]. We focus on the latter task, which we call the REG-in-context task.
i
22107dee-841b-495f-ad1a-5404ee54dce6
In earlier work, REG is often tackled in two steps [1], [2]. The first step decides the form of an RE: for example, whether a reference should be a proper name (“Marie Skłodowska-Curie”), a description (“the physicist”), or a pronoun (“she”) at a given point in the context. The second step is concerned with content selection, i.e., the different ways in which a referential form can be realised. For example, to generate a description of Marie Curie, the REG system decides whether it is sufficient to mention her profession (i.e., “the physicist”) or whether it is better to mention her nationality as well (i.e., “a Polish-French physicist”).
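As a toy illustration of this two-step view, here is a minimal rule-based sketch; the rules, the data structure, and the field names are hypothetical and only show how form selection precedes content selection:

```python
def refer(entity, recently_mentioned):
    """Two-step REG sketch: (1) choose the referential form,
    (2) choose content for that form. Purely illustrative rules."""
    # Step 1: form selection -- pronominalize if the entity is salient.
    if entity["id"] in recently_mentioned:
        return entity["pronoun"]
    # Step 2: content selection -- a short description if unambiguous
    # in context, otherwise the full proper name.
    if entity["description_is_unambiguous"]:
        return "the " + entity["profession"]
    return entity["name"]

marie = {"id": "e1", "name": "Marie Skłodowska-Curie", "pronoun": "she",
         "profession": "physicist", "description_is_unambiguous": False}
print(refer(marie, recently_mentioned=set()))    # Marie Skłodowska-Curie
print(refer(marie, recently_mentioned={"e1"}))   # she
```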
i
f1dcf106-d637-40fc-91cb-c5e54ff38067
Thanks to the rapid development of deep learning techniques, recent NLG models are able to generate REs in an End2End (E2E) manner, i.e., to tackle the selection of form and content simultaneously [1], [2], [3]. The task of E2E REG was proposed by [1], who extracted a corresponding corpus from the WebNLG corpus [5] (we refer to this extracted REG corpus as webnlg). Building on the webnlg dataset, they proposed a neural REG system based on a sequence-to-sequence model with attention. Their automatic and human evaluation results suggested that neural REG systems significantly outperform rule-based and feature-based machine learning (ML) baselines. However, it can be argued that [1] did not use very strong baselines for their comparison: OnlyName is a rule-based system that always generates a proper name given an entity, and Ferreira is a feature-based model that uses Naive Bayes with only 3 simple features. (The human evaluation in [3] showed a slightly different result: the OnlyName model performed as well as the neural REG models in terms of fluency, grammaticality, and adequacy. However, since their human evaluation involved only two subjects, these outcomes need to be approached with caution.) <TABLE>
i
ea4224ad-c5ff-46e3-a6a9-d9aa0e34d9ec
We present several rule-based and feature-based baselines to examine how neural models perform against “well-designed” non-neural alternatives. Note that a well-designed model is not necessarily complex; for example, it can be a rule-based system with one or two simple, “well-designed” rules. Since one of the advantages of neural E2E models is that they require little feature-engineering effort, we use two types of baselines, namely models that require minimal expert effort and models that use more demanding (but linguistically well-established) rules or features. Our main research question is therefore: do state-of-the-art neural REG models always perform better than rule-based and machine-learning-based models?
i
49dd2be9-79c5-432c-a7b9-0d5721e37293
To answer this question fairly, we consider the amount of resources used by each model. For example, the neural models require fewer human resources when it comes to linguistic expertise and annotation, but they require input from Deep Learning experts. Resources such as computing power and data needs should also be considered.
i
33155cab-6af1-4504-9585-67fdfc1470bb
Another issue with previous studies concerns the datasets that were used: in webnlg, approximately 99.34% of entities in the test set also appear in the training set; consequently, evaluations using webnlg do not take unseen entities into consideration. Furthermore, since many sentences in webnlg are paraphrases of one another, evaluating neural models on webnlg alone may overestimate their performance. [1] recently extended webnlg to include unseen domains that contain many unseen entities (we used version 1.5 of the webnlg dataset in https://github.com/ThiagoCF05/webnlg), and [2] have developed new models to handle them. Their test set has two subsets: one consists of documents 99.34% of whose entities are seen, while the other consists of documents 92.81% of whose entities are unseen. This arguably makes the data in webnlg unrealistic (see § for discussion). Therefore, we created what we believe to be a more realistic dataset based on the Wall Street Journal (wsj) portion of the OntoNotes corpus [3], [4] (we used OntoNotes 5.0 licensed by the Linguistic Data Consortium (LDC), https://catalog.ldc.upenn.edu/LDC2013T19).
i
ae8245ef-0f2d-4759-8512-6cfd6cf1600f
This paper is structured as follows: in § and §, we describe the datasets used and the REG models. In §, we provide a detailed description of our automatic and human evaluations. In § and §, we compare the results across different dimensions and make suggestions for future studies. The code for reproducing the results in this article can be found at: https://github.com/a-quei/neuralreg-re-evaluation.
i
57308228-54af-4ef5-a4c2-ee94b9d46a97
We evaluated all the systems described in § on both webnlg and wsj using automatic and human evaluations. We implemented the neural models based on the code of [1] and [2] (ATT+Copy and ATT+Meta: github.com/rossanacunha/NeuralREG; ProfileREG: github.com/mcao610/ProfileREG). For webnlg, we used their original parameter settings, while for wsj, we tuned the parameters on the development set and used the best parameter set.
m
9c24a09d-824f-4e97-bad8-753bfbbead45
To determine the optimal context length \(K\) for wsj, we varied \(K\) from 1 to 5 sentences before and after the target sentence, and tested ATT+Meta on the development set with each context size. The model reaches its best performance when \(K=2\).
m
4e676c3d-bf7c-4a41-b93a-44eb2f43bc94
Table REF shows the results of the human evaluation on webnlg. Few of the differences reach significance (using Wilcoxon's signed-rank test with Bonferroni correction, i.e., p-values were multiplied by the number of comparisons), suggesting that webnlg may be ill-suited for differentiating between REG models (all non-significant differences in Table REF and Table REF are associated with p-values greater than \(0.1\)). The only two significant differences appear when comparing RREG-S with ATT+Meta and ProfileREG in terms of the grammaticality of unseen data. The results suggest that RREG-S is the best model for generating REs on webnlg, performing on a par with neural models on seen data and better than neural models on unseen data. Unlike in our automatic evaluation, ATT+Meta does not outperform ATT+Copy in the human evaluation.
r
c5919d5e-f2ad-4730-bba9-54b857f02ad3
Since typos are possible in ME (e.g., a worker might type 600 instead of 60), we excluded outliers, defined as scores lower than the median minus 3 standard deviations or higher than the median plus 3 standard deviations of that item. The remaining scores were down-sampled for significance testing. The results are shown in Table REF. Unlike on webnlg, significant differences are frequent. For fluency, ML-S and ML-L perform best while ATT+Meta performs worst. For grammaticality, ML-L is still the best model, significantly outperforming RREG-L and ATT+Meta. A more detailed study is needed to investigate why RREG-L is the second worst in terms of grammaticality, which we found surprising. For clarity, no significant difference was found, perhaps because it was difficult for participants to compare long documents. In sum, on wsj, ML-L performs best, and the simpler ML-S and RREG-S also perform considerably well.
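A minimal sketch of this outlier-exclusion step (the exact estimator details, such as using the sample standard deviation, are our assumptions):

```python
import numpy as np

def exclude_outliers(item_scores):
    """Drop magnitude-estimation scores outside median +/- 3 standard
    deviations for an item, as described above. A sketch; the paper's
    exact preprocessing may differ in details."""
    s = np.asarray(item_scores, dtype=float)
    med, sd = np.median(s), s.std(ddof=1)
    return s[(s >= med - 3 * sd) & (s <= med + 3 * sd)]

scores = [55, 56, 57, 58, 58, 59, 60, 60, 61, 62, 600]  # 600: a typo for 60
print(exclude_outliers(scores))  # the 600 is removed, the rest are kept
```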
r
4b3c9447-f909-4cd8-b926-d797bf17ec85
In this work, we have re-evaluated state-of-the-art Neural REG systems by considering four well-designed rule- and ML-based baselines. In addition to the existing webnlg corpus, we built a new dataset for the task of REG-in-context on the basis of the wsj corpus, arguing that this dataset may be more appropriate for the task. In the re-evaluation, we examined both our baselines and SOTA neural REG systems on both datasets, using automatic and human evaluations. The results suggest that the simplest rule-based baseline RREG-S achieves equally good or better performance compared to SOTA neural models. Our results on the wsj suggest that, on that corpus, the linguistically-informed ML-based model (ML-L) is best. We hope these results will encourage further research into the comparative strengths and weaknesses of neural, non-neural and hybrid methods in NLP.
d
e62fc1b9-5a53-48f7-bff5-e9839d84187d
In future work, we have four items on our TODO list: (1) investigate bottleneck features for neural models based on the feature set of ML-L; (2) explore other neural architectures (e.g., testing models that leverage pre-trained language models) and construct larger realistic REG corpora; (3) explore human evaluation methods for longer documents that are better suited to evaluating the task of generating referring expressions in context; (4) extend our research to other languages, especially in other language families, including languages that are morphologically very rich or very poor and languages that frequently use zero pronouns (e.g., Chinese [1]).
d
dcbf67b3-5a4b-487c-9d92-f29f27f4f692
A plethora of applications of natural language processing (NLP) perform text-to-text transformations [1], [2], [3]. Given an input, these systems are required to produce an output text that is coherent, readable and informative. Due to high annotation costs and time, researchers tend to rely on automatic evaluation to compare the outputs of such systems. Reference-based automatic evaluation relies on comparing a candidate text produced by the NLG system with one or multiple reference texts (the ‘gold standard’) created by a human annotator. Generic automatic evaluation of NLG is a huge challenge as it requires building a metric that evaluates the similarity between a candidate and one or several gold-standard reference texts. However, the definition of success criteria is task-specific: as an example, evaluation of text summarization focuses on content, coherence, grammaticality, conciseness, and readability [4], whereas machine translation focuses on fidelity, fluency and adequacy of the translation [5], and data2text generation [6] considers criteria such as data coverage, correctness and text structure.
i
5b370c6b-85ee-4c5e-a14f-44af92c564c8
Automatic text evaluation metrics fall into two categories: metrics that are trained to maximise their correlation with human annotation (e.g., RUSE [1], BEER [2], BLEND [3]) and untrained metrics (e.g., BLEU [4], ROUGE [5], BERTSCORE [6], DepthScore [7], BaryScore [8], MOVERSCORE [9], Word Mover Distance [10]). In this work, we focus on untrained metrics, as trained metrics may not generalize well to new data (existing labelled corpora are of small size). Two categories of untrained metrics can be distinguished: word- or character-based metrics that compute a score based on a string representation, and embedding-based metrics that rely on a continuous representation. String-based metrics (e.g., BLEU) often fail to robustly match paraphrases [11] as they mainly focus on the surface form, as opposed to embedding-based metrics relying on continuous representations.
i
b1b5e573-5b12-4ecc-9be7-435edb4e6b03
In this paper, we introduce InfoLM, a family of new untrained metrics to evaluate text summarization and data2text generation. At the highest level, InfoLM's key components include: (1) a pre-trained masked language model (PMLM) that is used to compute two discrete probability distributions over the vocabulary, representing the probability of observing each token of the vocabulary given the candidate and the reference sentence, respectively; (2) a contrast function \(\mathcal {I}\) that is used to measure the dissimilarity between the aforementioned probability distributions. InfoLM differs from existing BERT-based metrics (e.g., BERTSCORE, MOVERSCORE) as it directly relies on the PMLM, which outputs discrete probability distributions. Thus InfoLM neither requires arbitrarily selecting one or several specific layers (e.g., BERTSCORE relies on the 9th layer for bert-base-uncased), nor involves selecting arbitrary aggregation techniques (e.g., Power Mean [1] for MOVERSCORE). As InfoLM relies on statistics over tokens, it can also be seen as a string-based metric. However, it does not suffer from common pitfalls of string-based metrics (e.g., synonyms, the need for an exact string match), as the PMLM also allows one to assign a high score to paraphrases and to capture distant dependencies.
i
49ce03ee-cd5a-4572-aa01-902b36ae17c9
(1) A set of novel metrics to automatically evaluate summarization and data2text generation. In this work, we introduce InfoLM, which overcomes the common pitfalls of string-matching metrics and requires neither selecting a layer nor relying on an arbitrary aggregation function. InfoLM combines a pre-trained model and a contrast function, denoted by \(\mathcal {I}\), between two discrete probability distributions. We explore different choices of contrast functions, such as \(f\)-divergences, \(\mathcal {L}_p\) distances and Fisher-Rao distances.
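To make the contrast-function component concrete, here is a small sketch of two candidate choices of \(\mathcal {I}\) applied to toy token distributions; it illustrates the general idea only and is not the authors' implementation:

```python
import numpy as np

def fisher_rao(p, q):
    """Fisher-Rao distance between two discrete distributions:
    2 * arccos(sum_i sqrt(p_i * q_i))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    bc = np.clip(np.sqrt(p * q).sum(), 0.0, 1.0)  # Bhattacharyya coefficient
    return 2.0 * np.arccos(bc)

def kl_divergence(p, q, eps=1e-12):
    """KL divergence, a member of the f-divergence family
    (eps avoids log(0) in this sketch)."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float((p * np.log(p / q)).sum())

# Toy distributions over a 4-token vocabulary, standing in for the PMLM's
# outputs given the candidate and the reference sentence.
p_candidate = [0.70, 0.10, 0.10, 0.10]
p_reference = [0.60, 0.20, 0.10, 0.10]
print(fisher_rao(p_candidate, p_reference),
      kl_divergence(p_candidate, p_reference))
```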
i
30a8455b-3e40-4b0d-9cf1-421f612ce879
(2) Tasks. First, we demonstrate on both summarization and data2text generation that InfoLM is better suited than competing metrics. A comparison is conducted using multiple measures of correlation with human judgment, both at the text and at the system level. Second, we dissect InfoLM to better understand the relative importance of each component (e.g., calibration, sensitivity to the choice of information measure).
i
373b1110-d38e-44dc-939c-e12951aa6c31
The canonical goal of a multi-armed bandit problem is to find \(x \in X\) that maximizes reward \(Y\) . This means finding \(x^* \in X\) such that \( x^* = \operatornamewithlimits{arg\,max}\limits _x E[Y \mid do(x)]\)
i
dc58e2b9-0b42-4944-ab51-7a39fce4c2c4
Algorithms like UCB or Thompson Sampling find \(x^*\) by keeping an empirical distribution over reward for each arm. They explore arms with fewer samples and exploit arms with higher averages. If the empirical average over arms is about equal after many trials, it is tempting to conclude that the arms are about equal. In this paper, we give graphical conditions for when this type of experimentation may not correctly identify the optimal policy.
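For reference, here is a minimal sketch of the kind of experimentation such algorithms perform (UCB1 with empirical reward averages); it estimates \(E[Y \mid do(x)]\) per arm and, as argued above, cannot by itself distinguish arms whose optimality depends on unobserved confounders:

```python
import math
import random

def ucb1(pull, n_arms, horizon=10_000):
    """UCB1 sketch: `pull(arm)` returns a stochastic reward in [0, 1].
    Keeps an empirical mean per arm and adds an exploration bonus."""
    counts, sums = [0] * n_arms, [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:                       # initialize: play each arm once
            arm = t - 1
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += pull(arm)
    return [s / c for s, c in zip(sums, counts)]

# Two Bernoulli arms with equal *experimental* means: after many trials
# their empirical averages look the same, as discussed above.
means = [0.5, 0.5]
print(ucb1(lambda a: float(random.random() < means[a]), n_arms=2))
```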
i
afa248dc-4373-4d69-b1a1-81eb488e5411
Past work has optimized the effect of the treatment on the treated to differentiate empirically equal arms.\(^1\) In MDPs, it was shown that algorithms that account for actions confounded with reward dominate algorithms that rely entirely on experimentation.\(^2\) We prove conditions under which such algorithms dominate experimental algorithms in any environment. We also formally prove why the algorithm in [1] works, and extend the results to the sequential-action case.
i
325807f8-5645-46be-81cf-8c4f87d20985
The counterfactual randomization procedure described in this paper is relevant to any experimental pursuit where the environment is not fully known. Unobserved confounding can invalidate the conclusions of a random trial. We criticize experiments by proving their sub-optimality, and suggest a fix.
i
d2527b61-c930-454b-b4a5-2c99ef16663c
A fundamental goal of causal inference is to determine a layer 2 query, of the form \(P(Y \mid do(x))\) , from layer 1 data, of the form \(P(V)\) .\(^3\) This is useful when interventions, like randomized control trials, cannot be performed. Even when they can, past work had to formalize the conditions under which the results of the trial can be extrapolated to new environments.\(^4\)
i
442d38d6-7d28-4585-89df-14d481ff935a
The primary contribution of this paper is to lay the theoretical foundations for estimating a layer 3 query from layer 2 data. The idea is that a decision confounded with reward allows inference about unobserved variables. This inference is only available in layer 1, because in layer 2 the confounding arrow is cut.
i
1f44c4ea-6e6a-46c9-b55c-62e38cc615cc
We formalize the conditions under which such an inference is useful by presenting a generalization of a soft-intervention called the counterfactual intervention \(\rho \) . It accounts for the "natural predilection" of the decision-variable as it is determined by unobservable variables. We show that \(\rho \) induces a sub-model on \(G\) whose distribution is equivalent to the distribution induced by a soft-intervention \(\pi \) in the conditional-twin of \(G\) . We then provide two examples of an estimation procedure in a realistic scenario, and use this procedure to formally explain the results in [1]. We also give graphical conditions for the advantage of the optimal counterfactual-intervention over the optimal soft-intervention based on the connection between value of information and d-separation.
i
a6690a05-3c3a-49e1-930b-9fb441b741fb
In the single-action case with action \(A\) , the counterfactual intervention on \(G\) is equivalent to a soft-intervention on \(G^{\prime }\) . Consider \(G_{\rho _A}\) , the sub-model induced by \(\rho _I = \lbrace \rho _A\rbrace \) on \(G\) :
i
1e4422f1-50a7-46ee-bed2-5db853b3eb92
The value of \(A\) that is input to \(B\) and \(Y\) is \(\rho _A(f_A(U_A))\) . The value of \(B\) that is input to \(Y\) is \(f_B(\rho _A(f_A(U_A)), U_B)\) . No variables after \(A\) in the topological order 'see' \(f_A\) .
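A toy numeric sketch of this propagation (the mechanisms \(f_A, f_B, f_Y\) and the map \(\rho_A\) below are arbitrary illustrative choices, not taken from the paper):

```python
import random

# Toy structural causal model: rho_A acts on the *natural* value f_A(U_A),
# and everything downstream of A sees only rho_A(f_A(U_A)), never f_A itself.
def f_A(u_a): return u_a
def rho_A(a): return 1 - a    # counterfactual intervention: flip A's natural value
def f_B(a, u_b): return a ^ u_b
def f_Y(a, b): return a + 2 * b

u_a, u_b = random.randint(0, 1), random.randint(0, 1)
a = rho_A(f_A(u_a))   # the value of A that is input to B and Y
b = f_B(a, u_b)       # the value of B that is input to Y
y = f_Y(a, b)
print(u_a, a, b, y)
```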
i
f7cca6b0-671d-43cf-8146-b8c7efc258e1
Let's call \(A,B,Y\) row0. The distribution over variables in row0 is the same as the distribution over variables in \(G\) with no intervention. Let's call \(A^{\prime }, B^{\prime }, Y^{\prime }\) row1. The distribution over variables in row1 is the same as the distribution over variables in \(G_{\rho _A}\) when \(\pi _A = \rho _A\) . This is simply because their sub-models are equivalent. The sense in which \(G_{\rho _A}\) and \(G^{\prime }_{\pi _A}\) are equivalent is that any distribution induced in \(G\) is also induced in one of the rows of \(G^{\prime }\) .
i
a9b9f8cd-4930-4370-97ee-32ba1d45792f
In \(G^{\prime }\) , it is impossible to design a soft intervention that both sets \(A \leftarrow \rho _A\) and \(B \leftarrow \rho _B\) . If it were possible, it would have to include \(\pi _A\) to make \(A^{\prime } \leftarrow \pi _A(f_A(U_A))\) . Now, \(\pi _B\) must take into account the value of \(f_B\) . Any soft-intervention on \(B^{\prime }\) will cut the arrow from \(A^{\prime }\) to \(B^{\prime }\) , making it impossible to get \(f_B(A^{\prime })\) . The only way to simulate \(G_{\rho _I}\) in \(G^{\prime }_{\pi _A}\) is to perform a \(\textit {counterfactual intervention}\) \(\rho _B\) on \(B^{\prime }\) .
i
7eaf277e-8500-4848-a6d4-acf5b098dafe
We already know that we can simulate a single counterfactual intervention with a soft-intervention on the twin graph. Since \(\rho _B\) takes \(A^{\prime }\) as input and must account for \(\pi _A\), which in turn accounts for \(f_A(U_A)\), we create the twin with respect to the sub-graph \(G^{\prime }_{B^{\prime }Y^{\prime }}\). Conceptually, the direct effect of \(A^{\prime }\) on \(Y^{\prime }\) has been decided; only the value of \(B^{\prime }\) is being intervened on. Thus, we can simulate the distribution of \(Y^{\prime }\) in \(G^{\prime }_{\pi _A, \rho _B}\) with a soft-intervention on \(G^{\prime \prime }_{B^{\prime }Y^{\prime }}\), the twin graph of the sub-graph of \(G^{\prime }\) containing only the nodes \(B^{\prime }, Y^{\prime }\). <FIGURE>
i
00871005-b100-45ce-8ce0-f68fd2c4ef39
Now \(\pi _B\) gets to see the natural value of \(B\) conditional on \(\pi _A\) , which was impossible before. Indeed, the distribution over \(Y^{\prime \prime }\) in \(G^{\prime \prime }\) is the same as the distribution over \(Y_{\rho _I}\) for \(\rho _I = \pi _I\) .
i
a6a69ebe-aab1-46db-b650-c0c7b8bb7ad1
It should be further noted that neither \(Y\) nor \(Y^{\prime }\) simulates any distribution that cannot be simulated with \(Y^{\prime \prime }\). So we could simulate \(Y_{\rho _{AB}}\) with the same \(G^{\prime \prime }\) but with the nodes \(B, Y, Y^{\prime }\) removed.
i
881d3a31-ee7f-4eea-be7f-5e7ba05cc2de
What remains is to formalize the construction of the conditional twin, so that we can prove that the analog of \(Y^{\prime \prime }\) for the general case can simulate \(Y_{\rho _I}\) for any \(I\) on any SCM.
i
33f41f7e-a4b1-4a5c-a229-4a59b9e2c42c
We define \(\rho \) and give conditions under which it can be superior to \(\sigma \). The point is that an agent that employs an experimental procedure cannot be sure that it incurs sub-linear regret, even after many samples, if these conditions hold. Such environments require a different policy space that accounts for the natural state of the system, which we call the counterfactual intervention \(\rho \). In the sequential-action case, \(\rho \) can be understood as a soft-intervention on a graph called the conditional-twin. This observation provides the theoretical basis for estimating counterfactuals using layer 2 data.
d
bd260660-18d3-4057-bac1-d2fab4c5c53e
Future work should try to connect \(\rho \) with counterfactual distributions. If that can be done, then we will have a pipeline for computing counterfactuals with layer 2: \(\pi \) on \(G^c \rightarrow \rho \) on \(G \rightarrow \) ? counterfactual distribution. A good starting place would be to prove that any single-intervention counterfactual distribution can be estimated in the conditional-twin. Extensions of the \(\rho \) policy-space to account for ancestors would also be a good idea.
d
8ea4fb73-4a08-49d2-beae-0bc472e10f3a
Object detection is a key problem in computer vision. It consists in finding all occurrences of objects belonging to a predefined set of classes in an image and classifying them. Its applications range from medical diagnosis to aerial intelligence through autonomous vehicles. Object detection methods automate repetitive and time-consuming tasks performed until now by human operators. In the context of Remote Sensing Images (RSI), detection is used for a wide variety of tasks, such as environmental surveillance, urban planning, crop and flock monitoring, or traffic analysis.
i
e89190e3-b6b1-42c3-879e-98ebe9d59845
Deep learning, and especially convolutional neural networks (CNNs), outperforms previous methods on most computer vision tasks, and object detection is no exception. Plenty of methods have been introduced to address this challenge; among them, Faster R-CNN [1] and YOLO [2] may be the most well-known and studied. Even though this problem is far from solved, detection algorithms perform well when provided with sufficient annotated data. However, such data is often not available in practice, and the creation of large datasets for detection requires both time and expertise, preventing the deployment of such methods for many use cases. Another limitation to the widespread deployment of detection techniques is the lack of adaptability: once fully trained, a model is hard to adapt to new objects. This is critical for applications that need to detect different objects from one usage to another. Aerial intelligence is an example of such an application: each mission may have its specific objects of interest, and therefore a detection model must be adaptable (literally) on the fly. The overall objective of this work is to be deployed on vertical aerial images. Yet large-scale datasets of such images, annotated for object detection, are rare; RSI are a convenient alternative and provide an accurate estimation of performance in deployment.
i
f97b3c38-cd37-44d5-8f19-93f5087720fe
Few-Shot Learning (FSL) techniques have been introduced to address these issues and deal with limited data. They have been extensively studied for classification [1], [2]. The principle is to learn general knowledge from a large dataset so that the model can generalize efficiently (i.e., quickly and from limited data) to new classes. Different approaches exist for this task. Representation learning tries to learn abstract embeddings of images so that the representation space is semantically organized in a way that makes classification relatively easy (e.g., with a linear classifier). Meta-learning-based methods learn a model (teacher) that helps another model (student) perform well from a limited amount of data; this is often done by training both networks on multiple low-data tasks (e.g., by changing classes between epochs). Transfer learning is also a valid approach for FSL: it consists in training a model on a large dataset and then adapting it to perform well on another, smaller one. This requires a supplementary training step, is often subject to catastrophic forgetting [3] and overfitting, and needs advanced tricks to prevent these undesirable effects.
i
6fc92bed-b228-40e1-8f27-462cb36da533
This performs relatively well for classification, but the more challenging detection task still lacks few-shot alternatives. Recent work, though, has focused on Few-Shot Object Detection (FSOD), applying ideas from the FSL literature to object detection. The first approaches were mostly oriented toward transfer learning and fine-tuning [1], [2], disregarding the other FSL practices. Other work took inspiration from meta-learning [3] and representation learning [4]. This is mostly applied to natural images; applications to remote sensing images are scarce [5], [6]. Even if object detection is hard, applying it to remote sensing images is even harder: objects' sizes can vary greatly, and they can be arbitrarily oriented and densely packed. These supplementary difficulties might explain why this specific topic remains mainly untouched.
i
6bce37fa-d38f-436e-b220-37a01cedae20
This work introduces a new few-shot learning method for object detection and evaluates its performance on aerial images. It detects objects from only a few examples of a class, without any fine-tuning. The main idea is inspired by prototypical networks [1], which learn an embedding function that maps images into a representation space. Prototypes are computed from the few examples available for each class, and classification scores are attributed to each input image according to the distances between its embedding and the prototypes. The classic Faster R-CNN framework is modified to perform few-shot detection based on this idea: the classification branches in both the Region Proposal Network (RPN) and Fast R-CNN are replaced by prototypical networks to allow fast online adaptation. In addition, a few improvements are introduced over the prototypical baseline in order to fix its weaknesses.
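The prototype-and-distance classification at the core of this idea can be sketched in a few lines (the shapes, the Euclidean metric, and the softmax scoring are our assumptions for illustration, not the paper's exact design):

```python
import torch

def prototype_scores(support, query):
    """Prototypical-network sketch: class prototypes are the mean embeddings
    of the few support examples; queries are scored by (negative) distance
    to each prototype.

    support: dict class_id -> tensor of shape (n_shots, dim)
    query:   tensor of shape (n_queries, dim)
    """
    protos = torch.stack([emb.mean(dim=0) for emb in support.values()])  # (C, dim)
    dists = torch.cdist(query, protos)   # (n_queries, C) Euclidean distances
    return (-dists).softmax(dim=1)       # closer prototype -> higher score

support = {0: torch.randn(5, 128), 1: torch.randn(5, 128) + 2.0}
query = torch.randn(3, 128)
print(prototype_scores(support, query))
```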
i
ef56e655-0cc5-4039-ab2a-eb1594c7c334
This paper begins with a brief overview of the literature on object detection, few-shot learning, and the intersection of these two topics. Then the prototypical Faster R-CNN architecture is presented in detail, alongside several improvements over our baseline. Next, the potential of the proposed modifications is demonstrated through a series of experiments. Finally, the proposed approach is discussed with a critical eye, asking whether representation learning is suitable for object detection.
i
3d3ac780-56c5-4b1f-a3b9-83ec63bd0c98
In the presence of heterogeneous data, where randomly rotated objects fall into multiple underlying categories, it is challenging to simultaneously synchronize the objects and classify them into clusters. A motivating example is 2D class averaging in cryo-electron microscopy single particle reconstruction [1], [2], [3], which aims to cluster images taken from similar viewing directions, and to rotationally align and average those images in order to denoise the experimental data. Joint clustering and synchronization is an emerging area that connects community detection [4], [5], [6], [7], [8], [9] and synchronization [10], [11], [12]. Recently, several works discussed simultaneous classification and mapping (alignment) [13], [14] and proposed different models and algorithms. In [13], the authors addressed simultaneous permutation group synchronization and clustering via a spectral method with rounding, and used the consistency of the mapping for clustering. In [14], the authors noticed that both rotational alignment and classification are problems over compact groups, and proposed a harmonic analysis and semidefinite programming based approach for solving alignment and classification simultaneously.
i
a8dd0c1f-1c43-4bdb-91ed-0eb3c375f0b8
In this paper, we consider joint community detection and rotational synchronization under a specific probabilistic model, which extends the celebrated stochastic block model (SBM) [1], [2], [3], [4], [5], [6], [7], [8], [9], [10] to incorporate both the community structure and pairwise rotations. In particular, we inherit the \(G(n, p, q)\)-SBM setting [11], [12], [13], [14] for the graph connections, as shown in Figure REF. Given a network of size \(n\) with \(K\) underlying disjoint communities, a random graph is generated such that each pair of vertices is independently connected with probability \(p\) (resp. \(q\)) if they belong to the same cluster (resp. different clusters). In addition, each node \(i\) is associated with an unknown rotation \(\mathbf {R}_i \in \mathrm {SO}(d)\), and a pairwise alignment \(\mathbf {R}_{ij}\) is observed on each connected edge. The noisy observation \(\mathbf {R}_{ij}\) is generated according to the probabilistic model considered in [15], [16], [17]. When \(i\) and \(j\) belong to the same cluster, we observe the clean measurement \(\mathbf {R}_{ij} = \mathbf {R}_i\mathbf {R}_j^\top \). When they are in different clusters, \(\mathbf {R}_{ij}\) is assumed to be uniformly drawn from \(\mathrm {SO}(d)\). The model considered here is different from the probabilistic model in [18]. <FIGURE>
i
addaa201-660e-4527-b3d6-b61127423745
For such a probabilistic model, a naive two-stage approach to recovery is to (1) directly apply existing methods for community detection under the SBM, and (2) perform rotational synchronization for each identified community separately. However, such an approach does not take advantage of the cycle consistency of the alignments within each cluster and the inconsistency of the alignments across clusters, so the clustering result might be sub-optimal. Instead, we can exploit the consistency of rotational alignments to further improve the identification of communities. For example, for three nodes \(i, j\) and \(k\), their pairwise alignments satisfy \(\mathbf {R}_{ij}\mathbf {R}_{jk}\mathbf {R}_{ki} = \mathbf {I}_d\) if they belong to the same cluster. In fact, the notion of cycle consistency has already been used in synchronization problems [1], [2], [3], [4], [5], [6].
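The cycle-consistency identity is easy to verify numerically; a small sketch for \(d = 2\) (the random-rotation sampler is standard, not specific to the paper):

```python
import numpy as np

def random_rotation_2d(rng):
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)

# Same cluster: R_ij = R_i R_j^T, so the cycle R_ij R_jk R_ki telescopes to I.
R_i, R_j, R_k = (random_rotation_2d(rng) for _ in range(3))
R_ij, R_jk, R_ki = R_i @ R_j.T, R_j @ R_k.T, R_k @ R_i.T
print(np.allclose(R_ij @ R_jk @ R_ki, np.eye(2)))   # True

# Across clusters: observations are uniform over SO(2), so the cycle
# product is generically far from the identity.
R_ij, R_jk, R_ki = (random_rotation_2d(rng) for _ in range(3))
print(np.allclose(R_ij @ R_jk @ R_ki, np.eye(2)))   # almost surely False
```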
i
2c1eff7f-52ab-45a8-9048-b3a8dd4c2e26
This section is devoted to empirically investigating the performance of the SDPs in Section . In Sections REF –REF , we focus on experiments with two clusters and 2D rotations (\(d = 2\) ). In Section REF , we include results for more general settings, e.g., 3D rotations and the number of clusters is larger than two. We generate the observation matrix \(\mathbf {A}\) based on the model introduced in Section . Then we evaluate the SDP solution by measuring the following error metric: \(\text{Error} = \log (\Vert \mathbf {M}_{\text{SDP}} - \mathbf {M}^*\Vert _\textrm {F}).\)
r
548c749d-bfc0-4505-b1db-b3328d054fd5
When cluster sizes are unknown and the SDP in (REF ) is adopted, the ground truth \(\mathbf {M}^*\) in (REF ) should be replaced by the normalized \(\bar{\mathbf {M}}^*\) defined in (REF ). For the case of two clusters, we evaluate the phase transition of the condition \(\widetilde{\mathbf {\Lambda }} \succ 0\) according to our guess of dual variables (e.g. in Lemmas REF and REF ) via the following metric: \(\text{Failure rate} = 1 - \text{the rate that } \widetilde{\mathbf {\Lambda }} \succ 0 \text{ is satisfied}.\)
r
1200a14c-19bb-46f5-abaa-624808583f58
\(\text{Failure rate} = 0\) means that our construction of the dual variables satisfies the optimality and uniqueness conditions for \(\mathbf {M}^*\) or \(\bar{\mathbf {M}}^*\) (e.g., in Lemmas REF, REF and REF), and exact recovery is guaranteed. For each experiment, we average the result over 10 different realizations of the observation matrix \(\mathbf {A}\). <FIGURE>
r
d5cbc711-4077-4d62-91da-13692923ff3c
NLP technology has progressed tremendously over recent decades, with significant advances in algorithms and modeling. Yet, by comparison, our understanding lags behind significantly for the datasets (including all dataset types in the model life cycle: training, validation, evaluation) that contribute to model performance. This is mostly due to the lack of frameworks, methods, and tools to draw insights from datasets, especially at scale.
i
5615ede2-80ec-4cc4-ad6e-6c74499dfbac
Most NLP models, to date, are evaluated using a relatively small number of readily available evaluation benchmarks that are often created automatically or via crowd-sourcing [1], [2], [3], [4]. It is well known that the most popular (evaluation) datasets are rife with biases, dataset artefacts and spurious correlations, and are prone to being solved with shortcuts [5], [6]. Presenting models with adversarial examples for which those biases or correlations do not hold often results in stark performance drops [7], [8], [9], [10], [11], [12], [13]. At best, using datasets with such known issues might result in overestimation of a model's capability on the task in question, which may not reflect how well it can execute this task in more realistic scenarios. More worrying, however, is that training or finetuning on datasets that contain biases and artefacts may result in models implementing undesired, biased behavior [14], [15].
i
e60f273d-dfdc-4345-af20-c6d84a2fbea4
Additionally, datasets are usually treated as homogeneous collections of text, performance on which is captured in a single number, even though there is often a substantial difference between the difficulty/complexity of different examples in a dataset [1]. Research papers rarely report thorough analyses of performance broken down by characteristics of the dataset examples, ignoring the underlying patterns that performance numbers may reflect. The problem is exacerbated by the pervasiveness of benchmarks coupled with a competitive leaderboard culture, where what counts most is system rank.
i
370f92b5-1b21-4c7a-8d89-7eecae619cd9
In part, this may be because deeper analysis of results, especially when a number of different datasets is involved, is complex and time-consuming, and there are no standard frameworks or protocols that practitioners can resort to. The problem is even more pervasive when we curate datasets for development and evaluation: how we curate, create, and select data plays a critical role in understanding our models. Many NLP models (even beyond text) require up/down sampling of specific types of data, and these processes should rely on a principled characterization of the data for any given model.
i
92c5846a-5b9b-4f14-a3b9-62bc74b56c1d
Towards this end, we believe that a standard toolkit providing an easy-to-use set of tools and metrics, with which researchers can analyze and systematically characterize datasets involved in the model life cycle while gaining insights into the relationship between model performance and data properties, could make such analyses far more commonplace.
i
ee3a6d01-e530-4e0b-ba3e-cd7f5198040d
In this paper, we introduce the Text Characterization Toolkit (TCT, https://github.com/facebookresearch/text_characterization_toolkit), which aims to enable researchers to gain a detailed understanding of the datasets and models they create, with minimal effort. TCT is inspired by the Coh-Metrix toolkit [1], a collection of over 100 diverse text characteristics intended for text analysis in various applications. TCT offers these capabilities at scale by design: while TCT can process a dataset of 20000 paragraphs in less than a minute using a single command on a MacBook Pro laptop, the very same library can also be used as part of a PySpark pipeline to compute text characteristics for a full snapshot of Common Crawl (https://commoncrawl.org, 3.1B web pages) in a matter of hours. In this paper we present:
i
cfe4d002-8888-4df5-92be-8a61893c504b
a repository of text metrics that can help reveal (hidden) patterns in datasets, coupled with model performance on these datasets; a set of off-the-shelf analysis tools that researchers can use in a simple notebook to study properties of a dataset and the influence of those properties on model behaviour; a framework that enables the community to share, reuse and standardize metrics and analysis methods/tools; and use cases that demonstrate the efficacy of TCT in practice, covering language model prompting, translation and bias detection.
i
5030a098-2fce-4a2f-96c9-f11c75768cd6
Multiple existing tools offer functionality similar to TCT's. DataLab [1] is a tool for detailed data analysis that, among other things, allows users to inspect datasets through the lens of a few text characteristics such as text length, lexical diversity and gender-related features. The Know Your Data tool (https://knowyourdata.withgoogle.com/) allows for inspection of image data; it surfaces spurious correlations, biases and imbalances in datasets. However, neither tool connects model behavior to properties of datasets. [2] predicts the overall hardness of classification datasets based on label statistics and a few text characteristics such as readability and lexical diversity. ExplainaBoard [3] focuses on model performance analysis and provides a model performance breakdown by simple attributes of data points such as text length, providing the functionality most similar to our work.
w
6af91753-77cc-497d-bad2-028c27238e48
Our toolkit distinguishes itself by including a much wider range of text characteristics and multi-variable analysis tools that can identify larger variations in model accuracy. By packaging our toolkit as a simple Python library used in notebooks, in contrast to the previously described feature-rich systems, we also intend to minimize the effort needed both to use it and to contribute to it (crowd-sourcing more functionality).
w
18af6e8e-6387-45c0-83c3-e266edb98502
The Coh-Metrix tool [1] collected the most diverse set of text characteristics to our knowledge, designed for various use cases in linguistics and pedagogy. The tool, developed in 2004, is slow (it is designed to process a single document at a time), relatively difficult to access, and relies on outdated underlying word databases. Our toolkit aims to make a subset of these metrics easily accessible to the NLP community.
w
f8180d4c-60bd-4e3d-8ea2-bb19b7d014d4
In this demonstration we offer an alternative approach: we take existing data from the evaluation of the 6.7B OPT baseline [1] and then attempt to use simple data characteristics to identify interpretable subsets of the dataset on which OPT's performance substantially differs from its overall high accuracy. We use the HellaSwag task [2], a common-sense inference task that is trivial for humans but challenging for LLMs. (We chose HellaSwag for this demo as it had sufficiently many examples in the test set and showed the most interesting correlations out of all tasks the model was previously evaluated on.)
m
f26a9daf-8e1f-487d-b7ef-f592e8c2a05e
To evaluate OPT models on this task, prompts corresponding to the different choices were scored with the LLMs, and the answer with the lowest perplexity was considered to be the model's choice. For each data point in the test set, we consider two text fragments: the prompt corresponding to the correct answer and the concatenation of all the prompts corresponding to incorrect answers (see Table REF for an example). With a single command using command_line_tool.py, we compute characteristics for the extracted texts and load the results into a notebook. We also load the result of the model evaluation, which is a single binary variable per data point describing whether the model predicted the right answer.
m
64bafa6a-a459-47c4-a3cb-6e58037f97e4
First, we inspect correlations between individual metrics and model performance. This analysis tool orders data points with respect to a particular TCT metric, groups them into buckets of 100 data points, and computes model accuracy for each bucket. We find several data characteristics that show a high correlation with model performance, for example the number of sentences per prompt or the average concreteness [1] of words. A visualisation of these results is shown in Figure REF. <FIGURE><FIGURE>
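The bucketing analysis described above is straightforward to reproduce; here is a minimal sketch on synthetic data (the toy correlation between concreteness and correctness is invented for illustration, and this is not TCT's actual implementation):

```python
import numpy as np

def bucketed_accuracy(metric_values, correct, bucket_size=100):
    """Sort data points by a text characteristic, group them into buckets
    of `bucket_size`, and compute model accuracy per bucket."""
    order = np.argsort(metric_values)
    correct = np.asarray(correct, dtype=float)[order]
    n_buckets = len(correct) // bucket_size
    return [correct[i * bucket_size:(i + 1) * bucket_size].mean()
            for i in range(n_buckets)]

rng = np.random.default_rng(0)
concreteness = rng.uniform(1, 5, size=1000)
# Toy model: higher word concreteness makes a correct answer more likely.
is_correct = rng.uniform(0, 1, size=1000) < (0.3 + 0.1 * concreteness)
print(bucketed_accuracy(concreteness, is_correct))
```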
r
c11c8eed-f7cd-4586-ad14-6ae9ff284bb9
Secondly, we employ the TCT class named PredictFromCharacteristicsAnalysis to fit a logistic regression model that uses all characteristics to predict whether the model will yield a correct answer for a particular data point. The tool computes the regression scores on a held-out part of the dataset and visualizes model accuracy with respect to this score per data bucket, as shown in Figure REF. We find more variance between the best and worst performing buckets than in the single-variable analysis: on the bucket with the highest predicted score the OPT baselines yield 0.9 accuracy, but in the lowest-scoring bucket the accuracy is below 0.4, which approaches the random baseline of 0.25. To interpret the fitted regression model, we inspect its coefficients (since the inputs to the regression were scaled to unit variance, direct comparison of coefficients is meaningful), illustrated in Figure REF. Interestingly, coefficients for a given characteristic often have opposite signs for the correct and incorrect answers, indicating that they are in fact, on their own, predictive of the correctness of an answer. For instance, the DESWLlt metric (mean number of letters per word) has coefficients of -0.44 and 0.62 for the correct_prompt and incorrect_prompts features, respectively. <FIGURE>
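In spirit, this predict-from-characteristics analysis amounts to the following sketch (the features and coefficients are synthetic, and PredictFromCharacteristicsAnalysis's actual interface is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Fit a logistic regression from text characteristics to per-example
# correctness, then score a held-out split and inspect the extremes.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))   # stand-ins for characteristics like DESWLlt
y = (X @ np.array([-0.44, 0.3, -0.2]) + rng.normal(size=2000)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)   # unit variance -> comparable coefficients
clf = LogisticRegression().fit(scaler.transform(X_tr), y_tr)

scores = clf.predict_proba(scaler.transform(X_te))[:, 1]
print("coefficients:", clf.coef_.round(2))
print("accuracy on the top-scoring quartile:",
      y_te[np.argsort(scores)[-len(y_te) // 4:]].mean())
```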
r
285dc78c-dd32-4e6c-88b3-de259f61017c
We argue that such analyses are useful from two perspectives: i) Analyses that uncover patterns in what characteristics make examples difficult help us improve our understanding of how well a model has in fact learned the task we intended it to. This, in turn, provides a better estimate of the wider applicability of a model. ii) If one knows which text characteristics lead to poor performance from LLMs, one could improve the dataset's coverage for characteristics associated with low model performance – e.g. one could curate data points including tokens with low concreteness scores.
r
338f3109-00e2-42bb-903f-12de9e125fe1
We use the coreference resolution model proposed by [1] and the WinoBias dataset [2]. The model is evaluated using exact match to compute accuracy. To capture gender statistics, we configure a new Word Property metric, “genderedness”, based on Labor Force Statistics (https://github.com/uclanlp/corefBias), and compute it on two text fragments (the two spans of the ground-truth coreference). A higher genderedness score indicates that the occupation is stereotypically associated with women, and vice versa. For pronominal references, we assign 100 to female pronouns (e.g., “she”, “her”) and 0 to male ones (e.g., “he”, “his”). We add the difference between the two characteristics as an additional feature for analysis.
m
b1b7a4bf-11d3-45be-a7f2-e9dd31ac6a8f
The analysis obtained with the TCT toolkit is illustrated in Figure REF. There is a negative correlation between model accuracy and the genderedness difference between the occupation and the pronominal reference. In other words, if a test example pairs a stereotypically female occupation with a male pronoun (e.g., “nurse” and “he”) or a stereotypically male occupation with a female pronoun (e.g., “constructor” and “she”), the model is more likely to make a wrong prediction. <FIGURE>
r
d3ed1e8d-7783-44ef-b6bf-b000d3cd3fb6
To investigate performance heterogeneity in translation models, we use the No Language Left Behind (NLLB) 1.3B distilled model and the English-Russian validation split of the multi-domain dataset from the same work [1]. We use the HuggingFace transformers translation pipeline for easy inference [2]. We extract translations using the pipeline and employ the chrf++ metric to measure success per individual data point [3] (we use chrf++ as it has better per-data-point properties than corpus statistics like BLEU [4]). Using the toolkit, we characterize the English source data with default settings.
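Per-data-point chrF++ scoring can be sketched as follows (the sacrebleu library is one common implementation; the example sentences are invented, and the NLLB inference step is omitted):

```python
from sacrebleu.metrics import CHRF

chrf = CHRF(word_order=2)  # word_order=2 gives chrF++

translations = ["The cat sits on the mat.", "He go to school yesterday."]
references = [["The cat is sitting on the mat."],
              ["He went to school yesterday."]]

# One chrF++ score per data point, rather than a single corpus score.
per_datapoint = [chrf.sentence_score(hyp, ref).score
                 for hyp, ref in zip(translations, references)]
print(per_datapoint)
```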
m
ec2fd5e0-d7ec-45d7-9cb7-41e6cfbe47e8
Surprisingly, we find significant heterogeneity as seen in Figure REF and below. In particular, more sentences, more verbs, polysemy for content words, and chat-like messages lead to performance drops. Conversely, more nouns and words with more syllables correlate with better chrf++ scores.
r
0d91be7a-dec6-4528-b2fc-71ddae868f87
The driver of this heterogeneity may be deceptive. The HuggingFace translation pipeline does not keep track of the underlying model's training distribution. It would not know that the NLLB model was trained on sentence pairs and the evaluation data contains multi-sentence datapoints. An appropriate way to match the training distribution would instead be to split by sentences and translate individual sentences before re-concatenating. In fact, if we take this approach, we find that performance levels out with the biggest improvements coming from the largest sources of heterogeneity (Figure REF ). This demonstration shows the power of TCT for debugging model workflows. With many layers of abstraction, it is easy to forget that underlying models are likely trained on a particular data distribution.
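The sentence-level workaround can be sketched as follows (the naive regex splitter and the exact pipeline arguments are our assumptions; a production system would use a proper sentence segmenter):

```python
import re
from transformers import pipeline

# NLLB translation pipeline, matching the model's sentence-level training.
translate = pipeline("translation",
                     model="facebook/nllb-200-distilled-1.3B",
                     src_lang="eng_Latn", tgt_lang="rus_Cyrl")

def translate_by_sentence(text):
    """Split a multi-sentence input, translate each sentence separately,
    and re-concatenate the outputs."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    outputs = translate(sentences)
    return " ".join(o["translation_text"] for o in outputs)

print(translate_by_sentence("This is the first sentence. Here is the second."))
```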
r
d216a9d0-17ee-4dfe-a25d-2bfb63575568
Place recognition modules, which determine whether the sensor is revisiting a previously observed location, are vital whenever long-term autonomous operation is required, e.g., for simultaneous localization and mapping (SLAM). In SLAM, the ability to relocalize can reduce the accumulated localization drift while correcting the past trajectory to build a consistent map of the environment. Among place recognition applications, robust place recognition for autonomous cars is a commonly tackled problem. City-wide localization in dynamic environments with moving objects, changing weather conditions, and seasonal changes requires robust methods capable of capturing low-level and high-level features from raw sensory data.
i
ec9c61dd-fed2-419b-8694-f2b692d4140a
Designing an efficient learning-based 3D LiDAR place recognition system is still an open problem. A key challenge is to find the 3D data representation that can be most efficiently processed by neural networks to extract meaningful features for robust place recognition. As a community, we have already explored image-like [1], voxel-based [2], bird's-eye-view [3], [4], [5], and unordered point set [6], [7] representations. More recently, we have also seen a rise in sparse convolution-based approaches [8], [9] and attention-based modules [10], which can also be combined [11]. Many of these methods are trained, evaluated, and compared on the Oxford RobotCar dataset [6], [8], [14], [15], [16], [17], [18], [19], which was gathered by concatenating multiple 2D LiDAR scans covering a 20 m distance and subsampling to 4096 points [6]. In contrast, a single scan from a modern 3D LiDAR covers a much larger area (up to 160-200 meters in diameter) and has a much greater number of points (up to 260k points for the Ouster OS1-128), as presented in Fig. REF. <FIGURE>
i
6773d753-1f06-4ff7-8b4e-3034ccc23239
We propose a new 3D LiDAR place recognition system called MinkLoc3D-SI, which extends the sparse convolution-based MinkLoc3D [1]} to improve performance on scans from 3D LiDARs. The proposed MinkLoc3D-SI uses spherical coordinates of 3D points and utilizes the intensity value available in 3D LiDAR scans. We evaluate our approach on the USyd Campus [2]}, Oxford RobotCar [3]}, and KITTI [4]} datasets.
i
eefd8157-1868-488e-a88f-6a6365d6df68
- The first 3D sparse convolution-based place recognition system, MinkLoc3D-SI, utilizing intensity and a spherical point representation suited for place recognition based on a single scan from a 3D LiDAR.
- A new place recognition dataset utilizing the Velodyne VLP-16, based on the USyd Campus dataset.
- A modified Oxford RobotCar Intensity dataset including intensity for each 3D point.
i
85707c78-4707-49e3-a02f-2cea600340fb
In this article, we propose MinkLoc3D-SI, a sparse convolution-based method utilizing the natural spherical representation of 3D points from a single 3D LiDAR scan together with the commonly available intensity information associated with each 3D point measurement. The proposed method targets place recognition based on a single scan from a 3D LiDAR.
d
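To make the representation concrete, below is a minimal sketch (ours, not the authors' implementation) of converting a Cartesian LiDAR scan with intensity into the spherical-with-intensity form the method operates on; the angle conventions and units are assumptions.

```python
# Sketch: convert an (N, 4) Cartesian LiDAR scan [x, y, z, intensity] into a
# spherical-with-intensity representation [range, azimuth, elevation, intensity].
# Angle conventions and units are illustrative assumptions.
import numpy as np

def cartesian_to_spherical_i(points: np.ndarray) -> np.ndarray:
    x, y, z, intensity = points.T
    r = np.sqrt(x**2 + y**2 + z**2)                  # range from the sensor
    azimuth = np.arctan2(y, x)                       # horizontal angle
    elevation = np.arcsin(z / np.maximum(r, 1e-9))   # vertical angle
    return np.stack([r, azimuth, elevation, intensity], axis=1)

# scan = np.random.rand(4096, 4).astype(np.float32)
# cartesian_to_spherical_i(scan).shape  # -> (4096, 4)
```

A sparse convolutional backbone can then quantize the spherical coordinates instead of the Cartesian ones, which more closely matches how a rotating LiDAR samples the scene.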
36610142-db1b-4fa0-be8b-d3610c036ba6
MinkLoc3D-SI is evaluated on the USyd Campus, KITTI, and Oxford RobotCar Intensity datasets. On the USyd Campus dataset, the gains from the spherical point representation, from intensity, and from their combination are notable compared with the state-of-the-art MinkLoc3D and Scan Context. We observe minor improvements on the proposed Oxford RobotCar Intensity dataset when intensity is used, but the spherical representation is unsuitable for map segments created from accumulated 2D scans. Further evaluation of the generalization ability on the KITTI dataset yields the best results among 3D point cloud-based algorithms. The ablation study confirms that the best results should be expected with rather large quantization steps and when all of the available points are processed.
d
2005b223-da0b-4f48-8fc0-a1fe750df1cc
The obtained results suggest that spherical coordinates with intensity are promising modifications for processing point clouds from a rotating 3D LiDAR, and could thus be applied to other solutions with sparse 3D convolutional architectures or to other applications.
d
21436941-daf5-4863-9062-6144eb26fa6d
Tree transducers are fundamental devices that were invented in the 1970s in the context of compilers and mathematical linguistics. Since then, they have been applied in a huge variety of contexts. Perhaps the most basic type of tree transducer is the top-down tree transducer [1]}, [2]} (for short, transducer). Even though top-down tree transducers are very well studied, some fundamental problems about them have remained open. For instance, given a deterministic such transducer, is it decidable whether or not its translation can be realized by a linear transducer? In this paper we show that this problem is indeed decidable, and that in the affirmative case such a linear transducer can be constructed.
i
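For readers less familiar with the terminology, the following standard illustration (ours, not from the paper) shows the linearity distinction: a rule is linear if each input variable occurs at most once on its right-hand side.

```latex
% Standard illustration (ours) of linearity for top-down transducer rules:
\begin{align*}
  q(f(x_1, x_2)) &\rightarrow g(q(x_1),\, q(x_2))
      && \text{linear: each variable used at most once}\\
  q(f(x_1, x_2)) &\rightarrow g(q(x_1),\, q(x_1))
      && \text{non-linear: } x_1 \text{ is copied and } x_2 \text{ is deleted}
\end{align*}
```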
d2ccbe6a-d2ab-4bee-9660-cd84ad952073
In general, it is advantageous to know whether a translation belongs to a smaller class, i.e., can be realized by some restricted model with more properties and/or better resource utilization. The corresponding decision problems for transducers, though, are rarely studied and mostly non-trivial. One recent breakthrough is the decidability of one-way string transducers within (functional) two-way string transducers [1]}. Even more recently, it has been proven that look-ahead removal for linear deterministic top-down tree transducers is decidable [2]}. In our case, one extra advantage of linear transducers (over non-linear ones) is that linear transducers effectively preserve the regular tree languages, implying that forward type checking (where a type is a regular tree language) can be decided in polynomial time. For non-linear transducers, on the other hand, type checking is DEXPTIME-complete [3]}, [4]}.
i
57d42b84-29ef-45a5-8f90-d5c248e051d3
The idea of our proof uses the canonical earliest normal form for top-down tree transducers [1]} (to be precise, our proof even works for transducers with look-ahead, for which a canonical earliest normal form is presented in [2]}). A given canonical earliest transducer \(M\) produces its output at least as early as any equivalent linear transducer. From this we can deduce that if \(M\) is equivalent to some linear transducer, then it has two special properties:
i
e71d6f3b-c618-45a5-bde9-7c81e58c12c1
Lca-conformity means that if the transducer copies (i.e., processes an input subtree more than once), then the output subtree rooted at the lowest common ancestor of all these processing states may not depend on any other input subtree. Zero output twinned means that in a loop of two states that process the same input path, no output whatsoever may be produced. Both properties are decidable in polynomial time, and if they hold, an equivalent linear transducer can be constructed.
i
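As a schematic illustration of lca-conformity (our own, not from the paper): in the rule below, states \(q_1\) and \(q_2\) both process the input subtree \(x_1\), the lowest common ancestor of their output positions is the node labeled \(g\), and lca-conformity demands that the output subtree rooted there must not depend on the other input subtree \(x_2\), which may only be used outside it.

```latex
% Our own schematic rule illustrating the copying situation in lca-conformity:
\[
  q(f(x_1, x_2)) \;\rightarrow\; h\bigl(g(q_1(x_1),\, q_2(x_1)),\; q_3(x_2)\bigr)
\]
```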
c1bb4238-40b5-4a91-96e3-1373953f5ec9
In our second result, we prove that for a transducer (with regular look-ahead) it is decidable whether or not it is equivalent to a tree homomorphism, and that in the affirmative case such a homomorphism can be constructed. To obtain this result, we prove that whenever a transducer \(T\) is equivalent to a homomorphism, any subtree of a certain height of any partial output of \(T\) is either ground or effectively identical to the axiom of \(T\). This property can again be checked in polynomial time, and if it holds, a corresponding homomorphism can be effectively constructed.
i
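For concreteness, here is a textbook-style example (ours, not the paper's) of a tree homomorphism, i.e., a one-state transducer: there is a single state \(h\) and exactly one rule per input symbol.

```latex
% Our own textbook-style example of a tree homomorphism (one-state transducer):
\begin{align*}
  h(f(x_1, x_2)) &\rightarrow g(h(x_2),\, h(x_1)) && \text{swap the two children}\\
  h(a) &\rightarrow b && \text{relabel the leaf}
\end{align*}
```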
e7975842-22e7-4227-8c7e-bd34c8ab60dd
For simplicity and better readability we consider total transducers (without look-ahead) in our first result though we remark that the result can be extended to partial transducers with look-ahead. All proofs for partial transducers with look-ahead are technical variations of the proofs for the total case and can be found in the Appendix. Note that our results also work for given bottom-up tree transducers, because they can be simulated by transducers with look-ahead [1]}.
i
7ca29ffd-776b-484f-9576-57babf264625
We have proved that for a deterministic top-down tree transducer \(M\) with look-ahead, it is decidable whether or not its translation can be realized by a linear transducer (without look-ahead). We have further shown that for such a transducer \(M\) it is decidable whether or not its translation is realizable by a one-state transducer (called a tree homomorphism). In both cases, if the answer is affirmative, an equivalent transducer in the respective subclass can be constructed.
d
36f5e939-58d9-48c4-a6e8-ebefc8d7619f
One may wonder whether our results can be generalized to larger classes of given transducers. They can easily be generalized to nondeterministic top-down tree transducers: first decide whether the transducer is functional [1]} and, if so, construct an equivalent deterministic top-down tree transducer with look-ahead [2]}. Note that the result of Engelfriet [2]} shows that for any composition of nondeterministic top-down and bottom-up transducers that is functional, an equivalent deterministic top-down tree transducer with look-ahead can be constructed. This raises the question of whether, for a composition of nondeterministic transducers, functionality is decidable. To the best of our knowledge, this is an open problem.
d
38cface4-d208-4117-9516-9bf9fdc8368b
In future work, it would be desirable to extend our result of deciding homomorphisms within deterministic top-down tree transducers to the case where, for a given \(k\), one decides whether an equivalent top-down tree transducer with \(k\) states exists (and, if so, constructs such a transducer). This would offer a state-minimization method.
d
02b2ab10-2835-442c-9631-803356330cfd
With the growth of the IoT (Internet of Things) and the resulting increase in the number of devices based on wireless communication, bandwidth consumption is increasing at the same pace. The most affected environments are public places with a huge number of users and, often, many obstacles; but even our own homes face limitations imposed on the Wi-Fi network, such as interference and attenuation, which can harm the user experience [1]}.
i
9f534b58-f847-44d8-8846-2bfafef36234
These problems arise from many sources, but one of the principal factors that demands attention is the limited spectrum of radio waves. Since the radio spectrum is saturated, and consequently expensive, we look for technologies that can overcome or minimize these limits. Our research therefore began with the goal of finding a technology that is easy to implement and offers promising results. In this way, we found Li-Fi, an optical data communication system.
i
bb1c54af-d97a-4d2e-b434-2655d20d5dc2
Considering that all workplaces are required by law to have appropriate illumination, and that electricity is a basic service in countries all over the world, Li-Fi appears well suited to mitigate these problems. Li-Fi is a system that sends data through the visible and infrared (IR) light spectrum; according to the first public demonstration of this technology, performed by Professor Harald Haas in 2011 in his TED Talk, an environment with a light-emitting diode (LED) lamp could provide a 10 Mbps transfer rate [1]}.
i
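To illustrate the basic principle of sending data through light, below is a toy sketch (ours) of on-off keying, the simplest modulation an LED can implement: in each time slot the LED is switched on for a 1 bit and off for a 0 bit. Real Li-Fi systems use far richer modulation schemes.

```python
# Toy sketch of on-off keying (OOK): one time slot per bit, LED on = 1,
# LED off = 0. Purely illustrative; real Li-Fi uses richer modulations.
def encode_ook(message: bytes) -> list:
    # Emit bits MSB-first for each byte of the message.
    return [(byte >> bit) & 1 for byte in message for bit in range(7, -1, -1)]

def decode_ook(slots: list) -> bytes:
    # Regroup the received on/off slots into bytes, MSB-first.
    out = bytearray()
    for i in range(0, len(slots), 8):
        byte = 0
        for bit in slots[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

slots = encode_ook(b"Li-Fi")
assert decode_ook(slots) == b"Li-Fi"
print(slots[:8])  # on/off pattern the LED would emit for the first byte
```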
95f143e6-7714-4b54-96ad-e71fc22f0130
Having found this technology, this research aims to evaluate whether Li-Fi could act as a substitute for Wi-Fi or only as a complementary technology. We discuss areas in which the technology already has applications, and we reach a verdict by comparing it with Wi-Fi and exploring its deficiencies.
i