_id: string (length 36) · text: string (200 to 328k characters) · label: 5 classes
a9e0db0c-99ab-4580-908a-2aaee08801d8
TL has shown great improvements over traditional feature-based models on cross-lingual tasks. Chen et al. have shown that when language-invariant and language-specific features are coupled at the instance level, the cross-lingual TL approach has great potential for building cross-lingual NLP models [1]}. With unsupervised multilingual embeddings, such approaches can even work when there are no resources to train on, such as target-language data or cross-lingual supervision.
w
9c5a0e28-10a1-4b99-90d8-e888d936cb27
For all the effectiveness that TL has brought to NLP research, it has also raised uncertainties about its use. Several works have reported negative transfer of knowledge from the source to the target in various settings. Meftah et al. have shown, through quantitative and qualitative analysis, that knowledge transfer even between related domains such as news and tweets can negatively affect the model's performance [1]}.
w
ae89ef43-d1bb-4bab-9b83-2b53da36481a
Many researchers have reported methods for avoiding or detecting negative transfer. Chen et al. proposed a novel approach for suppressing negative transfer, called Batch Spectral Shrinkage (BSS), which penalizes the smallest singular values of the feature matrix in order to suppress untransferable spectral components [1]}.
w
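As an illustration of the Batch Spectral Shrinkage idea above, the following minimal PyTorch sketch penalizes the squares of the k smallest singular values of a batch's feature matrix; the penalty weight and the choice of k are illustrative assumptions, not values from the cited work.

import torch

def bss_penalty(features: torch.Tensor, k: int = 1) -> torch.Tensor:
    """Sum of squares of the k smallest singular values of a (batch, dim) feature matrix."""
    s = torch.linalg.svdvals(features)   # singular values, returned in descending order
    return torch.sum(s[-k:] ** 2)

# Illustrative use inside a fine-tuning step (placeholder task loss and weight):
features = torch.randn(32, 768, requires_grad=True)   # e.g. a batch of encoder outputs
task_loss = features.mean()                           # stand-in for the real task loss
total_loss = task_loss + 0.001 * bss_penalty(features, k=1)
total_loss.backward()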
988316bc-3bbc-465f-9555-39e9a0c01950
In this study, we conducted a series of analyses using state-of-the-art pre-trained models, namely BERT, XLNet, and RoBERTa, on three different NLP tasks. For each task, we analyze TL from two perspectives: first, how knowledge transfer affects the model's performance when predictions are made in a somewhat different domain; and second, how knowledge transfer affects performance on cross-lingual tasks with closely related languages (French, German, and Spanish).
m
e0377e54-5fdb-486a-bd5d-9a6bb0b4829f
We divided our analyses into two experiments based on how model performance is measured. In the first experiment, the models were trained directly on the target dataset for each task: text classification, sentiment analysis, and sentence similarity. In the second experiment, the models were fine-tuned on the first (source) dataset of each analysis and then made predictions on the target dataset using the knowledge acquired from the source.
r
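The two experimental settings described above could be sketched with the Hugging Face transformers library roughly as follows; the model name, dataset objects (assumed to be datasets.Dataset objects with "text" and "label" columns), and hyperparameters are placeholders rather than the authors' actual configuration.

from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def train_and_evaluate(model_name, train_ds, test_ds, num_labels):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
    encode = lambda batch: tok(batch["text"], truncation=True, padding="max_length", max_length=128)
    train_ds, test_ds = train_ds.map(encode, batched=True), test_ds.map(encode, batched=True)
    args = TrainingArguments(output_dir="out", num_train_epochs=3, per_device_train_batch_size=16)
    trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=test_ds)
    trainer.train()
    return trainer.evaluate()

# Experiment 1: train and evaluate directly on the target-domain data.
# train_and_evaluate("bert-base-multilingual-cased", target_train, target_test, num_labels=2)

# Experiment 2: fine-tune on the source data only, then evaluate the acquired
# knowledge zero-shot on the target-domain test set.
# train_and_evaluate("bert-base-multilingual-cased", source_train, target_test, num_labels=2)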
55f355a9-85bb-4b0f-801a-79a3e23325b3
We collected and created specific datasets to analyze how TL affects predictions on specific NLP tasks. Our analysis revealed transfer patterns that can help clarify when and what to transfer for particular domains and tasks. We also demonstrated how, in some cases, the transferred knowledge can help on cross-lingual tasks without any prior understanding of the target language.
d
fcb28ac1-20a2-4871-9912-d4c1be4b8693
Collaboration is a fundamental process that enables humans to perform various complex activities. For example, in professional automobile racing, pit crews swiftly replace tires, refuel, and carry out necessary repairs and adjustments within highly limited time spans; surgeons, nurses, anesthetists, and assistants work together seamlessly to perform surgeries; and firefighters respond collectively to a wide range of emergencies and catastrophes. Beyond professional endeavors, day-to-day interactions also frequently involve people working together, such as when people coordinate actions and distribute efforts while assembling furniture.
i
40ca1f6c-e897-4e8c-b00c-985536501757
In many of these instances, collaboration emerges on an ad-hoc basis, with teammates communicating and determining task roles and actions spontaneously. This emergent collaboration allows humans to effectively initiate and shape team-based behaviors according to their real-time task needs. Achieving similar emergent collaboration in human-robot teaming and interaction is critical for applications such as in-home assistance, flexible manufacturing, and search-and-rescue, where collaborative actions require high adaptability. However, much of the prior work on human-robot collaboration has focused on prescribed scenarios where collaborators have predefined, static roles and on dyadic human-robot collaborations in tabletop settings (e.g., [1]}, [2]}, [3]}, [4]}, [5]}). Furthermore, the task scenarios used in HRC research often involve minimal collaborative activity, such as hand-offs from robots to humans, providing a limited view of how collaboration may need to evolve in larger scale interactions.
i
ef81befe-9c80-4376-8204-0e3c1874d9de
Moving towards developing shared testbeds, datasets, and evaluation metrics focused on large scale, emergent human-robot collaboration can help drive HRC research towards investigating the more unstructured, ad-hoc, and involved collaborative activities that have so far remained largely unexplored. In this work, we present Full-body Ad-hoc Collaboration Testbed (FACT), a testbed that researchers can use to understand human behaviors in emergent collaboration and develop robot capabilities based on these behaviors. We provide implementation details to enable researchers to recreate FACT, describe observations from a preliminary exploration using the testbed, and discuss future steps to further drive research into developing flexible, emergent human-robot collaborations. <FIGURE><FIGURE>
i
a38e54c6-084f-4869-9893-3bfa58b887db
To achieve the full potential of human-robot teaming in a wider range of activities, HRC research must move beyond small scale, static collaborations to focus on large scale, dynamic collaborations. Shared testbeds, datasets, and evaluation metrics derived from large scale emergent collaboration scenarios can help researchers methodically implement and evaluate the robot behaviors needed to achieve complex, ad-hoc human-robot collaborations. In this work, we presented FACT (Full-body Ad-hoc Collaboration Testbed), which consists of a PVC bunk bed collaborative assembly scenario and an accompanying mobile data collection setup for researchers to better understand and model individual and team behaviors during emergent collaboration and to develop and evaluate collaborative robot capabilities.
d
4845b0af-ee4a-4cb2-81bd-f439a489fcd2
Unlike previous testbeds and datasets for modeling human-human interaction (e.g., [1]}, [2]}, [3]}, [4]}), our testbed enables the capture of natural interactions that do not require participants to role-play or act in specific collaborative roles, allows modeling of bigger team behaviors beyond dyadic interactions, and involves full-body collaborations that move beyond tabletop settings. Furthermore, while human-robot interaction researchers have explored various aspects of social intelligence for HRC, such as multimodal understanding (e.g., [5]}, [6]}), action coordination (e.g., [7]}), task parallelism (e.g., [8]}), moderation of team dynamics (e.g., [9]}, [10]}), flexible task planning (e.g., [11]}), and action anticipation (e.g., [12]}), our testbed allows for a more holistic investigation into how an interplay of these various aspects contributes to large scale emergent collaborations and may also provide insight into designing interaction conventions for complex emergent collaboration (e.g., [13]}).
d
35d302aa-0eba-46a8-86cc-c6bd51885a3d
We contribute an openly accessible testbed for investigating large scale emergent collaboration in this work. Our future work will involve the development of a shared dataset from collaborations using FACT, which will include full-body and egocentric manipulation data, similar to [1]} but focused on team-based collaborative activity rather than on one-person task performance. We would also like to add to the existing set of established evaluation metrics for human-robot collaboration (e.g., [2]}) to capture aspects of human-robot interaction specific to large scale emergent collaboration, such as dynamic sub-team formation. To minimize the influence of the limited manipulation and motion capabilities of real-world robots on research into emergent collaboration, we would also like to develop a simulation counterpart to FACT that enables researchers to deploy and test behaviors on virtual AI agents in the bunk bed assembly scenario. Overall, we hope that FACT can serve as an initial tool that encourages the development of additional resources and research directions that can advance investigation of social intelligence for large scale emergent collaboration.
d
8de712fb-4474-4aef-b4dc-fd63f4c76b87
In recent decades, many efforts have been made to advance Natural Language Processing (NLP) for Modern Standard Arabic (MSA). These efforts have led to systems that can serve more than 400 million people across Africa and Asia in many tasks, such as machine translation, sentiment classification, and diacritization. However, in most cases, MSA is only used in formal settings, such as newspapers and professional or academic contexts, while Arabic dialects are used in everyday communication.
i
ae81ba6f-84ef-4ee8-919d-73e3ad744655
In terms of resource availability, the majority of Dialectal Arabic (DA) variants are considered low-resource languages and suffer from a scarcity of labeled data for building NLP systems. Furthermore, previous works have mainly focused on MSA and a few dialects (mainly Egyptian and Middle Eastern dialects) [1]}, [2]}, [3]}. Although MSA and the DA variants are etymologically close, MSA NLP systems applied to DA data show weak performance compared to their performance on MSA data [4]}. Thus, to improve the performance of MSA NLP systems and make them generalize better to DA input texts, models should be trained on data containing DA samples. In other words, there is a real need for open datasets with good-quality labels for DA.
i
11b9e500-ca04-4817-b4cf-6ccd89187401
Social media (Twitter, Facebook, etc.) might be the most convenient source for collecting DA data, as it provides content that reflects users' feelings across several topics and is written in the users' native, informal Arabic dialects. However, data collected from social media should not be used in its raw form, as it suffers from several issues [1]}. One example is code-switching [2]}, where users borrow words or phrases from other languages (English or French), which introduces noise into the collected data, especially in the case of automatic annotation.
i
94c2fd1d-0276-4a03-9576-b5cfe34f787f
The creation of open access social media data sets, such as the one presented in our paper, aims to enhance innovation and practical applications of NLP for DA, such as social media content analysis for marketing studies, public opinion assessment or for social sciences.
i
d16da11a-923c-4621-a551-750664b47826
In this work, we present the first multi-topic and multi-dialect dataset, manually annotated for five (5) DA variants and five (5) topics. The dataset was collected from Twitter, is publicly available, and is designed to serve several Arabic NLP tasks: sentiment classification, topic classification, and Arabic dialect identification. To evaluate the usability of the collected dataset, several studies were performed on it using different machine learning algorithms such as SVM and the Naive Bayes classifier. The main contributions of this paper are:
i
9098b7ea-82ae-4893-a27d-66bef51bd806
The introduction of an open-source multi-topic, multi-dialect corpus for dialectal Arabic. A demonstration of the usability of our dataset through performance evaluation of Arabic dialect identification, Arabic sentiment classification, and Arabic topic categorization systems under different configurations with different machine learning models.
i
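A minimal scikit-learn baseline of the kind referred to above (here for dialect identification) could look like the sketch below; the toy tweets, labels, and character n-gram settings are placeholders, not the paper's actual configuration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder tweets and dialect labels standing in for the real annotated corpus.
texts = ["tweet written in Moroccan dialect", "tweet written in Egyptian dialect"]
labels = ["MOR", "EGY"]

for clf in (LinearSVC(), MultinomialNB()):
    # Character n-grams over TF-IDF are a common simple choice for dialect identification.
    pipe = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), clf)
    pipe.fit(texts, labels)
    print(clf.__class__.__name__, pipe.predict(["a new unseen tweet"]))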
676b9535-7cb9-450a-b133-80ce9fe96570
To the best of our knowledge, the proposed cross-topic, cross-dialect Arabic dataset is the first one to cover these three tasks at the same time. The rest of the paper is organized as follows. Section discusses related work. Section presents the collected data and how it was gathered. The labeling of the data for the three applications is described in Section 4. Finally, Section concludes this paper and outlines future work.
i
76f2bc57-5070-418b-aacc-33af1b845344
With the explosion in the number of social media users in the Arab world in recent years, sentiment analysis in Arabic has gained more attention. However, the publicly available datasets are still limited in terms of coverage, size, and number of dialects. Moreover, most of the work on Arabic sentiment analysis focuses on Modern Standard Arabic, although some authors cover the Egyptian and Gulf dialects. In addition, analyzing the sentiment of an Arab social media user depends on many factors (e.g., dialect, topic, …).
w
9fa8e800-0dc0-4c4d-b161-ded24801d0ea
In this work we presented a labeled dataset of 50K tweets in five (5) Arabic dialects. We described the process of labeling this data for dialect detection, topic detection, and sentiment analysis. We make this labeled data openly available to the research, startup, and industrial communities to build models for NLP applications for dialectal Arabic. We believe that initiatives such as ours can catalyse innovation and the technological development of AI solutions in Arab countries like Morocco by removing the burden of unavailable labeled data and of the time-consuming tasks of collecting and manually labeling data. We also presented a set of machine learning models that can serve as baselines, against which future users of this dataset can compare and which they can aim to outperform through innovations in computational methods and algorithms. The labeled dataset can be downloaded at [1]}, and all the implemented algorithms are available at [2]}.
d
0f10888a-2b4b-4e8d-a514-2e9ec98bd4f6
Generative Adversarial Networks (GANs) [1]} are a family of generative models that are trained through a duel between a generator and a discriminator. The generator aims to generate data from a target distribution, while the fidelity of the generated data is “screened” by the discriminator. Recent studies on the objectives [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, backbone architectures [9]}, [10]}, and regularization techniques [11]}, [12]}, [13]} for GANs have achieved impressive progress on image generation, making GANs the state-of-the-art approach for generating high-fidelity and diverse images [14]}. Conditional GANs (cGANs) extend GANs to generate data from class-conditional distributions [15]}, [16]}, [17]}, [18]}. The capability of conditional generation extends the application horizon of GANs to conditional image generation based on labels [16]} or text [20]}, speech enhancement [21]}, and image style transformation [22]}, [23]}.
i
65984cfe-13d3-4105-bcea-da1c50f93fa5
One representative cGAN is the Auxiliary Classifier GAN (ACGAN) [1]}, which decomposes the conditional discriminator into a classifier and an unconditional discriminator. The generator of ACGAN is expected to generate images that convince the unconditional discriminator while being classified into the right class. The classifier plays a pivotal role in laying down the law of conditional generation for ACGAN, making it the very first cGAN that can learn to generate the 1000 classes of ImageNet images [2]}. That is, ACGAN used to be a leading cGAN design. While the classifier in ACGAN indeed improves the quality of conditional generation, deeper studies revealed that the classifier biases the generator toward generating easier-to-classify images [3]}, which in turn decreases the capability to match the target distribution.
i
2d2fe52f-7ac9-45d7-bbb7-7d3fd831cbd0
Unlike ACGAN, most state-of-the-art cGANs are designed without a classifier. One representative cGAN without a classifier is Projection GAN (ProjGAN) [1]}, which learns an embedding for each class to form a projection-based conditional discriminator. ProjGAN not only generates higher-quality images than ACGAN, but also accurately generates images of the target classes without relying on an explicit classifier. In fact, it was found that ProjGAN usually cannot be further improved by adding a classification loss [1]}. This finding, along with the success of ProjGAN and other cGANs without classifiers [3]}, [4]}, seems to suggest that including a classifier is not helpful for improving cGANs.
i
dc564ec7-affc-4721-a224-077610c822e8
In this work, we challenge the belief that classifiers are not helpful for cGANs, with the conjecture that leveraging classifiers appropriately can benefit conditional generation. We propose a framework that pins down the roles of the classifier and the conditional discriminator by first decomposing the joint target distribution with Bayes rule. We then model the conditional discriminator as an energy function, which is an unnormalized log probability. Under this energy function, we derive the corresponding optimization terms for the classifier and the conditional discriminator with the help of Fenchel duality to form the unified framework. The framework reveals that a joint generative model can be trained via two routes, from the aspect of the classifier and of the conditional discriminator, respectively. We name our framework Energy-based Conditional Generative Adversarial Networks (ECGAN); it not only justifies the use of classifiers for cGANs in a principled manner, but also explains several popular cGAN variants, such as ACGAN [1]}, ProjGAN [2]}, and ContraGAN [3]}, as special cases with different approximations. After properly combining the objectives from the two routes of the framework, we empirically find that ECGAN outperforms other cGAN variants across different backbone architectures on benchmark datasets, including the most challenging ImageNet.
i
97cd9c9e-4155-48e0-aa1e-004b72f83148
We justify the principled use of classifiers for cGANs by decomposing the joint distribution. We propose a cGAN framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), which explains several popular cGAN variants in a unified view. We experimentally demonstrate that ECGAN consistently outperforms other state-of-the-art cGANs across different backbone architectures on benchmark datasets.
i
7dd22067-2852-4738-ab83-60b8c533d520
The paper is organized as follows. Section  derives the unified framework that establishes the role of the classifiers for cGANs. The framework is used to explain ACGAN [1]}, ProjGAN [2]}, and ContraGAN [3]} in Section . Then, we demonstrate the effectiveness of our framework by experiments in Section . We discuss related work in Section  before concluding in Section .
i
938af793-5b93-4f30-aaa1-4cd066902aab
Consider a \(K\) -class dataset \((x, y) \sim p_d\) , where \(y \in \left\lbrace 1 \dots K \right\rbrace \) is the class of \(x\) and \(p_d\) is the underlying data distribution. Our goal is to train a generator \(G\) to generate a sample \(G(z, y)\) following \(p_d(x|y)\) , where \(z\) is sampled from a known distribution such as \(\mathcal {N}(0, 1)\) . To solve this problem, a typical cGAN framework can be formulated by extending an unconditional GAN as: \(\max _{D} \min _{G} \sum _y \operatornamewithlimits{\mathbb {E}}_{p_d(x|y)} D(x, y) - \operatornamewithlimits{\mathbb {E}}_{p(z)} D(G(z, y), y) \)
m
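A minimal PyTorch sketch of one optimization step of the objective above, using a toy concatenation-based conditional discriminator; the architectures, dimensions, and learning rates are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

K, Z_DIM, X_DIM = 10, 16, 32                       # number of classes, noise dim, data dim
G = nn.Sequential(nn.Linear(Z_DIM + K, 64), nn.ReLU(), nn.Linear(64, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM + K, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4)

def cond(v, y):                                    # condition by concatenating a one-hot label
    return torch.cat([v, F.one_hot(y, K).float()], dim=1)

x_real = torch.randn(64, X_DIM)                    # stand-in for real samples x ~ p_d(x|y)
y = torch.randint(0, K, (64,))
z = torch.randn(64, Z_DIM)

# Discriminator step: ascend E[D(x, y)] - E[D(G(z, y), y)].
fake = G(cond(z, y)).detach()
d_loss = -(D(cond(x_real, y)).mean() - D(cond(fake, y)).mean())
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: descend -E[D(G(z, y), y)].
g_loss = -D(cond(G(cond(z, y)), y)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()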
cc8170aa-7984-45b1-aab6-e9766d83fe53
At first glance, there is no classifier in Eq. (REF ). However, because of the success of leveraging label information via classification, it is hypothesized that a better classifier can improve conditional generation [1]}. Motivated by this, in this section, we show how we bridge classifiers to cGANs by Bayes rule and Fenchel duality.
m
259fe70c-d456-4514-a42b-d05c9bdda60b
We use StudioGAN (https://github.com/POSTECH-CVLab/PyTorch-StudioGAN) [1]} to conduct our experiments. StudioGAN is a PyTorch-based project distributed under the MIT license that provides implementations and benchmarks of several popular GAN architectures and techniques. To provide reliable evaluation, we conduct experiments on CIFAR-10 and Tiny ImageNet with 4 different random seeds and report the means and standard deviations for each metric. We evaluate the model with the lowest FID for each trial. The default backbone architecture is BigGAN [2]}. We fix the learning rates for the generator and discriminator to \(0.0001\) and \(0.0004\) , respectively, and tune \(\lambda _\text{clf}\) in \(\left\lbrace 1, 0.1, 0.05, 0.01 \right\rbrace \) . We follow the setting \(\lambda _c = 1\) in [1]} when using the 2C loss, and set \(\alpha = 1\) when applying the unconditional GAN loss. The experiments take 1-2 days on single-GPU (Nvidia Tesla V100) machines for CIFAR-10 and Tiny ImageNet, and 6 days on 8-GPU machines for ImageNet. More details are described in Appendix . <TABLE>
m
b13bcb62-54cb-4108-b913-5d0d9901af5d
The development of cGANs started with feeding label embeddings to the inputs of GANs or to the feature vector at some middle layer [1]}, [2]}. To improve generation quality, ACGAN [3]} proposes to leverage classifiers and successfully generates high-resolution images. The use of classifiers in GANs is studied in Triple GAN [4]} for semi-supervised learning and Triangle GAN [5]} for cross-domain distribution matching. However, [6]} and [7]} pointed out that the auxiliary classifier in ACGAN misleads the generator into generating images that are easier to classify. Thus, whether classifiers can help conditional generation remains questionable.
w
9c415177-0694-4cfd-b558-2fff726a6382
In this work, we connect cGANs with and without classifiers via an energy-model parameterization from the joint probability perspective. [1]} use similar ideas but focus on sampling from the trained classifier via Markov Chain Monte Carlo (MCMC) [2]}. Our work is also similar to a concurrent work [3]}, which improves on [1]} by introducing Fenchel duality to replace computationally intensive MCMC. They use a variational approach [5]} to formulate the objective for tractable entropy estimation. In contrast, we study the GAN perspective and estimate the entropy via contrastive learning. Therefore, the proposed ECGAN can be viewed as complementary to [1]}, [3]} by studying the GAN perspective. We note that the studied cGAN approaches also achieve better generation quality than their variational alternative [3]}.
w
bc56d725-c5e9-4394-a303-0828064c578f
Last, [1]} study the connection between the exponential family and unconditional GANs. Different from [1]}, we study conditional GANs, with a focus on providing a unified view of common cGANs and an insight into the role of classifiers in cGANs.
w
fd4b6699-8ace-49be-9e34-638d85a68a3b
In this work, we present a general framework, Energy-based Conditional Generative Adversarial Networks (ECGAN), to train cGANs with classifiers. With the framework, we can explain representative cGANs, including ACGAN, ProjGAN, and ContraGAN, in a unified view. The experiments demonstrate that ECGAN outperforms state-of-the-art cGANs on benchmark datasets, especially on the most challenging ImageNet. Further investigation can be conducted to find better entropy approximations or to improve cGANs via advanced techniques for classifiers. We hope this work can pave the way to more advanced cGAN algorithms in the future.
d
3ff14b2d-a576-4fa6-abcb-c46f3df3ffad
Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version.
i
ed1283e9-9113-4bdb-90d1-b2fc2c790dc8
A personalized recommendation model can be conceptualized as a ranking system that learns from customers' past engagements and interactions [1]}[2]}. Recommendation systems are a critical component of the digital marketing industry, where they are used to match digital ads to customer preferences. In 2019, this industry was valued at 43.8 billion USD, with an expected compound annual growth rate of 17.4% for 2020–2027.
i
f051f1dc-e3a7-4ee1-9878-6c4819ba3cd1
Deep neural networks are extensively used in personalized recommendation systems [1]}. However, although they are good at generalization (prediction for previously unseen feature combinations), they struggle with memorization (prediction based on co-occurrences of previously observed categorical feature values) [2]}. Wide & deep networks, which use wide linear models to memorize sparse feature cross-products and deep neural networks to generalize to unseen feature values, help solve this problem [1]}.
i
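A compact PyTorch sketch of the wide & deep idea: a linear "wide" part over hashed ID cross-products for memorization, and a "deep" part over ID embeddings plus numerical features for generalization. The feature names, sizes, and hashing trick are illustrative assumptions rather than the architecture used in the cited work.

import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_users, n_campaigns, n_offers=10, n_cross_buckets=10_000, emb_dim=16):
        super().__init__()
        self.n_campaigns, self.n_cross_buckets = n_campaigns, n_cross_buckets
        # Wide part: linear weights over hashed (user ID x campaign ID) cross-products.
        self.wide = nn.EmbeddingBag(n_cross_buckets, n_offers, mode="sum")
        # Deep part: ID embeddings plus numerical features fed to an MLP.
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.camp_emb = nn.Embedding(n_campaigns, emb_dim)
        self.deep = nn.Sequential(nn.Linear(2 * emb_dim + 4, 64), nn.ReLU(), nn.Linear(64, n_offers))

    def forward(self, user_id, camp_id, num_feats):
        cross = (user_id * self.n_campaigns + camp_id) % self.n_cross_buckets
        wide_logits = self.wide(cross.unsqueeze(1))
        deep_in = torch.cat([self.user_emb(user_id), self.camp_emb(camp_id), num_feats], dim=1)
        return wide_logits + self.deep(deep_in)        # logits over the candidate offers

model = WideAndDeep(n_users=1000, n_campaigns=20)
logits = model(torch.tensor([3]), torch.tensor([5]), torch.randn(1, 4))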
fbac8554-040b-4b35-a967-3fb1f8170e57
Recommendation models are typically trained in an online setting, so as to learn from shifting user preferences. However, because predictions aid decision-making in the production context and influence customer decisions, they can create positive feedback loops that alter the underlying data generating process and result in a selection bias for the input data [1]}. Causal inference [2]} and bandit theory [3]} can be used to account for such algorithmic bias.
i
b7387200-0244-408d-8448-9967f931678b
We propose a strategy for generating synthetic email promo datasets with clearly defined measures for unknown deterministic features and feature randomness. We evaluate the performance of the wide & deep, wide-only, and deep-only architectures. We show that bandit algorithms that use the upper confidence bound (UCB) [1]} and Thompson sampling (TS) [2]} for action selection slightly improve model performance.
i
f47a516a-09f8-480c-b659-c4673dccada7
The idea of combining wide linear models with deep neural networks for building robust recommendation systems that can memorize as well as generalize was first proposed in Ref. [1]}, refining the idea of factorization machines [2]}. The system was deployed and evaluated on the Google Play app store, which yielded a positive app acquisition gain of 3.9% in an online setting.
w
41124d34-35de-4a61-9f40-95cb4a410a9d
Personalized dynamic recommendation has been reframed in a contextual bandit setting to mitigate the cold-start problem and algorithmic bias [1]}. This paper also developed the LinUCB algorithm to efficiently estimate confidence bounds in closed form. Because computing the actual posterior distribution of the bandit problem is usually intractable, Refs. [2]} and [3]} proposed sampling techniques that use Monte Carlo (MC) dropout to approximate the posterior distribution. Both UCB and TS on this approximated distribution perform well and are computationally efficient. Empirical evaluation of many techniques for approximating TS and the best practices for designing such approximations are discussed in Ref. [4]}. Recently, combining bandit algorithms with deep neural networks in an ad recommendation system on a proprietary dataset has also been explored [5]}.
w
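In the spirit of the approximations cited above, the sketch below uses MC dropout to obtain an approximate posterior over offer scores and then applies UCB or Thompson sampling for action selection; the scoring network, number of samples, and exploration coefficient are placeholders.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 10))

def mc_scores(x, n_samples=20):
    net.train()                                   # keep dropout active at inference (MC dropout)
    with torch.no_grad():
        return torch.stack([net(x) for _ in range(n_samples)])   # (n_samples, batch, n_offers)

def ucb_action(x, c=1.0):
    draws = mc_scores(x)
    mean, std = draws.mean(0), draws.std(0)
    return torch.argmax(mean + c * std, dim=1)    # optimism in the face of uncertainty

def thompson_action(x):
    draws = mc_scores(x)
    sample = draws[torch.randint(len(draws), (1,))].squeeze(0)   # one approximate posterior draw
    return torch.argmax(sample, dim=1)

x = torch.randn(4, 8)                             # a batch of customer/campaign feature vectors
print(ucb_action(x), thompson_action(x))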
7c5a4fec-15f7-4878-be17-f68e6bc3840b
As expected, random guessing results in 10% accuracy for predicting the optimal offer out of 10 possible offers. Also as expected, the prediction accuracy for the deep model that takes the complete customer and promotional campaign feature vectors as input approaches 100% (it is 97.2% for the test set and could potentially be further improved by providing additional training data, using a larger neural network, adjusting the learning rate, etc.).
r
44ef567d-856c-4531-9a76-404763fd6ec9
Note that for all subsequent models, hidden customer and campaign features are not used: only customer features 1 and 2, campaign features 1 and 2, and/or the user and campaign IDs are used for predicting the optimal offer. Thus, the models were deliberately provided with incomplete information:
r
7b44973a-1de8-4aa5-a03d-07fae5b973bb
Numerical customer and campaign features supplied projections of the complete customer and campaign feature vectors onto the “known” customer and campaign feature subspaces. In accordance with Eq. (REF ), the user and campaign IDs can be mapped to the mean value and standard deviation that define the distribution of the hidden customer and campaign features for a specific customer and a specific promotional campaign. However, the user and campaign IDs cannot be deterministically mapped to the values of the hidden customer and campaign features for a specific sample, because these features are randomly sampled from a normal distribution with a fixed mean and a fixed standard deviation.
r
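A small NumPy sketch of the mapping described above: each user ID is tied to a fixed mean and standard deviation, while the hidden feature of an individual sample is redrawn from that normal distribution; the ranges below are illustrative, not the paper's exact generator settings.

import numpy as np

rng = np.random.default_rng(0)
n_users = 1000

# Per-user parameters of the hidden-feature distribution, fixed once per user ID.
hidden_mean = rng.uniform(-1.0, 1.0, size=n_users)
hidden_std = rng.uniform(0.0, 0.1, size=n_users)

def sample_hidden_feature(user_id):
    """The user ID maps deterministically to (mean, std), but each sample is drawn at random."""
    return rng.normal(hidden_mean[user_id], hidden_std[user_id])

print([round(sample_hidden_feature(42), 3) for _ in range(3)])   # varies around one fixed mean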
44440e91-8bba-4943-b2be-8ab89cd22b07
If only identifying information for the customer (user ID) and for the promotional campaign (campaign ID) were known, any inference of the optimal offer would be based on memorization. The wide-only model with the user ID and campaign ID cross products as input correctly predicts the optimal offer in about a third of cases, while the deep-only model with only the user ID and campaign ID embeddings as input correctly predicts the optimal offer in just over a quarter of all cases (see Table REF ). The differences in accuracy between the two models are not due to differences in the number of trainable parameters, as varying layer sizes for the deep model did not significantly affect prediction accuracy. These differences are also not due to insufficient training data, as training both models on smaller subsets did not significantly change accuracies either. Therefore, the wide model outperforms the deep model when it comes to pure memorization, at least for the embedding sizes used for the user ID and the campaign ID. This difference in accuracy may be due to more efficient memorization in the wide model, compared to the deep model, where the user ID and campaign ID embeddings are randomly initialized and then adjusted every time a user ID or campaign ID value is encountered in the training data.
r
b9174ed9-00c6-403f-a7ae-f41dab9e9e19
Note that the poor performance of both the wide and the deep models trained using only the user and campaign IDs is largely determined by the relatively broad distributions of the customer (campaign) features for a given user ID (campaign ID). For instance, reducing the maximum standard deviations for customer feature 1, customer feature 2, and the hidden customer feature from 0.2, 0.3, and 0.1 respectively, to 0.05, 0.1, and 0.05 respectively, without changing the campaign feature distributions, improves the performance of the deep model from 26.9% to 62.1%. If the customer and campaign features were completely determined by the user and campaign IDs, in accordance with Eq. (REF ), then near-perfect prediction accuracy could, in principle, be achievable for both the wide and deep models, provided that they were sufficiently trained. <TABLE>
r
37b3dc73-9b48-40b7-a013-c49444919751
If only the projections of the complete customer and campaign feature vectors onto the known customer and campaign feature subspaces (customer features 1 and 2, campaign features 1 and 2) were known, then the accuracy of the optimal offer predictions would largely be determined by how well these projections approximate the complete feature vectors. In the limit of the hidden customer and hidden campaign features being zero for all samples, customer features 1 and 2 and campaign features 1 and 2 would provide complete information about the customer and the promotional campaign. Consequently, near-perfect accuracy for this model could be expected, for a sufficiently large and sufficiently trained neural network. When the deep model only takes as input customer features 1 and 2 and campaign features 1 and 2 in our generated dataset that has relatively large hidden customer and campaign features, it can predict the optimal offer in over two thirds of cases (see Table REF ). Further improving the prediction accuracy requires accounting for the hidden customer and campaign features that can, to an extent, be inferred from the user and campaign IDs. The precision of this inference is limited by the variability in the hidden features for a given customer (promotional campaign).
r
7cebb4fe-90ba-4df9-90f0-a984c35289b7
Predictably, the highest accuracy of the optimal offer predictions was achieved when both the identifying information for the customer and the promotional campaign and the projections of the complete customer and campaign feature vectors onto the known customer and campaign feature subspaces were provided to the machine learning models. The deep model that took customer features 1 and 2, campaign features 1 and 2, as well as the user ID and campaign ID embeddings as input achieved an accuracy of 80.8% on the test set. The wide & deep model that took customer features 1 and 2 and campaign features 1 and 2 as input to the deep part, as well as user ID–campaign ID cross-products as input to the wide part, achieved an accuracy of 77.3% on the test set. Interestingly, the deep-only model slightly outperformed the wide & deep model, even though the wide model that took user ID–campaign ID cross-products as input outperformed the deep model that took only user ID and campaign ID embeddings as input. Examining the confusion matrices for the predicted optimal offer, \(\widetilde{a}\in \mathbb {A}\) , vs. the actual optimal offer, \(a\in \mathbb {A}\) , suggests that the performance of both models is largely limited by the randomness in the hidden customer and campaign features: \(\left| a - \widetilde{a} \right| \le 1\) for over 95% of the predictions.
r
68ccb595-4d73-46bd-8ae7-52989b69409b
It should be noted that the reasonably high accuracies for the deep-only and the wide & deep models that took customer features 1 and 2, campaign features 1 and 2, as well as the user and campaign IDs as input were achieved because the training, validation, and test sets were randomly sampled from the generated dataset. Consequently, the training, validation, and test sets all contained samples with (nearly) all possible user ID–campaign ID combinations. Randomly splitting the first 80% of the data between the training and validation sets at a 3:1 ratio and using the remaining 20% as the test set leads to significantly worse prediction accuracies on the test set for both the deep-only and the wide & deep models (Table REF ). In this case, the model is trained and tested on different identifying information (different campaigns), making memorization useless and, in fact, counterproductive: the accuracies on the test set for both the deep-only and the wide & deep models turn out to be considerably lower than the accuracy for the deep model that just uses customer features 1 and 2 and campaign features 1 and 2 as input (Table REF ). In practice, this result means that considering a cross-product of a user ID with a campaign type, for which certain features are similar across multiple campaigns, is likely to lead to enhanced performance of a machine learning model. Conversely, using a cross-product of a user ID with a campaign ID that will not re-occur in the data to which the model will eventually be applied may be detrimental to the model's prediction accuracy.
r
e27e840a-ee90-40ff-b8c4-41f326f3e7b6
We evaluated the benefit of using the implementations of UCB and TS, outlined in Section REF , on the output of the wide & deep model for our dataset (see Table REF ). Both bandit algorithms show a slight improvement in accuracy compared to the wide & deep model without explicit exploration. Our implementation of UCB consistently quantified uncertainty and predicted optimal offers better than TS, but was considerably more computationally expensive, with runtimes on average an order of magnitude longer. <TABLE>
r
d7a96657-9cb0-4b31-a66d-ad8b20269d6e
Identifying features, such as user or campaign IDs, can significantly improve predictions of machine learning models, especially when properties of the customer and of the promotional campaign are uniquely determined by their identities. However, variability in features that correspond to the same ID reduces the predictive power of identifying features (or other categorical features that characterize the properties of individual customers or campaigns).
d
91753d15-5092-468a-8ce1-bea70db947cb
If the optimal offer is, to some extent, determined by the cross-product of two or more categorical features, then any improvements in the accuracy from using these features in a model hinges on the following requirement: the data used to train the model should contain the same combinations of the categorical feature values as the test set. Otherwise, using cross-products of categorical features may be detrimental to the model's accuracy on the test set.
d
47c4d8f5-a7c5-4b7b-b73b-4cb73f5568aa
Cross-products of relevant categorical features can be provided as input to the wide part of a wide & deep neural network. Alternatively, embeddings for individual categorical features can be included in the input for a deep neural network. In the setting we explored, both models showed similar prediction accuracies and computational costs. These results may be dataset dependent, so it would be of interest to study how the accuracies and computational costs for both models depend on the total numbers of numerical and categorical features, the variability between features for a given customer and/or promotional campaign, the number of offers per campaign, the embedding sizes for categorical features, etc.
d
bd9b6a25-207b-4a9c-a4d4-0f3d0c32bfcb
A wide & deep model requires more careful feature engineering than a deep model with embeddings for categorical features. Specifically, it requires explicitly identifying the cross-products of categorical features that affect the optimal offer. Conversely, when using embeddings in a deep model, any interplay between categorical features is implicit in the trained network parameters. If the relevant cross-products are easily identifiable, using a wide & deep model may be the better option. Otherwise, particularly for datasets with many categorical features, a deep model with embeddings may require less extensive exploratory data analysis.
d
c4bbbb35-924f-4555-a06c-531fa59e2954
Our network approximations of the TS and UCB bandit algorithms conferred an increase in accuracy of 1–3% for predicting the optimal offer. However, the computational cost of bandit algorithms may complicate their adoption in online settings. Because in online settings a sound exploration strategy can limit algorithmic bias, we intend to explore possible solutions in future research.
d
a0385462-a4f9-4a70-b7b4-f614cdb54505
Direct-to-consumer DNA testing has made it possible for people to gain information about their ancestry, traits, and susceptibility to various health conditions and diseases. The simplicity of testing services by companies like 23andMe, AncestryDNA, and FamilyTree DNA has drawn a consumer base of tens of millions of individuals. These sequenced genomes are of great use to the medical research community, providing more data for genome-phenome association studies, aiding in early disease diagnoses, and enabling personalized medicine.
i
c6639db9-87aa-4590-b6f7-be32a0e47736
While genome sequencing data gathered in medical settings is anonymized and its use often restricted, individuals may also choose to share their sequenced genomes in the public domain via services like OpenSNP greshake2014opensnp and the Personal Genome Project ball2014harvard. Moreover, even the sharing of de-identified data for medical research typically faces tension between open sharing within the research community and exposure to privacy risks. These risks generally stem from the ability of some data recipients to link the genomic data to the identities of the corresponding individuals. One particularly acute concern raised in recent literature is the ability to link a genome to a photograph of an individual's face lippert2017identification, crouch2018genetics, qiao2016detecting, caliebe2016more. Specifically, these studies have shown that one can effectively match high-quality three-dimensional face maps of individuals with their associated low-noise sequencing data. However, for a number of reasons, it is unclear whether these demonstrations translate into practical privacy concerns. First, these studies to date have relied on high-quality, and often proprietary, data that is not publicly available. This is a concern because such high-quality data is, in fact, quite difficult to obtain in practice. While many people post images of their face in public, these are generally two-dimensional, with varying degrees of quality. In addition, observed phenotypes in real photographs need not match actual phenotypes, making it challenging to correctly infer one's genotype and vice versa. For example, people may color their hair or eyes (through contact lenses). Finally, increasing population size poses a considerable challenge to the performance of genome-photograph linkage. Given a targeted individual and a fixed feature space (namely the predicted phenotypes in our case), the chances of encountering individuals who are similar to the target individual in this feature space increase with population size. Another related study by Humbert et al. humbert2015anonymizing investigates the re-identification risk of OpenSNP data, but assumes accurate knowledge of a collection of phenotypes, including many that are not observable from photographs, such as asthma and lactose intolerance. We consider this approach to be a theoretical upper bound in our study, that is, matching performance when ground-truth phenotypes are known a priori, as opposed to predicted from face images.
i
8ced8c60-19fd-4173-8b5c-563b207c5225
Given these potential confounders in the real world, in this paper we study the risk of re-identification of shared genomic data when it can potentially be linked to publicly posted face images. To this end, we use the OpenSNP greshake2014opensnp database, along with a new dataset of face images collected from an online setting and paired with a select subset of 126 genomes. We develop a re-identification method that integrates deep neural networks for face-to-phenotype prediction (e.g., eye color) with probabilistic information about the relationship between these phenotypes and SNPs to score potential image-genome matches. The first purpose of our study is to assess how significant the average risk is, as a function of population size, given the nature of available data as well as current technology. Our second purpose is to introduce a practical tool to manage individual risk that enables either those who post face images online, or the social media platforms that manage this data, to trade off risk and utility from posted images according to their preferences. We find that the overall effectiveness of re-identification and, thus, the privacy risk is substantially lower than suggested by the current literature that relies upon high-quality single nucleotide polymorphism (SNP) data and three-dimensional face maps. While some of this discrepancy can be attributed to the difficulty of inferring certain phenotypes—eye color, in particular—from images, we also observe that the risk is relatively low, especially in larger populations, even when we know the true phenotypes that can be observed from commonly posted face images. Indeed, even using synthetically generated data that makes optimistic assumptions about the nature of SNP-to-phenotype relationships, we find that the average re-identification success rate is relatively low.
i
f998be50-7e66-4a58-832b-da9923a22230
For our second contribution, we propose a method based on adding adversarial perturbations to face images prior to posting them, with the aim of minimizing the score of the correct match. This framework is tunable in the sense that the user can specify the amount of noise they can tolerate, with greater added noise having a greater deleterious effect on re-identification success. We show that even using imperceptible noise we can often successfully reduce privacy risk, even if we specifically train deep neural networks to be robust to such noise. Furthermore, adding noise that is mildly perceptible further reduces the success rate of re-identification to no better than random guessing.
i
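The general mechanism can be illustrated with a simple projected-gradient perturbation that lowers a phenotype classifier's confidence in the true label, which in turn lowers the image-genome match score; this is only a sketch under a max-norm noise budget, not the authors' exact optimization, and the toy model and parameters are placeholders.

import torch
import torch.nn.functional as F

def perturb(image, model, true_label, epsilon=0.03, steps=10, lr=0.01):
    """Add a small perturbation (max-norm epsilon) that reduces the predicted
    probability of the true phenotype label, making genome linkage harder."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta)
        # Ascend the cross-entropy of the true label, i.e. descend its log-likelihood.
        loss = F.cross_entropy(logits, true_label)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)        # keep the noise (nearly) imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Example with a toy classifier over 3 eye-color classes (placeholder model and image).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 3))
img = torch.rand(1, 3, 64, 64)
adv = perturb(img, model, torch.tensor([1]))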
2767358f-3280-4552-b5b5-d3bcf7d841bd
We investigate the risk of re-identification in genomic datasets "in the wild" based on linkage with publicly posted photos. Using the public OpenSNP dataset, we identified 126 individual genotypes for which we were able to successfully find publicly posted photographs (e.g., some were posted along with genomic data on OpenSNP itself). We used a holistic approach to associate genomes to images as follows. If a user's picture was posted on OpenSNP, higher-quality pictures could often be found under the same username on a different website. When no picture was posted for a certain user on OpenSNP, we found pictures posted on different websites under the same username, and used self-reported phenotypes on OpenSNP to ensure with a reasonable degree of certainty that the image corresponds to the genome. This resulted in a dataset of SNPs with the corresponding photos of individuals, which we refer to as the Real dataset. To characterize the error rate in phenotype prediction from images, we constructed two synthetic datasets, leveraging a subset of the CelebA face image dataset liu2015faceattributes, and OpenSNP. We created artificial genotypes for each image (here, genotype refers only to the small subset of SNPs we are interested in - we refer the reader to Supplementary Table REF for a full list) using all available data from OpenSNP where self-reported phenotypes are present. First, we consider an ideal setting where for each individual, we select a genotype from the OpenSNP dataset that corresponds to an individual with the same phenotypes, such that the probability of the selected phenotypes is maximized, given the genotype. In other words, we pick the genotype from the OpenSNP data that is most representative of an individual with a given set of phenotypes. We refer to this dataset as Synthetic-Ideal. Second, we consider a more realistic scenario where for each individual we select a genotype from the OpenSNP dataset that also corresponds to an individual with the same phenotypes, but this time at random according to the empirical distribution of phenotypes for particular SNPs in our data. Since CelebA does not have labels for all considered phenotypes, 1000 images from this dataset were manually labeled by one of the authors, the results of which were confirmed by another of the authors. After cleaning and removing ambiguous cases, the resulting dataset consisted of 456 records. We refer to this dataset as Synthetic-Realistic.
r
a497bc7e-e106-4b42-9809-c31e07c353f1
Our re-identification method works as follows. First, we learned deep neural network models to predict visible phenotypes from face images, leveraging the CelebA public face image dataset, in the form of 1) sex, 2) hair color, 3) eye color, and 4) skin color. We learned a model separately for each phenotype by fine-tuning the VGGFace architecture for face classification parkhi2015deep. The result of each such model is a predicted probability distribution over phenotypes for an input face image. Second, for each input face image \(x_i\) , and for each phenotype \(p\) , we use the associated deep neural network to predict the phenotype \(z_{i,p}\) , that is, the most likely phenotype in the predicted distribution. Third, we assign a score, based on the log-likelihood, to each genotype-image pair \((x_i,y_j)\) as follows: \(p_{i,j} = \sum _{p \in \lbrace sex,hair,skin,eye\rbrace } \log P(z_{i,p} | y_j )\)
r
188b2e9a-4f16-48bb-9864-5ff4a1c72339
This approach is similar to the one introduced by Humbert et al. humbert2015anonymizing, but differs in that we predict phenotypes from face images as opposed to assuming complete knowledge. Finally, armed with the predicted log-likelihood scores \(p_{i,j}\) for genotype-image pairs, we select the top-\(k\) -scored genotypes for each face image, where \(k\) is a tunable parameter that allows for a trade-off between the precision and recall of predictions.
r
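Given conditional phenotype distributions per genotype, the scoring and top-k selection described above reduce to a few lines of NumPy; the probability tables and predictions below are synthetic stand-ins, not values derived from OpenSNP.

import numpy as np

phenotypes = ["sex", "hair", "skin", "eye"]
n_images, n_genotypes = 5, 8
rng = np.random.default_rng(1)

# P(z | y): for each phenotype, a (n_genotypes x n_values) table of conditional probabilities.
tables = {p: rng.dirichlet(np.ones(3), size=n_genotypes) for p in phenotypes}
# Most-likely phenotype value predicted from each face image by the phenotype classifiers.
predicted = {p: rng.integers(0, 3, size=n_images) for p in phenotypes}

# Score every (image i, genotype j) pair by the log-likelihood of the predicted phenotypes.
scores = np.zeros((n_images, n_genotypes))
for p in phenotypes:
    scores += np.log(tables[p][:, predicted[p]]).T     # (n_genotypes, n_images) transposed

top_k = 3
ranked = np.argsort(-scores, axis=1)[:, :top_k]        # top-k candidate genotypes per image
print(ranked)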
2f5fcf9a-4f42-4f8a-a0d2-48ef7b8827f6
The effectiveness of re-identification is strongly related to both the choice of \(k\) above, as well as the size of the population that one is trying to match against. More specifically, as we increase \(k\) , one would naturally expect recall (and, thus, the number of successful re-identifications) to increase. On the other hand, a larger population raises the difficulty of the task by increasing the likelihood of spurious matches. We therefore evaluate the impact of both of these factors empirically.
r
89eeaac6-eda3-4ace-9423-9d2b64fb0536
Our findings suggest that the privacy risks to shared genomic data stemming from attacks that match genomes to publicly published face photographs are low, and relatively easy to manage so as to allay even the diverse privacy concerns of individuals. Of course, our results do not imply that shared genomic data is free of concern. There are certainly other potential privacy risks, such as membership attacks on genomic summary statistics hagestedt2020membership, liu2018detecting, Chen2020.08.03.235416, wan2017controlling, wan2017expanding, zerhouni2008protecting, raisaro2017addressing, shringarpure2015privacy, craig2011assessing, which would allow the recipient of the data to determine the presence of an individual in the underlying dataset. This type of attack is of concern because it would allow an attacker to associate the targeted individual with the semantics inherent in the dataset. For instance, if the dataset were solely composed of individuals diagnosed with a sexually transmitted disease, then membership detection would permit the attacker to learn that the target had the disease in question. Moreover, we emphasize that our results are based on current technology; it is possible that improvements in either the quality of data, such as broad availability of high-definition 3D photography, or the quality of AI, such as highly effective approaches for inferring eye color from images, will significantly elevate the risks of re-identification. However, through several studies, which include synthetic variants controlling for the quality of data, as well as evaluations that assume we can infer observable face phenotypes with perfect accuracy (see, for example, the results in Fig. REF , as well as in the Supplementary Figures REF and REF ), we show that even with advances in technology the risk is likely to remain limited.
d
1cf972e3-c2c5-4950-8f56-efbdfceb9a29
Cyber-Physical Systems (CPSs) often employ distributed networks of embedded sensors and actuators that interact with the physical environment. The availability of cheap communication technologies (e.g., the internet) has certainly improved scalability and functionality in several applications. However, it has also made CPSs susceptible to cyber security threats, making cyber security of primary importance for the safe operation of CPSs.
i
e42b70ee-57d2-4199-8d67-26b59d286c4f
Assuming that the sensor-to-controller and controller-to-actuator communication channels are the only ones in CPSs carried over the internet and that malicious agents can alter the data flows in these channels, two general classes of cyber attacks can be considered: (i) False Data Injection (FDI), and (ii) Denial of Service (DoS). An FDI (a.k.a. deception) attack affects the data integrity of packets by modifying their payloads [1]}, [2]}, [3]}. In a DoS attack, the attacker only needs to disrupt the system by preventing communication between its components. In this paper, we focus on a specific type of DoS attack, the so-called Prevented Actuation Attack (PA2) [4]}, [5]}, where the attacker prevents the exchange of information between the controller and the actuators. An attacker can launch such attacks on the physical layer or the cyber layer. Examples of real-world PA2 are: the sleep deprivation torture attack [6]} (a.k.a. battery exhaustion attack), which exhausts the battery of a surveillance robot or a medical implant until it can no longer function; the door lock attack [7]}, which suppresses the operation of a smart door by injecting a `close' command every time an `open' command is received; and the fatigue bearing attack [8]}, which restrains the operation of the lubricant system in wind turbines to damage gearboxes.
i
8735e709-f7a6-48b1-a46e-c869cab93be0
Regardless of the type of attack, the attack detection approaches presented in the literature can be classified as: (i) passive approaches, and (ii) active approaches. Note that we use the same terminology as the fault literature [1]} to classify attack detection approaches, as faults and attacks usually manifest themselves similarly in control systems despite their natural differences. In passive approaches, the input-output data of the system are measured (remotely or on-site) and analyzed for any possible stealthy behavior, and then a decision about an attack is made. Passive approaches are widely studied and commonly used in many of today's applications, e.g., [2]}, [3]}, [4]}, [5]}, [6]}. However, they might not be able to recognize an attack when the input-output data are not informative enough. Also, they do not address the stability/safety of the system during the detection horizon, the time interval from the instant an attack occurs to the instant it is detected.
i
7c5ed2a0-3f2c-4f38-ad13-b695c9161432
Active approaches interact with the system during the detection horizon by means of a suitably designed input signal that is injected into the system to increase the quality of detection, shorten the detection horizon, and enforce stability/safety during the detection horizon. Contrary to passive approaches, active approaches are historically younger and still under development. To the best of the authors' knowledge, the only existing active attack detection approach in the literature is physical authentication (a.k.a. digital watermarking) [1]}, [2]}, [3]}. The core idea of this method is to inject a known noisy input into the system and observe its effect on the output of the system. Thus, if an attacker is unaware of this physical watermark, the system cannot be adequately emulated, as the attacker is unable to consistently generate the component of the output associated with this known noisy input. Physical authentication, which is mainly used in the detection of replay attacks (a.k.a. playback attacks) [4]}, can be effective if the noise injected at the system input is large enough to achieve good detection performance, which may degrade the control performance. Moreover, this method injects the noisy input irrespective of the probability of attack occurrence, which leads to unneeded loss in control performance. Furthermore, in the case of constrained systems [5]}, [6]}, [7]}, as shown in [8]}, the extra uncertainty injected into the system by the noisy input should be taken into account in the design procedure, which leads to tighter constraints and consequently more conservative behavior.
i
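To make the watermarking idea concrete, the toy simulation below injects a known Gaussian watermark into the input of a scalar linear system and flags a replay attack when the one-step prediction residual stops correlating with the current watermark; the plant parameters, attack model, and detection threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
a, b, T = 0.9, 1.0, 500                      # known scalar plant: x[k+1] = a*x[k] + b*u[k] + noise

def watermark_correlation(replayed):
    watermark = rng.normal(0, 0.1, size=T)   # known noisy input added on top of the nominal input
    x, y = 0.0, []
    for k in range(T):
        x = a * x + b * (0.5 + watermark[k]) + rng.normal(0, 0.05)
        y.append(x)
    y = np.array(y)
    if replayed:                             # replay attack: old outputs that ignore the watermark
        y = np.roll(y, T // 2)
    # One-step residual; approximately b*watermark plus noise when measurements are genuine.
    residual = y[1:] - a * y[:-1] - b * 0.5
    return np.corrcoef(residual, watermark[1:])[0, 1]

# High correlation when measurements are genuine, near zero under replay;
# e.g. flag an attack if the correlation drops below an illustrative threshold of 0.3.
print("normal  :", round(watermark_correlation(False), 3))
print("replayed:", round(watermark_correlation(True), 3))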
81639331-cdcb-4345-9b1c-507dd50bd5d3
This paper answers the following question: how can the control input sequence for a constrained CPS be determined so as to improve the detection performance without degrading the control performance? Inspired by [1]}, this paper answers this question for the case of PA2. The proposed structure consists of two units: (i) a detection unit, and (ii) a control unit. The detection unit uses a priori information and the input-output data of the system over a detection horizon of a certain length to generate a decision variable that represents the situation of the system. More precisely, the detection unit recognizes the existence/inexistence of a PA2 and identifies the attacked actuators. The control unit generates the control input, which is optimal according to a cost function and guarantees constraint satisfaction at all times. Both control and detection aims are defined in the form of stochastic objective functions, i.e., they are uncertain due to noises and the initial condition. The open-loop information processing strategy [2]} is then used to express the stochastic objective functions as deterministic functions. Finally, in order to evaluate the quality of the control input sequence in terms of the detection and control aims, a compromise between the two aims is defined in the form of a multi-objective optimization problem whose solution can be computed by means of available optimization tools.
i
36eda3be-9735-428f-8b89-34ce560546c7
This paper proposed an optimization approach for active attack detection and control of constrained CPSs subject to expectational linear constraints. The paper mainly focused on the PA2 attack, where the attacker prevents the exchange of information between the controller and the actuators. A set of parallel detectors based on a hypothesis-testing approach was proposed. Using a probabilistic approach to deal with uncertainties, the detection and control aims were formulated as two separate stochastic objective functions. The open-loop information processing strategy was deployed to transform the stochastic functions into deterministic ones. Two alternative compromises between the detection and control aims were presented in the form of constrained optimization problems. The effectiveness of the proposed active approach was validated through simulation studies. <FIGURE><FIGURE>
d
2a6a3288-ce0c-4478-a575-c0033d95b5eb
Traditionally, industrial systems were isolated from external access, and security was not a primary design criterion. Many of today's Industrial Control Systems (ICSs) are exposed to the Internet, creating security vulnerabilities due to the lack of proper security solutions [1]}. An ICS adversary often carries out different actions to exploit these vulnerabilities, cross the border between Information Technology (IT) and Operational Technology (OT) networks, and launch a targeted attack against ICS networks. Many organisations use cyber threat hunting to proactively detect hidden intrusions before they cause a significant breach [2]}. Hunting aims to detect threat actors early in the cyber kill chain by searching for signs of an intrusion and then providing hunting strategies for future use [3]}. While threat hunting in conventional communications networks is not novel, threat hunting in ICS networks, which consist of a combination of IT and OT networks, is a new challenge due to the diverse nature of OT networks [4]}, [5]}, [6]}. This necessitates an automated threat hunting solution that can provide adequate security to monitor and control the operation of ICS networks.
i
a15a5042-73fe-4302-8c0c-25d6f14b49d1
MITRE's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) is a model of cyber adversary behaviour [1]}. It outlines the different phases of an attack's lifecycle in IT and OT networks and the platforms that attackers are known to target. This model helps expand the knowledge of threat hunters by outlining the tactics, techniques, and procedures (TTPs) that adversaries use to gain access to a system and execute their targeted attacks. Since it is difficult for adversaries to change their TTPs in the middle of an operation, MITRE ATT&CK focuses on TTPs. The TTPs provided by MITRE ATT&CK are ordered according to the attack life cycle. This ordering can help threat hunters generate a hypothesis based on the initial TTPs of an attack and predict other possible techniques across future phases of the attack life cycle. MITRE ATT&CK consists of two categories: (i) MITRE ATT&CK for IT [1]}, and (ii) MITRE ATT&CK for ICS (released in 2020) [3]}. MITRE ATT&CK for ICS presents adversarial TTPs against OT systems. However, since it was only released recently, its application to threat hunting in ICS networks has not been sufficiently studied.
i
80c66c32-c550-42f6-ac0c-13e73bdf397d
In the MITRE ATT&CK framework, tactics describe the objective an attacker aims to achieve with a compromise. For each tactic, the framework provides a wide array of techniques that threat actor groups have used. Furthermore, the ATT&CK for ICS matrix can help determine what types of data sources are required to detect threats in ICS environments [1]}, [2]}, [3]}. Threat hunting is a human-based approach and is therefore prone to human error [4]}, [5]}, [6]}. Automating the threat hunting process can thus help reduce human errors and increase detection speed.
i
801bff51-8014-4aca-88e1-6d1f3c8cbed3
Unlike the existing threat hunting solutions, which are cloud-based, costly, and IT-focused, this paper designs and develops a central, open-source, automated threat hunting solution for ICS networks. In this paper, we investigate threat hunting using MITRE ATT&CK for ICS networks. The input data for our threat hunting solution are received from Ethernet-based devices connected to an ICS network. The major contributions of the paper are as follows:
i
e5c205d3-c7d6-4d59-bbd3-1b15f4fa0658
We design a central and open-source framework for automated threat hunting in ICS networks. We provide a detailed proof-of-concept implementation of the proposed automated threat hunting solution. The implementation consists of the following two phases: Automatic detection of adversarial TTPs: the first step in automating threat hunting builds on the fact that attackers mostly reuse common adversarial TTPs, which are stored in a database or framework specifically developed for more effective threat hunting. To detect TTPs in ICSs, we provide a central threat hunting platform that communicates with the MITRE ATT&CK for ICS framework and automatically detects TTPs. Prediction of future TTPs: a supervised machine learning method is used to automatically analyse attack TTPs, generate and validate a hunting hypothesis, and predict the future steps of the detected ICS attack.
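As a rough sketch of the prediction phase, the snippet below encodes the techniques observed so far as a multi-hot vector and trains a simple supervised classifier (logistic regression here, as a stand-in for the method used in the paper) to suggest a likely next technique; the technique IDs and training pairs are made up for illustration and are not taken from our dataset.

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.linear_model import LogisticRegression

# toy training pairs: (techniques observed so far) -> next technique (illustrative IDs)
history = [
    (["T0886", "T0859"], "T0843"),
    (["T0886"], "T0859"),
    (["T0817", "T0859"], "T0843"),
    (["T0817"], "T0859"),
]
mlb = MultiLabelBinarizer()
X = mlb.fit_transform([observed for observed, _ in history])  # multi-hot encoding
y = [nxt for _, nxt in history]

clf = LogisticRegression(max_iter=1000).fit(X, y)
observed_now = [["T0886", "T0817"]]
print(clf.predict(mlb.transform(observed_now)))  # predicted next technique
```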
i
7ccb9c86-7741-4f71-8863-3b28e4fb290a
To evaluate the accuracy of the automated threat hunting methods, sample training and testing datasets were generated based on MITRE ATT&CK for ICS to train and test machine learning algorithms for real-world APT attacks.
i
4775cf65-3c70-43fe-8243-4b4d249e91e2
The rest of this paper is organised as follows. In Section , we discuss related works. Section  explains the proposed framework of our open-source threat hunting method. ICS dataset generation and the framework components are discussed in Section and Section . The implemented machine learning-based classification is presented in Section . In Section , we present the achieved results and discuss the major findings. Finally, in Section , we conclude the paper with future work.
i
3b981296-ae31-4df8-a33e-2fab69da8495
Several studies examine threat hunting in IT networks [1]}, [2]}, [3]}, [4]}. Yet, due to the growing number of Advanced Persistent Threats (APTs) against industrial networks, proactive threat hunting solutions also need to be employed in ICS networks; this area of research, however, is lacking in the recent literature. Furthermore, the automation of threat hunting is another challenge that has not been adequately addressed even in IT networks. While there are some papers on the automation of hunting in IT networks [5]}, [6]}, [7]}, [8]}, [9]}, more research is needed for ICS networks.
w
bbcc6540-d3e5-4df0-b02d-f0bcbca0cd63
In [1]}, known APTs are used to automatically generate hypotheses, which are then assessed against given probability readings. The automatic hypothesis generation uses the Bro Intrusion Detection System (IDS), now known as Zeek. The IDS generates alerts with attached hypotheses; the hypotheses are then compared to the detected APTs to determine whether they correspond to genuine intrusions or false positives [1]}.
w
c8300084-aa31-4b6f-a005-efa35adfc541
A deep learning stack was developed in [1]} to treat APTs as multi-vector, multi-stage attacks that can be captured by using the entire network flow and raw data as input to the detection process. The solution considered outliers, data dimensions, non-linear historical events, and previously unknown attacks. Further, it used different algorithms to perform detection and classification. However, unlike their approach, we focus on the automation of threat hunting in ICS networks.
w
010af5f8-dc90-4bb0-b20a-17e5d6722bf3
The proposal in [1]} developed a threat hunting method for IT networks that uses a machine learning-based system to detect and predict APT attacks through a holistic approach. The system consists of three main phases: threat detection, alert correlation, and attack prediction. The proposed system is able to capture attacks in a timely fashion.
w
bf969c25-d069-48dd-87e6-25a530a58760
Another hunting solution for IT environments is proposed in [1]}. It is deployed in an Ubuntu virtual machine and detects APT tactics through synthesised analysis and data correlation. The framework takes logs, configuration files, and previously seen APT tactics as inputs, and uses these variables to generate a ranked list of APT tactics based on completeness.
w
514d1d79-387f-44f4-9602-f5f0856ad316
DFA-AD is another architecture designed for the detection of APTs using event correlation techniques [1]}. The system acquires its information from collected and processed network traffic packets. It uses a three-stage approach. The first stage consists of four classifiers, each with a different detection scheme, to detect the techniques used in the various steps of an APT attack. In the second stage, the event correlation modules take the event outputs from the classifiers and correlate each of them individually to flag a potential APT attack. Finally, in the third stage, a voting service analyses the correlations and determines a result. Through this voting phase, DFA-AD reduces the rate of false positives and increases the accuracy of APT detection.
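The voting stage can be illustrated with a trivial sketch: each classifier emits a boolean verdict for a correlated event and a majority (or quorum) vote decides whether to raise an APT alert. The verdicts and quorum below are illustrative only and do not reproduce DFA-AD's actual implementation.

```python
def vote(verdicts, quorum=None):
    """Majority/quorum vote over per-classifier boolean verdicts for one event."""
    quorum = quorum if quorum is not None else len(verdicts) // 2 + 1
    return sum(verdicts) >= quorum

# four classifiers inspecting the same correlated event stream (toy verdicts)
event_verdicts = [True, True, False, True]
print(vote(event_verdicts))  # True -> raise an APT alert; dissenting classifiers are outvoted
```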
w
ddb3fcd9-1dc3-4a64-9528-1169c6c99303
The papers discussed above proposed automation of threat hunting in IT networks, leaving a significant gap in automated hunting for ICS networks. Moreover, threat hunting in the ICS domain lacks sufficient research on feeding the MITRE ATT&CK framework into threat hunting tasks. Other automated solutions have utilised the MITRE framework to perform threat hunting and detect APTs. For instance, [1]} and [2]} employed MITRE ATT&CK for Enterprise to detect threats. The proposal in [2]} further used statistical analysis based on MITRE ATT&CK to learn APT TTPs and predict the future techniques that an adversary may perform. However, unlike our approach, once again, they do not consider the automation of threat hunting in ICS networks.
w
64371b20-6e88-45d7-8cfa-70b62ce5815d
In [1]}, a framework called 'Spiking One-Class Anomaly Detection', based on the evolving 'Spiking Neural Network Algorithm', is discussed for APT detection in ICS networks. The algorithm applies a one-class methodology, training a model exclusively on data that characterise the operation of an ICS. It can detect abnormal behaviours and is suitable for applications with massive amounts of data. However, this algorithm does not provide an application to display the results on the test data.
w
004d0600-4497-47d8-8c8e-73c995a62986
Another ICS detection method [1]} used spatio-temporal association analysis to detect intrusions in industrial networks. It focused on mining and retrieving features of historical APT attacks. The proposal used a multi-feature SVM classification algorithm to detect abnormal APT activity. However, unlike our motivation, the solution does not explicitly state which of the many APT groups is attacking, nor does it discuss the techniques used in the detected attack.
w
fecb0ead-d532-4d8a-a95a-86da3893943f
In summary, threat hunting still has areas that need further development; in particular, it is imperative to incorporate knowledge of existing APTs. At the same time, defence mechanisms require additional validation and enhancement so that they can be easily integrated into ICSs, which is missing in the recent literature. Our paper proposes a framework that (i) addresses threat hunting in ICS networks, (ii) applies the new MITRE ATT&CK for ICS framework to the threat hunting process, and (iii) automates the hunting tasks. In addition, our framework can predict possible future TTPs and identify the type of APT using these TTPs.
w
143b5859-c8d2-4e2b-8dd5-e84e43a9e7a5
The proposed framework is of significant use for ICSs due to its novel approach to threat hunting. The solution can be deployed in ICS organisations, which in turn minimises human errors, time, and overall costs in the threat hunting process. Further, the framework can provide detailed information about the attacks based on the MITRE ATT&CK framework, making it useful for ICS industries to interpret attacks in an extensive manner. This type of application is also rarely found in the ICS industry. Using the dataset in Table REF , an SVM classifier was trained to analyse the TTPs detected in the network and identify the APT groups that have used the same TTPs. The SVM classifier can accurately predict the attacking groups in both the IT and OT matrices. Once this high accuracy was confirmed, the classifier was implemented in Python; it posts the predicted groups and detected TTPs to Elasticsearch. The index is requested by a server-side application developed using the Express JS framework. An additional single-page application was developed using React JS, which retrieves these data from the server to display the detected TTPs and the predicted group. As shown in Fig. REF , a bar chart allows users to see the adversaries detected in their systems. The chart can display multiple detected groups when specific TTPs are given as input. It also allows users to find out more about the detected groups, as it takes them to the group's details on the MITRE framework website. In Fig. REF , the React application predicts multiple APT attacks in our network using the two TTPs detected by the central threat hunting phase. <FIGURE>
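A minimal sketch of the classification step described above follows, assuming multi-hot encoded TTP observations and made-up group labels; the Elasticsearch and React plumbing is omitted, and the feature encoding is illustrative rather than the exact one used in our implementation.

```python
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

# toy mapping from observed TTP sets to APT groups (IDs and labels are illustrative)
samples = [
    (["T0886", "T0859", "T0843"], "GroupA"),
    (["T0817", "T0843"], "GroupB"),
    (["T0886", "T0843"], "GroupA"),
    (["T0817", "T0859"], "GroupB"),
]
mlb = MultiLabelBinarizer()
X = mlb.fit_transform([ttps for ttps, _ in samples])  # multi-hot TTP features
y = [group for _, group in samples]

clf = SVC(kernel="linear").fit(X, y)
detected = [["T0886", "T0843"]]                  # TTPs flagged by the hunting phase
print(clf.predict(mlb.transform(detected)))      # predicted APT group using those TTPs
```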
d
65771bb0-af1c-4cb7-9c68-625fcce5050a
Threat hunting aims to detect threat actors early in the cyber kill chain by searching for signs of intrusions and then providing hunting strategies for future use. In response to concerns about hidden intrusions, several cloud-based hunting platforms have been proposed for IT networks that facilitate monitoring of distributed devices and help detect ATT&CK TTPs associated with detected anomalies. In this paper, we have presented a comprehensive approach that industrial companies can use to implement central and automated threat hunting in ICS networks. Our proposed open-source solution uses MITRE ATT&CK for ICS to detect TTPs and then automatically finds the APTs associated with the input TTPs. Our method can potentially detect malicious activities more quickly and accurately. The machine learning-based analyser in our solution can predict the future steps of the identified TTPs. It is also used to automate decision-making tasks, e.g., the verification of the hunting hypothesis. Note that this paper studied an automated threat hunting process in ICS networks as a proof of concept. In the future, we intend to explore how graph analysis can help our framework improve threat hunting.
d
fa0a8dc6-66e3-40c9-926a-ec8e16fb6160
In this article, we proposed the first decentralized Battleships game, which is composed of various cryptographic components to enforce fairness, keep the battleships' locations secret, and protect honest players from malicious cheaters. Furthermore, playing Battleships over the blockchain provides major benefits, such as making the game DDoS resistant and ensuring that the money is transferred immediately to the winner, or to the opponent in case cheating is discovered. The game logic is developed in the Solidity language and deployed on the Ethereum blockchain as a smart contract.
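Although the game logic itself is written in Solidity, the idea of keeping ship locations secret can be illustrated off-chain with a salted hash commitment, as in the following Python sketch; this is illustrative only and is not the contract's actual scheme.

```python
import hashlib
import os

def commit(board_cells, salt=None):
    """Commit to a board layout (list of occupied cell indices) without revealing it."""
    salt = salt if salt is not None else os.urandom(16)
    payload = ",".join(map(str, sorted(board_cells))).encode() + salt
    return hashlib.sha256(payload).hexdigest(), salt

def verify(commitment, board_cells, salt):
    return commit(board_cells, salt)[0] == commitment

c, s = commit([3, 4, 5, 17, 18])        # commitment published at game start
print(verify(c, [3, 4, 5, 17, 18], s))  # True at reveal time
print(verify(c, [3, 4, 5, 17, 19], s))  # False: a changed layout is caught
```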
d
9b5b70ad-be88-491f-bfde-ba3b198fb702
Given the social and ethical impact that some affective computing systems may have [1]}, it becomes of the utmost importance to clearly identify and document their context of use, envisaged operational scenario, or intended purpose. Such use case documentation practices would benefit, among others, system vendors and developers, who could make key design decisions from early development stages (e.g. target user profile/population, data gathering strategies, human oversight mechanisms to be put in place); authorities and auditors, who could assess the potential risks and misuses of a system; end users, who could understand the permitted uses of a commercial system; the people on whom the system is used, who could know how their data are processed; and, in general, the wider public, who could gain a better informed knowledge of the technology.
i
f8d7d372-3870-42ad-ab4d-ca594de142c0
The need for transparency and documentation practices in the field of Artificial Intelligence (AI) has been widely acknowledged in the recent literature [1]}. Several methodologies have been proposed for AI documentation, but they focus on data [2]} and models [3]} rather than on AI systems as a whole, limiting the documentation of use cases to, at most, a brief textual description. Nowadays, voluntary AI documentation practices are in the process of becoming legal requirements in some countries. The European Commission presented in April 2021 its pioneering proposal for the Regulation of Artificial Intelligence, the AI Act [4]}, which regulates software systems developed with AI techniques such as machine or deep learning. Interestingly, the legal text does not mandate any specific technical solutions or approaches to be adopted; instead, it focuses on the intended purpose of an AI system, which determines its risk profile and, consequently, the set of legal requirements that must be met. The AI Act's approach further reinforces the need to properly document AI use cases.
i
8dd403b1-2bc8-493a-bcff-9f2069c1a2c0
The concept of use case has been used in classic software development for more than 20 years. Use cases are powerful documentation tools to capture the context of use, scope, and functional requirements of a software system. They allow structuring requirements according to user goals [1]} and provide a means to specify the interaction between a certain software system and its environment [2]}. This work revisits classic software use case documentation methodologies, more particularly those based on the Unified Modeling Language (UML) specification [3]}, and proposes a template-based approach for AI use case documentation considering the current information needs identified in the research literature and the European AI Act. Although the documentation methodology we propose is horizontal, i.e. it can be applied to different domains (e.g. AI for medicine, social media, law enforcement), we address the specific information needs of affective computing use cases. The objective is to provide a standardised basis for an AI and affective computing technology-agnostic use case repository, where different aspects such as intended users, opportunities, or risk levels can be easily assessed. To the best of our knowledge, this is the first methodology specific to the documentation of AI use cases.
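As a rough illustration of what such a template-based record could look like in practice, the following Python sketch defines a structured use case entry with a handful of illustrative fields; the field names are examples and do not correspond one-to-one to the template proposed later in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIUseCase:
    """Illustrative structured record for documenting an AI use case."""
    name: str
    intended_purpose: str
    actors: List[str]            # e.g. end users, subjects, operators
    ai_techniques: List[str]     # e.g. facial expression analysis
    data_inputs: List[str]
    risk_level: str              # e.g. assessed against the AI Act risk categories
    foreseeable_misuses: List[str] = field(default_factory=list)

uc = AIUseCase(
    name="Driver drowsiness monitoring",
    intended_purpose="Alert the driver when signs of drowsiness are detected",
    actors=["driver", "vehicle OEM"],
    ai_techniques=["facial expression analysis"],
    data_inputs=["in-cabin camera frames"],
    risk_level="to be assessed",
)
print(uc.name, "-", uc.risk_level)
```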
i
aa215ffd-00d3-4195-a8fa-d76f1693c518
The remainder of the paper is as follows. Section  provides an overview of the current AI regulatory framework, existing approaches for the documentation of AI and affective computing systems, and a background on UML. Section  identifies use case information needs and proposes an UML-based methodology for their unified documentation. In Section , we put the methodology into practice with some concrete exemplar affective computing use cases. Finally, Section  concludes the paper.
i
b55392e7-a1d3-4455-9e4e-15c21cdccea7
In this paper, we propose a methodology for the documentation of AI use cases which covers the particular information elements needed to address affective computing ones. The methodology has a solid grounding, being based on two strong pillars: (1) the UML use case modelling standard, and (2) the recently proposed European AI regulatory framework. Each use case is represented in a highly visual way by means of a UML diagram, accompanied by a structured and concise table that compiles the relevant information needed to understand the intended use of a system and to assess its risk level and foreseeable misuses. Our approach is not intended to be an exhaustive methodology for the technical documentation of AI or affective computing systems (e.g. to demonstrate compliance with legal acts). Rather, it aims to provide a template for compiling related use cases with a simple but effective and unified language, understandable even by non-technical audiences. We have demonstrated the power of this language through practical affective computing exemplar use cases.
d
c22c5777-2677-4a77-8f5e-ca0e452e616a
In the near future, we plan to develop a collaborative repository compiling a catalogue of AI –including affective computing– use cases following the proposed template. The first step will be to transcribe the 60 facial processing applications presented in [1]}, which contain 18 emotion recognition use cases, in order to add them to this catalogue.
d
65c7eaac-e9a7-45a6-9a63-33a3c88a3da1
Boolean decision trees constitute one of the simplest computational models. It is therefore intriguing when the complexity of a function is still unknown. A notable example is the recursive majority-of-three function. This function can be represented by a uniform ternary tree of depth \(d\) , such that every internal node has three children and all leaves are on the same level. The function computed by interpreting the tree as a circuit with internal nodes labeled by majority gates (with output 1 exactly when at least two of the three inputs are 1) is \(\mathop {\rm maj}\nolimits _d\) , the recursive majority-of-three of depth \(d\) .
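For concreteness, \(\mathop {\rm maj}\nolimits _d\) on an input of \(3^d\) bits can be evaluated by the following straightforward recursion; this is a plain illustration of the definition, not an efficient query algorithm.

```python
def maj3(a, b, c):
    return 1 if a + b + c >= 2 else 0

def recursive_majority(bits):
    """Evaluate maj_d on a tuple of 3**d bits by recursing on thirds."""
    if len(bits) == 1:
        return bits[0]
    third = len(bits) // 3
    return maj3(recursive_majority(bits[:third]),
                recursive_majority(bits[third:2 * third]),
                recursive_majority(bits[2 * third:]))

print(recursive_majority((1, 0, 1, 0, 0, 1, 1, 1, 0)))  # depth-2 example -> 1
```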
i
e682efce-e558-4754-809c-f08e50106000
This function seems to have been given by Ravi Boppana (see Example \(1.2\) in [1]}) as an example of a function that has deterministic complexity \(3^d\) , while its randomized complexity is asymptotically smaller. Other functions with this property are known. Another notable example is the function \(\mathrm {nand}_d\) , first analyzed by Snir [2]}. This is the function represented by a uniform binary tree of depth \(d\) , with the internal nodes labeled by \(\mathrm {nand}\) gates. Equivalently, the internal nodes can be labeled by AND and OR gates, alternately at each level. A simple randomized framework that can be used to compute both \(\mathop {\rm maj}\nolimits _d\) and \(\mathrm {nand}_d\) is the following. Start at the root and, as long as the output is not known, choose a child at random and evaluate it recursively. Algorithms of this type are called directional. For \(\mathop {\rm maj}\nolimits _d\) the directional algorithm computes the output using \((8/3)^d\) queries in expectation. It was noted by Boppana and also in [1]} that better algorithms exist for \(\mathop {\rm maj}\nolimits _d\) . (See [4]} for a more recent study of directional algorithms.) Interestingly, Saks and Wigderson show that the directional algorithm is optimal for the \(\mathrm {nand}_d\) function, and show that its zero-error randomized decision tree complexity is \(\Theta \bigl (\bigl (\tfrac{1+\sqrt{33}}{4}\bigr )^{d}\bigr )\) . Their proof uses a bottom-up induction and generalized costs. Their method of generalized costs allows them to charge for a query according to the value of the variable. In this work we use this method to show that the directional algorithm is optimal for uniform AND-OR trees where each gate has fanout \(n\) .
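The directional algorithm for \(\mathop {\rm maj}\nolimits _d\) can be sketched as follows: subtrees are evaluated in random order, and evaluation stops as soon as two of the three agree. The counter illustrates the \((8/3)^d\) worst-case expected number of leaf queries; this is the plain directional strategy, not the improved (non-directional) algorithms mentioned above.

```python
import random

def directional_maj(bits, counter):
    """Directional evaluation of maj_d: evaluate subtrees in random order,
    stopping as soon as two of the three agree.  counter[0] tallies leaf queries."""
    if len(bits) == 1:
        counter[0] += 1
        return bits[0]
    third = len(bits) // 3
    children = [bits[:third], bits[third:2 * third], bits[2 * third:]]
    random.shuffle(children)
    first = directional_maj(children[0], counter)
    second = directional_maj(children[1], counter)
    if first == second:                  # the third subtree cannot change the outcome
        return first
    return directional_maj(children[2], counter)

d = 4
counter = [0]
x = tuple(random.randint(0, 1) for _ in range(3 ** d))
print(directional_maj(x, counter), counter[0],
      "leaf queries; the worst-case expectation is (8/3)^d, about 50.6 for d = 4")
```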
i
87e31412-e86b-40ea-b5e3-6381ef8ad396
More generally, functions that can be represented by formulae involving threshold functions as connectives have been studied. A threshold \(k\) -out-of-\(n\) function, denoted \(T^{n}_{k}\) , is a Boolean function of \(n\) arguments that has value 1 if and only if at least \(k\) of the \(n\) Boolean input values are 1. A threshold formula can be defined as a rooted tree with labeled nodes; each internal node is labeled by a threshold function and each leaf by a variable. If each variable appears exactly once, the formula is called read-once. A formula represents a Boolean function in a natural way: the tree is evaluated as a circuit where each internal node is the corresponding threshold gate. If the formula is read-once, the function it represents is also called read-once. If no OR gate is an input to another OR gate, and the same for AND gates, then the formula is non-degenerate (see Theorem 2.2 in [1]}) and uniquely represents the corresponding function. Thus, we may define the depth of \(f\) , denoted \(d(f)\) , as the maximum depth of a leaf in the unique tree representation. Define also \(n(f)\) as the number of variables. We prove lower bounds for the subclass that is represented by uniform trees (full and complete trees with all their leaves at the same level and with each internal node having the same number of children).
i
98921b45-3e37-4353-aa68-33ef30fbf484
The fundamental problem of distribution learning concerns the design of algorithms (i.e., estimators) that, given samples generated from an unknown distribution \(f\) , output an “approximation” of \(f\) . While the literature on distribution learning is vast and has a long history dating back to the late nineteenth century, the problem of distribution learning under privacy constraints is relatively new and unexplored.
i