In this paper, we work with the notion of differential privacy, which was introduced by Dwork et al. [1]} as a rigorous and practical notion of data privacy. Roughly speaking, differential privacy guarantees that no single data point can influence the output of an algorithm too much, which intuitively provides privacy by “hiding” the contribution of each individual. Differential privacy is the de facto standard for modern private data analysis and has seen widespread impact in both industry and government [2]}, [3]}, [4]}, [5]}, [6]}.
In recent years, there has been a flurry of activity in differentially private distribution learning, and a number of techniques have been developed for this problem. In the pure differentially private setting, Bun et al. [1]} recently introduced a method to learn a class of distributions when the class admits a finite cover, i.e., when the entire class of distributions can be well-approximated by a finite number of representative distributions. In fact, they show that this is an exact characterization of learnability under pure differential privacy, in the sense that a class of distributions is learnable under pure differential privacy if and only if the class admits a finite cover [2]}, [1]}. As a consequence of this result, they obtained pure differentially private algorithms for learning Gaussian distributions provided that the means of the Gaussians are bounded and their covariance matrices are spectrally bounded. (When we say that a matrix \(\Sigma \) is spectrally bounded, we mean that there are \(0 < a_1 \le a_2\) such that \(a_1 \cdot I \preceq \Sigma \preceq a_2 \cdot I\) .) Moreover, such restrictions on the Gaussians are necessary under the constraint of pure differential privacy.
One way to remove the requirement of having a finite cover is to relax to a weaker notion of privacy known as approximate differential privacy. With this notion, Bun et al. [1]} introduced another method to learn a class of distributions that, instead of requiring a finite cover, requires a “locally small” cover, i.e. a cover where each distribution in the class is well-approximated by only a small number of elements within the cover. They prove that the class of Gaussians with arbitrary mean and a fixed, known covariance matrix has a locally small cover which implies an approximate differentially private algorithm to learn this class of distributions. Later, Aden-Ali, Ashtiani, and Kamath [2]} proved that the class of mean-zero Gaussians (with no assumptions on the covariance matrix) admits a locally small cover. This can then be used to obtain an approximate differentially private algorithm to learn the class of all Gaussians.
It is a straightforward observation that if a class of distributions admits a finite cover, then the class of its mixtures also admits a finite cover. Combined with the aforementioned work of Bun et al., this implies a pure differentially private algorithm for learning mixtures of Gaussians with bounded means and spectrally bounded covariance matrices. It is natural to wonder whether an analogous statement holds for locally small covers. In other words, if a class of distributions admits a locally small cover, then does the class of its mixtures also admit a locally small cover? If so, this would provide a fruitful direction to design differentially private algorithms for learning mixtures of arbitrary Gaussians. Unfortunately, there are simple examples of classes of distributions that admit a locally small cover yet whose mixtures do not. This leaves open the question of designing private algorithms for many classes of distributions that are learnable in the non-private setting. One concrete open problem is the class of mixtures of two arbitrary univariate Gaussian distributions. A more general problem is private learning of mixtures of \(k\) axis-aligned (or general) Gaussian distributions.
We demonstrate that it is indeed possible to privately learn mixtures of unbounded univariate Gaussians. More generally, we give sample complexity upper bounds for learning mixtures of unbounded \(d\) -dimensional axis-aligned Gaussians. In the following theorem and the remainder of the paper, \(n\) denotes the number of samples that is given to the algorithm.
Theorem 1.1 (Informal) Let \(\varepsilon , \alpha , \beta \in (0,1)\) and \(\delta \in (0, 1/n)\) . The sample complexity of learning a mixture of \(k\) \(d\) -dimensional axis-aligned Gaussians to \(\alpha \) -accuracy in total variation distance under \((\varepsilon , \delta )\) -differential privacy and success probability \(1- \beta \) is \(\widetilde{O}\left(\frac{k^{2}d\log ^{3/2}(1/\beta \delta )}{\alpha ^{2}\varepsilon }\right).\)
The formal statement of this theorem can be found in Theorem REF . We note that the condition on \(\delta \in (0, 1/n)\) is standard in the differential privacy literature. Indeed, for useful privacy, \(\delta \) should be “cryptographically small”, i.e., \(\delta \ll 1/n\) .
Even for the univariate case, our result is the first sample complexity upper bound for learning mixtures of Gaussians under differential privacy in which the variances are unknown and the parameters of the Gaussians may be unbounded. In the non-private setting, it is known that \(\widetilde{\Theta }(kd/\alpha ^2)\) samples are necessary and sufficient to learn a mixture of \(k\) axis-aligned Gaussians in \(\mathbb {R}^d\) [1]}, [2]}. In the private setting, the best known sample complexity lower bound is \(\Omega ( d/\alpha \varepsilon \log (d))\) under \((\varepsilon , \delta )\) -DP when \(\delta \le \widetilde{O}(\sqrt{d} / n)\)  [3]}. Obtaining improved upper or lower bounds in this setting remains an open question.
If the covariance matrix of each component of the mixture is the same and known or, without loss of generality, equal to the identity matrix, then we can improve the dependence on the parameters and obtain a result that is in line with the non-private setting.
Theorem 1.2 (Informal) Let \(\varepsilon , \alpha , \beta \in (0,1)\) and \(\delta \in (0, 1/n)\) . The sample complexity of learning a mixture of \(k\) \(d\) -dimensional Gaussians with identity covariance matrix to \(\alpha \) -accuracy in total variation distance under \((\varepsilon , \delta )\) -differential privacy and success probability \(1- \beta \) is \(\widetilde{O}\left(\frac{kd+\log (1/\beta )}{\alpha ^2} + \frac{k d\log (1/\beta \delta )}{\alpha \varepsilon }\right).\)
We relegate the formal statement and the proof of this theorem to the appendix (see Appendix ). Note that the work of [1]} implies an upper bound of \(O(k^2 d^3\log ^2(1/\delta ) / \alpha ^2 \varepsilon ^2)\) for private learning of the same class albeit in the incomparable setting of parameter estimation.
Comparison with locally small covers. While the results in [1]}, [2]} for learning Gaussian distributions under approximate differential privacy do not yield finite-time algorithms, they do give strong information-theoretic upper bounds. This is achieved by showing that certain classes of Gaussians admit locally small covers. It is thus natural to ask whether it is possible to use this approach based on locally small covers to obtain sharper upper bounds than our main result. Unfortunately, we cannot hope to do so because it is not possible to construct locally small covers for mixture classes in general. While univariate Gaussians admit locally small covers [1]}, the following simple example shows that mixtures of univariate Gaussians do not.
Information technology (IT) is playing a vital role in the fight against covid-19. It helps to track the outbreak of the covid-19 pandemic, provide statistical breakdowns of coronavirus cases, identify covid-19 through various symptoms, and advance vaccine development [1]}. It is also heavily used for contact tracing around the world. However, these systems are designed and maintained independently; therefore, they cannot communicate with each other. It is challenging for policymakers to get a consolidated view of transmission, testing, and vaccination. In addition, trust in the testing and vaccination data has been in question, especially in developing countries. For instance, there was a case of test fraud in Bangladesh: hospitals including Regent Hospital, JKG Healthcare, and Shahabuddin Hospital were caught scamming people by issuing fake covid-19 test results, providing wrong treatment, and committing a series of other irregularities. These cases raised serious trust issues among people both inside and outside the country [2]}, [3]}, [4]}.
Blockchain is a distributed ledger technology that can address the limitations of the current mechanisms. It can help integrate multiple systems while allowing all parties to interact with the shared system without interfering with one another. All parties maintain the system, which updates automatically whenever there is activity, and it automatically blocks any unlawful activity. It provides a transparent view of the data to all parties, which helps build trust. It also deters corruption, since nobody can manipulate the data. If anybody purposefully provides wrong input, it can be detected using a traditional audit or re-testing, and the bad actor can be easily identified and held responsible, as the immutability of the blockchain prevents them from changing the record in the system.
Blockchain has contributed substantially to the healthcare sector. Blockchain-based medical record systems have been implemented to help patients keep their logs more securely, since a distributed ledger like blockchain keeps such data private and safe. Blockchain has already proved its capability in other health-related research: it has performed well in recording personal health data while ensuring data privacy, flexibility, and authenticity [1]}, [2]}, [3]}. Blockchain has also been useful in different aspects of managing the covid-19 pandemic; for instance, a risk notification system and a location- and bluetooth-based contact tracing system have been implemented to ensure tamper-free services [4]}. Furthermore, the Chinese University of Hong Kong has proposed a concept describing the structure of a blockchain-based vaccine passport with health records [5]}. Blockchain can also assure safety, security, transparency, and traceability in distributing covid-19 vaccines [6]}. A double-layer blockchain has also been used for recording vaccine production and information: using timestamps, the information of enterprises and vaccines becomes tamper-proof and the validity period of the vaccine is measured, and a data-cutting system has been introduced to reduce storage space [7]}. In line with these works, this article presents an integrated blockchain-enabled testing and vaccination system. The core contributions of this article are as follows:
Designing a blockchain-based system that can seamlessly integrate testing and vaccination systems. Implementing a QR-code-based "Digital Vaccine Passport" (DVP) mechanism that reduces corruption in covid-19 testing and vaccination.
We discuss the background knowledge in section . System requirements and design are discussed in sections and , respectively. In section , we discuss the implementation. Performance evaluation is presented in section . We conclude the article with future research directions in section .
In this section, we present the detailed implementation (source code: https://github.com/Salekin-Nabil/VDHP) of the proposed system. The system was implemented on top of the Ethereum blockchain. The code was written in the Solidity programming language in the Remix IDE, which is also used to compile and evaluate smart contracts. We have built three smart contracts so far, namely dhp (Digital Health Passport), vaccination, and locationInfo. Figure REF represents the class diagram of our project. The attributes of each structure are shown in this figure, and the methods are explained below.
The dhp and vaccination smart contracts can be associated with each other, and both can be associated with the locationInfo smart contract, while the locationInfo smart contract is fully independent. Firstly, the dhp smart contract is used for the covid-19 test certificate; it includes information about the issuer and the holder. Secondly, the vaccination smart contract represents the vaccination passport, or certificate of vaccination, and includes three structures, namely vaccine, authority, and vaccine provider. Finally, the locationInfo smart contract stores location information so that areas can be prioritised according to their rate of covid-19 positive cases and utilised as needed (e.g., calculating the total number of tests, total covid-19 positive cases, and total vaccine recipients in an area). The system has been deployed on the Ethereum Rinkeby test network [1]}. The following parts go into the specifics of each smart contract.
For more than a year, the world has been suffering from the invisible enemy SARS-CoV-2 (covid-19). It is difficult to develop vaccines against viruses; in the past, vaccines against viruses such as influenza and Ebola took several years to develop. Fortunately, thanks to massive advancements in technology, we now have many approved vaccines within just one year. However, due to production limitations, it is impossible to vaccinate everyone within a very short period of time, which raises the possibility of chaos in vaccination. Countries with huge populations, such as Bangladesh, need an authentic prioritisation-based system in which proper vaccination is assured without any chaos. Our implemented system offers all of these criteria. We have also implemented authentic test-report certification, so that biased or counterfeit certificates cannot escape the authority's notice. Recently, we have seen many scams related to false covid-19 test certificates. Our blockchain-enabled deployed system mitigates the possibility of tampering and creates transparency. We have presented cost-efficiency and benchmark results showing that our system can provide all of these services conveniently. Prioritisation is the most distinctive yet significant feature, ensuring optimised vaccination. With all these features, we believe that our system can be an effective tool in the fight against covid-19.
Liquid crystal is an intermediate phase of matter between solid and liquid, which can be depicted as calamitic (rod-like) molecules in a microscopic sense [1]}. This paper focuses on the Q-tensor model for nematic liquid crystals, established by the Landau-de Gennes theory [2]}, [3]}. In this framework, the director field of the liquid crystal, denoted by \(n\) , is determined through a symmetric, trace-free \(d\times d\) matrix \(Q\) known as the Q-tensor order parameter [4]}. This matrix is assumed to minimize the Landau-de Gennes free energy \(E_{LG}(Q)=\frac{L}{2}\Vert \nabla Q\Vert _{L^2}^2+\int _\Omega \mathcal {F}_B(Q).\)
Here \(L>0\) , \(\Omega \subset \mathbb {R}^d\) with \(d=2\) or 3 represents the spatial region in which the liquid crystal molecules are immersed, and \(\partial \Omega \in C^2\) . \(\mathcal {F}_B\) denotes a bulk potential given by a truncated Taylor series of the thermotropic energy at \(Q=0\)  [1]}, given by \(\mathcal {F}_B(Q)=\frac{a}{2}\operatorname{\text{tr}}(Q^2)-\frac{b}{3}\operatorname{\text{tr}}(Q^3)+\frac{c}{4}\left(\operatorname{\text{tr}}(Q^2)\right)^2\) , where \(a, b,\) and \(c\) are constants and \(c>0\) . Modeled by the gradient flow [2]}, [3]} of the Landau-de Gennes free energy, the following equation describes the non-equilibrium evolution of the Q-tensor, \(Q_t=-M\frac{\delta E_{LG}}{\delta Q}=M\left[L\Delta Q-\left(aQ-b\left(Q^2-\frac{1}{3}\operatorname{\text{tr}}(Q^2)I\right)+c\operatorname{\text{tr}}(Q^2)Q \right)\right]=M\left(L\Delta Q-S(Q)\right),\) where \(M>0\) is a mobility constant and \(S(Q)\) denotes the bulk term in parentheses.
Abundant analytical results have been established for this Q-tensor flow model and the related hydrodynamics models; see [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]} and the references therein. Various numerical approaches have also been proposed [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}. A typical difficulty in designing a stable and efficient numerical scheme for problem (REF ) is the high non-linearity of the functional derivative of the bulk potential term. The recently developed Invariant Energy Quadratization (IEQ) method is a powerful tool for dealing with such difficulties and constructing linear energy-stable schemes. It has been widely used in treating gradient-flow-type problems; see [15]}, [16]}, [17]}, [18]}, [19]}, [20]} for more applications.
This method introduces an auxiliary variable replacing the original bulk potential. Specifically, we define \(r(Q)=\sqrt{2\left( \frac{a}{2}\operatorname{\text{tr}}(Q^2)-\frac{b}{3}\operatorname{\text{tr}}(Q^3)+\frac{c}{4}\left(\operatorname{\text{tr}}(Q^2)\right)^2 \right)+A_0},\)
where \(A_0>0\) is a large enough constant to ensure that \(r(Q)\) is positive for any symmetric, trace-free tensor \(Q\) . This is well-defined since the bulk potential \(\mathcal {F}_B \) is bounded from below when \(c>0\)  [1]}. It follows that \(P(Q):=\frac{\delta r(Q)}{\delta Q}=\frac{S(Q)}{r(Q)}=\frac{aQ-b\left(Q^2-\frac{1}{3}\operatorname{\text{tr}}(Q^2)I\right)+c\operatorname{\text{tr}}(Q^2)Q}{\sqrt{2\left( \frac{a}{2}\operatorname{\text{tr}}(Q^2)-\frac{b}{3}\operatorname{\text{tr}}(Q^3)+\frac{c}{4}\left(\operatorname{\text{tr}}(Q^2)\right)^2 \right)+A_0}},\)
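The identity \(\delta r(Q)/\delta Q = S(Q)/r(Q)\) for trace-free variations can be checked numerically. The following is a minimal numpy sketch; the coefficient values for \(a\), \(b\), \(c\), and \(A_0\) are hypothetical (any choice with \(c>0\) and \(A_0\) large enough works):

```python
import numpy as np

# Hypothetical Landau-de Gennes coefficients; A0 large enough that 2*F_B + A0 > 0.
a, b, c, A0 = -0.2, 1.0, 1.0, 10.0

def F_B(Q):
    """Bulk potential a/2 tr(Q^2) - b/3 tr(Q^3) + c/4 (tr Q^2)^2."""
    t2 = np.trace(Q @ Q)
    t3 = np.trace(Q @ Q @ Q)
    return 0.5 * a * t2 - (b / 3.0) * t3 + 0.25 * c * t2 ** 2

def r(Q):
    """IEQ auxiliary variable r(Q) = sqrt(2 F_B(Q) + A0)."""
    return np.sqrt(2.0 * F_B(Q) + A0)

def S(Q):
    """Variational derivative of F_B restricted to symmetric trace-free tensors."""
    t2 = np.trace(Q @ Q)
    I = np.eye(Q.shape[0])
    return a * Q - b * (Q @ Q - (t2 / 3.0) * I) + c * t2 * Q

# A symmetric, trace-free test tensor and a symmetric, trace-free direction dQ.
Q = np.array([[0.3, 0.1, 0.0], [0.1, -0.1, 0.2], [0.0, 0.2, -0.2]])
dQ = np.array([[0.1, 0.05, 0.0], [0.05, -0.05, 0.1], [0.0, 0.1, -0.05]])

# Central finite difference of r along dQ vs. the analytic <S(Q)/r(Q), dQ>
# (Frobenius inner product); the two should agree up to O(h^2).
h = 1e-5
fd = (r(Q + h * dQ) - r(Q - h * dQ)) / (2 * h)
analytic = np.trace((S(Q) / r(Q)) @ dQ)
```

Note that the \(\frac{b}{3}\operatorname{tr}(Q^2)I\) term in \(S(Q)\) does not contribute to the inner product, since \(dQ\) is trace-free.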
In [1]}, we constructed a fully discrete energy-stable scheme based on the IEQ formulation for this system and proved the convergence of the numerical solution to the weak solution of (). In this work, we construct a semi-discrete numerical scheme for the system (), following the idea raised in [1]}. Let \((Q,r)\) denote the weak limit obtained from the convergence of the numerical solutions. We show a uniform \(H^2\) bound for the Q-tensor given a sufficiently regular initial condition, and then deduce that \(Q\) is a strong solution of system (REF ). This is done by showing the equivalence of \(r\) and \(r(Q)\) in the \(L^2\) sense. To the best of our knowledge, this is the first work to show such equivalence by explicitly computing the difference between the auxiliary variable and the energy quadratization in a discrete sense. The technique used here can be applied to other numerical schemes for different problems based on the IEQ method, for example, hydrodynamic liquid crystal systems.
There is an increasing trend to use the cloud for complex workflows, such as scientific computing workflows and big-data analytics [1]}, [2]}, [3]}. Customers submit their workflow processing requests together with their budget to the cloud. The workflow management system in the cloud assigns the processing requests to appropriate virtual machines (VMs) by jointly considering the requests, the VM capabilities, and the budget. Ideally, the customers' service-level agreements will be met and the cloud provider's objective will be optimized. However, current workflow management systems are inadequate for scheduling complex workflows with diverse requirements and heterogeneous virtual machines. This has resulted in long processing latency, wasted cloud resources, and poor return on investment.
This paper investigates a workflow scheduling problem in the cloud with budget constraints. More specifically, a set of workflows is to be placed in the cloud. Each workflow has multiple computation jobs with precedence constraints among them; for each workflow, a directed acyclic graph (DAG) represents the precedence constraints of its jobs. A job has an execution time, which depends on where the job is placed and how much computing resource is allocated to it, as well as a minimum computation resource requirement, including CPU power and memory. The jobs are placed on a limited set of VMs. The customer is charged only for the period when a VM is used, i.e., on a pay-as-you-go basis; this matches the use case of on-demand VMs in Amazon EC2. The decision problem we consider in this paper is where and when to place each job, i.e., which VM will execute each job and when the execution starts. The precedence constraints and the budget constraints must be satisfied, and all the resource capacity constraints at the placement targets must be respected. The optimization objective is to minimize the processing time of the set of workflows, i.e., the makespan of the workflows.
Scheduling of a workflow represented by a directed task graph is a well-known NP-complete problem in general [1]}, [2]}. The precedence constraints among jobs make scheduling hard, and many efforts have been made to find efficient heuristics in the areas of parallel computing and grid computing. Topcuoglu et al. proposed an upward-rank based heuristic in [3]} to tackle the precedence constraints. In the upward-rank based approach, each job computes, as its upward rank, its accumulated processing time from the exit job upward to itself along the critical (i.e., the longest) path. Jobs are then scheduled in non-increasing order of their ranks. For jobs with precedence constraints, the upward-rank based scheme assigns priorities in a reasonable way; but for jobs with no precedence constraints among themselves, it assigns priorities in an arbitrary fashion. In this work, we propose to assign priorities to those unrelated jobs according to their importance in the global DAG topology. We construct a random walk on the (extended) workflow DAG and use the stationary distribution probabilities of the random walk as the jobs' importance (i.e., weights). The rationale is that the stationary probabilities are computed recursively across the global topology and carry the global information of all states (jobs) propagated back to each state (job); the resulting stationary probabilities therefore reflect the jobs' importance in the global topology. Another issue is that in parallel computing and grid computing, workflow scheduling often aims to minimize the makespan without considering the cost of the computing facility. In the era of the cloud, the leasing cost of the cloud facility brings a new challenge to scheduling DAG-based workflows. Since jobs are scheduled in a prioritized order and often greedily, how the budget is split and reserved for each job remains a heuristic choice. In this work, we propose to reserve the minimum required budget for each job and assign the spare budget uniformly across the jobs.
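The stationary-distribution weights described above can be sketched as follows. This is a minimal illustration on a toy DAG, not the paper's exact construction: here the "extended" DAG is assumed to add an exit-to-entry restart edge, and PageRank-style damping is added so that power iteration provably converges (a plain DAG cycle can be periodic):

```python
import numpy as np

# Toy workflow DAG: job -> list of successors; job 0 is the entry, job 4 the exit.
dag = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

def stationary_importance(dag, entry=0, damping=0.85, iters=500):
    """Job-importance weights as the stationary distribution of a uniform
    random walk on the DAG extended with an exit->entry restart edge."""
    n = len(dag)
    P = np.zeros((n, n))
    for u, succs in dag.items():
        targets = succs if succs else [entry]  # the exit job restarts at the entry
        for v in targets:
            P[u, v] = 1.0 / len(targets)
    G = damping * P + (1.0 - damping) / n      # damped (irreducible, aperiodic) chain
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                     # power iteration
        pi = pi @ G
    return pi / pi.sum()

weights = stationary_importance(dag)
# Job 3 aggregates probability flow from both jobs 1 and 2,
# so it receives a larger weight than either of them.
```

Unrelated jobs (here 1 and 2) can then be ordered by these weights instead of arbitrarily.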
We formulate an integer programming model of the DAG-based workflow scheduling problem with budget constraints. The model can be evaluated by integer programming solvers such as Gurobi [1]}, and the solution can be used as a performance baseline for different heuristics. We propose a weighted upward-rank priority scheme that assigns scheduling priorities to the jobs; it leads to improved average performance compared with the plain upward-rank priority scheme in [2]}. The weights in our scheme are the stationary probabilities of a random walk on the workflow digraphs. We assign the spare budget uniformly across all the jobs. The empirical results show that, in most cases, the uniform spare-budget-splitting scheme outperforms on average the scheme that splits the budget in proportion to extra demand.
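The uniform spare-budget split can be stated in a few lines. A minimal sketch, with hypothetical per-job minimum costs (the cost of each job's cheapest feasible VM):

```python
def split_budget(min_costs, total_budget):
    """Reserve each job's minimum cost, then spread the remaining
    (spare) budget uniformly across all jobs."""
    spare = total_budget - sum(min_costs)
    if spare < 0:
        raise ValueError("budget cannot cover even the cheapest schedule")
    share = spare / len(min_costs)
    return [m + share for m in min_costs]

# e.g. three jobs whose cheapest-VM costs are 2, 5, 3 and a total budget of 16:
# the spare budget 16 - 10 = 6 is split as 2 per job, giving [4, 7, 5].
alloc = split_budget([2, 5, 3], 16.0)
```

Reserving the minimum first guarantees feasibility; the uniform share is what distinguishes this scheme from proportional splitting.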
The remainder of the paper is organized as follows. In Section , we discuss more related works. In Section , we formulate the workflow scheduling problem as an integer programming problem. We describe the weighted upward-rank priority scheme based on a random walk and the uniform spare-budget-splitting heuristic in Section . We evaluate the heuristics on empirical test cases in Section . Finally, we draw conclusions in Section .
DAG-based workflow scheduling has been extensively studied in the literature on parallel computing and grid computing. In the survey paper [1]}, the authors summarized a wide spectrum of algorithms for DAG-based workflow scheduling in a multi-processor environment, including branch-and-bound, integer programming, searching, randomization, and genetic algorithms. Topcuoglu et al. proposed the Heterogeneous Earliest-Finish-Time (HEFT) algorithm in [2]}. The HEFT algorithm first computes the upward rank of each task by traversing the task graph; it then sorts the tasks non-increasingly by upward-rank value and assigns the tasks in the sorted list to the available fastest processor. Upward-rank based task prioritization achieves good performance and has become an important building block in DAG-based workflow scheduling. Daoud and Kharma studied a similar problem in [3]} and designed the longest dynamic critical path (LDCP) algorithm. The LDCP algorithm introduces a DAG for each processor, named DAGP, with the sizes of all the tasks set to their computation costs on that specific processor. It computes the upward rank of each task within a DAGP to obtain more precise task priorities. The task with the highest upward rank among all DAGPs is assigned with priority to the proper processor, and all DAGPs are updated after the assignment; ties are broken by choosing the task with the largest number of outgoing edges. LDCP achieves better scheduling performance than HEFT, but at higher complexity. The work in [4]} studies the problem of minimizing the execution time of a workflow in heterogeneous environments and designs an ant-colony based heuristic algorithm. The heuristic generates task sequences considering both the forward and backward (i.e., global) dependencies of tasks, where the forward dependency is the number of predecessors and the backward dependency is the number of successors. The algorithm searches for a suitable machine with a greedy minimum strategy in each round of searching. The work in [4]} aligns with our view that not only the jobs on the critical path but also the other jobs should be accounted for when computing the scheduling priority.
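The upward-rank recursion underlying HEFT is short enough to state directly: \(\mathrm{rank}_u(j) = w(j) + \max_{s \in \mathrm{succ}(j)} \big(c(j,s) + \mathrm{rank}_u(s)\big)\), with the exit job's rank equal to its own execution time. A minimal sketch on a toy DAG with hypothetical execution times \(w\) and edge communication costs \(c\):

```python
# Toy DAG: successors, per-job average execution times, and average
# communication costs on edges (all numbers hypothetical).
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
w = {0: 3.0, 1: 4.0, 2: 2.0, 3: 1.0}
comm = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.0, (2, 3): 3.0}

def upward_rank(j, succ, w, comm, memo=None):
    """rank_u(j) = w(j) + max over successors s of (comm(j,s) + rank_u(s));
    for the exit job (no successors) the max term is 0."""
    if memo is None:
        memo = {}
    if j in memo:
        return memo[j]
    best = max((comm[(j, s)] + upward_rank(s, succ, w, comm, memo)
                for s in succ[j]), default=0.0)
    memo[j] = w[j] + best
    return memo[j]

# Schedule jobs in non-increasing order of upward rank.
order = sorted(succ, key=lambda j: -upward_rank(j, succ, w, comm))
```

On this toy instance the entry job 0 has rank 3 + max(1 + 6, 2 + 6) = 11 and is scheduled first; jobs 1 and 2 tie at rank 6, which is exactly the arbitrariness the weighted scheme in this paper addresses.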
As more and more workflows are moved to the cloud, scheduling DAG-based workflows faces the new challenge of scheduling tasks under budget constraints. Recently, several studies have addressed the budget-constrained workflow makespan minimization problem in the cloud environment [1]}, [2]}, [3]}. Wang and Shi [1]} consider a special \(\kappa \) -stage MapReduce-like workflow where each stage consists of a batch of concurrent jobs. Their approach is to first greedily allocate budget to the slowest job of each stage across all the stages, hoping to minimize the execution time of each stage; it then gradually refines the budget allocation across the stages and schedules the concurrent jobs of each stage based on the budget. Shu and Wu [2]} study a workflow mapping problem to minimize workflow makespan under a budget constraint in public clouds. The work assumes that a job consists of homogeneous tasks and that there is an unlimited number of VMs in the cloud. It pre-computes the most expensive schedule and the cheapest schedule based on the concept of the critical path, and applies binary search to find an approximate solution. The work in [3]} considers a budget-constrained workflow scheduling heuristic in a heterogeneous cloud environment. The heuristic schedules the tasks in a prioritized order based on the upward rank of each task [7]}. The main idea of the algorithm is that it splits and reserves the budget for each individual task: it first assigns each task a minimum budget equal to the cost of using the cheapest VM; the remaining budget is then split so that each task gets an additional share in proportion to the cost difference between using the cheapest VM and using the most expensive VM. Hence, by reserving the minimum budget for each task, the algorithm is guaranteed to find a feasible solution, and by splitting the extra budget in proportion to each task's extra cost demand, it reserves more spare budget for the tasks with lower priorities. These tasks then enjoy more flexibility in selecting better VMs. Sakellariou et al. considered the facility cost in a grid environment [8]}. They propose two approaches to find a minimum-makespan solution under a budget constraint, LOSS and GAIN. The LOSS approach starts from the scheduling solution produced by the HEFT algorithm and keeps swapping tasks to cheaper machines until the budget constraint is satisfied; the GAIN approach starts from the cheapest solution and keeps swapping tasks to faster machines whenever there is available budget. The work in [9]} extends the HEFT algorithm in [7]} and proposes a Budget-constrained HEFT algorithm (BHEFT). The BHEFT algorithm assigns scheduling priorities based on the upward rank. It splits the budget among the tasks based on each task's average cost over the different resources; if there is additional spare budget, it is assigned to each task in proportion to its demand. With the budget for each individual task, the BHEFT algorithm always assigns the affordable fastest resource to a task. Arabnejad and Barbosa worked on a similar DAG scheduling problem in [11]} and proposed the HBCS algorithm. Its task prioritization is also based on the upward rank. The HBCS algorithm computes a worthiness indicator that jointly considers the cost, the remaining budget, and the speed of each processor, and assigns a task to the processor with the highest worthiness.
Some studies consider the min-cost workflow scheduling problem under a processing deadline constraint. Abrishami et al. proposed the IaaS cloud partial critical paths (IC-PCP) algorithm in [1]} to minimize the execution cost of a workflow under a deadline constraint. The key ideas are the critical parent and partial critical paths (PCPs). The critical parent of a task is its unassigned parent with the latest finish time, and a PCP consists of a task and its critical parents. The algorithm schedules the tasks in a PCP as a pack and assigns it to the cheapest VM that can meet the sub-deadline of the PCP. Sahni and Vidyarthi proposed the just-in-time (JIT-C) algorithm in a follow-up work to IC-PCP [2]}. It first checks the feasibility of the customer's deadline requirement. Given a feasible deadline, the algorithm starts from the entry tasks and enters a monitoring control loop; within each loop iteration, it identifies the tasks whose parent tasks have been scheduled and are running, and assigns each of these tasks to the cheapest VM satisfying its sub-deadline requirement.
Regarding the scheduling of multiple workflows, several different scheduling strategies have been proposed. The work in [1]} focuses on how to schedule multiple workflows onto a set of heterogeneous resources while minimizing the makespan. It proposes four policies to create a composite DAG: common entry and common exit nodes, level-based ordering, alternating DAGs, and ranking-based composition. It defines a slowdown metric as the ratio of the finish time achieved when a workflow is scheduled individually to the finish time achieved when the workflow is scheduled together with other workflows, and aims to achieve fairness across workflows by minimizing the largest slowdown value. The work in [2]} uses a heterogeneous priority rank value that includes the out-degree of a task as a weight in the evaluation of task priorities. It further proposes three scheduling strategies across multiple workflows: round-robin, priority-based, and a trade-off between the two. Rodriguez and Buyya [3]} proposed an elastic resource provisioning and scheduling algorithm for multiple workflows that aims to minimize the overall cost of leasing resources while meeting the independent deadline constraints of the workflows.
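The fairness objective above can be written down directly. The sketch below uses the convention that a workflow's slowdown is its co-scheduled finish time divided by its stand-alone finish time (so values above 1 mean co-scheduling delays it); the exact direction of the ratio in [1]} may differ, and the finish times here are made up for illustration.

```python
def max_slowdown(finish_alone, finish_together):
    """Largest slowdown across workflows.  A workflow's slowdown is its
    finish time when co-scheduled with other workflows divided by its
    finish time when scheduled alone; fairness-oriented scheduling tries
    to minimize this maximum."""
    return max(finish_together[w] / finish_alone[w] for w in finish_alone)
```

For example, two workflows finishing at 10 and 20 alone but 15 and 22 when co-scheduled have slowdowns 1.5 and 1.1, so the scheduler would be judged by the worse value, 1.5.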
Wang and Xia explored using mixed integer programming (MIP) to formulate and solve complex workflow scheduling problems as building blocks of large-scale scheduling problems [1]}. The scheduling problems considered in [1]} minimize the cost under a deadline constraint. Meena et al. [3]} aimed at finding schedules that minimize the execution cost while meeting the deadline in a cloud computing environment. They employed a PerVar parameter to record the performance variation of VMs and proposed a Cost Effective Genetic Algorithm (CEGA) to generate schedules. Li et al. [4]} focused on a problem similar to [3]} and captured dynamic performance fluctuations of VMs with a time-series-based approach. With the VM performance forecast information, they designed a genetic algorithm that fulfills the service-level agreement. The work in [6]} develops a scheduling system to minimize the expected monetary cost given user-specified probabilistic deadline guarantees in IaaS clouds. It focuses on dealing with price and performance dynamics in clouds and does not assume precedence constraints in workflows. Zheng et al. [7]} studied the problem of improving the utility of cloud computing by allowing partial execution of jobs; the workflows there consist of parallel homogeneous preemptable tasks without precedence constraints, and the work proposes efficient online multi-resource allocation algorithms. Champati and Liang considered the job-machine assignment problem in a setting where jobs have placement constraints, machines are heterogeneous [8]}, and there are likewise no precedence constraints; they developed an efficient algorithm to minimize the sum-cost.
In this section, we present the comparative evaluation results of our algorithms, the algorithms of MSLBL [1]}, HBCS [2]} and BHEFT [3]}, and the optimal baseline solution generated by Gurobi. In Table REF , we list the shorthands for these algorithms, which will be used throughout this section. We first describe a single workflow scenario, where various experimental cases and algorithms are tested and results are reported. Then, we move to a multiple workflow scenario, where we compare our algorithm with random and round-robin priority generation schemes. In the experiments, we use a broad range of workloads, including workflows from real applications and randomly generated workflows. <TABLE>
DAG-based complex workflows are becoming a significant workload in the cloud. In scheduling workflows, the budget constraint is an important factor due to the pay-as-you-go nature of the cloud. In this paper, we formulate the workflow scheduling problem with budget constraints as an integer programming model. Improving upon the plain upward-rank priority scheme, we propose a weighted scheme that uses the stationary probabilities of a random walk on the digraph as the weights. We further design a uniform spare budget splitting strategy, which assigns the spare budget uniformly across all the jobs. The empirical results show that the uniform spare budget splitting scheme outperforms the earlier scheme that splits the spare budget in proportion to extra demand, and that the weighted priority scheme further improves the workflow makespan. The advantage of the weighted priority scheme lies in its ability to evaluate a job's global importance in the workflow, by considering not only the jobs on the critical path but also those off it. Because of the diversity and complexity of workflow types in production, there may be other unknown factors yet to be studied. Deep analysis of the structural characteristics of different workflows may lead to new discoveries and help design a further improved task priority assignment strategy. For instance, we can borrow the idea proposed in LDCP [1]} that assigns a higher priority to a job with more children whenever there is a tie. Such refinements, which rely on deep analysis of workflow topologies, will be a direction of future research.
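The weighted priority idea, using stationary probabilities of a random walk on the workflow digraph as job weights, can be sketched with power iteration. A pure random walk on a DAG has no stationary distribution, so the sketch below adds PageRank-style damping and restarts dangling (exit) jobs uniformly; this handling and the toy DAG are our assumptions, not the exact construction in the paper.

```python
def stationary_probs(succ, n_iter=200, damping=0.85):
    """PageRank-style power iteration on a job digraph: jobs visited more
    often by the walk (including those off the critical path) receive
    higher weight.  succ: {job: [successor, ...]}."""
    nodes = list(succ)
    p = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(n_iter):
        nxt = {v: (1 - damping) / len(nodes) for v in nodes}
        for v in nodes:
            out = succ[v]
            if out:
                share = damping * p[v] / len(out)
                for u in out:
                    nxt[u] += share
            else:  # exit job: restart the walk uniformly
                for u in nodes:
                    nxt[u] += damping * p[v] / len(nodes)
        p = nxt
    return p
```

On a diamond DAG a → {b, c} → d, the join job d accumulates mass from both branches and thus outranks either branch job, which matches the intuition that the scheme values globally important jobs.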
Whether conspiratorial videos on YouTube, political memes on Instagram, or viral protest photos on Twitter, images and videos on technology platforms have immense power to influence public opinion [1]}, [2]}. As technology platforms find themselves in the position to moderate the credibility of visual content, they are not only tasked with evaluating whether or not content is manipulated and/or misleading, but also with determining how to take action in response to their evaluations. In this case study, we describe why understanding individuals’ encounters and behaviors in response to visual information across platforms is vital for informing effective misinformation interventions.
This need is especially urgent in light of several recent high-profile efforts to label posts, such as Twitter’s labeling of state-affiliated [1]} and manipulated media, and Facebook’s evolving set of labels based on fact-checker ratings [2]}. While such interventions have an intuitive appeal, seeming to be net-positive efforts to fortify media integrity, they take for granted that audiences will accept information from these labels as intended. It is therefore crucial for researchers to independently evaluate the effects of labeling, and other misinformation interventions, in practice.
We conducted two complementary studies: a 60-minute semi-structured interview study with 15 participants using photo elicitation methods in order to gain insight into attitudes about visual misinformation and existing labels (S1), followed by an 11-day diary study with 23 additional participants to capture the range and prevalence of misinformation labels and media examples that people trust and distrust in the moment across platforms (S2). S2 included a final concept sketching activity that gave participants the opportunity to redesign their platform experiences when encountering media about current events. The visual information discussed in both studies was primarily focused on COVID-19 in order to provide a baseline topic to reference that was likely relevant to all participants.
Both studies were recruited and conducted in English remotely through dscout, an online qualitative research platform. All participants signed terms of service and consented to each study, with additional consent language written in consultation with dscout project managers and Partnership on AI legal counsel. Semi-structured interview protocols and diary instructions and prompts are provided in the supplementary materials [1]}.
These findings have profound implications for platform interventions. Though drawing from a limited sample size, both studies vividly demonstrate the intense emotional reactions evoked by labels for misinformation in general – perceived by many as overly paternalistic, biased, and punitive. As such, current post-level labeling interventions seem likely to continue to provoke feelings of distrust and hostility toward platforms and content correction labels. At a minimum, these insights should push platforms, researchers, and journalists to more holistically understand the feelings of discontent and uncertainty that lead many to engage with misinformation in alternative media narratives in the first place, and invest in ways to legitimately earn the trust of consumers disillusioned with these institutions.
These findings also complicate discussion around “the backfire effect” — the idea that “when a claim aligns with someone’s ideological beliefs, telling them that it’s wrong will actually make them believe it even more strongly” [1]}. Though this phenomenon is thought to be rare, our findings suggest that emotionally-charged, defensive backfire reactions may be common in practice for American social media users encountering corrections on social media posts about news topics. While our sample size was too small to definitively measure whether the labels actually strengthened beliefs in inaccurate claims, at the very least, reactions described above showed doubt and distrust toward the credibility of labels – often with reason, as in the case of “false positive” automated application of labels in inappropriate contexts. Any additional insight that platforms can provide about the extent of label skepticism and disbelief toward various label contents from their own data will further the ability of independent researchers to make appropriate intervention recommendations.
As an immediate next step, we aim to evaluate the prevalence and generality of these themes by conducting a demographically representative survey on a broader population. A survey could serve as a supplement to initial qualitative studies, assessing emerging themes from interviews and diary submissions. Additionally, we aim to learn more about global risks not captured in our American interviews through consultation with small focus groups of civil society partner organizations in order to better connect with affected and marginalized communities that are distinctly impacted by harmful online content.
In summary, our findings point to major limitations in current visual misinformation labeling approaches as experienced by end-users. Yet there are fruitful opportunities for any independent researcher to use qualitative methods to work closely with most affected platform users to define context-specific needs. Doing so will bolster creation of emotionally resonant interventions for visual misinformation and information at large.
Thanks to Victoria Kwan, Tommy Shane, Laura Garcia, and Pedro Noel at First Draft, and Stephen Adler, Nicholas Anway, Kemi Bello, Jeffrey Brown, B Cavello, Riccardo Fogliato, Adam Schetky, Jonathan Stray, Tina Park and all of our colleagues at the Partnership on AI, the AI & Media Integrity Steering Committee, and The Duke Reporters' Lab.
Previous works have shown that there is a significant performance drop when applying models trained on cased data to uncased data and vice-versa [1]}. Since capitalization is not always available due to real world constraints, there have been some methods trying to use casing prediction (called truecasing) to tackle this trade-off.
Truecasing is the task of predicting the correct casing of all-lowercase sentences. Here we trained a simple bidirectional LSTM with a logical layer on top, as described in both [1]} and [2]}. Since the former paper does a good job of listing the hyper-parameters used in the network, a hyper-parameter search was not required.
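As a minimal illustration of the task itself (not of the BiLSTM model), truecasing can be framed as per-character binary labeling, where the model predicts which characters of a lowercased input should be uppercased; this label scheme is our simplification.

```python
def case_labels(text):
    """Derive per-character gold labels (1 = uppercase) from cased text."""
    return [int(c.isupper()) for c in text]

def apply_labels(lower_text, labels):
    """Restore casing from predicted per-character labels."""
    return ''.join(c.upper() if y else c for c, y in zip(lower_text, labels))
```

A model that perfectly predicts `case_labels(s)` from `s.lower()` recovers the original sentence exactly, which is how per-character accuracy is measured.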
Part-of-speech tagging is a task in which, given a sentence, we have to predict the part of speech of each word. It is our most in-depth experiment, which led to a number of additional hypotheses that we tested. First, we had to find an optimal hyper-parameter setting. The search was not too large, since a few of the parameters (such as the number of layers) were known, but it still took a significant amount of time. After finding such a setting, we compared its results with those reported in the original paper. Then we investigated whether the claims from the paper apply to other datasets and encodings. Lastly, we looked at the CRF layer and its impact on the performance of the whole model.
Named entity recognition is a task in which, given a sentence, we have to predict which parts of it refer to entities. It had the most sophisticated model of all the experiments we tried to reproduce. Fortunately, we did not have to do a hyper-parameter search, since [1]} provided a model similar to that of the original paper.
We trained these models for 30 epochs with cross-entropy loss. We recorded both training and validation loss, and chose the model with the lowest validation loss for each of the OOV settings. In both cases, this point was reached after around 7-8 epochs.
The performance of the model matched that claimed by both the original paper and the original truecasing paper on the Wikipedia dataset. However, there was a significant mismatch in performance on the other datasets reported in the original paper. This can be seen in Table REF . A drop-off between datasets is expected because, as explained in the original paper, various sources use casing schemes specific to their domain. <TABLE>
For reference, Table REF contains results from the investigated papers. We can see that there is not a large difference in performance for the Wikipedia dataset, but there is a 10% difference on both CoNLL and PTB between our results and the original ones. A possible reason is that the original paper used a fork of the aforementioned char-rnn. We examined it, searching for differences between our intuition and the actual implementation, but could not find any. <TABLE>
This leads us to conclude that, in terms of performance, this experiment can only be partially reproduced, and it is possible that additional techniques (such as dropout) were applied during training of the model. However, we have confirmed the between-dataset trends the paper describes, and were thus able to (partially) reproduce the first hypothesis of the paper.
However, as in the case of truecasing, we conclude that in terms of absolute performance we obtained different results from the reference paper. This can be seen in Table REF , where there is about a 1% difference in accuracy between our implementation and the original one. In addition, the gain from mixing cased and uncased data is sometimes smaller than the difference between implementations, as in the case of the Uncased flavor, where the reference has higher accuracy than our C + U flavor.
We tested the same 5 variants as in the POS experiment. The dataset used was CoNLL2003, as in the original paper. Table REF shows the results of these variants using our code. We can see that mixing cased and uncased data is again a very good data pre-processing step, providing the first and third highest F1 scores, respectively. <TABLE>
Table REF compares the scores from our implementation to those from [1]}. We can see there is a 1-2% gap in absolute results between our implementation and the original, which can happen due to minor differences in the implementations. <TABLE>
However, the most interesting difference is in the cased variant, where there is an above-30% gap between our implementation and the original. After closer investigation, we discovered that the reason is a huge difference in its performance on uncased data (81.47 in our implementation vs. 34.46 in the original one). We do not have a firm intuition on why this is happening; it might be that models trained on a cased dataset are highly unstable when tested on uncased data.
Overall, however, we can see that the relative performance of the results is similar, and mixing cased and uncased data provides the best performance with our implementation. Because of this, we believe our results support the second hypothesis of the paper.
Another experiment that [1]} performed is testing how well their model from the NER task transfers to other datasets. In particular, they used the Twitter dataset, since it has very different properties from CoNLL2003.
Table REF compares performance of our implementation to the original one when transferred to the Twitter dataset. We see that our implementation performs much worse than the original one (by 30% or more in each case). <TABLE>
The comparison here is direct, on absolute values, rather than relative, because we are testing performance on a new dataset. This means that our model did much worse than the original one. This is counterintuitive, given that on the original dataset our cased experiment generalized much better. Our intuition is that dropout was used in the original implementation but left undocumented in the paper.
Table REF shows the results for various encodings. We see that the mixed dataset consistently appears in the top 2 results. We can also see that the encoding has a strong effect on the absolute performance of the model, especially in the case of word2vec, which shows a 10% drop in accuracy relative to GloVe and ELMo. This is expected, as ELMo is known to outperform both word2vec and GloVe in many cases.
Interestingly, both truecasing flavors tend to perform better than the Half Mixed one. However, given the significant drop of word2vec relative to the other encodings, we believe this is an artifact of the encoding itself rather than a general theme. Additionally, the performance of all three scenarios is rather close, further pointing to an encoding-specific problem.
Table REF shows the results for different datasets. Here the results are clear: mixing cased and uncased datasets provides the best result regardless of the dataset. In fact, for Brown and CoNLL2000 the gaps between this technique and the others are even larger than on the original PTB dataset. Hence, these results strongly support our second additional hypothesis. <TABLE>
Table REF compares the performance of the original model with and without the last layer. Considering that most results are within a few hundredths of each other, with TT being 0.26% away from the original implementation, we conclude that the CRF can be eliminated from the model without a major difference in performance. This is very beneficial for implementation and runtime, since the CRF is a non-standard library, makes the model more complicated, and lacks multi-GPU support, causing it to run much longer. These results strongly support our third additional hypothesis, and we recommend that potential future users of a BiLSTM not include the CRF layer (at least for the POS task).
Graphs are one of the most fundamental data structures that are widely used for modeling complex systems across diverse domains from bioinformatics [1]}, to neuroscience [2]}, to social sciences [3]}. Modern graph datasets increasingly incorporate temporal information to describe the dynamics of relations over time. Such graphs are referred to as temporal graphs [4]} and typically represented by a set of vertices and a sequence of timestamped and directed edges between vertices called temporal edges. For example, a communication network [5]}, [6]}, [7]}, [8]}, [9]} is often denoted by a temporal graph, where each person is a vertex and each message sent from one person to another is a temporal edge. Similarly, computer networks and financial transactions can also be modeled as temporal graphs. Due to the ubiquitousness of temporal graphs, they have attracted much attention [6]}, [11]}, [5]}, [13]}, [14]}, [15]}, [16]}, [17]} recently.
One fundamental problem in temporal graphs with wide real-world applications such as network characterization [1]}, structure prediction [2]}, and fraud detection [3]}, is to count the number of occurrences of small (connected) subgraph patterns (i.e., motifs [4]}). To capture the temporal dynamics in network analysis, the notion of motif [1]}, [6]}, [7]}, [2]} in temporal graphs is more general than its counterpart in static graphs. It takes into account not only the subgraph structure (i.e., subgraph isomorphism [9]}, [10]}) but also the temporal information including edge ordering and motif duration. As an illustrative example, the motifs \(M\) and \(M^{\prime }\) in Fig. REF are exactly the same in structure but are distinguished from each other by the ordering of their edges, and are therefore different temporal motifs. Consequently, although there has been a considerable amount of work on subgraph counting [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]} in static graphs, these methods cannot be directly used for counting temporal motifs.
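The edge-ordering distinction can be made concrete: two temporal subgraphs match only if the vertex bijection also preserves the relative ordering of edge timestamps. A minimal sketch of this idea (our own simplification, ignoring the motif duration constraint):

```python
def order_pattern(edges):
    """Canonical edge-order pattern of a temporal edge list.
    edges: [(u, v, t), ...].  Sorts edges by timestamp and renames
    vertices by first appearance, so two instances are the same temporal
    motif (under this simplification) iff their patterns are equal."""
    ids = {}
    out = []
    for u, v, _ in sorted(edges, key=lambda e: e[2]):
        for x in (u, v):
            ids.setdefault(x, len(ids))
        out.append((ids[u], ids[v]))
    return tuple(out)
```

For a directed triangle, the edge sequence (1→2, 2→3, 3→1) and the sequence (1→2, 3→1, 2→3) are isomorphic as static graphs yet produce different patterns, mirroring the \(M\) vs. \(M^{\prime }\) example.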
Generally, it is a challenging task to count temporal motifs. Firstly, the problem is at least as hard as subgraph counting in static graphs, whose time complexity increases exponentially with the number of edges in the query subgraph. Secondly, it becomes even more computationally difficult because the temporal information is taken into consideration. For example, counting the number of instances of \(k\) -stars is simple in static graphs. However, counting temporal \(k\) -stars is proven to be NP-hard [1]} due to the combinatorial nature of edge ordering. Thirdly, temporal graphs are a kind of multi-graph that is permitted to have multiple edges between the same two vertices at different timestamps. As a result, there may exist many different instances of a temporal motif within the same set of vertices, which leads to more challenges for counting problems. There have been a few methods for exact temporal motif counting [2]} or enumeration [3]}, [4]}. However, they suffer from efficiency issues and often cannot scale well in massive temporal graphs with hundreds of millions of edges [1]}.
In many scenarios, it is not necessary to count motifs exactly, and finding an approximate number is sufficient for practical use. A recent work [1]} has proposed a sampling method for approximate temporal motif counting. It partitions a temporal graph into equal-time intervals, utilizes an exact algorithm [2]} to count the number of motif instances in a subset of intervals, and computes an estimate from the per-interval counts. However, this method still cannot achieve satisfactory performance in massive datasets. On the one hand, it fails to provide an accurate estimate when the sampling rate and length of intervals are small. On the other hand, its efficiency is not significantly improved upon that of exact methods when the sampling rate and length of intervals are too large. <FIGURE>
Moreover, the vertices and edges in a temporal graph are typically observed incrementally over time in a streaming manner. For example, in communication networks, newly registered users are observed as new vertices and new messages between any two existing or new users from time to time are generated continuously as new temporal edges. In such scenarios, it is almost impossible to obtain the whole temporal graph all at once. Even if the whole dataset is already available, it may still be infeasible to keep it entirely in main memory for motif counting due to high space consumption. In addition, when new edges arrive over time, an offline counting method has to be rerun from scratch for maintaining the count in the updated dataset. However, to the best of our knowledge, all the existing methods [1]}, [2]}, [3]}, [4]} for temporal motif counting and enumeration are designed for the offline setting and become very inefficient for temporal graph streams.
To address the above problems, we propose more efficient and accurate sampling algorithms for approximate temporal motif counting in this paper. The basic idea of our algorithms is to first uniformly draw a set of random edges from a temporal graph (stream), then exactly count or estimate the number of local motif instances that contain each sampled edge, and finally compute the global motif count from local counts. We first propose two offline algorithms for temporal motif counting. For any \(k\) -vertex \(l\) -edge temporal motif, we propose a generic Edge Sampling (ES) algorithm, which exactly counts the number of local motif instances by enumerating them. Next, temporal motifs with 3 vertices and 3 edges (i.e., triadic patterns) are one of the most important classes of motifs, whose distribution is an indicator to characterize temporal networks [1]}, [2]}, [3]}, [4]}. We propose an improved Edge-Wedge Sampling (EWS) algorithm for counting any 3-vertex 3-edge temporal motif, which estimates the local counts by wedge sampling [1]}, [6]}. Furthermore, based on the above two offline algorithms, we propose a reservoir sampling-based framework to extend them for counting temporal motifs in temporal graph streams. We analyze the theoretical bounds and complexities of our proposed algorithms, and perform extensive experiments to show their accuracy and efficiency. Our main contributions in this paper are summarized as follows.
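The sample-then-rescale scheme can be illustrated on the simplest possible motif, a 2-edge path (u→v followed by v→w within a window δ). The local-count routine below is an illustrative stand-in for the general BT enumeration, and each instance is attributed to its earliest edge so that the estimator stays unbiased; this is our own minimal sketch, not the full ES algorithm.

```python
import random

def count_pairs_from(edges, i, delta):
    """Local count for the 2-edge motif (u->v, then v->w within delta):
    instances whose earliest edge is edges[i].  edges must be sorted by
    timestamp."""
    _, v, t = edges[i]
    c = 0
    for x, _, s in edges[i + 1:]:
        if s - t > delta:
            break  # all later edges fall outside the time window
        if x == v:
            c += 1
    return c

def es_estimate(edges, delta, n_samples, seed=0):
    """Edge Sampling sketch: draw edges uniformly with replacement, sum
    the local counts rooted at each sample, and rescale by m / n."""
    rng = random.Random(seed)
    m = len(edges)
    total = sum(count_pairs_from(edges, rng.randrange(m), delta)
                for _ in range(n_samples))
    return m * total / n_samples
```

Since each sampled edge contributes its local count with probability 1/m, the rescaled sum has expectation equal to the exact total, which is the unbiasedness property analyzed in the paper.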
- We propose a generic Edge Sampling (ES) algorithm to estimate the number of instances of any temporal motif in a temporal graph. It exploits the BackTracking (BT) algorithm [1]}, [2]} for subgraph isomorphism to enumerate local motif instances. We devise simple heuristics to determine the matching order of a temporal motif to reduce the search space.
- We propose an improved Edge-Wedge Sampling (EWS) algorithm that combines edge sampling with wedge sampling [3]}, [4]}, specialized for counting any 3-vertex 3-edge temporal motif. Instead of enumerating all instances containing a sampled edge, EWS estimates the number of local instances via temporal wedge sampling. In this way, EWS avoids the computationally intensive enumeration and greatly improves efficiency over ES.
- We further propose two algorithms on top of ES and EWS, namely SES and SEWS, to estimate the number of instances of a temporal motif over a temporal graph stream. SES and SEWS utilize the same methods as ES and EWS, respectively, to compute the local count for each sampled edge. Moreover, they adopt a reservoir sampling-based framework to maintain a fixed-size set of sampled edges over time and thus always keep the global count up to date w.r.t. the set of sampled edges.
- Finally, we evaluate the performance of our proposed algorithms on several real-world temporal graphs. The experimental results confirm the accuracy, efficiency, and scalability of our proposed algorithms. In the offline setting, ES and EWS run up to \(10.3\) and \(48.5\) times faster than the state-of-the-art sampling method while having lower estimation errors. In the streaming setting, SES and SEWS further achieve up to three orders of magnitude speedups over ES and EWS while the estimation errors remain comparable.
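The fixed-size edge sample underlying the streaming algorithms follows standard reservoir sampling. This sketch maintains a uniform sample of the edges seen so far; the incremental maintenance of local counts performed by the actual SES/SEWS algorithms is omitted.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: after processing n items, the reservoir holds a
    uniform random sample of size min(k, n) of all items seen so far."""
    rng = random.Random(seed)
    res = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            res.append(item)
        else:
            j = rng.randrange(n)  # item replaces a slot w.p. k / n
            if j < k:
                res[j] = item
    return res
```

Because the sample stays uniform at every point in the stream, the same rescaling argument as in the offline setting applies to the global count maintained over the reservoir.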
Differences from Prior Conference Paper [1]}: A preliminary version [1]} of this paper was published at CIKM '20. The new contributions of this extended version are listed as follows. Firstly, all the proofs for the unbiasedness and variances of ES and EWS, which were omitted in the preliminary version due to space limitations, are included in the extended version. Secondly, we propose two novel algorithms (i.e., SES and SEWS) for temporal motif counting in temporal graph streams, and analyze their theoretical bounds and complexities. Thirdly, we conduct new experiments to evaluate the performance of SES and SEWS on temporal graph streams, and the results confirm the efficiency and effectiveness of SES and SEWS over ES and EWS in the streaming setting.
Paper Organization: The remainder of this paper is organized as follows. Section  reviews the related work. Section  introduces the background and formulation of temporal motif counting. Section  presents the ES and EWS algorithms for temporal motif counting and analyzes them theoretically. Section  proposes the SES and SEWS algorithms for streaming temporal motif counting and provides the theoretical analyses accordingly. Section  describes the setup and results of the experiments. Finally, Section  provides some concluding remarks.
Subgraph (Motif) Counting in Static Graphs: The problem of counting the number of instances of a query subgraph in a large data graph has been extensively studied across several decades. Since counting the exact number of instances by enumeration is computationally intensive due to the NP-hardness of subgraph isomorphism [1]}, more efforts have been made to estimate the counts within bounded errors using random sampling (see [2]} for a survey). First of all, as triangles are the simplest yet most fundamental subgraph with wide applications in many network analysis tasks, a large number of sampling methods were proposed for triangle counting in massive graphs [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}, [20]}, [21]}, [22]}, [23]}, [24]}, [25]}. The above methods considered the triangle counting problem in many different settings, including offline graphs [3]}, [7]}, [13]}, [19]}, [18]}, [22]}, insertion-only [3]}, [6]}, [5]}, [9]}, [10]}, [11]}, [14]}, [17]}, [15]}, [16]}, [20]} and fully-dynamic [12]}, [21]}, [23]} graph streams, sliding windows [24]}, and distributed graphs [4]}, [8]}, [25]}. Moreover, sampling methods were also proposed for estimating more complex motifs than triangles, e.g., 4-vertex motifs [50]}, [51]}, 5-vertex motifs [52]}, [53]}, [54]}, [55]}, motifs with 6 or more vertices [56]}, \(k\) -cliques [57]}, [58]}, sparse motifs with low counts [59]}, and butterflies in bipartite graphs [51]}. However, all the above methods were not designed for temporal graphs. They considered neither the temporal information nor the ordering of edges. Therefore, they could not be used for temporal motif counting directly.
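To make the wedge-sampling family of estimators concrete (for static, undirected graphs, not temporal ones), the classic scheme samples wedges, paths u-v-w centered at v, with probability proportional to each center's wedge count and checks which ones close into triangles. A self-contained sketch, with the graph representation being our own choice:

```python
import random

def triangle_estimate(adj, n_samples, seed=0):
    """Wedge sampling sketch: triangles = closed_fraction * total_wedges / 3,
    since each triangle contains exactly three wedges.
    adj: {vertex: set_of_neighbors} for a simple undirected graph."""
    rng = random.Random(seed)
    nodes = list(adj)
    # Number of wedges centered at each vertex: deg * (deg - 1) / 2.
    wedges = {v: len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes}
    total = sum(wedges.values())
    closed = 0
    for _ in range(n_samples):
        v = rng.choices(nodes, weights=[wedges[x] for x in nodes])[0]
        u, w = rng.sample(sorted(adj[v]), 2)  # two distinct neighbors
        closed += w in adj[u]
    return closed / n_samples * total / 3
```

On the complete graph \(K_4\) every wedge is closed, so the estimator returns the exact triangle count of 4 regardless of how many wedges are sampled, while on a path graph it returns 0.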
Motifs in Temporal Graphs: Prior studies have considered different types of temporal network motifs. Viard et al. [1]}, [2]} and Himmel et al. [3]} extended the notion of maximal clique to temporal networks and proposed efficient algorithms for maximal clique enumeration. Li et al. [4]} proposed the notion of \((\theta ,\tau )\) -persistent \(k\) -core to capture the persistence of a community in temporal networks. However, these notions of temporal motifs were different from ours since they did not take edge ordering into account. Zhao et al. [5]} and Gurukar et al. [6]} studied the communication motifs, which are frequent subgraphs to characterize the patterns of information propagation in social networks. Kovanen et al. [7]} and Kosyfaki et al. [8]} defined the flow motifs to model flow transfer among a set of vertices within a time window in temporal networks. Although both definitions accounted for edge ordering, they were more restrictive than ours because the former assumed any two adjacent edges must occur within a fixed time span while the latter assumed edges in a motif must be consecutive events for a vertex [9]}.
Temporal Motif Counting & Enumeration: There have been several existing studies on counting and enumerating temporal motifs. Paranjape et al. [1]} first formally defined the notion of temporal motifs we use in this paper. They proposed exact algorithms for counting temporal motifs based on subgraph enumeration and timestamp-based pruning. Kumar and Calders [2]} proposed an efficient algorithm called 2SCENT to enumerate all simple temporal cycles in a directed interaction network. Although 2SCENT was shown to be effective for cycles, it could not be used for enumerating temporal motifs of any other type. Mackey et al. [3]} proposed an efficient BackTracking algorithm for temporal subgraph isomorphism. The algorithm could count temporal motifs exactly by enumerating all of them. Very recently, Micale et al. [4]} proposed a subgraph isomorphism algorithm specialized for flow motifs in temporal graphs. Liu et al. [5]} proposed an interval-based sampling framework for counting temporal motifs. To the best of our knowledge, this is the only existing work on approximate temporal motif counting via sampling. In this paper, we present several improved sampling algorithms for temporal motif counting in offline and streaming temporal graphs and compare them with the algorithms in [1]}, [3]}, [2]}, [5]}.
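The edge-sampling idea underlying our approach can be illustrated with a minimal sketch: sample each edge independently with probability \(p\), count only motif instances anchored at sampled edges, and scale by \(1/p\) to obtain an unbiased estimate. This is a simplified illustration under assumed data structures (a temporal graph as a list of `(u, v, t)` edges and a user-supplied exact per-edge counter), not our exact algorithm.

```python
import random

def edge_sampling_estimate(edges, count_instances_with_edge, p, seed=0):
    """Estimate the total motif count by sampling anchor edges.

    edges: list of (u, v, t) temporal edges in chronological order.
    count_instances_with_edge: function(edges, i) -> number of motif
        instances whose anchor (e.g., earliest) edge is edges[i];
        anchoring each instance at a unique edge avoids double counting.
    p: probability of sampling each edge.
    """
    rng = random.Random(seed)
    total = 0
    for i in range(len(edges)):
        if rng.random() < p:
            total += count_instances_with_edge(edges, i)
    # Each anchor edge is sampled with probability p, so dividing the
    # sampled count by p yields an unbiased estimate of the exact count.
    return total / p
```

With `p = 1` the estimator degenerates to exact counting; smaller `p` trades accuracy for speed.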
Experimental Environment: All experiments were conducted on a server running Ubuntu 18.04.1 LTS with an Intel® Xeon® Gold 6140 2.30GHz processor and 250GB RAM. We downloaded the codehttp://snap.stanford.edu/temporal-motifs/\(^,\)https://github.com/rohit13k/CycleDetection\(^,\)https://gitlab.com/paul.liu.ubc/sampling-temporal-motifs of the baseline algorithms published by the authors and followed the compilation and usage instructions. Our proposed algorithms were implemented in C++11, compiled by GCC v7.4 with -O3 optimizations, and ran on a single thread. Our code is publicly available on GitHubhttps://github.com/jingjing-hnu/Temporal-Motif-Counting.
Datasets: We used five different real-world datasets in our experiments including AskUbuntu (AU), SuperUser (SU), StackOverflow (SO), BitCoin (BC), and RedditComments (RC). All the datasets were downloaded from publicly available sources like the SNAP repository [1]}. Each dataset contains a sequence of temporal edges in chronological order. We report several statistics of these datasets in Table REF , including the number of vertices, the number of static edges, the number of temporal edges, and the overall time span, i.e., the time difference between the first and last edges.
EX: An exact algorithm for temporal motif counting in [1]}. The implementation published by the authors is applicable only to 3-edge motifs and cannot support motifs with 4 or more edges (e.g., Q5 in Fig. REF ).
2SCENT: An algorithm for simple temporal cycle (e.g., Q4 and Q5 in Fig. REF ) enumeration in [2]}.
BT: A BackTracking algorithm for temporal subgraph isomorphism in [3]}. It can provide the exact count of any temporal motif by enumerating all instances of it.
IS-BT: An interval-based sampling algorithm for temporal motif counting in [4]}. BT [3]} is used as a subroutine for any motif with more than 2 vertices.
ES: Our generic edge sampling algorithm for temporal motif counting in Section REF .
EWS: Our improved edge-wedge sampling algorithm for counting temporal motifs with 3 vertices and 3 edges (e.g. Q1–Q4 in Figure REF ) in Section REF .
SES: Our streaming extension of ES for counting temporal motifs of any kind in Section .
SEWS: Our streaming extension of EWS for counting temporal motifs with 3 vertices and 3 edges in Section . <TABLE><FIGURE>
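The streaming extensions maintain a fixed-size uniform sample of the edges seen so far via reservoir sampling. The classic reservoir update can be sketched as follows; this is only the generic sampling primitive, not our full SES/SEWS procedures.

```python
import random

class EdgeReservoir:
    """Maintain a uniform random sample of at most k edges from a stream."""

    def __init__(self, k, seed=0):
        self.k = k
        self.sample = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def process(self, edge):
        """Standard reservoir-sampling update for one arriving edge."""
        self.n_seen += 1
        if len(self.sample) < self.k:
            self.sample.append(edge)
        else:
            # Pick a uniform slot in [0, n_seen); replace only if it falls
            # inside the reservoir, so every edge seen so far remains in
            # the sample with probability k / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.k:
                self.sample[j] = edge
```

Each arriving edge is processed in constant time, which is what makes the streaming variants feasible on large temporal graph streams.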
Queries: The five query motifs we use in the experiments are listed in Fig. REF . Since different algorithms specialize in different types of motifs, we select a few motifs that can best represent the specializations of all the algorithms. As discussed above, an algorithm may not be applicable to some of the query motifs. In this case, the algorithm is ignored in the experiments on these motifs.
Performance Measures: In the offline setting, the efficiency is measured by the CPU time (in seconds) of an algorithm to count a query motif in a temporal graph. In the streaming setting, when a new edge arrives, an algorithm updates and returns an estimate. The efficiency is measured by the CPU time (in seconds) of an algorithm to update the motif count for each 1,000 new edges. The accuracy of a sampling algorithm is measured by the relative error \(\frac{|\widehat{x}-x|}{x}\) where \(x\) is the exact number of instances of a query motif in a temporal graph and \(\widehat{x}\) is an estimate of \(x\) returned by the algorithm. In each experiment, we run all the algorithms 10 times and use the average CPU time and relative errors for comparison.
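The accuracy measure above is straightforward to compute; a small illustrative helper for the relative error and its average over repeated runs:

```python
def relative_error(estimate, exact):
    """Relative error |x_hat - x| / x of an estimate against the exact count."""
    return abs(estimate - exact) / exact

def average_relative_error(estimates, exact):
    """Mean relative error over repeated runs, as used for comparison."""
    return sum(relative_error(e, exact) for e in estimates) / len(estimates)
```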
In this paper, we studied the problem of approximately counting a temporal motif in a temporal graph via random sampling. We first proposed a generic Edge Sampling (ES) algorithm to estimate the number of any \(k\) -vertex \(l\) -edge temporal motif in a temporal graph. Then, we improved the ES algorithm by combining edge sampling with wedge sampling and devised the EWS algorithm for counting 3-vertex 3-edge temporal motifs. Furthermore, we extended the ES and EWS algorithms to the SES and SEWS algorithms, respectively, for processing temporal graph streams using a reservoir sampling-based framework. We provided comprehensive theoretical analyses on the unbiasedness, variances, and complexities of our proposed algorithms. Extensive experiments on several real-world temporal graphs demonstrated the accuracy, efficiency, and scalability of our proposed algorithms. Specifically, ES and EWS ran up to \(10.3\) x and \(48.5\) x faster than the state-of-the-art sampling method while having lower estimation errors in the offline setting. In addition, SES and SEWS further achieved up to three orders of magnitude speedups over ES and EWS with comparable estimation errors in the streaming setting.
We want our papers to be published and read by others; we want to ship a great idea or a valuable lesson. Reading a paper, however, is an effort for the reader, especially if the reader does not yet know whether the paper is a great one. The first impression is always of key importance, and you have to work hard to change a person's mind afterwards. Thus, ship great work from the beginning. Style and formatting are important for making a good first impression, yet they are neglected by quite a few authors. Think of it from the perspective of a potential reviewer: “The authors have not even invested an epsilon of effort to create a reasonably looking paper—and now I should put my valuable time into reviewing it? Fuck it.” Consequently, this style guide provides rules to format a paper in the IPB style. But without further ado, let's start.
The recent development of cryptography and distributed ledger technology (DLT) has given rise to a new form of currency known as Central Bank Digital Currency (CBDC) [1]}. A CBDC is a digital representation of legal currency regulated by a nation's monetary authority or central bank. Jurisdictions differ widely in national conditions and regulatory focus, so CBDC designers across jurisdictions take varying approaches to designing a CBDC system. Based on previous research, we distill several CBDC dimensions along which to measure a CBDC system and propose an evaluation sub-framework that provides holistic CBDC solutions covering these dimensions. Moreover, we build a verification sub-framework to verify the feasibility and rationality of proposed solutions and to check whether they match initial expectations. Finally, we integrate both sub-frameworks into one framework, called the CEV Framework.
Consensus algorithms are vital components of a CBDC system. They directly impact many dimensions, including performance and privacy. Therefore, it is essential to evaluate consensus algorithms and calibrate them properly. We analyze diverse consensus algorithms and build a theoretical framework, the evaluation sub-framework, that splits consensus algorithms into components so that the impact of each component on the CBDC dimensions can be derived.
We also propose two operating models to cover operational aspects, among which business secrecy is one of the critical criteria for CBDC systems. Most CBDC pilots have adopted a tiered architecture, which brings many potential problems, especially business secrecy issues. Current technical solutions for preserving business secrecy, such as homomorphic encryption [1]}, cannot satisfy non-functional requirements, especially performance. Therefore, we propose new operating architectures in our evaluation sub-framework to cover these aspects, such as helping institutions retain their business secrecy.
After proposing CBDC solutions, we propose a verification sub-framework to guide CBDC designers in verifying them. The sub-framework requires CBDC designers to build a mathematical model of their chosen solution and verify whether it meets initial expectations along diverse CBDC dimensions, such as performance, security, and privacy. By leveraging this framework, designers can judge whether the proposed consensus algorithms and operating models are consistent with their preferred evaluation dimensions.
In summary, our CEV framework includes an evaluation sub-framework to provide CBDC solutions and a verification sub-framework to prove the feasibility of proposed solutions. Depending on different national economic and regulatory conditions, CBDC designers can create various consensus algorithms to address specific CBDC needs.
The remainder of this paper is organized as follows. In Section II, we present the background of our research. Section III introduces our CBDC framework, including an evaluation sub-framework and a verification sub-framework. We mainly discuss the evaluation sub-framework in Section III and the verification sub-framework in Section IV. Section IV also gives an example of leveraging our framework to develop a solution and verify it. Finally, we conclude the paper in Section V.
We build a theoretical model to describe the proposed solution above, including the state machine, the ledger, and the transaction types in a token-based CBDC system. Furthermore, we use 'regulated cryptocurrency' in place of 'CBDC' in some parts, because regulated cryptocurrency encompasses both CBDCs and stablecoins, which means the CEV framework can also be used to design a stablecoin system.
The components in the consensus process map can be combined flexibly. For example, recommended consensus algorithm two in Figure REF chooses "To one / Sharding" in the "client-request" step. This choice opens up many possibilities for system design and implementation.
There are two types of transactions in the CBDC system: a cross-shard transaction, which involves two ledgers, and a single-shard transaction, which takes place within one ledger. Figure REF shows a cross-shard transaction on the left side. We discussed three-tier operating models in Section III, in which privileged institutions can also directly provide services to end-users; therefore, we only show two tiers in this figure. In a CBDC system, cross-shard transactions degrade performance compared to single-shard transactions, because a cross-shard transaction requires the currency issuer (the central bank) to redeem a token in one ledger and issue a new token in the other. Moreover, as the number of operating institutions grows, cross-shard transactions become frequent in an account-based sharding system. <FIGURE>
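The redeem-then-issue flow of a cross-shard transaction can be sketched roughly as follows; all class and method names here are hypothetical illustrations of the described mechanism, not an actual CBDC implementation.

```python
class Ledger:
    """A per-institution shard holding token_id -> owner entries."""

    def __init__(self):
        self.tokens = {}

class CentralBank:
    """Currency issuer that redeems and re-issues tokens across shards."""

    def __init__(self):
        self.next_id = 0

    def issue(self, ledger, owner):
        """Issue a fresh token to `owner` on the given ledger."""
        token_id = self.next_id
        self.next_id += 1
        ledger.tokens[token_id] = owner
        return token_id

    def cross_shard_transfer(self, src, dst, token_id, new_owner):
        """Redeem the token on the source ledger, then issue a new one
        on the destination ledger; both steps go through the issuer,
        which is the extra cost of a cross-shard transaction."""
        del src.tokens[token_id]
        return self.issue(dst, new_owner)
```

A single-shard transfer, by contrast, would only rewrite the owner entry inside one ledger and never involve the issuer.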
Sharding is a method used to improve performance. The account-based sharding method partitions users by their accounts. Traditionally, privileged institutions maintain separate wallets and divide users accordingly. Specifically, customers of PI (privileged institution) A go to PI A when using CBDC because they opened their accounts with PI A. If such a customer transfers money to a customer of PI B, a cross-shard transaction occurs.
Alternatively, we may partition users by tokens. If PI A issues a token to a retail customer, the customer goes to PI A for CBDC-related services because of the token they hold, rather than their account. Figure REF shows the token data structure, in which an operator string can be used to determine the token's service provider. Under token-based sharding, users must contact their token's service provider; in return, they can transfer the token to anyone without a cross-shard transaction. However, token-based sharding requires the central bank, together with the operating institutions, to provide a uniform interface that dispatches transactions based on tokens: if a customer holds PI A's token, they must go to PI A to transfer it, and if they arrive at PI B's interface instead, the uniform interface should redirect them to PI A.
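The uniform-interface dispatch described above can be sketched with a toy routing function keyed on the token's operator string; the field and handler names are hypothetical.

```python
def route_transaction(token, operators):
    """Dispatch a token-based transaction to its service provider.

    token: dict with an 'operator' field naming the issuing institution.
    operators: mapping from operator name to that institution's handler.
    Whichever institution's interface receives the transaction, it is
    forwarded to the token's issuer, so no cross-shard transfer occurs.
    """
    return operators[token["operator"]]

# Hypothetical registry and token for illustration.
ops = {"PI_A": "handler_A", "PI_B": "handler_B"}
token = {"id": 42, "operator": "PI_A", "holder": "alice"}
```

Here a transaction submitted at PI B's interface with PI A's token would still be routed to PI A's handler.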
Another important point is that the central bank should provide uniform wallet standards to the privileged institutions. For example, if the transaction receiver was not previously a customer of PI A, there should still be a wallet address at which to record the token sent to them. In this case, if PI B provides KYC and on-boarding services to its customers, it does not want its customers' data leaked to PI A, so virtual addresses can help in designing wallet addresses. Figure REF shows a virtual address for a holder or wallet. Note that token-based sharding is only a new direction to explore; it may not be a perfect solution to all related pain points. <FIGURE>
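One plausible way to realize such virtual addresses is to derive a pseudonym by hashing the customer identifier with an institution-specific salt, so the receiving institution records the token without learning the raw customer data held by the on-boarding institution. This is only an assumed construction for illustration; the actual wallet standard is not specified here.

```python
import hashlib

def virtual_address(customer_id, institution_salt):
    """Derive a pseudonymous wallet address for a customer.

    The receiving institution sees only the hash, not the raw
    customer identifier kept by the on-boarding institution.
    """
    digest = hashlib.sha256(f"{institution_salt}:{customer_id}".encode())
    return digest.hexdigest()[:20]
```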
Besides, in consensus algorithm two, encryption in the third step protects business secrecy from the validators, including non-privileged institutions. Validators therefore keep only a backup of the encrypted data, which is later used as tamper-evident proof.
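The pattern of storing an encrypted backup usable as tamper-evident proof can be sketched with a hash commitment over the ciphertext; the encryption step itself is assumed to happen upstream, and these helper names are hypothetical.

```python
import hashlib

def backup_record(ciphertext):
    """A validator stores the encrypted record plus its hash.

    The hash alone later lets anyone check that the backup was not
    tampered with, without revealing the business data inside the
    ciphertext.
    """
    return {
        "ciphertext": ciphertext,
        "digest": hashlib.sha256(ciphertext).hexdigest(),
    }

def verify_backup(record):
    """Recompute the hash and compare it against the stored digest."""
    return hashlib.sha256(record["ciphertext"]).hexdigest() == record["digest"]
```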