0, otherwise it will be classified as class 2.

2.5 Multi-Class CLGR

In the above we have introduced the basic framework of Clustering with Local and Global Regularization (CLGR) for the two-class clustering problem; we will extend it to multi-class clustering in this subsection.

First we assume that all the documents belong to $C$ classes indexed by $\mathcal{L} = \{1, 2, \cdots, C\}$. $q^c$ is the classification function for class $c$ ($1 \leqslant c \leqslant C$), such that $q^c(\mathbf{x}_i)$ returns the confidence that $\mathbf{x}_i$ belongs to class $c$. Our goal is to obtain the values of $q^c(\mathbf{x}_i)$ ($1 \leqslant c \leqslant C$, $1 \leqslant i \leqslant n$); the cluster assignment of $\mathbf{x}_i$ can then be determined from $\{q^c(\mathbf{x}_i)\}_{c=1}^C$ using some proper discretization method that we will introduce later.

Therefore, in this multi-class case, for each document $\mathbf{x}_i$ ($1 \leqslant i \leqslant n$) we will construct $C$ locally linear regularized label predictors whose normal vectors are
$$\mathbf{w}_i^{c*} = \mathbf{X}_i\left(\mathbf{X}_i^T\mathbf{X}_i + \lambda_i n_i \mathbf{I}_i\right)^{-1}\mathbf{q}_i^c \quad (1 \leqslant c \leqslant C), \qquad (21)$$
where $\mathbf{X}_i = [\mathbf{x}_{i1}, \mathbf{x}_{i2}, \cdots, \mathbf{x}_{in_i}]$ with $\mathbf{x}_{ik}$ being the $k$-th neighbor of $\mathbf{x}_i$, and $\mathbf{q}_i^c = [q_{i1}^c, q_{i2}^c, \cdots, q_{in_i}^c]^T$ with $q_{ik}^c = q^c(\mathbf{x}_{ik})$. Then $(\mathbf{w}_i^{c*})^T\mathbf{x}_i$ returns the predicted confidence of $\mathbf{x}_i$ belonging to class $c$. Hence the local prediction error for class $c$ can be defined as
$$\mathcal{J}_l^c = \sum_{i=1}^n \left((\mathbf{w}_i^{c*})^T\mathbf{x}_i - q_i^c\right)^2, \qquad (22)$$
and the total local prediction error becomes
$$\mathcal{J}_l = \sum_{c=1}^C \mathcal{J}_l^c = \sum_{c=1}^C \sum_{i=1}^n \left((\mathbf{w}_i^{c*})^T\mathbf{x}_i - q_i^c\right)^2. \qquad (23)$$
As in Eq.(11), we can define an $n \times n$ matrix $\mathbf{P}$ (see Eq.(12)) and rewrite $\mathcal{J}_l$ as
$$\mathcal{J}_l = \sum_{c=1}^C \mathcal{J}_l^c = \sum_{c=1}^C \left\|\mathbf{P}\mathbf{q}^c - \mathbf{q}^c\right\|^2. \qquad (24)$$
Similarly, we can define the global smoothness regularizer in the multi-class case as
$$\mathcal{J}_g = \sum_{c=1}^C \sum_{i,j=1}^n \left(q_i^c - q_j^c\right)^2 w_{ij} = \sum_{c=1}^C (\mathbf{q}^c)^T \mathbf{L}\mathbf{q}^c.$$
(25)

Then the criterion to be minimized for CLGR in the multi-class case becomes
$$\begin{aligned}
\mathcal{J} &= \mathcal{J}_l + \lambda\mathcal{J}_g \\
&= \sum_{c=1}^C \left\|\mathbf{P}\mathbf{q}^c - \mathbf{q}^c\right\|^2 + \lambda(\mathbf{q}^c)^T\mathbf{L}\mathbf{q}^c \\
&= \sum_{c=1}^C (\mathbf{q}^c)^T\left[(\mathbf{P}-\mathbf{I})^T(\mathbf{P}-\mathbf{I}) + \lambda\mathbf{L}\right]\mathbf{q}^c \\
&= \mathrm{trace}\left(\mathbf{Q}^T\left[(\mathbf{P}-\mathbf{I})^T(\mathbf{P}-\mathbf{I}) + \lambda\mathbf{L}\right]\mathbf{Q}\right), \qquad (26)
\end{aligned}$$
where $\mathbf{Q} = [\mathbf{q}^1, \mathbf{q}^2, \cdots, \mathbf{q}^C]$ is an $n \times C$ matrix, and $\mathrm{trace}(\cdot)$ returns the trace of a matrix. As in Eq.(20), we also add the constraint $\mathbf{Q}^T\mathbf{Q} = \mathbf{I}$ to restrict the scale of $\mathbf{Q}$. Then our optimization problem becomes
$$\min_{\mathbf{Q}}\ \mathcal{J} = \mathrm{trace}\left(\mathbf{Q}^T\left[(\mathbf{P}-\mathbf{I})^T(\mathbf{P}-\mathbf{I}) + \lambda\mathbf{L}\right]\mathbf{Q}\right) \quad \mathrm{s.t.}\ \mathbf{Q}^T\mathbf{Q} = \mathbf{I}. \qquad (27)$$
From the Ky Fan theorem [28], we know the optimal solution of the above problem is
$$\mathbf{Q}^* = [\mathbf{q}_1^*, \mathbf{q}_2^*, \cdots, \mathbf{q}_C^*]\mathbf{R}, \qquad (28)$$
where $\mathbf{q}_k^*$ ($1 \leqslant k \leqslant C$) is the eigenvector corresponding to the $k$-th smallest eigenvalue of the matrix $(\mathbf{P}-\mathbf{I})^T(\mathbf{P}-\mathbf{I}) + \lambda\mathbf{L}$, and $\mathbf{R}$ is an arbitrary $C \times C$ orthogonal matrix. Since the entries of $\mathbf{Q}^*$ are continuous, we need to further discretize $\mathbf{Q}^*$ to get the cluster assignments of all the data points. There are mainly two approaches to achieve this goal:

1. As in [20], we can treat the $i$-th row of $\mathbf{Q}$ as the embedding of $\mathbf{x}_i$ in a $C$-dimensional space, and apply a traditional clustering method such as k-means to cluster these embeddings into $C$ clusters.

2. Since the optimal $\mathbf{Q}^*$ is not unique (because of the arbitrary matrix $\mathbf{R}$), we can pursue an optimal $\mathbf{R}$ that rotates $\mathbf{Q}^*$ to an indication matrix⁴. The detailed algorithm can be found in [26].

The detailed algorithmic procedure for CLGR is summarized in Table 1.

3.
EXPERIMENTS

In this section, experiments are conducted to empirically compare the clustering results of CLGR with 8 other representative document clustering algorithms on 5 datasets. First we introduce the basic information of those datasets.

3.1 Datasets

We use a variety of datasets, most of which are frequently used in information retrieval research. Table 2 summarizes the characteristics of the datasets.

⁴Here an indication matrix $\mathbf{T}$ is an $n \times C$ matrix with its $(i,j)$-th entry $T_{ij} \in \{0, 1\}$ such that each row contains exactly one 1. Then $\mathbf{x}_i$ is assigned to the $j$-th cluster such that $Q^*_{ij} = 1$.

Table 1: Clustering with Local and Global Regularization (CLGR)

Input:
1. Dataset $X = \{\mathbf{x}_i\}_{i=1}^n$;
2. Number of clusters $C$;
3. Size of the neighborhood $K$;
4. Local regularization parameters $\{\lambda_i\}_{i=1}^n$;
5. Global regularization parameter $\lambda$.
Output:
The cluster membership of each data point.
Procedure:
1. Construct the $K$ nearest neighborhoods of each data point;
2. Construct the matrix $\mathbf{P}$ using Eq.(12);
3. Construct the Laplacian matrix $\mathbf{L}$ using Eq.(16);
4. Construct the matrix $\mathbf{M} = (\mathbf{P}-\mathbf{I})^T(\mathbf{P}-\mathbf{I}) + \lambda\mathbf{L}$;
5. Perform an eigenvalue decomposition of $\mathbf{M}$, and construct the matrix $\mathbf{Q}^*$ according to Eq.(28);
6. Output the cluster assignment of each data point by properly discretizing $\mathbf{Q}^*$.

Table 2: Descriptions of the document datasets

Datasets     Number of documents   Number of classes
CSTR         476                   4
WebKB4       4199                  4
Reuters      2900                  10
WebACE       2340                  20
Newsgroup4   3970                  4

CSTR. This is a dataset of abstracts of technical reports published in the Department of Computer Science at a university. The dataset contains 476 abstracts, which are divided into four research areas: Natural Language Processing (NLP), Robotics/Vision, Systems, and Theory.

WebKB. The WebKB dataset contains webpages gathered from university computer science departments.
There are about 8280 documents, divided into 7 categories: student, faculty, staff, course, project, department and other. The raw text is about 27MB. Among these 7 categories, student, faculty, course and project are the four most populous entity-representing categories; the associated subset is typically called WebKB4.

Reuters. The Reuters-21578 Text Categorization Test collection contains documents collected from the Reuters newswire in 1987. It is a standard text categorization benchmark and contains 135 categories. In our experiments, we use a subset of the collection which includes the 10 most frequent categories among the 135 topics; we call it Reuters-top10.

WebACE. The WebACE dataset is from the WebACE project and has been used for document clustering [17][5]. It contains 2340 documents consisting of news articles obtained from the Reuters news service via the Web in October 1997. These documents are divided into 20 classes.

News4. The News4 dataset used in our experiments is selected from the famous 20-newsgroups dataset⁵. The topic rec, containing autos, motorcycles, baseball and hockey, was selected from the version 20news-18828. The News4 dataset contains 3970 document vectors.

⁵http://people.csail.mit.edu/jrennie/20Newsgroups/

To pre-process the datasets, we remove stop words using a standard stop list, skip all HTML tags, and ignore all header fields except the subject and organization of the posted articles. In all our experiments, we first select the top 1000 words by mutual information with the class labels.

3.2 Evaluation Metrics

In the experiments, we set the number of clusters equal to the true number of classes $C$ for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.

Clustering Accuracy (Acc).
The first performance measure is the Clustering Accuracy, which discovers the one-to-one relationship between clusters and classes and measures the extent to which each cluster contains data points from the corresponding class. It sums up the matching degree over all cluster–class pairs. Clustering accuracy can be computed as
$$Acc = \frac{1}{N}\max\left(\sum_{C_k, L_m} T(C_k, L_m)\right), \qquad (29)$$
where $C_k$ denotes the $k$-th cluster in the final results, and $L_m$ is the true $m$-th class. $T(C_k, L_m)$ is the number of entities belonging to class $m$ that are assigned to cluster $k$. Accuracy computes the maximum sum of $T(C_k, L_m)$ over all pairs of clusters and classes, where the pairs have no overlaps. Greater clustering accuracy indicates better clustering performance.

Normalized Mutual Information (NMI). The other evaluation metric we adopt is the Normalized Mutual Information NMI [23], which is widely used for determining the quality of clusters. For two random variables X and Y, the NMI is defined as
$$NMI(X, Y) = \frac{I(X, Y)}{\sqrt{H(X)H(Y)}}, \qquad (30)$$
where $I(X, Y)$ is the mutual information between X and Y, while $H(X)$ and $H(Y)$ are the entropies of X and Y respectively. One can see that $NMI(X, X) = 1$, which is the maximal possible value of NMI. Given a clustering result, the NMI in Eq.(30) is estimated as
$$NMI = \frac{\sum_{k=1}^C \sum_{m=1}^C n_{k,m}\log\left(\frac{n \cdot n_{k,m}}{n_k \hat{n}_m}\right)}{\sqrt{\left(\sum_{k=1}^C n_k \log\frac{n_k}{n}\right)\left(\sum_{m=1}^C \hat{n}_m \log\frac{\hat{n}_m}{n}\right)}}, \qquad (31)$$
where $n_k$ denotes the number of data points contained in cluster $C_k$ ($1 \leqslant k \leqslant C$), $\hat{n}_m$ is the number of data points belonging to the $m$-th class ($1 \leqslant m \leqslant C$), and $n_{k,m}$ denotes the number of data points in the intersection of cluster $C_k$ and the $m$-th class. The value calculated in Eq.(31) is used as a performance measure for the given clustering result.
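As a minimal illustrative sketch (not the paper's own code), the two metrics above can be computed from the cluster–class contingency table; the optimal one-to-one matching in Eq.(29) is found here with the Hungarian algorithm from scipy, an implementation choice assumed for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def contingency(clusters, classes, C):
    # T[k, m] = n_{k,m}: number of points in cluster k and true class m
    T = np.zeros((C, C))
    for k, m in zip(clusters, classes):
        T[k, m] += 1
    return T

def clustering_accuracy(clusters, classes, C):
    # Eq.(29): best one-to-one cluster-class matching (Hungarian algorithm)
    T = contingency(clusters, classes, C)
    rows, cols = linear_sum_assignment(-T)  # negate to maximize matched counts
    return T[rows, cols].sum() / len(clusters)

def nmi(clusters, classes, C):
    # Eq.(31): mutual information normalized by sqrt(H(clusters) * H(classes))
    T = contingency(clusters, classes, C)
    n = T.sum()
    nk, nm = T.sum(axis=1), T.sum(axis=0)
    nz = T > 0
    mi = (T[nz] * np.log(n * T[nz] / np.outer(nk, nm)[nz])).sum()
    hk = -(nk[nk > 0] * np.log(nk[nk > 0] / n)).sum()
    hm = -(nm[nm > 0] * np.log(nm[nm > 0] / n)).sum()
    return mi / np.sqrt(hk * hm)
```

Here `clusters` and `classes` are integer label sequences; the ratio recovers Eq.(31) since the factors of n in the numerator and denominator cancel.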
The larger this value, the better the clustering performance.

3.3 Comparisons

We have conducted comprehensive performance evaluations by testing our method and comparing it with 8 other representative data clustering methods on the same data corpora. The algorithms that we evaluated are listed below.

1. Traditional k-means (KM).

2. Spherical k-means (SKM). The implementation is based on [9].

3. Gaussian Mixture Model (GMM). The implementation is based on [16].

4. Spectral Clustering with Normalized Cuts (Ncut). The implementation is based on [26], and the variance of the Gaussian similarity is determined by Local Scaling [30]. Note that the criterion that Ncut aims to minimize is just the global regularizer in our CLGR algorithm, except that Ncut uses the normalized Laplacian.

5. Clustering using Pure Local Regularization (CPLR). In this method we minimize only $\mathcal{J}_l$ (defined in Eq.(24)); the clustering results can be obtained by performing an eigenvalue decomposition of the matrix $(\mathbf{I}-\mathbf{P})^T(\mathbf{I}-\mathbf{P})$ followed by a proper discretization method.

6. Adaptive Subspace Iteration (ASI). The implementation is based on [14].

7. Nonnegative Matrix Factorization (NMF). The implementation is based on [27].

8. Tri-Factorization Nonnegative Matrix Factorization (TNMF) [12]. The implementation is based on [15].

For computational efficiency, in the implementations of CPLR and our CLGR algorithm we set all the local regularization parameters $\{\lambda_i\}_{i=1}^n$ to an identical value, chosen by grid search over {0.1, 1, 10}. The size of the k-nearest neighborhoods is chosen by grid search over {20, 40, 80}. For the CLGR method, the global regularization parameter is also chosen by grid search over {0.1, 1, 10}. When constructing the global regularizer, we adopt the local scaling method [30] to construct the Laplacian matrix.
The final discretization method adopted in these two methods is the same as in [26], since our experiments show that this method achieves better results than the k-means based method of [20].

3.4 Experimental Results

The clustering accuracy comparison results are shown in Table 3, and the normalized mutual information comparison results are summarized in Table 4. From the two tables we mainly observe that:

1. Our CLGR method outperforms all the other document clustering methods on most of the datasets.

2. For document clustering, the Spherical k-means method usually outperforms the traditional k-means clustering method, and the GMM method achieves competitive results compared to the Spherical k-means method.

3. The results achieved by the k-means and GMM type algorithms are usually worse than those achieved by Spectral Clustering. Since Spectral Clustering can be viewed as a weighted version of kernel k-means, it can obtain good results even when the data clusters are arbitrarily shaped. This corroborates that the document vectors are not regularly distributed (spherical or elliptical).

4.
The experimental comparisons empirically verify the equivalence between NMF and Spectral Clustering, which has been proved theoretically in [10]. It can be observed from the tables that NMF and Spectral Clustering usually lead to similar clustering results.

Table 3: Clustering accuracies of the various methods

         CSTR     WebKB4   Reuters  WebACE   News4
KM       0.4256   0.3888   0.4448   0.4001   0.3527
SKM      0.4690   0.4318   0.5025   0.4458   0.3912
GMM      0.4487   0.4271   0.4897   0.4521   0.3844
NMF      0.5713   0.4418   0.4947   0.4761   0.4213
Ncut     0.5435   0.4521   0.4896   0.4513   0.4189
ASI      0.5621   0.4752   0.5235   0.4823   0.4335
TNMF     0.6040   0.4832   0.5541   0.5102   0.4613
CPLR     0.5974   0.5020   0.4832   0.5213   0.4890
CLGR     0.6235   0.5228   0.5341   0.5376   0.5102

Table 4: Normalized mutual information results of the various methods

         CSTR     WebKB4   Reuters  WebACE   News4
KM       0.3675   0.3023   0.4012   0.3864   0.3318
SKM      0.4027   0.4155   0.4587   0.4003   0.4085
GMM      0.4034   0.4093   0.4356   0.4209   0.3994
NMF      0.5235   0.4517   0.4402   0.4359   0.4130
Ncut     0.4833   0.4497   0.4392   0.4289   0.4231
ASI      0.5008   0.4833   0.4769   0.4817   0.4503
TNMF     0.5724   0.5011   0.5132   0.5328   0.4749
CPLR     0.5695   0.5231   0.4402   0.5543   0.4690
CLGR     0.6012   0.5434   0.4935   0.5390   0.4908

5. The co-clustering based methods (TNMF and ASI) usually achieve better results than traditional methods based purely on document vectors. Since these methods perform an implicit feature selection at each iteration and provide an adaptive metric for measuring the neighborhood, they tend to yield better clustering results.

6. The results achieved by CPLR are usually better than those achieved by Spectral Clustering, which supports Vapnik's theory [24] that local learning algorithms can sometimes obtain better results than global learning algorithms.

Besides the above comparison experiments, we also test the parameter sensitivity of our method.
There are mainly two sets of parameters in our CLGR algorithm: the local and global regularization parameters ($\{\lambda_i\}_{i=1}^n$ and $\lambda$; as noted in section 3.3, we set all $\lambda_i$'s to an identical value $\lambda^*$ in our experiments), and the size of the neighborhoods. Therefore we have also done two sets of experiments:

1. Fixing the size of the neighborhoods, and testing the clustering performance with varying $\lambda^*$ and $\lambda$. In this set of experiments, we find that our CLGR algorithm achieves good results when the two regularization parameters are neither too large nor too small. Typically our method achieves good results when $\lambda^*$ and $\lambda$ are around 0.1. Figure 1 shows such a test on the WebACE dataset.

[Figure 1: Parameter sensitivity testing results on the WebACE dataset with the neighborhood size fixed to 20; the x-axis and y-axis represent the log2 values of $\lambda^*$ and $\lambda$, and the vertical axis the clustering accuracy.]

2. Fixing the local and global regularization parameters, and testing the clustering performance with different sizes of neighborhoods. In this set of experiments, we find that a neighborhood size that is either too large or too small deteriorates the final clustering results. This can be easily understood: when the neighborhood size is very small, the data points used for training the local classifiers may not be sufficient; when the neighborhood size is very large, the trained classifiers tend to become global and cannot capture the typical local characteristics.
Figure 2 shows such a test on the WebACE dataset.

[Figure 2: Parameter sensitivity testing results on the WebACE dataset with the regularization parameters fixed to 0.1 and the neighborhood size varying from 10 to 100; the vertical axis is the clustering accuracy.]

Therefore, we can see that our CLGR algorithm (1) can achieve satisfactory results and (2) is not very sensitive to the choice of parameters, which makes it practical in real world applications.

4. CONCLUSIONS AND FUTURE WORK

In this paper, we derived a new clustering algorithm called clustering with local and global regularization. Our method preserves the merits of local learning algorithms and spectral clustering. Our experiments show that the proposed algorithm outperforms most of the state-of-the-art algorithms on many benchmark datasets. In the future, we will focus on the parameter selection and acceleration issues of the CLGR algorithm.

5. REFERENCES

[1] L. Baker and A. McCallum. Distributional Clustering of Words for Text Classification. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1998.

[2] M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6):1373-1396, June 2003.

[3] M. Belkin and P. Niyogi. Towards a Theoretical Foundation for Laplacian-Based Manifold Methods. In Proceedings of the 18th Conference on Learning Theory (COLT), 2005.

[4] M. Belkin, P. Niyogi and V. Sindhwani. Manifold Regularization: a Geometric Framework for Learning from Examples. Journal of Machine Learning Research, 7:1-48, 2006.

[5] D. Boley. Principal Direction Divisive Partitioning. Data Mining and Knowledge Discovery, 2:325-344, 1998.

[6] L. Bottou and V. Vapnik. Local learning algorithms. Neural Computation, 4:888-900, 1992.

[7] P. K. Chan, D. F. Schlag and J. Y. Zien.
Spectral K-way Ratio-Cut Partitioning and Clustering. IEEE Trans. Computer-Aided Design, 13:1088-1096, Sep. 1994.

[8] D. R. Cutting, D. R. Karger, J. O. Pederson and J. W. Tukey. Scatter/Gather: A Cluster-Based Approach to Browsing Large Document Collections. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 1992.

[9] I. S. Dhillon and D. S. Modha. Concept Decompositions for Large Sparse Text Data using Clustering. Machine Learning, 42(1):143-175, January 2001.

[10] C. Ding, X. He, and H. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proceedings of the SIAM Data Mining Conference, 2005.

[11] C. Ding, X. He, H. Zha, M. Gu, and H. D. Simon. A min-max cut algorithm for graph partitioning and data clustering. In Proc. of the 1st International Conference on Data Mining (ICDM), pages 107-114, 2001.

[12] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal Nonnegative Matrix Tri-Factorizations for Clustering. In Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.

[13] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001.

[14] T. Li, S. Ma, and M. Ogihara. Document Clustering via Adaptive Subspace Iteration. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2004.

[15] T. Li and C. Ding. The Relationships Among Various Nonnegative Matrix Factorization Methods for Clustering. In Proceedings of the 6th International Conference on Data Mining (ICDM), 2006.

[16] X. Liu and Y. Gong. Document Clustering with Cluster Refinement and Model Selection Capabilities. In Proc. of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002.

[17] E. Han, D. Boley, M. Gini, R. Gross, K. Hastings, G. Karypis, V. Kumar, B. Mobasher, and J.
Moore. WebACE: A Web Agent for Document Categorization and Exploration. In Proceedings of the 2nd International Conference on Autonomous Agents (Agents'98). ACM Press, 1998.

[18] M. Hein, J. Y. Audibert, and U. von Luxburg. From Graphs to Manifolds - Weak and Strong Pointwise Consistency of Graph Laplacians. In Proceedings of the 18th Conference on Learning Theory (COLT), pages 470-485, 2005.

[19] J. He, M. Lan, C.-L. Tan, S.-Y. Sung, and H.-B. Low. Initialization of Cluster Refinement Algorithms: A Review and Comparative Study. In Proc. of Inter. Joint Conference on Neural Networks, 2004.

[20] A. Y. Ng, M. I. Jordan, and Y. Weiss. On Spectral Clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2002.

[21] B. Schölkopf and A. Smola. Learning with Kernels. The MIT Press, Cambridge, Massachusetts, 2002.

[22] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.

[23] A. Strehl and J. Ghosh. Cluster Ensembles - A Knowledge Reuse Framework for Combining Multiple Partitions. Journal of Machine Learning Research, 3:583-617, 2002.

[24] V. N. Vapnik. The Nature of Statistical Learning Theory. Berlin: Springer-Verlag, 1995.

[25] M. Wu and B. Schölkopf. A Local Learning Approach for Clustering. In Advances in Neural Information Processing Systems 18, 2006.

[26] S. X. Yu and J. Shi. Multiclass Spectral Clustering. In Proceedings of the International Conference on Computer Vision, 2003.

[27] W. Xu, X. Liu and Y. Gong. Document Clustering Based On Non-Negative Matrix Factorization. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.

[28] H. Zha, X. He, C. Ding, M. Gu and H. Simon. Spectral Relaxation for K-means Clustering. In NIPS 14, 2001.

[29] T. Zhang and F. J. Oles. Text Categorization Based on Regularized Linear Classification Methods.
Journal of Information Retrieval, 4:5-31, 2001.

[30] L. Zelnik-Manor and P. Perona. Self-Tuning Spectral Clustering. In NIPS 17, 2005.

[31] D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B. Schölkopf. Learning with Local and Global Consistency. NIPS 17, 2005.
Laplacian Optimal Design for Image Retrieval

Abstract: Relevance feedback is a powerful technique to enhance Content-Based Image Retrieval (CBIR) performance. It solicits the user's relevance judgments on the retrieved images returned by the CBIR systems. The user's labeling is then used to learn a classifier to distinguish between relevant and irrelevant images. However, the top returned images may not be the most informative ones. The challenge is thus to determine which unlabeled images would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples. In this paper, we propose a novel active learning algorithm, called Laplacian Optimal Design (LOD), for relevance feedback image retrieval. Our algorithm is based on a regression model which minimizes the least square error on the measured (or, labeled) images and simultaneously preserves the local geometrical structure of the image space. Specifically, we assume that if two images are sufficiently close to each other, then their measurements (or, labels) are close as well. By constructing a nearest neighbor graph, the geometrical structure of the image space can be described by the graph Laplacian. We discuss how results from the field of optimal experimental design may be used to guide our selection of a subset of images, which gives us the most amount of information. Experimental results on the Corel database suggest that the proposed approach achieves higher precision in relevance feedback image retrieval.

1. INTRODUCTION

In many machine learning and information retrieval tasks, there is no shortage of unlabeled data, but labels are expensive. The challenge is thus to determine which unlabeled samples would be the most informative (i.e., improve the classifier the most) if they were labeled and used as training samples.
This problem is typically called active learning [4]. Here the task is to minimize an overall cost, which depends both on the classifier accuracy and the cost of data collection. Many real world applications can be cast into the active learning framework. In particular, we consider the problem of relevance feedback driven Content-Based Image Retrieval (CBIR) [13].

Content-Based Image Retrieval has attracted substantial interest in the last decade [13]. It is motivated by the fast growth of digital image databases which, in turn, require efficient search schemes. Rather than describing an image using text, in these systems an image query is described using one or more example images. The low level visual features (color, texture, shape, etc.) are automatically extracted to represent the images. However, the low level features may not accurately characterize the high level semantic concepts. To narrow the semantic gap, relevance feedback is introduced into CBIR [12].

In many of the current relevance feedback driven CBIR systems, the user is required to provide his/her relevance judgments on the top images returned by the system. The labeled images are then used to train a classifier to separate images that match the query concept from those that do not. However, in general the top returned images may not be the most informative ones. In the worst case, all the top images labeled by the user may be positive, and thus the standard classification techniques cannot be applied due to the lack of negative examples. Unlike standard classification problems where the labeled samples are pre-given, in relevance feedback image retrieval the system can actively select the images to label.
Thus active learning can be naturally introduced into image retrieval.

Among the many existing active learning techniques, Support Vector Machine (SVM) active learning [14] and regression based active learning [1] have received the most interest. Based on the observation that the closer an image is to the SVM boundary, the less reliable its classification is, SVM active learning selects those unlabeled images closest to the boundary to solicit user feedback, so as to achieve maximal refinement of the hyperplane between the two classes. The major disadvantage of SVM active learning is that the estimated boundary may not be accurate enough. Moreover, it cannot be applied at the beginning of the retrieval, when there are no labeled images. Some other SVM based active learning algorithms can be found in [7], [9].

In statistics, the problem of selecting samples to label is typically referred to as experimental design. The sample x is referred to as the experiment, and its label y is referred to as the measurement. The study of optimal experimental design (OED) [1] is concerned with the design of experiments that are expected to minimize the variances of a parameterized model. The intent of optimal experimental design is usually to maximize confidence in a given model, minimize parameter variances for system identification, or minimize the model's output variance. Classical experimental design approaches include A-Optimal Design, D-Optimal Design, and E-Optimal Design. All of these approaches are based on a least squares regression model. Compared to SVM based active learning algorithms, experimental design approaches are much more computationally efficient.
However, these approaches take only the measured (or, labeled) data into account in their objective functions, while the unmeasured (or, unlabeled) data is ignored.

Benefiting from recent progress in optimal experimental design and semi-supervised learning, in this paper we propose a novel active learning algorithm for image retrieval, called Laplacian Optimal Design (LOD). Unlike traditional experimental design methods whose loss functions are defined only on the measured points, the loss function of our proposed LOD algorithm is defined on both measured and unmeasured points. Specifically, we introduce a locality preserving regularizer into the standard least-square-error based loss function. The new loss function aims to find a classifier which is locally as smooth as possible. In other words, if two points are sufficiently close to each other in the input space, then they are expected to share the same label. Once the loss function is defined, we can select the most informative data points, which are presented to the user for labeling. It is important to note that the most informative images may not be the top returned images.

The rest of the paper is organized as follows. In Section 2, we provide a brief description of the related work. Our proposed Laplacian Optimal Design algorithm is introduced in Section 3. In Section 4, we compare our algorithm with state-of-the-art algorithms and present the experimental results on image retrieval. Finally, we provide some concluding remarks and suggestions for future work in Section 5.

2. RELATED WORK

Since our proposed algorithm is based on a regression framework, the most related work is optimal experimental design [1], including A-Optimal Design, D-Optimal Design, and E-Optimal Design.
In this section, we give a brief description of these approaches.

2.1 The Active Learning Problem

The generic problem of active learning is the following. Given a set of points $A = \{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_m\}$ in $\mathbb{R}^d$, find a subset $B = \{\mathbf{z}_1, \mathbf{z}_2, \cdots, \mathbf{z}_k\} \subset A$ which contains the most informative points. In other words, the points $\mathbf{z}_i$ ($i = 1, \cdots, k$) can improve the classifier the most if they are labeled and used as training points.

2.2 Optimal Experimental Design

We consider a linear regression model
$$y = \mathbf{w}^T\mathbf{x} + \epsilon \qquad (1)$$
where $y$ is the observation, $\mathbf{x}$ is the independent variable, $\mathbf{w}$ is the weight vector and $\epsilon$ is an unknown error with zero mean. Different observations have errors that are independent, but with equal variances $\sigma^2$. We define $f(\mathbf{x}) = \mathbf{w}^T\mathbf{x}$ to be the learner's output given input $\mathbf{x}$ and the weight vector $\mathbf{w}$. Suppose we have a set of labeled sample points $(\mathbf{z}_1, y_1), \cdots, (\mathbf{z}_k, y_k)$, where $y_i$ is the label of $\mathbf{z}_i$.
Thus, the maximum likelihood estimate of the weight vector, $\hat{\mathbf{w}}$, is the one which minimizes the sum squared error
$$\mathcal{J}_{sse}(\mathbf{w}) = \sum_{i=1}^k \left(\mathbf{w}^T\mathbf{z}_i - y_i\right)^2. \qquad (2)$$
The estimate $\hat{\mathbf{w}}$ gives us an estimate of the output at a novel input: $\hat{y} = \hat{\mathbf{w}}^T\mathbf{x}$.

By the Gauss-Markov theorem, we know that $\hat{\mathbf{w}} - \mathbf{w}$ has zero mean and a covariance matrix given by $\sigma^2 H_{sse}^{-1}$, where $H_{sse}$ is the Hessian of $\mathcal{J}_{sse}(\mathbf{w})$:
$$H_{sse} = \frac{\partial^2 \mathcal{J}_{sse}}{\partial \mathbf{w}^2} = \sum_{i=1}^k \mathbf{z}_i\mathbf{z}_i^T = \mathbf{Z}\mathbf{Z}^T,$$
where $\mathbf{Z} = (\mathbf{z}_1, \mathbf{z}_2, \cdots, \mathbf{z}_k)$.

The three most common scalar measures of the size of the parameter covariance matrix in optimal experimental design are:

• D-optimal design: determinant of $H_{sse}$.
• A-optimal design: trace of $H_{sse}$.
• E-optimal design: maximum eigenvalue of $H_{sse}$.

Since the computation of the determinant and eigenvalues of a matrix is much more expensive than the computation of the matrix trace, A-optimal design is more efficient than the other two. Some recent work on experimental design can be found in [6], [16].

3. LAPLACIAN OPTIMAL DESIGN

Since the covariance matrix $H_{sse}$ used in traditional approaches depends only on the measured samples, i.e. the $\mathbf{z}_i$'s, these approaches fail to evaluate the expected errors on the unmeasured samples. In this section, we introduce a novel active learning algorithm called Laplacian Optimal Design (LOD) which makes efficient use of both measured (labeled) and unmeasured (unlabeled) samples.

3.1 The Objective Function

In many machine learning problems, it is natural to assume that if two points $\mathbf{x}_i$, $\mathbf{x}_j$ are sufficiently close to each other, then their measurements $f(\mathbf{x}_i)$, $f(\mathbf{x}_j)$ are close as well. Let $S$ be a similarity matrix. Thus, a new loss function which respects the geometrical structure of the data space can be defined as follows:
$$\mathcal{J}_0(\mathbf{w}) = \sum_{i=1}^k \left(f(\mathbf{z}_i) - y_i\right)^2 + \frac{\lambda}{2}\sum_{i,j=1}^m \left(f(\mathbf{x}_i) - f(\mathbf{x}_j)\right)^2 S_{ij} \qquad (3)$$
where $y_i$ is the measurement (or, label) of $\mathbf{z}_i$.
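As an aside, the three classical criteria of Section 2.2 can be computed directly from the measured design matrix; a minimal numpy sketch (an illustration, not code from the paper):

```python
import numpy as np

def oed_scores(Z):
    """Classical experimental design criteria for the measured points.

    Z: d x k matrix whose columns are the measured samples z_i.
    Returns the D-, A- and E-optimality scores of H_sse = Z Z^T.
    """
    H = Z @ Z.T                            # Hessian of the loss in Eq. (2)
    d_score = np.linalg.det(H)             # D-optimal: determinant
    a_score = np.trace(H)                  # A-optimal: trace
    e_score = np.linalg.eigvalsh(H).max()  # E-optimal: largest eigenvalue
    return d_score, a_score, e_score
```

As the text notes, the trace is the cheapest of the three to evaluate, which is why A-optimal design is preferred for efficiency.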
Note that the loss function (3) is essentially the same as the one used in Laplacian Regularized Regression (LRR, [2]). However, LRR is a passive learning algorithm where the training data is given. In this paper, we focus on how to select the most informative data for training. The loss function with our choice of symmetric weights S_ij (S_ij = S_ji) incurs a heavy penalty if neighboring points x_i and x_j are mapped far apart. Therefore, minimizing J_0(w) is an attempt to ensure that if x_i and x_j are close then f(x_i) and f(x_j) are close as well. There are many choices of the similarity matrix S. A simple definition is as follows:

S_ij = 1, if x_i is among the p nearest neighbors of x_j, or x_j is among the p nearest neighbors of x_i; 0, otherwise.    (4)

Let D be a diagonal matrix with D_ii = Σ_j S_ij, and let L = D − S. The matrix L is called the graph Laplacian in spectral graph theory [3]. Let y = (y_1, ..., y_k)^T and X = (x_1, ..., x_m). Following some simple algebraic steps, we see that:

J_0(w) = Σ_{i=1}^{k} (w^T z_i − y_i)² + (λ/2) Σ_{i,j=1}^{m} (w^T x_i − w^T x_j)² S_ij
= (y − Z^T w)^T (y − Z^T w) + λ w^T ( Σ_{i=1}^{m} D_ii x_i x_i^T − Σ_{i,j=1}^{m} S_ij x_i x_j^T ) w
= y^T y − 2 w^T Z y + w^T Z Z^T w + λ w^T (X D X^T − X S X^T) w
= y^T y − 2 w^T Z y + w^T (Z Z^T + λ X L X^T) w

The Hessian of J_0(w) can be computed as follows:

H_0 = ∂²J_0/∂w² = Z Z^T + λ X L X^T

In some cases, the matrix Z Z^T + λ X L X^T is singular (e.g. if m < d). Thus, there is no stable solution to the optimization problem Eq. (3).
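The graph construction above is easy to sketch numerically. The following is a minimal illustration with a small synthetic data matrix, using the p-nearest-neighbor rule of Eq. (4); variable names and sizes are our own choices, not the paper's:

```python
import numpy as np

def knn_similarity(X, p):
    """Symmetric 0/1 similarity matrix S of Eq. (4): S_ij = 1 iff x_i is
    among the p nearest neighbors of x_j, or vice versa. X is d x m
    (one data point per column)."""
    m = X.shape[1]
    # pairwise Euclidean distances, dist[i, j] = ||x_i - x_j||
    dist = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    S = np.zeros((m, m))
    for j in range(m):
        # the p nearest neighbors of x_j, excluding x_j itself
        nn = np.argsort(dist[:, j])[1:p + 1]
        S[nn, j] = 1
    return np.maximum(S, S.T)  # symmetrize: either direction suffices

rng = np.random.default_rng(0)
d, m, k, lam = 10, 6, 2, 0.5
X = rng.standard_normal((d, m))    # all candidate points, d x m
Z = X[:, :k]                       # measured (labeled) points, d x k
S = knn_similarity(X, p=2)
D = np.diag(S.sum(axis=1))
L = D - S                          # graph Laplacian L = D - S
H0 = Z @ Z.T + lam * X @ L @ X.T   # Hessian of J_0
# rank(Z Z^T) <= k and rank(X L X^T) <= m - 1, so with k + m - 1 < d
# the Hessian H0 is singular, as noted in the text.
```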
A common way to deal with this ill-posed problem is to introduce a Tikhonov regularizer into the loss function:

J(w) = Σ_{i=1}^{k} (w^T z_i − y_i)² + (λ_1/2) Σ_{i,j=1}^{m} (w^T x_i − w^T x_j)² S_ij + λ_2 ||w||²    (5)

The Hessian of the new loss function is given by:

H = ∂²J/∂w² = Z Z^T + λ_1 X L X^T + λ_2 I := Z Z^T + Λ

where I is the identity matrix and Λ = λ_1 X L X^T + λ_2 I. Clearly, H is of full rank. Requiring that the gradient of J(w) with respect to w vanish gives the optimal estimate ŵ:

ŵ = H^{−1} Z y

The following proposition states the bias and variance properties of the estimator for the coefficient vector w.

Proposition 3.1. E(ŵ − w) = −H^{−1} Λ w, Cov(ŵ) = σ² (H^{−1} − H^{−1} Λ H^{−1}).

Proof. Since y = Z^T w + ε and E(ε) = 0, it follows that

E(ŵ − w) = H^{−1} Z Z^T w − w    (6)
= H^{−1} (Z Z^T + Λ − Λ) w − w
= (I − H^{−1} Λ) w − w
= −H^{−1} Λ w    (7)

Noticing that Cov(y) = σ² I, the covariance matrix of ŵ has the expression:

Cov(ŵ) = H^{−1} Z Cov(y) Z^T H^{−1} = σ² H^{−1} Z Z^T H^{−1}
= σ² H^{−1} (H − Λ) H^{−1} = σ² (H^{−1} − H^{−1} Λ H^{−1})    (8)

Therefore the mean squared error matrix for the coefficients w is

E[(w − ŵ)(w − ŵ)^T]    (9)
= H^{−1} Λ w w^T Λ H^{−1} + σ² (H^{−1} − H^{−1} Λ H^{−1})    (10)

For any x, let ŷ = ŵ^T x be its predicted observation.
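The regularized estimate ŵ = H^{−1} Z y can be sketched numerically as follows. This is an illustrative example with synthetic data; the dense Gaussian similarity used here for brevity, and the values of λ_1 and λ_2, are our placeholders rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, k = 5, 20, 8
lam1, lam2 = 0.1, 0.01

X = rng.standard_normal((d, m))      # all candidate points
Z = X[:, :k]                         # the measured points
w_true = rng.standard_normal(d)
y = Z.T @ w_true + 0.1 * rng.standard_normal(k)  # y = Z^T w + eps

# Any symmetric nonnegative S works; a dense Gaussian similarity for brevity.
S = np.exp(-np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0))
L = np.diag(S.sum(axis=1)) - S       # graph Laplacian

# H = Z Z^T + Lambda, with Lambda = lam1 X L X^T + lam2 I (Eq. 5).
# lam2 > 0 guarantees H is full rank, so the solve is stable.
Lam = lam1 * X @ L @ X.T + lam2 * np.eye(d)
H = Z @ Z.T + Lam
w_hat = np.linalg.solve(H, Z @ y)    # w_hat = H^{-1} Z y

x_new = rng.standard_normal(d)
y_pred = w_hat @ x_new               # predicted observation y_hat = w_hat^T x
```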
The expected squared prediction error is

E(y − ŷ)² = E(ε + w^T x − ŵ^T x)²
= σ² + x^T E[(w − ŵ)(w − ŵ)^T] x
= σ² + x^T [H^{−1} Λ w w^T Λ H^{−1} + σ² H^{−1} − σ² H^{−1} Λ H^{−1}] x

Clearly the expected squared prediction error depends on the explanatory variable x; therefore the average expected squared prediction error over the complete data set A is

(1/m) Σ_{i=1}^{m} E(y_i − ŵ^T x_i)²
= (1/m) Σ_{i=1}^{m} x_i^T [H^{−1} Λ w w^T Λ H^{−1} + σ² H^{−1} − σ² H^{−1} Λ H^{−1}] x_i + σ²
= (1/m) Tr(X^T [σ² H^{−1} + H^{−1} Λ w w^T Λ H^{−1} − σ² H^{−1} Λ H^{−1}] X) + σ²

Since

Tr(X^T [H^{−1} Λ w w^T Λ H^{−1} − σ² H^{−1} Λ H^{−1}] X) ≪ Tr(σ² X^T H^{−1} X),

our Laplacian optimality criterion is thus formulated by minimizing the trace of X^T H^{−1} X.

Definition 1. Laplacian Optimal Design

min_{Z=(z_1,...,z_k)} Tr( X^T (Z Z^T + λ_1 X L X^T + λ_2 I)^{−1} X )    (11)

where z_1, ..., z_k are selected from {x_1, ..., x_m}.

4. KERNEL LAPLACIAN OPTIMAL DESIGN
Canonical experimental design approaches (e.g. A-Optimal Design, D-Optimal Design, and E-Optimal Design) only consider linear functions. They fail to discover the intrinsic geometry in the data when the data space is highly nonlinear. In this section, we describe how to perform Laplacian Optimal Design in a Reproducing Kernel Hilbert Space (RKHS), which gives rise to Kernel Laplacian Optimal Design (KLOD).
For given data points x_1, ..., x_m ∈ X with a positive definite Mercer kernel K : X × X → R, there exists a unique RKHS H_K of real valued functions on X.
Let K_t(s) be the function of s obtained by fixing t and letting K_t(s) := K(s, t). H_K consists of all finite linear combinations of the form Σ_{i=1}^{l} α_i K_{t_i} with t_i ∈ X, and limits of such functions as the t_i become dense in X. We have ⟨K_s, K_t⟩_{H_K} = K(s, t).

4.1 Derivation of LOD in Reproducing Kernel Hilbert Space
Consider the optimization problem (5) in the RKHS. Thus, we seek a function f ∈ H_K such that the following objective function is minimized:

min_{f ∈ H_K} Σ_{i=1}^{k} (f(z_i) − y_i)² + (λ_1/2) Σ_{i,j=1}^{m} (f(x_i) − f(x_j))² S_ij + λ_2 ||f||²_{H_K}    (12)

We have the following proposition.

Proposition 4.1. Let H = { Σ_{i=1}^{m} α_i K(·, x_i) | α_i ∈ R } be a subspace of H_K. The solution to problem (12) is in H.

Proof. Let H^⊥ be the orthogonal complement of H, i.e. H_K = H ⊕ H^⊥. Thus, any function f ∈ H_K has an orthogonal decomposition as follows:

f = f_H + f_{H^⊥}

Now, let us evaluate f at x_i:

f(x_i) = ⟨f, K_{x_i}⟩_{H_K}
= ⟨f_H + f_{H^⊥}, K_{x_i}⟩_{H_K}
= ⟨f_H, K_{x_i}⟩_{H_K} + ⟨f_{H^⊥}, K_{x_i}⟩_{H_K}

Notice that K_{x_i} ∈ H while f_{H^⊥} ∈ H^⊥. This implies that ⟨f_{H^⊥}, K_{x_i}⟩_{H_K} = 0. Therefore,

f(x_i) = ⟨f_H, K_{x_i}⟩_{H_K} = f_H(x_i)

This completes the proof.

Proposition 4.1 tells us that the minimizer of problem (12) admits a representation f* = Σ_{i=1}^{m} α_i K(·, x_i). Please see [2] for the details.
Let φ : R^d → H be a feature map from the input space R^d to H, with K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩. Let X denote the data matrix in the RKHS, X = (φ(x_1), φ(x_2), ..., φ(x_m)). Similarly, we define Z = (φ(z_1), φ(z_2), ..., φ(z_k)). Thus, the optimization problem in the RKHS can be written as follows:

min_Z Tr( X^T (Z Z^T + λ_1 X L X^T + λ_2 I)^{−1} X )    (13)

Since the mapping function φ is generally unknown, there is no direct way to solve problem (13).
In the following, we apply the kernel trick to solve this optimization problem. Let X^{−1} be the Moore-Penrose inverse (also known as the pseudo-inverse) of X. Thus, we have:

X^T (Z Z^T + λ_1 X L X^T + λ_2 I)^{−1} X
= X^T X X^{−1} (Z Z^T + λ_1 X L X^T + λ_2 I)^{−1} (X^T)^{−1} X^T X
= X^T X (Z Z^T X + λ_1 X L X^T X + λ_2 X)^{−1} (X^T)^{−1} X^T X
= X^T X (X^T Z Z^T X + λ_1 X^T X L X^T X + λ_2 X^T X)^{−1} X^T X
= K_XX (K_XZ K_ZX + λ_1 K_XX L K_XX + λ_2 K_XX)^{−1} K_XX

where K_XX is an m × m matrix (K_XX,ij = K(x_i, x_j)), K_XZ is an m × k matrix (K_XZ,ij = K(x_i, z_j)), and K_ZX is a k × m matrix (K_ZX,ij = K(z_i, x_j)). Thus, Kernel Laplacian Optimal Design can be defined as follows:

Definition 2. Kernel Laplacian Optimal Design

min_{Z=(z_1,...,z_k)} Tr( K_XX (K_XZ K_ZX + λ_1 K_XX L K_XX + λ_2 K_XX)^{−1} K_XX )    (14)

4.2 Optimization Scheme
In this subsection, we discuss how to solve the optimization problems (11) and (14). In particular, if we select a linear kernel for KLOD, then it reduces to LOD. Therefore, we will focus on problem (14) in the following.
It can be shown that the optimization problem (14) is NP-hard. In this subsection, we develop a simple sequential greedy approach to solve (14). Suppose n points have been selected, denoted by a matrix Z^n = (z_1, ..., z_n). The (n+1)-th point z_{n+1} can be selected by solving the following optimization problem:

min_{Z^{n+1}=(Z^n, z_{n+1})} Tr( K_XX (K_XZ^{n+1} K_Z^{n+1}X + λ_1 K_XX L K_XX + λ_2 K_XX)^{−1} K_XX )    (15)

The kernel matrices K_XZ^{n+1} and K_Z^{n+1}X can be rewritten as follows:

K_XZ^{n+1} = (K_XZ^n, K_Xz_{n+1}),  K_Z^{n+1}X = (K_Z^nX ; K_z_{n+1}X)

Thus, we have:

K_XZ^{n+1} K_Z^{n+1}X = K_XZ^n K_Z^nX + K_Xz_{n+1} K_z_{n+1}X

We define:

A = K_XZ^n K_Z^nX + λ_1 K_XX L K_XX + λ_2 K_XX

A depends only on X and Z^n.
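With A fixed, one greedy step amounts to scoring each remaining candidate z by the trace objective after the rank-one update of A, and keeping the best. The sketch below is an illustrative reimplementation under our own assumptions: a Gaussian kernel (the paper does not fix a kernel here), and the observation that since the z's are chosen from the x's, the columns of K_XZ can be read directly out of K_XX:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """K[i, j] = K(a_i, b_j) for columns of A (d x n1) and B (d x n2)."""
    sq = np.sum((A[:, :, None] - B[:, None, :]) ** 2, axis=0)
    return np.exp(-sq / (2 * sigma ** 2))

def greedy_step(KXX, L, selected, lam1=0.1, lam2=0.01):
    """Pick the index j minimizing Tr(KXX (A + K_Xz K_zX)^{-1} KXX),
    i.e. one step of the sequential greedy scheme for problem (14)."""
    m = KXX.shape[0]
    KXZ = KXX[:, selected]            # K_XZ^n: z's are drawn from the x's
    A = KXZ @ KXZ.T + lam1 * KXX @ L @ KXX + lam2 * KXX
    best, best_val = None, np.inf
    for j in range(m):
        if j in selected:
            continue
        Kxz = KXX[:, [j]]             # K_Xz_{n+1}, an m x 1 column
        M = A + Kxz @ Kxz.T           # rank-one update of A
        val = np.trace(KXX @ np.linalg.solve(M, KXX))
        if val < best_val:
            best, best_val = j, val
    return best

# Tiny usage example on synthetic data.
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 8))
KXX = gaussian_kernel(X, X)
S = (KXX > 0.5).astype(float)
np.fill_diagonal(S, 0)
L = np.diag(S.sum(axis=1)) - S
nxt = greedy_step(KXX, L, selected=[0])
```

Note that λ_2 K_XX keeps the inner matrix invertible, so `np.linalg.solve` is stable at every step.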
Thus, the (n+1)-th point z_{n+1} is given by:

z_{n+1} = arg min_{z_{n+1}} Tr( K_XX (A + K_Xz_{n+1} K_z_{n+1}X)^{−1} K_XX )    (16)

Each time we select a new point z_{n+1}, the matrix A is updated by:

A ← A + K_Xz_{n+1} K_z_{n+1}X

If the kernel function is chosen as the inner product K(x, y) = ⟨x, y⟩, then H_K is a linear functional space and the algorithm reduces to LOD.

5. CONTENT-BASED IMAGE RETRIEVAL USING LAPLACIAN OPTIMAL DESIGN
In this section, we describe how to apply Laplacian Optimal Design to CBIR. We begin with a brief description of image representation using low level visual features.

5.1 Low-Level Image Representation
Low-level image representation is a crucial problem in CBIR. General visual features include color, texture, shape, etc. Color and texture features are the most extensively used visual features in CBIR. Compared with color and texture features, shape features are usually described after images have been segmented into regions or objects. Since robust and accurate image segmentation is difficult to achieve, the use of shape features for image retrieval has been limited to special applications where objects or regions are readily available.
In this work, we combine a 64-dimensional color histogram and a 64-dimensional Color Texture Moment (CTM, [15]) to represent the images. The color histogram is calculated using 4 × 4 × 4 bins in HSV space. The Color Texture Moment was proposed by Yu et al. [15]; it integrates the color and texture characteristics of the image in a compact form. CTM adopts the local Fourier transform as a texture representation scheme and derives eight characteristic maps to describe different aspects of the co-occurrence relations of image pixels in each channel of the (SVcosH, SVsinH, V) color space. Then CTM calculates the first and second moments of these maps as a representation of the natural color image pixel distribution.
Please see [15] for details.

5.2 Relevance Feedback Image Retrieval
Relevance feedback is one of the most important techniques to narrow the gap between low level visual features and high level semantic concepts [12]. Traditionally, the user's relevance feedbacks are used to update the query vector or adjust the weighting of different dimensions. This process can be viewed as an on-line learning process in which the image retrieval system acts as a learner and the user acts as a teacher. The typical retrieval process is outlined as follows:
1. The user submits a query image example to the system. The system ranks the images in the database according to some pre-defined distance metric and presents the top ranked images to the user.
2. The system selects some images from the database and requests the user to label them as relevant or irrelevant.
3. The system uses the information provided by the user to re-rank the images in the database and returns the top images to the user. Go to step 2 until the user is satisfied.
Our Laplacian Optimal Design algorithm is applied in the second step for selecting the most informative images. Once we get the labels for the images selected by LOD, we apply Laplacian Regularized Regression (LRR, [2]) to solve the optimization problem (3) and build the classifier. The classifier is then used to re-rank the images in the database. Note that, in order to reduce the computational complexity, we do not use all the unlabeled images in the database but only those within the top 500 returns of the previous iteration.

6. EXPERIMENTAL RESULTS
In this section, we evaluate the performance of our proposed algorithm on a large image database. To demonstrate the effectiveness of our proposed LOD algorithm, we compare it with Laplacian Regularized Regression (LRR, [2]), Support Vector Machine (SVM), Support Vector Machine Active Learning (SVMactive) [14], and A-Optimal Design (AOD).
SVMactive, AOD, and LOD are all active learning algorithms, while LRR and SVM are standard classification algorithms. SVM only makes use of the labeled images, while LRR is a semi-supervised learning algorithm which makes use of both labeled and unlabeled images. For SVMactive, AOD, and LOD, 10 training images are selected by the algorithms themselves at each iteration, while for LRR and SVM, we use the top 10 images as training data. It is important to note that SVMactive is based on the ordinary SVM, LOD is based on LRR, and AOD is based on ordinary regression. The parameters λ_1 and λ_2 in our LOD algorithm are empirically set to 0.001 and 0.00001. For both the LRR and LOD algorithms, we use the same graph structure (see Eq. 4) and set the value of p (the number of nearest neighbors) to 5. We begin with a simple synthetic example to give some intuition about how LOD works.

6.1 Simple Synthetic Example
A simple synthetic example is given in Figure 1. The data set contains two circles. Eight points are selected by AOD and LOD. As can be seen, all the points selected by AOD are from the big circle, while LOD selects four points from the big circle and four from the small circle. The numbers beside the selected points denote their order of selection. Clearly, the points selected by our LOD algorithm better represent the original data set. We did not compare our algorithm with SVMactive because SVMactive cannot be applied in this case due to the lack of labeled points.

6.2 Image Retrieval Experimental Design
The image database we used consists of 7,900 images of 79 semantic categories from the COREL data set. It is a large and heterogeneous image set. Each image is represented as a 128-dimensional vector as described in Section 5.1.
Figure 2 shows some sample images.
To exhibit the advantages of using our algorithm, we need a reliable way of evaluating the retrieval performance and the comparisons with other algorithms. We list different aspects of the experimental design below.

6.2.1 Evaluation Metrics
We use the precision-scope curve and the precision rate [10] to evaluate the effectiveness of the image retrieval algorithms. The scope is specified by the number (N) of top-ranked images presented to the user. The precision is the ratio of the number of relevant images presented to the user to the scope N. The precision-scope curve describes the precision at various scopes and thus gives an overall performance evaluation of the algorithms. On the other hand, the precision rate emphasizes the precision at a particular value of scope. In general, it is appropriate to present 20 images on a screen. Putting more images on a screen may affect the quality of the presented images. Therefore, the precision at top 20 (N = 20) is especially important.

Figure 1: Data selection by active learning algorithms ((a) data set, (b) AOD, (c) LOD). The numbers beside the selected points denote their orders to be selected. Clearly, the points selected by our LOD algorithm can better represent the original data set. Note that the SVMactive algorithm cannot be applied in this case due to the lack of labeled points.

Figure 2: Sample images from category bead, elephant, and ship.

In real world image retrieval systems, the query image is usually not in the image database. To simulate such an environment, we use five-fold cross validation to evaluate the algorithms. More precisely, we divide the whole image database into five subsets of equal size. Thus, there are 20 images per category in each subset.
At each run of cross validation,\none subset is selected as the query set, and the other four\nsubsets are used as the database for retrieval. The\nprecisionscope curve and precision rate are computed by averaging\nthe results from the five-fold cross validation.\n6.2.2 Automatic Relevance Feedback Scheme\nWe designed an automatic feedback scheme to model the\nretrieval process. For each submitted query, our system\nretrieves and ranks the images in the database. 10 images\nwere selected from the database for user labeling and the\nlabel information is used by the system for re-ranking. Note\nthat, the images which have been selected at previous\niterations are excluded from later selections. For each query,\nthe automatic relevance feedback mechanism is performed\nfor four iterations.\nIt is important to note that the automatic relevance\nfeedback scheme used here is different from the ones described\nin [8], [11]. In [8], [11], the top four relevant and irrelevant\nimages were selected as the feedback images. However, this\nmay not be practical. In real world image retrieval systems,\nit is possible that most of the top-ranked images are relevant\n(or, irrelevant). Thus, it is difficult for the user to find both\nfour relevant and irrelevant images. It is more reasonable\nfor the users to provide feedback information only on the 10\nimages selected by the system.\n6.3 Image Retrieval Performance\nIn real world, it is not practical to require the user to\nprovide many rounds of feedbacks. The retrieval\nperformance after the first two rounds of feedbacks (especially the\nfirst round) is more important. Figure 3 shows the average\nprecision-scope curves of the different algorithms for the first\ntwo feedback iterations. At the beginning of retrieval, the\nEuclidean distances in the original 128-dimensional space\nare used to rank the images in database. 
After the user provides relevance feedback, the LRR, SVM, SVMactive, AOD, and LOD algorithms are then applied to re-rank the images. In order to reduce the time complexity of the active learning algorithms, we did not select the most informative images from the whole database but from the top 500 images. For LRR and SVM, the user is required to label the top 10 images. For SVMactive, AOD, and LOD, the user is required to label the 10 most informative images selected by these algorithms. Note that SVMactive can only be applied when a classifier has already been built. Therefore, it cannot be applied at the first round, and we use the standard SVM to build the initial classifier. As can be seen, our LOD algorithm outperforms the other four algorithms over the entire scope. Also, the LRR algorithm performs better than SVM. This is because the LRR algorithm makes efficient use of the unlabeled images by incorporating a locality preserving regularizer into the ordinary regression objective function. The AOD algorithm performs the worst.

Figure 3: The average precision-scope curves of different algorithms for the first two feedback iterations ((a) feedback iteration 1, (b) feedback iteration 2). The LOD algorithm performs the best on the entire scope. Note that, at the first round of feedback, the SVMactive algorithm cannot be applied; the ordinary SVM is used to build the initial classifier.

Figure 4: Performance evaluation of the five learning algorithms for relevance feedback image retrieval: (a) precision at top 10, (b) precision at top 20, and (c) precision at top 30. As can be seen, our LOD algorithm consistently outperforms the other four algorithms.
As the\nscope gets larger, the performance difference between these\nalgorithms gets smaller.\nBy iteratively adding the user\"s feedbacks, the\ncorresponding precision results (at top 10, top 20, and top 30) of the\nfive algorithms are respectively shown in Figure 4. As can be\nseen, our LOD algorithm performs the best in all the cases\nand the LRR algorithm performs the second best. Both of\nthese two algorithms make use of the unlabeled images. This\nshows that the unlabeled images are helpful for discovering\nthe intrinsic geometrical structure of the image space and\ntherefore enhance the retrieval performance. In real world,\nthe user may not be willing to provide too many relevance\nfeedbacks. Therefore, the retrieval performance at the first\ntwo rounds are especially important. As can be seen, our\nLOD algorithm achieves 6.8% performance improvement for\ntop 10 results, 5.2% for top 20 results, and 4.1% for top 30\nresults, comparing to the second best algorithm (LRR) after\nthe first two rounds of relevance feedbacks.\n6.4 Discussion\nSeveral experiments on Corel database have been\nsystematically performed. We would like to highlight several\ninteresting points:\n1. It is clear that the use of active learning is beneficial\nin the image retrieval domain. There is a significant\nincrease in performance from using the active learning\nmethods. Especially, out of the three active learning\nmethods (SVMactive, AOD, LOD), our proposed LOD\nalgorithm performs the best.\n2. In many real world applications like relevance\nfeedback image retrieval, there are generally two ways of\nreducing labor-intensive manual labeling task. One is\nactive learning which selects the most informative\nsamples to label, and the other is semi-supervised learning\nwhich makes use of the unlabeled samples to enhance\nthe learning performance. Both of these two\nstrategies have been studied extensively in the past [14],\n[7], [5], [8]. 
The work presented in this paper is\nfocused on active learning, but it also takes advantage of\nthe recent progresses on semi-supervised learning [2].\nSpecifically, we incorporate a locality preserving\nregularizer into the standard regression framework and\nfind the most informative samples with respect to the\nnew objective function. In this way, the active learning\nand semi-supervised learning techniques are seamlessly\nunified for learning an optimal classifier.\n3. The relevance feedback technique is crucial to image\nretrieval. For all the five algorithms, the retrieval\nperformance improves with more feedbacks provided by\nthe user.\n7. CONCLUSIONS AND FUTURE WORK\nThis paper describes a novel active learning algorithm,\ncalled Laplacian Optimal Design, to enable more effective\nrelevance feedback image retrieval. Our algorithm is based\non an objective function which simultaneously minimizes the\nempirical error and preserves the local geometrical structure\nof the data space. Using techniques from experimental\ndesign, our algorithm finds the most informative images to\nlabel. These labeled images and the unlabeled images in\nthe database are used to learn a classifier. The\nexperimental results on Corel database show that both active learning\nand semi-supervised learning can significantly improve the\nretrieval performance.\nIn this paper, we consider the image retrieval problem on\na small, static, and closed-domain image data. A much more\nchallenging domain is the World Wide Web (WWW). For\nWeb image search, it is possible to collect a large amount\nof user click information. This information can be naturally\nused to construct the affinity graph in our algorithm.\nHowever, the computational complexity in Web scenario may\nbecome a crucial issue. 
Also, although our primary interest in this paper is focused on relevance feedback image retrieval, our results may also be of interest to researchers in pattern recognition and machine learning, especially when a large amount of data is available but only a limited number of samples can be labeled.

8. REFERENCES
[1] A. C. Atkinson and A. N. Donev. Optimum Experimental Designs. Oxford University Press, 2002.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from examples. Journal of Machine Learning Research, 7:2399-2434, 2006.
[3] F. R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.
[4] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129-145, 1996.
[5] A. Dong and B. Bhanu. A new semi-supervised EM algorithm for image retrieval. In IEEE Conf. on Computer Vision and Pattern Recognition, Madison, WI, 2003.
[6] P. Flaherty, M. I. Jordan, and A. P. Arkin. Robust design of biological experiments. In Advances in Neural Information Processing Systems 18, Vancouver, Canada, 2005.
[7] K.-S. Goh, E. Y. Chang, and W.-C. Lai. Multimodal concept-dependent active learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[8] X. He. Incremental semi-supervised subspace learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, New York, October 2004.
[9] S. C. Hoi and M. R. Lyu. A semi-supervised active learning framework for image retrieval. In IEEE International Conference on Computer Vision and Pattern Recognition, San Diego, CA, 2005.
[10] D. P. Huijsmans and N. Sebe. How to complete performance graphs in content-based image retrieval: Add generality and normalize scope. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2):245-251, 2005.
[11] Y.-Y. Lin, T.-L.
Liu, and H.-T. Chen. Semantic manifold learning for image retrieval. In Proceedings of the ACM Conference on Multimedia, Singapore, November 2005.
[12] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra. Relevance feedback: A power tool for interactive content-based image retrieval. IEEE Transactions on Circuits and Systems for Video Technology, 8(5), 1998.
[13] A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349-1380, 2000.
[14] S. Tong and E. Chang. Support vector machine active learning for image retrieval. In Proceedings of the ninth ACM international conference on Multimedia, pages 107-118, 2001.
[15] H. Yu, M. Li, H.-J. Zhang, and J. Feng. Color texture moments for content-based image retrieval. In International Conference on Image Processing, pages 24-28, 2002.
[16] K. Yu, J. Bi, and V. Tresp. Active learning via transductive experimental design. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006.
Fast Generation of Result Snippets in Web Search

Abstract. The presentation of query biased document snippets as part of results pages presented by search engines has become an expectation of search engine users. In this paper we explore the algorithms and data structures required as part of a search engine to allow efficient generation of query biased snippets. We begin by proposing and analysing a document compression method that reduces snippet generation time by 58% over a baseline using the zlib compression library. These experiments reveal that finding documents on secondary storage dominates the total cost of generating snippets, and so caching documents in RAM is essential for a fast snippet generation process. Using simulation, we examine snippet generation performance for different size RAM caches. Finally we propose and analyse document reordering and compaction, revealing a scheme that increases the number of document cache hits with only a marginal effect on snippet quality. This scheme effectively doubles the number of documents that can fit in a fixed size cache.

1. INTRODUCTION
Each result in a search results list delivered by current WWW search engines such as search.yahoo.com, google.com and search.msn.com typically contains the title and URL of the actual document, links to live and cached versions of the document, and sometimes an indication of file size and type. In addition, one or more snippets are usually presented, giving the searcher a sneak preview of the document contents. Snippets are short fragments of text extracted from the document content (or its metadata). They may be static (for example, always show the first 50 words of the document, or the content of its description metadata, or a description taken from a directory site such as dmoz.org) or query-biased [20].
A query-biased snippet is one selectively\nextracted on the basis of its relation to the searcher\"s query.\nThe addition of informative snippets to search results may\nsubstantially increase their value to searchers. Accurate\nsnippets allow the searcher to make good decisions about\nwhich results are worth accessing and which can be ignored.\nIn the best case, snippets may obviate the need to open any\ndocuments by directly providing the answer to the searcher\"s\nreal information need, such as the contact details of a person\nor an organization.\nGeneration of query-biased snippets by Web search\nengines indexing of the order of ten billion web pages and\nhandling hundreds of millions of search queries per day imposes\na very significant computational load (remembering that\neach search typically generates ten snippets). The\nsimpleminded approach of keeping a copy of each document in a\nfile and generating snippets by opening and scanning files,\nworks when query rates are low and collections are small,\nbut does not scale to the degree required. The overhead of\nopening and reading ten files per query on top of\naccessing the index structure to locate them, would be manifestly\nexcessive under heavy query load. Even storing ten billion\nfiles and the corresponding hundreds of terabytes of data is\nbeyond the reach of traditional filesystems. Special-purpose\nfilesystems have been built to address these problems [6].\nNote that the utility of snippets is by no means restricted\nto whole-of-Web search applications. Efficient generation of\nsnippets is also important at the scale of whole-of-government\nsearch services such as www.firstgov.gov (c. 25 million\npages) and govsearch.australia.gov.au (c. 5 million pages)\nand within large enterprises such as IBM [2] (c. 50 million\npages). 
Snippets may be even more useful in database or\nfilesystem search applications in which no useful URL or\ntitle information is present.\nWe present a new algorithm and compact single-file\nstructure designed for rapid generation of high quality snippets\nand compare its space/time performance against an obvious\nbaseline based on the zlib compressor on various data sets.\nWe report the proportion of time spent for disk seeks, disk\nreads and cpu processing; demonstrating that the time for\nlocating each document (seek time) dominates, as expected.\nAs the time to process a document in RAM is small in\ncomparison to locating and reading the document into\nmemory, it may seem that compression is not required. However,\nthis is only true if there is no caching of documents in RAM.\nControlling the RAM of physical systems for\nexperimentation is difficult, hence we use simulation to show that caching\ndocuments dramatically improves the performance of\nsnippet generation. In turn, the more documents can be\ncompressed, the more can fit in cache, and hence the more disk\nseeks can be avoided: the classic data compression tradeoff\nthat is exploited in inverted file structures and computing\nranked document lists [24].\nAs hitting the document cache is important, we examine\ndocument compaction, as opposed to compression, schemes\nby imposing an a priori ordering of sentences within a\ndocument, and then only allowing leading sentences into cache for\neach document. This leads to further time savings, with only\nmarginal impact on the quality of the snippets returned.\n2. RELATED WORK\nSnippet generation is a special type of extractive\ndocument summarization, in which sentences, or sentence\nfragments, are selected for inclusion in the summary on the basis\nof the degree to which they match the search query. This\nprocess was given the name of query-biased summarization\nby Tombros and Sanderson [20] The reader is referred to\nMani [13] and to Radev et al. 
[16] for overviews of the very many different applications of summarization, and of the equally diverse methods for producing summaries.

Early Web search engines presented query-independent snippets consisting of the first k bytes of the result document. Generating these is clearly much simpler and much less computationally expensive than processing documents to extract query-biased summaries, as there is no need to search the document for text fragments containing query terms. To our knowledge, Google was the first whole-of-Web search engine to provide query-biased summaries, but summarization is listed by Brin and Page [1] only under the heading of future work.

Most of the experimental work using query-biased summarization has focused on comparing the value of such summaries to searchers relative to other types of summary [20, 21], rather than on efficient generation of summaries. Despite the importance of efficient summary generation in Web search, few algorithms appear in the literature. Silber and McKoy [19] describe a linear-time lexical chaining algorithm for use in generic summaries, but offer no empirical data on the performance of their algorithm. White et al. [21] report some experimental timings of their WebDocSum system, but the snippet generation algorithms themselves are not isolated, so it is difficult to infer snippet generation times comparable to the times we report in this paper.

3. SEARCH ENGINE ARCHITECTURES

A search engine must perform a variety of activities, and is comprised of many sub-systems, as depicted by the boxes in Figure 1. Note that there may be several other sub-systems, such as the Advertising Engine or the Parsing Engine, that could easily be added to the diagram, but we have concentrated on the sub-systems that are relevant to snippet generation.
Depending on the number of documents that the search engine indexes, the data and processes for each sub-system could be distributed over many machines, or all occupy a single server and filesystem, competing with each other for resources.

[Figure 1: An abstraction of some of the sub-systems in a search engine (Crawling, Indexing, Lexicon, Ranking, Snippet, Meta Data and Query Engines), and the data flowing between them: query strings, term and document numbers, inverted lists, per-document snippet text, and document meta data such as title and URL. Depending on the number of documents indexed, each sub-system could reside on a single machine, be distributed across thousands of machines, or a combination of both.]

Similarly, it may be more efficient to combine some sub-systems in an implementation of the diagram. For example, the meta-data such as document title and URL requires minimal computation apart from highlighting query words, but we note that disk seeking is likely to be minimized if title, URL and fixed summary information is stored contiguously with the text from which query-biased summaries are extracted. Here we ignore the fixed text and consider only the generation of query-biased summaries: we concentrate on the Snippet Engine.

In addition to data and programs operating on that data, each sub-system also has its own memory management scheme. The memory management system may simply be the memory hierarchy provided by the operating system used on machines in the sub-system, or it may be explicitly coded to optimise the processes in the sub-system.

There are many papers on caching in search engines (see [3] and references therein for a current summary), but it seems reasonable that there is a query cache in the Query Engine that stores precomputed final result pages for very popular queries.
When one of the popular queries is issued, the result page is fetched straight from the query cache. If the issued query is not in the query cache, then the Query Engine uses the four sub-systems in turn to assemble a results page.

1. The Lexicon Engine maps query terms to integers.
2. The Ranking Engine retrieves inverted lists for each term, using them to get a ranked list of documents.
3. The Snippet Engine uses those document numbers and query term numbers to generate snippets.
4. The Meta Data Engine fetches other information about each document to construct the results page.

IN: A document broken into one sentence per line, and a sequence of query terms.
1 For each line of the text, L = [w1, w2, . . . , wm]
2 Let h be 1 if L is a heading, 0 otherwise.
3 Let ℓ be 2 if L is the first line of a document, 1 if it is the second line, 0 otherwise.
4 Let c be the number of wi that are query terms, counting repetitions.
5 Let d be the number of distinct query terms that match some wi.
6 Identify the longest contiguous run of query terms in L, say wj . . . wj+k.
7 Use a weighted combination of c, d, k, h and ℓ to derive a score s.
8 Insert L into a max-heap using s as the key.
OUT: Remove the number of sentences required from the heap to form the summary.

Figure 2: Simple sentence ranker that operates on raw text with one sentence per line.

4. THE SNIPPET ENGINE

For each document identifier passed to the Snippet Engine, the engine must generate text, preferably containing query terms, that attempts to summarize that document. Previous work on summarization identifies the sentence as the minimal unit for extraction and presentation to the user [12]. Accordingly, we also assume a web snippet extraction process will extract sentences from documents. In order to construct a snippet, all sentences in a document should be ranked against the query, and the top two or three returned as the snippet.
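A minimal sketch of the sentence ranker of Figure 2, in Python. The weight vector is a placeholder: the paper deliberately leaves the combination of c, d, k, h and ℓ unspecified, so equal weights are assumed here purely for illustration.

```python
import heapq

def score_sentence(words, query, is_heading, line_pos,
                   weights=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Compute the Figure 2 features (c, d, k, h, l) and combine them.

    The weight vector is a placeholder; the paper does not fix one.
    """
    qset = set(query)
    h = 1 if is_heading else 0                              # heading flag
    l = 2 if line_pos == 0 else 1 if line_pos == 1 else 0   # leading lines
    c = sum(1 for w in words if w in qset)   # query terms, with repetitions
    d = len(qset & set(words))               # distinct query terms matched
    k = run = 0                              # longest contiguous run
    for w in words:
        run = run + 1 if w in qset else 0
        k = max(k, run)
    wc, wd, wk, wh, wl = weights
    return wc * c + wd * d + wk * k + wh * h + wl * l

def top_snippet_sentences(sentences, query, want=3):
    """sentences: list of (words, is_heading) pairs in document order."""
    heap = []  # min-heap over negated scores emulates the max-heap of step 8
    for pos, (words, is_heading) in enumerate(sentences):
        s = score_sentence(words, query, is_heading, pos)
        heapq.heappush(heap, (-s, pos, words))
    return [heapq.heappop(heap)[2] for _ in range(min(want, len(heap)))]
```

A real implementation would bound the heap to the number of sentences wanted, but the structure of the computation is the same.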
The scoring of sentences against queries has been explored in several papers [7, 12, 18, 20, 21], with different features of sentences deemed important.

Based on these observations, Figure 2 shows the general algorithm for scoring sentences in relevant documents, with the highest scoring sentences making the snippet for each document. The final score of a sentence, assigned in Step 7, can be derived in many different ways. In order to avoid bias towards any particular scoring mechanism, we compare sentence quality later in the paper using the individual components of the score, rather than an arbitrary combination of the components.

4.1 Parsing Web Documents

Unlike the well-edited text collections that are often the target for summarization systems, Web data is often poorly structured and poorly punctuated, and contains a lot of data that does not form part of valid sentences that would be candidates for parts of snippets.

We assume that the documents passed to the Snippet Engine by the Indexing Engine have all HTML tags and JavaScript removed, and that each document is reduced to a series of word tokens separated by non-word tokens. We define a word token as a sequence of alphanumeric characters, and a non-word as a sequence of non-alphanumeric characters such as whitespace and other punctuation symbols. Both are limited to a maximum of 50 characters. Adjacent, repeating characters are removed from the punctuation. Included in the punctuation set is a special end-of-sentence marker which replaces the usual sentence terminators '?', '!' and '.'. Often these explicit punctuation characters are missing, and so HTML tags such as
those marking paragraphs and line breaks are assumed to terminate sentences. In addition, a sentence must contain at least five words and no more than twenty words, with longer or shorter sentences being broken and joined as required to meet these criteria [10]. Unterminated HTML tags (that is, tags with an open brace but no close brace) cause all text from the open brace to the next open brace to be discarded.

A major problem in summarizing web pages is the presence of large amounts of promotional and navigational material (navbars), visually above and to the left of the actual page content. For example: "The most wonderful company on earth. Products. Service. About us. Contact us. Try before you buy." Similar, but often not identical, navigational material is typically presented on every page within a site. This material tends to lower the quality of summaries and slow down summary generation.

In our experiments we did not use any particular heuristics for removing navigational information, as the test collection in use (wt100g) pre-dates the widespread take-up of the current style of web publishing. In wt100g the average web page size is more than half the current Web average [11]. Anecdotally, the increase is due to the inclusion of sophisticated navigational and interface elements and the JavaScript functions to support them.

Having defined the format of documents that are presented to the Snippet Engine, we now define our Compressed Token System (CTS) document storage scheme, and the baseline system used for comparison.

4.2 Baseline Snippet Engine

An obvious document representation scheme is to simply compress each document with a well-known adaptive compressor, and then decompress the document as required [1], using a string matching algorithm to effect the algorithm in Figure 2.
Accordingly, we implemented such a system, using zlib [4] with default parameters to compress every document after it has been parsed as in Section 4.1. Each document is stored in a single file. While manageable for our small test collections, or for small enterprises with millions of documents, a full Web search engine may require multiple documents to inhabit single files, or a special-purpose filesystem [6].

For snippet generation, the required documents are decompressed one at a time, and a linear search for the provided query terms is employed. The search is optimized for our specific task, which is restricted to matching whole words and the sentence-terminating token, rather than general pattern matching.

4.3 The CTS Snippet Engine

Several optimizations over the baseline are possible. The first is to employ a semi-static compression method over the entire document collection, which allows faster decompression with minimal compression loss [24]. Using a semi-static approach involves mapping the words and non-words produced by the parser to single integer tokens, with frequent symbols receiving small integers, and then choosing a coding scheme that assigns small numbers a small number of bits. Words and non-words strictly alternate in the compressed file, which always begins with a word.

In this instance we simply assign each symbol its ordinal number in a list of symbols sorted by frequency. We use the vbyte coding scheme to code the word tokens [22]. The set of non-words is limited to the 64 most common punctuation sequences in the collection itself, and these are encoded with a flat 6-bit binary code. The remaining 2 bits of each punctuation symbol are used to store capitalization information.

The process of computing the semi-static model is complicated by the fact that the number of words and non-words appearing in large web collections is high.
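As an aside, vbyte coding stores an integer in seven-bit chunks, with one bit per byte flagging where a code ends; a minimal sketch follows. The chunk order and flag convention vary between implementations, so the details here are illustrative rather than a description of the exact code used by CTS.

```python
def vbyte_encode(n: int) -> bytes:
    """Encode a non-negative integer as a vbyte sequence.

    Seven payload bits per byte, least-significant chunk first;
    the high bit is set on the final byte to mark the end.
    (Flag conventions differ between implementations.)
    """
    out = bytearray()
    while n >= 128:
        out.append(n & 0x7F)   # continuation byte: high bit clear
        n >>= 7
    out.append(n | 0x80)       # terminator byte: high bit set
    return bytes(out)

def vbyte_decode(data: bytes, pos: int = 0):
    """Decode one integer starting at pos; return (value, next_pos)."""
    value = shift = 0
    while True:
        b = data[pos]
        pos += 1
        value |= (b & 0x7F) << shift
        shift += 7
        if b & 0x80:           # terminator reached
            return value, pos
```

The point of ranking symbols by frequency before coding is visible here: token numbers below 128 occupy a single byte, so the most frequent words are the cheapest to store.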
If we stored all the words and non-words appearing in the collection, and their associated frequencies, many gigabytes of RAM, or a B-tree or similar on-disk structure, would be required [23]. Moffat et al. [14] have examined schemes for pruning models during compression using large alphabets, and conclude that rarely occurring terms need not reside in the model. Rather, rare terms are spelt out in the final compressed file, using a special word token (escape symbol) to signal their occurrence.

During the first pass of encoding, two move-to-front queues are kept: one for words and one for non-words. Whenever the available memory is consumed and a new symbol is discovered in the collection, an existing symbol is discarded from the end of the queue. In our implementation, we enforce the stricter condition on eviction that, where possible, the evicted symbol should have a frequency of one. If there is no symbol with frequency one in the last half of the queue, then we evict symbols of frequency two, and so on until enough space is available in the model for the new symbol.

The second pass of encoding replaces each word with its vbyte-encoded number, or with the escape symbol and an ASCII representation of the word if it is not in the model. Similarly, each non-word sequence is replaced with its codeword, or with the codeword for a single space character if it is not in the model. We note that this lossy compression of non-words is acceptable when the documents are used for snippet generation, but may not be acceptable for a document database. We assume that a separate sub-system would hold cached documents for other purposes where exact punctuation is important.

While this semi-static scheme should allow faster decompression than the baseline, it also readily allows direct matching of query terms as compressed integers in the compressed file.
That is, sentences can be scored without having to decompress a document, and only the sentences returned as part of a snippet need to be decoded.

The CTS system stores all documents contiguously in one file, with an auxiliary table of 64-bit integers indicating the start offset of each document in the file. Further, it must have access to the reverse mapping of term numbers, allowing those words not spelt out in the document to be recovered and returned to the Query Engine as strings. The first of these data structures can be readily partitioned and distributed if the Snippet Engine occupies multiple machines; the second, however, is not so easily partitioned, as any document on a remote machine might require access to the whole integer-to-string mapping. This is the second reason for employing the model pruning step during construction of the semi-static code: it limits the size of the reverse mapping table that must be present on every machine implementing the Snippet Engine.

4.4 Experimental assessment of CTS

All experiments reported in this paper were run on a Sun Fire V210 Server running Solaris 10. The machine consists of dual 1.34 GHz UltraSPARC IIIi processors and 4 GB of RAM.

                     wt10g          wt50g           wt100g
No. Docs. (×10^6)    1.7            10.1            18.5
Raw Text             10,522         56,684          102,833
Baseline (zlib)      2,568 (24%)    10,940 (19%)    19,252 (19%)
CTS                  2,722 (26%)    12,010 (21%)    22,269 (22%)

Table 1: Total storage space (Mb) for documents for the three test collections, both compressed and uncompressed.

[Figure 3: Time to generate snippets for 10 documents per query, averaged over buckets of 100 queries, for the first 7000 Excite queries on wt10g; curves are shown for the Baseline, CTS with caching, and CTS without caching.]
All source code was compiled using gcc 4.1.1 with -O9 optimisation. Timings were run on an otherwise unoccupied machine and were averaged over 10 runs, with memory flushed between runs to eliminate any caching of data files.

In the absence of evidence to the contrary, we assume that it is important to model realistic query arrival sequences and the distribution of query repetitions in our experiments. Consequently, test collections which lack real query logs, such as TREC ad-hoc and .GOV2, were not considered suitable. Obtaining extensive query logs and associated result doc-ids for a corresponding large collection is not easy. We have used two collections (wt10g and wt100g) from the TREC Web Track [8], coupled with queries from Excite logs from the same (c. 1997) period. Further, we also made use of a medium-sized collection, wt50g, obtained by randomly sampling half of the documents from wt100g. The first two rows of Table 1 give the number of documents and the size in Mb of these collections.

The final two rows of Table 1 show the size of the resulting document sets after compression with the baseline and CTS schemes. As expected, CTS admits a small compression loss over zlib, but both substantially reduce the size of the text, to about 20% of the original uncompressed size. Note that the figures for CTS do not include the reverse mapping from integer token to string that is required to produce the final snippets, as that occupies RAM; it is 1024 Mb in these experiments.

The Zettair search engine [25] was used to produce a list of documents to summarize for each query. For the majority of the experiments the Okapi BM25 scoring scheme was used to determine document rankings.
For the static caching experiments reported in Section 5, the score of each document is a 50:50 weighted average of the BM25 score (normalized by the top-scoring document for each query) and a query-independent score for each document. This is to simulate the effect of ranking algorithms like PageRank [1] on the distribution of document requests to the Snippet Engine. In our case we used the normalized Access Count [5], computed from the top 20 documents returned for the first 1 million queries from the Excite log, as the query-independent score component.

                     wt10g    wt50g    wt100g
Baseline             75       157      183
CTS                  38       70       77
Reduction in time    49%      56%      58%

Table 2: Average time (msec) for the final 7000 queries in the Excite logs using the baseline and CTS systems on the 3 test collections.

Points on Figure 3 indicate the mean running time to generate 10 snippets for each query, averaged in groups of 100 queries, for the first 7000 queries in the Excite query log. Only the data for wt10g is shown, but the other collections showed similar patterns. The x-axis indicates the group of 100 queries; for example, 20 indicates queries 2001 to 2100. Clearly there is a caching effect, with times dropping substantially after the first 1000 or so queries are processed. All of this is due to the operating system caching disk blocks and perhaps pre-fetching data ahead of specific read requests. This is evident because the baseline system has no large internal data structures to take advantage of non-disk-based caching (it simply opens and processes files), and yet the speed-up is evident for the baseline system.

Part of this gain is due to the spatial locality of disk references generated by the query stream: repeated queries will already have their document files cached in memory, and similarly, different queries that return the same documents will benefit from document caching.
But when the log is processed after removing all but the first request for each document, the pronounced speed-up is still evident as more queries are processed (not shown in the figure). This suggests that the operating system (or the disk itself) is reading and buffering a larger amount of data than the amount requested, and that this brings benefit often enough to make an appreciable difference to snippet generation times. This is confirmed by the curve labeled "CTS without caching", which was generated after mounting the filesystem with a no-caching option (directio in Solaris). With disk caching turned off, the average time to generate snippets varies little.

The time to generate ten snippets for a query, averaged over the final 7000 queries in the Excite log when caching effects have dissipated, is shown in Table 2. Once the system has stabilized, CTS is over 50% faster than the baseline system. This is primarily due to CTS matching single integers for most query words, rather than comparing strings as in the baseline system.

Table 3 shows a breakdown of the average time to generate ten snippets over the final 7000 queries of the Excite log on the wt50g collection, when entire documents are processed, and when only the first half of each document is processed. As can be seen, the majority of time spent generating a snippet is in locating the document on disk (Seek): 64% for whole documents, and 75% for half documents.

% of doc processed    Seek    Read    Score & Decode
100%                  45      4       21
50%                   45      4       11

Table 3: Time to generate 10 snippets for a single query (msec) for the wt50g collection, averaged over the final 7000 Excite queries, when either all of each document is processed (100%) or just the first half of each document (50%).

Even if the amount of processing a document must undergo is halved, as in the second row of the table, there is only a 14% reduction in the total time required to generate a snippet.
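The 64%, 75% and 14% figures follow directly from the Table 3 timings; a quick check:

```python
# Verify the timing-breakdown percentages quoted from Table 3 (msec).
full = {"seek": 45, "read": 4, "score_decode": 21}   # 100% of each document
half = {"seek": 45, "read": 4, "score_decode": 11}   # first 50% only

total_full = sum(full.values())   # 70 msec per query
total_half = sum(half.values())   # 60 msec per query

seek_share_full = full["seek"] / total_full      # seek fraction, whole docs
seek_share_half = half["seek"] / total_half      # seek fraction, half docs
saving = (total_full - total_half) / total_full  # overall time saved

print(round(seek_share_full, 2), round(seek_share_half, 2), round(saving, 2))
# → 0.64 0.75 0.14
```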
As locating documents in secondary storage occupies such a large proportion of snippet generation time, it seems logical to try to reduce its impact through caching.

5. DOCUMENT CACHING

In Section 3 we observed that the Snippet Engine would have its own RAM, in proportion to the size of the document collection. For example, on a whole-of-Web search engine, the Snippet Engine would be distributed over many workstations, each with at least 4 Gb of RAM. In a small enterprise, the Snippet Engine may be sharing RAM with all other sub-systems on a single workstation, and hence have only 100 Mb available. In this section we use simulation to measure the number of cache hits in the Snippet Engine as memory size varies.

We compare two caching policies: a static cache, where the cache is loaded with as many documents as it can hold before the system begins answering queries, and then never changes; and a least-recently-used (LRU) cache, which starts out as for the static cache, but whenever a document is accessed it moves to the front of a queue, and if a document is fetched from disk, the last item in the queue is evicted. Note that documents are first loaded into the caches in order of decreasing query-independent score, which is computed as described in Section 4.4.

The simulations also assume a query cache exists for the top Q most frequent queries, and that these queries are never processed by the Snippet Engine.

All queries passed into the simulations are from the second half of the Excite query log (the first half being used to compute query-independent scores), and are stemmed, stopped, and have their terms sorted alphabetically.
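The two cache policies described above can be sketched as follows. This is a simplified simulation under stated assumptions: document identifiers, capacities and the access stream are illustrative, not drawn from the paper's data.

```python
from collections import OrderedDict

class StaticCache:
    """Preloaded with the top-scoring documents; contents never change."""
    def __init__(self, doc_ids_by_score, capacity):
        self.docs = set(doc_ids_by_score[:capacity])
    def access(self, doc_id):
        return doc_id in self.docs           # True on a cache hit

class LRUCache:
    """Same initial load, but reordered on access and evicted on miss."""
    def __init__(self, doc_ids_by_score, capacity):
        self.capacity = capacity
        self.docs = OrderedDict((d, None) for d in doc_ids_by_score[:capacity])
    def access(self, doc_id):
        if doc_id in self.docs:
            self.docs.move_to_end(doc_id)    # move to front of the queue
            return True
        if len(self.docs) >= self.capacity:
            self.docs.popitem(last=False)    # evict the least recently used
        self.docs[doc_id] = None             # fetched from disk; now cached
        return False

def hit_rate(cache, accesses):
    """Fraction of document accesses served from cache."""
    hits = sum(cache.access(d) for d in accesses)
    return hits / len(accesses)
```

Running both policies over the same (skewed) stream of document requests, with documents preloaded in decreasing score order, reproduces the kind of comparison plotted in Figure 4.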
This final alteration simply allows queries such as "red dog" and "dog red" to return the same documents, as would be the case in a search engine where explicit phrase operators would be required in the query to enforce term order and proximity.

Figure 4 shows the percentage of document accesses that hit cache using the two caching schemes, with Q either 0 or 10,000, for 535,276 Excite queries on wt100g. The x-axis shows the percentage of documents that are held in the cache, so 1.0% corresponds to about 185,000 documents.

From this figure it is clear that caching even a small percentage of the documents has a large impact on reducing seek time for snippet generation. With 1% of documents cached, about 222 Mb for the wt100g collection, around 80% of disk seeks are avoided. The static cache performs surprisingly well (squares in Figure 4), but is outperformed by the LRU cache (circles). In an actual implementation of LRU, however, there may be fragmentation of the cache as documents are swapped in and out.

[Figure 4: Percentage of the time that the Snippet Engine does not have to go to disk in order to generate a snippet, plotted against the size of the document cache as a percentage of all documents in the collection, for LRU and static caches with Q = 0 and Q = 10,000. Results are from a simulation on wt100g with 535,276 Excite queries.]

The reason for the large impact of the document cache is that, for a particular collection, some documents are much more likely to appear in results lists than others. This effect occurs partly because of the approximately Zipfian query frequency distribution, and partly because most Web search engines employ ranking methods which combine query-based scores with static (a priori) scores determined from factors such as link graph measures, URL features, spam scores and so on [17].
Documents with high static scores are much more likely to be retrieved than others.

In addition to the document cache, the RAM of the Snippet Engine must also hold the CTS decoding table that maps integers to strings, the size of which is capped by a parameter at compression time (1 Gb in our experiments here). This cost is more than compensated for by the reduced size of each document, which allows more documents into the document cache. For example, using CTS reduces the average document size from 5.7 Kb to 1.2 Kb (as shown in Table 1), so a 2 Gb RAM could hold 368,442 uncompressed documents (2% of the collection), or 850,691 compressed documents plus a 1 Gb decompression table (5% of the collection).

In fact, further experimentation with the model size reveals that the model can be very small and CTS still gives good compression and fast scoring times. This is evidenced in Figure 5, where the compressed size of wt50g is shown by the solid symbols. Note that when no compression is used (model size of 0 Mb), the collection is only 31 Gb, as HTML markup, JavaScript and repeated punctuation have been discarded as described in Section 4.1. With a 5 Mb model, the collection size drops by more than half, to 14 Gb, and increasing the model size to 750 Mb elicits only a further 2 Gb drop in collection size. Figure 5 also shows the average time to score and decode a snippet (excluding seek time) with the different model sizes (open symbols). Again, there is a large speed-up when a 5 Mb model is used, but little improvement with larger models.

[Figure 5: Collection size of the wt50g collection when compressed with CTS using different memory limits on the model, and the average time to generate a single snippet excluding seek time on 20,000 Excite queries using those models.]
Similar results hold for the wt100g collection, where a model of about 10 Mb offers substantial space and time savings over no model at all, but returns diminish as the model size increases.

Apart from compression, there is another approach to reducing the size of each document in the cache: do not store the full document in cache. Rather, store in the cache the sentences that are most likely to be used in snippets, and if, during snippet generation on a cached document, the sentence scores do not reach a certain threshold, retrieve the whole document from disk. This raises the questions of how to choose the sentences from a document to put in cache, and which to leave on disk, which we address in the next section.

6. SENTENCE REORDERING

Sentences within each document can be re-ordered so that sentences that are very likely to appear in snippets are at the front of the document, and hence processed first at query time, while less likely sentences are relegated to the rear of the document. Then, at query time, if k sentences with a score exceeding some threshold are found before the entire document is processed, the remainder of the document is ignored. Further, to improve caching, only the head of each document can be stored in the cache, with the tail residing on disk. Note that if the search engine is to provide cached copies of a document (that is, the exact text of the document as it was indexed), then this would be serviced by another sub-system in Figure 1, and not from the altered copy we store in the Snippet Engine.

We now introduce four sentence reordering approaches.

1. Natural order. The first few sentences of a well-authored document usually best describe the document content [12]. Thus simply processing a document in order should yield a quality snippet.
Unfortunately, however, web documents are often not well authored, with little editorial or professional writing skill brought to bear on their creation. More importantly, perhaps, we are producing query-biased snippets, and there is no guarantee that query terms will appear in sentences towards the front of a document.

2. Significant terms (ST). Luhn introduced the concept of a significant sentence as one containing a cluster of significant terms [12], a concept found to work well by Tombros and Sanderson [20]. Let fd,t be the frequency of term t in document d; then term t is determined to be significant if

    fd,t >= 7 - 0.1 × (25 - sd),   if sd < 25
    fd,t >= 7,                     if 25 <= sd <= 40
    fd,t >= 7 + 0.1 × (sd - 40),   otherwise,

where sd is the number of sentences in document d. A bracketed section is defined as a group of terms where the leftmost and rightmost terms are significant terms, and no significant terms in the bracketed section are divided by more than four non-significant terms. The score of a bracketed section is the square of the number of significant words falling in the section, divided by the total number of words in the entire sentence. The a priori score for a sentence is computed as the maximum of the scores of all bracketed sections of the sentence. We then sort the sentences by this score.

3. Query log based (QLt). Many Web queries repeat, and a small number of queries make up a large volume of total searches [9]. In order to take advantage of this bias, sentences that contain many past query terms should be promoted to the front of a document, while sentences that contain few query terms should be demoted. In this scheme, the sentences are sorted by the number of sentence terms that occur in the query log.
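The ST significance test and bracketed-section scoring can be sketched as follows. The piecewise threshold and the score formula follow the description above; the exact handling of section boundaries is our interpretation.

```python
def significance_threshold(num_sentences: int) -> float:
    """Luhn-style frequency threshold, as a function of document length s_d."""
    if num_sentences < 25:
        return 7 - 0.1 * (25 - num_sentences)
    if num_sentences <= 40:
        return 7.0
    return 7 + 0.1 * (num_sentences - 40)

def sentence_st_score(sentence, term_freq, num_sentences, max_gap=4):
    """Max bracketed-section score: (significant count)^2 / sentence length.

    A bracketed section starts and ends on significant terms, with at
    most `max_gap` non-significant terms between consecutive ones.
    """
    thresh = significance_threshold(num_sentences)
    sig_positions = [i for i, w in enumerate(sentence)
                     if term_freq.get(w, 0) >= thresh]
    if not sig_positions:
        return 0.0
    best, count = 0.0, 1
    prev = sig_positions[0]
    for pos in sig_positions[1:]:
        if pos - prev - 1 > max_gap:     # gap too wide: close the section
            best = max(best, count ** 2 / len(sentence))
            count = 1
        else:
            count += 1
        prev = pos
    return max(best, count ** 2 / len(sentence))
```

For ST reordering, the sentences of a document are sorted in decreasing order of this score before the document is stored.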
To ensure that long sentences do not dominate over shorter qualitative sentences, the score assigned to each sentence is divided by the number of terms in that sentence, giving each sentence a score between 0 and 1.

4. Query log based (QLu). This scheme is as for QLt, but repeated terms in the sentence are only counted once.

By re-ordering sentences using schemes ST, QLt or QLu, we aim to terminate snippet generation earlier than if natural order is used, but still produce sentences with the same number of distinct query terms (d in Figure 2), the same total number of query terms (c), the same positional score (h + ℓ) and the same maximum span (k). Accordingly, we conducted experiments comparing the methods: the first 80% of the Excite query log was used to reorder sentences when required, and the final 20% for testing.

Figure 6 shows the differences in snippet scoring components using each of the three methods relative to the Natural Order method. It is clear that sorting sentences using the Significant Terms (ST) method leads to the smallest change in the sentence scoring components. The greatest change over all methods is in the sentence position (h + ℓ) component of the score, which is to be expected, as there is no guarantee that leading and heading sentences are processed at all after sentences are re-ordered.
The second most affected component is the number of distinct query terms in a returned sentence, but if only the first 50% of the document is processed with the ST method, there is a drop of only 8% in the number of distinct query terms found in snippets. Depending on how these various components are weighted to compute an overall snippet score, one can argue that there is little overall effect on scores when processing only half the document using the ST method.

[Figure 6: Relative difference in the snippet score components (span k, term count c, sentence position h + ℓ, distinct terms d) compared to Natural Ordered documents, when the amount of each document processed is reduced from 90% to 50%, and the sentences in the document are reordered using Query Logs (QLt, QLu) or Significant Terms (ST).]

7. DISCUSSION

In this paper we have described the algorithms and compression scheme that would make a good Snippet Engine sub-system for generating text snippets of the type shown on the results pages of well-known Web search engines. Our experiments not only show that our scheme is over 50% faster than the obvious baseline, but also reveal some very important aspects of the snippet generation problem. Primarily, caching documents avoids seek costs to secondary memory for each document that is to be summarized, and is vital for fast snippet generation. Our caching simulations show that if as little as 1% of the documents can be cached in RAM as part of the Snippet Engine, possibly distributed over many machines, then around 75% of seeks can be avoided.
Our\nsecond major result is that keeping only half of each\ndocument in RAM, effectively doubling the cache size, has little\neffect on the quality of the final snippets generated from\nthose half-documents, provided that the sentences that are\nkept in memory are chosen using the Significant Term\nalgorithm of Luhn [12]. Both our document compression and\ncompaction schemes dramatically reduce the time taken to\ngenerate snippets.\nNote that these results are generated using a 100Gb\nsubset of the Web, and the Excite query log gathered from the\nsame period as that subset was created. We are assuming, as\nthere is no evidence to the contrary, that this collection and\nlog are representative of search engine input in other domains.\nIn particular, we can scale our results to examine what\nresources would be required, using our scheme, to provide a\nSnippet Engine for the entire World Wide Web.\nWe will assume that the Snippet Engine is distributed\nacross M machines, and that there are N web pages in the\ncollection to be indexed and served by the search engine. We\nalso assume a balanced load for each machine, so each\nmachine serves about N/M documents, which is easily achieved\nin practice. Each machine, therefore, requires RAM to hold\nthe following.\n\u2022 The CTS model, which should be 1/1000 of the size\nof the uncompressed collection (using results in\nFigure 5 and Williams et al. [23]). Assuming an average\nuncompressed document size of 8 Kb [11], this would\nrequire N/M \u00d7 8.192 bytes of memory.\n\u2022 A cache of 1% of all N/M documents.
Each document\nrequires 2 Kb when compressed with CTS (Table 1),\nand only half of each document is required using ST\nsentence reordering, requiring a total of N/M \u00d7 0.01 \u00d7\n1024 bytes.\n\u2022 The offset array that gives the start position of each\ndocument in the single, compressed file: 8 bytes for\neach of the N/M documents.\nThe total amount of RAM required by a single machine,\ntherefore, would be N/M \u00d7 (8.192 + 10.24 + 8) bytes.\nAssuming that each machine has 8 Gb of RAM, and that there are\n20 billion pages to index on the Web, a total of M = 62\nmachines would be required for the Snippet Engine. Of course\nin practice, more machines may be required to manage the\ndistributed system, to provide backup services for failed\nmachines, and to run other networking services. These machines\nwould also need access to 37 Tb of disk to store the\ncompressed document representations that were not in cache.\nIn this work we have deliberately avoided committing to\none particular scoring method for sentences in documents.\nRather, we have reported accuracy results in terms of the\nfour components that have been previously shown to be\nimportant in determining useful snippets [20]. The CTS\nmethod can incorporate any new metrics that may arise in\nthe future that are calculated on whole words. The\ndocument compaction techniques using sentence re-ordering,\nhowever, remove the spatial relationship between sentences,\nand so if a scoring technique relies on the position of a\nsentence within a document, the aggressive compaction\ntechniques reported here cannot be used.\nA variation on the semi-static compression approach we\nhave adopted in this work has been used successfully in\nprevious search engine design [24], but there are alternate\ncompression schemes that allow direct matching in compressed\ntext (see Navarro and M\u00e4kinen [15] for a recent survey).
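The machine-count estimate above can be reproduced with a short back-of-envelope calculation. All constants are the paper's stated assumptions (8 Kb average document, CTS model at 1/1000 of collection size, a 1% cache of 1 Kb half-documents, an 8-byte offset per document); we additionally assume "8 Gb" means 8 × 2^30 bytes, which reproduces the figure of 62 machines.

```python
# Back-of-envelope check of the Snippet Engine sizing above.
import math

N = 20_000_000_000                   # web pages to serve
model = 8.192                        # CTS model: 1/1000 of an 8 Kb document
cache = 0.01 * 1024                  # 1% cache of 1 Kb half-documents = 10.24
offset = 8                           # 8-byte offset per document
bytes_per_doc = model + cache + offset

ram_per_machine = 8 * 2**30          # 8 Gb of RAM, taken as gibibytes

machines = math.ceil(N * bytes_per_doc / ram_per_machine)
print(machines)  # 62, matching the paper's estimate
```

With decimal gigabytes (8 × 10^9 bytes) the answer would instead be 67 machines, so the paper's M = 62 implies the binary interpretation.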
As\nseek time dominates the snippet generation process, we have\nnot focused on this portion of the snippet generation in\ndetail in this paper. We will explore alternate compression\nschemes in future work.\nAcknowledgments\nThis work was supported in part by ARC Discovery Project\nDP0558916 (AT). Thanks to Nick Lester and Justin Zobel\nfor valuable discussions.\n8. REFERENCES\n[1] S. Brin and L. Page. The anatomy of a large-scale\nhypertextual Web search engine. In WWW7, pages\n107-117, 1998.\n[2] R. Fagin, R. Kumar, K. S. McCurley, J. Novak,\nD. Sivakumar, J. A. Tomlin, and D. P. Williamson.\nSearching the workplace web. In WWW2003,\nBudapest, Hungary, May 2003.\n[3] T. Fagni, R. Perego, F. Silvestri, and S. Orlando.\nBoosting the performance of web search engines:\nCaching and prefetching query results by exploiting\nhistorical usage data. ACM Trans. Inf. Syst.,\n24(1):51-78, 2006.\n[4] J.-L. Gailly and M. Adler. Zlib Compression Library.\nwww.zlib.net. Accessed January 2007.\n[5] S. Garcia, H. E. Williams, and A. Cannane.\nAccess-ordered indexes. In V. Estivill-Castro, editor,\nProc. Australasian Computer Science Conference,\npages 7-14, Dunedin, New Zealand, 2004.\n[6] S. Ghemawat, H. Gobioff, and S. Leung. The Google\nfile system. In SOSP '03: Proc. of the 19th ACM\nSymposium on Operating Systems Principles, pages\n29-43, New York, NY, USA, 2003. ACM Press.\n[7] J. Goldstein, M. Kantrowitz, V. Mittal, and\nJ. Carbonell. Summarizing text documents: sentence\nselection and evaluation metrics. In SIGIR99, pages\n121-128, 1999.\n[8] D. Hawking, N. Craswell, and P. Thistlewaite.\nOverview of TREC-7 Very Large Collection Track. In\nProc. of TREC-7, pages 91-104, November 1998.\n[9] B. J. Jansen, A. Spink, and J. Pedersen. A temporal\ncomparison of AltaVista web searching. J. Am. Soc.\nInf. Sci. Tech. (JASIST), 56(6):559-570, April 2005.\n[10] J. Kupiec, J. Pedersen, and F. Chen. A trainable\ndocument summarizer. In SIGIR95, pages 68-73, 1995.\n[11] S. Lawrence and C. L.
Giles. Accessibility of\ninformation on the web. Nature, 400:107-109, July\n1999.\n[12] H. P. Luhn. The automatic creation of literature\nabstracts. IBM Journal, pages 159-165, April 1958.\n[13] I. Mani. Automatic Summarization, volume 3 of\nNatural Language Processing. John Benjamins\nPublishing Company, Amsterdam/Philadelphia, 2001.\n[14] A. Moffat, J. Zobel, and N. Sharman. Text\ncompression for dynamic document databases.\nKnowledge and Data Engineering, 9(2):302-313, 1997.\n[15] G. Navarro and V. M\u00e4kinen. Compressed full text\nindexes. ACM Computing Surveys, 2007. To appear.\n[16] D. R. Radev, E. Hovy, and K. McKeown. Introduction\nto the special issue on summarization. Comput.\nLinguist., 28(4):399-408, 2002.\n[17] M. Richardson, A. Prakash, and E. Brill. Beyond\nPageRank: machine learning for static ranking. In\nWWW06, pages 707-715, 2006.\n[18] T. Sakai and K. Sparck-Jones. Generic summaries for\nindexing in information retrieval. In SIGIR01, pages\n190-198, 2001.\n[19] H. G. Silber and K. F. McCoy. Efficiently computed\nlexical chains as an intermediate representation for\nautomatic text summarization. Comput. Linguist.,\n28(4):487-496, 2002.\n[20] A. Tombros and M. Sanderson. Advantages of query\nbiased summaries in information retrieval. In\nSIGIR98, pages 2-10, Melbourne, Aust., August 1998.\n[21] R. W. White, I. Ruthven, and J. M. Jose. Finding\nrelevant documents using top ranking sentences: an\nevaluation of two alternative schemes. In SIGIR02,\npages 57-64, 2002.\n[22] H. E. Williams and J. Zobel. Compressing integers for\nfast file access. Comp. J., 42(3):193-201, 1999.\n[23] H. E. Williams and J. Zobel. Searchable words on the\nWeb. International Journal on Digital Libraries,\n5(2):99-105, April 2005.\n[24] I. H. Witten, A. Moffat, and T. C. Bell. Managing\nGigabytes: Compressing and Indexing Documents and\nImages.
Morgan Kaufmann Publishers, San Francisco,\nsecond edition, May 1999.\n[25] The Zettair Search Engine.\nwww.seg.rmit.edu.au/zettair. Accessed January 2007.", "keywords": "special-purpose filesystem;snippet generation;ram;performance;web summary;semi-static compression;vbyte coding scheme;document cache;search engine;link graph measure;document compaction;document caching;precomputed final result page;text fragment"}
-{"name": "test_H-13", "title": "The Influence of Caption Features on Clickthrough Patterns in Web Search", "abstract": "Web search engines present lists of captions, comprising title, snippet, and URL, to help users decide which search results to visit. Understanding the influence of features of these captions on Web search behavior may help validate algorithms and guidelines for their improved generation. In this paper we develop a methodology to use clickthrough logs from a commercial search engine to study user behavior when interacting with search result captions. The findings of our study suggest that relatively simple caption features such as the presence of all query terms, the readability of the snippet, and the length of the URL shown in the caption, can significantly influence users' Web search behavior.", "fulltext": "1. INTRODUCTION\nThe major commercial Web search engines all present\ntheir results in much the same way. Each search result is\ndescribed by a brief caption, comprising the URL of the\nassociated Web page, a title, and a brief summary (or\nsnippet) describing the contents of the page. Often the snippet\nis extracted from the Web page itself, but it may also be\ntaken from external sources, such as the human-generated\nsummaries found in Web directories.\nFigure 1 shows a typical Web search, with captions for the\ntop three results. While the three captions share the same\nbasic structure, their content differs in several respects. The\nsnippet of the third caption is nearly twice as long as that\nof the first, while the snippet is missing entirely from the\nsecond caption. The title of the third caption contains all\nof the query terms in order, while the titles of the first and\nsecond captions contain only two of the three terms. One of\nthe query terms is repeated in the first caption. All of the\nquery terms appear in the URL of the third caption, while\nnone appear in the URL of the first caption.
The snippet\nof the first caption consists of a complete sentence that\nconcisely describes the associated page, while the snippet of the\nthird caption consists of two incomplete sentences that are\nlargely unrelated to the overall contents of the associated\npage and to the apparent intent of the query.\nWhile these differences may seem minor, they may also\nhave a substantial impact on user behavior. A principal\nmotivation for providing a caption is to assist the user in\ndetermining the relevance of the associated page without\nactually having to click through to the result. In the case of\na navigational query - particularly when the destination is\nwell known - the URL alone may be sufficient to identify\nthe desired page. But in the case of an informational query,\nthe title and snippet may be necessary to guide the user in\nselecting a page for further study, and she may judge the\nrelevance of a page on the basis of the caption alone.\nWhen this judgment is correct, it can speed the search\nprocess by allowing the user to avoid unwanted material.\nWhen it fails, the user may waste her time clicking through\nto an inappropriate result and scanning a page containing\nlittle or nothing of interest. Even worse, the user may be\nmisled into skipping a page that contains desired\ninformation.\nAll three of the results in figure 1 are relevant, with some\nlimitations. The first result links to the main Yahoo Kids!\nhomepage, but it is then necessary to follow a link in a menu\nto find the main page for games. Despite appearances, the\nsecond result links to a surprisingly large collection of\nonline games, primarily with environmental themes. 
The third\nresult might be somewhat disappointing to a user, since it\nleads to only a single game, hosted at the Centers for Disease\nControl, that could not reasonably be described as online.\nUnfortunately, these page characteristics are not entirely\nreflected in the captions.\nFigure 1: Top three results for the query: kids online games.\nIn this paper, we examine the influence of caption\nfeatures on users' Web search behavior, using clickthroughs\nextracted from search engine logs as our primary\ninvestigative tool. Understanding this influence may help to validate\nalgorithms and guidelines for the improved generation of the\ncaptions themselves. In addition, these features can play a\nrole in the process of inferring relevance judgments from user\nbehavior [1]. By better understanding their influence, better\njudgments may result.\nDifferent caption generation algorithms might select\nsnippets of different lengths from different areas of a page.\nSnippets may be generated in a query-independent fashion,\nproviding a summary of the page as a whole, or in a\nquery-dependent fashion, providing a summary of how the page\nrelates to the query terms. The correct choice of snippet\nmay depend on aspects of both the query and the result\npage. The title may be taken from the HTML header or\nextracted from the body of the document [8]. For links that\nre-direct, it may be possible to display alternative URLs.\nMoreover, for pages listed in human-edited Web directories\nsuch as the Open Directory Project1, it may be possible\nto display alternative titles and snippets derived from these\nlistings.\nWhen these alternative snippets, titles and URLs are\navailable, the selection of an appropriate combination for display\nmay be guided by their features. A snippet from a Web\ndirectory may consist of complete sentences and be less\nfragmentary than an extracted snippet.
A title extracted from\nthe body may provide greater coverage of the query terms.\nA URL before re-direction may be shorter and provide a\nclearer idea of the final destination.\nThe work reported in this paper was undertaken in the\ncontext of the Windows Live search engine. The image in\nfigure 1 was captured from Windows Live and cropped to\neliminate branding, advertising and navigational elements. The\nexperiments reported in later sections are based on\nWindows Live query logs, result pages and relevance judgments\ncollected as part of ongoing research into search engine\nperformance [1,2]. Nonetheless, given the similarity of caption\nformats across the major Web search engines we believe the\nresults are applicable to these other engines. The query in\nfigure 1 produces results with similar relevance on the other\nmajor search engines. This and other queries produce\ncaptions that exhibit similar variations. In addition, we believe\nour methodology may be generalized to other search\napplications when sufficient clickthrough data is available.\n1 www.dmoz.org\n2. RELATED WORK\nWhile commercial Web search engines have followed\nsimilar approaches to caption display since their genesis,\nrelatively little research has been published about methods for\ngenerating these captions and evaluating their impact on\nuser behavior. Most related research in the area of document\nsummarization has focused on newspaper articles and\nsimilar material, rather than Web pages, and has conducted\nevaluations by comparing automatically generated summaries\nwith manually generated summaries. Most research on the\ndisplay of Web results has proposed substantial interface\nchanges, rather than addressing details of the existing\ninterfaces.\n2.1 Display of Web results\nVaradarajan and Hristidis [16] are among the few who\nhave attempted to improve directly upon the snippets\ngenerated by commercial search systems, without introducing\nadditional changes to the interface.
They generated\nsnippets from spanning trees of document graphs and\nexperimentally compared these snippets against the snippets\ngenerated for the same documents by the Google desktop search\nsystem and MSN desktop search system. They evaluated\ntheir method by asking users to compare snippets from the\nvarious sources.\nCutrell and Guan [4] conducted an eye-tracking study to\ninvestigate the influence of snippet length on Web search\nperformance and found that the optimal snippet length\nvaried according to the task type, with longer snippets leading\nto improved performance for informational tasks and shorter\nsnippets for navigational tasks.\nMany researchers have explored alternative methods for\ndisplaying Web search results. Dumais et al. [5] compared an\ninterface typical of those used by major Web search engines\nwith one that groups results by category, finding that users\nperform search tasks faster with the category interface. Paek\net al. [12] propose an interface based on a fisheye lens, in\nwhich mouse hovers and other events cause captions to zoom\nand snippets to expand with additional text.\nWhite et al. [17] evaluated three alternatives to the\nstandard Web search interface: one that displays expanded\nsummaries on mouse hovers, one that displays a list of top\nranking sentences extracted from the results taken as a group,\nand one that updates this list automatically through\nimplicit feedback. They treat the length of time that a user\nspends viewing a summary as an implicit indicator of\nrelevance. Their goal was to improve the ability of users to\ninteract with a given result set, helping them to look\nbeyond the first page of results and to reduce the burden of\nquery re-formulation.\n2.2 Document summarization\nOutside the narrow context of Web search considerable\nrelated research has been undertaken on the problem of\ndocument summarization. 
The basic idea of extractive\nsummarization - creating a summary by selecting sentences or\nfragments - goes back to the foundational work of Luhn [11].\nLuhn's approach uses term frequencies to identify\nsignificant words within a document and then selects and extracts\nsentences that contain significant words in close proximity.\nA considerable fraction of later work may be viewed as\nextending and tuning this basic approach, developing\nimproved methods for identifying significant words and\nselecting sentences. For example, a recent paper by Sun et\nal. [14] describes a variant of Luhn's algorithm that uses\nclickthrough data to identify significant words. At its\nsimplest, snippet generation for Web captions might also be\nviewed as following this approach, with query terms taking\non the role of significant words.\nSince 2000, the annual Document Understanding\nConference (DUC) series, conducted by the US National Institute\nof Standards and Technology, has provided a vehicle for\nevaluating much of the research in document\nsummarization2. Each year DUC defines a methodology for one or\nmore experimental tasks, and supplies the necessary test\ndocuments, human-created summaries, and automatically\nextracted baseline summaries. The majority of\nparticipating systems use extractive summarization, but a number\nattempt natural language generation and other approaches.\nEvaluation at DUC is achieved through comparison with\nmanually generated summaries. Over the years DUC has\nincluded both single-document summarization and\nmulti-document summarization tasks. The main DUC 2007 task\nis posed as taking place in a question answering context.\nGiven a topic and 25 documents, participants were asked\nto generate a 250-word summary satisfying the information\nneed embodied in the topic.
We view our approach of\nevaluating summarization through the analysis of Web logs as\ncomplementing the approach taken at DUC.\n2 duc.nist.gov\nA number of other researchers have examined the value\nof query-dependent summarization in a non-Web context.\nTombros and Sanderson [15] compared the performance of\n20 subjects searching a collection of newspaper articles when\nguided by query-independent vs. query-dependent snippets.\nThe query-independent snippets were created by extracting\nthe first few sentences of the articles; the query-dependent\nsnippets were created by selecting the highest scoring\nsentences under a measure biased towards sentences containing\nquery terms. When query-dependent summaries were\npresented, subjects were better able to identify relevant\ndocuments without clicking through to the full text.\nGoldstein et al. [6] describe another extractive system for\ngenerating query-dependent summaries from newspaper\narticles. In their system, sentences are ranked by combining\nstatistical and linguistic features. They introduce\nnormalized measures of recall and precision to facilitate evaluation.\n2.3 Clickthroughs\nQueries and clickthroughs taken from the logs of\ncommercial Web search engines have been widely used to improve\nthe performance of these systems and to better understand\nhow users interact with them. In early work, Broder [3]\nexamined the logs of the AltaVista search engine and\nidentified three broad categories of Web queries: informational,\nnavigational and transactional. Rose and Levinson [13]\nconducted a similar study, developing a hierarchy of query goals\nwith three top-level categories: informational, navigational\nand resource. Under their taxonomy, a transactional query\nas defined by Broder might fall under any of their three\ncategories, depending on details of the desired transaction.\nLee et al.
[10] used clickthrough patterns to\nautomatically categorize queries into one of two categories:\ninformational - for which multiple Websites may satisfy all or part\nof the user's need - and navigational - for which users\nhave a particular Website in mind. Under their taxonomy,\na transactional or resource query would be subsumed under\none of these two categories.\nAgichtein et al. interpreted caption features, clickthroughs\nand other user behavior as implicit feedback to learn\npreferences [2] and improve ranking [1] in Web search. Xue et\nal. [18] present several methods for associating queries with\ndocuments by analyzing clickthrough patterns and links\nbetween documents. Queries associated with documents in\nthis way are treated as meta-data. In effect, they are added\nto the document content for indexing and ranking purposes.\nOf particular interest to us is the work of Joachims et\nal. [9] and Granka et al. [7]. They conducted eye-tracking\nstudies and analyzed log data to determine the extent to\nwhich clickthrough data may be treated as implicit relevance\njudgments. They identified a trust bias, which leads users\nto prefer the higher ranking result when all other factors are\nequal. In addition, they explored techniques that treat clicks\nas pairwise preferences. For example, a click at position\nN + 1 - after skipping the result at position N - may be\nviewed as a preference for the result at position N+1 relative\nto the result at position N. These findings form the basis of\nthe clickthrough inversion methodology we use to interpret\nuser interactions with search results. Our examination of\nlarge search logs complements their detailed analysis of a\nsmaller number of participants.
CLICKTHROUGH INVERSIONS\nWhile other researchers have evaluated the display of Web\nsearch results through user studies - presenting users with\na small number of different techniques and asking them to\ncomplete experimental tasks - we approach the problem\nby extracting implicit feedback from search engine logs.\nExamining user behavior in situ allows us to consider many\nmore queries and caption characteristics, with the volume\nof available data compensating for the lack of a controlled\nlab environment.\nThe problem remains of interpreting the information in\nthese logs as implicit indicators of user preferences, and in\nthis matter we are guided by the work of Joachims et al. [9].\nWe consider caption pairs, which appear adjacent to one\nanother in the result list.\nOur primary tool for examining the influence of caption\nfeatures is a type of pattern observed with respect to these\ncaption pairs, which we call a clickthrough inversion. A\nclickthrough inversion occurs at position N when the result\nat position N receives fewer clicks than the result at position\nN + 1. Following Joachims et al. [9], we interpret a\nclickthrough inversion as indicating a preference for the lower\nranking result, overcoming any trust bias. For simplicity,\nin the remainder of this paper we refer to the higher\nranking caption in a pair as caption A and the lower ranking\ncaption as caption B.\n3.1 Extracting clickthroughs\nFor the experiments reported in this paper, we sampled\na subset of the queries and clickthroughs from the logs of\nthe Windows Live search engine over a period of 3-4 days\non three separate occasions: once for results reported in\nsection 3.3, once for a pilot of our main experiment, and once\nfor the experiment itself (sections 4 and 5). For simplicity\nwe restricted our sample to queries submitted to the US\nEnglish interface and ignored any queries containing complex\nor non-alphanumeric terms (e.g. operators and phrases). 
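The clickthrough-inversion test defined above reduces to a comparison of adjacent click counts; a minimal sketch follows (the function name and the example counts are ours, not the paper's):

```python
# Detect clickthrough inversions as defined above: an inversion occurs
# at position N when the result at N receives fewer clicks than the
# result at position N + 1.

def inversions(clicks):
    """Return the 1-based positions N with clicks[N] < clicks[N + 1]."""
    return [i + 1
            for i in range(len(clicks) - 1)
            if clicks[i] < clicks[i + 1]]

# Hypothetical clickthrough counts by rank for one query:
curve = [30, 22, 8, 15, 10, 6, 4, 3, 1, 1]
print(inversions(curve))  # [3]: position 3 drew fewer clicks than position 4
```

A monotonically decreasing curve, the ideal behavior described in section 3.2, yields an empty list.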
At\nthe end of each sampling period, we downloaded captions\nfor the queries associated with the clickthrough sample.\nWhen identifying clickthroughs in search engine logs, we\nconsider only the first clickthrough action taken by a user\nafter entering a query and viewing the result page. Users\nare identified by IP address, which is a reasonably reliable\nmethod of eliminating multiple results from a single user,\nat the cost of falsely eliminating results from multiple users\nsharing the same address.\nBy focusing on the initial clickthrough, we hope to\ncapture a user's impression of the relative relevance within a\ncaption pair when first encountered. If the user later clicks\non other results or re-issues the same query, we ignore these\nactions. Any preference captured by a clickthrough\ninversion is therefore a preference among a group of users issuing\na particular query, rather than a preference on the part of a\nsingle user. In the remainder of the paper, we use the term\nclickthrough to refer only to this initial action.\nGiven the dynamic nature of the Web and the volumes of\ndata involved, search engine logs are bound to contain\nconsiderable noise. For example, even over a period of hours\nor minutes the order of results for a given query can change,\nwith some results dropping out of the top ten and new ones\nappearing. For this reason, we retained clickthroughs for\na specific combination of a query and a result only if this\nresult appears in a consistent position for at least 50% of\nthe clickthroughs. Clickthroughs for the same result when\nit appeared at other positions were discarded.
For\nsimilar reasons, if we did not detect at least ten clickthroughs\nfor a particular query during the sampling period, no\nclickthroughs for that query were retained.\nFigure 2: Clickthrough curves (clickthrough percent vs.\nresult position, top ten results) for three queries: a)\na stereotypical navigational query (craigslist), b) a stereotypical\ninformational query (periodic table of elements), and c) a query exhibiting\nclickthrough inversions (kids online games).\nThe outcome at the end of each sampling period is a set\nof records, with each record describing the clickthroughs for\na given query/result combination. Each record includes a\nquery, a result position, a title, a snippet, a URL, the\nnumber of clickthroughs for this result, and the total number of\nclickthroughs for this query. We then processed this set to\ngenerate clickthrough curves and identify inversions.\n3.2 Clickthrough curves\nIt could be argued that under ideal circumstances,\nclickthrough inversions would not be present in search engine\nlogs. A hypothetical perfect search engine would respond\nto a query by placing the result most likely to be relevant\nfirst in the result list. Each caption would appropriately\nsummarize the content of the linked page and its\nrelationship to the query, allowing users to make accurate\njudgments. Later results would complement earlier ones, linking\nto novel or supplementary material, and ordered by their\ninterest to the greatest number of users.\nFigure 2 provides clickthrough curves for three example\nqueries. For each example, we plot the percentage of\nclickthroughs against position for the top ten results.
The first\nquery (craigslist) is stereotypically navigational, showing\na spike at the correct answer (www.craigslist.org). The\nsecond query is informational in the sense of Lee et al. [10]\n(periodic table of elements). Its curve is flatter and less\nskewed toward a single result. For both queries, the number\nof clickthroughs is consistent with the result positions, with\nthe percentage of clickthroughs decreasing monotonically as\nposition increases, the ideal behavior.\nRegrettably, no search engine is perfect, and clickthrough\ninversions are seen for many queries. For example, for the\nthird query (kids online games) the clickthrough curve\nexhibits a number of clickthrough inversions, with an apparent\npreference for the result at position 4.\nSeveral causes may be enlisted to explain the presence of\nan inversion in a clickthrough curve. The search engine may\nhave failed in its primary goal, ranking more relevant results\nbelow less relevant results. Even when the relative ranking\nis appropriate, a caption may fail to reflect the content of\nthe underlying page with respect to the query, leading the\nuser to make an incorrect judgment. Before turning to the\nsecond case, we address the first, and examine the extent to\nwhich relevance alone may explain these inversions.\n3.3 Relevance\nThe simplest explanation for the presence of a clickthrough\ninversion is a relevance difference between the higher\nranking member of caption pair and the lower ranking member.\nIn order to examine the extent to which relevance plays a\nrole in clickthrough inversions, we conducted an initial\nexperiment using a set of 1,811 queries with associated\njudgments created as part of on-going work. Over a four-day\nperiod, we sampled the search engine logs and extracted over\none hundred thousand clicks involving these queries. 
From\nthese clicks we identified 355 clickthrough inversions,\nsatisfying the criteria of section 3.1, where relevance judgments\nexisted for both pages.\nThe relevance judgments were made by independent\nassessors viewing the pages themselves, rather than the captions.\nRelevance was assessed on a 6-point scale. The outcome is\npresented in figure 3, which shows the explicit judgments\nfor the 355 clickthrough inversions. In all of these cases,\nthere were more clicks on the lower ranked member of the\npair (B).\nRelationship Number Percent\nrel(A) < rel(B) 119 33.5%\nrel(A) = rel(B) 134 37.7%\nrel(A) > rel(B) 102 28.7%\nFigure 3: Relevance relationships at clickthrough\ninversions. Compares the relevance of the higher\nranking member of a caption pair (rel(A)) to the\nrelevance of the lower ranking member (rel(B)), where\ncaption A received fewer clicks than caption B.\nThe figure shows the corresponding relevance\njudgments. For example, the first row, rel(A) < rel(B),\nindicates that the higher ranking member of the pair (A) was rated as\nless relevant than the lower ranking member of the pair (B).\nAs we see in the figure, relevance alone appears inadequate\nto explain the majority of clickthrough inversions. For\ntwo-thirds of the inversions (236), the page associated with\ncaption A is at least as relevant as the page associated with\ncaption B. For 28.7% of the inversions, A has greater relevance\nthan B, which received the greater number of clickthroughs.
Nonetheless,\ndue to competing factors, a large set of clickthrough\ninversions may also include pairs where the snippet is missing in\ncaption B and not in caption A. However, if we compare a\nlarge set of clickthrough inversions to a similar set of pairs\nfor which the clickthroughs are consistent with their\nranking, we would expect to see relatively more pairs where the\nsnippet was missing in caption A.\n4.1 Evaluation methodology\nFollowing this line of reasoning, we extracted two sets\nof caption pairs from search logs over a three day period.\nThe first is a set of nearly five thousand clickthrough\ninversions, extracted according to the procedure described in\nsection 3.1. The second is a corresponding set of caption\npairs that do not exhibit clickthrough inversions. In other\nwords, for pairs in this set, the result at the higher rank\n(caption A) received more clickthroughs than the result at\nthe lower rank (caption B). To the greatest extent possible,\neach pair in the second set was selected to correspond to a\npair in the first set, in terms of result position and number\nof clicks on each result. We refer to the first set, containing\nclickthrough inversions, as the INV set; we refer to the\nsecond set, containing caption pairs for which the clickthroughs\nare consistent with their rank order, as the CON set.\nWe extract a number of features characterizing snippets\n(described in detail in the next section) and compare the\npresence of each feature in the INV and CON sets. We\ndescribe the features as a hypothesized preference (e.g., a\npreference for captions containing a snippet). Thus, in\neither set, a given feature may be present in one of two forms:\nfavoring the higher ranked caption (caption A) or favoring\nthe lower ranked caption (caption B). 
For example, the absence of a snippet in caption A favors caption B, and the absence of a snippet in caption B favors caption A. When the feature favors caption B (consistent with a clickthrough inversion) we refer to the caption pair as a positive pair. When the feature favors caption A, we refer to it as a negative pair.

  Feature Tag       Description
  MissingSnippet    snippet missing in caption A and present in caption B
  SnippetShort      short snippet in caption A (< 25 characters) with long snippet (> 100 characters) in caption B
  TermMatchTitle    title of caption A contains matches to fewer query terms than the title of caption B
  TermMatchTS       title+snippet of caption A contains matches to fewer query terms than the title+snippet of caption B
  TermMatchTSU      title+snippet+URL of caption A contains matches to fewer query terms than caption B
  TitleStartQuery   title of caption B (but not A) starts with a phrase match to the query
  QueryPhraseMatch  title+snippet+URL contains the query as a phrase match
  MatchAll          caption B contains one match to each term; caption A contains more matches with missing terms
  URLQuery          caption B URL is of the form www.query.com where the query matches exactly with spaces removed
  URLSlashes        caption A URL contains more slashes (i.e. a longer path length) than the caption B URL
  URLLenDiff        caption A URL is longer than the caption B URL
  Official          title or snippet of caption B (but not A) contains the term official (with stemming)
  Home              title or snippet of caption B (but not A) contains the phrase home page
  Image             title or snippet of caption B (but not A) contains a term suggesting the presence of an image gallery
  Readable          caption B (but not A) passes a simple readability test

Figure 4: Features measured in caption pairs (caption A and caption B), with caption A as the higher ranked result. These features are expressed from the perspective of the prevalent relationship predicted for clickthrough inversions.
For missing snippets, a positive pair has the snippet missing in caption A (but not B) and a negative pair has the snippet missing in B (but not A).

Thus, for a specific feature, we can construct four subsets: 1) INV+, the set of positive pairs from INV; 2) INV−, the set of negative pairs from INV; 3) CON+, the set of positive pairs from CON; and 4) CON−, the set of negative pairs from CON. The sets INV+, INV−, CON+, and CON− will contain different subsets of INV and CON for each feature. When stating a feature corresponding to a hypothesized user preference, we follow the practice of stating the feature with the expectation that the size of INV+ relative to the size of INV− should be greater than the size of CON+ relative to the size of CON−. For example, we state the missing snippet feature as snippet missing in caption A and present in caption B.

This evaluation methodology allows us to construct a contingency table for each feature, with INV essentially forming the experimental group and CON the control group. We can then apply Pearson's chi-square test for significance.

4.2 Features
Figure 4 lists the features tested. Many of the features on this list correspond to our own assumptions regarding the importance of certain caption characteristics: the presence of query terms, the inclusion of a snippet, and the importance of query term matches in the title. Other features suggested themselves during the examination of the snippets collected as part of the study described in section 3.3 and during a pilot of the evaluation methodology (section 4.1). For this pilot we collected INV and CON sets of similar sizes, and used these sets to evaluate a preliminary list of features and to establish appropriate parameters for the SnippetShort and Readable features. In the pilot, all of the features listed in figure 4 were significant at the 95% level.
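The contingency-table analysis described above can be illustrated with a short sketch. This is not code from the study; it implements a plain Pearson chi-square (df = 1, no continuity correction) and applies it to the MissingSnippet counts reported in figure 5.

```python
import math

def pearson_chi2_2x2(inv_pos, inv_neg, con_pos, con_neg):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for the 2x2 table [[INV+, INV-], [CON+, CON-]], plus its p-value."""
    table = [[inv_pos, inv_neg], [con_pos, con_neg]]
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    # For df = 1, the chi-square survival function reduces to erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# MissingSnippet counts from figure 5: INV+ = 185, INV- = 121, CON+ = 144, CON- = 133
chi2, p = pearson_chi2_2x2(185, 121, 144, 133)
print(round(chi2, 4), round(p, 4))  # chi-square roughly 4.2443, p roughly 0.039
```

The uncorrected statistic reproduces the value reported for MissingSnippet; a Yates continuity correction would give a slightly smaller statistic.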
A small number of other features were dropped after the pilot.

These features all capture simple aspects of the captions. The first feature concerns the existence of a snippet and the second concerns the relative size of snippets. Apart from this first feature, we ignore pairs where one caption has a missing snippet. These pairs are not included in the sets constructed for the remaining features, since captions with missing snippets do not contain all the elements of a standard caption and we wanted to avoid their influence.

The next six features concern the location and number of matching query terms. For the first five, a match for each query term is counted only once; additional matches for the same term are ignored. The MatchAll feature tests the idea that matching all the query terms exactly once is preferable to matching a subset of the terms many times with at least one query term unmatched.

The next three features concern the URLs, capturing aspects of their length and complexity, and the last four features concern caption content. The first two of these content features (Official and Home) suggest claims about the importance or significance of the associated page. The third content feature (Image) suggests the presence of an image gallery, a popular genre of Web page. Terms represented by this feature include pictures, pics, and gallery.

The last content feature (Readable) applies an ad-hoc readability metric to each snippet. Regular users of Web search engines may notice occasional snippets that consist of little more than lists of words and phrases, rather than a coherent description. We define our own metric, since the Flesch-Kincaid readability score and similar measures are intended for entire documents, not text fragments. While the metric has not been experimentally validated, it does reflect our intuitions and observations regarding result snippets.
In English, the 100 most frequent words represent about 48% of text, and we would expect readable prose, as opposed to a disjointed list of words, to contain these words in roughly this proportion. The Readable feature computes the percentage of these top-100 words appearing in each caption. If these words represent more than 40% of one caption and less than 10% of the other, the pair is included in the appropriate set.

  Feature Tag       INV+   INV−   %+    CON+   CON−   %+    χ²       p-value
  MissingSnippet    185    121    60.4  144    133    51.9  4.2443   0.0393
  SnippetShort      20     6      76.9  12     16     42.8  6.4803   0.0109
  TermMatchTitle    800    559    58.8  660    700    48.5  29.2154  <.0001
  TermMatchTS       310    213    59.2  269    216    55.4  1.4938   0.2216
  TermMatchTSU      236    138    63.1  189    149    55.9  3.8088   0.0509
  TitleStartQuery   1058   933    53.1  916    1096   45.5  23.1999  <.0001
  QueryPhraseMatch  465    346    57.3  427    422    50.2  8.2741   0.0040
  MatchAll          8      2      80.0  1      4      20.0  n/a      0.0470
  URLQuery          277    188    59.5  159    315    33.5  63.9210  <.0001
  URLSlashes        1715   1388   55.2  1380   1758   43.9  79.5819  <.0001
  URLLenDiff        2288   2233   50.6  2062   2649   43.7  43.2974  <.0001
  Official          215    142    60.2  133    215    38.2  34.1397  <.0001
  Home              62     49     55.8  64     82     43.8  3.6458   0.0562
  Image             391    270    59.1  315    335    48.4  15.0735  <.0001
  Readable          52     43     54.7  31     48     39.2  4.1518   0.0415

Figure 5: Results corresponding to the features listed in figure 4, with χ² and p-values (df = 1). Features supported at the 95% confidence level are bolded. The p-value for the MatchAll feature is computed using Fisher's Exact Test.

4.3 Results
Figure 5 presents the results. Each row lists the size of the four sets (INV+, INV−, CON+, and CON−) for a given feature and indicates the percentage of positive pairs (%+) for INV and CON. In order to reject the null hypothesis, this percentage should be significantly greater for INV than CON. Except in one case, we applied the chi-squared test of independence to these sizes, with p-values shown in the last column.
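The Readable test of section 4.2 can be sketched as follows. The word list here is only an illustrative subset of the 100 most frequent English words (an assumption for brevity; the study used the full top-100 list), and the 40%/10% thresholds are those stated above.

```python
# Illustrative subset of the 100 most frequent English words; the study
# used the full top-100 list, which is not reproduced here.
TOP_WORDS = {"the", "of", "and", "a", "to", "in", "is", "you", "that", "it",
             "he", "was", "for", "on", "are", "as", "with", "his", "they", "at"}

def top_word_fraction(caption):
    """Fraction of a caption's tokens that are top-frequency words."""
    tokens = caption.lower().split()
    if not tokens:
        return 0.0
    return sum(t in TOP_WORDS for t in tokens) / len(tokens)

def classify_readable_pair(caption_a, caption_b, high=0.40, low=0.10):
    """Apply the Readable test: include the pair only when one caption's
    top-word fraction exceeds `high` and the other's falls below `low`.
    Returns 'positive' (B readable, A not), 'negative' (A readable, B not),
    or None (pair not included in the Readable sets)."""
    fa = top_word_fraction(caption_a)
    fb = top_word_fraction(caption_b)
    if fb > high and fa < low:
        return "positive"
    if fa > high and fb < low:
        return "negative"
    return None

a = "hubble telescope photo gallery pics images hubble"
b = "the telescope is in orbit and it is used to observe the sky"
print(classify_readable_pair(a, b))  # positive: B reads as prose, A as a word list
```

The interpretation of "percentage of these top-100 words appearing in each caption" as a fraction of caption tokens is our reading of the description above.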
For the MatchAll feature, where the sum of the set sizes is 15, we applied Fisher's exact test. Features supported at the 95% confidence level are bolded.

5. COMMENTARY
The results support claims that missing snippets, short snippets, missing query terms and complex URLs negatively impact clickthroughs. While this outcome may not be surprising, we are aware of no other work that can provide support for claims of this type in the context of a commercial Web search engine.

This work was originally motivated by our desire to validate some simple guidelines for the generation of captions, summarizing opinions that we formulated while working on related issues. While our results do not directly address all of the many variables that influence users' understanding of captions, they are consistent with the major guidelines. Further work is needed to provide additional support for the guidelines and to understand the relationships among variables.

The first of these guidelines underscores the importance of displaying query terms in context: Whenever possible all of the query terms should appear in the caption, reflecting their relationship to the associated page. If a query term is missing from a caption, the user may have no idea why the result was returned. The results for the MatchAll feature directly support this guideline. The results for TermMatchTitle and TermMatchTSU confirm that matching more terms is desirable. Other features provide additional indirect support for this guideline, and none of the results are inconsistent with it.

A second guideline speaks to the desirability of presenting the user with a readable snippet: When query terms are present in the title, they need not be repeated in the snippet.
In particular, when a high-quality query-independent summary is available from an external source, such as a Web directory, it may be more appropriate to display this summary than a lower-quality query-dependent fragment selected on-the-fly. When titles are available from multiple sources (the header, the body, Web directories), a caption generation algorithm might select a combination of title, snippet and URL that includes as many of the query terms as possible. When a title containing all query terms can be found, the algorithm might select a query-independent snippet. The MatchAll and Readable features directly support this guideline. Once again, other features provide indirect support, and none of the results are inconsistent with it.

Finally, the length and complexity of a URL influences user behavior. When query terms appear in the URL they should be highlighted or otherwise distinguished. When multiple URLs reference the same page (due to re-directions, etc.) the shortest URL should be preferred, provided that all query terms will still appear in the caption. In other words, URLs should be selected and displayed in a manner that emphasizes their relationship to the query. The three URL features, as well as TermMatchTSU, directly support this guideline.

The influence of the Official and Image features led us to wonder what other terms are prevalent in the captions of clickthrough inversions. As an additional experiment, we treated each of the terms appearing in the INV and CON sets as a separate feature (case normalized), ranking them by their χ² values. The results are presented in figure 6. Since we use the χ² statistic as a divergence measure, rather than a significance test, no p-values are given.
The final column of the table indicates the direction of the influence, whether the presence of the terms positively or negatively influences clickthroughs.

The positive influence of official has already been observed (the difference in the χ² value from that of figure 5 is due to stemming). None of the terms included in the Image feature appear in the top ten, but pictures and photos appear at positions 21 and 22. The high rank given to and may be related to readability (the term the appears in position 20).

  Rank   Term          χ²        Influence
  1      encyclopedia  114.6891  ↓
  2      wikipedia     94.0033   ↓
  3      official      36.5566   ↑
  4      and           28.3349   ↑
  5      tourism       25.2003   ↑
  6      attractions   24.7283   ↑
  7      free          23.6529   ↓
  8      sexy          21.9773   ↑
  9      medlineplus   19.9726   ↓
  10     information   19.9115   ↑

Figure 6: Words exhibiting the greatest positive (↑) and negative (↓) influence on clickthrough patterns.

Most surprising to us is the negative influence of the terms: encyclopedia, wikipedia, free, and medlineplus. The first three terms appear in the title of Wikipedia articles3 and the last appears in the title of MedlinePlus articles4. These individual word-level features provide hints about underlying issues. More detailed analyses and further experiments will be required to understand these features.

6. CONCLUSIONS
Clickthrough inversions form an appropriate tool for assessing the influence of caption features. Using clickthrough inversions, we have demonstrated that relatively simple caption features can significantly influence user behavior. To our knowledge, this is the first methodology validated for assessing the quality of Web captions through implicit feedback. In the future, we hope to substantially expand this work, considering more features over larger datasets. We also hope to directly address the goal of predicting relevance from clickthroughs and other information present in search engine logs.

7.
ACKNOWLEDGMENTS
This work was conducted while the first author was visiting Microsoft Research. The authors thank members of the Windows Live team for their comments and assistance, particularly Girish Kumar, Luke DeLorme, Rohit Wad and Ramez Naam.

3 www.wikipedia.org
4 www.nlm.nih.gov/medlineplus/

8. REFERENCES
[1] E. Agichtein, E. Brill, and S. Dumais. Improving web search ranking by incorporating user behavior information. In 29th ACM SIGIR, pages 19-26, Seattle, August 2006.
[2] E. Agichtein, E. Brill, S. Dumais, and R. Ragno. Learning user interaction models for predicting Web search result preferences. In 29th ACM SIGIR, pages 3-10, Seattle, August 2006.
[3] A. Broder. A taxonomy of Web search. SIGIR Forum, 36(2):3-10, 2002.
[4] E. Cutrell and Z. Guan. What are you looking for? An eye-tracking study of information usage in Web search. In SIGCHI Conference on Human Factors in Computing Systems, pages 407-416, San Jose, California, April-May 2007.
[5] S. Dumais, E. Cutrell, and H. Chen. Optimizing search by showing results in context. In SIGCHI Conference on Human Factors in Computing Systems, pages 277-284, Seattle, March-April 2001.
[6] J. Goldstein, M. Kantrowitz, V. Mittal, and J. Carbonell. Summarizing text documents: Sentence selection and evaluation metrics. In 22nd ACM SIGIR, pages 121-128, Berkeley, August 1999.
[7] L. A. Granka, T. Joachims, and G. Gay. Eye-tracking analysis of user behavior in WWW search. In 27th ACM SIGIR, pages 478-479, Sheffield, July 2004.
[8] Y. Hu, G. Xin, R. Song, G. Hu, S. Shi, Y. Cao, and H. Li. Title extraction from bodies of HTML documents and its application to Web page retrieval. In 28th ACM SIGIR, pages 250-257, Salvador, Brazil, August 2005.
[9] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In 28th ACM SIGIR, pages 154-161, Salvador, Brazil, August 2005.
[10] U. Lee, Z. Liu, and J. Cho.
Automatic identification of user goals in Web search. In 14th International World Wide Web Conference, pages 391-400, Edinburgh, May 2005.
[11] H. P. Luhn. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165, April 1958.
[12] T. Paek, S. Dumais, and R. Logan. WaveLens: A new view onto Internet search results. In SIGCHI Conference on Human Factors in Computing Systems, pages 727-734, Vienna, Austria, April 2004.
[13] D. Rose and D. Levinson. Understanding user goals in Web search. In 13th International World Wide Web Conference, pages 13-19, New York, May 2004.
[14] J.-T. Sun, D. Shen, H.-J. Zeng, Q. Yang, Y. Lu, and Z. Chen. Web-page summarization using clickthrough data. In 28th ACM SIGIR, pages 194-201, Salvador, Brazil, August 2005.
[15] A. Tombros and M. Sanderson. Advantages of query biased summaries in information retrieval. In 21st ACM SIGIR, pages 2-10, Melbourne, Australia, August 1998.
[16] R. Varadarajan and V. Hristidis. A system for query-specific document summarization. In 15th ACM International Conference on Information and Knowledge Management (CIKM), pages 622-631, Arlington, Virginia, November 2006.
[17] R. W. White, I. Ruthven, and J. M. Jose. Finding relevant documents using top ranking sentences: An evaluation of two alternative schemes. In 25th ACM SIGIR, pages 57-64, Tampere, Finland, August 2002.
[18] G.-R. Xue, H.-J. Zeng, Z. Chen, Y. Yu, W.-Y. Ma, W. Xi, and W. Fan. Optimizing web search using Web click-through data. In 13th ACM Conference on Information and Knowledge Management (CIKM), pages 118-126, Washington, DC, November 2004.

Keywords: summarization; clickthrough pattern; snippet; extractive summarization; web search behavior; significant word; web search; query log; human factor; clickthrough inversion; query term match; caption feature; query re-formulation
Studying the Use of Popular Destinations to Enhance Web Search Interaction

ABSTRACT
We present a novel Web search interaction feature which, for a given query, provides links to websites frequently visited by other users with similar information needs. These popular destinations complement traditional search results, allowing direct navigation to authoritative resources for the query topic. Destinations are identified using the history of search and browsing behavior of many users over an extended time period, whose collective behavior provides a basis for computing source authority. We describe a user study which compared the suggestion of destinations with the previously proposed suggestion of related queries, as well as with traditional, unaided Web search. Results show that search enhanced by destination suggestions outperforms other systems for exploratory tasks, with best performance obtained from mining past user behavior at query-level granularity.

1. INTRODUCTION
The problem of improving queries sent to Information Retrieval (IR) systems has been studied extensively in IR research [4][11]. Alternative query formulations, known as query suggestions, can be offered to users following an initial query, allowing them to modify the specification of their needs provided to the system, leading to improved retrieval performance. Recent popularity of Web search engines has enabled query suggestions that draw upon the query reformulation behavior of many users to make query recommendations based on previous user interactions [10].

Leveraging the decision-making processes of many users for query reformulation has its roots in adaptive indexing [8]. In recent years, applying such techniques has become possible at a much larger scale and in a different context than what was proposed in early work.
However, interaction-based approaches to query suggestion may be less potent when the information need is exploratory, since a large proportion of user activity for such information needs may occur beyond search engine interactions. In cases where directed searching is only a fraction of users' information-seeking behavior, the utility of other users' clicks over the space of top-ranked results may be limited, as it does not cover the subsequent browsing behavior. At the same time, user navigation that follows search engine interactions provides implicit endorsement of Web resources preferred by users, which may be particularly valuable for exploratory search tasks. Thus, we propose exploiting a combination of past searching and browsing user behavior to enhance users' Web search interactions.

Browser plugins and proxy server logs provide access to the browsing patterns of users that transcend search engine interactions. In previous work, such data have been used to improve search result ranking by Agichtein et al. [1]. However, this approach only considers page visitation statistics independently of each other, not taking into account the pages' relative positions on post-query browsing paths. Radlinski and Joachims [13] have utilized such collective user intelligence to improve retrieval accuracy by using sequences of consecutive query reformulations, yet their approach does not consider users' interactions beyond the search result page.

In this paper, we present a user study of a technique that exploits the searching and browsing behavior of many users to suggest popular Web pages, referred to as destinations henceforth, in addition to the regular search results. The destinations may not be among the top-ranked results, may not contain the queried terms, or may not even be indexed by the search engine.
Instead, they are pages at which other users end up frequently after submitting the same or similar queries and then browsing away from initially clicked search results. We conjecture that destinations popular across a large number of users can capture the collective user experience for information needs, and our results support this hypothesis.

In prior work, O'Day and Jeffries [12] identified teleportation as an information-seeking strategy employed by users jumping to their previously-visited information targets, while Anderson et al. [2] applied similar principles to support the rapid navigation of Web sites on mobile devices. In [19], Wexelblat and Maes describe a system to support within-domain navigation based on the browse trails of other users. However, we are not aware of such principles being applied to Web search. Research in the area of recommender systems has also addressed similar issues, but in areas such as question-answering [9] and relatively small online communities [16]. Perhaps the nearest instantiation of teleportation is search engines' offering of several within-domain shortcuts below the title of a search result. While these may be based on user behavior and possibly site structure, the user saves at most one click from this feature. In contrast, our proposed approach can transport users to locations many clicks beyond the search result, saving time and giving them a broader perspective on the available related information.

The conducted user study investigates the effectiveness of including links to popular destinations as an additional interface feature on search engine result pages.
We compare two variants of this\napproach against the suggestion of related queries and unaided Web\nsearch, and seek answers to questions on: (i) user preference and\nsearch effectiveness for known-item and exploratory search tasks,\nand (ii) the preferred distance between query and destination used to\nidentify popular destinations from past behavior logs. The results\nindicate that suggesting popular destinations to users attempting\nexploratory tasks provides best results in key aspects of the\ninformation-seeking experience, while providing query refinement\nsuggestions is most desirable for known-item tasks.\nThe remainder of the paper is structured as follows. In Section 2 we\ndescribe the extraction of search and browsing trails from user\nactivity logs, and their use in identifying top destinations for new\nqueries. Section 3 describes the design of the user study, while\nSections 4 and 5 present the study findings and their discussion,\nrespectively. We conclude in Section 6 with a summary.\n2. SEARCH TRAILS AND DESTINATIONS\nWe used Web activity logs containing searching and browsing\nactivity collected with permission from hundreds of thousands of\nusers over a five-month period between December 2005 and April\n2006. Each log entry included an anonymous user identifier, a\ntimestamp, a unique browser window identifier, and the URL of a\nvisited Web page. This information was sufficient to reconstruct\ntemporally ordered sequences of viewed pages that we refer to as\ntrails. In this section, we summarize the extraction of trails, their\nfeatures, and destinations (trail end-points). In-depth description\nand analysis of trail extraction are presented in [20].\n2.1 Trail Extraction\nFor each user, interaction logs were grouped based on browser\nidentifier information. Within each browser instance, participant\nnavigation was summarized as a path known as a browser trail,\nfrom the first to the last Web page visited in that browser. 
Located within some of these trails were search trails that originated with a query submission to a commercial search engine such as Google, Yahoo!, Windows Live Search, and Ask. It is these search trails that we use to identify popular destinations.

After originating with a query submission to a search engine, trails proceed until a point of termination where it is assumed that the user has completed their information-seeking activity. Trails must contain pages that are either: search result pages, search engine homepages, or pages connected to a search result page via a sequence of clicked hyperlinks. Extracting search trails using this methodology also goes some way toward handling multi-tasking, where users run multiple searches concurrently. Since users may open a new browser window (or tab) for each task [18], each task has its own browser trail, and a corresponding distinct search trail.

To reduce the amount of noise from pages unrelated to the active search task that may pollute our data, search trails are terminated when one of the following events occurs: (1) a user returns to their homepage, checks e-mail, logs in to an online service (e.g., MySpace or del.icio.us), types a URL or visits a bookmarked page; (2) a page is viewed for more than 30 minutes with no activity; (3) the user closes the active browser window. If a page (at step i) meets any of these criteria, the trail is assumed to terminate on the previous page (i.e., step i - 1).

There are two types of search trails we consider: session trails and query trails. Session trails transcend multiple queries and terminate only when one of the three termination criteria above are satisfied. Query trails use the same termination criteria as session trails, but also terminate upon submission of a new query to a search engine. Approximately 14 million query trails and 4 million session trails were extracted from the logs.
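The query-trail segmentation described above can be sketched roughly as follows, assuming each browser's log is already a time-ordered list of (timestamp, URL, is-query) tuples; the homepage/e-mail/bookmark termination heuristics are omitted for brevity.

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)  # termination criterion (2): 30 minutes of inactivity

def extract_query_trails(events):
    """Split one browser's time-ordered events into query trails.

    `events` is a list of (timestamp, url, is_query) tuples, where is_query
    marks a query submission to a search engine. A query trail starts at a
    query and ends at the next query, a 30-minute gap, or the end of the
    browser trail (a simplification of the criteria in section 2.1)."""
    trails, current, last_time = [], None, None
    for ts, url, is_query in events:
        timed_out = current is not None and last_time is not None \
            and ts - last_time > TIMEOUT
        if is_query or timed_out:
            if current:
                trails.append(current)
            # Pages seen before the next query submission are ignored.
            current = [url] if is_query else None
        elif current is not None:
            current.append(url)
        last_time = ts
    if current:
        trails.append(current)
    return trails

t0 = datetime(2006, 1, 1, 12, 0)
log = [
    (t0, "search?q=hubble", True),
    (t0 + timedelta(minutes=1), "nasa.gov/hubble", False),
    (t0 + timedelta(minutes=2), "nasa.gov/hubble/images", False),
    (t0 + timedelta(minutes=3), "search?q=hubble+photos", True),  # ends first query trail
    (t0 + timedelta(minutes=4), "hubblesite.org", False),
]
print(extract_query_trails(log))
```

Session trails differ only in that a new query submission does not terminate the current trail, so the two queries above would fall into a single session trail.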
We now describe some trail features.

2.2 Trail and Destination Analysis
Table 1 presents summary statistics for the query and session trails. Differences in user interaction between the last domain on the trail (Domain n) and all domains visited earlier (Domains 1 to (n - 1)) are particularly important, because they highlight the wealth of user behavior data not captured by logs of search engine interactions. Statistics are averages for all trails with two or more steps (i.e., those trails where at least one search result was clicked).

Table 1. Summary statistics (mean averages) for search trails.
  Measure                                          Query trails   Session trails
  Number of unique domains                         2.0            4.3
  Total page views: all domains                    4.8            16.2
  Total page views: domains 1 to (n - 1)           1.4            10.1
  Total page views: domain n (destination)         3.4            6.2
  Total time spent (secs): all domains             172.6          621.8
  Total time spent (secs): domains 1 to (n - 1)    70.4           397.6
  Total time spent (secs): domain n (destination)  102.3          224.1

The statistics suggest that users generally browse far from the search results page (i.e., around 5 steps), and visit a range of domains during the course of their search. On average, users visit 2 unique (non search-engine) domains per query trail, and just over 4 unique domains per session trail.
This suggests that users often do not find all the information they seek on the first domain they visit. For query trails, users also visit more pages, and spend significantly longer, on the last domain in the trail compared to all previous domains combined.1 These distinctions of the last domains in the trails may indicate user interest, page utility, or page relevance.2

2.3 Destination Prediction
For frequent queries, most popular destinations identified from Web activity logs could be simply stored for future lookup at search time. However, we have found that over the six-month period covered by our dataset, 56.9% of queries are unique, and 97% of queries occur 10 or fewer times, accounting for 19.8% and 66.3% of all searches respectively (these numbers are comparable to those reported in previous studies of search engine query logs [15,17]). Therefore, a lookup-based approach would prevent us from reliably suggesting destinations for a large fraction of searches. To overcome this problem, we utilize a simple term-based prediction model.

As discussed above, we extract two types of destinations: query destinations and session destinations. For both destination types, we obtain a corpus of query-destination pairs and use it to construct a term-vector representation of destinations that is analogous to the classic tf.idf document representation in traditional IR [14]. Then, given a new query q consisting of k terms t1...tk, we identify highest-scoring destinations using the following similarity function:

1 Independent measures t-test: t(~60M) = 3.89, p < .001
2 The topical relevance of the destinations was tested for a subset of around ten thousand queries for which we had human judgments.
The average\nrating of most of the destinations lay between good and excellent.\nVisual inspection of those that did not lie in this range revealed that many\nwere either relevant but had no judgments, or were related but had indirect\nquery association (e.g., petfooddirect.com for query [dogs]).\n,\n:\nWhere query and destination term weights, an\ncomputed using standard tf.idf weighting and que\nsession-normalized smoothed tf.idf weighting, respec\nexploring alternative algorithms for the destination p\nremains an interesting challenge for future work, resu\nstudy described in subsequent sections demonstrate th\napproach provides robust, effective results.\n3. STUDY\nTo examine the usefulness of destinations, we con\nstudy investigating the perceptions and performance\non four Web search systems, two with destination sug\n3.1 Systems\nFour systems were used in this study: a baseline Web\nwith no explicit support for query refinement (Base\nsystem with a query suggestion method that recomme\nqueries (QuerySuggestion), and two systems that aug\nWeb search with destination suggestions using either\nquery trails (QueryDestination), or end-points of\n(SessionDestination).\n3.1.1 System 1: Baseline\nTo establish baseline performance against which othe\nbe compared, we developed a masked interface to a p\nengine without additional support in formulating q\nsystem presented the user-constructed query to the\nand returned ten top-ranking documents retrieved by t\nremove potential bias that may have been caused by\nperceptions, we removed all identifying information\nengine logos and distinguishing interface features.\n3.1.2 System 2: QuerySuggestion\nIn addition to the basic search functionality offered\nQuerySuggestion provides suggestions about f\nrefinements that searchers can make following an\nsubmission. 
These suggestions are computed usin\nengine query log over the timeframe used for trail ge\neach target query, we retrieve two sets of candidate su\ncontain the target query as a substring. One set is com\nmost frequent such queries, while the second set cont\nfrequent queries that followed the target query in que\ncandidate query is then scored by multiplying its sm\nfrequency by its smoothed frequency of following th\nin past search sessions, using Laplacian smoothing. B\nscores, six top-ranked query suggestions are returned.\nsix suggestions are found, iterative backoff is per\nprogressively longer suffixes of the target query; a si\nis described in [10].\nSuggestions were offered in a box positioned on the t\nresult page, adjacent to the search results. Figure\nposition of the suggestions on the page. Figure 1b sh\nview of the portion of the results page containing th\noffered for the query [hubble telescope]. To the left o\nnd , are\nery- and\nuserctively. While\nprediction task\nults of the user\nhat this simple\nnducted a user\nof 36 subjects\nggestions.\nsearch system\nline), a search\nends additional\ngment baseline\nr end-points of\nsession trails\ner systems can\npopular search\nqueries. This\nsearch engine\nthe engine. To\nsubjects\" prior\nsuch as search\nd by Baseline,\nfurther query\nn initial query\nng the search\neneration. For\nuggestions that\nmposed of 100\ntains 100 most\nery logs. Each\nmoothed overall\nhe target query\nBased on these\n. If fewer than\nrformed using\nimilar strategy\ntop-right of the\n1a shows the\nhows a zoomed\nhe suggestions\nof each query\n(a) Position of suggestions (b) Zoo\nFigure 1. Query suggestion presentation in\nsuggestion is an icon similar to a progress b\nnormalized popularity. 
Clicking a suggestion retrieves new search results for that query.
3.1.3 System 3: QueryDestination
QueryDestination uses an interface similar to QuerySuggestion. However, instead of showing query refinements for the submitted query, QueryDestination suggests up to six destinations frequently visited by other users who submitted queries similar to the current one, and computed as described in the previous section.3 Figure 2a shows the position of the destination suggestions on the search results page. Figure 2b shows a zoomed view of the portion of the results page with destinations suggested for the query [hubble telescope].
(a) Position of destinations (b) Zoomed destinations
Figure 2. Destination presentation in QueryDestination.
To keep the interface uncluttered, the page title of each destination is shown on hover over the page URL (shown in Figure 2b). Next to the destination name, there is a clickable icon that allows the user to execute a search for the current query within the destination domain displayed. We show destinations as a separate list, rather than increasing their search result rank, since they may topically deviate from the original query (e.g., those focusing on related topics or not containing the original query terms).
3.1.4 System 4: SessionDestination
The interface functionality in SessionDestination is analogous to QueryDestination. The only difference between the two systems is the definition of trail end-points for queries used in computing popular destinations. QueryDestination directs users to the domains others end up at for the active or similar queries. In contrast, SessionDestination directs users to the domains other users visit at the end of the search session that follows the active or similar queries.
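The two end-point definitions can be contrasted with a short sketch. The session representation used here (a list of (query, visited-domains) pairs per session) is an assumed log format for illustration, not the paper's actual data structure:

```python
def trail_endpoints(session, scope="query"):
    """Extract trail end-point domains from one search session.

    session: list of (query, visited_domains) pairs in submission order --
             an assumed log format for illustration.
    scope="query"   mirrors QueryDestination: the domain where the trail
                    of each individual query ended.
    scope="session" mirrors SessionDestination: the domain at the very end
                    of the whole session, credited to every query in it.
    """
    if scope == "query":
        return [(q, domains[-1]) for q, domains in session if domains]
    # Final domain visited anywhere in the session, if any.
    final = next((domains[-1] for _, domains in reversed(session) if domains),
                 None)
    return [(q, final) for q, _ in session] if final else []
```

Aggregating these (query, domain) pairs over many users would yield the per-query popularity counts from which each system draws its suggestions.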
This downgrades the effect of multiple query iterations (i.e., we only care where users end up after submitting all queries), rather than directing searchers to potentially irrelevant domains that may precede a query reformulation.
3.2 Research Questions
We were interested in determining the value of popular destinations. To do this we attempt to answer the following research questions:
RQ1: Are popular destinations preferable and more effective than query refinement suggestions and unaided Web search for:
a. Searches that are well-defined (known-item tasks)?
b. Searches that are ill-defined (exploratory tasks)?
RQ2: Should popular destinations be taken from the end of query trails or the end of session trails?
3 To improve reliability, in a similar way to QuerySuggestion, destinations are only shown if their popularity exceeds a frequency threshold.
3.3 Subjects
36 subjects (26 males and 10 females) participated in our study. They were recruited through an email announcement within our organization where they hold a range of positions in different divisions. The average age of subjects was 34.9 years (max=62, min=27, SD=6.2). All are familiar with Web search, and conduct 7.5 searches per day on average (SD=4.1).
Thirty-one subjects (86.1%) reported general awareness of the query refinements offered by commercial Web search engines.
3.4 Tasks
Since the search task may influence information-seeking behavior [4], we made task type an independent variable in the study. We constructed six known-item tasks and six open-ended, exploratory tasks that were rotated between systems and subjects as described in the next section. Figure 3 shows examples of the two task types.
Known-item task
Identify three tropical storms (hurricanes and typhoons) that have caused property damage and/or loss of life.
Exploratory task
You are considering purchasing a Voice Over Internet Protocol (VoIP) telephone. You want to learn more about VoIP technology and providers that offer the service, and select the provider and telephone that best suits you.
Figure 3. Examples of known-item and exploratory tasks.
Exploratory tasks were phrased as simulated work task situations [5], i.e., short search scenarios that were designed to reflect real-life information needs. These tasks generally required subjects to gather background information on a topic or gather sufficient information to make an informed decision. The known-item search tasks required search for particular items of information (e.g., activities, discoveries, names) for which the target was well-defined. A similar task classification has been used successfully in previous work [21]. Tasks were taken and adapted from the Text Retrieval Conference (TREC) Interactive Track [7], and questions posed on question-answering communities (Yahoo! Answers, Google Answers, and Windows Live QnA).
To motivate the\nsubjects during their searches, we allowed them to select two\nknown-item and two exploratory tasks at the beginning of the\nexperiment from the six possibilities for each category, before\nseeing any of the systems or having the study described to them.\nPrior to the experiment all tasks were pilot tested with a small\nnumber of different subjects to help ensure that they were\ncomparable in difficulty and selectability (i.e., the likelihood that\na task would be chosen given the alternatives). Post-hoc analysis of\nthe distribution of tasks selected by subjects during the full study\nshowed no preference for any task in either category.\n3.5 Design and Methodology\nThe study used a within-subjects experimental design. System had\nfour levels (corresponding to the four experimental systems) and\nsearch tasks had two levels (corresponding to the two task types).\nSystem and task-type order were counterbalanced according to a\nGraeco-Latin square design.\nSubjects were tested independently and each experimental session\nlasted for up to one hour. We adhered to the following procedure:\n1. Upon arrival, subjects were asked to select two known-item and\ntwo exploratory tasks from the six tasks of each type.\n2. Subjects were given an overview of the study in written form that\nwas read aloud to them by the experimenter.\n3. Subjects completed a demographic questionnaire focusing on\naspects of search experience.\n4. For each of the four interface conditions:\na. Subjects were given an explanation of interface functionality\nlasting around 2 minutes.\nb. Subjects were instructed to attempt the task on the assigned\nsystem searching the Web, and were allotted up to 10 minutes\nto do so.\nc. Upon completion of the task, subjects were asked to complete\na post-search questionnaire.\n5. After completing the tasks on the four systems, subjects answered\na final questionnaire comparing their experiences on the systems.\n6. 
Subjects were thanked and compensated.
In the next section we present the findings of this study.
4. FINDINGS
In this section we use the data derived from the experiment to address our hypotheses about query suggestions and destinations, providing information on the effect of task type and topic familiarity where appropriate. Parametric statistical testing is used in this analysis and the level of significance is set to 0.05, unless otherwise stated. All Likert scales and semantic differentials used a 5-point scale where a rating closer to one signifies more agreement with the attitude statement.
4.1 Subject Perceptions
In this section we present findings on how subjects perceived the systems that they used. Responses to post-search (per-system) and final questionnaires are used as the basis for our analysis.
4.1.1 Search Process
To address the first research question, we wanted insight into subjects' perceptions of the search experience on each of the four systems. In the post-search questionnaires, we asked subjects to complete four 5-point semantic differentials indicating their responses to the attitude statement: "The search we asked you to perform was". The paired stimuli offered as responses were: relaxing/stressful, interesting/boring, restful/tiring, and easy/difficult.
The average obtained differential values are shown in Table 1 for each system and each task type. The value corresponding to the differential All represents the mean of all four differentials, providing an overall measure of subjects' feelings.
Table 1.
Perceptions of search process (lower = better).
Differential
Known-item Exploratory
B QS QD SD B QS QD SD
Easy 2.6 1.6 1.7 2.3 2.5 2.6 1.9 2.9
Restful 2.8 2.3 2.4 2.6 2.8 2.8 2.4 2.8
Interesting 2.4 2.2 1.7 2.2 2.2 1.8 1.8 2
Relaxing 2.6 1.9 2 2.2 2.5 2.8 2.3 2.9
All 2.6 2 1.9 2.3 2.5 2.5 2.1 2.7
Each cell in Table 1 summarizes subject responses for 18 task-system pairs (18 subjects who ran a known-item task on Baseline (B), 18 subjects who ran an exploratory task on QuerySuggestion (QS), etc.). The most positive response across all systems for each differential-task pair is shown in bold. We applied two-way analysis of variance (ANOVA) to each differential across all four systems and two task types. Subjects found the search easier on QuerySuggestion and QueryDestination than on the other systems for known-item tasks.4 For exploratory tasks, only searches conducted on QueryDestination were easier than on the other systems.5 Subjects indicated that exploratory tasks on the three non-baseline systems were more stressful (i.e., less relaxing) than the known-item tasks.6 As we will discuss in more detail in Section 4.1.3, subjects regarded the familiarity of Baseline as a strength, and may have struggled to attempt a more complex task while learning a new interface feature such as query or destination suggestions.
4.1.2 Interface Support
We solicited subjects' opinions on the search support offered by QuerySuggestion, QueryDestination, and SessionDestination. The following Likert scales and semantic differentials were used:
• Likert scale A: Using this system enhances my effectiveness in finding relevant information. (Effectiveness)7
• Likert scale B: The queries/destinations suggested helped me get closer to my information goal.
(CloseToGoal)
• Likert scale C: I would re-use the queries/destinations suggested if I encountered a similar task in the future. (Re-use)
• Semantic differential A: The queries/destinations suggested by the system were: relevant/irrelevant, useful/useless, appropriate/inappropriate.
We did not include these in the post-search questionnaire when subjects used the Baseline system as they refer to interface support options that Baseline did not offer. Table 2 presents the average responses for each of these scales and differentials, using the labels after each of the first three Likert scales in the bulleted list above. The values for the three semantic differentials are included at the bottom of the table, as is their overall average under All.
Table 2. Perceptions of system support (lower = better).
Scale / Differential
Known-item Exploratory
QS QD SD QS QD SD
Effectiveness 2.7 2.5 2.6 2.8 2.3 2.8
CloseToGoal 2.9 2.7 2.8 2.7 2.2 3.1
Re-use 2.9 3 2.4 2.5 2.5 3.2
1 Relevant 2.6 2.5 2.8 2.4 2 3.1
2 Useful 2.6 2.7 2.8 2.7 2.1 3.1
3 Appropriate 2.6 2.4 2.5 2.4 2.4 2.6
All {1,2,3} 2.6 2.6 2.6 2.6 2.3 2.9
The results show that all three experimental systems improved subjects' perceptions of their search effectiveness over Baseline, although only QueryDestination did so significantly.8 Further examination of the effect size (measured using Cohen's d) revealed that QueryDestination affects search effectiveness most positively.9 QueryDestination also appears to get subjects closer to their information goal (CloseToGoal) than QuerySuggestion or
4 easy: F(3,136) = 4.71, p = .0037; Tukey post-hoc tests: all p ≤ .008
5 easy: F(3,136) = 3.93, p = .01; Tukey post-hoc tests: all p ≤ .012
6 relaxing: F(1,136) = 6.47, p = .011
7 This question was conditioned on subjects' use of Baseline and their previous Web search experiences.
8 F(3,136) = 4.07, p = .008; Tukey post-hoc tests: all p ≤ .002
9 QS: d(K,E) = (.26,
.52); QD: d(K,E) = (.77, 1.50); SD: d(K,E) = (.48, .28)
SessionDestination, although only for exploratory search tasks.10 Additional comments on QuerySuggestion conveyed that subjects saw it as a convenience (to save them typing a reformulation) rather than a way to dramatically influence the outcome of their search. For exploratory searches, users benefited more from being pointed to alternative information sources than from suggestions for iterative refinements of their queries. Our findings also show that our subjects felt that QueryDestination produced more relevant and useful suggestions for exploratory tasks than the other systems.11 All other observed differences between the systems were not statistically significant.12 The difference between the performance of QueryDestination and SessionDestination is explained by the approach used to generate destinations (described in Section 2). SessionDestination's recommendations came from the end of users' session trails that often transcend multiple queries. This increases the likelihood that topic shifts adversely affect their relevance.
4.1.3 System Ranking
In the final questionnaire that followed completion of all tasks on all systems, subjects were asked to rank the four systems in descending order based on their preferences. Table 3 presents the mean average rank assigned to each of the systems.
Table 3. Relative ranking of systems (lower = better).
Systems Baseline QSuggest QDest SDest
Ranking 2.47 2.14 1.92 2.31
These results indicate that subjects preferred QuerySuggestion and QueryDestination overall. However, none of the differences between systems' ratings are significant.13 One possible explanation for these systems being rated higher could be that although the popular destination systems performed well for exploratory searches while QuerySuggestion performed well for known-item searches, an overall ranking merges these two performances.
This relative ranking reflects subjects' overall perceptions, but does not separate them for each task category. Over all tasks there appeared to be a slight preference for QueryDestination, but as other results show, the effect of task type on subjects' perceptions is significant.
The final questionnaire also included open-ended questions that asked subjects to explain their system ranking, and describe what they liked and disliked about each system:
Baseline:
Subjects who preferred Baseline commented on the familiarity of the system (e.g., "was familiar and I didn't end up using suggestions" (S36)). Those who did not prefer this system disliked the lack of support for query formulation ("Can be difficult if you don't pick good search terms" (S20)) and difficulty locating relevant documents (e.g., "Difficult to find what I was looking for" (S13); "Clunky current technology" (S30)).
QuerySuggestion:
Subjects who rated QuerySuggestion highest commented on rapid support for query formulation (e.g., "was useful in (1) saving typing (2) coming up with new ideas for query expansion" (S12); "helps me better phrase the search term" (S24); "made my next query easier" (S21)).
Those who did not prefer this system criticized suggestion quality (e.g., "Not relevant" (S11); "Popular
10 F(2,102) = 5.00, p = .009; Tukey post-hoc tests: all p ≤ .012
11 F(2,102) = 4.01, p = .01; α = .0167
12 Tukey post-hoc tests: all p ≥ .143
13 One-way repeated measures ANOVA: F(3,105) = 1.50, p = .22
queries weren't what I was looking for" (S18)) and the quality of results they led to (e.g., "Results (after clicking on suggestions) were of low quality" (S35); "Ultimately unhelpful" (S1)).
QueryDestination:
Subjects who preferred this system commented mainly on support for accessing new information sources (e.g., "provided potentially helpful and new areas / domains to look at" (S27)) and bypassing the need to browse to these pages ("Useful to try to 'cut to the chase' and go where others may have found answers to the topic" (S3)). Those who did not prefer this system commented on the lack of specificity in the suggested domains ("Should just link to site-specific query, not site itself" (S16); "Sites were not very specific" (S24); "Too general/vague" (S28)14), and the quality of the suggestions ("Not relevant" (S11); "Irrelevant" (S6)).
SessionDestination:
Subjects who preferred this system commented on the utility of the suggested domains ("suggestions make an awful lot of sense in providing search assistance, and seemed to help very nicely" (S5)). However, more subjects commented on the irrelevance of the suggestions (e.g., "did not seem reliable, not much help" (S30); "Irrelevant, not my style" (S21)), and the related need to include explanations about why the suggestions were offered (e.g., "Low-quality results, not enough information presented" (S35)).
These comments demonstrate a diverse range of perspectives on different aspects of the experimental systems.
Work is obviously needed in improving the quality of the suggestions in all systems, but subjects seemed to distinguish the settings in which each of these systems may be useful. Even though all systems can at times offer irrelevant suggestions, subjects appeared to prefer having them rather than not (e.g., one subject remarked "suggestions were helpful in some cases and harmless in all" (S15)).
4.1.4 Summary
The findings obtained from our study on subjects' perceptions of the four systems indicate that subjects tend to prefer QueryDestination for the exploratory tasks and QuerySuggestion for the known-item searches. Suggestions to incrementally refine the current query may be preferred by searchers on known-item tasks when they may have just missed their information target. However, when the task is more demanding, searchers appreciate suggestions that have the potential to dramatically influence the direction of a search or greatly improve topic coverage.
4.2 Search Tasks
To gain a better understanding of how subjects performed during the study, we analyze data captured on their perceptions of task completeness and the time that it took them to complete each task.
4.2.1 Subject Perceptions
In the post-search questionnaire, subjects were asked to indicate on a 5-point Likert scale the extent to which they agreed with the following attitude statement: "I believe I have succeeded in my performance of this task" (Success). In addition, they were asked to complete three 5-point semantic differentials indicating their response to the attitude statement: "The task we asked you to perform was". The paired stimuli offered as possible responses were clear/unclear, simple/complex, and familiar/unfamiliar. Table 4 presents the mean average response to these statements for each system and task type.
14 Although the destination systems provided support for search within a domain, subjects mainly chose to ignore this.
Table 4.
Perceptions of task and task success (lower = better).
Scale
Known-item Exploratory
B QS QD SD B QS QD SD
Success 2.0 1.3 1.4 1.4 2.8 2.3 1.4 2.6
1 Clear 1.2 1.1 1.1 1.1 1.6 1.5 1.5 1.6
2 Simple 1.9 1.4 1.8 1.8 2.4 2.9 2.4 3
3 Familiar 2.2 1.9 2.0 2.2 2.6 2.5 2.7 2.7
All {1,2,3} 1.8 1.4 1.6 1.8 2.2 2.2 2.2 2.3
Subject responses demonstrate that users felt that their searches had been more successful using QueryDestination for exploratory tasks than with the other three systems (i.e., there was a two-way interaction between these two variables).15 In addition, subjects perceived a significantly greater sense of completion with known-item tasks than with exploratory tasks.16 Subjects also found known-item tasks to be more simple, clear, and familiar.17
These responses confirm differences in the nature of the tasks we had envisaged when planning the study. As illustrated by the examples in Figure 3, the known-item tasks required subjects to retrieve a finite set of answers (e.g., find three interesting things to do during a weekend visit to Kyoto, Japan). In contrast, the exploratory tasks were multi-faceted, and required subjects to find out more about a topic or to find sufficient information to make a decision. The end-point in such tasks was less well-defined and may have affected subjects' perceptions of when they had completed the task. Given that there was no difference in the tasks attempted on each system, theoretically the perception of the tasks' simplicity, clarity, and familiarity should have been the same for all systems. However, we observe a clear interaction effect between the system and subjects' perception of the actual tasks.
4.2.2 Task Completion Time
In addition to asking subjects to indicate the extent to which they felt the task was completed, we also monitored the time that it took them to indicate to the experimenter that they had finished.
The elapsed time from when the subject began issuing their first query until when they indicated that they were done was monitored using a stopwatch and recorded for later analysis. A stopwatch rather than system logging was used for this since we wanted to record the time regardless of system interactions. Figure 4 shows the average task completion time for each system and each task type.
Figure 4. Mean average task completion time (± SEM). [Mean times in seconds, known-item / exploratory: Baseline 348.8 / 513.7; QSuggest 272.3 / 467.8; QDestination 232.3 / 474.2; SDestination 359.8 / 472.2.]
15 F(3,136) = 6.34, p = .001
16 F(1,136) = 18.95, p < .001
17 F(1,136) = 6.82, p = .028; Known-item tasks were also more simple on QS (F(3,136) = 3.93, p = .01; Tukey post-hoc test: p = .01); α = .167
As can be seen in the figure above, the task completion times for the known-item tasks differ greatly between systems.18 Subjects attempting these tasks on QueryDestination and QuerySuggestion complete them in less time than subjects on Baseline and SessionDestination.19 As discussed in the previous section, subjects were more familiar with the known-item tasks, and felt they were simpler and clearer. Baseline may have taken longer than the other systems since users had no additional support and had to formulate their own queries. Subjects generally felt that the recommendations offered by SessionDestination were of low relevance and usefulness. Consequently, the completion time increased slightly between these two systems, perhaps as the subjects assessed the value of the proposed suggestions but reaped little benefit from them. The task completion times for the exploratory tasks were approximately equal on all four systems20, although the time on Baseline was slightly higher.
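The mean ± SEM quantities reported in Figure 4 can be reproduced from per-subject completion times with a few lines; the sample timings below are invented for illustration, not study data:

```python
import math
from statistics import mean, stdev

def mean_sem(times):
    """Mean and standard error of the mean (SEM) of task times in seconds."""
    return mean(times), stdev(times) / math.sqrt(len(times))

# Invented per-subject timings for one system/task cell (not study data).
m, sem = mean_sem([210.0, 250.0, 237.0, 231.0])
```

SEM shrinks with the square root of the number of subjects per cell, which is why error bars on per-cell means of 18 subjects can be informative even with noisy individual timings.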
Since these tasks had no clearly defined termination criteria (i.e., the subject decided when they had gathered sufficient information), subjects generally spent longer searching, and consulted a broader range of information sources than in the known-item tasks.
4.2.3 Summary
Analysis of subjects' perceptions of the search tasks and aspects of task completion shows that the QuerySuggestion system made subjects feel more successful (and the task more simple, clear, and familiar) for the known-item tasks. On the other hand, QueryDestination was shown to lead to heightened perceptions of search success and task ease, clarity, and familiarity for the exploratory tasks. Task completion times on both systems were significantly lower than on the other systems for known-item tasks.
4.3 Subject Interaction
We now focus our analysis on the observed interactions between searchers and systems. As well as eliciting feedback on each system from our subjects, we also recorded several aspects of their interaction with each system in log files. In this section, we analyze three interaction aspects: query iterations, search-result clicks, and subject engagement with the additional interface features offered by the three non-baseline systems.
4.3.1 Queries and Result Clicks
Searchers typically interact with search systems by submitting queries and clicking on search results. Although our system offers additional interface affordances, we begin this section by analyzing the querying and clickthrough behavior of our subjects to better understand how they conducted core search activities. Table 5 shows the average number of query iterations and search results clicked for each system-task pair. The average value in each cell is computed for 18 subjects on each task type and system.
Table 5.
Average query iterations and result clicks (per task).
Scale
Known-item Exploratory
B QS QD SD B QS QD SD
Queries 1.9 4.2 1.5 2.4 3.1 5.7 2.7 3.5
Result clicks 2.6 2 1.7 2.4 3.4 4.3 2.3 5.1
Subjects submitted fewer queries and clicked on fewer search results in QueryDestination than in any of the other systems.21 As
18 F(3,136) = 4.56, p = .004
19 Tukey post-hoc tests: all p ≤ .021
20 F(3,136) = 1.06, p = .37
21 Queries: F(3,443) = 3.99; p = .008; Tukey post-hoc tests: all p ≤ .004; Systems: F(3,431) = 3.63, p = .013; Tukey post-hoc tests: all p ≤ .011
discussed in the previous section, subjects using this system felt more successful in their searches, yet they exhibited less of the traditional query and result-click interactions required for search success on traditional search systems. It may be the case that subjects' queries on this system were more effective, but it is more likely that they interacted less with the system through these means and elected to use the popular destinations instead. Overall, subjects submitted the most queries in QuerySuggestion, which is not surprising as this system actively encourages searchers to iteratively re-submit refined queries. Subjects interacted similarly with the Baseline and SessionDestination systems, perhaps due to the low quality of the popular destinations in the latter. To investigate this and related issues, we next analyze usage of the suggestions on the three non-baseline systems.
4.3.2 Suggestion Usage
To determine whether subjects found the additional features useful, we measure the extent to which they were used when they were provided. Suggestion usage is defined as the proportion of submitted queries for which suggestions were offered and at least one suggestion was clicked. Table 6 shows the average usage for each system and task category.
Table 6.
Suggestion uptake (values are percentages).
Measure
Known-item Exploratory
QS QD SD QS QD SD
Usage 35.7 33.5 23.4 30.0 35.2 25.3
Results indicate that QuerySuggestion was used more for known-item tasks than SessionDestination22, and QueryDestination was used more than all other systems for the exploratory tasks.23 For well-specified targets in known-item search, subjects appeared to use query refinement most heavily. In contrast, when subjects were exploring, they seemed to benefit most from the recommendation of additional information sources. Subjects selected almost twice as many destinations per query when using QueryDestination compared to SessionDestination.24 As discussed earlier, this may be explained by the lower perceived relevance and usefulness of destinations recommended by SessionDestination.
4.3.3 Summary
Analysis of log interaction data gathered during the study indicates that although subjects submitted fewer queries and clicked fewer search results on QueryDestination, their engagement with suggestions was highest on this system, particularly for exploratory search tasks. The refined queries proposed by QuerySuggestion were used the most for the known-item tasks. There appears to be a clear division between the systems: QuerySuggestion was preferred for known-item tasks, while QueryDestination provided the most-used support for exploratory tasks.
5. DISCUSSION AND IMPLICATIONS
The promising findings of our study suggest that systems offering popular destinations lead to more successful and efficient searching compared to query suggestion and unaided Web search. Subjects seemed to prefer QuerySuggestion for the known-item tasks where the information-seeking goal was well-defined.
If the initial query does not retrieve relevant information, then subjects
22 F(2,355) = 4.67, p = .01; Tukey post-hoc tests: p = .006
23 Tukey's post-hoc tests: all p ≤ .027
24 QD: MK = 1.8, ME = 2.1; SD: MK = 1.1, ME = 1.2; F(1,231) = 5.49, p = .02; Tukey post-hoc tests: all p ≤ .003; (M represents the mean average.)
appreciate support in deciding what refinements to make to the query. From examination of the queries that subjects entered for the known-item searches across all systems, they appeared to use the initial query as a starting point, and add or subtract individual terms depending on search results. The post-search questionnaire asked subjects to select from a list of proposed explanations (or offer their own explanations) as to why they used the recommended query refinements. For both the known-item tasks and the exploratory tasks, around 40% of subjects indicated that they selected a query suggestion because they "wanted to save time typing a query", while less than 10% of subjects did so because the suggestions "represented new ideas". Thus, subjects seemed to view QuerySuggestion as a time-saving convenience, rather than a way to dramatically impact search effectiveness.
The two variants of recommending destinations that we considered, QueryDestination and SessionDestination, offered suggestions that differed in their temporal proximity to the current query. The quality of the destinations appeared to affect subjects' perceptions of them and their task performance. As discussed earlier, domains residing at the end of a complete search session (as in SessionDestination) are more likely to be unrelated to the current query, and thus are less likely to constitute valuable suggestions. Destination systems, in particular QueryDestination, performed best for the exploratory search tasks, where subjects may have benefited from exposure to additional information sources whose topical relevance to the search query is indirect.
As with QuerySuggestion, subjects were asked to offer explanations for why they selected destinations. Over both task types they suggested that destinations were clicked because they "grabbed their attention" (40%), "represented new ideas" (25%), or they "couldn't find what they were looking for" (20%). The least popular responses were "wanted to save time typing the address" (7%) and "the destination was popular" (3%).
The positive response to destination suggestions from the study subjects provides interesting directions for design refinements. We were surprised to learn that subjects did not find the popularity bars useful, and hardly used the within-site search functionality, inviting re-design of these components. Subjects also remarked that they would like to see query-based summaries for each suggested destination to support more informed selection, as well as categorization of destinations with the capability of drill-down for each category. Since QuerySuggestion and QueryDestination perform well in distinct task scenarios, integrating both in a single system is an interesting future direction. We hope to deploy some of these ideas at Web scale in future systems, which will allow log-based evaluation across large user pools.
6. CONCLUSIONS
We presented a novel approach for enhancing users' Web search interaction by providing links to websites frequently visited by past searchers with similar information needs. A user study was conducted in which we evaluated the effectiveness of the proposed technique compared with a query refinement system and unaided Web search. Results of our study revealed that: (i) systems suggesting query refinements were preferred for known-item tasks, (ii) systems offering popular destinations were preferred for exploratory search tasks, and (iii) destinations should be mined from the end of query trails, not session trails.
Overall, popular destination suggestions strategically influenced searches in a way not achievable by query suggestion approaches, by offering a new way to resolve information problems and enhance the information-seeking experience for many Web searchers.
The Impact of Caching on Search Engines

ABSTRACT
In this paper we study the trade-offs in designing efficient caching systems for Web search engines. We explore the impact of different approaches, such as static vs. dynamic caching, and caching query results vs. caching posting lists. Using a query log spanning a whole year we explore the limitations of caching and we demonstrate that caching posting lists can achieve higher hit rates than caching query answers. We propose a new algorithm for static caching of posting lists, which outperforms previous methods. We also study the problem of finding the optimal way to split the static cache between answers and posting lists. Finally, we measure how the changes in the query log affect the effectiveness of static caching, given our observation that the distribution of the queries changes slowly over time. Our results and observations are applicable to different levels of the data-access hierarchy, for instance, for a memory/disk layer or a broker/remote server layer.

1. INTRODUCTION
Millions of queries are submitted daily to Web search engines, and users have high expectations of the quality and speed of the answers. As the searchable Web becomes larger and larger, with more than 20 billion pages to index, evaluating a single query requires processing large amounts of data. In such a setting, to achieve a fast response time and to increase the query throughput, using a cache is crucial. The primary use of a cache memory is to speed up computation by exploiting frequently or recently used data, although reducing the workload to back-end servers is also a major goal. Caching can be applied at different levels with increasing response latencies or processing requirements.
For example, the different levels may correspond to the main memory, the disk, or resources in a local or a wide area network.

The decision of what to cache is either off-line (static) or online (dynamic). A static cache is based on historical information and is periodically updated. A dynamic cache replaces entries according to the sequence of requests. When a new request arrives, the cache system decides whether to evict some entry from the cache in the case of a cache miss. Such online decisions are based on a cache policy, and several different policies have been studied in the past.

For a search engine, there are two possible ways to use a cache memory:

Caching answers: As the engine returns answers to a particular query, it may decide to store these answers to resolve future queries.

Caching terms: As the engine evaluates a particular query, it may decide to store in memory the posting lists of the involved query terms. Often the whole set of posting lists does not fit in memory, and consequently, the engine has to select a small set to keep in memory and speed up query processing.

Returning an answer to a query that already exists in the cache is more efficient than computing the answer using cached posting lists. On the other hand, previously unseen queries occur more often than previously unseen terms, implying a higher miss rate for cached answers. Caching of posting lists has additional challenges. As posting lists have variable size, caching them dynamically is not very efficient, due to the complexity in terms of efficiency and space, and the skewed distribution of the query stream, as shown later.
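To make the dynamic option concrete, here is a minimal sketch of an answer cache with LRU eviction. This is not the paper's implementation; the capacity and the `compute_answer` fallback are illustrative assumptions.

```python
from collections import OrderedDict

class LRUAnswerCache:
    """Toy dynamic cache for query answers with LRU eviction (illustrative)."""

    def __init__(self, capacity, compute_answer):
        self.capacity = capacity
        self.compute_answer = compute_answer  # fallback used on a cache miss
        self.entries = OrderedDict()          # query -> answer, LRU order
        self.hits = 0
        self.misses = 0

    def get(self, query):
        if query in self.entries:
            self.hits += 1
            self.entries.move_to_end(query)   # mark as most recently used
            return self.entries[query]
        self.misses += 1
        answer = self.compute_answer(query)   # full (expensive) evaluation
        self.entries[query] = answer
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return answer
```

A static cache would instead be filled once from historical frequencies and never evict at query time, which is the distinction the section draws.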
Static caching of posting lists poses even more challenges: when deciding which terms to cache one faces the trade-off between frequently queried terms and terms with small posting lists that are space efficient. Finally, before deciding to adopt a static caching policy the query stream should be analyzed to verify that its characteristics do not change rapidly over time.

[Figure 1: One caching level in a distributed search architecture. The broker holds dynamically/statically cached answers and a static cache of posting lists, in front of a local query processor and its disk; the next caching level is reached over local or remote network access.]

In this paper we explore the trade-offs in the design of each cache level, showing that the problem is the same and only a few parameters change. In general, we assume that each level of caching in a distributed search architecture is similar to that shown in Figure 1. We use a query log spanning a whole year to explore the limitations of dynamically caching query answers or posting lists for query terms. More concretely, our main conclusions are that:

• Caching query answers results in lower hit ratios compared to caching of posting lists for query terms, but it is faster because there is no need for query evaluation. We provide a framework for the analysis of the trade-off between static caching of query answers and posting lists;

• Static caching of terms can be more effective than dynamic caching with, for example, LRU. We provide algorithms based on the Knapsack problem for selecting the posting lists to put in a static cache, and we show improvements over previous work, achieving a hit ratio over 90%;

• Changes of the query distribution over time have little impact on static caching.

The remainder of this paper is organized as follows. Sections 2 and 3 summarize related work and characterize the data sets we use. Section 4 discusses the limitations of dynamic caching.
Sections 5 and 6 introduce algorithms for caching posting lists, and a theoretical framework for the analysis of static caching, respectively. Section 7 discusses the impact of changes in the query distribution on static caching, and Section 8 provides concluding remarks.

2. RELATED WORK
There is a large body of work devoted to query optimization. Buckley and Lewit [3], in one of the earliest works, take a term-at-a-time approach to deciding when inverted lists need not be further examined. More recent examples demonstrate that the top k documents for a query can be returned without the need for evaluating the complete set of posting lists [1, 4, 15]. Although these approaches seek to improve query processing efficiency, they differ from our current work in that they do not consider caching. They may be considered separate and complementary to a cache-based approach.

Raghavan and Sever [12], in one of the first papers on exploiting user query history, propose using a query base, built upon a set of persistent optimal queries submitted in the past, to improve the retrieval effectiveness for similar future queries. Markatos [10] shows the existence of temporal locality in queries, and compares the performance of different caching policies. Based on the observations of Markatos, Lempel and Moran propose a new caching policy, called Probabilistic Driven Caching, by attempting to estimate the probability distribution of all possible queries submitted to a search engine [8]. Fagni et al. follow Markatos' work by showing that combining static and dynamic caching policies together with an adaptive prefetching policy achieves a high hit ratio [7]. Different from our work, they consider caching and prefetching of pages of results.

As systems are often hierarchical, there has also been some effort on multi-level architectures. Saraiva et al. propose a new architecture for Web search engines using a two-level dynamic caching system [13].
Their goal for such systems has been to improve response time for hierarchical engines. In their architecture, both levels use an LRU eviction policy. They find that the second-level cache can effectively reduce disk traffic, thus increasing the overall throughput. Baeza-Yates and Saint-Jean propose a three-level index organization [2]. Long and Suel propose a caching system structured according to three different levels [9]. The intermediate level contains frequently occurring pairs of terms and stores the intersections of the corresponding inverted lists. These last two papers are related to ours in that they exploit different caching strategies at different levels of the memory hierarchy.

Finally, our static caching algorithm for posting lists in Section 5 uses the ratio frequency/size in order to evaluate the goodness of an item to cache. Similar ideas have been used in the context of file caching [17], Web caching [5], and even caching of posting lists [9], but in all cases in a dynamic setting. To the best of our knowledge we are the first to use this approach for static caching of posting lists.

3. DATA CHARACTERIZATION
Our data consists of a crawl of documents from the UK domain, and query logs of one year of queries submitted to http://www.yahoo.co.uk from November 2005 to November 2006. In our logs, 50% of the total volume of queries are unique. The average query length is 2.5 terms, with the longest query having 731 terms.

[Figure 2: The distribution of queries (bottom curve) and query terms (middle curve) in the query log, and the distribution of document frequencies of terms in the UK-2006 dataset (upper curve); normalized frequency vs. normalized frequency rank, log-log scale.]

Figure 2 shows the distributions of queries (lower curve), and query terms (middle curve). The x-axis represents the normalized frequency rank of the query or term.
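The paper reports power-law slopes for these rank-frequency curves without detailing the fitting procedure. A common rough approach, shown here purely as an illustrative sketch (not the paper's method), is least-squares regression in log-log space:

```python
import math

def powerlaw_slope(frequencies):
    """Estimate alpha for f(r) ~ r^(-alpha) by regressing log(frequency)
    on log(rank). Rough estimator; shown for illustration only."""
    freqs = sorted(frequencies, reverse=True)          # rank 1 = most frequent
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # report alpha as a positive number
```

Note that naive log-log regression is known to be a biased estimator for power laws; it merely illustrates how a slope such as 1.84 or 2.26 can be read off a rank-frequency curve.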
(The most frequent query appears closest to the y-axis.) The y-axis is the normalized frequency for a given query (or term). As expected, the distributions of query frequencies and query term frequencies follow power laws, with slopes of 1.84 and 2.26, respectively. In this figure, the query frequencies were computed as they appear in the logs with no normalization for case or white space. The query terms (middle curve) have been normalized for case, as have the terms in the document collection.

The document collection that we use for our experiments is a summary of the UK domain crawled in May 2006.1 This summary corresponds to a maximum of 400 crawled documents per host, using a breadth-first crawling strategy, comprising 15GB. The distribution of document frequencies of terms in the collection follows a power law distribution with slope 2.38 (upper curve in Figure 2). The statistics of the collection are shown in Table 1.

Table 1: Statistics of the UK-2006 sample.
  # of documents      2,786,391
  # of terms          6,491,374
  # of tokens     2,109,512,558

We measured the correlation between the document frequency of terms in the collection and the number of queries that contain a particular term in the query log to be 0.424. A scatter plot for a random sample of terms is shown in Figure 3. In this experiment, terms have been converted to lower case in both the queries and the documents so that the frequencies will be comparable.

[Figure 3: Normalized scatter plot of document-term frequencies vs. query-term frequencies, log-log scale.]

4. CACHING OF QUERIES AND TERMS
Caching relies upon the assumption that there is locality in the stream of requests.
That is, there must be sufficient repetition in the stream of requests, and within intervals of time, to enable a cache memory of reasonable size to be effective. In the query log we used, 88% of the unique queries are singleton queries, and 44% are singleton queries out of the whole volume. Thus, out of all queries in the stream composing the query log, the upper threshold on hit ratio is 56%. This is because only 56% of all the queries comprise queries that have multiple occurrences. It is important to observe, however, that not all queries in this 56% can be cache hits because of compulsory misses. A compulsory miss happens when the cache receives a query for the first time. This is different from capacity misses, which happen due to space constraints on the amount of memory the cache uses. If we consider a cache with infinite memory, then the hit ratio is 50%. Note that for an infinite cache there are no capacity misses.

1 The collection is available from the University of Milan: http://law.dsi.unimi.it/. URL retrieved 05/2007.

[Figure 4: Arrival rate for both terms and queries (total terms, terms diff, total queries, unique queries, unique terms, query diff), normalized number of elements per one-day bin.]

As we mentioned before, another possibility is to cache the posting lists of terms. Intuitively, this gives more freedom in the utilization of the cache content to respond to queries because cached terms might form a new query. On the other hand, they need more space. As opposed to queries, the fraction of singleton terms in the total volume of terms is smaller. In our query log, only 4% of the terms appear once, but this accounts for 73% of the vocabulary of query terms.
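The singleton and infinite-cache statistics quoted above can be derived from a log with a few counters. A toy sketch (the stream below is illustrative, not log data):

```python
from collections import Counter

def stream_cache_stats(query_stream):
    """Singleton shares and the hit ratio of an unbounded answer cache.

    Every first occurrence is a compulsory miss; with infinite memory
    there are no capacity misses, so every repetition is a hit.
    """
    counts = Counter(query_stream)
    total = len(query_stream)
    unique = len(counts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return {
        "singleton_unique_share": singletons / unique,   # cf. the 88%
        "singleton_volume_share": singletons / total,    # cf. the 44%
        "hit_ratio_upper_bound": 1 - singletons / total, # cf. the 56%
        "infinite_cache_hit_ratio": (total - unique) / total,  # cf. the 50%
    }
```

Each first occurrence of a repeated query is still a miss, which is why the infinite-cache hit ratio (repetitions over volume) sits below the 56%-style upper bound (non-singleton volume).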
We show in Section 5 that caching a small fraction of terms, while accounting for terms appearing in many documents, is potentially very effective.

Figure 4 shows several graphs corresponding to the normalized arrival rate for different cases using days as bins. That is, we plot the normalized number of elements that appear in a day. This graph shows only a period of 122 days, and we normalize the values by the maximum value observed throughout the whole period of the query log. Total queries and Total terms correspond to the total volume of queries and terms, respectively. Unique queries and Unique terms correspond to the arrival rate of unique queries and terms. Finally, Query diff and Terms diff correspond to the difference between the curves for total and unique.

In Figure 4, as expected, the volume of terms is much higher than the volume of queries. The difference between the total number of terms and the number of unique terms is much larger than the difference between the total number of queries and the number of unique queries. This observation implies that terms repeat significantly more than queries. If we use smaller bins, say of one hour, then the ratio of unique to volume is higher for both terms and queries because it leaves less room for repetition. We also estimated the workload using the document frequency of terms as a measure of how much work a query imposes on a search engine. We found that it follows closely the arrival rate for terms shown in Figure 4.

To demonstrate the effect of a dynamic cache on the query frequency distribution of Figure 2, we plot the same frequency graph, but now considering the frequency of queries after going through an LRU cache.

[Figure 5: Frequency graph after LRU cache.]

On a cache miss, an LRU cache decides upon an entry to evict using the information on the recency of queries.
In this graph, the most frequent queries are not the same queries that were most frequent before the cache. It is possible that queries that are most frequent after the cache have different characteristics, and tuning the search engine to queries frequent before the cache may degrade performance for non-cached queries. The maximum frequency after caching is less than 1% of the maximum frequency before the cache, thus showing that the cache is very effective in reducing the load of frequent queries. If we re-rank the queries according to after-cache frequency, the distribution is still a power law, but with a much smaller value for the highest frequency.

When discussing the effectiveness of dynamic caching, an important metric is the cache miss rate. To analyze the cache miss rate for different memory constraints, we use the working set model [6, 14]. A working set, informally, is the set of references that an application or an operating system is currently working with. The model uses such sets in a strategy that tries to capture the temporal locality of references. The working set strategy then consists in keeping in memory only the elements that are referenced in the previous θ steps of the input sequence, where θ is a configurable parameter corresponding to the window size.

Originally, working sets have been used for page replacement algorithms of operating systems, and considering such a strategy in the context of search engines is interesting for three reasons. First, it captures the amount of locality of queries and terms in a sequence of queries. Locality in this case refers to the frequency of queries and terms in a window of time. If many queries appear multiple times in a window, then locality is high. Second, it enables an offline analysis of the expected miss rate given different memory constraints. Third, working sets capture aspects of efficient caching algorithms such as LRU.
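The working-set strategy just described can be simulated directly to obtain a miss rate for a given window size θ. A minimal sketch (the exact window boundary semantics here are an assumption of this sketch):

```python
def working_set_miss_rate(stream, theta):
    """Fraction of requests that miss when only elements referenced in
    the previous theta steps of the sequence are kept in memory."""
    misses = 0
    last_seen = {}  # element -> position of its most recent reference
    for step, item in enumerate(stream):
        prev = last_seen.get(item)
        if prev is None or step - prev > theta:
            misses += 1  # not referenced within the window: a miss
        last_seen[item] = step
    return misses / len(stream)
```

With a window larger than the stream, only compulsory misses remain, so the miss rate converges to the unique-to-total ratio discussed in Section 4.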
LRU assumes that references farther in the past are less likely to be referenced in the present, which is implicit in the concept of working sets [14].

Figure 6 plots the miss rate for different working set sizes, and we consider working sets of both queries and terms. The working set sizes are normalized against the total number of queries in the query log. In the graph for queries, there is a sharp decay until approximately 0.01, and the rate at which the miss rate drops decreases as we increase the size of the working set over 0.01. Finally, the minimum value it reaches is 50% miss rate, not shown in the figure as we have cut the tail of the curve for presentation purposes.

[Figure 6: Miss rate as a function of the (normalized) working set size, for queries and for terms.]

[Figure 7: Distribution of distances expressed in terms of distinct queries.]

Compared to the query curve, we observe that the minimum miss rate for terms is substantially smaller. The miss rate also drops sharply on values up to 0.01, and it decreases minimally for higher values. The minimum value, however, is slightly over 10%, which is much smaller than the minimum value for the sequence of queries. This implies that with such a policy it is possible to achieve over 80% hit rate, if we consider dynamically caching posting lists for terms as opposed to caching answers for queries. This result does not consider the space required for each unit stored in the cache memory, or the amount of time it takes to put together a response to a user query. We analyze these issues more carefully later in this paper.

It is interesting also to observe the histogram of Figure 7, which is an intermediate step in the computation of the miss rate graph. It reports the distribution of distances between repetitions of the same frequent query.
The distance in the\nplot is measured in the number of distinct queries\nseparating a query and its repetition, and it considers only queries\nappearing at least 10 times. From Figures 6 and 7, we\nconclude that even if we set the size of the query answers cache\nto a relatively large number of entries, the miss rate is high.\nThus, caching the posting lists of terms has the potential to\nimprove the hit ratio. This is what we explore next.\n5. CACHING POSTING LISTS\nThe previous section shows that caching posting lists can\nobtain a higher hit rate compared to caching query answers.\nIn this section we study the problem of how to select\nposting lists to place on a certain amount of available memory,\nassuming that the whole index is larger than the amount of\nmemory available. The posting lists have variable size (in\nfact, their size distribution follows a power law), so it is\nbeneficial for a caching policy to consider the sizes of the posting\nlists. We consider both dynamic and static caching. For\ndynamic caching, we use two well-known policies, LRU and\nLFU, as well as a modified algorithm that takes posting-list\nsize into account.\nBefore discussing the static caching strategies, we\nintroduce some notation. We use fq(t) to denote the query-term\nfrequency of a term t, that is, the number of queries\ncontaining t in the query log, and fd(t) to denote the document\nfrequency of t, that is, the number of documents in the\ncollection in which the term t appears.\nThe first strategy we consider is the algorithm proposed by\nBaeza-Yates and Saint-Jean [2], which consists in selecting\nthe posting lists of the terms with the highest query-term\nfrequencies fq(t). We call this algorithm Qtf.\nWe observe that there is a trade-off between fq(t) and\nfd(t). Terms with high fq(t) are useful to keep in the cache\nbecause they are queried often. 
On the other hand, terms with high fd(t) are not good candidates because they correspond to long posting lists and consume a substantial amount of space. In fact, the problem of selecting the best posting lists for the static cache corresponds to the standard Knapsack problem: given a knapsack of fixed capacity, and a set of n items, such that the i-th item has value ci and size si, select the set of items that fit in the knapsack and maximize the overall value. In our case, value corresponds to fq(t) and size corresponds to fd(t). Thus, we employ a simple algorithm for the knapsack problem, which is selecting the posting lists of the terms with the highest values of the ratio fq(t)/fd(t). We call this algorithm QtfDf. We tried other variations considering query frequencies instead of term frequencies, but the gain was minimal compared to the complexity added.

In addition to the above two static algorithms we consider the following algorithms for dynamic caching:

• LRU: A standard LRU algorithm, but many posting lists might need to be evicted (in order of least-recent usage) until there is enough space in the memory to place the currently accessed posting list;

• LFU: A standard LFU algorithm (eviction of the least-frequently used), with the same modification as the LRU;

• Dyn-QtfDf: A dynamic version of the QtfDf algorithm; evict from the cache the term(s) with the lowest fq(t)/fd(t) ratio.

The performance of all the above algorithms for 15 weeks of the query log and the UK dataset is shown in Figure 8. Performance is measured with hit rate. The cache size is measured as a fraction of the total space required to store the posting lists of all terms. For the dynamic algorithms, we load the cache with terms in order of fq(t) and we let the cache warm up for 1 million queries. For the static algorithms, we assume complete knowledge of the frequencies fq(t), that is, we estimate fq(t) from the whole query stream.
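As a sketch, the static QtfDf selection (a greedy knapsack heuristic on the fq(t)/fd(t) ratio, with fd(t) standing in for the posting-list size) can be written as:

```python
def qtf_df_select(fq, fd, capacity):
    """Greedy knapsack heuristic: cache posting lists in decreasing order
    of fq(t)/fd(t) while they fit in the remaining capacity.

    fq: query-term frequency of each term; fd: document frequency of each
    term, used here as the size of the term's posting list.
    """
    ranked = sorted(fq, key=lambda t: fq[t] / fd[t], reverse=True)
    cached, used = [], 0
    for t in ranked:
        if used + fd[t] <= capacity:  # greedily take anything that still fits
            cached.append(t)
            used += fd[t]
    return cached
```

The dynamic variant, Dyn-QtfDf, would instead evict the cached term(s) with the lowest ratio whenever an incoming posting list needs space.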
As we show in Section 7, the results do not change much if we compute the query-term frequencies using the first 3 or 4 weeks of the query log and measure the hit rate on the rest.

[Figure 8: Hit rate of different strategies for caching posting lists (static QTF/DF, LRU, LFU, Dyn-QTF/DF, QTF) as a function of cache size.]

The most important observation from our experiments is that the static QtfDf algorithm has a better hit rate than all the dynamic algorithms. An important benefit of a static cache is that it requires no eviction and it is hence more efficient when evaluating queries. However, if the characteristics of the query traffic change frequently over time, then it requires re-populating the cache often or there will be a significant impact on hit rate.

6. ANALYSIS OF STATIC CACHING
In this section we provide a detailed analysis for the problem of deciding whether it is preferable to cache query answers or cache posting lists. Our analysis takes into account the impact of caching between two levels of the data-access hierarchy. It can either be applied at the memory/disk layer or at a server/remote server layer as in the architecture we discussed in the introduction. Using a particular system model, we obtain estimates for the parameters required by our analysis, which we subsequently use to decide the optimal trade-off between caching query answers and caching posting lists.

6.1 Analytical Model
Let M be the size of the cache measured in answer units (the cache can store M query answers). Assume that all posting lists are of the same length L, measured in answer units. We consider the following two cases: (A) a cache that stores only precomputed answers, and (B) a cache that stores only posting lists. In the first case, Nc = M answers fit in the cache, while in the second case Np = M/L posting lists fit in the cache. Thus, Np = Nc/L.
Note that although posting lists require more space, we can combine terms to evaluate more queries (or partial queries).

For case (A), suppose that a query answer in the cache can be evaluated in 1 time unit. For case (B), assume that if the posting lists of the terms of a query are in the cache then the results can be computed in TR1 time units, while if the posting lists are not in the cache then the results can be computed in TR2 time units. Of course TR2 > TR1. Now we want to compare the time to answer a stream of Q queries in both cases. Let Vc(Nc) be the volume of the most frequent Nc queries. Then, for case (A), we have an overall time

    TCA = Vc(Nc) + TR2(Q − Vc(Nc)).

Similarly, for case (B), let Vp(Np) be the number of computable queries. Then we have overall time

    TPL = TR1 Vp(Np) + TR2(Q − Vp(Np)).

We want to check under which conditions we have TPL < TCA. Subtracting, this holds when

    TCA − TPL = (TR2 − TR1)Vp(Np) − (TR2 − 1)Vc(Nc) > 0.

Figure 9 shows the values of Vp and Vc for our data. We can see that caching answers saturates faster, and for this particular data there is no additional benefit from using more than 10% of the index space for caching answers.

As the query distribution is a power law with parameter α > 1, the i-th most frequent query appears with probability proportional to 1/i^α. Therefore, the volume Vc(n), which is the total number of the n most frequent queries, is

    Vc(n) = V0 Σ_{i=1..n} Q/i^α = γn Q   (0 < γn < 1).

We know that Vp(n) grows faster than Vc(n) and assume, based on experimental results, that the relation is of the form Vp(n) = k Vc(n)^β. In the worst case, for a large cache, β → 1. That is, both techniques will cache a constant fraction of the overall query volume.
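The two totals TCA and TPL can be compared directly once the parameters are fixed. A small sketch with made-up values (the real TR1 and TR2 are only estimated later, in Section 6.2):

```python
def answer_cache_time(Q, Vc, TR2):
    """T_CA = Vc + TR2 * (Q - Vc): a cached answer costs 1 time unit,
    every other query costs a full evaluation, TR2."""
    return Vc + TR2 * (Q - Vc)

def posting_cache_time(Q, Vp, TR1, TR2):
    """T_PL = TR1 * Vp + TR2 * (Q - Vp): computable queries cost TR1,
    the rest cost TR2."""
    return TR1 * Vp + TR2 * (Q - Vp)

# Illustrative parameters (assumptions, not measurements): a stream of Q
# queries, Vc of which hit cached answers, Vp of which are computable
# from cached posting lists.
Q, Vc, Vp, TR1, TR2 = 1000, 300, 500, 100, 1600
t_ca = answer_cache_time(Q, Vc, TR2)
t_pl = posting_cache_time(Q, Vp, TR1, TR2)
```

With these numbers t_pl < t_ca, and the gap equals (TR2 − TR1)·Vp − (TR2 − 1)·Vc, the quantity whose sign decides between the two designs.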
Then caching posting lists makes sense only if

    k(TR2 − TR1) / (L(TR2 − 1)) > 1.

If we use compression, we have L′ < L and T′R1 > TR1. According to the experiments that we show later, compression is always better.

For a small cache, we are interested in the transient behavior, and then β > 1, as computed from our data. In this case there will always be a point where TPL > TCA for a large number of queries.

In reality, instead of filling the cache only with answers or only with posting lists, a better strategy will be to divide the total cache space into cache for answers and cache for posting lists. In such a case, there will be some queries that could be answered by both parts of the cache. As the answer cache is faster, it will be the first choice for answering those queries. Let QNc and QNp be the sets of queries that can be answered by the cached answers and the cached posting lists, respectively. Then, the overall time is

    T = Vc(Nc) + TR1 V(QNp − QNc) + TR2(Q − V(QNp ∪ QNc)),

where Np = (M − Nc)/L. Finding the optimal division of the cache in order to minimize the overall retrieval time is a difficult problem to solve analytically. In Section 6.3 we use simulations to derive optimal cache trade-offs for particular implementation examples.

6.2 Parameter Estimation
We now use a particular implementation of a centralized system and the model of a distributed system as examples from which we estimate the parameters of the analysis from the previous section. We perform the experiments using an optimized version of Terrier [11] for both indexing documents and processing queries, on a single machine with a Pentium 4 at 2GHz and 1GB of RAM. We indexed the documents from the UK-2006 dataset, without removing stop words or applying stemming. The posting lists in the inverted file consist of pairs of document identifier and term frequency.
We compress the document identifier gaps using Elias gamma encoding, and the term frequencies in documents using unary encoding [16]. The size of the inverted file is 1,189Mb. A stored answer requires 1264 bytes, and an uncompressed posting takes 8 bytes. From Table 1, we obtain L = (8 · # of postings)/(1264 · # of terms) = 0.75 and L′ = (inverted file size)/(1264 · # of terms) = 0.26.

[Figure 9: Cache saturation as a function of size: query volume covered by precomputed answers vs. by posting lists.]

Table 2: Ratios between the average time to evaluate a query and the average time to return cached answers (centralized and distributed case); primed values refer to the compressed index.

  Centralized system    TR1    TR2   T′R1   T′R2
  Full evaluation        233   1760    707   1140
  Partial evaluation      99   1626    493    798

  LAN system           TRL1   TRL2  T′RL1  T′RL2
  Full evaluation        242   1769    716   1149
  Partial evaluation     108   1635    502    807

  WAN system           TRW1   TRW2  T′RW1  T′RW2
  Full evaluation       5001   6528   5475   5908
  Partial evaluation    4867   6394   5270   5575

We estimate the ratio TR = T/Tc between the average time T it takes to evaluate a query and the average time Tc it takes to return a stored answer for the same query, in the following way. Tc is measured by loading the answers for 100,000 queries in memory, and answering the queries from memory. The average time is Tc = 0.069ms. T is measured by processing the same 100,000 queries (the first 10,000 queries are used to warm up the system). For each query, we remove stop words, if there are at least three remaining terms. The stop words correspond to the terms with a frequency higher than the number of documents in the index. We use a document-at-a-time approach to retrieve documents containing all query terms. The only disk access required during query processing is for reading compressed posting lists from the inverted file.
We perform both full and partial evaluation of answers, because some queries are likely to retrieve a large number of documents, and only a fraction of the retrieved documents will be seen by users. In the partial evaluation of queries, we terminate the processing after matching 10,000 documents. The estimated ratios TR are presented in Table 2.

Figure 10 shows, for a sample of queries, the workload of the system with partial query evaluation and compressed posting lists. The x-axis corresponds to the total time the system spends processing a particular query, and the vertical axis corresponds to the sum Σt∈q fd(t), that is, the total number of postings of the query terms. Notice that the total number of postings of the query terms does not necessarily provide an accurate estimate of the workload imposed on the system by a query (which is the case for full evaluation and uncompressed lists).

[Figure 10: Workload for partial query evaluation with compressed posting lists, broken down by query length (1, 2-3, 4-8, and more than 8 terms).]

The analysis of the previous section also applies to a distributed retrieval system in one or multiple sites. Suppose that a document-partitioned distributed system is running on a cluster of machines interconnected with a local area network (LAN) in one site. The broker receives queries and broadcasts them to the query processors, which answer the queries and return the results to the broker. Finally, the broker merges the received answers and generates the final set of answers (we assume that the time spent on merging results is negligible). The difference between the centralized architecture and the document partition architecture is the extra communication between the broker and the query processors.
Using ICMP pings on a 100Mbps\nLAN, we have measured that sending the query from the\nbroker to the query processors which send an answer of 4,000\nbytes back to the broker takes on average 0.615ms. Hence,\nTRL\n= TR + 0.615ms/0.069ms = TR + 9.\nIn the case when the broker and the query processors\nare in different sites connected with a wide area network\n(WAN), we estimated that broadcasting the query from the\nbroker to the query processors and getting back an answer\nof 4,000 bytes takes on average 329ms. Hence, TRW\n=\nTR + 329ms/0.069ms = TR + 4768.\n6.3 Simulation Results\nWe now address the problem of finding the optimal\ntradeoff between caching query answers and caching posting lists.\nTo make the problem concrete we assume a fixed budget M\non the available memory, out of which x units are used for\ncaching query answers and M \u2212 x for caching posting lists.\nWe perform simulations and compute the average response\ntime as a function of x. Using a part of the query log as\ntraining data, we first allocate in the cache the answers to\nthe most frequent queries that fit in space x, and then we\nuse the rest of the memory to cache posting lists. For\nselecting posting lists we use the QtfDf algorithm, applied to\nthe training query log but excluding the queries that have\nalready been cached.\nIn Figure 11, we plot the simulated response time for a\ncentralized system as a function of x. For the uncompressed\nindex we use M = 1GB, and for the compressed index we\nuse M = 0.5GB. In the case of the configuration that uses\npartial query evaluation with compressed posting lists, the\nlowest response time is achieved when 0.15GB out of the\n0.5GB is allocated for storing answers for queries. We\nobtained similar trends in the results for the LAN setting.\nFigure 12 shows the simulated workload for a distributed\nsystem across a WAN. 
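The simulation just described can be sketched as follows. This is a simplified, self-contained model (unit answer size, and a query is served from the posting-list cache only when all of its terms are cached); the function and parameter names are illustrative, not from the experimental code.

```python
from collections import Counter

def avg_response_time(queries, term_df, M, x, L, TR1, TR2):
    """Simplified cache-split model from Section 6: fill x units with the
    most frequent query answers (1 unit each), the remaining M - x units
    with posting lists chosen by QtfDf (query-term frequency over document
    frequency), then charge 1 for an answer-cache hit, TR1 when all of a
    query's terms are cached, and TR2 otherwise."""
    qfreq = Counter(queries)
    cached_answers = {q for q, _ in qfreq.most_common(int(x))}
    tfreq = Counter(t for q in queries for t in q.split())
    # QtfDf: prefer terms with high query frequency per unit of space
    ranked = sorted(tfreq, key=lambda t: tfreq[t] / term_df[t], reverse=True)
    cached_terms, space = set(), 0.0
    for t in ranked:
        cost = L * term_df[t]          # posting-list size in answer units
        if space + cost > M - x:
            break
        cached_terms.add(t)
        space += cost
    total = 0.0
    for q in queries:
        if q in cached_answers:
            total += 1.0
        elif all(t in cached_terms for t in q.split()):
            total += TR1
        else:
            total += TR2
    return total / len(queries)
```

Sweeping x from 0 to M and plotting the result reproduces the shape of the trade-off curves discussed in Section 6.3.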
In this case, the total amount of memory is split between the broker, which holds the cached answers of queries, and the query processors, which hold the cache of posting lists.

[Figure 11: Optimal division of the cache in a server. Average response time vs. space, for full/partial evaluation over the uncompressed (1GB) and compressed (0.5GB) indexes.]

[Figure 12: Optimal division of the cache when the next level requires WAN access.]

According to the figure, the difference between the configurations of the query processors is less important because the network communication overhead increases the response time substantially. When using uncompressed posting lists, the optimal allocation of memory corresponds to using approximately 70% of the memory for caching query answers. This is explained by the fact that there is no need for network communication when the query can be answered by the cache at the broker.

7. EFFECT OF THE QUERY DYNAMICS
For our query log, the query distribution and query-term distribution change slowly over time. To support this claim, we first assess how topics change by comparing the distribution of queries from the first week in June, 2006, to the distribution of queries for the remainder of 2006 that did not appear in the first week in June. We found that a very small percentage of queries are new queries. The majority of queries that appear in a given week repeat in the following weeks for the next six months.

We then compute the hit rate of a static cache of 128,000 answers trained over a period of two weeks (Figure 13). We report the hit rate hourly for 7 days, starting from 5pm.
We observe that the hit rate reaches its highest value during the night (around midnight), whereas around 2-3pm it reaches its minimum. After a small decay in hit rate values, the hit rate stabilizes between 0.28 and 0.34 for the entire week, suggesting that the static cache is effective for a whole week after the training period.

[Figure 13: Hourly hit rate for a static cache holding 128,000 answers during the period of a week.]

The static cache of posting lists can be periodically recomputed. To estimate the time interval at which we need to recompute the posting lists in the static cache, we need to consider an efficiency/quality trade-off: using too short a time interval might be prohibitively expensive, while recomputing the cache too infrequently might lead to an obsolete cache that does not correspond to the statistical characteristics of the current query stream.

We measured the effect of the changes in a 15-week query stream on the QtfDf algorithm (Figure 14). We compute the query term frequencies over the whole stream, select which terms to cache, and then compute the hit rate on the whole query stream. This hit rate serves as an upper bound, as it assumes perfect knowledge of the query term frequencies. To simulate a realistic scenario, we use the first 6 (3) weeks of the query stream for computing query term frequencies and the following 9 (12) weeks to estimate the hit rate. As Figure 14 shows, the hit rate decreases by less than 2%. The high correlation among the query term frequencies during different time periods explains the graceful adaptation of the static caching algorithms to the future query stream. Indeed, the pairwise correlation among all possible 3-week periods of the 15-week query stream is over 99.5%.

8.
CONCLUSIONS
Caching is an effective technique in search engines for improving response time, reducing the load on query processors, and improving network bandwidth utilization. We present results on both dynamic and static caching. Dynamic caching of queries has limited effectiveness due to the high number of compulsory misses caused by the number of unique or infrequent queries. Our results show that in our UK log, the minimum miss rate is 50% using a working set strategy. Caching terms is more effective with respect to miss rate, achieving values as low as 12%. We also propose a new algorithm for static caching of posting lists that outperforms previous static caching algorithms as well as dynamic algorithms such as LRU and LFU, obtaining hit rate values that are over 10% higher compared to these strategies.

We present a framework for the analysis of the trade-off between caching query results and caching posting lists, and we simulate different types of architectures. Our results show that for centralized and LAN environments, there is an optimal allocation of caching query results and caching of posting lists, while for WAN scenarios in which network time prevails it is more important to cache query results.

[Figure 14: Impact of distribution changes on the static caching of posting lists. Hit rate vs. cache size for perfect knowledge, 6-week training, and 3-week training.]

9. REFERENCES
[1] V. N. Anh and A. Moffat. Pruned query evaluation using pre-computed impacts. In ACM CIKM, 2006.
[2] R. A. Baeza-Yates and F. Saint-Jean. A three level search engine index based in query log distribution. In SPIRE, 2003.
[3] C. Buckley and A. F. Lewit. Optimization of inverted vector searches. In ACM SIGIR, 1985.
[4] S. Büttcher and C. L. A. Clarke.
A document-centric approach to static index pruning in text retrieval systems. In ACM CIKM, 2006.
[5] P. Cao and S. Irani. Cost-aware WWW proxy caching algorithms. In USITS, 1997.
[6] P. Denning. Working sets past and present. IEEE Trans. on Software Engineering, SE-6(1):64-84, 1980.
[7] T. Fagni, R. Perego, F. Silvestri, and S. Orlando. Boosting the performance of web search engines: Caching and prefetching query results by exploiting historical usage data. ACM Trans. Inf. Syst., 24(1):51-78, 2006.
[8] R. Lempel and S. Moran. Predictive caching and prefetching of query results in search engines. In WWW, 2003.
[9] X. Long and T. Suel. Three-level caching for efficient query processing in large web search engines. In WWW, 2005.
[10] E. P. Markatos. On caching search engine query results. Computer Communications, 24(2):137-143, 2001.
[11] I. Ounis, G. Amati, V. Plachouras, B. He, C. Macdonald, and C. Lioma. Terrier: A high performance and scalable information retrieval platform. In SIGIR Workshop on Open Source Information Retrieval, 2006.
[12] V. V. Raghavan and H. Sever. On the reuse of past optimal queries. In ACM SIGIR, 1995.
[13] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and B. Ribeiro-Neto. Rank-preserving two-level caching for scalable search engines. In ACM SIGIR, 2001.
[14] D. R. Slutz and I. L. Traiger. A note on the calculation of average working set size. Communications of the ACM, 17(10):563-565, 1974.
[15] T. Strohman, H. Turtle, and W. B. Croft. Optimization strategies for complex queries. In ACM SIGIR, 2005.
[16] I. H. Witten, T. C. Bell, and A. Moffat. Managing Gigabytes: Compressing and Indexing Documents and Images. John Wiley & Sons, Inc., NY, 1994.
[17] N. E. Young. On-line file caching.
Algorithmica, 33(3):371-383, 2002.
Pruning Policies for Two-Tiered Inverted Index with Correctness Guarantee

ABSTRACT
The Web search engines maintain large-scale inverted indexes which are queried thousands of times per second by users eager for information. In order to cope with the vast amounts of query loads, search engines prune their index to keep documents that are likely to be returned as top results, and use this pruned index to compute the first batches of results. While this approach can improve performance by reducing the size of the index, if we compute the top results only from the pruned index we may notice a significant degradation in the result quality: if a document should be in the top results but was not included in the pruned index, it will be placed behind the results computed from the pruned index. Given the fierce competition in the online search market, this phenomenon is clearly undesirable. In this paper, we study how we can avoid any degradation of result quality due to the pruning-based performance optimization, while still realizing most of its benefit. Our contribution is a number of modifications in the pruning techniques for creating the pruned index and a new result computation algorithm that guarantees that the top-matching pages are always placed at the top search results, even though we are computing the first batch from the pruned index most of the time. We also show how to determine the optimal size of a pruned index and we experimentally evaluate our algorithms on a collection of 130 million Web pages.

1. INTRODUCTION
The amount of information on the Web is growing at a prodigious rate [24]. According to a recent study [13], it is estimated that the Web currently consists of more than 11 billion pages. Due to this immense amount of available information, the users are becoming more and more dependent on the Web search engines for locating relevant information on the Web.
Typically, the Web search engines, similar to other information retrieval applications, utilize a data structure called an inverted index. An inverted index provides for the efficient retrieval of the documents (or Web pages) that contain a particular keyword.

In most cases, a query that the user issues may have thousands or even millions of matching documents. In order to avoid overwhelming the users with a huge amount of results, the search engines present the results in batches of 10 to 20 relevant documents. The user then looks through the first batch of results and, if she doesn't find the answer she is looking for, she may potentially request to view the next batch or decide to issue a new query.

A recent study [16] indicated that approximately 80% of the users examine at most the first 3 batches of the results. That is, 80% of the users typically view at most 30 to 60 results for every query that they issue to a search engine. At the same time, given the size of the Web, the inverted index that the search engines maintain can grow very large. Since the users are interested in a small number of results (and thus are viewing a small portion of the index for every query that they issue), using an index that is capable of returning all the results for a query may constitute a significant waste in terms of time, storage space and computational resources, which is bound to get worse as the Web grows larger over time [24].

One natural solution to this problem is to create a small index on a subset of the documents that are likely to be returned as the top results (by using, for example, the pruning techniques in [7, 20]) and compute the first batch of answers using the pruned index. While this approach has been shown to give significant improvement in performance, it also leads to noticeable degradation in the quality of the search results, because the top answers are computed only from the pruned index [7, 20].
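As a concrete illustration of the inverted index structure described above, the following sketch maps each term to a posting list of (document ID, term frequency) pairs; the function name and the choice of storing term frequencies in the postings are illustrative.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to a posting list of (doc_id, term_frequency) pairs.
    Each posting could be extended with positions, styling, etc."""
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term][doc_id] = index[term].get(doc_id, 0) + 1
    # store postings sorted by doc_id, the usual layout for list merging
    return {t: sorted(p.items()) for t, p in index.items()}
```

Keeping each posting list sorted by document ID is what later makes intersection (for multi-term queries) and gap compression efficient.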
That is, even if a page should be placed as the top-matching page according to a search engine's ranking metric, the page may be placed behind the ones contained in the pruned index if the page did not become part of the pruned index for various reasons [7, 20]. Given the fierce competition among search engines today this degradation is clearly undesirable and needs to be addressed if possible.

In this paper, we study how we can avoid any degradation of search quality due to the above performance optimization while still realizing most of its benefit. That is, we present a number of simple (yet important) changes in the pruning techniques for creating the pruned index. Our main contribution is a new answer computation algorithm that guarantees that the top-matching pages (according to the search engine's ranking metric) are always placed at the top of search results, even though we are computing the first batch of answers from the pruned index most of the time. These enhanced pruning techniques and answer-computation algorithms are explored in the context of the cluster architecture commonly employed by today's search engines. Finally, we study and present how search engines can minimize the operational cost of answering queries while providing high quality search results.

[Figure 1: (a) Search engine replicates its full index IF to increase query-answering capacity: each copy handles 1000 queries/sec, for a load of 5000 queries/sec. (b) In the 1st tier, small p-indexes IP handle most of the queries. When IP cannot answer a query, it is redirected to the 2nd tier, where the full index IF is used to compute the answer.]

2. CLUSTER ARCHITECTURE AND COST SAVINGS FROM A PRUNED INDEX
Typically, a search engine downloads documents from the Web and maintains a local inverted index that is used to answer queries quickly.

Inverted indexes.
Assume that we have collected a set of documents D = {D1, . . . , DM } and that we have extracted all the terms T = {t1, . . . , tn} from the documents. For every single term ti ∈ T we maintain a list I(ti) of document IDs that contain ti. Every entry in I(ti) is called a posting and can be extended to include additional information, such as how many times ti appears in a document, the positions of ti in the document, whether ti is bold/italic, etc. The set of all the lists I = {I(t1), . . . , I(tn)} is our inverted index.

2.1 Two-tier index architecture
Search engines are accepting an enormous number of queries every day from eager users searching for relevant information. For example, Google is estimated to answer more than 250 million user queries per day. In order to cope with this huge query load, search engines typically replicate their index across a large cluster of machines as the following example illustrates:

Example 1 Consider a search engine that maintains a cluster of machines as in Figure 1(a). The size of its full inverted index IF is larger than what can be stored in a single machine, so each copy of IF is stored across four different machines. We also suppose that one copy of IF can handle the query load of 1000 queries/sec. Assuming that the search engine gets 5000 queries/sec, it needs to replicate IF five times to handle the load. Overall, the search engine needs to maintain 4 × 5 = 20 machines in its cluster. ✷

While fully replicating the entire index IF multiple times is a straightforward way to scale to a large number of queries, typical query loads at search engines exhibit certain localities, allowing for significant reduction in cost by replicating only a small portion of the full index.
In principle, this is typically done by pruning a full index IF to create a smaller, pruned index (or p-index) IP , which contains a subset of the documents that are likely to be returned as top results.

Given the p-index, search engines operate by employing a two-tier index architecture as we show in Figure 1(b): All incoming queries are first directed to one of the p-indexes kept in the 1st tier. In the cases where a p-index cannot compute the answer (e.g. was unable to find enough documents to return to the user) the query is answered by redirecting it to the 2nd tier, where we maintain a full index IF .

Algorithm 2.1 Computation of answer with correctness guarantee
Input: q = ({t1, . . . , tn}, [i, i + k]) where
  {t1, . . . , tn}: keywords in the query
  [i, i + k]: range of the answer to return
Procedure
(1) (A, C) = ComputeAnswer(q, IP )
(2) If (C = 1) Then
(3)     Return A
(4) Else
(5)     A = ComputeAnswer(q, IF )
(6)     Return A

Figure 2: Computing the answer under the two-tier architecture with the result correctness guarantee.

The following example illustrates the potential reduction in the query-processing cost by employing this two-tier index architecture.

Example 2 Assume the same parameter settings as in Example 1. That is, the search engine gets a query load of 5000 queries/sec and every copy of an index (both the full IF and p-index IP ) can handle up to 1000 queries/sec. Also assume that the size of IP is one fourth of IF and thus can be stored on a single machine. Finally, suppose that the p-indexes can handle 80% of the user queries by themselves and only forward the remaining 20% queries to IF . Under this setting, since all 5000/sec user queries are first directed to a p-index, five copies of IP are needed in the 1st tier. For the 2nd tier, since 20% (or 1000 queries/sec) are forwarded, we need to maintain one copy of IF to handle the load.
Overall we need a total of 9 machines (five machines for the five copies of IP and four machines for one copy of IF ). Compared to Example 1, this is more than 50% reduction in the number of machines. ✷

The above example demonstrates the potential cost saving achieved by using a p-index. However, the two-tier architecture may have a significant drawback in terms of its result quality compared to the full replication of IF ; given the fact that the p-index contains only a subset of the data of the full index, it is possible that, for some queries, the p-index may not contain the top-ranked document according to the particular ranking criteria used by the search engine and fail to return it as the top page, leading to noticeable quality degradation in search results. Given the fierce competition in the online search market, search engine operators desperately try to avoid any reduction in search quality in order to maximize user satisfaction.

2.2 Correctness guarantee under two-tier architecture
How can we avoid the potential degradation of search quality under the two-tier architecture? Our basic idea is straightforward: We use the top-k result from the p-index only if we know for sure that the result is the same as the top-k result from the full index. The algorithm in Figure 2 formalizes this idea. In the algorithm, when we compute the result from IP (Step 1), we compute not only the top-k result A, but also the correctness indicator function C defined as follows:

Definition 1 (Correctness indicator function) Given a query q, the p-index IP returns the answer A together with a correctness indicator function C. C is set to 1 if A is guaranteed to be identical (i.e. same results in the same order) to the result computed from the full index IF . If it is possible that A is different, C is set to 0. ✷

Note that the algorithm returns the result from IP (Step 3) only when it is identical to the result from IF (condition C = 1 in Step 2).
Otherwise, the algorithm recomputes and returns the\nresult from the full index IF (Step 5). Therefore, the algorithm is\nguaranteed to return the same result as the full replication of IF all\nthe time.\nNow, the real challenge is to find out (1) how we can compute\nthe correctness indicator function C and (2) how we should prune\nthe index to make sure that the majority of queries are handled by\nIP alone.\nQuestion 1 How can we compute the correctness indicator\nfunction C?\nA straightforward way to calculate C is to compute the top-k\nanswer both from IP and IF and compare them. This naive solution,\nhowever, incurs a cost even higher than the full replication of IF\nbecause the answers are computed twice: once from IP and once\nfrom IF . Is there any way to compute the correctness indicator\nfunction C only from IP without computing the answer from IF ?\nQuestion 2 How should we prune IF to IP to realize the maximum\ncost saving?\nThe effectiveness of Algorithm 2.1 critically depends on how\noften the correctness indicator function C is evaluated to be 1. If\nC = 0 for all queries, for example, the answers to all queries will be\ncomputed twice, once from IP (Step 1) and once from IF (Step 5),\nso the performance will be worse than the full replication of IF .\nWhat will be the optimal way to prune IF to IP , such that C = 1\nfor a large fraction of queries? In the next few sections, we try to\naddress these questions.\n3. OPTIMAL SIZE OF THE P-INDEX\nIntuitively, there exists a clear tradeoff between the size of IP\nand the fraction of queries that IP can handle: When IP is large and\nhas more information, it will be able to handle more queries, but\nthe cost for maintaining and looking up IP will be higher. When\nIP is small, on the other hand, the cost for IP will be smaller,\nbut more queries will be forwarded to IF , requiring us to maintain\nmore copies of IF . 
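This tradeoff can be made concrete with a small cost model in the style of the examples in this section, assuming (as in Example 1) that one copy of any index handles a fixed query rate and that a copy of the full index occupies a fixed number of machines; the function and parameter names are illustrative.

```python
import math

def machines_needed(q_rate, copy_rate, m_full, s, f):
    """Two-tier cost model: copies of the p-index (relative size s)
    serve all q_rate queries in the 1st tier; the fraction 1 - f of
    queries is forwarded to full-index copies in the 2nd tier.
    Copy and machine counts are rounded up."""
    tier1_copies = math.ceil(q_rate / copy_rate)
    tier1 = tier1_copies * math.ceil(s * m_full)     # machines per IP copy
    tier2_copies = math.ceil((1 - f) * q_rate / copy_rate)
    tier2 = tier2_copies * m_full
    return tier1 + tier2
```

With 5000 queries/sec, 1000 queries/sec per copy, and a 4-machine full index, this model reproduces the 9-machine total of Example 2 (s = 0.25, f = 0.8).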
Given this tradeoff, how should we determine the optimal size of IP in order to maximize the cost saving? To find the answer, we start with a simple example.

Example 3 Again, consider a scenario similar to Example 1, where the query load is 5000 queries/sec, each copy of an index can handle 1000 queries/sec, and the full index spans across 4 machines. But now, suppose that if we prune IF by 75% to IP1 (i.e., the size of IP1 is 25% of IF ), IP1 can handle 40% of the queries (i.e., C = 1 for 40% of the queries). Also suppose that if IF is pruned by 50% to IP2, IP2 can handle 80% of the queries. Which one of IP1, IP2 is preferable for the 1st-tier index?

To find out the answer, we first compute the number of machines needed when we use IP1 for the 1st tier. At the 1st tier, we need 5 copies of IP1 to handle the query load of 5000 queries/sec. Since the size of IP1 is 25% of IF (that requires 4 machines), one copy of IP1 requires one machine. Therefore, the total number of machines required for the 1st tier is 5 × 1 = 5 (5 copies of IP1 with 1 machine per copy). Also, since IP1 can handle 40% of the queries, the 2nd tier has to handle 3000 queries/sec (60% of the 5000 queries/sec), so we need a total of 3 × 4 = 12 machines for the 2nd tier (3 copies of IF with 4 machines per copy). Overall, when we use IP1 for the 1st tier, we need 5 + 12 = 17 machines to handle the load. We can do similar analysis when we use IP2 and see that a total of 14 machines are needed when IP2 is used. Given this result, we can conclude that using IP2 is preferable. ✷

The above example shows that the cost of the two-tier architecture depends on two important parameters: the size of the p-index and the fraction of the queries that can be handled by the 1st-tier index alone. We use s to denote the size of the p-index relative to IF (i.e., if s = 0.2, for example, the p-index is 20% of the size of IF ).
We use f(s) to denote the fraction of the queries that a p-index of size s can handle (i.e., if f(s) = 0.3, 30% of the queries return the value C = 1 from IP ). In general, we can expect that f(s) will increase as s gets larger because IP can handle more queries as its size grows. In Figure 3, we show an example graph of f(s) over s.

Given the notation, we can state the problem of p-index-size optimization as follows. In formulating the problem, we assume that the number of machines required to operate a two-tier architecture is roughly proportional to the total size of the indexes necessary to handle the query load.

[Figure 3: Example function showing the fraction of guaranteed queries f(s) at a given size s of the p-index; the optimal size in this example is s = 0.16.]

Problem 1 (Optimal index size) Given a query load Q and the function f(s), find the optimal p-index size s that minimizes the total size of the indexes necessary to handle the load Q. ✷

The following theorem shows how we can determine the optimal index size.

Theorem 1 The cost for handling the query load Q is minimal when the size of the p-index, s, satisfies df(s)/ds = 1. ✷

Proof The proof of this and the following theorems is omitted due to space constraints.

This theorem shows that the optimal point is when the slope of the f(s) curve is 1. For example, in Figure 3, the optimal size is when s = 0.16. Note that the exact shape of the f(s) graph may vary depending on the query load and the pruning policy. For example, even for the same p-index, if the query load changes significantly, fewer (or more) queries may be handled by the p-index, decreasing (or increasing) f(s). Similarly, if we use an effective pruning policy, more queries will be handled by IP than when we use an ineffective pruning policy, increasing f(s).
Therefore, the function f(s) and the optimal-index size may change significantly depending on the query load and the pruning policy. In our later experiments, however, we find that even though the shape of the f(s) graph changes noticeably between experiments, the optimal index size consistently lies between 10%-30% in most experiments.

4. PRUNING POLICIES
In this section, we show how we should prune the full index IF to IP , so that (1) we can compute the correctness indicator function C from IP itself and (2) we can handle a large fraction of queries by IP . In designing the pruning policies, we note the following two localities in the users' search behavior:

1. Keyword locality: Although there are many different words in the document collection that the search engine indexes, a few popular keywords constitute the majority of the query loads. This keyword locality implies that the search engine will be able to answer a significant fraction of user queries even if it can handle only these few popular keywords.

2. Document locality: Even if a query has millions of matching documents, users typically look at only the first few results [16].
Thus, as long as search engines can compute the first few top-k answers correctly, users often will not notice that the search engine actually has not computed the correct answer for the remaining results (unless the users explicitly request them).

Based on the above two localities, we now investigate two different types of pruning policies: (1) a keyword pruning policy, which takes advantage of the keyword locality by pruning the whole inverted list I(ti) for unpopular keywords ti's and (2) a document pruning policy, which takes advantage of the document locality by keeping only a few postings in each list I(ti), which are likely to be included in the top-k results.

As we discussed before, we need to be able to compute the correctness indicator function from the pruned index alone in order to provide the correctness guarantee. Since the computation of the correctness indicator function may critically depend on the particular ranking function used by a search engine, we first clarify our assumptions on the ranking function.

4.1 Assumptions on ranking function
Consider a query q = {t1, t2, . . . , tw} that contains a subset of the index terms. The goal of the search engine is to return the documents that are most relevant to query q. This is done in two steps: first we use the inverted index to find all the documents that contain the terms in the query. Second, once we have the relevant documents, we calculate the rank (or score) of each one of the documents with respect to the query and we return to the user the documents that rank the highest.

Most of the major search engines today return documents containing all query terms (i.e. they use AND-semantics). In order to make our discussion more concise, we will also assume the popular AND-semantics while answering a query. It is straightforward to extend our results to OR-semantics as well.
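The first retrieval step under AND-semantics (finding the documents that contain all query terms) can be sketched as an intersection of sorted posting lists, smallest list first; the function name and the (doc_id, tf) posting layout are illustrative.

```python
def and_query(index, terms):
    """Documents containing all query terms (AND-semantics), via
    intersection of posting lists, starting from the shortest list."""
    try:
        lists = sorted((index[t] for t in terms), key=len)
    except KeyError:
        return []        # a term with no posting list matches nothing
    result = set(doc for doc, _ in lists[0])
    for plist in lists[1:]:
        result &= {doc for doc, _ in plist}
    return sorted(result)
```

Starting from the shortest list keeps the candidate set small, which is why production systems typically process the rarest query term first.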
The exact ranking function that search engines employ is a closely guarded secret. What is known, however, is that the factors in determining the document ranking can be roughly categorized into two classes:
Query-dependent relevance. This factor captures how relevant each document is to the query. At a high level, given a document D, for every term ti a search engine assigns a term relevance score tr(D, ti) to D. Given the tr(D, ti) scores for every ti, the query-dependent relevance of D to the query, denoted by tr(D, q), can be computed by combining the individual term relevance values. One popular way of calculating the query-dependent relevance is to represent both the document D and the query q in the TF.IDF vector space model [29] and employ a cosine distance metric.
Since the exact form of tr(D, ti) and tr(D, q) differs depending on the search engine, we will not restrict ourselves to any particular form; instead, in order to make our work applicable in the general case, we will make the generic assumption that the query-dependent relevance is computed as a function of the individual term relevance values in the query:
tr(D, q) = ftr(tr(D, t1), . . . , tr(D, tw)) (1)
Query-independent document quality. This is a factor that measures the overall quality of a document D independent of the particular query issued by the user. Popular techniques that compute the general quality of a page include PageRank [26], HITS [17] and the likelihood that the page is a spam page [25, 15]. Here, we will use pr(D) to denote this query-independent part of the final ranking function for document D.
The final ranking score r(D, q) of a document will depend on both the query-dependent and query-independent parts of the ranking function. The exact combination of these parts may be done in a variety of ways.
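As one concrete instance of the query-dependent relevance of Equation 1, a TF.IDF cosine score can serve as tr(D, q). The sketch below is illustrative only; the exact TF and IDF weighting, and the toy document-frequency table, are our assumptions.

```python
import math
from collections import Counter

def tfidf_cosine(doc_terms, query_terms, df, n_docs):
    """One possible tr(D, q): cosine similarity between the TF.IDF vectors
    of a document (a list of its terms) and a query.

    df maps a term to its document frequency; n_docs is the collection size.
    """
    idf = lambda t: math.log(n_docs / df[t])
    # Document vector: raw term frequency times IDF; query vector: IDF only.
    d = {t: c * idf(t) for t, c in Counter(doc_terms).items() if t in df}
    q = {t: idf(t) for t in query_terms if t in df}
    dot = sum(d.get(t, 0.0) * w for t, w in q.items())
    norm = (math.sqrt(sum(w * w for w in d.values()))
            * math.sqrt(sum(w * w for w in q.values())))
    return dot / norm if norm else 0.0
```

Any such tr(D, q) fits the framework as long as the overall ranking function remains monotonic in its inputs, as assumed below.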
In general, we can assume that the final ranking score of a document is a function of its query-dependent and query-independent relevance scores. More formally:
r(D, q) = fr(tr(D, q), pr(D)) (2)
For example, fr(tr(D, q), pr(D)) may take the form
fr(tr(D, q), pr(D)) = α · tr(D, q) + (1 − α) · pr(D),
thus giving weight α to the query-dependent part and weight 1 − α to the query-independent part.
In Equations 1 and 2 the exact form of fr and ftr can vary depending on the search engine. Therefore, to make our discussion applicable independently of the particular ranking function used by search engines, in this paper we will make only the generic assumption that the ranking function r(D, q) is monotonic on its parameters tr(D, t1), . . . , tr(D, tw) and pr(D).
t1 → D1 D2 D3 D4 D5 D6
t2 → D1 D2 D3
t3 → D3 D5 D7 D8
t4 → D4 D10
t5 → D6 D8 D9
Figure 4: Keyword and document pruning.
Algorithm 4.1 Computation of C for keyword pruning
Procedure
(1) C = 1
(2) Foreach ti ∈ q
(3) If (I(ti) ∉ IP) Then C = 0
(4) Return C
Figure 5: Result guarantee in keyword pruning.
Definition 2 A function f(α, β, . . . , ω) is monotonic if ∀α1 ≥ α2, ∀β1 ≥ β2, . . . , ∀ω1 ≥ ω2 it holds that: f(α1, β1, . . . , ω1) ≥ f(α2, β2, . . . , ω2).
Roughly, the monotonicity of the ranking function implies that, between two documents D1 and D2, if D1 has a higher query-dependent relevance than D2 and also a higher query-independent score than D2, then D1 should be ranked higher than D2, which we believe is a reasonable assumption in most practical settings.
4.2 Keyword pruning
Given our assumptions on the ranking function, we now investigate the keyword pruning policy, which prunes the inverted index IF horizontally by removing the whole I(ti)'s corresponding to the least frequent terms.
In Figure 4 we show a graphical representation of keyword pruning, where we remove the inverted lists for t3 and t5, assuming that they do not appear often in the query load. Note that after keyword pruning, if all keywords {t1, . . . , tn} in the query q appear in IP, the p-index has the same information as IF as far as q is concerned. In other words, if all keywords in q appear in IP, the answer computed from IP is guaranteed to be the same as the answer computed from IF. Figure 5 formalizes this observation and computes the correctness indicator function C for a keyword-pruned index IP. It is straightforward to prove that the answer from IP is identical to that from IF if C = 1 in the above algorithm.
We now consider the issue of optimizing IP such that it can handle the largest fraction of queries. This problem can be formally stated as follows:
Problem 2 (Optimal keyword pruning) Given the query load Q and a goal index size s · |IF| for the pruned index, select the inverted lists IP = {I(t1), . . . , I(th)} such that |IP| ≤ s · |IF| and the fraction of queries that IP can answer (expressed by f(s)) is maximized. □
Unfortunately, the optimal solution to the above problem is intractable, as we can show by a reduction from knapsack (we omit the complete proof).
Theorem 2 The problem of calculating the optimal keyword pruning is NP-hard. □
Given the intractability of the optimal solution, we need to resort to an approximate solution. A common approach for similar knapsack problems is to adopt a greedy policy by keeping the items with the maximum benefit per unit cost [9]. In our context, the potential benefit of an inverted list I(ti) is the number of queries that can be answered by IP when I(ti) is included in IP. We approximate this number by the fraction of queries in the query load Q that include the term ti and represent it as P(ti).
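This benefit-per-cost greedy selection (the ratio P(ti)/|I(ti)| of Algorithm 4.2) can be sketched as follows; the statistics dictionary is an illustrative assumption, not the paper's data structure.

```python
def greedy_keyword_pruning(term_stats, budget):
    """Greedy HS policy: rank terms by P(t_i) / |I(t_i)| and keep their
    inverted lists while the pruned index still fits the size budget.

    term_stats maps t_i -> (P(t_i), |I(t_i)|); budget = s * |I_F|.
    Returns the set of terms whose lists are kept in I_P.
    """
    ranked = sorted(term_stats,
                    key=lambda t: term_stats[t][0] / term_stats[t][1],
                    reverse=True)
    kept, used = set(), 0
    for t in ranked:
        size = term_stats[t][1]
        if used + size <= budget:  # skip any list that would overflow the budget
            kept.add(t)
            used += size
    return kept

# e.g. P("computer") = 0.1 if 100 of 1000 queries contain the term.
stats = {"computer": (0.1, 100), "rare": (0.001, 50), "news": (0.05, 10)}
```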
For example, if 100 out of 1000 queries contain the term computer, then P(computer) = 0.1. The cost of including I(ti) in the p-index is its size |I(ti)|. Thus, in our greedy approach in Figure 6, we include the I(ti)'s in decreasing order of P(ti)/|I(ti)| as long as |IP| ≤ s · |IF|. Later, in our experiment section, we evaluate what fraction of queries can be handled by IP when we employ this greedy keyword-pruning policy.
Algorithm 4.2 Greedy keyword pruning HS
Procedure
(1) ∀ti, calculate HS(ti) = P(ti)/|I(ti)|.
(2) Include the inverted lists with the highest HS(ti) values such that |IP| ≤ s · |IF|.
Figure 6: Approximation algorithm for the optimal keyword pruning.
Algorithm 4.3 Global document pruning VSG
Procedure
(1) Sort all documents Di based on pr(Di)
(2) Find the threshold value τp, such that only an s fraction of the documents have pr(Di) > τp
(3) Keep Di in the inverted lists if pr(Di) > τp
Figure 7: Global document pruning based on pr.
4.3 Document pruning
At a high level, document pruning tries to take advantage of the observation that most users are mainly interested in viewing the top few answers to a query. Given this, it is unnecessary to keep all postings in an inverted list I(ti), because users will not look at most of the documents in the list anyway. We depict the conceptual diagram of the document pruning policy in Figure 4. In the figure, we vertically prune the postings corresponding to D4, D5 and D6 of t1 and D8 of t3, assuming that these documents are unlikely to be part of the top-k answers to user queries. Again, our goal is to develop a pruning policy such that (1) we can compute the correctness indicator function C from IP alone and (2) we can handle the largest fraction of queries with IP.
In the next few sections, we discuss a few alternative approaches for document pruning.
4.3.1 Global PR-based pruning
We first investigate the pruning policy that is commonly used by existing search engines. The basic idea behind this pruning policy is that the query-independent quality score pr(D) is a very important factor in computing the final ranking of a document (e.g., PageRank is known to be one of the most important factors determining the overall ranking in the search results), so we build the p-index by keeping only those documents whose pr values are high (i.e., pr(D) > τp for a threshold value τp). The hope is that most of the top-ranked results are likely to have high pr(D) values, so the answer computed from this p-index is likely to be similar to the answer computed from the full index. Figure 7 describes this pruning policy more formally: we sort all documents Di by their respective pr(Di) values and keep a Di in the p-index when its pr(Di) value is higher than the global threshold value τp. We refer to this pruning policy as global PR-based pruning (GPR).
Algorithm 4.4 Local document pruning VSL
N: maximum size of a single posting list
Procedure
(1) Foreach I(ti) ∈ IF
(2) Sort the Di's in I(ti) based on pr(Di)
(3) If |I(ti)| ≤ N Then keep all Di's
(4) Else keep the top-N Di's with the highest pr(Di)
Figure 8: Local document pruning based on pr.
Algorithm 4.5 Extended keyword-specific document pruning
Procedure
(1) For each I(ti)
(2) Keep D ∈ I(ti) if pr(D) > τpi or tr(D, ti) > τti
Figure 9: Extended keyword-specific document pruning based on pr and tr.
Variations of this pruning policy are possible. For example, we may adjust the threshold value τp locally for each inverted list I(ti), so that we maintain at least a certain number of postings for each inverted list I(ti). This policy is shown in Figure 8.
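The global policy of Figure 7 and the per-list variant of Figure 8 can be sketched as follows; the plain-dictionary index and pr map are illustrative assumptions.

```python
def global_pr_pruning(index, pr, s):
    """Global PR-based pruning (Algorithm 4.3): keep postings only for the
    top s fraction of documents by pr, i.e. one global threshold tau_p."""
    ranked = sorted(pr, key=pr.get, reverse=True)
    keep = set(ranked[:int(s * len(ranked))])
    return {t: [d for d in lst if d in keep] for t, lst in index.items()}

def local_pr_pruning(index, pr, N):
    """Local document pruning (Algorithm 4.4): in each inverted list, keep
    at most the N postings with the highest pr."""
    return {t: sorted(lst, key=pr.get, reverse=True)[:N]
            for t, lst in index.items()}

# Toy index and query-independent quality scores, for illustration.
index = {"t1": ["D1", "D2", "D3"], "t2": ["D2", "D3"]}
pr = {"D1": 0.9, "D2": 0.5, "D3": 0.1}
```

Note how the global variant may leave some lists nearly empty, which is exactly what the local variant is designed to avoid.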
We refer to this pruning policy as local PR-based pruning (LPR). Unfortunately, the biggest shortcoming of these policies is that, as we now prove, we cannot compute the correctness function C from IP alone when IP is constructed this way.
Theorem 3 No PR-based document pruning can provide the result guarantee. □
Proof Assume we create IP based on the GPR policy (generalizing the proof to LPR is straightforward) and that every document D with pr(D) > τp is included in IP. Assume that the k-th entry in the top-k results has a ranking score of r(Dk, q) = fr(tr(Dk, q), pr(Dk)). Now consider another document Dj that was pruned from IP because pr(Dj) < τp. Even so, it is still possible that the document's tr(Dj, q) value is so high that r(Dj, q) = fr(tr(Dj, q), pr(Dj)) > r(Dk, q).
Therefore, under a PR-based pruning policy, the quality of the answer computed from IP can be significantly worse than that from IF, and it is not possible to detect this degradation without computing the answer from IF. In the next section, we propose simple yet essential changes to this pruning policy that allow us to compute the correctness function C from IP alone.
4.3.2 Extended keyword-specific pruning
The main problem of global PR-based document pruning policies is that we do not know the term-relevance score tr(D, ti) of the pruned documents, so a document not in IP may have a higher ranking score than the ones returned from IP because of its high tr score.
Here, we propose a new pruning policy, called extended keyword-specific document pruning (EKS), which avoids this problem by pruning not just based on the query-independent pr(D) score but also based on the term-relevance tr(D, ti) score.
That is, for every inverted list I(ti), we pick two threshold values, τpi for pr and τti for tr, such that if a document D ∈ I(ti) satisfies pr(D) > τpi or tr(D, ti) > τti, we include it in I(ti) of IP. Otherwise, we prune it from IP. Figure 9 formally describes this algorithm. The threshold values, τpi and τti, may be selected in a number of different ways. For example, if pr and tr have equal weight in the final ranking and if we want to keep at most N postings in each inverted list I(ti), we may set the two threshold values equal to τi (τpi = τti = τi) and adjust τi such that N postings remain in I(ti).
This new pruning policy, when combined with a monotonic scoring function, enables us to compute the correctness indicator function C from the pruned index. We use the following example to explain how we may compute C.
Algorithm 4.6 Computing Answer from IP
Input Query q = {t1, . . . , tw}
Output A: top-k result, C: correctness indicator function
Procedure
(1) For each Di ∈ I(t1) ∪ · · · ∪ I(tw)
(2) For each tm ∈ q
(3) If Di ∈ I(tm)
(4) tr∗(Di, tm) = tr(Di, tm)
(5) Else
(6) tr∗(Di, tm) = τtm
(7) f(Di) = f(pr(Di), tr∗(Di, t1), . . . , tr∗(Di, tw))
(8) A = top-k Di's with the highest f(Di) values
(9) C = 1 if all Di ∈ A appear in all I(ti), ti ∈ q; 0 otherwise
Figure 10: Ranking based on the thresholds τpi and τti.
Example 4 Consider the query q = {t1, t2} and a monotonic ranking function f(pr(D), tr(D, t1), tr(D, t2)). There are three possible scenarios on how a document D appears in the pruned index IP.
1. D appears in both I(t1) and I(t2) of IP: Since complete information about D appears in IP, we can compute the exact score of D based on the pr(D), tr(D, t1) and tr(D, t2) values in IP: f(pr(D), tr(D, t1), tr(D, t2)).
2.
D appears only in I(t1) but not in I(t2): Since D does not appear in I(t2), we do not know tr(D, t2), so we cannot compute its exact ranking score. However, from our pruning criteria, we know that tr(D, t2) cannot be larger than the threshold value τt2. Therefore, from the monotonicity of f (Definition 2), we know that the ranking score of D, f(pr(D), tr(D, t1), tr(D, t2)), cannot be larger than f(pr(D), tr(D, t1), τt2).
3. D does not appear in any list: Since D does not appear at all in IP, we do not know any of the pr(D), tr(D, t1), tr(D, t2) values. However, from our pruning criteria, we know that pr(D) ≤ τp1 and pr(D) ≤ τp2, and that tr(D, t1) ≤ τt1 and tr(D, t2) ≤ τt2. Therefore, from the monotonicity of f, we know that the ranking score of D cannot be larger than f(min(τp1, τp2), τt1, τt2). □
The above example shows that when a document does not appear in one of the inverted lists I(ti) with ti ∈ q, we cannot compute its exact ranking score, but we can still compute an upper bound on its score by using the threshold value τti for the missing values. This suggests the algorithm in Figure 10, which computes the top-k result A from IP together with the correctness indicator function C. In the algorithm, the correctness indicator function C is set to one only if all documents in the top-k result A appear in all inverted lists I(ti) with ti ∈ q, so that we know their exact scores. In this case, because these documents have scores higher than the upper-bound scores of any other documents, we know that no other documents can appear in the top-k. The following theorem formally proves the correctness of the algorithm. In [11], Fagin provides a similar proof in the context of multimedia middleware.
Theorem 4 Given an inverted index IP pruned by the algorithm in Figure 9, a query q = {t1, . . .
, tw} and a monotonic ranking function, the top-k result from IP computed by Algorithm 4.6 is the same as the top-k result from IF if C = 1. □
Proof Let us assume Dk is the k-th ranked document computed from IP according to Algorithm 4.6. For every document Di ∈ IF that is not in the top-k result from IP, there are two possible scenarios:
First, Di is not in the final answer because it was pruned from all inverted lists I(tj), 1 ≤ j ≤ w, in IP. In this case, we know that pr(Di) ≤ min_{1≤j≤w} τpj < pr(Dk) and that tr(Di, tj) ≤ τtj < tr(Dk, tj), 1 ≤ j ≤ w. From the monotonicity assumption, it follows that the ranking score of Di is r(Di) < r(Dk). That is, Di's score can never be larger than that of Dk.
Second, Di is not in the answer because Di was pruned from some inverted lists, say, I(t1), . . . , I(tm), in IP. Let us define r̄(Di) = f(pr(Di), τt1, . . . , τtm, tr(Di, tm+1), . . . , tr(Di, tw)). Then, from tr(Di, tj) ≤ τtj (1 ≤ j ≤ m) and the monotonicity assumption, we know that r(Di) ≤ r̄(Di). Also, Algorithm 4.6 sets C = 1 only when the top-k documents have scores larger than r̄(Di). Therefore, r(Di) cannot be larger than r(Dk).
Figure 11: Fraction of guaranteed queries f(s) answered in a keyword-pruned p-index of size s.
5. EXPERIMENTAL EVALUATION
In order to perform realistic tests of our pruning policies, we implemented a search engine prototype. For the experiments in this paper, our search engine indexed about 130 million pages, crawled from the Web during March of 2004.
The crawl started from the Open Directory's [10] homepage and proceeded in a breadth-first manner. Overall, the total uncompressed size of our crawled Web pages is approximately 1.9 TB, yielding a full inverted index IF of approximately 1.2 TB.
For the experiments reported in this section we used a real set of queries issued to Looksmart [22] on a daily basis during April of 2003. After keeping only the queries containing keywords that were present in our inverted index, we were left with a set of about 462 million queries. Within our query set, the average number of terms per query is 2 and 98% of the queries contain at most 5 terms.
Some experiments require us to use a particular ranking function. For these, we use a ranking function similar to the one used in [20]. More precisely, our ranking function r(D, q) is
r(D, q) = prnorm(D) + trnorm(D, q) (3)
where prnorm(D) is the normalized PageRank of D computed from the downloaded pages and trnorm(D, q) is the normalized TF.IDF cosine distance of D to q. This function is clearly simpler than the real functions employed by commercial search engines, but we believe it is adequate for our evaluation, because we are not studying the effectiveness of a ranking function but the effectiveness of pruning policies.
5.1 Keyword pruning
In our first experiment we study the performance of keyword pruning, described in Section 4.2. More specifically, we apply the algorithm HS of Figure 6 to our full index IF and create a keyword-pruned p-index IP of size s. For the construction of our keyword-pruned p-index we used the query frequencies observed during the first 10 days of our data set. Then, using the remaining 20-day query load, we measured f(s), the fraction of queries handled by IP.
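This f(s) measurement can be sketched as follows; the list-of-term-lists query-log format is an illustrative assumption.

```python
def fraction_guaranteed(queries, pruned_terms):
    """Measure f(s) for keyword pruning: the fraction of queries with C = 1,
    i.e. queries whose every term's inverted list survived pruning."""
    handled = sum(1 for q in queries if all(t in pruned_terms for t in q))
    return handled / len(queries)

# A toy evaluation load measured against a pruned term set.
queries = [["computer"], ["computer", "news"], ["rare", "news"], ["rare"]]
pruned_terms = {"computer", "news"}
```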
According to the algorithm of Figure 5, a query can be handled by IP (i.e., C = 1) if IP includes the inverted lists for all of the query's keywords.
We have repeated the experiment for varying values of s, picking the keywords greedily as discussed in Section 4.2. The result is shown in Figure 11. The horizontal axis denotes the size s of the p-index as a fraction of the size of IF. The vertical axis shows the fraction f(s) of the queries that the p-index of size s can answer. The results of Figure 11 are very encouraging: we can answer a significant fraction of the queries with a small fraction of the original index. For example, approximately 73% of the queries can be answered using 30% of the original index. Also, we find that when we use the keyword pruning policy only, the optimal index size is s = 0.17.
Figure 12: Fraction of guaranteed queries f(s) answered in a document-pruned p-index of size s.
Figure 13: Fraction of queries answered in a document-pruned p-index of size s.
5.2 Document pruning
We continue our experimental evaluation by studying the performance of the various document pruning policies described in Section 4.3. For the document pruning experiments reported here we worked with a 5.5% sample of the whole query set.
The reason behind this is merely practical: since we have far fewer machines than a commercial search engine, it would take us about a year of computation to process all 462 million queries.
For our first experiment, we generate a document-pruned p-index of size s by using the Extended Keyword-Specific pruning (EKS) of Section 4. Within the p-index we measure the fraction of queries that can be guaranteed (according to Theorem 4) to be correct. We have performed the experiment for varying index sizes s and the result is shown in Figure 12. Based on this figure, we can see that our document pruning algorithm performs well across the scale of index sizes s: for all index sizes larger than 40%, we can guarantee the correct answer for about 70% of the queries. This implies that our EKS algorithm can successfully identify the postings necessary for calculating the top-20 results for 70% of the queries using 40% of the full index size. From the figure, we can also see that the optimal index size is s = 0.20 when we use EKS as our pruning policy.
We can compare the two pruning schemes, namely keyword pruning and EKS, by contrasting Figures 11 and 12. Our observation is that, if we had to pick one of the two pruning policies, the two policies seem to be more or less equivalent for p-index sizes s ≤ 20%. For p-index sizes s > 20%, keyword pruning does a much better job, as it provides a higher number of guarantees at any given index size. Later, in Section 5.3, we discuss the combination of the two policies.
In our next experiment, we are interested in comparing EKS with the PR-based pruning policies described in Section 4.3. To this end, apart from EKS, we also generated document-pruned p-indexes for the global PR-based pruning (GPR) and the local PR-based pruning (LPR) policies. For each of the policies we created document-pruned p-indexes of varying sizes s.
Since GPR and LPR cannot provide a correctness guarantee, we will compare the fraction of queries from each policy that are identical (i.e., the same results in the same order) to the top-k results calculated from the full index. Here, we report our results for k = 20; the results are similar for other values of k. The results are shown in Figure 13.
Figure 14: Average fraction of the top-20 results of a p-index with size s contained in the top-20 results of the full index.
Figure 15: Combining keyword and document pruning.
The horizontal axis shows the size s of the p-index; the vertical axis shows the fraction f(s) of the queries whose top-20 results are identical to the top-20 results of the full index, for a given size s. By observing Figure 13, we can see that GPR performs the worst of the three policies. On the other hand, EKS picks up early, answering a great fraction of queries (about 62%) correctly with only 10% of the index size. The fraction of queries that LPR can answer remains below that of EKS until about s = 37%. For any index size larger than 37%, LPR performs the best.
In the experiment of Figure 13, we applied the strict definition that the results of the p-index have to be in the same order as those of the full index. However, in a practical scenario, it may be acceptable to have some of the results out of order.
Therefore, in our next experiment we measure the fraction of the results coming from a p-index that are contained within the results of the full index. The result of the experiment is shown in Figure 14. The horizontal axis is, again, the size s of the p-index; the vertical axis shows the average fraction of the top-20 results common with the top-20 results from the full index. Overall, Figure 14 shows that EKS and LPR identify the same high (≈ 96%) fraction of results on average for any size s ≥ 30%, with GPR not too far behind.
5.3 Combining keyword and document pruning
In Sections 5.1 and 5.2 we studied the individual performance of our keyword and document pruning schemes. One interesting question, however, is how these policies perform in combination. What fraction of queries can we guarantee if we apply both keyword and document pruning to our full index IF?
To answer this question, we performed the following experiment. We started with the full index IF and applied keyword pruning to create an index IhP of size sh · 100% of IF. After that, we further applied document pruning to IhP, creating our final p-index IP of size sv · 100% of IhP. We then calculated the fraction of guaranteed queries in IP. We repeated the experiment for different values of sh and sv. The result is shown in Figure 15. The x-axis shows the index size sh after applying keyword pruning; the y-axis shows the index size sv after applying document pruning; the z-axis shows the fraction of guaranteed queries after the two prunings. For example, the point (0.2, 0.3, 0.4) means that if we apply keyword pruning and keep 20% of IF, and subsequently apply document pruning to the resulting index keeping 30% of it (thus creating a p-index of size 20% · 30% = 6% of IF), we can guarantee 40% of the queries. By observing Figure 15, we can see that for p-index sizes smaller than 50%, our combined pruning does relatively well.
For\nexample, by performing 40% keyword and 40% document pruning\n(which translates to a pruned index with s = 0.16) we can provide\na guarantee for about 60% of the queries. In Figure 15, we also\nobserve a plateau for sh > 0.5 and sv > 0.5. For this combined\npruning policy, the optimal index size is at s = 0.13, with sh =\n0.46 and sv = 0.29.\n6. RELATED WORK\n[3, 30] provide a good overview of inverted indexing in Web\nsearch engines and IR systems. Experimental studies and analyses\nof various partitioning schemes for an inverted index are presented\nin [6, 23, 33]. The pruning algorithms that we have presented in\nthis paper are independent of the partitioning scheme used.\nThe works in [1, 5, 7, 20, 27] are the most related to ours, as they\ndescribe pruning techniques based on the idea of keeping the\npostings that contribute the most in the final ranking. However, [1, 5, 7,\n27] do not consider any query-independent quality (such as\nPageRank) in the ranking function. [32] presents a generic framework\nfor computing approximate top-k answers with some probabilistic\nbounds on the quality of results. Our work essentially extends [1,\n2, 4, 7, 20, 27, 31] by proposing mechanisms for providing the\ncorrectness guarantee to the computed top-k results.\nSearch engines use various methods of caching as a means of\nreducing the cost associated with queries [18, 19, 21, 31]. This thread\nof work is also orthogonal to ours because a caching scheme may\noperate on top of our p-index in order to minimize the answer\ncomputation cost. The exact ranking functions employed by current\nsearch engines are closely guarded secrets. In general, however,\nthe rankings are based on query-dependent relevance and\nqueryindependent document quality. Query-dependent relevance can\nbe calculated in a variety of ways (see [3, 30]). Similarly, there are a\nnumber of works that measure the quality of the documents,\ntypically as captured through link-based analysis [17, 28, 26]. 
Since\nour work does not assume a particular form of ranking function, it\nis complementary to this body of work.\nThere has been a great body of work on top-k result calculation.\nThe main idea is to either stop the traversal of the inverted lists\nearly, or to shrink the lists by pruning postings from the lists [14,\n4, 11, 8]. Our proof for the correctness indicator function was\nprimarily inspired by [12].\n7. CONCLUDING REMARKS\nWeb search engines typically prune their large-scale inverted\nindexes in order to scale to enormous query loads. While this\napproach may improve performance, by computing the top results\nfrom a pruned index we may notice a significant degradation in\nthe result quality. In this paper, we provided a framework for\nnew pruning techniques and answer computation algorithms that\nguarantee that the top matching pages are always placed at the\ntop of search results in the correct order. We studied two pruning\ntechniques, namely keyword-based and document-based pruning as\nwell as their combination. Our experimental results demonstrated\nthat our algorithms can effectively be used to prune an inverted\nindex without degradation in the quality of results. In particular, a\nkeyword-pruned index can guarantee 73% of the queries with a size\nof 30% of the full index, while a document-pruned index can\nguarantee 68% of the queries with the same size. When we combine the\ntwo pruning algorithms we can guarantee 60% of the queries with\nan index size of 16%. It is our hope that our work will help search\nengines develop better, faster and more efficient indexes and thus\nprovide for a better user search experience on the Web.\n8. REFERENCES\n[1] V. N. Anh, O. de Kretser, and A. Moffat. Vector-space ranking with\neffective early termination. In SIGIR, 2001.\n[2] V. N. Anh and A. Moffat. Pruning strategies for mixed-mode\nquerying. In CIKM, 2006.\n[3] R. A. Baeza-Yates and B. A. Ribeiro-Neto. Modern Information\nRetrieval. 
ACM Press / Addison-Wesley, 1999.
[4] N. Bruno, L. Gravano, and A. Marian. Evaluating top-k queries over web-accessible databases. In ICDE, 2002.
[5] S. Büttcher and C. L. A. Clarke. A document-centric approach to static index pruning in text retrieval systems. In CIKM, 2006.
[6] B. Cahoon, K. S. McKinley, and Z. Lu. Evaluating the performance of distributed architectures for information retrieval using a variety of workloads. ACM TOIS, 18(1), 2000.
[7] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. Maarek, and A. Soffer. Static index pruning for information retrieval systems. In SIGIR, 2001.
[8] S. Chaudhuri and L. Gravano. Optimizing queries over multimedia repositories. In SIGMOD, 1996.
[9] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms, 2nd Edition. MIT Press/McGraw Hill, 2001.
[10] Open directory. http://www.dmoz.org.
[11] R. Fagin. Combining fuzzy information: an overview. SIGMOD Record, 31(2), 2002.
[12] R. Fagin, A. Lotem, and M. Naor. Optimal aggregation algorithms for middleware. In PODS, 2001.
[13] A. Gulli and A. Signorini. The indexable web is more than 11.5 billion pages. In WWW, 2005.
[14] U. Guntzer, G. Balke, and W. Kiessling. Towards efficient multi-feature queries in heterogeneous environments. In ITCC, 2001.
[15] Z. Gyöngyi, H. Garcia-Molina, and J. Pedersen. Combating web spam with TrustRank. In VLDB, 2004.
[16] B. J. Jansen and A. Spink. An analysis of web documents retrieved and viewed. In International Conf. on Internet Computing, 2003.
[17] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, September 1999.
[18] R. Lempel and S. Moran. Predictive caching and prefetching of query results in search engines. In WWW, 2003.
[19] R. Lempel and S. Moran. Optimizing result prefetching in web search engines with segmented indices. ACM Trans. Inter. Tech., 4(1), 2004.
[20] X. Long and T. Suel.
Optimized query execution in large search\nengines with global page ordering. In VLDB, 2003.\n[21] X. Long and T. Suel. Three-level caching for efficient query\nprocessing in large web search engines. In WWW, 2005.\n[22] Looksmart inc. http://www.looksmart.com.\n[23] S. Melnik, S. Raghavan, B. Yang, and H. Garcia-Molina. Building a\ndistributed full-text index for the web. ACM TOIS, 19(3):217-241,\n2001.\n[24] A. Ntoulas, J. Cho, and C. Olston. What\"s new on the web? The evolution\nof the web from a search engine perspective. In WWW, 2004.\n[25] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly. Detecting spam\nweb pages through content analysis. In WWW, 2006.\n[26] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank\ncitation ranking: Bringing order to the web. Technical report,\nStanford University.\n[27] M. Persin, J. Zobel, and R. Sacks-Davis. Filtered document retrieval\nwith frequency-sorted indexes. Journal of the American Society of\nInformation Science, 47(10), 1996.\n[28] M. Richardson and P. Domingos. The intelligent surfer: Probabilistic\ncombination of link and content information in pagerank. In\nAdvances in Neural Information Processing Systems, 2002.\n[29] S. Robertson and K. Sp\u00e4rck-Jones. Relevance weighting of search\nterms. Journal of the American Society for Information Science,\n27:129-146, 1976.\n[30] G. Salton and M. J. McGill. Introduction to modern information\nretrieval. McGraw-Hill, first edition, 1983.\n[31] P. C. Saraiva, E. S. de Moura, N. Ziviani, W. Meira, R. Fonseca, and\nB. Ribeiro-Neto. Rank-preserving two-level caching for scalable\nsearch engines. In SIGIR, 2001.\n[32] M. Theobald, G. Weikum, and R. Schenkel. Top-k query evaluation\nwith probabilistic guarantees. In VLDB, 2004.\n[33] A. Tomasic and H. Garcia-Molina. Performance of inverted indices\nin shared-nothing distributed text document information retrieval\nsystems.
In Parallel and Distributed Information Systems, 1993.", "keywords": "large-scale inverted index;degradation of result quality;invert index;result computation algorithm;prune;correctness guarantee;result quality degradation;pruning-based performance optimization;top search result;optimal size;web search engine;pruning technique;top-matching page;pruned index;online search market;query load"}
-{"name": "test_H-18", "title": "Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents", "abstract": "Topic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents. Previous work in this area usually focuses on single documents, although similar multiple documents are available in many domains. In this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI) that is a combination of MI and term weights. The basic idea is that the optimal segmentation maximizes MI(or WMI). Our approach can detect shared topics among documents. It can find the optimal boundaries in a document, and align segments among documents at the same time. It also can handle single-document segmentation as a special case of the multi-document segmentation and alignment. Our methods can identify and strengthen cue terms that can be used for segmentation and partially remove stop words by using term weights based on entropy learned from multiple documents. Our experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation. Utilizing information from multiple documents can tremendously improve the performance of topic segmentation, and using WMI is even better than using MI for the multi-document segmentation.", "fulltext": "1. INTRODUCTION\nMany researchers have worked on topic detection and\ntracking (TDT) [26] and topic segmentation during the past decade.\nTopic segmentation intends to identify the boundaries in a\ndocument with the goal to capture the latent topical\nstructure. 
Topic segmentation tasks usually fall into two\ncategories [15]: text stream segmentation, where topic transition\nis identified, and coherent document segmentation, in which\ndocuments are split into sub-topics. The former category\nhas applications in automatic speech recognition, while the\nlatter one has more applications such as partial-text query\nof long documents in information retrieval, text summary,\nand quality measurement of multiple documents. Previous\nresearch in connection with TDT falls into the former\ncategory, targeted on topic tracking of broadcast speech data\nand newswire text, while the latter category has not been\nstudied very well.\nTraditional approaches perform topic segmentation on\ndocuments one at a time [15, 25, 6]. Most of them perform\nbadly in subtle tasks like coherent document segmentation\n[15]. Often, end-users seek documents that have\nsimilar content. Search engines, like Google, provide links to\nobtain similar pages. At a finer granularity, users may\nactually be looking to obtain sections of a document similar\nto a particular section that presumably discusses a topic of\nthe user\"s interest. Thus, the extension of topic\nsegmentation from single documents to identifying similar segments\nfrom multiple similar documents with the same topic is a\nnatural and necessary direction, and multi-document topic\nsegmentation is expected to perform better since\nmore information is utilized.\nTraditional approaches using similarity measurement based\non term frequency generally have the same assumption that\nsimilar vocabulary tends to be in a coherent topic segment\n[15, 25, 6]. However, they usually suffer from the issue of\nidentifying stop words. For example, additional document-dependent\nstop words are removed together with the generic stop words\nin [15]. There are two reasons that we do not remove stop\nwords directly. First, identifying stop words is another\nissue [12] that requires estimation in each domain.
Removing\ncommon stop words may result in the loss of useful\ninformation in a specific domain. Second, even though stop words\ncan be identified, hard classification of stop words and\nnon-stop words cannot represent the gradually changing amount\nof information content of each word. We employ a soft\nclassification using term weights.\nIn this paper, we view the problem of topic segmentation\nas an optimization issue using information theoretic\ntechniques to find the optimal boundaries of a document given\nthe number of text segments so as to minimize the loss of\nmutual information (MI) (or a weighted mutual information\n(WMI)) after segmentation and alignment. This is equivalent to\nmaximizing the MI (or WMI). The MI focuses on\nmeasuring the difference among segments whereas previous research\nfocused on finding the similarity (e.g. cosine distance) of\nsegments [15, 25, 6]. Topic alignment of multiple similar\ndocuments can be achieved by clustering sentences on the\nsame topic into the same cluster. Single-document topic\nsegmentation is just a special case of the multi-document\ntopic segmentation and alignment problem. Terms can be\nco-clustered as in [10] at the same time, given the number of\nclusters, but our experimental results show that this method\nresults in a worse segmentation (see Tables 1, 4, and 6).\nUsually, human readers can identify topic transitions based on\ncue words, and can ignore stop words. Inspired by this, we\ngive each term (or term cluster) a weight based on entropy\namong different documents and different segments of\ndocuments. Not only can this approach increase the contribution\nof cue words, but it can also decrease the effect of common\nstop words, noisy words, and document-dependent stop words.\nThese words are common in a document.
Many methods\nbased on sentence similarity require that these words are\nremoved before topic segmentation can be performed [15].\nOur results in Figure 3 show that term weights are useful\nfor multi-document topic segmentation and alignment.\nThe major contribution of this paper is that it introduces\na novel method for topic segmentation using MI and shows\nthat this method performs better than previously used\ncriteria. Also, we have addressed the problem of topic\nsegmentation and alignment across multiple documents, whereas\nmost existing research focused on segmentation of single\ndocuments. Multi-document segmentation and alignment\ncan utilize information from similar documents and improves\nthe performance of topic segmentation greatly. Obviously,\nour approach can handle single documents as a special case\nwhen multiple documents are unavailable. It can detect\nshared topics among documents to judge if they are multiple\ndocuments on the same topic. We also introduce the new\ncriterion of WMI based on term weights learned from\nmultiple similar documents, which can improve performance of\ntopic segmentation further. We propose an iterative greedy\nalgorithm based on dynamic programming and show that it\nworks well in practice. Some of our prior work is in [24].\nThe rest of this paper is organized as follows: In Section 2,\nwe review related work. Section 3 contains a formulation of\nthe problem of topic segmentation and alignment of multiple\ndocuments with term co-clustering, a review of the criterion\nof MI for clustering, and finally an introduction to WMI. In\nSection 4, we first propose the iterative greedy algorithm of\ntopic segmentation and alignment with term co-clustering,\nand then describe how the algorithm can be optimized by\nusing dynamic programming.\nFigure 1: Illustration of multi-document\nsegmentation and alignment.
In Section 5, experiments about\nsingle-document segmentation, shared topic detection, and\nmulti-document segmentation are described, and results are\npresented and discussed to evaluate the performance of our\nalgorithm. Conclusions and some future directions of the\nresearch work are discussed in Section 6.\n2. PREVIOUS WORK\nGenerally, the existing approaches to text segmentation\nfall into two categories: supervised learning [19, 17, 23]\nand unsupervised learning [3, 27, 5, 6, 15, 25, 21].\nSupervised learning usually has good performance, since it learns\nfunctions from labelled training sets. However, often\ngetting large training sets with manual labels on document\nsentences is prohibitively expensive, so unsupervised\napproaches are desired. Some models consider dependence\nbetween sentences and sections, such as Hidden Markov Model\n[3, 27], Maximum Entropy Markov Model [19], and\nConditional Random Fields [17], while many other approaches are\nbased on lexical cohesion or similarity of sentences [5, 6, 15,\n25, 21]. Some approaches also focus on cue words as hints\nof topic transitions [11]. While some existing methods only\nconsider information in single documents [6, 15], others\nutilize multiple documents [16, 14]. There are not many works\nin the latter category, even though the performance of\nsegmentation is expected to be better with utilization of\ninformation from multiple documents. Previous research studied\nmethods to find shared topics [16] and topic segmentation\nand summarization between just a pair of documents [14].\nText classification and clustering is a related research area\nwhich categorizes documents into groups using supervised or\nunsupervised methods. 
Topical classification or clustering is\nan important direction in this area, especially co-clustering\nof documents and terms, such as LSA [9], PLSA [13], and\napproaches based on distances and bipartite graph\npartitioning [28] or maximum MI [2, 10], or maximum entropy\n[1, 18]. Criteria of these approaches can be utilized in the\nissue of topic segmentation. Some of those methods have been\nextended into the area of topic segmentation, such as PLSA\n[5] and maximum entropy [7], but to the best of our knowledge,\nusing MI for topic segmentation has not been studied.\n3. PROBLEM FORMULATION\nOur goal is to segment documents and align the segments\nacross documents (Figure 1). Let T be the set of terms\n{t1, t2, ..., tl}, which appear in the unlabelled set of\ndocuments D = {d1, d2, ..., dm}. Let Sd be the set of sentences\nfor document d \u2208 D, i.e. {s1, s2, ..., snd }. We have a 3D\nmatrix of term frequency, in which the three dimensions are\nrandom variables of D, Sd, and T. Sd actually is a random\nvector including a random variable for each d \u2208 D. The\nterm frequency can be used to estimate the joint probability\ndistribution P(D, Sd, T), which is p(t, d, s) = T(t, d, s)/ND,\nwhere T(t, d, s) is the number of t in d\"s sentence s and ND\nis the total number of terms in D. \u02c6S represents the set of\nsegments {\u02c6s1, \u02c6s2, ..., \u02c6sp} after segmentation and alignment\namong multiple documents, where the number of segments\n| \u02c6S| = p. A segment \u02c6si of document d is a sequence of\nadjacent sentences in d. Since for different documents si may\ndiscuss different sub-topics, our goal is to cluster adjacent\nsentences in each document into segments, and align similar\nsegments among documents, so that for different documents\n\u02c6si is about the same sub-topic.
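As a concrete illustration of the estimate p(t, d, s) = T(t, d, s)/ND above, the joint distribution can be built directly from raw term counts. A minimal sketch in Python; the input format (documents as lists of tokenized sentences) and the function name are our own assumptions, not from the paper:

```python
from collections import Counter

def joint_distribution(docs):
    """Estimate p(t, d, s) = T(t, d, s) / N_D from raw counts.
    `docs` maps a document id to a list of sentences, each a list of terms
    (a hypothetical input format)."""
    counts = Counter()
    total = 0
    for d, sentences in docs.items():
        for s, sentence in enumerate(sentences):
            for t in sentence:
                counts[(t, d, s)] += 1  # T(t, d, s)
                total += 1              # N_D
    return {key: c / total for key, c in counts.items()}

docs = {"d1": [["plant", "hormone"], ["auxin", "hormone"]]}
p = joint_distribution(docs)
```

By construction the probabilities sum to one over all (t, d, s) triples; here, for example, p("hormone", "d1", s1) = 1/4.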
The goal is to find the\noptimal topic segmentation and alignment mapping\nSegd(si) : {s1, s2, ..., snd } \u2192 {\u02c6s1, \u02c6s2, ..., \u02c6sp}\nand Alid(\u02c6si) : {\u02c6s1, \u02c6s2, ..., \u02c6sp} \u2192 {\u02c6s1, \u02c6s2, ..., \u02c6sp}, for all d \u2208\nD, where \u02c6si is the ith segment with the constraint that only\nadjacent sentences can be mapped to the same segment,\ni.e. for d, {si, si+1, ..., sj} \u2192 {\u02c6sq}, where q \u2208 {1, ..., p} and\np is the segment number, and if i > j, then for d,\n\u02c6sq is missing. After segmentation and alignment, random\nvector Sd becomes an aligned random variable \u02c6S. Thus,\nP(D, Sd, T) becomes P(D, \u02c6S, T).\nTerm co-clustering is a technique that has been employed\n[10] to improve the accuracy of document clustering. We\nevaluate its effect for topic segmentation. A term t\nis mapped to exactly one term cluster. Term co-clustering\ninvolves simultaneously finding the optimal term clustering\nmapping Clu(t) : {t1, t2, ..., tl} \u2192 {\u02c6t1, \u02c6t2, ..., \u02c6tk}, where k \u2264\nl, l is the total number of words in all the documents, and\nk is the number of clusters.\n4. METHODOLOGY\nWe now describe a novel algorithm which can handle\nsingle-document segmentation, shared topic detection, and\nmulti-document segmentation and alignment based on MI or WMI.\n4.1 Mutual Information\nMI I(X; Y ) is a quantity to measure the amount of\ninformation which is contained in two or more random variables\n[8, 10]. For the case of two random variables, we have\nI(X; Y ) = \u2211x\u2208X \u2211y\u2208Y p(x, y) log [p(x, y) / (p(x)p(y))], (1)\nObviously, when random variables X and Y are\nindependent, I(X; Y ) = 0.
Thus, intuitively, the value of MI\ndepends on how random variables are dependent on each other.\nThe optimal co-clustering is the mapping Clux : X \u2192 \u02c6X and\nCluy : Y \u2192 \u02c6Y that minimizes the loss: I(X; Y ) \u2212 I( \u02c6X; \u02c6Y ),\nwhich is equal to maximizing I( \u02c6X; \u02c6Y ). This is the criterion\nof MI for clustering.\nIn the case of topic segmentation, the two random\nvariables are the term variable T and the segment variable S,\nand each sample is an occurrence of a term T = t in a\nparticular segment S = s. I(T; S) is used to measure how\ndependent T and S are. However, I(T; S) cannot be\ncomputed for documents before segmentation, since we do not\nhave a set of S due to the fact that the sentences of Document d,\nsi \u2208 Sd, are not aligned with other documents. Thus, instead\nof minimizing the loss of MI, we can maximize MI after topic\nsegmentation, computed as:\nI( \u02c6T; \u02c6S) = \u2211\u02c6t\u2208 \u02c6T \u2211\u02c6s\u2208 \u02c6S p(\u02c6t, \u02c6s) log [p(\u02c6t, \u02c6s) / (p(\u02c6t)p(\u02c6s))], (2)\nwhere p(\u02c6t, \u02c6s) are estimated by the term frequency tf of Term\nCluster \u02c6t and Segment \u02c6s in the training set D. Note that\nhere a segment \u02c6s includes sentences about the same topic\namong all documents.
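Equation (2) can be computed directly from a term(-cluster)-by-segment contingency table. A small sketch, assuming counts are stored as a dense list-of-lists matrix (our own layout, not the paper's):

```python
import math

def mutual_information(tf):
    """I(T; S) as in Equation (2): sum of p(t,s) * log(p(t,s)/(p(t)p(s))).
    `tf` is a term-by-segment count matrix; rows are terms (or term
    clusters), columns are aligned segments."""
    total = float(sum(sum(row) for row in tf))
    pt = [sum(row) / total for row in tf]          # marginal p(t)
    ps = [sum(col) / total for col in zip(*tf)]    # marginal p(s)
    mi = 0.0
    for i, row in enumerate(tf):
        for j, count in enumerate(row):
            if count:
                pts = count / total                # joint p(t, s)
                mi += pts * math.log(pts / (pt[i] * ps[j]))
    return mi
```

When T and S are independent (every row proportional to every other) the value is 0; for a block-diagonal table such as `[[3, 0], [0, 3]]` it is log 2, matching the intuition that segment-specific terms carry information.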
The optimal solution is the mapping\nClut : T \u2192 \u02c6T, Segd : Sd \u2192 \u02c6S , and Alid : \u02c6S \u2192 \u02c6S, which\nmaximizes I( \u02c6T; \u02c6S).\n4.2 Weighted Mutual Information\nIn topic segmentation and alignment of multiple\ndocuments, if P(D, \u02c6S, T) is known, based on the marginal\ndistributions P(D|T) and P( \u02c6S|T) for each term t \u2208 T, we can\ncategorize terms into four types in the data set:\n\u2022 Common stop words are common both along the\ndimensions of documents and segments.\n\u2022 Document-dependent stop words that depend on the\npersonal writing style are common only along the\ndimension of segments for some documents.\n\u2022 Cue words are the most important elements for\nsegmentation. They are common along the dimension of\ndocuments only for the same segment, and they are\nnot common along the dimension of segments.\n\u2022 Noisy words are other words which are not common\nalong either dimension.\nEntropy based on P(D|T) and P( \u02c6S|T) can be used to\nidentify different types of terms. To reinforce the contribution\nof cue words in the MI computation, and simultaneously\nreduce the effect of the other three types of words, similar to\nthe idea of the tf-idf weight [22], we use the entropies of each\nterm along the dimensions of document D and segment \u02c6S,\ni.e. ED(\u02c6t) and E\u02c6S(\u02c6t), to compute the weight. A cue word\nusually has a large value of ED(\u02c6t) but a small value of E\u02c6S(\u02c6t).\nWe introduce term weights (or term cluster weights)\nw\u02c6t = (ED(\u02c6t) / max\u02c6t \u2208 \u02c6T ED(\u02c6t ))^a (1 \u2212 E\u02c6S(\u02c6t) / max\u02c6t \u2208 \u02c6T E\u02c6S(\u02c6t ))^b, (3)\nwhere ED(\u02c6t) = \u2211d\u2208D p(d|\u02c6t) log|D| (1/p(d|\u02c6t)),\nE\u02c6S(\u02c6t) = \u2211\u02c6s\u2208 \u02c6S p(\u02c6s|\u02c6t) log| \u02c6S| (1/p(\u02c6s|\u02c6t)), and a > 0 and b > 0 are\npowers to adjust term weights.
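Equation (3) and the two entropies it relies on can be sketched as follows. Note that the logarithms use bases |D| and the number of segments, so each entropy already lies in [0, 1] before the additional max-normalization. The function names and the input format (conditional distributions p(d|t) and p(s|t) as plain lists) are illustrative assumptions:

```python
import math

def entropy(dist):
    """Entropy with log base len(dist), so a uniform distribution scores 1."""
    base = len(dist)
    return sum(p * math.log(1.0 / p, base) for p in dist if p > 0)

def term_weight(p_d_given_t, p_s_given_t, a=1.0, b=1.0, max_ed=1.0, max_es=1.0):
    """w_t per Equation (3): large entropy across documents (E_D) combined
    with small entropy across segments (E_S) marks a cue term. max_ed and
    max_es are the maxima of E_D and E_S over all terms, used to normalize."""
    ed = entropy(p_d_given_t)   # E_D(t)
    es = entropy(p_s_given_t)   # E_S(t)
    return (ed / max_ed) ** a * (1.0 - es / max_es) ** b

# a cue term: spread evenly over documents, concentrated in one segment
cue = term_weight([0.5, 0.5], [1.0, 0.0])
# a common stop word: spread evenly over both dimensions
stop = term_weight([0.5, 0.5], [0.5, 0.5])
```

Under these toy inputs the cue term gets weight 1 and the stop word weight 0, reproducing the qualitative behavior described above.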
Usually a = 1 and b = 1\nby default, and max\u02c6t \u2208 \u02c6T (ED(\u02c6t )) and max\u02c6t \u2208 \u02c6T (E\u02c6S(\u02c6t )) are\nused to normalize the entropy values. Term cluster weights\nare used to adjust p(\u02c6t, \u02c6s),\npw(\u02c6t, \u02c6s) = w\u02c6t p(\u02c6t, \u02c6s) / \u2211\u02c6t\u2208 \u02c6T;\u02c6s\u2208 \u02c6S w\u02c6t p(\u02c6t, \u02c6s), (4)\nand\nIw( \u02c6T; \u02c6S) = \u2211\u02c6t\u2208 \u02c6T \u2211\u02c6s\u2208 \u02c6S pw(\u02c6t, \u02c6s) log [pw(\u02c6t, \u02c6s) / (pw(\u02c6t)pw(\u02c6s))], (5)\nwhere pw(\u02c6t) and pw(\u02c6s) are marginal distributions of pw(\u02c6t, \u02c6s).\nHowever, since we do not know either the term weights\nor P(D, \u02c6S, T), we need to estimate them, but w\u02c6t depends\non p(\u02c6s|t) and \u02c6S, while \u02c6S and p(\u02c6s|t) also depend on w\u02c6t, which\nis still unknown. Thus, an iterative algorithm is required\nto estimate the term weights w\u02c6t and find the best\nsegmentation and alignment to optimize the objective function Iw\nconcurrently. After a document is segmented into sentences\nand each sentence is segmented into words, each word is\nstemmed. Then the joint probability distribution P(D, Sd, T)\ncan be estimated. Finally, this distribution can be used to\ncompute MI in our algorithm.\nInput:\nJoint probability distribution P(D, Sd, T),\nnumber of text segments p \u2208 {2, 3, ..., max(sd)},\nnumber of term clusters k \u2208 {2, 3, ..., l} (if k = l, no term\nco-clustering required), and\nweight type w \u2208 {0, 1}, indicating to use I or Iw, respectively.\nOutput:\nMapping Clu, Seg, Ali, and term weights w\u02c6t.\nInitialization:\n0. i = 0. Initialize Clu(0)t, Seg(0)d, and Ali(0)d; initialize w(0)\u02c6t\nusing Equation (6) if w = 1;\nStage 1:\n1. If |D| = 1, k = l, and w = 0, check all sequential\nsegmentations of d into p segments and find the best one\nSegd(s) = argmax\u02c6s I( \u02c6T; \u02c6S),\nand return Segd; otherwise, if w = 1 and k = l, go to 3.1;\nStage 2:\n2.1 If k < l, for each term t, find the best cluster \u02c6t as\nClu(i+1)(t) = argmax\u02c6t I( \u02c6T; \u02c6S(i))\nbased on Seg(i) and Ali(i);\n2.2 For each d, check all sequential segmentations of d into p\nsegments with mapping s \u2192 \u02c6s \u2192 \u02c6s, and find the best one\nAli(i+1)d(Seg(i+1)d(s)) = argmax\u02c6s I( \u02c6T(i+1); \u02c6S)\nbased on Clu(i+1)(t) if k < l or Clu(0)(t) if k = l;\n2.3 i++. If Clu, Seg, or Ali changed, go to 2.1; otherwise,\nif w = 0, return Clu(i), Seg(i), and Ali(i); else j = 0, go to 3.1;\nStage 3:\n3.1 Update w(i+j+1)\u02c6t based on Seg(i+j), Ali(i+j), and Clu(i)\nusing Equation (3);\n3.2 For each d, check all sequential segmentations of d into p\nsegments with mapping s \u2192 \u02c6s \u2192 \u02c6s, and find the best one\nAli(i+j+1)d(Seg(i+j+1)d(s)) = argmax\u02c6s Iw( \u02c6T(i); \u02c6S)\nbased on Clu(i) and w(i+j+1)\u02c6t;\n3.3 j++. If Iw( \u02c6T; \u02c6S) changes, go to 3.1; otherwise, stop\nand return Clu(i), Seg(i+j), Ali(i+j), and w(i+j)\u02c6t;\nFigure 2: Algorithm: Topic segmentation and\nalignment based on MI or WMI.\n4.3 Iterative Greedy Algorithm\nOur goal is to maximize the objective function, I( \u02c6T; \u02c6S) or\nIw( \u02c6T; \u02c6S), which can measure the dependence of term\noccurrences in different segments.
Generally, at first we do not know\nthe estimated term weights, which depend on the optimal\ntopic segmentation and alignment, and term clusters.\nMoreover, this problem is NP-hard [10], even if we knew\nthe term weights. Thus, an iterative greedy algorithm is\ndesired to find the best solution, even though probably only\nlocal maxima are reached. We present the iterative greedy\nalgorithm in Figure 2 to find a local maximum of I( \u02c6T; \u02c6S) or\nIw( \u02c6T; \u02c6S) with simultaneous term weight estimation. This\nalgorithm is iterative and greedy for multi-document\ncases or single-document cases with term weight estimation\nand/or term co-clustering. Otherwise, since it is just a one-step\nalgorithm to solve the task of single-document\nsegmentation [6, 15, 25], the global maximum of MI is guaranteed.\nWe will show later that term co-clustering reduces the\naccuracy of the results and is not necessary, and for\nsingle-document segmentation, term weights are also not required.\n4.3.1 Initialization\nIn Step 0, the initial term clustering Clut and topic\nsegmentation and alignment Segd and Alid are important to\navoid local maxima and reduce the number of iterations.\nFirst, a good guess of term weights can be made by using\nthe distributions of term frequency along sentences for each\ndocument and averaging them to get the initial values of w\u02c6t:\nwt = (ED(t) / maxt \u2208T ED(t ))(1 \u2212 ES(t) / maxt \u2208T ES(t )), (6)\nwhere\nES(t) = (1/|Dt|) \u2211d\u2208Dt (1 \u2212 \u2211s\u2208Sd p(s|t) log|Sd| (1/p(s|t))),\nwhere Dt is the set of documents which contain Term t.\nThen, for the initial segmentation Seg(0), we can simply\nsegment documents equally by sentences. Or we can find\nthe optimal segmentation just for each document d which\nmaximizes the WMI, Seg(0)d = argmax\u02c6s Iw(T; \u02c6S), where\nw = w(0)\u02c6t.
For the initial alignment Ali(0), we can first\nassume that the order of segments for each d is the same.\nFor the initial term clustering Clu(0), first cluster labels can\nbe set randomly, and after the first execution of Step 3, a good\ninitial term clustering is obtained.\n4.3.2 Different Cases\nAfter initialization, there are three stages for different\ncases. In total there are eight cases: |D| = 1 or |D| > 1,\nk = l or k < l, w = 0 or w = 1. Single-document\nsegmentation without term clustering and term weight estimation\n(|D| = 1, k = l, w = 0) only requires Stage 1 (Step 1). If\nterm clustering is required (k < l), Stage 2 (Steps 2.1, 2.2,\nand 2.3) is executed iteratively. If term weight estimation\nis required (w = 1), Stage 3 (Steps 3.1, 3.2, and 3.3) is\nexecuted iteratively. If both are required (k < l, w = 1), Stages 2\nand 3 run one after the other. For multi-document\nsegmentation without term clustering and term weight estimation\n(|D| > 1, k = l, w = 0), only iterations of Steps 2.2 and 2.3\nare required.\nAt Stage 1, the global maximum can be found based on\nI( \u02c6T; \u02c6S) using dynamic programming in Section 4.4.\nSimultaneously finding a good term clustering and estimated term\nweights is impossible, since when moving a term to a new\nterm cluster to maximize Iw( \u02c6T; \u02c6S), we do not know whether the\nweight of this term should be that of the new cluster or\nthe old cluster. Thus, we first do term clustering at Stage\n2, and then estimate term weights at Stage 3.\nAt Stage 2, Step 2.1 is to find the best term clustering\nand Step 2.2 is to find the best segmentation. This cycle is\nrepeated to find a local maximum based on MI I until it\nconverges.
The two steps are: (1) based on the current term\nclustering Clu\u02c6t, for each document d, the algorithm segments\nall the sentences Sd into p segments sequentially (some\nsegments may be empty), and puts them into the p segments\n\u02c6S of the whole training set D (all possible cases of different\nsegmentation Segd and alignment Alid are checked) to find\nthe optimal case, and (2) based on the current segmentation\nand alignment, for each term t, the algorithm finds the best\nterm cluster of t based on the current segmentation Segd\nand alignment Alid. After finding a good term clustering,\nterm weights are estimated if w = 1.\nAt Stage 3, similar to Stage 2, Step 3.1 is term weight\nre-estimation and Step 3.2 is to find a better segmentation.\nThey are repeated to find a local maximum based on WMI\nIw until it converges. However, if the term clustering in\nStage 2 is not accurate, then the term weight estimation at\nStage 3 may have a bad result. Finally, at Step 3.3, this\nalgorithm converges and returns the output. This algorithm can\nhandle both single-document and multi-document\nsegmentation. It also can detect shared topics among documents\nby checking the proportion of overlapped sentences on the\nsame topics, as described in Sec 5.2.\n4.4 Algorithm Optimization\nIn many previous works on segmentation, dynamic\nprogramming is a technique used to maximize the objective\nfunction. Similarly, at Steps 1, 2.2, and 3.2 of our algorithm,\nwe can use dynamic programming. For Stage 1, using\ndynamic programming can still find the global optimum, but\nfor Stage 2 and Stage 3, we can only find the optimum for\neach step of topic segmentation and alignment of a\ndocument. Here we only show the dynamic programming for\nStep 3.2 using WMI (Steps 1 and 2.2 are similar but they can\nuse either I or Iw).
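The shape of the dynamic program is worth making concrete. The sketch below implements the no-alignment recurrence M(sm, L) = maxi[M(si-1, L-1) + PIw(...)] with a generic additive per-segment score standing in for the partial (W)MI term; the function names and the toy label-based score are our own illustration, not the paper's code:

```python
def segment(n, p, score):
    """Split sentences 1..n into p sequential segments maximizing the sum
    of score(i, m, L), where segment L covers sentences i+1..m (possibly
    empty). Implements M(m, L) = max_i [M(i, L-1) + score(i, m, L)]."""
    NEG = float("-inf")
    M = [[NEG] * (p + 1) for _ in range(n + 1)]
    back = [[0] * (p + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    for L in range(1, p + 1):
        for m in range(n + 1):
            for i in range(m + 1):       # left boundary; i == m gives an empty segment
                if M[i][L - 1] == NEG:
                    continue
                v = M[i][L - 1] + score(i, m, L)
                if v > M[m][L]:
                    M[m][L], back[m][L] = v, i
    boundaries, m = [], n
    for L in range(p, 0, -1):            # recover the chosen segment ranges
        i = back[m][L]
        boundaries.append((i, m))        # segment L = half-open sentence range (i, m]
        m = i
    return M[n][p], boundaries[::-1]

# toy score: a segment is worth its length if all its sentences share a label
labels = [0, 0, 1, 1]
def score(i, m, L):
    seg = labels[i:m]
    return float(len(seg)) if len(set(seg)) <= 1 else 0.0

best, bounds = segment(len(labels), 2, score)
```

Here the score simply rewards label-homogeneous segments, so the best split recovers the boundary between the two label runs. The real Step 3.2 score, the partial WMI of putting sentences i..m into a segment, is computed against the rest of the corpus as described in Cases 1 and 2 below, but the recurrence structure is the same.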
There are two cases that are not shown\nin the algorithm in Figure 2: (a) single-document\nsegmentation or multi-document segmentation with the same\nsequential order of segments, where alignment is not required,\nand (b) multi-document segmentation with different\nsequential orders of segments, where alignment is necessary. The\nalignment mapping function of the former case is simply just\nAlid(\u02c6si) = \u02c6si, while for the latter one\"s alignment mapping\nfunction Alid(\u02c6si) = \u02c6sj, i and j may be different. The\ncomputational steps for the two cases are listed below:\nCase 1 (no alignment):\nFor each document d:\n(1) Compute pw(\u02c6t), partial pw(\u02c6t, \u02c6s) and partial pw(\u02c6s)\nwithout counting sentences from d. Then put sentences from i\nto j into Part k, and compute partial WMI\nPIw( \u02c6T; \u02c6sk(si, si+1, ..., sj))\n\u02c6t\u2208 \u02c6T\npw(\u02c6t, \u02c6sk)log\npw(\u02c6t, \u02c6sk)\npw(\u02c6t)pw(\u02c6sk)\n,\nwhere Alid(si, si+1, ..., sj) = k, k \u2208 {1, 2, ..., p}, 1 \u2264 i \u2264 j \u2264\nnd, and Segd(sq) = \u02c6sk for all i \u2264 q \u2264 j.\n(2) Let M(sm, 1) = PIw( \u02c6T; \u02c6s1(s1, s2, ..., sm)). Then\nM(sm, L) = maxi[M(si\u22121, L \u2212 1) + PIw( \u02c6T; \u02c6sL(si, ..., sm))],\nwhere 0 \u2264 m \u2264 nd, 1 < L < p, 1 \u2264 i \u2264 m + 1, and when\ni > m, no sentences are put into \u02c6sk when compute PIw\n(note PIw( \u02c6T; \u02c6s(si, ..., sm)) = 0 for single-document\nsegmentation).\n(3) Finally M(snd , p) = maxi[M(si\u22121, p \u2212 1)+\nPIw( \u02c6T; \u02c6sp(si, ..., snd ))], where 1 \u2264 i \u2264 nd+1. The optimal\nIw is found and the corresponding segmentation is the best.\nCase 2 (alignment required):\nFor each document d:\n(1) Compute pw(\u02c6t), partial pw(\u02c6t, \u02c6s), and partial pw(\u02c6s), and\nPIw( \u02c6T; \u02c6sk(si, si+1, ..., sj)) similarly as Case 1.\n(2) Let M(sm, 1, k) = PIw( \u02c6T; \u02c6sk(s1, s2, ..., sm)), where\nk \u2208 {1, 2, ..., p}. 
Then M(sm, L, kL) = maxi,j[M(si\u22121, L \u2212 1, kL/j) + PIw( \u02c6T; \u02c6sAlid(\u02c6sL)=j(si, si+1, ..., sm))],\nwhere 0 \u2264 m \u2264 nd, 1 < L < p, 1 \u2264 i \u2264 m + 1, kL \u2208\nSet(p, L), which is the set of all p!/(L!(p\u2212L)!) combinations of\nL segments chosen from all p segments, j \u2208 kL, the set\nof L segments chosen from all p segments, and kL/j is the\ncombination of L \u2212 1 segments in kL except Segment j.\n(3) Finally, M(snd , p, kp) = maxi,j[M(si\u22121, p \u2212 1, kp/j)\n+PIw( \u02c6T; \u02c6sAlid(\u02c6sL)=j(si, si+1, ..., snd ))],\nwhere kp is just the combination of all p segments and 1 \u2264\ni \u2264 nd + 1, which is the optimal Iw and the corresponding\nsegmentation is the best.\nThe steps of Case 1 and 2 are similar, except in Case 2,\nalignment is considered in addition to segmentation. First,\nbasic items of probability for computing Iw are computed\nexcluding Doc d, and then partial WMI by putting every\npossible sequential segment (including empty segments) of d\ninto every segment of the set. Second, the optimal sum of\nPIw for L segments and the leftmost m sentences, M(sm, L),\nis found. Finally, the maximal WMI is found among\ndifferent sums of M(sm, p \u2212 1) and PIw for Segment p.\n5. EXPERIMENTS\nIn this section, single-document segmentation, shared topic\ndetection, and multi-document segmentation will be tested.\nDifferent hyperparameters of our method are studied. For\nconvenience, we refer to the method using I as MIk (w = 0),\nand the method using Iw as WMIk (w = 1), where k\nis the number of term clusters, and if k = l, where l is the\ntotal number of terms, then no term clustering is required,\ni.e. MIl and WMIl.\n5.1 Single-document Segmentation\n5.1.1 Test Data and Evaluation\nThe first data set we tested is a synthetic one used in\nprevious research [6, 15, 25] and many other papers. It has\n700 samples. Each is a concatenation of ten segments.
Each\nsegment consists of the first n sentences of a document selected\nrandomly from the Brown corpus, so each segment is supposed\nto have a different topic from the others. Currently, the best results on this data\nset are achieved by Ji et al. [15]. To compare the\nperformance of our methods, the criterion used widely in previous\nresearch is applied, instead of the unbiased criterion\nintroduced in [20]. The criterion chooses a pair of words randomly. If they\nare in different segments (different) for the real\nsegmentation (real), but predicted (pred) as in the same segment,\nit is a miss. If they are in the same segment (same), but\npredicted as in different segments, it is a false alarm. Thus,\nthe error rate is computed using the following equation:\np(err|real, pred) = p(miss|real, pred, diff)p(diff|real)\n+p(false alarm|real, pred, same)p(same|real).\n5.1.2 Experiment Results\nWe tested the case when the number of segments is known.\nTable 1 shows the results of our methods with different\nhyperparameter values and three previous approaches, C99 [25],\nU00 [6], and ADDP03 [15], on this data set when the\nsegment number is known. In WMI for single-document\nsegmentation, the term weights are computed as follows: w\u02c6t =\n1 \u2212 E\u02c6S(\u02c6t)/max\u02c6t \u2208 \u02c6T (E\u02c6S(\u02c6t )). For this case, our methods MIl\nand WMIl both outperform all the previous approaches.\nWe compared our methods with ADDP03 using a one-sample\none-sided t-test, and p-values are shown in Table 2.
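The pairwise error criterion above can be estimated by sampling. A Monte-Carlo sketch, treating the real and predicted segmentations as per-position segment labels (this input format and the uniform sampling of pairs are our simplification):

```python
import random

def pairwise_error(real, pred, trials=10000, seed=0):
    """Estimate p(err|real, pred) as defined above: draw random position
    pairs; a miss if the pair is really split but predicted together, a
    false alarm if the pair is really together but predicted split. Both
    error kinds count toward the rate."""
    rng = random.Random(seed)
    n, errors = len(real), 0
    for _ in range(trials):
        i, j = rng.randrange(n), rng.randrange(n)
        if (real[i] == real[j]) != (pred[i] == pred[j]):
            errors += 1
    return errors / trials

perfect = pairwise_error([0, 0, 1, 1], [0, 0, 1, 1])  # exact match: error 0
shifted = pairwise_error([0, 0, 1, 1], [0, 1, 1, 1])  # misplaced boundary: error > 0
```

A perfect segmentation scores 0, and shifting a boundary by one sentence yields a small positive rate, so the measure penalizes near misses proportionally rather than all-or-nothing.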
Table 1: Average Error Rates of Single-document Segmentation Given Segment Numbers Known

Range of n   3-11    3-5     6-8     9-11
Sample size  400     100     100     100
C99          12%     11%     10%     9%
U00          10%     9%      7%      5%
ADDP03       6.0%    6.8%    5.2%    4.3%
MIl          4.68%   5.57%   2.59%   1.59%
WMIl         4.94%   6.33%   2.76%   1.62%
MI100        9.62%   12.92%  8.66%   6.67%

Table 2: Single-document Segmentation: P-values of T-test on Error Rates

Range of n     3-11   3-5    6-8    9-11
ADDP03, MIl    0.000  0.000  0.000  0.000
ADDP03, WMIl   0.000  0.099  0.000  0.000
MIl, WMIl      0.061  0.132  0.526  0.898

From the p-values, we can see that most of the differences are very significant. We also compared the error rates of our two methods using a two-sample two-sided t-test to check the hypothesis that they are equal. We cannot reject this hypothesis, so the differences are not significant, even though all the error rates for MIl are smaller than those for WMIl. Thus, we conclude that term weights contribute little in single-document segmentation. The results also show that MI using term co-clustering (k = 100) decreases the performance. We tested different numbers of term clusters and found that the performance improves as the cluster number increases toward l. WMIk θ, where Sd is the set of sentences of d, and |Sd| is the number of sentences of d, then d and d' have the shared topic.

For a pair of documents selected randomly, the error rate is computed using the following equation:

p(err|real, pred) = p(miss|real, pred, same) p(same|real) + p(false alarm|real, pred, diff) p(diff|real),

where a miss means that the two documents have the same topic (same) in the real case (real) but are predicted (pred) as being on different topics. If they are on different topics (diff) but are predicted as being on the same topic, it is a false alarm.

5.2.2 Experiment Results

The results are shown in Table 3. If most documents have different topics, then in WMIl the estimation of term weights in Equation (3) is not accurate.
Thus, WMIl is not expected to perform better than MIl when most documents have different topics. When there are fewer documents in a subset with the same number of topics, more documents have different topics, so WMIl is worse than MIl by a larger margin. We can see that in most cases MIl has a better (or at least similar) performance compared with LDA. After shared topic detection, multi-document segmentation of the documents with shared topics can be executed.

5.3 Multi-document Segmentation

5.3.1 Test Data and Evaluation

For multi-document segmentation and alignment, our goal is to identify the segments about the same topic among multiple similar documents with shared topics. Using Iw is expected to perform better than using I, since without term weights the result is seriously affected by document-dependent stop words and noisy words, which depend on personal writing style. Under the effect of document-dependent stop words and noisy words, it is more likely that the same segments of different documents are treated as different segments. Term weights can reduce this effect by giving cue terms more weight.

The data set for multi-document segmentation and alignment has 102 samples and 2,264 sentences in total. Each is the introduction part of a lab report selected from the course Biol 240W at Pennsylvania State University. Each sample has two segments, an introduction to plant hormones and the content of the lab. The lengths of the samples range from two to 56 sentences. Some samples have only one part, and some have these two segments in reverse order. It is not hard for a human to identify the boundary between the two segments.
We labelled each sentence manually for evaluation. The evaluation criterion is simply the proportion of sentences with wrongly predicted segment labels among the total number of sentences in the whole training set:

p(error|predicted, real) = Σ_{d∈D} Σ_{s∈Sd} 1(predicted_s ≠ real_s) / Σ_{d∈D} nd.

Table 4: Average Error Rates of Multi-document Segmentation Given Segment Numbers Known

#Doc   MIl      WMIl     k    MIk      WMIk
102    3.14%    2.78%    300  4.68%    6.58%
51     4.17%    3.63%    300  17.83%   22.84%
34     5.06%    4.12%    300  18.75%   20.95%
20     7.08%    5.42%    250  20.40%   21.83%
10     10.38%   7.89%    250  21.42%   21.91%
5      15.77%   11.64%   250  21.89%   22.59%
2      25.90%   23.18%   50   25.44%   25.49%
1      23.90%   24.82%   25   25.75%   26.15%

Table 5: Multi-document Segmentation: P-values of T-test on Error Rates for MIl and WMIl

#Doc     51     34     20     10     5      2
P-value  0.19   0.101  0.025  0.001  0.000  0.002

In order to show the benefits of multi-document segmentation and alignment, we compared our method with different parameters on different partitions of the same training set. Except for the cases where the number of documents is 102 or one (the special cases of using the whole set and of pure single-document segmentation), we randomly divided the training set into m partitions, each with 51, 34, 20, 10, 5, or 2 document samples. We then applied our methods on each partition and calculated the error rate over the whole training set. Each case was repeated ten times to compute the average error rates. Different k values are used for different partitions of the training set, since the number of terms increases with the number of documents in each partition.

5.3.2 Experiment Results

From the experiment results in Table 4, we can make the following observations: (1) When the number of documents increases, all methods perform better; only from one to two documents does MIl decrease slightly.
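The sentence-level error rate defined above is straightforward to compute. A minimal sketch (data layout and names are ours, not the paper's):

```python
def sentence_error_rate(real, pred):
    """Fraction of sentences with a wrong predicted segment label,
    summed over all documents: sum_d sum_s 1(pred != real) / sum_d n_d.

    real/pred map a document id to its per-sentence segment labels.
    """
    wrong = sum(p != r
                for d in real
                for r, p in zip(real[d], pred[d]))
    total = sum(len(labels) for labels in real.values())
    return wrong / total
```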
We can observe this in Figure 3 at the point document number = 2; most curves even have their worst results at this point. There are two reasons. First, samples vote for the best multi-document segmentation and alignment, but if only two documents are compared with each other, the one with missing segments or a totally different sequence will affect the correct segmentation and alignment of the other. Second, as noted at the beginning of this section, if two documents have more document-dependent stop words or noisy words than cue words, the algorithm may view them as two different segments, with the other segment missing. Generally, we can only expect a better performance when the number of documents is larger than the number of segments. (2) Except for single-document segmentation, WMIl is always better than MIl, and when the number of documents approaches one or grows very large, their performances become closer. Table 5 shows the p-values of a two-sample one-sided t-test between MIl and WMIl, and this trend is also visible in the p-values. At document number = 5, we reach the smallest p-value and the largest difference between the error rates of MIl and WMIl.

Table 6: Multi-document Segmentation: Average Error Rate for Document Number = 5 in Each Subset with Different Numbers of Term Clusters

#Cluster  75      100     150     250     l
MIk       24.67%  24.54%  23.91%  22.59%  15.77%

For single-document segmentation, WMIl is even slightly worse than MIl, which is similar to the results of single-document segmentation on the first data set. The reason is that for single-document segmentation we cannot estimate term weights accurately, since multiple documents are unavailable. (3) Using term clustering usually gives worse results than MIl and WMIl. (4) Using term clustering in WMIk is even worse than in MIk, since in WMIk term clusters are found first using I before using Iw.
If the term clusters are not correct, then the term weights are estimated poorly, which may mislead the algorithm and yield even worse results. From the results we also found that in multi-document segmentation and alignment, most documents with missing segments or a reversed order are identified correctly.

Table 6 illustrates the experiment results for the case of 20 partitions of the training set (each with five document samples), with topic segmentation and alignment using MIk with different numbers of term clusters k. Notice that as the number of term clusters increases, the error rate becomes smaller; without term clustering, we obtain the best result. We do not show results for WMIk with term clustering, but they are similar.

We also tested WMIl with different hyper-parameters a and b for adjusting term weights. The results are presented in Figure 3. The default case WMIl: a = 1, b = 1 gave the best results for different partitions of the training set. We can see the trend that when the document number is very small or very large, the difference between MIl: a = 0, b = 0 and WMIl: a = 1, b = 1 becomes quite small. When the document number is not large (about 2 to 10), all the cases using term weights perform better than MIl: a = 0, b = 0 without term weights, but when the document number becomes larger, the cases WMIl: a = 1, b = 0 and WMIl: a = 2, b = 1 become worse than MIl: a = 0, b = 0. When the document number becomes very large, they are even worse than the cases with small document numbers. This means that a proper way of estimating term weights for the WMI criterion is very important. Figure 4 shows the term weights learned from the whole training set. Four types of words can be roughly categorized, even though the transitions among them are subtle. Figure 5 illustrates the change in (weighted) mutual information for MIl and WMIl.
As expected, the mutual information for MIl increases monotonically with the number of steps, while that for WMIl does not. Finally, MIl and WMIl are scalable, with the computational complexity shown in Figure 6.

One advantage of our MI-based approach is that removing stop words is not required. Another important advantage is that no hyper-parameters need to be adjusted: in single-document segmentation, the performance based on MI is even better than that based on WMI, so no extra hyper-parameter is required, and in multi-document segmentation we showed in the experiments that a = 1 and b = 1 is best. Our method gives more weight to cue terms. However, cue terms or sentences usually appear at the beginning of a segment, while the end of a segment may be quite noisy. One possible solution is to give more weight to terms at the beginning of each segment. Moreover, when the lengths of segments are quite different, long segments have much higher term frequencies, so they may dominate the segmentation boundaries. Normalizing term frequencies by segment length may be useful.

[Figure 3: Error rates for different hyper-parameters of term weights. Figure 4: Term weights learned from the whole training set (normalized segment entropy vs. normalized document entropy: noisy words, cue words, common stop words, document-dependent stop words). Figure 5: Change in (weighted) MI for MIl and WMIl. Figure 6: Time to converge for MIl and WMIl.]

6.
CONCLUSIONS AND FUTURE WORK

We proposed a novel method for multi-document topic segmentation and alignment based on weighted mutual information, which can also handle single-document cases, and we used dynamic programming to optimize our algorithm. Our approach outperforms all the previous methods on single-document cases. Moreover, we also showed that segmenting multiple documents together can improve the performance tremendously. Our results also illustrated that weighted mutual information can utilize the information in multiple documents to reach a better performance.

We only tested our method on limited data sets; more data sets, especially complicated ones, should be tested, and more previous methods should be compared against. Moreover, natural segmentations such as paragraphs are hints that can be used to find the optimal boundaries. Supervised learning can also be considered.

7. ACKNOWLEDGMENTS

The authors want to thank Xiang Ji and Prof. J. Scott Payne for their help.

8. REFERENCES

[1] A. Banerjee, I. Dhillon, J. Ghosh, S. Merugu, and D. Modha. A generalized maximum entropy approach to Bregman co-clustering and matrix approximation. In Proceedings of SIGKDD, 2004.
[2] R. Bekkerman, R. El-Yaniv, and A. McCallum. Multi-way distributional clustering via pairwise interactions. In Proceedings of ICML, 2005.
[3] D. M. Blei and P. J. Moreno. Topic segmentation with an aspect hidden Markov model. In Proceedings of SIGIR, 2001.
[4] D. M. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[5] T. Brants, F. Chen, and I. Tsochantaridis. Topic-based document segmentation with probabilistic latent semantic analysis. In Proceedings of CIKM, 2002.
[6] F. Choi. Advances in domain independent linear text segmentation. In Proceedings of the NAACL, 2000.
[7] H. Christensen, B. Kolluru, Y. Gotoh, and S. Renals. Maximum entropy segmentation of broadcast news.
In Proceedings of ICASSP, 2005.
[8] T. Cover and J. Thomas. Elements of Information Theory. John Wiley and Sons, New York, USA, 1991.
[9] S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Systems, 1990.
[10] I. Dhillon, S. Mallela, and D. Modha. Information-theoretic co-clustering. In Proceedings of SIGKDD, 2003.
[11] M. Hajime, H. Takeo, and O. Manabu. Text segmentation with multiple surface linguistic cues. In Proceedings of COLING-ACL, 1998.
[12] T. K. Ho. Stop word location and identification for adaptive text recognition. International Journal of Document Analysis and Recognition, 3(1), August 2000.
[13] T. Hofmann. Probabilistic latent semantic analysis. In Proceedings of UAI'99, 1999.
[14] X. Ji and H. Zha. Correlating summarization of a pair of multilingual documents. In Proceedings of RIDE, 2003.
[15] X. Ji and H. Zha. Domain-independent text segmentation using anisotropic diffusion and dynamic programming. In Proceedings of SIGIR, 2003.
[16] X. Ji and H. Zha. Extracting shared topics of multiple documents. In Proceedings of the 7th PAKDD, 2003.
[17] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, 2001.
[18] T. Li, S. Ma, and M. Ogihara. Entropy-based criterion in categorical clustering. In Proceedings of ICML, 2004.
[19] A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of ICML, 2000.
[20] L. Pevzner and M. Hearst. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19-36, 2002.
[21] J. C. Reynar. Statistical models for topic segmentation. In Proceedings of ACL, 1999.
[22] G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw Hill, 1983.
[23] B.
Sun, Q. Tan, P. Mitra, and C. L. Giles. Extraction and search of chemical formulae in text documents on the web. In Proceedings of WWW, 2007.
[24] B. Sun, D. Zhou, H. Zha, and J. Yen. Multi-task text segmentation and alignment based on weighted mutual information. In Proceedings of CIKM, 2006.
[25] M. Utiyama and H. Isahara. A statistical model for domain-independent text segmentation. In Proceedings of the 39th ACL, 1999.
[26] C. Wayne. Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation. In Proceedings of LREC, 2000.
[27] J. Yamron, I. Carp, L. Gillick, S. Lowe, and P. van Mulbregt. A hidden Markov model approach to text segmentation and event tracking. In Proceedings of ICASSP, 1998.
[28] H. Zha and X. Ji. Correlating multilingual documents via bipartite graph modeling. In Proceedings of SIGIR, 2002.
Analyzing Feature Trajectories for Event Detection

Abstract: We consider the problem of analyzing word trajectories in both the time and frequency domains, with the specific goal of identifying important and less-reported, periodic and aperiodic words. A set of words with identical trends can be grouped together to reconstruct an event in a completely unsupervised manner. The document frequency of each word across time is treated like a time series, where each element is the document frequency - inverse document frequency (DFIDF) score at one time point. In this paper, we 1) first apply spectral analysis to categorize features for different event characteristics: important and less-reported, periodic and aperiodic; 2) model aperiodic features with a Gaussian density and periodic features with Gaussian mixture densities, and subsequently detect each feature's bursts with the truncated Gaussian approach; 3) propose an unsupervised greedy event detection algorithm to detect both aperiodic and periodic events. All of the above methods can be applied to time series data in general. We extensively evaluated our methods on the 1-year Reuters News Corpus [3] and showed that they were able to uncover meaningful aperiodic and periodic events.

1. INTRODUCTION

There are more than 4,000 online news sources in the world. Manually monitoring all of them for important events has become difficult or practically impossible. In fact, the topic detection and tracking (TDT) community has for many years been trying to come up with a practical solution to help people monitor news effectively. Unfortunately, the holy grail is still elusive, because the vast majority of TDT solutions proposed for event detection [20, 5, 17, 4, 21, 7, 14, 10] are either too simplistic (based on cosine similarity [5]) or impractical due to the need to tune a large number of parameters [9].
The ineffectiveness of current TDT technologies can be easily illustrated by subscribing to any of the many online news alert services, such as the industry-leading Google News Alerts [2], which generates more than 50% false alarms [10]. As further proof, portals like Yahoo take a more pragmatic approach by requiring all machine-generated news alerts to go through a human operator for confirmation before being sent out to subscribers.

Instead of attacking the problem with variations of the same hammer (cosine similarity and TFIDF), a fundamental understanding of the characteristics of news stream data is necessary before any major breakthroughs can be made in TDT. Thus, in this paper, we look at news stories and feature trends from the perspective of analyzing a time-series word signal. Previous work like [9] has attempted to reconstruct an event with its representative features. However, in many predictive event detection tasks (i.e., retrospective event detection), there is a vast set of potential features for only a fixed set of observations (i.e., the obvious bursts). Of these features, often only a small number are expected to be useful. In particular, we study the novel problem of analyzing feature trajectories for event detection, borrowing a well-known technique from signal processing: identifying distributional correlations among all features by spectral analysis. To evaluate our method, we subsequently propose an unsupervised event detection algorithm for news streams.

[Figure 1: Feature correlation (DFIDF:time) between (a) Easter and April (an aperiodic event) and (b) Unaudited and Ended (a periodic event), over 8/20/1996 - 7/16/1997.]

As an illustrative example, consider the correlation between the words Easter and April from the Reuters Corpus.
From the plot of their normalized DFIDF in Figure 1(a), we observe the heavy overlap between the two words circa 04/1997, which means they probably both belong to the same event during that time (Easter feast). In this example, the hidden event Easter feast is a typical important aperiodic event over the 1-year data. (The Reuters Corpus is the default dataset for all examples.) Another example is given by Figure 1(b), where both the words Unaudited and Ended exhibit similar behaviour over periods of 3 months. These two words actually originated from the same periodic event, net income-loss reports, which are released quarterly by publicly listed companies.

Other observations drawn from Figure 1 are: 1) the bursty period of April is much longer than that of Easter, which suggests that April may occur in other events during the same period; 2) Unaudited has a higher average DFIDF value than Ended, which indicates that Unaudited is more representative of the underlying event. These two examples are but the tip of the iceberg among all the word trends and correlations hidden in a news stream like Reuters. If a large number of them can be uncovered, it could significantly aid TDT tasks. In particular, it indicates the significance of mining correlated features for detecting the corresponding events. To summarize, we postulate that: 1) an event is described by its representative features; a periodic event has a list of periodic features and an aperiodic event has a list of aperiodic features; 2) representative features from the same event share similar distributions over time and are highly correlated; 3) an important event has a set of active (largely reported) representative features, whereas an unimportant event has a set of inactive (less-reported) representative features; 4) a feature may be included in several events with overlaps in time frames.
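Postulate 2 above, that features of the same event are highly correlated over time, suggests a simple check on any two trajectories. A minimal sketch (the function name and toy trajectories are ours, not the paper's method):

```python
import numpy as np

def trajectory_correlation(y_a, y_b):
    """Pearson correlation between two DFIDF trajectories; features
    with strongly overlapping bursts (like Easter and April) should
    score near 1, while unrelated features should score low."""
    return float(np.corrcoef(y_a, y_b)[0, 1])
```

For instance, two trajectories bursting around the same day correlate near 1, while trajectories with disjoint bursts do not.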
Based on these observations, we can either mine representative features given an event or detect an event from a list of highly correlated features. In this paper, we focus on the latter, i.e., how correlated features can be uncovered to form an event in an unsupervised manner.

1.1 Contributions

This paper has three main contributions:

• To the best of our knowledge, our approach is the first to categorize word features for heterogeneous events. Specifically, every word feature is categorized into one of the following five feature types based on its power spectrum strength and periodicity: 1) HH (high power and high/long periodicity): important aperiodic events; 2) HL (high power and low periodicity): important periodic events; 3) LH (low power and high periodicity): unimportant aperiodic events; 4) LL (low power and low periodicity): non-events; and 5) SW (stopwords), a higher-power and higher-periodicity subset of LL comprising stopwords, which contains no information.

• We propose a simple and effective mixture density-based approach to model and detect feature bursts.

• We come up with an unsupervised event detection algorithm to detect both aperiodic and periodic events. Our algorithm has been evaluated on a real news stream to show its effectiveness.

2. RELATED WORK

This work is largely motivated by a broader family of problems collectively known as Topic Detection and Tracking (TDT) [20, 5, 17, 4, 21, 7, 14, 10]. Most TDT research so far has been concerned with clustering/classifying documents into topic types, identifying novel sentences [6] for new events, etc., without much regard to analyzing the word trajectory with respect to time. Swan and Allan [18] first attempted to use co-occurring terms to construct an event. However, they only considered named entities and noun phrase pairs, without considering their periodicities.
On the contrary, our paper considers all of the above.

Recently, there has been significant interest in modeling an event in text streams as a burst of activities by incorporating temporal information. Kleinberg's seminal work described how bursty features can be extracted from text streams using an infinite automaton model [12], which inspired a whole series of applications, such as Kumar's identification of bursty communities from Weblog graphs [13], Mei's summarization of evolutionary themes in text streams [15], and He's clustering of text streams using bursty features [11]. Nevertheless, none of the existing work specifically identified features for events, except for Fung et al. [9], who clustered bursty features to identify various bursty events. Our work differs from [9] in several ways: 1) we analyze every single feature, not only bursty features; 2) we classify features along two categorical dimensions (periodicity and power), yielding altogether five primary feature types; 3) we do not restrict each feature to belong exclusively to only one event.

Spectral analysis techniques have previously been used by Vlachos et al. [19] to identify periodicities and bursts from query logs. Their focus was on detecting multiple periodicities from the power spectrum graph, which were then used to index words for query-by-burst search. In this paper, we use spectral analysis to classify word features along two dimensions, namely periodicity and power spectrum, with the ultimate goal of identifying both periodic and aperiodic bursty events.

3.
DATA REPRESENTATION

Let T be the duration/period (in days) of a news stream, and let F represent the complete word feature space in the classical static Vector Space Model (VSM).

3.1 Event Periodicity Classification

Within T, there may exist certain events that occur only once, e.g., Tony Blair being elected Prime Minister of the U.K., and other recurring events of various periodicities, e.g., weekly soccer matches. We thus categorize all events into two types, aperiodic and periodic, defined as follows.

Definition 1. (Aperiodic Event) An event is aperiodic within T if it only happens once.

Definition 2. (Periodic Event) If events of a certain event genre occur regularly with a fixed periodicity P ≤ T/2, we say that this particular event genre is periodic, with each member event qualified as a periodic event.

Note that the definition of aperiodic is relative, i.e., it is true only for a given T and may be invalid for any other T' > T. For example, the event Christmas feast is aperiodic for T ≤ 365 but periodic for T ≥ 730.

3.2 Representative Features

Intuitively, an event can be described very concisely by a few discriminative and representative word features, and vice versa; e.g., hurricane, sweep, and strike could be representative features of a Hurricane genre event. Likewise, a set of strongly correlated features could be used to reconstruct an event description, assuming that strongly correlated features are representative. The representation vector of a word feature is defined as follows:

Definition 3. (Feature Trajectory) The trajectory of a word feature f can be written as the sequence yf = [yf(1), yf(2), . . .
, yf(T)], where each element yf(t) is a measure of feature f at time t, defined using the normalized DFIDF score:

yf(t) = (DFf(t) / N(t)) × log(N / DFf),

where DFf(t) is the number of documents (local DF) containing feature f on day t, DFf is the total number of documents (global DF) containing feature f over T, N(t) is the number of documents on day t, and N is the total number of documents over T.

4. IDENTIFYING FEATURES FOR EVENTS

In this section, we show how representative features can be extracted for (un)important and (a)periodic events.

4.1 Spectral Analysis for Dominant Period

Given a feature f, we decompose its feature trajectory yf = [yf(1), yf(2), ..., yf(T)] into a sequence of T complex numbers [X_1, ..., X_T] via the discrete Fourier transform (DFT):

X_k = Σ_{t=1}^{T} yf(t) e^(-(2πi/T)(k-1)t),  k = 1, 2, ..., T.

The DFT represents the original time series as a linear combination of complex sinusoids, as shown by the inverse discrete Fourier transform (IDFT):

yf(t) = (1/T) Σ_{k=1}^{T} X_k e^((2πi/T)(k-1)t),  t = 1, 2, ..., T,

where the Fourier coefficient X_k denotes the amplitude of the sinusoid with frequency k/T.

The original trajectory can be reconstructed with just the dominant frequencies, which can be determined from the power spectrum using the popular periodogram estimator. The periodogram is the sequence of squared magnitudes of the Fourier coefficients, ||X_k||², k = 1, 2, ..., T/2, which indicates the signal power at frequency k/T in the spectrum. From the power spectrum, the dominant period is chosen as the inverse of the frequency with the highest power spectrum, as follows.

Definition 4. (Dominant Period) The dominant period (DP) of a given feature f is Pf = T / arg max_k ||X_k||².

Accordingly, we have Definition 5.
(Dominant Power Spectrum) The dominant power spectrum (DPS) of a given feature f is Sf = ||X_k||², with ||X_k||² ≥ ||X_j||², ∀j ≠ k.

4.2 Categorizing Features

The DPS of a feature trajectory is a strong indicator of its activeness at the specified frequency; the higher the DPS, the more likely the feature is to be bursty. Combining DPS with DP, we categorize all features into four types (we normalize yf(t) as yf(t) = yf(t) / Σ_{i=1}^{T} yf(i) so that it can be interpreted as a probability):

• HH: high Sf, aperiodic or long-term periodic (Pf > T/2);
• HL: high Sf, short-term periodic (Pf ≤ T/2);
• LH: low Sf, aperiodic or long-term periodic;
• LL: low Sf, short-term periodic.

The boundary between long-term and short-term periodic is set to T/2. However, distinguishing between a high and a low DPS is not straightforward; this will be tackled later.

Properties of Different Feature Sets

To better understand the properties of HH, HL, LH, and LL, we select four features, Christmas, soccer, DBS, and your, as illustrative examples. Since the boundary between high and low power spectrum is unclear, these chosen examples have a relatively wide range of power spectrum values. Figure 2(a) shows the DFIDF trajectory for Christmas, with a distinct burst around Christmas day. For the 1-year Reuters dataset, Christmas is classified as a typical aperiodic event with Pf = 365 and Sf = 135.68, as shown in Figure 2(b).
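The DFIDF trajectory, dominant period, and dominant power spectrum defined above can be sketched with numpy as follows (a minimal sketch under our own function names and toy inputs; the periodogram is taken at frequencies k/T, k = 1, ..., T/2, as in Definitions 4 and 5):

```python
import numpy as np

def dfidf_trajectory(df_t, n_t):
    """Normalized DFIDF trajectory: df_t[t] = docs containing f on day t
    (local DF), n_t[t] = total docs on day t. Normalized to sum to 1
    so it can be read as a probability over days."""
    df_t, n_t = np.asarray(df_t, float), np.asarray(n_t, float)
    y = (df_t / n_t) * np.log(n_t.sum() / df_t.sum())
    return y / y.sum()

def dominant_period_and_power(y):
    """Dominant period Pf = T / argmax_k ||X_k||^2 and dominant power
    spectrum Sf, over frequencies k/T for k = 1, ..., floor(T/2)."""
    T = len(y)
    X = np.fft.fft(y)
    spectrum = np.abs(X[1:T // 2 + 1]) ** 2  # skip the DC term
    k = int(np.argmax(spectrum)) + 1
    return T / k, float(spectrum[k - 1])
```

For example, a trajectory with a clean weekly oscillation over T = 70 days yields a dominant period of 7, matching the soccer example.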
Clearly, the value of Sf = 135.68 is reasonable for a well-known bursty event like Christmas.

[Figure 2: Feature Christmas with relatively high Sf and long-term Pf: (a) DFIDF over time; (b) power spectrum, P = 365, S = 135.68.]

The DFIDF trajectory for soccer is shown in Figure 3(a), from which we can observe a regular burst every 7 days; this is verified by its computed value of Pf = 7, as shown in Figure 3(b). Using the domain knowledge that soccer has more matches every Saturday, which makes it a typical and heavily reported periodic event, we consider the value of Sf = 155.13 to be high.

[Figure 3: Feature soccer with relatively high Sf and short-term Pf: (a) DFIDF over time; (b) power spectrum, P = 7, S = 155.13.]

From the DFIDF trajectory for DBS in Figure 4(a), we can immediately deduce DBS to be an infrequent word with a trivial burst on 08/17/1997 corresponding to DBS Land Raffles Holdings plans. This is confirmed by the long period of Pf = 365 and the low power of Sf = 0.3084 shown in Figure 4(b). Moreover, since this aperiodic event is only reported in a few news stories over a very short time of a few days, we say that its low power value of Sf = 0.3084 is representative of unimportant events.

The most confusing example is shown in Figure 5 for the word feature your, which looks very similar to the graph for soccer in Figure 3. At first glance, we may be tempted to group both your and soccer into the same category of HL or LL, since both distributions look similar and have the same dominant period of approximately a week. However, further
However, further analysis indicates that the periodicity of your is due to the differences in document counts between weekdays (average 2,919 per day) and weekends³ (average 479 per day). One would have expected the periodicity of a stopword like your to be a day. Moreover, despite our DFIDF normalization, the weekday/weekend imbalance still prevailed; stopwords occur 4 times more frequently on weekends than on weekdays. Thus, the DPS remains the only distinguishing factor between your (Sf = 9.42) and soccer (Sf = 155.13). However, it is very dangerous to simply conclude that a power value of S = 9.42 corresponds to a stopword feature.
[Figure 4: Feature DBS with relatively low Sf and long-term Pf. (a) DBS (DFIDF : time); (b) DBS (S : frequency), P = 365, S = 0.3084.]
[Figure 5: Feature your as an example easily confused with feature soccer. (a) your (DFIDF : time); (b) your (S : frequency), P = 7, S = 9.42.]
Before introducing our solution to this problem, let's look at another LL example, shown in Figure 6 for beenb, which is actually a confirmed typo. We therefore classify beenb as a noisy feature that does not contribute to any event. Clearly, the trajectory of your is very different from that of beenb, which means that the former has to be considered separately.
[Figure 6: Feature beenb with relatively low Sf and short-term Pf. (a) beenb (DFIDF : time); (b) beenb (S : frequency), P = 8, S = 1.20E-05.]
Stop Words (SW) Feature Set
Based on the above analysis, we realize that there must be another feature set between HL and LL that corresponds to the set of stopwords.
Features from this set have moderate DPS and a low but known dominant period. Since it is hard to distinguish this feature set from HL and LL based on DPS alone, we introduce another factor, the average DFIDF (DFIDF). As shown in Figure 5, features like your usually have a lower DPS than an HL feature like soccer, but a much higher DFIDF than an LL noisy feature such as beenb. Since such properties are usually characteristic of stopwords, we group features like your into the newly defined stopword (SW) feature set.
Since setting the DPS and DFIDF thresholds for identifying stopwords is more of an art than a science, we propose a heuristic algorithm, HS (Algorithm 1). The basic idea is to use only news stories from weekdays to identify stopwords.
³The weekends here also include public holidays falling on weekdays.
The SW set is initially seeded with a small set of 29 popular stopwords utilized by the Google search engine.
Algorithm 1 Heuristic Stopwords detection (HS)
Input: Seed SW set, weekday trajectories of all words
1: From the seed set SW, compute the maximum DPS as UDPS, the maximum DFIDF as UDFIDF, and the minimum DFIDF as LDFIDF.
2: for fi ∈ F do
3:   Compute the DFT for fi.
4:   if Sfi ≤ UDPS and DFIDFfi ∈ [LDFIDF, UDFIDF] then
5:     fi → SW
6:     F = F − fi
7:   end if
8: end for
Overview of Feature Categorization
After the SW set is generated, all stopwords are removed from F. We then set the boundary between high and low DPS to be the upper bound of the SW set's DPS. An overview of all five feature sets is shown in Figure 7.
Figure 7: The 5 feature sets for events.
5. IDENTIFYING BURSTS FOR FEATURES
Since only features from HH, HL and LH are meaningful and could potentially be representative of some events, we prune all other features, classified as LL or SW. In this section, we describe how bursts can be identified from the remaining features.
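The thresholding at the heart of Algorithm 1 can be sketched as follows. This is our own minimal illustration; the seed statistics and candidate values are made up (loosely echoing the numbers reported later in Section 7.2), and `dps`/`avg_dfidf` stand for Sf and the average DFIDF computed on weekday trajectories.

```python
def heuristic_stopwords(seed_stats, features):
    """Heuristic Stopwords detection (HS), after Algorithm 1.
    seed_stats: (dps, avg_dfidf) pairs for the seed stopword set.
    features:   dict mapping each candidate word to its (dps, avg_dfidf),
                both computed on weekday trajectories only.
    Returns the detected stopword set; callers then remove it from F."""
    udps = max(dps for dps, _ in seed_stats)    # UDPS: maximum seed DPS
    udfidf = max(d for _, d in seed_stats)      # UDFIDF: maximum seed avg DFIDF
    ldfidf = min(d for _, d in seed_stats)      # LDFIDF: minimum seed avg DFIDF
    return {w for w, (dps, avg) in features.items()
            if dps <= udps and ldfidf <= avg <= udfidf}

# Made-up seed statistics and candidates echoing the worked examples above:
seeds = [(11.18, 0.3691), (1.5, 0.1182)]
candidates = {"your": (9.42, 0.20), "soccer": (155.13, 0.30), "beenb": (1.2e-5, 0.001)}
detected = heuristic_stopwords(seeds, candidates)
```

Here your is caught (low DPS, stopword-like average DFIDF), while soccer escapes on its high DPS and the noise word beenb on its negligible average DFIDF.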
Unlike Kleinberg's burst identification algorithm [12], we can identify both significant and trivial bursts without the need to set any parameters.
5.1 Detecting Aperiodic Features' Bursts
For each feature in HH and LH, we truncate its trajectory by keeping only the bursty period, which is modeled with a Gaussian distribution. For example, Figure 8 shows the word feature Iraq with a burst circa 09/06/1996 being modeled as a Gaussian. Its bursty period is defined by [μf − σf, μf + σf], as shown in Figure 8(b).
[Figure 8: Modeling Iraq's time series as a truncated Gaussian with μ = 09/06/1996 and σ = 6.26. (a) original DFIDF : time; (b) identifying the burst [μ − σ, μ + σ].]
5.2 Detecting Periodic Features' Bursts
Since we have computed the DP for a periodic feature f, we can easily model its periodic feature trajectory yf using a mixture of K = ⌊T/Pf⌋ Gaussians:
f(yf = yf(t) | θf) = Σ_{k=1}^{K} αk · (1/√(2πσk²)) · exp(−(yf(t) − μk)² / (2σk²)),
where the parameter set θf = {αk, μk, σk}, k = 1, ..., K, comprises:
• αk, the probability of assigning yf(t) to the kth Gaussian, with αk > 0 for all k ∈ [1, K] and Σ_{k=1}^{K} αk = 1;
• μk and σk, the mean and standard deviation of the kth Gaussian.
The well-known Expectation Maximization (EM) [8] algorithm is used to compute the mixing proportions αk, as well as the individual Gaussian density parameters μk and σk. Each Gaussian represents one periodic event, and is modeled in the same way as in Section 5.1.
6.
EVENTS FROM FEATURES
After identifying and modeling bursts for all features, the next task is to paint a picture of each event with a potential set of representative features.
6.1 Feature Correlation
If two features fi and fj are representative of the same event, they must satisfy the following necessary conditions:
1. fi and fj are identically distributed: yfi ∼ yfj.
2. fi and fj have a high document overlap.
Measuring Feature Distribution Similarity
We measure the similarity between two features fi and fj using the discrete KL-divergence, defined as follows.
Definition 6. (feature similarity) KL(fi, fj) is given by max(KL(fi|fj), KL(fj|fi)), where
KL(fi|fj) = Σ_{t=1}^{T} f(yfi(t)|θfi) log [ f(yfi(t)|θfi) / f(yfj(t)|θfj) ]. (1)
Since the KL-divergence is not symmetric, we define the similarity between fi and fj as the maximum of KL(fi|fj) and KL(fj|fi). Further, the similarity between two aperiodic features can be computed using a closed form of the KL-divergence [16]. The same discrete KL-divergence formula of Eq. 1 is employed to compute the similarity between two periodic features.
Next, we define the overall similarity among a set of features R using the maximum inter-feature KL-divergence value, as follows.
Definition 7. (set similarity) KL(R) = max_{fi,fj∈R} KL(fi, fj).
Document Overlap
Let Mi be the set of all documents containing feature fi. Given two features fi and fj, the overlapping document set containing both features is Mi ∩ Mj. Intuitively, the higher |Mi ∩ Mj| is, the more likely that fi and fj are highly correlated. We define the degree of document overlap between two features fi and fj as follows.
Definition 8. (Feature DF Overlap) d(fi, fj) = |Mi ∩ Mj| / min(|Mi|, |Mj|).
Accordingly, the DF Overlap among a set of features R is also defined.
Definition 9.
(Set DF Overlap) d(R) = min_{fi,fj∈R} d(fi, fj).
6.2 Unsupervised Greedy Event Detection
We use features from HH to detect important aperiodic events, features from LH to detect less-reported/unimportant aperiodic events, and features from HL to detect periodic events. All of them share the same algorithm. Given a bursty feature fi ∈ HH, the goal is to find highly correlated features from HH. The set of features similar to fi can then collectively describe an event. Specifically, we need to find a subset Ri of HH that minimizes the following cost function:
C(Ri) = KL(Ri) / (d(Ri) Σ_{fj∈Ri} Sfj),  Ri ⊂ HH. (2)
The underlying event e (associated with the burst of fi) can be represented by Ri as
y(e) = Σ_{fj∈Ri} (Sfj / Σ_{fu∈Ri} Sfu) · yfj. (3)
The burst analysis for event e is exactly the same as for a feature trajectory.
The cost in Eq. 2 can be minimized using our unsupervised greedy (UG) event detection algorithm, described in Algorithm 2. The UG algorithm allows a feature to be contained in multiple events, so that we can detect several events happening at the same time. Furthermore, trivial events containing only year/month features (e.g., an event containing only the single feature Aug could be identified over a 1-year news stream) could be removed, although such events have an inherently high cost and should already be ranked very low.
Algorithm 2 Unsupervised Greedy event detection (UG)
Input: HH, document index for each feature.
1: Sort and select features in descending DPS order: Sf1 ≥ Sf2 ≥ . . . ≥ Sf|HH|.
2: k = 0.
3: for fi ∈ HH do
4:   k = k + 1.
5:   Init: Ri ← {fi}, C(Ri) = 1/Sfi and HH = HH − fi.
6:   while HH not empty do
7:     m = arg min_m C(Ri ∪ fm).
8:     if C(Ri ∪ fm) < C(Ri) then
9:       Ri ← Ri ∪ fm and HH = HH − fm.
10:    else
11:      break while.
12:    end if
13:  end while
14:  Output ek as in Eq. 3.
15: end for
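A compact sketch of the cost function of Eq. 2 and the greedy loop of Algorithm 2 follows. This is our own illustration: pairwise KL divergences and DF overlaps are assumed precomputed and passed in, and this sketch removes a feature once it joins an event, whereas the paper's variant can let features participate in multiple events.

```python
def set_cost(R, S, kl, d):
    """C(R) = KL(R) / (d(R) * sum of DPS over R), per Eq. 2, where KL(R) is
    the maximum pairwise divergence and d(R) the minimum pairwise DF overlap
    (Definitions 7 and 9). A singleton's cost is 1/S_f, as in Algorithm 2."""
    if len(R) == 1:
        return 1.0 / S[next(iter(R))]
    pairs = [frozenset((a, b)) for a in R for b in R if a < b]
    return max(kl[p] for p in pairs) / (min(d[p] for p in pairs) * sum(S[f] for f in R))

def ug(S, kl, d):
    """Unsupervised Greedy event detection: seed each event with the
    highest-DPS feature left, then greedily add whichever remaining feature
    lowers the cost most, stopping when no addition improves it."""
    remaining = sorted(S, key=S.get, reverse=True)   # descending DPS order
    events = []
    while remaining:
        R = {remaining.pop(0)}
        while remaining:
            best = min(remaining, key=lambda f: set_cost(R | {f}, S, kl, d))
            if set_cost(R | {best}, S, kl, d) < set_cost(R, S, kl, d):
                R.add(best)
                remaining.remove(best)
            else:
                break
        events.append(R)
    return events

# Toy input: Dole/Bob have near-identical trajectories and high overlap,
# while soccer is dissimilar to both (all numbers invented for illustration).
S = {"Dole": 10.0, "soccer": 9.0, "Bob": 8.0}
pair = lambda a, b: frozenset((a, b))
kl = {pair("Dole", "Bob"): 0.1, pair("Dole", "soccer"): 5.0, pair("Bob", "soccer"): 5.0}
d = {pair("Dole", "Bob"): 0.9, pair("Dole", "soccer"): 0.1, pair("Bob", "soccer"): 0.1}
events = ug(S, kl, d)
```

On this toy input, Dole and Bob merge into one event and soccer remains its own event, mirroring the e12 example discussed in Section 7.3.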
Note that our UG algorithm requires only one data-dependent parameter, the boundary between high and low power spectrum, to be set once, and this parameter can be easily estimated using the HS algorithm (Algorithm 1).
7. EXPERIMENTS
In this section, we study the performance of our feature categorization method and event detection algorithm. We first introduce the dataset and experimental setup, then we subjectively evaluate the categorization of features into HH, HL, LH, LL and SW. Finally, we study the (a)periodic event detection problem with Algorithm 2.
7.1 Dataset and Experimental Setup
The Reuters Corpus contains 806,791 English news stories from 08/20/1996 to 08/19/1997 at a day resolution. Version 2 of the open source Lucene software [1] was used to tokenize the news text content and generate the document-word vectors. In order to preserve the time-sensitive past/present/future tenses of verbs and the differences between lower case nouns and upper case named entities, no stemming was done. Since dynamic stopword removal is one of the functionalities of our method, no stopwords were removed. We did, however, remove non-English characters, after which the number of word features amounts to 423,433. All experiments were implemented in Java and conducted on a 3.2 GHz Pentium 4 PC running Windows 2003 Server with 1 GB of memory.
7.2 Categorizing Features
We downloaded 34 well-known stopwords utilized by the Google search engine as our seed training features: a, about, an, are, as, at, be, by, de, for, from, how, in, is, it, of, on, or, that, the, this, to, was, what, when, where, who, will, with, la, com, und, en and www. We excluded the last five stopwords as they are uncommon in news stories. By analyzing news stories over the 259 weekdays only, we computed the upper bound of the power spectrum for stopwords as 11.18, with the corresponding DFIDF ranging from 0.1182 to 0.3691.
Any feature f satisfying Sf ≤ 11.18 and 0.1182 ≤ DFIDFf ≤ 0.3691 over weekdays is considered a stopword. In this manner, 470 stopwords were found and removed, as visualized in Figure 9. Some detected stopwords are A (P = 65, S = 3.36, DFIDF = 0.3103), At (P = 259, S = 1.86, DFIDF = 0.1551), GMT (P = 130, S = 6.16, DFIDF = 0.1628) and much (P = 22, S = 0.80, DFIDF = 0.1865). After the removal of these stopwords, the distributions of weekday and weekend news are more or less matched, and in the ensuing experiments we make use of the full corpus (weekdays and weekends).
The upper bound power spectrum value of 11.18 from stopword training was selected as the boundary between the high and low power spectrum. The boundary between high and low periodicity was set to ⌈365/2⌉ = 183. All 422,963 (423,433 − 470) word features were categorized into 4 feature sets: HH (69 features), HL (1,087 features), LH (83,471 features), and LL (338,806 features), as shown in Figure 10. In Figure 10, each gray level denotes the relative density of features in a square region, measured by log10(1 + Dk), where Dk is the number of features within the k-th square region.
[Figure 9: Distribution of SW (stopwords) over the HH, HL, LH, and LL regions (axes: S(f) horizontal, P(f) vertical).]
[Figure 10: Distribution of categorized features over the four quadrants (shading in log scale; axes: S(f) horizontal, P(f) vertical).]
From the figure, we can make the following observations:
1. Most features have low S and are easily distinguishable from those features having a much higher S, which allows us to separate important (a)periodic events from trivial events by selecting features with high S.
2.
Features in the HH and LH quadrants are aperiodic, and are nicely separated (big horizontal gap) from the periodic features. This allows aperiodic events and periodic events to be detected reliably and independently.
3. The (vertical) boundary between high and low power spectrum is not as clear-cut, and the exact value will be application specific.
By checking the scatter distribution of features from SW over HH, HL, LH, and LL, as shown in Figure 9, we found that 87.02% (409/470) of the detected stopwords originated from LL. The LL classification and high DFIDF scores of stopwords agree with the generally accepted notion that stopwords are equally frequent over all time. Therefore, setting the boundary between high and low power spectrum using the upper bound Sf of SW is a reasonable heuristic.
7.3 Detecting Aperiodic Events
We evaluate our two hypotheses: 1) important aperiodic events can be defined by a set of HH features, and 2) less-reported aperiodic events can be defined by a set of LH features. Since no benchmark news streams exist for event detection (TDT datasets are not proper streams), we evaluate the quality of the automatically detected events by comparing them to manually confirmed events found by searching through the corpus.
Among the 69 HH features, we detected 17 important aperiodic events, as shown in Table 1 (e1 − e17). Note that the entire identification took less than 1 second, after removing events containing only the month feature. Among the 17 events, other than the overlaps between e3 and e4 (both describe the same hostage event) and between e11 and e16 (both about company reports), the 14 identified events are extremely accurate and correspond very well to the major events of the period: for example, the defeat of Bob Dole, the election of Tony Blair, and the missile attack on Iraq.
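The bursty periods reported for these detected events come from the Section 5.1 modeling, where each burst is modeled as a (truncated) Gaussian and the bursty period is [μf − σf, μf + σf]. Below is a minimal sketch using a method-of-moments fit on synthetic data; the paper does not spell out its fitting procedure in this excerpt, so this is only an approximation of ours.

```python
import math

def bursty_period(y):
    """Treat the normalized trajectory as a distribution over days 0..T-1,
    take its mean mu and standard deviation sigma, and return the bursty
    period [mu - sigma, mu + sigma] as in Section 5.1."""
    total = sum(y)
    p = [v / total for v in y]
    mu = sum(t * p[t] for t in range(len(y)))
    sigma = math.sqrt(sum((t - mu) ** 2 * p[t] for t in range(len(y))))
    return mu - sigma, mu + sigma

# Synthetic burst centred on day 50 with spread ~5 over a 100-day window:
y = [math.exp(-0.5 * ((t - 50) / 5.0) ** 2) for t in range(100)]
lo, hi = bursty_period(y)
```

For this synthetic burst, the recovered period is approximately [45, 55], i.e. one standard deviation around the peak day.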
Recall that selecting the features for one event should minimize the cost in Eq. 2, so that 1) the selected features do not span different events, and 2) not all features relevant to an event will be selected. For example, the feature Clinton is representative of e12, but since Clinton relates to many other events, its time domain signal is far different from those of other representative features like Dole and Bob. The number of documents of a detected event is roughly estimated by the number of indexed documents containing the representative features. We can see that all 17 important aperiodic events are popularly reported events.
After 742 minutes of computation time, we detected 23,525 less-reported aperiodic events from the 83,471 LH features. Table 1 lists the top 5 detected aperiodic events (e18 − e22) with respect to the cost. We found that these 5 events are actually very trivial events with only a few news reports, and are usually subsumed by some larger topics. For example, e22 is one of the rescue events in an airplane hijack topic. One advantage of our UG algorithm for discovering less-reported aperiodic events is that we are able to precisely detect the true event period.
7.4 Detecting Periodic Events
Among the 1,087 HL features, 330 important periodic events were detected within 10 minutes of computing time. Table 1 lists the top 5 detected periodic events with respect to the cost (e23 − e27). All of the detected periodic events are indeed valid and correspond to real-life periodic events. The GMM is able to detect and estimate the bursty period nicely, although it cannot distinguish the slight difference between every Monday-Friday and all weekdays, as shown in e23. We also notice that e26 is actually a subset of e27 (soccer games), which is acceptable since the Sheffield league results are announced independently every weekend.
8.
CONCLUSIONS
This paper took a whole new perspective on news streams by analyzing feature trajectories as time domain signals. By considering word document frequencies in both the time and frequency domains, we were able to derive many previously unknown characteristics of news streams, e.g., the different distributions of stopwords during weekdays and weekends. For the first time in the area of TDT, we applied a systematic approach to automatically detect important and less-reported, periodic and aperiodic events. The key idea of our work lies in the observations that (a)periodic events have (a)periodic representative features and (un)important events have (in)active representative features, differentiated by their power spectra and time periods. To address the real event detection problem, a simple and effective mixture density-based approach was used to identify feature bursts and their associated bursty periods. We also designed an unsupervised greedy algorithm to detect both aperiodic and periodic events, which was successful in detecting real events, as shown in the evaluation on a real news stream.
We have not made any benchmark comparison against other approaches, simply because there is no previous work on the addressed problem. Future work includes evaluating the recall of detected events on a labeled news stream, and comparing our model against the closest equivalent methods, which are currently limited to those of Kleinberg [12] (which can only detect certain types of bursty events depending on parameter settings), Fung et al. [9], and Swan and Allan [18]. Nevertheless, we believe our simple and effective method will be useful for all TDT practitioners, and will be especially useful for the initial exploratory analysis of news streams.
9.
REFERENCES
[1] Apache lucene-core 2.0.0, http://lucene.apache.org.
[2] Google news alerts, http://www.google.com/alerts.
[3] Reuters corpus, http://www.reuters.com/researchandstandards/corpus/.
[4] J. Allan. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2002.
[5] J. Allan, V. Lavrenko, and H. Jin. First story detection in TDT is hard. In CIKM, pages 374-381, 2000.
[6] J. Allan, C. Wade, and A. Bolivar. Retrieval and novelty detection at the sentence level. In SIGIR, pages 314-321, 2003.
[7] T. Brants, F. Chen, and A. Farahat. A system for new event detection. In SIGIR, pages 330-337, 2003.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38, 1977.
[9] G. P. C. Fung, J. X. Yu, P. S. Yu, and H. Lu. Parameter free bursty events detection in text streams. In VLDB, pages 181-192, 2005.
[10] Q. He, K. Chang, and E.-P. Lim. A model for anticipatory event detection. In ER, pages 168-181, 2006.
[11] Q. He, K. Chang, E.-P. Lim, and J. Zhang. Bursty feature representation for clustering text streams. In SDM, accepted, 2007.
[12] J. Kleinberg. Bursty and hierarchical structure in streams. In SIGKDD, pages 91-101, 2002.
[13] R. Kumar, J. Novak, P. Raghavan, and A. Tomkins. On the bursty evolution of blogspace. In WWW, pages 159-178, 2005.
[14] G. Kumaran and J. Allan. Text classification and named entities for new event detection. In SIGIR, pages 297-304, 2004.
[15] Q. Mei and C. Zhai. Discovering evolutionary theme patterns from text: an exploration of temporal text mining. In SIGKDD, pages 198-207, 2005.
[16] W. D. Penny. Kullback-Leibler divergences of normal, gamma, Dirichlet and Wishart densities. Technical report, 2001.
[17] N. Stokes and J. Carthy. Combining semantic and syntactic document classifiers to improve first story detection.
In SIGIR,\npages 424-425, 2001.\n[18] R. Swan and J. Allan. Automatic generation of overview\ntimelines. In SIGIR, pages 49-56, 2000.\n[19] M. Vlachos, C. Meek, Z. Vagena, and D. Gunopulos.\nIdentifying similarities, periodicities and bursts for online\nsearch queries. In SIGMOD, pages 131-142, 2004.\n[20] Y. Yang, T. Pierce, and J. Carbonell. A study of retrospective\nand on-line event detection. In SIGIR, pages 28-36, 1998.\n[21] Y. Yang, J. Zhang, J. Carbonell, and C. Jin. Topic-conditioned\nnovelty detection. In SIGKDD, pages 688-693, 2002.\nTable 1: All important aperiodic events (e1 \u2212 e17), top 5 less-reported aperiodic events (e18 \u2212 e22) and top 5\nimportant periodic events (e23 \u2212 e27).\nDetected Event and Bursty Period Doc\n#\nTrue Event\ne1(Sali,Berisha,Albania,Albanian,March)\n02/02/199705/29/1997\n1409 Albanian\"s president Sali Berisha lost in an early election\nand resigned, 12/1996-07/1997.\ne2(Seko,Mobutu,Sese,Kabila) 03/22/1997-06/09/1997 2273 Zaire\"s president Mobutu Sese coordinated the native\nrebellion and failed on 05/16/1997.\ne3(Marxist,Peruvian) 11/19/1996-03/05/1997 824 Peru rebels (Tupac Amaru revolutionary Movement) led\na hostage siege in Lima in early 1997.\ne4(Movement,Tupac,Amaru,Lima,hostage,hostages)\n11/16/1996-03/20/1997\n824 The same as e3.\ne5(Kinshasa,Kabila,Laurent,Congo)\n03/26/199706/15/1997\n1378 Zaire was renamed the Democratic Republic of Congo on\n05/16/1997.\ne6(Jospin,Lionel,June) 05/10/1997-07/09/1997 605 Following the early General Elections circa 06/1997,\nLionel Jospin was appointed Prime Minister on 06/02/1997.\ne7(Iraq,missile) 08/31/1996-09/13/1996 1262 U.S. 
fired missile at Iraq on 09/03/1996 and 09/04/1996.\ne8(Kurdish,Baghdad,Iraqi) 08/29/1996-09/09/1996 1132 Iraqi troop fought with Kurdish faction circa 09/1996.\ne9(May,Blair) 03/24/1997-07/04/1997 1049 Tony Blair became the Primary Minister of the United\nKingdom on 05/02/1997.\ne10(slalom,skiing) 12/05/1996-03/21/1997 253 Slalom Game of Alpine Skiing in 01/1997-02/1997.\ne11(Interim,months) 09/24/1996-12/31/1996 3063 Tokyo released company interim results for the past\nseveral months in 09/1996-12/1996.\ne12(Dole,Bob) 09/09/1996-11/24/1996 1599 Dole Bob lost the 1996 US presidential election.\ne13(July,Sen) 06/25/1997-06/25/1997 344 Cambodia\"s Prime Minister Hun Sen launched a bloody\nmilitary coup in 07/1997.\ne14(Hebron) 10/15/1996-02/14/1997 2098 Hebron was divided into two sectors in early 1997.\ne15(April,Easter) 02/23/1997-05/04/1997 480 Easter feasts circa 04/1997 (for western and Orthodox).\ne16(Diluted,Group) 04/27/1997-07/20/1997 1888 Tokyo released all 96/97 group results in\n04/199707/1997.\ne17(December,Christmas) 11/17/1996-01/26/1997 1326 Christmas feast in late 12/1997.\ne18(Kolaceva,winter,Together,promenades,Zajedno,\nSlobodan,Belgrade,Serbian,Serbia,Draskovic,municipal,\nKragujevac) 1/25/1997\n3 University students organized a vigil on Kolaceva street\nagainst government on 1/25/1997.\ne19(Tutsi,Luvengi,Burundi,Uvira,fuel,Banyamulenge,\nBurundian,Kivu,Kiliba,Runingo,Kagunga,Bwegera)\n10/19/1996\n6 Fresh fighting erupted around Uvira between Zaire armed\nforces and Banyamulengs Tutsi rebels on 10/19/1996.\ne20(Malantacchi,Korea,Guy,Rider,Unions,labour,\nTrade,unions,Confederation,rammed,Geneva,stoppages,\nVirgin,hire,Myongdong,Metalworkers) 1/11/1997\n2 Marcello Malantacchi secretary general of the\nInternational Metalworkers Federation and Guy Rider who heads\nthe Geneva office of the International Confederation of\nFree Trade Unions attacked the new labour law of South\nKorea on 1/11/1997.\ne21(DBS,Ra\ufb04es) 8/17/1997 9 The list of the unit 
of Singapore DBS Land Ra\ufb04es\nHoldings plans on 8/17/1997.\ne22(preserver,fuel,Galawa,Huddle,Leul,Beausse)\n11/24/1996\n3 Rescued a woman and her baby during a hijacked\nEthiopian plane that ran out of fuel and crashed into the\nsea near Le Galawa beach on 11/24/1996.\ne23(PRICE,LISTING,MLN,MATURITY,COUPON,\nMOODY,AMT,FIRST,ISS,TYPE,PAY,BORROWER)\nMonday-Friday/week\n7966 Announce bond price on all weekdays.\ne24(Unaudited,Ended,Months,Weighted,Provision,Cost,\nSelling,Revenues,Loss,Income,except,Shrs,Revs) every\nseason\n2264 Net income-loss reports released by companies in every\nseason.\ne25(rating,Wall,Street,Ian) Monday-Friday/week 21767 Stock reports from Wall Street on all weekdays.\ne26(Sheffield,league,scoring,goals,striker,games) every\nFriday, Saturday and Sunday\n574 Match results of Sheffield soccer league were published on\nFriday, Saturday and Sunday 10 times than other 4 days.\ne27(soccer,matches,Results,season,game,Cup,match,\nvictory,beat,played,play,division) every Friday,\nSaturday and Sunday\n2396 Soccer games held on Friday, Saturday and Sunday 7 times\nthan other 4 days.", "keywords": "feature categorization;gaussian;dft;aperiodic event;text stream;topic tracking;word signal;topic detection;time series;news stream;periodic event;word trajectory;event detection;spectral analysis"}
-{"name": "test_H-2", "title": "Personalized Query Expansion for the Web", "abstract": "The inherent ambiguity of short keyword queries demands for enhanced methods for Web retrieval. In this paper we propose to improve such Web queries by expanding them with terms collected from each user\"s Personal Information Repository, thus implicitly personalizing the search output. We introduce five broad techniques for generating the additional query keywords by analyzing user data at increasing granularity levels, ranging from term and compound level analysis up to global co-occurrence statistics, as well as to using external thesauri. Our extensive empirical analysis under four different scenarios shows some of these approaches to perform very well, especially on ambiguous queries, producing a very strong increase in the quality of the output rankings. Subsequently, we move this personalized search framework one step further and propose to make the expansion process adaptive to various features of each query. A separate set of experiments indicates the adaptive algorithms to bring an additional statistically significant improvement over the best static expansion approach.", "fulltext": "1. INTRODUCTION\nThe booming popularity of search engines has determined\nsimple keyword search to become the only widely accepted user\ninterface for seeking information over the Web. Yet keyword queries are\ninherently ambiguous. The query canon book for example covers\nseveral different areas of interest: religion, photography, literature,\nand music. Clearly, one would prefer search output to be aligned\nwith user\"s topic(s) of interest, rather than displaying a selection of\npopular URLs from each category. 
Studies have shown that more\nthan 80% of the users would prefer to receive such personalized\nsearch results [33] instead of the currently generic ones.\nQuery expansion assists the user in formulating a better query,\nby appending additional keywords to the initial search request in\norder to encapsulate her interests therein, as well as to focus the\nWeb search output accordingly. It has been shown to perform very\nwell over large data sets, especially with short input queries (see\nfor example [19, 3]). This is exactly the Web search scenario!\nIn this paper we propose to enhance Web query reformulation\nby exploiting the user\"s Personal Information Repository (PIR),\ni.e., the personal collection of text documents, emails, cached Web\npages, etc. Several advantages arise when moving Web search\npersonalization down to the Desktop level (note that by Desktop we\nrefer to PIR, and we use the two terms interchangeably). First is of\ncourse the quality of personalization: The local Desktop is a rich\nrepository of information, accurately describing most, if not all\ninterests of the user. Second, as all profile information is stored and\nexploited locally, on the personal machine, another very important\nbenefit is privacy. Search engines should not be able to know about\na person\"s interests, i.e., they should not be able to connect a\nspecific person with the queries she issued, or worse, with the output\nURLs she clicked within the search interface1\n(see Volokh [35] for\na discussion on privacy issues related to personalized Web search).\nOur algorithms expand Web queries with keywords extracted\nfrom user\"s PIR, thus implicitly personalizing the search output.\nAfter a discussion of previous works in Section 2, we first\ninvestigate the analysis of local Desktop query context in Section 3.1.1.\nWe propose several keyword, expression, and summary based\ntechniques for determining expansion terms from those personal\ndocuments matching the Web query best. 
In Section 3.1.2 we move our\nanalysis to the global Desktop collection and investigate expansions\nbased on co-occurrence metrics and external thesauri. The\nexperiments presented in Section 3.2 show many of these approaches\nto perform very well, especially on ambiguous queries, producing\nNDCG [15] improvements of up to 51.28%. In Section 4 we move\nthis algorithmic framework further and propose to make the\nexpansion process adaptive to the clarity level of the query. This yields\nan additional improvement of 8.47% over the previously identified\nbest algorithm. We conclude and discuss further work in Section 5.\n1\nSearch engines can map queries at least to IP addresses, for example by\nusing cookies and mining the query logs. However, by moving the user\nprofile at the Desktop level we ensure such information is not explicitly\nassociated to a particular user and stored on the search engine side.\n2. PREVIOUS WORK\nThis paper brings together two IR areas: Search Personalization\nand Automatic Query Expansion. There exists a vast amount of\nalgorithms for both domains. However, not much has been done\nspecifically aimed at combining them. In this section we thus\npresent a separate analysis, first introducing some approaches to\npersonalize search, as this represents the main goal of our research,\nand then discussing several query expansion techniques and their\nrelationship to our algorithms.\n2.1 Personalized Search\nPersonalized search comprises two major components: (1) User\nprofiles, and (2) The actual search algorithm. This section splits\nthe relevant background according to the focus of each article into\neither one of these elements.\nApproaches focused on the User Profile. Sugiyama et al. [32]\nanalyzed surfing behavior and generated user profiles as features\n(terms) of the visited pages. Upon issuing a new query, the search\nresults were ranked based on the similarity between each URL and\nthe user profile. 
Qiu and Cho [26] used machine learning on the past click history of the user in order to determine topic preference vectors, and then applied Topic-Sensitive PageRank [13]. User profiling based on browsing history has the advantage of being rather easy to obtain and process. This is probably why it is also employed by several industrial search engines (e.g., Yahoo! MyWeb²). However, it is definitely not sufficient for gathering a thorough insight into the user's interests. Moreover, it requires storing all personal information on the server side, which raises significant privacy concerns. Only two other approaches have enhanced Web search using Desktop data, yet both used different core ideas: (1) Teevan et al. [34] modified the query term weights from the BM25 weighting scheme to incorporate user interests as captured by their Desktop indexes; (2) in Chirita et al. [6], we focused on re-ranking the Web search output according to the cosine distance between each URL and a set of Desktop terms describing the user's interests. Moreover, neither of these investigated the adaptive application of personalization.
Approaches focused on the Personalization Algorithm. Effectively building the personalization aspect directly into PageRank [25] (i.e., by biasing it on a target set of pages) has received much attention recently. Haveliwala [13] computed a topic-oriented PageRank, in which 16 PageRank vectors biased on each of the main topics of the Open Directory were initially calculated off-line, and then combined at run-time based on the similarity between the user query and each of the 16 topics. More recently, Nie et al. [24] modified the idea by distributing the PageRank of a page across the topics it contains, in order to generate topic-oriented rankings.
Jeh and Widom [16] proposed an algorithm that avoids the massive resources needed for storing one Personalized PageRank Vector (PPV) per user by precomputing PPVs only for a small set of pages and then applying linear combination. As the computation of PPVs for larger sets of pages was still quite expensive, several solutions have been investigated, the most important ones being those of Fogaras and Racz [12], and Sarlos et al. [30], the latter using rounding and count-min sketching in order to quickly obtain accurate enough approximations of the personalized scores.
2.2 Automatic Query Expansion
Automatic query expansion aims at deriving a better formulation of the user query in order to enhance retrieval. It is based on exploiting various social or collection specific characteristics in order to generate additional terms, which are appended to the original input keywords before identifying the matching documents returned as output. In this section we survey some of the representative query expansion works grouped according to the source employed to generate additional terms: (1) Relevance feedback, (2) Collection based co-occurrence statistics, and (3) Thesaurus information. Some other approaches are also addressed at the end of the section.
Relevance Feedback Techniques. The main idea of Relevance Feedback (RF) is that useful information can be extracted from the relevant documents returned for the initial query. First approaches were manual [28] in the sense that the user was the one choosing the relevant results, and then various methods were applied to extract new terms related to the query and the selected documents. Efthimiadis [11] presented a comprehensive literature review and proposed several simple methods to extract such new keywords based on term frequency, document frequency, etc. We used some of these as inspiration for our Desktop specific techniques.
2 http://myWeb2.search.yahoo.com
Chang and Hsu [5] asked users to choose relevant clusters, instead of documents, thus reducing the amount of interaction necessary. RF has also been shown to be effectively automated by considering the top ranked documents as relevant [37] (this is known as Pseudo RF). Lam and Jones [21] used summarization to extract informative sentences from the top-ranked documents, and appended them to the user query. Carpineto et al. [4] maximized the divergence between the language model defined by the top retrieved documents and that defined by the entire collection. Finally, Yu et al. [38] selected the expansion terms from vision-based segments of Web pages in order to cope with the multiple topics residing therein.
Co-occurrence Based Techniques. Terms highly co-occurring with the issued keywords have been shown to increase precision when appended to the query [17]. Many statistical measures have been developed to best assess term relationship levels, either analyzing entire documents [27], lexical affinity relationships [3] (i.e., pairs of closely related words which contain exactly one of the initial query terms), etc. We have also investigated three such approaches in order to identify query relevant keywords from the rich, yet rather complex Personal Information Repository.
Thesaurus Based Techniques. A broadly explored method is to expand the user query with new terms whose meaning is closely related to the input keywords. Such relationships are usually extracted from large scale thesauri, such as WordNet [23], in which various sets of synonyms, hypernyms, etc. are predefined. Just as for the co-occurrence methods, initial experiments with this approach were controversial, either reporting improvements, or even reductions in output quality [36]. Recently, as the experimental collections grew larger, and as the employed algorithms became more complex, better results have been obtained [31, 18, 22]. We also use WordNet based expansion terms.
However, we base this process on analyzing the Desktop level relationship between the original query and the proposed new keywords.
Other Techniques. There are many other attempts to extract expansion terms. Though orthogonal to our approach, two works are very relevant for the Web environment: Cui et al. [8] generated word correlations utilizing the probability for query terms to appear in each document, as computed over the search engine logs. Kraft and Zien [19] showed that anchor text is very similar to user queries, and thus exploited it to acquire additional keywords.
3. QUERY EXPANSION USING DESKTOP DATA
Desktop data represents a very rich repository of profiling information. However, this information comes in a very unstructured way, covering documents which are highly diverse in format, content, and even language characteristics. In this section we first tackle this problem by proposing several lexical analysis algorithms which exploit user's PIR to extract keyword expansion terms at various granularities, ranging from term frequency within Desktop documents up to utilizing global co-occurrence statistics over the personal information repository. Then, in the second part of the section we empirically analyze the performance of each approach.
3.1 Algorithms
This section presents the five generic approaches for analyzing user's Desktop data in order to provide expansion terms for Web search. In the proposed algorithms we gradually increase the amount of personal information utilized. Thus, in the first part we investigate three local analysis techniques focused only on those Desktop documents matching user's query best. We append to the Web query the most relevant terms, compounds, and sentence summaries from these documents.
In the second part of the section we move towards a global Desktop analysis, proposing to investigate term co-occurrences, as well as thesauri, in the expansion process.
3.1.1 Expanding with Local Desktop Analysis
Local Desktop Analysis is related to enhancing Pseudo Relevance Feedback to generate query expansion keywords from the PIR best hits for user's Web query, rather than from the top ranked Web search results. We distinguish three granularity levels for this process and we investigate each of them separately.
Term and Document Frequency. As the simplest possible measures, TF and DF have the advantage of being very fast to compute. Previous experiments with small data sets have shown them to yield very good results [11]. We thus independently associate a score with each term, based on each of the two statistics. The TF based one is obtained by multiplying the actual frequency of a term with a position score descending as the term first appears closer to the end of the document. This is necessary especially for longer documents, because more informative terms tend to appear towards their beginning [10]. The complete TF based keyword extraction formula is as follows:

  TermScore = \left( \frac{1}{2} + \frac{1}{2} \cdot \frac{nrWords - pos}{nrWords} \right) \cdot \log(1 + TF)    (1)

where nrWords is the total number of terms in the document and pos is the position of the first appearance of the term; TF represents the frequency of each term in the Desktop document matching user's Web query.
The identification of suitable expansion terms is even simpler when using DF: Given the set of Top-K relevant Desktop documents, generate their snippets as focused on the original search request. This query orientation is necessary, since the DF scores are computed at the level of the entire PIR and would produce too noisy suggestions otherwise.
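The TF based scoring of Eq. (1) translates directly into code. The sketch below is ours (the function and argument names are not from the paper):

```python
import math

def term_score(tf: int, pos: int, nr_words: int) -> float:
    """TF-based keyword score of Eq. (1): rewards frequent terms whose
    first occurrence (pos) lies close to the document's beginning."""
    position_score = 0.5 + 0.5 * (nr_words - pos) / nr_words
    return position_score * math.log(1 + tf)
```

A term first seen at the very beginning of the document receives the full position factor of 1.0, while one first seen at the very end receives only about 0.5, halving its log-frequency score.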
Once the set of candidate terms has been identified, the selection proceeds by ordering them according to the DF scores they are associated with. Ties are resolved using the corresponding TF scores.
Note that a hybrid TFxIDF approach is not necessarily effective, since one Desktop term might have a high DF on the Desktop, while being quite rare in the Web. For example, the term PageRank would be quite frequent on the Desktop of an IR scientist, thus achieving a low score with TFxIDF. However, as it is rather rare in the Web, it would nicely steer the query towards the correct topic.
Lexical Compounds. Anick and Tipirneni [2] defined the lexical dispersion hypothesis, according to which an expression's lexical dispersion (i.e., the number of different compounds it appears in within a document or group of documents) can be used to automatically identify key concepts over the input document set. Although several possible compound expressions are available, it has been shown that simple approaches based on noun analysis are almost as good as highly complex part-of-speech pattern identification algorithms [1]. We thus inspect the matching Desktop documents for all their lexical compounds of the following form:
{ adjective? noun+ }
All such compounds could be easily generated off-line, at indexing time, for all the documents in the local repository. Moreover, once identified, they can be further sorted depending on their dispersion within each document in order to facilitate fast retrieval of the most frequent compounds at run-time.
Sentence Selection. This technique builds upon sentence oriented document summarization: First, the set of relevant Desktop documents is identified; then, a summary containing their most important sentences is generated as output. Sentence selection is the most comprehensive local analysis approach, as it produces the most detailed expansions (i.e., sentences).
Its downside is that, unlike with the first two algorithms, its output cannot be stored efficiently, and consequently it cannot be computed off-line. We generate sentence based summaries by ranking the document sentences according to their salience score, as follows [21]:

  SentenceScore = \frac{SW^2}{TW} + PS + \frac{TQ^2}{NQ}

The first term is the ratio between the squared number of significant words within the sentence (SW) and the total number of words therein (TW). A word is significant in a document if its frequency is above a threshold as follows:

  TF > ms = \begin{cases} 7 - 0.1 \cdot (25 - NS), & \text{if } NS < 25 \\ 7, & \text{if } NS \in [25, 40] \\ 7 + 0.1 \cdot (NS - 40), & \text{if } NS > 40 \end{cases}

with NS being the total number of sentences in the document (see [21] for details). The second term is a position score (PS) set to (Avg(NS) - SentenceIndex) / Avg^2(NS) for the first ten sentences, and to 0 otherwise, Avg(NS) being the average number of sentences over all Desktop items. This way, short documents such as emails are not affected, which is correct, since they usually do not contain a summary in the very beginning. However, as longer documents usually do include overall descriptive sentences in the beginning [10], these sentences are more likely to be relevant. The final term biases the summary towards the query. It is the ratio between the squared number of query terms present in the sentence (TQ) and the total number of terms in the query (NQ). It is based on the belief that the more query terms a sentence contains, the more likely it is to convey information highly related to the query.
3.1.2 Expanding with Global Desktop Analysis
In contrast to the previously presented approach, global analysis relies on information from across the entire personal Desktop to infer the new relevant query terms.
In this section we propose two such techniques, namely term co-occurrence statistics, and filtering the output of an external thesaurus.
Term Co-occurrence Statistics. For each term, we can easily compute off-line those terms co-occurring with it most frequently in a given collection (i.e., PIR in our case), and then exploit this information at run-time in order to infer keywords highly correlated with the user query. Our generic co-occurrence based query expansion algorithm is as follows:
Algorithm 3.1.2.1. Co-occurrence based keyword similarity search.
Off-line computation:
1: Filter potential keywords k with DF ∈ [10, ..., 20% · N]
2: For each keyword k_i
3:   For each keyword k_j
4:     Compute SC_{k_i,k_j}, the similarity coefficient of (k_i, k_j)
On-line computation:
1: Let S be the set of keywords potentially similar to an input expression E.
2: For each keyword k of E:
3:   S ← S ∪ TSC(k), where TSC(k) contains the Top-K terms most similar to k
4: For each term t of S:
5a:   Let Score(t) ← ∏_{k∈E} (0.01 + SC_{t,k})
5b:   Let Score(t) ← #DesktopHits(E|t)
6: Select the Top-K terms of S with the highest scores.
The off-line computation needs an initial trimming phase (step 1) for optimization purposes. In addition, we also restricted the algorithm to computing co-occurrence levels across nouns only, as they contain by far the largest amount of conceptual information, and as this approach reduces the size of the co-occurrence matrix considerably. During the run-time phase, having the terms most correlated with each particular query keyword already identified, one more operation is necessary, namely calculating the correlation of every output term with the entire query.
Two approaches are possible: (1) using a product of the correlation between the term and all keywords in the original expression (step 5a), or (2) simply counting the number of documents in which the proposed term co-occurs with the entire user query (step 5b). We considered the following formulas for Similarity Coefficients [17]:
• Cosine Similarity, defined as:

  CS = \frac{DF_{x,y}}{\sqrt{DF_x \cdot DF_y}}    (2)

• Mutual Information, defined as:

  MI = \log \frac{N \cdot DF_{x,y}}{DF_x \cdot DF_y}    (3)

• Likelihood Ratio, defined in the paragraphs below.
DF_x is the Document Frequency of term x, and DF_{x,y} is the number of documents containing both x and y. To further increase the quality of the generated scores we limited the latter indicator to co-occurrences within a window of W terms. We set W to be the same as the maximum number of expansion keywords desired.
Dunning's Likelihood Ratio λ [9] is a co-occurrence based metric similar to the χ² test. It starts by attempting to reject the null hypothesis, according to which two terms A and B would appear in text independently of each other. This means that P(A|B) = P(A|¬B) = P(A), where P(A|¬B) is the probability of observing term A given that term B is not present. Consequently, the test for independence of A and B can be performed by checking whether the distribution of A given that B is present is the same as the distribution of A given that B is not present. Of course, in reality we know these terms are not independent in text, and we only use the statistical metrics to highlight terms which frequently appear together. We compare the two binomial processes by using likelihood ratios of their associated hypotheses.
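The two closed-form coefficients of Eqs. (2) and (3), together with the product scoring of step 5a, can be sketched as follows (our naming; the DF values would come from the Desktop index):

```python
import math

def cosine_similarity(df_x: int, df_y: int, df_xy: int) -> float:
    # Eq. (2): co-occurrence document frequency, normalized
    return df_xy / math.sqrt(df_x * df_y)

def mutual_information(df_x: int, df_y: int, df_xy: int, n: int) -> float:
    # Eq. (3): n is the total number of documents in the collection
    return math.log(n * df_xy / (df_x * df_y))

def query_score(term, query_terms, sc):
    # Step 5a: product of the term's correlation with every query keyword;
    # sc maps a (term, keyword) pair to its similarity coefficient
    return math.prod(0.01 + sc.get((term, k), 0.0) for k in query_terms)
```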
First, let us define the likelihood ratio for one hypothesis:

  \lambda = \frac{\max_{\omega \in \Omega_0} H(\omega; k)}{\max_{\omega \in \Omega} H(\omega; k)}    (4)

where ω is a point in the parameter space Ω, Ω_0 is the particular hypothesis being tested, and k is a point in the space of observations K. If we assume that two binomial distributions have the same underlying parameter, i.e., {(p_1, p_2) | p_1 = p_2}, we can write:

  \lambda = \frac{\max_p H(p, p; k_1, k_2, n_1, n_2)}{\max_{p_1, p_2} H(p_1, p_2; k_1, k_2, n_1, n_2)}    (5)

where H(p_1, p_2; k_1, k_2, n_1, n_2) = \binom{n_1}{k_1} p_1^{k_1} (1 - p_1)^{n_1 - k_1} \cdot \binom{n_2}{k_2} p_2^{k_2} (1 - p_2)^{n_2 - k_2}. Since the maxima are obtained with p_1 = k_1/n_1, p_2 = k_2/n_2, and p = (k_1 + k_2)/(n_1 + n_2), we have:

  \lambda = \frac{\max_p L(p, k_1, n_1) \, L(p, k_2, n_2)}{\max_{p_1, p_2} L(p_1, k_1, n_1) \, L(p_2, k_2, n_2)}    (6)

where L(p, k, n) = p^k (1 - p)^{n - k}. Taking the logarithm of the likelihood, we obtain:

  -2 \cdot \log \lambda = 2 \cdot [\log L(p_1, k_1, n_1) + \log L(p_2, k_2, n_2) - \log L(p, k_1, n_1) - \log L(p, k_2, n_2)]

where \log L(p, k, n) = k \cdot \log p + (n - k) \cdot \log(1 - p). Finally, if we denote the observed counts O_{11} = \#(A \wedge B), O_{12} = \#(\neg A \wedge B), O_{21} = \#(A \wedge \neg B), and O_{22} = \#(\neg A \wedge \neg B), then the co-occurrence likelihood of terms A and B becomes:

  -2 \cdot \log \lambda = 2 \cdot [O_{11} \log p_1 + O_{12} \log(1 - p_1) + O_{21} \log p_2 + O_{22} \log(1 - p_2) - (O_{11} + O_{21}) \log p - (O_{12} + O_{22}) \log(1 - p)]

where p_1 = k_1/n_1 = O_{11}/(O_{11} + O_{12}), p_2 = k_2/n_2 = O_{21}/(O_{21} + O_{22}), and p = (k_1 + k_2)/(n_1 + n_2).
Thesaurus Based Expansion. Large scale thesauri encapsulate global knowledge about term relationships. Thus, we first identify the set of terms closely related to each query keyword, and then we calculate the Desktop co-occurrence level of each of these possible expansion terms with the entire initial search request.
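Returning to Dunning's test, the final expression for -2·log λ reduces to a few lines of code given the 2×2 contingency counts (our sketch, with a guard treating 0·log 0 as 0):

```python
import math

def log_likelihood_ratio(o11: int, o12: int, o21: int, o22: int) -> float:
    """Dunning's -2 log(lambda) for terms A and B, from the observed counts
    o11 = #(A and B), o12 = #(not A and B), o21 = #(A and not B),
    o22 = #(not A and not B)."""
    def log_l(p, k, n):
        # k*log(p) + (n-k)*log(1-p), skipping terms with a zero coefficient
        res = 0.0
        if k > 0:
            res += k * math.log(p)
        if n - k > 0:
            res += (n - k) * math.log(1 - p)
        return res

    k1, n1 = o11, o11 + o12          # occurrences of A when B is present
    k2, n2 = o21, o21 + o22          # occurrences of A when B is absent
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)        # pooled estimate under the null hypothesis
    return 2 * (log_l(p1, k1, n1) + log_l(p2, k2, n2)
                - log_l(p, k1, n1) - log_l(p, k2, n2))
```

When A is equally likely with or without B the statistic is 0; strong association between the two terms drives it up.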
In the end, those suggestions with the highest frequencies are kept. The algorithm is as follows:
Algorithm 3.1.2.2. Filtered thesaurus based query expansion.
1: For each keyword k of an input query Q:
2:   Select the following sets of related terms using WordNet:
2a:    Syn: All Synonyms
2b:    Sub: All sub-concepts residing one level below k
2c:    Super: All super-concepts residing one level above k
3: For each set S_i of the above mentioned sets:
4:   For each term t of S_i:
5:     Search the PIR with (Q|t), i.e., the original query, as expanded with t
6:     Let H be the number of hits of the above search (i.e., the co-occurrence level of t with Q)
7: Return the Top-K terms as ordered by their H values.
We observe three types of term relationships (steps 2a-2c): (1) synonyms, (2) sub-concepts, namely hyponyms (i.e., sub-classes) and meronyms (i.e., sub-parts), and (3) super-concepts, namely hypernyms (i.e., super-classes) and holonyms (i.e., super-parts). As they represent quite different types of association, we investigated them separately. We limited the output expansion set (step 7) to contain only terms appearing at least T times on the Desktop, in order to avoid noisy suggestions, with T = min(N / DocsPerTopic, MinDocs). We set DocsPerTopic = 2,500 and MinDocs = 5, the latter one coping with the case of small PIRs.
3.2 Experiments
3.2.1 Experimental Setup
We evaluated our algorithms with 18 subjects (Ph.D. and Post-Doc. students in different areas of computer science and education). First, they installed our Lucene based search engine3 and indexed all their locally stored content: Files within user selected paths, Emails, and Web Cache. Without loss of generality, we focused the experiments on single-user machines.
3 Clearly, if one had already installed a Desktop search application, then this overhead would not be present.
Then, they chose 4 queries related to their everyday activities, as follows:
• One very frequent AltaVista query, as extracted from the top 2% queries most issued to the search engine within a 7.2 million entries log from October 2001. In order to connect such a query to each user's interests, we added an off-line pre-processing phase: We generated the most frequent search requests and then randomly selected a query with at least 10 hits on each subject's Desktop. To further ensure a real life scenario, users were allowed to reject the proposed query and ask for a new one, if they considered it totally outside their interest areas.
• One randomly selected log query, filtered using the same procedure as above.
• One self-selected specific query, which they thought to have only one meaning.
• One self-selected ambiguous query, which they thought to have at least three meanings.
The average query lengths were 2.0 and 2.3 terms for the log queries, as well as 2.9 and 1.8 for the self-selected ones. Even though our algorithms are mainly intended to enhance search when using ambiguous query keywords, we chose to investigate their performance on a wide span of query types, in order to see how they perform in all situations. The log queries evaluate real life requests, in contrast to the self-selected ones, which rather target the identification of top and bottom performances. Note that the former ones were somewhat farther away from each subject's interest, thus being also more difficult to personalize on. To gain an insight into the relationship between each query type and user interests, we asked each person to rate the query itself with a score of 1 to 5, having the following interpretations: (1) never heard of it, (2) do not know it, but heard of it, (3) know it partially, (4) know it well, (5) major interest.
The obtained grades were 3.11 for the top log queries, 3.72 for the randomly selected ones, 4.45 for the self-selected specific ones, and 4.39 for the self-selected ambiguous ones.
For each query, we collected the Top-5 URLs generated by 20 versions of the algorithms4 presented in Section 3.1. These results were then shuffled into one set containing usually between 70 and 90 URLs. Thus, each subject had to assess about 325 documents for all four queries, being neither aware of the algorithm, nor of the ranking of each assessed URL. Overall, 72 queries were issued and over 6,000 URLs were evaluated during the experiment. For each of these URLs, the testers had to give a rating ranging from 0 to 2, dividing the relevant results in two categories, (1) relevant and (2) highly relevant. Finally, the quality of each ranking was assessed using the normalized version of Discounted Cumulative Gain (DCG) [15]. DCG is a rich measure, as it gives more weight to highly ranked documents, while also incorporating different relevance levels by giving them different gain values:

  DCG(i) = \begin{cases} G(1), & \text{if } i = 1 \\ DCG(i - 1) + G(i) / \log(i), & \text{otherwise} \end{cases}

We used G(i) = 1 for relevant results, and G(i) = 2 for highly relevant ones. As queries having more relevant output documents will have a higher DCG, we also normalized its value to a score between 0 (the worst possible DCG given the ratings) and 1 (the best possible DCG given the ratings) to facilitate averaging over queries. All results were tested for statistical significance using T-tests.
4 Note that all Desktop level parts of our algorithms were performed with Lucene using its predefined searching and ranking functions.
Algorithmic specific aspects. The main parameter of our algorithms is the number of generated expansion keywords. For this experiment we set it to 4 terms for all techniques, leaving an analysis at this level for a subsequent investigation.
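The evaluation measure can be sketched as follows. This is our reading of the definitions above: we assume a base-2 logarithm for the discount (the formula only writes log(i)), and normalize between the worst and best DCG achievable with the given set of ratings:

```python
import math

def dcg(gains):
    """DCG over a ranked list of gain values
    (0 = irrelevant, 1 = relevant, 2 = highly relevant)."""
    total = 0.0
    for i, g in enumerate(gains, start=1):
        total += g if i == 1 else g / math.log2(i)
    return total

def ndcg(gains):
    """Normalize between the worst (0) and best (1) DCG obtainable
    by reordering the same ratings."""
    best = dcg(sorted(gains, reverse=True))
    worst = dcg(sorted(gains))
    return (dcg(gains) - worst) / (best - worst) if best > worst else 1.0
```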
In order to optimize the run-time computation speed, we chose to limit the number of output keywords per Desktop document to the number of expansion keywords desired (i.e., four). For all algorithms we also investigated stricter limits. This allowed us to observe that the Lexical Compounds method would perform better if at most one compound per document were selected. We therefore chose to experiment with this new approach as well. For all other techniques, considering less than four terms per document did not seem to consistently yield any additional qualitative gain. We labeled the algorithms we evaluated as follows:
0. Google: The actual Google query output, as returned by the Google API;
1. TF, DF: Term and Document Frequency;
2. LC, LC[O]: Regular and Optimized (by considering only one top compound per document) Lexical Compounds;
3. SS: Sentence Selection;
4. TC[CS], TC[MI], TC[LR]: Term Co-occurrence Statistics using respectively Cosine Similarity, Mutual Information, and Likelihood Ratio as similarity coefficients;
5. WN[SYN], WN[SUB], WN[SUP]: WordNet based expansion with synonyms, sub-concepts, and super-concepts, respectively.
Except for the thesaurus based expansion, in all cases we also investigated the performance of our algorithms when exploiting only the Web browser cache to represent user's personal information. This is motivated by the fact that other personal documents such as for example emails are known to have a somewhat different language than that residing on the World Wide Web [34]. However, as this approach performed visibly poorer than using the entire Desktop data, we omitted it from the subsequent analysis.
3.2.2 Results
Log Queries. We evaluated all variants of our algorithms using NDCG. For log queries, the best performance was achieved with TF, LC[O], and TC[LR].
The improvements they brought were up to 5.2% for top queries (p = 0.14) and 13.8% for randomly selected queries (p = 0.01, statistically significant), both obtained with LC[O]. A summary of all results is depicted in Table 1.
Both TF and LC[O] yielded very good results, indicating that simple keyword and expression oriented approaches might be sufficient for the Desktop based query expansion task. LC[O] was much better than LC, improving its quality by up to 25.8% in the case of randomly selected log queries, an improvement which was also significant with p = 0.04. Thus, a selection of compounds spanning over several Desktop documents is more informative about user's interests than the general approach, in which there is no restriction on the number of compounds produced from every personal item.
The more complex Desktop oriented approaches, namely sentence selection and all term co-occurrence based algorithms, showed a rather average performance, with no visible improvements, except for TC[LR]. Also, the thesaurus based expansion usually produced very few suggestions, possibly because of the many technical queries employed by our subjects. We observed however that expanding with sub-concepts is very good for everyday life terms (e.g., car), whereas the use of super-concepts is valuable for compounds having at least one term with low technicality (e.g., document clustering). As expected, the synonym based expansion performed generally well, though in some very technical cases it yielded rather general suggestions. Finally, we noticed Google to be very optimized for some top frequent queries. However, even within this harder scenario, some of our personalization algorithms produced statistically significant improvements over regular search (i.e., TF and LC[O]).

Algorithm    NDCG Top    Signific. vs. Google    NDCG Random    Signific. vs. Google
Google       0.42        -                       0.40           -
TF           0.43        p = 0.32                0.43           p = 0.04
DF           0.17        -                       0.23           -
LC           0.39        -                       0.36           -
LC[O]        0.44        p = 0.14                0.45           p = 0.01
SS           0.33        -                       0.36           -
TC[CS]       0.37        -                       0.35           -
TC[MI]       0.40        -                       0.36           -
TC[LR]       0.41        -                       0.42           p = 0.06
WN[SYN]      0.42        -                       0.38           -
WN[SUB]      0.28        -                       0.33           -
WN[SUP]      0.26        -                       0.26           -
Table 1: Normalized Discounted Cumulative Gain at the first 5 results when searching for top (left) and random (right) log queries.

Algorithm    NDCG Clear  Signific. vs. Google    NDCG Ambiguous  Signific. vs. Google
Google       0.71        -                       0.39            -
TF           0.66        -                       0.52            p ≪ 0.01
DF           0.37        -                       0.31            -
LC           0.65        -                       0.54            p ≪ 0.01
LC[O]        0.69        -                       0.59            p ≪ 0.01
SS           0.56        -                       0.52            p ≪ 0.01
TC[CS]       0.60        -                       0.50            p = 0.01
TC[MI]       0.60        -                       0.47            p = 0.02
TC[LR]       0.56        -                       0.47            p = 0.03
WN[SYN]      0.70        -                       0.36            -
WN[SUB]      0.46        -                       0.32            -
WN[SUP]      0.51        -                       0.29            -
Table 2: Normalized Discounted Cumulative Gain at the first 5 results when searching for user selected clear (left) and ambiguous (right) queries.

Self-selected Queries. The NDCG values obtained with self-selected queries are depicted in Table 2. While our algorithms did not enhance Google for the clear search tasks, they did produce strong improvements of up to 52.9% (which were of course also highly significant with p ≪ 0.01) when utilized with ambiguous queries. In fact, almost all our algorithms resulted in statistically significant improvements over Google for this query type.
In general, the relative differences between our algorithms were similar to those observed for the log based queries. As in the previous analysis, the simple Desktop based Term Frequency and Lexical Compounds metrics performed best.
Nevertheless, a very good\noutcome was also obtained for Desktop based sentence selection\nand all term co-occurrence metrics. There were no visible\ndifferences between the behavior of the three different approaches to\ncooccurrence calculation. Finally, for the case of clear queries, we\nnoticed that fewer expansion terms than 4 might be less noisy and\nthus helpful in bringing further improvements. We thus pursued\nthis idea with the adaptive algorithms presented in the next section.\n4. INTRODUCING ADAPTIVITY\nIn the previous section we have investigated the behavior of each\ntechnique when adding a fixed number of keywords to the user\nquery. However, an optimal personalized query expansion\nalgorithm should automatically adapt itself to various aspects of each\nquery, as well as to the particularities of the person using it. In this\nsection we discuss the factors influencing the behavior of our\nexpansion algorithms, which might be used as input for the adaptivity\nprocess. Then, in the second part we present some initial\nexperiments with one of them, namely query clarity.\n4.1 Adaptivity Factors\nSeveral indicators could assist the algorithm to automatically\ntune the number of expansion terms. We start by discussing\nadaptation by analyzing the query clarity level. Then, we briefly introduce\nan approach to model the generic query formulation process in\norder to tailor the search algorithm automatically, and discuss some\nother possible factors that might be of use for this task.\nQuery Clarity. The interest for analyzing query difficulty has\nincreased only recently, and there are not many papers addressing\nthis topic. Yet it has been long known that query disambiguation\nhas a high potential of improving retrieval effectiveness for low\nrecall searches with very short queries [20], which is exactly our\ntargeted scenario. Also, the success of IR systems clearly varies\nacross different topics. 
We thus propose to use an estimate of the query clarity level in order to automatically tweak the amount of personalization fed into the algorithm. The following metrics are available:
• The Query Length is expressed simply by the number of words in the user query. This solution is rather ineffective, as reported by He and Ounis [14].
• The Query Scope relates to the IDF of the entire query, as in:

  C_1 = \log \frac{\#DocumentsInCollection}{\#Hits(Query)}    (7)

This metric performs well when used with document collections covering a single topic, but poorly otherwise [7, 14].
• The Query Clarity [7] seems to be the best, as well as the most applied technique so far. It measures the divergence between the language model associated to the user query and the language model associated to the collection. In a simplified version (i.e., without smoothing over the terms which are not present in the query), it can be expressed as follows:

  C_2 = \sum_{w \in Query} P_{ml}(w|Query) \cdot \log \frac{P_{ml}(w|Query)}{P_{coll}(w)}    (8)

where P_{ml}(w|Query) is the probability of the word w within the submitted query, and P_{coll}(w) is the probability of w within the entire collection of documents.
Other solutions exist, but we think they are too computationally expensive for the huge amount of data that needs to be processed within Web applications. We thus decided to investigate only C1 and C2. First, we analyzed their performance over a large set of queries and split their clarity predictions into three categories:
• Small Scope / Clear Query: C1 ∈ [0, 12], C2 ∈ [4, ∞).
• Medium Scope / Semi-Ambiguous Query: C1 ∈ [12, 17), C2 ∈ [2.5, 4).
• Large Scope / Ambiguous Query: C1 ∈ [17, ∞), C2 ∈ [0, 2.5].
In order to limit the amount of experiments, we analyzed only the results produced when employing C1 for the PIR and C2 for the Web.
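Eqs. (7) and (8) can be sketched as follows (our naming; the clarity score uses the unsmoothed maximum-likelihood query model, as in the simplified version above):

```python
import math
from collections import Counter

def query_scope(n_docs: int, n_hits: int) -> float:
    # C1, Eq. (7): rarer queries get a larger scope score
    return math.log(n_docs / n_hits)

def query_clarity(query_terms, p_coll) -> float:
    # C2, Eq. (8): KL divergence between the query language model and the
    # collection model; p_coll maps a word to its collection probability
    counts = Counter(query_terms)
    n = len(query_terms)
    return sum((c / n) * math.log((c / n) / p_coll[w])
               for w, c in counts.items())
```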
As algorithmic basis we used LC[O], i.e., optimized lexical compounds, which was clearly the winning method in the previous analysis. As manual investigation showed it to slightly overfit the expansion terms for clear queries, we utilized a substitute for this particular case. Two candidates were considered: (1) TF, i.e., the second best approach, and (2) WN[SYN], as we observed that its first and second expansion terms were often very good.

Desktop Scope   Web Clarity    No. of Terms   Algorithm
Large           Ambiguous      4              LC[O]
Large           Semi-Ambig.    3              LC[O]
Large           Clear          2              LC[O]
Medium          Ambiguous      3              LC[O]
Medium          Semi-Ambig.    2              LC[O]
Medium          Clear          1              TF / WN[SYN]
Small           Ambiguous      2              TF / WN[SYN]
Small           Semi-Ambig.    1              TF / WN[SYN]
Small           Clear          0              -
Table 3: Adaptive Personalized Query Expansion.

Given the algorithms and clarity measures, we implemented the adaptivity procedure by tailoring the amount of expansion terms added to the original query as a function of its ambiguity in the Web, as well as within user's PIR. Note that the ambiguity level is related to the number of documents covering a certain query. Thus, to some extent, it has different meanings on the Web and within PIRs. While a query deemed ambiguous on a large collection such as the Web will very likely indeed have a large number of meanings, this may not be the case for the Desktop. Take for example the query PageRank. If the user is a link analysis expert, many of her documents might match this term, and thus the query would be classified as ambiguous. However, when analyzed against the Web, this is definitely a clear query. Consequently, we employed more additional terms when the query was more ambiguous in the Web, but also on the Desktop. Put another way, queries deemed clear on the Desktop were inherently not well covered within user's PIR, and thus had fewer keywords appended to them.
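This decision procedure, combined with the clarity thresholds of Section 4.1 (C1 over the PIR, C2 over the Web), amounts to a small lookup. The sketch below is ours; the category names and boundary handling at C1 = 12 are our assumptions:

```python
# (scope on the Desktop, clarity on the Web) -> (no. of expansion terms, algorithm)
ADAPTIVE_RULES = {
    ("large",  "ambiguous"): (4, "LC[O]"),
    ("large",  "semi-ambiguous"): (3, "LC[O]"),
    ("large",  "clear"): (2, "LC[O]"),
    ("medium", "ambiguous"): (3, "LC[O]"),
    ("medium", "semi-ambiguous"): (2, "LC[O]"),
    ("medium", "clear"): (1, "TF"),           # or WN[SYN]
    ("small",  "ambiguous"): (2, "TF"),       # or WN[SYN]
    ("small",  "semi-ambiguous"): (1, "TF"),  # or WN[SYN]
    ("small",  "clear"): (0, None),           # no expansion at all
}

def desktop_scope(c1: float) -> str:
    # C1 computed over the PIR: [0,12) small, [12,17) medium, [17,inf) large
    return "small" if c1 < 12 else ("medium" if c1 < 17 else "large")

def web_clarity(c2: float) -> str:
    # C2 computed over the Web: [4,inf) clear, [2.5,4) semi-ambiguous, else ambiguous
    return "clear" if c2 >= 4 else ("semi-ambiguous" if c2 >= 2.5 else "ambiguous")

def expansion_policy(c1: float, c2: float):
    return ADAPTIVE_RULES[(desktop_scope(c1), web_clarity(c2))]
```

For example, a query matching many Desktop documents (large scope) that is also ambiguous on the Web receives the full four LC[O] terms, while one that is clear in both collections is left unexpanded.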
The number of expansion terms we utilized for each combination of scope and clarity levels is depicted in Table 3.

Query Formulation Process. Interactive query expansion has a high potential for enhancing search [29]. We believe that modeling its underlying process would be very helpful in producing high-quality adaptive Web search algorithms. For example, when the user is adding a new term to her previously issued query, she is basically reformulating her original request. Thus, the newly added terms are more likely to convey information about her search goals. For a general, non-personalized retrieval engine, this could correspond to giving more weight to these new keywords. Within our personalized scenario, the generated expansions can similarly be biased towards these terms. Nevertheless, more investigation is necessary in order to solve the challenges posed by this approach.

Other Features. The idea of adapting the retrieval process to various aspects of the query, of the user, and even of the employed algorithm has received little attention in the literature. Only some approaches have been investigated, usually indirectly. There exist studies of query behavior at different times of day, or of the topics spanned by the queries of various classes of users, etc. However, they generally do not discuss how these features can actually be incorporated in the search process itself, and they have almost never been related to the task of Web personalization.

4.2 Experiments
We used exactly the same experimental setup as for our previous analysis, with two log-based queries and two self-selected ones (all different from before, in order to make sure there is no bias on the new approaches), evaluated with NDCG over the Top-5 results output by each algorithm.
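The NDCG@5 metric used throughout these experiments can be sketched as follows. The section cites NDCG but does not spell out the gain/discount convention, so the common rel/log2(rank+1) formulation from Järvelin and Kekäläinen [15] is assumed here:

```python
import math

def ndcg_at_k(relevances, k=5):
    """NDCG@k for one query: DCG of the relevance grades in ranked order,
    normalized by the DCG of the ideal (descending) ordering.
    Assumes the classic rel_i / log2(i + 1) discount, i starting at 1."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list scores 1.0; swapping a relevant result below an irrelevant one strictly lowers the score.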
The newly proposed adaptive personalized query expansion algorithms are denoted as A[LCO/TF] for the approach using TF with the clear Desktop queries, and as A[LCO/WN] when WN[SYN] was utilized instead of TF.

The overall results were at least similar to, or better than, Google for all kinds of log queries (see Table 4). For top frequent queries, both adaptive algorithms, A[LCO/TF] and A[LCO/WN], improve by 10.8% and 7.9% respectively, both differences being statistically significant with p ≤ 0.01. They also achieve an improvement of up to 6.62% over the best performing static algorithm, LC[O] (p = 0.07). For randomly selected queries, even though A[LCO/TF] yields significantly better results than Google (p = 0.04), both adaptive approaches fall behind the static algorithms.

Algorithm  | NDCG Top | Signific. vs. Google | NDCG Random | Signific. vs. Google
Google     | 0.51     | -                    | 0.45        | -
TF         | 0.51     | -                    | 0.48        | p = 0.04
LC[O]      | 0.53     | p = 0.09             | 0.52        | p < 0.01
WN[SYN]    | 0.51     | -                    | 0.45        | -
A[LCO/TF]  | 0.56     | p < 0.01             | 0.49        | p = 0.04
A[LCO/WN]  | 0.55     | p = 0.01             | 0.44        | -

Table 4: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on top (left) and random (right) log queries.

Algorithm  | NDCG Clear | Signific. vs. Google | NDCG Ambiguous | Signific. vs. Google
Google     | 0.81       | -                    | 0.46           | -
TF         | 0.76       | -                    | 0.54           | p = 0.03
LC[O]      | 0.77       | -                    | 0.59           | p ≪ 0.01
WN[SYN]    | 0.79       | -                    | 0.44           | -
A[LCO/TF]  | 0.81       | -                    | 0.64           | p ≪ 0.01
A[LCO/WN]  | 0.81       | -                    | 0.63           | p ≪ 0.01

Table 5: Normalized Discounted Cumulative Gain at the first 5 results when using our adaptive personalized search algorithms on user-selected clear (left) and ambiguous (right) queries.
The major reason seems to be the imperfect selection of the number of expansion terms as a function of query clarity. Thus, more experiments are needed in order to determine the optimal number of generated expansion keywords as a function of the query ambiguity level.

The analysis of the self-selected queries shows that adaptivity can bring even further improvements to Web search personalization (see Table 5). For ambiguous queries, the scores given to Google search are enhanced by 40.6% through A[LCO/TF] and by 35.2% through A[LCO/WN], both strongly significant with p ≪ 0.01. Adaptivity also brings another 8.9% improvement over the static personalization of LC[O] (p = 0.05). Even for clear queries, the newly proposed flexible algorithms perform slightly better, improving by 0.4% and 1.0% respectively.

All results are depicted graphically in Figure 1. We notice that A[LCO/TF] is the overall best algorithm, performing better than Google for all types of queries, whether extracted from the search engine log or self-selected. The experiments presented in this section clearly confirm that adaptivity is a necessary further step to take in Web search personalization.

5. CONCLUSIONS AND FURTHER WORK
In this paper we proposed to expand Web search queries by exploiting the user's Personal Information Repository in order to automatically extract additional keywords related both to the query itself and to the user's interests, thus personalizing the search output. In this context, the paper includes the following contributions:

• We proposed five techniques for determining expansion terms from personal documents.
Each of them produces additional query keywords by analyzing the user's Desktop at increasing granularity levels, ranging from term and expression level analysis up to global co-occurrence statistics and external thesauri.

Figure 1: Relative NDCG gain (in %) for each algorithm overall, as well as separated per query category.

• We provided a thorough empirical analysis of several variants of our approaches, under four different scenarios. We showed some of these approaches to perform very well, producing NDCG improvements of up to 51.28%.

• We moved this personalized search framework further and proposed to make the expansion process adaptive to features of each query, with a strong focus on its clarity level.

• Within a separate set of experiments, we showed our adaptive algorithms to provide an additional improvement of 8.47% over the previously identified best approach.

We are currently investigating the dependency between various query features and the optimal number of expansion terms. We are also analyzing other types of approaches to identify query expansion suggestions, such as applying Latent Semantic Analysis on the Desktop data. Finally, we are designing a set of more complex combinations of these metrics in order to provide enhanced adaptivity to our algorithms.

6. ACKNOWLEDGEMENTS
We thank Ricardo Baeza-Yates, Vassilis Plachouras, Carlos Castillo and Vanessa Murdock from Yahoo! for the interesting discussions about the experimental setup and the algorithms we presented. We are grateful to Fabrizio Silvestri from CNR and to Ronny Lempel from IBM for providing us with the AltaVista query log. Finally, we thank our colleagues from L3S for participating in the time-consuming experiments we performed, as well as the European Commission for its funding support (project Nepomuk, 6th Framework Programme, IST contract no. 027705).

7. REFERENCES
[1] J. Allan and H. Raghavan.
Using part-of-speech patterns to reduce query ambiguity. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2002.
[2] P. G. Anick and S. Tipirneni. The paraphrase search assistant: Terminological feedback for iterative information seeking. In Proc. of the 22nd Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1999.
[3] D. Carmel, E. Farchi, Y. Petruschka, and A. Soffer. Automatic query refinement using lexical affinities with maximal information gain. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 283-290, 2002.
[4] C. Carpineto, R. de Mori, G. Romano, and B. Bigi. An information-theoretic approach to automatic query expansion. ACM TOIS, 19(1):1-27, 2001.
[5] C.-H. Chang and C.-C. Hsu. Integrating query expansion and conceptual relevance feedback for personalized web information retrieval. In Proc. of the 7th Intl. Conf. on World Wide Web, 1998.
[6] P. A. Chirita, C. Firan, and W. Nejdl. Summarizing local context to personalize global web search. In Proc. of the 15th Intl. CIKM Conf. on Information and Knowledge Management, 2006.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Predicting query performance. In Proc. of the 25th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2002.
[8] H. Cui, J.-R. Wen, J.-Y. Nie, and W.-Y. Ma. Probabilistic query expansion using query logs. In Proc. of the 11th Intl. Conf. on World Wide Web, 2002.
[9] T. Dunning. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19:61-74, 1993.
[10] H. P. Edmundson. New methods in automatic extracting. Journal of the ACM, 16(2):264-285, 1969.
[11] E. N. Efthimiadis. User choices: A new yardstick for the evaluation of ranking algorithms for interactive query expansion. Information Processing and Management, 31(4):605-620, 1995.
[12] D. Fogaras and B. Racz.
Scaling link based similarity search. In Proc. of the 14th Intl. World Wide Web Conf., 2005.
[13] T. Haveliwala. Topic-sensitive PageRank. In Proc. of the 11th Intl. World Wide Web Conf., Honolulu, Hawaii, May 2002.
[14] B. He and I. Ounis. Inferring query performance using pre-retrieval predictors. In Proc. of the 11th Intl. SPIRE Conf. on String Processing and Information Retrieval, 2004.
[15] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In Proc. of the 23rd Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2000.
[16] G. Jeh and J. Widom. Scaling personalized web search. In Proc. of the 12th Intl. World Wide Web Conf., 2003.
[17] M.-C. Kim and K.-S. Choi. A comparison of collocation-based similarity measures in query expansion. Information Processing and Management, 35(1):19-30, 1999.
[18] S.-B. Kim, H.-C. Seo, and H.-C. Rim. Information retrieval using word senses: root sense tagging approach. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2004.
[19] R. Kraft and J. Zien. Mining anchor text for query refinement. In Proc. of the 13th Intl. Conf. on World Wide Web, 2004.
[20] R. Krovetz and W. B. Croft. Lexical ambiguity and information retrieval. ACM Trans. Inf. Syst., 10(2), 1992.
[21] A. M. Lam-Adesina and G. J. F. Jones. Applying summarization techniques for term selection in relevance feedback. In Proc. of the 24th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2001.
[22] S. Liu, F. Liu, C. Yu, and W. Meng. An effective approach to document retrieval via utilizing WordNet and recognizing phrases. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2004.
[23] G. Miller. WordNet: An electronic lexical database. Communications of the ACM, 38(11):39-41, 1995.
[24] L. Nie, B. Davison, and X. Qi. Topical link analysis for web search. In Proc.
of the 29th Intl. ACM SIGIR Conf. on Res. and Development in Inf. Retr., 2006.
[25] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Univ., 1998.
[26] F. Qiu and J. Cho. Automatic identification of user interest for personalized search. In Proc. of the 15th Intl. WWW Conf., 2006.
[27] Y. Qiu and H.-P. Frei. Concept based query expansion. In Proc. of the 16th Intl. ACM SIGIR Conf. on Research and Development in Inf. Retr., 1993.
[28] J. Rocchio. Relevance feedback in information retrieval. The Smart Retrieval System: Experiments in Automatic Document Processing, pages 313-323, 1971.
[29] I. Ruthven. Re-examining the potential effectiveness of interactive query expansion. In Proc. of the 26th Intl. ACM SIGIR Conf., 2003.
[30] T. Sarlos, A. A. Benczur, K. Csalogany, D. Fogaras, and B. Racz. To randomize or not to randomize: Space optimal summaries for hyperlink analysis. In Proc. of the 15th Intl. WWW Conf., 2006.
[31] C. Shah and W. B. Croft. Evaluating high accuracy retrieval techniques. In Proc. of the 27th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 2-9, 2004.
[32] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proc. of the 13th Intl. World Wide Web Conf., 2004.
[33] D. Sullivan. The older you are, the more you want personalized search, 2004. http://searchenginewatch.com/searchday/article.php/3385131.
[34] J. Teevan, S. Dumais, and E. Horvitz. Personalizing search via automated analysis of interests and activities. In Proc. of the 28th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2005.
[35] E. Volokh. Personalization and privacy. Commun. ACM, 43(8), 2000.
[36] E. M. Voorhees. Query expansion using lexical-semantic relations. In Proc. of the 17th Intl. ACM SIGIR Conf. on Res. and Development in Inf.
Retr., 1994.
[37] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proc. of the 19th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, 1996.
[38] S. Yu, D. Cai, J.-R. Wen, and W.-Y. Ma. Improving pseudo-relevance feedback in web information retrieval using web page segmentation. In Proc. of the 12th Intl. Conf. on World Wide Web, 2003.
New Event Detection Based on Indexing-tree and Named Entity

ABSTRACT
New Event Detection (NED) aims at detecting, from one or multiple streams of news stories, which one reports on a new event (i.e., not reported previously). With the overwhelming volume of news available today, there is an increasing need for a NED system which is able to detect new events more efficiently and accurately. In this paper we propose a new NED model to speed up the NED task by using a news indexing-tree dynamically. Moreover, based on the observation that terms of different types have different effects for the NED task, two term reweighting approaches are proposed to improve NED accuracy. In the first approach, we propose to adjust term weights dynamically based on previous story clusters, and in the second approach, we propose to employ statistics on training data to learn a named entity reweighting model for each class of stories. Experimental results on two Linguistic Data Consortium (LDC) datasets, TDT2 and TDT3, show that the proposed model can improve both efficiency and accuracy of the NED task significantly, compared to the baseline system and other existing systems.

1. INTRODUCTION
The Topic Detection and Tracking (TDT) program aims to develop techniques which can effectively organize, search and structure news text materials from a variety of newswire and broadcast media [1]. New Event Detection (NED) is one of the five tasks in TDT. It is the task of online identification of the earliest report for each topic as soon as that report arrives in the sequence of documents. A Topic is defined as a seminal event or activity, along with directly related events and activities [2]. An Event is defined as something (non-trivial) happening in a certain place at a certain time [3].
For instance, when a bomb explodes in a building, the explosion is the seminal event that triggers the topic, and other stories on the same topic would be those discussing salvage efforts, the search for perpetrators, arrests and trial, and so on. Useful news information is usually buried in a mass of data generated every day. Therefore, NED systems are very useful for people who need to detect novel information from real-time news streams. These real-life needs often occur in domains like financial markets, news analysis, and intelligence gathering.

In most state-of-the-art NED systems, each news story on hand is compared to all previously received stories. If none of the similarities between them exceeds a threshold, then the story triggers a new event. The similarities are usually computed with the cosine or Hellinger similarity metric. The core problem of NED is to identify whether two stories are on the same topic. Obviously, these systems cannot take advantage of topic information. Furthermore, this is not acceptable in real applications because of the large amount of computation required in the NED process. Other systems organize previous stories into clusters (each cluster corresponds to a topic), and a new story is compared to the previous clusters instead of to stories. This manner can reduce the number of comparisons significantly. Nevertheless, it has been shown that this manner is less accurate [4, 5]. This is because stories within a topic sometimes drift far away from each other, which can lead to low similarity between a story and its topic.

On the other hand, some proposed NED systems tried to improve accuracy by making better use of named entities [10, 11, 12, 13]. However, none of these systems has considered that terms of different types (e.g., noun, verb or person name) have different effects for different classes of stories in determining whether two stories are on the same topic.
For example, the names of election candidates (person names) are very important for stories of the election class; the locations (location names) where accidents happened are important for stories of the accidents class.

So, in NED, there still exist the following three problems to be investigated: (1) How to speed up the detection procedure without decreasing detection accuracy? (2) How to make good use of cluster (topic) information to improve accuracy? (3) How to obtain better news story representation through a better understanding of named entities?

Driven by these problems, we propose three approaches in this paper. (1) To make the detection procedure faster, we propose a new NED procedure based on a news indexing-tree created dynamically. The story indexing-tree is created by assembling similar stories together to form news clusters in different hierarchies according to their similarity values. Comparisons between the current story and previous clusters help find the most similar story in fewer comparisons. The new procedure can reduce the number of comparisons without hurting accuracy. (2) We use the clusters of the first level in the indexing-tree as news topics, in which term weights are adjusted dynamically according to the term distribution in the clusters. In this approach, cluster (topic) information is used properly, so the problem of theme decentralization is avoided. (3) Based on statistics obtained from training data, we found that terms of different types (e.g., noun and verb) have different effects for different classes of stories in determining whether two stories are on the same topic. We propose to use these statistics to optimize the weights of the terms of different types in a story according to the news class that the story belongs to.
On the TDT3 dataset, the new NED model uses just 14.9% of the comparisons required by the basic model, while its minimum normalized cost is 0.5012, which is 0.0797 better than the basic model, and also better than any other results previously reported for this dataset [8, 13].

The rest of the paper is organized as follows. We start off by summarizing previous work on NED in Section 2. Section 3 presents the basic model for NED that most current systems use. Section 4 describes our new detection procedure based on a news indexing-tree. In Section 5, two term reweighting methods are proposed to improve NED accuracy. Section 6 gives our experimental data and evaluation metrics. We finally wrap up with the experimental results in Section 7, and the conclusions and future work in Section 8.

2. RELATED WORK
Papka et al. proposed Single-Pass clustering for NED [6]. When a new story was encountered, it was processed immediately to extract term features and build a query representation of the story's content. Then it was compared with all the previous queries. If the document did not trigger any query by exceeding a threshold, it was marked as a new event. Lam et al. build up previous query representations of story clusters, each of which corresponds to a topic [7]. In this manner, comparisons happen between stories and clusters.

In recent years, most work has focused on proposing better methods for the comparison of stories and for document representation. Brants et al. [8] extended a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair specific averages, term reweighting based on inverse event frequencies, and segmentation of documents. Good improvements on TDT benchmarks were shown. Stokes et al. [9] utilized a combination of evidence from two distinct representations of a document's content.
One of the representations was the usual free-text vector; the other made use of lexical chains (created using WordNet) to build another term vector. The two representations were then combined in a linear fashion. A marginal increase in effectiveness was achieved when the combined representation was used.

Some efforts have been made on how to utilize named entities to improve NED. Yang et al. gave location named entities four times the weight of other terms and named entities [10]. The DOREMI research group combined semantic similarities of person names, location names and time together with textual similarity [11][12]. The UMass research group [13] split document representation into two parts: named entities and non-named entities. They found that some classes of news could achieve better performance using the named entity representation, while other classes of news could achieve better performance using the non-named entity representation. Both [10] and [13] used text categorization techniques to classify news stories in advance. In [13], news stories are classified automatically first, and then the sensitivities of name and non-name terms for NED are tested for each class. In [10], frequent terms for each class are removed from the document representation. For example, the word "election" does not help identify different elections. In their work, the effectiveness of different kinds of names (or terms with different POS tags) for NED in different news classes was not investigated. We use statistical analysis to reveal this and use it to improve NED performance.

3. BASIC MODEL
In this section, we present the basic New Event Detection model, which is similar to what most current systems apply. Then, we propose our new model by extending the basic model.

New Event Detection systems use a news story stream as input, in which stories are strictly time-ordered. Only previously received stories are available when dealing with the current story.
The output is a decision on whether the current story is about a new event or not, together with the confidence of the decision. Usually, a NED model consists of three parts: story representation, similarity calculation and detection procedure.

3.1 Story Representation
Preprocessing is needed before generating the story representation. For preprocessing, we tokenize words, recognize abbreviations, normalize abbreviations, add part-of-speech tags, remove stop-words included in the stop list used in InQuery [14], replace words with their stems using the K-stem algorithm [15], and then generate a word vector for each news story.

We use the incremental TF-IDF model for term weight calculation [4]. In a TF-IDF model, term frequency in a news document is weighted by the inverse document frequency, which is generated from a training corpus. When a new term occurs in the testing process, there are two solutions: simply ignore the new term, or set the df of the term to a small constant (e.g., df = 1). The new term receives too low a weight in the first solution (0) and too high a weight in the second. In the incremental TF-IDF model, document frequencies are updated dynamically at each time step t:

    df_t(w) = df_{t-1}(w) + df_{D_t}(w)    (1)

where D_t represents the news story set received at time t, df_{D_t}(w) is the number of documents in D_t that term w occurs in, and df_t(w) is the total number of documents that term w occurs in before time t.
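The incremental update of Eq. (1) can be sketched as follows. This is a minimal sketch under the assumption that each document is given as an iterable of its (already preprocessed) terms:

```python
from collections import Counter

def update_df(df_prev: Counter, window_docs) -> Counter:
    """Eq. (1): df_t(w) = df_{t-1}(w) + df_{D_t}(w), where df_{D_t}(w)
    counts the documents in the current window D_t that contain w."""
    df = Counter(df_prev)
    for doc_terms in window_docs:
        for w in set(doc_terms):  # count each document at most once per term
            df[w] += 1
    return df
```

In the paper's setting this would be called once per time window of 50 stories, so the df table always reflects everything seen so far.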
In this work, each time window includes 50 news stories. Thus, each story d received at time t is represented as follows:

    d → {weight(d, t, w_1), weight(d, t, w_2), ..., weight(d, t, w_n)}

where n is the number of distinct terms in story d, and weight(d, t, w) is the weight of term w in story d at time t:

    weight(d, t, w) = [log(tf(d, w) + 1) · log((N_t + 1)/(df_t(w) + 0.5))] / Σ_{w'∈d} [log(tf(d, w') + 1) · log((N_t + 1)/(df_t(w') + 0.5))]    (2)

where N_t is the total number of news stories before time t, and tf(d, w) is the number of times term w occurs in news story d.

3.2 Similarity Calculation
We use the Hellinger distance to calculate the similarity between two stories; for two stories d and d' at time t, their similarity is defined as follows:

    sim(d, d', t) = Σ_{w∈d,d'} sqrt( weight(d, t, w) · weight(d', t, w) )    (3)

3.3 Detection Procedure
For each story d received at time step t, the value

    n(d) = max_{time(d') < time(d)} sim(d, d', t)    (4)

is a score used to determine whether d is a story about a new topic, and at the same time is an indication of the confidence in our decision [8]. time(d) denotes the publication time of story d. If the score exceeds the threshold θ_new, then there exists a sufficiently similar document, thus d is an old story; otherwise, there is no sufficiently similar previous document, thus d is a new story.

4. New NED Procedure
Traditional NED systems can be classified into two main types with respect to the detection procedure: (1) S-S type, in which the story on hand is compared to each story received previously, and the highest similarity is used to determine whether the current story is about a new event; (2) S-C type, in which the story on hand is compared to all previous clusters, each of which represents a topic, and the highest similarity is used for the final decision on the current story.
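The S-S decision rule just described, combined with the story weights of Eq. (2) and the Hellinger similarity of Eq. (3), can be sketched as follows. This is a minimal sketch: Eq. (3)'s extracted form lost its radical, so the standard Hellinger affinity (square root over the product) is assumed here, consistent with the metric the section names:

```python
import math

def term_weights(tf: dict, df: dict, n_docs: int) -> dict:
    """Eq. (2): normalized TF-IDF weights for one story at time t.
    tf maps term -> in-story count, df maps term -> df_t, n_docs is N_t."""
    raw = {w: math.log(c + 1) * math.log((n_docs + 1) / (df.get(w, 0) + 0.5))
           for w, c in tf.items()}
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}

def hellinger_sim(wa: dict, wb: dict) -> float:
    """Eq. (3): Hellinger similarity over the shared vocabulary
    (square root over the product assumed)."""
    return sum(math.sqrt(wa[w] * wb[w]) for w in wa.keys() & wb.keys())

def is_new_event(story_w, previous_w, threshold: float) -> bool:
    """Eq. (4): the story is new iff its best match scores below theta_new."""
    best = max((hellinger_sim(story_w, p) for p in previous_w), default=0.0)
    return best < threshold
```

Since the weights of Eq. (2) sum to 1 per story, a story compared with itself scores exactly 1, which is the natural upper bound of the similarity.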
If the highest similarity exceeds the threshold θ_new, then it is an old story and is put into the most similar cluster; otherwise it is a new story and a new cluster is created. Previous work shows that the first manner is more accurate than the second one [4][5]. Since stories within a topic sometimes drift far away from each other, a story may have very low similarity with its topic. So using similarities between stories to determine new stories is better than using similarities between stories and clusters. Nevertheless, the first manner needs many more comparisons, which makes it inefficient. We propose a new detection procedure which uses comparisons with previous clusters to help find the most similar story in fewer comparisons, with the final new event decision made according to the most similar story. Therefore, we can get both the accuracy of S-S type methods and the efficiency of S-C type methods.

The new procedure creates a news indexing-tree dynamically, in which similar stories are put together to form a hierarchy of clusters. We index similar stories together under their common ancestor (a cluster node). Dissimilar stories are indexed in different clusters. When a story arrives, we use comparisons between the current story and previous hierarchical clusters to help find the most similar story, which is then used for the new event decision. After the new event decision is made, the current story is inserted into the indexing-tree for the following detection.

The news indexing-tree is defined formally as follows:

    S-Tree = {r, N^C, N^S, E}

where r is the root of the S-Tree, N^C is the set of all cluster nodes, N^S is the set of all story nodes, and E is the set of all edges in the S-Tree. We define a set of constraints for an S-Tree:

(i) ∀i ∈ N^C, i is a non-terminal node in the tree.
(ii) ∀i ∈ N^S, i is a terminal node in the tree.
(iii) ∀i ∈ N^C, the out-degree of i is at least 2.
(iv) ∀i ∈ N^C, i is represented as the centroid of its descendants.

For a news story d_i, the comparison procedure and inserting procedure based on the indexing-tree are defined as follows. An example is shown by Figure 1 and Figure 2.

Figure 1. Comparison procedure
Figure 2. Inserting procedure

Comparison procedure:
Step 1: compare d_i to all the direct child nodes of r and select the λ nodes with highest similarities, e.g., C^1_2 and C^1_3 in Figure 1.
Step 2: for each node selected in the last step, e.g. C^1_2, compare d_i to all its direct child nodes, and select the λ nodes with highest similarities, e.g. C^2_2 and d_8. Repeat step 2 for all non-terminal nodes.
Step 3: record the terminal node with the highest similarity to d_i, e.g. s_5, and the similarity value (0.20).

Inserting d_i into the S-Tree with r as root:
Find the node n which is a direct child of r on the path from r to the terminal node with highest similarity s, e.g. C^1_2. If s is smaller than θ_init + (h-1)δ, then add d_i to the tree as a direct child of r. Otherwise, if n is a terminal node, then create a cluster node in place of n, and add both n and d_i as its direct children; if n is a non-terminal node, then repeat this procedure and insert d_i into the sub-tree with n as root, recursively. Here h is the length of the path between n and the root of the S-Tree.

The more similar the stories in a cluster are to each other, the better the cluster represents the stories in it. Hence we add no constraints on the maximum height of the tree or the degree of a node. Therefore, we cannot give the complexity of this indexing-tree based procedure, but we will report the number of comparisons needed by the new procedure in our experiments in Section 7.

5. Term Reweighting Methods
In this section, two term reweighting methods are proposed to improve NED accuracy.
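As an aside before the reweighting methods: the top-λ descent of the comparison procedure in Section 4 can be sketched as below. This is a minimal sketch, not the paper's implementation; nodes are modeled as dicts with a `vector` and a `children` list (an empty list marking a terminal story node), which is an illustrative layout rather than the paper's data structure.

```python
def find_most_similar(root, story, sim, top_k=2):
    """Level-by-level descent of the indexing-tree: at each cluster node,
    descend only into the top_k children most similar to the incoming
    story (top_k plays the role of lambda), while checking every terminal
    (story) node encountered along the way."""
    best_node, best_sim = None, -1.0
    frontier = [root]
    while frontier:
        next_frontier = []
        for node in frontier:
            scored = [(sim(story, c["vector"]), c) for c in node["children"]]
            for s, child in scored:
                if not child["children"] and s > best_sim:  # terminal node
                    best_node, best_sim = child, s
            scored.sort(key=lambda x: x[0], reverse=True)
            next_frontier += [c for s, c in scored[:top_k] if c["children"]]
        frontier = next_frontier
    return best_node, best_sim
```

Because only top_k branches are expanded per level, the number of similarity computations grows with the tree depth rather than with the total number of stored stories, which is where the reported reduction in comparisons comes from.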
In the first method, a new way is explored to make better use of cluster (topic) information. The second finds a better way to make use of named entities, based on news classification.

5.1 Term Reweighting Based on Distribution Distance
TF-IDF is the most prevalent model used in information retrieval systems. The basic idea is that the fewer documents a term appears in, the more important the term is for discriminating documents (relevant or not relevant to a query containing the term). Nevertheless, in the TDT domain, we need to discriminate documents with regard to topics rather than queries. Intuitively, using cluster (topic) vectors to compare with subsequent news stories should outperform using story vectors. Unfortunately, the experimental results do not support this intuition [4][5]. Based on observation of the data, we find the reason is that a news topic usually contains many directly or indirectly related events, while they all have their own sub-subjects which usually differ from each other. Take the topic described in Section 1 as an example: events like the explosion and the salvage have very low similarities with events about the criminal trial, so stories about the trial would have low similarity with the topic vector built from its previous events. This section focuses on how to effectively make use of topic information and at the same time avoid the problem of content decentralization.

First, we classify terms into five classes to help analyze the needs of the modified model:

Term class A: terms that occur frequently in the whole corpus, e.g., "year" and "people". Terms of this class should be given low weights because they do not help much in topic discrimination.

Term class B: terms that occur frequently within a news category, e.g., "election", "storm". They are useful for distinguishing two stories in different news categories.
However, they cannot provide information to determine whether two stories are on the same or different topics. In other words, the terms "election" and "storm" are not helpful in differentiating two election campaigns or two storm disasters. Therefore, terms of this class should be assigned lower weights.
Term class C: terms that occur frequently in a topic, and infrequently in other topics, e.g., the name of a crashed plane, or the name of a specific hurricane. News stories that belong to different topics rarely have overlapping terms in this class. The more frequently a term appears in a topic, the more important the term is for a story belonging to the topic; therefore the term should be given a higher weight.
Term class D: terms that appear in a topic exclusively, but not frequently. For example, the name of a fireman who did very well in a salvage action may appear in only two or three stories but never in other topics. Terms of this type should receive higher weights than in the TF-IDF model. However, since they are not common in the topic, it is not appropriate to give them too high weights.
Term class E: terms with low document frequency that appear in different topics. Terms of this class should receive lower weights.
Now we analyze whether the TF-IDF model gives proper weights to the five classes of terms. Obviously, terms of class A receive low weights in the TF-IDF model, which conforms with the requirement described above. In the TF-IDF model, the weights of class B terms depend strongly on the number of stories in a news class; the model cannot provide low weights if the story containing the term belongs to a relatively small news class. For a term of class C, the more frequently it appears in a topic, the less weight the TF-IDF model gives to it. This strongly conflicts with the requirement for terms in class C. For terms of class D, the TF-IDF model correctly gives them high weights.
But for terms of class E, the TF-IDF model gives them high weights, which conflicts with the requirement of low weights. To sum up, terms of classes B, C, and E cannot be properly weighted in the TF-IDF model, so we propose a modified model to resolve this problem.
When θ_init and θ_new are set to close values, we assume that most of the stories in a first-level cluster (a direct child node of the root node) are on the same topic. Therefore, we make use of a first-level cluster to capture the term distribution (df for all the terms within the cluster) within the topic dynamically. The KL divergence between the term distribution in a first-level cluster and that in the whole story set is used to adjust term weights:

weight'(d, t, w) = [ weight(d, t, w) · (1 + γ · KL(P_cw || P_tw)) ] / [ Σ_{w'∈d} weight(d, t, w') · (1 + γ · KL(P_cw' || P_tw')) ]   (5)

where

p_cw(y) = df_c(w) / N_c,   p_cw(ȳ) = 1 − df_c(w) / N_c   (6)

p_tw(y) = df_t(w) / N_t,   p_tw(ȳ) = 1 − df_t(w) / N_t   (7)

where df_c(w) is the number of documents containing term w within cluster c, N_c is the number of documents in cluster c, df_t(w) is the number of documents containing term w among the documents arriving before time step t, and N_t is the total number of documents that arrive before time step t. γ is a constant parameter, manually set to 3 in this work.
KL divergence is defined as follows [17]:

KL(P || Q) = Σ_x p(x) log( p(x) / q(x) )   (8)

The basic idea is: for a story in a topic, the more a term occurs within the topic and the less it occurs in other topics, the higher the weight it should be assigned. Obviously, the modified model meets all the requirements of the five term classes listed above.
5.2 Term Reweighting Based on Term Type and Story Class
Previous work found that some classes of news stories could achieve good improvements by giving extra weight to named entities.
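Before turning to the second method, the reweighting of Eqs. (5)-(8) can be sketched as follows. This is a minimal illustration, not the paper's code: the Bernoulli form of the distributions follows Eqs. (6)-(7), and the small epsilon clamp is an added assumption for numerical safety when a document frequency is 0 or N.

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """Eq. (8) for two Bernoulli distributions P = (p, 1-p), Q = (q, 1-q).
    The eps clamp (an assumption, not from the paper) avoids log(0)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def reweight(doc_weights, df_cluster, n_cluster, df_global, n_global, gamma=3.0):
    """Eqs. (5)-(7): boost terms whose in-cluster document frequency
    diverges from the corpus-wide one, then renormalize within the story."""
    raw = {}
    for w, wt in doc_weights.items():
        p_cw = df_cluster.get(w, 0) / n_cluster   # Eq. (6)
        p_tw = df_global.get(w, 0) / n_global     # Eq. (7)
        raw[w] = wt * (1.0 + gamma * kl_bernoulli(p_cw, p_tw))
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}
```

A topic-specific term (frequent in the cluster, rare overall) thus gains relative weight over a term whose cluster and corpus distributions agree.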
But we find that terms of different types should be given different amounts of extra weight for different classes of news stories.
We use OpenNLP(1) to recognize named entity types and part-of-speech tags for the terms that appear in news stories. Named entity types include person name, organization name, location name, date, time, money and percentage, and five POS tags are selected: noun (NN), verb (VB), adjective (JJ), adverb (RB) and cardinal number (CD). Statistical analysis shows the topic-level discriminative term types for different classes of stories. For the sake of convenience, named entity types and part-of-speech tags are uniformly called term types in subsequent sections.
Determining whether two stories are about the same topic is a basic component of the NED task. So at first we use the χ² statistic to compute correlations between terms and topics. For a term w and a topic T, a contingency table is derived:

Table 1. A 2×2 Contingency Table
                 Doc number belonging to topic T   Doc number not belonging to topic T
include w                      A                                   B
not include w                  C                                   D

The χ² statistic for a specific term w with respect to topic T is defined as [16]:

χ²(w, T) = [ (A + B + C + D) · (AD − CB)² ] / [ (A + C) · (B + D) · (A + B) · (C + D) ]   (9)

News topics in the TDT task are further classified into 11 rules of interpretation (ROIs)(2). An ROI can be seen as a higher-level class of stories. The average correlation between a term type and a topic ROI is computed as:

χ²_avg(P_k, R_m) = (1 / |R_m|) · (1 / |P_k|) · Σ_{T∈R_m} Σ_{w∈P_k} p(w, T) · χ²(w, T),   k = 1…K, m = 1…M   (10)

where K is the number of term types (set to 12 throughout the paper), M is the number of news classes (ROIs, set to 11 in the paper), P_k represents the set of all terms of type k, R_m represents the set of all topics of class m, and p(w, T) is the probability that w occurs in topic T.
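Equation (9) can be computed directly from the four contingency-table counts. A minimal sketch (the zero-denominator guard is an added assumption for degenerate tables):

```python
def chi_square(a, b, c, d):
    """Eq. (9): chi-square statistic for a term/topic 2x2 contingency table.
    a = docs in topic T containing the term, b = docs outside T containing it,
    c and d = the corresponding counts for docs not containing the term."""
    n = a + b + c + d
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return n * (a * d - c * b) ** 2 / denom if denom else 0.0
```

A term occurring in every on-topic document and no off-topic document maximizes the statistic, while a term distributed independently of the topic scores 0.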
Because of space limitations, only part of the term types (9 term types) and part of the news classes (8 classes) are listed in table 2, with the average correlation values between them. The statistics are derived from the labeled data in the TDT2 corpus. (The results in table 2 are normalized for convenience of comparison.)
The statistics in table 2 indicate the usefulness of different term types for topic discrimination with respect to different news classes. We can see that location name is the most useful term type for three news classes: Natural Disasters, Violence or War, and Finances. For three other categories, Elections, Legal/Criminal Cases, and Science and Discovery, person name is the most discriminative term type. For Scandals/Hearings, date is the most important information for topic discrimination. In addition, Legal/Criminal Cases and Finance topics have higher correlation with money terms, while Science and Discovery topics have higher correlation with percentage terms. Non-name terms are more stable across different classes.
(1) http://opennlp.sourceforge.net/
(2) http://projects.ldc.upenn.edu/TDT3/Guide/label.html
From the analysis of table 2, it is reasonable to adjust term weights according to their term type and the news class the story belongs to. New term weights are computed as follows:

weight'(d, t, w) = [ weight(d, t, w) · α^{class(d)}_{type(w)} ] / [ Σ_{w'∈d} weight(d, t, w') · α^{class(d)}_{type(w')} ]   (11)

where type(w) represents the type of term w, class(d) represents the class of story d, and α^c_k is the reweighting parameter for news class c and term type k. In this work, we simply use the statistics in table 2 as the reweighting parameters. Even though using the statistics directly may not be the best choice, we do not discuss how to obtain the best parameters automatically.
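Equation (11) can be sketched as below. The nested-dict layout of the α parameters and the example class/type names in the test are assumptions for illustration, not the paper's data structures.

```python
def reweight_by_type(doc_weights, term_type, story_class, alpha):
    """Eq. (11): scale each term's weight by the parameter for its
    (news class, term type) pair, then renormalize within the story.
    alpha[c][k] plays the role of the table-2 statistic for class c, type k."""
    raw = {w: wt * alpha[story_class][term_type[w]]
           for w, wt in doc_weights.items()}
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}
```

For a Natural Disasters story, for instance, the table-2 values would shift weight from person names toward location names.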
We will try to use machine learning techniques to obtain the best parameters in future work.
In this work, we use BoosTexter [20] to classify all stories into one of the 11 ROIs. BoosTexter is a boosting-based machine learning program, which creates a series of simple rules to build a classifier for text or attribute-value data. We use the term weights generated by the TF-IDF model as features for story classification. We trained the model on the 12,000 judged English stories in TDT2, and classified the rest of the stories in TDT2 and all stories in TDT3. The classification results are used for term reweighting in formula (11). Since the class labels of off-topic stories are not given in the TDT datasets, we cannot report the classification accuracy here, and thus we do not discuss the effect of classification accuracy on NED performance in this paper.
6. EXPERIMENTAL SETUP
6.1 Datasets
We used two LDC [18] datasets, TDT2 and TDT3, for our experiments. TDT2 contains news stories from January to June 1998. It contains around 54,000 stories from sources like ABC, Associated Press, CNN, New York Times, Public Radio International, Voice of America, etc. Only English stories in the collection were considered. TDT3 contains approximately 31,000 English stories collected from October to December 1998. In addition to the sources used in TDT2, it also contains stories from NBC and MSNBC TV broadcasts. We used transcribed versions of the TV and radio broadcasts in addition to the textual news.
The TDT2 dataset is labeled with about 100 topics, and approximately 12,000 English stories belong to at least one of these topics. The TDT3 dataset is labeled with about 120 topics, and approximately 8,000 English stories belong to at least one of these topics.
All the topics are classified into 11 Rules of Interpretation: (1) Elections, (2) Scandals/Hearings, (3) Legal/Criminal Cases, (4) Natural Disasters, (5) Accidents, (6) Ongoing Violence or War, (7) Science and Discovery News, (8) Finance, (9) New Law, (10) Sports News, (11) MISC. News.
6.2 Evaluation Metric
TDT uses a cost function C_Det that combines the probabilities of missing a new story and of a false alarm [19]:

C_Det = C_Miss · P_Miss · P_Target + C_FA · P_FA · P_Nontarget   (12)

Table 2. Average correlation between term types and news classes

where C_Miss is the cost of missing a new story, P_Miss is the probability of missing a new story, and P_Target is the probability of seeing a new story in the data; C_FA is the cost of a false alarm, P_FA is the probability of a false alarm, and P_Nontarget is the probability of seeing an old story. The cost C_Det is normalized such that a perfect system scores 0 and a trivial system, namely the better of marking all stories as new or all as old, scores 1:

Norm(C_Det) = C_Det / min(C_Miss · P_Target, C_FA · P_Nontarget)   (13)

A new event detection system gives two outputs for each story. The first part is "yes" or "no", indicating whether the story triggers a new event or not. The second part is a score indicating the confidence of the first decision. Confidence scores can be used to plot the DET curve, i.e., a curve that plots false alarm vs. miss probabilities. The minimum normalized cost can be determined if the optimal threshold on the score is chosen.
7. EXPERIMENTAL RESULTS
7.1 Main Results
To test the approaches proposed in the model, we implemented and tested five systems:
System-1: this system is used as the baseline. It is implemented based on the basic model described in section 3, i.e., using the incremental TF-IDF model to generate term weights, and using the Hellinger distance to compute document similarity. Similarity score normalization is also employed [8].
The S-S detection procedure is used.
System-2: the same as system-1 except that the S-C detection procedure is used.
System-3: the same as system-1 except that it uses the new detection procedure, which is based on the indexing-tree.
System-4: implemented based on the approach presented in section 5.1, i.e., terms are reweighted according to the distance between the term distributions in a cluster and in the whole story set. The new detection procedure is used.
System-5: implemented based on the approach presented in section 5.2, i.e., terms of different types are reweighted according to the news class, using trained parameters. The new detection procedure is used.
The following are some other NED systems:
System-6: [21] for each pair of stories, it computes three similarity values for named entities, non-named entities and all terms respectively, and employs a Support Vector Machine to predict "new" or "old" using the similarity values as features.
System-7: [8] it extends a basic incremental TF-IDF model to include source-specific models, similarity score normalization based on document-specific averages, similarity score normalization based on source-pair specific averages, etc.
System-8: [13] it splits the document representation into two parts, named entities and non-named entities, and chooses one effective part for each news class.
Table 3 and table 4 show the topic-weighted normalized costs and comparing times on the TDT2 and TDT3 datasets respectively. Since no held-out data set for fine-tuning the threshold θ_new was available for the experiments on TDT2, we only report minimum normalized costs for our systems in table 3. System-5 outperforms all other systems including system-6, and it performs only 2.78e+8 comparing times in the detection procedure, which is only 13.4% of system-1.
Table 3.
NED results on TDT2

Systems      Min Norm(C_Det)   Cmp times
System-1          0.5749        2.08e+9
System-2 ①        0.6673        3.77e+8
System-3 ②        0.5765        2.81e+8
System-4 ②        0.5431        2.99e+8
System-5 ②        0.5089        2.78e+8
System-6          0.5300        -

① θ_new = 0.13
② θ_init = 0.13, λ = 3, δ = 0.15

When evaluating the normalized costs on TDT3, we use the optimal thresholds obtained from the TDT2 data set for all systems. System-2 reduces the comparing times to 1.29e+8, which is just 18.3% of system-1, but at the same time it also gets a deteriorated minimum normalized cost, which is 0.0499 higher than system-1. System-3 uses the new detection procedure based on the news indexing-tree. It requires even fewer comparing times than system-2. This is because story-story comparisons usually yield greater similarities than story-cluster ones, so stories tend to be combined together in system-3. System-3 is basically equivalent to system-1 in accuracy. System-4 adjusts term weights based on the distance between the term distributions of the whole corpus and of the cluster story set, yielding a good improvement of 0.0468 compared to system-1. The best system (system-5) has a minimum normalized cost of 0.5012, which is 0.0797 better than system-1, and also better than any other results previously reported for this dataset [8, 13]. Furthermore, system-5 only needs 1.05e+8 comparing times, which is 14.9% of system-1.

Table 2:
                        Location  Person  Date  Organization  Money  Percentage   NN    JJ    CD
Elections                 0.37     1      0.04      0.58       0.08     0.03     0.32  0.13  0.1
Scandals/Hearings         0.66     0.62   0.28      1          0.11     0.02     0.27  0.13  0.05
Legal/Criminal Cases      0.48     1      0.02      0.62       0.15     0        0.22  0.24  0.09
Natural Disasters         1        0.27   0         0.04       0.04     0        0.25  0.04  0.02
Violence or War           1        0.36   0.02      0.14       0.02     0.04     0.21  0.11  0.02
Science and Discovery     0.11     1      0.01      0.22       0.08     0.12     0.19  0.08  0.03
Finances                  1        0.45   0.04      0.98       0.13     0.02     0.29  0.06  0.05
Sports                    0.16     0.27   0.01      1          0.02     0        0.11  0.03  0.01

Table 4.
NED results on TDT3

Systems      Norm(C_Det)   Min Norm(C_Det)   Cmp times
System-1        0.6159         0.5809         7.04e+8
System-2 ①      0.6493         0.6308         1.29e+8
System-3 ②      0.6197         0.5868         1.03e+8
System-4 ②      0.5601         0.5341         1.03e+8
System-5 ②      0.5413         0.5012         1.05e+8
System-7          --           0.5783            -
System-8          --           0.5229            -

① θ_new = 0.13
② θ_init = 0.13, λ = 3, δ = 0.15

Figure 5 shows the five DET curves for our systems on the TDT3 data set. System-5 achieves the minimum cost at a false alarm rate of 0.0157 and a miss rate of 0.4310. We can observe that system-4 and system-5 obtain lower miss probabilities in the regions of low false alarm probabilities. The hypothesis is that more weight is transferred from non-key terms to the key terms of topics, so the similarity score between two stories belonging to different topics is lower than before, because their overlapping terms are usually not key terms of their topics.
7.2 Parameter Selection for Indexing-Tree Detection
Figure 3 shows the minimum normalized costs obtained by system-3 on TDT3 using different parameters. The θ_init parameter is tested on six values spanning from 0.03 to 0.18, and the λ parameter is tested on four values: 1, 2, 3 and 4. We can see that when θ_init is set to 0.12, which is the value closest to θ_new, the costs are lower than the others. This is easy to explain: when stories belonging to the same topic are put in one cluster, it is more reasonable for the cluster to represent the stories in it. When the parameter λ is set to 3 or 4, the costs are better than in the other cases, but there is not much difference between 3 and 4.
Figure 3. Min Cost on TDT3 (δ = 0.15)
Figure 4.
Comparing times on TDT3 (δ = 0.15)
Figure 4 gives the comparing times used by system-3 on TDT3 with the same parameters as figure 3. The comparing times depend strongly on θ_init: the greater θ_init is, the fewer stories are combined together, and the more comparing times are needed for the new-event decision.
So we use θ_init = 0.13, λ = 3, δ = 0.15 for systems 3, 4 and 5. With this parameter setting, we get both low minimum normalized costs and few comparing times.
8. CONCLUSION
We have proposed a news indexing-tree based detection procedure in our model. It reduces the comparing times to about one seventh of the traditional method without hurting NED accuracy. We have also presented two extensions to the basic TF-IDF model. The first extension adjusts term weights based on the difference between term distributions in the whole corpus and in a cluster story set. The second extension makes better use of term types (named entity types and part-of-speech) according to news categories. Our experimental results on the TDT2 and TDT3 datasets show that both extensions contribute significantly to improvements in accuracy.
We did not consider news time information as a clue for the NED task, since most topics last for a long time and the TDT data sets span only a relatively short period (no more than 6 months). For future work, we want to collect news sets that span a longer period from the Internet, and integrate time information into the NED task. Since a topic is a relatively coarse-grained news cluster, we also want to refine the cluster granularity to the event level, and identify different events and their relations within a topic.
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant No. 90604025. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsor.
9.
REFERENCES
[1] http://www.nist.gov/speech/tests/tdt/index.htm
[2] J. Allan, editor. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2002.
Figure 5. DET curves on TDT3
[3] Y. Yang, J. Carbonell, R. Brown, T. Pierce, B. T. Archibald, and X. Liu. Learning Approaches for Detecting and Tracking News Events. In IEEE Intelligent Systems Special Issue on Applications of Intelligent Information Retrieval, 14(4), 1999, 32-43.
[4] Y. Yang, T. Pierce, and J. Carbonell. A Study on Retrospective and On-line Event Detection. In Proceedings of SIGIR-98, Melbourne, Australia, 1998, 28-36.
[5] J. Allan, V. Lavrenko, D. Malin, and R. Swan. Detections, Bounds, and Timelines: UMass and TDT-3. In Proceedings of the Topic Detection and Tracking Workshop (TDT-3), Vienna, VA, 2000, 167-174.
[6] R. Papka and J. Allan. On-line New Event Detection Using Single Pass Clustering. Technical Report UM-CS-1998-021, 1998.
[7] W. Lam, H. Meng, K. Wong, and J. Yen. Using Contextual Analysis for News Event Detection. International Journal on Intelligent Systems, 2001, 525-546.
[8] T. Brants, F. Chen, and A. Farahat. A System for New Event Detection. In Proceedings of the 26th Annual International ACM SIGIR Conference, New York, NY, USA, ACM Press, 2003, 330-337.
[9] N. Stokes and J. Carthy. Combining Semantic and Syntactic Document Classifiers to Improve First Story Detection. In Proceedings of the 24th Annual International ACM SIGIR Conference, New York, NY, USA, ACM Press, 2001, 424-425.
[10] Y. Yang, J.
Zhang, J. Carbonell, and C. Jin. Topic-conditioned Novelty Detection. In Proceedings of the 8th ACM SIGKDD International Conference, ACM Press, 2002, 688-693.
[11] J. Makkonen, H. Ahonen-Myka, and M. Salmenkivi. Applying Semantic Classes in Event Detection and Tracking. In Proceedings of the International Conference on Natural Language Processing (ICON 2002), 2002, 175-183.
[12] J. Makkonen, H. Ahonen-Myka, and M. Salmenkivi. Simple Semantics in Topic Detection and Tracking. Information Retrieval, 7(3-4), 2004, 347-368.
[13] G. Kumaran and J. Allan. Text Classification and Named Entities for New Event Detection. In Proceedings of the 27th Annual International ACM SIGIR Conference, New York, NY, USA, ACM Press, 2004, 297-304.
[14] J. P. Callan, W. B. Croft, and S. M. Harding. The INQUERY Retrieval System. In Proceedings of DEXA-92, 3rd International Conference on Database and Expert Systems Applications, 1992, 78-83.
[15] R. Krovetz. Viewing Morphology as an Inference Process. In Proceedings of ACM SIGIR'93, 1993, 61-81.
[16] Y. Yang and J. Pedersen. A Comparative Study on Feature Selection in Text Categorization. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML'97), Morgan Kaufmann, 1997, 412-420.
[17] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[18] The Linguistic Data Consortium, http://www.ldc.upenn.edu/.
[19] The 2001 TDT task definition and evaluation plan, http://www.nist.gov/speech/tests/tdt/tdt2001/evalplan.htm.
[20] R. E. Schapire and Y. Singer. BoosTexter: A Boosting-based System for Text Categorization. Machine Learning, 39(2/3), 2000, 135-168.
[21] G. Kumaran and J. Allan. Using Names and Topics for New Event Detection.
In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), Vancouver, 2005, 121-128.
Robust Classification of Rare Queries Using Web Knowledge

ABSTRACT
We propose a methodology for building a practical robust query classification system that can identify thousands of query classes with reasonable accuracy, while dealing in real time with the query volume of a commercial web search engine. We use a blind feedback technique: given a query, we determine its topic by classifying the web search results retrieved by the query. Motivated by the needs of search advertising, we primarily focus on rare queries, which are the hardest from the point of view of machine learning, yet in aggregation account for a considerable fraction of search engine traffic. Empirical evaluation confirms that our methodology yields a considerably higher classification accuracy than previously reported. We believe that the proposed methodology will lead to better matching of online ads to rare queries and overall to a better user experience.

1. INTRODUCTION
In its 12-year lifetime, web search has grown tremendously: it has simultaneously become a factor in the daily life of maybe a billion people and at the same time an eight billion dollar industry fueled by web advertising. One thing, however, has remained constant: people use very short queries. Various studies estimate the average length of a search query between 2.4 and 2.7 words, which by all accounts can carry only a small amount of information. Commercial search engines do a remarkably good job in interpreting these short strings, but they are not (yet!)
omniscient. Therefore, using additional external knowledge to augment the queries can go a long way in improving the search results and the user experience.
At the same time, a better understanding of query meaning has the potential of boosting the economic underpinning of web search, namely online advertising, via the sponsored search mechanism that places relevant advertisements alongside search results. For instance, knowing that the query "SD450" is about cameras while "nc4200" is about laptops can obviously lead to more focused advertisements, even if no advertiser has specifically bid on these particular queries.
In this study we present a methodology for query classification, where our aim is to classify queries onto a commercial taxonomy of web queries with approximately 6000 nodes. Given such classifications, one can directly use them to provide better search results as well as more focused ads. The problem of query classification is extremely difficult owing to the brevity of queries. Observe, however, that in many cases a human looking at a search query and the search query results does remarkably well in making sense of it. Of course, the sheer volume of search queries does not lend itself to human supervision, and therefore we need alternate sources of knowledge about the world. For instance, in the example above, "SD450" brings pages about Canon cameras, while "nc4200" brings pages about Compaq laptops, hence to a human the intent is quite clear.
Search engines index colossal amounts of information, and as such can be viewed as very comprehensive repositories of knowledge. Following the heuristic described above, we propose to use the search results themselves to gain additional insights for query interpretation. To this end, we employ the pseudo relevance feedback paradigm, and assume the top search results to be relevant to the query.
Certainly, not all results are equally relevant, and thus we use elaborate voting schemes in order to obtain reliable knowledge about the query. For the purpose of this study, we first dispatch the given query to a general web search engine and collect a number of the highest-scoring URLs. We crawl the web pages pointed to by these URLs, and classify these pages. Finally, we use these result-page classifications to classify the original query. Our empirical evaluation confirms that using web search results in this manner yields substantial improvements in the accuracy of query classification.
Note that in a practical implementation of our methodology within a commercial search engine, all indexed pages can be pre-classified using the normal text-processing and indexing pipeline. Thus, at run time we only need to run the voting procedure, without doing any crawling or classification. This additional overhead is minimal, and therefore the use of search results to improve query classification is entirely feasible at run time.
Another important aspect of our work lies in the choice of queries. The volume of queries in today's search engines follows the familiar power law, where a few queries appear very often while most queries appear only a few times. While individual queries in this long tail are rare, together they account for a considerable mass of all searches. Furthermore, the aggregate volume of such queries provides a substantial opportunity for income through online advertising.(1)
Searching and advertising platforms can be trained to yield good results for frequent queries, including auxiliary data such as maps, shortcuts to related structured information, successful ads, and so on. However, the tail queries simply do not have enough occurrences to allow statistical learning on a per-query basis. Therefore, we need to aggregate such queries in some way, and to reason at the level of aggregated query clusters.
A natural choice for such aggregation is to classify the queries into a topical taxonomy. Knowing which taxonomy nodes are most relevant to the given query will allow us to provide the same type of support for rare queries as for frequent queries. Consequently, in this work we focus on the classification of rare queries, whose correct classification is likely to be particularly beneficial.
Early studies in query interpretation focused on query augmentation through external dictionaries [22]. More recent studies [18, 21] also attempted to gather some additional knowledge from the web. However, these studies had a number of shortcomings, which we overcome in this paper. Specifically, earlier works in the field used very small query classification taxonomies of only a few dozen nodes, which do not allow ample specificity for online advertising [11]. They also used a separate ancillary taxonomy for web documents, so that an extra level of indirection had to be employed to establish the correspondence between the ancillary and the main taxonomies [18].
The main contributions of this paper are as follows. First, we build the query classifier directly for the target taxonomy, instead of using a secondary auxiliary structure; this greatly simplifies taxonomy maintenance and development. The taxonomy used in this work is two orders of magnitude larger than those used in prior studies. The empirical evaluation demonstrates that our methodology for using external knowledge achieves greater improvements than those previously reported. Since our taxonomy is considerably larger, the classification problem we face is much more difficult, making the improvements we achieve particularly notable. We also report the results of a thorough empirical study of different voting schemes and different depths of knowledge (e.g., using search summaries vs. entire crawled pages).
We found that crawling the search results yields deeper knowledge and leads to greater improvements than mere summaries. This result is in contrast with prior findings in query classification [20], but is supported by research in mainstream text classification [5].
2. METHODOLOGY
Our methodology has two main phases. In the first phase, we construct a document classifier for classifying search results into the same taxonomy into which queries are to be classified. In the second phase, we develop a query classifier that invokes the document classifier on search results, and uses the latter to perform query classification.
(1) In the above examples, SD450 and nc4200 represent fairly old gadget models, and hence there are advertisers placing ads on these queries. However, in this paper we mainly deal with rare queries, which are extremely difficult to match to relevant ads.
2.1 Building the document classifier
In this work we used a commercial classification taxonomy of approximately 6000 nodes used in a major US search engine (see Section 3.1). Human editors populated the taxonomy nodes with labeled examples that we used as training instances to learn a document classifier in phase 1.
Given a taxonomy of this size, the computational efficiency of classification is a major issue. Few machine learning algorithms can efficiently handle so many different classes, each having hundreds of training examples. Suitable candidates include the nearest neighbor and the Naive Bayes classifier [3], as well as prototype formation methods such as Rocchio [15] or centroid-based [7] classifiers. A recent
A recent study [5] showed centroid-based classifiers to be both effective and efficient for large-scale taxonomies, and consequently we used a centroid classifier in this work.
2.2 Query classification by search
Having developed a document classifier for the query taxonomy, we now turn to the problem of obtaining a classification for a given query based on the initial search results it yields. Let us assume that there is a set of documents D = {d1, . . . , dm} indexed by a search engine. The search engine can then be represented by a function f = similarity(q, d) that quantifies the affinity between a query q and a document d. Examples of such affinity scores used in this paper are rank (the rank of the document in the ordered list of search results), static score (the query-independent goodness of the page, e.g., PageRank), and dynamic score (the closeness of the query and the document).
Query classification is determined by first evaluating the conditional probabilities of all possible classes, P(Cj|q), and then selecting the alternative with the highest probability, Cmax = arg max_{Cj∈C} P(Cj|q). Our goal is to estimate the conditional probability of each possible class using the search results initially returned by the query. We use the following formula that incorporates the classifications of individual search results:

P(Cj|q) = Σ_{d∈D} P(Cj|q, d) · P(d|q) = Σ_{d∈D} [P(q|Cj, d) / P(q|d)] · P(Cj|d) · P(d|q).

We assume that P(q|Cj, d) ≈ P(q|d), that is, the probability of a query given a document can be determined without knowing the class of the query. This is the case for the majority of queries, which are unambiguous. Counterexamples are queries like "jaguar" (animal and car brand) or "apple" (fruit and computer manufacturer), but such ambiguous queries cannot be classified by definition, and usually consist of common words.
In this work we concentrate on rare queries, which tend to contain rare words, to be longer, and to match fewer documents; consequently, in our setting this assumption mostly holds. Using this assumption, we can write P(Cj|q) = Σ_{d∈D} P(Cj|d) · P(d|q). The conditional probability of a classification for a given document, P(Cj|d), is estimated using the output of the document classifier (Section 2.1). While P(d|q) is harder to compute, we consider the underlying relevance model for ranking documents given a query. This issue is further explored in the next section.
2.3 Classification-based relevance model
In order to describe a formal relationship between classification and ad placement (or search), we consider a model for using classification to determine ad (or search) relevance. Let a be an ad and q be a query; we denote by R(a, q) the relevance of a to q. This number indicates how relevant the ad a is to the query q, and can be used to rank ads a for a given query q. In this paper, we consider the following approximation of the relevance function:

R(a, q) ≈ RC(a, q) = Σ_{Cj∈C} w(Cj) s(Cj, a) s(Cj, q). (1)

The right-hand side expresses how we use the classification scheme C to rank ads, where s(c, a) is a scoring function that specifies how likely a is in class c, and s(c, q) is a scoring function that specifies how likely q is in class c. The value w(c) is a weighting term for category c, indicating the importance of category c in the relevance formula.
This relevance function is an adaptation of the traditional word-based retrieval rules. For example, we may let the categories be the words in the vocabulary.
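The class-based relevance function of Eq. (1) is simply a weighted inner product over categories; a minimal sketch follows, in which the score dictionaries and category names are illustrative values, not data from the paper.

```python
def class_relevance(s_ad, s_query, w):
    """R_C(a, q) = sum over categories c of w(c) * s(c, a) * s(c, q).
    Categories absent from either score map contribute nothing."""
    return sum(w.get(c, 0.0) * sa * s_query.get(c, 0.0)
               for c, sa in s_ad.items())

# Illustrative scores over three hypothetical taxonomy nodes.
s_ad = {"autos": 0.8, "finance": 0.1}
s_query = {"autos": 0.6, "travel": 0.3}
w = {"autos": 1.0, "finance": 1.0, "travel": 1.0}
print(round(class_relevance(s_ad, s_query, w), 2))  # → 0.48
```

Note that only the categories shared by the ad and the query contribute to the score, which is what makes the representation useful for matching rare queries to ads through topics rather than words.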
We take s(Cj, a) as the word count of Cj in a, s(Cj, q) as the word count of Cj in q, and w(Cj) as the IDF term weight for word Cj. With these choices, the method given by (1) becomes the standard TFIDF retrieval rule.
If we take s(Cj, a) = P(Cj|a), s(Cj, q) = P(Cj|q), and w(Cj) = 1/P(Cj), and assume that q and a are independently generated given a hidden concept C, then we have

RC(a, q) = Σ_{Cj∈C} P(Cj|a) P(Cj|q) / P(Cj) = Σ_{Cj∈C} P(Cj|a) P(q|Cj) / P(q) = P(q|a) / P(q).

That is, the ads are ranked according to P(q|a). This relevance model has been employed in various statistical language modeling techniques for information retrieval. The intuition can be described as follows. We assume that a person searches for an ad a by constructing a query q: the person first picks a concept Cj according to the weights P(Cj|a), and then constructs a query q with probability P(q|Cj) based on the concept Cj. Under this query generation process, the ads can be ranked based on how likely the observed query is to have been generated from each ad.
It should be mentioned that in our case, each query and ad can have multiple categories. For simplicity, we denote by Cj a random variable indicating whether q belongs to category Cj. We use P(Cj|q) to denote the probability of q belonging to category Cj. Here the sum Σ_{Cj∈C} P(Cj|q) may not equal one. We then consider the following ranking formula:

RC(a, q) = Σ_{Cj∈C} P(Cj|a) P(Cj|q). (2)

We assume the estimation of P(Cj|a) is based on an existing (known) text-categorization system. Thus, we only need to obtain estimates of P(Cj|q) for each query q.
Equation (2) is the ad relevance model that we consider in this paper, with unknown parameters P(Cj|q) for each query q. In order to obtain their estimates, we use search results from major US search engines, where we assume that the ranking formula in (2) gives good ranking for search.
That is, the top results ranked by the search engine should also be ranked high by this formula. Therefore, given a query q and the top K result pages d1(q), . . . , dK(q) from a major search engine, we fit the parameters P(Cj|q) so that RC(di(q), q) have high scores for i = 1, . . . , K. It is worth mentioning that using this method we can only compute the relative strengths of P(Cj|q), but not their scale, because scale does not affect ranking. Moreover, it is possible that the parameters estimated are of the form g(P(Cj|q)) for some monotone function g(·) of the actual conditional probability P(Cj|q). Although this may change the meaning of the unknown parameters that we estimate, it does not affect the quality of using the formula to rank ads. Nor does it affect query classification with appropriately chosen thresholds. In what follows, we consider two methods to compute the classification information P(Cj|q).
2.4 The voting method
We would like to compute P(Cj|q) so that RC(di(q), q) are high for i = 1, . . . , K and RC(d, q) is low for a random document d. If we assume that the vector [P(Cj|d)]_{Cj∈C} is random for an average document, then the condition that Σ_{Cj∈C} P(Cj|q)² is small implies that RC(d, q) is also small averaged over d. Thus, a natural method is to maximize Σ_{i=1}^{K} wi RC(di(q), q) subject to Σ_{Cj∈C} P(Cj|q)² being small, where the wi are weights associated with each rank i:

max_{[P(·|q)]} [ (1/K) Σ_{i=1}^{K} wi Σ_{Cj∈C} P(Cj|di(q)) P(Cj|q) − λ Σ_{Cj∈C} P(Cj|q)² ],

where we assume Σ_{i=1}^{K} wi = 1, and λ > 0 is a tuning regularization parameter. The optimal solution is

P(Cj|q) = (1/2λ) Σ_{i=1}^{K} wi P(Cj|di(q)).

Since both P(Cj|di(q)) and P(Cj|q) belong to [0, 1], we may just take λ = 0.5 to align the scales. In the experiments, we simply take uniform weights wi.
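The voting estimator above reduces, with uniform weights and λ = 0.5, to averaging the per-result class distributions; a minimal sketch follows (the per-page distributions are hypothetical outputs of the document classifier, and only the relative values of the estimates matter for ranking):

```python
def vote_query_classes(result_classifications, lam=0.5):
    """Estimate P(Cj|q) from the classifications of the top-K search
    results: P(Cj|q) = (1 / (2*lam)) * sum_i w_i * P(Cj|d_i(q)),
    with uniform weights w_i = 1/K. With lam = 0.5 this is a plain
    average, which keeps the estimates in [0, 1]."""
    K = len(result_classifications)
    scores = {}
    for page_classes in result_classifications:
        for c, p in page_classes.items():
            scores[c] = scores.get(c, 0.0) + p / K
    return {c: s / (2.0 * lam) for c, s in scores.items()}

# Hypothetical P(Cj|d) for the top 3 results of some rare query.
results = [{"autos": 0.9, "travel": 0.1},
           {"autos": 0.7},
           {"autos": 0.8, "finance": 0.2}]
p_q = vote_query_classes(results)
print(max(p_q, key=p_q.get))  # → autos
```

The dominant class of the individual result pages wins the vote even when a few pages disagree, which is the robustness property exploited throughout the evaluation.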
A more complex strategy is to let w depend on d as well:

P(Cj|q) = Σ_d w(d, q) g(P(Cj|d)),

where g(x) is a certain transformation of x.
In this general formulation, w(d, q) may depend on factors other than the rank of d in the search engine results for q. For example, it may be a function of r(d, q), where r(d, q) is the relevance score returned by the underlying search engine. Moreover, if we are given a set of hand-labeled training category/query pairs (C, q), then both the weights w(d, q) and the transformation g(·) can be learned using standard classification techniques.
2.5 Discriminative classification
We can treat the problem of estimating P(Cj|q) as a classification problem, where for each q we label di(q) for i = 1, . . . , K as positive data, and the remaining documents as negative data. That is, we assign the label yi(q) = 1 to di(q) when i ≤ K, and the label yi(q) = −1 to di(q) when i > K.
In this setting, the classification scoring rule for a document di(q) is linear. Let xi(q) = [P(Cj|di(q))] and w = [P(Cj|q)]; then Σ_{Cj∈C} P(Cj|q) P(Cj|di(q)) = w · xi(q). The values P(Cj|d) are the features for the linear classifier, and [P(Cj|q)] is the weight vector, which can be computed using any linear classification method. In this paper, we consider estimating w using logistic regression [17] as follows:

P(·|q) = arg min_w Σ_i ln(1 + e^(−w·xi(q) yi(q))).

Figure 1: Number of categories by level (x-axis: taxonomy level, 0 to 10; y-axis: number of categories).
3. EVALUATION
In this section, we evaluate our methodology that uses Web search results for improving query classification.
3.1 Taxonomy
Our choice of taxonomy was guided by a Web advertising application. Since we want the classes to be useful for matching ads to queries, the taxonomy needs to be elaborate enough to facilitate ample classification specificity.
For example, classifying all medical queries into one node will likely result in poor ad matching, as both sore foot and flu queries will end up in the same node. The ads appropriate for these two queries are, however, very different. To avoid such situations, the taxonomy needs to provide sufficient discrimination between common commercial topics. Therefore, in this paper we employ an elaborate taxonomy of approximately 6000 nodes, arranged in a hierarchy with median depth 5 and maximum depth 9. Figure 1 shows the distribution of categories by taxonomy level. Human editors populated the taxonomy with labeled queries (approx. 150 queries per node), which were used as a training set; a small fraction of queries were assigned to more than one category.
3.2 Digression: the basics of sponsored search
To discuss our set of evaluation queries, we need a brief introduction to some basic concepts of Web advertising. Sponsored search (or paid search) advertising places textual ads on the result pages of Web search engines, with the ads being driven by the originating query. All major search engines (Google, Yahoo!, and MSN) support such ads and act simultaneously as a search engine and an ad agency. These textual ads are characterized by one or more bid phrases representing those queries for which the advertisers would like to have their ad displayed. (The name bid phrase comes from the fact that advertisers bid various amounts to secure their position in the tower of ads associated with a query. A discussion of bidding and placement mechanisms is beyond the scope of this paper [13].)
However, many searches do not explicitly use phrases that someone bids on. Consequently, advertisers also buy broad matches, that is, they pay to place their advertisements on queries that constitute some modification of the desired bid phrase.
In broad match, several syntactic modifications can be applied to the query to match it to the bid phrase, e.g., dropping or adding words, synonym substitution, etc. These transformations are based on rules and dictionaries. As advertisers tend to cover high-volume and high-revenue queries, broad-match queries fall into the tail of the distribution with respect to both volume and revenue.
3.3 Data sets
We used two representative sets of 1000 queries each. Both sets contain queries that cannot be directly matched to advertisements, that is, none of the queries contains a bid phrase (this means we eliminated practically all popular queries). The first set of queries can be matched to at least one ad using broad match as described above. Queries in the second set cannot be matched even by broad match, and therefore the search engine used in our study does not currently display any advertising for them. In a sense, these are even rarer queries, further away from common queries. As a measure of query rarity, we estimated their frequency in a month's worth of query logs for a major US search engine; the median frequency was 1 for queries in Set 1 and 0 for queries in Set 2.
The queries in the two sets differ in their classification difficulty. In fact, queries in Set 2 are difficult to interpret even for human evaluators. Queries in Set 1 have on average 3.50 words, with the longest one having 11 words; queries in Set 2 have on average 4.39 words, with the longest query having 81 words. Recent studies estimate the average length of web queries to be just under 3 words², which is lower than in our test sets. As another measure of query difficulty, we measured the fraction of queries that contain quotation marks, as the latter assist query interpretation by meaningfully grouping the words.
Only 8% of the queries in Set 1 and 14% in Set 2 contained quotation marks.
3.4 Methodology and evaluation metrics
The two sets of queries were classified into the target taxonomy using the techniques presented in Section 2. Based on the confidence values assigned, the top 3 classes for each query were presented to human evaluators. These evaluators were trained editorial staff who possessed knowledge about the taxonomy. The editors considered every query-class pair and rated it on a scale of 1 to 4, with 1 meaning the classification is highly relevant and 4 meaning it is irrelevant for the query. About 2.4% of the queries in Set 1 and 5.4% of the queries in Set 2 were judged to be unclassifiable (e.g., random strings of characters), and were consequently excluded from evaluation. To compute the evaluation metrics, we treated classifications with ratings 1 and 2 as correct, and those with ratings 3 and 4 as incorrect.
We used standard evaluation metrics: precision, recall and F1. In what follows, we plot precision-recall graphs for all the experiments. For comparison with other published studies, we also report precision and F1 values corresponding to complete recall (R = 1). Owing to the lack of space, we only show graphs for query Set 1; however, we show the numerical results for both sets in the tables.
3.5 Results
We compared our method to a baseline query classifier that does not use any external knowledge.
Our baseline classifier expanded queries using standard query expansion techniques, grouped their terms using a phrase recognizer, boosted certain phrases in the query based on their statistical properties, and performed classification using the nearest-neighbor approach. This baseline classifier is actually a production version of the query classifier running in a major US search engine.
² http://www.rankstat.com/html/en/seo-news1-most-peopleuse-2-word-phrases-in-search-engines.html
In our experiments, we varied the values of pertinent parameters that characterize the exact way of using search results. In what follows, we start with a general assessment of the effect of using Web search results. We then proceed to exploring more refined techniques, such as using only search summaries versus actually crawling the returned URLs. We also experimented with using different numbers of search results per query, as well as with varying the number of classifications considered for each search result. For lack of space, we only show graphs for Set 1 queries and omit the graphs for Set 2 queries, which exhibit similar phenomena.
3.5.1 The effect of external knowledge
Queries by themselves are very short and difficult to classify. We use the top search engine results for collecting background knowledge for queries. We employed two major US search engines, and used their results in two ways: either only the summaries, or the full text of the crawled result pages. Figure 2 and Table 1 show that such extra knowledge considerably improves classification accuracy. Interestingly, we found that search engine A performs consistently better with full-page text, while search engine B performs better when summaries are used.
Figure 2: The effect of external knowledge (precision-recall curves for the baseline and for engines A and B, full-page vs. summary).

Engine    Context    Prec. (Set 1)  F1 (Set 1)  Prec. (Set 2)  F1 (Set 2)
A         full-page  0.72           0.84        0.509          0.721
B         full-page  0.706          0.827       0.497          0.665
A         summary    0.586          0.744       0.396          0.572
B         summary    0.645          0.788       0.467          0.638
Baseline             0.534          0.696       0.365          0.536
Table 1: The effect of using external knowledge

3.5.2 Aggregation techniques
There are two major ways to use search results as additional knowledge. First, individual results can be classified separately, with subsequent voting among the individual classifications. Alternatively, individual search results can be bundled together as one meta-document and classified as such using the document classifier. Figure 3 presents the results of these two approaches. When full-text pages are used, the technique using individual classifications of search results evidently outperforms the bundling approach by a wide margin. However, in the case of summaries, bundling is found to be consistently better than individual classification. This is because summaries by themselves are too short to be classified correctly individually, but when bundled together they are much more stable.
Figure 3: Voting vs. bundling (precision-recall curves for the baseline and for bundled/voting variants, full-page and summary).
3.5.3 Full page text vs. summary
To summarize the two preceding sections, background knowledge for each query is obtained by using either the full-page text or only the summaries of the top search results. Full-page text was found to be more useful in conjunction with voted classification, while summaries were found to be useful when bundled together. The best results overall were obtained with full-page results classified individually, with subsequent voting used to determine the final query classification. This observation differs from the findings of Shen et al. [20], who found summaries to be more useful.
We attribute this distinction to the fact that the queries we used in this study are tail queries, which are rare and difficult to classify.
3.5.4 Varying the number of classes per search result
We also varied the number of classifications per search result, i.e., each result was permitted to have either 1, 3, or 5 classes. Figure 4 shows the corresponding precision-recall graphs for both the full-page and the summary-only settings. As can be readily seen, all three variants produce very similar results. However, the precision-recall curve for the 1-class experiment has higher fluctuations. Using 3 classes per search result yields a more stable curve, while with 5 classes per result the precision-recall curve is very smooth. Thus, as we increase the number of classes per result, we observe higher stability in query classification.
Figure 4: Varying the number of classes per page (precision-recall curves for 1, 3, and 5 classes, full-page and summary).
3.5.5 Varying the number of search results obtained
We also experimented with different numbers of search results per query. Figure 5 and Table 2 present the results of this experiment. In line with our intuition, we observed that classification accuracy steadily rises as we increase the number of search results used from 10 to 40, with a slight drop as we continue to use even more results (50). This is because using too few search results provides too little external knowledge, while using too many results introduces extra noise.
Figure 5: Varying the number of results per query (precision-recall curves for 10 to 50 results and the baseline).
Using a paired t-test, we assessed the statistical significance of the improvements due to our methodology versus the baseline.
We found the results to be highly significant (p < 0.0005), thus confirming the value of external knowledge for query classification.
3.6 Voting versus alternative methods
As explained in Section 2.2, one may use several methods to classify queries from search engine results based on our relevance model. As we have seen, the voting method works quite well. In this section, we compare the performance of voting over the top-ten search results to the following two methods:
• A: Discriminative learning of the query classification based on logistic regression, described in Section 2.5.
• B: Learning weights based on the quality score returned by a search engine. We discretize the quality score s(d, q) of a query/document pair into {high, medium, low}, learn the three weights w on a set of training queries, and test the performance on holdout queries. The classification formula, as explained at the end of Section 2.4, is P(Cj|q) = Σ_d w(s(d, q)) P(Cj|d).
Method B requires a training/testing split. Neither voting nor method A requires such a split; however, for consistency, we randomly drew 50-50 training/testing splits ten times, and report the mean performance ± standard deviation on the test split for all three methods. For this experiment, instead of precision and recall, we use DCG-k (k = 1, 5), a metric popular in search engine evaluation.
The DCG (discounted cumulated gain) metric, described in [8], is a ranking measure where the system is asked to rank a set of candidates (in our case, the judged categories for each query), and computes for each query q:

DCG_k(q) = Σ_{i=1}^{k} g(Ci(q)) / log2(i + 1),

where Ci(q) is the i-th category for query q as ranked by the system, and g(Ci) is the grade of Ci: we assign grades of 10, 5, 1, and 0 to the 4-point judgment scale described earlier to compute DCG. The decaying factor log2(i + 1) is conventional and has no particular importance. The overall DCG of a system is the DCG averaged over queries. We use this metric instead of precision/recall in this experiment because it can directly handle multi-grade output; as a single number, it is therefore convenient for comparing the methods. Note that the precision/recall curves used in the earlier sections yield some additional insights not immediately apparent from the DCG numbers.

Number of results  Precision  F1
baseline           0.534      0.696
10                 0.706      0.827
20                 0.751      0.857
30                 0.796      0.886
40                 0.807      0.893
50                 0.798      0.887
Table 2: Varying the number of search results

Set 1
Method    DCG-1         DCG-5
Oracle    7.58 ± 0.19   14.52 ± 0.40
Voting    5.28 ± 0.15   11.80 ± 0.31
Method A  5.48 ± 0.16   12.22 ± 0.34
Method B  5.36 ± 0.18   12.15 ± 0.35
Set 2
Method    DCG-1         DCG-5
Oracle    5.69 ± 0.18   9.94 ± 0.32
Voting    3.50 ± 0.17   7.80 ± 0.28
Method A  3.63 ± 0.23   8.11 ± 0.33
Method B  3.55 ± 0.18   7.99 ± 0.31
Table 3: Voting and alternative methods

Results from our experiments are given in Table 3. The oracle method is the best ranking of categories for each query after seeing the human judgments. It cannot be achieved by any realistic algorithm, but is included here as an absolute upper bound on DCG performance. The simple voting method performs very well in our experiments.
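The DCG-k computation just defined can be sketched in a few lines; the grade mapping follows the 10/5/1/0 assignment above, while the example judgment list is hypothetical.

```python
import math

# Grade assigned to each point of the 4-point editorial judgment scale.
GRADE = {1: 10, 2: 5, 3: 1, 4: 0}

def dcg_at_k(judgments, k):
    """DCG_k = sum_{i=1..k} g(C_i) / log2(i + 1), where `judgments`
    lists the editorial ratings of the system's top-ranked categories."""
    return sum(GRADE[j] / math.log2(i + 1)
               for i, j in enumerate(judgments[:k], start=1))

# Hypothetical query whose top 3 predicted categories were judged 1, 2, 4.
print(round(dcg_at_k([1, 2, 4], k=1), 2))  # → 10.0
print(round(dcg_at_k([1, 2, 4], k=5), 2))  # → 13.15
```

Because the grades are multi-valued, a single DCG number rewards both placing highly relevant categories first and avoiding irrelevant ones, which is what makes it convenient for comparing the methods in Table 3.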
The more complicated methods may lead to a moderate performance gain (especially method A, which uses the discriminative training of Section 2.5). However, both methods are computationally more costly, and the potential gain is small enough to be neglected. This means that, as a simple method, voting is quite effective.
We can observe that method B, which uses the quality score returned by a search engine to adjust the importance weights of the returned pages for a query, does not yield an appreciable improvement. This implies that putting equal weights on the top search results (voting) performs similarly to putting higher weights on higher-quality documents and lower weights on lower-quality documents (method B). It may be possible to improve this method by including other page features that can differentiate the top-ranked search results; however, the effectiveness of such features requires further investigation, which we did not pursue. We may also observe that the performance on Set 2 is lower than that on Set 1, which means that the queries in Set 2 are harder than those in Set 1.
3.7 Failure analysis
We scrutinized the cases where external knowledge did not improve query classification, and identified three main causes for the lack of improvement. (1) Queries containing random strings, such as telephone numbers; these queries do not yield coherent search results, and so the latter cannot help classification (around 5% of queries were of this kind). (2) Queries that yield no search results at all; there were 8% such queries in Set 1 and 15% in Set 2. (3) Queries corresponding to recent events, for which the search engine did not yet have ample coverage (around 5% of queries). One notable example of such queries is entire names of news articles: if the exact article has not yet been indexed by the search engine, the search results are likely to be of little use.
4.
RELATED WORK
Even though the average length of search queries is steadily increasing over time, a typical query is still shorter than 3 words. Consequently, many researchers have studied possible ways to enhance queries with additional information.
One important direction in enhancing queries is query expansion. This can be done either using electronic dictionaries and thesauri [22], or via relevance feedback techniques that make use of a few top-scoring search results. Early work in information retrieval concentrated on manually reviewing the returned results [16, 15]. However, the sheer volume of queries nowadays does not lend itself to manual supervision, and hence subsequent works focused on blind relevance feedback, which basically assumes the top returned results to be relevant [23, 12, 4, 14].
More recently, studies in query augmentation have focused on the classification of queries, assuming such classifications to be beneficial for more focused query interpretation. Indeed, Kowalczyk et al. [10] found that using query classes improved the performance of document retrieval.
Studies in the field pursue different approaches to obtaining additional information about queries. Beitzel et al. [1] used semi-supervised learning as well as unlabeled data [2]. Gravano et al. [6] classified queries with respect to geographic locality in order to determine whether their intent is local or global.
The 2005 KDD Cup on web query classification inspired yet another line of research, which focused on enriching queries using Web search engines and directories [11, 18, 20, 9, 21]. The KDD task specification provided a small taxonomy (67 nodes) along with a set of labeled queries, and posed the challenge of using this training data to build a query classifier. Several teams used the Web to enrich the queries and provide more context for classification.
The main research questions of this approach are (1) how to build a document classifier, (2) how to translate its classifications into the target taxonomy, and (3) how to determine the query class based on the document classifications.
The winning solution of the KDD Cup [18] proposed using an ensemble of classifiers in conjunction with searching multiple search engines. To address issue (1) above, their solution used the Open Directory Project (ODP) to produce an ODP-based document classifier. The ODP hierarchy was then mapped into the target taxonomy using word matches at individual nodes. A document classifier was built for the target taxonomy by using the pages in the ODP taxonomy that appear in the nodes mapped to the particular target node. Thus, Web documents were first classified with respect to the ODP hierarchy, and their classifications were subsequently mapped to the target taxonomy for query classification.
Compared to this approach, we solved the problem of document classification directly in the target taxonomy by using the queries to produce a document classifier, as described in Section 2. This simplifies the process and removes the need for mapping between taxonomies; it also streamlines taxonomy maintenance and development. Using this approach, we were able to achieve good performance with a very large-scale taxonomy. We also evaluated a few alternative ways to combine individual document classifications when actually classifying the query.
In a follow-up paper [19], Shen et al. proposed a framework for query classification based on bridging between two taxonomies. In this approach, the problem of not having a document classifier for web results is solved by using a training set available for documents in a different taxonomy. For this, an intermediate taxonomy with a training set (ODP) is used.
Then several schemes are tried that establish a correspondence between the taxonomies or allow for mapping of the training set from the intermediate taxonomy to the target taxonomy. As opposed to this, we built a document classifier for the target taxonomy directly, without using documents from an intermediate taxonomy. While we were not able to directly compare the results due to the use of different taxonomies (we used a much larger taxonomy), our precision and recall results are consistently higher even on the hardest query set.
5. CONCLUSIONS
Query classification is an important information retrieval task. Accurate classification of search queries is likely to benefit a number of higher-level tasks such as Web search and ad matching. Since search queries are usually short, by themselves they usually carry insufficient information for adequate classification accuracy. To address this problem, we proposed a methodology for using search results as a source of external knowledge. To this end, we send the query to a search engine, and assume that a plurality of the highest-ranking search results are relevant to the query. Classifying these results then allows us to classify the original query with substantially higher accuracy.
The results of our empirical evaluation definitively confirmed that using the Web as a repository of world knowledge contributes valuable information about the query, and aids in its correct classification. Notably, our method exhibits significantly higher accuracy than the methods described in prior studies³. Compared to prior studies, our approach does not require any auxiliary taxonomy, and we produce a query classifier directly for the target taxonomy. Furthermore, the taxonomy used in this study is approximately 2 orders of magnitude larger than those used in prior works. We also experimented with different values of the parameters that characterize our method.
When using search results, one can either use only the summaries of the results provided by the search engine, or actually crawl the result pages for even deeper knowledge.
³ Since the field of query classification does not yet have established and agreed-upon benchmarks, direct comparison of results is admittedly tricky.
Overall, query classification performance was best when using the full crawled pages (Table 1). These results are consistent with prior studies [5], which found that using full crawled pages is superior to using only brief summaries for document classification. Our findings, however, differ from those reported by Shen et al. [19], who found summaries to yield better results. We attribute our observations to using a more elaborate voting scheme among the classifications of individual search results, as well as to using a more difficult set of rare queries.
In this study we used two major search engines, A and B. Interestingly, we found notable distinctions in the quality of their output. Notably, for engine A the overall results were better when using the full crawled pages of the search results, while for engine B it seems more beneficial to use the summaries of the results. This implies that while the quality of the search results returned by engine A is apparently better, engine B does a better job of summarizing the pages.
We also found that the best results were obtained by using full crawled pages and performing voting among their individual classifications. For a classifier that is external to the search engine, retrieving full pages may be prohibitively costly, in which case one might prefer to use summaries to gain computational efficiency. On the other hand, for the owners of a search engine, full-page classification is much more efficient, since it is easy to preprocess all indexed pages by classifying them once onto the (fixed) taxonomy.
Then,\npage classifications are obtained as part of the meta-data\nassociated with each search result, and query classification\ncan be nearly instantaneous.\nWhen using summaries it appears that better results are\nobtained by first concatenating individual summaries into a\nmeta-document, and then using its classification as a whole.\nWe believe the reason for this observation is that summaries\nare short and inherently noisier, and hence their aggregation\nhelps to correctly identify the main theme. Consistent with\nour intuition, using too few search results yields useful but\ninsufficient knowledge, and using too many search results\nleads to inclusion of marginally relevant Web pages. The\nbest results were obtained when using 40 top search hits.\nIn this work, we first classify search results, and then use\ntheir classifications directly to classify the original query.\nAlternatively, one can use the classifications of search results\nas features in order to learn a second-level classifier. In\nSection 3.6, we did some preliminary experiments in this\ndirection, and found that learning such a secondary classifier\ndid not yield considerable advantages. We plan to further\ninvestigate this direction in our future work.\nIt is also essential to note that implementing our\nmethodology incurs little overhead. If the search engine classifies\ncrawled pages during indexing, then at query time we only\nneed to fetch these classifications and do the voting.\nTo conclude, we believe our methodology for using Web\nsearch results holds considerable promise for substantially\nimproving the classification accuracy of Web search queries. This is\nparticularly important for rare queries, for which little\nper-query learning can be done, and in this study we showed\nthat such scarcity of information can be addressed by\nleveraging the knowledge found on the Web. 
We believe\nour findings will have immediate applications to improving\nthe handling of rare queries, both for improving the search\nresults and for yielding better-matched advertisements.\nIn our further research we also plan to make use of session\ninformation in order to leverage knowledge about previous\nqueries to better classify subsequent ones.\n6. REFERENCES\n[1] S. Beitzel, E. Jensen, O. Frieder, D. Grossman, D. Lewis,\nA. Chowdhury, and A. Kolcz. Automatic web query\nclassification using labeled and unlabeled training data. In\nProceedings of SIGIR'05, 2005.\n[2] S. Beitzel, E. Jensen, O. Frieder, D. Lewis, A. Chowdhury,\nand A. Kolcz. Improving automatic query classification via\nsemi-supervised learning. In Proceedings of ICDM'05, 2005.\n[3] R. Duda and P. Hart. Pattern Classification and Scene\nAnalysis. John Wiley and Sons, 1973.\n[4] E. Efthimiadis and P. Biron. UCLA-Okapi at TREC-2:\nQuery expansion experiments. In TREC-2, 1994.\n[5] E. Gabrilovich and S. Markovitch. Feature generation for\ntext categorization using world knowledge. In IJCAI'05,\npages 1048-1053, 2005.\n[6] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein.\nCategorizing web queries according to geographical locality.\nIn CIKM'03, 2003.\n[7] E. Han and G. Karypis. Centroid-based document\nclassification: Analysis and experimental results. In\nPKDD'00, September 2000.\n[8] K. Jarvelin and J. Kekalainen. IR evaluation methods for\nretrieving highly relevant documents. In SIGIR'00, 2000.\n[9] Z. Kardkovacs, D. Tikk, and Z. Bansaghi. The ferrety\nalgorithm for the KDD Cup 2005 problem. In SIGKDD\nExplorations, volume 7. ACM, 2005.\n[10] P. Kowalczyk, I. Zukerman, and M. Niemann. Analyzing\nthe effect of query class on document retrieval performance.\nIn Proc. Australian Conf. on AI, pages 550-561, 2004.\n[11] Y. Li, Z. Zheng, and H. Dai. KDD CUP-2005 report:\nFacing a great challenge. In SIGKDD Explorations,\nvolume 7, pages 91-99. ACM, December 2005.\n[12] M. Mitra, A. Singhal, and C. Buckley. Improving automatic\nquery expansion. In SIGIR'98, pages 206-214, 1998.\n[13] M. Moran and B. Hunt. Search Engine Marketing, Inc.:\nDriving Search Traffic to Your Company's Web Site.\nPrentice Hall, Upper Saddle River, NJ, 2005.\n[14] S. Robertson, S. Walker, S. Jones, M. Hancock-Beaulieu,\nand M. Gatford. Okapi at TREC-3. In TREC-3, 1995.\n[15] J. Rocchio. Relevance feedback in information retrieval. In\nThe SMART Retrieval System: Experiments in Automatic\nDocument Processing, pages 313-323. Prentice Hall, 1971.\n[16] G. Salton and C. Buckley. Improving retrieval performance\nby relevance feedback. JASIS, 41(4):288-297, 1990.\n[17] T. Santner and D. Duffy. The Statistical Analysis of\nDiscrete Data. Springer-Verlag, 1989.\n[18] D. Shen, R. Pan, J. Sun, J. Pan, K. Wu, J. Yin, and\nQ. Yang. Q2C@UST: Our winning solution to query\nclassification in KDDCUP 2005. In SIGKDD Explorations,\nvolume 7, pages 100-110. ACM, 2005.\n[19] D. Shen, R. Pan, J. Sun, J. Pan, K. Wu, J. Yin, and\nQ. Yang. Query enrichment for web-query classification.\nACM TOIS, 24:320-352, July 2006.\n[20] D. Shen, J. Sun, Q. Yang, and Z. Chen. Building bridges for\nweb query classification. In SIGIR'06, pages 131-138, 2006.\n[21] D. Vogel, S. Bickel, P. Haider, R. Schimpfky, P. Siemen,\nS. Bridges, and T. Scheffer. Classifying search engine\nqueries using the web as background knowledge. In\nSIGKDD Explorations, volume 7. ACM, 2005.\n[22] E. Voorhees. Query expansion using lexical-semantic\nrelations. In SIGIR'94, 1994.\n[23] J. Xu and W. Bruce Croft. Improving the effectiveness of\ninformation retrieval with local context analysis. ACM\nTOIS, 18(1):79-112, 2000.", "keywords": "blind relevance feedback;conditional probability;relevance feedback;affinity score;web search;voting scheme;search advertising;crawling;topical taxonomy;information retrieval;machine learning;search engine;adaptation;query classification"}
-{"name": "test_H-24", "title": "Investigating the Querying and Browsing Behavior of Advanced Search Engine Users", "abstract": "One way to help all users of commercial Web search engines be more successful in their searches is to better understand what those users with greater search expertise are doing, and use this knowledge to benefit everyone. In this paper we study the interaction logs of advanced search engine users (and those not so advanced) to better understand how these user groups search. The results show that there are marked differences in the queries, result clicks, post-query browsing, and search success of users we classify as advanced (based on their use of query operators), relative to those classified as non-advanced. Our findings have implications for how advanced users should be supported during their searches, and how their interactions could be used to help searchers of all experience levels find more relevant information and learn improved searching strategies.", "fulltext": "1. INTRODUCTION\nThe formulation of query statements that capture both the salient\naspects of information needs and are meaningful to Information\nRetrieval (IR) systems poses a challenge for many searchers [3].\nCommercial Web search engines such as Google, Yahoo!, and\nWindows Live Search offer users the ability to improve the\nquality of their queries using query operators such as quotation\nmarks, plus and minus signs, and modifiers that restrict the search\nto a particular site or type of file. These techniques can be useful\nin improving result precision yet, other than via log analyses (e.g.,\n[15][27]), they have generally been overlooked by the research\ncommunity in attempts to improve the quality of search results.\nIR research has generally focused on alternative ways for users to\nspecify their needs rather than increasing the uptake of advanced\nsyntax. 
Research on practical techniques to supplement existing\nsearch technology and support users has been intensifying in\nrecent years (e.g. [18][34]). However, it is challenging to\nimplement such techniques at large scale with tolerable latencies.\nTypical queries submitted to Web search engines take the form of\na series of tokens separated by spaces. There is generally an\nimplied Boolean AND operator between tokens that restricts\nsearch results to documents containing all query terms. De Lima\nand Pedersen [7] investigated the effect of parsing, phrase\nrecognition, and expansion on Web search queries. They showed\nthat the automatic recognition of phrases in queries can improve\nresult precision in Web search. However, the value of advanced\nsyntax for typical searchers has generally been limited, since most\nusers do not know about advanced syntax or do not understand\nhow to use it [15]. Since it appears operators can help retrieve\nrelevant documents, further investigation of their use is warranted.\nIn this paper we explore the use of query operators in more detail\nand propose alternative applications that do not require all users to\nuse advanced syntax explicitly. We hypothesize that searchers\nwho use advanced query syntax demonstrate a degree of search\nexpertise that the majority of the user population does not; an\nassertion supported by previous research [13]. Studying the\nbehavior of these advanced search engine users may yield\nimportant insights about searching and result browsing from\nwhich others may benefit.\nUsing logs gathered from a large number of consenting users, we\ninvestigate differences between the search behavior of those that\nuse advanced syntax and those that do not, and differences in the\ninformation those users target. 
We are interested in answering\nthree research questions:\n(i) Is there a relationship between the use of advanced syntax\nand other characteristics of a search?\n(ii) Is there a relationship between the use of advanced syntax\nand post-query navigation behaviors?\n(iii) Is there a relationship between the use of advanced syntax\nand measures of search success?\nThrough an experimental study and analysis, we offer potential\nanswers for each of these questions. A relationship between the\nuse of advanced syntax and any of these features could support\nthe design of systems tailored to advanced search engine users, or\nuse advanced users' interactions to help non-advanced users be\nmore successful in their searches.\nWe describe related work in Section 2, the data we used in this\nlog-based study in Section 3, the search characteristics on which\nwe focus our analysis in Section 4, and the findings of this\nanalysis in Section 5. In Section 6 we discuss the implications of\nthis research, and we conclude in Section 7.\n2. RELATED WORK\nFactors such as lack of domain knowledge, poor understanding of\nthe document collection being searched, and a poorly developed\ninformation need can all influence the quality of the queries that\nusers submit to IR systems ([24],[28]). There has been a variety\nof research into different ways of helping users specify their\ninformation needs more effectively. Belkin et al. [4] experimented\nwith providing additional space for users to type a more verbose\ndescription of their information needs. A similar approach was\nattempted by Kelly et al. [18], who used clarification forms to\nelicit additional information about the search context from users.\nThese approaches have been shown to be effective in best-match\nretrieval systems where longer queries generally lead to more\nrelevant search results [4]. 
However, in Web search, where many\nof the systems are based on an extended Boolean retrieval model,\nlonger queries may actually hurt retrieval performance, leading to\na small number of potentially irrelevant results being retrieved. It\nis not simply sufficient to request more information from users;\nthis information must be of better quality.\nRelevance Feedback (RF) [22] and interactive query expansion\n[9] are popular techniques that have been used to improve the\nquality of information that users provide to IR systems regarding\ntheir information needs. In the case of RF, the user presents the\nsystem with examples of relevant information that are then used to\nformulate an improved query or retrieve a new set of documents.\nIt has proven difficult to get users to use RF in the Web domain\ndue to difficulty in conveying the meaning and the benefit of RF\nto typical users [17]. Query suggestions offered based on query\nlogs have the potential to improve retrieval performance with\nlimited user burden. However, this approach is limited to re-executing\npopular queries, and searchers often ignore the suggestions\npresented to them [1]. In addition, neither of these techniques\nhelps users learn to produce more effective queries.\nMost commercial search engines provide advanced query syntax\nthat allows users to specify their information needs in more detail.\nQuery modifiers such as \u2018+\u2019 (plus), \u2018\u2212\u2019 (minus), and \u2018"\u2019 (double\nquotes) can be used to emphasize, deemphasize, and group query\nterms. Boolean operators (AND, OR, and NOT) can join terms\nand phrases, and modifiers such as site: and link: can be used\nto restrict the search space. Queries created with these techniques\ncan be powerful. 
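Detecting whether a query contains any of the operator types just listed can be sketched with a few regular expressions. This is an illustrative sketch, not any engine's actual query parser; the pattern set and function name are our own.

```python
import re

# One pattern per operator family discussed above. Hyphens inside
# words (e.g., "result-click") are deliberately not matched: the
# minus operator must follow whitespace or start the query.
ADVANCED_PATTERNS = [
    re.compile(r'"[^"]+"'),            # quoted phrase
    re.compile(r'(^|\s)\+\w'),         # mandatory term: +term
    re.compile(r'(^|\s)-\w'),          # excluded term: -term
    re.compile(r'\b(site|link):\S'),   # site:/link: restriction
    re.compile(r'\b(AND|OR|NOT)\b'),   # Boolean operators
]

def uses_advanced_syntax(query: str) -> bool:
    """True if the query contains any advanced operator."""
    return any(p.search(query) for p in ADVANCED_PATTERNS)

print(uses_advanced_syntax('"information retrieval" site:acm.org'))  # True
print(uses_advanced_syntax('weather seattle'))                        # False
```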
However, this functionality is often hidden from\nthe immediate view of the searcher, and unless she knows the\nsyntax, she must use text fields, pull-down menus and combo\nboxes available via a dedicated advanced search interface to\naccess these features.\nLog-based analysis of users' interactions with the Excite and\nAltaVista search engines has shown that only 10-20% of queries\ncontained any advanced syntax [14][25]. This analysis can be a\nuseful way of capturing characteristics of users interacting with IR\nsystems. Research in user modeling [6] and personalization [30]\nhas shown that gathering more information about users can\nimprove the effectiveness of searches, but requires more\ninformation about users than is typically available from\ninteraction logs alone. Unless coupled with a qualitative\ntechnique, such as a post-session questionnaire [23], it can be\ndifficult to associate interactions with user characteristics. In our\nstudy we conjecture that given the difficulty in locating advanced\nsearch features within the typical search interface, and the\npotential problems in understanding the syntax, those users that do\nuse advanced syntax regularly represent a distinct class of\nsearchers who will exhibit other common search behaviors.\nOther studies of advanced searchers' search behaviors have\nattempted to better understand the strategic knowledge they have\nacquired. However, such studies are generally limited in size\n(e.g., [13][19]) or focus on domain expertise in areas such as\nhealthcare or e-commerce (e.g., [5]). 
Nonetheless, they can give\nvaluable insight about the behaviors of users with domain, system,\nor search expertise that exceeds that of the average user.\nQuerying behavior in particular has been studied extensively to\nbetter understand users [31] and support other users [16].\nIn this paper we study other search characteristics of users of\nadvanced syntax in an attempt to determine whether there is\nanything different about how these search engine users search,\nand whether their searches can be used to benefit those who do\nnot make use of the advanced features of search engines. To do\nthis we use interaction logs gathered from a large set of consenting\nusers over a prolonged period.\nIn the next section we describe the data we use to study the\nbehavior of the users who use advanced syntax, relative to those\nthat do not use this syntax.\n3. DATA\nTo perform this study we required a description of the querying\nand browsing behavior of many searchers, preferably over a\nperiod of time to allow patterns in user behavior to be analyzed.\nTo obtain these data we mined the interaction logs of consenting\nWeb users over a period of 13 weeks, from January to April 2006.\nWhen downloading a partner client-side application, the users\nwere invited to consent to their interaction with Web pages being\nanonymously recorded (with a unique identifier assigned to each\nuser) and used to improve the performance of future systems.1\nThe information contained in these log entries included a unique\nidentifier for the user, a timestamp for each page view, a unique\nbrowser window identifier (to resolve ambiguities in determining\nin which browser a page was viewed), and the URL of the Web page\nvisited. 
This provided us with sufficient data on querying\nbehavior (from interaction with search engines), and browsing\nbehavior (from interaction with the pages that follow a search) to\nmore broadly investigate search behavior.\nIn addition to the data gathered during the course of this study we\nalso had relevance judgments of documents that users examined\nfor 10,680 unique query statements present in the interaction logs.\nThese judgments were assigned on a six-point scale by trained\nhuman judges at the time the data were collected. We use these\njudgments in this analysis to assess the relevance of sites users\nvisited on their browse trail away from search result pages.\nWe studied the interaction logs of 586,029 unique users, who\nsubmitted millions of queries to three popular search\nengines - Google, Yahoo!, and MSN Search - over the 13-week duration of\nthe study. To limit the effect of search engine bias, we used four\noperators common to all three search engines: + (plus), \u2212 (minus),\n" (double quotes), and site: (to restrict the search to a domain\nor Web page) as advanced syntax. 1.12% of the queries submitted\ncontained at least one of these four operators. 51,080 (8.72%) of\nusers used query operators in any of their queries. In the\nremainder of this paper, we will refer to these users as advanced\nsearchers. We acknowledge that the direct relationship between\nquery syntax usage and search expertise has only been studied\n(and shown) in a few studies (e.g., [13]), but we feel that this is a\nreasonable criterion for a log-based investigation.\n1\nIt is worth noting that if users did not provide their consent, then\ntheir interaction was not recorded and analyzed in this study. 
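The per-user classification described here, with the outlier filtering introduced shortly afterwards, can be sketched as follows. This is a hedged reconstruction, not the study's code: the helper names are ours, and the operator-detection predicate is passed in rather than fixed.

```python
def advanced_share(queries_by_user, is_advanced, min_queries=50):
    """Compute p_advanced (the fraction of a user's queries that
    contain advanced syntax) for each user with at least
    `min_queries` queries, mirroring the study's filtering of
    low-volume users."""
    shares = {}
    for user, queries in queries_by_user.items():
        if len(queries) < min_queries:
            continue  # drop potential outlier users, as in the paper
        n_adv = sum(1 for q in queries if is_advanced(q))
        shares[user] = n_adv / len(queries)
    return shares

# Hypothetical log: user id -> list of query strings.
log = {"u1": ['"exact phrase" news'] * 30 + ["plain query"] * 30,
       "u2": ["plain query"] * 10}
shares = advanced_share(log, lambda q: '"' in q)
print(shares)  # u2 is filtered out (<50 queries); u1 has p_advanced = 0.5
```

A user would then be labeled an advanced searcher whenever their share is greater than zero, matching the paper's criterion.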
We conjecture\nthat these advanced searchers do possess a high level of search\nexpertise, and will show later in the paper that they demonstrate\nbehavioral characteristics consistent with search expertise.\nTo handle potential outlier users that may skew our data analysis,\nwe removed users who submitted fewer than 50 queries in the\nstudy's 13-week duration. This left us with 188,405 users \u2212\n37,795 (20.1%) advanced users and 150,610 (79.9%)\nnon-advanced users \u2212 whose interactions we study in more detail. If\nsignificant differences emerge between these groups, it is\nconceivable that these interactions could be used to automatically\nclassify users and adjust a search system's interface and result\nweighting to better match the current user.\nThe privacy of our volunteers was maintained throughout the\nentire course of the study: no personal information was elicited\nabout them, participants were assigned a unique anonymous\nidentifier that could not be traced back to them, and we made no\nattempt to identify a particular user or study individual behavior in\nany way. All findings were aggregated over multiple users, and\nno information other than consent for logging was elicited.\nTo find out more about these users we studied whether those\nusing advanced syntax exhibited other search behaviors that were\nnot observed in those who did not use this syntax. We focused on\nquerying, navigation, and overall search success to compare the\nuser groups. In the next section we describe in more detail the\nsearch features that we used.\n4. SEARCH FEATURES\nWe elected to choose features that described a variety of aspects\nof the search process: queries, result clicks, post-query browsing,\nand search success. The query and result-click characteristics we\nchose to examine are described in more detail in Table 1.\nTable 1. Query and result-click features (per user).\nFeature Meaning\nQueries Per Second (QPS) Avg. 
number of queries per\nsecond between initial query\nand end-of-session\nQuery Repeat Rate (QRR) Fraction of queries that are\nrepeats\nQuery Word Length (QWL) Avg. number of words in query\nQueries Per Day (QPD) Avg. number of queries per day\nAvg. Click Position (ACP) Avg. rank of clicked results\nClick Probability (CP) Ratio of result clicks to queries\nAvg. Seconds To Click (ASC) Avg. search to result click\ninterval\nThese seven features give us a useful overview of users' direct\ninteractions with search engines, but not of how users are looking\nfor relevant information beyond the result page or how successful\nthey are in locating relevant information. Therefore, in addition to\nthese characteristics we also studied some relevant aspects of\nusers' post-query browsing behavior. To do this, we extracted\nsearch trails from the interaction logs described in the previous\nsection. A search trail is a series of visited Web pages connected\nvia a hyperlink trail, initiated with a search result page and\nterminating on one of the following events: navigation to any page\nnot linked from the current page, closing of the active browser\nwindow, or a session inactivity timeout of 30 minutes. More\ndetail on the extraction of the search trails is provided in\nprevious work [33]. In total, around 12.5 million search trails\n(containing around 60 million documents) were extracted from the\nlogs for all users. The median number of search trails per user was\n30. The median number of steps in the trails was 3. 
All search\ntrails contained one search result page and at least one page on a\nhyperlink trail leading from the result page.\nThe extraction of these trails allowed us to study aspects of\npost-query browsing behavior, namely the average duration of users'\nsearch sessions, the average duration of users' search trails, the\naverage display time of each document, the average number of\nsteps in users' search trails, the number of branches in users'\nnavigation patterns, and the number of back operations in users'\nsearch trails. All search trails contain at least one branch\nrepresenting any forward motion on the browse path. A trail can\nhave additional branches if the user clicks the browser's back\nbutton and immediately proceeds forward to another page prior to\nthe next (if any) back operation. The post-query browsing features\nare described further in Table 2.\nTable 2. Post-query browsing features (per trail).\nFeature Meaning\nSession Seconds (SS) Average session length (in seconds)\nTrail Seconds (TS) Average trail length (in seconds)\nDisplay Seconds (DS) Average display time for each page on\nthe trail (in seconds)\nNum. Steps (NS) Average number of steps from the page\nfollowing the results page to the end of\nthe trail\nNum. Branches (NB) Average number of branches\nNum. Backs (NBA) Average number of back operations\nAs well as using these attributes of users' interactions, we also\nused the relevance judgments described earlier in the paper to\nmeasure the degree of search success based on the relevance\njudgments assigned to pages that lie on the search trail. Given\nthat we did not have access to relevance assessments from our\nusers, we approximated these assessments using judgments\ncollected as part of ongoing research into search engine\nperformance.2\nThese judgments were created by trained human\nassessors for 10,680 unique queries. 
Of the 1,420,625 steps on\nsearch trails that started with any one of these queries, we have\nrelevance judgments for 802,160 (56.4%). We use these\njudgments to approximate search success for a given trail in a\nnumber of ways. In Table 3 we list these measures.\n2\nOur assessment of search success is fairly crude compared to\nwhat would have been possible if we had been able to contact\nour subjects. We address this problem in a manner similar to\nthat used by the Text Retrieval Conference (TREC) [21], in that\nsince we cannot determine perceived search success, we\napproximate search success based on assigned relevance scores\nof visited documents.\nTable 3. Relevance judgment measures (per trail).\nMeasure Meaning\nFirst Judgment assigned to the first page in the trail\nLast Judgment assigned to the last page in the trail\nAverage Average judgment across all pages in the trail\nMaximum Maximum judgment across all pages in the trail\nThese measures are used during our analysis to estimate the\nrelevance of the pages viewed at different stages in the trails, and\nallow us to estimate search success in different ways. We chose\nmultiple measures, as users may encounter relevant information in\nmany ways and at different points in the trail (e.g., a single\nhighly relevant document or gradually over the course of the trail).\nThe features described in this section allowed us to analyze\nimportant attributes of the search process that must be better\nunderstood if we are to support users in their searching. In the\nnext section we present the findings of the analysis.\n5. FINDINGS\nOur analysis is divided into three parts: analysis of query behavior\nand interaction with the results page, analysis of post-query\nnavigation behavior, and search success in terms of locating\njudged-relevant documents. 
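The four per-trail success measures of Table 3 are straightforward to compute once a trail's judgments are lined up; a minimal sketch (the function name is ours, and the sample judgments are hypothetical values on the six-point scale mentioned earlier):

```python
def trail_success_measures(judgments):
    """Summarize the relevance judgments of the pages on one search
    trail using the four measures of Table 3: First, Last, Average,
    and Maximum."""
    return {
        "first": judgments[0],
        "last": judgments[-1],
        "average": sum(judgments) / len(judgments),
        "maximum": max(judgments),
    }

# Hypothetical trail whose four pages received judgments 2, 4, 5, 3.
print(trail_success_measures([2, 4, 5, 3]))
```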
Parametric statistical testing is used,\nand the level of significance for the statistical tests is set to .05.\n5.1 Query and result-click behavior\nWe were interested in comparing the query and result-click\nbehaviors of our advanced and non-advanced users. In Table 4\nwe show the mean average values for each of the seven search\nfeatures for our users. We use padvanced to denote the percentage of\nall queries from each user that contain advanced syntax (i.e.,\npadvanced = 0% means a user never used advanced syntax). The\ntable shows values for users that do not use query operators (0%),\nusers who submitted at least one query with operators (> 0%),\nthrough to users whose queries contained operators at least\nthree-quarters of the time (\u2265 75%).\nTable 4. Query and result-click features (per user).\nFeature\npadvanced\n0% > 0% \u2265 25% \u2265 50% \u2265 75%\nQPS .028 .010 .012 .013 .015\nQRR .53 .57 .58 .61 .62\nQWL 2.02 2.83 3.40 3.66 4.04\nQPD 2.01 3.52 2.70 2.66 2.31\nACP 6.83 9.12 10.09 10.17 11.37\nCP .57 .51 .47 .47 .47\nASC 87.71 88.16 112.44 102.12 79.13\n%Users 79.90% 20.10% .79% .18% .04%\nWe compared the query and result-click features of users who did\nnot use any advanced syntax (padvanced = 0%) in any of their\nqueries with those who used advanced syntax in at least one query\n(padvanced > 0%). The columns corresponding to these two groups\nare bolded in Table 4. We performed an independent-measures\nt-test between these groups for each of the features. Since this\nanalysis involved many features, we use a Bonferroni correction\nto control the experiment-wise error rate and set the alpha level\n(\u03b1) to .007, i.e., .05 divided by the number of features. This\ncorrection reduces the number of Type I errors, i.e., rejecting null\nhypotheses that are true. 
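The Bonferroni arithmetic used here is simply the family-wise alpha divided by the number of tests; with the seven features above, each t-test is judged against .05 / 7 ≈ .007, as the text states. A one-line sketch:

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-test significance level under a Bonferroni correction."""
    return alpha / n_tests

# Seven query/result-click features are tested simultaneously.
corrected = bonferroni_alpha(0.05, 7)
print(round(corrected, 3))  # 0.007
```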
All differences between the groups were\nstatistically significant (all t(188403) \u2265 2.81, all p \u2264 .002).\nHowever, given the large sample sizes, all differences in the\nmeans were likely to be statistically significant. We applied a\nCohen's d-test to determine the effect size for each of the\ncomparisons between the advanced and non-advanced user\ngroups. Ordered by descending effect size, the main\nfindings are that relative to non-advanced users, advanced search\nengine users:\n\u2022 Query less frequently in a session (d = 1.98)\n\u2022 Compose longer queries (d = .69)\n\u2022 Click further down the result list (d = .67)\n\u2022 Submit more queries per day (d = .49)\n\u2022 Are less likely to click on a result (d = .32)\n\u2022 Repeat queries more often (d = .16)\nThe increased likelihood that advanced search engine users will\nclick further down the result list implies that they may be less\ntrusting of the search engines' ability to rank the most relevant\ndocument first, that they are more willing to explore beyond the\nmost popular pages for a given query, that they may be submitting\ndifferent types of queries (e.g., informational rather than\nnavigational), or that they may have customized their search\nsettings to display more than only the default top-10 results.\nMany of the findings listed are consistent with those identified in\nother studies of advanced searchers' querying and result-click\nbehaviors [13][34]. 
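Cohen's d, as used for the effect sizes above, is the difference of group means scaled by a pooled standard deviation. The sketch below uses the common pooled-SD formulation; the paper does not spell out which variant it applied, so treat this as one standard possibility rather than the authors' exact computation.

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d effect size between two independent samples,
    using the pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

By the usual rule of thumb, |d| around .2 is a small effect and around .8 a large one, which puts the session-frequency difference (d = 1.98) well into "large" territory.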
Given that the only criterion we employed to\nclassify a user as an advanced searcher was their use of advanced\nsyntax, it is certainly promising that this criterion seems to\nidentify users that interact in a way consistent with that reported\npreviously for those with more search expertise.\nAs mentioned earlier, the advanced search engine users for which\nthe average values shown in Table 4 are computed are those who\nsubmit 50 or more queries in the 13-week duration of the data\ncollection and submit at least one query containing advanced\nquery operators. In other words, we consider users whose\npercentage of queries containing advanced syntax, padvanced, is\ngreater than zero. The use of query operators in any queries,\nregardless of frequency, suggests that a user knows about the\nexistence of the operators, and implies a greater degree of\nfamiliarity with the search system. We further hypothesized that\nusers whose queries more frequently contained advanced syntax\nmay be more advanced search engine users. To test this we\ninvestigated varying the query threshold required to qualify for\nadvanced status (padvanced). We incremented padvanced one\npercentage point at a time, and recorded the values of the seven\nquery and result-click features at each point. The values of the\nfeatures at four milestones (> 0%, \u2265 25%, \u2265 50%, and \u2265 75%) are\nshown in Table 4. As can be seen in the table, as padvanced\nincreases, differences in the features between those using\nadvanced syntax and those not using advanced syntax become\nmore substantial. However, it is interesting to note that as padvanced\nincreases, the number of queries submitted per day actually falls\n(Pearson's R = \u2212.512, t(98) = 5.98, p < .0001). 
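The correlation reported here (R = −.512 between the padvanced threshold and queries per day over the 100 threshold points) is the ordinary Pearson product-moment coefficient, which can be sketched without any statistics library; the function name and sample data are ours:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length
    numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly decreasing relationship yields r = -1.
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))
```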
More advanced\nusers may need to pose fewer queries to find relevant information.\nTo study the patterns of relationship among these dependent\nvariables (including padvanced), we applied factor analysis [26].\nTable 5 shows the intercorrelation matrix between the features\nand the percentage of queries with operators (padvanced). Each cell\nin the table contains the Pearson correlation coefficient between\nthe two features for a given row-column pair.\nTable 5. Intercorrelation matrix (query / result-click features).\npadv. QPS QRR QWL QPD ACP CP ASC\npadv. 1.00 .946 .970 .987 \u2212.512 .930 \u2212.746 \u2212.583\nQPS 1.00 .944 .943 \u2212.643 .860 \u2212.594 \u2212.712\nQRR 1.00 .934 \u2212.462 .919 \u2212.621 \u2212.667\nQWL 1.00 \u2212.392 .612 \u2212.445 .735\nQPD 1.00 .676 .780 .943\nACP 1.00 .838 .711\nCP 1.00 .654\nASC 1.00\nIt is only the first data column and row that reflect the correlations\nbetween padvanced and the other query and result-click features.\nColumns 2-8 show the inter-correlations between the other\nfeatures. There are strong positive correlations between some of\nthe features (e.g., the number of words in the query (QWL) and\nthe average rank of clicked results (ACP)). However, there were\nalso fairly strong negative correlations between some features\n(e.g., the average length of the queries (QWL) and the\nprobability of clicking on a search result (CP)).\nThe factor analysis revealed the presence of two factors that\naccount for 83.6% of the variance. As is standard practice in\nfactor analysis, all features with an absolute factor loading of .30\nor less were removed. 
The two factors that emerged, with their\nrespective loadings, can be expressed as:\nFactor A = .98(QRR) + .97(padv) + .97(QPS)\n+ .71(ACP) + .69(QWL)\nFactor B = .96(CP) + .90(QPD) + .67(ACP) + .52(ASC)\nVariance in the query and result-click behavior of our advanced\nsearch engine users can be expressed using these two constructs.\nFactor A is the most powerful, contributing 50.5% of the variance.\nIt appears to represent a very basic dimension of variance that\ncovers query attributes and querying behavior, and suggests a\nrelationship between query properties (length, frequency,\ncomplexity, and repetition) and the position of users' clicks in the\nresult list. The dimension underlying Factor B accounts for\n33.1% of the variance, and describes attributes of result-click\nbehavior, and a strong correlation between result clicks and the\nnumber of queries submitted each day.\nSummary: In this section we have shown that there are marked\ndifferences in aspects of the querying and result-clickthrough\nbehaviors of advanced users relative to non-advanced users. We\nhave also shown that the greater the proportion of queries that\ncontain advanced syntax, the larger the differences in query and\nclickthrough behaviors become. A factor analysis revealed the\npresence of two dimensions that adequately characterize variance\nin the query and result-click features. In the querying dimension,\nquery attributes, such as the length and the proportion that contain\nadvanced syntax, and querying behavior, such as the number of\nqueries submitted per day, both affect result-click position. In\naddition, in the result-click dimension, it appears that daily\nquerying frequency influences result-click features such as the\nlikelihood that a user will click on a search result and the amount\nof time between result presentation and the search result click.\nThe features used in this section are only interactions with search\nengines in the form of queries and result clicks. 
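Factor extraction of the kind reported above can be sketched with a plain eigendecomposition of the feature correlation matrix. This is a simplified, unrotated principal-factor sketch on synthetic data; real analyses typically add a rotation step before reading off loadings:

```python
import numpy as np

def factor_loadings(X, n_factors=2):
    """Unrotated principal-factor loadings for a samples-by-features matrix.

    Loadings are the top eigenvectors of the correlation matrix scaled by
    the square roots of their eigenvalues; each factor explains a fraction
    of variance equal to its eigenvalue over the number of features.
    """
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending order
    top = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, top] * np.sqrt(eigvals[top])
    explained = eigvals[top] / R.shape[0]
    return loadings, explained

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                  # two hidden factors
# Five observed features: three follow factor 1, two follow factor 2.
X = np.column_stack([latent[:, 0]] * 3 + [latent[:, 1]] * 2)
X += 0.2 * rng.normal(size=X.shape)
loadings, explained = factor_loadings(X)
# Suppress weak loadings, mirroring the |loading| <= .30 cutoff above.
pruned = np.where(np.abs(loadings) > 0.30, loadings, 0.0)
print(pruned.shape, float(explained.sum()) > 0.8)  # (5, 2) True
```

With two strong latent factors, the two extracted factors account for most of the variance, just as the two factors above account for 83.6% of it.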
We did not\naddress how users searched for information beyond the result\npage. In the next section we use the search trails described in\nSection 4 to analyze the post-query browsing behavior of users.\n5.2 Post-query browsing behavior\nIn this section we look at several attributes of the search trails\nusers followed beyond the results page in an attempt to discern\nwhether the use of advanced search syntax can be used as a\npredictor of aspects of post-query interaction behavior.\nAs we did previously, we first describe the mean average values\nfor each of the browsing features, across all advanced users (i.e.,\npadvanced > 0%), all non-advanced users (i.e., padvanced = 0%), and all\nusers regardless of their estimated search expertise level. We then\nlook at the effect on the browsing features of increasing the value\nof padvanced required to be considered advanced from 1% to\n100%. In Table 6 we present the average values for each of these\nfeatures for the two groups of users. Also shown are the\npercentage of search trails (%Trails) and the percentage of users\n(%Users) used to compute the averages.\nTable 6. Post-query browsing features (per trail).\nFeature\npadvanced\n0% > 0% \u2265 25% \u2265 50% \u2265 75%\nSession secs. 701.10 706.21 792.65 903.01 1114.71\nTrail secs. 205.39 159.56 156.45 147.91 136.79\nDisplay secs. 36.95 32.94 34.91 33.11 30.67\nNum. steps 4.88 4.72 4.40 4.40 4.39\nNum. backs 1.20 1.02 1.03 1.03 1.02\nNum. branches 1.55 1.51 1.50 1.47 1.44\n%Trails 72.14% 27.86% .83% .23% .05%\n%Users 79.90% 20.10% .79% .18% .04%\nAs can be seen from Table 6, there are differences in the\npost-query interaction behaviors of advanced users (padvanced > 0%)\nrelative to those who do not use query operators in any of their queries\n(padvanced = 0%). Once again, the columns of interest in this\ncomparison are bolded. 
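Group comparisons like the padvanced = 0% versus padvanced > 0% columns of Table 6 rest on an independent-measures t-test. A minimal sketch using scipy; the group means and sample sizes below are invented for illustration, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Illustrative per-trail durations (seconds) for the two groups.
non_advanced = rng.normal(loc=205.0, scale=40.0, size=500)
advanced = rng.normal(loc=160.0, scale=40.0, size=500)

# equal_var=False gives Welch's variant, which tolerates unequal variances.
t_stat, p_value = stats.ttest_ind(non_advanced, advanced, equal_var=False)
print(t_stat > 0, p_value < 0.01)  # True True
```

With samples as large as those in the logs studied here, almost any difference in means reaches significance, which is why the analysis below supplements the t-test with an effect-size measure.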
As we did in Section 5.1 for query and\nresult-click behavior, we performed an independent-measures\nt-test between the values reported for each of the post-query\nbrowsing features. The results of this test suggest that differences\nbetween those that use advanced syntax and those that do not are\nsignificant (t(12495029) \u2265 3.09, p \u2264 .002, \u03b1 = .008). Given the\nsample sizes, all of the differences between means in the two\ngroups were significant. However, we once again applied a\nCohen's d-test to determine the effect size. The findings (ranked\nin descending order based on effect size) show that, relative to\nnon-advanced users, advanced search engine users:\n\u2022 Revisit pages in the trail less often (d = .45)\n\u2022 Spend less time traversing each search trail (d = .38)\n\u2022 Spend less time viewing each document (d = .28)\n\u2022 Branch (i.e., proceed to new pages following a back\noperation) less often (d = .18)\n\u2022 Follow search trails with fewer steps (d = .16)\nIt seems that advanced users use a more directed searching style\nthan non-advanced users. They spend less time following search\ntrails and view the documents that lie on those trails for less time.\nThis is in accordance with our earlier proposition that advanced\nusers seem able to discern document relevance in less time.\nAdvanced users also tend to deviate less from a direct path as they\nsearch, with fewer revisits to previously-visited pages and less\nbranching during their searching.\nAs we did in the previous section, we increased the padvanced\nthreshold one point at a time. With the exception of the number of\nback operations (NB), the values attributable to each of the\nfeatures change as padvanced increased. It seems that the differences\nnoted earlier between non-advanced users and those that use any\nadvanced syntax become more pronounced as padvanced increases.\nAs in the previous section, we conducted a factor analysis of these\nfeatures and padvanced. 
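The effect sizes listed above are Cohen's d values: the difference between group means scaled by the pooled standard deviation. A minimal sketch:

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent samples, using a pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Illustrative: two groups whose means differ by about a third of a
# standard deviation, comparable to the mid-sized effects reported above.
group_a = [1.0, 2.0, 3.0, 4.0, 5.0]
group_b = [1.5, 2.5, 3.5, 4.5, 5.5]
print(round(cohens_d(group_b, group_a), 3))  # 0.316
```

By the conventional rule of thumb, d near .2 is a small effect and d near .5 a medium one, which helps read the .16 to .45 range reported above even though every difference was statistically significant.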
Table 7 shows the intercorrelation matrix\nfor all these variables.\nTable 7. Intercorrelation matrix (post-query browsing).\npadv SS TS DS NS NB NBA\npadv 1.00 .977 \u2212.843 \u2212.867 \u2212.395 \u2212.339 \u2212.249\nSS 1.00 \u2212.765 \u2212.875 \u2212.374 \u2212.335 \u2212.237\nTS 1.00 .948 .387 .281 .250\nDS 1.00 .392 .344 .257\nNS 1.00 .891 .934\nNB 1.00 .918\nNBA 1.00\nAs the proportion of queries containing advanced syntax\nincreases, the values of many of the post-query browsing features\ndecrease. Only the average session time (SS) exhibits a strong\npositive correlation with padvanced. The factor analysis revealed the\npresence of two factors that account for 89.8% of the variance.\nOnce again, all features with an absolute factor loading of .30 or\nless were removed. The two factors that emerged, with their\nrespective loadings, can be expressed as:\nFactor A = .95(DS) + .88(TS) \u2212 .91(SS) \u2212 .95(padv)\nFactor B = .99(NBA) + .93(NS) + .91(NB)\nVariance in the post-query browsing behavior of those who use\nquery operators can be expressed using these two constructs.\nFactor A is the most powerful, contributing 50.1% of the variance.\nIt appears to represent a very basic temporal dimension that\ncovers timing and the percentage of queries with advanced syntax,\nand suggests negative relationships between time spent searching\nand both overall session time and padvanced. The navigation dimension\nunderlying Factor B accounts for 39.7% of the variance, and\ndescribes attributes of post-query navigation, all of which seem to\nbe strongly correlated with each other but not with padvanced or timing.\nSummary: In this section we have shown that advanced users'\npost-query browsing behavior appears more directed than that of\nnon-advanced users. 
Although their search sessions are longer,\nadvanced users follow fewer search trails during their sessions\n(i.e., submit fewer queries), their search trails are shorter, and\ntheir trails exhibit fewer deviations or regressions to previously\nencountered pages. We also showed that as padvanced increases,\nsession time increases (perhaps more advanced users are\nmultitasking between search and other operations), and search\ninteraction becomes more focused, perhaps because advanced\nusers are able to target relevant information more effectively, with\nless need for regressions or deviations in their search trails.\nAs well as interaction behaviors such as queries, result clicks, and\npost-query browse behavior, another important aspect of the\nsearch process is the attainment of information relevant to the\nquery. In the next section we analyze the success of advanced and\nnon-advanced users in obtaining relevant information.\n5.3 Search success\nAs described earlier, we used six-level relevance judgments\nassigned to query-document pairs as an approximate measure of\nsearch success based on documents encountered on search trails.\nHowever, the queries for which we have judgments generally did\nnot contain advanced operators. To maximize the likelihood of\ncoverage we removed advanced operators from all queries when\nretrieving the relevance judgments. The mean average relevance\njudgment values for each of the four metrics - first, last, average,\nand maximum - are shown in Table 8 for non-advanced users\n(0%) and advanced users (> 0%).\nTable 8. Search success (min. = 1, max. = 6) (per trail).\nFeature\npadvanced\n0% > 0% \u2265 25% \u2265 50% \u2265 75%\nFirst M 4.03 4.19 4.24 4.26 4.57\nSD 1.58 1.56 1.34 1.38 1.27\nLast M 3.79 3.92 4.00 4.13 4.35\nSD 1.60 1.57 1.29 1.25 .89\nMax. M 4.04 4.20 4.19 4.19 4.46\nSD 1.63 1.51 1.28 1.37 1.25\nAvg. 
M 3.93 4.06 4.08 4.08 4.26\nSD 1.57 1.51 1.23 1.32 1.14\nThe findings suggest that users who use advanced syntax at all\n(padvanced > 0%) were more successful - across all four\nmeasures - than those who never used advanced syntax (padvanced = 0%). Not\nonly were these users more successful in their searching, but they\nwere consistently more successful (i.e., the standard deviation in\nrelevance scores is lower for advanced users and continues to drop\nas padvanced increases). The differences in the four mean average\nrelevance scores for each metric between these two user groups\nwere significant with independent-measures t-tests (all t(516765)\n\u2265 3.29, p \u2264 .001, \u03b1 = .0125). As we increase the value of padvanced\nas in previous sections, the average relevance score across all\nmetrics also increases (all Pearson's R \u2265 .654), suggesting that\nmore advanced users are also more likely to succeed in their\nsearching. The searchers that use advanced operators may have\nadditional skills in locating relevant information, or may know\nwhere this information resides based on previous experience.3\nDespite the fact that the four metrics targeted different parts of the\nsearch trail (e.g., first vs. last) or different ways to gather relevant\ninformation (e.g., average vs. maximum), the differences between\ngroups and within the advanced group were consistent.\n3\nAlthough in our logs there was no obvious indication of more\nrevisitation by advanced search engine users.\nTo see whether there were any differences in the nature of the\nqueries submitted by advanced search engine users, we studied the\ndistribution of the four advanced operators: quotation marks, plus,\nminus, and site:. In Table 9 we show how these operators were\ndistributed in all queries submitted by these users.\nTable 9. 
Distribution of query operators.\nFeature\npadvanced\n> 0% \u2265 25% \u2265 50% \u2265 75%\nQuotes ("") 71.08 77.09 70.33 70.00\nPlus (+) 6.84 13.31 19.21 33.90\nMinus (\u2212) 6.62 2.88 1.96 2.42\nSite: 21.55 12.72 13.04 9.86\nAvg. num. operators 1.08 1.14 1.28 1.49\nThe distributions of the quotes, plus, and minus operators are\nsimilar amongst the four levels of padvanced, with quotes being the\nmost popular of the four operators used. However, it appears that\nthe plus operator is the main differentiator between the padvanced\nuser groups. This operator, which forces the search engine to\ninclude query terms that are usually excluded by default\n(e.g., "the", "a"), may account for some portion of the difference\nin observed search success.4\nHowever, this does not capture the\ncontribution that each of these operators makes to the increase in\nrelevance compared with excluding the operator. To gain some\ninsight into this, we examined the impact that each of the\noperators had on the relevance of retrieved results. We focused\non queries in padvanced > 0% where the same user had issued a\nquery without operators and the same query with operators either\nbefore or afterwards. Although there were few queries with\nmatching pairs - and almost all of them contained quotes - there\nwas a small (approximately 10%) increase in the average\nrelevance judgment score assigned to documents on the trail with\nquotes in the initial query. It may be the case that quoted queries\nled to retrieval of more relevant documents, or that they better\nmatched the perceived needs of relevance judges and therefore led\nto judged documents receiving higher scores. More analysis\nsimilar to [8] is required to test these propositions further.\nSummary: In this section we have used several measures to study\nthe search success of advanced and non-advanced users. 
The\nfindings of our analysis suggest that advanced search engine users\nare more successful and more consistent in the relevance of\nthe pages they visit. Their additional search expertise may make\nthem better able to decide which\ndocuments to view, meaning they encounter consistently more\nrelevant information during their searches. In addition, within the\ngroup of advanced users there is a strong correlation between\npadvanced and the degree of search success. Advanced search\nengine users may be more adept at combining query operators to\nformulate powerful query statements. We now discuss the\nfindings from all three subsections and their implications for the\ndesign of improved Web search systems.\n4\nIt is worth noting that there were no significant differences in the\ndistribution of usage of the three search engines - Google,\nYahoo!, or Windows Live Search - amongst advanced search\nengine users, or between advanced and non-advanced users.\n6. DISCUSSION AND IMPLICATIONS\nOur findings indicate significant differences in the querying,\nresult-click, post-query navigation, and search success of those\nthat use advanced syntax versus those that do not. Many of these\nfindings mirror those already found in previous studies with\ngroups of self-identified novices and experts [13][19]. There are\nseveral ways in which a commercial search engine system might\nbenefit from a quantitative indication of searcher expertise. This\nmight be yet another feature available to a ranking engine; i.e., it\nmay be the case that expert searchers in some cases prefer\ndifferent pages than novice searchers. The user interface to a\nsearch engine might be tailored to a user's expertise level; perhaps\neven more advanced features such as term weighting and query\nexpansion suggestions could be presented to more experienced\nsearchers while preserving the simplicity of the basic interface for\nnovices. 
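The single query-stream feature this classification rests on, the fraction of a user's queries containing advanced syntax, is cheap to compute from a log. A toy sketch over the four operators studied earlier (quotes, +, −, site:); the log format and the operator-detection rules are illustrative assumptions:

```python
def uses_advanced_syntax(query: str) -> bool:
    """True if a query contains any of the four operators studied:
    quotation marks, +term, -term, or a site: restriction."""
    if '"' in query or "site:" in query:
        return True
    # + and - count only when prefixed to a term, not as lone punctuation.
    return any(tok[0] in "+-" and len(tok) > 1 for tok in query.split())

def p_advanced(queries):
    """Fraction of a user's queries that contain advanced syntax."""
    if not queries:
        return 0.0
    return sum(uses_advanced_syntax(q) for q in queries) / len(queries)

log = [
    'hubble telescope "repair mission"',
    "cheap flights seattle",
    "python site:docs.python.org",
    "jaguar -car",
]
p = p_advanced(log)
print(p)  # 0.75: three of the four queries use operators
```

Users with p > 0 were treated as advanced in this study; raising the cutoff to .25, .50, or .75 yields the progressively stricter groups reported in the tables.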
Result presentation might also be customized based on\nsearch skill level; future work might re-evaluate the benefits of\ncontent snippets, thumbnails, etc. in a manner that allows different\noutcomes for different expertise levels. Additionally, if browsing\nhistories are available, the destinations of advanced searchers\ncould be used as suggested results for queries, bypassing and\npotentially improving upon the traditional search process [10].\nUsing the interactions of advanced search engine users to\nguide others with less expertise is an attractive proposition for the\ndesigners of search systems. In part, these searchers may have\nmore post-query browsing expertise that allows them to overcome\nthe shortcomings of search systems [29]. Their interactions can\nbe used to point users to places that advanced search engine users\nvisit [32] or simply to train less experienced searchers how to\nsearch more effectively. However, if expert users are going to be\nused in this way, issues of data sparsity will need to be overcome.\nOur advanced users only accounted for 20.1% of the users whose\ninteractions we studied. Whilst these may be amongst the most\nactive users, it is unlikely that they will view documents that cover a\nlarge number of subject areas. However, rather than focusing on\nwhere they go (which is perhaps more appropriate for those with\ndomain knowledge), advanced search engine users may use\nmoves, tactics and strategies [2] that inexperienced users can learn\nfrom. Encouraging users to use advanced syntax helps them learn\nhow to formulate better search queries; leveraging the searching\nstyle of expert searchers could help them learn more successful\npost-query interactions.\nOne potential limitation to the results we report is that in prior\nresearch, it has been shown that query operators do not\nsignificantly improve the effectiveness of Web search results [8],\nand that searchers may be able to perform just as well without\nthem [27]. 
It could therefore be argued that the users who do not\nuse query operators are in fact more advanced, since they do not\nwaste time using potentially redundant syntax in their query\nstatements. However, this seems unlikely given that those who\nuse advanced syntax exhibited search behaviors typical of users\nwith expertise [13], and are more successful in their searching.\nNevertheless, in future work we will expand our definition of\nadvanced user beyond attributes of the query to also include\nother interaction behaviors, some of which we have defined in this\nstudy, and other avenues of research such as eye-tracking [12].\n7. CONCLUSIONS\nIn this paper we have described a log-based study of search\nbehavior on the Web that has demonstrated that the use of\nadvanced search syntax is correlated with other aspects of search\nbehavior such as querying, result clickthrough, post-query\nnavigation, and search success. Those that use this syntax are\nactive online for longer, spend less time querying and traversing\nsearch trails, exhibit less deviation in their trails, are more likely\nto explore search results, take less time to click on results, and are\nmore successful in their searching. These are all traits that we\nwould expect expert searchers to exhibit. Crude classification of\nusers based on just one feature that is easily extractable from the\nquery stream reveals marked differences between the interaction\nbehavior of users that use the syntax and those that do not. As\nwe have suggested, search systems may leverage the interactions\nof these users for improved document ranking, page\nrecommendation, or even user training. Future work will include\nthe development of search interfaces and modified retrieval\nengines that make use of these information-rich features, and\nfurther investigation into the use of these features as indicators of\nsearch expertise, including a cross-correlation analysis between\nresult-click and post-query behavior.\n8. 
ACKNOWLEDGEMENTS\nThe authors are grateful to Susan Dumais for her thoughtful and\nconstructive comments on a draft of this paper.\n9. REFERENCES\n[1] Anick, P. (2003). Using terminological feedback for Web\nsearch refinement: A log-based study. In Proc. ACM SIGIR,\npp. 88-95.\n[2] Bates, M. (1990). Where should the person stop and the\ninformation search interface start? Inf. Proc. Manage. 26, 5,\n575-591.\n[3] Belkin, N.J. (2000). Helping people find what they don\"t\nknow. Comm. ACM, 43, 8, 58-61.\n[4] Belkin, N.J. et al. (2003). Query length in interactive\ninformation retrieval. In Proc. ACM SIGIR, pp. 205-212.\n[5] Bhavnani, S.K. (2001). Domain-specific search strategies\nfor the effective retrieval of healthcare and shopping\ninformation. In Proc. ACM SIGCHI, pp. 610-611.\n[6] Chi, E. H., Pirolli, P. L., Chen, K. & Pitkow, J. E. (2001).\nUsing information scent to model user information needs and\nactions and the Web. In Proc. ACM SIGCHI, pp. 490-497.\n[7] De Lima, E.F. & Pedersen, J.O. (1999). Phrase recognition\nand expansion for short, precision-biased queries based on a\nquery log. In Proc. of ACM SIGIR, pp. 145-152.\n[8] Eastman, C.M. & Jansen, B.J. (2003). Coverage, relevance,\nand ranking: The impact of query operators on Web search\nengine results. ACM TOIS, 21, 4, 383-411.\n[9] Efthimiadis, E.N. (1996). Query expansion. Annual Review\nof Information Science and Technology, 31, 121-187.\n[10] Furnas, G. (1985). Experience with an adaptive indexing\nscheme. In Proc. ACM SIGCHI, pp. 131-135.\n[11] Furnas, G.W., Landauer, T.K., Gomez, L.M. & Dumais, S.T.\n(1987). The vocabulary problem in human-system\ncommunication: An analysis and a solution. Comm. ACM,\n30, 11, 964-971.\n[12] Granka, L., Joachims, T. & Gay, G. (2004). Eye-tracking\nanalysis of user behavior in WWW search. In Proc. ACM\nSIGIR, pp. 478-479.\n[13] H\u00f6lscher, C. & Strube, G. (2000). Web search behavior of\ninternet experts and newbies. In Proc.WWW, pp. 337-346.\n[14] Jansen, B.J. 
(2000). An investigation into the use of simple\nqueries on Web IR systems. Inf. Res. 6, 1.\n[15] Jansen, B.J., Spink, A. & Saracevic, T. (2000). Real life, real\nusers, and real needs: A study and analysis of user queries on\nthe Web. Inf. Proc. Manage. 36, 2, 207-227.\n[16] Jones, R., Rey, B., Madani, O. & Greiner, W. (2006).\nGenerating query substitutions. In Proc. WWW, pp. 387-396.\n[17] Kaski, S., Myllym\u00e4ki, P. & Kojo, I. (2005). User models\nfrom implicit feedback for proactive information retrieval.\nIn Workshop at UM Conference; Machine Learning for User\nModeling: Challenges.\n[18] Kelly, D., Dollu, V.D. & Fu, X. (2005). The loquacious user:\na document-independent source of terms for query\nexpansion. In Proc. ACM SIGIR, pp. 457-464.\n[19] Lazonder, A.W., Biemans, H.J.A. & Woperis, I.G.J.H.\n(2000). Differences between novice and experienced users in\nsearching for information on the World Wide Web. J.\nASIST. 51, 6, 576-581.\n[20] Morita, M. & Shinoda, Y. (1994). Information filtering based\non user behavior analysis and best match text retrieval. In\nProc. ACM SIGIR, pp. 272-281.\n[21] NIST Special Publication 500-266: The Fourteenth Text\nRetrieval Conference Proceedings (TREC 2005).\n[22] Oddy, R. (1977). Information retrieval through man-machine\ndialogue. J. Doc. 33, 1, 1-14.\n[23] Rose, D.E. & Levinson, D. (2004). Understanding user goals\nin Web search. In Proc. WWW, pp. 13-19.\n[24] Salton, G. and Buckley, C. (1990). Improving retrieval\nperformance by relevance feedback. J. ASIST, 41 4, 288-287.\n[25] Silverstein, C., Marais, H., Henzinger, M. & Moricz, M.\n(1999). Analysis of a very large web search engine query\nlog. SIGIR Forum, 33, 1, 6-12.\n[26] Spearman, C. (1904). General intelligence, objectively\ndetermined and measured. Amer. J. Psy. 15, 201-293.\n[27] Spink, A., Bateman, J. & Jansen, B.J. (1998). Searching\nheterogeneous collections on the Web: Behavior of Excite\nusers. Inf. Res. 4, 2, 317-328.\n[28] Spink, A., Griesdorf, H. 
& Bateman, J. (1998). From highly\nrelevant to not relevant: examining different regions of\nrelevance. Inf. Proc. Manage. 34, 5, 599-621.\n[29] Teevan, J. et al. (2004). The perfect search engine is not\nenough: A study of orienteering behavior in directed search.\nIn Proc. ACM SIGCHI, pp. 415-422.\n[30] Teevan, J., Dumais, S.T. & Horvitz, E. (2005). Personalizing\nsearch via automated analysis of interests and activities. In\nProc. ACM SIGIR, pp. 449-456.\n[31] Wang, P., Berry, M. & Yang, Y. (2003). Mining longitudinal\nWeb queries: Trends and patterns. J. ASIST, 54, 3, 742-758.\n[32] White, R.W., Bilenko, M. & Cucerzan, S. (2007). Studying\nthe use of popular destinations to enhance Web search\ninteraction. In Proc. ACM SIGIR, in press.\n[33] White, R.W. & Drucker, S. (2007). Investigating behavioral\nvariability in Web search. In Proc. WWW, in press.\n[34] White, R.W., Ruthven, I. & Jose, J.M. (2002). Finding\nrelevant documents using top-ranking sentences: An\nevaluation of two alternative schemes. In Proc. ACM SIGIR,\npp. 57-64.\n[35] Wildemuth, B.M., do Bleik, R., Friedman, C.P. & File, D.D.\n(1995). Medical students' personal knowledge, search\nproficiency, and database use in problem solving. J. ASIST,\n46, 590-607.
Term Feedback for Information Retrieval with Language Models\nABSTRACT\nIn this paper we study term-based feedback for information\nretrieval in the language modeling approach. With term feedback a\nuser directly judges the relevance of individual terms without\ninteraction with feedback documents, taking full control of the query\nexpansion process. We propose a cluster-based method for selecting\nterms to present to the user for judgment, as well as effective\nalgorithms for constructing refined query language models from user\nterm feedback. Our algorithms are shown to bring significant\nimprovement in retrieval accuracy over a non-feedback baseline, and\nachieve comparable performance to relevance feedback. They are\nhelpful even when there are no relevant documents in the top.\n1. INTRODUCTION\nIn the language modeling approach to information retrieval,\nfeedback is often modeled as estimating an improved query model or\nrelevance model based on a set of feedback documents [25, 13].\nThis is in line with the traditional way of doing relevance feedback\n- presenting a user with documents/passages for relevance\njudgment and then extracting terms from the judged documents or\npassages to expand the initial query. It is an indirect way of seeking\nthe user's assistance for query model construction, in the sense that the\nrefined query model (based on terms) is learned through feedback\ndocuments/passages, which are high-level structures of terms. It\nhas the disadvantage that irrelevant terms, which occur along with\nrelevant ones in the judged content, may be erroneously used for\nquery expansion, causing undesired effects. 
For example, for the\nTREC query "Hubble telescope achievements", when a relevant\ndocument talks more about the telescope's repair than its\ndiscoveries, irrelevant terms such as "spacewalk" can be added into the\nmodified query.\nWe can consider a more direct way to involve a user in query\nmodel improvement, without an intermediary step of document\nfeedback that can introduce noise. The idea is to present a\n(reasonable) number of individual terms to the user and ask him/her to\njudge the relevance of each term or directly specify their\nprobabilities in the query model. This strategy has been discussed in [15],\nbut to our knowledge, it has not been seriously studied in existing\nlanguage modeling literature. Compared to traditional relevance\nfeedback, this term-based approach to interactive query model\nrefinement has several advantages. First, the user has better\ncontrol of the final query model through direct manipulation of terms:\nhe/she can dictate which terms are relevant, irrelevant, and\npossibly, to what degree. This avoids the risk of bringing unwanted\nterms into the query model, although sometimes the user introduces\nlow-quality terms. Second, because a term takes less time to judge\nthan a document's full text or summary, and as few as around 20\npresented terms can bring significant improvement in retrieval\nperformance (as we will show later), term feedback makes it faster to\ngather user feedback. This is especially helpful for interactive\nad hoc search. Third, sometimes there are no relevant documents in\nthe top N of the initially retrieved results if the topic is hard. This\nis often true when N is constrained to be small, which arises from\nthe fact that the user is unwilling to judge too many documents. 
In\nthis case, relevance feedback is useless, as no relevant document\ncan be leveraged, but term feedback is still often helpful, by\nallowing relevant terms to be picked from irrelevant documents.\nDuring our participation in the TREC 2005 HARD Track and\ncontinued study afterward, we explored how to exploit term\nfeedback from the user to construct improved query models for\ninformation retrieval in the language modeling approach. We identified\ntwo key subtasks of term-based feedback, i.e., pre-feedback\npresentation term selection and post-feedback query model\nconstruction, with effective algorithms developed for both. We imposed a\nsecondary cluster structure on terms and found that a cluster view\nsheds additional insight into the user's information need, and\nprovides a good way of utilizing term feedback. Through experiments\nwe found that term feedback improves significantly over the\nnon-feedback baseline, even though the user often makes mistakes in\nrelevance judgment. Among our algorithms, the one with best\nretrieval performance is TCFB, the combination of TFB, the direct\nterm feedback algorithm, and CFB, the cluster-based feedback\nalgorithm. We also varied the number of feedback terms and\nobserved reasonable improvement even at low numbers. Finally, by\ncomparing term feedback with document-level feedback, we found\nit to be a viable alternative to the latter with competitive retrieval\nperformance.\nThe rest of the paper is organized as follows. Section 2 discusses\nsome related work. Section 3 outlines our general approach to term\nfeedback. We present our method for presentation term selection in\nSection 4 and algorithms for query model construction in Section 5.\nThe experiment results are given in Section 6. Section 7 concludes\nthis paper.\n2. RELATED WORK\nRelevance feedback [17, 19] has long been recognized as an\neffective method for improving retrieval performance. 
Normally, the\ntop N documents retrieved using the original query are presented\nto the user for judgment, after which terms are extracted from the\njudged relevant documents, weighted by their potential of\nattracting more relevant documents, and added into the query model. The\nexpanded query usually represents the user's information need\nbetter than the original one, which is often just a short keyword query.\nA second iteration of retrieval using this modified query usually\nproduces a significant increase in retrieval accuracy. In cases where\ntrue relevance judgment is unavailable and all top N documents are\nassumed to be relevant, it is called blind or pseudo feedback [5, 16]\nand usually still brings performance improvement.\nBecause a document is a large text unit, when it is used for\nrelevance feedback many irrelevant terms can be introduced into the\nfeedback process. To overcome this, passage feedback has been proposed\nand shown to improve feedback performance [1, 23]. A more direct\nsolution is to ask the user for relevance judgments of feedback\nterms. For example, in some relevance feedback systems such as\n[12], there is an interaction step that allows the user to add or\nremove expansion terms after they are automatically extracted from\nrelevant documents. This is categorized as interactive query\nexpansion, where the original query is augmented with user-provided\nterms, which can come from direct user input (free-form text or\nkeywords) [22, 7, 10] or user selection of system-suggested terms\n(using thesauri [6, 22] or extracted from feedback documents [6, 22,\n12, 4, 7]).\nIn many cases term relevance feedback has been found to\neffectively improve retrieval performance [6, 22, 12, 4, 10]. For\nexample, the study in [12] shows that the user prefers to have explicit\nknowledge and direct control of which terms are used for query\nexpansion, and the penetrable interface that provides this freedom is\nshown to perform better than other interfaces. 
However, in some other cases there is no significant benefit[3, 14], even if the user likes interacting with expansion terms. In a simulated study carried out in [18], the author compares the retrieval performance of interactive and automatic query expansion, and suggests that the potential benefits of the former can be hard to achieve: the user is found to be not good at identifying useful terms for query expansion when a simple term presentation interface is unable to provide sufficient semantic context for the feedback terms.

Our work differs from the previous ones in two important aspects. First, when we choose terms to present to the user for relevance judgment, we not only consider single-term value (e.g., the relative frequency of a term in the top documents, which can be measured by metrics such as Robertson Selection Value and Simplified Kullback-Leibler Distance as listed in [24]), but also examine the cluster structure of the terms, so as to produce a balanced coverage of the different topic aspects. Second, within the language modeling framework, we allow an elaborate construction of the updated query model, setting different probabilities for different terms based on whether a term appears in the query, on its significance in the top documents, and on its cluster membership. Although techniques for adjusting query term weights exist for vector space models[17] and probabilistic relevance models[9], most of the aforementioned works do not use them, choosing to just append feedback terms to the original query (thus giving them equal weights), which can lead to poorer retrieval performance. The combination of the two aspects allows our method to perform much better than the baseline.

The usual way of presenting feedback terms is simply to display them in a list. There have been some works on alternative user interfaces.
[8] arranges terms in a hierarchy, and [11] compares three different interfaces: terms + checkboxes, terms + context (sentences) + checkboxes, and sentences + input text box. In both studies, however, there is no significant performance difference. In our work we adopt the simplest approach of terms + checkboxes. We focus on term presentation and query model construction from feedback terms, and believe that using contexts to improve feedback term quality is orthogonal to our method.

3. GENERAL APPROACH

We follow the language modeling approach, and base our method on the KL-divergence retrieval model proposed in [25]. With this model, the retrieval task involves estimating a query language model θq from a given query and a document language model θd from each document, and calculating their KL-divergence D(θq||θd), which is then used to score the documents. [25] treats relevance feedback as a query model re-estimation problem, i.e., computing an updated query model θq' given the original query text and the extra evidence carried by the judged relevant documents. We adopt this view, and cast our task as updating the query model from user term feedback. There are two key subtasks here: first, how to choose the best terms to present to the user for judgment, in order to gather maximal evidence about the user's information need; second, how to compute an updated query model based on this term feedback evidence, so that it captures the user's information need and translates into good retrieval performance.

4. PRESENTATION TERM SELECTION

Proper selection of the terms to be presented to the user for judgment is crucial to the success of term feedback. If the terms are poorly chosen and few of them are relevant, the user will have a hard time looking for useful terms to help clarify his/her information need.
If the relevant terms are plentiful but all concentrate on a single aspect of the query topic, then we will only be able to get feedback on that aspect and miss the others, resulting in a loss of breadth in the retrieved results. Therefore, it is important to carefully select presentation terms to maximize the expected gain from user feedback, i.e., to choose those that can potentially reveal the most evidence of the user's information need. This is similar to active feedback[21], which suggests that a retrieval system should actively probe the user's information need, and that in the case of relevance feedback, the feedback documents should be chosen to maximize learning benefits (e.g. diversely so as to increase coverage).

In our approach, the top N documents from an initial retrieval using the original query form the source of feedback terms: all terms that appear in them are considered candidates to present to the user. These documents serve as pseudo-feedback, since they provide a much richer context than the original query (usually very short), while the user is not asked to judge their relevance. For the latter reason, it is possible to make N quite large (e.g., in our experiments we set N = 60) to increase its coverage of different aspects of the topic.

The simplest way of selecting feedback terms is to choose the most frequent M terms from the N documents. This method, however, has two drawbacks. First, a lot of common noisy terms will be selected due to their high frequencies in the document collection, unless a stop-word list is used for filtering.
Second, the presentation list will tend to be filled with terms from the major aspects of the topic; those from a minor aspect are likely to be missed due to their relatively low frequencies.

We solve the above problems by two corresponding measures. First, we introduce a background model θB that is estimated from collection statistics and explains the common terms, so that they are much less likely to appear in the presentation list. Second, the terms are selected from multiple clusters in the pseudo-feedback documents, to ensure sufficient representation of different aspects of the topic.

We rely on the mixture multinomial model, which is used for theme discovery in [26]. Specifically, we assume the N documents contain K clusters {Ci | i = 1, 2, · · · , K}, each characterized by a multinomial word distribution (also known as a unigram language model) θi and corresponding to an aspect of the topic. The documents are regarded as sampled from a mixture of K + 1 components, including the K clusters and the background model:

p(w|d) = λB p(w|θB) + (1 − λB) Σ_{i=1}^{K} π_{d,i} p(w|θi)

where w is a word, λB is the mixture weight for the background model θB, and π_{d,i} is the document-specific mixture weight for the i-th cluster model θi. We then estimate the cluster models by maximizing the probability of the pseudo-feedback documents being generated from the multinomial mixture model:

log p(D|Λ) = Σ_{d∈D} Σ_{w∈V} c(w; d) log p(w|d)

where D = {di | i = 1, 2, · · · , N} is the set of the N documents, V is the vocabulary, c(w; d) is w's frequency in d, and Λ = {θi | i = 1, 2, · · · , K} ∪ {π_{d_i,j} | i = 1, 2, · · · , N, j = 1, 2, · · · , K} is the set of model parameters to estimate. The cluster models can be efficiently estimated using the Expectation-Maximization (EM) algorithm.
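As a minimal sketch of this EM estimation (our own simplified rendering, not the exact implementation of [26]: the background weight λB is fixed, initialization is random, and the iteration count is fixed rather than checked for convergence):

```python
import random

def em_mixture(docs, K, bg, lam_b=0.5, iters=30, seed=0):
    """EM for the K-cluster + background multinomial mixture (sketch).
    docs: list of {word: count}; bg: background model {word: prob}.
    Returns cluster models theta[i] and per-document weights pi[d][i]."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    # Random normalized initialization so clusters can differentiate.
    theta = []
    for _ in range(K):
        t = {w: rng.random() + 0.1 for w in vocab}
        s = sum(t.values())
        theta.append({w: v / s for w, v in t.items()})
    pi = [[1.0 / K] * K for _ in docs]
    for _ in range(iters):
        new_theta = [{w: 1e-10 for w in vocab} for _ in range(K)]
        for di, d in enumerate(docs):
            pi_acc = [1e-10] * K
            for w, c in d.items():
                # E-step: responsibility of each cluster for word w in d.
                mix = [(1 - lam_b) * pi[di][i] * theta[i][w] for i in range(K)]
                denom = lam_b * bg.get(w, 1e-10) + sum(mix)
                for i in range(K):
                    r = c * mix[i] / denom  # expected count for cluster i
                    pi_acc[i] += r
                    new_theta[i][w] += r
            # M-step (per document): re-estimate mixing weights.
            s = sum(pi_acc)
            pi[di] = [v / s for v in pi_acc]
        # M-step: re-estimate cluster word distributions.
        for i in range(K):
            s = sum(new_theta[i].values())
            theta[i] = {w: v / s for w, v in new_theta[i].items()}
    return theta, pi
```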
For its details, we refer the reader to [26]. Table 1 shows the\ncluster models for TREC query Transportation tunnel disasters\n(K = 3). Note that only the middle cluster is relevant.\nTable 1: Cluster models for topic 363 Transportation tunnel\ndisasters\nCluster 1 Cluster 2 Cluster 3\ntunnel 0.0768 tunnel 0.0935 tunnel 0.0454\ntransport 0.0364 fire 0.0295 transport 0.0406\ntraffic 0.0206 truck 0.0236 toll 0.0166\nrailwai 0.0186 french 0.0220 amtrak 0.0153\nharbor 0.0146 smoke 0.0157 train 0.0129\nrail 0.0140 car 0.0154 airport 0.0122\nbridg 0.0139 italian 0.0152 turnpik 0.0105\nkilomet 0.0136 firefight 0.0144 lui 0.0095\ntruck 0.0133 blaze 0.0127 jersei 0.0093\nconstruct 0.0131 blanc 0.0121 pass 0.0087\n\u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7 \u00b7\nFrom each of the K estimated clusters, we choose the L =\nM/K terms with highest probabilities to form a total of M\npresentation terms. If a term happens to be in top L in multiple clusters,\nwe assign it to the cluster where it has highest probability and let the\nother clusters take one more term as compensation. We also filter\nout terms in the original query text because they tend to always be\nrelevant when the query is short. The selected terms are then\npresented to the user for judgment. A sample (completed) feedback\nform is shown in Figure 1.\nIn this study we only deal with binary judgment: a presented\nterm is by default unchecked, and a user may check it to\nindicate relevance. We also do not explicitly exploit negative feedback\n(i.e., penalizing irrelevant terms), because with binary feedback an\nunchecked term is not necessarily irrelevant (maybe the user is\nunsure about its relevance). We could ask the user for finer\njudgment (e.g., choosing from highly relevant, somewhat relevant, do\nnot know, somewhat irrelevant and highly irrelevant), but binary\nfeedback is more compact, taking less space to display and less\nuser effort to make judgment.\n5. 
ESTIMATING QUERY MODELS FROM TERM FEEDBACK

In this section, we present several algorithms for exploiting term feedback. The algorithms take as input the original query q, the clusters {θi} generated by the theme discovery algorithm, and the set of feedback terms T with their relevance judgments R, and output an updated query language model θq' that makes the best use of the feedback evidence to capture the user's information need.

First we describe our notation:

• θq: the original query model, derived from the query terms only:

p(w|θq) = c(w; q) / |q|

where c(w; q) is the count of w in q, and |q| = Σ_{w∈q} c(w; q) is the query length.

• θq': the updated query model, which we need to estimate from term feedback.

• θi (i = 1, 2, . . . , K): the unigram language model of cluster Ci, as estimated using the theme discovery algorithm.

• T = {t_{i,j}} (i = 1 . . . K, j = 1 . . . L): the set of terms presented to the user for judgment. t_{i,j} is the j-th term chosen from cluster Ci.

• R = {δw | w ∈ T}: δw is an indicator variable that is 1 if w is judged relevant and 0 otherwise.

5.1 TFB (Direct Term Feedback)

This is a straightforward form of term feedback that does not involve any secondary structure. We give a weight of 1 to terms judged relevant by the user, a weight of μ to query terms, and zero weight to other terms, and then normalize:

p(w|θq') = (δw + μ c(w; q)) / (Σ_{w'∈T} δ_{w'} + μ|q|)

where Σ_{w'∈T} δ_{w'} is the total number of terms that are judged relevant. We call this method TFB (direct Term FeedBack). If we let μ = 1, this approach is equivalent to appending the relevant terms to the original query, which is what standard query expansion (without term reweighting) does.
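As a concrete sketch, the TFB update takes only a few lines (the toy query and checked terms below are hypothetical; μ = 4 is the near-optimal setting reported in the experiments):

```python
def tfb_model(query_counts, checked_terms, mu=4.0):
    """TFB: weight 1 for user-checked terms, mu for original query
    terms, 0 for everything else, then normalize to a distribution."""
    qlen = sum(query_counts.values())
    denom = len(checked_terms) + mu * qlen
    vocab = set(checked_terms) | set(query_counts)
    return {w: ((w in checked_terms) + mu * query_counts.get(w, 0)) / denom
            for w in vocab}

# Hypothetical example: a 2-word query with two checked feedback terms.
theta = tfb_model({"tunnel": 1, "disaster": 1}, {"fire", "smoke"})
# Each query term gets probability 4/10 = 0.4, each checked term 1/10 = 0.1.
```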
If we set μ > 1, we put more emphasis on the query terms than on the checked ones. Note that the resulting model will be more biased toward θq if the original query is long or the user feedback is weak, which makes sense, as we can place more trust in the original query in either case.

Figure 1: Filled clarification form for Topic 363 transportation tunnel disasters. [The form shows the instruction "Please select all terms that are relevant to the topic", the 48 candidate terms with checkboxes laid out in six columns of eight (each cluster occupying two adjacent columns), and a submit button; the original check marks are not recoverable from the text extraction. Cluster 1: traffic, railway, harbor, rail, bridge, kilometer, construct, swiss, cross, link, kong, hong, river, project, meter, shanghai. Cluster 2: fire, truck, french, smoke, car, italian, firefights, blaze, blanc, mont, victim, franc, rescue, driver, chamonix, emerge. Cluster 3: toll, amtrak, train, airport, turnpike, lui, jersey, pass, rome, z, center, electron, road, boston, speed, bu.]

5.2 CFB (Cluster Feedback)

Here we exploit the cluster structure that played an important role when we selected the presentation terms. The clusters represent different aspects of the query topic, each of which may or may not be relevant. If we are able to identify the relevant clusters, we can combine them to generate a query model that is good at discovering documents belonging to these clusters (instead of the irrelevant ones). We could ask the user to directly judge the relevance of a cluster after viewing representative terms in that cluster, but this would sometimes be a difficult task for the user, who has to guess the semantics of a cluster via its set of terms, which may not be well connected to one another due to a lack of context. Therefore, we propose to learn cluster feedback indirectly, inferring the relevance of a cluster through the relevance of its feedback terms. Because each cluster has an equal number of terms presented to the user, the simplest measure of a cluster's relevance is the number of its terms that are judged relevant.
Intuitively, the more terms are marked relevant in a cluster, the closer the cluster is to the query topic, and the more the cluster should participate in query modification. If we combine the cluster models using weights determined this way and then interpolate with the original query model, we get the following formula for query updating, which we call CFB (Cluster FeedBack):

p(w|θq') = λ p(w|θq) + (1 − λ) Σ_{i=1}^{K} [ (Σ_{j=1}^{L} δ_{t_{i,j}}) / (Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{t_{k,j}}) ] p(w|θi)

where Σ_{j=1}^{L} δ_{t_{i,j}} is the number of relevant terms in cluster Ci, and Σ_{k=1}^{K} Σ_{j=1}^{L} δ_{t_{k,j}} is the total number of relevant terms.

We note that when there is only one cluster (K = 1), the above formula degenerates to

p(w|θq') = λ p(w|θq) + (1 − λ) p(w|θ1)

which is merely pseudo-feedback of the form proposed in [25].

5.3 TCFB (Term-Cluster Feedback)

TFB and CFB both have their drawbacks. TFB assigns non-zero probabilities to the presented terms that are marked relevant, but completely ignores the (far more numerous) others, which may be left unchecked due to the user's ignorance, or simply not included in the presentation list; yet we should be able to infer their relevance from the checked ones. For example, in Figure 1, since as many as 5 terms in the middle cluster (the third and fourth columns) are checked, we should have high confidence in the relevance of other terms in that cluster. CFB remedies TFB's problem by treating the terms in a cluster collectively, so that unchecked/unpresented terms receive weights when presented terms in their clusters are judged relevant, but it does not distinguish which terms in a cluster are presented or judged. Intuitively, the judged relevant terms should receive larger weights because they are explicitly indicated as relevant by the user.
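As a sketch of the CFB update (the toy models and checked-term counts are hypothetical; λ = 0.1 is the near-optimal interpolation weight reported in the experiments, and we assume at least one term was checked):

```python
def cfb_model(query_model, cluster_models, checked_per_cluster, lam=0.1):
    """CFB: interpolate the original query model with the cluster models,
    weighting cluster i by its share of user-checked terms."""
    total = sum(checked_per_cluster)  # assumed > 0
    vocab = set(query_model).union(*cluster_models)
    return {w: lam * query_model.get(w, 0.0) + (1 - lam) * sum(
                n / total * theta.get(w, 0.0)
                for n, theta in zip(checked_per_cluster, cluster_models))
            for w in vocab}

# Hypothetical example: cluster 1 got 3 checked terms, cluster 2 got 1,
# so cluster 1 contributes 3/4 of the feedback mass.
theta = cfb_model({"tunnel": 1.0},
                  [{"fire": 0.6, "smoke": 0.4}, {"toll": 1.0}],
                  [3, 1])
```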
Therefore, we try to combine the two methods, hoping to get the best of both. We do this by interpolating the TFB model with the CFB model, and call the result TCFB:

p(w|θq') = α p(w|θq'_TFB) + (1 − α) p(w|θq'_CFB)

6. EXPERIMENTS

In this section, we describe our experiment results. We first describe our experiment setup and present an overview of the various methods' performance. Then we discuss the effects of varying the parameter settings in the algorithms, as well as the number of presentation terms. Next we analyze user term feedback behavior and its relation to retrieval performance. Finally we compare term feedback to relevance feedback and show that it has its particular advantages.

6.1 Experiment Setup and Basic Results

We took the opportunity of the TREC 2005 HARD Track[2] for the evaluation of our algorithms. The track used the AQUAINT collection, a 3GB corpus of English newswire text. The topic set consisted of 50 topics previously known to be hard, i.e. with low retrieval performance. It is for these hard topics that user feedback is most helpful, as it can provide information to disambiguate the queries; with easy topics the user may be unwilling to spend effort on feedback if the automatic retrieval results are good enough. Participants of the track were able to submit custom-designed clarification forms (CF) to solicit feedback from human assessors provided by

Table 2: Retrieval performance for different methods and CF types. The last row is the percentage of MAP improvement over the baseline.
The parameter settings μ = 4, λ = 0.1, α = 0.3 are near optimal.

        Baseline  TFB1C  TFB3C  TFB6C  CFB1C  CFB3C  CFB6C  TCFB1C  TCFB3C  TCFB6C
MAP     0.219     0.288  0.288  0.278  0.254  0.305  0.301  0.274   0.309   0.304
Pr@30   0.393     0.467  0.475  0.457  0.399  0.480  0.473  0.431   0.491   0.473
RR      4339      4753   4762   4740   4600   4907   4872   4767    4947    4906
%       0%        31.5%  31.5%  26.9%  16.0%  39.3%  37.4%  25.1%   41.1%   38.8%

Table 3: MAP variation with the number of presented terms.

# terms  TFB1C  TFB3C  TFB6C  CFB3C  CFB6C  TCFB3C  TCFB6C
6        0.245  0.240  0.227  0.279  0.279  0.281   0.274
12       0.261  0.261  0.242  0.299  0.286  0.297   0.281
18       0.275  0.274  0.256  0.301  0.282  0.300   0.286
24       0.276  0.281  0.265  0.303  0.292  0.305   0.292
30       0.280  0.285  0.270  0.304  0.296  0.307   0.296
36       0.282  0.288  0.272  0.307  0.297  0.309   0.297
42       0.283  0.288  0.275  0.306  0.298  0.309   0.300
48       0.288  0.288  0.278  0.305  0.301  0.309   0.303

NIST. We designed three sets of clarification forms for term feedback, differing in the choice of K, the number of clusters, and L, the number of terms presented from each cluster. They are: 1 × 48, a single cluster with 48 terms; 3 × 16, 3 clusters with 16 terms each; and 6 × 8, 6 clusters with 8 terms each. The total number of presented terms (M) is fixed at 48, so by comparing the performance of the different types of clarification forms we can learn the effects of different degrees of clustering. For each topic, an assessor would complete the forms in the order 6 × 8, 1 × 48, 3 × 16, spending up to three minutes on each form. The sample clarification form shown in Figure 1 is of type 3 × 16. It is a simple and compact interface in which the user can check relevant terms. The form is self-explanatory; there is no need for extra user training on how to use it.

Our initial queries are constructed using only the topic titles, which are on average 2.7 words in length.
As our baseline we use the KL-divergence retrieval method implemented in the Lemur Toolkit (http://www.lemurproject.com) with 5 pseudo-feedback documents. We stem the terms, choose Dirichlet smoothing with a prior of 2000, and truncate query language models to 50 terms (these settings are used throughout the experiments). For all other parameters we use Lemur's default settings. The baseline turns out to perform above average among the track participants. After an initial run using this baseline retrieval method, we take the top 60 documents for each topic and apply the theme discovery algorithm to output the clusters (1, 3, or 6 of them), based on which we generate the clarification forms. After user feedback is received, we run the term feedback algorithms (TFB, CFB or TCFB) to estimate updated query models, which are then used for a second iteration of retrieval.

We evaluate the different retrieval methods' performance on their rankings of the top 1000 documents. The evaluation metrics we adopt include mean average (non-interpolated) precision (MAP), precision at top 30 (Pr@30) and total relevant retrieved (RR). Table 2 shows the performance of the various methods and configurations of K × L. The suffixes (1C, 3C, 6C) after TFB, CFB, TCFB stand for the number of clusters (K). For example, TCFB3C means the TCFB method on the 3 × 16 clarification forms.

From Table 2 we can make the following observations:

1. All methods perform considerably better than the pseudo-feedback baseline, with TCFB3C achieving the highest improvement of 41.1% in MAP, indicating a significant contribution of term feedback to the clarification of the user's information need. In other words, term feedback is truly helpful for improving retrieval accuracy.

2.
For TFB, the performance is almost equal on the 1 × 48 and 3 × 16 clarification forms in terms of MAP (although the latter is slightly better in Pr@30 and RR), and a little worse on the 6 × 8 ones.

3. Both CFB3C and CFB6C perform better than their TFB counterparts in all three metrics, suggesting that feedback on a secondary cluster structure is indeed beneficial. CFB1C is actually worse, because it cannot adjust the weight of its (single) cluster from term feedback and is merely pseudo-feedback.

4. Although TCFB is just a simple mixture of TFB and CFB by interpolation, it is able to outperform both. This supports our speculation that TCFB overcomes the drawbacks of TFB (paying attention only to checked terms) and of CFB (not distinguishing checked and unchecked terms in a cluster). Except for TCFB6C vs. CFB6C, the performance advantage of TCFB over TFB/CFB is significant at p < 0.05 using the Wilcoxon signed rank test. This is not true for TFB vs. CFB, each of which is better than the other in nearly half of the topics.

6.2 Reduction of Presentation Terms

In some situations we may have to reduce the number of presentation terms due to limits in display space or user feedback effort. It is interesting to know whether our algorithms' performance deteriorates when the user is presented with fewer terms. Because the presentation terms within each cluster are generated in decreasing order of their frequencies, the presentation list forms a subset of the original one if its size is reduced (there are complexities arising from terms appearing in the top L of multiple clusters, but these are exceptions). Therefore, we can easily simulate what happens when the number of presentation terms decreases from M to M': we keep all judgments of the top L' = M'/K terms in each cluster and discard the others.
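This reduction simulation is straightforward to express in code (a sketch; the data layout of per-cluster judgment lists is our own assumption):

```python
def reduce_presentation(judged_clusters, m_reduced):
    """Simulate a smaller clarification form: keep the judgments of the
    top L' = M'/K terms in each cluster and discard the rest.
    judged_clusters: per-cluster lists of (term, checked) pairs, ordered
    by decreasing term probability, as on the original form."""
    k = len(judged_clusters)
    l_reduced = m_reduced // k
    return [cluster[:l_reduced] for cluster in judged_clusters]

# Hypothetical example: shrink a 2 x 4 form (M = 8) down to M' = 4.
judged = [[("fire", 1), ("smoke", 1), ("car", 0), ("blaze", 0)],
          [("toll", 0), ("train", 1), ("road", 0), ("speed", 0)]]
reduced = reduce_presentation(judged, 4)
# Keeps the top 2 judgments per cluster.
```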
Table 3 shows the performance of the various algorithms as the number of presentation terms ranges from 6 to 48.

We find that the performance of TFB is more susceptible to presentation term reduction than that of CFB or TCFB. For example, at 12 terms the MAP of TFB3C is 90.6% of that at 48 terms, while the numbers for CFB3C and TCFB3C are 98.0% and 96.1% respectively. We conjecture the reason to be that while TFB's performance depends heavily on how many good terms are chosen for query expansion, CFB only needs a rough estimate of cluster weights to work. Also, the 3 × 16 clarification forms seem to be more robust than the 6 × 8 ones: at 12 terms the MAP of TFB6C is 87.1% of that at 48 terms, lower than the 90.6% for TFB3C. Similarly, for CFB it is 95.0% against 98.0%. This is natural: with a cluster number as large as 6, it is easier to get into the situation where each cluster has too few presentation terms for topic diversification to be useful.

Overall, we are surprised to see that the algorithms are still able to perform reasonably well when the number of presentation terms is small. For example, at only 12 terms CFB3C (with a clarification form of size 3 × 4) can still improve 36.5% over the baseline, dropping only slightly from 39.3% at 48 terms.

6.3 User Feedback Analysis

In this part we study several aspects of the user's term feedback behavior, and whether they are connected to retrieval performance.

Figure 2: Clarification form completion time distributions. [Histograms of the number of topics by completion time, in 30-second bins from 0 to 180 seconds, for the 1 × 48, 3 × 16 and 6 × 8 forms.]

Figure 2 shows the distribution of the time needed to complete a clarification form.
We see that the user is usually able to finish term feedback within a reasonably short amount of time: for more than half of the topics the clarification form is completed in just 1 minute, and only a small fraction of topics (less than 10% for 1 × 48 and 3 × 16) take more than 2 minutes. (The maximal time is 180 seconds, as the NIST assessor would be forced to submit the form at that moment.) This suggests that term feedback is suitable for interactive ad-hoc retrieval, where a user usually does not want to spend too much time on providing feedback.

Table 4: Term selection statistics (topic average)

CF Type               1 × 48  3 × 16  6 × 8
# checked terms       14.8    13.3    11.2
# rel. terms          15.0    12.6    11.2
# rel. checked terms  7.9     6.9     5.9
precision             0.534   0.519   0.527
recall                0.526   0.548   0.527

We find that a user often makes mistakes when judging term relevance. Sometimes a relevant term may be left out because its connection to the query topic is not obvious to the user. At other times a dubious term may be included but turns out to be irrelevant. Take the topic in Figure 1 for example. There was a fire disaster in the Mont Blanc Tunnel between France and Italy in 1999, but the user failed to select such keywords as mont, blanc, french and italian due to his/her ignorance of the event. Indeed, without proper context it would be hard to make perfect judgments.

What, then, is the extent to which the user is good at term feedback? Does it have a serious impact on retrieval performance? To answer these questions, we need a measure of individual terms' true relevance.
We adopt the Simplified KL Divergence metric used in [24] to select query expansion terms as our term relevance measure:

σKLD(w) = p(w|R) log [ p(w|R) / p(w|¬R) ]

where p(w|R) is the probability that a relevant document contains term w, and p(w|¬R) is the probability that an irrelevant document contains w, both of which can be easily computed via maximum likelihood estimation given document-level relevance judgments. If σKLD(w) > 0, w is more likely to appear in relevant documents than in irrelevant ones.

We consider a term relevant if its Simplified KL Divergence value is greater than a certain threshold σ0. We can then define the precision and recall of user term judgment accordingly: precision is the fraction of terms checked by the user that are relevant; recall is the fraction of presented relevant terms that are checked by the user. Table 4 shows the number of checked terms, relevant terms and relevant checked terms when σ0 is set to 1.0, as well as the precision/recall of user term judgment.

Note that when the clarification forms contain more clusters, fewer terms are checked: 14.8 for 1 × 48, 13.3 for 3 × 16 and 11.2 for 6 × 8. A similar pattern holds for relevant terms and relevant checked terms. There seems to be a trade-off between increasing topic diversity by clustering and losing extra relevant terms: when there are more clusters, each of them gets fewer terms to present, which can hurt a major relevant cluster that contains many relevant terms. Therefore, it is not always helpful to have more clusters; e.g., TFB6C is actually worse than TFB1C.

The major finding we can draw from Table 4 is that the user is not particularly good at identifying relevant terms, which echoes the discovery in [18].
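The σKLD measure and the precision/recall of user term judgment defined above can be sketched as follows (the per-term document-frequency fractions in the example are hypothetical):

```python
import math

def sigma_kld(p_rel, p_nonrel, eps=1e-9):
    """Simplified KL Divergence of [24]: p(w|R) * log(p(w|R) / p(w|~R)),
    where the arguments are the fractions of relevant / irrelevant
    documents containing the term (eps guards against zero counts)."""
    return p_rel * math.log((p_rel + eps) / (p_nonrel + eps))

def judgment_quality(checked, presented, term_stats, threshold=1.0):
    """Precision/recall of the user's checks against the sigma_KLD
    relevance criterion (term_stats maps term -> (p_rel, p_nonrel))."""
    relevant = {w for w in presented if sigma_kld(*term_stats[w]) > threshold}
    hits = relevant & checked
    precision = len(hits) / len(checked) if checked else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical stats: "blanc" and "smoke" pass the threshold, "road" does not.
stats = {"blanc": (0.9, 0.02), "smoke": (0.8, 0.05), "road": (0.5, 0.5)}
p, r = judgment_quality({"blanc", "road"}, {"blanc", "smoke", "road"}, stats)
```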
In the case of the 3 × 16 clarification forms, the average number of terms checked as relevant by the user is 13.3 per topic, and the average number of relevant terms whose σKLD value exceeds 1.0 is 12.6. The user is able to recognize only 6.9 of these terms on average. Indeed, the precision and recall of user feedback terms (as defined previously) are far from perfect. On the other hand, if the user had correctly checked all such relevant terms, the performance of our algorithms would have increased considerably, as shown in Table 5.

We see that TFB improves greatly when an oracle checks all relevant terms, while CFB meets a bottleneck around a MAP of 0.325: since all it does is adjust cluster weights, once the learned weights are close to accurate it cannot benefit further from term feedback. Also note that TCFB fails to outperform TFB here, probably because TFB is already sufficiently accurate.

Table 5: Change of MAP when using all (and only) relevant terms (σKLD > 1.0) for feedback.

         original term feedback  relevant term feedback
TFB1C    0.288                   0.354
TFB3C    0.288                   0.354
TFB6C    0.278                   0.346
CFB3C    0.305                   0.325
CFB6C    0.301                   0.326
TCFB3C   0.309                   0.345
TCFB6C   0.304                   0.341

6.4 Comparison with Relevance Feedback

Now we compare term feedback with document-level relevance feedback, in which the user is presented with the top N documents from an initial retrieval and asked to judge their relevance. The feedback process is simulated using document relevance judgments from NIST. We use the mixture-model-based feedback method proposed in [25], with the mixture noise set to 0.95 and the feedback coefficient set to 0.9.

Comparative evaluation of relevance feedback against other methods is complicated by the fact that some documents have already been viewed during feedback, so it makes no sense to include them in the retrieval results of the second run. However, this does not hold for term feedback. Thus, to make the comparison fair w.r.t.
user\"s\ninformation gain, if the feedback documents are relevant, they should be\nkept in the top of the ranking; if they are irrelevant, they should be\nleft out. Therefore, we use relevance feedback to produce a ranking\nof top 1000 retrieved documents but with every feedback document\nexcluded, and then prepend the relevant feedback documents at the\nfront. Table 6 shows the performance of relevance feedback for\ndifferent values of N and compares it with TCFB3C.\nTable 6: Performance of relevance feedback for different\nnumber of feedback documents (N).\nN MAP Pr@30 RR\n5 0.302 0.586 4779\n10 0.345 0.670 4916\n20 0.389 0.772 5004\nTCFB3C 0.309 0.491 4947\nWe see that the performance of TCFB3C is comparable to that\nof relevance feedback using 5 documents. Although it is poorer\nthan when there are 10 feedback documents in terms of MAP and\nPr@30, it does retrieve more documents (4947) when going down\nthe ranked list.\nWe try to compare the quality of automatically inserted terms\nin relevance feedback with that of manually selected terms in term\nfeedback. This is done by truncating the relevance feedback\nmodified query model to a size equal to the number of checked terms\nfor the same topic. We can then compare the terms in the truncated\nmodel with the checked terms. Figure 3 shows the distribution of\nthe terms\" \u03c3KLD scores.\nWe find that term feedback tends to produce expansion terms\nof higher quality(those with \u03c3KLD > 1) compared to relevance\nfeedback (with 10 feedback documents). This does not contradict\nthe fact that the latter yields higher retrieval performance. Actually,\nwhen we use the truncated query model instead of the intact one\nrefined from relevance feedback, the MAP is only 0.304. 
The truth is, although there are many unwanted terms in the expanded query model from feedback documents, there are also more relevant terms than the user can possibly select from the list of presentation terms generated from pseudo-feedback documents, and the positive effects often outweigh the negative ones.

Figure 3: Comparison of expansion term quality between relevance feedback (with 10 feedback documents) and term feedback (with 3 × 16 CFs). [Histograms of the number of terms by σKLD score, binned from −1 to 6.]

We are interested to know under what circumstances term feedback has an advantage over relevance feedback. One such situation is when none of the top N feedback documents is relevant, rendering relevance feedback useless. This is not as infrequent as one might think: out of the 50 topics, there are 13 such cases when N = 5, 10 when N = 10, and still 3 when N = 20. When this happens, one can only back off to the original retrieval method; the power of relevance feedback is lost.

Surprisingly, in 11 out of the 13 such cases where relevance feedback seems impossible, the user is able to check at least 2 relevant terms from the 3 × 16 clarification form (we consider a term t to be relevant if σKLD(t) > 1.0). Furthermore, in 10 of them TCFB3C outperforms the pseudo-feedback baseline, increasing MAP from 0.076 to 0.146 on average (these are particularly hard topics).
We think there are two possible explanations for this phenomenon of term feedback being effective even when relevance feedback does not work. First, even if none of the top N (suppose it is a small number) documents is relevant, we may still find relevant documents in the top 60, which is more inclusive but usually unreachable when people do relevance feedback in interactive ad hoc search, and from which we can draw feedback terms. This is true for topic 367 piracy, where the top 10 feedback documents are all about software piracy, yet there are documents between ranks 10 and 60 that are about piracy on the seas (the real information need), contributing terms such as pirate and ship for selection in the clarification form. Second, for some topics, a document needs to meet some special condition in order to be relevant. The top N documents may be related to the topic, but nonetheless irrelevant. In this case, we may still extract useful terms from these documents, even if they do not qualify as relevant ones. For example, in topic 639 consumer online shopping, a document needs to mention what contributes to shopping growth to really match the specified information need, hence none of the top 10 feedback documents is regarded as relevant. Nevertheless, feedback terms such as retail and commerce are good for query expansion.

7. CONCLUSIONS

In this paper we studied the use of term feedback for interactive information retrieval in the language modeling approach. We proposed a cluster-based method for selecting presentation terms as well as algorithms to estimate refined query models from user term feedback.
We saw significant improvement in retrieval accuracy brought by term feedback, in spite of the fact that a user often makes mistakes in relevance judgment that hurt its performance. We found the best-performing algorithm to be TCFB, which benefits from combining the directly observed term evidence of TFB with the indirectly learned cluster relevance of CFB. When we reduced the number of presentation terms, term feedback was still able to keep much of its performance gain over the baseline. Finally, we compared term feedback to document-level relevance feedback, and found that TCFB3C's performance is on a par with the latter using 5 feedback documents. We regard term feedback as a viable alternative to traditional relevance feedback, especially when there are no relevant documents at the top of the ranking.

We propose to extend our work in several ways. First, we want to study whether the use of various contexts can help the user better identify term relevance, without sacrificing the simplicity and compactness of term feedback. Second, currently all terms are presented to the user in a single batch. We could instead consider iterative term feedback: presenting a small number of terms first, then showing more terms after receiving user feedback, or stopping when the refined query is good enough. The presented terms should be selected dynamically to maximize learning benefits at any moment. Third, we plan to incorporate term feedback into our UCAIR toolbar [20], an Internet Explorer plugin, to make it work for web search. We are also interested in studying how to combine term feedback with relevance feedback or implicit feedback. We could, for example, allow the user to dynamically modify terms in a language model learned from feedback documents.

8. ACKNOWLEDGMENT

This work is supported in part by the National Science Foundation grants IIS-0347933 and IIS-0428472.

9. REFERENCES

[1] J. Allan. Relevance feedback with too much data.
In Proceedings of the 18th annual international ACM SIGIR conference on research and development in information retrieval, pages 337-343, 1995.
[2] J. Allan. HARD track overview in TREC 2005 - High Accuracy Retrieval from Documents. In The Fourteenth Text REtrieval Conference, 2005.
[3] P. Anick. Using terminological feedback for web search refinement: a log-based study. In Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, pages 88-95, 2003.
[4] P. G. Anick and S. Tipirneni. The paraphrase search assistant: terminological feedback for iterative information seeking. In Proceedings of the 22nd annual international ACM SIGIR conference on research and development in information retrieval, pages 153-159, 1999.
[5] C. Buckley, G. Salton, J. Allan, and A. Singhal. Automatic query expansion using SMART. In Proceedings of the Third Text REtrieval Conference, 1994.
[6] D. Harman. Towards interactive query expansion. In Proceedings of the 11th annual international ACM SIGIR conference on research and development in information retrieval, pages 321-331, 1988.
[7] N. A. Jaleel, A. Corrada-Emmanuel, Q. Li, X. Liu, C. Wade, and J. Allan. UMass at TREC 2003: HARD and QA. In TREC, pages 715-725, 2003.
[8] H. Joho, C. Coverson, M. Sanderson, and M. Beaulieu. Hierarchical presentation of expansion terms. In Proceedings of the 2002 ACM symposium on applied computing, pages 645-649, 2002.
[9] K. S. Jones, S. Walker, and S. E. Robertson. A probabilistic model of information retrieval: development and status. Technical Report 446, Computer Laboratory, University of Cambridge, 1998.
[10] D. Kelly, V. D. Dollu, and X. Fu. The loquacious user: a document-independent source of terms for query expansion. In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, pages 457-464, 2005.
[11] D. Kelly and X. Fu.
Elicitation of term relevance feedback: an investigation of term source and context. In Proceedings of the 29th annual international ACM SIGIR conference on research and development in information retrieval, 2006.
[12] J. Koenemann and N. Belkin. A case for interaction: A study of interactive information retrieval behavior and effectiveness. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 205-212, 1996.
[13] V. Lavrenko and W. B. Croft. Relevance-based language models. In Research and Development in Information Retrieval, pages 120-127, 2001.
[14] Y. Nemeth, B. Shapira, and M. Taeib-Maimon. Evaluation of the real and perceived value of automatic and interactive query expansion. In Proceedings of the 27th annual international ACM SIGIR conference on research and development in information retrieval, pages 526-527, 2004.
[15] J. Ponte. A Language Modeling Approach to Information Retrieval. PhD thesis, University of Massachusetts at Amherst, 1998.
[16] S. E. Robertson, S. Walker, S. Jones, M. Beaulieu, and M. Gatford. Okapi at TREC-3. In Proceedings of the Third Text REtrieval Conference, 1994.
[17] J. Rocchio. Relevance feedback in information retrieval. In The SMART retrieval system, pages 313-323. 1971.
[18] I. Ruthven. Re-examining the potential effectiveness of interactive query expansion. In Proceedings of the 26th annual international ACM SIGIR conference on research and development in information retrieval, pages 213-220, 2003.
[19] G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41:288-297, 1990.
[20] X. Shen, B. Tan, and C. Zhai. Implicit user modeling for personalized search. In Proceedings of the 14th ACM international conference on information and knowledge management, pages 824-831, 2005.
[21] X. Shen and C. Zhai. Active feedback in ad-hoc information retrieval.
In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, pages 59-66, 2005.
[22] A. Spink. Term relevance feedback and query expansion: relation to design. In Proceedings of the 17th annual international ACM SIGIR conference on research and development in information retrieval, pages 81-90, 1994.
[23] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of the 19th annual international ACM SIGIR conference on research and development in information retrieval, pages 4-11, 1996.
[24] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. Robertson. Microsoft Cambridge at TREC-13: Web and HARD tracks. In Proceedings of the 13th Text REtrieval Conference, 2004.
[25] C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of the tenth international conference on information and knowledge management, pages 403-410, 2001.
[26] C. Zhai, A. Velivelli, and B. Yu. A cross-collection mixture model for comparative text mining. In Proceedings of the tenth ACM SIGKDD international conference on knowledge discovery and data mining, pages 743-748, 2004.
A Support Vector Method for Optimizing Average Precision

ABSTRACT

Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches that optimize MAP either do not find a globally optimal solution or are computationally expensive. In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores.

1. INTRODUCTION

State of the art information retrieval systems commonly use machine learning techniques to learn ranking functions. However, most current approaches do not optimize for the evaluation measure most often used, namely Mean Average Precision (MAP).

Instead, current algorithms tend to take one of two general approaches. The first approach is to learn a model that estimates the probability of a document being relevant given a query (e.g., [18, 14]). If solved effectively, the ranking with the best MAP performance can easily be derived from the probabilities of relevance. However, achieving high MAP only requires finding a good ordering of the documents. As a result, finding good probabilities requires solving a more difficult problem than necessary, likely requiring more training data to achieve the same MAP performance.

The second common approach is to learn a function that maximizes a surrogate measure.
Performance measures optimized include accuracy [17, 15], ROCArea [1, 5, 10, 11, 13, 21] or modifications of ROCArea [4], and NDCG [2, 3]. Learning a model to optimize for such measures might result in suboptimal MAP performance. In fact, although some previous systems have obtained good MAP performance, it is known that neither achieving optimal accuracy nor optimal ROCArea can guarantee optimal MAP performance [7].

In this paper, we present a general approach for learning ranking functions that maximize MAP performance. Specifically, we present an SVM algorithm that globally optimizes a hinge-loss relaxation of MAP. This approach simplifies the process of obtaining ranking functions with high MAP performance by avoiding additional intermediate steps and heuristics. The new algorithm also makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for accuracy and ROCArea.

In contrast to recent work directly optimizing for MAP performance by Metzler & Croft [16] and Caruana et al. [6], our technique is computationally efficient while finding a globally optimal solution. Like [6, 16], our method learns a linear model, but it is much more efficient in practice and, unlike [16], can handle many thousands of features.

We now describe the algorithm in detail and provide a proof of correctness. Following this, we provide an analysis of running time. We finish with empirical results from experiments on the TREC 9 and TREC 10 Web Track corpora. We have also developed a software package implementing our algorithm that is available for public use.1

2. THE LEARNING PROBLEM

Following the standard machine learning setup, our goal is to learn a function h : X → Y between an input space X (all possible queries) and output space Y (rankings over a corpus).
In order to quantify the quality of a prediction, ŷ = h(x), we will consider a loss function Δ : Y × Y → ℝ. Δ(y, ŷ) quantifies the penalty for making prediction ŷ if the correct output is y. The loss function allows us to incorporate specific performance measures, which we will exploit for optimizing MAP. We restrict ourselves to the supervised learning scenario, where input/output pairs (x, y) are available for training and are assumed to come from some fixed distribution P(x, y). The goal is to find a function h such that the risk (i.e., expected loss),

    R^Δ_P(h) = ∫_{X×Y} Δ(y, h(x)) dP(x, y),

is minimized. Of course, P(x, y) is unknown. But given a finite set of training pairs, S = {(x_i, y_i) ∈ X × Y : i = 1, . . . , n}, the performance of h on S can be measured by the empirical risk,

    R^Δ_S(h) = (1/n) Σ_{i=1}^{n} Δ(y_i, h(x_i)).

In the case of learning a ranked retrieval function, X denotes a space of queries, and Y the space of (possibly weak) rankings over some corpus of documents C = {d_1, . . . , d_{|C|}}. We can define average precision loss as

    Δ_map(y, ŷ) = 1 − MAP(rank(y), rank(ŷ)),

where rank(y) is a vector of the rank values of each document in C. For example, for a corpus of two documents, {d_1, d_2}, with d_1 having higher rank than d_2, rank(y) = (1, 0). We assume true rankings have two rank values, where relevant documents have rank value 1 and non-relevant documents rank value 0. We further assume that all predicted rankings are complete rankings (no ties).

Let p = rank(y) and p̂ = rank(ŷ). The average precision score is defined as

    MAP(p, p̂) = (1/rel) Σ_{j : p_j = 1} Prec@j,

where rel = |{i : p_i = 1}| is the number of relevant documents, and Prec@j is the percentage of relevant documents in the top j documents of the predicted ranking ŷ.

----
1 http://svmrank.yisongyue.com
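The two definitions above translate into a few lines of code. This is a minimal sketch under our own naming, where a ranking is given as a list of document IDs in predicted order and `relevant` is the set of documents with rank value 1:

```python
def average_precision(predicted_order, relevant):
    """AP as defined above: the mean of Prec@j over the positions j of
    the relevant documents in the predicted ranking."""
    hits, precision_sum = 0, 0.0
    for j, doc in enumerate(predicted_order, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / j   # Prec@j
    return precision_sum / len(relevant)

def map_loss(predicted_order, relevant):
    """Delta_map(y, y_hat) = 1 - MAP for a single query."""
    return 1.0 - average_precision(predicted_order, relevant)

# A ranking that places all relevant documents on top has zero loss.
print(map_loss([1, 2, 3, 4], relevant={1, 2}))  # 0.0
```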
MAP is the mean of the average precision scores of a group of queries.

2.1 MAP vs ROCArea

Most learning algorithms optimize for accuracy or ROCArea. While optimizing for these measures might achieve good MAP performance, we use two simple examples to show that it can also be suboptimal in terms of MAP.

ROCArea assigns an equal penalty to each misordering of a relevant/non-relevant pair. In contrast, MAP assigns greater penalties to misorderings higher up in the predicted ranking. Using our notation, ROCArea can be defined as

    ROC(p, p̂) = (1 / (rel · (|C| − rel))) Σ_{i : p_i = 1} Σ_{j : p_j = 0} 1[p̂_i > p̂_j],

where p is the true (weak) ranking, p̂ is the predicted ranking, and 1[b] is the indicator function conditioned on b.

Suppose we have a hypothesis space with only two hypothesis functions, h1 and h2, as shown in Table 1. These two hypotheses predict a ranking for query x over a corpus of eight documents.

Table 1: Toy Example and Models

Doc ID       1  2  3  4  5  6  7  8
p            1  0  0  0  0  1  1  0
rank(h1(x))  8  7  6  5  4  3  2  1
rank(h2(x))  1  2  3  4  5  6  7  8

Table 2 shows the MAP and ROCArea scores of h1 and h2.

Table 2: Performance of Toy Models

Hypothesis  MAP   ROCArea
h1(x)       0.59  0.47
h2(x)       0.51  0.53

Here, a learning method which optimizes for ROCArea would choose h2 since that results in a higher ROCArea score, but this yields a suboptimal MAP score.

2.2 MAP vs Accuracy

Using a very similar example, we now demonstrate how optimizing for accuracy might result in suboptimal MAP. Models which optimize for accuracy are not directly concerned with the ranking. Instead, they learn a threshold such that documents scoring higher than the threshold are classified as relevant and documents scoring lower as non-relevant.

Table 3: Toy Example and Models

Doc ID       1   2   3  4  5  6  7  8  9  10  11
p            1   0   0  0  0  1  1  1  1  0   0
rank(h1(x))  11  10  9  8  7  6  5  4  3  2   1
rank(h2(x))  1   2   3  4  5  6  7  8  9  10  11

We consider again a hypothesis space with two hypotheses.
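The MAP and ROCArea numbers in Table 2 above can be reproduced directly from the two definitions. This sketch uses our own function names; a ranking is the list of document IDs in predicted order:

```python
def avg_prec(order, relevant):
    # Mean of Prec@j over the positions of the relevant documents.
    hits, total = 0, 0.0
    for j, doc in enumerate(order, start=1):
        if doc in relevant:
            hits += 1
            total += hits / j
    return total / len(relevant)

def roc_area(order, relevant):
    # Fraction of relevant/non-relevant pairs ordered correctly.
    pos = {doc: r for r, doc in enumerate(order)}
    nonrel = [d for d in order if d not in relevant]
    correct = sum(1 for i in relevant for j in nonrel if pos[i] < pos[j])
    return correct / (len(relevant) * len(nonrel))

relevant = {1, 6, 7}                   # row p of Table 1
h1 = [1, 2, 3, 4, 5, 6, 7, 8]          # h1 ranks document 1 first
h2 = list(reversed(h1))                # h2 ranks document 8 first
print(round(avg_prec(h1, relevant), 2), round(roc_area(h1, relevant), 2))  # 0.59 0.47
print(round(avg_prec(h2, relevant), 2), round(roc_area(h2, relevant), 2))  # 0.51 0.53
```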
Table 3 shows the predictions of the two hypotheses on a single query x. Table 4 shows the MAP and best accuracy scores of h1(q) and h2(q).

Table 4: Performance of Toy Models

Hypothesis  MAP   Best Acc.
h1(q)       0.70  0.64
h2(q)       0.64  0.73

The best accuracy refers to the highest achievable accuracy on that ranking when considering all possible thresholds. For instance, with h1(q), a threshold between documents 1 and 2 gives 4 errors (documents 6-9 incorrectly classified as non-relevant), yielding an accuracy of 0.64. Similarly, with h2(q), a threshold between documents 5 and 6 gives 3 errors (documents 10-11 incorrectly classified as relevant, and document 1 as non-relevant), yielding an accuracy of 0.73. A learning method which optimizes for accuracy would choose h2 since that results in a higher accuracy score, but this yields a suboptimal MAP score.

3. OPTIMIZING AVERAGE PRECISION

We build upon the approach used by [13] for optimizing ROCArea. Unlike ROCArea, however, MAP does not decompose linearly in the examples and requires a substantially extended algorithm, which we describe in this section.

Recall that the true ranking is a weak ranking with two rank values (relevant and non-relevant). Let C^x and C^x̄ denote the sets of relevant and non-relevant documents of C for query x, respectively.

We focus on functions which are parametrized by a weight vector w, and thus wish to find w to minimize the empirical risk, R^Δ_S(w) ≡ R^Δ_S(h(·; w)). Our approach is to learn a discriminant function F : X × Y → ℝ over input-output pairs. Given query x, we can derive a prediction by finding the ranking y that maximizes the discriminant function:

    h(x; w) = argmax_{y ∈ Y} F(x, y; w).    (1)

We assume F to be linear in some combined feature representation of inputs and outputs Ψ(x, y) ∈ ℝ^N, i.e.,

    F(x, y; w) = wᵀ Ψ(x, y).
(2)

The combined feature function we use is

    Ψ(x, y) = (1 / (|C^x| · |C^x̄|)) Σ_{i : d_i ∈ C^x} Σ_{j : d_j ∈ C^x̄} [ y_ij (φ(x, d_i) − φ(x, d_j)) ],

where φ : X × C → ℝ^N is a feature mapping function from a query/document pair to a point in N-dimensional space.2

We represent rankings as a matrix of pairwise orderings, Y ⊂ {−1, 0, +1}^{|C|×|C|}. For any y ∈ Y, y_ij = +1 if d_i is ranked ahead of d_j, y_ij = −1 if d_j is ranked ahead of d_i, and y_ij = 0 if d_i and d_j have equal rank. We consider only matrices which correspond to valid rankings (i.e., obeying antisymmetry and transitivity). Intuitively, Ψ is a summation over the vector differences of all relevant/non-relevant document pairings. Since we assume predicted rankings to be complete rankings, y_ij is either +1 or −1 (never 0).

Given a learned weight vector w, predicting a ranking (i.e., solving equation (1)) for query x reduces to picking each y_ij to maximize wᵀΨ(x, y). As is also discussed in [13], this is attained by sorting the documents by wᵀφ(x, d) in descending order. We will discuss later the choices of φ we used for our experiments.

3.1 Structural SVMs

The above formulation is very similar to learning a straightforward linear model while training on the pairwise differences of relevant/non-relevant document pairings. Many SVM-based approaches optimize over these pairwise differences (e.g., [5, 10, 13, 4]), although these methods do not optimize for MAP during training. Previously, it was not clear how to incorporate non-linear multivariate loss functions such as MAP loss directly into global optimization problems such as SVM training. We now present a method based on structural SVMs [19] to address this problem.

We use the structural SVM formulation, presented in Optimization Problem 1, to learn a w ∈ ℝ^N.

Optimization Problem 1.
(Structural SVM)

    min_{w, ξ ≥ 0}  (1/2) ||w||² + (C/n) Σ_{i=1}^{n} ξ_i    (3)

    s.t. ∀i, ∀y ∈ Y \ y_i :
         wᵀΨ(x_i, y_i) ≥ wᵀΨ(x_i, y) + Δ(y_i, y) − ξ_i    (4)

The objective function to be minimized (3) is a tradeoff between model complexity, ||w||², and a hinge-loss relaxation of MAP loss, Σ ξ_i. As is usual in SVM training, C is a parameter that controls this tradeoff and can be tuned to achieve good performance in different training tasks.

Algorithm 1 Cutting plane algorithm for solving OP 1 within tolerance ε.

 1: Input: (x_1, y_1), . . . , (x_n, y_n), C, ε
 2: W_i ← ∅ for all i = 1, . . . , n
 3: repeat
 4:   for i = 1, . . . , n do
 5:     H(y; w) ≡ Δ(y_i, y) + wᵀΨ(x_i, y) − wᵀΨ(x_i, y_i)
 6:     compute ŷ = argmax_{y ∈ Y} H(y; w)
 7:     compute ξ_i = max{0, max_{y ∈ W_i} H(y; w)}
 8:     if H(ŷ; w) > ξ_i + ε then
 9:       W_i ← W_i ∪ {ŷ}
10:       w ← optimize (3) over W = ∪_i W_i
11:     end if
12:   end for
13: until no W_i has changed during the iteration

For each (x_i, y_i) in the training set, a set of constraints of the form in equation (4) is added to the optimization problem. Note that wᵀΨ(x, y) is exactly our discriminant function F(x, y; w) (see equation (2)). During prediction, our model chooses the ranking which maximizes the discriminant (1). If the discriminant value for an incorrect ranking y is greater than for the true ranking y_i (e.g., F(x_i, y; w) > F(x_i, y_i; w)), then the corresponding slack variable, ξ_i, must be at least Δ(y_i, y) for that constraint to be satisfied. Therefore, the sum of slacks, Σ ξ_i, upper bounds the MAP loss. This is stated formally in Proposition 1.

Proposition 1. Let ξ*(w) be the optimal solution of the slack variables for OP 1 for a given weight vector w.

----
2 For example, one dimension might be the number of times the query words appear in the document.
Then (1/n) Σ_{i=1}^{n} ξ_i is an upper bound on the empirical risk R^Δ_S(w). (See [19] for the proof.)

Proposition 1 shows that OP 1 learns a ranking function that optimizes an upper bound on MAP error on the training set. Unfortunately there is a problem: a constraint is required for every possible wrong output y, and the number of possible wrong outputs is exponential in the size of C. Fortunately, we may employ Algorithm 1 to solve OP 1. Algorithm 1 is a cutting plane algorithm, iteratively introducing constraints until we have solved the original problem within a desired tolerance ε [19]. The algorithm starts with no constraints, and iteratively finds for each example (x_i, y_i) the output ŷ associated with the most violated constraint. If the corresponding constraint is violated by more than ε, we introduce ŷ into the working set W_i of active constraints for example i, and re-solve (3) using the updated W. It can be shown that Algorithm 1's outer loop is guaranteed to halt within a polynomial number of iterations for any desired precision ε.

Theorem 1. Let R̄ = max_i max_y ||Ψ(x_i, y_i) − Ψ(x_i, y)||, Δ̄ = max_i max_y Δ(y_i, y). Then for any ε > 0, Algorithm 1 terminates after adding at most

    max{ 2nΔ̄/ε, 8CΔ̄R̄²/ε² }

constraints to the working set W. (See [19] for the proof.)

However, within the inner loop of this algorithm we have to compute argmax_{y ∈ Y} H(y; w), where

    H(y; w) = Δ(y_i, y) + wᵀΨ(x_i, y) − wᵀΨ(x_i, y_i),

or equivalently,

    argmax_{y ∈ Y} Δ(y_i, y) + wᵀΨ(x_i, y),

since wᵀΨ(x_i, y_i) is constant with respect to y. Though closely related to the classification procedure, this has the substantial complication that we must contend with the additional Δ(y_i, y) term.
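On a corpus small enough to enumerate all rankings, the slack bound of Proposition 1 can be checked numerically by brute force. The sketch below is our own test harness, not code from the paper: the 2-relevant/3-non-relevant corpus, the random features, and all names are illustrative assumptions.

```python
import itertools
import random

random.seed(0)
docs = [0, 1, 2, 3, 4]
relevant = {0, 1}
dim = 3
phi = {d: [random.gauss(0, 1) for _ in range(dim)] for d in docs}
w = [random.gauss(0, 1) for _ in range(dim)]

def avg_prec(order):
    hits, total = 0, 0.0
    for j, d in enumerate(order, start=1):
        if d in relevant:
            hits += 1
            total += hits / j
    return total / len(relevant)

def discriminant(order):
    """F(x, y; w) = w^T Psi(x, y) for the pairwise feature map."""
    pos = {d: r for r, d in enumerate(order)}
    nonrel = [d for d in docs if d not in relevant]
    total = 0.0
    for i in relevant:
        for j in nonrel:
            y_ij = 1 if pos[i] < pos[j] else -1
            total += y_ij * sum(w[t] * (phi[i][t] - phi[j][t]) for t in range(dim))
    return total / (len(relevant) * len(nonrel))

true_order = [0, 1, 2, 3, 4]                 # relevant documents on top
rankings = list(itertools.permutations(docs))
# Hinge slack for this (w, example): most violated constraint of (4).
slack = max(0.0, max((1 - avg_prec(y)) + discriminant(y) - discriminant(true_order)
                     for y in rankings))
predicted = max(rankings, key=discriminant)  # the model's argmax prediction
# Proposition 1: the slack upper-bounds the MAP loss of the prediction.
assert slack >= (1 - avg_prec(predicted)) - 1e-12
```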
Without the ability to efficiently find the most violated constraint (i.e., solve argmax_{y ∈ Y} H(y; w)), the constraint generation procedure is not tractable.

3.2 Finding the Most Violated Constraint

Using OP 1 and optimizing to ROCArea loss (Δ_roc), the problem of finding the most violated constraint, or solving argmax_{y ∈ Y} H(y; w) (henceforth argmax H), is addressed in [13]. Solving argmax H for Δ_map is more difficult. This is primarily because ROCArea decomposes nicely into a sum of scores computed independently on each relative ordering of a relevant/non-relevant document pair. MAP, on the other hand, does not decompose in the same way as ROCArea. The main algorithmic contribution of this paper is an efficient method for solving argmax H for Δ_map.

One useful property of Δ_map is that it is invariant to swapping two documents with equal relevance. For example, if documents d_a and d_b are both relevant, then swapping the positions of d_a and d_b in any ranking does not affect Δ_map. By extension, Δ_map is invariant to any arbitrary permutation of the relevant documents amongst themselves and of the non-relevant documents amongst themselves. However, this reshuffling will affect the discriminant score, wᵀΨ(x, y). This leads us to Observation 1.

Observation 1. Consider rankings which are constrained by fixing the relevance at each position in the ranking (e.g., the 3rd document in the ranking must be relevant). Every ranking which satisfies the same set of constraints will have the same Δ_map.
If the relevant documents are sorted by wᵀφ(x, d) in descending order, and the non-relevant documents are likewise sorted by wᵀφ(x, d), then the interleaving of the two sorted lists which satisfies the constraints will maximize H for that constrained set of rankings.

Observation 1 implies that in the ranking which maximizes H, the relevant documents will be sorted by wᵀφ(x, d), and the non-relevant documents will also be sorted likewise. By first sorting the relevant and non-relevant documents, the problem is simplified to finding the optimal interleaving of two sorted lists. For the rest of our discussion, we assume that the relevant documents and non-relevant documents are both sorted by descending wᵀφ(x, d). For convenience, we also refer to the relevant documents as {d^x_1, . . . , d^x_{|C^x|}} = C^x, and the non-relevant documents as {d^x̄_1, . . . , d^x̄_{|C^x̄|}} = C^x̄.

We define δ_j(i1, i2), with i1 < i2, as the change in H from when the highest ranked relevant document ranked after d^x̄_j is d^x_{i1} to when it is d^x_{i2}. For i2 = i1 + 1, we have

    δ_j(i, i+1) = (1/|C^x|) · ( j/(j+i) − (j−1)/(j+i−1) ) − 2 · (s^x_i − s^x̄_j),    (5)

where s_i = wᵀφ(x, d_i). The first term in (5) is the change in Δ_map when the i-th relevant document has j non-relevant documents ranked before it, as opposed to j − 1. The second term is the change in the discriminant score, wᵀΨ(x, y), when y_ij changes from +1 to −1.

    . . . , d^x_i, d^x̄_j, d^x_{i+1}, . . .
    . . . , d^x̄_j, d^x_i, d^x_{i+1}, . . .

Figure 1: Example for δ_j(i, i + 1)

Figure 1 gives a conceptual example for δ_j(i, i + 1). The bottom ranking differs from the top only in that d^x̄_j slides up one rank.
The difference in the value of H for these two rankings is exactly δ_j(i, i + 1).

For any i1 < i2, we can then define δ_j(i1, i2) as

    δ_j(i1, i2) = Σ_{k=i1}^{i2−1} δ_j(k, k+1),    (6)

or equivalently,

    δ_j(i1, i2) = Σ_{k=i1}^{i2−1} [ (1/|C^x|) · ( j/(j+k) − (j−1)/(j+k−1) ) − 2 · (s^x_k − s^x̄_j) ].

Let o_1, . . . , o_{|C^x̄|} encode the positions of the non-relevant documents, where d^x_{o_j} is the highest ranked relevant document ranked after the j-th non-relevant document. Due to Observation 1, this encoding uniquely identifies a complete ranking. We can recover the ranking as

    y_ij = 0                     if i = j
    y_ij = sign(s_i − s_j)       if d_i, d_j have equal relevance
    y_ij = sign(o_j − i − 0.5)   if d_i = d^x_i, d_j = d^x̄_j
    y_ij = sign(j − o_i + 0.5)   if d_i = d^x̄_i, d_j = d^x_j.    (7)

We can now reformulate H into a new objective function,

    H'(o_1, . . . , o_{|C^x̄|} | w) = H(ȳ | w) + Σ_{k=1}^{|C^x̄|} δ_k(o_k, |C^x| + 1),

where ȳ is the true (weak) ranking. Conceptually, H' starts with a perfect ranking ȳ, and adds the change in H as each successive non-relevant document slides up the ranking. We can then reformulate the argmax H problem as

    argmax H = argmax_{o_1, . . . , o_{|C^x̄|}} Σ_{k=1}^{|C^x̄|} δ_k(o_k, |C^x| + 1)    (8)

    s.t. o_1 ≤ . . . ≤ o_{|C^x̄|}.    (9)

Algorithm 2 describes the procedure used to solve equation (8). Conceptually, Algorithm 2 starts with a perfect ranking. Then, for each successive non-relevant document, the algorithm modifies the solution by sliding that document up the ranking to locally maximize H while keeping the positions of the other non-relevant documents constant.

3.2.1 Proof of Correctness

Algorithm 2 is greedy in the sense that it finds the best position of each non-relevant document independently from the other non-relevant documents.
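The δ_j bookkeeping and the greedy per-document argmax can be sketched as follows. The brute-force comparison at the end checks, on random scores, that the greedy choices form a feasible (non-decreasing) sequence and match an exhaustive search over all feasible interleavings; the proof of correctness below establishes this in general. This test harness and its names are our own, not code from the paper:

```python
import itertools
import random

def delta_step(j, i, s_rel, s_non):
    """delta_j(i, i+1) from (5); j and i are 1-indexed, and the score
    lists are sorted in descending order."""
    R = len(s_rel)
    return (1.0 / R) * (j / (j + i) - (j - 1) / (j + i - 1)) \
        - 2.0 * (s_rel[i - 1] - s_non[j - 1])

def delta(j, i1, i2, s_rel, s_non):
    """delta_j(i1, i2) from (6); the empty sum (i1 == i2) is 0."""
    return sum(delta_step(j, k, s_rel, s_non) for k in range(i1, i2))

def greedy_opts(s_rel, s_non):
    """Line 6 of Algorithm 2: opt_j = argmax_k delta_j(k, |C^x| + 1)."""
    R = len(s_rel)
    return [max(range(1, R + 2),
                key=lambda k: delta(j, k, R + 1, s_rel, s_non))
            for j in range(1, len(s_non) + 1)]

random.seed(1)
s_rel = sorted((random.gauss(0, 1) for _ in range(3)), reverse=True)
s_non = sorted((random.gauss(0, 1) for _ in range(4)), reverse=True)

opts = greedy_opts(s_rel, s_non)
assert opts == sorted(opts)  # feasibility constraint (10) holds

def objective(o):
    return sum(delta(j + 1, o[j], len(s_rel) + 1, s_rel, s_non)
               for j in range(len(o)))

# Exhaustive search over all non-decreasing position tuples.
best = max(itertools.combinations_with_replacement(
               range(1, len(s_rel) + 2), len(s_non)), key=objective)
assert abs(objective(opts) - objective(best)) < 1e-9
```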
In other words, the algorithm maximizes H for each non-relevant document, d^x̄_j, without considering the positions of the other non-relevant documents, and thus ignores the constraints of (9).

Algorithm 2 Finding the Most Violated Constraint (argmax H) for Algorithm 1 with Δ_map

1: Input: w, C^x, C^x̄
2: sort C^x and C^x̄ in descending order of wᵀφ(x, d)
3: s^x_i ← wᵀφ(x, d^x_i), i = 1, . . . , |C^x|
4: s^x̄_i ← wᵀφ(x, d^x̄_i), i = 1, . . . , |C^x̄|
5: for j = 1, . . . , |C^x̄| do
6:   opt_j ← argmax_k δ_j(k, |C^x| + 1)
7: end for
8: encode ŷ according to (7)
9: return ŷ

In order for the solution to be feasible, the j-th non-relevant document must be ranked after the first j − 1 non-relevant documents, thus satisfying

    opt_1 ≤ opt_2 ≤ . . . ≤ opt_{|C^x̄|}.    (10)

If the solution is feasible, then it clearly solves (8). Therefore, it suffices to prove that Algorithm 2 satisfies (10). We first prove that δ_j(·, ·) is monotonically decreasing in j.

Lemma 1. For any 1 ≤ i1 < i2 ≤ |C^x| + 1 and 1 ≤ j < |C^x̄|, it must be the case that

    δ_{j+1}(i1, i2) ≤ δ_j(i1, i2).

Proof. Recall from (6) that both δ_j(i1, i2) and δ_{j+1}(i1, i2) are summations of i2 − i1 terms. We will show that each term in the summation of δ_{j+1}(i1, i2) is no greater than the corresponding term in δ_j(i1, i2), or

    δ_{j+1}(k, k+1) ≤ δ_j(k, k+1)

for k = i1, . . . , i2 − 1.

Each term in δ_j(k, k+1) and δ_{j+1}(k, k+1) can be further decomposed into two parts (see (5)). We will show that each part of δ_{j+1}(k, k+1) is no greater than the corresponding part in δ_j(k, k+1).
In other words, we will show that both

    (j+1)/(j+k+1) − j/(j+k) ≤ j/(j+k) − (j−1)/(j+k−1)    (11)

and

    −2 · (s^x_k − s^x̄_{j+1}) ≤ −2 · (s^x_k − s^x̄_j)    (12)

are true for the aforementioned values of j and k.

It is easy to see that (11) is true by observing that for any two positive integers 1 ≤ a < b,

    (a+1)/(b+1) − a/b ≤ a/b − (a−1)/(b−1),

and choosing a = j and b = j + k.

The second inequality (12) holds because Algorithm 2 first sorts d^x̄ in descending order of s^x̄, implying s^x̄_{j+1} ≤ s^x̄_j. Thus we see that each term in δ_{j+1} is no greater than the corresponding term in δ_j, which completes the proof.

The result of Lemma 1 leads directly to our main correctness result:

Theorem 2. In Algorithm 2, the computed values of opt_j satisfy (10), implying that the solution returned by Algorithm 2 is feasible and thus optimal.

Proof. We will prove that

    opt_j ≤ opt_{j+1}

holds for any 1 ≤ j < |C^x̄|, thus implying (10). Since Algorithm 2 computes opt_j as

    opt_j = argmax_k δ_j(k, |C^x| + 1),    (13)

then by definition of δ_j (6), for any 1 ≤ i < opt_j,

    δ_j(i, opt_j) = δ_j(i, |C^x| + 1) − δ_j(opt_j, |C^x| + 1) < 0.

Using Lemma 1, we know that

    δ_{j+1}(i, opt_j) ≤ δ_j(i, opt_j) < 0,

which implies that for any 1 ≤ i < opt_j,

    δ_{j+1}(i, |C^x| + 1) − δ_{j+1}(opt_j, |C^x| + 1) < 0.

Suppose for contradiction that opt_{j+1} < opt_j. Then

    δ_{j+1}(opt_{j+1}, |C^x| + 1) < δ_{j+1}(opt_j, |C^x| + 1),

which contradicts (13). Therefore, it must be the case that opt_j ≤ opt_{j+1}, which completes the proof.

3.2.2 Running Time

The running time of Algorithm 2 can be split into two parts. The first part is the sort by wᵀφ(x, d), which requires O(n log n) time, where n = |C^x| + |C^x̄|.
The second part computes each opt_j, which requires O(|C^x| · |C^x̄|) time. Though in the worst case this is O(n^2), the number of relevant documents, |C^x|, is often very small (e.g., constant with respect to n), in which case the running time for the second part is simply O(n). For most real-world datasets, Algorithm 2 is dominated by the sort and has complexity O(n log n).

Algorithm 1 is guaranteed to halt in a polynomial number of iterations [19], and each iteration runs Algorithm 2. Virtually all well-performing models were trained in a reasonable amount of time (usually less than one hour). Once training is complete, making predictions on query x using the resulting hypothesis h(x|w) requires only sorting by w^T φ(x, d).

We developed our software using a Python interface^3 to SVMstruct, since the Python language greatly simplified the coding process. To improve performance, it is advisable to use the standard C implementation^4 of SVMstruct.

4. EXPERIMENT SETUP

The main goal of our experiments is to evaluate whether directly optimizing MAP leads to improved MAP performance compared to conventional SVM methods that optimize a substitute loss such as accuracy or ROCArea. We empirically evaluate our method using two sets of TREC Web Track queries, one each from TREC 9 and TREC 10 (topics 451-500 and 501-550), both of which used the WT10g corpus. For each query, TREC provides the relevance judgments of the documents. We generated our features using the scores of existing retrieval functions on these queries. While our method is agnostic to the meaning of the features, we chose to use existing retrieval functions as a simple yet effective way of acquiring useful features.
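As a concrete illustration of Algorithm 2, the following is a minimal Python sketch (not the authors' SVMstruct-based implementation). The helper `term` implements the two-part decomposition of δ_j(k, k+1) literally as it appears in (11) and (12); any normalization constants from (5) are omitted, which is an assumption on our part. δ_j(k, |C^x|+1) is then a suffix sum of these terms, and opt_j its argmax:

```python
def most_violated_ranking(s_rel, s_non):
    """Sketch of Algorithm 2: for each non-relevant document j,
    choose opt_j = argmax_k delta_j(k, |C^x| + 1), as in (13).

    s_rel, s_non: scores w^T phi(x, d) of the relevant and
    non-relevant documents, already sorted in descending order.
    """
    R = len(s_rel)
    opts = []
    for j in range(1, len(s_non) + 1):
        def term(k):
            # the two parts of delta_j(k, k + 1), as in (11) and (12);
            # normalization constants from (5) are omitted here
            return (j / (j + k) - (j - 1) / (j + k - 1)
                    - 2.0 * (s_rel[k - 1] - s_non[j - 1]))
        # delta_j(k, R + 1) is the suffix sum of term(k), ..., term(R);
        # k = R + 1 corresponds to the empty sum (rank after all
        # relevant documents), with value 0
        best_k, best_val = R + 1, 0.0
        suffix = 0.0
        for k in range(R, 0, -1):
            suffix += term(k)
            if suffix > best_val:
                best_k, best_val = k, suffix
        opts.append(best_k)
    return opts
```

Per Theorem 2, the returned positions are non-decreasing, so the joint assignment is feasible.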
As such, our experiments essentially test our method's ability to re-rank the highly ranked documents (e.g., re-combine the scores of the retrieval functions) to improve MAP.

^3 http://www.cs.cornell.edu/~tomf/svmpython/
^4 http://svmlight.joachims.org/svm_struct.html

Dataset                Base Funcs    Features
TREC 9  Indri          15            750
TREC 10 Indri          15            750
TREC 9  Submissions    53            2650
TREC 10 Submissions    18            900

Table 5: Dataset Statistics

We compare our method against the best retrieval functions trained on (henceforth base functions), as well as against previously proposed SVM methods. Comparing with the best base functions tests our method's ability to learn a useful combination. Comparing with previous SVM methods allows us to test whether optimizing directly for MAP (as opposed to accuracy or ROCArea) achieves a higher MAP score in practice. The rest of this section describes the base functions and the feature generation method in detail.

4.1 Choosing Retrieval Functions

We chose two sets of base functions for our experiments. For the first set, we generated three indices over the WT10g corpus using Indri^5. The first index was generated using default settings, the second used Porter stemming, and the last used Porter stemming and Indri's default stopwords. For both TREC 9 and TREC 10, we used the description portion of each query and scored the documents using five of Indri's built-in retrieval methods: Cosine Similarity, TFIDF, Okapi, Language Model with Dirichlet Prior, and Language Model with Jelinek-Mercer Prior. All parameters were kept at their defaults.

We computed the scores of these five retrieval methods over the three indices, giving 15 base functions in total.
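Since all comparisons in the experiments are in terms of MAP, it may help to recall how the metric is computed. The following is a small sketch using the standard definitions of average precision and its macro-average over queries (standard formulas, not anything specific to this paper):

```python
def average_precision(ranking):
    """ranking: list of booleans in ranked order, True = relevant.
    AP = mean of precision@i over the positions i of relevant docs."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(rankings):
    """Macro-average of AP over queries, as reported in the tables."""
    return sum(average_precision(r) for r in rankings) / len(rankings)
```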
For each query, we considered the scores of documents found in the union of the top 1000 documents of each base function. For our second set of base functions, we used scores from the TREC 9 [8] and TREC 10 [9] Web Track submissions. We used only the non-manual, non-short submissions from both years. For TREC 9 and TREC 10, there were 53 and 18 such submissions, respectively. A typical submission contained scores of its top 1000 documents.

Figure 2: Example Feature Binning. [The figure plots w^T φ(x, d) as a piecewise-constant function of f(d|x), with bin boundaries a, b and c on the horizontal axis.]

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.242   -           0.236   -
Best Func.    0.204   39/11 **    0.181   37/13 **
2nd Best      0.199   38/12 **    0.174   43/7 **
3rd Best      0.188   34/16 **    0.174   38/12 **

Table 6: Comparison with Indri Functions

4.2 Generating Features

In order to generate input examples for our method, a concrete instantiation of φ must be provided. For each document d scored by a set of retrieval functions F on query x, we generate the features as a vector

  φ(x, d) = ( 1[f(d|x) > k] : ∀f ∈ F, ∀k ∈ K_f ),

where f(d|x) denotes the score that retrieval function f assigns to document d for query x, and each K_f is a set of real values. From a high level, we are expressing the score of each retrieval function using |K_f| + 1 bins.

Since we are using linear kernels, one can think of the learning problem as finding a good piecewise-constant combination of the scores of the retrieval functions. Figure 2 shows an example of our feature mapping method. In this example we have a single feature, F = {f}. Here, K_f = {a, b, c}, and the weight vector is w = (w_a, w_b, w_c).

^5 http://www.lemurproject.org
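The indicator-feature construction above can be sketched as follows; the function and argument names are our own, and the thresholds K_f are assumed to be given:

```python
def make_features(scores, thresholds):
    """Build phi(x, d) = (1[f(d|x) > k] : f in F, k in K_f)
    for one query-document pair.

    scores: {function name: f(d|x)}
    thresholds: {function name: sorted list of bin boundaries K_f}
    """
    phi = []
    for f in sorted(scores):  # fixed feature order across documents
        phi.extend(1 if scores[f] > k else 0 for k in thresholds[f])
    return phi
```

With K_f = {a, b, c} and w = (w_a, w_b, w_c), the dot product w^T φ(x, d) reproduces the piecewise-constant sums described next.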
For any document d and query x, we have

  w^T φ(x, d) =
    0                    if f(d|x) < a
    w_a                  if a ≤ f(d|x) < b
    w_a + w_b            if b ≤ f(d|x) < c
    w_a + w_b + w_c      if c ≤ f(d|x).

This is expressed qualitatively in Figure 2, where w_a and w_b are positive, and w_c is negative.

We ran our main experiments using four choices of F: the set of aforementioned Indri retrieval functions for TREC 9 and TREC 10, and the Web Track submissions for TREC 9 and TREC 10. For each F and each function f ∈ F, we chose 50 values for K_f which are reasonably spaced and capture the sensitive region of f.

Using the four choices of F, we generated four datasets for our main experiments. Table 5 contains statistics of the generated datasets. There are many ways to generate features, and we are not advocating our method over others. This was simply an efficient means to normalize the outputs of different functions and allow for a more expressive model.

5. EXPERIMENTS

For each dataset in Table 5, we performed 50 trials. For each trial, we train on 10 randomly selected queries, and select another 5 queries at random for a validation set. Models were trained using a wide range of C values. The model which performed best on the validation set was selected and tested on the remaining 35 queries.

All queries were selected to be in the training, validation and test sets the same number of times. Using this setup, we performed the same experiments while using our method (SVM∆map), an SVM optimizing for ROCArea (SVM∆roc) [13], and a conventional classification SVM (SVMacc) [20]. All SVM methods used a linear kernel. We report the average performance of all models over the 50 trials.

5.1 Comparison with Base Functions

In analyzing our results, the first question to answer is: can SVM∆map learn a model which outperforms the best base functions? Table 6 presents the comparison of SVM∆map with the best Indri base functions. Each column group contains the macro-averaged MAP performance of SVM∆map or a base function. The W/L columns show the number of queries where SVM∆map achieved a higher MAP score. Significance tests were performed using the two-tailed Wilcoxon signed rank test. Two stars indicate a significance level of 0.95. All tables displaying our experimental results are structured identically. Here, we find that SVM∆map significantly outperforms the best base functions.

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.290   -           0.287   -
Best Func.    0.280   28/22       0.283   29/21
2nd Best      0.269   30/20       0.251   36/14 **
3rd Best      0.266   30/20       0.233   36/14 **

Table 7: Comparison with TREC Submissions

Table 7 shows the comparison when trained on TREC submissions. While achieving a higher MAP score than the best base functions, the performance difference between SVM∆map and the base functions is not significant. Given that many of these submissions use scoring functions which are carefully crafted to achieve high MAP, it is possible that the best performing submissions use techniques which subsume the techniques of the other submissions. As a result, SVM∆map would not be able to learn a hypothesis which can significantly out-perform the best submission.

Hence, we ran the same experiments using a modified dataset where the features computed using the best submission were removed. Table 8 shows the results (note that we are still comparing against the best submission though we are not using it for training).

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.284   -           0.288   -
Best Func.    0.280   27/23       0.283   31/19
2nd Best      0.269   30/20       0.251   36/14 **
3rd Best      0.266   30/20       0.233   35/15 **

Table 8: Comparison with TREC Subm. (w/o best)
Notice that while the performance of SVM∆map degraded slightly, it was still comparable with that of the best submission.

5.2 Comparison with Previous SVM Methods

The next question to answer is: does SVM∆map produce higher MAP scores than previous SVM methods? Tables 9 and 10 present the results of SVM∆map, SVM∆roc, and SVMacc when trained on the Indri retrieval functions and TREC submissions, respectively. Table 11 contains the corresponding results when trained on the TREC submissions without the best submission.

To start with, our results indicate that SVMacc was not competitive with SVM∆map and SVM∆roc, and at times underperformed dramatically. As such, we tried several approaches to improve the performance of SVMacc.

5.2.1 Alternate SVMacc Methods

One issue which may cause SVMacc to underperform is the severe imbalance between relevant and non-relevant documents. The vast majority of the documents are not relevant. SVMacc2 addresses this problem by assigning more penalty to false negative errors. For each dataset, the ratio of the false negative to false positive penalties is equal to the ratio of the number of non-relevant and relevant documents in that dataset.

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.242   -           0.236   -
SVM∆roc       0.237   29/21       0.234   24/26
SVMacc        0.147   47/3 **     0.155   47/3 **
SVMacc2       0.219   39/11 **    0.207   43/7 **
SVMacc3       0.113   49/1 **     0.153   45/5 **
SVMacc4       0.155   48/2 **     0.155   48/2 **

Table 9: Trained on Indri Functions

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.290   -           0.287   -
SVM∆roc       0.282   29/21       0.278   35/15 **
SVMacc        0.213   49/1 **     0.222   49/1 **
SVMacc2       0.270   34/16 **    0.261   42/8 **
SVMacc3       0.133   50/0 **     0.182   46/4 **
SVMacc4       0.233   47/3 **     0.238   46/4 **

Table 10: Trained on TREC Submissions
Tables 9, 10 and 11 indicate that SVMacc2 still performs significantly worse than SVM∆map.

Another possible issue is that SVMacc attempts to find just one discriminating threshold b that is query-invariant. It may be that different queries require different values of b. Having the learning method try to find a good b value (when one does not exist) may be detrimental.

We took two approaches to address this issue. The first method, SVMacc3, converts the retrieval function scores into percentiles. For example, for document d, query q and retrieval function f, if the score f(d|q) is in the top 90% of the scores f(·|q) for query q, then the converted score is f'(d|q) = 0.9. Each K_f contains 50 evenly spaced values between 0 and 1. Tables 9, 10 and 11 show that the performance of SVMacc3 was also not competitive with SVM∆map.

The second method, SVMacc4, normalizes the scores given by f for each query. For example, assume for query q that f outputs scores in the range 0.2 to 0.7. Then for document d, if f(d|q) = 0.6, the converted score would be f'(d|q) = (0.6 − 0.2)/(0.7 − 0.2) = 0.8. Each K_f contains 50 evenly spaced values between 0 and 1. Again, Tables 9, 10 and 11 show that SVMacc4 was not competitive with SVM∆map.

5.2.2 MAP vs ROCArea

SVM∆roc performed much better than SVMacc in our experiments.
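The two per-query score transformations used for SVMacc3 and SVMacc4 can be sketched as follows (a sketch under our reading of the text; the tie handling in the percentile version is our own choice):

```python
def percentile_scores(scores):
    """SVMacc3-style conversion (sketch): each score becomes the
    fraction of the query's scores that it is >= to."""
    n = len(scores)
    return [sum(s >= t for t in scores) / n for s in scores]

def minmax_scores(scores):
    """SVMacc4-style per-query normalization: (s - lo) / (hi - lo)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]  # degenerate query: all scores equal
    return [(s - lo) / (hi - lo) for s in scores]
```

`minmax_scores` reproduces the worked example in the text: a score of 0.6 in the range [0.2, 0.7] maps to (0.6 − 0.2)/(0.7 − 0.2) = 0.8.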
When trained on Indri retrieval functions (see Table 9), the performance of SVM∆roc was slightly, though not significantly, worse than that of SVM∆map. However, Table 10 shows that SVM∆map did significantly outperform SVM∆roc when trained on the TREC submissions. Table 11 shows the performance of the models when trained on the TREC submissions with the best submission removed. The performance of most models degraded by a small amount, with SVM∆map still having the best performance.

              TREC 9              TREC 10
Model         MAP     W/L         MAP     W/L
SVM∆map       0.284   -           0.288   -
SVM∆roc       0.274   31/19 **    0.272   38/12 **
SVMacc        0.215   49/1 **     0.211   50/0 **
SVMacc2       0.267   35/15 **    0.258   44/6 **
SVMacc3       0.133   50/0 **     0.174   46/4 **
SVMacc4       0.228   46/4 **     0.234   45/5 **

Table 11: Trained on TREC Subm. (w/o Best)

6. CONCLUSIONS AND FUTURE WORK

We have presented an SVM method that directly optimizes MAP. It provides a principled approach and avoids difficult-to-control heuristics. We formulated the optimization problem and presented an algorithm which provably finds the solution in polynomial time. We have shown empirically that our method is generally superior to or competitive with conventional SVM methods.

Our new method makes it conceptually just as easy to optimize SVMs for MAP as was previously possible only for Accuracy and ROCArea. The computational cost for training is very reasonable in practice. Since other methods typically require tuning multiple heuristics, we also expect to train fewer models before finding one which achieves good performance.

The learning framework used by our method is fairly general. A natural extension of this framework would be to develop methods to optimize for other important IR measures, such as Normalized Discounted Cumulative Gain [2, 3, 4, 12] and Mean Reciprocal Rank.

7.
ACKNOWLEDGMENTS

This work was funded under NSF Award IIS-0412894, NSF CAREER Award 0237381, and a gift from Yahoo! Research. The third author was also partly supported by a Microsoft Research Fellowship.

8. REFERENCES

[1] B. T. Bartell, G. W. Cottrell, and R. K. Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 1994.
[2] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[3] C. J. C. Burges, R. Ragno, and Q. Le. Learning to rank with non-smooth cost functions. In Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS), 2006.
[4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon. Adapting ranking SVM to document retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[5] B. Carterette and D. Petkova. Learning a ranking from pairwise preferences. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[6] R. Caruana, A. Niculescu-Mizil, G. Crew, and A. Ksikes. Ensemble selection from libraries of models. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
[7] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the International Conference on Machine Learning (ICML), 2006.
[8] D. Hawking. Overview of the TREC-9 web track. In Proceedings of TREC-2000, 2000.
[9] D. Hawking and N. Craswell. Overview of the TREC-2001 web track. In Proceedings of TREC-2001, Nov. 2001.
[10] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, 2000.
[11] A.
Herschtal and B. Raskutti. Optimising area under the ROC curve using gradient descent. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
[12] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2000.
[13] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the International Conference on Machine Learning (ICML), pages 377-384, New York, NY, USA, 2005. ACM Press.
[14] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), pages 111-119, 2001.
[15] Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in nonstandard situations. Machine Learning, 46:191-202, 2002.
[16] D. Metzler and W. B. Croft. A Markov random field model for term dependencies. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 472-479, 2005.
[17] K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach. In Proceedings of the International Conference on Machine Learning (ICML), 1999.
[18] S. Robertson. The probability ranking principle in IR. Journal of Documentation, 33(4):294-304, 1977.
[19] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research (JMLR), 6(Sep):1453-1484, 2005.
[20] V. Vapnik. Statistical Learning Theory. Wiley and Sons Inc., 1998.
[21] L. Yan, R. Dodier, M. Mozer, and R. Wolniewicz. Optimizing classifier performance via approximation to the Wilcoxon-Mann-Whitney statistic.
In Proceedings of the International Conference on Machine Learning (ICML), 2003.
Estimation and Use of Uncertainty in Pseudo-relevance Feedback

ABSTRACT

Existing pseudo-relevance feedback methods typically perform averaging over the top-retrieved documents, but ignore an important statistical dimension: the risk or variance associated with either the individual document models, or their combination. Treating the baseline feedback method as a black box, and the output feedback model as a random variable, we estimate a posterior distribution for the feedback model by resampling a given query's top-retrieved documents, using the posterior mean or mode as the enhanced feedback model. We then perform model combination over several enhanced models, each based on a slightly modified query sampled from the original query. We find that resampling documents helps increase individual feedback model precision by removing noise terms, while sampling from the query improves robustness (worst-case performance) by emphasizing terms related to multiple query aspects. The result is a meta-feedback algorithm that is both more robust and more precise than the original strong baseline method.

1. INTRODUCTION

Uncertainty is an inherent feature of information retrieval. Not only do we not know the queries that will be presented to our retrieval algorithm ahead of time, but the user's information need may be vague or incompletely specified by these queries. Even if the query were perfectly specified, language in the collection documents is inherently complex and ambiguous, and matching such language effectively is a formidable problem by itself.
With this in mind, we wish to treat many important quantities calculated by the retrieval system, whether a relevance score for a document or a weight for a query expansion term, as random variables whose true value is uncertain, but where the uncertainty about the true value may be quantified by replacing the fixed value with a probability distribution over possible values. In this way, retrieval algorithms may attempt to quantify the risk or uncertainty associated with their output rankings, or improve the stability or precision of their internal calculations.

Current algorithms for pseudo-relevance feedback (PRF) tend to follow the same basic method whether we use vector space-based algorithms such as Rocchio's formula [16], or more recent language modeling approaches such as Relevance Models [10]. First, a set of top-retrieved documents is obtained from an initial query and assumed to approximate a set of relevant documents. Next, a single feedback model vector is computed according to some sort of average, centroid, or expectation over the set of possibly-relevant document models. For example, the document vectors may be combined with equal weighting, as in Rocchio, or by query likelihood, as may be done using the Relevance Model^1. The use of an expectation is reasonable for practical and theoretical reasons, but by itself ignores potentially valuable information about the risk of the feedback model.

Our main hypothesis in this paper is that estimating the uncertainty in feedback is useful and leads to better individual feedback models and more robust combined models. Therefore, we propose a method for estimating the uncertainty associated with an individual feedback model in terms of a posterior distribution over language models. To do this, we systematically vary the inputs to the baseline feedback method and fit a Dirichlet distribution to the output.
We use the posterior mean or mode as the improved feedback model estimate. This process is shown in Figure 1. As we show later, the mean and mode may vary significantly from the single feedback model proposed by the baseline method. We also perform model combination using several improved feedback language models obtained from a small number of new queries sampled from the original query. A model's weight combines two complementary factors: the model's probability of generating the query, and the variance of the model, with high-variance models getting lower weight.

^1 For example, an expected parameter vector conditioned on the query observation is formed from top-retrieved documents, which are treated as training strings (see [10], p. 62).

Figure 1: Estimating the uncertainty of the feedback model for a single query.

2. SAMPLING-BASED FEEDBACK

In Sections 2.1-2.5 we describe a general method for estimating a probability distribution over the set of possible language models. In Sections 2.6 and 2.7 we summarize how different query samples are used to generate multiple feedback models, which are then combined.

2.1 Modeling Feedback Uncertainty

Given a query Q and a collection C, we assume a probabilistic retrieval system that assigns a real-valued document score f(D, Q) to each document D in C, such that the score is proportional to the estimated probability of relevance. We make no other assumptions about f(D, Q). The nature of f(D, Q) may be complex: for example, if the retrieval system supports structured query languages [12], then f(D, Q) may represent the output of an arbitrarily complex inference network defined by the structured query operators. In theory, the scoring function can vary from query to query, although in this study, for simplicity, we keep the scoring function the same for all queries.
Our specific query method is given in Section 3.

We treat the feedback algorithm as a black box and assume that its inputs are the original query and the corresponding top-retrieved documents, with a score given to each document. We assume that the output of the feedback algorithm is a vector of term weights to be used to add or reweight the terms in the representation of the original query, with the vector normalized to form a probability distribution. We view the inputs to the feedback black box as random variables, and analyze the feedback model as a random variable that changes in response to changes in the inputs. Like the document scoring function f(D, Q), the feedback algorithm may implement a complex, non-linear scoring formula, and so as its inputs vary, the resulting feedback models may have a complex distribution over the space of feedback models (the sample space). Because of this potential complexity, we do not attempt to derive a posterior distribution in closed form, but instead use simulation. We call this distribution over possible feedback models the feedback model distribution. Our goal in this section is to estimate a useful approximation to the feedback model distribution.

For a specific framework for experiments, we use the language modeling (LM) approach for information retrieval [15]. The score of a document D with respect to a query Q and collection C is given by p(Q|D) with respect to language models θ̂_Q and θ̂_D estimated for the query and document respectively. We denote the set of k top-retrieved documents from collection C in response to Q by D_Q(k, C).
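As a concrete illustration of the scoring rule p(Q|D), here is a minimal sketch of query-likelihood scoring with a multinomial unigram document model. The text does not specify a smoothing method, so the use of Dirichlet smoothing with parameter mu is our assumption:

```python
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, collection_terms, mu=1000.0):
    """log p(Q|D) under a unigram document model smoothed against
    the collection model p(w|C). Dirichlet smoothing is assumed here;
    the paper does not specify its smoothing choice."""
    doc = Counter(doc_terms)
    coll = Counter(collection_terms)
    coll_len = sum(coll.values())
    doc_len = sum(doc.values())
    logp = 0.0
    for w in query_terms:
        p_c = coll[w] / coll_len                  # collection model p(w|C)
        p_w = (doc[w] + mu * p_c) / (doc_len + mu)  # smoothed p(w|D)
        logp += math.log(p_w)
    return logp
```

Ranking documents by this score gives the set D_Q(k, C) of top-k retrieved documents used as input to the feedback black box.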
For simplicity, we assume that queries and documents are generated by multinomial distributions whose parameters are represented by unigram language models.

To incorporate feedback in the LM approach, we assume a model-based scheme in which our goal is to take the query and resulting ranked documents D_Q(k, C) as input, and output an expansion language model θ̂_E, which is then interpolated with the original query model θ̂_Q:

  θ̂_New = (1 − α) · θ̂_Q + α · θ̂_E.   (1)

This includes the possibility of α = 1, where the original query model is completely replaced by the feedback model.

Our sample space is the set of all possible language models L_F that may be output as feedback models. Our approach is to take samples from this space and then fit a distribution to the samples using maximum likelihood. For simplicity, we start by assuming the latent feedback distribution has the form of a Dirichlet distribution. Although the Dirichlet is a unimodal distribution, and in general quite limited in its expressiveness in the sample space, it is a natural match for the multinomial language model, can be estimated quickly, and can capture the most salient features of confident and uncertain feedback models, such as the overall spread of the distribution.

2.2 Resampling document models

We would like an approximation to the posterior distribution of the feedback model L_F. To accomplish this, we apply a widely-used simulation technique called bootstrap sampling ([7], p.
474) on the input parameters, namely, the set of top-retrieved documents.

Bootstrap sampling allows us to simulate the approximate effect of perturbing the parameters within the black box feedback algorithm by perturbing the inputs to that algorithm in a systematic way, while making no assumptions about the nature of the feedback algorithm.

Specifically, we sample k documents with replacement from D_Q(k, C), and calculate an expansion language model θ_b using the black box feedback method. We repeat this process B times to obtain a set of B feedback language models, to which we then fit a Dirichlet distribution. Typically B is in the range of 20 to 50 samples, with performance being relatively stable in this range. Note that instead of treating each top document as equally likely, we sample according to the estimated probabilities of relevance of each document in D_Q(k, C). Thus, a document is more likely to be chosen the higher it is in the ranking.

2.3 Justification for a sampling approach

The rationale for our sampling approach has two parts. First, we want to improve the quality of individual feedback models by smoothing out variation when the baseline feedback model is unstable. In this respect, our approach resembles bagging [4], an ensemble approach which generates multiple versions of a predictor by making bootstrap copies of the training set, and then averages the (numerical) predictors. In our application, top-retrieved documents can be seen as a kind of noisy training set for relevance.

Second, sampling is an effective way to estimate basic properties of the feedback posterior distribution, which can then be used for improved model combination.
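The resampling loop of Section 2.2 can be sketched as below; the `feedback` callable stands in for the black-box feedback method, and the use of `random.choices` for weighted sampling with replacement is our own choice:

```python
import random

def bootstrap_feedback_models(docs, rel_probs, feedback, B=30, seed=0):
    """Draw B bootstrap resamples of the top-retrieved documents,
    weighted by their estimated relevance, and run the black-box
    feedback method on each resample.

    docs: the top-retrieved documents D_Q(k, C)
    rel_probs: their estimated probabilities of relevance
    feedback: callable mapping a list of documents to a feedback model
    """
    rng = random.Random(seed)
    k = len(docs)
    models = []
    for _ in range(B):
        # sample k documents with replacement, higher-ranked
        # (higher-probability) documents chosen more often
        sample = rng.choices(docs, weights=rel_probs, k=k)
        models.append(feedback(sample))
    return models
```

The B resulting models are the samples to which the Dirichlet of Section 2.5 is fit.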
For example, a model may be weighted by its prediction confidence, estimated as a function of the variability of the posterior around the model.

Figure 2: Visualization of expansion language model variance using self-organizing maps, showing the distribution of language models that results from resampling the inputs to the baseline expansion method. The language model that would have been chosen by the baseline expansion is at the center of each map. The similarity function is Jensen-Shannon divergence. [Panels: (a) Topic 401, "Foreign minorities, Germany"; (b) Topic 402, "Behavioral genetics"; (c) Topic 459, "When can a lender foreclose on property".]

2.4 Visualizing feedback distributions

Before describing how we fit and use the Dirichlet distribution over feedback models, it is instructive to view some examples of actual feedback model distributions that result from bootstrap sampling the top-retrieved documents from different TREC topics.

Each point in our sample space is a language model, which typically has several thousand dimensions. To help analyze the behavior of our method we used a Self-Organizing Map (via the SOM-PAK package [9]) to 'flatten' and visualize the high-dimensional density function^2.

The density maps for three TREC topics are shown in Figure 2 above. The dark areas represent regions of high similarity between language models. The light areas represent regions of low similarity - the 'valleys' between clusters. Each diagram is centered on the language model that would have been chosen by the baseline expansion. A single peak (mode) is evident in some examples, but more complex structure appears in others.
Also, while the distribution is usually close to the baseline feedback model, for some topics they are a significant distance apart (as measured by Jensen-Shannon divergence), as in Subfigure 2c. In such cases, the mode or mean of the feedback distribution often performs significantly better than the baseline (and in a smaller proportion of cases, significantly worse).

2.5 Fitting a posterior feedback distribution

After obtaining feedback model samples by resampling the feedback model inputs, we estimate the feedback distribution. We assume that the multinomial feedback models {θ̂_1, . . . , θ̂_B} were generated by a latent Dirichlet distribution with parameters {α_1, . . . , α_N}. To estimate the {α_1, . . . , α_N}, we fit the Dirichlet parameters to the B language model samples according to maximum likelihood using a generalized Newton procedure, details of which are given in Minka [13]. We assume a simple Dirichlet prior over the {α_1, . . . , α_N}, setting each to α_i = μ · p(w_i | C), where μ is a parameter and p(· | C) is the collection language model estimated from a set of documents from collection C. The parameter fitting converges very quickly - typically just 2 or 3 iterations are enough - so that it is practical to apply at query-time when computational overhead must be small.

^2 Because our points are language models in the multinomial simplex, we extended SOM-PAK to support Jensen-Shannon divergence, a widely-used similarity measure between probability distributions.
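Jensen-Shannon divergence, the similarity measure used above for comparing language models, can be sketched as follows (standard definition; the natural-log base is our assumption):

```python
import math

def kl(p, q):
    """KL divergence between two probability vectors (0 log 0 = 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Symmetric JS divergence: average KL to the midpoint model.
    Used here as the SOM similarity function between language models."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL divergence, JS divergence is symmetric and always finite, which makes it well suited to comparing sparse expansion models.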
In practice, we can restrict the calculation to the vocabulary of the top-retrieved documents, instead of the entire collection. Note that for this step we are re-using the existing retrieved documents and not performing additional queries.
Given the parameters of an N-dimensional Dirichlet distribution Dir(α), the mean μ and mode x vectors are easy to calculate and are given respectively by

μᵢ = αᵢ / Σⱼ αⱼ   (2)    and    xᵢ = (αᵢ − 1) / (Σⱼ αⱼ − N).   (3)

We can then choose the language model at the mean or the mode of the posterior as the final enhanced feedback model. (We found the mode to give slightly better performance.)
For information retrieval, the number of samples we will have available is likely to be quite small for performance reasons, usually fewer than ten. Moreover, while random sampling is useful in certain cases, it is perfectly acceptable to allow deterministic sampling distributions, but these must be designed carefully in order to approximate an accurate output variance. We leave this for future study.

2.6 Query variants
We use the following methods for generating variants of the original query. Each variant corresponds to a different assumption about which aspects of the original query may be important. This is a form of deterministic sampling. We selected three simple methods that cover complementary assumptions about the query.
No-expansion: Use only the original query. The assumption is that the given terms are a complete description of the information need.
Leave-one-out: A single term is left out of the original query.
The assumption is that one of the query terms is a noise term.
Single-term: A single term is chosen from the original query. This assumes that only one aspect of the query, namely that represented by the term, is most important.
After generating a variant of the original query, we combine it with the original query using a weight α_SUB so that we do not stray too 'far'. In this study, we set α_SUB = 0.5. For example, using the Indri [12] query language, a leave-one-out variant of the initial query that omits the term 'ireland' for TREC topic 404 is:
#weight(0.5 #combine(ireland peace talks) 0.5 #combine(peace talks))

2.7 Combining enhanced feedback models from multiple query variants
When using multiple query variants, the resulting enhanced feedback models are combined using Bayesian model combination. To do this, we treat each word as an item to be classified as belonging to a relevant or non-relevant class, and derive a class probability for each word by combining the scores from each query variant. Each score is given by that term's probability in the Dirichlet distribution. The term scores are weighted by the inverse of the variance of the term in the enhanced feedback model's Dirichlet distribution. The prior probability of a word's membership in the relevant class is given by the probability of the original query in the entire enhanced expansion model.

3. EVALUATION
In this section we present results confirming the usefulness of estimating a feedback model distribution from weighted resampling of top-ranked documents, and of combining the feedback models obtained from different small changes in the original query.

3.1 General method
We evaluated performance on a total of 350 queries derived from four sets of TREC topics: 51-200 (TREC-1&2), 351-400 (TREC-7), 401-450 (TREC-8), and 451-550 (wt10g, TREC-9&10). We chose these for their varied content and document properties.
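The variant generation of Section 2.6 can be sketched as below; the helper names are ours, and the output string simply mirrors the #weight/#combine example above rather than any official Indri API.

```python
def leave_one_out_variants(terms):
    """One variant per term, with that term removed."""
    return [terms[:i] + terms[i + 1:] for i in range(len(terms))]

def single_term_variants(terms):
    """One variant per term, keeping only that term."""
    return [[t] for t in terms]

def indri_weighted(original, variant, alpha_sub=0.5):
    """Combine a variant with the original query, as in
    #weight(0.5 #combine(...) 0.5 #combine(...))."""
    return "#weight(%g #combine(%s) %g #combine(%s))" % (
        alpha_sub, " ".join(original), 1.0 - alpha_sub, " ".join(variant))
```

For the three-term topic 404 query this yields three leave-one-out variants and three single-term variants, each paired with the original query at weight α_SUB = 0.5.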
For example, wt10g documents are Web pages with a wide variety of subjects and styles, while TREC-1&2 documents are more homogeneous news articles. Indexing and retrieval were performed using the Indri system in the Lemur toolkit [12][1]. Our queries were derived from the words in the title field of the TREC topics. Phrases were not used. To generate the baseline queries passed to Indri, we wrapped the query terms with Indri's #combine operator. For example, the initial query for topic 404 is:
#combine(ireland peace talks)
We performed Krovetz stemming for all experiments. Because we found that the baseline (Indri) expansion method performed better using a stopword list with the feedback model, all experiments used a stoplist of 419 common English words. However, an interesting side-effect of our resampling approach is that it tends to remove many stopwords from the feedback model, making a stoplist less critical. This is discussed further in Section 3.6.

3.2 Baseline feedback method
For our baseline expansion method, we use an algorithm included in Indri 1.0 as the default expansion method. This method first selects terms using a log-odds calculation described by Ponte [14], but assigns final term weights using Lavrenko's relevance model [10].
We chose the Indri method because it gives a consistently strong baseline, is based on a language modeling approach, and is simple to experiment with. In a TREC evaluation using the GOV2 corpus [6], the method was one of the top-performing runs, achieving a 19.8% gain in MAP compared to using unexpanded queries. In this study, it achieves an average gain in MAP of 17.25% over the four collections.
Indri's expansion method first calculates a log-odds ratio o(v) for each potential expansion term v, given by

o(v) = Σ_D log [ p(v|D) / p(v|C) ]   (4)

over all documents D containing v, in collection C. Then the expansion term candidates are sorted by descending o(v), and the top m are chosen.
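The candidate-selection step of Eq. (4) can be sketched as follows. Here doc_models maps each document to its (smoothed) term distribution p(v|D) and coll_model holds p(v|C); these names and the data layout are our assumptions, not details of Indri's implementation.

```python
import math

def log_odds(term, docs, coll_model, doc_models):
    """o(v) = sum over documents containing v of log p(v|D)/p(v|C)."""
    return sum(math.log(doc_models[d][term] / coll_model[term])
               for d in docs if term in doc_models[d])

def top_candidates(docs, coll_model, doc_models, m):
    """Sort the pooled vocabulary by descending o(v); keep the top m."""
    vocab = set()
    for d in docs:
        vocab.update(doc_models[d])
    scored = sorted(vocab,
                    key=lambda v: log_odds(v, docs, coll_model, doc_models),
                    reverse=True)
    return scored[:m]
```

Because o(v) compares within-document probability to collection probability, a very common word like 'the' scores near zero even when frequent in the feedback documents, while topical words score highly.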
Finally, the term weights r(v) used in the expanded query are calculated based on the relevance model

r(v) = Σ_D [ p(q|D) p(v|D) / p(v) ] p(D)   (5)

The quantity p(q|D) is the probability score assigned to the document in the initial retrieval. We use Dirichlet smoothing of p(v|D) with μ = 1000.
This relevance model is then combined with the original query using linear interpolation, weighted by a parameter α. By default we used the top 50 documents for feedback and the top 20 expansion terms, with the feedback interpolation parameter α = 0.5 unless otherwise stated. For example, the baseline expanded query for topic 404 is:
#weight(0.5 #combine(ireland peace talks) 0.5 #weight(0.10 ireland 0.08 peace 0.08 northern ...))

3.3 Expansion performance
We measure our feedback algorithm's effectiveness by two main criteria: precision and robustness. Robustness, and the tradeoff between precision and robustness, is analyzed in Section 3.4. In this section, we examine average precision and precision in the top 10 documents (P10). We also include recall at 1,000 documents.
For each query, we obtained a set of B feedback models using the Indri baseline. Each feedback model was obtained from a random sample of the top k documents taken with replacement. For these experiments, B = 30 and k = 50. Each feedback model contained 20 terms. On the query side, we used leave-one-out (LOO) sampling to create the query variants. Single-term query sampling had consistently worse performance across all collections, so our results here focus on LOO sampling. We used the methods described in Section 2 to estimate an enhanced feedback model from the Dirichlet posterior distribution for each query variant, and to combine the feedback models from all the query variants. We call our method 'resampling expansion' and denote it as RS-FB here. We denote the Indri baseline feedback method as Base-FB.
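Eq. (5) above can be sketched as below. query_scores holds p(q|D) from the initial retrieval, doc_models the smoothed p(v|D), and p_v the term prior; the uniform document prior p(D) and the final normalization over candidates are our assumptions for illustration, not details from the paper.

```python
def relevance_model_weights(cands, docs, query_scores, doc_models, p_v,
                            p_d=None):
    """r(v) = sum_D p(q|D) p(v|D) / p(v) * p(D)  (cf. Eq. 5)."""
    if p_d is None:
        # Assume a uniform prior over the feedback documents.
        p_d = {d: 1.0 / len(docs) for d in docs}
    r = {}
    for v in cands:
        r[v] = sum(query_scores[d] * doc_models[d].get(v, 0.0)
                   / p_v[v] * p_d[d] for d in docs)
    # Normalize to a distribution over the candidate terms.
    z = sum(r.values())
    return {v: w / z for v, w in r.items()}
```

Terms that are frequent in highly scored documents receive the largest weights, which is why p(q|D) acts as a soft relevance weight on each document's contribution.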
Results from applying both the baseline expansion method (Base-FB) and resampling expansion (RS-FB) are shown in Table 1.
We observe several trends in this table. First, the average precision of RS-FB was comparable to Base-FB, achieving an average gain of 17.6% compared to using no expansion across the four collections. The Indri baseline expansion gain was 17.25%. Also, the RS-FB method achieved consistent improvements in P10 over Base-FB for every topic set, with an average improvement of 6.89% over Base-FB for all 350 topics. The lowest P10 gain over Base-FB was +3.82% for TREC-7 and the highest was +11.95% for wt10g. Finally, both Base-FB and RS-FB also consistently improved recall over using no expansion, with Base-FB achieving better recall than RS-FB for all topic sets.

3.4 Retrieval robustness
We use the term robustness to mean the worst-case average precision performance of a feedback algorithm. Ideally, a robust feedback method would never perform worse than using the original query, while often performing better using the expansion.
To evaluate robustness in this study, we use a very simple measure called the robustness index (RI)³. For a set of queries Q, the RI measure is defined as:

RI(Q) = (n+ − n-) / |Q|   (6)

where n+ is the number of queries helped by the feedback method and n- is the number of queries hurt. Here, by 'helped' we mean obtaining a higher average precision as a result of feedback. The value of RI ranges from a minimum

³ This is sometimes also called the reliability of improvement index and was used in Sakai et al.
[17].

Table 1: Comparison of baseline (Base-FB) feedback and feedback using re-sampling (RS-FB). Improvement shown for Base-FB and RS-FB is relative to using no expansion.

Collection   Measure   NoExp         Base-FB             RS-FB
TREC 1&2     AvgP      0.1818        0.2419 (+33.04%)    0.2406 (+32.24%)
             P10       0.4443        0.4913 (+10.57%)    0.5363 (+17.83%)
             Recall    15084/37393   19172/37393         15396/37393
TREC 7       AvgP      0.1890        0.2175 (+15.07%)    0.2169 (+14.75%)
             P10       0.4200        0.4320 (+2.85%)     0.4480 (+6.67%)
             Recall    2179/4674     2608/4674           2487/4674
TREC 8       AvgP      0.2031        0.2361 (+16.25%)    0.2268 (+11.70%)
             P10       0.3960        0.4160 (+5.05%)     0.4340 (+9.59%)
             Recall    2144/4728     2642/4728           2485/4728
wt10g        AvgP      0.1741        0.1829 (+5.06%)     0.1946 (+11.78%)
             P10       0.2760        0.2630 (-4.71%)     0.2960 (+7.24%)
             Recall    3361/5980     3725/5980           3664/5980

[Figure 3: The trade-off between robustness and average precision for different corpora. (a) TREC 1&2 (upper curve); TREC 8 (lower curve). (b) TREC 7 (upper curve); wt10g (lower curve). The x-axis gives the change in MAP over using baseline expansion with α = 0.5. The y-axis gives the Robustness Index (RI). Each curve through uncircled points shows the RI/MAP tradeoff using the simple small-α strategy (see text) as α decreases from 0.5 to zero in the direction of the arrow. Circled points represent the tradeoffs obtained by resampling feedback for α = 0.5.]

Table 2: Comparison of robustness index (RI) for baseline feedback (Base-FB) vs. resampling feedback (RS-FB). Also shown are the actual number of queries hurt by feedback (n-) for each method and collection.

Collection   N     Base-FB n-   Base-FB RI   RS-FB n-   RS-FB RI
TREC 1&2     103   26           +0.495       15         +0.709
TREC 7       46    14           +0.391       10         +0.565
TREC 8       44    12           +0.455       12         +0.455
wt10g        91    48           -0.055       39         +0.143
Combined     284   100          +0.296       76         +0.465
Queries for which initial average precision was negligible (≤ 0.01) were ignored, giving the remaining query count in column N.
of −1.0, when all queries are hurt by the feedback method, to +1.0 when all queries are helped. The RI measure does not take into account the magnitude or distribution of the amount of change across the set Q. However, it is easy to understand as a general indication of robustness.
One obvious way to improve the worst-case performance of feedback is simply to use a smaller fixed α interpolation parameter, such as α = 0.3, placing less weight on the (possibly risky) feedback model and more on the original query. We call this the 'small-α' strategy. Since we are also reducing the potential gains when the feedback model is 'right', however, we would expect some trade-off between average precision and robustness. We therefore compared the precision/robustness trade-off between our resampling feedback algorithm and the simple small-α method. The results are summarized in Figure 3. In the figure, the curve for each topic set interpolates between trade-off points, beginning at x = 0, where α = 0.5, and continuing in the direction of the arrow as α decreases and the original query is given more and more weight. As expected, robustness continuously increases as we move along the curve, but mean average precision generally drops as the gains from feedback are eliminated. For comparison, the performance of resampling feedback at α = 0.5 is shown for each collection as the circled point. Higher and to the right is better. This figure shows that resampling feedback gives a somewhat better trade-off than the small-α approach for 3 of the 4 collections.
Figure 4: Histogram showing improved robustness of resampling feedback (RS-FB) over baseline feedback (Base-FB) for all datasets combined.
Queries are binned by % change in AP compared to the unexpanded query.

Table 3: Comparison of resampling feedback using document sampling (DS) with (QV) and without (No QV) combining feedback models from multiple query variants.

Collection   Measure   DS + QV   DS + No QV
TREC 1&2     AvgP      0.2406    0.2547 (+5.86%)
             P10       0.5263    0.5362 (+1.88%)
             RI        0.7087    0.6515 (-0.0572)
TREC 7       AvgP      0.2169    0.2200 (+1.43%)
             P10       0.4480    0.4300 (-4.02%)
             RI        0.5652    0.2609 (-0.3043)
TREC 8       AvgP      0.2268    0.2257 (-0.49%)
             P10       0.4340    0.4200 (-3.23%)
             RI        0.4545    0.4091 (-0.0454)
wt10g        AvgP      0.1946    0.1865 (-4.16%)
             P10       0.2960    0.2680 (-9.46%)
             RI        0.1429    0.0220 (-0.1209)

Table 2 gives the Robustness Index scores for Base-FB and RS-FB. The RS-FB feedback method obtained higher robustness than Base-FB on three of the four topic sets, with only slightly worse performance on TREC-8.
A more detailed view showing the distribution over relative changes in AP is given by the histogram in Figure 4. Compared to Base-FB, the RS-FB method achieves a noticeable reduction in the number of queries significantly hurt by expansion (i.e. where AP is hurt by 25% or more), while preserving positive gains in AP.

3.5 Effect of query and document sampling methods
Given our algorithm's improved robustness seen in Section 3.4, an important question is which component of our system is responsible. Is it the use of document re-sampling, the use of multiple query variants, or some other factor? The results in Table 3 suggest that the model combination based on query variants may largely account for the improved robustness. When query variants are turned off and the original query is used by itself with document sampling, there is little net change in average precision, a small decrease in P10 for 3 out of the 4 topic sets, but a significant drop in robustness for all topic sets.
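The robustness index of Eq. (6), on which Tables 2 and 3 are based, is straightforward to compute from per-query average precision scores. In this sketch, queries whose AP is unchanged count as neither helped nor hurt, which is our assumption.

```python
def robustness_index(ap_baseline, ap_feedback):
    """RI(Q) = (n+ - n-) / |Q|  (cf. Eq. 6).

    ap_baseline and ap_feedback are parallel lists of per-query
    average precision, without and with the feedback method."""
    helped = sum(1 for b, f in zip(ap_baseline, ap_feedback) if f > b)
    hurt = sum(1 for b, f in zip(ap_baseline, ap_feedback) if f < b)
    return (helped - hurt) / len(ap_baseline)
```

As the paper notes, RI captures only the direction of each change, not its magnitude, so a query helped by 0.001 counts the same as one helped by 0.2.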
In two cases, the RI measure drops by more than 50%.
We also examined the effect of the document sampling method on retrieval effectiveness, using two different strategies. The 'uniform weighting' strategy ignored the relevance scores from the initial retrieval and gave each document in the top k the same probability of selection. In contrast, the 'relevance-score weighting' strategy chose documents with probability proportional to their relevance scores. In this way, documents that were more highly ranked were more likely to be selected. Results are shown in Table 4.
The relevance-score weighting strategy performs better overall, with significantly higher RI and P10 scores on 3 of the 4 topic sets. The difference in average precision between the methods, however, is less marked. This suggests that uniform weighting acts to increase variance in retrieval results: when initial average precision is high, there are many relevant documents in the top k and uniform sampling may give a more representative relevance model than focusing on the highly-ranked items. On the other hand, when initial precision is low, there are few relevant documents in the bottom ranks and uniform sampling mixes in more of the non-relevant documents.
For space reasons we only summarize our findings on sample size here. The number of samples has some effect on precision when fewer than 10, but performance stabilizes at around 15 to 20 samples. We used 30 samples for our experiments. Much beyond this level, the additional benefit of more samples decreases as the initial score distribution is more closely fit, while the processing time increases.

3.6 The effect of resampling on expansion term quality
Ideally, a retrieval model should not require a stopword list when estimating a model of relevance: a robust statistical model should down-weight stopwords automatically depending on context.
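Returning to the two document-sampling strategies compared above, a minimal sketch of drawing the top-k with replacement follows. Passing equal scores yields the 'uniform weighting' strategy; passing retrieval scores yields 'relevance-score weighting'. The cumulative-distribution sampler and the function name are our implementation choices.

```python
import random

def weighted_resample(doc_ids, scores, k, rng):
    """Draw k documents with replacement, with probability
    proportional to the given scores."""
    total = float(sum(scores))
    cum, acc = [], 0.0
    for s in scores:
        acc += s / total
        cum.append(acc)
    cum[-1] = 1.0  # guard against floating-point shortfall
    out = []
    for _ in range(k):
        u = rng.random()
        out.append(next(d for d, c in zip(doc_ids, cum) if u <= c))
    return out
```

Running this B times over the top k = 50 documents, and building one feedback model per draw, produces the set of bootstrap feedback models that the Dirichlet is fit to in Section 2.5.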
Stopwords can harm feedback if selected as feedback terms, because they are typically poor discriminators and waste valuable term slots. In practice, however, because most term selection methods resemble a tf·idf type of weighting, terms with low idf but very high tf can sometimes be selected as expansion term candidates. This happens, for example, even with the Relevance Model approach that is part of our baseline feedback. To ensure as strong a baseline as possible, we use a stoplist for all experiments reported here. If we turn off the stopword list, however, we obtain results such as those shown in Table 5, where four of the top ten baseline feedback terms for TREC topic 60 (said, but, their, not) are stopwords using the Base-FB method. (The top 100 expansion terms were selected to generate this example.)
Indri's method attempts to address the stopword problem by applying an initial step based on Ponte [14] to select less-common terms that have high log-odds of being in the top-ranked documents compared to the whole collection. Nevertheless, this does not overcome the stopword problem completely, especially as the number of feedback terms grows.

Table 4: Comparison of uniform and relevance-weighted document sampling. The percentage change compared to uniform sampling is shown in parentheses.

Collection   Measure   QV + Uniform   QV + Relevance-score
                       weighting      weighting
TREC 1&2     AvgP      0.2545         0.2406 (-5.46%)
             P10       0.5369         0.5263 (-1.97%)
             RI        0.6212         0.7087 (+14.09%)
TREC 7       AvgP      0.2174         0.2169 (-0.23%)
             P10       0.4320         0.4480 (+3.70%)
             RI        0.4783         0.5652 (+18.17%)
TREC 8       AvgP      0.2267         0.2268 (+0.04%)
             P10       0.4120         0.4340 (+5.34%)
             RI        0.4545         0.4545 (+0.00%)
wt10g        AvgP      0.1808         0.1946 (+7.63%)
             P10       0.2680         0.2960 (+10.45%)
             RI        0.0220         0.1099 (+399.5%)

Using resampling feedback, however, appears to mitigate
QV indicates that query variants were used in both runs.

Table 5: Feedback term quality when a stoplist is not used. Feedback terms for TREC topic 60: merit pay vs seniority.

Baseline FB   p(wi|R)   Resampling FB   p(wi|R)
said          0.055     court           0.026
court         0.055     pay             0.018
pay           0.034     federal         0.012
but           0.026     education       0.011
employees     0.024     teachers        0.010
their         0.024     employees       0.010
not           0.023     case            0.010
federal       0.021     their           0.009
workers       0.020     appeals         0.008
education     0.020     union           0.007

the effect of stopwords automatically. In the example of Table 5, resampling feedback leaves only one stopword (their) in the top ten. We observed similar feedback term behavior across many other topics. The reason for this effect appears to be the interaction of the term selection score with the top-m term cutoff. While the presence and even proportion of particular stopwords is fairly stable across different document samples, their relative position in the top-m list is not, as sets of documents with varying numbers of better, lower-frequency term candidates are examined for each sample. As a result, while some number of stopwords may appear in each sampled document set, any given stopword tends to fall below the cutoff for multiple samples, leading to its classification as a high-variance, low-weight feature.

4. RELATED WORK
Our approach is related to previous work from several areas of information retrieval and machine learning. Our use of query variation was inspired by the work of YomTov et al. [20], Carpineto et al. [5], and Amati et al. [2], among others. These studies use the idea of creating multiple subqueries and then examining the nature of the overlap in the documents and/or expansion terms that result from each subquery. Model combination is performed using heuristics. In particular, the studies of Amati et al. and Carpineto et al. investigated combining terms from individual distributional methods using a term-reranking combination heuristic.
In a set of TREC topics they found wide average variation in the rank-distance of terms from different expansion methods. Their combination method gave modest positive improvements in average precision.
The idea of examining the overlap between lists of suggested terms has also been used in early query expansion approaches. Xu and Croft's method of Local Context Analysis (LCA) [19] includes a factor in the empirically-derived weighting formula that causes expansion terms to be preferred that have connections to multiple query terms.
On the document side, recent work by Zhou & Croft [21] explored the idea of adding noise to documents, re-scoring them, and using the stability of the resulting rankings as an estimate of query difficulty. This is related to our use of document sampling to estimate the risk of the feedback model built from the different sets of top-retrieved documents. Sakai et al. [17] proposed an approach to improving the robustness of pseudo-relevance feedback using a method they call selective sampling. The essence of their method is that they allow skipping of some top-ranked documents, based on a clustering criterion, in order to select a more varied and novel set of documents later in the ranking for use by a traditional pseudo-feedback method. Their study did not find significant improvements in either robustness (RI) or MAP on their corpora.
Greiff, Morgan and Ponte [8] explored the role of variance in term weighting. In a series of simulations that simplified the problem to 2-feature documents, they found that average precision degrades as term-frequency variance (i.e. noise) increases. Down-weighting terms with high variance resulted in improved average precision. This is in accord with our own findings for individual feedback models.
Estimates of output variance have recently been used for improved text classification. Lee et al.
[11] used query-specific variance estimates of classifier outputs to perform improved model combination. Instead of using sampling, they were able to derive closed-form expressions for classifier variance by assuming base classifiers using simple types of inference networks.
Ando and Zhang proposed a method that they call structural feedback [3] and showed how to apply it to query expansion for the TREC Genomics Track. They used r query variations to obtain R different sets S_r of top-ranked documents that have been intersected with the top-ranked documents obtained from the original query q_orig. For each S_i, the normalized centroid vector ŵ_i of the documents is calculated. Principal component analysis (PCA) is then applied to the ŵ_i to obtain the matrix Φ of H left singular vectors φ_h that are used to obtain the new, expanded query

q_exp = q_orig + Φ^T Φ q_orig.   (7)

In the case H = 1, we have a single left singular vector φ:

q_exp = q_orig + (φ^T q_orig) φ

so that the dot product φ^T q_orig is a type of dynamic weight on the expanded query that is based on the similarity of the original query to the expanded query. The use of variance as a feedback model quality measure occurs indirectly through the application of PCA. It would be interesting to study the connections between this approach and our own model-fitting method.
Finally, in language modeling approaches to feedback, Tao and Zhai [18] describe a method for more robust feedback that allows each document to have a different feedback α. The feedback weights are derived automatically using regularized EM. A roughly equal balance of query and expansion model is implied by their EM stopping condition. They propose tailoring the stopping parameter η based on a function of some quality measure of feedback documents.

5. CONCLUSIONS
We have presented a new approach to pseudo-relevance feedback based on document and query sampling.
The use of sampling is a very flexible and powerful device, and is motivated by our general desire to extend current models of retrieval by estimating the risk or variance associated with the parameters or output of retrieval processes. Such variance estimates, for example, may be naturally used in a Bayesian framework for improved model estimation and combination. Applications such as selective expansion may then be implemented in a principled way.
While our study uses the language modeling approach as a framework for experiments, we make few assumptions about the actual workings of the feedback algorithm. We believe it is likely that any reasonably effective baseline feedback algorithm would benefit from our approach. Our results on standard TREC collections show that our framework improves the robustness of a strong baseline feedback method across a variety of collections, without sacrificing average precision. It also gives small but consistent gains in top-10 precision. In future work, we envision an investigation into how varying the set of sampling methods used and the number of samples controls the trade-off between robustness, accuracy, and efficiency.

Acknowledgements
We thank Paul Bennett for valuable discussions related to this work, which was supported by NSF grants #IIS-0534345 and #CNS-0454018, and U.S. Dept. of Education grant #R305G03123. Any opinions, findings, and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.

6. REFERENCES
[1] The Lemur toolkit for language modeling and retrieval. http://www.lemurproject.org.
[2] G. Amati, C. Carpineto, and G. Romano. Query difficulty, robustness, and selective application of query expansion. In Proc. of the 25th European Conf. on Information Retrieval (ECIR 2004), pages 127-137.
[3] R. K. Ando and T. Zhang. A high-performance semi-supervised learning method for text chunking. In Proc.
of the 43rd Annual Meeting of the ACL, pages 1-9, June 2005.
[4] L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
[5] C. Carpineto, G. Romano, and V. Giannini. Improving retrieval feedback with multiple term-ranking function combination. ACM Trans. Info. Systems, 20(3):259-290.
[6] K. Collins-Thompson, P. Ogilvie, and J. Callan. Initial results with structured queries and language models on half a terabyte of text. In Proc. of 2005 Text REtrieval Conference. NIST Special Publication.
[7] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley and Sons, 2nd edition, 2001.
[8] W. R. Greiff, W. T. Morgan, and J. M. Ponte. The role of variance in term weighting for probabilistic information retrieval. In Proc. of the 11th Intl. Conf. on Info. and Knowledge Mgmt. (CIKM 2002), pages 252-259.
[9] T. Kohonen, J. Hynninen, J. Kangas, and J. Laaksonen. SOM-PAK: The self-organizing map program package. Technical Report A31, Helsinki University of Technology, 1996. http://www.cis.hut.fi/research/papers/som tr96.ps.Z.
[10] V. Lavrenko. A Generative Theory of Relevance. PhD thesis, University of Massachusetts, Amherst, 2004.
[11] C.-H. Lee, R. Greiner, and S. Wang. Using query-specific variance estimates to combine Bayesian classifiers. In Proc. of the 23rd Intl. Conf. on Machine Learning (ICML 2006), pages 529-536.
[12] D. Metzler and W. B. Croft. Combining the language model and inference network approaches to retrieval. Info. Processing and Mgmt., 40(5):735-750, 2004.
[13] T. Minka. Estimating a Dirichlet distribution. Technical report, 2000. http://research.microsoft.com/ minka/papers/dirichlet.
[14] J. Ponte. Advances in Information Retrieval, chapter Language models for relevance feedback, pages 73-96. 2000. W.B. Croft, ed.
[15] J. M. Ponte and W. B. Croft. A language modeling approach to information retrieval. In Proc.
of the 1998 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 275-281.
[16] J. Rocchio. The SMART Retrieval System, chapter Relevance Feedback in Information Retrieval, pages 313-323. Prentice-Hall, 1971. G. Salton, ed.
[17] T. Sakai, T. Manabe, and M. Koyama. Flexible pseudo-relevance feedback via selective sampling. ACM Transactions on Asian Language Information Processing (TALIP), 4(2):111-135, 2005.
[18] T. Tao and C. Zhai. Regularized estimation of mixture models for robust pseudo-relevance feedback. In Proc. of the 2006 ACM SIGIR Conference on Research and Development in Information Retrieval, pages 162-169.
[19] J. Xu and W. B. Croft. Improving the effectiveness of information retrieval with local context analysis. ACM Trans. Inf. Syst., 18(1):79-112, 2000.
[20] E. YomTov, S. Fine, D. Carmel, and A. Darlow. Learning to estimate query difficulty. In Proc. of the 2005 ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 512-519.
[21] Y. Zhou and W. B. Croft. Ranking robustness: a novel framework to predict query performance. In Proc. of the 15th ACM Intl. Conf. on Information and Knowledge Mgmt. (CIKM 2006), pages 567-574.

Keywords: language modeling; query expansion; feedback model; pseudo-relevance feedback; probability distribution; risk; enhanced feedback model; vector space-based algorithm; posterior distribution; estimating uncertainty; information retrieval; feedback method; feedback distribution
-{"name": "test_H-3", "title": "Using Query Contexts in Information Retrieval", "abstract": "User query is an element that specifies an information need, but it is not the only one. Studies in literature have found many contextual factors that strongly influence the interpretation of a query. Recent studies have tried to consider the user\"s interests by creating a user profile. However, a single profile for a user may not be sufficient for a variety of queries of the user. In this study, we propose to use query-specific contexts instead of user-centric ones, including context around query and context within query. The former specifies the environment of a query such as the domain of interest, while the latter refers to context words within the query, which is particularly useful for the selection of relevant term relations. In this paper, both types of context are integrated in an IR model based on language modeling. Our experiments on several TREC collections show that each of the context factors brings significant improvements in retrieval effectiveness.", "fulltext": "1. INTRODUCTION\nQueries, especially short queries, do not provide a complete\nspecification of the information need. Many relevant terms can be\nabsent from queries and terms included may be ambiguous. These\nissues have been addressed in a large number of previous studies.\nTypical solutions include expanding either document or query\nrepresentation [19][35] by exploiting different resources [24][31],\nusing word sense disambiguation [25], etc. In these studies,\nhowever, it has been generally assumed that query is the only\nelement available about the user\"s information need. In reality,\nquery is always formulated in a search context. As it has been\nfound in many previous studies [2][14][20][21][26], contextual\nfactors have a strong influence on relevance judgments. These\nfactors include, among many others, the user\"s domain of interest,\nknowledge, preferences, etc. 
All these elements specify the contexts around the query, so we call them context around query in this paper. It has been demonstrated that a user's query should be placed in its context for a correct interpretation.
Recent studies have investigated the integration of some contexts around the query [9][30][23]. Typically, a user profile is constructed to reflect the user's domains of interest and background. A user profile is used to favor the documents that are more closely related to the profile. However, a single profile for a user can group a variety of different domains, which are not always relevant to a particular query. For example, if a user working in computer science issues the query "Java hotel", the documents on the Java language will be incorrectly favored. A possible solution to this problem is to use query-related profiles or models instead of user-centric ones. In this paper, we propose to model topic domains, among which the related one(s) will be selected for a given query. This method allows us to select a more appropriate query-specific context around the query.
Another strong contextual factor identified in the literature is domain knowledge, or domain-specific term relations, such as "program → computer" in computer science. Using this relation, one would be able to expand the query "program" with the term "computer". However, domain knowledge is available only for a few domains (e.g. medicine). The shortage of domain knowledge has led to the utilization of general knowledge for query expansion [31], which is more readily available from resources such as thesauri, or can be automatically extracted from documents [24][27]. However, the use of general knowledge gives rise to an enormous problem of knowledge ambiguity [31]: we are often unable to determine if a relation applies to a query.
For\nexample, usually little information is available to determine\nwhether program\u2192computer is applicable to queries Java\nprogram and TV program. Therefore, the relation has been\napplied to all queries containing program in previous studies,\nleading to a wrong expansion for TV program.\nLooking at the two query examples, however, people can easily\ndetermine whether the relation is applicable, by considering the\ncontext words Java and TV. So the important question is how\nwe can serve these context words in queries to select the\nappropriate relations to apply. These context words form a context\nwithin query. In some previous studies [24][31], context words in\na query have been used to select expansion terms suggested by\nterm relations, which are, however, context-independent (such as\nprogram\u2192computer). Although improvements are observed in\nsome cases, they are limited. We argue that the problem stems\nfrom the lack of necessary context information in relations\nthemselves, and a more radical solution lies in the addition of\ncontexts in relations. The method we propose is to add context\nwords into the condition of a relation, such as {Java, program}\n\u2192 computer, to limit its applicability to the appropriate context.\nThis paper aims to make contributions on the following aspects:\n\u2022 Query-specific domain model: We construct more specific\ndomain models instead of a single user model grouping all the\ndomains. The domain related to a specific query is selected\n(either manually or automatically) for each query.\n\u2022 Context within query: We integrate context words in term\nrelations so that only appropriate relations can be applied to the\nquery.\n\u2022 Multiple contextual factors: Finally, we propose a framework\nbased on language modeling approach to integrate multiple\ncontextual factors.\nOur approach has been tested on several TREC collections. 
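The conditional application of such relations can be sketched as a lookup keyed by the full condition set: a relation fires only when every term in its condition appears in the query. This is a minimal illustration of the idea, not the paper's implementation; the data structure and the relation entries are hypothetical:

```python
def applicable_relations(query_terms, relations):
    """Return expansion terms whose full condition set occurs in the query.

    `relations` maps a frozenset of condition terms to a list of
    (expansion_term, probability) pairs (names and weights are illustrative).
    """
    query = set(query_terms)
    selected = []
    for condition, expansions in relations.items():
        if condition <= query:  # all condition terms present in the query
            selected.extend(expansions)
    return selected

relations = {
    frozenset({"java", "program"}): [("computer", 0.6)],
    frozenset({"java", "hotel"}): [("indonesia", 0.4)],
}
applicable_relations(["java", "program"], relations)  # -> [("computer", 0.6)]
applicable_relations(["tv", "program"], relations)    # -> [] (relation not applied)
```

With context-independent relations keyed on the single term "program", both queries would have received "computer"; the two-term condition blocks the wrong expansion for "TV program".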
The\nexperiments clearly show that both types of context can result in\nsignificant improvements in retrieval effectiveness, and their\neffects are complementary. We will also show that it is possible to\ndetermine the query domain automatically, and this results in\ncomparable effectiveness to a manual specification of domain.\nThis paper is organized as follows. In section 2, we review some\nrelated work and introduce the principle of our approach. Section\n3 presents our general model. Then sections 4 and 5 describe\nrespectively the domain model and the knowledge model. Section\n6 explains the method for parameter training. Experiments are\npresented in section 7 and conclusions in section 8.\n2. CONTEXTS AND UTILIZATION IN IR\nThere are many contextual factors in IR: the user\"s domain of\ninterest, knowledge about the subject, preference, document\nrecency, and so on [2][14]. Among them, the user\"s domain of\ninterest and knowledge are considered to be among the most\nimportant ones [20][21]. In this section, we review some of the\nstudies in IR concerning these aspects.\nDomain of interest and context around query\nA domain of interest specifies a particular background for the\ninterpretation of a query. It can be used in different ways. Most\noften, a user profile is created to encompass all the domains of\ninterest of a user [23]. In [5], a user profile contains a set of topic\ncategories of ODP (Open Directory Project, http://dmoz.org)\nidentified by the user. The documents (Web pages) classified in\nthese categories are used to create a term vector, which represents\nthe whole domains of interest of the user. On the other hand,\n[9][15][26][30], as well as Google Personalized Search [12] use\nthe documents read by the user, stored on user\"s computer or\nextracted from user\"s search history. 
In all these studies, we\nobserve that a single user profile (usually a statistical model or\nvector) is created for a user without distinguishing the different\ntopic domains. The systematic application of the user profile can\nincorrectly bias the results for queries unrelated to the profile. This\nsituation can often occur in practice as a user can search for a\nvariety of topics outside the domains that he has previously\nsearched in or identified.\nA possible solution to this problem is the creation of multiple\nprofiles, one for a separate domain of interest. The domains\nrelated to a query are then identified according to the query. This\nwill enable us to use a more appropriate query-specific profile,\ninstead of a user-centric one. This approach is used in [18] in\nwhich ODP directories are used. However, only a small scale\nexperiment has been carried out. A similar approach is used in [8],\nwhere domain models are created using ODP categories and user\nqueries are manually mapped to them. However, the experiments\nshowed variable results. It remains unclear whether domain\nmodels can be effectively used in IR.\nIn this study, we also model topic domains. We will carry out\nexperiments on both automatic and manual identification of query\ndomains. Domain models will also be integrated with other\nfactors. In the following discussion, we will call the topic domain\nof a query a context around query to contrast with another context\nwithin query that we will introduce.\nKnowledge and context within query\nDue to the unavailability of domain-specific knowledge, general\nknowledge resources such as Wordnet and term relations extracted\nautomatically have been used for query expansion [27][31]. In\nboth cases, the relations are defined between two single terms such\nas t1\u2192t2. If a query contains term t1, then t2 is always considered\nas a candidate for expansion. 
As we mentioned earlier, we are\nfaced with the problem of relation ambiguity: some relations apply\nto a query and some others should not. For example,\nprogram\u2192computer should not be applied to TV program\neven if the latter contains program. However, little information\nis available in the relation to help us determine if an application\ncontext is appropriate.\nTo remedy this problem, approaches have been proposed to make\na selection of expansion terms after the application of relations\n[24][31]. Typically, one defines some sort of global relation\nbetween the expansion term and the whole query, which is usually\na sum of its relations to every query word. Although some\ninappropriate expansion terms can be removed because they are\nonly weakly connected to some query terms, many others remain.\nFor example, if the relation program\u2192computer is strong\nenough, computer will have a strong global relation to the whole\nquery TV program and it still remains as an expansion term.\nIt is possible to integrate stronger control on the utilization of\nknowledge. For example, [17] defined strong logical relations to\nencode knowledge of different domains. If the application of a\nrelation leads to a conflict with the query (or with other pieces of\nevidence), then it is not applied. However, this approach requires\nencoding all the logical consequences including contradictions in\nknowledge, which is difficult to implement in practice.\nIn our earlier study [1], a simpler and more general approach is\nproposed to solve the problem at its source, i.e. the lack of context\ninformation in term relations: by introducing stricter conditions in\na relation, for example {Java, program}\u2192computer and\n{algorithm, program}\u2192computer, the applicability of the\nrelations will be naturally restricted to correct contexts. As a\nresult, computer will be used to expand queries Java program\nor program algorithm, but not TV program. 
This principle is\nsimilar to that of [33] for word sense disambiguation. However,\nwe do not explicitly assign a meaning to a word; rather we try to\nmake differences between word usages in different contexts. From\nthis point of view, our approach is more similar to word sense\ndiscrimination [27].\nIn this paper, we use the same approach and we will integrate it\ninto a more global model with other context factors. As the\ncontext words added into relations allow us to exploit the word\ncontext within the query, we call such factors context within\nquery. Within query context exists in many queries. In fact, users\noften do not use a single ambiguous word such as Java as query\n(if they are aware of its ambiguity). Some context words are often\nused together with it. In these cases, contexts within query are\ncreated and can be exploited.\nQuery profile and other factors\nMany attempts have been made in IR to create query-specific\nprofiles. We can consider implicit feedback or blind feedback\n[7][16][29][32][35] in this family. A short-term feedback model is\ncreated for the given query from feedback documents, which has\nbeen proven to be effective to capture some aspects of the user\"s\nintent behind the query. In order to create a good query model,\nsuch a query-specific feedback model should be integrated.\nThere are many other contextual factors ([26]) that we do not deal\nwith in this paper. However, it seems clear that many factors are\ncomplementary. As found in [32], a feedback model creates a local\ncontext related to the query, while the general knowledge or the\nwhole corpus defines a global context. Both types of contexts have\nbeen proven useful [32]. Domain model specifies yet another type\nof useful information: it reflects a set of specific background terms\nfor a domain, for example pollution, rain, greenhouse, etc.\nfor the domain of Environment. 
These terms are often presumed when a user issues a query such as "waste cleanup" in the domain. It is useful to add them into the query. We see a clear complementarity among these factors. It is then useful to combine them together in a single IR model.
In this study, we will integrate all the above factors within a unified framework based on language modeling. Each component contextual factor determines a different ranking score, and the final document ranking combines all of them. This is described in the following section.
3. GENERAL IR MODEL
In the language modeling framework, a typical score function is defined in KL-divergence as follows:

Score(Q, D) = Σ_{t∈V} P(t|θ_Q) log P(t|θ_D) ∝ −KL(θ_Q || θ_D)   (1)

where θ_D is a (unigram) language model created for a document D, θ_Q a language model for the query Q, and V the vocabulary. Smoothing on the document model is recognized to be crucial [35], and one of the common smoothing methods is Jelinek-Mercer interpolation smoothing:

P(t|θ'_D) = (1 − λ) P(t|θ_D) + λ P(t|θ_C)   (2)

where λ is an interpolation parameter and θ_C the collection model.
In the basic language modeling approaches, the query model is estimated by Maximum Likelihood Estimation (MLE) without any smoothing. In such a setting, the basic retrieval operation is still limited to keyword matching on a few words in the query. To improve retrieval effectiveness, it is important to create a more complete query model that better represents the information need. In particular, all the related and presumed words should be included in the query model. Several methods have been proposed to build a more complete query model using feedback documents [16][35] or term relations [1][10][34].
In these cases, we construct two models for the query: the initial query model containing only the original terms, and a new model containing the added terms. They are then combined through interpolation.
In this paper, we generalize this approach and integrate more models for the query. Let us use θ_Q^0 to denote the original query model, θ_Q^F the feedback model created from feedback documents, θ_Q^Dom a domain model, and θ_Q^K a knowledge model created by applying term relations. θ_Q^0 can be created by MLE. θ_Q^F has been used in several previous studies [16][35]; in this paper, θ_Q^F is extracted using the top 20 blind feedback documents. We will describe the details of constructing θ_Q^Dom and θ_Q^K in Sections 4 and 5.
Given these models, we create the following final query model by interpolation:

P(t|θ_Q) = Σ_{i∈X} α_i P(t|θ_Q^i)   (3)

where X = {0, Dom, K, F} is the set of all component models and α_i (with Σ_{i∈X} α_i = 1) are their mixture weights.
Then the document score in Equation (1) is extended as follows:

Score(Q, D) = Σ_{t∈V} Σ_{i∈X} α_i P(t|θ_Q^i) log P(t|θ_D) = Σ_{i∈X} α_i Score_i(Q, D)   (4)

where Score_i(Q, D) = Σ_{t∈V} P(t|θ_Q^i) log P(t|θ_D) is the score according to each component model. Here we can see that our strategy of enhancing the query model with contextual factors is equivalent to document re-ranking, which is used in [5][15][30].
The remaining problems are to construct the domain and knowledge models and to combine all the models (parameter setting). We describe this in the following sections.
4. CONSTRUCTING AND USING DOMAIN MODELS
As in previous studies, we exploit a set of documents already classified in each domain.
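A minimal sketch of the interpolated query model of Eq. (3) and the resulting document score (Eqs. 1, 2, 4), assuming each component model is a term-to-probability dictionary; the component names follow the paper, but the toy probabilities are illustrative:

```python
import math

def mix_query_model(components, alphas):
    """Eq. (3): P(t|theta_Q) = sum_i alpha_i * P(t|theta_Q^i).

    `components` maps a model name ("0", "Dom", "K", "F") to a term->prob
    dict; `alphas` holds the mixture weights (they should sum to 1).
    """
    mixed = {}
    for name, model in components.items():
        for term, prob in model.items():
            mixed[term] = mixed.get(term, 0.0) + alphas[name] * prob
    return mixed

def kl_score(query_model, doc_model, coll_model, lam=0.5, eps=1e-10):
    """Eq. (1) score with Jelinek-Mercer smoothing (Eq. 2) on the document."""
    score = 0.0
    for term, p_q in query_model.items():
        p_d = (1 - lam) * doc_model.get(term, 0.0) + lam * coll_model.get(term, eps)
        score += p_q * math.log(p_d)
    return score

# Toy usage: original query model plus a small domain-model contribution.
components = {"0": {"waste": 0.5, "cleanup": 0.5}, "Dom": {"pollution": 1.0}}
q = mix_query_model(components, {"0": 0.8, "Dom": 0.2})
```

Because the final score is linear in the query model, mixing the component models is exactly the same as mixing the per-component scores, which is the equivalence to document re-ranking noted above.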
These documents can be identified in\ntwo different ways: 1) One can take advantages of an existing\ndomain hierarchy and the documents manually classified in them,\nsuch as ODP. In that case, a new query should be classified into\nthe same domains either manually or automatically. 2) A user can\ndefine his own domains. By assigning a domain to his queries, the\nsystem can gather a set of answers to the queries automatically,\nwhich are then considered to be in-domain documents. The\nanswers could be those that the user have read, browsed through,\nor judged relevant to an in-domain query, or they can be simply\nthe top-ranked retrieval results.\nAn earlier study [4] has compared the above two strategies using\nTREC queries 51-150, for which a domain has been manually\nassigned. These domains have been mapped to ODP categories. It\nis found that both approaches mentioned above are equally\neffective and result in comparable performance. Therefore, in this\nstudy, we only use the second approach. This choice is also\nmotivated by the possibility to compare between manual and\nautomatic assignment of domain to a new query. This will be\nexplained in detail in our experiments.\nWhatever the strategy, we will obtain a set of documents for each\ndomain, from which a language model can be extracted. If\nmaximum likelihood estimation is used directly on these\ndocuments, the resulting domain model will contain both\ndomain-specific terms and general terms, and the former do not emerge.\nTherefore, we employ an EM process to extract the specific part of\nthe domain as follows: we assume that the documents in a domain\nare generated by a domain-specific model (to be extracted) and\ngeneral language model (collection model). 
Then the likelihood of a document in the domain can be formulated as follows:

P(D|θ'_Dom) = Π_{t∈D} [(1 − η) P(t|θ_Dom) + η P(t|θ_C)]^{c(t;D)}   (5)

where c(t;D) is the count of t in document D and η is a smoothing parameter (which will be fixed at 0.5 as in [35]). The EM algorithm is used to extract the domain model θ_Dom that maximizes P(Dom|θ'_Dom) (where Dom is the set of documents in the domain), that is:

θ_Dom = argmax_{θ_Dom} P(Dom|θ'_Dom) = argmax_{θ_Dom} Π_{D∈Dom} Π_{t∈D} [(1 − η) P(t|θ_Dom) + η P(t|θ_C)]^{c(t;D)}   (6)

This is the same process as the one used to extract the feedback model in [35]. It is able to extract the most specific words of the domain from the documents while filtering out the common words of the language. This can be observed in the following table, which shows some words in the domain model of Environment before and after EM iterations (50 iterations).

Table 1. Term probabilities before/after EM
Term         Initial    Final      Change     Term        Initial    Final      Change
air          0.00358    0.00558    +56%       year        0.00357    0.00052    -86%
environment  0.00213    0.00340    +60%       system      0.00212    7.13e-6    -99%
rain         0.00197    0.00336    +71%       program     0.00189    0.00040    -79%
pollution    0.00177    0.00301    +70%       million     0.00131    5.80e-6    -99%
storm        0.00176    0.00302    +72%       make        0.00108    5.79e-5    -95%
flood        0.00164    0.00281    +71%       company     0.00099    8.52e-8    -99%
tornado      0.00072    0.00125    +74%       president   0.00077    2.71e-6    -99%
greenhouse   0.00034    0.00058    +72%       month       0.00073    3.88e-5    -95%

Given a set of domain models, the related ones have to be assigned to a new query. This can be done manually by the user or automatically by the system using query classification. We will compare both approaches.
Query classification has been investigated in several studies [18][28].
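The maximization of Eqs. (5)-(6) can be sketched as the standard EM update for a two-component mixture: the E-step computes the posterior that an occurrence of a term was generated by the domain model rather than the collection model, and the M-step re-estimates the domain model from the expected counts. The paper states only the objective, so this specific update is an assumed (but standard) formulation:

```python
def extract_domain_model(term_counts, collection_model, eta=0.5, iters=50):
    """EM sketch for the mixture in Eqs. (5)-(6).

    term_counts: aggregated counts c(t;D) over the in-domain documents.
    collection_model: P(t|theta_C). Returns P(t|theta_Dom), in which
    general-language terms are pushed down and specific terms boosted.
    """
    total = sum(term_counts.values())
    # Initialise with the maximum-likelihood estimate over domain documents.
    p_dom = {t: c / total for t, c in term_counts.items()}
    for _ in range(iters):
        # E-step: posterior that t came from the domain component.
        post = {}
        for t in term_counts:
            num = (1 - eta) * p_dom[t]
            post[t] = num / (num + eta * collection_model.get(t, 1e-12))
        # M-step: re-estimate the domain model from expected counts.
        norm = sum(term_counts[t] * post[t] for t in term_counts)
        p_dom = {t: term_counts[t] * post[t] / norm for t in term_counts}
    return p_dom

# Toy data: "the" is frequent in the collection, "rain" is domain-specific.
p = extract_domain_model({"rain": 50, "the": 100}, {"rain": 0.001, "the": 0.1})
```

As in Table 1, the probability of the domain-specific term rises above its maximum-likelihood estimate while the common word is discounted.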
In this study, we use a simple classification method: the selected domain is the one for which the query's KL-divergence score is the lowest, i.e.:

θ_Q^Dom = argmin_{θ_Dom} [ − Σ_{t∈Q} P(t|θ_Q^0) log P(t|θ_Dom) ]   (7)

This classification method is an extension of Naïve Bayes, as shown in [22]. The score depending on the domain model is then as follows:

Score_Dom(Q, D) = Σ_{t∈V} P(t|θ_Q^Dom) log P(t|θ_D)   (8)

Although the above equation requires using all the terms in the vocabulary, in practice only the strongest terms in the domain model are useful, and the terms with low probabilities are often noise. Therefore, we only retain the top 100 strongest terms. The same strategy is used for the Knowledge model.
Although domain models are more refined than a single user profile, the topics in a single domain can still be very different, making the domain model too large. This is particularly true for large domains such as Science and technology defined in TREC queries. Using such a large domain model as the background can introduce many noise terms. Therefore, we further construct a sub-domain model more closely related to the given query, by using a subset of in-domain documents that are related to the query. These documents are the top-ranked documents retrieved with the original query within the domain. This approach is indeed a combination of domain and feedback models. In our experiments, we will see that this further specification of sub-domain is necessary in some cases, but not in all, especially when the Feedback model is also used.
5. EXTRACTING CONTEXT-DEPENDENT TERM RELATIONS FROM DOCUMENTS
In this paper, we extract term relations from the document collection automatically.
In general, a term relation can be represented as A→B. Both A and B have been restricted to single terms in previous studies.
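Under an MLE query model, the classification of Eq. (7) amounts to picking the domain model that assigns the query terms the highest log-likelihood (equivalently, the lowest KL-divergence from the query model). A toy sketch; the domain models and their probabilities are illustrative:

```python
import math

def classify_query(query_terms, domain_models, eps=1e-12):
    """Eq. (7) sketch: choose the domain whose model gives the query the
    highest log-likelihood, i.e. the lowest KL-divergence score.

    domain_models: domain name -> {term: P(term | theta_Dom)}.
    Unseen terms get a tiny floor probability `eps`.
    """
    def log_likelihood(model):
        return sum(math.log(model.get(t, eps)) for t in query_terms)
    return max(domain_models, key=lambda dom: log_likelihood(domain_models[dom]))

domains = {
    "Environment": {"waste": 0.3, "cleanup": 0.2, "pollution": 0.2},
    "Finance": {"market": 0.4, "stock": 0.3},
}
classify_query(["waste", "cleanup"], domains)  # -> "Environment"
```

In practice only the top 100 terms of each domain model would be kept, as described above.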
A single term in A means that the relation is applicable to all the queries containing that term. As we explained earlier, this is the source of many wrong applications. The solution we propose is to add more context terms into A, so that it is applicable only when all the terms in A appear in a query. For example, instead of creating a context-independent relation Java→program, we will create {Java, computer}→program, which means that program is selected when both Java and computer appear in a query. The term added in the condition specifies a stricter context in which to apply the relation. We call this type of relation a context-dependent relation.
In principle, the addition is not restricted to one term. However, we make this restriction for the following reasons:
• User queries are usually very short. Adding more terms into the condition will create many rarely applicable relations;
• In most cases, an ambiguous word such as Java can be effectively disambiguated by one useful context word such as computer or hotel;
• The addition of more terms will also lead to a higher space and time complexity for extracting and storing term relations.
The extraction of relations of type {tj, tk} → ti can be performed using mining algorithms for association rules [13]. Here, we use a simple co-occurrence analysis. Windows of fixed size (10 words in our case) are used to obtain co-occurrence counts of three terms, and the probability P(ti|tj tk) is determined as follows:

P(ti|tj, tk) = c(ti, tj, tk) / Σ_{tl} c(tl, tj, tk)   (9)

where c(ti, tj, tk) is the count of co-occurrences.
In order to reduce the space requirement, we further apply the following filtering criteria:
• The two terms in the condition should appear together at least a certain number of times in the collection (10 in our case), and they should be related.
We use the following pointwise mutual information as a measure of relatedness (MI > 0) [6]:

MI(tj, tk) = log [ P(tj, tk) / (P(tj) P(tk)) ]

• The probability of a relation should be higher than a threshold (0.0001 in our case).
Having a set of relations, the corresponding Knowledge model is defined as follows:

P(ti|θ_Q^K) = Σ_{(tj tk)∈Q} P(ti|tj tk) P(tj tk|θ_Q^0) = Σ_{(tj tk)∈Q} P(ti|tj tk) P(tj|θ_Q^0) P(tk|θ_Q^0)   (10)

where (tj tk)∈Q means any combination of two terms in the query. This is a direct extension of the translation model proposed in [3] to our context-dependent relations. The score according to the Knowledge model is then defined as follows:

Score_K(Q, D) = Σ_{ti∈V} Σ_{(tj tk)∈Q} P(ti|tj tk) P(tj|θ_Q^0) P(tk|θ_Q^0) log P(ti|θ_D)   (11)

Again, only the top 100 expansion terms are used.
6. MODEL PARAMETERS
There are several parameters in our model: λ in Equation (2) and αi (i∈{0, Dom, K, F}) in Equation (3). As the parameter λ only affects the document model, we set it to the same value in all our experiments. The value λ=0.5 is determined to maximize the effectiveness of the baseline models (see Section 7.2) on the training data: TREC queries 1-50 and documents on Disk 2.
The mixture weights αi of the component models are trained on the same training data using the following line search method [11] to maximize the Mean Average Precision (MAP): each parameter is considered as a search direction. We start by searching in one direction, testing all the values in that direction while keeping the values in the other directions unchanged. Each direction is searched in turn, until no improvement in MAP is observed. In order to avoid being trapped at a local maximum, we started from 10 random points and the best setting is selected.
7.
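The extraction procedure of Section 5 (windowed triple counts for Eq. 9, plus the pair-frequency and MI filters) can be sketched as follows. The thresholds mirror the paper's stated settings (window 10, pair count 10, probability 0.0001), though the toy call below uses a smaller window and pair-count threshold so the example data triggers a relation; the function and its signature are illustrative:

```python
import math
from collections import Counter, defaultdict
from itertools import combinations

def extract_relations(docs, window=10, min_pair_count=10, min_prob=1e-4):
    """Sketch of context-dependent relation extraction: (tj, tk) -> {ti: P(ti|tj,tk)}."""
    term_c, pair_c, triple_c, n_windows = Counter(), Counter(), Counter(), 0
    for doc in docs:  # each doc is a list of tokens
        for i in range(max(1, len(doc) - window + 1)):
            terms = sorted(set(doc[i:i + window]))  # unique terms in the window
            n_windows += 1
            term_c.update(terms)
            pair_c.update(combinations(terms, 2))
            triple_c.update(combinations(terms, 3))

    relations = defaultdict(dict)
    for (tj, tk), cnt in pair_c.items():
        # Filter 1: pair frequency and positive pointwise mutual information.
        mi = math.log(cnt * n_windows / (term_c[tj] * term_c[tk]))
        if cnt < min_pair_count or mi <= 0:
            continue
        cond_counts = {}
        for triple, tc in triple_c.items():
            if tj in triple and tk in triple:
                (ti,) = set(triple) - {tj, tk}
                cond_counts[ti] = cond_counts.get(ti, 0) + tc
        total = sum(cond_counts.values())
        if total == 0:
            continue
        # Eq. (9) with Filter 2: probability threshold.
        for ti, tc in cond_counts.items():
            if tc / total >= min_prob:
                relations[(tj, tk)][ti] = tc / total
    return relations

docs = [["java", "program", "computer"]] * 3 + [["tv", "show", "news"]] * 3
rel = extract_relations(docs, window=3, min_pair_count=2)
```

On this toy collection, {java, program}→computer is extracted, while no relation conditioned on {program, tv} exists, since the two terms never co-occur.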
EXPERIMENTS\n7.1 Setting\nThe main test data are those from TREC 1-3 ad-hoc and filtering\ntracks, including queries 1-150, and documents on Disks 1-3. The\nchoice of this test collection is due to the availability of manually\nspecified domain for each query. This allows us to compare with\nan approach using automatic domain identification. Below is an\nexample of topic:\n Number: 103\n Domain: Law and Government\n Topic: Welfare Reform\nWe only use topic titles in all our tests. Queries 1-50 are used for\ntraining and 51-150 for testing. 13 domains are defined in these\nqueries and their distributions among the two sets of queries are\nshown in Fig. 1. We can see that the distribution varies strongly\nbetween domains and between the two query sets.\nWe have also tested on TREC 7 and 8 data. For this series of tests,\neach collection is used in turn as training data while the other is\nused for testing. Some statistics of the data are described in Tab. 2.\nAll the documents are preprocessed using Porter stemmer in\nLemur and the standard stoplist is used. Some queries (4, 5 and 3\nin the three query sets) only contain one word. For these queries,\nknowledge model is not applicable.\nOn domain models, we examine several questions:\n\u2022 When query domain is specified manually, is it useful to\nincorporate the domain model?\n\u2022 If the query domain is not specified, can it be determined\nautomatically? How effective is this method?\n\u2022 We described two ways to gather documents for a domain:\neither using documents judged relevant to queries in the domain\nor using documents retrieved for these queries. 
How do they compare?
On the Knowledge model, in addition to testing its effectiveness, we also want to compare the context-dependent relations with context-independent ones.
Finally, we will see the impact of each component model when all the factors are combined.
7.2 Baseline Methods
Two baseline models are used: the classical unigram model without any expansion, and the model with Feedback. In all the experiments, document models are created using Jelinek-Mercer smoothing. This choice is made according to the observation in [36] that the method performs very well for long queries. In our case, as queries are expanded, they perform similarly to long queries. In our preliminary tests, we also found this method performed better than the other methods (e.g. Dirichlet), especially for the main baseline method with the Feedback model. Table 3 shows the retrieval effectiveness on all the collections.
7.3 Knowledge Models
This model is combined with both baseline models (with or without feedback). We also compare the context-dependent knowledge model with the traditional context-independent term relations (defined between two single terms), which are used to expand queries. The latter selects expansion terms with the strongest global relation to the query. This relation is measured by the sum of relations to each of the query terms. This method is equivalent to [24]. It is also similar to the translation model [3]. We call it
[Figure 1. Distribution of domains: bar chart of query counts per domain (Environment, Finance, Int. Economics, Int. Finance, Int. Politics, Int. Relations, Law & Gov., Medical & Bio., Military, Politics, Sci. & Tech., US Economics, US Politics) for query sets 1-50 and 51-150.]
Table 2. TREC collection statistics
Collection Document Size (GB) Voc. # of Doc.
Query\nTraining Disk 2 0.86 350,085 231,219 1-50\nDisks 1-3 Disks 1-3 3.10 785,932 1,078,166 51-150\nTREC7 Disks 4-5 1.85 630,383 528,155 351-400\nTREC8 Disks 4-5 1.85 630,383 528,155 401-450\nCo-occurrence model in Table 4. T-test is also performed for\nstatistical significance.\nAs we can see, simple co-occurrence relations can produce\nrelatively strong improvements; but context-dependent relations\ncan produce much stronger improvements in all cases, especially\nwhen feedback is not used. All the improvements over\ncooccurrence model are statistically significant (this is not shown in\nthe table). The large differences between the two types of relation\nclearly show that context-dependent relations are more appropriate\nfor query expansion. This confirms the hypothesis we made, that\nby incorporating context information into relations, we can better\ndetermine the appropriate relations to apply and thus avoid\nintroducing inappropriate expansion terms. The following\nexample can further confirm this observation, where we show the\nstrongest expansion terms suggested by both types of relation for\nthe query #384 space station moon:\nCo-occurrence Relations: year 0.016552 power 0.013226 time 0.010925 1 0.009422\ndevelop 0.008932 offic 0.008485 oper 0.008408 2 0.007875 earth 0.007843 work\n0.007801 radio 0.007701 system 0.007627 build 0.007451 000 0.007403 includ\n0.007377 state 0.007076 program 0.007062 nation 0.006937 open 0.006889 servic\n0.006809 air 0.006734 space 0.006685 nuclear 0.006521 full 0.006425 make\n0.006410 compani 0.006262 peopl 0.006244 project 0.006147 unit 0.006114 gener\n0.006036 dai 0.006029\nContext-Dependent Relations: space 0.053913 mar 0.046589 earth 0.041786 man\n0.037770 program 0.033077 project 0.026901 base 0.025213 orbit 0.025190 build\n0.025042 mission 0.023974 call 0.022573 explor 0.021601 launch 0.019574\ndevelop 0.019153 shuttl 0.016966 plan 0.016641 flight 0.016169 station 0.016045\nintern 0.016002 energi 0.015556 oper 0.014536 
power 0.014224 transport\n0.012944 construct 0.012160 nasa 0.011985 nation 0.011855 perman 0.011521\njapan 0.011433 apollo 0.010997 lunar 0.010898\nIn comparison with the baseline model with feedback (Tab. 3), we\nsee that the improvements made by Knowledge model alone are\nslightly lower. However, when both models are combined, there\nare additional improvements over the Feedback model, and these\nimprovements are statistically significant in 2 cases out of 3. This\ndemonstrates that the impacts produced by feedback and term\nrelations are different and complementary.\n7.4 Domain Models\nIn this section, we test several strategies to create and use domain\nmodels, by exploiting the domain information of the query set in\nvarious ways.\nStrategies for creating domain models:\nC1 - With the relevant documents for the in-domain queries: this\nstrategy simulates the case where we have an existing directory in\nwhich documents relevant to the domain are included.\nC2 - With the top-100 documents retrieved with the in-domain\nqueries: this strategy simulates the case where the user specifies a\ndomain for his queries without judging document relevance, and\nthe system gathers related documents from his search history.\nStrategies for using domain models:\nU1 - The domain model is determined by the user manually.\nU2 - The domain model is determined by the system.\n7.4.1 Creating Domain models\nWe test strategies C1 and C2. In this series of tests, each of the\nqueries 51-150 is used in turn as the test query while the other\nqueries and their relevant documents (C1) or top-ranked retrieved\ndocuments (C2) are used to create domain models. The same\nmethod is used on queries 1-50 to tune the parameters.\nTable 3. Baseline models\nUnigram Model\nColl. 
Measure\nWithout FB With FB\nAvgP 0.1570 0.2344 (+49.30%)\nRecall /48 355 15 711 19 513Disks 1-3\nP@10 0.4050 0.5010\nAvgP 0.1656 0.2176 (+31.40%)\nRecall /4 674 2 237 2 777TREC7\nP@10 0.3420 0.3860\nAvgP 0.2387 0.2909 (+21.87%)\nRecall /4 728 2 764 3 237TREC8\nP@10 0.4340 0.4860\nTable 4. Knowledge models\nCo-occurrence Knowledge model\nColl. Measure\nWithout FB With FB Without FB With FB\nAvgP\n0.1884\n(+20.00%)++\n0.2432\n(+3.75%)**\n0.2164\n(+37.83%)++\n0.2463\n(+5.08%)**\nRecall /48 355 17 430 20 020 18 944 20 260\nDisks1-3\nP@10 0.4640 0.5160 0.5050 0.5120\nAvgP\n0.1823\n(+10.08%)++\n0.2350\n(+8.00%)*\n0.2157\n(+30.25%)++\n0.2401\n(+10.34%)**\nRecall /4 674 2 329 2 933 2 709 2 985\nTREC7\nP@10 0.3780 0.3760 0.3900 0.3900\nAvgP\n0.2519\n(+5.53%)\n0.2926\n(+0.58%)\n0.2724\n(+14.12%)++\n0.3007\n(+3.37%)\nRecall /4 728 2 829 3 279 3 090 3 338\nTREC8\nP@10 0.4360 0.4940 0.4720 0.5000\n(The column WithoutFB is compared to the baseline model without\nfeedback, while WithFB is compared to the baseline with feedback. ++ and +\nmean significant changes in t-test with respect to the baseline without\nfeedback, at the level of p<0.01 and p<0.05, respectively. ** and * are similar\nbut compared to the baseline model with feedback.) Table 5. Domain models with relevant documents (C1)\nDomain Sub-Domain\nColl. Measure\nWithout FB With FB Without FB With FB\nAvgP\n0.1700\n(+8.28%)++\n0.2454\n(+4.69%)**\n0.1918\n(+22.17%)++\n0.2461\n(+4.99%)**\nRecall /48 355 16 517 20 141 17 872 20 212\nDisks1-3\n(U1)\nP@10 0.4370 0.5130 0.4490 0.5150\nAvgP\n0.1715\n(+3.56%)++\n0.2389\n(+9.79%)*\n0.1842\n(+11.23%)++\n0.2408\n(+10.66%)**\nRecall /4 674 2 270 2 965 2 428 2 987\nTREC7\n(U2)\nP@10 0.3720 0.3740 0.3880 0.3760\nAvgP\n0.2442\n(+2. 30%)\n0.2957\n(+1.65%)\n0.2563\n(+7.37%)\n0.2967\n(+1.99%)\nRecall /4 728 2 796 3 308 2 873 3 302\nTREC8\n(U2)\nP@10 0.4420 0.5000 0.4280 0.5020\nTable 6. Domain models with top-100 documents (C2)\nDomain Sub-Domain\nColl. 
Measure\nWithout FB With FB Without FB With FB\nAvgP\n0.1718\n(+9.43%)++\n0.2456\n(+4.78%)**\n0.1799\n(+14.59%)++\n0.2452\n(+4.61%)**\nRecall /48 355 16 558 20 131 17 341 20 155\nDisks1-3\n(U1)\nP@10 0.4300 0.5140 0.4220 0.5110\nAvgP\n0.1765\n(+6.58%)++\n0.2395\n(+10.06%)**\n0.1785\n(+7.79%)++\n0.2393\n(+9.97%)**\nRecall /4 674 2 319 2 969 2 254 2 968\nTREC7\n(U2)\nP@10 0.3780 0.3820 0.3820 0.3820\nAvgP\n0.2434\n(+1.97%)\n0.2949\n(+1.38%)\n0.2441\n(+2.26%)\n0.2961\n(+1.79%)\nRecall /4 728 2 772 3 318 2 734 3 311\nTREC8\n(U2)\nP@10 0.4380 0.4960 0.4280 0.5020\nWe also compare the domain models created with all the\nindomain documents (Domain) and with only the top-10 retrieved\ndocuments in the domain with the query (Sub-Domain). In these\ntests, we use manual identification of query domain for Disks 1-3\n(U1), but automatic identification for TREC7 and 8 (U2).\nFirst, it is interesting to notice that the incorporation of domain\nmodels can generally improve retrieval effectiveness in all the\ncases. The improvements on Disks 1-3 and TREC7 are statistically\nsignificant. However, the improvement scales are smaller than\nusing Feedback and Relation models. Looking at the distribution\nof the domains (Fig. 1), this observation is not surprising: for\nmany domains, we only have few training queries, thus few\nindomain documents to create domain models. In addition, topics in\nthe same domain can vary greatly, in particular in large domains\nsuch as science and technology, international politics, etc.\nSecond, we observe that the two methods to create domain models\nperform equally well (Tab. 6 vs. Tab. 5). In other words, providing\nrelevance judgments for queries does not add much advantage for\nthe purpose of creating domain models. 
This may seem surprising. An analysis immediately shows the reason: a domain model (in the way we created it) only captures the term distribution in the domain. Relevant documents for the in-domain queries vary greatly. Therefore, in some large domains, characteristic terms have variable effects on queries. On the other hand, as we only use term distribution, even if the top documents retrieved for the in-domain queries are irrelevant, they can still contain domain-characteristic terms, just as relevant documents do. Thus both strategies produce very similar effects. This result opens the door to a simpler method that does not require relevance judgments, for example one using search history.\nThird, without the Feedback model, the sub-domain models constructed with relevant documents perform much better than the whole-domain models (Tab. 5). However, once the Feedback model is used, the advantage disappears. On one hand, this confirms our earlier hypothesis that a domain may be too large to suggest relevant terms for new queries in the domain. It indirectly validates our first hypothesis that a single user model or profile may be too large, so smaller domain models are preferred. On the other hand, sub-domain models capture characteristics similar to those of the Feedback model, so when the latter is used, sub-domain models become superfluous. However, if domain models are constructed with top-ranked documents (Tab. 6), sub-domain models make much less difference. This can be explained by the fact that domains constructed with top-ranked documents tend to be more uniform with respect to term distribution than those built from relevant documents, since the top retrieved documents usually have a stronger statistical correspondence with the queries than the relevant documents do.\n7.4.2 Determining Query Domain Automatically\nIt is not realistic to always ask users to specify a domain for their queries. Here, we examine the possibility of automatically identifying query domains.
Table 7 shows the results of this strategy, using both methods of domain model construction. We can observe that the effectiveness is only slightly lower than that obtained with manual identification of the query domain (Tab. 5 & 6, Domain models). This shows that automatic domain identification can select domain models as effectively as manual identification. It also demonstrates the feasibility of using domain models for queries when no domain information is provided.\nLooking at the accuracy of the automatic domain identification, however, it is surprisingly low: for queries 51-150, only 38% of the determined domains correspond to the manual identifications. This is much lower than the above-80% rates reported in [18]. A detailed analysis reveals that the main reason is the closeness of several domains in TREC queries (e.g. International relations, International politics, Politics). However, in this situation, the wrong domains assigned to queries are not always irrelevant and useless. For example, even when a query in International relations is classified into International politics, the latter domain can still suggest useful terms for the query. Therefore, the relatively low classification accuracy does not imply low usefulness of the domain models.\n7.5 Complete Models\nThe results with the complete model are shown in Table 8. This model integrates all the components described in this paper: the original query model, Feedback model, Domain model and Knowledge model. We have tested both strategies for creating domain models, but the differences between them are very small, so we only report the results with the relevant documents.\nOur first observation is that the complete models produce the best results. All the improvements over the baseline model (with feedback) are statistically significant. This result confirms that the integration of contextual factors is effective.
Compared to the other results, we see consistent, although in some cases small, improvements over all the partial models.\nLooking at the mixture weights, which may reflect the importance of each model, we observed that the best settings in all the collections vary in the following ranges: 0.1 ≤ α0 ≤ 0.2, 0.1 ≤ αDom ≤ 0.2, 0.1 ≤ αK ≤ 0.2 and 0.5 ≤ αF ≤ 0.6. We see that the most important factor is the Feedback model. This is also the single factor that produced the highest improvements over the original query model. This observation seems to indicate that this model has the highest capability to capture the information need behind the query. However, even with lower weights, the other models do have strong impacts on the final effectiveness. This demonstrates the benefit of integrating more contextual factors in IR.\nTable 7. Automatic query domain identification (U2)\nColl. | Measure | Dom. with rel. doc. (C1) Without FB | Dom. with rel. doc. (C1) With FB | Dom. with top-100 doc. (C2) Without FB | Dom. with top-100 doc. (C2) With FB\nDisks 1-3 (U2) | AvgP | 0.1650 (+5.10%)++ | 0.2444 (+4.27%)** | 0.1670 (+6.37%)++ | 0.2449 (+4.48%)**\nDisks 1-3 (U2) | Recall | 16 343 | 20 061 | 16 414 | 20 090\nDisks 1-3 (U2) | P@10 | 0.4270 | 0.5100 | 0.4090 | 0.5140\nTable 8. Complete models (C1)\nColl. | Measure | All Doc. Domain, Man. dom. id. (U1) | All Doc. Domain, Auto. dom. id. (U2)\nDisks 1-3 | AvgP | 0.2501 (+6.70%)** | 0.2489 (+6.19%)**\nDisks 1-3 | Recall /48 355 | 20 514 | 20 367\nDisks 1-3 | P@10 | 0.5200 | 0.5230\nTREC7 | AvgP | N/A | 0.2462 (+13.14%)**\nTREC7 | Recall /4 674 | N/A | 3 014\nTREC7 | P@10 | N/A | 0.3960\nTREC8 | AvgP | N/A | 0.3029 (+4.13%)**\nTREC8 | Recall /4 728 | N/A | 3 321\nTREC8 | P@10 | N/A | 0.5020\n8. CONCLUSIONS\nTraditional IR approaches usually consider the query as the only element available to describe the user's information need. Many previous studies have investigated the integration of contextual factors in IR models, typically by incorporating a user profile.
In\nthis paper, we argue that a single user profile (or model) can\ncontain a too large variety of different topics so that new queries\ncan be incorrectly biased. Similarly to some previous studies, we\npropose to model topic domains instead of the user.\nPrevious investigations on context focused on factors around the\nquery. We showed in this paper that factors within the query are\nalso important - they help select the appropriate term relations to\napply in query expansion.\nWe have integrated the above contextual factors, together with\nfeedback model, in a single language model. Our experimental\nresults strongly confirm the benefit of using contexts in IR. This\nwork also shows that the language modeling framework is\nappropriate for integrating many contextual factors.\nThis work can be further improved on several aspects, including\nother methods to extract term relations, to integrate more context\nwords in conditions and to identify query domains. It would also\nbe interesting to test the method on Web search using user search\nhistory. We will investigate these problems in our future research.\n9. REFERENCES\n[1] Bai, J., Nie, J.Y., Cao, G., Context-dependent term relations\nfor information retrieval, EMNLP\"06, pp. 551-559, 2006.\n[2] Belkin, N.J., Interaction with texts: Information retrieval as\ninformation seeking behavior, Information Retrieval\"93: Von\nder modellierung zu anwendung, pp. 55-66, Konstanz:\nKrause & Womser-Hacker, 1993.\n[3] Berger, A., Lafferty, J., Information retrieval as statistical\ntranslation, SIGIR\"99, pp. 222-229, 1999.\n[4] Bouchard, H., Nie, J.Y., Mod\u00e8les de langue appliqu\u00e9s \u00e0 la\nrecherche d\"information contextuelle, Conf. en Recherche\nd\"Information et Applications (CORIA), Lyon, 2006.\n[5] Chirita, P.A., Paiu, R., Nejdl, W., Kohlsch\u00fctter, C., Using\nODP metadata to personalize search, SIGIR, pp. 178-185,\n2005.\n[6] Church, K. 
W., Hanks, P., Word association norms, mutual information, and lexicography. ACL, pp. 22-29, 1989.\n[7] Croft, W. B., Cronen-Townsend, S., Lavrenko, V., Relevance feedback and personalization: A language modeling perspective, In: The DELOS-NSF Workshop on Personalization and Recommender Systems in Digital Libraries, pp. 49-54, 2006.\n[8] Croft, W. B., Wei, X., Context-based topic models for query modification, CIIR Technical Report, University of Massachusetts, 2005.\n[9] Dumais, S., Cutrell, E., Cadiz, J., Jancke, G., Sarin, R., Robbins, D. C., Stuff I've seen: a system for personal information retrieval and re-use, SIGIR'03, pp. 72-79, 2003.\n[10] Fang, H., Zhai, C., Semantic term matching in axiomatic approaches to information retrieval, SIGIR'06, pp. 115-122, 2006.\n[11] Gao, J., Qi, H., Xia, X., Nie, J.-Y., Linear discriminative model for information retrieval. SIGIR'05, pp. 290-297, 2005.\n[12] Google Personalized Search, http://www.google.com/psearch.\n[13] Hipp, J., Guntzer, U., Nakhaeizadeh, G., Algorithms for association rule mining - a general survey and comparison. SIGKDD Explorations, 2(1), pp. 58-64, 2000.\n[14] Ingwersen, P., Järvelin, K., Information retrieval in context: IRiX, SIGIR Forum, 39: pp. 31-39, 2004.\n[15] Kim, H.-R., Chan, P.K., Personalized ranking of search results with learned user interest hierarchies from bookmarks, WEBKDD'05 Workshop at ACM-KDD, pp. 32-43, 2005.\n[16] Lavrenko, V., Croft, W. B., Relevance-based language models, SIGIR'01, pp. 120-127, 2001.\n[17] Lau, R., Bruza, P., Song, D., Belief revision for adaptive information retrieval, SIGIR'04, pp. 130-137, 2004.\n[18] Liu, F., Yu, C., Meng, W., Personalized web search by mapping user queries to categories, CIKM'02, pp. 558-565, 2002.\n[19] Liu, X., Croft, W. B., Cluster-based retrieval using language models, SIGIR'04, pp. 186-193, 2004.\n[20] Morris, R.C., Toward a user-centered information service, JASIS, 45: pp.
20-30, 1994.\n[21] Park, T.K., Toward a theory of user-based relevance: A call\nfor a new paradigm of inquiry, JASIS, 45: pp. 135-141, 1994.\n[22] Peng, F., Schuurmans, D., Wang, S. Augmenting Naive\nBayes Classifiers with Statistical Language Models. Inf. Retr.\n7(3-4): pp. 317-345, 2004.\n[23] Pitkow, J., Sch\u00fctze, H., Cass, T., Cooley, R., Turnbull, D.,\nEdmonds, A., Adar, E., Breuel, T., Personalized Search,\nCommunications of ACM, 45: pp. 50-55, 2002.\n[24] Qiu, Y., Frei, H.P. Concept based query expansion.\nSIGIR\"93, pp.160-169, 1993.\n[25] Sanderson, M., Retrieving with good sense, Inf. Ret., 2(1):\npp. 49-69, 2000.\n[26] Schamber, L., Eisenberg, M.B., Nilan, M.S., A\nreexamination of relevance: Towards a dynamic, situational\ndefinition, Information Processing and Management, 26(6):\npp. 755-774, 1990.\n[27] Sch\u00fctze, H., Pedersen J.O., A cooccurrence-based thesaurus\nand two applications to information retrieval, Information\nProcessing and Management, 33(3): pp. 307-318, 1997.\n[28] Shen, D., Pan, R., Sun, J-T., Pan, J.J., Wu, K., Yin, J., Yang,\nQ. Query enrichment for web-query classification.\nACMTOIS, 24(3): pp. 320-352, 2006.\n[29] Shen, X., Tan, B., Zhai, C., Context-sensitive information\nretrieval using implicit feedback, SIGIR\"05, pp. 43-50, 2005.\n[30] Teevan, J., Dumais, S.T., Horvitz, E., Personalizing search\nvia automated analysis of interests and activities, SIGIR\"05,\npp. 449-456, 2005.\n[31] Voorhees, E., Query expansion using lexical-semantic\nrelations. SIGIR\"94, pp. 61-69, 1994.\n[32] Xu, J., Croft, W.B., Query expansion using local and global\ndocument analysis, SIGIR\"96, pp. 4-11, 1996.\n[33] Yarowsky, D. Unsupervised word sense disambiguation\nrivaling supervised methods. ACL, pp. 189-196. 1995.\n[34] Zhou X., Hu X., Zhang X., Lin X., Song I-Y.,\nContextsensitive semantic smoothing for the language modeling\napproach to genomic IR, SIGIR\"06, pp. 
170-177, 2006.\n[35] Zhai, C., Lafferty, J., Model-based feedback in the language\nmodeling approach to information retrieval, CIKM\"01, pp.\n403-410, 2001.\n[36] Zhai, C., Lafferty, J., A study of smoothing methods for\nlanguage models applied to ad-hoc information retrieval.\nSIGIR, pp.334-342, 2001.", "keywords": "knowledge ambiguity problem;term relation;interest domain;domain of interest;query context;information need;radical solution;domain knowledge;utilization of general knowledge;general knowledge utilization;context-independent;user-centric one;context information;domain model;context factor;google personalized search;user profile;language model;problem of knowledge ambiguity;search context;word sense disambiguation;query-specific context"}
-{"name": "test_H-30", "title": "Latent Concept Expansion Using Markov Random Fields", "abstract": "Query expansion, in the form of pseudo-relevance feedback or relevance feedback, is a common technique used to improve retrieval effectiveness. Most previous approaches have ignored important issues, such as the role of features and the importance of modeling term dependencies. In this paper, we propose a robust query expansion technique based on the Markov random field model for information retrieval. The technique, called latent concept expansion, provides a mechanism for modeling term dependencies during expansion. Furthermore, the use of arbitrary features within the model provides a powerful framework for going beyond simple term occurrence features that are implicitly used by most other expansion techniques. We evaluate our technique against relevance models, a state-of-the-art language modeling query expansion technique. Our model demonstrates consistent and significant improvements in retrieval effectiveness across several TREC data sets. We also describe how our technique can be used to generate meaningful multi-term concepts for tasks such as query suggestion/reformulation.", "fulltext": "1. INTRODUCTION\nUsers of information retrieval systems are required to\nexpress complex information needs in terms of Boolean\nexpressions, a short list of keywords, a sentence, a question, or\npossibly a longer narrative. A great deal of information is\nlost during the process of translating from the information\nneed to the actual query. For this reason, there has been\na strong interest in query expansion techniques. 
Such\ntechniques are used to augment the original query to produce a\nrepresentation that better reflects the underlying\ninformation need.\nQuery expansion techniques have been well studied for\nvarious models in the past and have shown to significantly\nimprove effectiveness in both the relevance feedback and\npseudo-relevance feedback setting [12, 21, 28, 29].\nRecently, a Markov random field (MRF) model for\ninformation retrieval was proposed that goes beyond the\nsimplistic bag of words assumption that underlies BM25 and the\n(unigram) language modeling approach to information\nretrieval [20, 22]. The MRF model generalizes the unigram,\nbigram, and other various dependence models [14]. Most\npast term dependence models have failed to show consistent,\nsignificant improvements over unigram baselines, with few\nexceptions [8]. The MRF model, however, has been shown\nto be highly effective across a number of tasks, including ad\nhoc retrieval [14, 16], named-page finding [16], and Japanese\nlanguage web search [6].\nUntil now, the model has been solely used for ranking\ndocuments in response to a given query. In this work, we show\nhow the model can be extended and used for query\nexpansion using a technique that we call latent concept expansion\n(LCE). There are three primary contributions of our work.\nFirst, LCE provides a mechanism for combining term\ndependence with query expansion. Previous query expansion\ntechniques are based on bag of words models. Therefore, by\nperforming query expansion using the MRF model, we are\nable to study the dynamics between term dependence and\nquery expansion.\nNext, as we will show, the MRF model allows arbitrary\nfeatures to be used within the model. Query expansion\ntechniques in the past have implicitly only made use of term\noccurrence features. 
By using more robust feature sets, it is possible to produce expansion terms that better discriminate between relevant and non-relevant documents.\nFinally, our proposed approach seamlessly provides a mechanism for generating both single and multi-term concepts. Most previous techniques, by default, generate terms independently. There have been several approaches that make use of generalized concepts; however, such approaches were somewhat heuristic and done outside of the model [19, 28]. Our approach is both formally motivated and a natural extension of the underlying model.\nThe remainder of this paper is laid out as follows. In Section 2 we describe related query expansion approaches. Section 3 provides an overview of the MRF model and details our proposed latent concept expansion technique. In Section 4 we evaluate our proposed model and analyze the results. Finally, Section 5 concludes the paper and summarizes the major results.\n2. RELATED WORK\nOne of the classic and most widely used approaches to query expansion is the Rocchio algorithm [21]. Rocchio's approach, which was developed within the vector space model, reweights the original query vector by moving the weights towards the set of relevant or pseudo-relevant documents and away from the non-relevant documents. Unfortunately, it is not possible to formally apply Rocchio's approach to a statistical retrieval model, such as language modeling for information retrieval.\nA number of formalized query expansion techniques have been developed for the language modeling framework, including Zhai and Lafferty's model-based feedback and Lavrenko and Croft's relevance models [12, 29]. Both approaches attempt to use pseudo-relevant or relevant documents to estimate a better query model.\nModel-based feedback finds the model that best describes the relevant documents while taking a background (noise) model into consideration.
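For concreteness, the classic Rocchio reweighting described above can be sketched as follows. This is our own toy illustration over term-weight dictionaries, not code from the paper; the alpha, beta, and gamma values are common illustrative defaults, not values the paper prescribes.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward the (pseudo-)relevant document centroid
    and away from the non-relevant centroid; negative weights are clipped
    to zero, as is common practice."""
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)
    expanded = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        expanded[t] = max(w, 0.0)
    return expanded
```

Terms absent from the original query but frequent in the relevant set receive positive weight, which is exactly the expansion effect discussed above.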
This separates the content model from the background model. The content model is then interpolated with the original query model to form the expanded query.\nThe other technique, relevance models, is more closely related to our work, so we describe it in detail. Much like model-based feedback, relevance models estimate an improved query model. The only difference between the two approaches is that relevance models do not explicitly model the relevant or pseudo-relevant documents. Instead, they model a more generalized notion of relevance, as we now show.\nGiven a query Q, a relevance model is a multinomial distribution, P(·|Q), that encodes the likelihood of each term given the query as evidence. It is computed as:\nP(w|Q) = \sum_D P(w|D) P(D|Q) \approx \frac{\sum_{D \in R_Q} P(w|D) P(Q|D) P(D)}{\sum_{w'} \sum_{D \in R_Q} P(w'|D) P(Q|D) P(D)}   (1)\nwhere R_Q is the set of documents that are relevant or pseudo-relevant to query Q. In the pseudo-relevant case, these are the top-ranked documents for query Q. Furthermore, it is assumed that P(D) is uniform over this set. These mild assumptions make computing the Bayesian posterior more practical.\nAfter the model is estimated, documents are ranked by clipping the relevance model, i.e., by choosing the k most likely terms from P(·|Q). This clipped distribution is then interpolated with the original, maximum likelihood query model [1]. This can be thought of as expanding the original query by k weighted terms. Throughout the remainder of this work, we refer to this instantiation of relevance models as RM3.\nThere has been relatively little work done in the area of query expansion in the context of dependence models [9]. However, there have been several attempts to expand using multi-term concepts. Xu and Croft's local context analysis (LCA) method combined passage-level retrieval with concept expansion, where concepts were single terms and phrases [28].
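To make Equation 1 concrete, the following sketch (our illustration, not code from the paper) estimates a clipped relevance model from pseudo-relevant documents represented as term-frequency dictionaries, with P(D) uniform over the pseudo-relevant set:

```python
from collections import defaultdict

def relevance_model(pseudo_rel_docs, query_likelihoods, k=10):
    """Estimate P(w|Q) ~ sum_D P(w|D) P(Q|D) over the pseudo-relevant set,
    then clip to the k most likely terms (Equation 1).
    pseudo_rel_docs: list of {term: tf} dicts; query_likelihoods: P(Q|D)."""
    scores = defaultdict(float)
    for doc, p_q_given_d in zip(pseudo_rel_docs, query_likelihoods):
        doc_len = sum(doc.values())
        for w, tf in doc.items():
            scores[w] += (tf / doc_len) * p_q_given_d  # P(w|D) P(Q|D)
    norm = sum(scores.values())  # denominator of Equation 1
    top_k = sorted(scores.items(), key=lambda kv: -kv[1])[:k]
    return {w: s / norm for w, s in top_k}
```

In RM3, this clipped distribution would then be interpolated with the maximum likelihood query model before ranking.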
Expansion concepts were chosen and weighted\nusing a metric based on co-occurrence statistics. However,\nit is not clear based on the analysis done how much the\nphrases helped over the single terms alone.\nPapka and Allan investigate using relevance feedback to\nperform multi-term concept expansion for document\nrouting [19]. The concepts used in their work are more general\nthan those used in LCA, and include InQuery query\nlanguage structures, such as #UW50(white house), which\ncorresponds to the concept the terms white and house occur, in\nany order, within 50 terms of each other. Results showed\nthat combining single term and large window multi-term\nconcepts significantly improved effectiveness. However, it is\nunclear whether the same approach is also effective for ad\nhoc retrieval, due to the differences in the tasks.\n3. MODEL\nThis section details our proposed latent concept expansion\ntechnique. As mentioned previously, the technique is an\nextension of the MRF model for information retrieval [14].\nTherefore, we begin by providing an overview of the MRF\nmodel and our proposed extensions.\n3.1 MRFs for IR\n3.1.1 Basics\nMarkov random fields, which are undirected graphical\nmodels, provide a compact, robust way of modeling a joint\ndistribution. Here, we are interested in modeling the joint\ndistribution over a query Q = q1, . . . , qn and a document\nD. It is assumed the underlying distribution over pairs of\ndocuments and queries is a relevance distribution. That is,\nsampling from the distribution gives pairs of documents and\nqueries, such that the document is relevant to the query.\nA MRF is defined by a graph G and a set of non-negative\npotential functions over the cliques in G. The nodes in the\ngraph represent the random variables and the edges define\nthe independence semantics of the distribution. 
A MRF satisfies the Markov property, which states that a node is independent of all of its non-neighboring nodes given observed values for its neighbors.\nGiven a graph G, a set of potentials \psi_i, and a parameter vector \Lambda, the joint distribution over Q and D is given by:\nP_{G,\Lambda}(Q, D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c; \Lambda)\nwhere Z_\Lambda is a normalizing constant and C(G) is the set of cliques in G. We follow common convention and parameterize the potentials as \psi_i(c; \Lambda) = \exp[\lambda_i f_i(c)], where f_i(c) is a real-valued feature function.\n3.1.2 Constructing G\nGiven a query Q, the graph G can be constructed in a number of ways. However, following previous work, we consider three simple variants [14]: full independence, where the query terms are independent of each other given a document; sequential dependence, which assumes a dependence exists between adjacent query terms; and full dependence, which makes no independence assumptions.\n3.1.3 Parameterization\nMRFs are commonly parameterized based on the maximal cliques of G. However, such a parameterization is too coarse for our needs. We need a parameterization that allows us to associate feature functions with cliques at a more fine-grained level, while keeping the number of features, and thus the number of parameters, reasonable. Therefore, we allow cliques to share feature functions and parameters based on clique sets. That is, all of the cliques within a clique set are associated with the same feature function and share a single parameter. This effectively ties together the parameters of the features associated with each set, which significantly reduces the number of parameters while still providing a mechanism for fine-tuning at the level of clique sets.\nWe propose seven clique sets for use with information retrieval.
The first three clique sets consist of cliques that contain one or more query terms and the document node. Features over these cliques should encode how well the terms in the clique configuration describe the document. These sets are:\n• TD - the set of cliques containing the document node and exactly one query term.\n• OD - the set of cliques containing the document node and two or more query terms that appear in sequential order within the query.\n• UD - the set of cliques containing the document node and two or more query terms that appear in any order within the query.\nNote that UD is a superset of OD. By tying the parameters among the cliques within each set we can control how much influence each type gets. This also avoids the problem of trying to determine how to estimate weights for each clique within the sets. Instead, we now must only estimate a single parameter per set.\nNext, we consider cliques that only contain query term nodes. These cliques, which were not considered in [14], are defined analogously, except that the cliques are only made up of query term nodes and do not contain the document node. Feature functions over these cliques should capture how compatible query terms are with one another. These clique features may take on the form of language models that impose well-formedness on the terms. Therefore, we define the following query-dependent clique sets:\n• TQ - the set of cliques containing exactly one query term.\n• OQ - the set of cliques containing two or more query terms that appear in sequential order within the query.\n• UQ - the set of cliques containing two or more query terms that appear in any order within the query.\nFinally, there is the clique that only contains the document node.
Features over this node can be used as a type of document prior, encoding document-centric properties. This trivial clique set is then:\n• D - the clique set containing only the singleton node D.\nWe note that our clique sets form a set cover over the cliques of G, but not a partition, since some cliques appear in multiple clique sets.\nAfter tying the parameters in our clique sets together and using the exponential potential function form, we end up with the following simplified form of the joint distribution:\nlog P_{G,\Lambda}(Q, D) = [\lambda_{TD} \sum_{c \in TD} f_{TD}(c) + \lambda_{OD} \sum_{c \in OD} f_{OD}(c) + \lambda_{UD} \sum_{c \in UD} f_{UD}(c)] + [\lambda_{TQ} \sum_{c \in TQ} f_{TQ}(c) + \lambda_{OQ} \sum_{c \in OQ} f_{OQ}(c) + \lambda_{UQ} \sum_{c \in UQ} f_{UQ}(c)] + \lambda_D f_D(D) - \log Z_\Lambda\nwhere the first bracketed term is the document- and query-dependent component F_{DQ}(D, Q), the second is the query-dependent component F_Q(Q), the term \lambda_D f_D(D) is the document-dependent component F_D(D), and \log Z_\Lambda is both document- and query-independent. The convenience functions F_{DQ}, F_Q, and F_D will be used to simplify and clarify expressions derived throughout the remainder of the paper.\n3.1.4 Features\nAny arbitrary feature function over clique configurations can be used in the model. The correct choice of features depends largely on the retrieval task and the evaluation metric. Therefore, there is likely no single, universally applicable set of features.\nTo provide an idea of the range of possibilities, we now briefly describe possible types of features. Possible query term dependent features include tf, idf, named entities, term proximity, and text style, to name a few.
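As a concrete illustration of these clique sets (our sketch, not the authors' implementation), the query-term parts of the T, O, and U sets can be enumerated for the two graph variants that model dependence; the document node is implicit in the T_D/O_D/U_D versions:

```python
from itertools import combinations

def clique_term_sets(query_terms, variant="sd"):
    """Enumerate the query-term configurations of the T, O, and U clique sets.
    'sd' (sequential dependence): singletons, plus adjacent pairs; under this
    graph U coincides with O, since non-adjacent terms are not connected.
    'fd' (full dependence): O holds contiguous runs of the query, while U
    holds every subset of two or more terms, so O is a subset of U."""
    n = len(query_terms)
    T = [(t,) for t in query_terms]
    if variant == "sd":
        O = [tuple(query_terms[i:i + 2]) for i in range(n - 1)]
        U = list(O)
    else:  # full dependence
        O = [tuple(query_terms[i:j]) for i in range(n) for j in range(i + 2, n + 1)]
        U = [c for r in range(2, n + 1) for c in combinations(query_terms, r)]
    return T, O, U
```

Tying one weight to each of these sets is what keeps the parameter count at seven regardless of query length.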
Many types of document dependent features can be used as well, including document length, PageRank, readability, and genre, among others.\nSince it is not our goal here to find optimal features, we use a simple, fixed set of features that have been shown to be effective in previous work [14]. See Table 1 for a list of the features used. These features attempt to capture term occurrence and term proximity. Better feature selection in the future will likely lead to improved effectiveness.\n3.1.5 Ranking\nGiven a query Q, we wish to rank documents in descending order according to P_{G,\Lambda}(D|Q). After dropping document-independent expressions from log P_{G,\Lambda}(Q, D), we derive the following ranking function:\nP_{G,\Lambda}(D|Q) \stackrel{rank}{=} F_{DQ}(D, Q) + F_D(D)   (2)\nwhich is a simple weighted linear combination of feature functions that can be computed efficiently for reasonable graphs.\n3.1.6 Parameter Estimation\nNow that the model has been fully specified, the final step is to estimate the model parameters. Although MRFs are generative models, it is inappropriate to train them using\nFeature | Value\nf_{TD}(q_i, D) | log[(1 - \alpha) tf_{q_i,D}/|D| + \alpha cf_{q_i}/|C|]\nf_{OD}(q_i, q_{i+1}, ..., q_{i+k}, D) | log[(1 - \beta) tf_{#1(q_i...q_{i+k}),D}/|D| + \beta cf_{#1(q_i...q_{i+k})}/|C|]\nf_{UD}(q_i, ..., q_j, D) | log[(1 - \beta) tf_{#uw(q_i...q_j),D}/|D| + \beta cf_{#uw(q_i...q_j)}/|C|]\nf_{TQ}(q_i) | -log[cf_{q_i}/|C|]\nf_{OQ}(q_i, q_{i+1}, ..., q_{i+k}) | -log[cf_{#1(q_i...q_{i+k})}/|C|]\nf_{UQ}(q_i, ..., q_j) | -log[cf_{#uw(q_i...q_j)}/|C|]\nf_D(D) | 0\nTable 1: Feature functions used in the Markov random field model. Here, tf_{w,D} is the number of times term w occurs in document D, tf_{#1(q_i...q_{i+k}),D} denotes the number of times the exact phrase q_i...q_{i+k} occurs in document D, tf_{#uw(q_i...q_j),D} is the number of times the terms q_i, ..., q_j appear ordered or unordered within a window of N terms, and |D| is the length of document D.
The cf and |C| values are analogously defined on\nthe collection level. Finally, \u03b1 and \u03b2 are model hyperparameters that control smoothing for single term and\nphrase features, respectively.\nconventional likelihood-based approaches because of metric\ndivergence [17]. That is, the maximum likelihood estimate\nis unlikely to be the estimate that maximizes our evaluation\nmetric. For this reason, we discriminatively train our model\nto directly maximize the evaluation metric under\nconsideration [14, 15, 25]. Since our parameter space is small, we\nmake use of a simple hill climbing strategy, although other\nmore sophisticated approaches are possible [10].\n3.2 Latent Concept Expansion\nIn this section we describe how this extended MRF model\ncan be used in a novel way to generate single and\nmultiterm concepts that are topically related to some original\nquery. As we will show, the concepts generated using our\ntechnique can be used for query expansion or other tasks,\nsuch as suggesting alternative query formulations.\nWe assume that when a user formulates their original\nquery, they have some set of concepts in mind, but are only\nable to express a small number of them in the form of a\nquery. We treat the concepts that the user has in mind, but\ndid not explicitly express in the query, as latent concepts.\nThese latent concepts can consist of a single term,\nmultiple terms, or some combination of the two. It is, therefore,\nour goal to recover these latent concepts given some original\nquery.\nThis can be accomplished within our framework by first\nexpanding the original graph G to include the type of\nconcept we are interested in generating. We call this expanded\ngraph H. In Figure 1, the middle graph provides an example\nof how to construct an expanded graph that can generate\nsingle term concepts. 
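Returning briefly to the ranking components above, the following sketch (our own illustration; the smoothing value alpha, the tied weight, and the toy statistics are made-up stand-ins) computes the single-term feature f_TD of Table 1 and the Equation 2 score restricted to the T_D clique set:

```python
import math

def f_TD(term, doc_tf, doc_len, coll_cf, coll_len, alpha=0.1):
    """Smoothed single-term feature from Table 1:
    log[(1 - alpha) * tf/|D| + alpha * cf/|C|]; alpha is illustrative."""
    return math.log((1 - alpha) * doc_tf.get(term, 0) / doc_len
                    + alpha * coll_cf[term] / coll_len)

def score(query_terms, doc_tf, doc_len, coll_cf, coll_len, lam_T=1.0):
    """Equation 2 restricted to the T_D clique set: a weighted sum of feature
    values. The O_D/U_D proximity features would be added the same way,
    each with its own tied weight."""
    return lam_T * sum(f_TD(t, doc_tf, doc_len, coll_cf, coll_len)
                       for t in query_terms)
```

The collection-frequency term keeps the score finite for query terms missing from the document, which is what makes the log well defined.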
Similarly, the graph on the right illustrates an expanded graph that generates two-term concepts. Although these two examples make use of the sequential dependence assumption (i.e. dependencies between adjacent query terms), it is important to note that both the original query and the expansion concepts can use any independence structure.\nAfter H is constructed, we compute P_{H,\Lambda}(E|Q), a probability distribution over latent concepts, according to:\nP_{H,\Lambda}(E|Q) = \frac{\sum_{D \in R} P_{H,\Lambda}(Q, E, D)}{\sum_{D \in R} \sum_E P_{H,\Lambda}(Q, E, D)}\nwhere R is the universe of all possible documents and E is some latent concept that may consist of one or more terms. Since it is not practical to compute this summation, we must approximate it. We notice that P_{H,\Lambda}(Q, E, D) is likely to be peaked around those documents D that are highly ranked according to query Q. Therefore, we approximate P_{H,\Lambda}(E|Q) by summing only over a small subset of relevant or pseudo-relevant documents for query Q. This is computed as follows:\nP_{H,\Lambda}(E|Q) \approx \frac{\sum_{D \in R_Q} P_{H,\Lambda}(Q, E, D)}{\sum_{D \in R_Q} \sum_E P_{H,\Lambda}(Q, E, D)}   (3)\n\propto \sum_{D \in R_Q} \exp[F_{QD}(Q, D) + F_D(D) + F_{QD}(E, D) + F_Q(E)]\nwhere R_Q is a set of relevant or pseudo-relevant documents for query Q and all clique sets are constructed using H.\nAs we see, the likelihood contribution for each document in R_Q is a combination of the original query's score for the document (see Equation 2), concept E's score for the document, and E's document-independent score. Therefore, this equation can be interpreted as measuring how well Q and E account for the top-ranked documents, together with the goodness of E independent of the documents.
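A minimal sketch of the Equation 3 approximation follows (our illustration; the score callables stand in for the F components and are assumptions, not the paper's exact features): each candidate concept E is scored against the pseudo-relevant set and the k most likely concepts are kept.

```python
import math

def latent_concepts(candidates, pseudo_rel, orig_query_scores,
                    concept_score, concept_prior, k=5):
    """Approximate P(E|Q) over candidate concepts (Equation 3):
    P(E|Q) proportional to sum over pseudo-relevant docs D of
    exp[F(Q,D) + F(D) + F(E,D) + F(E)]. Here orig_query_scores[d] plays
    the query-document score plus document prior, concept_score(e, d)
    the concept-document score, and concept_prior(e) the concept's
    document-independent score."""
    weights = {}
    for e in candidates:
        weights[e] = sum(
            math.exp(orig_query_scores[d] + concept_score(e, d) + concept_prior(e))
            for d in pseudo_rel)
    z = sum(weights.values())  # normalize over the candidate pool
    ranked = sorted(weights, key=weights.get, reverse=True)[:k]
    return {e: weights[e] / z for e in ranked}
```

For query expansion, the returned concepts would then be attached to the original graph and documents re-ranked with Equation 2, as described in Section 3.2.1.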
For maximum robustness, we use a different set of parameters for F_QD(Q, D) and F_QD(E, D), which allows us to weight the term, ordered, and unordered window features differently for the original query and the candidate expansion concept.

3.2.1 Query Expansion
To use this framework for query expansion, we first choose an expansion graph H that encodes the latent concept structure we are interested in expanding the query with. We then select the k latent concepts with the highest likelihood given by Equation 3. A new graph G' is constructed by augmenting the original graph G with the k expansion concepts E_1, ..., E_k. Finally, documents are ranked according to P_{G',Λ}(D|Q, E_1, ..., E_k) using Equation 2.

3.2.2 Comparison to Relevance Models
Inspecting Equations 1 and 3 reveals the close connection that exists between LCE and relevance models. Both models essentially compute the likelihood of a term (or concept) in the same manner. It is easy to see that just as the MRF model can be viewed as a generalization of language modeling, so too can LCE be viewed as a generalization of relevance models.

[Figure 1: Graphical model representations of relevance modeling (left), latent concept expansion using single term concepts (middle), and latent concept expansion using two term concepts (right) for a three term query.]

There are important differences between MRFs/LCE and unigram language models/relevance models. See Figure 1 for graphical model representations of both models. Unigram language models and relevance models are based on the multinomial distribution. This distributional assumption locks the model into the bag of words representation and the implicit use of term occurrence features.
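The expansion procedure of Section 3.2.1 can be outlined end to end. This is a schematic, not the paper's system: `retrieve` and `concept_likelihoods` are hypothetical stand-ins for ranking with Equation 2 and scoring concepts with Equation 3.

```python
def lce_query_expansion(query, retrieve, concept_likelihoods,
                        k=10, fb_docs=25):
    """Schematic LCE pseudo-relevance feedback loop.

    query                      -- tuple of query terms
    retrieve(q)                -> ranked list of documents (Equation 2)
    concept_likelihoods(q, R)  -> dict concept -> likelihood (Equation 3)
    """
    # 1. Initial retrieval with the original graph G.
    initial_ranking = retrieve(query)
    # 2. Estimate concept likelihoods from the top fb_docs results (R_Q).
    r_q = initial_ranking[:fb_docs]
    dist = concept_likelihoods(query, r_q)
    # 3. Keep the k most likely expansion concepts E_1..E_k.
    top_concepts = sorted(dist, key=dist.get, reverse=True)[:k]
    # 4. Re-rank with the augmented query (graph G').
    return retrieve(query + tuple(top_concepts)), top_concepts
```

Any ranker and concept scorer with these shapes can be plugged in; the structure of the loop is what Section 3.2.1 prescribes.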
However, the distribution underlying the MRF model allows us to move beyond both of these assumptions, by modeling dependencies between query terms and by allowing arbitrary features to be used explicitly.

Moving beyond the simplistic bag of words assumption in this way results in a general, robust model and, as we show in the next section, translates into significant improvements in retrieval effectiveness.

4. EXPERIMENTAL RESULTS
In order to better understand the strengths and weaknesses of our technique, we evaluate it on a wide range of data sets. Table 2 provides a summary of the TREC data sets considered. The WSJ, AP, and ROBUST collections are smaller and consist entirely of newswire articles, whereas WT10g and GOV2 are large web collections. For each data set, we split the available topics into a training and test set, where the training set is used solely for parameter estimation and the test set is used for evaluation purposes.

Name    | Description                | # Docs     | Train Topics | Test Topics
WSJ     | Wall St. Journal 87-92     | 173,252    | 51-150       | 151-200
AP      | Assoc. Press 88-90         | 242,918    | 51-150       | 151-200
ROBUST  | Robust 2004 data           | 528,155    | 301-450      | 601-700
WT10g   | TREC Web collection        | 1,692,096  | 451-500      | 501-550
GOV2    | 2004 crawl of .gov domain  | 25,205,179 | 701-750      | 751-800
Table 2: Overview of TREC collections and topics.

All experiments were carried out using a modified version of Indri, which is part of the Lemur Toolkit [18, 23]. All collections were stopped using a standard list of 418 common terms and stemmed using a Porter stemmer. In all cases, only the title portion of the TREC topics is used to construct queries. We construct G using the sequential dependence assumption for all data sets [14].

4.1 Ad-hoc Retrieval Results
We now investigate how well our model performs in practice in a pseudo-relevance feedback setting. We compare unigram language modeling (with Dirichlet smoothing), the MRF model (without expansion), relevance models, and LCE to better understand how each model performs across the various data sets.

For the unigram language model, the smoothing parameter was trained. For the MRF model, we train the model parameters (i.e. Λ) and model hyperparameters (i.e. α, β). For RM3 and LCE, we also train the number of pseudo-relevant feedback documents used and the number of expansion terms.

4.1.1 Expansion with Single Term Concepts
We begin by evaluating how well our model performs when expanding using only single terms. Before we describe and analyze the results, we explicitly state how expansion term likelihoods are computed under this setup (i.e. using the sequential dependence assumption, expanding with single term concepts, and using our feature set). The expansion term likelihoods are computed as follows:

P_{H,\Lambda}(e|Q) \propto \sum_{D \in R_Q} \exp\Big[
  \lambda_{TD} \sum_{w \in Q} \log\Big((1-\alpha)\frac{tf_{w,D}}{|D|} + \alpha\frac{cf_w}{|C|}\Big)
  + \lambda_{OD} \sum_{b \in Q} \log\Big((1-\beta)\frac{tf_{\#1(b),D}}{|D|} + \beta\frac{cf_{\#1(b)}}{|C|}\Big)
  + \lambda_{UD} \sum_{b \in Q} \log\Big((1-\beta)\frac{tf_{\#uw(b),D}}{|D|} + \beta\frac{cf_{\#uw(b)}}{|C|}\Big)
  + \log\Big( \Big((1-\alpha)\frac{tf_{e,D}}{|D|} + \alpha\frac{cf_e}{|C|}\Big)^{\lambda'_{TD}} \Big(\frac{cf_e}{|C|}\Big)^{\lambda'_{TQ}} \Big)
\Big]   (4)

where b ∈ Q denotes the set of bigrams in Q. This equation clearly shows how LCE differs from relevance models. When we set λ_{TD} = λ'_{TD} = 1 and all other parameters to 0, we obtain the exact formula that is used to compute term likelihoods in the relevance modeling framework. Therefore, LCE adds two very important factors to the equation. First, it adds the ordered and unordered window features that are applied to the original query. Second, it applies an intuitive tf.idf-like form to the candidate expansion term e.
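The single-term expansion likelihood above can be illustrated with a small sketch. This is not the paper's implementation: the ordered/unordered window features are dropped (their λ weights set to zero), the query and expansion-term weights are tied into one `lam_td` for brevity, and the document/collection data structures are invented for the example.

```python
import math

def smoothed(tf, dlen, cf, clen, mix):
    """Interpolated estimate (1 - mix) * tf/|D| + mix * cf/|C|,
    the smoothed form used throughout Equation 4."""
    return (1 - mix) * tf / dlen + mix * cf / clen

def expansion_term_score(e, query_terms, docs, collection,
                         alpha=0.5, lam_td=1.0, lam_tq=0.0):
    """Simplified Equation 4: score a candidate expansion term e by
    summing exp(log feature scores) over the pseudo-relevant set."""
    total = 0.0
    for d in docs:  # d: {'tf': {term: count}, 'len': int}
        log_s = 0.0
        # original-query term features
        for w in query_terms:
            log_s += lam_td * math.log(
                smoothed(d['tf'].get(w, 0), d['len'],
                         collection['cf'].get(w, 1), collection['len'], alpha))
        # tf.idf-like factors for the candidate expansion term e
        log_s += lam_td * math.log(
            smoothed(d['tf'].get(e, 0), d['len'],
                     collection['cf'].get(e, 1), collection['len'], alpha))
        log_s += lam_tq * math.log(collection['cf'].get(e, 1) / collection['len'])
        total += math.exp(log_s)
    return total
```

With `lam_td = 1` and `lam_tq = 0` this collapses to the relevance-model term likelihood, matching the reduction discussed in the text.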
The idf factor, which is not present in relevance models, plays an important role in expansion term selection.

[Figure 2: Histograms that demonstrate and compare the robustness of relevance models (RM3) and latent concept expansion (LCE) with respect to the query likelihood model (QL) for the AP, ROBUST, and WT10G data sets.]

The results, evaluated using mean average precision, are given in Table 3. As we see, the MRF model, relevance models, and LCE always significantly outperform the unigram language model. In addition, LCE shows significant improvements over relevance models across all data sets. The relative improvements over relevance models are 6.9% for AP, 12.9% for WSJ, 6.5% for ROBUST, 16.7% for WT10G, and 7.3% for GOV2.

Furthermore, LCE shows small, but not significant, improvements over relevance modeling for metrics such as precision at 5, 10, and 20. However, both relevance modeling and LCE show statistically significant improvements in such metrics over the unigram language model.

Another interesting result is that the MRF model is statistically equivalent to relevance models on the two web data sets. In fact, the MRF model outperforms relevance models on the WT10g data set.
This reiterates the importance\nof non-unigram, proximity-based features for content-based\nweb search observed previously [14, 16].\nAlthough our model has more free parameters than\nrelevance models, there is surprisingly little overfitting. Instead,\nthe model exhibits good generalization properties.\n4.1.2 Expansion with Multi-Term Concepts\nWe also investigated expanding using both single and two\nword concepts. For each query, we expanded using a set of\nsingle term concepts and a set of two term concepts. The\nsets were chosen independently. Unfortunately, only\nnegligible increases in mean average precision were observed.\nThis result may be due to the fact that strong\ncorrelations exist between the single term expansion concepts. We\nfound that the two word concepts chosen often consisted of\ntwo highly correlated terms that are also chosen as single\nterm concepts. For example, the two term concept stock\nmarket was chosen while the single term concepts stock\nand market were also chosen. Therefore, many two word\nconcepts are unlikely to increase the discriminative power\nof the expanded query. This result suggests that concepts\nshould be chosen according to some criteria that also takes\nnovelty, diversity, or term correlations into account.\nAnother potential issue is the feature set used. Other\nfeature sets may ultimately yield different results, especially\nif they reduce the correlation among the expansion concepts.\nTherefore, our experiments yield no conclusive results with\nregard to expansion using multi-term concepts. 
Instead, the results introduce interesting open questions and directions for future exploration.

         LM      MRF       RM3        LCE
WSJ     .3258   .3425^α   .3493^α    .3943^αβγ
AP      .2077   .2147^α   .2518^αβ   .2692^αβγ
ROBUST  .2920   .3096^α   .3382^αβ   .3601^αβγ
WT10g   .1861   .2053^α   .1944^α    .2269^αβγ
GOV2    .3234   .3520^α   .3656^α    .3924^αβγ
Table 3: Test set mean average precision for language modeling (LM), Markov random field (MRF), relevance models (RM3), and latent concept expansion (LCE). The superscripts α, β, and γ indicate statistically significant improvements (p < 0.05) over LM, MRF, and RM3, respectively.

4.2 Robustness
As we have shown, relevance models and latent concept expansion can significantly improve retrieval effectiveness over the baseline query likelihood model. In this section we analyze the robustness of these two methods. Here, we define robustness as the number of queries whose effectiveness is improved/hurt (and by how much) as the result of applying these methods. A highly robust expansion technique will significantly improve many queries and only minimally hurt a few.

Figure 2 provides an analysis of the robustness of relevance modeling and latent concept expansion for the AP, ROBUST, and WT10G data sets. The analysis for the two data sets not shown is similar. The histograms provide, for various ranges of relative decreases/increases in mean average precision, the number of queries that were hurt/improved with respect to the query likelihood baseline.

As the results show, LCE exhibits strong robustness for each data set. For AP, relevance models improve 38 queries and hurt 11, whereas LCE improves 35 and hurts 14. Although relevance models improve the effectiveness of 3 more queries than LCE, the relative improvement exhibited by LCE is significantly larger.
For the ROBUST data set, relevance models improve 67 queries and hurt 32, and LCE improves 77 and hurts 22. Finally, for the WT10G collection, relevance models improve 32 queries and hurt 16, and LCE improves 35 and hurts 14. As with AP, the amount of improvement exhibited by LCE versus relevance models is significantly larger for both the ROBUST and WT10G data sets. In addition, when LCE does hurt performance, it is less likely to hurt as much as relevance modeling, which is a desirable property.

1 word concepts | 2 word concepts      | 3 word concepts
telescope       | hubble telescope     | hubble space telescope
hubble          | space telescope      | hubble telescope space
space           | hubble space         | space telescope hubble
mirror          | telescope mirror     | space telescope NASA
NASA            | telescope hubble     | hubble telescope astronomy
launch          | mirror telescope     | NASA hubble space
astronomy       | telescope NASA       | space telescope mirror
shuttle         | telescope space      | telescope space NASA
test            | hubble mirror        | hubble telescope mission
new             | NASA hubble          | mirror mirror mirror
discovery       | telescope astronomy  | space telescope launch
time            | telescope optical    | space telescope discovery
universe        | hubble optical       | shuttle space telescope
optical         | telescope discovery  | hubble telescope flaw
light           | telescope shuttle    | two hubble space
Table 4: Fifteen most likely one, two, and three word concepts constructed using the top 25 documents retrieved for the query hubble telescope achievements on the ROBUST collection.

Overall, LCE improves effectiveness for 65%-80% of queries, depending on the data set.
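The robustness analysis above amounts to bucketing per-query relative changes in average precision against the baseline. A sketch (the bucket edges mirror those of Figure 2; the input dicts are illustrative):

```python
def robustness_histogram(baseline_ap, method_ap):
    """Count queries per bucket of relative AP change vs. the baseline.
    Both arguments map query id -> average precision."""
    edges = [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0]
    labels = ['<= -100%', '(-100%, -75%]', '(-75%, -50%]', '(-50%, -25%]',
              '(-25%, 0%]', '(0%, 25%]', '(25%, 50%]', '(50%, 75%]',
              '(75%, 100%]', '> 100%']
    counts = dict.fromkeys(labels, 0)
    for q, base in baseline_ap.items():
        rel = (method_ap[q] - base) / base   # relative change
        i = 0
        while i < len(edges) and rel > edges[i]:
            i += 1                           # half-open (a, b] intervals
        counts[labels[i]] += 1
    return counts
```

A "robust" method in the paper's sense concentrates mass in the right-hand (improvement) buckets while keeping the left-hand tail small.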
When used in combination with\na highly accurate query performance prediction system, it\nmay be possible to selectively expand queries and minimize\nthe loss associated with sub-baseline performance.\n4.3 Multi-Term Concept Generation\nAlthough we found that expansion using multi-term\nconcepts failed to produce conclusive improvements in\neffectiveness, there are other potential tasks that these concepts may\nbe useful for, such as query suggestion/reformulation,\nsummarization, and concept mining. For example, for a query\nsuggestion task, the original query could be used to\ngenerate a set of latent concepts which correspond to alternative\nquery formulations.\nAlthough evaluating our model on these tasks is beyond\nthe scope of this work, we wish to show an illustrative\nexample of the types of concepts generated using our model. In\nTable 4, we present the most likely one, two, and three term\nconcepts generated using LCE for the query hubble telescope\nachievements using the top 25 ranked documents from the\nROBUST collection.\nIt is well known that generating multi-term concepts\nusing a unigram-based model produces unsatisfactory results,\nsince it fails to consider term dependencies. This is not\nthe case when generating multi-term concepts using our\nmodel. Instead, a majority of the concepts generated are\nwell-formed and meaningful. There are several cases where\nthe concepts are less coherent, such as mirror mirror mirror.\nIn this case, the likelihood of the term mirror appearing in\na pseudo-relevant document outweighs the language\nmodeling features (e.g. fOQ ), which causes this non-coherent\nconcept to have a high likelihood. Such examples are in the\nminority, however.\nNot only are the concepts generated well-formed and\nmeaningful, but they are also topically relevant to the original\nquery. As we see, all of the concepts generated are on topic\nand in some way related to the Hubble telescope. 
It is\ninteresting to see that the concept hubble telescope flaw is one of\nthe most likely three term concepts, given that it is\nsomewhat contradictory to the original query. Despite this\ncontradiction, documents that discuss the telescope flaws are\nalso likely to describe the successes, as well, and therefore\nthis is likely to be a meaningful concept.\nOne important thing to note is that the concepts LCE\ngenerates are of a different nature than those that would\nbe generated using a bigram relevance model. For example,\na bigram model would be unlikely to generate the concept\ntelescope space NASA, since none of the bigrams that make\nup the concept have high likelihood. However, since our\nmodel is based on a number of different features over various\ntypes of cliques, it is more general and robust than a bigram\nmodel.\nAlthough we only provided the concepts generated for a\nsingle query, we note that the same analysis and conclusions\ngeneralize across other data sets, with coherent, topically\nrelated concepts being consistently generated using LCE.\n4.4 Discussion\nOur latent concept expansion technique captures two\nsemiorthogonal types of dependence. In information retrieval,\nthere has been a long-term interest in understanding the\nrole of term dependence. Out of this research, two broad\ntypes of dependencies have been identified.\nThe first type of dependence is syntactic dependence. This\ntype of dependence covers phrases, term proximity, and term\nco-occurrence [2, 4, 5, 7, 26]. These methods capture the\nfact that queries implicitly or explicitly impose a certain set\nof positional dependencies.\nThe second type is semantic dependence. Examples of\nsemantic dependence are relevance feedback, pseudo-relevance\nfeedback, synonyms, and to some extent stemming [3]. These\ntechniques have been explored on both the query and\ndocument side. 
On the query side, this is typically done using some form of query expansion, such as relevance models or LCE. On the document side, this is done as document expansion or document smoothing [11, 13, 24].

Although there may be some overlap between syntactic and semantic dependencies, they are mostly orthogonal. Our model uses both types of dependencies. The use of phrase and proximity features within the model captures syntactic dependencies, whereas LCE captures query-side semantic dependence. This explains why the initial improvement in effectiveness achieved by using the MRF model is not lost after query expansion. If the same dependencies were captured by both the syntactic and the semantic components, LCE would be expected to perform about equally as well as relevance models. Therefore, by modeling both types of dependencies we see an additive effect, rather than an absorbing effect.

An interesting area of future work is to determine whether or not modeling document-side semantic dependencies can add anything to the model. Previous results that have combined query- and document-side semantic dependencies have shown mixed results [13, 27].

5. CONCLUSIONS
In this paper we proposed a robust query expansion technique called latent concept expansion. The technique was shown to be a natural extension of the Markov random field model for information retrieval and a generalization of relevance models. LCE is novel in that it performs single or multi-term expansion within a framework that allows the modeling of term dependencies and the use of arbitrary features, whereas previous work has been based on the bag of words assumption and term occurrence features.

We showed that the technique can be used to produce high quality, well formed, topically relevant multi-term expansion concepts. The concepts generated can be used in an alternative query suggestion module. We also showed that the model is highly effective.
In fact, it achieves significant improvements in mean average precision over relevance models across a selection of TREC data sets. It was also shown that the MRF model itself, without any query expansion, outperforms relevance models on large web data sets. This reconfirms previous observations that modeling dependencies via the use of proximity features within the MRF has more of an impact on larger, noisier collections than on smaller, well-behaved ones.

Finally, we reiterated the importance of choosing expansion terms that model relevance, rather than the relevant documents, and showed how LCE captures both syntactic and query-side semantic dependencies. Future work will look at incorporating document-side dependencies as well.

Acknowledgments
This work was supported in part by the Center for Intelligent Information Retrieval, in part by NSF grant #CNS-0454018, in part by ARDA and NSF grant #CCF-0205575, and in part by Microsoft Live Labs. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the sponsor.

6. REFERENCES
[1] N. Abdul-Jaleel, J. Allan, W. B. Croft, F. Diaz, L. Larkey, X. Li, M. D. Smucker, and C. Wade. UMass at TREC 2004: Novelty and HARD. In Online proceedings of the 2004 Text Retrieval Conf., 2004.
[2] C. L. A. Clarke and G. V. Cormack. Shortest-substring retrieval and ranking. ACM Trans. Inf. Syst., 18(1):44-78, 2000.
[3] K. Collins-Thompson and J. Callan. Query expansion using random walk models. In Proc. 14th Intl. Conf. on Information and Knowledge Management, pages 704-711, 2005.
[4] W. B. Croft. Boolean queries and term dependencies in probabilistic retrieval models. Journal of the American Society for Information Science, 37(4):71-77, 1986.
[5] W. B. Croft, H. Turtle, and D. Lewis. The use of phrases and structured queries in information retrieval. In Proc. 14th Ann. Intl. ACM SIGIR Conf.
on Research and Development in\nInformation Retrieval, pages 32-45, 1991.\n[6] K. Eguchi. NTCIR-5 query expansion experiments using term\ndependence models. In Proc. of the Fifth NTCIR Workshop\nMeeting on Evaluation of Information Access Technologies,\npages 494-501, 2005.\n[7] J. Fagan. Automatic phrase indexing for document retrieval:\nAn examination of syntactic and non-syntactic methods. In\nProc. tenth Ann. Intl. ACM SIGIR Conf. on Research and\nDevelopment in Information Retrieval, pages 91-101, 1987.\n[8] J. Gao, J. Nie, G. Wu, and G. Cao. Dependence language\nmodel for information retrieval. In Proc. 27th Ann. Intl. ACM\nSIGIR Conf. on Research and Development in Information\nRetrieval, pages 170-177, 2004.\n[9] D. Harper and C. J. van Rijsbergen. An evaluation of feedback\nin document retrieval using co-occurrence data. Journal of\nDocumentation, 34(3):189-216, 1978.\n[10] T. Joachims. A support vector method for multivariate\nperformance measures. In Proc. of the International Conf. on\nMachine Learning, pages 377-384, 2005.\n[11] O. Kurland and L. Lee. Corpus structure, language models,\nand ad-hoc information retrieval. In Proc. 27th Ann. Intl.\nACM SIGIR Conf. on Research and Development in\nInformation Retrieval, pages 194-201, 2004.\n[12] V. Lavrenko and W. B. Croft. Relevance-based language\nmodels. In Proc. 24th Ann. Intl. ACM SIGIR Conf. on\nResearch and Development in Information Retrieval, pages\n120-127, 2001.\n[13] X. Liu and W. B. Croft. Cluster-based retrieval using language\nmodels. In Proc. 27th Ann. Intl. ACM SIGIR Conf. on\nResearch and Development in Information Retrieval, pages\n186-193, 2004.\n[14] D. Metzler and W. B. Croft. A Markov random field model for\nterm dependencies. In Proc. 28th Ann. Intl. ACM SIGIR\nConf. on Research and Development in Information\nRetrieval, pages 472-479, 2005.\n[15] D. Metzler and W. B. Croft. Linear feature based models for\ninformation retrieval. Information Retrieval, to appear, 2006.\n[16] D. 
Metzler, T. Strohman, Y. Zhou, and W. B. Croft. Indri at terabyte track 2005. In Online proceedings of the 2005 Text Retrieval Conf., 2005.
[17] W. Morgan, W. Greiff, and J. Henderson. Direct maximization of average precision by hill-climbing with a comparison to a maximum entropy approach. Technical report, MITRE, 2004.
[18] P. Ogilvie and J. P. Callan. Experiments using the Lemur toolkit. In Proc. of the Text REtrieval Conf., 2001.
[19] R. Papka and J. Allan. Why bigger windows are better than smaller ones. Technical report, University of Massachusetts, Amherst, 1997.
[20] S. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In Online proceedings of the Third Text Retrieval Conf., pages 109-126, 1995.
[21] J. J. Rocchio. Relevance Feedback in Information Retrieval, pages 313-323. Prentice-Hall, 1971.
[22] F. Song and W. B. Croft. A general language model for information retrieval. In Proc. eighth international conference on Information and knowledge management (CIKM 99), pages 316-321, 1999.
[23] T. Strohman, D. Metzler, H. Turtle, and W. B. Croft. Indri: A language model-based search engine for complex queries. In Proc. of the International Conf. on Intelligence Analysis, 2004.
[24] T. Tao, X. Wang, Q. Mei, and C. Zhai. Language model information retrieval with document expansion. In Proc. of HLT/NAACL, pages 407-414, 2006.
[25] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Proc. of Advances in Neural Information Processing Systems (NIPS 2003), 2003.
[26] C. J. van Rijsbergen. A theoretical basis for the use of co-occurrence data in information retrieval. Journal of Documentation, 33(2):106-119, 1977.
[27] X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In Proc. 29th Ann. Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 178-185, 2006.
[28] J. Xu and W. B. Croft.
Improving the effectiveness of information retrieval with local context analysis. ACM Trans. Inf. Syst., 18(1):79-112, 2000.
[29] C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proc. 10th Intl. Conf. on Information and Knowledge Management, pages 403-410, 2001.
A Study of Poisson Query Generation Model for Information Retrieval

Abstract: Many variants of language models have been proposed for information retrieval. Most existing models are based on the multinomial distribution and score documents by a query likelihood computed under a query generation probabilistic model. In this paper, we propose and study a new family of query generation models based on the Poisson distribution. We show that while in their simplest forms, the new family of models and the existing multinomial models are equivalent, they behave differently for many smoothing methods. We show that the Poisson model has several advantages over the multinomial model, including naturally accommodating per-term smoothing and allowing for more accurate background modeling. We present several variants of the new model corresponding to different smoothing methods, and evaluate them on four representative TREC test collections. The results show that while their basic models perform comparably, the Poisson model can outperform the multinomial model with per-term smoothing. The performance can be further improved with two-stage smoothing.

1. INTRODUCTION
As a new type of probabilistic retrieval models, language models have been shown to be effective for many retrieval tasks [21, 28, 14, 4]. Among the many variants of language models proposed, the most popular and fundamental one is the query-generation language model [21, 13], which leads to the query-likelihood scoring method for ranking documents. In such a model, given a query q and a document d, we compute the likelihood of generating query q with a model estimated based on document d, i.e., the conditional probability p(q|d).
We can then rank documents based on the\nlikelihood of generating the query.\nVirtually all the existing query generation language\nmodels are based on either multinomial distribution [19, 6, 28]\nor multivariate Bernoulli distribution [21, 18]. The\nmultinomial distribution is especially popular and also shown to be\nquite effective. The heavy use of multinomial distribution is\npartly due to the fact that it has been successfully used in\nspeech recognition, where multinomial distribution is a\nnatural choice for modeling the occurrence of a particular word\nin a particular position in text. Compared with\nmultivariate Bernoulli, multinomial distribution has the advantage\nof being able to model the frequency of terms in the query;\nin contrast, multivariate Bernoulli only models the presence\nand absence of query terms, thus cannot capture different\nfrequencies of query terms. However, multivariate Bernoulli\nalso has one potential advantage over multinomial from the\nviewpoint of retrieval: in a multinomial distribution, the\nprobabilities of all the terms must sum to 1, making it hard\nto accommodate per-term smoothing, while in a\nmultivariate Bernoulli, the presence probabilities of different terms\nare completely independent of each other, easily\naccommodating per-term smoothing and weighting. Note that term\nabsence is also indirectly captured in a multinomial model\nthrough the constraint that all the term probabilities must\nsum to 1.\nIn this paper, we propose and study a new family of query\ngeneration models based on the Poisson distribution. In this\nnew family of models, we model the frequency of each term\nindependently with a Poisson distribution. 
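For contrast with the Poisson model developed in this paper, the standard query-likelihood scorer under a Dirichlet-smoothed multinomial document model can be sketched as follows. This is a textbook baseline, included only as an illustration; the data structures are invented for the example.

```python
import math
from collections import Counter

def multinomial_query_likelihood(query, doc, collection_cf, collection_len,
                                 mu=2000):
    """Log p(q|d) under a Dirichlet-smoothed multinomial language model:
    p(w|d) = (tf(w,d) + mu * p(w|C)) / (|d| + mu).
    query, doc: lists of terms; collection_cf: term -> collection count."""
    tf = Counter(doc)
    dlen = len(doc)
    score = 0.0
    for w in query:
        p_c = collection_cf.get(w, 0.5) / collection_len  # background prob
        score += math.log((tf.get(w, 0) + mu * p_c) / (dlen + mu))
    return score
```

Note that the score only involves terms that appear in the query, and that the multinomial constraint (probabilities summing to 1) ties all term estimates together; both points are where the Poisson model below will differ.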
To score a document, we would first estimate a multivariate Poisson model based on the document, and then score it based on the likelihood of the query given by the estimated Poisson model. In some sense, the Poisson model combines the advantage of the multinomial in modeling term frequency and the advantage of the multivariate Bernoulli in accommodating per-term smoothing. Indeed, similar to the multinomial distribution, the Poisson distribution models term frequencies, but without the constraint that all the term probabilities must sum to 1; and similar to the multivariate Bernoulli, it models each term independently, and thus can easily accommodate per-term smoothing.

As in the existing work on multinomial language models, smoothing is critical for this new family of models. We derive several smoothing methods for the Poisson model in parallel to those used for multinomial distributions, and compare the corresponding retrieval models with those based on multinomial distributions. We find that while some smoothing methods lead the new model and the multinomial model to exactly the same formula, with other smoothing methods they diverge, and the Poisson model brings in more flexibility for smoothing. In particular, a key difference is that the Poisson model can naturally accommodate per-term smoothing, which is hard to achieve with a multinomial model without a heuristic twist of the semantics of the generative model. We exploit this potential advantage to develop a new term-dependent smoothing algorithm for the Poisson model and show that this new smoothing algorithm can improve performance over term-independent smoothing algorithms using either the Poisson or the multinomial model. This advantage is seen for both one-stage and two-stage smoothing. Another potential advantage of the Poisson model is that its corresponding background model for smoothing can be improved through using a mixture model that has a closed form formula.
This new background model is shown to outperform\nthe standard background model and reduce the sensitivity\nof retrieval performance to the smoothing parameter.\nThe rest of the paper is organized as follows. In Section 2,\nwe introduce the new family of query generation models with\nPoisson distribution, and present various smoothing\nmethods which lead to different retrieval functions. In Section 3,\nwe analytically compare the Poisson language model with\nthe multinomial language model, from the perspective of\nretrieval. We then design empirical experiments to compare\nthe two families of language models in Section 4. We discuss\nthe related work in 5 and conclude in 6.\n2. QUERY GENERATION WITH POISSON\nPROCESS\nIn the query generation framework, a basic assumption is\nthat a query is generated with a model estimated based on\na document. In most existing work [12, 6, 28, 29], people\nassume that each query word is sampled independently from\na multinomial distribution. Alternatively, we assume that a\nquery is generated by sampling the frequency of words from\na series of independent Poisson processes [20].\n2.1 The Generation Process\nLet V = {w1, ..., wn} be a vocabulary set. Let w be a\npiece of text composed by an author and c(w1), ..., c(wn)\nbe a frequency vector representing w, where c(wi, w) is the\nfrequency count of term wi in text w. In retrieval, w could\nbe either a query or a document. We consider the frequency\ncounts of the n unique terms in w as n different types of\nevents, sampled from n independent homogeneous Poisson\nprocesses, respectively.\nSuppose t is the time period during which the author\ncomposed the text. With a homogeneous Poisson process, the\nfrequency count of each event, i.e., the number of\noccurrences of wi, follows a Poisson distribution with associated\nparameter \u03bbit, where \u03bbi is a rate parameter characterizing\nthe expected number of wi in a unit time. 
The probability density function of such a Poisson distribution is given by

\[ P(c(w_i, w) = k \mid \lambda_i t) = \frac{e^{-\lambda_i t} (\lambda_i t)^k}{k!} \]

Without loss of generality, we set t to the length of the text w (the author writes one word per unit time), i.e., t = |w|.

With n such independent Poisson processes, each explaining the generation of one term in the vocabulary, the likelihood of w being generated from these Poisson processes can be written as

\[ p(w \mid \Lambda) = \prod_{i=1}^{n} p(c(w_i, w) \mid \Lambda) = \prod_{i=1}^{n} \frac{e^{-\lambda_i |w|} (\lambda_i |w|)^{c(w_i, w)}}{c(w_i, w)!} \]

where Λ = {λ_1, ..., λ_n} and |w| = Σ_{i=1}^{n} c(w_i, w). We refer to these n independent Poisson processes with parameters Λ as a Poisson language model.

Let D = {d_1, ..., d_m} be an observed set of document samples generated from the Poisson processes above. The maximum likelihood estimate (MLE) of λ_i is

\[ \hat{\lambda}_i = \frac{\sum_{d \in D} c(w_i, d)}{\sum_{d \in D} \sum_{w' \in V} c(w', d)} \]

Note that this MLE is different from the MLE for the Poisson distribution without considering the document lengths, which appears in [22, 24].

Given a document d, we may estimate a Poisson language model Λ_d using d as a sample. The likelihood that a query q is generated from the document language model Λ_d can be written as

\[ p(q \mid d) = \prod_{w \in V} p(c(w, q) \mid \Lambda_d) \qquad (1) \]

This representation is clearly different from the multinomial query generation model in that (1) the likelihood includes all the terms in the vocabulary V, instead of only those appearing in q, and (2) the event space of this model is the frequencies of each term, instead of the appearance of terms.

In practice, we have the flexibility to choose the vocabulary V. In one extreme, we can use the vocabulary of the whole collection. However, this may bring in noise and considerable computational cost.
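Before turning to the choice of vocabulary, the estimation and scoring steps above can be sketched in code. This is a minimal illustration of Equation 1 and the MLE, not the authors' implementation; the function names and the word-count-dictionary representation are our own:

```python
import math
from collections import Counter

def poisson_lm_mle(docs, vocab):
    # MLE of the rates: lambda_i = sum_d c(w_i, d) / sum_d |d|
    total_len = sum(sum(d.values()) for d in docs)
    return {w: sum(d.get(w, 0) for d in docs) / total_len for w in vocab}

def log_likelihood(text, rates):
    # log p(w | Lambda): each count c(w_i, w) ~ Poisson(lambda_i * t), with t = |w|
    t = sum(text.values())
    ll = 0.0
    for w, lam in rates.items():
        k = text.get(w, 0)
        mean = lam * t
        ll += -mean - math.lgamma(k + 1)  # lgamma(k + 1) = log k!
        if k:
            ll += k * math.log(mean)
    return ll
```

Note that the product runs over the whole vocabulary passed in, mirroring the observation above that the likelihood involves all terms in V, not only those appearing in the text.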
In the other extreme, we may focus on the terms in the query and ignore other terms, but some useful information may be lost by ignoring the non-query terms. As a compromise, we may conflate all the non-query terms into one single pseudo term. In other words, we may assume that there is exactly one non-query term in the vocabulary for each query. In our experiments, we adopt this pseudo non-query term strategy.

A document can be scored with the likelihood in Equation 1. However, if a query term is unseen in the document, the MLE of the Poisson distribution would assign zero probability to the term, causing the probability of the query to be zero. As in existing language modeling approaches, the main challenge in constructing a reasonable retrieval model is to find a smoothed language model for p(·|d).

2.2 Smoothing in Poisson Retrieval Model

In general, we want to assign non-zero rates to the query terms that are not seen in document d. Many smoothing methods have been proposed for multinomial language models [2, 28, 29]. In general, we have to discount the probabilities of some words seen in the text to leave some extra probability mass for the unseen words. In Poisson language models, however, we do not have the same constraint as in a multinomial model (i.e., Σ_{w∈V} p(w|d) = 1). Thus we do not have to discount the probability of seen words in order to give a non-zero rate to an unseen word. Instead, we only need to guarantee that

\[ \sum_{k = 0, 1, 2, \ldots} p(c(w, d) = k \mid d) = 1. \]

In this section, we introduce three different strategies to smooth a Poisson language model, and show how they lead to different retrieval functions.

2.2.1 Bayesian Smoothing using Gamma Prior

Following the risk minimization framework in [11], we assume that a document is generated by the arrival of terms in a time period of |d| according to the document language model, which essentially consists of a vector of Poisson rates, one per term, i.e., Λ_d = ⟨λ_{d,1}, ..., λ_{d,|V|}⟩.

Each document is assumed to be generated from a potentially different model. Given a particular document d, we want to estimate Λ_d. The rate of a term is estimated independently of the other terms. We use Bayesian estimation with the following Gamma prior, which has two parameters, α and β:

\[ \mathrm{Gamma}(\lambda \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \lambda^{\alpha - 1} e^{-\beta \lambda} \]

For each term w, the parameters α_w and β_w are chosen to be α_w = μ · λ_{C,w} and β_w = μ, where μ is a parameter and λ_{C,w} is the rate of w estimated from some background language model, usually the collection language model.

The posterior distribution of Λ_d is given by

\[ p(\Lambda_d \mid d, C) \propto \prod_{w \in V} e^{-\lambda_w (|d| + \mu)} \, \lambda_w^{c(w, d) + \mu \lambda_{C,w} - 1} \]

which is a product of |V| Gamma distributions, with parameters c(w, d) + μλ_{C,w} and |d| + μ for each word w.
Given that the mean of a Gamma distribution is α/β, we have

\[ \hat{\lambda}_{d,w} = \int_{\lambda_{d,w}} \lambda_{d,w} \, p(\lambda_{d,w} \mid d, C) \, d\lambda_{d,w} = \frac{c(w, d) + \mu \lambda_{C,w}}{|d| + \mu} \]

This is precisely the smoothed estimate of the multinomial language model with a Dirichlet prior [28].

2.2.2 Interpolation (Jelinek-Mercer) Smoothing

Another straightforward method is to decompose the query generation model into a mixture of two component models. One is the document language model estimated with the maximum likelihood estimator, and the other is a model estimated from the collection background, p(·|C), which assigns a non-zero rate to w.

For example, we may use an interpolation coefficient between 0 and 1 (i.e., δ ∈ [0, 1]). With this simple interpolation, we can score a document with

\[ \mathrm{Score}(d, q) = \sum_{w \in V} \log\big( (1 - \delta) \, p(c(w, q) \mid d) + \delta \, p(c(w, q) \mid C) \big) \qquad (2) \]

Using the maximum likelihood estimator for p(·|d), we have λ_{d,w} = c(w,d)/|d|, and thus Equation 2 becomes

\[ \mathrm{Score}(d, q) \propto \sum_{w \in d \cap q} \left[ \log\left( 1 + \frac{1 - \delta}{\delta} \cdot \frac{e^{-\lambda_{d,w} |q|} (\lambda_{d,w} |q|)^{c(w,q)}}{c(w, q)! \cdot p(c(w, q) \mid C)} \right) - \log \frac{(1 - \delta) e^{-\lambda_{d,w} |q|} + \delta \, p(c(w, q) = 0 \mid C)}{1 - \delta + \delta \, p(c(w, q) = 0 \mid C)} \right] + \sum_{w \in d} \log \frac{(1 - \delta) e^{-\lambda_{d,w} |q|} + \delta \, p(c(w, q) = 0 \mid C)}{1 - \delta + \delta \, p(c(w, q) = 0 \mid C)} \]

We can also use a Poisson language model for p(·|C), or some other frequency-based model. In the retrieval formula above, the first summation can be computed efficiently. The second summation can actually be treated as a document prior, which penalizes long documents.

As the second summation is difficult to compute efficiently, we conflate all non-query terms into one pseudo non-query term, denoted as N.
Using the pseudo-term formulation and a Poisson collection model, we can rewrite the retrieval formula as

\[ \mathrm{Score}(d, q) \propto \sum_{w \in d \cap q} \log\left( 1 + \frac{1 - \delta}{\delta} \cdot \frac{e^{-\lambda_{d,w} |q|} (\lambda_{d,w} |q|)^{c(w,q)}}{e^{-\lambda_{C,w} |q|} (\lambda_{C,w} |q|)^{c(w,q)}} \right) + \log \frac{(1 - \delta) e^{-\lambda_{d,N} |q|} + \delta e^{-\lambda_{C,N} |q|}}{1 - \delta + \delta e^{-\lambda_{C,N} |q|}} \qquad (3) \]

where λ_{d,N} = (|d| − Σ_{w∈q} c(w, d)) / |d| and λ_{C,N} = (|C| − Σ_{w∈q} c(w, C)) / |C|.

2.2.3 Two-Stage Smoothing

As discussed in [29], smoothing plays two roles in retrieval: (1) to improve the estimation of the document language model, and (2) to explain the common terms in the query. In order to distinguish the content words from the non-discriminative words in a query, we follow [29] and assume that a query is generated by sampling from a two-component mixture of Poisson language models, with one component being the document model Λ_d and the other being a query background language model p(·|U). p(·|U) models the typical term frequencies in the user's queries. We may then score each document with the query likelihood computed using the following two-stage smoothing model:

\[ p(c(w, q) \mid \Lambda_d, U) = (1 - \delta) \, p(c(w, q) \mid \Lambda_d) + \delta \, p(c(w, q) \mid U) \qquad (4) \]

where δ is a parameter roughly indicating the amount of noise in q. This looks similar to interpolation smoothing, except that p(·|Λ_d) should now be a smoothed language model, instead of one estimated with the MLE.

With no prior knowledge on p(·|U), we can set it to p(·|C). Any smoothing method for the document language model can be used to estimate p(·|d), such as the Gamma smoothing discussed in Section 2.2.1.

The empirical study of the smoothing methods is presented in Section 4.

3.
ANALYSIS OF POISSON LANGUAGE MODEL

From the previous section, we notice that the Poisson language model has a strong connection to the multinomial language model. This is expected, since they both belong to the exponential family [26]. However, there are many differences when these two families of models are applied with different smoothing methods. From the perspective of retrieval, will these two language models perform equivalently? If not, which model provides more benefits to retrieval, or provides flexibility that could lead to potential benefits? In this section, we analytically discuss the retrieval features of the Poisson language models by comparing their behavior with that of the multinomial language models.

3.1 The Equivalence of Basic Models

Let us begin with the assumption that all the query terms appear in every document. Under this assumption, no smoothing is needed. A document can be scored by the log-likelihood of the query with the maximum likelihood estimate:

\[ \mathrm{Score}(d, q) = \sum_{w \in V} \log \frac{e^{-\lambda_{d,w} |q|} (\lambda_{d,w} |q|)^{c(w,q)}}{c(w, q)!} \qquad (5) \]

Using the MLE, we have λ_{d,w} = c(w,d) / Σ_{w'∈V} c(w',d). Thus

\[ \mathrm{Score}(d, q) \propto \sum_{c(w,q) > 0} c(w, q) \log \frac{c(w, d)}{\sum_{w' \in V} c(w', d)} \]

This is exactly the log-likelihood of the query if the document language model is a multinomial with the maximum likelihood estimate. Indeed, even with Gamma smoothing, when plugging λ_{d,w} = (c(w,d) + μλ_{C,w}) / (|d| + μ) and λ_{C,w} = c(w,C)/|C| into Equation 5, it is easy to show that

\[ \mathrm{Score}(d, q) \propto \sum_{w \in q \cap d} c(w, q) \log\left( 1 + \frac{c(w, d)}{\mu \cdot \frac{c(w, C)}{|C|}} \right) + |q| \log \frac{\mu}{|d| + \mu} \qquad (6) \]

which is exactly the Dirichlet retrieval formula in [28]. Note that this equivalence holds only when the document length variation is modeled with a Poisson process.

This derivation indicates the equivalence of the basic Poisson and multinomial language models for retrieval.
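This equivalence can also be checked numerically. The following sketch (our own illustration, using toy word-count vectors, not the paper's experimental code) scores a query under the Gamma-smoothed Poisson model of Equation 5 and under the Dirichlet-smoothed multinomial model, and confirms that the two scores differ only by a document-independent constant, i.e., they rank documents identically:

```python
import math
from collections import Counter

def poisson_score(query, doc, coll, mu):
    # Log-likelihood of the query under the Gamma-smoothed Poisson model (Eq. 5),
    # with the vocabulary taken to be all collection terms.
    qlen = sum(query.values())
    dlen = sum(doc.values())
    clen = sum(coll.values())
    score = 0.0
    for w in coll:
        lam = (doc.get(w, 0) + mu * coll[w] / clen) / (dlen + mu)
        k = query.get(w, 0)
        score += -lam * qlen - math.lgamma(k + 1)
        if k:
            score += k * math.log(lam * qlen)
    return score

def dirichlet_score(query, doc, coll, mu):
    # Log-likelihood under the Dirichlet-smoothed multinomial model [28].
    dlen = sum(doc.values())
    clen = sum(coll.values())
    return sum(k * math.log((doc.get(w, 0) + mu * coll[w] / clen) / (dlen + mu))
               for w, k in query.items())
```

The smoothed Poisson rates sum to one over the collection vocabulary, so the −λ_{d,w}|q| terms contribute the same constant −|q| for every document, which is why the score differences between documents coincide.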
With other smoothing strategies, however, the two models would be different. Nevertheless, given this equivalence of the basic models, we can expect the Poisson language model to perform comparably to the multinomial language model in retrieval when only simple smoothing is explored. Based on this equivalence analysis, one may ask why we should pursue the Poisson language model at all. In the following sections, we show that despite the equivalence of their basic models, the Poisson language model brings in extra flexibility for exploring advanced techniques on various retrieval features, which could not be achieved with multinomial language models.

3.2 Term Dependent Smoothing

One flexibility of the Poisson language model is that it provides a natural framework to accommodate term-dependent (per-term) smoothing. Existing work on language model smoothing has already shown that different types of queries should be smoothed differently according to how discriminative the query terms are. [7] also predicted that different terms should have different smoothing weights. With multinomial query generation models, people usually use a single smoothing coefficient to control the combination of the document model and the background model [28, 29]. This parameter can be made specific to different queries, but it always has to be a constant over all the terms. This is mandatory, since a multinomial language model has the constraint that Σ_{w∈V} p(w|d) = 1. However, from a retrieval perspective, different terms may need to be smoothed differently even if they are in the same query. For example, a non-discriminative term (e.g., "the", "is") is expected to be explained more by the background model, while a content term (e.g., "retrieval", "bush") in the query should be explained more by the document model. Therefore, a better way of smoothing would be to set the interpolation coefficient (i.e., δ in Formula 2 and Formula 3) specifically for each term.
Since the Poisson language model does not have the sum-to-one constraint across terms, it can easily accommodate per-term smoothing without needing to heuristically twist the semantics of a generative model, as in the case of multinomial language models. Below we present a possible way to explore term-dependent smoothing with Poisson language models.

Essentially, we want to use a term-specific smoothing coefficient δ in the linear combination, denoted as δ_w. This coefficient should intuitively be larger if w is a common word and smaller if it is a content word. The key problem is to find a method to assign reasonable values to δ_w. Empirical tuning is infeasible for so many parameters. We may instead estimate the parameters Δ = {δ_1, ..., δ_{|V|}} by maximizing the likelihood of the query given the mixture model of p(q|Λ_Q) and p(q|U), where Λ_Q is the true query model that generates the query and p(q|U) is a query background model as discussed in Section 2.2.3.

With the model p(q|Λ_Q) hidden, the query likelihood is

\[ p(q \mid \Delta, U) = \int_{\Lambda_Q} \prod_{w \in V} \big( (1 - \delta_w) \, p(c(w, q) \mid \Lambda_Q) + \delta_w \, p(c(w, q) \mid U) \big) \, P(\Lambda_Q \mid U) \, d\Lambda_Q \]

If we have relevant documents for each query, we can approximate the query model space with the language models of all the relevant documents. Without relevant documents, we opt to approximate the query model space with the models of all the documents in the collection. Setting p(·|U) to p(·|C), the query likelihood becomes

\[ p(q \mid \Delta, U) = \sum_{d \in C} \pi_d \prod_{w \in V} \big( (1 - \delta_w) \, p(c(w, q) \mid \hat{\Lambda}_d) + \delta_w \, p(c(w, q) \mid C) \big) \]

where π_d = p(Λ̂_d|U).
p(·|Λ̂_d) is an estimated Poisson language model for document d.

If we have prior knowledge on p(Λ̂_d|U), such as which documents are relevant to the query, we can set π_d accordingly, because what we want is to find the Δ that maximizes the likelihood of the query given the relevant documents. Without this prior knowledge, we can leave the π_d as free parameters, and use the EM algorithm to estimate both π_d and Δ. The updating formulas are

\[ \pi_d^{(k+1)} = \frac{\pi_d \prod_{w \in V} \big( (1 - \delta_w) \, p(c(w, q) \mid \hat{\Lambda}_d) + \delta_w \, p(c(w, q) \mid C) \big)}{\sum_{d' \in C} \pi_{d'} \prod_{w \in V} \big( (1 - \delta_w) \, p(c(w, q) \mid \hat{\Lambda}_{d'}) + \delta_w \, p(c(w, q) \mid C) \big)} \]

and

\[ \delta_w^{(k+1)} = \sum_{d \in C} \pi_d \, \frac{\delta_w \, p(c(w, q) \mid C)}{(1 - \delta_w) \, p(c(w, q) \mid \hat{\Lambda}_d) + \delta_w \, p(c(w, q) \mid C)} \]

As discussed in [29], we only need to run the EM algorithm for a few iterations, so the computational cost is relatively low. We again assume that our vocabulary contains all the query terms plus a pseudo non-query term. Note that the formulas do not give an explicit way of estimating the coefficient for the unseen non-query term. In our experiments, we set it to the average of δ_w over all query terms.

With this flexibility, we expect Poisson language models to improve retrieval performance, especially for verbose queries, where the query terms have varied discriminative values. In Section 4, we use empirical experiments to test this hypothesis.

3.3 Mixture Background Models

Another flexibility is to explore different background (collection) models (i.e., p(·|U), or p(·|C)). One common assumption made in language modeling information retrieval is that the background model is a homogeneous model of the document models [28, 29]. Similarly, we can also make the assumption that the collection model is a Poisson language model, with the rates λ_{C,w} = Σ_{d∈C} c(w,d) / |C|.
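The EM updates of Section 3.2 are straightforward to implement. The sketch below is our own illustration, not the paper's code: `probs_d[d][w]` stands for a precomputed p(c(w,q)|Λ̂_d) and `probs_c[w]` for p(c(w,q)|C); as expected, the iterations drive δ_w up for terms that are better explained by the background:

```python
def em_per_term_delta(probs_d, probs_c, iters=5):
    # probs_d: {doc_id: {word: p(c(w,q) | doc model)}}; probs_c: {word: p(c(w,q) | C)}
    docs = list(probs_d)
    words = list(probs_c)
    pi = {d: 1.0 / len(docs) for d in docs}      # pi_d, uniform start
    delta = {w: 0.5 for w in words}              # delta_w, neutral start
    for _ in range(iters):
        # Likelihood of the query under each candidate query model (document).
        lik = {}
        for d in docs:
            l = 1.0
            for w in words:
                l *= (1 - delta[w]) * probs_d[d][w] + delta[w] * probs_c[w]
            lik[d] = l
        total = sum(pi[d] * lik[d] for d in docs)
        pi = {d: pi[d] * lik[d] / total for d in docs}
        # delta_w update: posterior share of the background component, averaged over docs.
        delta = {w: sum(pi[d] * delta[w] * probs_c[w] /
                        ((1 - delta[w]) * probs_d[d][w] + delta[w] * probs_c[w])
                        for d in docs)
                 for w in words}
    return delta
```

Since the π_d sum to one and each posterior share lies in (0, 1), every updated δ_w stays in [0, 1] without any explicit clipping.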
However, this assumption usually does not hold, since the collection is far more complex than a single document. Indeed, the collection usually consists of a mixture of documents with various genres, authors, topics, etc. Treating the collection model as a mixture of document models, instead of a single pseudo-document model, is more reasonable. Existing work on multinomial language modeling has already shown that better modeling of the background improves retrieval performance, e.g., with clusters [15, 10], neighboring documents [25], and aspects [8, 27]. All these approaches can easily be adapted to Poisson language models. However, a common problem of these approaches is that they all require heavy computation to construct the background model. With Poisson language modeling, we show that it is possible to model the mixture background without paying the heavy computational cost.

The Poisson mixture [3] has been proposed to model a collection of documents, and can fit the data much better than a single Poisson. The basic idea is to assume that the collection is generated from a mixture of Poisson models, which has the general form

\[ p(x = k \mid PM) = \int_{\lambda} p(\lambda) \, p(x = k \mid \lambda) \, d\lambda \]

where p(·|λ) is a single Poisson model and p(λ) is an arbitrary probability density function. There are three well-known Poisson mixtures [3]: the 2-Poisson, the Negative Binomial, and Katz's K-Mixture [9]. Note that the 2-Poisson model has actually been explored in probabilistic retrieval models, which led to the well-known BM25 formula [22].

All these mixtures have closed forms, and can be estimated from the collection of documents efficiently. This is an advantage over multinomial mixture models, such as PLSI [8] and LDA [1], for retrieval.
For example, the probability density function of Katz's K-Mixture is given as

\[ p(c(w) = k \mid \alpha_w, \beta_w) = (1 - \alpha_w) \, \eta_{k,0} + \frac{\alpha_w}{\beta_w + 1} \left( \frac{\beta_w}{\beta_w + 1} \right)^k \]

where η_{k,0} = 1 when k = 0, and 0 otherwise.

With the observation of a collection of documents, α_w and β_w can be estimated as

\[ \beta_w = \frac{cf(w) - df(w)}{df(w)} \quad \text{and} \quad \alpha_w = \frac{cf(w)}{N \beta_w} \]

where cf(w) and df(w) are the collection frequency and document frequency of w, and N is the number of documents in the collection. To account for different document lengths, we assume that β_w is a reasonable estimate for generating a document of the average length, and use β = (β_w / avdl) · |q| to generate the query. This Poisson mixture model can easily be used to replace p(·|C) in the retrieval functions 3 and 4.

3.4 Other Possible Flexibilities

In addition to term-dependent smoothing and an efficient mixture background, the Poisson language model has some other potential advantages. For example, in Section 2, we saw that Formula 2 introduces a component that performs document length penalization. Intuitively, a document with more unique words is penalized more; on the other hand, a document that is exactly n copies of another document is not over-penalized. This feature is desirable and is not achieved by the Dirichlet model [5]. Potentially, this component could penalize a document according to what types of terms it contains. With term-specific settings of δ, we could get even more flexibility for document length normalization.

Pseudo-feedback is yet another interesting direction where the Poisson model might be able to show its advantage. With model-based feedback, we could again relax the combination coefficients of the feedback model and the background model, and allow different terms to contribute differently to the feedback model.
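Returning to the K-Mixture of Section 3.3, its closed-form estimation is what makes the mixture background essentially free to compute. A sketch (our own illustration, using only the collection frequency cf(w), document frequency df(w), and number of documents N; it assumes cf(w) > df(w) so that β_w > 0):

```python
def katz_k_mixture(cf, df, n_docs):
    # Closed-form estimates: beta_w = (cf - df) / df, alpha_w = cf / (N * beta_w).
    # Assumes cf > df, so beta is strictly positive.
    beta = (cf - df) / df
    alpha = cf / (n_docs * beta)
    return alpha, beta

def k_mixture_prob(k, alpha, beta):
    # p(c(w) = k) = (1 - alpha) * [k == 0] + alpha/(beta+1) * (beta/(beta+1))^k
    p = alpha / (beta + 1) * (beta / (beta + 1)) ** k
    if k == 0:
        p += 1 - alpha
    return p
```

The geometric tail sums to α, so the distribution sums to one, and its mean works out to α·β = cf(w)/N, the average number of occurrences of w per document.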
We could also utilize the relevant documents to learn better per-term smoothing coefficients.

4. EVALUATION

In Section 3, we analytically compared the Poisson language models and multinomial language models from the perspective of query generation and retrieval. In this section, we compare these two families of models empirically. Experiment results show that the Poisson model with per-term smoothing outperforms the multinomial model, and that the performance can be further improved with two-stage smoothing. Using a Poisson mixture as the background model also improves retrieval performance.

4.1 Datasets

Since retrieval performance can vary significantly from one test collection to another, and from one query to another, we select four representative TREC test collections: AP, Trec7, Trec8, and Wt2g (Web). To cover different types of queries, we follow [28, 5] and construct short-keyword (SK, keyword title), short-verbose (SV, one-sentence description), and long-verbose (LV, multiple sentences) queries. The documents are stemmed with the Porter stemmer, and we do not remove any stop words. For each parameter, we vary its value to cover a reasonably wide range.

4.2 Comparison to Multinomial

We compare the performance of the Poisson retrieval models and multinomial retrieval models using interpolation (Jelinek-Mercer, JM) smoothing and Bayesian smoothing with conjugate priors. Table 1 shows that the two JM-smoothed models perform similarly on all data sets. Since Dirichlet smoothing for the multinomial language model and Gamma smoothing for the Poisson language model lead to the same retrieval formula, the performance of these two models is presented jointly. We see that the Dirichlet/Gamma smoothing methods outperform both Jelinek-Mercer smoothing methods.
The parameter sensitivity curves for the two Jelinek-Mercer smoothing methods are shown in Figure 1. Clearly, these two methods perform similarly, both in terms of optimality and in terms of sensitivity. This similarity of performance is expected, as discussed in Section 3.1.

[Figure 1: Poisson and multinomial perform similarly with Jelinek-Mercer smoothing. Average precision vs. δ on Trec8, for JM-Multinomial and JM-Poisson with SK, SV, and LV queries.]

Table 1: Performance comparison between Poisson and multinomial retrieval models: the basic models perform comparably; term-dependent two-stage smoothing significantly improves Poisson.

Data     Query  JM-Multinomial        JM-Poisson            Dirichlet/Gamma       Per-term 2-Stage Poisson
                MAP    InitPr Pr@5d   MAP    InitPr Pr@5d   MAP    InitPr Pr@5d   MAP     InitPr Pr@5d
AP88-89  SK     0.203  0.585  0.356   0.203  0.585  0.358   0.224  0.629  0.393   0.226   0.630  0.396
         SV     0.187  0.580  0.361   0.183  0.571  0.345   0.204  0.613  0.387   0.217*  0.603  0.390
         LV     0.283  0.716  0.480   0.271  0.692  0.470   0.291  0.710  0.496   0.304*  0.695  0.510
Trec7    SK     0.167  0.635  0.400   0.168  0.635  0.404   0.186  0.687  0.428   0.185   0.646  0.436
         SV     0.174  0.655  0.432   0.176  0.653  0.432   0.182  0.666  0.432   0.196*  0.660  0.440
         LV     0.223  0.730  0.496   0.215  0.766  0.488   0.224  0.748  0.520   0.236*  0.738  0.512
Trec8    SK     0.239  0.621  0.440   0.239  0.621  0.436   0.257  0.718  0.496   0.256   0.704  0.468
         SV     0.231  0.686  0.448   0.234  0.702  0.456   0.228  0.691  0.456   0.246*  0.692  0.476
         LV     0.265  0.796  0.548   0.261  0.757  0.520   0.260  0.741  0.492   0.274*  0.766  0.508
Web      SK     0.250  0.616  0.380   0.250  0.616  0.380   0.302  0.767  0.468   0.307   0.739  0.468
         SV     0.214  0.611  0.392   0.217  0.609  0.384   0.273  0.693  0.508   0.292*  0.703  0.480
         LV     0.266  0.790  0.464   0.259  0.776  0.452   0.283  0.756  0.496   0.311*  0.759  0.488

An asterisk (*) indicates that the difference between the performance of the term-dependent two-stage smoothing and that of the Dirichlet/Gamma single smoothing is statistically significant according to the Wilcoxon signed rank test at the level of 0.05.

Although the Poisson model and the multinomial model are similar in terms of the basic model and/or with simple smoothing methods, the Poisson model has great potential and flexibility to be further improved. As shown in the rightmost columns of Table 1, the term-dependent two-stage Poisson model consistently outperforms the basic smoothing models, especially for verbose queries. This model is given in Formula 4, with Gamma smoothing for the document model p(·|d) and a term-dependent δ_w. The parameter μ of the first-stage Gamma smoothing is tuned empirically. The combination coefficients (i.e., Δ) are estimated with the EM algorithm of Section 3.2. The parameter sensitivity curves for Dirichlet/Gamma and the per-term two-stage smoothing model are plotted in Figure 2. The per-term two-stage smoothing method is less sensitive to the parameter μ than Dirichlet/Gamma, and yields better optimal performance.

[Figure 2: Term-dependent two-stage smoothing of Poisson outperforms Dirichlet/Gamma. Average precision vs. μ on AP with SV queries.]

In the following subsections, we conduct experiments to demonstrate how the flexibility of the Poisson model can be utilized to achieve better performance, which we cannot achieve with multinomial language models.

4.3 Term Dependent Smoothing

To test the effectiveness of term-dependent smoothing, we conduct the following two experiments. In the first experiment, we relax the constant coefficient in the simple Jelinek-Mercer smoothing formula (i.e., Formula 3), and use the EM algorithm proposed in Section 3.2 to find a δ_w for each unique term.
Since we are using the EM algorithm to iteratively estimate the parameters, we usually do not want the probability p(·|d) to be zero. We therefore use a simple Laplace method to slightly smooth the document model before it enters the EM iterations. The documents are then still scored with Formula 3, but using the learned δ_w. The results are labeled JM+L. in Table 2.

Table 2: Term-dependent smoothing improves retrieval performance (MAP).

Data    Q    JM     JM     JM+L.   2-Stage  2-Stage
Per-term:    No     Yes    Yes     No       Yes
AP      SK   0.203  0.204  0.206   0.223    0.226*
        SV   0.183  0.189  0.214*  0.204    0.217*
Trec7   SK   0.168  0.171  0.174   0.186    0.185
        SV   0.176  0.147  0.198*  0.194    0.196
Trec8   SK   0.239  0.240  0.227*  0.257    0.256
        SV   0.234  0.223  0.249*  0.242    0.246*
Web     SK   0.250  0.236  0.220*  0.291    0.307*
        SV   0.217  0.232  0.261*  0.273    0.292*

An asterisk (*) in the JM+L. column indicates that the difference between the JM+L. method and the JM method is statistically significant; an asterisk (*) in the rightmost column means that the difference between the term-dependent two-stage method and the query-dependent two-stage method is statistically significant.

With term-dependent coefficients, the performance of the Jelinek-Mercer Poisson model is improved in most cases. However, in some cases (e.g., Trec7/SV), it performs poorly. This might be caused by the problem of EM estimation with unsmoothed document models. Once non-zero probability is assigned to all the terms before entering the EM iterations, the performance on verbose queries can be improved significantly. This indicates that there is still room to find better methods of estimating δ_w. Note that neither the per-term JM method nor the JM+L. method has a parameter to tune.

As shown in Table 1, the term-dependent two-stage smoothing can significantly improve retrieval performance.
To understand whether the improvement is contributed by the term-dependent smoothing or by the two-stage smoothing framework, we design another experiment to compare the per-term two-stage smoothing with the two-stage smoothing method proposed in [29]. Their method finds coefficients specific to the query, so that a verbose query uses a higher δ. However, since their model is based on multinomial language modeling, they could not obtain per-term coefficients. We adapt their method to the Poisson two-stage smoothing, and estimate a per-query coefficient for all the terms. We compare the performance of such a model with the per-term two-stage smoothing model, and present the results in the two rightmost columns of Table 2. Again, we see that the per-term two-stage smoothing outperforms the per-query two-stage smoothing, especially for verbose queries. The improvement is not as large as that of the per-term smoothing method over Dirichlet/Gamma. This is expected, since the per-query smoothing has already addressed the query discrimination problem to some extent. This experiment shows that even if the smoothing is already per-query, making it per-term is still beneficial. In brief, per-term smoothing improves the retrieval performance of both the one-stage and the two-stage smoothing methods.

4.4 Mixture Background Model

In this section, we conduct experiments to examine the benefits of using a mixture background model without extra computational cost, which cannot be achieved with multinomial models. Specifically, in retrieval formula 3, instead of using a single Poisson distribution to model the background p(·|C), we use Katz's K-Mixture model, which is essentially a mixture of Poisson distributions. p(·|C) can be computed efficiently with simple collection statistics, as discussed in Section 3.3.

Table 3: K-Mixture background model improves retrieval performance (MAP).

Data    Query  JM Poisson  JM K-Mixture
AP      SK     0.203       0.204
        SV     0.183       0.188*
Trec-7  SK     0.168       0.169
        SV     0.176       0.178*
Trec-8  SK     0.239       0.239
        SV     0.234       0.238*
Web     SK     0.250       0.250
        SV     0.217       0.223*

The performance of the JM retrieval model with a single Poisson background and with Katz's K-Mixture background model is compared in Table 3. Clearly, using the K-Mixture for the background model outperforms the single Poisson background model in most cases, especially for verbose queries, where the improvement is statistically significant. Figure 3 shows how the performance changes over different parameter values for short verbose queries. The model using the K-Mixture background is less sensitive to the parameter than the one using a single Poisson background.

[Figure 3: The K-Mixture background model reduces parameter sensitivity for verbose queries. Average precision vs. δ on Trec8 with SV queries, for the single Poisson background and the K-Mixture background.]

Given that this type of mixture background model does not require any extra computational cost, it would be interesting to study whether using other mixture Poisson models, such as the 2-Poisson and the Negative Binomial, could further help the performance.

5. RELATED WORK

To the best of our knowledge, there has been no previous study of query generation models based on the Poisson distribution. Language models have been shown to be effective for many retrieval tasks [21, 28, 14, 4]. The most popular and fundamental one is the query-generation language model [21, 13]. All existing query generation language models are based on either the multinomial distribution [19, 6, 28, 13] or the multivariate Bernoulli distribution [21, 17, 18]. We introduce a new family of language models based on the Poisson distribution.
The Poisson distribution has been previously studied in document generation models [16, 22, 3, 24], leading to the development of one of the most effective retrieval formulas, BM25 [23]. [24] studies the parallel derivation of three different retrieval models, which is related to our comparison of Poisson and multinomial. However, the Poisson model in their paper is still under the document generation framework, and it does not account for document length variation. [26] introduces a way to empirically search for an exponential model of the documents. Poisson mixtures [3] such as the 2-Poisson [22], the Negative Binomial, and Katz's K-Mixture [9] have been shown to be effective for modeling and retrieving documents. Once again, none of this work explores the Poisson distribution in the query generation framework.

Language model smoothing [2, 28, 29] and background structures [15, 10, 25, 27] have been studied with multinomial language models. [7] analytically shows that term-specific smoothing could be useful. We show, both analytically and empirically, that the Poisson language model naturally accommodates per-term smoothing without a heuristic twist of the semantics of a generative model, and that it can model the mixture background better and more efficiently.

6. CONCLUSIONS

We present a new family of query generation language models for retrieval based on the Poisson distribution. We derive several smoothing methods for this family of models, including single-stage and two-stage smoothing. We compare the new models with the popular multinomial retrieval models both analytically and experimentally. Our analysis shows that while our new models and the multinomial models are equivalent under some assumptions, they are generally different, with some important differences. In particular, we show that the Poisson model has an advantage over the multinomial in naturally accommodating per-term smoothing.
We\nexploit this property to develop a new per-term smoothing\nalgorithm for Poisson language models, which is shown to\noutperform term-independent smoothing for both Poisson\nand multinomial models. Furthermore, we show that a\nmixture background model for Poisson can be used to improve\nthe performance and robustness over the standard Poisson\nbackground model. Our work opens up many interesting\ndirections for further exploration in this new family of models.\nFurther exploring the flexibilities over multinomial language\nmodels, such as length normalization and pseudo-feedback\ncould be good future work. It is also appealing to find\nrobust methods to learn the per-term smoothing coefficients\nwithout additional computation cost.\n7. ACKNOWLEDGMENTS\nWe thank the anonymous SIGIR 07 reviewers for their\nuseful comments. This material is based in part upon work\nsupported by the National Science Foundation under award\nnumbers IIS-0347933 and 0425852.\n8. REFERENCES\n[1] D. Blei, A. Ng, and M. Jordan. Latent dirichlet\nallocation. Journal of Machine Learning Research,\n3:993-1022, 2003.\n[2] S. F. Chen and J. Goodman. An empirical study of\nsmoothing techniques for language modeling.\nTechnical Report TR-10-98, Harvard University, 1998.\n[3] K. Church and W. Gale. Poisson mixtures. Nat. Lang.\nEng., 1(2):163-190, 1995.\n[4] W. B. Croft and J. Lafferty, editors. Language\nModeling and Information Retrieval. Kluwer Academic\nPublishers, 2003.\n[5] H. Fang, T. Tao, and C. Zhai. A formal study of\ninformation retrieval heuristics. In Proceedings of the\n27th annual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 49-56, 2004.\n[6] D. Hiemstra. Using Language Models for Information\nRetrieval. PhD thesis, University of Twente, Enschede,\nNetherlands, 2001.\n[7] D. Hiemstra. Term-specific smoothing for the\nlanguage modeling approach to information retrieval:\nthe importance of a query term. 
In Proceedings of the\n25th annual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 35-41, 2002.\n[8] T. Hofmann. Probabilistic latent semantic indexing.\nIn Proceedings of ACM SIGIR\"99, pages 50-57, 1999.\n[9] S. M. Katz. Distribution of content words and phrases\nin text and language modelling. Nat. Lang. Eng.,\n2(1):15-59, 1996.\n[10] O. Kurland and L. Lee. Corpus structure, language\nmodels, and ad-hoc information retrieval. In\nProceedings of the 27th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 194-201, 2004.\n[11] J. Lafferty and C. Zhai. Document language models,\nquery models, and risk minimization for information\nretrieval. In Proceedings of SIGIR\"01, pages 111-119,\nSept 2001.\n[12] J. Lafferty and C. Zhai. Probabilistic IR models based\non query and document generation. In Proceedings of\nthe Language Modeling and IR workshop, pages 1-5,\nMay 31 - June 1 2001.\n[13] J. Lafferty and C. Zhai. Probabilistic relevance models\nbased on document and query generation. In W. B.\nCroft and J. Lafferty, editors, Language Modeling and\nInformation Retrieval. Kluwer Academic Publishers,\n2003.\n[14] V. Lavrenko and B. Croft. Relevance-based language\nmodels. In Proceedings of SIGIR\"01, pages 120-127,\nSept 2001.\n[15] X. Liu and W. B. Croft. Cluster-based retrieval using\nlanguage models. In Proceedings of the 27th annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 186-193,\n2004.\n[16] E. L. Margulis. Modelling documents with multiple\npoisson distributions. Inf. Process. Manage.,\n29(2):215-227, 1993.\n[17] A. McCallum and K. Nigam. A comparison of event\nmodels for naive bayes text classification. In\nProceedings of AAAI-98 Workshop on Learning for\nText Categorization, 1998.\n[18] D. Metzler, V. Lavrenko, and W. B. Croft. Formal\nmultiple-bernoulli models for language modeling. 
In\nProceedings of the 27th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 540-541, 2004.\n[19] D. H. Miller, T. Leek, and R. Schwartz. A hidden\nMarkov model information retrieval system. In\nProceedings of the 1999 ACM SIGIR Conference on\nResearch and Development in Information Retrieval,\npages 214-221, 1999.\n[20] A. Papoulis. Probability, random variables and\nstochastic processes. New York: McGraw-Hill, 1984,\n2nd ed., 1984.\n[21] J. M. Ponte and W. B. Croft. A language modeling\napproach to information retrieval. In Proceedings of\nthe 21st annual international ACM SIGIR conference\non Research and development in information retrieval,\npages 275-281, 1998.\n[22] S. Robertson and S. Walker. Some simple effective\napproximations to the 2-poisson model for\nprobabilistic weighted retrieval. In Proceedings of\nSIGIR\"94, pages 232-241, 1994.\n[23] S. E. Robertson, S. Walker, S. Jones,\nM. M.Hancock-Beaulieu, and M. Gatford. Okapi at\nTREC-3. In D. K. Harman, editor, The Third Text\nREtrieval Conference (TREC-3), pages 109-126, 1995.\n[24] T. Roelleke and J. Wang. A parallel derivation of\nprobabilistic information retrieval models. In\nProceedings of the 29th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 107-114, 2006.\n[25] T. Tao, X. Wang, Q. Mei, and C. Zhai. Language\nmodel information retrieval with document expansion.\nIn Proceedings of HLT/NAACL 2006, pages 407-414,\n2006.\n[26] J. Teevan and D. R. Karger. Empirical development of\nan exponential probabilistic model for text retrieval:\nusing textual analysis to build a better model. In\nProceedings of the 26th annual international ACM\nSIGIR conference on Research and development in\ninformaion retrieval, pages 18-25, 2003.\n[27] X. Wei and W. B. Croft. Lda-based document models\nfor ad-hoc retrieval. 
In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 178-185, 2006.
[28] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad-hoc information retrieval. In Proceedings of ACM SIGIR '01, pages 334-342, Sept 2001.
[29] C. Zhai and J. Lafferty. Two-stage language models for information retrieval. In Proceedings of ACM SIGIR '02, pages 49-56, Aug 2002.
Interesting Nuggets and Their Impact on Definitional Question Answering

Abstract: Current approaches to identifying definitional sentences in the context of Question Answering mainly involve the use of linguistic or syntactic patterns to identify informative nuggets. This is insufficient, as they do not address the novelty factor that a definitional nugget must also possess. This paper proposes to address the deficiency by building a Human Interest Model from external knowledge. It is hoped that such a model will allow the computation of human interest in a sentence with respect to the topic. We compare and contrast our model with current definitional question answering models to show that interestingness plays an important factor in definitional question answering.

1. DEFINITIONAL QUESTION ANSWERING
Definitional Question Answering was first introduced to the TExt REtrieval Conference (TREC) Question Answering Track main task in 2003. Definition questions, also called "Other" questions in recent years, are defined as follows. Given a question topic X, the task of a definitional QA system is akin to answering the question "What is X?" or "Who is X?". The definitional QA system is to search through a news corpus and return a set of answers that best describes the question topic. Each answer should be a unique topic-specific nugget that makes up one facet in the definition of the question topic.

1.1 The Two Aspects of Topic Nuggets
Officially, topic-specific answer nuggets, or simply topic nuggets, are described as informative nuggets. Each informative nugget is a sentence fragment that describes some factual information about the topic.
Depending on the topic type and domain, this can include topic properties, relationships the topic has with some closely related entity, or events that happened to the topic.

From observation of the answer sets for definitional question answering from TREC 2003 to 2005, it seems that a significant number of topic nuggets cannot simply be described as informative nuggets. Rather, these topic nuggets have a trivia-like quality associated with them. Typically, these are out-of-the-ordinary pieces of information about a topic that can pique a human reader's interest. For this reason, we decided to define answer nuggets that can evoke human interest as interesting nuggets. In essence, interesting nuggets answer the questions "What is X famous for?", "What defines X?" or "What is extraordinary about X?".

We now have two very different perspectives as to what constitutes an answer to Definition questions. An answer can be some important factual information about the topic or some novel and interesting aspect of the topic. This duality of informativeness and interestingness can be clearly observed in the five vital answer nuggets for the TREC 2005 topic of George Foreman. Certain answer nuggets are more informative, while other nuggets are more interesting in nature.

Informative Nuggets
- Was graduate of Job Corps.
- Became oldest world champion in boxing history.

Interesting Nuggets
- Has lent his name to line of food preparation products.
- Waved American flag after winning 1968 Olympics championship.
- Returned to boxing after 10 yr hiatus.

As an African-American professional heavyweight boxer, an average human reader would find the last three nuggets about George Foreman interesting because boxers do not usually lend their names to food preparation products, nor do boxers retire for 10 years before returning to the ring to become the world's oldest boxing champion.
Foreman's waving of the American flag at the Olympics is interesting because the innocent action caused some African-Americans to accuse Foreman of being an Uncle Tom. As seen here, interesting nuggets have some surprise factor or unique quality that makes them interesting to human readers.

1.2 Identifying Interesting Nuggets
Since the original official description of definitions comprises identifying informative nuggets, most research has focused entirely on identifying informative nuggets. In this paper, we focus on exploring the properties of interesting nuggets and develop ways of identifying such interesting nuggets. A Human Interest Model definitional question answering system is developed with emphasis on identifying interesting nuggets, in order to evaluate the impact of interesting nuggets on the performance of a definitional question answering system. We further experimented with combining the Human Interest Model with a lexical pattern based definitional question answering system in order to capture both informative and interesting nuggets.

2. RELATED WORK
There are currently two general methods for Definitional Question Answering. The more common method uses a lexical pattern-based approach first proposed by Blair-Goldensohn et al. [1] and Xu et al. [14]. Both groups predominantly used patterns such as copulas and appositives, as well as manually crafted lexico-syntactic patterns, to identify sentences that contain informative nuggets. For example, Xu et al. used 40 manually defined structured patterns in their 2003 definitional question answering system. Since then, in an attempt to capture a wider class of informational nuggets, many such systems of increasing complexity have been created. A recent system by Harabagiu et al.
[6] created a definitional question answering system that combines the use of 150 manually defined positive and negative patterns, named entity relations, and specially crafted information extraction templates for 33 target domains. Here, a musician template may contain lexical patterns that identify information such as the musician's musical style, songs sung by the musician, and the band, if any, that the musician belongs to. As one can imagine, this is a knowledge-intensive approach that requires an expert linguist to manually define all possible lexical or syntactic patterns required to identify specific types of information. This process requires a lot of manual labor and expertise, and is not scalable. This led to the development of the soft-pattern approach by Cui et al. [4, 11]. Instead of manually encoding patterns, answers from previous definitional question answering evaluations were converted into generic patterns, and a probabilistic model was trained to identify such patterns in sentences. Given a potential answer sentence, the probabilistic model outputs a probability that indicates how likely the sentence matches one or more patterns that the model has seen in training.

Such lexico-syntactic pattern approaches have been shown to be adept at identifying factual informative nuggets such as a person's birthdate, or the name of a company's CEO. However, these patterns are either globally applicable to all topics or to a specific set of entities such as musicians or organizations. This is in direct contrast to interesting nuggets, which are highly specific to individual topics and not to a set of entities. For example, the interesting nuggets for George Foreman are specific only to George Foreman and no other boxer or human being.
Topic specificity, or topic relevance, is thus an important criterion that helps identify interesting nuggets. This leads to the exploration of the second, relevance-based approach that has been used in definitional question answering. Predominantly, this approach has been used as a backup method for identifying definitional sentences when the primary method of lexico-syntactic patterns failed to find a sufficient number of informative nuggets [1]. A similar approach has also been used as a baseline system for TREC 2003 [14]. More recently, Chen et al. [3] adapted a bi-gram or bi-term language model for definitional Question Answering.

Generally, the relevance-based approach requires a definitional corpus that contains documents highly relevant to the topic. The baseline system in TREC 2003 simply uses the topic words as its definitional corpus. Blair-Goldensohn et al. [1] use a machine learner to include in the definitional corpus sentences that are likely to be definitional. Chen et al. [3] collect snippets from Google to build their definitional corpus.

From the definitional corpus, a definitional centroid vector is built or a set of centroid words is selected. This centroid vector or set of centroid words is taken to be highly indicative of the topic. Systems can then use this centroid to identify definitional answers by using a variety of distance metrics to compare against sentences found in the set of retrieved documents for the topic. Blair-Goldensohn et al. [1] use Cosine similarity to rank sentences by centrality. Chen et al. [3] build a bigram language model using the 350 most frequently occurring Google snippet terms, described in their paper as an ordered centroid, to estimate the probability that a sentence is similar to the ordered centroid.

As described here, the relevance-based approach is highly specific to individual topics due to its dependence on a topic-specific definitional corpus.
However, if individual sentences are viewed as documents, then relevance-based approaches essentially use the collected topic-specific centroid words as a form of document retrieval with automated query expansion to identify strongly relevant sentences. Thus such methods identify relevant sentences, and not necessarily sentences containing definitional nuggets. Yet the TREC 2003 baseline system [14] outperformed all but one other system. The bi-term language model [3] is able to report results that are highly competitive with state-of-the-art results using this retrieval-based approach. At TREC 2006, a simple weighted sum of all terms model, with terms weighted using solely Google snippets, outperformed all other systems by a significant margin [7].

We believe that interesting nuggets often come in the form of trivia, novel or rare facts about the topic, that tend to strongly co-occur with direct mentions of topic keywords. This may explain why relevance-based methods can perform competitively in definitional question answering. However, simply comparing against a single centroid vector or set of centroid words may over-emphasize topic relevance, and identifies interesting definitional nuggets only in an indirect manner. Still, relevance-based retrieval methods can be used as a starting point for identifying interesting nuggets. We will describe how we expand upon such methods to identify interesting nuggets in the next section.

3. HUMAN INTEREST MODEL
Getting a computer system to identify sentences that a human reader would find interesting is a tall order. However, there are many documents on the World Wide Web that contain concise, human-written summaries on just about any topic. What's more, these documents are written explicitly for human beings and will contain information about the topic that most human readers would be interested in.
Assuming we can identify such relevant documents on the web, we can leverage them to assist in identifying definitional answers to such topics. We can take the assumption that most sentences found within these web documents will contain interesting facets about the topic at hand.

This greatly simplifies the problem to that of finding, within the AQUAINT corpus, sentences similar to those found in web documents. This approach has been successfully used in several factoid and list Question Answering systems [11], and we feel the use of such an approach for definitional or Other question answering is justified. Identifying interesting nuggets requires computing machinery to understand world knowledge and human insight. This is still a very challenging task, and the use of human-written documents dramatically simplifies the complexity of the task.

In this paper, we report on such an approach by experimenting with a simple word-level edit distance based weighted term comparison algorithm. We use the edit distance algorithm to score the similarity of a pair of sentences, with one sentence coming from web resources and the other sentence selected from the AQUAINT corpus. Through a series of experiments, we will show that even such a simple approach can be very effective at definitional question answering.

3.1 Web Resources
There exist on the internet articles on just about any topic a human can think of. What's more, many such articles are centrally located on several prominent websites, making them an easily accessible source of world knowledge. For our work on identifying interesting nuggets, we focused on finding short one- or two-page articles on the internet that are highly relevant to our desired topic. Such articles are useful as they contain concise information about the topic.
More importantly, the articles are written by humans, for human readers, and thus contain the critical human world knowledge that a computer system currently is unable to capture.

We leverage this world knowledge by collecting articles for each topic from the following external resources to build our Interest Corpus for each topic.

Wikipedia is a Web-based, free-content encyclopedia written collaboratively by volunteers. This resource has been used by many Question Answering systems as a source of knowledge about each topic. We use a snapshot of Wikipedia taken in March 2006 and include the most relevant article in the Interest Corpus.

NewsLibrary is a searchable archive of news articles from over 100 different newspaper agencies. For each topic, we download the 50 most relevant articles and include the title and first paragraph of each article in the Interest Corpus.

Google Snippets are retrieved by issuing the topic as a query to the Google search engine. From the search results, we extracted the top 100 snippets. While Google snippets are not articles, we find that they provide a wide coverage of authoritative information about most topics.

Due to their comprehensive coverage of a wide variety of topics, the above resources form the bulk of our Interest Corpus. We also extracted documents from other resources. However, as these resources are more specific in nature, we do not always get a single relevant document. These resources are listed below.

Biography.com is the website for the Biography television cable channel. The channel's website contains searchable biographies on over 25,000 notable people.
If the topic is a person and we can find a relevant biography on the person, we include it in our Interest Corpus.

Bartleby.com contains a searchable copy of several resources including the Columbia Encyclopedia, the World Factbook, and several English dictionaries.

s9.com is a biographical dictionary covering over 33,000 notable people. Like Biography.com, we include the most relevant biography we can find in the Interest Corpus.

Google Definitions: the Google search engine offers a feature called "Definitions" that provides the definition for a query, if it has one. We use this feature and extract whatever definitions the Google search engine has found for each topic into the Interest Corpus.

Figure 1: Human Interest Model Architecture.

WordNet is a well-known electronic semantic lexicon for the English language. Besides grouping English words into sets of synonyms called synsets, it also provides a short definition of the meaning of the words found in each synset. We add this short definition, if there is one, to our Interest Corpus.

We have two major uses for this topic-specific Interest Corpus: as a source of sentences containing interesting nuggets, and as a unigram language model of topic terms, I.

3.2 Multiple Interesting Centroids
We have seen that interesting nuggets are highly specific to a topic. Relevance-based approaches such as the bigram language model used by Chen et al. [3] are focused on identifying highly relevant sentences and pick up definitional answer nuggets as an indirect consequence. We believe that the use of only a single collection of centroid words over-emphasizes topic relevance, and we choose instead to use multiple centroids.

Since sentences in the Interest Corpus of articles we collected from the internet are likely to contain nuggets that are of interest to human readers, we can essentially use each sentence as a pseudo-centroid.
Each sentence in the Interest Corpus essentially raises\na different aspect of the topic for consideration as a sentence of\ninterest to human readers. By performing a pairwise sentence\ncomparison between sentences in the Interest Corpus and candidate\nsentences retrieved from the AQUAINT corpus, we increase the\nnumber of sentence comparisons from O(n) to O(nm). Here, n is\nthe number of potential candidate sentences and m is the number\nof sentences in the Interest Corpus. In return, we obtain a diverse\nranked list of answers that are individually similar to various\nsentences found in the topic\"s Interest Corpus. An answer can only be\nhighly ranked if it is strongly similar to a sentence in the Interest\nCorpus, and is also strongly relevant to the topic.\n3.3 Implementation\nFigure 1 shows the system architecture for the proposed Human\nInterest-based definitional QA system.\nThe AQUAINT Retrieval module shown in Figure 1 reuses a\ndocument retrieval module of a current Factoid and List Question\nAnswering system we have implemented. Given a set of words\ndescribing the topic, the AQUAINT Retrieval module does query\nexpansion using Google and searches an index of AQUAINT\ndocuments to retrieve the 800 most relevant documents for\nconsideration.\nThe Web Retrieval module on the other hand, searches the online\nresources described in Section 3.1 for interesting documents in\norder to populate the Interest Corpus.\nThe HIM Ranker, or Human Interest Model Ranking module, is\nthe implementation of what is described in this paper. The module\nfirst builds the unigram language model, I, from the collected web\ndocuments. This language model will be used to weight the\nimportance of terms within sentences. Next, a sentence chunker is used\nto segment all 800 retrieved documents into individual sentences.\nEach of these sentences can be a potential answer sentence that will\nbe independently ranked by interestingness. 
We rank sentences by interestingness using sentences from both the Interest Corpus of external documents as well as the unigram language model we built earlier, which we use to weight terms.

A candidate sentence in our top 800 relevant AQUAINT documents is considered interesting if it is highly similar in content to a sentence found in our collection of external web documents. To achieve this, we perform a pairwise similarity comparison between a candidate sentence and sentences in our external documents using a weighted-term edit distance algorithm. Term weights are used to adjust the relative importance of each unique term found in the Interest Corpus. When both sentences share the same term, the similarity score is incremented by two times the term's weight, and every dissimilar term decrements the similarity score by the dissimilar term's weight.

We choose the highest similarity score achieved for a candidate sentence as the Human Interest Model score for that candidate sentence. In this manner, every candidate sentence is ranked by interestingness. Finally, to obtain the answer set, we select the top 12 highest ranked and non-redundant sentences as definitional answers for the topic.

4. INITIAL EXPERIMENTS
The Human Interest-based system described in the previous section is designed to identify only interesting nuggets and not informative nuggets. Thus, it can be described as a handicapped system that only deals with half the problem in definitional question answering. This is done in order to explore how interestingness plays a factor in definitional answers. In order to compare and contrast the differences between informative and interesting nuggets, we also implemented the soft-pattern bigram model proposed by Cui et al. [4, 11]. In order to ensure comparable results, both systems are provided identical input data.
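The scoring rule described above (shared terms add twice their weight, unshared terms subtract theirs, and a candidate keeps its best score over all Interest Corpus sentences) can be sketched as follows. This is a simplified illustration, not the authors' exact implementation: whitespace tokenization and the default weight for unseen terms are assumptions.

```python
def similarity(candidate: str, interest_sentence: str, weights: dict) -> float:
    # Shared terms add 2x their weight; terms in only one sentence subtract their weight.
    cand, interest = set(candidate.split()), set(interest_sentence.split())
    score = 0.0
    for term in cand | interest:
        w = weights.get(term, 0.1)  # default weight for out-of-corpus terms (assumption)
        score += 2 * w if (term in cand and term in interest) else -w
    return score

def him_score(candidate: str, interest_corpus: list, weights: dict) -> float:
    # Human Interest Model score: best similarity against any Interest Corpus sentence.
    return max(similarity(candidate, s, weights) for s in interest_corpus)
```

A candidate sentence thus ranks highly only if it closely matches at least one interesting sentence from the web-derived corpus, which mirrors the multiple-pseudo-centroid idea of Section 3.2.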
Since both systems require the use of external resources, they are both provided the same web articles retrieved by our Web Retrieval module. Both systems also rank the same set of candidate sentences, in the form of the 800 most relevant documents as retrieved by our AQUAINT Retrieval module.

For the experiments, we used the TREC 2004 question set to tune any system parameters and the TREC 2005 question set to test both systems. Both systems are evaluated using the standard scoring methodology for TREC definitions. TREC provides a list of vital and okay nuggets for each question topic. Every question is scored on nugget recall (NR) and nugget precision (NP), and a single final score is computed using the F-measure (see Equation 1) with β = 3 to emphasize nugget recall. Here, NR is the number of vital nuggets returned divided by the total number of vital nuggets, while NP is computed using a minimum allowed character length function defined in [12]. The evaluation is conducted automatically using Pourpre v1.0c [10].

F-Score = ((β² + 1) × NP × NR) / (β² × NP + NR)    (1)

System                        F3-Score
Best TREC 2005 System         0.2480
Soft-Pattern (SP)             0.2872
Human Interest Model (HIM)    0.3031
Table 1: Performance on TREC 2005 Question Set

Figure 2: Performance by entity types.

4.1 Informativeness vs Interestingness
Our first experiment compares the performance of solely identifying interesting nuggets against solely identifying informative nuggets. We compare the results attained by the Human Interest Model, which only identifies interesting nuggets, with the results of the syntactic pattern finding Soft-Pattern model, as well as the result of the top performing definitional system in TREC 2005 [13]. Table 1 shows the F3 scores of the three systems on the TREC 2005 question set.

The Human Interest Model clearly outperforms both Soft-Pattern and the best TREC 2005 system with an F3 score of 0.303.
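The nugget F-measure of Equation 1 is simple enough to compute directly. The helper below is only an illustration of the formula with β = 3, not the official Pourpre scorer:

```python
def nugget_f_score(np_: float, nr: float, beta: float = 3.0) -> float:
    # F = (beta^2 + 1) * NP * NR / (beta^2 * NP + NR); beta = 3 emphasizes recall.
    if np_ == 0 and nr == 0:
        return 0.0
    b2 = beta * beta
    return (b2 + 1) * np_ * nr / (b2 * np_ + nr)
```

With β = 3, a run with high recall but modest precision scores far better than the reverse, which is exactly why the track's scoring favors returning more vital nuggets.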
The result is also comparable with the result of a human manual run, which attained an F3 score of 0.299 on the same question set [9]. This result is confirmation that interesting nuggets do indeed play a significant role in picking out definitional answers, and may be more vital than using information-finding lexical patterns.

In order to get a better perspective of how well the Human Interest Model performs for different types of topics, we manually divided the TREC 2005 topics into four broad categories of PERSON, ORGANIZATION, THING and EVENT, as listed in Table 3. These categories conform to TREC's general division of question topics into 4 main entity types [13]. The performance of the Human Interest Model and the Soft-Pattern Bigram Model for each entity type can be seen in Figure 2. Both systems exhibit consistent behavior across entity types, with the best performance coming from PERSON and ORGANIZATION topics and the worst performance from THING and EVENT topics. This can mainly be attributed to our selection of web-based resources for the definitional corpus used by both systems. In general, it is harder to locate a single web article that describes an event or a general object. However, given the same set of web-based information, the Human Interest Model consistently outperforms the Soft-Pattern model for all four entity types. This suggests that the Human Interest Model is better able to leverage the information found in web resources to identify definitional answers.

5. REFINEMENTS
Encouraged by the initial experimental results, we explored two further optimizations of the basic algorithm.

5.1 Weighting Interesting Terms
The word "trivia" refers to tidbits of unimportant or uncommon information. As we have noted, interesting nuggets often have a trivia-like quality that makes them of interest to human beings.
From this\ndescription of interesting nuggets and trivia, we hypothesize that\ninteresting nuggets are likely to occur rarely in a text corpora.\nThere is a possibility that some low-frequency terms may\nactually be important in identifying interesting nuggets. A standard\nunigram language model would not capture these low-frequency terms\nas important terms. To explore this possibility, we experimented\nwith three different term weighting schemes that can provide more\nweight to certain low-frequency terms. The weighting schemes we\nconsidered include commonly used TFIDF, as well as information\ntheoretic Kullback-Leiber divergence and Jensen-Shannon\ndivergence [8].\nTFIDF, or Term Frequency \u00d7 Inverse Document Frequency, is\na standard Information Retrieval weighting scheme that balances\nthe importance of a term in a document and in a corpus. For our\nexperiments, we compute the weight of each term as tf \u00d7 log( N\nnt\n),\nwhere tf is the term frequency, nt is the number of sentences in\nthe Interest Corpus having the term and N is the total number of\nsentences in the Interest Corpus.\nKullback-Leibler Divergence (Equation 2) is also called KL\nDivergence or relative entropy, can be viewed as measuring the\ndissimilarity between two probability distributions. Here, we treat the\nAQUAINT corpus as a unigram language model of general English\n[15], A, and the Interest Corpus as a unigram language model\nconsisting of topic specific terms and general English terms, I.\nGeneral English words are likely to have similar distributions in both\nlanguage models I and A. 
Thus using KL Divergence as a term weighting scheme will cause strong weights to be given to topic-specific terms, because they occur significantly more often or less often in the Interest Corpus than in general English. In this way, both high-frequency centroid terms and low-frequency, rare but topic-specific terms are identified and highly weighted by KL Divergence.

D_KL(I || A) = Σ_t I(t) log( I(t) / A(t) )    (2)

Due to the power-law distribution of terms in natural language, there are only a small number of very frequent terms and a large number of rare terms in both I and A. While the common terms in English consist of stop words, the common terms in the topic-specific corpus, I, consist of both stop words and relevant topic words. These high-frequency topic-specific words occur far more frequently in I than in A. As a result, we found that KL Divergence has a bias towards highly frequent topic terms, as we are measuring direct dissimilarity against a model of general English in which such topic terms are very rare. For this reason, we explored another divergence measure as a possible term weighting scheme.

Jensen-Shannon Divergence, or JS Divergence, extends KL Divergence as seen in Equation 3. As with KL Divergence, we use JS divergence to measure the dissimilarity between our two language models, I and A.

D_JS(I || A) = 1/2 [ D_KL(I || (I+A)/2) + D_KL(A || (I+A)/2) ]    (3)

Figure 3: Performance by various term weighting schemes on the Human Interest Model.

However, JS Divergence has the additional properties¹ of being symmetric and non-negative, as seen in Equation 4.
The symmetric property gives a more balanced measure of dissimilarity and avoids the bias that KL divergence has.

D_JS(I || A) = D_JS(A || I) = { 0 if I = A; > 0 if I ≠ A }    (4)

We conducted another experiment, substituting the unigram language model weighting scheme used in the initial experiments with the three term weighting schemes described above. As a lower-bound reference, we included a term weighting scheme consisting of a constant 1 for all terms. Figure 3 shows the result of applying the five different term weighting schemes to the Human Interest Model. TFIDF performed the worst, as we had anticipated. The reason is that most terms appear only once within each sentence, resulting in a term frequency of 1 for most terms. This causes the IDF component to be the main factor in scoring sentences. As we are computing the Inverse Document Frequency for terms in the Interest Corpus collected from web resources, IDF heavily down-weights highly frequent topic terms and relevant terms. This results in TFIDF favoring all low-frequency terms over high-frequency terms in the Interest Corpus. Despite this, the TFIDF weighting scheme scored only a slight 0.0085 lower than our lower-bound reference of constant weights. We view this as a positive indication that low-frequency terms can indeed be useful in finding interesting nuggets.

Both KL and JS divergence performed marginally better than the uniform language model probabilistic scheme that we used in our initial experiments. From inspection of the weighted list of terms, we observed that while low-frequency relevant terms were boosted in strength, high-frequency relevant terms still dominate the top of the weighted term list. Only a handful of low-frequency terms were weighted as strongly as topic keywords, and combined with their low frequency, this may have limited the impact of re-weighting such terms.
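Concretely, the three weighting schemes compared in this section can be sketched as follows. This is a minimal sketch, not the paper's implementation: the unigram probability inputs, the epsilon floor for terms unseen in general English, and the use of each term's per-term divergence contribution as its weight are our assumptions.

```python
import math
from collections import Counter

def tfidf_weights(sentences):
    """Sentence-level TFIDF over the Interest Corpus: tf * log(N / n_t),
    where N is the number of sentences and n_t the number of sentences
    containing term t (tf is the corpus-wide term frequency here)."""
    n = len(sentences)
    tf = Counter(t for s in sentences for t in s)
    df = Counter(t for s in sentences for t in set(s))
    return {t: tf[t] * math.log(n / df[t]) for t in tf}

def kl_weights(interest, general, eps=1e-9):
    """Per-term contribution to D_KL(I || A) = sum_t I(t) log(I(t)/A(t)).
    Terms whose probability in I differs strongly from A get large weights;
    eps is an assumed smoothing floor for terms unseen in A."""
    return {t: p * math.log(p / general.get(t, eps))
            for t, p in interest.items()}

def js_weights(interest, general):
    """Per-term contribution to the symmetric Jensen-Shannon divergence:
    0.5 * [D_KL(I || M) + D_KL(A || M)] with the mixture M = (I + A) / 2."""
    weights = {}
    for t in set(interest) | set(general):
        p, q = interest.get(t, 0.0), general.get(t, 0.0)
        m = (p + q) / 2.0
        w = 0.0
        if p > 0:
            w += 0.5 * p * math.log(p / m)
        if q > 0:
            w += 0.5 * q * math.log(q / m)
        weights[t] = w
    return weights
```

Note that in `js_weights` a term absent from one model still receives a finite weight, which is the boundedness property the divergence-based schemes rely on.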
However, we feel that despite this, Jensen-Shannon divergence does provide a small but measurable increase in the performance of our Human Interest Model.

¹ JS divergence also has the property of being bounded, allowing the results to be treated as a probability if required. However, the bounded property is not required here, as we are only treating the divergence computed by JS divergence as term weights.

5.2 Selecting Web Resources
In one of our initial experiments, we observed that the quality of web resources included in the Interest Corpus may have a direct impact on the results we obtain. We wanted to determine what impact the choice of web resources has on the performance of our Human Interest Model. For this reason, we split our collection of web resources into four major groups:

N - News: Title and first paragraph of the top 50 most relevant articles found in NewsLibrary.
W - Wikipedia: Text from the most relevant article found in Wikipedia.
S - Snippets: Snippets extracted from the top 100 most relevant links after querying Google.
M - Miscellaneous sources: Combination of content (when available) from secondary sources including biography.com, s9.com, bartleby.com articles, Google definitions and WordNet definitions.

We conducted a gamut of runs on the TREC 2005 question set using all possible combinations of the above four groups of web resources to identify the best possible combination. All runs were conducted on the Human Interest Model using JS divergence as the term weighting scheme. The runs were sorted in descending F3 score, and the top 3 best-performing runs for each entity class are listed in Table 2, together with the earlier reported F3 scores from Figure 2 as a baseline reference.
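The gamut of runs over the four resource groups can be enumerated in a few lines. This is a sketch only: the group labels follow the N/W/S/M scheme above, and scoring each resulting corpus is outside the scope of the snippet.

```python
from itertools import combinations

GROUPS = ["N", "W", "S", "M"]  # News, Wikipedia, Snippets, Miscellaneous

def all_resource_combinations(groups=GROUPS):
    """Yield every non-empty combination of web-resource groups,
    e.g. ('N',), ('N', 'W'), ..., ('N', 'W', 'S', 'M')."""
    for r in range(1, len(groups) + 1):
        for combo in combinations(groups, r):
            yield combo

# 2^4 - 1 = 15 distinct corpus configurations per entity class
runs = list(all_resource_combinations())
```

Each configuration corresponds to one Interest Corpus; a full sweep evaluates all 15 per entity class, as reported in Table 2.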
A consistent trend can be observed for each entity class.

For PERSON and EVENT topics, NewsLibrary articles are the main source of interesting nuggets, with Google snippets and miscellaneous articles offering additional supporting evidence. This seems intuitive for events, as newspapers predominantly focus on reporting breaking newsworthy events and are thus excellent sources of interesting nuggets. We had expected Wikipedia rather than news articles to be a better source of interesting facts about people, and were surprised to discover that news articles outperformed Wikipedia. We believe the reason is that the people selected as topics thus far have been celebrities or well-known public figures. Human readers are likely to be interested in news events that spotlight these personalities.

Conversely, for ORGANIZATION and THING topics, the best source of interesting nuggets is Wikipedia's most relevant article on the topic, with Google snippets again providing additional information for organizations.

With an oracle that can classify topics by entity class with 100% accuracy, and by using the best web resources for each entity class as shown in Table 2, we can attain an F3 score of 0.3158.

6. UNIFYING INFORMATIVENESS WITH INTERESTINGNESS
We have thus far been comparing the Human Interest Model against the Soft-Pattern model in order to understand the differences between interesting and informative nuggets. However, from the perspective of a human reader, both informative and interesting nuggets are useful and definitional. Informative nuggets present a general overview of the topic, while interesting nuggets give readers added depth and insight by providing novel and unique aspects of the topic.
We believe that a good definitional question answering system should provide the reader with a combined mixture of both nugget types as a definitional answer set.

Rank      PERSON         ORG             THING           EVENT
Baseline  N+W+S+M        N+W+S+M         N+W+S+M         N+W+S+M   (unigram weighting scheme)
          0.3279         0.3630          0.2551          0.2644
1         N+S+M  0.3584  W+S     0.3709  W+M     0.2688  N+M    0.2905
2         N+S    0.3469  N+W+S   0.3702  W+S+M   0.2665  N+S+M  0.2745
3         N+M    0.3431  N+W+S+M 0.3680  W+S     0.2616  N+S    0.2690

Table 2: Top 3 runs using different web resources for each entity class

We now have two very different experts at identifying definitions. The Soft Pattern Bigram Model proposed by Cui et al. is an expert in identifying informative nuggets. The Human Interest Model we have described in this paper, on the other hand, is an expert in finding interesting nuggets. We had initially hoped to unify the two separate definitional question answering systems by applying an ensemble learning method [5] such as voting or boosting in order to attain a good mixture of informative and interesting nuggets in our answer set. However, none of the ensemble learning methods we attempted could outperform our Human Interest Model.

The reason is that both systems are picking up very different sentences as definitional answers. In essence, our two experts disagree on which sentences are definitional. In the top 10 sentences from both systems, only 4.4% of the sentences appeared in both answer sets; the remaining answers were completely different. Even when we examined the top 500 sentences generated by both systems, the agreement rate was still an extremely low 5.3%. Yet, despite the low agreement rate between the systems, each individual system is still able to attain a relatively high F3 score. There is a distinct possibility that each system may be selecting different sentences with different syntactic structures that actually have the same or similar semantic content.
This could result in both systems having the same nuggets marked as correct even though the source answer sentences are structurally different. Unfortunately, we are unable to verify this automatically, as the evaluation software we are using does not report correctly identified answer nuggets. To verify whether both systems are selecting the same answer nuggets, we randomly selected a subset of 10 topics from the TREC 2005 question set and manually identified correct answer nuggets (as defined by TREC assessors) from both systems. When we compared the answer nuggets found by both systems for this subset of topics, we found that the nugget agreement rate between the two systems was 16.6%. While the nugget agreement rate is higher than the sentence agreement rate, the systems are generally still picking up different answer nuggets. We view this as further indication that definitions are indeed made up of a mixture of informative and interesting nuggets. It is also an indication that, in general, interesting and informative nuggets are quite different in nature.

There are thus rational reasons and practical motivation for unifying answers from both the pattern-based and corpus-based approaches. However, the differences between the two systems also cause issues when we attempt to combine both answer sets. Currently, the best approach we have found for combining the answer sets is to merge and re-rank them while boosting agreements. We first normalize the top 1,000 ranked sentences from each system to obtain the normalized Human Interest Model score, s_him(s), and the normalized Soft Pattern Bigram Model score, s_sp(s), for every unique sentence, s.
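The merge step just described can be sketched as follows. This is a minimal sketch: min-max normalization is our assumption (the paper does not state how the top-1,000 scores are normalized), while the unification rule follows the scoring formula of Equation 5.

```python
def normalize(scores):
    """Min-max normalize one system's sentence scores into [0, 1].
    (Assumption: the paper does not specify its normalization method.)"""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against all-equal scores
    return {s: (v - lo) / span for s, v in scores.items()}

def unified_score(s_him, s_sp):
    """Equation 5: Score(s) = max(s_him, s_sp) ** (1 - min(s_him, s_sp)).
    If only one system returned the sentence, the other score is 0, the
    exponent is 1, and that system's score is retained unchanged; when
    both scores are high, the exponent shrinks and the unified score is
    boosted towards 1 by the degree of agreement."""
    return max(s_him, s_sp) ** (1.0 - min(s_him, s_sp))

def merge(him_scores, sp_scores):
    """Merge the two ranked lists into one unified score per sentence."""
    him, sp = normalize(him_scores), normalize(sp_scores)
    sentences = set(him) | set(sp)
    return {s: unified_score(him.get(s, 0.0), sp.get(s, 0.0))
            for s in sentences}
```

The merged scores would then be re-ranked (e.g. with MMR, as described below) to keep the final answer set diverse.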
For each sentence, the two separate scores are then unified into a single score using Equation 5. When only one system believes that the sentence is definitional, we simply retain that system's normalized score as the unified score. When both systems agree that the sentence is definitional, the sentence's score is boosted by the degree of agreement between the two systems.

Score(s) = max(s_him, s_sp)^(1 − min(s_him, s_sp))    (5)

In order to maintain a diverse set of answers, as well as to ensure that similar sentences are not given similar rankings, we further re-rank our combined list of answers using Maximal Marginal Relevance, or MMR [2]. Using the approach described here, we achieve an F3 score of 0.3081. This score is equivalent to the initial Human Interest Model score of 0.3031 but fails to outperform the optimized Human Interest Model.

7. CONCLUSION
This paper has presented a novel perspective for answering definitional questions through the identification of interesting nuggets. Interesting nuggets are uncommon pieces of information about the topic that can evoke a human reader's curiosity. The notion of an average human reader is an important consideration in our approach. This is very different from the lexico-syntactic pattern approach, where the context of a human reader is not even considered when finding answers for definitional question answering.

Using this perspective, we have shown that by using a combination of a carefully selected external corpus, matching against multiple centroids, and taking into consideration rare but highly topic-specific terms, we can build a definitional question answering module that is more focused on identifying nuggets that are of interest to human beings.
Experimental results have shown that this approach can significantly outperform state-of-the-art definitional question answering systems.

We further showed that at least two different types of answer nuggets are required to form a more thorough set of definitional answers. What seems to be a good set of definitional answers is general information that provides a quick informative overview, mixed together with some novel or interesting aspects of the topic. Thus we feel that a good definitional question answering system needs to pick up both informative and interesting nugget types in order to provide complete definitional coverage of all important aspects of the topic. While we have attempted to build such a system by combining our proposed Human Interest Model with Cui et al.'s Soft Pattern Bigram Model, the inherent differences between the two types of nuggets, evidenced by the low agreement rates between the two models, have made this a difficult task. Indeed, this is natural, as the two models have been designed to identify two very different types of definition answers using very different types of features. As a result, we are currently only able to achieve a hybrid system that has the same level of performance as our proposed Human Interest Model.

We approached the problem of definitional question answering from a novel perspective, with the notion that the interest factor plays a role in identifying definitional answers. Although the methods we used are simple, they have been shown experimentally to be effective. Our approach may also provide some insight into a few anomalies in past definitional question answering trials. For instance, the top definitional system at the recent TREC 2006 evaluation was able to significantly outperform all other systems using relatively simple unigram probabilities extracted from Google snippets.
We suspect the main contributor to that system's performance is Google's PageRank algorithm, which, by mainly considering the number of linkages, has the indirect effect of ranking web documents by their degree of human interest.

Entity Type    Topics
ORGANIZATION   DePauw University, Merck & Co., Norwegian Cruise Lines (NCL), United Parcel Service (UPS), Little League Baseball, Cliffs Notes, American Legion, Sony Pictures Entertainment (SPE), Telefonica of Spain, Lions Club International, AMWAY, McDonald's Corporation, Harley-Davidson, U.S. Naval Academy, OPEC, NATO, International Bureau of Universal Postal Union (UPU), Organization of Islamic Conference (OIC), PBGC
PERSON         Bing Crosby, George Foreman, Akira Kurosawa, Sani Abacha, Enrico Fermi, Arnold Palmer, Woody Guthrie, Sammy Sosa, Michael Weiss, Paul Newman, Jesse Ventura, Rose Crumb, Rachel Carson, Paul Revere, Vicente Fox, Rocky Marciano, Enrico Caruso, Pope Pius XII, Kim Jong Il
THING          F16, Bollywood, Viagra, Howdy Doody Show, Louvre Museum, meteorites, Virginia wine, Counting Crows, Boston Big Dig, Chunnel, Longwood Gardens, Camp David, kudzu, U.S. Medal of Honor, tsunami, genome, Food-for-Oil Agreement, Shiite, Kinmen Island
EVENT          Russian submarine Kursk sinks, Miss Universe 2000 crowned, Port Arthur Massacre, France wins World Cup in soccer, Plane clips cable wires in Italian resort, Kip Kinkel school shooting, Crash of EgyptAir Flight 990, Preakness 1998, first 2000 Bush-Gore presidential debate, 1998 indictment and trial of Susan McDougal, return of Hong Kong to Chinese sovereignty, 1998 Nagano Olympic Games, Super Bowl XXXIV, 1999 North American International Auto Show, 1980 Mount St. Helens eruption, 1998 Baseball World Series, Hindenburg disaster, Hurricane Mitch

Table 3: TREC 2005 Topics Grouped by Entity Type

In our future work, we seek to further improve on the combined system by incorporating more evidence in support of correct definitional answers or to filter
away obviously wrong answers.

8. REFERENCES
[1] S. Blair-Goldensohn, K. R. McKeown, and A. H. Schlaikjer. A hybrid approach for QA track definitional questions. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.
[2] J. G. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Research and Development in Information Retrieval, pages 335-336, 1998.
[3] Y. Chen, M. Zhou, and S. Wang. Reranking answers for definitional QA using language modeling. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 1081-1088, Sydney, Australia, July 2006. Association for Computational Linguistics.
[4] H. Cui, M.-Y. Kan, and T.-S. Chua. Generic soft pattern models for definitional question answering. In SIGIR '05: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 384-391, New York, NY, USA, 2005. ACM Press.
[5] T. G. Dietterich. Ensemble methods in machine learning. Lecture Notes in Computer Science, 1857:1-15, 2000.
[6] S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, A. Hickl, and P. Wang. Employing two question answering systems at TREC 2005. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.
[7] M. Kaisser, S. Scheible, and B. Webber. Experiments at the University of Edinburgh for the TREC 2006 QA track. In TREC '06 Notebook: Proceedings of the 15th Text REtrieval Conference, Gaithersburg, Maryland, 2006. National Institute of Standards and Technology.
[8] J. Lin. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151, Jan 1991.
[9] J. Lin, E. Abels, D. Demner-Fushman, D. W. Oard, P. Wu, and Y. Wu.
A menagerie of tracks at Maryland: HARD, enterprise, QA, and genomics, oh my! In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.
[10] J. Lin and D. Demner-Fushman. Automatically evaluating answers to definition questions. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 931-938, Vancouver, British Columbia, Canada, October 2005. Association for Computational Linguistics.
[11] R. Sun, J. Jiang, Y. F. Tan, H. Cui, T.-S. Chua, and M.-Y. Kan. Using syntactic and semantic relation analysis in question answering. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005.
[12] E. M. Voorhees. Overview of the TREC 2003 question answering track. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003. National Institute of Standards and Technology.
[13] E. M. Voorhees. Overview of the TREC 2005 question answering track. In TREC '05: Proceedings of the 14th Text REtrieval Conference, Gaithersburg, Maryland, 2005. National Institute of Standards and Technology.
[14] J. Xu, A. Licuanan, and R. Weischedel. TREC 2003 QA at BBN: Answering definitional questions. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.
[15] D. Zhang and W. S. Lee. A language modeling approach to passage question answering. In TREC '03: Proceedings of the 12th Text REtrieval Conference, Gaithersburg, Maryland, 2003.
Towards Task-based Personal Information Management Evaluations

ABSTRACT
Personal Information Management (PIM) is a rapidly growing area of research concerned with how people store, manage and re-find information. A feature of PIM research is that many systems have been designed to assist users in managing and re-finding information, but very few have been evaluated. This has been noted by several scholars and explained by the difficulties involved in performing PIM evaluations. The difficulties include that people re-find information from within unique personal collections; that researchers know little about the tasks that cause people to re-find information; and numerous privacy issues concerning personal information. In this paper we aim to facilitate PIM evaluations by addressing each of these difficulties. In the first part, we present a diary study of information re-finding tasks. The study examines the kinds of tasks that require users to re-find information and produces a taxonomy of re-finding tasks for email messages and web pages. In the second part, we propose a task-based evaluation methodology based on our findings and examine the feasibility of the approach using two different methods of task creation.

1. INTRODUCTION
Personal Information Management (PIM) is a rapidly growing area of research concerned with how people store, manage and re-find information. PIM systems - the methods and procedures by which people handle, categorize, and retrieve information on a day-to-day basis [18] - are becoming increasingly popular. However, the evaluation of these PIM systems is problematic. One of the main difficulties is caused by the personal nature of PIM. People collect information as a natural consequence of completing other tasks. This means that the collections people generate are unique to them alone and the information within a collection is intrinsically linked with the owner's personal experiences.
As personal collections are unique, we cannot create evaluation tasks that are applicable to all participants in an evaluation. Secondly, personal collections may contain information that participants are uncomfortable sharing within an evaluation. The precise nature of this information - what information individuals would prefer to keep private - varies across individuals, making it difficult to base search tasks on the contents of individual collections. Therefore, experimenters face a number of challenges in conducting realistic but controlled PIM evaluations.

A particular feature of PIM research is that many systems have been designed to assist users with managing and re-finding their information, but very few have been evaluated; a situation noted by several scholars [1, 6, 7]. Recently, however, researchers have started to focus on ways to address the problem of PIM evaluation. For example, Kelly [16] proposes that numerous methodologies must be applied to examine and understand the many issues involved in PIM, although she makes explicit reference to the need for laboratory-based PIM studies and a common set of shared tasks to make this possible. Capra [6] also identifies the need for controlled PIM lab evaluations to complement other evaluation techniques, placing specific emphasis on the need to understand PIM behaviour at the task level.

In this paper, we attempt to address these difficulties in order to facilitate controlled laboratory PIM evaluations. In the first part of this paper we present a diary study of information re-finding tasks. The study examines the kinds of tasks that require users to re-find information and produces a taxonomy of re-finding tasks for email messages and web pages. We also look at the features of the tasks that make re-finding difficult.
In the second part, we propose a task-based evaluation methodology based on our findings and examine the feasibility of the approach using different methods of task creation. Thus, this paper offers two contributions to the field: an increased understanding of PIM behaviour at the task level and an evaluation method that will facilitate further investigations.

2. RELATED WORK
A variety of approaches are available to study PIM. Naturalistic approaches study participants performing naturally, completing their own tasks as they occur, within familiar environments. These approaches allow researchers to overcome many of the difficulties caused by the personal nature of PIM. As the tasks performed are real and not simulated, the participants can utilise their own experiences, previous knowledge and information collections to complete the tasks. A benefit of the approach is that data can be captured continuously over extended time periods, and measurements can be taken at fixed points in time within these [15]. Naturalistic approaches can be applied by conducting fieldwork [17, 8], by ethnographic methods as suggested by [15], or via log-file analysis [9, 7]. Both ethnographic and fieldwork methods require the presence of an experimenter to assess how PIM is performed, which raises a number of issues. Firstly, evaluation in this way is expensive, taking long periods of time to study small numbers of participants, and these small samples may not be representative of the behaviour of larger populations. Secondly, because participants cannot be continually observed, experimenters must choose when to observe, and this may affect the findings.

An alternative strategy to conducting naturalistic evaluations is to utilise log-file analysis. This approach makes use of logging software that captures a broad sampling of user activities in the context of natural use of a system.
In [9] a novel PIM search tool was deployed to 234 users, and the log data provided detailed information about the nature of user queries, interactions with the query interface, and properties of the items retrieved. Log-file analysis is a powerful methodology as it allows the capture of a large quantity of detailed information about how users behave with the system, without the expense and distracting influence of an observer. Nevertheless, there are limitations to this strategy. Firstly, to attain useful results, the deployed prototype must be something that people would use, i.e. it has to be a fully functional piece of software that offers an improvement on the systems ordinarily available to participants. Developing a research prototype to this standard is beyond the resources of many researchers. Further, caution must be taken when analysing logs, as the captured data shows nothing about the goals and intentions that the user had at the time. It is, therefore, difficult to make any concrete statements about the reasons for the behaviour depicted in the logs. This reveals a need to complement naturalistic studies with controlled experiments, where the experimenter can relate the behaviour of study participants to goals associated with known search tasks.

Laboratory-based studies simulate users' real-world environment in the controlled setting of the laboratory, offering the ability to study issues that are tightly defined and narrow in scope. One difficulty in performing this kind of evaluation is sourcing collections to evaluate. Kelly [16] proposes the introduction of a shared test collection that would provide sharable, reusable data sets, tasks and metrics for those interested in conducting PIM research. This may be useful for testing algorithms in a way similar to TREC in mainstream IR [13].
However, a shared collection would be unsuitable for user studies because it would not be possible to incorporate the personal aspects of PIM while using a common, unfamiliar collection. One alternative approach is to ask users to provide their own information collections to simulate familiar environments within the lab. This approach has been applied to study the re-finding of personal photographs [11], email messages [20], and web bookmarks [21]. The usefulness of this approach depends on how easy it is to transfer the collection or gain remote access. Another solution is to use the entire web as a collection when studying web page re-finding [4]. This may be appropriate because previous studies have shown that people often use web search engines for this purpose [5].

A second difficulty in performing PIM laboratory studies is creating tasks for participants to perform that can be solved by searching a shared or personal collection. Tasks relate to the activity that results in a need for information [14] and are acknowledged to be important in determining user behaviour [26]. A large body of work has been carried out to understand the nature of tasks and how the type of task influences user information-seeking behaviour. For example, tasks have been categorised in terms of increasing complexity [3], and task complexity has been suggested to affect how searchers perceive their information needs [25] and how they try to find information [3]. Other previous work has provided methodologies that allow the simulation of tasks when studying information-seeking behaviour [2]. However, little is known about the kinds of tasks that cause people to search their personal stores or re-find information that they have seen before. Consequently, it is difficult to devise simulated work task situations for PIM.
The exception is the study of personal photograph management, where Rodden's work on categorising personal photograph search tasks has facilitated the creation of simulated work task situations [22]. There have been other suggestions as to how to classify PIM tasks. For example, [5] asked participants to classify tasks based on how frequently they perform the task type in their daily life and how familiar they were with the location of the sought-after information, and several scholars have classified information objects by the frequency of their use, e.g. [24]. While these are interesting properties that may affect how a task will be performed, they do not give experimenters enough scope to devise tasks.

Personal collections are one reason why task creation is so difficult. Rodden's photo task taxonomy provides a solution here because it allows tasks tailored to private collections to be categorised. Systems can then be compared across task types for different users [11]. Unfortunately, no equivalent taxonomy exists for other types of information object. Further, other types of object are more sensitive to privacy than photographs; it is unlikely that participants would be as content to allow researchers to browse their email collections to create tasks as they were with photographs in [11]. This presents a serious problem: how can researchers devise tasks that correspond to private collections without an understanding of the kinds of tasks people perform, and without jeopardising the privacy of study participants? A few methods have been proposed. For example, [20] studied email search by asking participants to re-find emails that had been sent to every member of a department, allowing the same tasks to be used for all of the study participants.
This approach ensured that privacy issues were avoided and that participants could use things they remembered to complete the tasks. Nevertheless, the systems were only tested using one type of task: participants were asked to find single emails, each of which shared common properties. In section 4 we show that people perform a wider range of email re-finding tasks than this. In [4], generic search tasks were artificially created by running evaluations over two sessions. In the first session, participants were asked to complete work tasks that involved finding some unknown information. In the second session, participants completed the same tasks again, which naturally involved some re-finding behaviour. The limitations of this technique are that it does not allow participants to exploit any personal connections with the information, because the information they are looking for may not correspond to any other aspect of their lives. Further, if time is utilised by a system or interface being tested, the approach is unsuitable because all of the objects found in the first session will have been accessed within the same time period.

Our review of evaluation approaches motivates a requirement for controlled laboratory experiments that allow tightly defined aspects of systems or interfaces to be tested. Unfortunately, it has also been shown that there are difficulties involved in performing this type of evaluation: it is difficult to source collections and to devise tasks that correspond to private collections while at the same time protecting the privacy of the study participants.

In the following section we present a diary study of re-finding tasks for email and web pages. The outcome is a classification of tasks similar to that devised by Rodden for personal photographs [22].
In section 5 we build on this work by examining methods for creating tasks that do not compromise the privacy of participants, and discuss how our work can facilitate task-based PIM user evaluations. We show that by collecting tasks using electronic diaries, not only can we learn about the tasks that cause people to re-find personal information, but we can also learn about the contents of private collections without compromising the privacy of the participants. This knowledge can then be used to construct tasks for use in PIM evaluations.

3. METHOD
Diary studies are a naturalistic technique, offering the ability to capture factual data in a natural setting, without the distracting influence of an observer. Limitations of the technique include difficulties in maintaining participant dedication levels and in convincing participants that seemingly mundane information is useful and should be reported [19]. [12] suggest, however, that these negative effects can be limited with careful design and good implementation. In our diary study, we followed the suggestions in [12] to achieve the best possible data. To this end, we restricted the recorded tasks to web and email re-finding. By asking users to record fewer tasks, it was anticipated that participant apathy would be reduced and dedication levels maintained. The participants were provided with a personalised web form in which they could record details about their information needs and the contexts in which these needs developed. Web forms were deployed rather than paper-based diaries because, to re-find web and email information, the user would be at a computer with an Internet connection and there would be no need to search for a paper-based diary and pen.

The diary form solicited the following information: whether the information need related to re-finding a web page or an email message, and a description of the task being performed.
This description was to contain both the information that the participant wished to find and the reason that they needed the information. To help with this, the form gave three example task descriptions, which were also explained verbally to each participant during an introductory session. The experimenter ensured that the participants understood that the tasks to be recorded were not limited to the types shown in the examples; the examples were supplied purely to get participants thinking about the kinds of things they could record and to show the level and type of detail expected. The form also asked participants to rate each task in terms of difficulty (on a scale from 1-5, where 1 was very easy and 5 was very hard). Finally, they were asked when they had last looked at the sought-after information, choosing from 5 options (less than a day ago, less than a week ago, less than a month ago, less than a year ago, more than a year ago). Time information was used to examine the frequency with which the participants re-found old and new information and, when combined with difficulty ratings, created a picture of whether or not the time period between accessing and re-accessing impacted on how difficult the participants perceived tasks to be.

36 participants, recruited by mass advertisement through departmental communication channels, research group meetings and undergraduate lectures, were asked to digitally record details of their information re-finding tasks over a period of approximately 3 weeks. The final population consisted of 4 academic staff members, 8 research staff members, 6 research students and 18 undergraduate students. The ages of participants ranged from 19-59. As both personal and work tasks were recorded, the results collected cover a broad range of re-finding tasks.

4. RESULTS
Several analyses were performed on the captured data. The following sections present the findings.
Firstly, we examine the kinds of re-finding tasks that were performed, both when searching on email and on the web. Next, we consider the distribution of tasks: which kinds of tasks were performed most often by participants. Lastly, we explore the kinds of re-finding tasks that participants perceived as difficult.

4.1 Nature of Web and Email Re-finding Tasks
During the study 412 tasks were recorded. 150 (36.41%) of these tasks were email-based, 262 (63.59%) were web-based. As with most diary studies, the number of tasks recorded varied extensively between participants. The median number of tasks per participant was 8 (interquartile range (IQR)=9.5). More web tasks (median=5, IQR=7.5) were recorded than email tasks (median=3, IQR=3). This means that on average each participant recorded approximately one task every two days.

From the descriptions supplied by the participants, we found similar features in the recorded tasks for both email and web re-finding. Based on this observation a joint classification scheme was devised, encompassing both email and web tasks. The tasks were classified as one of three types: lookup tasks, item tasks and multi-item tasks. Lookup tasks involve searching for specific information from within a resource, for example an email or a web page, where the resource may or may not be known. Some recorded examples of lookup tasks were:
• LU1: Looking for the course code for a class - it's used in a script that is run to set up a practical. I'd previously obtained this about 3 weeks ago from our website.
• LU2: I am trying to determine the date by which I step down as an External Examiner. This is in an email somewhere
• LU3: Looking for description of log format from system R developed for student project.
I think he sent me it in an email

Item tasks involve looking for a particular email or web page, perhaps to pass on to someone else, or when the entire contents are needed to complete the task. Some recorded examples of item tasks were:
• I1: Looking for SIGIR 2002 paper to give to another student
• I2: Find the receipt of an online airline purchase required to claim expenses
• I3: I need the peer evaluation forms for the MIA class. E sent me them by email

To clarify, lookup tasks differ from item tasks in two ways: in the quantity of information required, and in what the user knows about what they are looking for. Lookup tasks involve a need for a small piece of information, e.g. a phone number or an ingredient, and the user may or may not know exactly the resource that contains this information. In item tasks the user knows exactly the resource they are looking for and needs the entire contents of that resource.

Multi-item tasks were tasks that required information contained within numerous web pages or email messages. Often these tasks required the user to process or collate the information in order to solve the task. Some recorded examples were:
• MI1: Looking for obituaries and other material on the novelist John Fowles, who died at the weekend. Accessed the online Guardian and IMES
• MI2: Trying to find details on Piccolo graphics framework. Remind myself of what it is and what it does.
Looking to build a GUI within Eclipse
• MI3: I am trying to file my emails regarding IPM and I am looking for any emails from or about this journal

There were a number of tasks that were difficult to classify. For example, consider the following recorded task:
• LU4: re-find AS's paper on graded relevance assessments because I want to see how she presented her results for a paper I am writing

This task actually consists of two sub-tasks: one item task (re-find the paper) and one lookup task (look for specific information within the paper). It was decided to treat this as a lookup task because the user's ultimate goal was to access and use the information within the resource. There were a number of examples of such combined tasks, mainly of the form item then lookup, but there were also examples of item then multi-item. For example:
• MI4: re-find Kelkoo website so that I can re-check the prices of hair-straighteners for my girlfriend

A second source of ambiguity came from tasks such as finding an email containing a URL as a means of re-accessing a web page. It was decided to categorise these as lookup tasks as well, because in all cases they were logged by participants as email searches and, within this context, what the participants were looking for was information within an email. Another problem was that some of the logs lacked the detail required to perform a categorisation, e.g.
• U1: searching for how to retrieve user's selection from a message box. Decided to use some other means

Such tasks were labelled as U for unclassifiable. To verify the consistency of the taxonomy, the tasks were re-categorised by the same researcher after a delay of two weeks. The agreement between the results of the two analyses was high (96.8%). Further, we asked a researcher with no knowledge of the project or the field to classify a sample of 50 tasks. The second researcher achieved 90% agreement.
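The consistency checks above use simple percent agreement between two categorisation passes. A minimal sketch of that computation (the label lists below are made up for illustration, not the study's data):

```python
# Percent agreement between two categorisation passes over the same tasks.
# Labels: LU = lookup, I = item, MI = multi-item, U = unclassifiable.

def percent_agreement(labels_a, labels_b):
    """Fraction of tasks assigned the same category in both passes."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Hypothetical example: the two passes disagree on one task out of ten.
first_pass  = ["LU", "I", "I", "MI", "LU", "U", "I", "LU", "MI", "I"]
second_pass = ["LU", "I", "I", "MI", "LU", "I", "I", "LU", "MI", "I"]
print(percent_agreement(first_pass, second_pass))  # -> 0.9
```

A chance-corrected statistic such as Cohen's kappa would be a stricter check; the figures reported here are raw agreement.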
We feel that this high agreement on a large number of tasks by more than one researcher provides evidence for the reliability of the classification scheme.

The distribution of task types is shown in table 1. Overall, lookup and item tasks were the most common, with multi-item tasks representing only 8.98% of those recorded. The distribution of the task types was different for web and email re-finding. The majority of email tasks (60%) involved looking for information within an email (lookup), in contrast to web tasks, where the majority (52.67%) involved looking for a single web page (item). Another distinction was the number of recorded multi-item tasks for web and email. Multi-item tasks were very rare for email re-finding (only 2.67% of email tasks involved searching for multiple resources), but comparatively common for web re-finding (12.6%).

         Lookup         Item           Multi-item    Unclass.
Email     90 (60.00%)    52 (34.67%)    4 (2.67%)    4 (2.67%)
Web       87 (33.21%)   138 (52.67%)   33 (12.60%)   4 (1.53%)
All      177 (42.96%)   190 (46.12%)   37 (8.98%)    8 (1.94%)
Table 1: The distribution of task types

In addition to the three-way classification described above, the recorded tasks were classified with respect to the temperature metaphor proposed by [24], which classifies information as one of three temperatures: hot, warm and cold. We classified the tasks using the form data. Information that had been seen less than a day or less than a week before the task was defined as hot, information that had been seen less than a month before the task as warm, and information that had been seen less than a year or more than a year before the task as cold. Unfortunately, a technical difficulty with the form allowed only 335 (81.3%) of the tasks to be classified. The remainder were defined as U for unclassifiable.
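The mapping just described collapses the form's five last-access options into the three temperature bands. A minimal sketch (the function name and the handling of missing responses are ours, not the authors'):

```python
# Collapse the diary form's five last-access options into the
# hot/warm/cold temperature bands described above.
TEMPERATURE = {
    "less than a day ago":   "hot",
    "less than a week ago":  "hot",
    "less than a month ago": "warm",
    "less than a year ago":  "cold",
    "more than a year ago":  "cold",
}

def classify_temperature(last_access):
    """Map a form response to a temperature; lost responses become 'U'."""
    return TEMPERATURE.get(last_access, "U")

print(classify_temperature("less than a week ago"))  # -> hot
print(classify_temperature(None))                    # -> U (form fault)
```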
A cross-tabulation of task types and temperatures is shown in table 2.

         Hot            Warm          Cold          Unclass.
Email     50 (33.33%)   36 (24.00%)   37 (24.67%)   27 (18.00%)
Web      112 (42.75%)   60 (22.90%)   40 (15.27%)   50 (19.08%)
All      162 (39.32%)   96 (23.30%)   77 (18.69%)   77 (18.69%)
Table 2: The distribution of temperatures

Most of the tasks that caused people to re-find web pages (42.75%) and email messages (33.33%) involved searching for information that had been accessed in the last week. However, there were also a number of re-finding tasks that involved searching for older information: 23.30% of the tasks recorded (24.00% for email and 22.90% for web) involved searching for information accessed in the last month, and 18.69% of the tasks recorded (24.67% for email and 15.27% for web) were looking for even older information. This is important with respect to evaluation because there is psychological evidence suggesting that people remember less over time, e.g. [23]. This means that users may find searching for older information more difficult, or may alter their seeking strategy when looking for hot, warm or cold information.

4.2 What tasks are difficult?
We looked for patterns in the recorded data to determine whether certain tasks were perceived as more difficult than others. For example, we examined whether the media type affected how difficult the participants perceived the task to be. There was no evidence that participants found either email (median=2, IQR=2) or web (median=2, IQR=2) tasks more difficult. We also investigated whether the type of task or the length of time between accessing and re-accessing made a task more difficult.
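The statistical comparison used for this is Mood's median test. As a self-contained illustration (standard library only, no continuity correction, simplified tie handling, and hypothetical 1-5 difficulty scores rather than the study's data):

```python
# A sketch of Mood's median test: pool two samples, find the grand median,
# build a 2x2 table of counts above / not above it, and compute a
# chi-square statistic with 1 degree of freedom.

def moods_median_test(a, b):
    """Chi-square statistic for whether samples a and b share a median."""
    pooled = sorted(a + b)
    n = len(pooled)
    grand_median = (pooled[n // 2] if n % 2 else
                    (pooled[n // 2 - 1] + pooled[n // 2]) / 2)
    # 2x2 contingency table: [above median, not above] per sample.
    table = [[sum(x > grand_median for x in s),
              sum(x <= grand_median for x in s)] for s in (a, b)]
    row_sums = [sum(row) for row in table]
    col_sums = [table[0][c] + table[1][c] for c in (0, 1)]
    chi2 = 0.0
    for r in (0, 1):
        for c in (0, 1):
            expected = row_sums[r] * col_sums[c] / n
            chi2 += (table[r][c] - expected) ** 2 / expected
    return chi2

# Hypothetical difficulty scores for "hot" vs "cold" look-up tasks.
hot = [1, 2, 1, 2, 1, 2, 1, 1, 2, 2]
cold = [4, 3, 5, 4, 4, 3, 5, 4, 3, 4]
chi2 = moods_median_test(hot, cold)
# chi2 > 3.841 (the 5% critical value for 1 df) indicates a difference.
```

In practice a library routine such as SciPy's `median_test`, which also returns a p-value, would be used instead of hand-rolling the statistic.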
Figure 1 shows this information graphically.

Figure 1: Difficulty ratings for task types

From figure 1, it does not appear that any particular task type was perceived as difficult relative to the others, although there is a suggestion that lookup tasks were perceived as more difficult when looking for cold information than hot, and that item tasks were perceived as more difficult for warm information than hot. To assess the relationship between information temperature and perceived difficulty, we used Mood's median tests to determine whether the ranks of difficulty scores were in agreement for the information temperatures being compared (p<0.05). For the lookup task data, there was evidence that hot tasks were perceived as easier than cold (p=0.0001) and that warm tasks were perceived as easier than cold tasks (p=0.0041), but there was no evidence to distinguish between the difficulty ratings of hot and warm tasks (p=0.593). For the item task data, there was evidence that hot and cold tasks were rated differently (p=0.024), but no evidence to distinguish between hot and warm tasks (p=0.05) or warm and cold tasks (p=0.272). These tests confirm that the length of time between accessing and re-accessing the sought-after information did influence how difficult participants perceived the task to be. Nevertheless, the large number of tasks of all types and temperatures rated by participants as easy (i.e. < 3) suggests that there are other factors that influence how difficult a task is perceived to be. To learn about these factors would require the kind of user evaluations proposed by [16, 6] - the kind of evaluations facilitated by our work.

4.3 Summary
In the first part of this paper, we described a diary study of web and email re-finding tasks.
We examined the types of task that caused the participants to search their personal stores and found three main categories: tasks where the user requires specific information from within a single resource, tasks where a single resource is required, and tasks that require information to be recovered from multiple resources. It was discovered that lookup and item tasks were recorded with greater frequency than multi-item tasks. Although no evidence was found that web or email tasks were more difficult, there was some evidence that the time between accessing and re-accessing affected how difficult the participants perceived tasks to be. These findings have implications for evaluating PIM behaviour at the task level. The remainder of this paper concentrates on this, discussing what the findings mean with respect to performing task-based PIM user evaluations.

5. TASK-BASED PIM EVALUATIONS
The findings described in section 4 are useful with respect to evaluation because they provide experimenters with enough knowledge to conduct controlled user evaluations in lab conditions. Greco-Latin square experimental designs can be constructed in which participants are assigned n tasks of the three types described above to perform on their own collections using x systems. This would allow the performance of the systems, or the behaviour of the participants using different systems, to be analysed with respect to the type of task being performed (lookup, item, or multi-item). In the following sections we evaluate the feasibility of this approach when employing different methods of task creation.

5.1 Using Real Tasks
One method of creating realistic re-finding tasks without compromising the privacy of participants is to use real tasks. Diary studies, similar to that described above, would allow experimenters to capture a pool of tasks for participants to complete by searching on their own collections.
This is extremely advantageous because it would allow experimenters to evaluate the behaviour of real users completing real search tasks on real collections, while in a controlled environment. There is also the additional benefit that the task descriptions would not make any assumptions about what the user would remember in a real-life situation, because they would only include the information that had been recorded, i.e. the information that was available when the user originally performed the task. Nevertheless, to gain these benefits we must, firstly, confirm that the task descriptions recorded are of sufficient quality to enable the task to be re-performed at a later date. Secondly, we must ensure that a diary study would provide experimenters with enough tasks to construct a balanced experimental design that would satisfy their data needs.

To examine the quality of recorded tasks, 6 weeks after the diary study had ended, we asked 6 of our participants, selected randomly from the pool of those who had recorded enough tasks, to re-perform 5 of their own tasks. The tasks were selected randomly from the pool of those available. The issued tasks consisted of 10 email and 20 web tasks; 9 of these were lookup tasks, 12 were item tasks, and 8 were multi-item tasks. The issued tasks represented a broad sampling of the complete set of recorded tasks. They also included tasks with vague descriptions, e.g.
• LU5: Find a software key for an application I required to reinstall.
• LU6: Trying to find a quote to use in a paper. Cannot remember the person or the exact quote

The usefulness of such tasks would rely on the memories of participants, i.e.
would the recorder of task LU5 remember which application he referred to, and would the recorder of LU6 remember enough about the context in which the task took place to re-perform it?

Presented with the tasks exactly as they had recorded them, the participants were asked to re-perform each task with any system of their choice. Of the 30 tasks issued, 26 (86.67%) were completed without problems, 2 (6.67%) were not completed because the description recorded was insufficient to recreate the task, and 2 (6.67%) were not completed because the task was too difficult or the required web page no longer existed. Experimenters are likely to be interested in this final group of tasks because it is important to discover what makes a task difficult and how user behaviour changes in these circumstances. Therefore, of the 30 tasks tested, only 2 were not of sufficient quality to be used in an evaluation situation. Further, there did not seem to be any issue of the type, temperature or difficulty ratings affecting the quality of the task descriptions. These findings suggest that the participants who recorded most tasks in the diary study also recorded tasks of sufficient quality.
However, did the diary study generate enough tasks to satisfy the needs of experimenters?

Participant   Tasks   Lookup   Item   Multi-item   Unclass.
10            26      16        8     2            0
43             9       4        5     0            0
26             9       5        4     0            0
8              9       8        1     0            0
40             8       5        3     0            0
18             7       3        4     0            0
4              6       5        1     0            0
7              6       5        0     1            0
12             5       4        0     0            1
22             5       4        1     0            0
36             5       0        5     0            0
46             5       2        2     0            1
3              5       3        2     0            0
Table 3: The quantities of recorded email tasks

Participant   Tasks   Lookup   Item   Multi-item   Unclass.
26            32       7       20     5            0
32            31      11       18     2            0
10            19       0       10     7            2
33            18       5       13     0            0
5             15       0        7     2            4
8             11       0        6     5            0
22            10       0        3     5            2
28            10       1        7     2            0
37            10       1        9     0            0
35             9       7        2     0            0
6              9       0        1     8            0
40             7       1        5     1            0
9              7       0        0     5            2
12             7       1        0     3            2
42             6       0        4     2            0
29             6       0        3     3            0
15             5       0        2     1            2
4              5       0        4     1            0
43             5       2        3     0            0
18             5       0        0     3            2
Table 4: The quantities of recorded web tasks

Naturally, the exact number of tasks required to perform a user evaluation will depend on the goals of the evaluation, the number of users, the number of systems to be tested, etc. However, for illustrative purposes we chose 5 tasks as a cut-off point for our data. From tables 3 and 4, which show the quantities of email and web tasks recorded for each participant, we can see that of the 36 participants, only 13 (36.1%) recorded 5 or more email tasks and 20 (55.6%) recorded 5 or more web tasks. This means that many of the recruited participants could not actually have participated in the final evaluation. This is a major limitation of using recorded tasks in evaluations, because participant recruitment for user tests is challenging and it may not be possible to recruit enough participants if experimenters lose between half and two-thirds of their populations. Further, there was some imbalance in the numbers of recorded tasks of different types. Some participants recorded several lookup tasks but very few item tasks, and others recorded several item tasks but few lookup tasks. There was also a specific lack of multi-item email tasks.
This situation makes it very difficult for experimenters to prepare balanced experimental designs. Therefore, even though our first test suggests that the quality of recorded tasks was sufficient for the participants to re-perform the tasks at a later stage, the number of tasks recorded was probably too low to make this a viable option for experimental task creation. However, it may be possible to increase the number of tasks recorded by frequently reminding participants or by making personal visits, etc.

5.2 Using Simulated Tasks Based on Real Tasks
Another benefit of diary studies is that they provide information about the contents and uses of private collections without invading participants' privacy. This section explores the possibility of using a combination of the knowledge gained from diary studies and other attributes known about participants to artificially create re-finding tasks corresponding to the taxonomy defined in section 4.1. We explain the techniques used and demonstrate the feasibility of creating simulated tasks within the context of a user evaluation investigating email re-finding behaviour. Space limitations prevent us from reporting our findings; instead we concentrate on the methods of task creation.

As preparation for the evaluation, we performed a second diary study, in which 34 new participants, consisting of 16 post-graduate students and 18 under-graduate students, recorded 150 email tasks over a period of approximately 3 weeks. The collected data revealed several patterns that helped with the creation of artificial tasks. For example, students in both groups recorded tasks relating to classes that they were taking at the time, and often different participants recorded tasks that involved searching for the same information.
This was useful because it suggested that even when a participant did not record a particular task, the task might still be applicable to their collection. Other patterns included that students within the same group often searched for emails containing announcements from the same source. For example, several undergraduate students recorded tasks that included re-finding information relating to job vacancies. There were also tasks that were recorded by participants in both groups, for example searching for an email that would re-confirm the pin code required to access the computer labs.

To supplement our knowledge of the participants' email collections, we asked 2 participants from each group to provide email tours. These consisted of short 5-10 minute sessions in which participants were asked to explain why they use email, who sends them email, and their organisational strategies. This approach has been used successfully in the past as a non-intrusive means of learning about how people store and maintain their personal information [17]. Originally, we had planned to ask more participants to provide tours, but we found 2 tours per group was sufficient for our needs. Again, patterns emerged that helped with task creation. We found content overlap within and between groups that confirmed many of our observations from the diary study data. For example, the students who gave tours revealed that they received emails from lecturers for particular class assignments, receipts for completed assignments, and various announcements from systems support and about job vacancies. Importantly, the participants were also able to confirm which other students had received the same information.
This confirmed that many of the tasks recorded during the diary study were applicable not only to the recorder, but to every participant in one or both groups.

Based on this initial investigatory work, a set of 15 tasks (5 of each type in our taxonomy) was created for each group of participants. We also created a set of tasks for a third group of participants, consisting of research and academic staff members, based on our knowledge of the emails our colleagues receive. Where possible we used the information recorded in the diary study descriptions to provide a context for the task, i.e. a work task or motivation that would require the task to be performed. When the diary study data did not provide sufficient context information to supply the participants with a robust description of the information need, we created simulated work task situations according to the guidelines of [2]. A further advantage of using simulated tasks in this way, rather than real tasks, is that some of the users will not have performed the task in the recent past, and this allows the examination of tasks that look for information of different temperatures. If only real tasks had been used, all of the participants would have performed the tasks during the period of the diary study.

The created tasks were used in a final evaluation, in which we examined the email re-finding behaviour of users with three different email systems. 21 users (7 in each group) performed 9 tasks each (1 task of each type on each system) using their own personal collections in a Greco-Latin square experimental design. Performing a PIM evaluation in this way allowed the examination of re-finding behaviour in a way not possible before: we were able to observe the email re-finding strategies employed by real users, performing realistic tasks, on their own collections in a controlled environment.
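An assignment of this shape (three systems, three task types) can be balanced with a Graeco-Latin square. A sketch of how such a square might be built (the system names are placeholders, not the systems actually tested):

```python
# Build a 3x3 Graeco-Latin square pairing systems with task types:
# each system and each task type appears once in every row and every
# column, and every (system, task type) pairing occurs exactly once.
systems = ["System A", "System B", "System C"]   # placeholder names
task_types = ["lookup", "item", "multi-item"]

def graeco_latin_square():
    # Superimpose two orthogonal order-3 Latin squares:
    # (i + j) mod 3 indexes systems, (i + 2j) mod 3 indexes task types.
    return [[(systems[(i + j) % 3], task_types[(i + 2 * j) % 3])
             for j in range(3)] for i in range(3)]

square = graeco_latin_square()
for row in square:
    print(row)
```

The three rows could serve as one participant's three sessions: together they cover all nine (system, task type) combinations, with each system and each task type appearing once per session.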
The study revealed that the participants remembered different attributes of emails, demonstrated different finding behaviour, and exhibited different levels of performance when asked to complete tasks of the different types in the taxonomy. The key to both the task creation and the analysis of the results was our taxonomy, which provided the template to create tasks and also a means to compare the behaviour and performance of different users (and systems) performing different tasks of the same type. Some of the findings of the evaluation will be published in [10].

Summarising the approach, to conduct a user experiment using our methodology, researchers would be required to perform the following steps: 1) conduct a diary study as above¹; 2) analyse the recorded tasks, looking for overlap between the participants; 3) supplement the knowledge gained about the contents of participants' collections by asking a selection of the participants to provide a tour of their collection; 4) use the knowledge gained to devise tasks of the three different types defined within the taxonomy. More detailed information on how to use the research described in this paper to perform task-based PIM evaluations can be found at our website (see footnote 1).

¹ Information about this and the diary forms required can be found at http://www.cis.strath.ac.uk/~dce/PIMevaluations

6. CONCLUSIONS
This paper has focused on overcoming the difficulties involved in performing PIM evaluations. The personal nature of PIM means that it is difficult to construct balanced experiments because participants each have their own unique collections, which are self-generated by completing other tasks. We suggested that to incorporate the personal aspects of PIM in evaluations, the performance of systems or users should be examined when users complete tasks on their own collections.
This approach itself has problems because task creation for personal collections is difficult: researchers don't know much about the kinds of re-finding tasks people perform, and they don't know what information is within individual personal collections. In this paper we described ways of overcoming these challenges to facilitate task-based PIM user evaluations.

In the first part of the paper we reported a diary study that examined the tasks that cause people to re-find email messages and web pages. The collected data included a wide range of both work and non-work related tasks, and based on the data we created a taxonomy of web and email re-finding tasks. We discovered that people perform three main types of re-finding task: tasks that require specific information from within a single resource, tasks that require a single complete resource, and tasks that require information to be recovered from multiple resources. In the second part of the paper, we discussed the significance of the taxonomy with respect to PIM evaluation. We demonstrated that balanced experiments could be conducted comparing system or user performance on the task categories within the taxonomy. We also suggested two methods of creating tasks that can be completed on personal collections without compromising the privacy of study participants. We examined these techniques, firstly by simulating an experimental situation, in which participants were asked to re-perform their own tasks as they had recorded them, and secondly in the context of a full evaluation. Performing evaluations in this way will allow systems that have been proposed to improve users' ability to manage and re-find their information to be tested, so that we can learn about the needs and desires of users.
Thus, this paper has offered two contributions to the field: an increased understanding of PIM behaviour at the task level, and an evaluation method that will facilitate further investigations.

7. ACKNOWLEDGMENTS
We would like to thank Dr Mark Baillie for his insightful comments and help analysing the data.

8. REFERENCES
[1] R. Boardman, Improving tool support for personal information management, Ph.D. thesis, Imperial College London, 2004.
[2] P. Borlund, The IIR evaluation model: A framework for evaluation of interactive information retrieval systems, Information Research 8 (2003), no. 3, paper no. 152.
[3] K. Byström and K. Järvelin, Task complexity affects information seeking and use, Information Processing and Management 31 (1995), no. 2, 191-213.
[4] R. G. Capra and M. A. Perez-Quinones, Re-finding found things: An exploratory study of how users re-find information, Tech. report, Virginia Tech, 2003.
[5] R. G. Capra and M. A. Perez-Quinones, Using web search engines to find and refind information, Computer 38 (2005), no. 10, 36-42.
[6] R. G. Capra and M. A. Perez-Quinones, Factors and evaluation of refinding behaviors, SIGIR 2006 Workshop on Personal Information Management, August 10-11, 2006, Seattle, Washington, 2006.
[7] E. Cutrell, D. Robbins, S. Dumais, and R. Sarin, Fast, flexible filtering with Phlat, Proc. SIGCHI '06 (New York, NY, USA), ACM Press, 2006, pp. 261-270.
[8] M. Czerwinski, E. Horvitz, and S. Wilhite, A diary study of task switching and interruptions, Proc. SIGCHI '04, 2004, pp. 175-182.
[9] S. Dumais, E. Cutrell, J. Cadiz, G. Jancke, R. Sarin, and D. C. Robbins, Stuff I've Seen: a system for personal information retrieval and re-use, Proc. SIGIR '03, 2003, pp. 72-79.
[10] D. Elsweiler and I. Ruthven, Memory and email re-finding, in preparation for ACM TOIS special issue on Keeping, Re-finding, and Sharing Personal Information (2007).
[11] D. Elsweiler, I. Ruthven, and C.
Jones, Dealing with\nfragmented recollection of context in information\nmanagement, Context-Based Information Retrieval\n(CIR-05) Workshop in CONTEXT-05, 2005.\n[12] D. Elsweiler, I. Ruthven, and C. Jones, Towards\nmemory supporting personal information management\ntools, (to appear in) Journal of the American Society\nfor Information Science and Technology (2007).\n[13] D. Harman, What we have learned, and not learned,\nfrom trec, Proc. ECIR 2000, 2000.\n[14] P. Ingwersen, Information retrieval interaction, Taylor\nGraham, 1992.\n[15] D. Kelly, B. Bederson, M. Czerwinski, J. Gemmell,\nW. Pratt, and M. Skeels (eds.), Pim workshop report:\nMeasurement and design, 2005.\n[16] D. Kelly and J. Teevan, (to appear in) personal\ninformation management, ch. Understanding what\nworks: Evaluating personal information management\ntools, Seattle: University of Washington Press., 2007.\n[17] B. H. Kwasnik, How a personal document\"s intended\nuse or purpose affects its classification in an office,\nSIGIR\"89 23 (1989), no. SI, 207-210.\n[18] M.W. Lansdale, The psychology of personal\ninformation management., Appl Ergon 19 (1988),\nno. 1, 55-66.\n[19] L. Palen and M. Salzman, Voice-mail diary studies for\nnaturalistic data capture under mobile conditions,\nCSCW \"02: Proceedings of the 2002 ACM conference\non Computer supported cooperative work, 2002.\n[20] M. Ringel, E. Cutrell, S. Dumais, and E. Horvitz,\nMilestones in time: The value of landmarks in\nretrieving information from personal stores., Proc.\nINTERACT 2003, 2003.\n[21] G. Robertson, M. Czerwinski, K. Larson, D. C.\nRobbins, D. Thiel, and M. van Dantzich, Data\nmountain: using spatial memory for document\nmanagement, Proc. UIST \"98:, 1998.\n[22] K. Rodden, How do people organise their photographs,\nBCS IRSG 21st Annual Colloquium on Information\nRetrieval Research,Glasgow, Scotland, 1999.\n[23] D.C. Rubin and A.E. 
Wenzel, One hundred years of\nforgetting: A quantitative description of retention,\nPsychological Bulletin 103 (1996), 734-760.\n[24] A. J. Sellen and R. H. R. Harper, The myth of the\npaperless office, MIT Press, Cambridge, MA, USA,\n2003.\n[25] P. Vakkari, Task complexity, problem structure and\ninformation actions: Integrating studies in on\ninformation seeking and retrieval., Information\nProcessing and Management 35 (1999), 819-837.\n[26] P. Vakkari, A theory of task-based information\nretrieval, Journal of Documentation 57 (2001), no. 1,\n44-60.", "keywords": "measurement;taxonomy;re-find information;user evaluation;human factor;experimenter;email message;individual collection;privacy issue;naturalistic approach;personal information management;laboratory-based study"}
-{"name": "test_H-5", "title": "Utility-based Information Distillation Over Temporally Sequenced Documents", "abstract": "This paper examines a new approach to information distillation over temporally ordered documents, and proposes a novel evaluation scheme for such a framework. It combines the strengths of and extends beyond conventional adaptive filtering, novelty detection and non-redundant passage ranking with respect to long-lasting information needs (\u2018tasks\" with multiple queries). Our approach supports fine-grained user feedback via highlighting of arbitrary spans of text, and leverages such information for utility optimization in adaptive settings. For our experiments, we defined hypothetical tasks based on news events in the TDT4 corpus, with multiple queries per task. Answer keys (nuggets) were generated for each query and a semiautomatic procedure was used for acquiring rules that allow automatically matching nuggets against system responses. We also propose an extension of the NDCG metric for assessing the utility of ranked passages as a combination of relevance and novelty. Our results show encouraging utility enhancements using the new approach, compared to the baseline systems without incremental learning or the novelty detection components.", "fulltext": "1. INTRODUCTION\nTracking new and relevant information from temporal\ndata streams for users with long-lasting needs has been a\nchallenging research topic in information retrieval. Adaptive\nfiltering (AF) is one such task of online prediction of the\nrelevance of each new document with respect to pre-defined\ntopics. Based on the initial query and a few positive\nexamples (if available), an AF system maintains a profile for\neach such topic of interest, and constantly updates it based\non feedback from the user. The incremental learning nature\nof AF systems makes them more powerful than standard\nsearch engines that support ad-hoc retrieval (e.g. 
Google\nand Yahoo) in terms of finding relevant information with\nrespect to long-lasting topics of interest, and more attractive\nfor users who are willing to provide feedback to adapt the\nsystem towards their specific information needs, without\nhaving to modify their queries manually.\nA variety of supervised learning algorithms (Rocchio-style\nclassifiers, Exponential-Gaussian models, local regression\nand logistic regression approaches) have been studied for\nadaptive settings, examined with explicit and implicit\nrelevance feedback, and evaluated with respect to utility\noptimization on large benchmark data collections in TREC\n(Text Retrieval Conferences) and TDT (Topic Detection and\nTracking) forums [1, 4, 7, 15, 16, 20, 24, 23]. Regularized\nlogistic regression [21] has been found representative for\nthe state-of-the-art approaches, and highly efficient for\nfrequent model adaptations over large document collections\nsuch as the TREC-10 corpus (over 800,000 documents and\n84 topics). Despite substantial achievements in recent\nadaptive filtering research, significant problems remain\nunsolved regarding how to leverage user feedback effectively\nand efficiently. Specifically, the following issues may\nseriously limit the true utility of AF systems in real-world\napplications:\n1. User has a rather \u2018passive\" role in the conventional\nadaptive filtering setup - he or she reacts to the system\nonly when the system makes a \u2018yes\" decision on a\ndocument, by confirming or rejecting that decision. A\nmore \u2018active\" alternative would be to allow the user to\nissue multiple queries for a topic, review a ranked list\nof candidate documents (or passages) per query, and\nprovide feedback on the ranked list, thus refining their\ninformation need and requesting updated ranked lists.\nThe latter form of user interaction has been highly\neffective in standard retrieval for ad-hoc queries. 
How\nto deploy such a strategy for long-lasting information\nneeds in AF settings is an open question for research.\n2. The unit for receiving a relevance judgment (\u2018yes\" or\n\u2018no\") is restricted to the document level in conventional\nAF. However, a real user may be willing to provide\nmore informative, fine-grained feedback via\nhighlighting some pieces of text in a retrieved document\nas relevant, instead of labeling the entire document\nas relevant. Effectively leveraging such fine-grained\nfeedback could substantially enhance the quality of an\nAF system. For this, we need to enable supervised\nlearning from labeled pieces of text of arbitrary span\ninstead of just allowing labeled documents.\n3. System-selected documents are often highly\nredundant. A major news event, for example, would be\nreported by multiple sources repeatedly for a while,\nmaking most of the information content in those\narticles redundant with each other. A conventional AF\nsystem would select all these redundant news stories\nfor user feedback, wasting the user\"s time while offering\nlittle gain. Clearly, techniques for novelty detection\ncan help in principle [25, 2, 22] for improving the\nutility of the AF systems. However, the effectiveness of\nsuch techniques at passage level to detect novelty with\nrespect to user\"s (fine-grained) feedback and to detect\nredundancy in ranked lists remains to be evaluated\nusing a measure of utility that mimics the needs of a\nreal user.\nTo address the above limitations of current AF systems,\nwe propose and examine a new approach in this paper,\ncombining the strengths of conventional AF (incremental\nlearning of topic models), multi-pass passage retrieval\nfor long-lasting queries conditioned on topic, and novelty\ndetection for removal of redundancy from user interactions\nwith the system. 
We call the new process utility-based\ninformation distillation.\nNote that conventional benchmark corpora for AF\nevaluations, which have relevance judgments at the document level\nand do not define tasks with multiple queries, are insufficient\nfor evaluating the new approach. Therefore, we extended a\nbenchmark corpus - the TDT4 collection of news stories and\nTV broadcasts - with task definitions, multiple queries per\ntask, and answer keys per query. We have conducted our\nexperiments on this extended TDT4 corpus and have made\nthe additionally generated data publicly available for future\ncomparative evaluations 1\n.\nTo automatically evaluate the system-returned arbitrary\nspans of text using our answer keys, we further developed\nan evaluation scheme with semi-automatic procedure for\n1\nURL: http://nyc.lti.cs.cmu.edu/downloads\nacquiring rules that can match nuggets against system\nresponses. Moreover, we propose an extension of NDCG\n(Normalized Discounted Cumulated Gain) [9] for assessing\nthe utility of ranked passages as a function of both relevance\nand novelty.\nThe rest of this paper is organized as follows. Section\n2 outlines the information distillation process with a\nconcrete example. Section 3 describes the technical cores\nof our system called CAF\u00b4E - CMU Adaptive Filtering\nEngine. Section 4 discusses issues with respect to evaluation\nmethodology and proposes a new scheme. Section 5\ndescribes the extended TDT4 corpus. Section 6 presents\nour experiments and results. Section 7 concludes the study\nand gives future perspectives.\n2. A SAMPLE TASK\nConsider a news event - the escape of seven convicts\nfrom a Texas prison in December 2000 and their capture a\nmonth later. Assuming a user were interested in this event\nsince its early stage, the information need could be: \u2018Find\ninformation about the escape of convicts from Texas prison,\nand information related to their recapture\". The associated\nlower-level questions could be:\n1. 
How many prisoners escaped?\n2. Where and when were they sighted?\n3. Who are their known contacts inside and outside the\nprison?\n4. How are they armed?\n5. Do they have any vehicles?\n6. What steps have been taken so far?\nWe call such an information need a task, and the\nassociated questions as the queries in this task. A\ndistillation system is supposed to monitor the incoming\ndocuments, process them chunk by chunk in a temporal\norder, select potentially relevant and novel passages from\neach chunk with respect to each query, and present a ranked\nlist of passages to the user. Passage ranking here is based on\nhow relevant a passage is with respect to the current query,\nhow novel it is with respect to the current user history (of\nhis or her interactions with the system), and how redundant\nit is compared to other passages with a higher rank in the\nlist.\nWhen presented with a list of passages, the user may\nprovide feedback by highlighting arbitrary spans of text\nthat he or she found relevant. These spans of text are\ntaken as positive examples in the adaptation of the query\nprofile, and also added to the user\"s history. Passages not\nmarked by the user are taken as negative examples. As\nsoon as the query profile is updated, the system re-issues\na search and returns another ranked list of passages where\nthe previously seen passages are either removed or ranked\nlow, based on user preference. For example, if the user\nhighlights \u2018...officials have posted a $100,000 reward for\ntheir capture...\" as relevant answer to the query What\nsteps have been taken so far?, then the highlighted piece\nis used as an additional positive training example in the\nadaptation of the query profile. This piece of feedback is\nalso added to the user history as a seen example, so that in\nfuture, the system will not place another passage mentioning\n\u2018$100,000 reward\" at the top of the ranked list. 
However,\nan article mentioning \u2018...officials have doubled the reward\nmoney to $200,000...\" might be ranked high since it is\nboth relevant to the (updated) query profile and novel with\nrespect to the (updated) user history. The user may modify\nthe original queries or add a new query during the process;\nthe query profiles will be changed accordingly. Clearly,\nnovelty detection is very important for the utility of such\na system because of the iterative search. Without novelty\ndetection, the old relevant passages would be shown to the\nuser repeatedly in each ranked list.\nThrough the above example, we can see the main\nproperties of our new framework for utility-based\ninformation distillation over temporally ordered documents. Our\nframework combines and extends the power of adaptive\nfiltering (AF), ad-hoc retrieval (IR) and novelty detection\n(ND). Compared to standard IR, our approach has the\npower of incrementally learning long-term information needs\nand modeling a sequence of queries within a task. Compared\nto conventional AF, it enables a more active role of the\nuser in refining his or her information needs and requesting\nnew results by allowing relevance and novelty feedback via\nhighlighting of arbitrary spans of text in passages returned\nby the system.\nCompared to past work, this is the first evaluation\nof novelty detection integrated with adaptive filtering for\nsequenced queries that allows flexible user feedback over\nranked passages. The combination of AF, IR and ND with\nthe new extensions raises an important research question\nregarding evaluation methodology: how can we measure the\nutility of such an information distillation system? Existing\nmetrics in standard IR, AF and ND are insufficient, and new\nsolutions must be explored, as we will discuss in Section 4,\nafter describing the technical cores of our system in the next\nsection.\n3. 
TECHNICAL CORES

The core components of CAFÉ are: 1) AF for incremental learning of query profiles, 2) IR for estimating relevance of passages with respect to query profiles, 3) ND for assessing novelty of passages with respect to the user's history, and 4) an anti-redundancy component to remove redundancy from ranked lists.

3.1 Adaptive Filtering Component

We use a state-of-the-art algorithm in the field - the regularized logistic regression method, which had the best results on several benchmark evaluation corpora for AF [21]. Logistic regression (LR) is a supervised learning algorithm for statistical classification. Based on a training set of labeled instances, it learns a class model which can then be used to predict the labels of unseen instances. Its performance, as well as its efficiency in terms of training time, makes it a good candidate when frequent updates of the class model are required, as is the case in adaptive filtering, where the system must learn from each new piece of feedback provided by the user. (See [21] and [23] for computational complexity and implementation issues.)

In adaptive filtering, each query is considered as a class, and the probability of a passage belonging to this class corresponds to the degree of relevance of the passage with respect to the query. For training the model, we use the query itself as the initial positive training example of the class, and the user-highlighted pieces of text (marked as Relevant or Not-relevant) during feedback as additional training examples. To address the cold-start issue in the early stage before any user feedback is obtained, the system uses a small sample from a retrospective corpus as the initial negative examples in the training set.
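As a concrete illustration, the profile training described above can be sketched with a small L2-regularized logistic regression fit by batch gradient descent. This is a simplified stand-in for the actual solver of [21]; the function names, hyper-parameter values, and optimizer choice are our own assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_profile(X, y, lam=0.1, lr=0.5, epochs=200):
    """Fit an L2-regularized logistic-regression query profile.
    X is an (examples x terms) TF-IDF matrix; y holds 0/1 relevance
    labels. The returned weight vector w is the query profile."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)                       # current relevance estimates
        grad = X.T @ (p - y) / len(y) + lam * w  # log-loss gradient + L2 term
        w -= lr * grad
    return w
```

In the distillation loop, the query itself would seed the positives, a retrospective sample would seed the negatives, and `train_profile` would be re-run after each batch of highlighted feedback.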
The details of using\nlogistic regression for adaptive filtering (assigning different\nweights to positive and negative training instances, and\nregularizing the objective function to prevent over-fitting on\ntraining data) are presented in [21].\nThe class model w\u2217\nlearned by Logistic Regression, or the\nquery profile, is a vector whose dimensions are individual\nterms and whose elements are the regression coefficients,\nindicating how influential each term is in the query profile.\nThe query profile is updated whenever a new piece of user\nfeedback is received. A temporally decaying weight can be\napplied to each training example, as an option, to emphasize\nthe most recent user feedback.\n3.2 Passage Retrieval Component\nWe use standard IR techniques in this part of our system.\nIncoming documents are processed in chunks, where each\nchunk can be defined as a fixed span of time or as a fixed\nnumber of documents, as preferred by the user. For each\nincoming document, corpus statistics like the IDF (Inverted\nDocument Frequency) of each term are updated. We use a\nstate-of-the-art named entity identifier and tracker [8, 12]\nto identify person and location names, and merge them\nwith co-referent named entities seen in the past. Then\nthe documents are segmented into passages, which can be\na whole document, a paragraph, a sentence, or any other\ncontinuous span of text, as preferred. Each passage is\nrepresented using a vector of TF-IDF (Term\nFrequencyInverse Document Frequency) weights, where term can be a\nword or a named entity.\nGiven a query profile, i.e. 
the logistic regression solution w* as described in Section 3.1, the system computes the posterior probability of relevance for each passage x as

  f_RL(x) ≡ P(y = 1 | x, w*) = 1 / (1 + e^(-w*·x))    (1)

Passages are ordered by their relevance scores, and the ones with scores above a threshold (tuned on a training set) comprise the relevance list that is passed on to the novelty detection step.

3.3 Novelty Detection Component

CAFÉ maintains a user history H(t), which contains all the spans of text h_i that the user highlighted (as feedback) during his or her past interactions with the system, up to the current time t. Denoting the history as

  H(t) = {h_1, h_2, ..., h_t},    (2)

the novelty score of a new candidate passage x is computed as:

  f_ND(x) = 1 - max_{i ∈ 1..t} cos(x, h_i)    (3)

where both the candidate passage x and the highlighted spans of text h_i are represented as TF-IDF vectors.

The novelty score of each passage is compared to a pre-specified threshold (also tuned on a training set), and any passage with a score below this threshold is removed from the relevance list.

3.4 Anti-redundant Ranking Component

Although the novelty detection component ensures that only novel (previously unseen) information remains in the relevance list, this list might still contain the same novel information at multiple positions in the ranked list. Suppose, for example, that the user has already read about a $100,000 reward for information about the escaped convicts. A new piece of news that the award has been increased to $200,000 is novel since the user hasn't read about it yet. However, multiple news sources would report this news and we might end up showing (redundant) articles from all these sources in a ranked list. Hence, a ranked list should also be made non-redundant with respect to its own contents.
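Before turning to the anti-redundancy step, the relevance scoring of equation (1) and the novelty scoring of equation (3) can be sketched in a few lines, assuming passages and highlighted spans have already been converted to TF-IDF vectors (the function names here are illustrative):

```python
import numpy as np

def relevance(x, w):
    """Eq. (1): posterior probability of relevance of passage x
    under query profile w (both are term-weight vectors)."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b) / (na * nb) if na > 0 and nb > 0 else 0.0

def novelty(x, history):
    """Eq. (3): one minus the maximum cosine similarity between
    passage x and the highlighted spans h_i in the user history."""
    if not history:
        return 1.0
    return 1.0 - max(cosine(x, h) for h in history)
```

A passage identical to something already highlighted scores 0 novelty; a passage sharing no terms with the history scores 1.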
We\nuse a simplified version of the Maximal Marginal Relevance\nmethod [5], originally developed for combining relevance and\nnovelty in text retrieval and summarization. Our procedure\nstarts with the current list of passages sorted by relevance\n(section 3.2), filtered by Novelty Detection component\n(section 3.3), and generates a new non-redundant list as\nfollows:\n1. Take the top passage in the current list as the top one\nin the new list.\n2. Add the next passage x in the current list to the new\nlist only if\nfAR(x) > t\nwhere\nfAR(x) = 1 \u2212 max\npi\u2208Lnew\n{cos(x, pi)}\nand Lnew is the set of passages already selected in the\nnew list.\n3. Repeat step 2 until all the passages in the current list\nhave been examined.\nAfter applying the above-mentioned algorithm, each passage\nin the new list is sufficiently dissimilar to others, thus\nfavoring diversity rather than redundancy in the new ranked\nlist. The anti-redundancy threshold t is tuned on a training\nset.\n4. EVALUATION METHODOLOGY\nThe approach we proposed above for information\ndistillation raises important issues regarding evaluation\nmethodology. Firstly, since our framework allows the output to be\npassages at different levels of granularity (e.g. k-sentence\nwindows where k may vary) instead of a fixed length, it\nis not possible to have pre-annotated relevance judgments\nat all such granularity levels. Secondly, since we wish to\nmeasure the utility of the system output as a combination of\nboth relevance and novelty, traditional relevance-only based\nmeasures must be replaced by measures that penalize the\nrepetition of the same information in the system output\nacross time. Thirdly, since the output of the system is\nranked lists, we must reward those systems that present\nuseful information (both relevant and previously unseen)\nusing shorter ranked lists, and penalize those that present\nthe same information using longer ranked lists. 
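For reference, the greedy anti-redundant ranking procedure of Section 3.4 can be sketched as a single pass over the relevance-sorted list (a simplified MMR-style filter; the threshold value used here is illustrative, not the tuned one):

```python
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b) / (na * nb) if na > 0 and nb > 0 else 0.0

def anti_redundant(ranked, t=0.5):
    """Greedy pass over a relevance-sorted list of TF-IDF passage
    vectors: keep passage x only if f_AR(x) = 1 - max cos(x, p) over
    already-kept passages exceeds the threshold t."""
    new_list = []
    for x in ranked:
        if not new_list or 1.0 - max(cosine(x, p) for p in new_list) > t:
            new_list.append(x)
    return new_list
```

Because the input is already sorted by relevance, the filter always keeps the most relevant representative of each group of near-duplicates.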
None of\nthe existing measures in ad-hoc retrieval, adaptive filtering,\nnovelty detection or other related areas (text summarization\nand question answering) have desirable properties in all the\nthree aspects. Therefore, we must develop a new evaluation\nmethodology.\n4.1 Answer Keys\nTo enable the evaluation of a system whose output\nconsists of passages of arbitrary length, we borrow the\nconcept of answer keys from the Question Answering (QA)\ncommunity, where systems are allowed to return arbitrary\nspans of text as answers. Answer keys define what should\nbe present in a system response to receive credit, and\nare comprised of a collection of information nuggets, i.e.\nfactoid units about which human assessors can make binary\ndecisions of whether or not a system response contains them.\nDefining answer keys and making the associated binary\ndecisions are conceptual tasks that require semantic\nmapping [19], since system-returned passages can contain the\nsame information expressed in many different ways. Hence,\nQA evaluations have relied on human assessors for the\nmapping between various expressions, making the process\ncostly, time consuming, and not scalable to large query and\ndocument collections, and extensive system evaluations with\nvarious parameter settings.\n4.1.1 Automating Evaluation based on Answer Keys\nAutomatic evaluation methods would allow for faster\nsystem building and tuning, as well as provide an objective\nand affordable way of comparing various systems. Recently,\nsuch methods have been proposed, more or less, based on\nthe idea of n-gram co-occurrences. Pourpre [10] assigns a\nfractional recall score to a system response based on its\nunigram overlap with a given nugget\"s description. For\nexample, a system response \u2018A B C\" has recall 3/4 with\nrespect to a nugget with description \u2018A B C D\". 
However,\nsuch an approach is unfair to systems that present the same\ninformation but using words other than A, B, C, and D.\nAnother open issue is how to weight individual words in\nmeasuring the closeness of a match. For example, consider\nthe question How many prisoners escaped?. In the nugget\n\u2018Seven prisoners escaped from a Texas prison\", there is no\nindication that \u2018seven\" is the keyword, and that it must\nbe matched to get any relevance credit. Using IDF values\ndoes not help, since \u2018seven\" will generally not have a higher\nIDF than words like \u2018texas\" and \u2018prison\". Also, redefining\nthe nugget as just \u2018seven\" does not solve the problem since\nnow it might spuriously match any mention of \u2018seven\" out\nof context. Nuggeteer [13] works on similar principles but\nmakes binary decisions about whether a nugget is present in\na given system response by tuning a threshold. However,\nit is also plagued by \u2018spurious relevance\" since not all\nwords contained in the nugget description (or known correct\nresponses) are central to the nugget.\n4.1.2 Nugget-Matching Rules\nWe propose a reliable automatic method for determining\nwhether a snippet of text contains a given nugget, based on\nnugget-matching rules, which are generated using a\nsemiautomatic procedure explained below. These rules are\nessentially Boolean queries that will only match against\nsnippets that contain the nugget. For instance, a candidate\nrule for matching answers to How many prisoners\nescaped? is (Texas AND seven AND escape AND (convicts\nOR prisoners)), possibly with other synonyms and variants\nin the rule. 
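A nugget-matching rule of the kind shown above can be encoded as a disjunction of conjunctions and evaluated with a few lines. This sketch does bag-of-words matching only (no stemming or synonym expansion), and the rule encoding, with 'escaped' added as a variant of 'escape', is our own assumption:

```python
def matches(rule, snippet):
    """Evaluate a Boolean nugget-matching rule against a text snippet.
    A rule is a disjunction (list) of conjunctions; each conjunction is
    a list of alternative-term sets, where a set of size > 1 encodes an
    OR over terms. Matching is on lowercase whole words only."""
    words = set(snippet.lower().split())
    return any(all(words & alts for alts in conj) for conj in rule)

# The example rule (Texas AND seven AND escape AND (convicts OR prisoners)):
escape_rule = [[{"texas"}, {"seven"}, {"escape", "escaped"},
                {"convicts", "prisoners"}]]
```

Adding more ANDs (alternative-term sets inside a conjunction) tightens precision; adding more ORs (whole conjunctions) raises recall, mirroring the iterative refinement described above.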
For a corpus of news articles, which usually\nfollow a typical formal prose, it is fairly easy to write such\nsimple rules to match expected answers using a bootstrap\napproach, as described below.\nWe propose a two-stage approach, inspired by Autoslog\n[14], that combines the strength of humans in identifying\nsemantically equivalent expressions and the strength of the\nsystem in gathering statistical evidence from a\nhumanannotated corpus of documents. In the first stage, human\nsubjects annotated (using a highlighting tool) portions of\nontopic documents that contained answers to each nugget 2\n.\nIn the second stage, subjects used our rule generation tool\nto create rules that would match the annotations for each\nnugget. The tool allows users to enter a Boolean rule as a\ndisjunction of conjunctions (e.g. ((a AND b) OR (a AND c\nAND d) OR (e))). Given a candidate rule, our tool uses it as\na Boolean query over the entire set of on-topic documents\nand calculates its recall and precision with respect to the\nannotations that it is expected to match. Hence, the\nsubjects can start with a simple rule and iteratively refine\nit until they are satisfied with its recall and precision. We\nobserved that it was very easy for humans to improve the\nprecision of a rule by tweaking its existing conjunctions\n(adding more ANDs), and improving the recall by adding\nmore conjunctions to the disjunction (adding more ORs).\nAs an example, let\"s try to create a rule for the nugget\nwhich says that seven prisoners escaped from the Texas\nprison. We start with a simple rule - (seven). When\nwe input this into the rule generation tool, we realize that\nthis rule matches many spurious occurrences of seven (e.g.\n\u2018...seven states...\") and thus gets a low precision score.\nWe can further qualify our rule - Texas AND seven AND\nconvicts. Next, by looking at the \u2018missed annotations\", we\nrealize that some news articles mentioned ...seven prisoners\nescaped.... 
We then replace convicts with the disjunction (convicts OR prisoners). We continue tweaking the rule in this manner until we achieve a sufficiently high recall and precision - i.e. until the (small number of) misses and false alarms can be safely ignored.

Thus we can create nugget-matching rules that succinctly capture various ways of expressing a nugget, while avoiding matching incorrect (or out-of-context) responses. Human involvement in the rule creation process ensures high-quality generic rules which can then be used to evaluate arbitrary system responses reliably.

[Footnote 2: LDC [18] already provides relevance judgments for 100 topics on the TDT4 corpus. We further ensured that these judgments are exhaustive on the entire corpus using pooling.]

4.2 Evaluating the Utility of a Sequence of Ranked Lists

The utility of a retrieval system can be defined as the difference between how much the user gained in terms of useful information, and how much the user lost in terms of time and energy. We calculate this utility from the utilities of individual passages as follows. After reading each passage returned by the system, the user derives some gain depending on the presence of relevant and novel information, and incurs a loss in terms of the time and energy spent in going through the passage. However, the likelihood that the user would actually read a passage depends on its position in the ranked list. Hence, for a query q, the expected utility of a passage p_i at rank i can be defined as

  U(p_i, q) = P(i) · (Gain(p_i, q) - Loss(p_i, q))    (4)

where P(i) is the probability that the user would go through a passage at rank i.

The expected utility for an entire ranked list of length n can be calculated simply by adding the expected utility of each passage:

  U(q) = Σ_{i=1}^{n} P(i) · (Gain(p_i, q) - Loss(p_i, q))    (5)

Note that if we ignore the loss term and define P(i) as

  P(i) ∝ 1 / log_b(b + i - 1)    (6)

then we get the recently popularized metric called Discounted Cumulated Gain (DCG) [9], where Gain(p_i, q) is defined as the graded relevance of passage p_i. However, without the loss term, DCG is a purely recall-oriented metric and not suitable for an adaptive filtering setting, where the system's utility depends in part on its ability to limit the number of items shown to the user.

Although P(i) could be defined based on empirical studies of user behavior, for simplicity, we use P(i) exactly as defined in equation 6.

The gain G(p_i, q) of passage p_i with respect to the query q is a function of: 1) the number of relevant nuggets present in p_i, and 2) the novelty of each of these nuggets. We combine these two factors as follows. For each nugget N_j, we assign an initial weight w_j, and also keep a count n_j of the number of times this nugget has been seen by the user in the past. The gain derived from each subsequent occurrence of the same nugget is assumed to reduce by a dampening factor γ. Thus, G(p_i, q) is defined as

  G(p_i, q) = Σ_{N_j ∈ C(p_i, q)} w_j · γ^{n_j}    (7)

where C(p_i, q) is the set of all nuggets that appear in passage p_i and also belong to the answer key of query q. The initial weights w_j are all set to 1.0 in our experiments, but can also be set based on a pyramid approach [11].

The choice of dampening factor γ determines the user's tolerance for redundancy. When γ = 0, a nugget will only receive credit for its first occurrence, i.e. when n_j is zero (note that 0^0 = 1). For 0 < γ < 1, a nugget receives smaller credit for each successive occurrence. When γ = 1, no dampening occurs and repeated occurrences of a nugget receive the same credit. Note that the nugget occurrence counts are preserved between evaluations of successive ranked lists returned by the system, since the users are expected to remember what the system showed them in the past.

We define the loss L(p_i, q) as a constant cost c (we use 0.1) incurred when reading a system-returned passage. Thus, our metric can be re-written as

  U(q) = Σ_{i=1}^{n} Gain(p_i, q) / log_b(b + i - 1) - L(n)    (8)

where L(n) is the loss associated with a ranked list of length n:

  L(n) = c · Σ_{i=1}^{n} 1 / log_b(b + i - 1)    (9)

Due to the similarity with Discounted Cumulated Gain (DCG), we call our metric Discounted Cumulated Utility (DCU). The DCU score obtained by the system is converted to a Normalized DCU (NDCU) score by dividing it by the DCU score of the ideal ranked list, which is created by ordering passages by their decreasing utility scores U(p_i, q) and stopping when U(p_i, q) ≤ 0, i.e. when the gain is less than or equal to the cost of reading the passage.

5. DATA

TDT4 was the benchmark corpus used in the TDT2002 and TDT2003 evaluations. The corpus consists of over 90,000 news articles from multiple sources (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America, PRI the World, etc.)
published between October 2000 and January\n2001, in the languages of Arabic, English, and Mandarin.\nSpeech-recognized and machine-translated versions of the\nnon-English articles were provided as well.\nLDC [18] has annotated the corpus with 100 topics, that\ncorrespond to various news events in this time period. Out\nof these, we selected a subset of 12 actionable events, and\ndefined corresponding tasks for them4\n. For each task, we\nmanually defined a profile consisting of an initial set of (5\nto 10) queries, a free-text description of the user history,\ni.e., what the user already knows about the event, and a list\nof known on-topic and off-topic documents (if available) as\ntraining examples.\nFor each query, we generated answer keys and\ncorresponding nugget matching rules using the procedure described in\nsection 4.1.2, and produced a total of 120 queries, with an\naverage of 7 nuggets per query.\n6. EXPERIMENTS AND RESULTS\n6.1 Baselines\nWe used Indri [17], a popular language-model based\nretrieval engine, as a baseline for comparison with CAF\u00b4E.\nIndri supports standard search engine functionality,\nincluding pseudo-relevance feedback (PRF) [3, 6], and is\nrepresentative of a typical query-based retrieval system.\nIndri does not support any kind of novelty detection.\nWe compare Indri with PRF turned on and off, against\nCAF\u00b4E with user feedback, novelty detection and\nantiredundant ranking turned on and off.\n6.2 Experimental Setup\nWe divided the TDT4 corpus spanning 4 months into 10\nchunks, each defined as a period of 12 consecutive days.\nAt any given point of time in the distillation process, each\nsystem accessed the past data up to the current point, and\nreturned a ranked list of up 50 passages per query.\nThe 12 tasks defined on the corpus were divided into\na training and test set with 6 tasks each. 
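The NDCU objective of Section 4.2 that the systems tune against can be sketched as follows. The dampened gain of equation (7) is included with uniform initial weights; the defaults (c = 0.1, b = 2, γ = 0.1) follow the values stated in the paper, while the function names are our own:

```python
import math

def dcu(gains, c=0.1, b=2):
    """Eqs. (8)-(9): discounted cumulated utility of a ranked list,
    where gains[k] is G(p, q) for the passage at rank k+1."""
    discount = [1.0 / math.log(b + k, b) for k in range(len(gains))]
    return sum(g * d for g, d in zip(gains, discount)) - c * sum(discount)

def gain(nuggets, seen, gamma=0.1, w=1.0):
    """Eq. (7) with uniform initial weights w: each repeated sighting
    of a nugget is dampened by gamma; the dict `seen` tracks the
    per-nugget view counts n_j across successive ranked lists."""
    g = 0.0
    for n in nuggets:
        g += w * gamma ** seen.get(n, 0)
        seen[n] = seen.get(n, 0) + 1
    return g

def ndcu(system_gains, ideal_gains, c=0.1, b=2):
    """Normalize the system's DCU by the DCU of the ideal ranked list."""
    return dcu(system_gains, c, b) / dcu(ideal_gains, c, b)
```

With γ = 0, Python's convention 0.0 ** 0 == 1.0 reproduces the paper's footnote that a nugget earns full credit only on its first occurrence.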
Each system was allowed to use the training set to tune its parameters for optimizing NDCU (equation 8), including the relevance threshold for both Indri and CAFÉ, and the novelty and anti-redundancy thresholds for CAFÉ.

The NDCU for each system run is calculated automatically. User feedback was also simulated: relevance judgments for each system-returned passage (as determined by the nugget-matching rules described in Section 4.1.2) were used as user feedback in the adaptation of query profiles and user histories.

⁴ URL: http://nyc.lti.cs.cmu.edu/downloads

Figure 1: Performance of Indri across chunks

Figure 2: Performance of CAFÉ across chunks

6.3 Results

In Table 1, we show the NDCU scores of the two systems under various settings. These scores are averaged over the six tasks in the test set, and are calculated with two dampening factors (see Section 4.2): γ = 0 and 0.1, to simulate no tolerance and small tolerance for redundancy, respectively.

Using γ = 0 creates a much stricter metric, since it does not give any credit to a passage that contains relevant but redundant information. Hence, the improvement obtained from enabling user feedback is smaller with γ = 0 than the improvement obtained from feedback with γ = 0.1. This reveals a shortcoming of contemporary retrieval systems: when the user gives positive feedback on a passage, the system gives higher weights to the terms present in that passage and tends to retrieve other passages containing the same terms, and thus usually the same information. However, the user does not benefit from seeing such redundant passages, and is usually interested in other passages containing related information.
It is informative to evaluate retrieval systems using our utility measure (with γ = 0), which accounts for novelty and thus gives a more realistic picture of how well a system can generalize from user feedback, rather than using traditional IR measures like recall and precision, which give an incomplete picture of the improvement obtained from user feedback.

Sometimes, however, users might indeed be interested in seeing the same information from multiple sources, as an indicator of its importance or reliability. In such a case, we can simply choose a higher value for γ, which corresponds to a higher tolerance for redundancy, and hence let the system tune its parameters accordingly.

Table 1: NDCU scores of Indri and CAFÉ for two dampening factors (γ) and various settings (F: Feedback, N: Novelty Detection, A: Anti-Redundant Ranking)

γ    | Indri Base | Indri +PRF | CAFÉ Base | CAFÉ +F | CAFÉ +F+N | CAFÉ +F+A | CAFÉ +F+N+A
0    | 0.19       | 0.19       | 0.22      | 0.23    | 0.24      | 0.24      | 0.24
0.1  | 0.28       | 0.29       | 0.24      | 0.35    | 0.35      | 0.36      | 0.36

Since documents were processed chunk by chunk, it is interesting to see how the performance of the systems improves over time. Figures 1 and 2 show the performance trends for both systems across chunks. While the performance with and without feedback on the first few chunks is expected to be close, for subsequent chunks the performance curve with feedback enabled rises above the one with the no-feedback setting. The performance trends are not consistent across all chunks because on-topic documents are not uniformly distributed over all the chunks, making some queries 'easier' than others in certain chunks. Moreover, since Indri uses pseudo-relevance feedback while CAFÉ uses feedback based on actual relevance judgments, the improvement in the case of Indri is less dramatic than that of CAFÉ.

7.
CONCLUDING REMARKS

This paper presents the first investigation on utility-based information distillation with a system that learns long-lasting information needs from fine-grained user feedback over a sequence of ranked passages. Our system, called CAFÉ, combines adaptive filtering, novelty detection and anti-redundant passage ranking in a unified framework for utility optimization. We developed a new scheme for automated evaluation and feedback based on a semi-automatic procedure for acquiring rules that allow automatically matching nuggets against system responses. We also proposed an extension of the NDCG metric for assessing the utility of ranked passages as a weighted combination of relevance and novelty. Our experiments on the newly annotated TDT4 benchmark corpus show encouraging utility enhancement over Indri, and also over our own system with incremental learning and novelty detection turned off.

8. ACKNOWLEDGMENTS

We would like to thank Rosta Farzan, Jonathan Grady, Jaewook Ahn, Yefei Peng, and the Qualitative Data Analysis Program at the University of Pittsburgh led by Dr. Stuart Shulman for their help with collecting and processing the extended TDT4 annotations used in our experiments. This work is supported in part by the National Science Foundation (NSF) under grant IIS-0434035, and the Defense Advanced Research Projects Agency (DARPA) under contracts NBCHD030010 and W0550432. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

9. ADDITIONAL AUTHORS

Jian Zhang (jianzhan@stat.purdue.edu)*, Jaime Carbonell (jgc@cs.cmu.edu)†, Peter Brusilovsky (peterb+@pitt.edu)‡, Daqing He (dah44@pitt.edu)‡

10. REFERENCES

[1] J. Allan. Incremental Relevance Feedback for Information Filtering.
Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 270-278, 1996.

[2] J. Allan, C. Wade, and A. Bolivar. Retrieval and Novelty Detection at the Sentence Level. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.

[3] C. Buckley, G. Salton, and J. Allan. Automatic Retrieval with Locality Information using SMART. NIST Special Publication 500-207, pages 59-72, 1993.

[4] J. Callan. Learning While Filtering Documents. Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 224-231, 1998.

[5] J. Carbonell and J. Goldstein. The Use of MMR, Diversity-based Reranking for Reordering Documents and Producing Summaries. Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335-336, 1998.

[6] E. Efthimiadis. Query Expansion. Annual Review of Information Science and Technology (ARIST), 31:121-187, 1996.

[7] J. Fiscus and G. Doddington. Topic Detection and Tracking Overview. Topic Detection and Tracking: Event-based Information Organization, pages 17-31.

[8] R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N. Nicolov, and S. Roukos. A Statistical Model for Multilingual Entity Detection and Tracking. NAACL/HLT, 2004.

[9] K. Järvelin and J. Kekäläinen. Cumulated Gain-based Evaluation of IR Techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446, 2002.

[10] J. Lin and D. Demner-Fushman.
Automatically Evaluating Answers to Definition Questions. Proceedings of the 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005), 2005.

* Statistics Dept., Purdue University, West Lafayette, USA
† Language Technologies Inst., Carnegie Mellon University, Pittsburgh, USA
‡ School of Information Sciences, Univ. of Pittsburgh, Pittsburgh, USA

[11] J. Lin and D. Demner-Fushman. Will Pyramids Built of Nuggets Topple Over? Proceedings of HLT-NAACL, 2006.

[12] X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. A Mention-synchronous Coreference Resolution Algorithm based on the Bell Tree. Proc. of ACL, 4:136-143, 2004.

[13] G. Marton. Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgments. HLT/NAACL, 2006.

[14] E. Riloff. Automatically Constructing a Dictionary for Information Extraction Tasks. Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 811-816, 1993.

[15] S. Robertson and S. Walker. Microsoft Cambridge at TREC-9: Filtering Track. The Ninth Text REtrieval Conference (TREC-9), pages 361-368.

[16] R. Schapire, Y. Singer, and A. Singhal. Boosting and Rocchio Applied to Text Filtering. Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 215-223, 1998.

[17] T. Strohman, D. Metzler, H. Turtle, and W. Croft. Indri: A Language Model-based Search Engine for Complex Queries. Proceedings of the International Conference on Intelligence Analysis, 2004.

[18] The Linguistic Data Consortium. http://www.ldc.upenn.edu/.

[19] E. Voorhees. Overview of the TREC 2003 Question Answering Track. Proceedings of the Twelfth Text REtrieval Conference (TREC 2003), 2003.

[20] Y. Yang and B. Kisiel. Margin-based Local Regression for Adaptive Filtering.
Proceedings of the Twelfth International Conference on Information and Knowledge Management, pages 191-198, 2003.

[21] Y. Yang, S. Yoo, J. Zhang, and B. Kisiel. Robustness of Adaptive Filtering Methods in a Cross-benchmark Evaluation. Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 98-105, 2005.

[22] C. Zhai, W. Cohen, and J. Lafferty. Beyond Independent Relevance: Methods and Evaluation Metrics for Subtopic Retrieval. Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 10-17, 2003.

[23] J. Zhang and Y. Yang. Robustness of Regularized Linear Classification Methods in Text Categorization. Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 190-197, 2003.

[24] Y. Zhang. Using Bayesian Priors to Combine Classifiers for Adaptive Filtering. Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval, pages 345-352, 2004.

[25] Y. Zhang, J. Callan, and T. Minka. Novelty and Redundancy Detection in Adaptive Filtering. Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2002.
-{"name": "test_H-7", "title": "Efficient Bayesian Hierarchical User Modeling for Recommendation Systems", "abstract": "A content-based personalized recommendation system learns user specific profiles from user feedback so that it can deliver information tailored to each individual user\"s interest. A system serving millions of users can learn a better user profile for a new user, or a user with little feedback, by borrowing information from other users through the use of a Bayesian hierarchical model. Learning the model parameters to optimize the joint data likelihood from millions of users is very computationally expensive. The commonly used EM algorithm converges very slowly due to the sparseness of the data in IR applications. This paper proposes a new fast learning technique to learn a large number of individual user profiles. The efficacy and efficiency of the proposed algorithm are justified by theory and demonstrated on actual user data from Netflix and MovieLens.", "fulltext": "1. INTRODUCTION\nPersonalization is the future of the Web, and it has achieved\ngreat success in industrial applications. For example, online\nstores, such as Amazon and Netflix, provide customized\nrecommendations for additional products or services based on a\nuser\"s history. Recent offerings such as My MSN, My Yahoo!,\nMy Google, and Google News have attracted much attention\ndue to their potential ability to infer a user\"s interests from\nhis/her history.\nOne major personalization topic studied in the\ninformation retrieval community is content-based personal\nrecommendation systems1\n. These systems learn user-specific\nprofiles from user feedback so that they can recommend\ninformation tailored to each individual user\"s interest without\nrequiring the user to make an explicit query. 
Learning the\nuser profiles is the core problem for these systems.\nA user profile is usually a classifier that can identify whether\na document is relevant to the user or not, or a regression\nmodel that tells how relevant a document is to the user. One\nmajor challenge of building a recommendation or\npersonalization system is that the profile learned for a particular user\nis usually of low quality when the amount of data from that\nparticular user is small. This is known as the cold start\nproblem. This means that any new user must endure poor\ninitial performance until sufficient feedback from that user\nis provided to learn a reliable user profile.\nThere has been much research on improving\nclassification accuracy when the amount of labeled training data is\nsmall. The semi-supervised learning approach combines\nunlabeled and labeled data together to achieve this goal [26].\nAnother approach is using domain knowledge. Researchers\nhave modified different learning algorithms, such as\nNa\u00a8\u0131veBayes [17], logistic regression [7], and SVMs [22], to integrate\ndomain knowledge into a text classifier. The third approach\nis borrowing training data from other resources [5][7]. The\neffectiveness of these different approaches is mixed, due to\nhow well the underlying model assumption fits the data.\nOne well-received approach to improve recommendation\nsystem performance for a particular user is borrowing\ninformation from other users through a Bayesian hierarchical\nmodeling approach. Several researchers have demonstrated\nthat this approach effectively trades off between shared and\nuser-specific information, thus alleviating poor initial\nperformance for each user[27][25].\nIn order to learn a Bayesian hierarchical model, the\nsystem usually tries to find the most likely model parameters\nfor the given data. A mature recommendation system\nusually works for millions of users. 
It is well known that learning the optimal parameters of a Bayesian hierarchical model is computationally expensive when there are thousands or millions of users. The EM algorithm is a commonly used technique for parameter learning due to its simplicity and convergence guarantee. However, a content-based recommendation system often handles documents in a very high dimensional space, in which each document is represented by a very sparse vector. With careful analysis of the EM algorithm in this scenario (Section 4), we find that the EM algorithm converges very slowly due to the sparseness of the input variables. We also find that updating the model parameters at each EM iteration is expensive, with computational complexity of O(MK), where M is the number of users and K is the number of dimensions.

¹ …filtering, or item-based collaborative filtering. In this paper, the words filtering and recommendation are used interchangeably.

This paper modifies the standard EM algorithm to create an improved learning algorithm, which we call the Modified EM algorithm. The basic idea is that instead of calculating the numerical solution for all the user profile parameters, we derive the analytical solution of the parameters for some feature dimensions, and at the M step use the analytical solution instead of the numerical solution estimated at the E step for those parameters. This greatly reduces the computation of a single EM iteration, and also has the benefit of increasing the convergence speed of the learning algorithm. The proposed technique is not only well supported by theory, but also by experimental results.

The organization of the remaining parts of this paper is as follows: Section 3 describes the Bayesian hierarchical linear regression modeling framework used for content-based recommendations. Section 4 describes how to learn the model parameters using the standard EM algorithm, along with using the new technique proposed in this paper.
The experimental setting and results used to validate the proposed learning technique are reported in Sections 5 and 6. Section 7 summarizes and offers concluding remarks.

2. RELATED WORK

Providing personalized recommendations to users has been identified as a very important problem in the IR community since the 1970s. The approaches that have been used to solve this problem can be roughly classified into two major categories: content-based filtering versus collaborative filtering. Content-based filtering studies the scenario where a recommendation system monitors a document stream and pushes documents that match a user profile to the corresponding user. The user may read the delivered documents and provide explicit relevance feedback, which the filtering system then uses to update the user's profile using relevance feedback retrieval models (e.g. Boolean models, vector space models, traditional probabilistic models [20], inference networks [3] and language models [6]) or machine learning algorithms (e.g. Support Vector Machines (SVM), K-nearest neighbors (K-NN) clustering, neural networks, logistic regression, or Winnow [16] [4] [23]). Collaborative filtering goes beyond merely using document content to recommend items to a user by leveraging information from other users with similar tastes and preferences in the past. Memory-based heuristics and model-based approaches have been used in the collaborative filtering task [15] [8] [2] [14] [12] [11].

This paper contributes to content-based recommendation research by improving the efficiency and effectiveness of Bayesian hierarchical linear models, which have a strong theoretical basis and good empirical performance on recommendation tasks [27][25]. This paper does not intend to compare content-based filtering with collaborative filtering, or to claim which one is better.
We think each complements the other, and that content-based filtering is extremely useful for handling new documents/items with little or no user feedback. Similar to some other researchers [18][1][21], we found that a recommendation system will be more effective when both techniques are combined. However, this is beyond the scope of this paper and thus not discussed here.

3. BAYESIAN HIERARCHICAL LINEAR REGRESSION

Assume there are M users in the system. The task of the system is to recommend documents that are relevant to each user. For each user, the system learns a user model from the user's history. In the rest of this paper, we will use the following notation to represent the variables in the system.

m = 1, 2, ..., M: the index for each individual user. M is the total number of users.

wm: the user model parameter associated with user m. wm is a K-dimensional vector.

j = 1, 2, ..., Jm: the index for a set of data for user m. Jm is the number of training data for user m.

Dm = {(xm,j, ym,j)}: a set of data associated with user m. xm,j is a K-dimensional vector that represents the m-th user's j-th training document.² ym,j is a scalar that represents the label of document xm,j.

k = 1, 2, ..., K: the dimensional index of the input variable x.

The Bayesian hierarchical modeling approach has been widely used in real-world information retrieval applications. Generalized Bayesian hierarchical linear models, one of the simplest Bayesian hierarchical models, are commonly used and have achieved good performance on collaborative filtering [25] and content-based adaptive filtering [27] tasks. Figure 1 shows the graphical representation of a Bayesian hierarchical model. In this graph, each user model is represented by a random vector wm. We assume a user model is sampled randomly from a prior distribution P(w|Φ).
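As a concrete illustration, the hierarchical generative process can be simulated as below. This is a sketch only: the toy sizes, the fixed choice Σ = I (the paper additionally places an Inverse-Wishart hyperprior on Σ), and all variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, J = 5, 3, 10       # features, users, ratings per user (toy sizes)
a, sigma = 0.1, 1.0      # hyperprior scale and observation noise

# Shared prior over user models: mu ~ N(0, aI); Sigma fixed to I here.
mu = rng.normal(0.0, np.sqrt(a), size=K)
Sigma = np.eye(K)

for m in range(M):
    # Each user model is an independent draw: w_m ~ N(mu, Sigma).
    w_m = rng.multivariate_normal(mu, Sigma)
    X = rng.normal(size=(J, K))
    X[:, 0] = 1.0                                 # dummy first dimension (footnote 2)
    y = X @ w_m + rng.normal(0.0, sigma, size=J)  # rating: y = w^T x + noise
```

Each user's ratings are thus tied to the shared prior (µ, Σ), which is what lets the model borrow strength across users for cold-start cases.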
The system can predict the user label y of a document x given an estimation of wm (or wm's distribution) using a function y = f(x, w). The model is called a generalized Bayesian hierarchical linear model when y = f(w^T x) is any generalized linear model such as logistic regression, SVM, or linear regression. To reliably estimate the user model wm, the system can borrow information from other users through the prior Φ = (µ, Σ).

Now we look at one commonly used model where y = w^T x + ε, with ε ∼ N(0, σ²) a random noise term [25][27]. Assume that each user model wm is an independent draw from a population distribution P(w|Φ), which is governed by some unknown hyperparameter Φ. Let the prior distribution of the user model w be a Gaussian distribution with parameter Φ = (µ, Σ), which is the commonly used prior for linear models. µ = (µ1, µ2, ..., µK) is a K-dimensional vector that represents the mean of the Gaussian distribution, and Σ is the covariance matrix of the Gaussian. Usually, a Normal distribution N(0, aI) and an Inverse Wishart distribution P(Σ) ∝ |Σ|^(−b/2) exp(−(c/2) tr(Σ⁻¹)) are used as hyperpriors to model the prior distributions of µ and Σ, respectively. I is the K-dimensional identity matrix, and a, b, and c are real numbers.

With these settings, we have the following model for the system:

1. µ and Σ are sampled from N(0, aI) and IW_ν(aI), respectively.

² The first dimension of x is a dummy variable that always equals 1.

Figure 1: Illustration of dependencies of variables in the hierarchical model. The rating, y, for a document, x, is conditioned on the document and the user model, wm, associated with the user m. Users share information about their models through the prior, Φ = (µ, Σ).

2.
For each user m, wm is sampled randomly from a Normal distribution: wm ∼ N(µ, Σ²).

3. For each item xm,j, ym,j is sampled randomly from a Normal distribution: ym,j ∼ N(w_m^T xm,j, σ²).

Let θ = (Φ, w1, w2, ..., wM) represent the parameters of this system that need to be estimated. The joint likelihood for all the variables in the probabilistic model, which includes the data and the parameters, is:

P(D, θ) = P(Φ) ∏_m P(wm|Φ) ∏_j P(ym,j|xm,j, wm)    (1)

For simplicity, we assume a, b, c, and σ are provided to the system.

4. MODEL PARAMETER LEARNING

If the prior Φ is known, finding the optimal wm is straightforward: it is a simple linear regression. Therefore, we will focus on estimating Φ. The maximum a posteriori solution of Φ is given by

Φ_MAP = argmax_Φ P(Φ|D)    (2)
      = argmax_Φ P(Φ, D) / P(D)    (3)
      = argmax_Φ P(D|Φ) P(Φ)    (4)
      = argmax_Φ ∫_w P(D|w, Φ) P(w|Φ) P(Φ) dw    (5)

Finding the optimal solution for the above problem is challenging, since we need to integrate over all w = (w1, w2, ..., wM), which are unobserved hidden variables.

4.1 EM Algorithm for Bayesian Hierarchical Linear Models

In Equation 5, Φ is the parameter that needs to be estimated, and the result depends on the unobserved latent variables w. This kind of optimization problem is usually solved by the EM algorithm. Applying EM to the above problem, the set of user models w are the unobservable hidden variables, and we have:

Q = ∫_w P(w|µ, Σ², D) log P(µ, Σ², w, D) dw

Based on the derivation of the EM formulas presented in [24], we have the following Expectation-Maximization steps for finding the optimal hyperparameters.
For space considerations, we omit the derivation in this paper, since it is not the focus of our work.

E step: For each user m, estimate the user model distribution P(wm|Dm, Φ) = N(wm; w̄m, Σ²_m) based on the current estimation of the prior Φ = (µ, Σ²):

w̄m = ((Σ²)⁻¹ + S_xx,m/σ²)⁻¹ (S_xy,m/σ² + (Σ²)⁻¹ µ)    (6)

Σ²_m = ((Σ²)⁻¹ + S_xx,m/σ²)⁻¹    (7)

where S_xx,m = Σ_j xm,j xm,j^T and S_xy,m = Σ_j xm,j ym,j.

M step: Optimize the prior Φ = (µ, Σ²) based on the estimation from the last E step:

µ = (1/M) Σ_m w̄m    (8)

Σ² = (1/M) Σ_m [Σ²_m + (w̄m − µ)(w̄m − µ)^T]    (9)

Many machine learning driven IR systems use a point estimate of the parameters at different stages in the system. However, we are estimating the posterior distribution of the variables at the E step. This avoids overfitting wm to a particular user's data, which may be small and noisy. A detailed discussion of this subject appears in [10].

4.2 New Algorithm: Modified EM

Although the EM algorithm is widely studied and used in machine learning applications, using the above EM process to solve Bayesian hierarchical linear models in large-scale information retrieval systems is still too computationally expensive. In this section, we describe why the learning rate of the EM algorithm is slow in our application and introduce a new technique to make the learning of the Bayesian hierarchical linear model scalable. The derivation of the new learning algorithm will be based on the EM algorithm described in the previous section.

First, the covariance matrices Σ² and Σ²_m are usually too large to be computationally feasible. For simplicity, and as a common practice in IR, we do not model the correlation between features. Thus we approximate these matrices with K-dimensional diagonal matrices.
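Under this diagonal approximation (and, for the sake of a short sketch, further approximating S_xx,m by its diagonal, in the spirit of the per-dimension updates introduced later in this section), one iteration of the EM updates in equations 6-9 can be written as follows. Variable names are illustrative; this is not the authors' implementation, and it performs the full sums of the standard M step rather than the modified algorithm's restriction to related user-feature pairs.

```python
import numpy as np

def em_iteration(users, mu, tau2, sigma2=1.0):
    """One standard EM iteration for the hierarchical linear model.
    users: list of (X, y) pairs, X of shape (J_m, K), y of shape (J_m,).
    mu:    current prior mean (K,);  tau2: diagonal of Sigma^2 (K,)."""
    w_bars, v_bars = [], []
    for X, y in users:
        s_xx = (X * X).sum(axis=0)   # diagonal of S_xx,m
        s_xy = X.T @ y               # S_xy,m
        # E step, per dimension (diagonal versions of eqs. 6 and 7):
        v_m = 1.0 / (1.0 / tau2 + s_xx / sigma2)
        w_m = v_m * (s_xy / sigma2 + mu / tau2)
        w_bars.append(w_m)
        v_bars.append(v_m)
    w_bars = np.array(w_bars)
    v_bars = np.array(v_bars)
    # M step (eqs. 8 and 9, keeping only the diagonal of the covariance):
    new_mu = w_bars.mean(axis=0)
    new_tau2 = (v_bars + (w_bars - new_mu) ** 2).mean(axis=0)
    return new_mu, new_tau2
```

The modified algorithm of this section changes the M step to average only over users related to each feature k, and reuses the analytical solutions w̄m,k = µk and σm,k = σk for unrelated pairs, which is what removes the drag from the many all-zero dimensions.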
In the rest of the paper, we use these symbols to represent their diagonal approximations:

Σ² = diag(σ₁², σ₂², ..., σ_K²)    Σ²_m = diag(σ²_{m,1}, σ²_{m,2}, ..., σ²_{m,K})

Secondly, and most importantly, the input space is very sparse, and there are many dimensions that are not related to a particular user in a real IR application. For example, let us consider a movie recommendation system, with the input variable x representing a particular movie. For the j-th movie that the user m has seen, let xm,j,k = 1 if the director of the movie is Jean-Pierre Jeunet (indexed by k). Here we assume that whether or not this director directed a specific movie is represented by the k-th dimension. If the user m has never seen a movie directed by Jean-Pierre Jeunet, then the corresponding dimension is always zero (xm,j,k = 0 for all j).

One major drawback of the EM algorithm is that the importance of a feature, µk, may be greatly dominated by users who have never encountered this feature (i.e. Σ_j xm,j,k = 0) at the M step (Equation 8). Assume that 100 out of 1 million users have viewed the movies directed by Jean-Pierre Jeunet, and that the viewers have rated all of his movies as excellent. Intuitively, he is a good director and the weight for him (µk) should be high. Before the EM iteration, the initial value of µ is usually set to 0. Since the other 999,900 users have not seen his movies, their corresponding weights (w1,k, w2,k, ..., wm,k, ..., w999900,k) for that director would be very small initially. Thus the corresponding weight of the director in the prior, µk, at the first M step would be very low, and the variance σm,k would be large (Equations 8 and 7).
It is undesirable that users who have never seen any movie produced by the director influence the importance of the director so much. This makes the convergence of the standard EM algorithm very slow.

Now let's look at whether we can improve the learning speed of the algorithm. Without loss of generality, let us assume that the k-th dimension of the input variable x is not related to a particular user m, by which we mean xm,j,k = 0 for all j = 1, ..., Jm. It is straightforward to prove that the k-th row and k-th column of S_xx,m are completely filled with zeros, and that the k-th dimension of S_xy,m is zero as well. Thus the corresponding k-th dimension of the user model's mean, w̄m, should be equal to that of the prior: w̄m,k = µk, with the corresponding covariance σm,k = σk.

At the M step, the standard EM algorithm uses the numerical solution of the distribution P(wm|Dm, Φ) estimated at the E step (Equation 8 and Equation 7). However, the numerical solutions are very unreliable for w̄m,k and σm,k when the k-th dimension is not related to the m-th user. A better approach is using the analytical solutions w̄m,k = µk and σm,k = σk for the unrelated (m, k) pairs, along with the numerical solution estimated at the E step for the other (m, k) pairs. Thus we get the following new EM-like algorithm:

Modified E step: For each user m, estimate the user model distribution P(wm|Dm, Φ) = N(wm; w̄m, Σ²_m) based on the current estimation of σ, µ, Σ²:

w̄m = ((Σ²)⁻¹ + S_xx,m/σ²)⁻¹ (S_xy,m/σ² + (Σ²)⁻¹ µ)    (10)

σ²_{m,k} = ((σ²_k)⁻¹ + s_xx,m,k/σ²)⁻¹    (11)

where s_xx,m,k = Σ_j x²_{m,j,k} and s_xy,m,k = Σ_j xm,j,k ym,j.

Modified M step: Optimize the prior Φ = (µ, Σ²) based on the estimation from the last E step for related user-feature pairs.
The M step implicitly uses the analytical solution for unrelated user-feature pairs:

µk = (1/Mk) Σ_{m:related} w̄m,k    (12)

σ²_k = (1/Mk) Σ_{m:related} [σ²_{m,k} + (w̄m,k − µk)²]    (13)

where Mk is the number of users that are related to feature k.

We only estimate the diagonals of Σ²_m and Σ², since we are using the diagonal approximation of the covariance matrices. To estimate w̄m, we only need to calculate the numerical solutions for dimensions that are related to user m. To estimate σ²_k and µk, we only sum over users that are related to the k-th feature.

There are two major benefits of the new algorithm. First, because only the related (m, k) pairs are needed at the modified M step, the computational complexity of a single EM iteration is much smaller when the data is sparse and many of the (m, k) pairs are unrelated. Second, the parameters estimated at the modified M step (Equations 12-13) are more accurate than at the standard M step described in Section 4.1, because the exact analytical solutions w̄m,k = µk and σm,k = σk for the unrelated (m, k) pairs are used in the new algorithm instead of an approximate solution as in the standard algorithm.

5. EXPERIMENTAL METHODOLOGY

5.1 Evaluation Data Set

To evaluate the proposed technique, we used the following three major data sets (Table 1):

MovieLens Data: This data set was created by combining the relevance judgments from the MovieLens [9] data set with documents from the Internet Movie Database (IMDB). MovieLens allows users to rate how much he/she enjoyed a specific movie on a scale from 1 to 5. This likeability rating was used as a measurement of how relevant the document representing the corresponding movie is to the user. We considered documents with likeability scores of 4 or 5 as relevant, and documents with a score of 1 to 3 as irrelevant to the user.
MovieLens provided relevance judgments on 3,057 documents from 6,040 separate users. On average, each user rated 151 movies; of these, 87 were judged to be relevant. The average score for a document was 3.58. Documents representing each movie were constructed from the portion of the IMDB database that is available for public download [13]. Based on this database, we created one document per movie that contained the relevant information about it (e.g. directors, actors, etc.).

Table 1: Data set statistics. On Reuters, the number of ratings for a simulated user is the number of documents relevant to the corresponding topic.

Data         | Users   | Docs    | Ratings per User
MovieLens    | 6,040   | 3,057   | 151
Netflix-all  | 480,189 | 17,770  | 208
Netflix-1000 | 1,000   | 17,770  | 127
Reuters-C    | 34      | 100,000 | 3,949
Reuters-E    | 26      | 100,000 | 1,632
Reuters-G    | 33      | 100,000 | 2,222
Reuters-M    | 10      | 100,000 | 6,529

Netflix Data: This data set was constructed by combining documents about movies crawled from the web with a set of actual movie rental customer relevance judgments from Netflix [19]. Netflix publicly provides the relevance judgments of 480,189 anonymous customers. There are around 100 million ratings on a scale of 1 to 5 for 17,770 documents. Similar to MovieLens, we considered documents with likeability scores of 4 or 5 as relevant. This number was reduced to 1,000 customers through random sampling. The average customer on the reduced data set provided 127 judgments, with 70 deemed relevant. The average score for documents is 3.55.

Reuters Data: This is the Reuters Corpus, Volume 1. It covers 810,000 Reuters English-language news stories from August 20, 1996 to August 19, 1997. Only the first 100,000 stories were used in our experiments. The Reuters corpus comes with a topic hierarchy. Each document is assigned to one of several locations on the hierarchical tree. The first level of the tree contains four topics, denoted as C, E, M, and G.
For the\nexperiments in this paper, the tree was cut at level 1 to\ncreate four smaller trees, each of which corresponds to\none smaller data set: Reuters-E Reuters-C,\nReutersM and Reuters-G. For each small data set, we created\nseveral profiles, one profile for each node in a sub-tree,\nto simulate multiple users, each with a related, yet\nseparate definition of relevance. All the user profiles\non a sub-tree are supposed to share the same prior\nmodel distribution. Since this corpus explicitly\nindicates only the relevant documents for a topic(user), all\nother documents are considered irrelevant.\n5.2 Evaluation\nWe designed the experiments to answer the following three\nquestions:\n1. Do we need to take the effort to use a Bayesian\napproach and learn a prior from other users?\n2. Does the new algorithm work better than the standard\nEM algorithm for learning the Bayesian hierarchical\nlinear model?\n3. Can the new algorithm quickly learn many user\nmodels?\nTo answer the first question, we compared the Bayesian\nhierarchical models with commonly used Norm-2 regularized\nlinear regression models. In fact, the commonly used\napproach is equivalent to the model learned at the end of the\nfirst EM iteration. To answer the second question, we\ncompared the proposed new algorithm with the standard EM\nalgorithm to see whether the new learning algorithm is\nbetter. To answer the third question, we tested the efficiency of\nthe new algorithm on the entire Netflix data set where about\nhalf a million user models need to be learned together.\nFor the MovieLens and Netflix data sets, algorithm\neffectiveness was measured by mean square error, while on the\nReuters data set classification error was used because it was\nmore informative. We first evaluated the performance on\neach individual user, and then estimated the macro average\nover all users. 
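The macro-averaged evaluation described above (score each user individually, then average over users) can be sketched as follows; the function name and dict-based data layout are illustrative, not from the paper.

```python
def macro_average_mse(predictions, targets):
    """Macro-averaged mean square error: compute MSE per user first,
    then average over users, so every user counts equally regardless
    of how many ratings he or she has.

    predictions, targets: dicts mapping user id -> list of scores,
    aligned item-by-item.
    """
    per_user = []
    for user, preds in predictions.items():
        truth = targets[user]
        mse = sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)
        per_user.append(mse)
    return sum(per_user) / len(per_user)
```

The same per-user-then-average pattern applies to the classification error used on the Reuters data; only the inner per-user metric changes.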
Statistical tests (t-tests) were carried out to see whether the results are significant.\nFor the experiments on the MovieLens and Netflix data sets, we used a random sample of 90% of each user's ratings for training, and the rest for testing. On the Reuters data set, because there are too many relevant documents for each topic in the corpus, we used a random sample of 10% of each topic for training, and 10% of the remaining documents for testing. For all runs, we set (a, b, c, \u03a3) = (0.1, 10, 0.1, 1) manually.\n6. EXPERIMENTAL RESULTS\nFigure 2, Figure 3, and Figure 4 show that on all data sets, the Bayesian hierarchical modeling approach has a statistically significant improvement over the regularized linear regression model, which is equivalent to the Bayesian hierarchical models learned at the first iteration. Further analysis shows a negative correlation between the amount of training data for a user and the improvement the system gets. This suggests that borrowing information from other users yields more significant improvements for users with less training data, which is as expected. However, the strength of the correlation differs over data sets, and the amount of training data is not the only characteristic that influences the final performance.\nFigure 2 and Figure 3 show that the proposed new algorithm works better than the standard EM algorithm on the Netflix and MovieLens data sets. This is not surprising, since the number of related feature-user pairs is much smaller than the number of unrelated feature-user pairs on these two data sets, and thus the proposed new algorithm is expected to work better.\nFigure 4 shows that the two algorithms work similarly on the Reuters-E data set. The accuracy of the new algorithm is similar to that of the standard EM algorithm at each iteration. The general patterns are very similar on the other Reuters subsets. 
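The per-user random split protocol used for MovieLens and Netflix can be sketched as below; the function name, the dict layout, and the fixed seed are our own illustrative choices.

```python
import random

def split_user_ratings(ratings, train_fraction=0.9, seed=0):
    """Randomly split each user's ratings into train/test portions,
    keeping train_fraction (e.g. 90%) of each user's judgments for
    training and the rest for testing, independently per user."""
    rng = random.Random(seed)
    train, test = {}, {}
    for user, items in ratings.items():
        items = list(items)
        rng.shuffle(items)                     # randomize before cutting
        cut = int(len(items) * train_fraction)
        train[user], test[user] = items[:cut], items[cut:]
    return train, test
```

Splitting per user (rather than globally) guarantees that every user has both training and test judgments, which the per-user evaluation requires.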
Further analysis shows that only 58% of the user-feature pairs are unrelated on this data set. Since the number of unrelated user-feature pairs is not extremely large, sparseness is not a serious problem on the Reuters data set, and thus the two learning algorithms perform similarly. The results suggest that only on a corpus where the number of unrelated user-feature pairs is much larger than the number of related pairs, such as the Netflix data set, will the proposed technique get a significant improvement over standard EM. However, the experiments also show that when the assumption does not hold, the new algorithm does not hurt performance.\nFigure 2: Performance on a Netflix subset with 1,000 users. The new algorithm is statistically significantly better than the EM algorithm at iterations 2-10. Norm-2 regularized linear models are equivalent to the Bayesian hierarchical models learned at the first iteration, and are statistically significantly worse than the Bayesian hierarchical models. (Plots: mean square error and classification error vs. iterations, for the new algorithm and traditional EM.)\nFigure 3: Performance on a MovieLens subset with 1,000 users. The new algorithm is statistically significantly better than the EM algorithm at iterations 2 to 17 (evaluated with mean square error). (Plots: mean square error and classification error vs. iterations.)\nFigure 4: Performance on a Reuters-E subset with 26 profiles. Performances on Reuters-C, Reuters-M, and Reuters-G are similar. (Plots: mean square error and classification error vs. iterations.)\nAlthough the proposed technique is faster than standard EM, can it really learn millions of user models quickly? Our results show that the modified EM algorithm converges quickly, and 2-3 modified EM iterations would result in a reliable estimation. We evaluated the algorithm on the whole Netflix data set (480,189 users, 159,836 features, and 100 million ratings) running on a single-CPU PC (2GB memory, P4 3GHz). The system finished one modified EM iteration in about 4 hours. This demonstrates that the proposed technique can efficiently handle a large-scale system like Netflix.\n7. CONCLUSION\nContent-based user profile learning is an important problem and is the key to providing personal recommendations to a user, especially for recommending new items with a small number of ratings. The Bayesian hierarchical modeling approach is becoming an important user profile learning approach due to its theoretically justified ability to help one user through information transfer from the other users by way of hyperpriors.\nThis paper examined the weakness of the popular EM-based learning approach for Bayesian hierarchical linear models and proposed a better learning technique called Modified EM. We showed that the new technique is theoretically more computationally efficient than the standard EM algorithm. Evaluation on the MovieLens and Netflix data sets demonstrated the effectiveness of the new technique when the data is sparse, by which we mean the ratio of related user-feature pairs to unrelated pairs is small. Evaluation on the Reuters data set showed that the new technique performs similarly to the standard EM algorithm when the sparseness condition does not hold. 
In general, it is better to use the new\nalgorithm since it is as simple as standard EM, the performance\nis either better or similar to EM, and the computation\ncomplexity is lower at each iteration. It is worth mentioning that\neven if the original problem space is not sparse, sparseness\ncan be created artificially when a recommendation system\nuses user-specific feature selection techniques to reduce the\nnoise and user model complexity. The proposed technique\ncan also be adapted to improve the learning in such a\nscenario. We also demonstrated that the proposed technique\ncan learn half a million user profiles from 100 million ratings\nin a few hours with a single CPU.\nThe research is important because scalability is a major\nconcern for researchers when using the Bayesian hierarchical\nlinear modeling approach to build a practical large scale\nsystem, even though the literature have demonstrated the\neffectiveness of the models in many applications. Our work\nis one major step on the road to make Bayesian hierarchical\nlinear models more practical. The proposed new technique\ncan be easily adapted to run on a cluster of machines, and\nthus further speed up the learning process to handle a larger\nscale system with hundreds of millions of users.\nThe research has much potential to benefit people using\nEM algorithm on many other IR problems as well as\nmachine learning problems. EM algorithm is a commonly used\nmachine learning technique. It is used to find model\nparameters in many IR problems where the training data is very\nsparse. Although we are focusing on the Bayesian\nhierarchical linear models for recommendation and filtering, the\nnew idea of using analytical solution instead of numerical\nsolution for unrelated user-feature pairs at the M step could\nbe adapted to many other problems.\n8. ACKNOWLEDGMENTS\nWe thank Wei Xu, David Lewis and anonymous\nreviewers for valuable feedback on the work described in this\npaper. 
Part of the work was supported by Yahoo, Google, the\nPetascale Data Storage Institute and the Institute for\nScalable Scientific Data Management. Any opinions, findings,\nconclusions, or recommendations expressed in this material\nare those of the authors, and do not necessarily reflect those\nof the sponsors.\n9. REFERENCES\n[1] C. Basu, H. Hirsh, and W. Cohen. Recommendation\nas classification: Using social and content-based\ninformation in recommendation. In Proceedings of the\nFifteenth National Conference on Artificial\nIntelligence, 1998.\n[2] J. S. Breese, D. Heckerman, and C. Kadie. Empirical\nanalysis of predictive algorithms for collaborative\nfiltering. Technical report, Microsoft Research, One\nMicrosoft Way, Redmond, WA 98052, 1998.\n[3] J. Callan. Document filtering with inference networks.\nIn Proceedings of the Nineteenth Annual International\nACM SIGIR Conference on Research and\nDevelopment in Information Retrieval, pages 262-269,\n1996.\n[4] N. Cancedda, N. Cesa-Bianchi, A. Conconi,\nC. Gentile, C. Goutte, T. Graepel, Y. Li, J. M.\nRenders, J. S. Taylor, and A. Vinokourov. Kernel\nmethod for document filtering. In The Eleventh Text\nREtrieval Conference (TREC11). National Institute of\nStandards and Technology, special publication\n500-249, 2003.\n[5] C. Chelba and A. Acero. Adaptation of maximum\nentropy capitalizer: Little data can help a lot. In\nD. Lin and D. Wu, editors, Proceedings of EMNLP\n2004, pages 285-292, Barcelona, Spain, July 2004.\nAssociation for Computational Linguistics.\n[6] B. Croft and J. Lafferty, editors. Language Modeling\nfor Information Retrieval. Kluwer, 2002.\n[7] A. Dayanik, D. D. Lewis, D. Madigan, V. Menkov,\nand A. Genkin. Constructing informative prior\ndistributions from domain knowledge in text\nclassification. In SIGIR \"06: Proceedings of the 29th\nannual international ACM SIGIR conference on\nResearch and development in information retrieval,\npages 493-500, New York, NY, USA, 2006. ACM\nPress.\n[8] J. 
Delgado and N. Ishii. Memory-based\nweightedmajority prediction for recommender\nsystems. In ACM SIGIR\"99 Workshop on\nRecommender Systems, 1999.\n[9] GroupLens. Movielens.\nhttp://www.grouplens.org/taxonomy/term/14, 2006.\n[10] D. Heckerman. A tutorial on learning with bayesian\nnetworks. In M. Jordan, editor, Learning in Graphical\nModels. Kluwer Academic, 1998.\n[11] J. L. Herlocker, J. A. Konstan, A. Borchers, and\nJ. Riedl. An algorithmic framework for performing\ncollaborative filtering. In SIGIR \"99: Proceedings of\nthe 22nd annual international ACM SIGIR conference\non Research and development in information retrieval,\npages 230-237, New York, NY, USA, 1999. ACM\nPress.\n[12] T. Hofmann and J. Puzicha. Latent class models for\ncollaborative filtering. In IJCAI \"99: Proceedings of\nthe Sixteenth International Joint Conference on\nArtificial Intelligence, pages 688-693, San Francisco,\nCA, USA, 1999. Morgan Kaufmann Publishers Inc.\n[13] I. M. D. (IMDB). Internet movie database.\nhttp://www.imdb.com/interfaces/, 2006.\n[14] R. Jin, J. Y. Chai, and L. Si. An automatic weighting\nscheme for collaborative filtering. In SIGIR \"04:\nProceedings of the 27th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 337-344, New York, NY,\nUSA, 2004. ACM Press.\n[15] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker,\nL. R. Gordon, and J. Riedl. GroupLens: Applying\ncollaborative filtering to Usenet news.\nCommunications of the ACM, 40(3):77-87, 1997.\n[16] D. Lewis. Applying support vector machines to the\nTREC-2001 batch filtering and routing tasks. In\nProceedings of the Eleventh Text REtrieval Conference\n(TREC-11), 2002.\n[17] B. Liu, X. Li, W. S. Lee, , and P. Yu. Text\nclassification by labeling words. In Proceedings of The\nNineteenth National Conference on Artificial\nIntelligence (AAAI-2004), July 25-29, 2004.\n[18] P. Melville, R. J. Mooney, and R. 
Nagarajan.\nContent-boosted collaborative filtering for improved\nrecommendations. In Proceedings of the Eighteenth\nNational Conference on Artificial Intelligence\n(AAAI-2002), Edmonton, Canada, 2002.\n[19] Netflix. Netflix prize. http://www.netflixprize.com\n(visited on Nov. 30, 2006), 2006.\n[20] S. Robertson and K. Sparck-Jones. Relevance\nweighting of search terms. In Journal of the American\nSociety for Information Science, volume 27, pages\n129-146, 1976.\n[21] J. Wang, A. P. de Vries, and M. J. T. Reinders.\nUnifying user-based and item-based collaborative\nfiltering approaches by similarity fusion. In SIGIR \"06:\nProceedings of the 29th annual international ACM\nSIGIR conference on Research and development in\ninformation retrieval, pages 501-508, New York, NY,\nUSA, 2006. ACM Press.\n[22] X. Wu and R. K. Srihari. Incorporating prior\nknowledge with weighted margin support vector\nmachines. In Proc. ACM Knowledge Discovery Data\nMining Conf.(ACM SIGKDD 2004), Aug. 2004.\n[23] Y. Yang, S. Yoo, J. Zhang, and B. Kisiel. Robustness\nof adaptive filtering methods in a cross-benchmark\nevaluation. In Proceedings of the 28th Annual\nInternational ACM SIGIR Conference on Research\nand Development in Information Retrieval, 2005.\n[24] K. Yu, V. Tresp, and A. Schwaighofer. Learning\ngaussian processes from multiple tasks. In ICML \"05:\nProceedings of the 22nd international conference on\nMachine learning, pages 1012-1019, New York, NY,\nUSA, 2005. ACM Press.\n[25] K. Yu, V. Tresp, and S. Yu. A nonparametric\nhierarchical bayesian framework for information\nfiltering. In SIGIR \"04: Proceedings of the 27th annual\ninternational ACM SIGIR conference on Research and\ndevelopment in information retrieval, pages 353-360.\nACM Press, 2004.\n[26] X. Zhu. Semi-supervised learning literature survey.\nTechnical report, University of Wisconsin - Madison,\nDecember 9, 2006.\n[27] P. Zigoris and Y. Zhang. Bayesian adaptive user\nprofiling with explicit & implicit feedback. 
In\nConference on Information and Knowledge\nMangement 2006, 2006.", "keywords": "classification;information filter;collaborative filtering;ir;learning technique;recommender system;bayesian hierarchical model;recommendation system;rating;content-based;em algorithm;personalization;linear regression;modeling;parameter"}
-{"name": "test_H-8", "title": "Robust Test Collections for Retrieval Evaluation", "abstract": "Low-cost methods for acquiring relevance judgments can be a boon to researchers who need to evaluate new retrieval tasks or topics but do not have the resources to make thousands of judgments. While these judgments are very useful for a one-time evaluation, it is not clear that they can be trusted when re-used to evaluate new systems. In this work, we formally define what it means for judgments to be reusable: the confidence in an evaluation of new systems can be accurately assessed from an existing set of relevance judgments. We then present a method for augmenting a set of relevance judgments with relevance estimates that require no additional assessor effort. Using this method practically guarantees reusability: with as few as five judgments per topic taken from only two systems, we can reliably evaluate a larger set of ten systems. Even the smallest sets of judgments can be useful for evaluation of new systems.", "fulltext": "1. INTRODUCTION\nConsider an information retrieval researcher who has\ninvented a new retrieval task. She has built a system to\nperform the task and wants to evaluate it. Since the task is\nnew, it is unlikely that there are any extant relevance\njudgments. She does not have the time or resources to judge\nevery document, or even every retrieved document. She can\nonly judge the documents that seem to be the most\ninformative and stop when she has a reasonable degree of confidence\nin her conclusions. But what happens when she develops a\nnew system and needs to evaluate it? Or another research\ngroup decides to implement a system to perform the task?\nCan they reliably reuse the original judgments? 
Can they\nevaluate without more relevance judgments?\nEvaluation is an important aspect of information retrieval\nresearch, but it is only a semi-solved problem: for most\nretrieval tasks, it is impossible to judge the relevance of every\ndocument; there are simply too many of them. The solution\nused by NIST at TREC (Text REtrieval Conference) is the\npooling method [19, 20]: all competing systems contribute\nN documents to a pool, and every document in that pool\nis judged. This method creates large sets of judgments that\nare reusable for training or evaluating new systems that did\nnot contribute to the pool [21].\nThis solution is not adequate for our hypothetical\nresearcher. The pooling method gives thousands of relevance\njudgments, but it requires many hours of (paid) annotator\ntime. As a result, there have been a slew of recent papers\non reducing annotator effort in producing test collections:\nCormack et al. [11], Zobel [21], Sanderson and Joho [17],\nCarterette et al. [8], and Aslam et al. [4], among others.\nAs we will see, the judgments these methods produce can\nsignificantly bias the evaluation of a new set of systems.\nReturning to our hypothetical resesarcher, can she reuse\nher relevance judgments? First we must formally define\nwhat it means to be reusable. In previous work,\nreusability has been tested by simply assessing the accuracy of a set\nof relevance judgments at evaluating unseen systems. While\nwe can say that it was right 75% of the time, or that it had a\nrank correlation of 0.8, these numbers do not have any\npredictive power: they do not tell us which systems are likely\nto be wrong or how confident we should be in any one. We\nneed a more careful definition of reusability.\nSpecifically, the question of reusability is not how\naccurately we can evaluate new systems. A malicious\nadversary can always produce a new ranked list that has not\nretrieved any of the judged documents. 
The real question\nis how much confidence we have in our evaluations, and,\nmore importantly, whether we can trust our estimates of\nconfidence. Even if confidence is not high, as long as we\ncan trust it, we can identify which systems need more\njudgments in order to increase confidence. Any set of judgments,\nno matter how small, becomes reusable to some degree.\nSmall, reusable test collections could have a huge impact\non information retrieval research. Research groups would\nbe able to share the relevance judgments they have done\nin-house for pilot studies, new tasks, or new topics. The\namount of data available to researchers would grow\nexponentially over time.\n2. ROBUST EVALUATION\nAbove we gave an intuitive definition of reusability: a\ncollection is reusable if we can trust our estimates of\nconfidence in an evaluation. By that we mean that if we have\nmade some relevance judgments and have, for example, 75%\nconfidence that system A is better than system B, we would\nlike there to be no more than 25% chance that our\nassessment of the relative quality of the systems will change as\nwe continue to judge documents. Our evaluation should be\nrobust to missing judgments.\nIn our previous work, we defined confidence as the\nprobability that the difference in an evaluation measure calculated\nfor two systems is less than zero [8]. This notion of\nconfidence is defined in the context of a particular evaluation\ntask that we call comparative evaluation: determining the\nsign of the difference in an evaluation measure. Other\nevaluation tasks could be defined; estimating the magnitude of\nthe difference or the values of the measures themselves are\nexamples that entail different notions of confidence.\nWe therefore see confidence as a probability estimate. One\nof the questions we must ask about a probability estimate is\nwhat it means. What does it mean to have 75% confidence\nthat system A is better than system B? 
As described above, we want it to mean that if we continue to judge documents, there will only be a 25% chance that our assessment will change. If this is what it means, we can trust the confidence estimates. But do we know it has that meaning?\nOur calculation of confidence rested on an assumption about the probability of relevance of unjudged documents, specifically that each unjudged document was equally likely to be relevant or nonrelevant. This assumption is almost certainly not realistic in most IR applications. As it turns out, it is this assumption that determines whether the confidence estimates can be trusted. Before elaborating on this, we formally define confidence.\n2.1 Estimating Confidence\nAverage precision (AP) is a standard evaluation metric that captures both the ability of a system to rank relevant documents highly (precision) and its ability to retrieve relevant documents (recall). It is typically written as the mean precision at the ranks of relevant documents:\nAP = (1/|R|) \u03a3_{i\u2208R} prec@r(i)\nwhere R is the set of relevant documents and r(i) is the rank of document i. Let Xi be a random variable indicating the relevance of document i. If documents are ordered by rank, we can express precision as prec@i = (1/i) \u03a3_{j=1..i} Xj.\nAverage precision then becomes the quadratic equation\nAP = (1/\u03a3 Xi) \u03a3_{i=1..n} (Xi/i) \u03a3_{j=1..i} Xj = (1/\u03a3 Xi) \u03a3_{i=1..n} \u03a3_{j\u2265i} aij Xi Xj\nwhere aij = 1/max{r(i), r(j)}. Using aij instead of 1/i allows us to number the documents arbitrarily. To see why this is true, consider a toy example: a list of 3 documents with relevant documents B, C at ranks 1 and 3 and nonrelevant document A at rank 2. Average precision will be\n(1/2)(1\u00b7x\u00b2B + (1/2)xBxA + (1/3)xBxC + (1/2)x\u00b2A + (1/3)xAxC + (1/3)x\u00b2C) = (1/2)(1 + 2/3)\nbecause xA = 0, xB = 1, xC = 1. 
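The quadratic form of AP with aij = 1/max{r(i), r(j)} can be checked numerically on the toy example from the text. This is an illustrative sketch (the function name and dict representation are ours); it enumerates all unordered document pairs regardless of labeling order.

```python
def quadratic_ap(relevance, rank):
    """Average precision written as the quadratic form
    AP = (1 / sum_i x_i) * sum_i sum_{j >= i} a_ij * x_i * x_j,
    with a_ij = 1 / max(r(i), r(j)).  Document labeling is arbitrary;
    only the ranks matter."""
    docs = list(relevance)
    total = 0.0
    for i, di in enumerate(docs):
        for dj in docs[i:]:                      # each unordered pair once
            a = 1.0 / max(rank[di], rank[dj])
            total += a * relevance[di] * relevance[dj]
    return total / sum(relevance.values())

# Toy example from the text: B, C relevant at ranks 1 and 3,
# A nonrelevant at rank 2, giving AP = (1/2)(1 + 2/3) = 5/6.
```

The same 5/6 results from the usual rank-based definition (mean of precision 1/1 at rank 1 and 2/3 at rank 3), confirming that the relabeling does not affect the computation.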
Though the ordering B, A, C is different from the labeling A, B, C, it does not affect the computation.\nWe can now see that average precision itself is a random variable with a distribution over all possible assignments of relevance to all documents. This random variable has an expectation, a variance, confidence intervals, and a certain probability of being less than or equal to a given value. All of these depend on the probability that document i is relevant: pi = p(Xi = 1). Suppose in our previous example we do not know the relevance judgments, but we believe pA = 0.4, pB = 0.8, pC = 0.7. We can then compute e.g. P(AP = 0) = 0.2 \u00b7 0.6 \u00b7 0.3 = 0.036, or P(AP = 1/2) = 0.2 \u00b7 0.4 \u00b7 0.7 = 0.056.\nSumming over all possibilities, we can compute the expectation and variance:\nE[AP] \u2248 (1/\u03a3 pi) \u03a3_i [aii pi + \u03a3_{j>i} aij pi pj]\nVar[AP] \u2248 (1/(\u03a3 pi)\u00b2) [\u03a3_i a\u00b2ii pi qi + \u03a3_{j>i} a\u00b2ij pi pj (1 \u2212 pi pj) + \u03a3_{i\u2260j} 2 aii aij pi pj (1 \u2212 pi) + \u03a3_{k>j\u2260i} 2 aij aik pi pj pk (1 \u2212 pi)]\nAP asymptotically converges to a normal distribution with expectation and variance as defined above.\u00b9\nFor our comparative evaluation task we are interested in the sign of the difference of two average precisions: \u0394AP = AP1 \u2212 AP2. As we showed in our previous work, \u0394AP has a closed form when documents are ordered arbitrarily:\n\u0394AP = (1/\u03a3 Xi) \u03a3_{i=1..n} \u03a3_{j\u2265i} cij Xi Xj, where cij = aij \u2212 bij\nand bij is defined analogously to aij for the second ranking. Since AP is normal, \u0394AP is normal as well, meaning we can use the normal cumulative density function to determine the confidence that a difference in AP is less than zero.\nSince topics are independent, we can easily extend this to mean average precision (MAP). MAP is also normally distributed; its expectation and variance are:\nEMAP = (1/T) \u03a3_{t\u2208T} E[APt] (1)\nVMAP = (1/T\u00b2) \u03a3_{t\u2208T} Var[APt]\n\u0394MAP = MAP1 \u2212 MAP2\nConfidence can then be estimated by calculating the expectation and variance and using the normal density function to find P(\u0394MAP < 0).\n2.2 Confidence and Robustness\nHaving defined confidence, we turn back to the issue of trust in confidence estimates, and show how it ties into the robustness of the collection to missing judgments.\n\u00b9These are actually approximations to the true expectation and variance, but the error is a negligible O(n\u00b72^\u2212n).\nLet Z be the set of all pairs of ranked results for a common set of topics. Suppose we have a set of m relevance judgments x^m = {x1, x2, ..., xm} (using small x rather than capital X to distinguish between judged and unjudged documents); these are the judgments against which we compute confidence. Let Z\u03b1 be the subset of pairs in Z for which we predict that \u0394MAP = \u22121 with confidence \u03b1 given the judgments x^m. For the confidence estimates to be accurate, we need at least \u03b1 \u00b7 |Z\u03b1| of these pairs to actually have \u0394MAP = \u22121 after we have judged every document. If they do, we can trust the confidence estimates; our evaluation will be robust to missing judgments.\nIf our confidence estimates are based on unrealistic assumptions, we cannot expect them to be accurate. The assumptions they are based on are the probabilities of relevance pi. We need these to be realistic.\nWe argue that the best possible distribution of relevance p(Xi) is the one that explains all of the data (all of the observations made about the retrieval systems) while at the same time making no unwarranted assumptions. 
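Given per-topic estimates of the expectation and variance of Delta-AP, the confidence P(Delta-MAP < 0) follows from the normal CDF, since topics are treated as independent. A minimal sketch, using `math.erf` for the normal CDF (function and argument names are ours):

```python
import math

def confidence_delta_map(e_dap_per_topic, var_dap_per_topic):
    """Confidence that system 1 is worse than system 2, i.e.
    P(Delta-MAP < 0): Delta-MAP is treated as normal with mean equal
    to the average of the per-topic expected Delta-AP values and
    variance equal to the sum of per-topic variances divided by T^2."""
    T = len(e_dap_per_topic)
    mean = sum(e_dap_per_topic) / T
    var = sum(var_dap_per_topic) / T ** 2
    z = (0.0 - mean) / math.sqrt(var)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

When the expected difference is zero the confidence is exactly 0.5; as the expected difference grows relative to its standard deviation, the confidence moves toward 0 or 1.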
This is\nknown as the principle of maximum entropy [13].\nThe entropy of a random variable X with distribution\np(X) is defined as H(p) = \u2212 i p(X = i) log p(X = i).\nThis has found a wide array of uses in computer science and\ninformation retrieval. The maximum entropy distribution\nis the one that maximizes H. This distribution is unique\nand has an exponential form. The following theorem shows\nthe utility of a maximum entropy distribution for relevance\nwhen estimating confidence.\nTheorem 1. If p(Xn\n|I, xm\n) = argmaxpH(p), confidence\nestimates will be accurate.\nwhere xm\nis the set of relevance judgments defined above,\nXn\nis the full set of documents that we wish to estimate the\nrelevance of, and I is some information about the documents\n(unspecified as of now). We forgo the proof for the time\nbeing, but it is quite simple.\nThis says that the better the estimates of relevance, the\nmore accurate the evaluation. The task of creating a reusable\ntest collection thus becomes the task of estimating the\nrelevance of unjudged documents.\nThe theorem and its proof say nothing whatsoever about\nthe evaluation metric. The probability estimates are entirely\nindepedent of the measure we are interested in. This means\nthe same probability estimates can tell us about average\nprecision as well as precision, recall, bpref, etc.\nFurthermore, we could assume that the relevance of\ndocuments i and j is independent and achieve the same result,\nwhich we state as a corollary:\nCorollary 1. If p(Xi|I, xm\n) = argmaxpH(p), confidence\nestimates will be accurate.\nThe task therefore becomes the imputation of the missing\nvalues of relevance. The theorem implies that the closer we\nget to the maximum entropy distribution of relevance, the\ncloser we get to robustness.\n3. PREDICTING RELEVANCE\nIn our statement of Theorem 1, we left the nature of the\ninformation I unspecified. 
One of the advantages of our\nconfidence estimates is that they admit information from a wide\nvariety of sources; essentially anything that can be\nmodeled can be used as information for predicting relevance. A\nnatural source of information is the retrieval systems\nthemselves: how they ranked the judged documents, how often\nthey failed to rank relevant documents, how they perform\nacross topics, and so on. If we treat each system as an\ninformation retrieval expert providing an opinion about the\nrelevance of each document, the problem becomes one of\nexpert opinion aggregation.\nThis is similar to the metasearch or data fusion problem\nin which the task is to take k input systems and merge them\ninto a single ranking. Aslam et al. [3] previously identified a\nconnection between evaluation and metasearch. Our\nproblem has two key differences:\n1. We explicitly need probabilities of relevance that we\ncan plug into Eq. 1; metasearch algorithms have no\nsuch requirement.\n2. We are accumulating relevance judgments as we\nproceed with the evaluation and are able to re-estimate\nrelevance given each new judgment.\nIn light of (1) above, we introduce a probabilistic model for\nexpert combination.\n3.1 A Model for Expert Opinion Aggregation\nSuppose that each expert j provides a probability of\nrelevance qij = pj(Xi = 1). The information about the\nrelevance of document i will then be the set of k expert opinions\nI = qi = (qi1, qi2, \u00b7 \u00b7 \u00b7 , qik). The probability distribution\nwe wish to find is the one that maximizes the entropy of\npi = p(Xi = 1|qi).\nAs it turns out, finding the maximum entropy model is\nequivalent to finding the parameters that maximize the\nlikelihood [5]. Blower [6] explicitly shows that finding the\nmaximum entropy model for a binary variable is equivalent to\nsolving a logistic regression. 
Then\npi = p(Xi = 1|qi) = exp(\u03a3_{j=1..k} \u03bbj qij) / (1 + exp(\u03a3_{j=1..k} \u03bbj qij)) (2)\nwhere \u03bb1, \u00b7 \u00b7 \u00b7 , \u03bbk are the regression parameters. We include a beta prior for p(\u03bbj) with parameters \u03b1, \u03b2. This can be seen as a type of smoothing to account for the fact that the training data is highly biased.\nThis model has the advantage of including the statistical dependence between the experts. A model of the same form was shown by Clemen & Winkler to be the best for aggregating expert probabilities [10]. A similar maximum-entropy-motivated approach has been used for expert aggregation [15]. Aslam & Montague [1] used a similar model for metasearch, but assumed independence among experts.\nWhere do the qij values come from? Using raw, uncalibrated scores as predictors will not work because score distributions vary too much between topics. A language modeling ranker, for instance, will typically give a much higher score to the top retrieved document for a short query than to the top retrieved document for a long query.\nWe could train a separate prediction model for each topic, but that does not take advantage of all of the information we have: we may only have a handful of judgments for a topic, not enough to train a model to any confidence. Furthermore, it seems reasonable to assume that if an expert makes good predictions for one topic, it will make good predictions for other topics as well. We could use a hierarchical model [12], but that will not generalize to unseen topics. Instead, we will calibrate the scores of each expert individually so that scores can be compared both within topic and between topic. Thus our model takes into account not only the dependence between experts, but also the dependence between experts' performances on different tasks (topics).\n3.2 Calibrating Experts\nEach expert gives us a score and a rank for each document. We need to convert these to probabilities. 
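The logistic aggregation of Equation 2 can be sketched as below. The weights `lam` are assumed to be given here; in the paper they are the regression parameters fit by maximum likelihood with a beta prior, which this sketch does not implement.

```python
import math

def aggregate_experts(q, lam):
    """Maximum-entropy / logistic aggregation of expert opinions
    (Equation 2): p(X_i = 1 | q_i) = sigmoid(sum_j lambda_j * q_ij).

    q   : list of k calibrated expert probabilities for one document
    lam : list of k regression weights (assumed already learned)
    """
    s = sum(l * qj for l, qj in zip(lam, q))
    return 1.0 / (1.0 + math.exp(-s))    # logistic (sigmoid) link
```

Because the weights are learned jointly over all experts, a redundant expert that merely echoes another receives little extra weight, which is how the model accounts for statistical dependence between experts.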
A method such\nas the one used by Manmatha et al. [14] could be used to\nconvert scores into probabilities of relevance. The pairwise\npreference method of Carterette & Petkova [9] could also be\nused, interpreting the ranking of one document over another\nas an expression of preference.\nLet q\u2217ij be expert j's self-reported probability that\ndocument i is relevant. Intuitively it seems clear that q\u2217ij should\ndecrease with rank, and it should be zero if document i\nis unranked (the expert did not believe it to be relevant).\nThe pairwise preference model can handle these two\nrequirements easily, so we will use it. Let \u03b8rj(i) be the relevance\ncoefficient of the document at rank rj(i). We want to find\nthe \u03b8s that maximize the likelihood function Ljt(\u0398).\nIf the similarity between two documents is higher than the\nthreshold \u03c3, there would be an\nedge connecting the corresponding two vertices. After the\nsimilarity graph G\u03c3 is built, the star clustering algorithm\nclusters the documents using a greedy algorithm as follows:\n1. Associate every vertex in G\u03c3 with a flag, initialized as\nunmarked.\n2. From those unmarked vertices, find the one which has\nthe highest degree and let it be u.\n3. Mark the flag of u as center.\n4. Form a cluster C containing u and all its neighbors\nthat are not marked as center. Mark all the selected\nneighbors as satellites.\n5. Repeat from step 2 until all the vertices in G\u03c3 are\nmarked.\nEach cluster is star-shaped, consisting of a single center\nand several satellites. There is only one parameter \u03c3 in\nthe star clustering algorithm. A large \u03c3 enforces that the\nconnected documents have high similarities, and thus the\nclusters tend to be small. On the other hand, a small \u03c3 will\nmake the clusters big and less coherent. We will study the\nimpact of this parameter in our experiments.\nA good feature of the star clustering algorithm is that it\noutputs a center for each cluster.
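The five greedy steps above can be sketched as follows. This is a sketch assuming the similarity graph G_sigma is given as an adjacency map from each vertex to its neighbor set (an edge meaning pairwise similarity above sigma); ties in degree are broken arbitrarily:

```python
def star_clusters(adj):
    """Greedy star clustering over a similarity graph.
    adj maps each vertex to the set of its neighbors.
    Returns a list of (center, satellites) clusters."""
    unmarked = set(adj)
    centers = set()
    clusters = []
    while unmarked:
        # Step 2: pick the unmarked vertex of highest degree as the center.
        u = max(unmarked, key=lambda v: len(adj[v]))
        centers.add(u)          # Step 3: mark u as a center.
        unmarked.discard(u)
        # Step 4: the cluster is u plus all neighbors not marked as center.
        satellites = {v for v in adj[u] if v not in centers}
        clusters.append((u, satellites))
        unmarked -= satellites  # satellites are now marked.
    return clusters             # Step 5: loop until everything is marked.
```

For example, a triangle-free star graph with center 'a' and leaves 'b', 'c' yields the single cluster `('a', {'b', 'c'})`.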
In the past query\ncollection Hq, each document corresponds to a query. This center\nquery can be regarded as the most representative one for\nthe whole cluster, and thus provides a label for the cluster\nnaturally. All the clusters obtained are related to the input\nquery q from different perspectives, and they represent the\npossible aspects of interest in query q among users.\n4.3 Categorizing Search Results\nIn order to organize the search results according to users'\ninterests, we use the learned aspects from the related past\nqueries to categorize the search results. Given the top m\nWeb pages returned by a search engine for q: {s1, ..., sm},\nwe group them into different aspects using a categorization\nalgorithm.\nIn principle, any categorization algorithm can be used\nhere. Here we use a simple centroid-based method for\ncategorization. Naturally, more sophisticated methods such as\nSVM [21] may be expected to achieve even better\nperformance.\nBased on the pseudo-documents in each discovered aspect\nCi, we build a centroid prototype pi by taking the average\nof all the vectors of the documents in Ci:\npi = (1/|Ci|) \u03a3l\u2208Ci vl.\nAll these pi's are used to categorize the search results.\nSpecifically, for any search result sj, we build a TF-IDF vector.\nThe centroid-based method computes the cosine similarity\nbetween the vector representation of sj and each centroid\nprototype pi. We then assign sj to the aspect with which it\nhas the highest cosine similarity score.\nAll the aspects are finally ranked according to the number\nof search results they have. Within each aspect, the search\nresults are ranked according to their original search engine\nranking.\n5. DATA COLLECTION\nWe construct our data set based on the MSN search log\ndata set released by Microsoft Live Labs in 2006 [14].\nIn total, this log data spans 31 days from 05/01/2006 to\n05/31/2006.
There are 8,144,000 queries, 3,441,000 distinct\nqueries, and 4,649,000 distinct URLs in the raw data.\nTo test our algorithm, we separate the whole data set into\ntwo parts according to time: the first 2/3 of the data is used\nto simulate the historical data that a search engine has\naccumulated, and we use the last 1/3 to simulate future queries.\nIn the history collection, we clean the data by only\nkeeping frequent, well-formatted, English queries (queries\nwhich only contain the characters \u2018a\u2019, \u2018b\u2019, ..., \u2018z\u2019, and space, and\nappear more than 5 times). After cleaning, we get 169,057\nunique queries in our history collection in total. On\naverage, each query has 3.5 distinct clicks. We build the\npseudo-documents for all these queries as described in\nSection 3. The average length of these pseudo-documents\nis 68 words and the total data size of our history collection\nis 129MB.\nWe construct our test data from the last 1/3 of the data.\nAccording to time, we separate this data into two equal test\nsets for cross-validation to set parameters. For each test\nset, we use every session as a test case. Each session\ncontains a single query and several clicks. (Note that we do not\naggregate sessions for test cases. Different test cases may\nhave the same queries but possibly different clicks.) Since it\nis infeasible to ask the original user who submitted a query\nto judge the results for the query, we follow [11]\nand opt to use the clicks associated with the query in a\nsession to approximate relevant documents. Using clicks as\njudgments, we can then compare different algorithms for\norganizing search results to see how well these algorithms can\nhelp users reach the clicked URLs.\nOrganizing search results into different aspects is expected\nto help informational queries. It thus makes sense to focus\non informational queries in our evaluation.
For each\ntest case, i.e., each session, we count the number of different\nclicks and filter out those test cases with fewer than 4 clicks,\nunder the assumption that a query with more clicks is more\nlikely to be an informational query. Since we want to test\nwhether our algorithm can learn from past queries, we\nalso filter out those test cases whose queries cannot retrieve\nat least 100 pseudo-documents from our history collection.\nFinally, we obtain 172 and 177 test cases in the first and\nsecond test sets respectively. On average, we have 6.23 and\n5.89 clicks per test case in the two test sets respectively.\n6. EXPERIMENTS\nIn this section, we describe our experiments on search\nresult organization based on past search engine logs.\n6.1 Experimental Design\nWe use two baseline methods to evaluate the proposed\nmethod for organizing search results. For each test case,\nthe first method is the default ranked list from a search\nengine (baseline). The second method is to organize the\nsearch results by clustering them (cluster-based). For fair\ncomparison, we use the same clustering algorithm as our\nlog-based method (i.e., star clustering). That is, we treat each\nsearch result as a document, construct the similarity graph,\nand find the star-shaped clusters. We compare our method\n(log-based) with the two baseline methods in the following\nexperiments. For both the cluster-based and log-based methods,\nthe search results within each cluster are ranked based on their\noriginal ranking given by the search engine.\nTo compare different result organization methods, we adopt\na method similar to that in [9]. That is, we compare the\nquality (e.g., precision) of the best cluster, which is defined\nas the one with the largest number of relevant documents.\nOrganizing search results into clusters is meant to help users\nnavigate to relevant documents quickly. The above metric\nsimulates a scenario in which users always choose the right\ncluster and look into it.
Specifically, we download and organize\nthe top 100 search results into aspects for each test case. We\nuse Precision at 5 documents (P@5) in the best cluster as\nthe primary measure to compare different methods. P@5 is\na very meaningful measure as it tells us the perceived\nprecision when the user opens a cluster and looks at the first 5\ndocuments. We also use Mean Reciprocal Rank (MRR) as\nanother metric. MRR is calculated as\nMRR = (1/|T|) \u03a3q\u2208T 1/rq,\nwhere T is a set of test queries and rq is the rank of the first\nrelevant document for q.\nTo give a fair comparison across different organization\nalgorithms, we force both the cluster-based and log-based\nmethods to output the same number of aspects and force each\nsearch result to be in one and only one aspect. The\nnumber of aspects is fixed at 10 in all the following experiments.\nThe star clustering algorithm can output different numbers\nof clusters for different inputs. To constrain the number of\nclusters to 10, we order all the clusters by their sizes and select\nthe top 10 as aspect candidates. We then re-assign each\nsearch result to the one of these 10 selected aspects whose\ncentroid has the highest similarity score with the result. In\nour experiments, we observe that the sizes of\nthe best clusters are all larger than 5, which ensures that\nP@5 is a meaningful metric.\n6.2 Experimental Results\nOur main hypothesis is that organizing search results based\non users' interests learned from a search log data set is\nmore beneficial than presenting them as a simple ranked list\nor clustering the search results.
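The two evaluation measures above can be sketched in code as follows; ranks are 1-based, and using `None` to mark a query with no retrieved relevant document is an assumption made here for illustration:

```python
def precision_at_k(ranked, relevant, k=5):
    """P@k: fraction of the top-k ranked results that are relevant
    (here, relevance is approximated by clicked URLs)."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def mean_reciprocal_rank(first_rel_ranks):
    """MRR = (1/|T|) * sum over queries of 1/r_q, where r_q is the
    1-based rank of the first relevant document for query q.
    A query with no relevant document retrieved (None) contributes 0."""
    return sum(1.0 / r if r else 0.0 for r in first_rel_ranks) / len(first_rel_ranks)
```

For instance, three queries whose first relevant documents appear at ranks 1, 2, and 4 give MRR = (1 + 1/2 + 1/4) / 3.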
In the following, we test our\nhypothesis from two perspectives - organization and labeling.\nMethod Test set 1 Test set 2\nMRR P@5 MRR P@5\nBaseline 0.7347 0.3325 0.7393 0.3288\nCluster-based 0.7735 0.3162 0.7666 0.2994\nLog-based 0.7833 0.3534 0.7697 0.3389\nCluster/Baseline 5.28% -4.87% 3.69% -8.93%\nLog/Baseline 6.62% 6.31% 4.10% 3.09%\nLog/Cluster 1.27% 11.76% 0.40% 13.20%\nTable 2: Comparison of different methods by MRR\nand P@5. We also show the percentage of relative\nimprovement in the lower part.\nComparison Test set 1 Test set 2\nImpr./Decr. Impr./Decr.\nCluster/Baseline 53/55 50/64\nLog/Baseline 55/44 60/45\nLog/Cluster 68/47 69/44\nTable 3: Pairwise comparison w.r.t. the number of\ntest cases whose P@5's are improved versus\ndecreased w.r.t. the baseline.\n6.2.1 Overall performance\nWe compare three methods, basic search engine\nranking (baseline), the traditional clustering-based method\n(cluster-based), and our log-based method (log-based), in Table 2\nusing MRR and P@5. We optimize the parameter \u03c3 for each\ncollection individually based on P@5 values. This shows the\nbest performance that each method can achieve. In this\ntable, we can see that in both test collections, our method\nis better than both the baseline and the cluster-based\nmethods. For example, in the first test collection, the\nbaseline MRR is 0.7347, the cluster-based method achieves\n0.7735, and our method 0.7833. We achieve higher\naccuracy than both the cluster-based method (1.27% improvement)\nand the baseline method (6.62% improvement). The P@5\nvalues are 0.3325 for the baseline and 0.3162 for the cluster-based\nmethod, but 0.3534 for our method. Our method improves\nover the baseline by 6.31%, while the cluster-based method\neven decreases the accuracy. This is because the cluster-based\nmethod organizes the search results based only on their\ncontent. Thus it may organize the results differently from\nusers' preferences.
This confirms our hypothesis of the bias\nof the cluster-based method. Comparing our method with\nthe cluster-based method, we achieve significant\nimprovement on both test collections. The p-values of the\nsignificance tests based on P@5 on the two collections are 0.01 and\n0.02 respectively. This shows that our log-based method is\neffective in learning users' preferences from past query\nhistory, and thus it can organize the search results in a way\nthat is more useful to users.\nWe showed the optimal results above. To test the\nsensitivity of the parameter \u03c3 of our log-based method, we use\none of the test sets to tune the parameter to be optimal\nand then use the tuned parameter on the other set. We\ncompare this result (log tuned outside) with the optimal\nresults of both the cluster-based (cluster optimized) and log-based\nmethods (log optimized) in Figure 1. We can see that, as\nexpected, the performance using the parameter tuned on a\nseparate set is worse than the optimal performance.\nHowever, our method still performs much better than the optimal\nresults of the cluster-based method on both test collections.\nFigure 1: Results using parameters tuned from the\nother test collection. We compare it with the\noptimal performance of the cluster-based and our\nlog-based methods.\nFigure 2: The correlation between performance\nchange and result diversity.\nIn Table 3, we show pairwise comparisons of the three\nmethods in terms of the numbers of test cases for which\nP@5 is increased versus decreased. We can see that our\nmethod improves more test cases than the other two\nmethods do.
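The paper does not state which significance test produced the p-values above. As one illustrative option, the improved/decreased counts of Table 3 can be checked with a simple two-sided sign test (ties discarded), sketched here under that assumption:

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided exact sign test. Under H0 each test case is equally
    likely to improve or decrease, so the number of improvements is
    Binomial(n, 0.5). Returns the probability of a split at least as
    lopsided as the one observed (doubled tail, capped at 1)."""
    n = wins + losses
    k = max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

For example, `sign_test_p(68, 47)` scores the Log/Cluster split on test set 1.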
In the next section, we show more detailed\nanalysis to see what types of test cases can be improved by\nour method.\n6.2.2 Detailed Analysis\nTo better understand the cases where our log-based method\ncan improve the accuracy, we examine two properties: result\ndiversity and query difficulty. All the analysis below is based\non test set 1.\nDiversity Analysis: Intuitively, organizing search\nresults into different aspects is more beneficial for those queries\nwhose results are more diverse, as for such queries, the\nresults tend to form two or more big clusters. In order to\ntest the hypothesis that the log-based method helps more on\nqueries with diverse results, we compute the size ratio of\nthe biggest and second biggest clusters in our log-based\nresults and use this ratio as an indicator of diversity. If the\nratio is small, it means that the first two clusters differ\nlittle in size, and thus the results are more diverse. In this\ncase, we would expect our method to help more. The\nresults are shown in Figure 2. In this figure, we partition the\nratios into 4 bins. The 4 bins correspond to the ratio ranges\n[1, 2), [2, 3), [3, 4), and [4, +\u221e) respectively. ([i, j) means\nthat i \u2264 ratio < j.) In each bin, we count the numbers of\ntest cases whose P@5's are improved versus decreased with\nrespect to the ranking baseline, and plot the numbers in this\nfigure. We can observe that when the ratio is smaller, the\nlog-based method can improve more test cases. But when\nthe ratio is large, the log-based method cannot improve\nover the baseline. For example, in bin 1, 48 test cases are\nimproved and 34 are decreased. But in bin 4, all 4 test\ncases are decreased.\nFigure 3: The correlation between performance\nchange and query difficulty.
This confirms our hypothesis that our\nmethod can help more if the query has more diverse results.\nThis also suggests that we should turn off the option of\nre-organizing search results if the results are not very diverse\n(e.g., as indicated by the cluster size ratio).\nDifficulty Analysis: Difficult queries have been studied\nin recent years [7, 25, 5]. Here we analyze the effectiveness\nof our method in helping difficult queries. We quantify the\nquery difficulty by the Mean Average Precision (MAP) of\nthe original search engine ranking for each test case. We\nthen order the 172 test cases in test set 1 in an increasing\norder of MAP values. We partition the test cases into 4 bins\nwith each having a roughly equal number of test cases. A\nsmall MAP means that the utility of the original ranking is\nlow. Bin 1 contains those test cases with the lowest MAP\"s\nand bin 4 contains those test cases with the highest MAP\"s.\nFor each bin, we compute the numbers of test cases whose\nP@5\"s are improved versus decreased. Figure 3 shows the\nresults. Clearly, in bin 1, most of the test cases are improved\n(24 vs 3), while in bin 4, log-based method may decrease\nthe performance (3 vs 20). This shows that our method\nis more beneficial to difficult queries, which is as expected\nsince clustering search results is intended to help difficult\nqueries. This also shows that our method does not really\nhelp easy queries, thus we should turn off our organization\noption for easy queries.\n6.2.3 Parameter Setting\nWe examine parameter sensitivity in this section. For the\nstar clustering algorithm, we study the similarity threshold\nparameter \u03c3. For the OKAPI retrieval function, we study\nthe parameters k1 and b. We also study the impact of the\nnumber of past queries retrieved in our log-based method.\nFigure 4 shows the impact of the parameter \u03c3 for both\ncluster-based and log-based methods on both test sets. We\nvary \u03c3 from 0.05 to 0.3 with step 0.05. 
Figure 4 shows that\nthe performance is not very sensitive to the parameter \u03c3. We\ncan always obtain the best result in the range 0.1 \u2264 \u03c3 \u2264 0.25.\nIn Table 4, we show the impact of the OKAPI parameters.\nWe vary k1 from 1.0 to 2.0 with step 0.2 and b from 0 to\n1 with step 0.2. From this table, it is clear that P@5 is\nalso not very sensitive to the parameter setting. Most of the\nvalues are larger than 0.35. The default values k1 = 1.2 and\nb = 0.8 give approximately optimal results.\nFigure 4: The impact of similarity threshold \u03c3 on\nboth cluster-based and log-based methods. We show\nthe result on both test collections.\nb\nk1 0.0 0.2 0.4 0.6 0.8 1.0\n1.0 0.3476 0.3406 0.3453 0.3616 0.3500 0.3453\n1.2 0.3418 0.3383 0.3453 0.3593 0.3534 0.3546\n1.4 0.3337 0.3430 0.3476 0.3604 0.3546 0.3465\n1.6 0.3476 0.3418 0.3523 0.3534 0.3581 0.3476\n1.8 0.3465 0.3418 0.3546 0.3558 0.3616 0.3476\n2.0 0.3453 0.3500 0.3534 0.3558 0.3569 0.3546\nTable 4: Impact of OKAPI parameters k1 and b.\nWe further study the impact of the amount of history\ninformation to learn from by varying the number of past\nqueries to be retrieved for learning aspects. The results on\nboth test collections are shown in Figure 5. We can see\nthat the performance gradually increases as we enlarge the\nnumber of past queries retrieved. Thus our method could\npotentially learn more as we accumulate more history. More\nimportantly, as time goes on, more and more queries will have\nsufficient history, so we can improve more and more queries.\n6.2.4 An Illustrative Example\nWe use the query area codes to show the difference in\nthe results of the log-based method and the cluster-based\nmethod. This query may mean phone codes or zip codes.\nTable 5 shows the representative keywords extracted from\nthe three biggest clusters of both methods.
In the\ncluster-based method, the results are partitioned based on location:\nlocal or international. In the log-based method, the results\nare disambiguated into two senses: phone codes and zip\ncodes. While both are reasonable partitions, our\nevaluation indicates that most users issuing such a query are\ninterested in either phone codes or zip codes, since the\nP@5 values of the cluster-based and log-based methods are 0.2\nand 0.6, respectively. Therefore our log-based method is\nmore effective in helping users navigate to their desired\nresults.\nCluster-based method Log-based method\ncity, state telephone, city, international\nlocal, area phone, dialing\ninternational zip, postal\nTable 5: An example showing the difference between\nthe cluster-based method and our log-based method.\nFigure 5: The impact of the number of past queries\nretrieved.\n6.2.5 Labeling Comparison\nWe now compare the labels produced by the cluster-based\nmethod and the log-based method. The cluster-based method\nhas to rely on the keywords extracted from the snippets to\nconstruct the label for each cluster. Our log-based method\navoids this difficulty by taking advantage of queries.\nSpecifically, for the cluster-based method, we count the frequency\nof a keyword appearing in a cluster and use the most\nfrequent keywords as the cluster label. For the log-based method,\nwe use the center of each star cluster as the label for the\ncorresponding cluster.\nIn general, it is not easy to quantify the readability of a\ncluster label automatically. We use examples to show the\ndifference between the cluster-based and the log-based\nmethods. In Table 6, we list the labels of the top 5 clusters for\ntwo examples, jaguar and apple. For the cluster-based\nmethod, we separate keywords by commas since they do not\nform a phrase.
From this table, we can see that our log-based\nmethod gives more readable labels because it generates\nlabels based on users\" queries. This is another advantage of\nour way of organizing search results over the clustering\napproach.\nLabel comparison for query jaguar\nLog-based method Cluster-based method\n1. jaguar animal 1. jaguar, auto, accessories\n2. jaguar auto accessories 2. jaguar, type, prices\n3. jaguar cats 3. jaguar, panthera, cats\n4. jaguar repair 4. jaguar, services, boston\n5. jaguar animal pictures 5. jaguar, collection, apparel\nLabel comparison for query apple\nLog-based method Cluster-based method\n1. apple computer 1. apple, support, product\n2. apple ipod 2. apple, site, computer\n3. apple crisp recipe 3. apple, world, visit\n4. fresh apple cake 4. apple, ipod, amazon\n5. apple laptop 5. apple, products, news\nTable 6: Cluster label comparison.\n7. CONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the problem of organizing search\nresults in a user-oriented manner. To attain this goal, we\nrely on search engine logs to learn interesting aspects from\nusers\" perspective. Given a query, we retrieve its related\nqueries from past query history, learn the aspects by\nclustering the past queries and the associated clickthrough\ninformation, and categorize the search results into the aspects\nlearned. We compared our log-based method with the\ntraditional cluster-based method and the baseline of search\nengine ranking. 
The experiments show that our log-based\nmethod can consistently outperform cluster-based method\nand improve over the ranking baseline, especially when the\nqueries are difficult or the search results are diverse.\nFurthermore, our log-based method can generate more\nmeaningful aspect labels than the cluster labels generated based\non search results when we cluster search results.\nThere are several interesting directions for further\nextending our work: First, although our experiment results have\nclearly shown promise of the idea of learning from search\nlogs to organize search results, the methods we have\nexperimented with are relatively simple. It would be interesting\nto explore other potentially more effective methods. In\nparticular, we hope to develop probabilistic models for learning\naspects and organizing results simultaneously. Second, with\nthe proposed way of organizing search results, we can\nexpect to obtain informative feedback information from a user\n(e.g., the aspect chosen by a user to view). It would thus\nbe interesting to study how to further improve the\norganization of the results based on such feedback information.\nFinally, we can combine a general search log with any\npersonal search log to customize and optimize the organization\nof search results for each individual user.\n8. ACKNOWLEDGMENTS\nWe thank the anonymous reviewers for their valuable\ncomments. This work is in part supported by a Microsoft Live\nLabs Research Grant, a Google Research Grant, and an NSF\nCAREER grant IIS-0347933.\n9. REFERENCES\n[1] E. Agichtein, E. Brill, and S. T. Dumais. Improving\nweb search ranking by incorporating user behavior\ninformation. In SIGIR, pages 19-26, 2006.\n[2] J. A. Aslam, E. Pelekov, and D. Rus. The star\nclustering algorithm for static and dynamic\ninformation organization. Journal of Graph\nAlgorithms and Applications, 8(1):95-129, 2004.\n[3] R. A. Baeza-Yates. Applications of web query mining.\nIn ECIR, pages 7-22, 2005.\n[4] D. 
Beeferman and A. L. Berger. Agglomerative\nclustering of a search engine query log. In KDD, pages\n407-416, 2000.\n[5] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg.\nWhat makes a query difficult? In SIGIR, pages\n390-397, 2006.\n[6] H. Chen and S. T. Dumais. Bringing order to the web:\nautomatically categorizing search results. In CHI,\npages 145-152, 2000.\n[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft.\nPredicting query performance. In Proceedings of ACM\nSIGIR 2002, pages 299-306, 2002.\n[8] S. T. Dumais, E. Cutrell, and H. Chen. Optimizing\nsearch by showing results in context. In CHI, pages\n277-284, 2001.\n[9] M. A. Hearst and J. O. Pedersen. Reexamining the\ncluster hypothesis: Scatter/gather on retrieval results.\nIn SIGIR, pages 76-84, 1996.\n[10] T. Joachims. Optimizing search engines using\nclickthrough data. In KDD, pages 133-142, 2002.\n[11] T. Joachims. Evaluating Retrieval Performance Using\nClickthrough Data., pages 79-96. Physica/Springer\nVerlag, 2003. in J. Franke and G. Nakhaeizadeh and I.\nRenz, Text Mining.\n[12] R. Jones, B. Rey, O. Madani, and W. Greiner.\nGenerating query substitutions. In WWW, pages\n387-396, 2006.\n[13] K. Kummamuru, R. Lotlikar, S. Roy, K. Singal, and\nR. Krishnapuram. A hierarchical monothetic\ndocument clustering algorithm for summarization and\nbrowsing search results. In WWW, pages 658-665,\n2004.\n[14] Microsoft Live Labs. Accelerating search in academic\nresearch, 2006.\nhttp://research.microsoft.com/ur/us/fundingopps/RFPs/\nSearch 2006 RFP.aspx.\n[15] P. Pirolli, P. K. Schank, M. A. Hearst, and C. Diehl.\nScatter/gather browsing communicates the topic\nstructure of a very large text collection. In CHI, pages\n213-220, 1996.\n[16] F. Radlinski and T. Joachims. Query chains: learning\nto rank from implicit feedback. In KDD, pages\n239-248, 2005.\n[17] S. E. Robertson and S. Walker. Some simple effective\napproximations to the 2-poisson model for\nprobabilistic weighted retrieval. 
In SIGIR, pages\n232-241, 1994.\n[18] G. Salton, A. Wong, and C. S. Yang. A vector space\nmodel for automatic indexing. Commun. ACM,\n18(11):613-620, 1975.\n[19] X. Shen, B. Tan, and C. Zhai. Context-sensitive\ninformation retrieval using implicit feedback. In\nSIGIR, pages 43-50, 2005.\n[20] C. J. van Rijsbergen. Information Retrieval, second\nedition. Butterworths, London, 1979.\n[21] V. N. Vapnik. The Nature of Statistical Learning\nTheory. Springer-Verlag, Berlin, 1995.\n[22] Vivisimo. http://vivisimo.com/.\n[23] X. Wang, J.-T. Sun, Z. Chen, and C. Zhai. Latent\nsemantic analysis for multiple-type interrelated data\nobjects. In SIGIR, pages 236-243, 2006.\n[24] J.-R. Wen, J.-Y. Nie, and H. Zhang. Clustering user\nqueries of a search engine. In WWW, pages 162-168,\n2001.\n[25] E. Yom-Tov, S. Fine, D. Carmel, and A. Darlow.\nLearning to estimate query difficulty: including\napplications to missing content detection and\ndistributed information retrieval. In SIGIR, pages\n512-519, 2005.\n[26] O. Zamir and O. Etzioni. Web document clustering: A\nfeasibility demonstration. In SIGIR, pages 46-54,\n1998.\n[27] O. Zamir and O. Etzioni. Grouper: A dynamic\nclustering interface to web search results. Computer\nNetworks, 31(11-16):1361-1374, 1999.\n[28] H.-J. Zeng, Q.-C. He, Z. Chen, W.-Y. Ma, and J. Ma.\nLearning to cluster web search results. In SIGIR,\npages 210-217, 2004.", "keywords": "interest aspect;ranking function;history collection;clickthrough;centroid prototype;monothetic clustering algorithm;reciprocal rank;centroid-based method;search result organization;mean average precision;clustering view;past query;cosine similarity;similarity threshold parameter;log-based method;meaningful cluster label;pseudo-document;retrieval model;ambiguity;search engine log;star clustering algorithm;search result snippet;pairwise similarity graph;suffix tree clustering algorithm"}
-{"name": "test_I-1", "title": "Aborting Tasks in BDI Agents", "abstract": "Intelligent agents that are intended to work in dynamic environments must be able to gracefully handle unsuccessful tasks and plans. In addition, such agents should be able to make rational decisions about an appropriate course of action, which may include aborting a task or plan, either as a result of the agent\"s own deliberations, or potentially at the request of another agent. In this paper we investigate the incorporation of aborts into a BDI-style architecture. We discuss some conditions under which aborting a task or plan is appropriate, and how to determine the consequences of such a decision. We augment each plan with an optional abort-method, analogous to the failure method found in some agent programming languages. We provide an operational semantics for the execution cycle in the presence of aborts in the abstract agent language CAN, which enables us to specify a BDI-based execution model without limiting our attention to a particular agent system (such as JACK, Jadex, Jason, or SPARK). A key technical challenge we address is the presence of parallel execution threads and of sub-tasks, which require the agent to ensure that the abort methods for each plan are carried out in an appropriate sequence.", "fulltext": "1. 
INTRODUCTION\nIntelligent agents generally work in complex, dynamic\nenvironments, such as air traffic control or robot navigation, in which the\nsuccess of any particular action or plan cannot be guaranteed [13].\nAccordingly, dealing with failure is fundamental to agent\nprogramming, and is an important element of agent characteristics such as\nrobustness, flexibility, and persistence [21].\nIn agent architectures inspired by the Belief-Desire-Intention (BDI)\nmodel [16], these properties are often characterized by the\ninteractions between beliefs, goals, and plans [2].1\nIn general, an agent\nthat wishes to achieve a particular set of tasks will pursue a\nnumber of plans concurrently. When failures occur, the choice of plans\nwill be reviewed. This may involve seeking alternative plans for a\nparticular task, re-scheduling tasks to better comply with resource\nconstraints, dropping some tasks, or some other decision that will\nincrease the likelihood of success [12, 14]. Failures can occur for\na number of reasons, and it is often not possible to predict these\nin advance, either because of the complexity of the system or\nbecause changes in the environment invalidate some earlier decisions.\nGiven this need for deliberation about failed tasks or plans, failure\ndeliberation is commonly built into the agent\"s execution cycle.\nBesides dealing with failure, an important capability of an\nintelligent agent is to be able to abort a particular task or plan. This\ndecision may be due to an internal deliberation (such as the agent\nbelieving the task can no longer be achieved, or that some\nconflicting task now has a higher priority) or due to an external factor\n(such as another agent altering a commitment, or a change in the\nenvironment).\nAborting a task or plan is distinct from its failure. Failure\nreflects an inability to perform and does not negate the need to\nperform - for example, a reasonable response to failure may be to try\nagain. 
In contrast, aborting says nothing about the ability to\nperform; it merely eliminates the need. Failure propagates from the\nbottom up, whereas aborting propagates from the top down. The\npotential for concurrently executing sub-plans introduces different\ncomplexities for aborting and failure. For aborting, it means that\nmultiple concurrent sub-plans may need to be aborted as the abort\nis propagated down. For failure, it means that parallel-sibling plans\nmay need to be aborted as the failure is propagated up.\nThere has been a considerable amount of work on plan failures\n(such as detecting and resolving resource conflicts [20, 10]) and\nmost agent systems incorporate some notion of failure handling.\nHowever, there has been relatively little work on the development\nof abort techniques beyond simple dropping of currently intended\nplans and tasks, which does not deal with the clean-up required.\nAs one consequence, most agent systems are quite limited in their\ntreatment of the situation where one branch of a parallel construct\nfails (common approaches include either letting the other branch\nrun to completion unhindered or dropping it completely).\n1\nOne can consider both tasks to be performed and goals to achieve\na certain state of the world. A task can be considered a goal of\nachieving the state of the task having been performed, and a goal\ncan be considered a task of bringing about that state of the world.\nWe adopt the latter view and use task to also refer to goals.\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nIn this paper we discuss in detail the incorporation of abort\ncleanup methods into the agent execution cycle, providing a unified\napproach to failure and abort. A key feature of our procedure-based\napproach is that we allow each plan to execute some particular code\non a failure and on an abort. This allows a plan to attempt to ensure\na stable, known state, and possibly to recover some resources and\notherwise clean up before exiting.
Accordingly, a central technical challenge is to manage the orderly execution of the appropriate clean-up code. We show how aborts can be smoothly introduced into a BDI-style architecture, and for the first time we give an operational semantics for aborting in the abstract agent language CAN [23, 17]. This allows us to specify an appropriate level of detail for the execution model, without focusing on the specific constructs of any one agent system such as JACK [2], Jadex [14], Jason [6], or SPARK [9]. Our focus is on a single agent, complementary to related work that considers exception handling for single- and multi-agent systems (e.g., [22, 5, 6]).

This paper is organized as follows. In Section 2 we give an example of the consequences of aborting a task, and in Section 3 we discuss some circumstances under which aborts should occur, and the appropriate representation and invocation procedures. In Section 4 we show how we can use CAN to formally specify the behaviour of an aborted plan. Section 5 discusses related work, and in Section 6 we present our conclusions and future work.

2. MOTIVATING EXAMPLE

Alice is a knowledge worker assisted by a learning, personal assistive agent such as CALO [11]. Alice plans to attend the IJCAI conference later in the year, and her CALO agent adopts the task Support Meeting Submission (SMS) to assist her. CALO's plan for an SMS task, in the case of a conference submission, consists of the following sub-tasks:

1. Allocate a Paper Number (APN) to be used for administrative purposes in the company.
2. Track Writing Abstract (TWA): keep track of Alice's progress in preparing an abstract.
3. Apply For Clearance (AFC) for publication from Alice's manager, based on the abstract and conference details.
4. Track Writing Paper (TWP): keep track of Alice's progress in writing the paper.
5.
Handle Paper Submission (HPS): follow company internal procedures for submitting a paper to a conference.

These steps must be performed in order, with the exception of steps 3 (AFC) and 4 (TWP), which may be performed in parallel. Similarly, CALO can perform the task Apply For Clearance (AFC) by a plan consisting of:

1. Send Clearance Request (SCR) to Alice's manager.
2. Wait For Response (WFR) from the manager.
3. Confirm that the response was positive, and fail otherwise.

Now suppose that a change in circumstances causes Alice to reconsider her travel plans while she is writing the paper. Alice will no longer be able to attend IJCAI. She therefore instructs her CALO agent to abort the SMS task. Aborting the task implies aborting both the SMS plan and the AFC sub-plan. Aborting the first plan requires CALO to notify the paper number registry that the allocated paper number is obsolete, which it can achieve by the Cancel Paper Number (CPN) task.² Aborting the second plan requires CALO to notify Alice's manager that Alice no longer requires clearance for publication, which CALO can achieve by invoking the Cancel Clearance Request (CCR) task.

We note a number of important observations from the example. First, the decision to abort a particular course of action can come from the internal deliberations of the agent (such as reasoning about priorities in a conflict over resources), or from external sources (such as another agent cancelling a commitment), as in this example.
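The plan structure of this example can also be sketched as data. The representation below is our own illustration (the paper does not show CALO's internal format), with the clean-up tasks CPN and CCR attached as abort-methods:

```python
# Illustrative sketch (our own representation, not CALO's) of the two
# plans in the example.  Each plan lists its sub-task steps -- a tuple
# holds steps that may run in parallel -- and the clean-up task that
# its abort-method would invoke.

SMS_PLAN = {
    "task": "SMS",
    "steps": [("APN",), ("TWA",), ("AFC", "TWP"), ("HPS",)],
    "on_abort": "CPN",   # cancel the allocated paper number
}

AFC_PLAN = {
    "task": "AFC",
    "steps": [("SCR",), ("WFR",), ("CONFIRM",)],
    "on_abort": "CCR",   # cancel the clearance request
}

def parallel_steps(plan):
    """Return the step groups that contain parallel sub-tasks."""
    return [group for group in plan["steps"] if len(group) > 1]
```

Here `parallel_steps(SMS_PLAN)` returns the single group `("AFC", "TWP")`, the only sub-tasks that run in parallel.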
In this paper we only touch on the problem of determining whether a task or plan should be aborted, concentrating instead on determining the appropriate actions once this decision is made. Hence, our objective is to determine how to incorporate aborting mechanisms into the standard execution cycle, rather than what should be aborted and when.

Second, once the decision is made to abort the attempt to submit a paper, there are some actions the agent should take, such as cancelling the clearance request. In other words, aborting a task is not simply a matter of dropping the task and associated active plans: there are some clean-up actions that may need to be done. This is similar to the case for failure, in that there may also be actions to take when a task or plan fails. In both cases, note that it is not simply a matter of the agent undoing its actions to date; indeed, this may be neither possible (since the agent acts in a situated world and its actions change world state) nor desirable (depending on the semantics of the task). Rather, cleaning up involves compensation via forward recovery actions [3].

Third, there is a distinction between aborting a task and aborting a plan. In the former case, it is clear that all plans being executed to perform the task should be aborted; in the latter case, it may be that there are better alternatives to the current plan, and one of these should be attempted. Hence, plan aborting or failure does not necessarily lead to task aborting or failure.

Fourth, given that tasks may contain sub-tasks, which may contain further sub-tasks, it is necessary for a parent task to wait until its children have finished their abort-methods. This is the source of one of the technical challenges that we address: determining the precise sequence of actions once a parent task or plan is aborted.

3.
ABORTING TASKS AND PLANS

As we have alluded to, failure and aborting are related concepts. Both cause the execution of existing plans to cease and, consequently, the agent to reflect over its current tasks and intentions. Failure and aborting, however, differ in the way they arise. In the case of failure, the trigger to cease execution of a task or plan comes from below, that is, from the failure of sub-tasks or lower-level plans. In the case of aborting, the trigger comes from above, that is, from the tasks and the parent plans that initiated a plan.

In BDI-style systems such as JACK and SPARK, an agent's domain knowledge includes a pre-defined plan library of plan clauses. Each plan clause has a plan body, which is a program (i.e., a combination of primitive actions, sub-tasks, etc.) that can be executed in response to a task or other event, should the plan clause's context condition be satisfied. The agent selects and executes instances of plan clauses to perform its tasks. There can be more than one applicable plan clause and, in the event that one fails, another applicable one may be attempted. Plans may have sub-tasks that must succeed for the plan to succeed. In such systems, a plan failure occurs if one of the actions or sub-tasks within the plan fails.

² CALO needs only drop the TWA and TWP tasks to abort them: for the sake of simplicity we suppose no explicit clean-up of its internal state is necessary.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 9

The agent's action upon plan failure depends on its nature: for example, the agent may declare the task to have failed if one plan has been tried and resulted in failure, or it may retry alternate plans and declare (indeed, must declare) task failure only if all possible alternate plans to perform the task have been tried and resulted in failure.
Observe that, while task failure can follow from plan failure or a sequence of plan failures, plan failure need not lead to task failure, provided the agent can successfully complete an alternate plan. Moreover, task failure can also arise separately from plan failure, if the agent decides to abort the task.

Our approach associates an abort-method with each plan. This enables the programmer to specify dedicated compensation actions according to how the agent is attempting to perform the task. Note that our abort-methods can be arbitrary programs and so can invoke tasks that may be performed dynamically in the usual BDI fashion, i.e., the clean-up is not limited to executing a predetermined set of actions. The question remains which abort-method should be invoked, and in what manner. Given the complexity of agent action spaces, it is neither possible nor desirable to enumerate a static set of rules. Rather, the agent will invoke its abort-methods dynamically according to the state of execution and its own internal events.

An alternative to attaching an abort-method to each plan is to attach such methods to each atomic action. We choose the former because: (1) action-level abort-methods would incur a greater overhead; (2) plans are meant to be designed as single cohesive units and are the unit of deliberation in BDI systems; and (3) the clean-up methods for failure in current systems are attached to plans.

In order to understand how the agent's abort processing should function, we consider three situations where it is sensible for an agent to consider aborting some of its tasks and plans:

1. When a task succeeds or fails because of an external factor other than the agent itself, the plan currently executed to perform the task should be aborted. For example, suppose company policy changes so that employees of Alice's seniority automatically have clearance for publishing papers.
Since Alice now has clearance for publishing her paper, CALO can abort the plan for Apply For Clearance. In doing so it must invoke the abort-method, in this case performing Cancel Clearance Request.³

2. When two or more sub-programs are executed in parallel, if one fails then the others should be aborted, given that the failure of one branch leads to the failure of the overall task. For example, suppose that part-way through writing the paper, Alice realizes that there is a fatal flaw in her results, and so notifies CALO that she will not be able to complete the paper by the deadline. The failure of the Track Writing Paper task should cause the Apply For Clearance task being executed in parallel to be aborted.

3. When an execution event alters the importance of an existing task or intention, the agent should deliberate over whether the existing plan(s) should continue. For example, suppose that Alice tasks CALO with a new, high-priority task to purchase a replacement laptop, but that Alice lacks enough funds both to purchase the laptop and to attend IJCAI. Reasoning over resource requirements [20, 10] will cause the agent to realize that it cannot successfully complete both tasks. Given that the new task has greater importance, a rational agent will evaluate its best course of action and may decide to abort - or at least suspend - the existing task of submitting a paper and the intentions derived from it [12].

³ If there is any difference between how to abort a task that is externally performed versus how to abort one that is now known to be impossible, the abort-method can detect the circumstances and handle the situation as appropriate.

The operational semantics we provide in Section 4 for aborting tasks and plans captures the first two situations above. The third situation involves deliberating over the importance of a task, which depends on various factors such as task priority.
Although this deliberation is beyond the scope of this paper, it is a complementary topic of our future work.

Note that the above situations apply to achievement goals, for which the task is completed when a particular state of the world is brought about (e.g., ensure we have clearance). Different forms of reasoning apply to other goal types [4] such as maintenance goals [1], where the goal is satisfied by maintaining a state of the world for some period of time (e.g., maintain $100 in cash).

Abort Method Representation

The intent of aborting a task or plan is that the task or plan and all its children cease to execute, and that appropriate clean-up methods are performed as required. In contrast to offline planning systems, BDI agents are situated: they perform online deliberation and their actions change the state of the world. As a result, the effects of many actions cannot simply be undone. Moreover, the undo process may cause adverse effects. Therefore, the clean-up methods that we specify are forward recovery procedures that attempt to ensure a stable state and that may also, if possible, recover resources.

The common plan representation in BDI-style systems such as JACK and SPARK includes a failure-method, which is the designated clean-up method invoked when the plan fails. To this, we add the abort-method, which is invoked if the plan is to be aborted. In our example, the abort-method for the plan for Support Meeting Submission consists of invoking the sub-task Cancel Paper Number. The abort-method need not explicitly abort Apply For Clearance, because the agent will invoke the abort-method for the sub-task appropriately, as we outline below.

The assumption here is that, like the failure-method, the programmer of the agent system has the opportunity to specify a sensible abort-method that takes into consideration the point in the plan at which the abort is to be executed.
For any plan, the abort-method is optional: if no abort-method is specified, the agent takes no specific action for this plan. However, the agent's default behavioural rules still apply, for example, whether to retry an alternate plan for the parent task.

Note that an explicit representation of clean-up methods for tasks is not required, since tasks are performed by executing some plan or plans. Hence, aborting a task means aborting the current plan that is executed to perform that task, as we next describe.

Abort Method Invocation

We now informally lay out the agent's actions upon aborting plans and tasks. When a plan P is aborted:

1. Abort each sub-task that is an active child of P. An active child is one that was triggered by P and is currently in execution.
2. When there are no more active children, invoke the abort-method of plan P.
3. Indicate a plan failure to T_P, the parent task of P. We note here that if the parent task T_P is not to be aborted, then the agent may choose another applicable plan to satisfy T_P.

When a task (or sub-task) T is aborted:

1. Abort the current active plan to satisfy T (if any).
2. When there are no more active child processes, drop the task. The agent thus no longer pursues T.
3. Note here that when the current active plan for performing T is aborted, no other applicable plans to perform T should be tried, as it is the task itself that is to be aborted.

In order to prevent infinitely cascading clean-up efforts, we assume that abort-methods will never be aborted and never fail. In reality, however, an abort-method may fail. In this case, lacking a more sophisticated handling mechanism, the agent simply stops executing the failed abort-method with no further deliberation.
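The invocation order above can be sketched as a small recursion. The classes and names below are our own illustration (the paper's precise treatment is the CAN semantics of Section 4):

```python
# Illustrative sketch of the abort-invocation order described above:
# aborting a plan first aborts its active child tasks, then runs the
# plan's own abort-method, then signals plan failure to its parent.
# The Plan/Task classes are assumptions for illustration only.

class Plan:
    def __init__(self, name, abort_method=None, children=None):
        self.name = name
        self.abort_method = abort_method   # callable clean-up, or None
        self.children = children or []     # active child Tasks

class Task:
    def __init__(self, name, active_plan=None):
        self.name = name
        self.active_plan = active_plan     # the Plan currently pursuing it

def abort_plan(plan, log):
    for task in plan.children:             # 1. abort active children first
        abort_task(task, log)
    if plan.abort_method:                  # 2. then run own abort-method
        plan.abort_method(log)
    log.append(("plan-failed", plan.name)) # 3. indicate failure to parent

def abort_task(task, log):
    if task.active_plan:                   # 1. abort the current plan
        abort_plan(task.active_plan, log)
    log.append(("dropped", task.name))     # 2. then drop the task; no retry
```

Running `abort_plan` on a sketch of the SMS plan with the AFC task as an active child executes CCR before CPN, i.e., a child's clean-up completes before the parent's abort-method runs.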
The assumption we make is thus not a reflection of the full complexity of reality, but one that is pragmatic in terms of the agent execution cycle; the approach to failure handling of [21] makes the same assumption. In systems such as SPARK, the programmer can specify an alternative behaviour for a failed failure- or abort-method by means of meta-level procedures. We also assume that failure- and abort-methods terminate in finite time.

4. OPERATIONAL SEMANTICS

We provide the semantics for the task and plan failure and aborting processes outlined above. We use the CAN language initially defined in [23] and later extended as CANPLAN in [17] to include a planning component, and then as CANPLAN2 in [18] to improve the goal adoption and dropping mechanisms. These extensions also simplified the semantics of the earlier work; we use some of these simplifications in the brief summary of the CAN language in Section 4.1. Following a presentation of the operational semantics of our approach in Section 4.2, in Section 4.3 we provide a worked example to clarify the semantics.

4.1 CAN Language

CAN is a high-level agent language, in a spirit similar to that of AgentSpeak [15] and Kinny's Ψ [7], both of which attempt to extract the essence of a class of implemented BDI agent systems. CAN provides an explicit goal construct that captures both the declarative and procedural aspects of a goal. Goals are persistent in CAN in that, when a plan fails, another applicable plan is attempted. This equates to the default failure handling mechanism typically found in implemented BDI systems such as JACK [2]. In practical systems, tasks are typically translated into events that trigger the execution of some plans. This is also true in the CAN language but, in order to maintain the persistence of goals, a goal construct is introduced.
This is denoted by Goal(φs, P, φf), where φs is the success condition that determines when the goal is considered achieved, φf is a fail condition under which the goal is considered no longer achievable or relevant, and P is a program for achieving the goal, which will be aborted once φs or φf becomes true.

An agent's behaviour is specified by a plan library, denoted Π, that consists of a collection of plan clauses of the form e : c ← P, where e is an event, c is a context condition (a logical formula over the agent's beliefs that must be true in order for the plan to be applicable),⁴ and P is the plan body. The plan body is a program defined recursively as follows:

  P ::= act | +b | −b | ?φ | !e | P1; P2 | P1 ∥ P2 | Goal(φs, P1, φf)
      | P1 ▷ P2 | {ψ1 : P1, . . . , ψn : Pn} | nil

⁴ An omitted c is equivalent to true.

  Event:      Δ = {ψiθ : Piθ | (e′ : ψi ← Pi) ∈ Π ∧ θ = mgu(e, e′)}
              implies ⟨B, !e⟩ → ⟨B, Δ⟩

  Select:     ψi : Pi ∈ Δ and B ⊨ ψi
              imply ⟨B, Δ⟩ → ⟨B, Pi ▷ (Δ \ {ψi : Pi})⟩

  fail:       ⟨B, P1⟩ ↛
              implies ⟨B, P1 ▷ P2⟩ → ⟨B, P2⟩

  Sequence:   ⟨B, P1⟩ → ⟨B′, P′⟩
              implies ⟨B, P1; P2⟩ → ⟨B′, P′; P2⟩

  Parallel1:  ⟨B, P1⟩ → ⟨B′, P′⟩
              implies ⟨B, P1 ∥ P2⟩ → ⟨B′, P′ ∥ P2⟩

  Parallel2:  ⟨B, P2⟩ → ⟨B′, P′⟩
              implies ⟨B, P1 ∥ P2⟩ → ⟨B′, P1 ∥ P′⟩

Figure 1: Operational rules of CAN (⟨B, P⟩ ↛ indicates that no transition is possible from that configuration).

Here P1, . . . , Pn are themselves programs, act is a primitive action that is not further specified, and +b and −b are operations to add and delete beliefs. The belief base contains ground belief atoms in the form of first-order relations, but could be orthogonally extended to other logics.
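For readers who find a concrete encoding helpful, the user-language grammar can be mirrored as a small abstract syntax tree. This Python rendering is our own sketch and is not part of CAN itself:

```python
# Our own AST sketch of the CAN user language; each dataclass mirrors
# one production of the grammar above.
from dataclasses import dataclass
from typing import Union

@dataclass
class Act:                 # act: primitive action
    name: str

@dataclass
class AddBel:              # +b
    atom: str

@dataclass
class DelBel:              # -b
    atom: str

@dataclass
class Test:                # ?phi
    cond: str

@dataclass
class Post:                # !e
    event: str

@dataclass
class Seq:                 # P1; P2
    first: "Prog"
    second: "Prog"

@dataclass
class Par:                 # P1 || P2
    left: "Prog"
    right: "Prog"

@dataclass
class Goal:                # Goal(phi_s, P1, phi_f)
    success: str
    body: "Prog"
    failure: str

@dataclass
class Nil:                 # nil
    pass

Prog = Union[Act, AddBel, DelBel, Test, Post, Seq, Par, Goal, Nil]
```

For example, the AFC plan body of Section 2 could be written `Seq(Post("SCR"), Seq(Post("WFR"), Test("r = Y")))`.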
It is assumed that well-defined operations are provided to check whether a condition follows from a belief set (B ⊨ c), to add a belief to a belief set (B ∪ {b}), and to delete a belief from a belief set (B \ {b}). ?φ is a test for condition φ, and !e⁵ is an event⁶ that is posted from within the program. The compound constructs are sequencing (P1; P2), parallel execution (P1 ∥ P2), and goals (Goal(φs, P, φf)).

The above defines the user language. In addition, a set of auxiliary compound forms is used internally when assigning semantics to constructs. nil is the basic (terminating) program. When an event matches a set of plan clauses, these are collected into a set of guarded alternatives {c1 : P1, . . . , cn : Pn}. The other auxiliary compound form, ▷, is a choice operator dual to sequencing: P1 ▷ P2 executes P1 and then executes P2 only if P1 failed.

A summary of the operational semantics for CAN, in line with [23] and following some of the simplifications of [17], is as follows. A basic configuration S = ⟨B, G, Γ⟩ consists of the current belief base B of the agent, the current set of goals G being pursued (i.e., a set of formulae), and the current program Γ being executed (i.e., the current intention).

A transition S0 → S1 specifies that executing S0 for a single step yields configuration S1. S0 →* Sn is the usual reflexive transitive closure of →: Sn is the result of zero or more single-step transitions. A derivation rule consists of a (possibly empty) set of premises, which are transitions together with some auxiliary conditions (the numerator), and a single transition conclusion derivable from these premises (the denominator). Figure 1 gives some of the operational rules.
The Event rule handles task events by collecting all relevant plan clauses for the event in question: for each plan clause e′ : ψi ← Pi, if there is a most general unifier θ = mgu(e, e′) of e′ and the event e in question, then the rule constructs a guarded alternative ψiθ : Piθ.

⁵ Where it is obvious that e is an event we will sometimes omit the exclamation mark for readability.
⁶ Typically an achievement goal.

  Gs:  B ⊨ φs
       implies ⟨B, Goal(φs, P, φf)⟩ → ⟨B, true⟩

  Gf:  B ⊨ φf
       implies ⟨B, Goal(φs, P, φf)⟩ → ⟨B, fail⟩

  GI:  P′ = Goal(φs, P, φf),  P ≠ P1 ▷ P2,  B ⊭ φs ∨ φf
       imply ⟨B, P′⟩ → ⟨B, Goal(φs, P ▷ P, φf)⟩

  GS:  P = P1 ▷ P2,  B ⊭ φs ∨ φf,  ⟨B, P1⟩ → ⟨B′, P′⟩
       imply ⟨B, Goal(φs, P, φf)⟩ → ⟨B′, Goal(φs, P′ ▷ P2, φf)⟩

  GR:  P = P1 ▷ P2,  B ⊭ φs ∨ φf,  P1 ∈ {true, fail}
       imply ⟨B, Goal(φs, P, φf)⟩ → ⟨B, Goal(φs, P2 ▷ P2, φf)⟩

Figure 2: Rules for goals in CAN.

The Select rule then selects one applicable plan body from a set of (remaining) relevant alternatives: the program P ▷ Δ states that program P should be tried first, falling back to the remaining alternatives, Δ \ {ψ : P}, if necessary. This rule and the fail rule together are used for failure handling: if the current program Pi from a plan clause for a task fails, rule fail is applied first, and then, if possible, rule Select will choose another applicable alternative for the task if one exists. Rule Sequence handles sequencing of programs in the usual way.
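To see how the fail rule and the ▷ operator cooperate, here is a toy single-step evaluator. It is our own sketch: it covers only sequencing and the recovery operator, representing programs as tuples, and omits beliefs, events, and the remaining rules:

```python
# Toy small-step evaluator illustrating the Sequence and fail rules
# with the recovery construct P1 > P2 ("run P2 only if P1 failed").
# Programs are tuples; this sketches the rule shapes, not full CAN.

NIL, FAIL = ("nil",), ("fail",)

def step(p):
    kind = p[0]
    if kind == "act":                 # ("act", name, succeeds): one step
        return NIL if p[2] else FAIL
    if kind == "seq":                 # ("seq", P1, P2): step inside P1
        p1, p2 = p[1], p[2]
        if p1 == NIL:
            return p2                 # P1 done: continue with P2
        p1_next = step(p1)
        return FAIL if p1_next == FAIL else ("seq", p1_next, p2)
    if kind == "try":                 # ("try", P1, P2) models P1 > P2
        p1, p2 = p[1], p[2]
        if p1 == NIL:
            return NIL                # P1 succeeded: drop P2
        if p1 == FAIL:
            return p2                 # the fail rule: fall back to P2
        return ("try", step(p1), p2)
    raise ValueError(kind)

def run(p):
    """Step until the program terminates with NIL or FAIL."""
    while p not in (NIL, FAIL):
        p = step(p)
    return p
```

For instance, `run(("try", ("seq", ("act", "a", False), ("act", "b", True)), ("act", "recover", True)))` fails `a` inside the sequence, falls back to `recover` via the fail rule, and terminates successfully.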
Rules Parallel1 and Parallel2 define the possible interleavings when executing two programs in parallel.

Figure 2 gives simplified rules for dealing with goals, in line with those presented in [17]. The first rule states that a goal succeeds when φs becomes true; the second rule states that a goal fails when φf becomes true. The third rule, GI, initializes the execution of a goal-program by updating the goal base and setting the program in the goal to P ▷ P; the first P is to be executed, and the second P is used to keep track of the original program for the goal. The fourth rule, GS, executes a single step of the goal-program. The final rule, GR, restarts the original program (encoded as P2 of the pair P1 ▷ P2) whenever the current program is finished but the desired and still possible goal has not yet been achieved.

4.2 Aborting Intentions and Handling Failure

We next introduce the ability to specify handler programs, in the form of failure- and abort-methods, that deal with the clean-up required when a given program fails or is aborted, respectively. We do not associate failure- and abort-methods with plan clauses or with tasks (events); rather, we introduce a new program construct that specifies failure- and abort-methods for an arbitrary program. The FAb(P, PF, PA) construct executes the program P. Should P fail, it executes the failure handling program PF; should P need to be aborted, it executes the abort handling program PA. Thus, to add failure- and abort-methods PF and PA to a plan clause e : c ← P, we write e : c ← FAb(P, PF, PA).

With the introduction of the ability to abort programs, we modify the parallel construct to allow the failure of one branch to abort the other. We must take into consideration the possible existence of abort-methods in the aborted branch.
Similarly, with the Goal construct we can no longer completely abandon the program the goal contains as soon as the success or failure condition holds; we must now take into consideration the existence of any abort-methods applicable to the program.

We provide the semantics of an augmented agent language containing the FAb construct by defining a source transformation, similar to macro-expansion, that maps a plan library containing the FAb(P, PF, PA) construct into (almost) standard CAN. The one non-standard extension to CAN is a wait-until-condition construct, a simple modification that we explain below when we come to the translation of the Goal construct. First we describe the general nature of the source transformation, which proves to be quite simple for most of the language constructs, and then we concentrate on the three more complex cases: the FAb, parallel, and Goal constructs.

A key issue is that FAb constructs may be nested, either directly or indirectly. Let us call each instantiation of the construct at execution time a possible abort point (pap). Where these constructs are nested, it is important that before the failure- or abort-method of a parent pap is executed, the failure- or abort-methods of the children paps are executed first, as described earlier in Section 3. The need to coordinate the execution of the abort-methods of nested paps requires that there be some way to identify the parents and children of a particular pap. We achieve this as part of the source transformation by explicitly keeping track of the context of execution, as an extra parameter on events and an extra variable within each plan body.⁷

⁷ An alternative would be to use meta-level predicates that reflect the current state of the intention structure.

The source transformation replaces each plan clause of the form e : c ← P with a plan clause e(v) : c ← μv(P), where v is a free variable not previously present in the plan clause.
This variable is used to keep track of the context of execution.

The value of the context variable is a list of identifiers, where each new pap is represented by prepending a new identifier to the context. For example, if the identifiers are integers, the context of one pap may be represented by the list [42, 1], and the context introduced by a new pap may be represented by [52, 42, 1]. We will refer to paps by the context rather than by the new identifier added, e.g., by [52, 42, 1] rather than 52. This enables us to equate the ancestor relationship between paps with the list suffix relationship on the relevant contexts, i.e., v is an ancestor of v′ if and only if v is a suffix of v′.

For most CAN constructs, the context variable is unused or passed unchanged:

  μv(act) = act
  μv(+b) = +b
  μv(−b) = −b
  μv(nil) = nil
  μv(!e) = !e(v)
  μv(P1; P2) = μv(P1); μv(P2)
  μv(P1 ▷ P2) = μv(P1) ▷ μv(P2)
  μv({ψ1 : P1, . . . , ψn : Pn}) = {ψ1 : μv(P1), . . . , ψn : μv(Pn)}

It remains to specify the transformation μv(·) in three cases: the FAb, parallel, and Goal constructs. These are more complex in that the transformed source needs to create a new pap identifier dynamically, for use as a new context within the construct, and to keep track of when the pap is active (i.e., currently in execution) by adding and removing beliefs about the context.

Let us introduce the primitive action prependID(v, v′) that creates a new pap identifier and prepends it to the list v, giving the list v′. We also introduce the following predicates:

• a(v): the pap v is currently active.
• abort(v): the pap v should be aborted (after aborting all of its descendants).
• f(v): the program of pap v has failed.
• ancestorof(v, v′) ≡ v = v′ ∨ ancestorof(v, tail(v′)): the pap v is an ancestor of the pap v′.
• nac(v) ≡ ¬∃v′.(a(v′) ∧ ancestorof(v, v′) ∧ v ≠ v′): v has no active children.
• sa(v) ≡ ∃v′.(abort(v′) ∧ ancestorof(v′, v)): we should abort v, i.e., abort is true of v or some ancestor; however, we need to wait until no children of v are active.
• san(v) ≡ sa(v) ∧ nac(v): we should abort v now, i.e., we should abort v and v has no active children.

First let us consider the case of the FAb construct. The idea is that, whenever a new pap occurs, the prependID(v, v′) action is used to create a new pap identifier list v′ from the existing list v. We then add the belief that v′ is the active context, i.e., +a(v′), and start processing the program within the pap using v′ instead of v as the context. We need to make sure that we retract the belief that v′ is active at the end, i.e., −a(v′).

We use the Goal construct to allow us to drop the execution of a program within a pap v′ when it is necessary to abort. While executing the program P, we know that we need to drop P and invoke its abort-method if some ancestor of P has been told to abort. This is represented by the predicate sa(v′) being true. However, we need to make sure that we do this only after every child pap has had the chance to invoke its abort-method and all of these abort-methods have completed: if we drop the program too soon, then the execution of the abort-methods of the children will also be dropped. Therefore, the condition we actually use in the Goal construct to test when to drop the program is san(v′).
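These definitions translate almost directly into executable form. The sketch below is our own, using tuples of integers for contexts (so [52, 42, 1] becomes `(52, 42, 1)`) and Python sets for the collections of a and abort belief atoms:

```python
# Sketch of the context-tracking predicates.  Contexts are tuples of
# identifiers; 'active' and 'aborting' stand in for the sets of a(v)
# and abort(v) belief atoms in the agent's belief base.

def ancestor_of(v, v2):
    """v is an ancestor of v2 iff v is a (possibly equal) suffix of v2."""
    return len(v) <= len(v2) and v2[len(v2) - len(v):] == v

def nac(v, active):
    """No active children: no active pap strictly below v."""
    return not any(ancestor_of(v, u) and u != v for u in active)

def sa(v, aborting):
    """Should abort: v or one of its ancestors has been told to abort."""
    return any(ancestor_of(u, v) for u in aborting)

def san(v, active, aborting):
    """Should abort now: should abort, and no children still active."""
    return sa(v, aborting) and nac(v, active)
```

With `active = {(42, 1), (52, 42, 1)}` and `abort((42, 1))` believed, `san` holds of the child `(52, 42, 1)` but not of `(42, 1)` until the child retracts its a fact, which is exactly the bottom-up ordering of abort-methods described above.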
This condition relies on the fact that, as the children paps complete, they remove the relevant a facts.

Our use of the Goal construct is for its ability to drop the execution of a program when conditions are met. To set aside its repeat-execution-until-a-condition-is-met aspect, we must ensure that the success or failure condition of the construct is satisfied once the execution of the program succeeds or fails. We make sure of this by retracting the fact a(v′) on success and asserting the fact f(v′) on failure, and by having the appropriate success and failure conditions on the Goal. Hence, if the Goal construct fails, then the program either was aborted or it failed. We invoke the relevant failure- or abort-method, retract the a(v′) fact, and then fail.

Putting all this together, we formally define μv(FAb(P, PF, PA)) to be the following, where v′ is a new variable distinct from any other in the agent's plan library:

  prependID(v, v′); +a(v′);
  Goal( ¬a(v′), ( (μv′(P); −a(v′)) ▷ +f(v′) ), san(v′) ∨ f(v′) )
  ▷ ( ( (?sa(v′); μv(PA)) ▷ μv(PF) ); −a(v′); ?false )

Second, we must transform the parallel operator to ensure that the failure of one branch safely aborts the other. Here we construct two new contexts, v′ and v′′, from the existing context v. If one branch fails, it must abort the other branch. At the end, if either branch was aborted, then we must fail. Let v′ and v′′ be new variables distinct from any other in the agent's plan library.
We define μv(P1 ∥ P2) to be:

  prependID(v, v′); prependID(v, v′′); +a(v′); +a(v′′);
  ( ( Goal( ¬a(v′), ( (μv′(P1); −a(v′)) ▷ +f(v′) ), san(v′) ∨ f(v′) )
      ▷ (+abort(v′′); −a(v′)) )
    ∥
    ( Goal( ¬a(v′′), ( (μv′′(P2); −a(v′′)) ▷ +f(v′′) ), san(v′′) ∨ f(v′′) )
      ▷ (+abort(v′); −a(v′′)) ) );
  ?(¬abort(v′) ∧ ¬abort(v′′))

Finally, we need to modify occurrences of the Goal construct in two ways: first, to make sure that the abort handling methods are not bypassed when the success or failure conditions are satisfied, and second, to trigger the aborting of the contained program when either the success or failure conditions are satisfied.

To transform the Goal construct we need to extend standard CAN with a wait-until-condition construct. The construct φ : P does not execute P until φ becomes true. We augment the CAN language with the following rules for the guard operator ':':

  :true    B ⊨ φ  implies  ⟨B, G, (φ : P)⟩ → ⟨B, G, P⟩
  :false   B ⊭ φ  implies  ⟨B, G, (φ : P)⟩ → ⟨B, G, (φ : P)⟩

In order to specify μv(Goal(φs, P, φf)), we generate a new pap and execute the program within the Goal construct in this new context. We must ensure that the belief a(v′) is removed whether the Goal succeeds or fails. We shift the success and failure conditions of the Goal construct into a parallel branch using the wait-until-condition construct, and modify the Goal to use the should-abort-now condition san(v′) as its success condition. The waiting branch will trigger the abort of the program should either the success or the failure condition be met. To avoid any problems with terminating the wait condition, we also end the wait if the pap is no longer active. Let v′ be a new variable distinct from any other in the agent's plan library.
We define \u03bcv(Goal\n`\n\u03c6s, P, \u03c6f\n\u00b4\n) to be:\nprependID(v, v ); +a(v );\n( Goal ( san(v ), \u03bcv (P), false) ; \u2212a(v ); ?\u03c6s )\n\u03c6s \u2228 \u03c6f \u2228 \u00aca(v ) : +abort(v ) )\nThe program P will be repeatedly executed until san(v )\nbecomes true. There are two ways this can occur. First, if either the\nsuccess condition \u03c6s or the failure condition \u03c6f becomes true, then\nthe second branch of the parallel construct executes. This causes\nabort(v ) to become true, and, after the descendant paps\"\nabortmethods are executed, san(v ) becomes true. In this case, P is\nnow dropped, the a(v ) is removed, and the entire construct\nsucceeds or fails based on \u03c6s. The second way for san(v ) to become\ntrue is if v or one of its ancestors is aborted. In this case, once the\ndescendant paps\" abort-methods are executed, san(v ) becomes\ntrue, P is dropped, the a(v ) belief is removed (allowing the\nsecond parallel branch to execute, vacuously instructing v to abort),\nand the first parallel branch fails (assuming \u03c6s is false).\n4.3 Worked Example\nLet us look at translation of the IJCAI submission example of\nSection 2. We will express tasks by events, for example, the task\nAllocate a Paper Number we express as the event APN. Let the\noutput of the Apply For Clearance task be Y or N, indicating the\napproval or not of Alice\"s manager, respectively. Then we have\n(at least) the following two plan clauses in CAN, for the Support\nMeeting Submission and Apply For Clearance tasks, respectively:\nSMS(m) : isconf(m) \u2190\nFAb(!APN; !TWA; (!AFC !TWP); !HPS, !CPN, !CPN)\nAFC : true \u2190 FAb(!SCR; !WFR(r); ?r = Y, nil, !CCR)\nNote that Support Meeting Submission has a parameter m, the\nmeeting of interest (IJCAI, in our example), while Apply For\nClearance has no parameters.\nThe Sixth Intl. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 13\nLet us look first at the translation of the second plan clause, for\nAFC, since it is the simpler of the two. Let v and v denote\nnew variables. Then we have as the translated plan clause:\nAFC(v ) : true \u2190\nprependID(v , v ); +a(v );\nGoal ( \u00aca(v ),\n(!SCR(v ); !WFR(r, v ); ?r = Y; \u2212a(v ) +f(v )),\nsan(v ) \u2228 f(v ) )\n(((?sa(v ); !CCR(v )) nil); \u2212a(v ); ?false)\nWe can see that an extra context parameter has been added to\neach task and that the old plan body now appears inside a Goal\nconstruct. Should the old plan body succeed, belief a(v ) is\nretracted, causing the Goal to succeed. If the old plan body fails, or if\nthe task is to be aborted, the Goal construct fails. This is followed\nby the execution of CCR (in the case of an abort), the retraction of\na(v ), and failure.\nThe translation of the first plan clause, for SMS, is more\ncomplex, because of the parallel construct that introduces nested paps:\nSMS(m, v) : isconf(m) \u2190\nprependID(v, v ); +a(v );\nGoal ( \u00aca(v ),\n((!APN(v );\n!TWA(v );\nprependID(v , v ); prependID(v , v ); +a(v ); +a(v );\n( Goal ( \u00aca(v ),\n(!AFC(v ); \u2212a(v ) +f(v )),\nsan(v ) \u2228 f(v ) )\n(+abort(v ); \u2212a(v ))\nGoal ( \u00aca(v ),\n(!TWP(v ); \u2212a(v ) +f(v )),\nsan(v ) \u2228 f(v ) )\n(+abort(v ); \u2212a(v ))\n) ;\n?\u00acabort(v ) \u2227 \u00acabort(v );\n!HPS(v );\n\u2212a(v ))\n+f(v )),\nsan(v ) \u2228 f(v ) )\n(((?sa(v ); !CPN(v)) !CPN(v)); \u2212a(v ); ?false)\nHere we can see that if the task !TWP(v ) fails then f(v )\nwill be asserted, failing the Goal construct that contains it, and\nleading to abort(v ) being asserted. If the !WFR(r, v ) task\nin the expansion of !AFC(v ) is still executing and has no\nactive child paps, then sa(v ) and sa(v ) will be true; however,\nonly san(v ) and not san(v ) will be true. 
This set of conditions will cause the Goal construct in the first plan clause to fail, dropping execution of !WFR(r, v ). The task !CCR(v ) will then be executed. Once this task completes, belief a(v ) is retracted, causing san(v ) to become true, which in turn causes the first Goal construct of the second plan clause to fail.

While the translated plan clauses appear complicated, observe that the translation from the initial plan clauses is entirely automated, according to the rules set out in Section 4.2. The translated plan clauses, with the semantics of CAN augmented by our wait-until-condition construct, thus specify the operation of the agent to handle both failure and aborting for the example.

5. RELATED WORK
Plan failure is handled in the extended version of AgentSpeak found in the Jason system [6]. Failure clean-up plans are triggered from goal deletion events −!g. Such plans, similar to our failure methods, are designed for the agent to effect state changes (act to undo its earlier actions) prior to possibly attempting another plan to achieve the failed goal g.

Given Jason's constructs for dropping a goal with an indication of whether or not to try an alternate plan for it, Hübner et al. [6] provide an informal description of how a Jason agent modifies its intention structure when a goal failure event occurs. In a goal deletion plan, the programmer can specify any undo actions and whether to attempt the goal again. If no goal deletion plan is provided, Jason's default behaviour is to not reattempt the goal. Failure handling is applied only to plans triggered by the addition of an achievement or test goal; in particular, goal deletion events are not posted for failure of a goal deletion plan. Further, the informal semantics of [6] do not consider parallel sub-goals (i.e., the CAN parallel construct), since such execution is not part of Jason's language.

The implementation of Hübner et al. [6] requires Jason's internal actions.
A requirement for implementing our approach is a reflective capability in the BDI agent implementation. Suitable implementations of the BDI formalism are JACK [2], Jadex [14], and SPARK [9]. All three allow meta-level methods that are cued by meta events such as goal adoption or plan failure, and offer introspective capabilities over goal and intention states.

Such meta-level facilities are also required by the approach of Unruh et al. [21], who define goal-based semantic compensation for an agent. Failure-handling goals are invoked according to failure-handling strategy rules, by a dedicated agent Failure Handling Component (FHC) that tracks task execution. These goals are specified by the agent programmer and attached to tasks, much like our FAb(P, PF , PA) construct associates failure and abort methods with a plan P. Note, however, that in contrast to both [6] and our semantics, [21] attach the failure-handling knowledge at the goal, not plan, level. Their failure-handling goals may consist of stabilization goals that perform localized, immediate clean-up to restore the agent's state to a known, stable state, and compensation goals that perform undo actions. Compensation goals are triggered on aborting a goal, and so not necessarily on goal failure (i.e., not if the FHC directs the agent to retry the failed goal and the retry is successful).

The FHC approach is defined at the goal level in order to facilitate abstract specification of failure-handling knowledge; the FHC decides when to address a failure and what to do (i.e., what failure-handling goals to invoke), separating this knowledge from the how of implementing corrective actions (i.e., what plan to execute to meet the adopted failure-handling goal). This contrasts with simplistic plan-level failure handling in which the what and how are intermingled in domain task knowledge.
While our approach is defined at the plan level, our extended BDI semantics provides for the separation of execution and failure handling. Further, the FHC explicitly maintains data structures to track agent execution. We leverage the existing execution structures and self-reflective ability of a BDI agent to accomplish both aborting and failure handling without additional overhead. FHC's failure-handling strategy rules (e.g., whether to retry a failed goal) are replaced by instructions in our PF and PA plans, together with meta-level default failure handlers according to the agent's nature (e.g., blindly committed). The FHC approach is independent of the architecture of the agent itself, in contrast to our work that is dedicated to the BDI formalism (although not tied to any one agent system). Thus no formal semantics are developed in [21]; the FHC's operation is given as a state-based protocol. This approach, together with state checkpointing, is used for multi-agent systems in [22]. The resulting architecture embeds their failure handling approach within a pair processing architecture for agent crash recovery.

Other work on multi-agent exception handling includes AOEX's distributed exception handling agents [5], and the similar sentinels of [8]. In both cases, failure-handling logic and knowledge are decoupled from the agents; by contrast, while separating exception handling from domain-specific knowledge, Unruh et al.'s FHC and our approach both retain failure-handling logic within an agent.

6. CONCLUSION AND FUTURE WORK
The tasks and plans of an agent may not successfully reach completion, either by the choice of the agent to abort them (perhaps at the request of another agent to do so), or by unbidden factors that lead to failure.
In this paper we have presented a procedure-based\napproach that incorporates aborting tasks and plans into the\ndeliberation cycle of a BDI-style agent, thus providing a unified approach\nto failure and abort. Our primary contribution is an analysis of the\nrequirements on the operation of the agent for aborting tasks and\nplans, and a corresponding operational semantics for aborting in\nthe abstract agent language CAN.\nWe are planning to implement an instance of our approach in the\nSPARK agent system [9]; in particular, the work of this paper will\nbe the basis for SPARK\"s abort handling mechanism. We are also\ndeveloping an analysis tool for our extended version of CAN as a\nbasis for experimentation.\nAn intelligent agent will not only gracefully handle unsuccessful\ntasks and plans, but also will deliberate over its cognitive attitudes\nto decide its next course of action. We have assumed the default\nbehaviour of a BDI-style agent, according to its nature: for instance,\nto retry alternatives to a failed plan until one succeeds or until no\nalternative plans remain (in which case to fail the task). Future\nwork is to place our approach in service of more dynamic agent\nreasoning, such as the introspection that an agent capable of\nreasoning over task interaction effects and resource requirements can\naccomplish [19, 12].\nRelated to this is determining the cost of aborting a task or plan,\nand using this as an input to the deliberation process. This would\nin particular influence the commitment the agent has towards a\nparticular task: the higher the cost, the greater the commitment.\nOur assumption that abort-methods do not fail, as discussed above,\nis a pragmatic one. However, this is an issue worthy of further\nexploration, either to develop weaker assumptions that are also\npractical, or to analyze conditions under which our assumption is\nrealistic. A further item of interest is extending our approach to failure\nand abort to maintenance goals [1]. 
For such goals a different operational semantics for abort is necessary than for achievement goals, to match the difference in semantics of the goals themselves.

Acknowledgements
We thank Lin Padgham and the anonymous reviewers for their comments. The first author acknowledges the support of the Australian Research Council and Agent Oriented Software under grant LP0453486. The work of the two authors at SRI International was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA or the Department of Interior-National Business Center.

7. REFERENCES
[1] L. Braubach, A. Pokahr, D. Moldt, and W. Lamersdorf. Goal representation for BDI agent systems. In Proc. of Second Intl. Workshop on Programming Multi-Agent Systems (ProMAS'04), 2004.
[2] P. Busetta, R. Rönnquist, A. Hodgson, and A. Lucas. JACK intelligent agents - components for intelligent agents in Java. AgentLink News, Issue 2, 1999.
[3] M. G. Chessell, C. Vines, D. Butler, C. M. Ferreira, and P. Henderson. Extending the concept of transaction compensation. IBM Systems Journal, 41(4), 2002.
[4] M. Dastani, M. B. van Riemsdijk, and J.-J. C. Meyer. Goal types in agent programming. In Proc. of AAMAS'06, 2006.
[5] S. Entwisle, S. Loke, S. Krishnaswamy, and E. Kendall. AOEX: An agent-based exception handling framework for building reliable, distributed, open software systems. In Proc. of Seventh Joint Conf. on Knowledge-Based Software Engineering, 2006.
[6] J. F. Hübner, R. H. Bordini, and M. Wooldridge. Programming declarative goals using plan patterns. In Proc. of 4th Intl. Workshop on Declarative Agent Languages and Technologies, 2006.
[7] D. Kinny. The Psi calculus: an algebraic agent language. In Proc. of ATAL'01, 2001.
[8] M. Klein, J. A.
Rodríguez-Aguilar, and C. Dellarocas. Using domain-independent exception handling services to enable robust open multi-agent systems: The case of agent death. Autonomous Agents and Multi-Agent Systems, 7(1-2):179-189, 2003.
[9] D. Morley and K. Myers. The SPARK agent framework. In Proc. of AAMAS'04, 2004.
[10] D. Morley, K. L. Myers, and N. Yorke-Smith. Continuous refinement of agent resource estimates. In Proc. of AAMAS'06, 2006.
[11] K. Myers, P. Berry, J. Blythe, K. Conley, M. Gervasio, D. McGuinness, D. Morley, A. Pfeffer, M. Pollack, and M. Tambe. An intelligent personal assistant for task and time management. AI Magazine, 28, 2007. To appear.
[12] K. L. Myers and N. Yorke-Smith. A cognitive framework for delegation to an assistive user agent. In Proc. of AAAI 2005 Fall Symposium on Mixed-Initiative Problem-Solving Assistants, 2005.
[13] L. Padgham and M. Winikoff. Developing Intelligent Agent Systems: A Practical Guide. John Wiley and Sons, 2004.
[14] A. Pokahr, L. Braubach, and W. Lamersdorf. Jadex: A BDI reasoning engine. In R. Bordini, M. Dastani, J. Dix, and A. E. F. Seghrouchni, editors, Multi-Agent Programming. Springer, 2005.
[15] A. S. Rao. AgentSpeak(L): BDI agents speak out in a logical computable language. In Proc. of Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, 1996.
[16] A. S. Rao and M. P. Georgeff. An abstract architecture for rational agents. In Proc. of KR'92, 1992.
[17] S. Sardiña, L. de Silva, and L. Padgham. Hierarchical planning in BDI agent programming languages: a formal approach. In Proc. of AAMAS'06, 2006.
[18] S. Sardiña and L. Padgham. Goals in the context of BDI plan failure and planning. In Proc. of AAMAS'07, 2007.
[19] J. Thangarajah, L. Padgham, and M. Winikoff. Detecting and exploiting positive goal interaction in intelligent agents. In Proc. of AAMAS'03, 2003.
[20] J. Thangarajah, M. Winikoff, L. Padgham, and K. Fischer.
Avoiding resource conflicts in intelligent agents. In Proc. of ECAI-02, 2002.
[21] A. Unruh, J. Bailey, and K. Ramamohanarao. A framework for goal-based semantic compensation in agent systems. In Proc. of First Intl. Workshop on Safety and Security in Multi-Agent Systems, 2004.
[22] A. Unruh, H. Harjadi, J. Bailey, and K. Ramamohanarao. Semantic-compensation-based recovery management in multi-agent systems. In Proc. of Second IEEE Symposium on Multi-Agent Security and Survivability (IEEE MAS&S'05), 2005.
[23] M. Winikoff, L. Padgham, J. Harland, and J. Thangarajah. Declarative and procedural goals in intelligent agent systems. In Proc. of KR'02, 2002.
SMILE: Sound Multi-agent Incremental LEarning ;-)*

Abstract: This article deals with the problem of collaborative learning in a multi-agent system. Here each agent can incrementally update its beliefs B (the concept representation) so that they are kept consistent with the whole set of information K (the examples) that it has received from the environment or from other agents. We extend this notion of consistency (or soundness) to the whole MAS and discuss how to ensure that, at any moment, the same consistent concept representation is present in each agent. The corresponding protocol is applied to supervised concept learning. The resulting method, SMILE (standing for Sound Multi-agent Incremental LEarning), is described and experimented with here. Surprisingly, some difficult boolean formulas are better learned, given the same learning set, by a multi-agent system than by a single agent.

* The primary author of this paper is a student.

1. INTRODUCTION
This article deals with the problem of collaborative concept learning in a multi-agent system. [6] introduces a characterisation of learning in multi-agent systems according to the level of awareness of the agents. At level 1, agents learn in the system without taking into account the presence of other agents, except through the modifications their actions bring upon the environment. Level 2 implies direct interaction between the agents, as they can exchange messages to improve their learning. Level 3 would require agents to take into account the competencies of other agents, and to be able to learn from observing the other agents' behaviour (while considering them as independent entities and not as an undetermined part of the environment, as at level 1).
We focus\nin this paper on level 2, studying direct interaction between\nagents involved in a learning process.\nEach agent is assumed to be able to learn incrementally from\nthe data he receives, meaning that each agent can update\nhis belief set B to keep it consistent with the whole set of\ninformation K that he has received from the environment\nor from other agents. In such a case, we will say that he is\na-consistent. Here, the belief set B represents hypothetical\nknowledge that can therefore be revised, whereas the set of\ninformation K represents certain knowledge, consisting of\nnon revisable observations and facts. Moreover, we suppose\nthat at least a part Bc of the beliefs of each agent is\ncommon to all agents and must stay that way. Therefore, an\nupdate of this common set Bc by agent r must provoke an\nupdate of Bc for the whole community of agents. It leads\nus to define what is the mas-consistency of an agent with\nrespect to the community. The update process of the\ncommunity beliefs when one of its members gets new information\ncan then be defined as the consistency maintenance process\nensuring that every agent in the community will stay\nmasconsistent. This mas-consistency maintenance process of an\nagent getting new information gives him the role of a learner\nand implies communication with other agents acting as\ncritics. However, agents are not specialised and can in turn be\nlearners or critics, none of them being kept to a specific role.\nPieces of information are distributed among the agents, but\ncan be redundant. There is no central memory.\nThe work described here has its origin in a former work\nconcerning learning in an intentional multi-agent system using\na BDI formalism [6]. In that work, agents had plans, each\nof them being associated with a context defining in which\nconditions it can be triggered. Plans (each of them having\nits own context) were common to the whole set of agents\nin the community. 
Agents had to adapt their plan contexts\ndepending on the failure or success of executed plans, using\na learning mechanism and asking other agents for examples\n(plans successes or failures). However this work lacked a\ncollective learning protocol enabling a real autonomy of the\nmulti-agent system. The study of such a protocol is the\nobject of the present paper.\nIn section 2 we formally define the mas-consistency of an\nupdate mechanism for the whole MAS and we propose a\ngeneric update mechanism proved to be mas consistent. In\nsection 3 we describe SMILE, an incremental multi agent\nconcept learner applying our mas consistent update\nmechanism to collaborative concept learning. Section 4 describes\nvarious experiments on SMILE and discusses various issues\nincluding how the accuracy and the simplicity of the current\nhypothesis vary when comparing single agent learning and\nmas learning. In section 5 we briefly present some related\nworks and then conclude in section 6 by discussing further\ninvestigations on mas consistent learning.\n2. FORMAL MODEL\n2.1 Definitions and framework\nIn this section, we present a general formulation of\ncollective incremental learning in a cognitive multi agent system.\nWe represent a MAS as a set of agents r1, ..., rn. Each\nagent ri has a belief set Bi consisting of all the revisable\nknowledge he has. Part of these knowledges must be shared\nwith other agents. The part of Bi that is common to all\nagents is denoted as BC . This common part provokes a\ndependency between the agents. If an agent ri updates his\nbelief set Bi to Bi, changing in the process BC into BC , all\nother agents rk must then update their belief set Bk to Bk\nso that BC \u2286 Bk.\nMoreover, each agent ri has stored some certain information\nKi. We suppose that some consistency property Cons(Bi, Ki)\ncan be verified by the agent itself between its beliefs Bi and\nits information Ki. 
As said before, Bi represents knowledge\nthat might be revised whereas Ki represents observed facts,\ntaken as being true, and which can possibly contradict Bi.\nDefinition 1. a-consistency of an agent\nAn agent ri is a-consistent iff Cons(Bi, Ki) is true.\nExample 1. Agent r1 has a set of plans which are in the\ncommon part BC of B1. Each plan P has a triggering\ncontext d(P) (which acts as a pre-condition) and a body. Some\npiece of information k could be plan P, triggered in\nsituation s, has failed in spite of s being an instance of d(P).\nIf this piece of information is added to K1, then agent r1 is\nnot a-consistent anymore: Cons(B1, K1 \u222a k) is false.\nWe also want to define some notion of consistency for the\nwhole MAS depending on the belief and information sets\nof its constituting elements. We will first define the\nconsistency of an agent ri with respect to its belief set Bi and its\nown information set Ki together with all information sets\nK1...Kn from the other agents of the MAS. We will simply\ndo that by considering what would be the a-consistency of\nthe agent if he has the information of all the other agents.\nWe call this notion the mas-consistency:\nDefinition 2. mas-consistency of an agent\nAn agent ri is mas-consistent iff Cons(Bi, Ki \u222a K) is true,\nwhere K = \u222aj\u2208{1,..,n}\u2212{i}Kj\n1\nis the set of all information\nfrom other agents of the MAS.\n1\nWe will note this \u222a Kj when the context is similar.\nExample 2. Using the previous example, suppose that the\npiece of information k is included in the information K2 of\nagent r2. As long as the piece of information is not\ntransmitted to r1, and so added to K1 , r1 remains a-consistent.\nHowever, r1 is not mas-consistent as k is in the set K of all\ninformation of the MAS.\nThe global consistency of the MAS is then simply the\nmas-consistency of all its agents.\nDefinition 3. 
Consistency of a MAS\nA MAS r1,...,rn is consistent iff all its agents ri are\nmasconsistent.\nWe now define the required properties for a revision\nmechanism M updating an agent ri when it gets a piece of\ninformation k. In the following, we will suppose that:\n\u2022 Update is always possible, that is, an agent can\nalways modify its belief set Bi in order to regain its\na-consistency. We will say that each agent is locally\nefficient.\n\u2022 Considering two sets of information Cons(Bi, K1) and\nCons(Bi, K2), we also have Cons(Bi, K1 \u222a K2). That\nis, a-consistency of the agents is additive.\n\u2022 If a piece of information k concerning the common\nset BC is consistent with an agent, it is consistent\nwith all agents: for all pair of agents (ri,rj) such that\nCons(Bi, Ki) and Cons(Bj, Kj) are true, we have,\nfor all piece of information k: Cons(Bi, Ki \u222a k) iff\nCons(Bj, Kj \u222a k). In such a case, we will say that\nthe MAS is coherent.\nThis last condition simply means that the common belief\nset BC is independent of the possible differences between\nthe belief sets Bi of each agent ri. In the simplest case,\nB1 = ... = Bn = BC .\nM will also be viewed as an incremental learning\nmechanism and represented as an application changing Bi in Bi.\nIn the following, we shall note ri(Bi, Ki) for ri when it is\nuseful.\nDefinition 4. a-consistency of a revision\nAn update mechanism M is a-consistent iff for any agent ri\nand any piece of information k reaching ri, the a-consistency\nof this agent is preserved. In other words, iff:\nri(Bi, Ki) a-consistent \u21d2 ri(Bi, Ki) a-consistent,\nwhere Bi = M(Bi) and Ki = Ki \u222a k is the set of all\ninformation from other agents of the MAS.\nIn the same way, we define the mas-consistency of a\nrevision mechanism as the a-consistency of this mechanism\nshould the agents dispose of all information in the MAS. In\nthe following, we shall note, if needed, ri(Bi, Ki, K) for the\nagent ri in MAS r1 . . . 
rn.

Definition 5. mas-consistency of a revision
An update mechanism Ms is mas-consistent iff for any agent ri and any piece of information k reaching ri, the mas-consistency of this agent is preserved. In other words, iff:
ri(Bi, Ki, K) mas-consistent ⇒ ri(B′i, K′i, K) mas-consistent,
where B′i = Ms(Bi), K′i = Ki ∪ k, and K = ∪Kj is the set of all information from the MAS.

At last, when a mas-consistent mechanism is applied by an agent getting a new piece of information, a desirable side effect of the mechanism should be that all other agents remain mas-consistent after any modification of the common part BC; that is, the MAS itself should become consistent again. This property is defined as follows:

Definition 6. Strong mas-consistency of a revision
An update mechanism Ms is strongly mas-consistent iff
- Ms is mas-consistent, and
- the application of Ms by an agent preserves the consistency of the MAS.

2.2 A strongly mas-consistent update mechanism
The general idea is that, since information is distributed among all the agents of the MAS, there must be some interaction between the learner agent and the other agents in a strongly mas-consistent update mechanism Ms. In order to ensure its mas-consistency, Ms will consist of reiterated applications by the learner agent ri of an internal a-consistent mechanism M, followed by some interactions between ri and the other agents, until ri regains its mas-consistency.
We describe below such a mechanism, first with\na description of an interaction, then an iteration, and finally\na statement of the termination condition of the mechanism.\nThe mechanism is triggered by an agent ri upon receipt\nof a piece of information k disrupting the mas-consistency.\nWe shall note M(Bi) the belief set of the learner agent\nri after an update, BC the common part modified by ri,\nand Bj the belief set of another agent rj induced by the\nmodification of its common part BC in BC .\nAn interaction I(ri, rj) between the learner agent ri and\nanother agent rj, acting as critic is constituted of the\nfollowing steps:\n\u2022 agent ri sends the update BC of the common part of\nits beliefs. Having applied its update mechanism, ri is\na-consistent.\n\u2022 agent rj checks the modification Bj of its beliefs\ninduced by the update BC . If this modification preserve\nits a-consistency, rj adopts this modification.\n\u2022 agent rj sends either an acceptation of BC or a denial\nalong with one (or more) piece(s) of information k\nsuch that Cons(Bj, k ) is false.\nAn iteration of Ms will then be composed of:\n\u2022 the reception by the learner agent ri of a piece of\ninformation and the update M(Bi) restoring its\naconsistency\n\u2022 a set of interactions I(ri, rj) (in which several critic\nagents can possibly participate). If at least one piece\nof information k is transmitted to ri, the addition of\nk will necessarily make ri a-inconsistent and a new\niteration will then occur.\nThis mechanism Ms ends when no agent can provide such\na piece of information k . When it is the case, the\nmasconsistency of the learner agent ri is restored.\nProposition 1. Let r1,...,rn be a consistent MAS in which\nagent ri receives a piece of information k breaking its\naconsistency, and M an a-consistent internal update\nmechanism. The update mechanism Ms described above is strongly\nmas-consistent.\nProof. The proof directly derives from the mechanism\ndescription. 
This mechanism ensures that each time an agent receives an event, its mas-consistency will be restored. As the other agents all adopt the final update B′C, they are all mas-consistent, and the MAS is consistent. Therefore Ms is a strongly consistent update mechanism.

In the mechanism Ms described above, the learner agent is the only one that receives and memorizes information during the mechanism execution. This ensures that Ms terminates. The pieces of information transmitted by other agents and memorized by the learner agent are redundant, as they are already present in the MAS, more precisely in the memory of the critic agents that transmitted them.

Note that the mechanism Ms proposed here does not explicitly specify the order or the scope of the interactions. We will consider in the following that the modification proposal B′C is sent sequentially to the different agents (synchronous mechanism). Moreover, the response of a critic agent will only contain one piece of information inconsistent with the proposed modification. We will say that the response of the agent is minimal. This mechanism Ms, being synchronous with minimal response, minimizes the amount of information transmitted by the agents. We will now illustrate it in the case of multi-agent concept learning.

3. SOUND MULTI-AGENT INCREMENTAL LEARNING
3.1 The learning task
We experiment with the mechanism proposed above in the case of incremental MAS concept learning. We consider here a hypothesis language in which a hypothesis is a disjunction of terms. Each term is a conjunction of atoms from a set A. An example is represented by a tag + or − and a description² composed of a subset of atoms e ⊆ A. A term covers an example if its constituting atoms are included in the example. A hypothesis covers an example if one of its terms covers it.

This representation will be used below for learning boolean formulae.
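To make the representation concrete, here is a minimal sketch (our own encoding, not code from the paper) of terms, examples, and the two coverage tests, with atoms as strings and negated atoms written in a not_x style:

```python
# Illustrative sketch of the hypothesis language of Section 3.1.
# An example or a term is a set of atoms; a hypothesis is a list of
# terms, read as a disjunction of conjunctions.

def term_covers(term, example):
    """A term covers an example if all its atoms appear in the example."""
    return term <= example

def hypothesis_covers(hypothesis, example):
    """A hypothesis covers an example if one of its terms covers it."""
    return any(term_covers(t, example) for t in hypothesis)

# f = (a AND b) OR (b AND not-c), with the negated literal as atom "not_c"
H = [{"a", "b"}, {"b", "not_c"}]
e_pos = {"not_a", "b", "not_c"}   # a model of f
e_neg = {"a", "not_b", "c"}       # not a model of f
```

Here `hypothesis_covers(H, e_pos)` holds via the second term, while neither term is included in `e_neg`.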
Negative literals are here represented by additional atoms, like not-a. The boolean formula f = (a ∧ b) ∨ (b ∧ ¬c) will then be written (a ∧ b) ∨ (b ∧ not-c). A positive example of f, like {not-a, b, not-c}, represents a model of f.

3.2 Incremental learning process
The learning process is an update mechanism that, given a current hypothesis H, a memory E = E+ ∪ E− filled with the previously received examples, and a new positive or negative example e, produces a new updated hypothesis. Before this update, the given hypothesis is complete, meaning that it covers all positive examples of E+, and coherent, meaning that it does not cover any negative example of E−. After the update, the new hypothesis must be complete and coherent with the new memory state E ∪ {e}.

We describe below our single-agent update mechanism, inspired from a previous work on incremental learning [7]. In the following, a hypothesis H for the target formula f is a list of terms h, each of them being a conjunction of atoms. H is coherent if all terms h are coherent, and H is complete if each element of E+ is covered by at least one term h of H. Each term is by construction the lgg (least general generalization) of a subset of positive instances {e1, ..., en} [5], that is, the most specific term covering {e1, ..., en}. The lgg operator is defined by considering examples as terms, so we denote as lgg(e) the most specific term that covers e, and as lgg(h, e) the most specific term which is more general than h and that covers e. Restricting the terms to lggs is the basis of a lot of Bottom-Up learning algorithms (for instance [5]).

² When no confusion is possible, the word "example" will be used to refer to the pair (tag, description) as well as to the description alone.
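In this set-of-atoms encoding, the lgg operations have a particularly simple form: the lgg of a term and an example is just the intersection of their atom sets. A sketch under that encoding (function names are ours):

```python
def lgg_example(example):
    # lgg(e): the most specific term covering e is e's own set of atoms.
    return set(example)

def lgg_term_example(term, example):
    # lgg(h, e): the most specific term more general than h that covers e.
    # A term is more general than h iff its atoms are a subset of h's, and
    # it covers e iff its atoms are a subset of e's; the maximal such set
    # of atoms is the intersection of h and e.
    return set(term) & set(example)
```

For instance, generalizing the term (a ∧ b) to also cover the example {b, not_c} keeps only the shared atom b.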
In the typology proposed by [9], our update mechanism is an incremental learner with full instance memory: learning is made by successive updates and all examples are stored. The update mechanism depends on the current hypothesis H, the current examples E+ and E−, and the new example e. There are three possible cases:

• e is positive and H covers e, or e is negative and H does not cover e. No update is needed: H is already complete and coherent with E ∪ {e}.

• e is positive and H does not cover e: e is denoted as a positive counterexample of H. Then we seek to generalize in turn the terms h of H. As soon as a correct generalization h′ = lgg(h, e) is found, h′ replaces h in H. If there is a term that is less general than h′, it is discarded. If no generalization is correct (meaning here coherent), H ∪ lgg(e) replaces H.

• e is negative and H covers e: e is denoted as a negative counterexample of H. Each term h covering e is then discarded from H and replaced by a set of terms {h1, ..., hn} that is, as a whole, coherent with E− ∪ {e} and that covers the examples of E+ uncovered by H − {h}. Terms of the final hypothesis H that are less general than others are discarded from H.

We will now describe the case where e = e− is a covered negative example.
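Before turning to that case, the first two cases above can be sketched in a few lines (our code, not the authors'; the covered-negative case is passed in as a function, since it is the specialization procedure detailed in the pseudocode that follows):

```python
def update(H, E_pos, E_neg, e, positive, specialize):
    """Three-case update sketch. Terms are frozensets of atoms; an
    example is a set of atoms; `specialize` handles the covered
    negative example."""
    def covers(H, ex):
        return any(t <= ex for t in H)
    # Case 1: H is already complete and coherent with E ∪ {e}
    if (positive and covers(H, e)) or (not positive and not covers(H, e)):
        return H
    # Case 2: positive counterexample -- try to generalize a term
    if positive:
        for i, h in enumerate(H):
            g = h & frozenset(e)                 # g = lgg(h, e)
            if not any(g <= n for n in E_neg):   # g coherent?
                # g replaces h; terms less general than g are dropped
                return [g] + [t for j, t in enumerate(H)
                              if j != i and not g <= t]
        return H + [frozenset(e)]                # no coherent lgg: add lgg(e)
    # Case 3: negative counterexample -- specialize the covering terms
    return specialize(H, E_pos, E_neg, e)

H = [frozenset({"a", "b", "c"})]
print(update(H, [], [], {"a", "b", "d"}, True, None))  # the single term generalizes to {a, b}
```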
The following functions are used here:

• coveredOnlyBy(h, E+) gives the subset of E+ covered by h and by no other term of H.
• bestCover(h1, h2) gives h1 if h1 covers more examples from uncoveredPos than h2, otherwise it gives h2.
• covered(h) gives the elements of uncoveredPos covered by h.

// Specialization of each h covering e−
for each h of H covering e− do
    H = H − {h}
    uncoveredPos = coveredOnlyBy(h, E+)
    Ar = atoms that are neither in e− nor in h
    while uncoveredPos ≠ ∅ do
        // seeking the best specialization of h
        best = ⊥    // ⊥ covers no example
        for each a of Ar do
            hc = h ∧ a
            best = bestCover(hc, best)
        endfor
        Ar = Ar − {a*}    // a* is the atom selected in best
        hi = lgg(covered(best))
        H = H ∪ {hi}
        uncoveredPos = uncoveredPos − covered(best)
    endwhile
endfor

Terms of H that are less general than others are discarded. Note that this mechanism tends both to make a minimal update of the current hypothesis and to minimize the number of terms in the hypothesis, in particular by discarding terms less general than other ones after updating a hypothesis.

3.3 Collective learning

If H is the current hypothesis, Ei the current example memory of agent ri, and E the set of all the examples received by the system, the notation of section 2 becomes Bi = BC = H, Ki = Ei and K = E. Cons(H, Ei) states that H is complete and coherent with Ei. In such a case, ri is a-consistent. The piece of information k received by agent ri is here simply an example e along with its tag. If e is such that the current hypothesis H is not complete or coherent with Ei ∪ {e}, e contradicts H: ri becomes a-inconsistent, and therefore the MAS is no longer consistent. The update of a hypothesis when a new example arrives is an a-consistent mechanism.
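The specialization pseudocode of section 3.2 can be rendered in runnable form as follows (a sketch under our reading; helper names are ours, and the atom set A is taken here to be all atoms occurring in the stored examples):

```python
def specialize(H, E_pos, E_neg, e_neg):
    """Replace each term covering the negative example e_neg by a set
    of coherent specializations that re-cover the positive examples it
    alone covered."""
    A = set().union(*E_pos, *E_neg, e_neg)
    H = list(H)
    for h in [t for t in H if t <= e_neg]:       # terms covering e−
        H.remove(h)
        # positives covered by h and by no other remaining term
        uncovered = [p for p in E_pos
                     if h <= p and not any(t <= p for t in H)]
        Ar = A - set(e_neg) - set(h)             # candidate atoms
        while uncovered:
            # best specialization h ∧ a: covers most uncovered positives
            best_a = max(Ar, key=lambda a:
                         sum(1 for p in uncovered if (h | {a}) <= p))
            best = h | {best_a}
            Ar.remove(best_a)
            cov = [p for p in uncovered if best <= p]
            if not cov:          # guard, implicit in the pseudocode
                break
            # hi = lgg(covered(best)): intersection of covered positives
            hi = frozenset.intersection(*[frozenset(p) for p in cov])
            H.append(hi)
            uncovered = [p for p in uncovered if p not in cov]
    # discard terms less general than others
    return [t for t in H if not any(u < t for u in H)]

print(specialize([frozenset({"a"})],
                 [{"a", "b"}, {"a", "b", "d"}], [], {"a", "c"}))
```

On this toy run, the term {a}, which covers the negative example {a, c}, is replaced by the single coherent specialization {a, b} that still covers both positives.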
Following proposition 1, this mechanism can be used to produce a strongly mas-consistent mechanism: upon reception of a new example in the MAS by an agent r, an update is possibly needed and, after a set of interactions between r and the other agents, results in a new hypothesis shared by all the agents that restores the consistency of the MAS, that is, which is complete and coherent with the set ES of all the examples present in the MAS. It is clear that by minimizing the number of hypothesis modifications, this synchronous and minimal mechanism minimizes the number of examples received by the learner from the other agents, and therefore the total number of examples stored in the system.

4. EXPERIMENTS

In the following, we will learn a boolean formula that is a difficult test for the learning method: the 11-multiplexer (see [4]). It concerns 3 address boolean attributes a0, a1, a2 and 8 data boolean attributes d0, ..., d7. The formula f11 is satisfied if the number coded by the 3 address attributes is the index of a data attribute whose value is 1. The formula is the following:

f11 = (a0 ∧ a1 ∧ a2 ∧ d7) ∨ (a0 ∧ a1 ∧ ¬a2 ∧ d6) ∨ (a0 ∧ ¬a1 ∧ a2 ∧ d5) ∨ (a0 ∧ ¬a1 ∧ ¬a2 ∧ d4) ∨ (¬a0 ∧ a1 ∧ a2 ∧ d3) ∨ (¬a0 ∧ a1 ∧ ¬a2 ∧ d2) ∨ (¬a0 ∧ ¬a1 ∧ a2 ∧ d1) ∨ (¬a0 ∧ ¬a1 ∧ ¬a2 ∧ d0).

There are 2048 = 2^11 possible examples, half of which are positive (meaning they satisfy f11) while the other half are negative.

An experiment is typically composed of 50 trials. Each run corresponds to a sequence of 600 examples that are incrementally learned by a multi-agent system with n agents (n-MAS). A number of variables such as accuracy (i.e.
the frequency of correct classification of a set of unseen examples), hypothesis size (i.e. the number of terms in the current formula) or the number of stored examples, are recorded each time 25 examples are received by the system during those runs.

In the protocol used here, a new example is sent to a random agent when the MAS is consistent. The next example is sent, in turn to another agent, once the MAS consistency has been restored. In this way we simulate a kind of slow learning: the frequency of example arrivals is slow compared to the time taken by an update.

4.1 Efficiency of MAS concept learning

4.1.1 Execution time

We briefly discuss here the execution time of learning in the MAS. Note that the whole set of actions and interactions in the MAS is simulated on a single processor. Figure 1 shows that time depends linearly on the number of agents. At the end of the most active part of learning (200 examples), a 16-MAS has taken 4 times more learning time than a 4-MAS. This execution time represents the whole set of learning and communication activity and hints at the cost of maintaining a consistent learning hypothesis in a MAS composed of autonomous agents.

Figure 1: Execution time of an n-MAS (from n = 2 at the bottom to n = 20 at the top).

4.1.2 Redundancy in the MAS memory

We now study the distribution of the examples in the MAS memory. Redundancy is written RS = nS/ne, where nS is the total number of examples stored in the MAS, that is, the sum of the sizes of the agents' example memories Ei, and ne is the total number of examples received from the environment by the MAS. In figure 2, we compare redundancies in 2- to 20-agent MAS. There is a peak, slowly moving from 80 to 100 examples, that represents the number of examples for which the learning is most active. For 20 agents, maximal redundancy is no more than 6, which is far less than the maximal theoretical value of 20.
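The 11-multiplexer target used in these experiments can be sanity-checked in a few lines (our own sketch, not part of the experimental system): an example assigns the 11 attributes, and f11 holds when the data bit indexed by the 3 address bits is 1.

```python
from itertools import product

def f11(a0, a1, a2, d):
    """d is the tuple (d0, ..., d7); the address bits select one bit."""
    index = 4 * a0 + 2 * a1 + a2
    return d[index] == 1

# Exactly half of the 2048 possible examples are positive:
examples = list(product([0, 1], repeat=11))
positives = sum(f11(e[0], e[1], e[2], e[3:]) for e in examples)
print(len(examples), positives)  # 2048 1024
```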
Note that when learning becomes less active, redundancy tends towards its minimal value 1: when there are no more updates, examples are only stored by the agent that receives them.

Figure 2: Redundancy of examples stored in an n-MAS (from n = 2 at the bottom to n = 20 at the top).

4.1.3 An n-MAS selects a simpler solution than a single agent

The proposed mechanism tends to minimize the number of terms in the selected hypothesis. During learning, the size of the current hypothesis grows beyond the optimum, and then decreases when the MAS converges. In the multiplexer-11 testbed, the optimal number of terms is 8, but there also exist equivalent formulas with more terms. It is interesting to note that in this case the 10-MAS converges towards an exact solution closer to the optimal number of terms (here 8) (see Figure 3). After 1450 examples have been presented, both the 1-MAS and the 10-MAS have exactly learned the concept (the respective accuracies are 0.9999 and 1), but the single agent expresses on average the result as an 11.0-term DNF whereas the 10-MAS expresses it as an 8.8-term DNF. However, for some other boolean functions we found that during learning a 1-MAS always produces larger hypotheses than a 10-MAS, but that both converge to hypotheses of similar size.

4.1.4 An n-MAS is more accurate than a single agent

Figure 4 shows the improvement brought by a MAS with n agents compared to a single agent.
This improvement was not especially expected because, whether we have one or n agents, when N examples are given to the MAS it has access to the same amount of information, maintains only one ongoing hypothesis, and uses the same basic revision algorithm whenever an agent has to modify the current hypothesis. Note that while the accuracies of the 1-, 2-, 4- and 10-MAS are significantly different, getting better as the number of agents increases, there is no clear difference beyond this point: the accuracy curve of the 100-agent MAS is very close to that of the 10-agent MAS.

4.1.4.1 Boolean formulas.

To evaluate this accuracy improvement, we have experimented with our protocol on other boolean function learning problems. As in the multiplexer-11 case, these functions are learnt in the form of more or less syntactically complex DNF³ (that is, with more or fewer conjunctive terms in the DNF), but they are also more or less difficult to learn, as it can be difficult to find one's way in the hypothesis space to reach them. Furthermore, the presence in the description of irrelevant attributes (that is, attributes that do not belong to the target DNF) makes the problem more difficult.

Figure 3: Size of the hypothesis built by a 1- and a 10-MAS: the M11 case.

Figure 4: Accuracy of an n-MAS: the M11 case (from bottom to top, n = 1, 2, 4, 10, 100).
The following problems have been selected to test our protocol: (i) the multiplexer-11 with 9 irrelevant attributes: M11_9; (ii) the 20-multiplexer M20 (with 4 address bits and 16 data bits); (iii) a difficult parity problem (see [4]), Xorp_m: there must be an odd number of bits with value 1 among the p first attributes for the instance to be positive, the m other bits being irrelevant; and (iv) a simple DNF formula (a ∧ b ∧ c) ∨ (c ∧ d ∧ e) ∨ (e ∧ f ∧ g) ∨ (g ∧ h ∧ i) with 19 irrelevant attributes. The following table sums up some information about these problems, giving the total number of attributes including irrelevant ones, the number of irrelevant attributes, the minimal number of terms of the corresponding DNF, and the number of learning examples used.

Pb            att.  irre. att.  terms  ex.
M11            11        0        8    200
M11_9          20        9        8    200
M20            20        0       16    450
Xor3_25        28       25        4    200
Xor5_5         10        5       16    180
Xor5_15        20       15       16    600
Simple4-9_19   28       19        4    200

³ Disjunctive Normal Forms

Below are given the accuracy results of our learning mechanism with a single agent and a 10-agent MAS, along with the results of two standard algorithms implemented in the learning environment WEKA [16]: JRip (an implementation of RIPPER [2]) and Id3 [12]. For the experiments with JRip and Id3, we measured the mean accuracy over 50 trials, each time randomly separating the examples into a learning set and a test set. JRip and Id3 parameters are the default parameters, except that JRip is used without pruning. The following table shows the results:

Pb            JRip   Id3    Sm 1   Sm 10
M11           88.3   80.7   88.7   95.5
M11_9         73.4   67.9   66.8   83.5
M20           67.7   62.7   64.6   78.2
Xor3_25       54.4   55.2   71.4   98.5
Xor5_5        52.6   60.8   71.1   78.3
Xor5_15       50.9   51.93  62.4   96.1
Simple4-9_19  99.9   92.3   87.89  98.21

It is clear that difficult problems are better solved with more agents (see for instance Xor5_15).
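Two of the target concepts above are easy to state as code; a quick sketch of ours (not part of the experimental system):

```python
def xor_p(bits, p):
    """Xorp_m: positive iff an odd number of the first p bits are 1;
    the remaining bits are irrelevant."""
    return sum(bits[:p]) % 2 == 1

def simple4(v):
    """(a∧b∧c) ∨ (c∧d∧e) ∨ (e∧f∧g) ∨ (g∧h∧i); v maps attribute
    names to 0/1 (irrelevant attributes simply never appear here)."""
    conj = lambda *names: all(v[n] for n in names)
    return (conj("a", "b", "c") or conj("c", "d", "e")
            or conj("e", "f", "g") or conj("g", "h", "i"))

print(xor_p([1, 1, 1, 0, 0], 3))  # True: three 1s among the first 3 bits
```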
We think that these benefits, which can be important with an increasing number of agents, are due to the fact that each agent really memorizes only part of the total number of examples, and this part is partly selected by the other agents as counterexamples; this causes a greater number of updates of the current hypothesis and therefore a better exploration of the hypothesis space.

4.1.4.2 ML database problems.

We also did experiments with some non-boolean problems. We considered only two-class (positive/negative) problems, taken from the UCI machine learning database [3]. In all these problems, examples are described as a vector of (attribute, value) couples. The value domains can be either boolean, numeric (wholly ordered set), or nominal (non-ordered set). An adequate set of atoms A must be constituted for each problem. For instance, if a is a numeric attribute, we define at most k thresholds si, giving k + 1 intervals of uniform density⁴. Each distinct threshold si then gives two atoms a ≤ si and a > si. In our experiments, we took a maximal number of thresholds k = 8. For instance, in the iono problem case, there are 34 numeric attributes, and an instance is described with 506 atoms.

Below are given the accuracy results of our system along with previous results. The column Nb ex. refers to the number of examples used for learning⁵. Column (1) gives the minimal and maximal accuracy values of the thirty-three classifiers tested in [8]. Column (2) gives the results of [13], where various learning methods are compared to ensemble learning methods using weighted classifier sets. Columns S-1 and S-10 give the accuracy of SMILE with respectively 1 and 10 agents.

⁴ The probability for the value of a to be in any interval is constant.
Pb        Nb ex.    (1)     (2)        S-1    S-10
ttt       862/574   //      76.2-99.7  99.7   99.9
kr-vs-kp  2876/958  //      91.4-99.4  96.8   97.3
iono      315       //      88.0-91.8  87.2   88.1
bupa      310       57-72   58-69.3    62.5   63.3
breastw   614       91-97   94.3-97.3  94.7   94.7
vote      391       94-96   95.3-96    91.9   92.6
pima      691       //      71.5-73.4  65.0   65.0
heart     243       66-86   77.1-84.1  69.5   70.7

This table shows that the incremental algorithm corresponding to the single-agent case gives honorable results relative to non-incremental classical methods using larger and more complex hypotheses. In some cases, there is an accuracy improvement with a 10-agent MAS. However, with such benchmark data, which are often noisy, the difficulty does not really come from the way in which the search space is explored, and therefore the improvement observed is not always significant. The same kind of phenomenon has been observed with methods dedicated to hard boolean problems [4].

4.2 MAS synchronization

Here we consider that n single agents learn without interactions and at a given time start interacting, thus forming a MAS. The purpose is to observe how the agents take advantage of collaboration when they start from different states of beliefs and memories. We compare in this section a 1-MAS, a 10-MAS (ref), and a 10-MAS (100sync) whose agents did not communicate during the arrival of the first 100 examples (10 per agent). The three accuracy curves are shown in figure 5. By comparing the single-agent curve and the synchronized 10-MAS, we can observe that after the beginning of the synchronization, that is at 125 examples, the accuracies are identical. This was expected since, as soon as an example e received by the MAS contradicts the current hypothesis of the agent ra receiving it, this agent makes an update and its new hypothesis is proposed to the other agents for criticism. Therefore, this first contradictory example brings the MAS to reach consistency relative to the whole set of examples present in the agents' memories.
A higher accuracy, corresponding to a 10-MAS, is obtained later, from the 175th example on. In other words, the benefit of a better exploration of the search space is obtained slightly later in the learning process. Note that this synchronization happens naturally in all situations where agents have, for some reason, a divergence between their hypothesis and the system memory. This includes the fusion of two MAS into a single one or the arrival of new agents in an existing MAS.

4.3 Experiments on asynchronous learning: the effect of a large data stream

In this experiment we relax our slow learning mode: the examples are sent at a given rate to the MAS. The resulting example stream is measured in ms⁻¹, and represents the number of examples sent to the MAS each ms. Whenever the stream is too large, the MAS cannot reach MAS consistency on reception of an example from the environment before a new example arrives. This means that the update process, started by an agent r0 when it received an example, may be unfinished when a new example is received by r0 or another agent r1. As a result, a critic agent may, at a given instant t, have to send counterexamples of hypotheses sent by various agents. However, since in our setting the agents memorize all the examples they receive, whenever the stream ends the MAS necessarily reaches MAS consistency with respect to all the examples received so far. In our experiments, though its learning curve is slowed down during the intense learning phase (corresponding to low accuracy of the current hypotheses), the MAS still reaches a satisfying hypothesis later on, as there are fewer and fewer counterexamples in the example stream.

⁵ For ttt and kr-vs-kp, our protocol did not use more than respectively 574 and 958 learning examples, so we put another number in the column.

Figure 5: Accuracies of a 1-MAS, a 10-MAS, and a 10-MAS synchronized after 100 examples.
In Figure 6 we compare the accuracies of two 11-MAS respectively submitted to example streams of different rates when learning the M11 formula. The learning curve of the MAS receiving an example at a 1/33 ms⁻¹ rate is almost unaltered (see Figure 4), whereas the 1/16 ms⁻¹ MAS is first severely slowed down before catching up with the first one.

Figure 6: Accuracies of two asynchronous 11-MAS (1/33 ms⁻¹ and 1/16 ms⁻¹ example rates).

5. RELATED WORKS

Since 1996 [15], various works have been performed on learning in MAS, but rather few on concept learning. In [11] the MAS performs a form of ensemble learning in which the agents are lazy learners (no explicit representation is maintained) and sell useless examples to other agents. In [10] each agent observes all the examples but only perceives a part of their representation. In mutual online concept learning [14] the agents converge to a unique hypothesis, but each agent produces examples from its own concept representation, thus resulting in a kind of synchronization rather than in pure concept learning.

6. CONCLUSION

We have presented and experimented with a protocol for MAS online concept learning. The main feature of this collaborative learning mechanism is that it maintains a consistency property: though during the learning process each agent only receives and stores, with some limited redundancy, part of the examples received by the MAS, at any moment the current hypothesis is consistent with the whole set of examples. The assumptions of our experiments do not address the issues of distributed MAS such as faults (for instance, messages could be lost or corrupted) or other failures in general (crashes, byzantine faults, etc.). Nevertheless, our framework is open, i.e., agents can leave the system or enter it while the consistency mechanism is preserved.
For instance, if we introduce a timeout mechanism, even when a critic agent crashes or omits to answer, consistency with the other critics (within the remaining agents) is still entailed. In [1], a similar approach has been applied to MAS abduction problems: the hypotheses to maintain, given incomplete information, are then facts or statements. Further work concerns, first, coupling induction and abduction in order to perform collaborative concept learning when examples are only partially observed by each agent, and second, investigating partial-memory learning: how learning is preserved whenever one agent or the whole MAS forgets some selected examples.

Acknowledgments

We are very grateful to Dominique Bouthinon for implementing late modifications in SMILE, so much easing our experiments. Part of this work was performed during the first author's visit to the Atelier De BioInformatique of Paris VI university, France.

7. REFERENCES

[1] G. Bourgne, N. Maudet, and S. Pinson. When agents communicate hypotheses in critical situations. In DALT-2006, May 2006.
[2] W. W. Cohen. Fast effective rule induction. In ICML, pages 115-123, 1995.
[3] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases, 1998.
[4] S. Esmeir and S. Markovitch. Lookahead-based algorithms for anytime induction of decision trees. In ICML'04, pages 257-264. Morgan Kaufmann, 2004.
[5] J. Fürnkranz. A pathology of bottom-up hill-climbing in inductive rule learning. In ALT, volume 2533 of LNCS, pages 263-277. Springer, 2002.
[6] A. Guerra-Hernández, A. ElFallah-Seghrouchni, and H. Soldano. Learning in BDI multi-agent systems. In CLIMA IV, volume 3259, pages 218-233. Springer Verlag, 2004.
[7] M. Henniche. MGI: an incremental bottom-up algorithm. In IEEE Aust. and New Zealand Conference on Intelligent Information Systems, pages 347-351, 1994.
[8] T.-S. Lim, W.-Y. Loh, and Y.-S. Shih.
A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 40(3):203-228, 2000.
[9] M. A. Maloof and R. S. Michalski. Incremental learning with partial instance memory. Artif. Intell., 154(1-2):95-126, 2004.
[10] P. J. Modi and W.-M. Shen. Collaborative multiagent learning for classification tasks. In AGENTS '01, pages 37-38. ACM Press, 2001.
[11] S. Ontañón and E. Plaza. Recycling data for multi-agent learning. In ICML '05, pages 633-640. ACM Press, 2005.
[12] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[13] U. Rückert and S. Kramer. Towards tight bounds for rule learning. In ICML '04 (International Conference on Machine Learning), page 90, New York, NY, USA, 2004. ACM Press.
[14] J. Wang and L. Gasser. Mutual online concept learning for multiple agents. In AAMAS, pages 362-369. ACM Press, 2002.
[15] G. Weiß and S. Sen, editors. Adaption and Learning in Multi-Agent Systems, volume 1042 of Lecture Notes in Computer Science. Springer, 1996.
[16] I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, October 1999.

Keywords: learning process; multi-agent learning; incremental learning; agent; collaborative concept learning; knowledge; mas-consistency; update mechanism; synchronization
Real-Time Agent Characterization and Prediction

Abstract: Reasoning about agents that we observe in the world is challenging. Our available information is often limited to observations of the agent's external behavior in the past and present. To understand these actions, we need to deduce the agent's internal state, which includes not only rational elements (such as intentions and plans), but also emotive ones (such as fear). In addition, we often want to predict the agent's future actions, which are constrained not only by these inward characteristics, but also by the dynamics of the agent's interaction with its environment. BEE (Behavior Evolution and Extrapolation) uses a faster-than-real-time agent-based model of the environment to characterize agents' internal state by evolution against observed behavior, and then predict their future behavior, taking into account the dynamics of their interaction with the environment.

1. INTRODUCTION

Reasoning about agents that we observe in the world must integrate two disparate levels. Our observations are often limited to the agent's external behavior, which can frequently be summarized numerically as a trajectory in space-time (perhaps punctuated by actions from a fairly limited vocabulary). However, this behavior is driven by the agent's internal state, which (in the case of a human) may involve high-level psychological and cognitive concepts such as intentions and emotions. A central challenge in many application domains is reasoning from external observations of agent behavior to an estimate of their internal state. Such reasoning is motivated by a desire to predict the agent's behavior. This problem has traditionally been addressed under the rubric of plan recognition or plan inference.
Work to date focuses almost entirely on recognizing the rational state (as opposed to the emotional state) of a single agent (as opposed to an interacting community), and frequently takes advantage of explicit communications between agents (as in managing conversational protocols). Many realistic problems deviate from these conditions:

• Increasing the number of agents leads to a combinatorial explosion that can swamp conventional analysis.
• Environmental dynamics can frustrate agent intentions.
• The agents are often trying to hide their intentions (and even their presence), rather than intentionally sharing information.
• An agent's emotional state may be at least as important as its rational state in determining its behavior.

Domains that exhibit these constraints can often be characterized as adversarial, and include military combat, competitive business tactics, and multi-player computer games.

BEE (Behavioral Evolution and Extrapolation) is a novel approach to recognizing the rational and emotional state of multiple interacting agents based solely on their behavior, without recourse to intentional communications from them. It is inspired by techniques used to predict the behavior of nonlinear dynamical systems, in which a representation of the system is continually fit to its recent past behavior. For nonlinear dynamical systems, the representation is a closed-form mathematical equation. In BEE, it is a set of parameters governing the behavior of software agents representing the individuals being analyzed. The current version of BEE characterizes and predicts the behavior of agents representing soldiers engaged in urban combat [8].

Section 2 reviews relevant previous work. Section 3 describes the architecture of BEE. Section 4 reports results from experiments with the system. Section 5 concludes. Further details that cannot be included here for the sake of space are available in an on-line technical report [16].

2.
PREVIOUS WORK

BEE bears comparison with previous research in AI (plan recognition), Hidden Markov Models, and nonlinear dynamical systems (trajectory prediction).

2.1 Plan Recognition in AI

Agent theory commonly describes an agent's cognitive state in terms of its beliefs, desires, and intentions (the so-called BDI model [5, 20]). An agent's beliefs are propositions about the state of the world that it considers true, based on its perceptions. Its desires are propositions about the world that it would like to be true. Desires are not necessarily consistent with one another: an agent might desire both to be rich and not to work at the same time. An agent's intentions, or goals, are a subset of its desires that it has selected, based on its beliefs, to guide its future actions. Unlike desires, goals must be consistent with one another (or at least believed to be consistent by the agent).

An agent's goals guide its actions. Thus one ought to be able to learn something about an agent's goals by observing its past actions, and knowledge of the agent's goals in turn enables conclusions about what the agent may do in the future. This process of reasoning from an agent's actions to its goals is known as plan recognition or plan inference. This body of work (surveyed recently in [3]) is rich and varied. It covers both single-agent and multi-agent (e.g., robot soccer team) plans, intentional vs. non-intentional actions, speech vs. non-speech behavior, adversarial vs. cooperative intent, complete vs. incomplete world knowledge, and correct vs. faulty plans, among other dimensions.

Plan recognition is seldom pursued for its own sake. It usually supports a higher-level function. For example, in human-computer interfaces, recognizing a user's plan can enable the system to provide more appropriate information and options for user action.
In a tutoring system, inferring the student's plan is a first step to identifying buggy plans and providing appropriate remediation. In many cases, the higher-level function is predicting likely future actions by the entity whose plan is being inferred. We focus on plan recognition in support of prediction. An agent's plan is a necessary input to a prediction of its future behavior, but hardly a sufficient one. At least two other influences, one internal and one external, need to be taken into account.

The external influence is the dynamics of the environment, which may include other agents. The dynamics of the real world impose significant constraints:

• The environment may interfere with the desires of the agent [4, 10].
• Most interactions among agents, and between agents and the world, are nonlinear. When iterated, these can generate chaos (extreme sensitivity to initial conditions).

A rational analysis of an agent's goals may enable us to predict what it will attempt, but any nontrivial plan with several steps will depend sensitively at each step on the reaction of the environment, and our prediction must take this reaction into account as well. Actual simulation of futures is one way (the only one we know now) to deal with the impact of environmental dynamics on an agent's actions.

Human agents are also subject to an internal influence. The agent's emotional state can modulate its decision process and its focus of attention (and thus its perception of the environment). In extreme cases, emotion can lead an agent to choose actions that, from the standpoint of a logical analysis, may appear irrational.

Current work on plan recognition for prediction focuses on the rational plan, and does not take into account either external environmental influences or internal emotional biases. BEE integrates all three elements into its predictions.

2.2 Hidden Markov Models

BEE is superficially similar to Hidden Markov Models (HMMs [19]).
In both cases, the agent has hidden internal state (the agent's personality) and observable state (its outward behavior), and we wish to learn the hidden state from the observable state (by evolution in BEE, by the Baum-Welch algorithm [1] in HMMs) and then predict the agent's future behavior (by extrapolation via ghosts in BEE, by the forward algorithm in HMMs). BEE offers two important benefits over HMMs.

First, a single agent's hidden variables do not satisfy the Markov property. That is, their values at t + 1 depend not only on their values at t, but also on the hidden variables of other agents. One could avoid this limitation by constructing a single HMM over the joint state space of all of the agents, but this approach is combinatorially prohibitive. BEE combines the efficiency of independently modeling individual agents with the reality of taking into account interactions among them.

Second, Markov models assume that transition probabilities are stationary. This assumption is unrealistic in dynamic situations. BEE's evolutionary process continually updates the agents' personalities based on actual observations, and thus automatically accounts for changes in the agents' personalities.

2.3 Real-Time Nonlinear Systems Fitting

Many systems of interest can be described by a vector of real numbers that changes as a function of time. The dimensions of the vector define the system's state space. One typically analyzes such systems as vector differential equations, e.g.,

    dx/dt = f(x).

When f is nonlinear, the system can be formally chaotic, and starting points arbitrarily close to one another can lead to trajectories that diverge exponentially rapidly. Long-range prediction of such a system is impossible. However, it is often useful to anticipate the system's behavior a short distance into the future.
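The fit-and-extrapolate loop described here can be illustrated with a toy sketch (ours, not BEE's): a convenient functional form, here a degree-2 polynomial, is refit to a sliding window of recent measurements and evaluated a short step into the future.

```python
def fit_quadratic(ts, xs):
    """Least-squares fit x(t) = c0 + c1*t + c2*t^2 via the 3x3 normal
    equations, solved by Gauss-Jordan elimination with partial pivoting."""
    A = [[sum(t ** (i + j) for t in ts) for j in range(3)] for i in range(3)]
    b = [sum(x * t ** i for t, x in zip(ts, xs)) for i in range(3)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))  # pivot row
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(3):                       # eliminate other rows
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(3)]

def predict(ts, xs, t_future, window=6):
    """Refit on the most recent window, then extrapolate."""
    c = fit_quadratic(ts[-window:], xs[-window:])
    return c[0] + c[1] * t_future + c[2] * t_future ** 2

ts = [0, 1, 2, 3, 4, 5]
xs = [t * t for t in ts]              # a trajectory the model fits exactly
print(round(predict(ts, xs, 6), 6))   # 36.0
```

Repeating this fit on every new measurement gives the constantly updated, limited look-ahead described in the text.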
A common technique is to fit a convenient functional form for f to the system's trajectory in the recent past, then extrapolate this fit into the future (Figure 1, [7]). This process is repeated constantly, providing the user with a limited look-ahead.

This approach is robust and widely applied, but requires systems that can efficiently be described with mathematical equations. BEE extends this approach to agent behaviors, which it fits to observed behavior using a genetic algorithm.

3. ARCHITECTURE
BEE predicts the future by observing the emergent behavior of agents representing the entities of interest in a fine-grained agent simulation. Key elements of the BEE architecture include the model of an individual agent, the pheromone infrastructure through which agents interact, the information sources that guide them, and the overall evolutionary cycle that they execute.

3.1 Agent Model
The agents in BEE are inspired by two bodies of work: our previous work on fine-grained agents that coordinate their actions through digital pheromones in a shared environment [2, 13, 17, 18, 21], and the success of previous agent-based combat modeling.

Figure 1: Tracking a nonlinear dynamical system. a = system state space; b = system trajectory over time; c = recent measurements of system state; d = short-range prediction.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1427

Digital pheromones are scalar variables that agents deposit and sense at their current location in the environment. Agents respond to local concentrations of these variables tropistically, climbing or descending local gradients. Their movements change the deposit patterns. This feedback loop, together with processes of evaporation and propagation in the environment, supports complex patterns of interaction and coordination among the agents [15]. Table 1 shows the BEE's current pheromone flavors.
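The fit-then-extrapolate loop of Section 2.3 can be illustrated with a minimal sketch. A straight-line least-squares fit stands in for the "convenient functional form" (our illustrative choice; BEE itself fits agent behaviors with a genetic algorithm rather than a closed-form curve, and the function name below is hypothetical):

```python
def fit_and_extrapolate(history, steps):
    """Fit y = a + b*t by least squares over a recent window of scalar
    measurements (at least two samples), then extrapolate the fitted
    line `steps` ticks beyond the last observation."""
    n = len(history)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(history) / n
    # Closed-form simple linear regression coefficients.
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, history)) \
        / sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    # Short-range look-ahead only: for a chaotic system the fit is
    # refreshed constantly and never trusted far into the future.
    return [a + b * (n - 1 + k) for k in range(1, steps + 1)]
```

In use, the window would slide forward with each new measurement and the extrapolation would be recomputed, giving the limited look-ahead described above.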
For example, a living member of the adversary emits a RED-ALIVE pheromone, while roads emit a MOBILITY pheromone.

Our soldier agents are inspired by EINSTein and MANA. EINSTein [6] represents an agent as a set of six weights, each in [-1, 1], describing the agent's response to six kinds of information. Four of these describe the number of alive friendly, alive enemy, injured friendly, and injured enemy troops within the agent's sensor range. The other two weights relate to the agent's distance to its own flag and that of the adversary, representing objectives that it seeks to protect and attack, respectively. A positive weight indicates attraction to the entity described by the weight, while a negative weight indicates repulsion.

MANA [9] extends the concepts in EINSTein. Friendly and enemy flags are replaced by the waypoints pursued by each side. MANA includes low, medium, and high threat enemies. In addition, it defines a set of triggers (e.g., reaching a waypoint, being shot at, making contact with the enemy, being injured) that shift the agent from one personality vector to another. A default state defines the personality vector when no trigger state is active.

The personality vectors in MANA and EINSTein reflect both rational and emotive aspects of decision-making. The notion of being attracted or repelled by friendly or adversarial forces in various states of health is an important component of what we informally think of as emotion (e.g., fear, compassion, aggression), and the use of the term personality in both EINSTein and MANA suggests that the system designers are thinking anthropomorphically, though they do not use emotion to describe the effect they are trying to achieve.
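The weight-vector response described for EINSTein-style agents can be sketched as a simple score over candidate moves; the function name and the feature encoding are our illustrative assumptions, not details from EINSTein itself:

```python
def personality_score(weights, features):
    """Score one candidate position as the weighted sum of the six
    observed quantities an EINSTein-style agent responds to (e.g.,
    counts of alive/injured friendly and enemy troops in sensor range,
    and terms for the two flag distances). Positive weights attract,
    negative weights repel; the agent would favor the highest score."""
    assert len(weights) == len(features) == 6
    return sum(w * f for w, f in zip(weights, features))
```

For example, a weight vector attracted to friendly troops and repelled by enemy troops scores a position with three friendlies and two enemies as 3 - 2 = 1.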
The notion of waypoints to which an agent is attracted reflects goal-oriented rationality.

BEE uses an integrated rational-emotive personality model. A BEE agent's rationality is a vector of seven desires, which are values in [-1, +1]: ProtectRed (the adversary), ProtectBlue (friendly forces), ProtectGreen (civilians), ProtectKeySites, AvoidCombat, AvoidDetection, and Survive. Negative values reverse the sense suggested by the label. For example, a negative value of ProtectRed indicates a desire to harm Red, and an agent with a high positive desire to ProtectRed will be attracted to RED-ALIVE, RED-CASUALTY, and MOBILITY pheromone, and will move at maximum speed.

The emotive component of a BEE's personality is based on the Ortony-Clore-Collins (OCC) framework [11], and is described in detail elsewhere [12]. OCC define emotions as valenced reactions to agents, states, or events in the environment. This notion of reaction is captured in MANA's trigger states. An important advance in BEE's emotional model is the recognition that agents may differ in how sensitive they are to triggers. For example, threatening situations tend to stimulate the emotion of fear, but a given level of threat will produce more fear in a new recruit than in a seasoned veteran. Thus our model includes not only Emotions, but Dispositions. Each Emotion has a corresponding Disposition. Dispositions are relatively stable, and considered constant over the time horizon of a run of the BEE, while Emotions vary based on the agent's disposition and the stimuli to which it is exposed.

Interviews with military domain experts identified the two most crucial emotions for combat behavior as Anger (with the corresponding disposition Irritability) and Fear (whose disposition is Cowardice). Table 2 shows which pheromones trigger which emotions.
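A disposition-moderated emotion of this kind might be updated as below. This is a minimal sketch: the decay form, the 0.9 evaporation constant, and the function name are our illustrative assumptions, not values from the paper:

```python
def update_emotion(emotion, disposition, stimulus, evaporation=0.9):
    """One tick of a hormone-style emotion: the current level decays
    (evaporates), and a triggering stimulus adds an increment scaled by
    the agent's stable disposition, so the same level of threat raises
    more Fear in a highly cowardly agent than in a seasoned veteran."""
    return emotion * evaporation + disposition * stimulus
```

Repeated calls with zero stimulus let the emotion fade back toward zero, while sustained triggers drive it toward a level proportional to the disposition.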
For example, RED-CASUALTY pheromone stimulates both Anger and Fear in a Red agent, but not in a Blue agent. Emotions are modeled as agent hormones (internal pheromones) that are augmented in the presence of the triggering environmental condition and evaporate over time.

A non-zero emotion modifies the agent's actions. Elevated Anger increases movement likelihood, weapon firing likelihood, and tendency toward an exposed posture. Elevated Fear decreases these likelihoods.

Figure 2 summarizes the BEE's personality model. The left side is a straightforward BDI model (we prefer the term goal to intention). The right side is the emotive component, where an appraisal of the agent's beliefs, moderated by the disposition, leads to an emotion that in turn influences the BDI analysis.

Table 1. Pheromone flavors in BEE

  Flavor                                     Description
  RedAlive, RedCasualty, BlueAlive,          Emitted by a living or dead entity of the appropriate
  BlueCasualty, GreenAlive, GreenCasualty    group (Red = enemy, Blue = friendly, Green = neutral)
  WeaponsFire                                Emitted by a firing weapon
  KeySite                                    Emitted by a site of particular importance to Red
  Cover                                      Emitted by locations that afford cover from fire
  Mobility                                   Emitted by roads and other structures that enhance agent mobility
  RedThreat, BlueThreat                      Determined by an external process (see Section 3.3)

Table 2: Interactions of pheromones and dispositions/emotions (Irritability/Anger and Cowardice/Fear, from the Red, Blue, and Green perspectives), covering the pheromones RedAlive, RedCasualty, BlueAlive, BlueCasualty, GreenCasualty, WeaponsFire, and KeySites.

3.2 The BEE Cycle
BEE's major innovation is extending the nonlinear systems technique of Section 2.3 to agent behaviors.
This section describes this process at a high level, then details the multi-page pheromone infrastructure that implements it.

3.2.1 Overview
Figure 3 is an overview of Behavior Evolution and Extrapolation. Each active entity in the battlespace has a persistent avatar that continuously generates a stream of ghost agents representing itself. We call the combined modeling entity consisting of avatar and ghosts a polyagent [14].

Ghosts live on a timeline indexed by τ that begins in the past and runs into the future. τ is offset with respect to the current time t. The timeline is divided into discrete pages, each representing a successive value of τ. The avatar inserts the ghosts at the insertion horizon. In our current system, the insertion horizon is at τ - t = -30, meaning that ghosts are inserted into a page representing the state of the world 30 minutes ago. At the insertion horizon, each ghost's behavioral parameters (desires and dispositions) are sampled from distributions to explore alternative personalities of the entity it represents.

Each page between the insertion horizon and τ = t (now) records the historical state of the world at the point in the past to which it corresponds. As ghosts move from page to page, they interact with this past state, based on their behavioral parameters. These interactions mean that their fitness depends not just on their own actions, but also on the behaviors of the rest of the population, which is also evolving. Because τ advances faster than real time, eventually τ = t (actual time). At this point, each ghost is evaluated based on its location compared with the actual location of its corresponding real-world entity.

The fittest ghosts have three functions.
1. The personality of each entity's fittest ghost is reported to the rest of the system as the likely personality of that entity. This information enables us to characterize individual warriors as unusually cowardly or brave.
2.
The fittest ghosts breed genetically and their offspring return\nto the insertion horizon to continue the fitting process.\n3. The fittest ghosts for each entity form the basis for a\npopulation of ghosts that run past the avatar's present into the\nfuture. Each ghost that runs into the future explores a\ndifferent possible future of the battle, analogous to how some\npeople plan ahead by mentally simulating different ways that\na situation might unfold. Analysis of the behaviors of these\ndifferent possible futures yields predictions.\nThus BEE has three distinct notions of time, all of which\nmay be distinct from real-world time.\n1. Domain time t is the current time in the domain being\nmodeled. If BEE is applied to a real-world situation, this time\nis the same as real-world time. In our experiments, we apply\nBEE to a simulated battle, and domain time is the time stamp\npublished by the simulator. During actual runs, the simulator\nis often paused, so domain time runs slower than real time.\nWhen we replay logs from simulation runs, we can speed\nthem up so that domain time runs faster\nthan real time.\n2. BEE time for a page records the\ndomain time corresponding to the state\nof the world represented on that page,\nand is offset from the current domain\ntime.\n3. Shift time is incremented every time the\nghosts move from one page to the next.\nThe relation between shift time and real\ntime depends on the processing\nresources available.\n3.2.2 Pheromone Infrastructure\nBEE must operate very rapidly, to\nkeep pace with the ongoing battle. Thus\nwe use simple agents coordinated using pheromone mechanisms.\nWe have described the basic dynamics of our pheromone\ninfrastructure elsewhere [2]. This infrastructure runs on the nodes of a\ngraph-structured environment (in the case of BEE, a rectangular\nlattice). 
Each node maintains a scalar value for each flavor of pheromone, and provides three functions:

It aggregates deposits from individual agents, fusing information across multiple agents and through time.

It evaporates pheromones over time, providing an innovative alternative to traditional truth maintenance. Traditionally, knowledge bases remember everything they are told unless they have a reason to forget. Pheromone-based systems immediately begin to forget everything they learn, unless it is continually reinforced. Thus inconsistencies automatically remove themselves within a known period.

It diffuses pheromones to nearby places, disseminating information for access by nearby agents.

The distribution of each pheromone flavor over the environment forms a field that represents some aspect of the state of the world at an instant in time. Each page of the timeline is a complete pheromone field for the world at the BEE time represented by that page. The behavior of the pheromones on each page depends on whether the page represents the past or the future.

Figure 2: BEE's Integrated Rational and Emotive Personality Model.

Figure 3: Behavioral Emulation and Extrapolation. Each avatar generates a stream of ghosts that sample the personality space of its entity. They evolve against the entity's recent observed behavior, and the fittest ghosts run into the future to generate predictions.
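The three node functions above (aggregation, evaporation, diffusion) can be sketched for a single pheromone flavor on a rectangular lattice. The rate constants, the dict representation, and the function name are illustrative assumptions, not BEE's actual implementation:

```python
def step_pheromone(field, deposits, evap=0.8, diffuse=0.1):
    """One tick of a single pheromone flavor on a rectangular lattice.
    field and deposits map (x, y) -> concentration. Each node aggregates
    fresh deposits, evaporates (forgets unless reinforced), and diffuses
    a fraction of its value to each of its four lattice neighbors."""
    after_deposit = {k: field.get(k, 0.0) + deposits.get(k, 0.0)
                     for k in set(field) | set(deposits)}
    out = {}
    for (x, y), v in after_deposit.items():
        v *= evap  # evaporation: stale information decays away
        out[(x, y)] = out.get((x, y), 0.0) + v * (1 - 4 * diffuse)
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            out[nb] = out.get(nb, 0.0) + v * diffuse
    return out
```

A single deposit thus spreads into a gradient over successive ticks, which agents can climb or descend, while the total concentration shrinks each tick so unreinforced information fades within a known period.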
In pages representing the future (τ > t), the usual pheromone mechanisms apply. Ghosts deposit pheromone each time they move to a new page, and pheromones evaporate and propagate from one page to the next.

In pages representing the past (τ < t), we have an observed state of the real world. This has two consequences for pheromone management. First, we can generate the pheromone fields directly from the observed locations of individual entities, so there is no need for the ghosts to make deposits. Second, we can adjust the pheromone intensities based on the changed locations of entities from page to page, so we do not need to evaporate or propagate the pheromones. Both of these simplifications reflect the fact that in our current system, we have complete knowledge of the past. When we introduce noise and uncertainty, we will probably need to introduce dynamic pheromones in the past as well as the future.

Execution of the pheromone infrastructure proceeds on two time scales, running in separate threads. The first thread updates the book of pages each time the domain time advances past the next page boundary. At each step, the former "now + 1" page is replaced with a new current page, whose pheromones correspond to the locations and strengths of observed units; an empty page is added at the prediction horizon; and the oldest page is discarded, since it has passed the insertion horizon.

The second thread moves the ghosts from one page to the next, as fast as the processor allows.
At each step, ghosts reaching the τ = t page are evaluated for fitness and removed or evolved; new ghosts from the avatars and from the evolutionary process are inserted at the insertion horizon; a population of ghosts based on the fittest ghosts is inserted at τ = t to run into the future; ghosts that have moved beyond the prediction horizon are removed; all ghosts plan their next actions based on the pheromone field in the pages they currently occupy; and the system computes the next state of each page, including executing the actions elected by the ghosts, and (in future pages) evaporating pheromones and recording new deposits from the recently arrived ghosts.

Ghost movement based on pheromone gradients is a simple process, so this system can support realistic agent populations without excessive computer load. In our current system, each avatar generates eight ghosts per shift. Since there are about 50 entities in the battlespace (about 20 units each of Red and Blue and about 5 of Green), we must support about 400 ghosts per page, or about 24000 over the entire book.

How fast a processor do we need? Let p be the real-time duration of a page in seconds. If each page represents 60 seconds of domain time, and we are replaying a simulation at 2x domain time, p = 30. Let n be the number of pages between the insertion horizon and τ = t. In our current system, n = 30. Then a shift rate of n/p shifts per second will permit ghosts to run from the insertion horizon to the current time at least once before a new page is generated. Empirically, this level is a lower bound for reasonable performance, and easily achievable on stock WinTel platforms.

3.3 Information sources
The flexibility of the BEE's pheromone infrastructure permits the integration of numerous information sources as input to our characterizations of entity personalities and predictions of their future behavior.
Our current system draws on three sources of information, but others can readily be added.

Real-world observations. Observations from the real world are encoded into the pheromone field each increment of BEE time, as a new current page is generated. Table 1 identifies the entities that generate each flavor of pheromone.

Statistical estimates of threat regions. Statistical techniques (see footnote 1) estimate the level of threat to each force (Red or Blue), based on the topology of the battlefield and the known disposition of forces. For example, a broad open area with no cover is threatening, especially if the opposite force occupies its margins. The results of this process are posted to the pheromone pages as RedThreat pheromone (representing a threat to Red) and BlueThreat pheromone (representing a threat to Blue).

AI-based plan recognition. While plan recognition is not sufficient for effective prediction, it is a valuable input. We dynamically configure a Bayes net based on heuristics to identify the likely goals that each entity may hold (see footnote 2). The destinations of these goals function as virtual pheromones. Ghosts include their distance to such points in their action decisions, achieving the result of gradient following without the computational expense of maintaining a pheromone field.

4. EXPERIMENTAL RESULTS
We have tested BEE in a series of experiments in which human wargamers make decisions that are played out in a battlefield simulator. The commander for each side (Red and Blue) has at his disposal a team of pucksters, human operators who set waypoints for individual units in the simulator. Each puckster is responsible for four to six units. The simulator moves the units, determines firing actions, and resolves the outcome of conflicts.
It is important to emphasize that this simulator is simply a surrogate for a sensor feed from a real-world battlefield.

4.1 Fitting Dispositions
To test our ability to fit personalities based on behavior, one Red puckster responsible for four units is designated the emotional puckster. He selects two of his units to be cowardly (chickens) and two to be irritable (Rambos). He does not disclose this assignment during the run. He moves each unit according to the commander's orders until the unit encounters circumstances that would trigger the emotion associated with the unit's disposition. Then he manipulates chickens as though they are fearful (avoiding combat and moving away from Blue), and moves Rambos into combat as quickly as possible. Our software receives position reports on all units, every twenty seconds.

Footnote 1: This process, known as SAD (Statistical Anomaly Detection), is developed by our colleagues Rafael Alonso, Hua Li, and John Asmuth at Sarnoff Corporation. Alonso and Li are now at SET Corporation.
Footnote 2: This process, known as KIP (Knowledge-based Intention Projection), is developed by our colleagues Paul Nielsen, Jacob Crossman, and Rich Frederiksen at Soar Technology.

The difference between the two disposition values (Irritability - Cowardice) of the fittest ghosts proves a better indicator of the emotional state of the corresponding entity than either value by itself. Figure 4 shows the delta disposition for each of the eight fittest ghosts at each time step, plotted against the time in seconds, for a unit played as a chicken. The values clearly trend negative. Figure 5 shows a similar plot for a Rambo.
Rambos tend to die early, and often do not give their ghosts enough time to evolve a clear picture of their personality, but in this case the positive Delta Disposition is evident before the unit's demise.

To characterize a unit's personality, we maintain an 800-second exponentially weighted moving average of the Delta Disposition, and declare the unit to be a chicken or Rambo if this value passes a negative or positive threshold, respectively. Currently, this threshold is set at 0.25. We are exploring additional filters. For example, a rapid rate of increase enhances the likelihood of calling a Rambo; units that seek to avoid detection and avoid combat are more readily called chicken.

Table 1 shows the detection results for emotional units in a recent series of experiments. We never called a Rambo a chicken. In the one case where we called a chicken a Rambo, logs show that in fact the unit was being played aggressively, rushing toward oncoming Blue forces. The brave die young, so we almost never detect units played intentionally as Rambos.

Figure 6 shows a comparison, on a separate series of experiments, of our emotion detector with humans. Two cowards were played in each of eleven games. Human observers in each game were able to detect a total of 13 of the cowards. BEE was able to detect cowards (= chickens) much earlier than the humans, while missing only one chicken that the humans detected.

In addition to these results on units intentionally played as emotional, BEE sometimes detects other units as cowardly or brave. Analysis of these units shows that these characterizations were appropriate: units that flee in the face of enemy forces or weapons fire are detected as chickens, while those that stand their ground or rush the adversary are denominated as Rambos.

4.2 Integrated Predictions
Each ghost that runs into the future generates a possible path that its unit might follow.
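The moving-average detector of Section 4.1 can be sketched as follows. The paper specifies the 20-second report interval, the 800-second window, and the 0.25 threshold; the particular smoothing constant (their ratio) and the function name are our illustrative choices:

```python
def classify_unit(deltas, dt=20.0, window=800.0, threshold=0.25):
    """Run an exponentially weighted moving average over a sequence of
    Delta Disposition samples (Irritability - Cowardice of the fittest
    ghosts, one sample per dt-second report) and call the unit as soon
    as the average crosses +/- threshold."""
    alpha = dt / window  # illustrative decay constant for the EWMA
    ewma = 0.0
    for d in deltas:
        ewma = (1 - alpha) * ewma + alpha * d
        if ewma >= threshold:
            return "Rambo"
        if ewma <= -threshold:
            return "chicken"
    return "not called"
```

With this decay constant, a unit whose ghosts report a consistently negative Delta Disposition is called a chicken within a few minutes of reports, while noisy near-zero values never trip either threshold.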
The paths in the resulting set over all ghosts vary in how likely they are, the risk they pose to their own or the opposite side, and so forth. In the experiments reported here, we select the future whose ghost receives the most guidance from pheromones in the environment at each step along the way. In this sense, it is the most likely future. In these experiments, we receive position reports only on units that have actually come within visual range of Blue units, or on average fewer than half of the live Red units at any time.

We evaluate predictions spatially, comparing an entity's actual location with the location predicted for it 15 minutes earlier. We compare BEE with two baselines: a game-theoretic predictor based on linguistic geometry [22], and estimates by military officers. In both cases, we use a CEP (circular error probable) measure of accuracy, the radius of the circle that one would have to draw around each prediction to capture 50% of the actual unit locations. The higher the CEP measure, the worse the accuracy.

Table 1: Experimental Results on Fitting Disposition (16 runs)

             Called Correctly   Called Incorrectly   Not Called
  Chickens   68%                5%                   27%
  Rambos     5%                 0%                   95%

Figure 4: Delta Disposition for a Chicken's Ghosts.
Figure 5: Delta Disposition for a Rambo.
Figure 6: BEE vs. Human. Cowards found (out of 22) vs. percent of run time (wall clock); legend: Human, ARM-A.

Figure 7 compares our accuracy with that of the game-theoretic predictor. Each point gives the median CEP measure over all predictions in a single run. Points above the diagonal favor BEE, while points below the line favor the game-theoretic predictor. In all but two missions, BEE is more accurate. In one mission, the two systems are comparable, while in one, the game-theoretic predictor is more accurate.

In 18 RAID runs, BEE generated 1405 predictions at each of two time horizons (0 and 15 minutes), while in 18 non-RAID runs, staff generated 102 predictions. Figure 8 shows a box-and-whisker plot of the CEP measures, in meters, of these predictions. The box covers the inter-quartile range with a line at the median, whiskers extend to the most distant data points within 1.5 interquartile ranges of the edge of the box, squares show outliers within 3 interquartile ranges, and stars show more distant outliers. BEE's median score even at 15 minutes is lower than either Staff median. The Wilcoxon test shows that the difference between the H15 scores is significant at the 99.76% level, while that between the H0 scores is significant at more than 99.999%.

5. CONCLUSIONS
In many domains, it is important to reason from an entity's observed behavior to an estimate of its internal state, and then to extrapolate that estimate to predict the entity's future behavior. BEE performs this task using a faster-than-real-time simulation of swarming agents, coordinated through digital pheromones. This simulation integrates knowledge of threat regions, a cognitive analysis of the agent's beliefs, desires, and intentions, a model of the agent's emotional disposition and state, and the dynamics of interactions with the environment. By evolving agents in this rich environment, we can fit their internal state to their observed behavior. In realistic wargames, the system successfully detects deliberately played emotions and makes reasonable predictions about the entities' future behaviors.

BEE can only model internal state variables that impact the agent's external behavior.
It cannot fit variables that the agent does not manifest externally, since the basis for the evolutionary cycle is a comparison of the outward behavior of the simulated agent with that of the real entity. This limitation is serious if our purpose is to understand the entity's internal state for its own sake. If our purpose in fitting agents is to predict their subsequent behavior, the limitation is much less serious. State variables that do not impact behavior, while invisible to a behavior-based analysis, are irrelevant to a behavioral prediction.

The BEE architecture lends itself to extension in several promising directions.

The various inputs being integrated by the BEE are only an example of the kinds of information that can be handled. The basic principle of using a dynamical simulation to integrate a wide range of influences can be extended to other inputs as well, requiring much less additional engineering than other more traditional ways of reasoning about how different knowledge sources come together in impacting an agent's behavior. With such a change in inputs, BEE could be applied more widely than its current domain of adversarial reasoning in urban warfare. Potential applications of interest include computer games, business strategy, and sensor fusion.

Our initial limited repertoire of emotions is a small subset of those that have been distinguished by psychologists, and that might be useful for understanding and projecting behavior.
We expect to extend the set of emotions and supporting dispositions that BEE can detect.

The mapping between an agent's psychological (cognitive and emotional) state and its outward behavior is not one-to-one. Several different internal states might be consistent with a given observed behavior under one set of environmental conditions, but might yield distinct behaviors under other conditions. If the environment in the recent past is one that confounds such distinct internal states, we will be unable to distinguish them. As long as the environment stays in this state, our predictions will be accurate, whichever of the internal states we assign to the agent. If the environment then shifts to one under which the different internal states lead to different behaviors, using the previously chosen internal state will yield inaccurate predictions. One way to address these concerns is to probe the real world, perturbing it in ways that would stimulate distinct behaviors from entities whose psychological state is otherwise indistinguishable. Such probing is an important intelligence technique. BEE's faster-than-real-time simulation may enable us to identify appropriate probing actions, greatly increasing the effectiveness of intelligence efforts.

6. ACKNOWLEDGEMENTS
This material is based in part upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHC040153. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA or the Department of Interior-National Business Center (DOI-NBC). Distribution Statement A (Approved for Public Release, Distribution Unlimited).

7. REFERENCES
[1] Baum, L. E., Petrie, T., Soules, G., and Weiss, N.
A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist., 41, 1: 1970, 164-171.

Figure 7: Median errors for BEE vs. Linguistic Geometry on each run. Squares are Defend missions, triangles are Move missions, diamonds are Attack missions.
Figure 8: Box-and-whisker plots of RAID and Staff predictions at 0 and 15 minute horizons. Y-axis is CEP radius in meters; lower values indicate greater accuracy.

[2] Brueckner, S. Return from the Ant: Synthetic Ecosystems for Manufacturing Control. Thesis at Humboldt University Berlin, Department of Computer Science, 2000.
[3] Carberry, S. Techniques for Plan Recognition. User Modeling and User-Adapted Interaction, 11, 1-2: 2001, 31-48.
[4] Ferber, J. and Müller, J.-P. Influences and Reactions: a Model of Situated Multiagent Systems. In Proceedings of Second International Conference on Multi-Agent Systems (ICMAS-96), AAAI, 1996, 72-79.
[5] Haddadi, A. and Sundermeyer, K. Belief-Desire-Intention Agent Architectures. In G. M. P. O'Hare and N. R. Jennings, Editors, Foundations of Distributed Artificial Intelligence, John Wiley, New York, NY, 1996, 169-185.
[6] Ilachinski, A. Artificial War: Multiagent-based Simulation of Combat. Singapore, World Scientific, 2004.
[7] Kantz, H. and Schreiber, T. Nonlinear Time Series Analysis. Cambridge, UK, Cambridge University Press, 1997.
[8] Kott, A. Real-Time Adversarial Intelligence & Decision Making (RAID). vol. 2005, DARPA, Arlington, VA, 2004. Web Site.
[9] Lauren, M. K. and Stephen, R. T. Map-Aware Non-uniform Automata (MANA)-A New Zealand Approach to Scenario Modelling. Journal of Battlefield Technology, 5, 1 (March): 2002, 27ff.
[10] Michel, F.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1433", "keywords": "agent's goal;pheromone flavor;plan recognition;future behavior;evolution;swarm intelligence;bdi;agent behavior prediction;internal state;nonlinear dynamical system;emotion;prediction;behavioral evolution and extrapolation;external behavior;agent reasoning;disposition;plan inference;dynamics;digital pheromone"}
-{"name": "test_I-12", "title": "Sharing Experiences to Learn User Characteristics in Dynamic Environments with Sparse Data", "abstract": "This paper investigates the problem of estimating the value of probabilistic parameters needed for decision making in environments in which an agent, operating within a multi-agent system, has no a priori information about the structure of the distribution of parameter values. The agent must be able to produce estimations even when it may have made only a small number of direct observations, and thus it must be able to operate with sparse data. The paper describes a mechanism that enables the agent to significantly improve its estimation by augmenting its direct observations with those obtained by other agents with which it is coordinating. To avoid undesirable bias in relatively heterogeneous environments while effectively using relevant data to improve its estimations, the mechanism weighs the contributions of other agents\" observations based on a real-time estimation of the level of similarity between each of these agents and itself. The coordination autonomy module of a coordination-manager system provided an empirical setting for evaluation. Simulation-based evaluations demonstrated that the proposed mechanism outperforms estimations based exclusively on an agent\"s own observations as well as estimations based on an unweighted aggregate of all other agents\" observations.", "fulltext": "1. INTRODUCTION\nFor many real-world scenarios, autonomous agents need to\noperate in dynamic, uncertain environments in which they have only\nincomplete information about the results of their actions and\ncharacteristics of other agents or people with whom they need to\ncooperate or collaborate. 
In such environments, agents can benefit\nfrom sharing information they gather, pooling their individual\nexperiences to improve their estimations of unknown parameters\nrequired for reasoning about actions under uncertainty.\nThis paper addresses the problem of learning the distribution of\nthe values of a probabilistic parameter that represents a\ncharacteristic of a person who is interacting with a computer agent. The\ncharacteristic to be learned is (or is clearly related to) an important\nfactor in the agent\"s decision making.1\nThe basic setting we consider\nis one in which an agent accumulates observations about a\nspecific user characteristic and uses them to produce a timely estimate\nof some measure that depends on that characteristic\"s distribution.\nThe mechanisms we develop are designed to be useful in a range of\napplication domains, such as disaster rescue, that are characterized\nby environments in which conditions may be rapidly changing,\nactions (whether of autonomous agents or of people) and the overall\noperations occur at a fast pace, and decisions must be made within\ntightly constrained time frames. Typically, agents must make\ndecisions in real time, concurrent with task execution, and in the midst\nof great uncertainty. In the remainder of this paper, we use the term\nfast-paced to refer to such environments. 
In fast-paced\nenvironments, information gathering may be limited, and it is not possible\nto learn offline or to wait until large amounts of data are collected\nbefore making decisions.\nFast-paced environments impose three constraints on any\nmechanism for learning a distribution function (including the large range\nof Bayesian update techniques [23]): (a) the no structure constraint:\nno a priori information about the structure of the estimated\nparameter\"s distribution nor any initial data from which such structure\ncan be inferred is available; (b) the limited use constraint: agents\ntypically need to produce only a small number of estimations in\ntotal for this parameter; (c) the early use constraint: high\naccuracy is a critical requirement even in the initial stages of learning.\nThus, the goal of the estimation methods presented in this paper is\nto minimize the average error over time, rather than to determine\nan accurate value at the end of a long period of interaction. That\nis, the agent is expected to work with the user for a limited time,\nand it attempts to minimize the overall error in its estimations. In\nsuch environments, an agent\"s individually acquired data (its own\nobservations) are too sparse for it to obtain good estimations in the\nrequisite time frame. Given the no-structure-constraint of the\nenvironment, approaches that depend on structured distributions may\nresult in a significantly high estimation bias.\nWe consider this problem in the context of a multi-agent\ndistributed system in which computer agents support people who are\ncarrying out complex tasks in a dynamic environment. 
The fact that\nagents are part of a multi-agent setting, in which other agents may\n1\nLearning the distribution rather than just determining some value in the\ndistribution is important whenever the overall shape of the distribution and\nnot just such individual features as mean are important.\nalso be gathering data to estimate a similar characteristic of their\nusers, offers the possibility for an agent to augment its own\nobservations with those of other agents, thus improving the accuracy of\nits learning process. Furthermore, in the environments we consider,\nagents are usually accumulating data at a relatively similar rate.\nNonetheless, the extent to which the observations of other agents\nwill be useful to a given agent depends on the extent to which their\nusers\" characteristics\" distributions are correlated with that of this\nagent\"s user. There is no guarantee that the distribution for two\ndifferent agents is highly, positively correlated, let alone that they\nare the same. Therefore, to use a data-sharing approach, a\nlearning mechanism must be capable of effectively identifying the level\nof correlation between the data collected by different agents and to\nweigh shared data depending on the level of correlation.\nThe design of a coordination autonomy (CA) module within a\ncoordination-manager system (as part of the DARPA Coordinators\nproject [18]), in which agents support a distributed scheduling task,\nprovided the initial motivation and a conceptual setting for this\nwork. However, the mechanisms themselves are general and can\nbe applied not only to other fast-paced domains, but also in other\nmulti-agent settings in which agents are collecting data that\noverlaps to some extent, at approximately similar rates, and in which\nthe environment imposes the no-structure, limited- and early-use\nconstraints defined above (e.g., exploration of remote planets). 
In particular, our techniques would be useful in any setting in which a group of agents undertakes a task in a new environment, with each agent obtaining observations at a similar rate of the individual parameters they need for their decision-making.\nIn this paper, we present a mechanism that was used for learning key user characteristics in fast-paced environments. The mechanism provides relatively accurate estimations within short time frames by augmenting an individual agent's direct observations with observations obtained by other agents with which it is coordinating. In particular, we focus on the related problems of estimating the cost of interrupting a person and estimating the probability that that person will have the information required by the system. Our adaptive approach, which we will refer to throughout the paper as selective-sharing, allows our CA to improve the accuracy of its distribution-based estimations in comparison to relying only on the interactions with a specific user (subsequently, self-learning) or pooling all data unconditionally (average all), in particular when the number of available observations is relatively small.\nThe mechanism was successfully tested using a system that simulates a Coordinators environment. The next section of the paper describes the problem of estimating user-related parameters in fast-paced domains. Section 3 provides an overview of the methods we developed. The implementation, empirical setting, and results are given in Sections 4 and 5. A comparison with related methods is given in Section 6 and conclusions in Section 7.\n2. PARAMETER ESTIMATION IN FAST-PACED DOMAINS\nThe CA module and algorithms we describe in this paper were developed and tested in the Coordinators domain [21]. In this domain, autonomous agents, called Coordinators, are intended to help maximize an overall team objective by handling changes in the task schedule as conditions of operation change.
Each agent\noperates on behalf of its owner (e.g., the team leader of a\nfirstresponse team or a unit commander) whose schedule it manages.\nThus, the actual tasks being scheduled are executed either by\nowners or by units they oversee, and the agent\"s responsibility is limited\nto maintaining the scheduling of these tasks and coordinating with\nthe agents of other human team members (i.e., other owners). In\nthis domain, scheduling information and constraints are distributed.\nEach agent receives a different view of the tasks and structures that\nconstitute the full multi-agent problem-typically only a partial,\nlocal one. Schedule revisions that affect more than one agent must\nbe coordinated, so agents thus must share certain kinds of\ninformation. (In a team context they may be designed to share other types\nas well.) However, the fast-paced nature of the domain constrains\nthe amount of information they can share, precluding a centralized\nsolution; scheduling problems must be solved distributively.\nThe agent-owner relationship is a collaborative one, with the\nagent needing to interact with its owner to obtain task and\nenvironment information relevant to scheduling. The CA module is\nresponsible for deciding intelligently when and how to interact with\nthe owner for improving the agent\"s scheduling. As a result, the CA\nmust estimate the expected benefit of any such interaction and the\ncost associated with it [19]. In general, the net benefit of a potential\ninteraction is PV \u2212 C, where V is the value of the information the\nuser may have, P is the probability that the user has this\ninformation, and C is the cost associated with an interaction. The values of\nP, V , and C are time-varying, and the CA estimates their value at\nthe intended time of initiating the interaction with its owner. 
This paper focuses on the twin problems of estimating the parameters P and C (both of which are user-centric in the sense of being determined by characteristics of the owner and the environment in which the owner is operating); it presumes a mechanism for determining V [18].\n2.1 Estimating Interruption Costs\nThe cost of interrupting owners derives from the potential degradation in performance of tasks they are doing caused by the disruption [1; 9, inter alia]. Research on interaction management has deployed sensor-based statistical models of human interruptibility to infer the degree of distraction likely to be caused by an interruption. This work aims to reduce interruption costs by delaying interruptions to times that are convenient. It typically uses Bayesian models to learn a user's current or likely future focus of attention from an ongoing stream of actions. By using sensors to provide continuous incoming indications of the user's attentional state, these models attempt to provide a means for computing probability distributions over a user's attention and intentions [9]. Work which examines such interruptibility-cost factors as user frustration and distractibility [10] includes work on the cost of repeatedly bothering the user, which takes into account the fact that recent interruptions and difficult questions should carry more weight than interruptions in the distant past or straightforward questions [5].\nAlthough this prior work uses interruptibility estimates to balance the interaction's estimated importance against the degree of distraction likely to be caused, it differs from the fast-paced-environments problem we address in three ways that fundamentally change the nature of the problem and hence alter the possible solutions.
First, it considers settings in which the computer system has information that may be relevant to its user rather than the user (owner) having information needed by the system, which is the complement of the information-exchange situation we consider. Second, the interruptibility-estimation models are task-based. Lastly, it relies on continuous monitoring of a user's activities.\nIn fast-paced environments, there usually is no single task structure, and some of the activities themselves may have little internal structure. As a result, it is difficult to determine the actual attentional state of agent-owners [15]. In such settings, owners must make complex decisions that typically involve a number of other members of their units, while remaining reactive to events that diverge from expectations [24]. For instance, during disaster rescue, a first-response unit may begin rescuing survivors trapped in a burning house, when a wall collapses suddenly, forcing the unit to retract and re-plan their actions.\nPrior work has tracked users' focus of attention using a range of devices, including those able to monitor gestures [8] and track eye gaze to identify focus of visual attention [13, 20], thus enabling estimations of cognitive load and physical indicators of performance degradation. The mechanisms described in this paper also presume the existence of such sensors.
However, in contrast to prior work,\nwhich relies on these devices operating continuously, our\nmechanism presumes that fast-paced environments only allow for the\nactivation of sensors for short periods of time on an ad-hoc basis,\nbecause agents\" resources are severely limited.\nMethods that depend on predicting what a person will do next\nbased only on what the user is currently doing (e.g., MDPs) are not\nappropriate for modeling focus of attention in fast-paced domains,\nbecause an agent cannot rely on a person\"s attentional state being\nwell structured and monitoring can only be done on a sporadic,\nnon-continuous basis. Thus, at any given time, the cost of\ninteraction with the user is essentially probabilistic, as reflected over a\nsingle random monitoring event, and can be assigned a probability\ndistribution function. Consequently, in fast-paced environments,\nan agent needs a sampling strategy by which the CA samples its\nowner\"s interruptibility level (with some cost) and decides whether\nto initiate an interaction at this specific time or to delay until a lower\ncost is observed in future samplings. The method we describe in\nthe remainder of this subsection applies concepts from economic\nsearch theory [16] to this problem. The CA\"s cost estimation uses\na mechanism that integrates the distribution of an owner\"s\ninterruptibility level (as estimated by the CA) into an economic search\nstrategy, in a way that the overall combined cost of sensor costs and\ninteraction costs is minimized.\nIn its most basic form, the economic search problem aims to\nidentify an opportunity that will minimize expected cost or\nmaximize expected utility. The search process itself is associated with\na cost, and opportunities (in our case, interruption opportunities)\nare associated with a stationary distribution function. We use a\nsequential search strategy [16] in which one observation is drawn at\na time, over multiple search stages. 
The dominating strategy in this model is a reservation-value based strategy which determines a lower bound, and keeps drawing samples as long as no opportunity above the bound was drawn.\nIn particular, we consider the situation in which an agent's owner has an interruption cost described by a probability distribution function (pdf) f(x) and a cumulative distribution function (cdf) F(x). The agent can activate sensing devices to get an estimation of the interruption cost, x, at the current time, but there is a cost c of operating the sensing devices for a single time unit. The CA module sets a reservation value, and as long as the sensor-based observation x is greater than this reservation value, the CA will wait and re-sample the user for a new estimation.\nThe expected cost, V(x_rv), using such a strategy with reservation value x_rv is described by Equation 1,\nV(x_rv) = (c + \u222b_{y=0}^{x_rv} y f(y) dy) / F(x_rv), (1)\nwhich decomposes into two parts. The first part, c divided by F(x_rv), represents the expected sampling cost. The second, the integral divided by F(x_rv), represents the expected cost of interruption, because the expected number of search cycles is (random) geometric and the probability of success is F(x_rv). Taking the derivative of Equation 1 with respect to x_rv and equating it to zero yields the characteristic of the optimal reservation value, namely that x*_rv must satisfy\nV(x*_rv) = x*_rv. (2)\nSubstituting (2) in Equation 1 yields Equation 3 (after integration by parts), from which the optimal reservation value, x*_rv, and consequently (from Equation 2) V(x*_rv) can be computed:\nc = \u222b_{y=0}^{x*_rv} F(y) dy. (3)\nThis method, which depends on extracting the optimal sequence of sensor-based user sampling, relies heavily on the structure of the distribution function, f(x). However, we need only a portion of the distribution function, namely from the origin to the reservation value.
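As a concrete illustration, Equation 3 can be solved numerically for the optimal reservation value. The sketch below is a minimal Python version (the helper name `optimal_reservation_value` is hypothetical); it assumes only that F is an increasing cdf on a bounded support, and uses the uniform U(0, 100) distribution with sensing cost c = 0.5 from the simulation discussed in Section 3:

```python
def optimal_reservation_value(F, c, hi, tol=1e-6):
    """Solve Equation 3, c = integral_0^x F(y) dy, for x by bisection.

    F   -- cumulative distribution function of the interruption cost
    c   -- cost of operating the sensing devices for one time unit
    hi  -- upper bound of the cost support (so F(hi) == 1)
    """
    def integral_of_F(x, n=2000):
        # trapezoidal approximation of the integral of F from 0 to x
        h = x / n
        return h * (0.5 * (F(0.0) + F(x)) + sum(F(k * h) for k in range(1, n)))

    lo_x, hi_x = 0.0, hi
    while hi_x - lo_x > tol:
        mid = 0.5 * (lo_x + hi_x)
        if integral_of_F(mid) < c:   # the integral is increasing in x
            lo_x = mid
        else:
            hi_x = mid
    return 0.5 * (lo_x + hi_x)

# Uniform U(0, 100): F(y) = y/100, so c = x^2/200 and x* = sqrt(200 c).
x_rv = optimal_reservation_value(lambda y: y / 100.0, c=0.5, hi=100.0)
# By Equation 2, the expected combined cost V(x*) equals x* itself.
```

For U(0, 100) with c = 0.5 this gives x*_rv = 10, which matches the true value cited for that example in Section 3.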
(See Equation 1 and Figure 1.) Thus, when we consider sharing data, it is not necessary to rely on complete similarity in the distribution functions of different users. For some parameters, including the user's interruptibility level, it is enough to rely on similarity in the relevant portion of the distribution function. The implementation described in Sections 4-5 relies on this fact.\nFigure 1: The distribution structure affecting the expected cost's calculation\n2.2 Estimating the Probability of Having Information\nOne way an agent can estimate the probability a user will have information it needs (e.g., will know at a specific interruption time, with some level of reliability, the actual outcome of a task currently being executed) is to rely on prior interactions with this user, calculating the ratio between the number of times the user had the information and the total number of interactions. Alternatively, the agent can attempt to infer this probability from measurable characteristics of the user's behavior, which it can assess without requiring an interruption. This indirect approach, which does not require interrupting the user, is especially useful in fast-paced domains.\nThe CA module we designed uses such an indirect method: owner-environment interactions are used as a proxy for measuring whether the owner has certain information. For instance, in Coordinators-like scenarios, owners may obtain a variety of information through occasional coordination meetings of all owners, direct communication with other individual owners participating in the execution of a joint task (through which they may learn informally about the existence or status of other actions they are executing), open communications they overhear (e.g., if commanders leave their radios open, they can listen to messages associated with other teams in their area), and other formal or informal communication channels [24].
Thus, owners\" levels of communication with others, which\ncan be obtained without interrupting them, provide some indication\nof the frequency with which they obtain new information. Given\noccasional updates about its owner\"s level of communication, the\nCA can estimate the probability that a random interaction with the\nowner will yield the information it needs. Denoting the\nprobability distribution function of the amount of communication the user\ngenerally maintains with its environment by g(x), and using the\ntransformation function Z(x), mapping from a level of\ncommunication, x, to a probability of having the information, the expected\nprobability of getting the information that is needed from the owner\nwhen interrupting at a given time can be calculated from\nP =\nZ \u221e\n0\nZ(x)g(x)dy. (4)\n204 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe more observations an agent can accumulate about the\ndistribution of the frequency of an owner\"s interaction with the\nenvironment at a given time, the better it can estimate the probability the\nowner has the information needed by the system.\n3. THE SELECTIVE-SHARING MECHANISM\nThis section presents the selective-sharing mechanism by which\nthe CA learns the distribution function of a probabilistic parameter\nby taking advantage of data collected by other CAs in its\nenvironment. We first explain the need for increasing the number of\nobservations used as the basis of estimation and then present a method\nfor determining how much data to adopt from other agents.\nThe most straightforward method for the CA to learn the\ndistribution functions associated with the different parameters\ncharacterizing an owner is by building a histogram based on the\nobservations it has accumulated up to the estimation point. 
Based on this histogram, the CA can estimate the parameter either by taking into account the entire range of values (e.g., to estimate the mean) or a portion of it (e.g., to find the expected cost when using a reservation-value-based strategy). The accuracy of the estimation will vary widely if it is based on only a small number of observations.\nFor example, Figure 2 illustrates the reservation-value-based cost calculated according to observations received from an owner with a uniform interruption cost distribution U(0, 100), as a function of the number of accumulated observations used for generating the distribution histogram. (In this simulation, the device activation cost was taken to be c = 0.5.)\nFigure 2: The convergence of a single CA to its optimal strategy\nThese deviations from the actual (true) value (which is 10 in this case, according to Equation 3) arise because the sample used in each stage cannot accurately capture the actual structure of the distribution function. Eventually this method yields a very accurate estimation of the expected interruption cost. However, in the initial stages of the process, its estimation deviates significantly from the true value. This error could seriously degrade the CA's decision-making process: underestimating the cost may result in initiating costly, non-beneficial interactions, and overestimating the cost might result in missing opportunities for valuable interactions. Any improvement that can be achieved in predicting the cost values, especially in the initial stages of learning, can make a significant difference in performance, especially because the agent is severely limited in the number of times it can interact with its owner in fast-paced domains.\nOne way to decrease the deviation from the actual value is by augmenting the data the CA acquires by observing its owner with observations made by other owners' agents.
Such an approach depends on identifying other owners with distribution functions for the characteristic of interest similar to that of the CA's owner. This data-augmentation idea is simple: different owners may exhibit similar basic behaviors or patterns in similar fast-paced task scenarios. Since they are all coordinating on a common overall task and are operating in the same environment, it is reasonable to assume some level of similarity in the distribution functions of their modeled parameters. People vary in their behavior, so, obviously, there may be different types of owners: some will emphasize communication with their teams, and some will spend more time on map-based planning; some will dislike being disturbed while trying to evaluate their team's progress, while others may be more open to interruptions. Consequently, an owner's CA is likely to be able to find some CAs that are working with owners who are similar to its owner.\nWhen adopting data collected by other agents, the two main questions are which agents the CA should rely on and to what extent it should rely on each of them. The selective-sharing mechanism relies on a statistical measure of similarity that allows the CA of any specific user to identify the similarity between its owner and other owners dynamically. Based on this similarity level, the CA decides if and to what degree to import other CAs' data in order to augment its direct observations, and thus to enable better modeling of its owner's characteristics.\nIt is notable that the cost of transferring observations between different CA modules of different agents is relatively small. This information can be transferred as part of regular negotiation communication between agents.
The volume of such communication\nis negligible: it involves just the transmission of new observations\"\nvalues.\nIn our learning mechanism, the CA constantly updates its\nestimation of the level of similarity between its owner and the owners\nrepresented by other CAs in the environment. Each new\nobservation obtained either by that CA or any of the other CAs updates this\nestimation. The similarity level is determined using the Wilcoxon\nrank-sum test (Subsection 3.1).\nWhenever it is necessary to produce a parameter estimate, the\nCA decides on the number of additional observations it intends to\nrely on for extracting its estimation. The number of additional\nobservations to be taken from each other agent is a function of the\nnumber of observations it currently has from former interactions\nwith its owner and the level of confidence the CA has in the\nsimilarity between its owner and other owners. In most cases, the\nnumber of observations the CA will want to take from another agent is\nsmaller than the overall number of observations the other agent has;\nthus, it randomly samples (without repetitions) the required\nnumber of observations from this other agent\"s database. The additional\nobservations the CA takes from other agents are used only to model\nits owner\"s characteristics. Future similarity level determination is\nnot affected by this information augmentation procedure.\n3.1 The Wilcoxon Test\nWe use a nonparametric method (i.e., one that makes no\nassumptions about the parametric form of the distributions each set is\ndrawn from), because user characteristics in fast-paced domains do\nnot have the structure needed for parametric approaches. 
Two additional advantages of a non-parametric approach are its usefulness for dealing with unexpected, outlying observations (possibly problematic for a parametric approach), and the fact that non-parametric approaches are computationally very simple and thus ideal for settings in which computational resources are scarce.\nThe Wilcoxon rank-sum test we use is a nonparametric alternative to the two-sample t-test [22, 14].2 While the t-test compares means, the Wilcoxon test can be used to test the null hypothesis that two populations X and Y have the same continuous distribution. We assume that we have independent random samples {x1, x2, ..., xm} and {y1, y2, ..., yn}, of sizes m and n respectively, from each population. We then merge the data and rank each measurement from lowest to highest. All sequences of ties are assigned an average rank. From the sum of the ranks of the smaller sample, we calculate the test statistic and extract the level of confidence for rejecting the null hypothesis. This level of confidence becomes the measure of the level of similarity between the two owners. The Wilcoxon test does not require that the data originate from a normally distributed population or that the distribution be characterized by a finite set of parameters.\n2 The Chi-Square Goodness-of-Fit Test is for a single sample and thus not suitable.\n3.2 Determining Required Information\nCorrectly identifying the right number of additional observations to gather is a key determinant of success of the selective-sharing mechanism.
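The similarity measure of Section 3.1 can be sketched as a short, self-contained routine. The paper does not specify which variant of the test it uses; the version below (with the hypothetical helper name `wilcoxon_confidence`) uses the two-sided large-sample normal approximation and returns 1 - p, the confidence for rejecting the null hypothesis that the two samples share a distribution:

```python
import math

def wilcoxon_confidence(xs, ys):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns 1 - p, the confidence for rejecting the null hypothesis
    that xs and ys were drawn from the same continuous distribution.
    """
    m, n = len(xs), len(ys)
    combined = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    ranks = [0.0] * (m + n)
    i = 0
    while i < m + n:                      # assign average ranks to ties
        j = i
        while j + 1 < m + n and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0    # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    # rank sum of the first sample (in practice, use the smaller sample)
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mu = m * (m + n + 1) / 2.0            # mean of the rank sum under the null
    sigma = math.sqrt(m * n * (m + n + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return 1.0 - p
```

Two owners whose samples overlap heavily yield a confidence near 0 (similar), while well-separated samples yield a confidence near 1; per the text, the test would be applied only to observations below the estimated reservation value.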
Obviously, if the CA can identify another owner who\nhas identical characteristics to the owner it represents, then it should\nuse all of the observations collected by that owner\"s agent.\nHowever, cases of identical matches are likely to be very uncommon.\nFurthermore, even to establish that another user is identical to its\nown, the CA would need substantial sample sizes to have a\nrelatively high level of confidence. Thus, usually the CA needs to\ndecide how much to rely on another agent\"s data while estimating\nvarious levels of similarity with a changing level of confidence.\nAt the beginning of its process, the selective-sharing mechanism\nhas almost no data to rely on, and thus no similarity measure can\nbe used. In this case, the CA module relies heavily on other agents,\nin the expectation that all owners have some basic level of\nsimilarity in their distribution (see Section 2). As the number of its\ndirect observations increases, the CA module refines the number of\nadditional observations required. Again, there are two conflicting\neffects. On one hand, the more data the CA has, the better it can\ndetermine its level of confidence in the similarity ratings it has for\nother owners. On the other hand, assuming there is some difference\namong owners (even if not noticed yet), as the number of its direct\nobservations increases, the owner\"s own data should gain weight in\nits analysis. 
Therefore, when CAi decides how many additional observations, O^i_j, should be adopted from CAj's database, it calculates O^i_j as follows:\nO^i_j = N \u2217 (1 \u2212 \u03b1_{i,j}) / \u221aN + (2 + ln(N)) / N, (5)\nwhere N is the number of observations CAi already has (which is similar in magnitude to the number of observations CAj has) and \u03b1_{i,j} is the confidence of rejecting the Wilcoxon null hypothesis. The function in Equation 5 ensures that the number of additional observations to be taken from another CA module increases as the confidence in the similarity with the source of these additional observations increases. At the same time, it ensures that the level of dependency on external observations decreases as the number of direct observations increases. When calculating the parameter \u03b1_{i,j}, we always perform the test over the interval relevant to the originating CA's distribution function. For example, when estimating the cost of interrupting the user, we apply the Wilcoxon test only for observations in the interval that starts from zero and ends slightly to the right of the formerly estimated RV (see Figure 1).\n4. EMPIRICAL SETTING\nWe tested the selective-sharing mechanism in a system that simulates a distributed, Coordinators-like MAS. This testbed environment includes a variable number of agents, each corresponding to a single CA module. Each agent is assigned an external source (simulating an owner) which it periodically samples to obtain a value from the distribution being estimated. The simulation system enabled us to avoid unnecessary inter-agent scheduling and communication overhead (which are an inherent part of the Coordinators environment) and thus to better isolate the performance and effectiveness of the estimation and decision-making mechanisms.\nThe distribution functions used in the experiments (i.e., the distribution functions assigned to each user in the simulated environment) are multi-rectangular shaped.
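The adoption rule of Equation 5 reduces to a one-line function. The sketch below assumes the formula reads O^i_j = N(1 - alpha_{i,j})/sqrt(N) + (2 + ln N)/N, which is one plausible reading of the garbled typesetting of Equation 5; the helper name is hypothetical:

```python
import math

def observations_to_adopt(N, alpha):
    """Equation 5 (as reconstructed): how many of CAj's observations
    CAi adopts, given its own sample size N and the confidence alpha
    for rejecting the Wilcoxon null hypothesis (alpha near 0 = similar)."""
    return N * (1 - alpha) / math.sqrt(N) + (2 + math.log(N)) / N
```

Under this reading, the count grows as the similarity confidence (1 - alpha) grows, while the adopted share relative to the agent's own N shrinks as N grows, matching the two properties claimed for Equation 5. In practice the result would be rounded and the observations sampled without repetition from CAj's database.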
This type of function is ideal for representing empirical distribution functions. It is composed of k rectangles, where each rectangle i is defined over the interval (x_{i−1}, x_i) and represents a probability p_i (with Σ_{i=1}^{k} p_i = 1). For any value x in rectangle i, we can formulate f(x) and F(x) as:

f(x) = p_i / (x_i − x_{i−1}),    F(x) = Σ_{j=1}^{i−1} p_j + (x − x_{i−1}) p_i / (x_i − x_{i−1})    (6)

For example, the multi-rectangular function in Figure 3 depicts a possible interruption cost distribution for a specific user. Each rectangle is associated with one of the user's typical activities, characterized by a set of typical interruption costs. (We assume the distribution of cost within each activity is uniform.) The rectangular area represents the probability of the user being engaged in this type of activity when she is randomly interrupted. Any overlap between the interruption costs of two or more activities results in a new rectangle for the overlapped interval. The user associated with the above distribution function spends most of her time in reporting (notice that this is the largest rectangle in terms of area), an activity associated with a relatively high cost of interruption. The user also spends a large portion of her time in planning (associated with a very high cost of interruption), monitoring her team (with a relatively small interruption cost), and receiving reports (mid-level cost of interruption). The user spends a relatively small portion of her time in scouting the enemy (associated with a relatively high interruption cost) and resting.

Figure 3: Representing interruption cost distribution using a multi-rectangular function

Multi-rectangular functions are modular and allow the representation of any distribution shape by controlling the number and dimensions of the rectangles used.
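The pdf and cdf of Equation 6 can be sketched directly. The class below is our own minimal illustration (the class and method names are assumptions, not the paper's implementation):

```python
import bisect

class MultiRectangular:
    """Multi-rectangular distribution (Equation 6): rectangle i spans
    (edges[i], edges[i+1]) and carries total probability probs[i],
    with density uniform inside each rectangle."""

    def __init__(self, edges, probs):
        assert len(edges) == len(probs) + 1
        assert abs(sum(probs) - 1.0) < 1e-9
        self.edges, self.probs = edges, probs

    def pdf(self, x):
        i = bisect.bisect_right(self.edges, x) - 1
        if i < 0 or i >= len(self.probs):
            return 0.0
        return self.probs[i] / (self.edges[i + 1] - self.edges[i])

    def cdf(self, x):
        if x <= self.edges[0]:
            return 0.0
        if x >= self.edges[-1]:
            return 1.0
        i = bisect.bisect_right(self.edges, x) - 1
        head = sum(self.probs[:i])
        width = self.edges[i + 1] - self.edges[i]
        return head + (x - self.edges[i]) * self.probs[i] / width
```

For example, the type II distribution of Figure 4 (`MultiRectangular([0, 40, 60, 100], [0.25, 0.5, 0.25])`) gives pdf(50) = 0.5/20 = 0.025 and cdf(50) = 0.25 + 10·0.025 = 0.5.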
Furthermore, these functions have computational advantages, mostly due to the ability to re-use many of their components when calculating the optimal reservation value in economic search models. They also fit well with the parameters the CA is trying to estimate in fast-paced domains, because these parameters are mostly influenced by the activities in which the user is engaged.

The testbed system enabled us to define either hand-crafted or automatically generated multi-rectangular distribution functions. At each step of a simulation, each of the CAs samples its owner (i.e., all CAs in the system collect data at a similar rate) and then estimates the parameter (either the expected cost when using the sequential interruption technique described in Section 2, or the probability that the owner will have the required information) using one of the following methods: (a) relying solely on directly observed (self-learning) data; (b) relying on the combined data of all other agents (average-all); and (c) relying on its own data plus selected portions of the other agents' data, based on the selective-sharing mechanism described in Section 3.

5. RESULTS

We present the results in two parts: (1) using a specific sample environment to illustrate the basic behavior of the selective-sharing mechanism; and (2) using general environments that were automatically generated.

206 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

5.1 Sample Environment

To illustrate the gain obtained by using the selective-sharing mechanism, we used an environment of 10 agents, associated with 5 different interruptibility cost distribution function types. The table in Figure 4 details the division of the 10 agents into types, the dimensions of the rectangles that form the distribution functions, and the theoretical mean and reservation value (RV) (following Equation 3) with a cost c = 2 for sensing the interruption cost.
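The three estimation methods compared above differ only in how many external observations each agent pools with its own. A minimal sketch (our own illustration; the helper name and the pooling-by-mean estimator are assumptions):

```python
import statistics

def estimate(own, others, adopt_counts):
    """Pool an agent's own observations with the first k observations
    adopted from each other agent, then estimate the parameter mean.
    adopt_counts[j] = 0 for all j reproduces self-learning;
    adopt_counts[j] = len(others[j]) reproduces average-all; values in
    between (e.g., from Equation 5) give selective-sharing."""
    pooled = list(own)
    for obs, k in zip(others, adopt_counts):
        pooled.extend(obs[:k])
    return statistics.fmean(pooled)
```

The point of selective-sharing is that `adopt_counts` is set per source: large for agents whose distribution passed the Wilcoxon similarity test, zero for the rest.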
Even though the means of the five types are relatively similar, the use of a reservation-value based interruption strategy yields relatively different expected interruption costs (RV, following Equation 2). The histogram in this figure depicts the number of observations obtained for each bin of size 1, out of a sample of 100000 observations taken from each type's distribution function.

Type | Agents  | Rect. | Range  | prob | mean | RV
I    | 1,2     | 1     | 0-20   | 0.40 | 50   | 14.1
     |         | 2     | 20-80  | 0.20 |      |
     |         | 3     | 80-100 | 0.40 |      |
II   | 3,4,5,6 | 1     | 0-40   | 0.25 | 50   | 25.3
     |         | 2     | 40-60  | 0.50 |      |
     |         | 3     | 60-100 | 0.25 |      |
III  | 7       | 1     | 0-80   | 0.10 | 85   | 56.6
     |         | 2     | 80-100 | 0.90 |      |
IV   | 8,9     | 1     | 0-60   | 0.60 | 48   | 20.0
     |         | 2     | 60-90  | 0.40 |      |
V    | 10      | 1     | 0-100  | 1.00 | 50   | 20.0

Figure 4: Users' interruptibility cost distribution functions (5 types)

Figure 5 gives CA performance in estimating the expected cost of interruption when using the reservation-value based interruption initiation technique. Each graph presents the average prediction accuracy (in terms of the absolute deviation from the theoretical value, so the lower the curve the better the performance) of a different type, based on 10000 simulation runs. The three curves in each graph represent the methods being compared (self-learning, average-all, and selective-sharing). The data is given as a function of the accumulated number of observations collected. The sixth graph in the figure is the average for all types, weighted according to the number of agents of each type. Similarly, the following table summarizes the overall average performance of each of the different methods in terms of the absolute deviation from the theoretical value:

Iterations | Self-Learning | Averaging-All | Selective-Sharing | % Improvement³
5          | 20.08         | 8.70          | 9.51              | 53%
15         | 12.62         | 7.84          | 8.14              | 36%
40         | 8.16          | 7.42          | 6.35              | 22%

Table 1: Average absolute error along time

Several observations may be made from Figure 5.
First, although the average-all method may produce relatively good results, it quickly reaches stagnation, while the other two methods exhibit continuous improvement as a function of the amount of accumulated data. For the Figure 4 environment, average-all is a good strategy for agents of types II, IV and V, because the theoretical reservation value of each of these types is close to the one obtained from the aggregated distribution function (i.e., 21.27).⁴ However, for types I and III, for which the optimal RV differs from that value, the average-all method performs significantly worse. Overall, the sixth graph and the table above show that while in this specific environment the average-all method works well in the first interactions, it is quickly outperformed by the selective-sharing mechanism. Furthermore, the more user observations the agents accumulate (i.e., as we extend the horizontal axis), the better the other two methods perform in comparison to average-all.

Figure 5: Average absolute deviation from the theoretical RV in each method (10000 runs)

³ The improvement is measured in percentages relative to the self-learning method.
⁴ The value is obtained by constructing the weighted aggregated distribution function according to the different agents' types and extracting the optimal RV using Equation 3.
In the long run (and as shown in the following subsection for the general case), the average-all method exhibits the worst performance.

Second, the selective-sharing mechanism starts with a significant improvement in comparison to relying on the agent's own observations, and this improvement then gradually decreases until its performance curve finally coincides with the self-learning method's curve. The selective-sharing mechanism performs better or worse depending on the type, because the Wilcoxon test cannot guarantee an exact identification of similarity; different combinations of distribution functions can result in an inability to exactly identify the similar users for some of the specific types. For example, for type I agents, the selective-sharing mechanism actually performs worse than self-learning in the short term (in the long run the two methods' performance converges). Nevertheless, for the other types in our example, the selective-sharing mechanism is the most efficient one, and outperforms the other two methods overall.

Third, it is notable that for agents that have a unique type (e.g., the type III agent), the selective-sharing mechanism quickly converges towards relying on self-collected data. This behavior guarantees that even in scenarios in which users are completely different, the method exhibits a graceful initial degradation but manages, within a few time steps, to adopt the proper behavior of relying exclusively on self-generated data.

Last, despite the difference in their overall distribution functions, agents of types IV and V exhibit similar performance, because the relevant portions of their distribution functions (i.e., the effective parts that affect the RV calculation, as explained in Figure 1) are identical.
Thus, the selective-sharing mechanism enables the agent of type V, despite its unique distribution function, to adopt relevant information collected by agents of type IV, which improves its estimation of the expected interruption cost.

5.2 General Evaluation

To evaluate selective-sharing, we ran a series of simulations in which the environment was randomly generated. These experiments focused on the CAs' estimations of the probability that the user would have the required information if interrupted. They used a multi-rectangular probability distribution function to represent the amount of communication the user is engaged in with its environment. We model the growth of the probability that the user has the required information as a function of the amount of communication using the logistic function,⁵

G(x) = (1 + e^{−x/12}) / (1 + 60 e^{−x/12})    (7)

The expected (mean) value of the parameter representing the probability that the user has the required information is thus

μ = ∫_{0}^{∞} G(y) f(y) dy = Σ_{i=1}^{k} [ (x + 708 ln(60 + e^{x/12})) p_i / (60 (x_i − x_{i−1})) ]_{x_{i−1}}^{x_i}    (8)

where k is the number of rectangles used. We ran 10000 simulation runs. For each simulation, a new 20-agent environment was automatically generated by the system, and the agents were randomly divided into a random number of different types.⁶ For each type, a random 3-rectangle distribution function was generated. Each simulation ran 40 time steps. At each time step, each one of the agents accumulated one additional observation. Each CA calculated an estimate of the probability its user had the necessary information according to the three methods, and the absolute error (difference from the theoretical value calculated according to Equation 8) was recorded.
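Equation 8 follows from integrating G piecewise over each rectangle, since (x + 708·ln(60 + e^{x/12}))/60 is an antiderivative of G. A minimal sketch checking this numerically (function names are ours):

```python
import math

def G(x):
    """Logistic growth of the probability the user has the information
    (Equation 7)."""
    return (1 + math.exp(-x / 12)) / (1 + 60 * math.exp(-x / 12))

def antiderivative(x):
    # Closed-form integral of G used inside Equation 8:
    # ∫ G(y) dy = (y + 708·ln(60 + e^{y/12})) / 60
    return (x + 708 * math.log(60 + math.exp(x / 12))) / 60

def mean_prob(edges, probs):
    """Expected probability the user has the information (Equation 8)
    for a multi-rectangular communication distribution."""
    mu = 0.0
    for i, p in enumerate(probs):
        lo, hi = edges[i], edges[i + 1]
        mu += p * (antiderivative(hi) - antiderivative(lo)) / (hi - lo)
    return mu
```

A quick trapezoid-rule check of `antiderivative` against a direct numerical integration of `G` confirms the closed form, and `mean_prob` always lands in (0, 1) since G does.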
The following table summarizes the average performance of the three mechanisms along different time horizons (measured at 5, 15 and 40 time steps):

Iterations | Self-Learning | Averaging-All | Selective-Sharing | % Improvement
5          | 0.176         | 0.099         | 0.103             | 41.4%
15         | 0.115         | 0.088         | 0.087             | 23.9%
40         | 0.075         | 0.082         | 0.065             | 13.6%

Table 2: Average absolute error along time steps

As can be seen in the table above, the proposed selective-sharing method outperforms the two other methods over any execution in which more than 15 observations are collected by each of the agents. As in the sample environment, the average-all method performs well in the initial few time steps, but does not exhibit further improvement. Thus, the more data collected, the greater the difference between this latter method and the two other methods. The average difference between selective-sharing and self-learning decreases as more data is collected.

Finally, we measured the effect of the number of types in the environment. For this purpose, we used the same self-generation method, but controlled the number of types generated for each run. The number of types is a good indication of the level of heterogeneity in the environment. For each number of types, we ran 10000 simulations. Figure 6 depicts the performance of the different methods (for a 40-observation collection period for each agent).

Since all simulation runs used for generating Figure 6 are based on the same seed, the performance of the self-learning mechanism is constant regardless of the number of types in the environment. As expected, the average-all mechanism performs best when all agents are of the same type; however, its performance deteriorates as the number of types increases.
Similarly, the selective-sharing mechanism exhibits good results when all agents are of the same type, and as the number of types increases, its performance deteriorates. However, the performance decrease is significantly more modest than the one experienced by the average-all mechanism.

⁵ The specific coefficients used guarantee an S-like curve of growth along the interval (0, 100), where the initial stage of growth is approximately exponential, followed by asymptotically slowing growth.
⁶ In this suggested environment-generation scheme there is no guarantee that every agent will have a potentially similar agent to share information with. In those non-rare scenarios where the CA is the only one of its type, it will rapidly need to stop relying on others.

Figure 6: Average absolute deviation from the actual value in 20-agent scenarios as a function of the agents' heterogeneity level

Overall, the selective-sharing mechanism outperforms both other methods for any number of types greater than one.

6. RELATED WORK

In addition to the interruption management literature reviewed in Section 2, several other areas of prior work are relevant to the selective-sharing mechanism described in this paper.

Collaborative filtering, which makes predictions (filtering) about the interests of a user [7], operates similarly to selective-sharing. However, collaborative filtering systems exhibit poor performance when there is not sufficient information about the users, and in particular about a new user whose taste the system attempts to predict [7].

Selective-sharing relies on the ability to find similarity between specific parts of the probability distribution function associated with a characteristic of different users.
This capability is closely related to clustering and classification, an area widely studied in machine learning. Given space considerations, our review of this area is restricted to some representative approaches to clustering. In spite of the richness of available clustering algorithms (such as the well-known K-means clustering algorithm [11], hierarchical methods, Bayesian classifiers [6], and maximum entropy), various characteristics of fast-paced domains do not align well with the features of attribute-based clustering mechanisms, suggesting these mechanisms would not perform well in such domains. Of particular importance is that the CA needs to find similarity between functions defined over a continuous interval, with no distinct pre-defined attributes. An additional difficulty is defining the distance measure.

Many clustering techniques have been used in data mining [2], with particular focus on incremental updates of the clustering, due to the very large size of the databases [3]. However, the applicability of these to fast-paced domains is quite limited, because they rely on a large set of existing data. Similarly, clustering algorithms designed for the task of class identification in spatial databases (e.g., those relying on a density-based notion [4]) are not useful for our case, because our data has no spatial attributes.

The most relevant method for our purposes is the Kullback-Leibler relative entropy index used in probability theory and information theory [12]. This measure, which can also be applied to continuous random variables, relies on a natural distance measure from a true probability distribution (either observation-based or calculated) to an arbitrary probability distribution. However, the method will perform poorly in scenarios in which the functions alternate between different levels while keeping the same general structure and moments.
For example, consider the two functions f(x) = (⌊x⌋ mod 2)/100 and g(x) = (⌊x+1⌋ mod 2)/100, defined over the interval (0, 200). While these two functions are associated with almost identical reservation values (for any sampling cost) and means, the Kullback-Leibler method will assign a poor correlation between them, while our Wilcoxon-based approach will give them the highest rank in terms of similarity.

While the Wilcoxon test is a widely used statistical procedure [22, 14], it is usually used for comparing two sets of single-variate data. To our knowledge, no attempt has been made yet to extend its properties as an infrastructure for determining with whom, and to what extent, information should be shared, as presented in this paper. Typical uses of this non-parametric tool include the detection of rare events in time series (e.g., hard drive failure prediction [17]) and bioinformatics applications (e.g., finding informative genes from microarray data). In these applications, it is used primarily as an identification tool and ranking criterion.

7. DISCUSSION AND CONCLUSIONS

The selective-sharing mechanism presented in this paper does not make any assumptions about the format of the data used or about the structure of the distribution function of the parameter to be estimated. It is computationally lightweight and very simple to execute. Selective-sharing allows an agent to benefit from other agents' observations in scenarios in which data sources of the same type are available. It also guarantees, as a fallback, performance equivalent to that of a self-learner when the information source is unique.
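The Wilcoxon rank-sum (Mann-Whitney) statistic that underlies the similarity test of Section 3 can be sketched in a few lines. This is our own normal-approximation illustration (ties handled by average ranks, tie correction to the variance omitted), not the paper's exact procedure:

```python
import math

def rank_sum_z(xs, ys):
    """Normal-approximation z statistic for the Wilcoxon rank-sum
    (Mann-Whitney) test of whether xs and ys come from the same
    distribution.  A small |z| gives no evidence of a difference,
    i.e., high confidence in similarity."""
    combined = sorted((v, src) for src, seq in ((0, xs), (1, ys)) for v in seq)
    n = len(combined)
    rank_of = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to tied values
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg
        i = j
    w = sum(r for r, (_, src) in zip(rank_of, combined) if src == 0)
    n1, n2 = len(xs), len(ys)
    mean_w = n1 * (n1 + n2 + 1) / 2
    sd_w = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mean_w) / sd_w
```

Interleaved samples give z near zero (similar sources), while fully separated samples push |z| toward significance (distinct sources).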
Furthermore, selective-sharing does not require any prior knowledge about the types of information sources available in the environment or about the number of agents associated with each type.

The results of our simulations demonstrate the selective-sharing mechanism's effectiveness in improving the estimates produced for probabilistic parameters based on a limited set of observations. Furthermore, most of the improvement is achieved in the initial interactions, which is of great importance for agents operating in fast-paced environments. Although we tested the selective-sharing mechanism in the context of the Coordinators project, it is applicable in any MAS environment having the characteristics of a fast-paced environment (e.g., rescue environments). Evidence for its general effectiveness is given in the general evaluation section, where environments were continuously randomly generated.

The Wilcoxon statistic, used as described in this paper to provide a classifier for similarity between users, provides high flexibility with low computational costs and is applicable to any characteristic being learned. Its use provides a good measure of similarity which an agent can use to decide how much external information to adopt for its assessments.

8. ACKNOWLEDGEMENT

The research reported in this paper was supported in part by contract number 55-000720, a subcontract to SRI International's DARPA Contract No. FA8750-05-C-0033. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or the U.S. Government. We are grateful to an anonymous AAMAS reviewer for an exceptionally comprehensive review of this paper.

9. REFERENCES

[1] P. Adamczyk, S. Iqbal, and B. Bailey. A method, system, and tools for intelligent interruption management. In TAMODIA '05, pages 123-126, New York, NY, USA, 2005. ACM Press.
[2] P. Berkhin.
Survey of clustering data mining techniques. Technical report, Accrue Software, San Jose, CA, 2002.
[3] M. Ester, H. Kriegel, J. Sander, M. Wimmer, and X. Xu. Incremental clustering for mining in a data warehousing environment. In Proc. 24th Int. Conf. on Very Large Data Bases (VLDB), pages 323-333, 1998.
[4] M. Ester, H. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD-96, pages 226-231, 1996.
[5] M. Fleming and R. Cohen. A decision procedure for autonomous agents to reason about interaction with humans. In AAAI Spring Symp. on Interaction between Humans and Autonomous Systems over Extended Operation, 2004.
[6] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29:131-163, 1997.
[7] N. Good, J. B. Schafer, J. Konstan, A. Borchers, B. Sarwar, J. Herlocker, and J. Riedl. Combining collaborative filtering with personal agents for better recommendations. In AAAI/IAAI, pages 439-446, 1999.
[8] K. Hinckley, J. Pierce, M. Sinclair, and E. Horvitz. Sensing techniques for mobile interaction. In UIST '00, pages 91-100, New York, NY, USA, 2000. ACM Press.
[9] E. Horvitz, C. Kadie, T. Paek, and D. Hovel. Models of attention in computing and communication: from principles to applications. Commun. ACM, 46(3):52-59, 2003.
[10] B. Hui and C. Boutilier. Who's asking for help?: a Bayesian approach to intelligent assistance. In IUI '06, 2006.
[11] J. Jang, C. Sun, and E. Mizutani. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Prentice Hall, 1997.
[12] S. Kullback and R. Leibler. On information and sufficiency. Ann. Math. Statist., 22:79-86, 1951.
[13] P. Maglio, T. Matlock, C. Campbell, S. Zhai, and B. Smith. Gaze and speech in attentive user interfaces. In ICMI, pages 1-7, 2000.
[14] H. Mann and D. Whitney. On a test of whether one of 2 random variables is stochastically larger than the other. Annals of Mathematical Statistics, 18:50-60, 1947.
[15] W. McClure. Technology and command: Implications for military operations in the twenty-first century. Maxwell Air Force Base, Center for Strategy and Technology, 2000.
[16] J. McMillan and M. Rothschild. Search. In R. J. Aumann and S. Hart, editors, Handbook of Game Theory with Economic Applications, pages 905-927. 1994.
[17] J. Murray, G. Hughes, and K. Kreutz-Delgado. Machine learning methods for predicting failures in hard drives: A multiple-instance application. J. Mach. Learn. Res., 6:783-816, 2005.
[18] D. Sarne and B. J. Grosz. Estimating information value in collaborative multi-agent planning systems. In AAMAS '07, page (to appear), 2007.
[19] D. Sarne and B. J. Grosz. Timing interruptions for better human-computer coordinated planning. In AAAI Spring Symp. on Distributed Plan and Schedule Management, 2006.
[20] R. Vertegaal. The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration. In CHI, pages 294-301, 1999.
[21] T. Wagner, J. Phelps, V. Guralnik, and R. VanRiper. An application view of Coordinators: Coordination managers for first responders. In AAAI, pages 908-915, 2004.
[22] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics, 1:80-83, 1945.
[23] D. Zeng and K. Sycara. Bayesian learning in negotiation. In AAAI Symposium on Adaptation, Co-evolution and Learning in Multiagent Systems, pages 99-104, 1996.
[24] Y. Zhang, K. Biggers, L. He, S. Reddy, D. Sepulvado, J. Yen, and T. Ioerger. A distributed intelligent agent architecture for simulating aggregate-level behavior and interactions on the battlefield. In SCI-2001, pages 58-63, 2001.
Keywords: adjustable autonomy; interruption management; decision making; learning mechanism; probabilistic parameter; agent; information sharing; parameter estimation; selective-sharing; fast-paced environment; multi-agent distributed system
A Reinforcement Learning based Distributed Search Algorithm For Hierarchical Peer-to-Peer Information Retrieval Systems

Abstract: The dominant existing routing strategies employed in peer-to-peer (P2P) based information retrieval (IR) systems are similarity-based approaches. In these approaches, agents depend on the content similarity between incoming queries and their direct neighboring agents to direct the distributed search sessions. However, such a heuristic is myopic in that the neighboring agents may not be connected to more relevant agents. In this paper, an online reinforcement-learning based approach is developed to take advantage of the dynamic run-time characteristics of P2P IR systems as represented by information about past search sessions. Specifically, agents maintain estimates of the downstream agents' abilities to provide relevant documents for incoming queries. These estimates are updated gradually by learning from the feedback information returned from previous search sessions. Based on this information, the agents derive corresponding routing policies. Thereafter, these agents route the queries based on the learned policies and update the estimates based on the new routing policies. Experimental results demonstrate that the learning algorithm considerably improves the routing performance on two test collection sets that have been used in a variety of distributed IR studies.

1. INTRODUCTION

Over the last few years there has been increasing interest in studying how to control the search processes in peer-to-peer (P2P) based information retrieval (IR) systems [6, 13, 14, 15]. In this line of research, one of the core problems that concerns researchers is how to efficiently route user queries in the network to agents that are in possession of appropriate documents.
In the absence of global information, the dominant strategies for addressing this problem are content-similarity based approaches [6, 13, 14, 15]. While the content similarity between queries and local nodes appears to be a credible indicator of the number of relevant documents residing on each node, these approaches are limited by a number of factors. First of all, similarity-based metrics can be myopic, since locally relevant nodes may not be connected to other relevant nodes. Second, the similarity-based approaches do not take into account the run-time characteristics of the P2P IR systems, including environmental parameters, bandwidth usage, and the historical information of past search sessions, which provide valuable information for the query routing algorithms.

In this paper, we develop a reinforcement learning based IR approach for improving the performance of distributed IR search algorithms. Agents can acquire better search strategies by collecting and analyzing feedback information from previous search sessions. In particular, agents maintain estimates, namely expected utilities, of the downstream agents' capabilities of providing relevant documents for specific types of incoming queries. These estimates are updated gradually by learning from the feedback information returned from previous search sessions. Based on the updated expected utility information, the agents derive corresponding routing policies. Thereafter, these agents route the queries based on the learned policies and update the estimates of the expected utility based on the new routing policies. This process is conducted in an iterative manner. The goal of the learning algorithm, even though it consumes some network bandwidth, is to shorten the routing time so that more queries are processed per time unit, while at the same time finding more relevant documents.
This contrasts with the content-similarity based approaches, where similar operations are repeated for every incoming query and the processing time remains largely constant over time.

Another way of viewing this paper is that our basic approach to distributed IR search is to construct a hierarchical overlay network (agent organization) based on the content-similarity measure among agents' document collections in a bottom-up fashion. In past work, we have shown that this organization improves search performance significantly. However, this organizational structure does not take into account the arrival patterns of queries, including their frequency, types, and where they enter the system, nor the available communication bandwidth of the network and the processing capabilities of individual agents. The intention of the reinforcement learning is to adapt the agents' routing decisions to the dynamic network situations and to learn from past search sessions. Specifically, the contributions of this paper include: (1) a reinforcement learning based approach for agents to acquire satisfactory routing policies based on estimates of the potential contribution of their neighboring agents; and (2) two strategies to speed up the learning process. To the best of our knowledge, this is one of the first reinforcement learning applications addressing distributed content sharing problems, and it is indicative of some of the issues in applying reinforcement learning in a complex application.

The remainder of this paper is organized as follows: Section 2 reviews hierarchical content sharing systems and the two-phase search algorithm based on such a topology. Section 3 describes a reinforcement learning based approach to direct the routing process. Section 4 details the experimental settings and analyzes the results. Section 5 discusses related studies and Section 6 concludes the paper.

2.
SEARCH IN HIERARCHICAL P2P IR SYSTEMS

This section briefly reviews our basic approach to hierarchical P2P IR systems. In a hierarchical P2P IR system, illustrated in Fig. 1, agents are connected to each other through three types of links: upward links, downward links, and lateral links. In the following sections, we denote the set of agents that are directly connected to agent Ai as DirectConn(Ai), which is defined as

DirectConn(Ai) = NEI(Ai) ∪ PAR(Ai) ∪ CHL(Ai)

where NEI(Ai) is the set of neighboring agents connected to Ai through lateral links, PAR(Ai) is the set of agents whom agent Ai is connected to through upward links, and CHL(Ai) is the set of agents that agent Ai connects to through downward links. These links are established through a bottom-up, content-similarity based distributed clustering process [15]. These links are then used by agents to locate other agents that contain documents relevant to the given queries.

A typical agent Ai in our system uses two queues: a local search queue LSi and a message forwarding queue MFi. The states of the two queues constitute the internal state of an agent. The local search queue LSi stores search sessions that are scheduled for local processing. It is a priority queue, and agent Ai always selects the most promising queries to process in order to maximize the global utility. MFi consists of a set of queries to forward on, and is processed in a FIFO (first in, first out) fashion. For the first query in MFi, agent Ai determines which subset of its neighboring agents to forward it to, based on the agent's routing policy πi. These routing decisions determine how the search process is conducted in the network. In this paper, we call Ai Aj's upstream agent and Aj Ai's downstream agent if
In this paper, we call A_i the upstream agent of A_j, and A_j the downstream agent of A_i, if agent A_i routes a query to agent A_j.

Figure 1: A fraction of a hierarchical P2P IR system (for agent A2: NEI(A2) = {A3}, PAR(A2) = {A1}, CHL(A2) = {A4, A5})

The distributed search protocol of our hierarchical agent organization is composed of two steps. In the first step, upon receipt of a query q_k at time t_l from a user, agent A_i initiates a search session s_i by probing its neighboring agents A_j ∈ NEI(A_i) with the message PROBE for the similarity value Sim(q_k, A_j) between q_k and A_j. Here, A_i is defined as the query initiator of search session s_i. In the second step, A_i selects a group of the most promising agents to start the actual search process with the message SEARCH. These SEARCH messages contain a TTL (Time To Live) parameter in addition to the query. The TTL value decreases by 1 after each hop. In the search process, agents discard those queries that either have been previously processed or whose TTL drops to 0, which prevents queries from looping in the system forever. The search session ends when all the agents that receive the query drop it or the TTL decreases to 0. Upon receipt of SEARCH messages for q_k, agents schedule local activities, including searching locally, forwarding q_k to their neighbors, and returning search results to the query initiator. This process and the related algorithms are detailed in [15, 14].

3. A BASIC REINFORCEMENT LEARNING BASED SEARCH APPROACH

In the aforementioned distributed search algorithm, the routing decisions of an agent A_i rely on the similarity comparison between incoming queries and A_i's neighboring agents, in order to forward those queries to relevant agents without flooding the network with unnecessary query messages. However, this heuristic is myopic, because a relevant direct neighbor is not necessarily connected to other relevant agents.
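The two-step protocol reviewed above (probe, then TTL-bounded search) can be sketched roughly as follows. The topology, similarity function, and fan-out parameter are made-up illustrations, and real agents run this asynchronously via PROBE/SEARCH messages rather than in a single loop:

```python
def two_step_search(initiator, neighbors_of, similarity, query, ttl, fanout=2):
    """Step 1: rank the initiator's lateral neighbors by Sim(q, A_j).
    Step 2: flood SEARCH from the best ones, decrementing TTL per hop."""
    probed = sorted(neighbors_of[initiator],
                    key=lambda a: similarity(query, a), reverse=True)
    start_points = probed[:fanout]

    visited = set()    # agents that have already processed the query
    searched = []      # order in which agents run their local search
    frontier = [(a, ttl) for a in start_points]
    while frontier:
        agent, t = frontier.pop(0)
        if agent in visited or t <= 0:
            continue   # drop duplicates and expired queries
        visited.add(agent)
        searched.append(agent)
        # Each hop decreases the TTL by 1 before forwarding onward.
        frontier.extend((nb, t - 1) for nb in neighbors_of[agent])
    return searched
```

The duplicate-and-TTL check in the loop is exactly what prevents queries from looping in the system forever.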
In this section, we propose a more general approach by framing this problem as a reinforcement learning task. In pursuit of greater flexibility, agents can switch between two modes: learning mode and non-learning mode. In the non-learning mode, agents operate in the same way as they do in the normal distributed search processes described in [14, 15]. In the learning mode, on the other hand, in parallel with distributed search sessions, agents also participate in a learning process, which will be detailed in this section. Note that under the learning protocol, the learning process does not interfere with the distributed search process: agents can choose to initiate and stop learning processes without affecting system performance. In particular, since the learning process consumes network resources (especially bandwidth), agents can choose to initiate learning only when the network load is relatively low, thus minimizing the extra communication costs incurred by the learning algorithm.

The section is structured as follows. Section 3.1 describes a reinforcement learning based model. Section 3.2 describes a protocol to deploy the learning algorithm in the network. Section 3.3 discusses the convergence of the learning algorithm.

232 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

3.1 The Model

An agent's routing policy takes the state of a search session as input and outputs the routing actions for that query. In our work, the state of a search session s_j is stipulated as:

QS_j = (q_k, ttl_j)

where ttl_j is the number of hops that remain for the search session s_j, and q_k is the specific query. QL is an attribute of q_k that indicates which type of queries q_k most likely belongs to. The set of QL can be generated by running a simple online classification algorithm on all the queries that have been processed by the agents, or an offline algorithm on a pre-designated training set.
The assumption here is that the set of query types is learned ahead of time and belongs to the common knowledge of the agents in the network. Future work includes exploring how learning can be accomplished when this assumption does not hold. Given the set of query types, an incoming query q_i can be classified to one query class Q(q_i) by the formula:

Q(q_i) = arg max_{Q_j} P(q_i | Q_j)    (1)

where P(q_i | Q_j) indicates the likelihood that the query q_i is generated by the query class Q_j [8].

The set of atomic routing actions of an agent A_i is denoted as {α_i} = {α_i0, α_i1, ..., α_in}. An element α_ij represents an action to route a given query to the neighboring agent A_ij ∈ DirectConn(A_i). The routing policy π_i of agent A_i is stochastic, and its outcome for a search session with state QS_j is defined as:

π_i(QS_j) = {(α_i0, π_i(QS_j, α_i0)), (α_i1, π_i(QS_j, α_i1)), ...}    (2)

Note that the operator π_i is overloaded to represent either the probabilistic policy for a search session with state QS_j, denoted as π_i(QS_j), or the probability of forwarding the query to a specific neighboring agent A_ik ∈ DirectConn(A_i) under that policy, denoted as π_i(QS_j, α_ik). Therefore, equation (2) means that the probability of forwarding the search session to agent A_i0 is π_i(QS_j, α_i0), and so on. Under this stochastic policy, the routing action is non-deterministic. The advantage of such a strategy is that the best neighboring agents will not be selected repeatedly, thereby mitigating potential hot-spot situations.

The expected utility, U_i^n(QS_j), is used to estimate the potential utility gain of routing a query with state QS_j to agent A_i under policy π_i^n. The superscript n indicates the value at the n-th iteration in an iterative learning process.
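The query classification of equation (1) can be illustrated with a minimal sketch; the unigram log-likelihood model and the toy term distributions below are assumptions, not the paper's exact estimator:

```python
import math

def query_class(query_terms, class_models):
    """Return the class Q_j maximizing P(q_i | Q_j), as in equation (1).
    Each class model maps term -> probability; log space avoids underflow."""
    def log_likelihood(model):
        # Unseen terms get a small smoothing probability.
        return sum(math.log(model.get(t, 1e-6)) for t in query_terms)
    return max(class_models, key=lambda c: log_likelihood(class_models[c]))
```

For example, with two toy classes, a query containing 'stock' and 'price' would be assigned to the class whose term distribution makes those terms most likely.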
The expected utility provides routing guidance for future search sessions. In the search process, each agent A_i maintains partial observations of its neighbors' states, as shown in Fig. 2. The partial observations include non-local information such as the potential utility estimate of its neighbor A_m for query state QS_j, denoted as U_m(QS_j), as well as the load information, L_m. These observations are updated periodically by the neighbors. The estimated utility information will be used to update A_i's expected utility for its routing policy.

Figure 2: Agent A_i's partial observation about its neighbors (A_0, A_1, ...): the expected utilities U_k^n(QS_j) for the different query states and the load information L_k^n

The load information of A_m, L_m, is defined as

L_m = |MF_m| / C_m

where |MF_m| is the length of the message-forwarding queue and C_m is the service rate of agent A_m's message-forwarding queue. L_m therefore characterizes the utilization of an agent's communication channel, and thus provides non-local information for A_m's neighbors to adjust the parameters of their routing policies so as to avoid inundating their downstream agents. Note that, depending on the characteristics of the queries entering the system and the agents' capabilities, the load on agents may not be uniform.
After collecting the utilization rate information from all its neighbors, agent A_i computes L_i as a single measure for assessing the average load condition of its neighborhood:

L_i = (Σ_k L_k) / |DirectConn(A_i)|

Agents exploit the L_i value in determining the routing probabilities in their routing policies.

Note that, as described in Section 3.2, information about neighboring agents is piggybacked on the query messages propagated among the agents whenever possible, to reduce the traffic overhead.

3.1.1 Update the Policy

An iterative update process is introduced for agents to learn a satisfactory stochastic routing policy. In this iterative process, agents update their estimates of the potential utility of their current routing policies and then propagate the updated estimates to their neighbors. The neighbors then generate new routing policies based on the updated observations, in turn recalculate their expected utilities based on the new policies, and continue this iterative process.

In particular, at time n, given a set of expected utilities, an agent A_i whose directly connected agent set is DirectConn(A_i) = {A_i0, ..., A_im} determines its stochastic routing policy for a search session of state QS_j based on the following steps:

(1) A_i first selects a subset of agents from DirectConn(A_i) as the potential downstream agents, denoted as PD^n(A_i, QS_j). The size of the potential downstream agent set is specified as

|PD^n(A_i, QS_j)| = min(|NEI(A_i)|, d_i^n + k)

where k is a constant, set to 3 in this paper, and d_i^n, the forward width, is defined as the expected number of neighboring agents that agent A_i can forward to at time n.
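The two load measures above, L_m = |MF_m| / C_m for a single agent and the neighborhood average L_i, reduce to a couple of lines; the queue lengths and service rates in the usage are illustrative numbers:

```python
def agent_load(queue_length, service_rate):
    """L_m: utilization of an agent's communication channel,
    computed as message-forwarding queue length over service rate."""
    return queue_length / service_rate

def neighborhood_load(neighbor_loads):
    """L_i: the mean load over an agent's directly connected agents."""
    return sum(neighbor_loads) / len(neighbor_loads)
```

An agent with 10 queued messages and a service rate of 5 per time unit has load 2.0; averaging such values over DirectConn(A_i) gives the single measure L_i used by the policy update below.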
This formula specifies that the potential downstream agent set PD^n(A_i, QS_j) is either the subset of neighboring agents with the d_i^n + k highest expected utility values for state QS_j among all the agents in DirectConn(A_i), or all the neighboring agents. The constant k is introduced based on the idea of a stochastic routing policy: it makes the forwarding probability of each of the d_i^n + k highest-utility agents less than 100%. Note that if we want to limit the number of downstream agents for search session s_j to 5, the probabilities of forwarding the query to all neighboring agents should add up to 5. Setting the d_i^n value properly can improve the utilization of network bandwidth when much of the network is idle, while mitigating the traffic load when the network is highly loaded. The d_i^{n+1} value is updated based on d_i^n and the previous and current observations of the traffic situation in the neighborhood. Specifically, the update formula for d_i^{n+1} is

d_i^{n+1} = d_i^n * (1 + (1 - L_i) / |DirectConn(A_i)|)

In this formula, the forward width is updated based on the traffic conditions of agent A_i's neighborhood, i.e., L_i, and its previous value.

(2) For each agent A_ik in PD^n(A_i, QS_j), the probability of forwarding the query to A_ik is determined in the following way, in order to assign higher forwarding probability to the neighboring agents with higher expected utility values:

π_i^{n+1}(QS_j, α_ik) = d_i^{n+1} / |PD^n(A_i, QS_j)| + β * (U_ik(QS'_j) - PDU^n(A_i, QS_j) / |PD^n(A_i, QS_j)|)    (3)

where

PDU^n(A_i, QS_j) = Σ_{o ∈ PD^n(A_i, QS_j)} U_o(QS'_j)

and QS'_j is the subsequent state of the search session after agent A_i forwards the session with state QS_j to its neighboring agent A_ik; if QS_j = (q_k, ttl_0), then QS'_j = (q_k, ttl_0 - 1).

In formula (3), the first term on the right of the equation, d_i^{n+1} / |PD^n(A_i, QS_j)|, is used to determine the forwarding probability by equally distributing the forward width d_i^{n+1}
to the agents in the PD^n(A_i, QS_j) set. The second term is used to adjust the probability of being chosen so that agents with higher expected utility values will be favored. β is determined according to:

β = min( (m - d_i^{n+1}) / (m * u_max - PDU^n(A_i, QS_j)), d_i^{n+1} / (PDU^n(A_i, QS_j) - m * u_min) )    (4)

where m = |PD^n(A_i, QS_j)|,

u_max = max_{o ∈ PD^n(A_i, QS_j)} U_o(QS'_j)

and

u_min = min_{o ∈ PD^n(A_i, QS_j)} U_o(QS'_j)

This choice of β guarantees that the final π_i^{n+1}(QS_j, α_ik) values are well defined, i.e.,

0 ≤ π_i^{n+1}(QS_j, α_ik) ≤ 1

and

Σ_k π_i^{n+1}(QS_j, α_ik) = d_i^{n+1}

However, such a solution does not explore all the possibilities. In order to balance exploitation and exploration, a λ-greedy approach is taken. In the λ-greedy approach, in addition to assigning higher probability to those agents with higher expected utility values, as in equation (3), agents that appear to be not-so-good choices will also be sent queries based on a dynamic exploration rate. In particular, for agents in the set PD^n(A_i, QS_j), π_i^{n+1}(QS_j) is determined in the same way as above, with the only difference being that d_i^{n+1} is replaced with d_i^{n+1} * (1 - λ_n). The remaining search bandwidth is used for learning by assigning probability evenly to the agents A_ik in the set DirectConn(A_i) - PD^n(A_i, QS_j):

π_i^{n+1}(QS_j, α_ik) = (d_i^{n+1} * λ_n) / |DirectConn(A_i) - PD^n(A_i, QS_j)|    (5)

where PD^n(A_i, QS_j) ⊂ DirectConn(A_i). Note that the exploration rate λ is not a constant: it decreases over time.
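Steps (1)-(2) can be sketched as follows. The first helper computes the probabilities of equation (3), with β chosen as in equation (4) so that every probability stays in [0, 1] and the probabilities sum to the forward width; the degenerate case where all utilities are equal (both denominators of β vanish) falls back to a uniform split, which is an assumption the text does not spell out. The forward-width update for d_i^{n+1} is included as well; all utility values and loads in the usage are illustrative:

```python
def routing_policy(utilities, d):
    """Equation (3): map each agent in PD^n(A_i, QS_j) to its forwarding
    probability, given its expected utility U(QS'_j) and forward width d."""
    m = len(utilities)
    pdu = sum(utilities.values())            # PDU^n(A_i, QS_j)
    u_max, u_min = max(utilities.values()), min(utilities.values())
    if u_max == u_min:
        beta = 0.0                           # degenerate case: uniform split
    else:
        # Equation (4): the tighter of the two bounds keeps every
        # probability within [0, 1].
        beta = min((m - d) / (m * u_max - pdu),
                   d / (pdu - m * u_min))
    return {a: d / m + beta * (u - pdu / m) for a, u in utilities.items()}

def update_forward_width(d, avg_load, n_conn):
    """d_i^{n+1} = d_i^n * (1 + (1 - L_i) / |DirectConn(A_i)|): grow the
    fan-out when the neighborhood is idle, shrink it when overloaded."""
    return d * (1 + (1 - avg_load) / n_conn)
```

With utilities {A: 3.0, B: 1.0} and d = 1, β = 0.5 and the policy puts all probability on A while the probabilities still sum to d, matching the two guarantees stated after equation (4).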
The \u03bb is determined according to the following\nequation:\n\u03bbn+1 = \u03bb0 \u2217 e\u2212c1n\n(6)\nwhere \u03bb0 is the initial exploration rate, which is a\nconstant; c1 is also a constant to adjust the decreasing rate of\nthe exploration rate; n is the current time unit.\n3.1.2 Update Expected Utility\nOnce the routing policy at step n+1, \u03c0n+1\ni , is determined\nbased on the above formula, agent Ai can update its own\nexpected utility, Un+1\ni (QSi), based on the the updated routing\npolicy resulted from the formula 5 and the updated U values\nof its neighboring agents. Under the assumption that after a\nquery is forwarded to Ai\"s neighbors the subsequent search\nsessions are independent, the update formula is similar to\nthe Bellman update formula in Q-Learning:\nUn+1\ni (QSj) = (1 \u2212 \u03b8i) \u2217 Un\ni (QSj) +\n\u03b8i \u2217 (Rn+1\ni (QSj) +\nX\nk\n\u03c0n+1\ni (QSj, \u03b1ik )Un\nk (QSj)) (7)\nwhere QSj = (Qj, ttl \u2212 1) is the next state of QSj =\n(Qj, ttl); Rn+1\ni (QSj) is the expected local reward for query\nclass Qk at agent Ai under the routing policy \u03c0n+1\ni ; \u03b8i is the\ncoefficient for deciding how much weight is given to the old\nvalue during the update process: the smaller \u03b8i value is, the\nfaster the agent is expected to learn the real value, while the\ngreater volatility of the algorithm, and vice versa. Rn+1\n(s)\nis updated according to the following equation:\n234 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nRn+1\ni (QSj) = Rn\ni (QSj)\n+\u03b3i \u2217 (r(QSj) \u2212 Rn\ni (QSj)) \u2217 P(qj|Qj ) (8)\nwhere r(QSj) is the local reward associated with the search\nsession. 
P(q_j | Q_j) indicates how relevant the query q_j is to the query type Q_j, and γ_i is the learning rate for agent A_i. Depending on the similarity between a specific query q_j and its corresponding query type Q_j, the local reward associated with the search session has a different impact on the estimate of R_i^n(QS_j). In the above formula, this impact is reflected by the coefficient P(q_j | Q_j).

3.1.3 Reward function

After a search session stops because its TTL value expires, all search results are returned to the user and compared against the relevance judgment. Given the set of returned search results SR, the reward Rew(SR) is defined as:

Rew(SR) = 1 if |Rel(SR)| > c, and Rew(SR) = |Rel(SR)| / c otherwise,

where Rel(SR) is the set of relevant documents in the search results. This equation specifies that users give a reward of 1.0 if the number of returned relevant documents reaches a predefined number c; otherwise, the reward is proportional to the number of relevant documents returned. The rationale for setting up such a cut-off value is that, in the real world, the importance of recall decreases with the abundance of relevant documents; therefore users tend to focus on only a limited number of search results.

The details of the actual routing protocol will be introduced in Section 3.2, where we describe how the learning algorithm is deployed in real systems.

3.2 Deployment of the Learning Algorithm

This section describes how the learning algorithm can be used in either a single-phase or a two-phase search process. In the single-phase search algorithm, search sessions start from the initiators of the queries. In contrast, in the two-phase search algorithm, the query initiator first attempts to seek a more appropriate starting point for the query by introducing an exploratory step, as described in Section 2.
Despite the difference in the quality of starting points, the major part of the learning process is largely the same for the two algorithms, as described in the following paragraphs.

Before learning starts, each agent initializes the expected utility values for all possible states to 0. Thereafter, upon receipt of a query, in addition to the normal operations described in the previous section, an agent A_i also sets up a timer to wait for the search results returned from its downstream agents. Once the timer expires or it has received responses from all its downstream agents, A_i merges the search results accrued from its downstream agents and forwards them to its upstream agent. Setting up the timer speeds up the learning, because agents can avoid waiting too long for the downstream agents to return search results. Note that the detailed results and the corresponding agent information are still stored at A_i until the feedback information is passed down from its upstream agent and the performance of its downstream agents can be evaluated. The duration of the timer is related to the TTL value. In this paper, we set the timer to

t_timer = ttl_i * 2 + t_f

where ttl_i * 2 accounts for the round-trip travel time of the queries in the network, and t_f is the expected time period that users are willing to wait.

The search results will eventually be returned to the search session initiator A_0. They will be compared to the relevance judgment that is provided by the final users (as described in the experiment section, the relevance judgment for the query set is provided along with the data collections). The reward will then be calculated and propagated backward to the agents along the path the search results were passed. This is a reverse process of the search results propagation.
In the process of propagating the reward backward, agents update the estimates of their own potential utility values, generate an up-to-date policy, and pass the updated results to their neighboring agents based on the algorithm described in Section 3. Upon a change of its expected utility value, agent A_i sends out its updated utility estimate to its neighbors so that they can act upon the changed expected utility and the corresponding state. This update message includes the potential reward as well as the corresponding state QS_i = (q_k, ttl_l) of agent A_i. Each neighboring agent A_j reacts to this kind of update message by updating the expected utility value for state QS_j = (q_k, ttl_l + 1) according to the newly announced expected utility value. Once they complete the update, the agents again in turn inform the related neighbors to update their values. This process goes on until the TTL value in the update message increases to the TTL limit.

To speed up the learning process, while updating the expected utility values of an agent A_i's neighboring agents, we specify that

U_m(Q_k, ttl_0) >= U_m(Q_k, ttl_1) whenever ttl_0 > ttl_1

Thus, when agent A_i receives an updated expected utility value with ttl_1, it also updates the expected utility values for any ttl_0 > ttl_1 with U_m(Q_k, ttl_0) < U_m(Q_k, ttl_1), to speed up convergence. This heuristic is based on the fact that the utility of a search session is a non-decreasing function of the remaining TTL.

3.3 Discussion

In formalizing the content routing system as a learning task, many assumptions are made. In real systems, these assumptions may not hold, and thus the learning algorithm may not converge. Two problems are of particular note.

(1) This content routing problem does not have the Markov property. In contrast to IP-level packet routing, the routing decision of each agent for a particular search session s_j depends on the routing history of s_j.
Therefore, the assumption that all subsequent search sessions are independent does not hold in reality. This may lead to a double-counting problem: the relevant documents of some agents may be counted more than once for states where the TTL value is greater than 1. However, in the context of hierarchical agent organizations, two factors mitigate this problem. First, the agents in each content group form a tree-like structure; in the absence of cycles, the estimates inside the tree will be close to the accurate values. Second, the stochastic nature of the routing policy partly remedies this problem.

(2) Another challenge for this learning algorithm is that, in a real network environment, observations of neighboring agents may not be updated in time due to communication delays or other situations. In addition, when neighboring agents update their estimates at the same time, oscillation may arise during the learning process [1].

This paper explores several approaches to speed up the learning process. Besides the aforementioned strategy of updating the expected utility values, we also employ an active update strategy, where agents notify their neighbors whenever their expected utility is updated; thus a faster convergence speed can be achieved. This strategy contrasts with lazy update, where agents only echo their expected utility changes to their neighboring agents when they exchange information. The trade-off between the two approaches is network load versus learning speed.

The advantage of this learning algorithm is that, once a routing policy is learned, agents do not have to repeatedly compare the similarity of queries as long as the network topology remains unchanged. Instead, agents just have to determine the classification of the query properly and follow the learned policies.
The disadvantage of this learning-based approach is that the learning process needs to be conducted whenever the network structure changes. There are many potential extensions to this learning model. For example, a single measure is currently used to indicate the traffic load of an agent's neighborhood; a simple extension would be to keep track of the individual load of each neighbor of the agent.

4. EXPERIMENT SETTINGS AND RESULTS

The experiments are conducted on the TRANO simulation toolkit with two datasets, TREC-VLC-921 and TREC-123-100. The following subsections introduce the TRANO testbed, the datasets, and the experimental results.

4.1 TRANO Testbed

TRANO (Task Routing on Agent Network Organization) is a multi-agent based network information retrieval testbed. TRANO is built on top of Farm [4], a time-based distributed simulator that provides a data dissemination framework for large-scale distributed agent network organizations. TRANO supports the importation and exportation of agent organization profiles, including topological connections and other features. Each TRANO agent is composed of an agent view structure and a control unit. In simulation, each agent is pulsed regularly; the agent then checks its incoming message queues, performs local operations, and forwards messages to other agents.

4.2 Experimental Settings

In our experiments, we use two standard datasets, TREC-VLC-921 and TREC-123-100, to simulate the collections hosted on agents. The TREC-VLC-921 and TREC-123-100 datasets were created by the U.S. National Institute of Standards and Technology (NIST) for its TREC conferences. In the distributed information retrieval domain, the two data collections are split into 921 and 100 sub-collections, respectively.
It is observed from the statistics of the two data collections listed in [13] that dataset TREC-VLC-921 is more heterogeneous than TREC-123-100 in terms of source, document length, and relevant document distribution. Hence, TREC-VLC-921 is much closer to real document distributions in P2P environments. Furthermore, TREC-123-100 is split into two sets of sub-collections in two ways: randomly and by source. The two partitions are denoted as TREC-123-100-Random and TREC-123-100-Source, respectively. The documents in each sub-collection of dataset TREC-123-100-Source are more coherent than those in TREC-123-100-Random. The two different sets of partitions allow us to observe how the distributed learning algorithm is affected by the homogeneity of the collections.

Figure 3: ARSS (average reward per search session) versus the number of search sessions for 1-phase search in TREC-VLC-921 (SSLA-921 vs. SSNA-921)

Figure 4: ARSS (average reward per search session) versus the number of search sessions for 2-phase search in TREC-VLC-921 (TSLA-921 vs. TSNA-921)

The hierarchical agent organization is generated by the algorithm described in our previous work [15]. During the topology generation process, the degree information of each agent is estimated by the algorithm introduced by Palmer et al. [9] with parameters α = 0.5 and β = 0.6.
In our experiments, we estimate the upward degree limit and the downward degree limit using linear discount factors of 0.5, 0.8, and 1.0. Once the topology is built, queries randomly selected from query set 301-350 on TREC-VLC-921 and query set 1-50 on TREC-123-100-Random and TREC-123-100-Source are injected into the system following a Poisson distribution:

P(N(t) = n) = ((λt)^n / n!) * e^{-λt}

In addition, we assume that all agents have an equal chance of getting queries from the environment, i.e., λ is the same for every agent. In our experiments, λ is set to 0.0543, so that the mean number of incoming queries from the environment to the agent network is 50 per time unit. The service times for the communication queue and the local search queue are set to 0.01 and 0.05 time units, respectively. In our experiments, there are ten types of queries, acquired by clustering the query sets 301-350 and 1-50.

Figure 5: The cumulative utility versus the number of search sessions for TREC-VLC-921 (TSLA-921, SSNA-921, SSLA-921, TSNA-921)

4.3 Results analysis and evaluation

Figure 3 demonstrates the ARSS (average reward per search session) versus the number of incoming queries over time for the Single-Step Non-learning Algorithm (SSNA) and the Single-Step Learning Algorithm (SSLA) on data collection TREC-VLC-921. It shows that the average reward for the SSNA algorithm ranges from 0.02 to 0.06 and that its performance changes little over time. The average reward for the SSLA approach starts at the same level as that of the SSNA algorithm.
But the performance increases over time, and the average performance gain stabilizes at about 25% after the query range 2000-3000.

Figure 4 shows the ARSS (average reward per search session) versus the number of incoming queries over time for the Two-Step Non-learning Algorithm (TSNA) and the Two-Step Learning Algorithm (TSLA) on data collection TREC-VLC-921. The TSNA approach has a relatively consistent performance, with the average reward ranging from 0.05 to 0.15. The average reward for the TSLA approach, which exploits the learning algorithm, starts at the same level as that of the TSNA algorithm and improves over time until 2000-2500 queries have joined the system. The results show that the average performance gain of the TSLA approach over the TSNA approach is 35% after stabilization.

Figure 5 shows the cumulative utility versus the number of incoming queries over time for SSNA, SSLA, TSNA, and TSLA, respectively. It illustrates that the cumulative utility of the non-learning algorithms increases largely linearly over time, while the gains of the learning-based algorithms accelerate as more queries enter the system. These experimental results demonstrate that the learning-based approaches consistently perform better than the non-learning based routing algorithms. Moreover, the two-phase learning based algorithm is better than the single-phase learning based algorithm, because the maximal reward an agent can receive from searching its neighborhood within TTL hops is bounded by the total number of relevant documents in that area. Thus, even the optimal routing policy can do little beyond reaching these relevant documents faster. In contrast, the two-step based learning algorithm can relocate the search session to a neighborhood with more relevant documents.
The TSLA combines the merits of both approaches and outperforms them.

Table 1 lists the cumulative utility for datasets TREC-123-100-Random and TREC-123-100-Source with hierarchical organizations. The five columns show the results for four different approaches. In particular, column TSNA-Random shows the results for dataset TREC-123-100-Random with the TSNA approach, and column TSLA-Random shows the results for dataset TREC-123-100-Random with the TSLA approach. There are two numbers in each cell of the column TSLA-Random: the first is the actual cumulative utility, while the second is the percentage gain in utility over the TSNA approach. Columns TSNA-Source and TSLA-Source show the results for dataset TREC-123-100-Source with the TSNA and TSLA approaches, respectively. Table 1 shows that the performance improvement for TREC-123-100-Random is not as significant as for the other datasets. This is because the documents in the sub-collections of TREC-123-100-Random are selected randomly, which makes the collection model, the signature of the collection, less meaningful. Since both algorithms are designed based on the assumption that document collections can be well represented by their collection models, this result is not surprising.

Overall, Figures 4 and 5 and Table 1 demonstrate that the reinforcement learning based approach can considerably enhance the system performance for both data collections. However, it remains future work to discover the correlation between the magnitude of the performance gains and the size of the data collection and/or the extent of the heterogeneity among the sub-collections.

5. RELATED WORK

The content routing problem differs from network-level routing in packet-switched communication networks in that content-based routing occurs in application-level networks.
In addition, the destination agents in our content-routing algorithms are multiple, and their addresses are not known during the routing process. IP-level routing problems have been attacked from the reinforcement learning perspective [2, 5, 11, 12]. These studies have explored fully distributed algorithms that are able, without central coordination to disseminate knowledge about the network, to find the shortest paths robustly and efficiently in the face of changing network topologies and changing link costs. There are two major classes of adaptive, distributed packet routing algorithms in the literature: distance-vector algorithms and link-state algorithms. While this line of studies carries a certain similarity to our work, it has mainly focused on packet-switched communication networks. In this domain, the destination of a packet is deterministic and unique. Each agent maintains estimates, probabilistic or deterministic, of the distance to a certain destination through its neighbors. A variant of Q-learning techniques is deployed to update the estimates so that they converge to the real distances.

Table 1: Cumulative utility for datasets TREC-123-100-Random and TREC-123-100-Source with hierarchical organization; the percentage numbers in the columns TSLA-Random and TSLA-Source are the performance gains over the algorithm without learning

Query number | TSNA-Random | TSLA-Random  | TSNA-Source | TSLA-Source
500          | 25.15       | 28.45 (13%)  | 24.00       | 21.05 (-13%)
1000         | 104.99      | 126.74 (20%) | 93.95       | 96.44 (2.6%)
1250         | 149.79      | 168.40 (12%) | 122.64      | 134.05 (9.3%)
1500         | 188.94      | 211.05 (12%) | 155.30      | 189.60 (22%)
1750         | 235.49      | 261.60 (11%) | 189.14      | 243.90 (28%)
2000         | 275.09      | 319.25 (16%) | 219.00      | 278.80 (26%)

It has been discovered in user modeling studies that the locality property is an important feature of information retrieval systems [3].
In P2P-based content sharing systems, this property is exemplified by the phenomenon that users tend to send queries that represent only a limited number of topics and, conversely, users in the same neighborhood are likely to share common interests and send similar queries [10]. The learning-based approach is perceived to be more beneficial for real distributed information retrieval systems which exhibit the locality property, because the users' traffic and query patterns can reduce the state space and speed up the learning process. Related work taking advantage of this property includes [7], where the authors address this problem with user modeling techniques.

6. CONCLUSIONS
In this paper, a reinforcement-learning based approach is developed to improve the performance of distributed IR search algorithms. In particular, agents maintain estimates, namely expected utilities, of the downstream agents' ability to provide relevant documents for incoming queries. These estimates are updated gradually by learning from the feedback information returned from previous search sessions. Based on the updated expected utility information, the agents modify their routing policies. Thereafter, the agents route queries based on the learned policies and update the expected-utility estimates under the new routing policies. The experiments on two different distributed IR datasets illustrate that the reinforcement learning approach considerably improves the cumulative utility over time.

7. REFERENCES
[1] S. Abdallah and V. Lesser. Learning the task allocation game. In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 850-857, New York, NY, USA, 2006. ACM Press.
[2] J. A. Boyan and M. L. Littman. Packet routing in dynamically changing networks: A reinforcement learning approach.
In Advances in Neural Information Processing Systems, volume 6, pages 671-678. Morgan Kaufmann Publishers, Inc., 1994.
[3] J. C. French, A. L. Powell, J. P. Callan, C. L. Viles, T. Emmitt, K. J. Prey, and Y. Mou. Comparing the performance of database selection algorithms. In Research and Development in Information Retrieval, pages 238-245, 1999.
[4] B. Horling, R. Mailler, and V. Lesser. Farm: A scalable environment for multi-agent development and evaluation. In Advances in Software Engineering for Multi-Agent Systems, pages 220-237, Berlin, 2004. Springer-Verlag.
[5] M. Littman and J. Boyan. A distributed reinforcement learning scheme for network routing. In Proceedings of the International Workshop on Applications of Neural Networks to Telecommunications, 1993.
[6] J. Lu and J. Callan. Federated search of text-based digital libraries in hierarchical peer-to-peer networks. In ECIR '05, 2005.
[7] J. Lu and J. Callan. User modeling for full-text federated search in peer-to-peer networks. In ACM SIGIR 2006. ACM Press, 2006.
[8] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts, 1999.
[9] C. R. Palmer and J. G. Steffan. Generating network topologies that obey power laws. In Proceedings of GLOBECOM 2000, November 2000.
[10] K. Sripanidkulchai, B. Maggs, and H. Zhang. Efficient content location using interest-based locality in peer-to-peer systems. In INFOCOM, 2003.
[11] D. Subramanian, P. Druschel, and J. Chen. Ants and reinforcement learning: A case study in routing in dynamic networks. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pages 832-839, 1997.
[12] J. N. Tao and L. Weaver. A multi-agent, policy gradient approach to network routing. In Proceedings of the Eighteenth International Conference on Machine Learning, 2001.
[13] H. Zhang, W. B. Croft, B. Levine, and V. Lesser.
A multi-agent approach for peer-to-peer information retrieval. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, July 2004.
[14] H. Zhang and V. Lesser. Multi-agent based peer-to-peer information retrieval systems with concurrent search sessions. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi-Agent Systems, May 2006.
[15] H. Zhang and V. R. Lesser. A dynamically formed hierarchical agent organization for a distributed content sharing system. In 2004 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2004), 20-24 September 2004, Beijing, China, pages 169-175. IEEE Computer Society, 2004.
Information Searching and Sharing in Large-Scale Dynamic Networks

Abstract: Finding the right agents in a large and dynamic network to provide the needed resources in a timely fashion is a long-standing problem. This paper presents a method for information searching and sharing that combines routing indices with token-based methods. The proposed method enables agents to search effectively by acquiring their neighbors' interests, advertising their information provision abilities, and maintaining indices for routing queries, in an integrated way. Specifically, the paper demonstrates through performance experiments how static and dynamic networks of agents can be 'tuned' to answer queries effectively as they gather evidence about the interests and information provision abilities of others, without altering the topology or imposing an overlay structure on the network of acquaintances.

1. INTRODUCTION
Considered as a decentralized control problem, information searching and sharing in large-scale systems of cooperative agents is hard in the general case: the computation of an optimal policy, when each agent possesses an approximate partial view of the state of the environment and when agents' observations and activities are interdependent (i.e. one agent's actions affect the observations and the state of another) [3], is hard.
This fact has led to efforts that require agents to have a global view of the system [15], resort to heuristics [4], precompute agents' information needs and information provision capabilities for proactive communication [17], rely on localized reasoning processes built on incoming information [12,13,14], or use mathematical frameworks for coordination whose optimal policies can be approximated [11] for small (sub-)networks of associated agents.

On the other hand, there is a lot of research on semantic peer-to-peer search networks and social networks [1,5,6,8,9,10,16,18,19], much of which deals with tuning a network of peers for effective information searching and sharing, mostly by imposing logical and semantic overlay structures. However, as far as we know, there is no work that demonstrates the effectiveness of a gradual tuning process in large-scale dynamic networks and studies the impact of the information gathered by agents as more and more queries are issued and served in concurrent sessions in the network.

The main issue in this paper concerns 'tuning' a network of agents, each with a specific expertise, for efficient and effective information searching and sharing, without altering the topology or imposing an overlay structure via clustering, introduction of shortcut indices, or re-wiring. 'Tuning' is the task of sharing and gathering the knowledge needed for agents to propagate requests to the right acquaintances, minimizing the searching effort and increasing the efficiency and the benefit of the system. Specifically, this paper proposes a method for information searching and sharing in dynamic and large-scale networks, which combines routing indices with token-based methods for information sharing in large-scale multi-agent systems.

This paper is structured as follows: Section 2 presents related work and motivates the proposed method.
Section 3 states the problem, section 4 presents in detail the individual techniques and the overall proposed method, section 5 presents the experimental setup and results, and section 6 concludes the paper, sketching future work.

2. RELATED WORK
Information provision and sharing can be considered a decentralized partially observable Markov decision process [3,4,11,14]. In the general case, decentralized control of large-scale dynamic systems of cooperative agents is a hard problem. Optimal solutions can only be approximated by means of heuristics, by relaxations of the original problem, or by centralized solutions. The computation of an optimal control policy is simple only given that global states can be factored, that the probabilities of transitions and observations are independent, that the observations combined determine the global state of the system, and that the reward function can be defined as the sum of local reward functions [3].

However, in a large-scale dynamic system with decentralized control it is very hard for agents to possess accurate partial views of the environment, and it is even harder for them to possess a global view of it. Furthermore, agents' observations cannot be assumed independent, as one agent's actions can affect the observations of others: for instance, when one agent joins/leaves the system, this may affect other agents' assessment of their neighbours' information provision abilities. The probabilities of transitions can be dependent too, which increases the complexity of the problem: for example, when an agent sends a query to another agent, this may affect the state of the latter, as far as the assessed interests of the former are concerned.

Considering independent activities and observations, the authors in [4] propose a decision-theoretic solution treating standard action and information exchange as explicit choices that the decision maker must make.
They approximate the solution using a myopic algorithm. Their work differs from the one reported here in the following aspects: First, it aims at optimizing communication, while our goal is to tune the network for effective information sharing, reducing communication and increasing the system's benefit. Second, their solution is approximated using a myopic algorithm, but the authors do not demonstrate how suboptimal the computed solutions are (something we do not do either), given their interest in the optimal solution. Third, they consider the transitions and observations made by agents to be independent, which, as already discussed, is not true in the general case. Last, in contrast to their approach where agents broadcast messages, here agents decide not only when to communicate, but also to whom to send a message.

Token-based approaches are promising for scaling coordination, and therefore information provision and sharing, to large-scale systems effectively. In [11] the authors provide a mathematical framework for routing tokens, providing also an approximation to solving the original problem in the case of independent agents' activities. The proposed method requires a high volume of computations, which the authors aim to reduce by restricting its application to static logical teams of associated agents. In accordance with this approach, in [12,13,14] information sharing is considered only for static networks and self-tuning of networks is not demonstrated. As will be shown in section 5, our experiments show that although these approaches can handle information sharing in dynamic networks, they require a larger number of messages than the approach proposed here and cannot tune the network for efficient information sharing.

Proactive communication has been proposed in [17] as a result of a dynamic decision-theoretic determination of communication strategies.
This approach is based on the specification of agents as providers and needers, via a plan-based precomputation of the information needs and provision abilities of agents. However, this approach cannot scale to large and dynamic networks, as it would be highly inefficient for each agent to compute and determine its potential needs and information provision abilities given its potential interaction with hundreds of other agents.

Viewing information retrieval in peer-to-peer systems from a multi-agent system perspective, the approach proposed in [18] is based on a language model of each agent's document collection. Exploiting the models of other agents in the network, agents construct their view of the network, which is used to form routing decisions. Initially, agents build their views using the models of their neighbours. Then the system reorganizes by forming clusters of agents with similar content. Clusters are exploited during information retrieval using a kNN approach and a gradient search scheme. Although this work aims at tuning a network for efficient information provision (through reorganization), it does not demonstrate the effectiveness of the approach with respect to this issue. Moreover, although the similarity of content between agents is measured during reorganization and retrieval, a more fine-grained approach is needed that would allow agents to measure similarities of information items or sub-collections of information items; however, this can be expected to complicate re-organization. Based on their work on peer-to-peer systems, H. Zhang and V. Lesser in [19] study concurrent search sessions.
Dealing with static networks, they focus on minimizing processing and communication bottlenecks. Although we also deal with concurrent search sessions, their work is orthogonal to ours, which may be further extended towards incorporating such features in the future.

Considering research in semantic peer-to-peer systems1, most of the approaches exploit what can be loosely termed a routing index. A major question concerning information searching is what information has to be shared between peers, when, and what adjustments have to be made so that queries are routed to trustworthy information sources in the most effective and efficient way.

In REMINDIN' [10], peers gather information concerning the queries that have been answered successfully by other peers, so as to subsequently select peers to forward requests to. This is a lazy learning approach that does not involve advertisement of peers' information provision abilities, and it results in a tuning process where the overall recall increases over time while the number of messages per query remains about the same. Here, agents actively advertise their information provision abilities based on the assessed interests of their peers, which results in a much lower number of messages per query than those reported for REMINDIN'.

In [5,6], peers using a common ontology advertise their expertise, which is exploited for the formation of a semantic overlay network: queries are propagated in this network depending on their similarity to peers' expertise. It is on the receiver's side to decide whether or not to accept an advertisement, based on the similarity between expertise descriptions. According to our approach, agents advertise their information provision abilities about specific topics selectively, to those neighbours with similar information interests (and only to these).
However, this is done as time passes, while agents receive requests from their peers. The gradual creation of overlay networks via re-wiring, shortcut creation [1,8,16], or clustering of peers [17,9] are tuning approaches that differ fundamentally from the one proposed here. Through local interactions, we aim at tuning the network for efficient information provision by gathering routing information gradually, as queries are propagated in the network and agents advertise their information provision abilities given the interests of their neighbours. Given the success of this method, we shall study how the addition of logical paths and the gradual evolution of the network topology can further increase the effectiveness of the proposed method.

1 General research in peer-to-peer systems concentrates either on network topologies or on the distribution of documents: these approaches do not aim to optimize advertising, search mostly requires common keys for nodes and their contents, and they generate a substantial overhead in highly dynamic settings, where nodes join/leave the system.

3. PROBLEM STATEMENT
Let N = {A1, A2, ..., An} be the set of agents in the system. The network of agents is modelled as a graph G = (N, E), where N is the set of agents and E is a set of bidirectional edges denoted as non-ordered pairs (Ai, Aj). The neighbourhood of an agent Ai includes all the one-hop-away agents (i.e. its acquaintance agents), i.e. those Aj such that (Ai, Aj) is in E. The set of acquaintances of Ai is denoted by N(Ai).

Each agent maintains (a) an ontology that represents categories of information, (b) indices of the information pieces available in its local database and at other agents, and (c) a profile model for some of its acquaintances. Indices and profile models are described in detail in section 4.

Ontology concepts represent the categories that classify the information pieces available.
It is assumed that the agents in the network share the same ontology, but each agent has a set of information items in its local repository, classified under the concepts of its expertise. The set of concepts is denoted by C. It is assumed that the sets of items in the agents' local repositories are non-overlapping.

Finally, it is assumed that there is a set of queries T. Each query is represented by a tuple comprising: the unique identity of the query; a non-negative integer representing the maximum number of information pieces requested; the specific category to which these pieces must belong; a path in the network of agents through which the query has been propagated (initially it contains the originator of the query, and each agent appends its id to the path before propagating the query); and a positive integer that specifies the maximum number of hops that the query can reach. In case this limit is exceeded and the corresponding number of information pieces has not been found, the query is considered unfulfilled. However, even in this case, a (possibly high) percentage of the requested pieces of information may have been found.

The problem that this article deals with is as follows: given a network of agents and a set of queries T, agents must retrieve the pieces of information requested by the queries, in concurrent search sessions, and further 'tune' the network so as to answer future similar queries in a more effective and efficient way, increasing the benefit of the system and reducing the communication messages required. The benefit of the system is the ratio of information pieces retrieved to the number of information pieces requested. The efficiency of the system is measured by the number of messages needed for searching and for updating the indices and profiles maintained.

'Tuning' the network requires agents to acquire the necessary information about their acquaintances' interests and information provision abilities (i.e.
the routing and profiling tuples detailed in section 4), so as to route queries and further share information in the most efficient way. This must be done seamlessly with searching: agents in the network must share/acquire the necessary information while searching, increasing the benefit and efficiency gradually, as more queries are posed.

4. INFORMATION SEARCHING AND SHARING
4.1 Overall Method
Given a network G = (N, E) of agents and a set of queries T, each agent Ai maintains indices for routing queries to the right agents, as well as acquaintances' profiles for advertising its information provision abilities to those interested.

To capture information about the pieces of information accessible by the agents, each agent Ai maintains a routing index, realized as a set of tuples. Each such tuple specifies, for an agent Aj in N(Ai) or for Ai itself, and for an information category c, the number of information items in c that can be reached by Aj; this specifies the information provision ability of Aj, as assessed by Ai, with respect to the information category c. As can be noticed, each tuple corresponds either to the agent itself (specifying the pieces of information classified in c available in its local repository) or to an acquaintance of the agent (recording the pieces of information in category c available to the acquaintance and to the agents that can be reached through this acquaintance). The routing index is exploited for the propagation of queries to the right agents: those that are either more likely to provide answers or that know someone who can provide the requested pieces of information.

Considering an agent Ai, the profile model of one of its acquaintances Aj is a set of tuples maintained by Ai, each specifying, for a category c in C, the probability that the acquaintance Aj is interested in pieces of information in c; this probability is denoted by pc(Aj).
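A hypothetical concrete encoding of the structures just described can make them easier to follow; all field and variable names below are ours, not the paper's.

```python
from collections import namedtuple

# A query: identity, number of pieces requested, category, path taken
# so far, and the maximum number of hops (section 3).
Query = namedtuple("Query", ["qid", "num_items", "category", "path", "max_hops"])

# Routing index of agent Ai: (agent, category) -> number of items in
# that category reachable through that agent (Ai itself included).
routing_index = {("Ai", "sports"): 4, ("Aj", "sports"): 2, ("Aj", "music"): 7}

# Profile models kept by Ai: acquaintance -> {category: probability
# that the acquaintance is interested in that category}.
profiles = {"Aj": {"sports": 0.7, "music": 0.1}}

q = Query(qid=17, num_items=5, category="sports", path=["A0"], max_hops=4)
```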
Profile models are exploited by the agents to decide where to 'advertise' their information provision abilities.

Given two acquaintances Ai and Aj, the information searching and sharing process proceeds as depicted in Figure 1. Initially, each agent has no knowledge about the information provision abilities of its acquaintances, and it possesses no information about their interests. When a query concerning a category c is sent to Ai from the agent Aj, then Ai updates the profile of Aj concerning the category c, increasing the probability that Aj is interested in information in c. When this probability is greater than a threshold value (due to the queries about c that Aj has sent to Ai), then Ai assesses that it is highly probable that Aj is interested in information in category c.

Figure 1. Typical pattern for information sharing between two acquaintances (numbers show the sequence of tasks).

This leads Ai to inform Aj about its information provision abilities as far as the category c is concerned. This information is used by Aj to update its index about Ai. This index is exploited by Aj to further propagate queries, and it is further propagated to those interested in c. Moreover, the profile of Aj maintained by Ai guides Ai to propagate changes concerning its information provision abilities to Aj.

The above method has the following features: (a) It combines routing indices and token-based information sharing techniques for efficient information searching and sharing, without imposing an overlay network structure. (b) It can be used by agents to adapt safely and effectively to dynamic networks. (c) It supports the acquisition and exploitation of different types of locally available information for the 'tuning' process.
(d) It extends the token-based method for information sharing (as originally proposed in [12,13]) in two respects: first, it deals with categories of information represented by ontology concepts rather than with specific pieces of information; second, it guides agents to advertise information that is semantically similar to the information requested, using a semantic similarity measure between information categories. It therefore paves the way for the use of token-based methods in semantic peer-to-peer systems. This is further described in section 4.3. (e) It provides a more sophisticated way for agents to update routing indices than the one originally proposed in [2]. This is done by gathering and exploiting acquaintances' profiles for effective information sharing, avoiding unnecessary and cyclic updates that may result in misleading information about agents' information provision abilities. This is further described in the next sub-section.

4.2 Routing Indices
As already specified, given a network of agents and the set N(Ai) of an agent's acquaintances, the routing index (RI) of Ai is a collection of indexing tuples, at most one per agent and category. The key idea is that, given such an index and a request concerning a category c, Ai will forward this request to Aj if the resources available through Aj (i.e. the information provision abilities of Aj with respect to c) can best serve this request. To compute the information provision abilities that Ai advertises to Aj, the tuples concerning all agents in (N(Ai) ∪ {Ai}) - {Aj} must be aggregated. Crespo and Garcia-Molina [2] examine various types of aggregations. In this paper, given some tuples for a category c maintained by the agent Ai, their aggregation is a single tuple recording the sum of their item counts for c. This gives information concerning the pieces of information that can be provided through Ai, but it does not distinguish what each of Ai's acquaintances can provide: this is an inherent feature of routing indices.
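The sum aggregation just described can be sketched as follows; the dictionary layout of the routing index is our assumption.

```python
def aggregate_ability(routing_index, category, exclude):
    """Aggregate an agent's information provision ability for `category`
    over itself and all acquaintances except `exclude` (the agent the
    aggregated tuple will be sent to), by summing the per-agent counts."""
    return sum(n for (agent, c), n in routing_index.items()
               if c == category and agent != exclude)
```

For example, if Ai holds 4 items in a category, and 2 and 5 more are reachable through acquaintances Aj and Ak respectively, the tuple sent to Aj carries the aggregate 4 + 5 = 9, leaving out what is reachable through Aj itself.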
Without considering the interests of its acquaintances, Ai may compute aggregations concerning the agents in (N(Ai) ∪ {Ai}) - {Aj} and advertise/share its information provision abilities with each agent Aj in N(Ai).

For instance, given the network configuration depicted in Figure 2 and a category c, agent Ai sends the aggregation of the tuples concerning the agents in (N(Ai) ∪ {Ai}) - {Aj} to agent Aj, which records the resulting tuple. Similarly, the aggregation of the tuples concerning the agents in (N(Ai) ∪ {Ai}) - {Ak} is sent to the agent Ak, which also records the resulting tuple. It must be noticed that Aj and Ak record the information provision abilities of Ai each from its own point of view. Every time the tuple that models the information provision abilities of an agent changes, the aggregations have to be re-computed and sent to the appropriate neighbours in the way described above. Then these neighbours have to propagate the updates to their own acquaintances, and so on.

Figure 2. Aggregating and sharing information provision indices.

Routing indices may be misleading and lead to inefficiency in arbitrary graphs containing cycles. The exploitation of acquaintances' profiles can provide solutions to these deficiencies: each agent propagates its information provision abilities concerning a category c only to those acquaintances that have a high interest in this category. As has been mentioned, an agent expresses its interest in a category by propagating queries about it. Therefore, indices concerning a category c are propagated in the inverse direction along the paths through which queries about c are propagated. Indices are propagated as long as the agents in the path have a high interest in c. Queries cannot be propagated in a cyclic fashion, since an agent serves and propagates only queries that it has not served at a previous time point; therefore, due to their relation to queries, indices are not propagated in a cyclic fashion either.
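The served-query rule above can be sketched as follows, assuming each agent keeps the set of identities of queries it has already served (the names are ours).

```python
def maybe_propagate(served, qid, path, my_id, max_hops):
    """Serve and propagate a query only if this agent has not served it
    before and the hop limit is not exhausted; the agent appends its id
    to the path before propagating (section 3)."""
    if qid in served or my_id in path or len(path) >= max_hops:
        return None          # drop: already served, cyclic, or too far
    served.add(qid)
    return path + [my_id]
```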
However, there is still a specific case where cycles cannot be avoided. Such a case is shown in Figure 3: while the propagation of one query causes the propagation of the agents' information provision abilities in a non-cyclic way (since the agent A recognizes that this query has already been served by it), a second query causes the propagation of the information abilities of A to other agents in the network, causing, in conjunction with the index propagation due to the first query, a cyclic update of indices.

Figure 3. Cyclic pattern for the sharing of indices.

4.3 Profiles
The key assumption behind the exploitation of acquaintances' profiles, as originally proposed in [12,13], is that for an agent to pass on a specific information item, this agent has a high interest in it or in related information. As already said, in our case acquaintances' profiles are created based on received queries and specify the interests of acquaintances in specific information categories. Given a query about a category cq sent from Aj to Ai, Ai has to record not only the interest of Aj in cq, but the interest of Aj in all the related classes, given their semantic similarity to cq.

To measure the similarity between two ontology classes we use a similarity function sim: C x C -> [0,1] [7]:

    sim(ci, cj) = e^(-a*l) * (e^(b*h) - e^(-b*h)) / (e^(b*h) + e^(-b*h))   if cj is a sub-concept of ci,
    sim(ci, cj) = 0.01                                                      otherwise,

where l is the length of the shortest path between ci and cj in the graph spanned by the sub-concept relation, and h is the minimal level in the hierarchy of either ci or cj; a and b are parameters scaling the contribution of the shortest path length l and of h, respectively. Based on previous works we choose a=0.2 and b=0.6 as optimal values. It must be noticed that we measure similarity between sub-concepts, assigning a very low similarity value to concepts that are not related by the sub-concept relation.
This is due to the fact that each query about information in a category c can be answered by information in any sub-category of c close enough to c. Given a threshold value of 0.3, sim(ci, cj) >= 0.3 indicates that an agent interested in ci is also interested in cj, while sim(ci, cj) < 0.3 indicates that an agent interested in ci is unlikely to be interested in cj. This threshold value was chosen after some empirical experiments with ontologies.

The update of Ai's assessment of pc, based on an incoming query about a category cq, is computed by leveraging Bayes' rule as in [12,13]: the probability that the agent that propagated the query (the last agent in the query's path) is interested in information in a category c is increased based on the similarity between c and cq, while the assessed interests of the other agents are updated in a way that ensures that the probabilities can be normalized to sum to one. It must be noticed that, in contrast to [12,13], the computation has been changed in favour of the agent that passed the query.

The profiles of acquaintances enable an agent to decide where and which advertisements should be sent. Specifically, for each acquaintance Aj and category c for which pc(Aj) is greater than a threshold value (currently set to 0.5), the agent Ai aggregates the index tuples for c of each agent in (N(Ai) ∪ {Ai}) - {Aj} and sends the aggregated tuple to Aj. Also, given a high pc(Aj), when a change to an index concerning c occurs (e.g. due to a change in Ai's local repository, or due to a change in the set of its acquaintances), Ai sends the updated aggregated index entry to Aj.
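The similarity measure and the profile update described above can be sketched as follows. This is a reconstruction under assumptions: sim() follows the measure of [7] with a=0.2 and b=0.6, using the identity (e^(bh) - e^(-bh)) / (e^(bh) + e^(-bh)) = tanh(bh), while update_interests() only mirrors the prose (raise the probability of categories similar to the queried one, then renormalize) and does not claim to reproduce the exact Bayes-rule form of [12,13].

```python
import math

def sim(l, h, subconcept_related, a=0.2, b=0.6):
    """l: shortest-path length in the sub-concept graph; h: minimal
    hierarchy level of the two concepts; 0.01 if not related by the
    sub-concept relation."""
    if not subconcept_related:
        return 0.01
    return math.exp(-a * l) * math.tanh(b * h)

def update_interests(row, similarities):
    """row: {category: probability} for one acquaintance; similarities:
    {category: similarity to the queried category}. Illustrative only:
    add similarity mass, then renormalize so the row sums to one."""
    for c in row:
        row[c] += similarities.get(c, 0.01)
    total = sum(row.values())
    for c in row:
        row[c] /= total
    return row
```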
In this way, the agent Aj which is highly interested in pieces of information in category c updates its index so as to become aware of the information provision abilities of Ai as far as the category c is concerned.

4.4 Tuning
Tuning is performed seamlessly with searching: as agents propagate queries to be served, their profiles are updated by their acquaintances. As their profiles are updated, agents receive the aggregated indices of their acquaintances, becoming aware of their information provision abilities for the information categories in which they are probably interested. Given these indices, agents further propagate queries to the acquaintances that are more likely to serve them, and so on. Concerning the routing index and the profiles maintained by an agent, it must be pointed out that the agent does not need to record all possible tuples: it records only those that are of particular interest for searching and sharing information, depending on its own expertise and interests and those of its acquaintances.

Initially, agents do not possess profiles of their acquaintances. For indices there are two alternatives: either agents do not initially possess any information about their acquaintances' local repositories (the no-initialization-of-indices case), or they do (the initialization-of-indices case). Given a query, agents propagate it to those acquaintances that have the highest information provision abilities. In the no-initialization case, where an agent does not initially possess information about its acquaintances' abilities, it may initially propagate a query to all of them, resulting in a pure flooding approach, or it may propagate the query randomly to a percentage of them. In the initialization case, where an agent initially possesses information about its acquaintances' local repositories, it can propagate queries to all, or to a percentage, of those that can best serve the request.
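The propagation choice just described can be sketched as follows, assuming the routing index maps (agent, category) pairs to item counts; acquaintances with no recorded tuple default to an ability of zero, as in the no-initialization case.

```python
def forwarding_targets(routing_index, category, acquaintances, k=2):
    """Return the k acquaintances with the highest recorded information
    provision ability for the query's category (an illustrative sketch
    of the index-driven propagation policy; names are ours)."""
    ranked = sorted(acquaintances,
                    key=lambda a: routing_index.get((a, category), 0),
                    reverse=True)
    return ranked[:k]
```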
We considered both cases in our experiments.
Given a static setting where agents do not shift their expertise, and the distribution of information pieces does not change, the network will eventually reach a state where no information concerning agents' information provision abilities will need to be propagated and no agents' profiles will need to be updated: queries shall be propagated only to those agents that will lead to a near-to-the-maximum benefit of the system in a very efficient way. In a dynamic setting, agents may shift their expertise or their interests, they may leave the network at will, or welcome new agents that join the network and bring new information provision abilities, new interests and new types of queries. In this paper we study settings where agents may leave or join the network. This requires agents to adapt safely and effectively. Towards this goal, in case an agent does not receive a reply from one of its acquaintances within a time interval, it retracts all the indices and the profile concerning the missing acquaintance and repropagates the queries that have been sent to the missing agent since the last successful handshake to other agents. In case a new agent joins the network, its acquaintances that become aware of its presence propagate all the queries that they have processed in the last few time points (currently set to 6) to the newcomer. This is done so as to inform the newcomer about their interests and to initiate information sharing.
5. EXPERIMENTAL SETUP
To validate the proposed approach we have built a prototype that simulates large networks. To test the scalability of our approach we have run several experiments with various types of networks. Here we present results from 3 network types with |N|=100, |N|=500 and |N|=1000 that provide representative cases.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 251
Networks are constructed by randomly distributing |N| agents in an area, each with a visibility ratio equal to R. The acquaintances of an agent are those that are visible to the agent and those from which the agent is visible (since edges in the network are bidirectional). Details about the networks are given in Table 1. The column avg(|N(A)|) shows the average number of acquaintances per agent in the network and the column |T| shows the number of queries per network type. It must be noticed that the TypeA network is denser than the others, which are much larger than it.
Each experiment ran 40 times. In each run the network is provided with a new set of randomly generated queries that originate from randomly chosen agents. The agents search and gather knowledge that they further use and enrich, tuning the network gradually, run by run. Each run lasts a number of rounds that depends on the number of queries and on the parameters that determine the dynamics of the network: to end a run, all queries must either have been served (i.e. 100% of the information items requested must have been found) or have been unfulfilled (i.e. have exceeded their deadline). It must be noticed that in the case of a dynamic setting, this ending criterion causes some of the queries to be lost. This is the case when some queries are the only ones remaining active and the agents to whom they have been propagated have left the network without their acquaintances being aware of it.

Table 1: Network types
        |N|    R   N    avg(|N(A)|)  |T|
TypeA   100    10  25   50           363
TypeB   500    10  125  20           1690
TypeC   1000   10  250  10           3330

Information used in the experiments is synthetic and is classified in 15 distinct categories: each agent's expertise comprises a unique information category.
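The network construction just described can be sketched as follows. Parameter names such as `area` and `seed` are ours; the text specifies only that |N| agents are placed randomly, that each has a visibility radius R, and that edges are bidirectional.

```python
import math
import random

# Sketch of the network construction described above; parameter names are
# ours. Agents are scattered uniformly in a square area, and two agents
# become acquaintances when they lie within visibility range of each other,
# so edges are bidirectional by construction.

def build_network(n_agents, area, radius, seed=0):
    rng = random.Random(seed)
    pos = {i: (rng.uniform(0, area), rng.uniform(0, area)) for i in range(n_agents)}
    acquaintances = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]) <= radius:
                acquaintances[i].add(j)
                acquaintances[j].add(i)
    return pos, acquaintances
```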
For the category in its expertise each agent holds at most 1000 information pieces, the exact number of which is determined randomly.
At each run a constant number of queries is generated, depending on the type of network used (last column in Table 1). At each run, each query is randomly assigned to an originator agent and is set to request a random number of information items, classified in a sub-category of the query-originator agent's expertise. This sub-category is chosen in a random way and the requested items are fewer than 6000. The deadline for any query is set to 6. In such a setting, the demand for information items is much higher than the agents' information provision abilities, given the number of queries: the maximum benefit in any experimental case is much less than 60% (this has been done so as to challenge the 'tuning' task in settings where queries cannot be served in the first hop or after 2-3 hops).
Given that agents are initially not aware of acquaintances' local repositories (the no initialization of indices case), we have run several evaluation experiments for each network type depending on the percentage of acquaintances to which a query can be propagated by an agent. These types of experiments are denoted by TypeX-Y, where X denotes the type of network and Y the percentage of acquaintances: here we present results for Y equal to 10, 20 or 50. For instance, TypeA-10 denotes a setting with a network of TypeA where each query is being propagated to at most 10% of an agent's acquaintances. The exact number of acquaintances is randomly chosen per agent and queries are propagated only to those acquaintances that are likely to best serve the request.

Figure 4: Results for static networks as agents gather information about acquaintances' abilities and interests (TypeA-10, TypeB-20 with and without initialization, TypeC-50, and TypeB-20 without RIs; i-messages, q-messages, benefit, and message gain per run).

Figures 4 and 5 show experiments for static and dynamic networks of TypeA-10 (a dense network with a low percentage of acquaintances), TypeB-20 (a quite dense network with a low percentage of acquaintances), with and without initialization, and TypeC-50 (a not so dense network with a quite high percentage of acquaintances). To demonstrate the advantages of our method we have also considered networks without routing indices for the TypeC-50 and TypeB-20 networks: agents in these networks, similarly to [12,13], share information concerning their local repository based on their assessments of acquaintances' interests.

Figure 5: Results for dynamic networks as agents gather information about acquaintances' abilities and interests (TypeB-20, TypeB-20 without RIs, TypeC-50, TypeC-50 without RIs, TypeC-50 static; i-messages, q-messages, benefit, and message gain per run).

Results computed in each experiment show the number of query-propagation messages (q-messages), the number of messages for the update of indices (i-messages), the benefit of the system, i.e. the average ratio of information pieces provided to the number of pieces requested per query, and the message gain, i.e. the ratio of benefit to the total number of messages.
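The two reported measures can be stated compactly. The following sketch (function names are ours) computes the benefit as the average fraction of requested pieces actually provided, and the message gain as the benefit divided by the total number of q-messages and i-messages, exactly as defined above.

```python
# Sketch of the two headline measures defined above (function names are ours).

def benefit(queries):
    """Average ratio of information pieces provided to pieces requested per query.

    queries: list of (pieces_provided, pieces_requested) pairs.
    """
    return sum(provided / requested for provided, requested in queries) / len(queries)

def message_gain(queries, q_messages, i_messages):
    """Ratio of the benefit to the total number of messages exchanged."""
    return benefit(queries) / (q_messages + i_messages)
```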
The horizontal axis in each diagram corresponds to the runs.
As shown in Figure 4, as agents search and share information from run 1 to run 40, they manage to increase the benefit of the system while drastically reducing the number of messages. Also (not shown here due to space reasons) the number of unfulfilled queries decreases, while the number of served queries increases gradually. The experiments show: (a) an effective tuning of the networks as time passes and more queries are posed to the network, even if agents maintain the models of only a small percentage of their acquaintances; and (b) that 'tuning' can greatly facilitate the scalability of the information searching and sharing tasks in networks.
To show whether initial knowledge about acquaintances' local repositories (the initialization of indices case) affects the effective tuning of the network, we provide representative results from the TypeB-20 network. As shown in Figure 4, the tuning task in this case does not manage to achieve the benefit of the system reported for the no initialization case. On the contrary, while the tuning affects the benefit drastically, the numbers of messages are not affected in the same way: the messages in the initialization case are fewer than those in the TypeB-20 no initialization case. This is shown more clearly by the message gain of both approaches: the message gain of the TypeB-20 initialization case is higher than the message gain of the TypeB-20 experiment with no initialization. Therefore, initial knowledge concerning local information of acquaintances can be used for guiding searching and tuning at the initial stages of the tuning task only if we need to gain efficiency (i.e. decrease the number of required messages) at the cost of losing effectiveness (i.e.
have lower benefit): this is due to the fact that, as agents possess information about acquaintances' local repositories, the tuning process enables the further exchange of messages concerning agents' information provision abilities in cases where agents' profiles provide evidence for such a need. However, initial information about acquaintances' local repositories may mislead the searching process, resulting in low benefit. In case we need to gain effectiveness at the cost of reducing efficiency, this type of local knowledge does not suffice.
Considering also the information sharing method without routing indices (the 'without RIs' cases), we can see that for static networks it requires more messages without managing to tune the system, while the benefit is nearly the same as the one reported by our method. This is shown clearly in the message gain diagrams in Figure 4.
Figure 5 provides results for dynamic networks. These are results from a particular representative case of our experiments where more than 25% of (randomly chosen) nodes leave the network in each run during the experiment. After a random number of rounds, a new node may replace the one that left. This newcomer has no information about the network. Approximately 25% of the nodes that leave the network are not replaced for 50% of the experiment, and approximately 50% are not replaced for more than 35% of the experiment. In such a highly dynamic setting with very scarce information resources distributed in the network, as Figure 5 shows, the tuning approach has managed to keep the benefit at acceptable levels, while still drastically reducing the number of i-messages. However, as can be expected, this reduction is not as drastic as it was in the corresponding static cases.
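The adaptation rule for missing acquaintances described in Section 4.4 can be sketched as follows. The dictionary layout and field names are our own illustration, not the paper's implementation; only the behaviour (retract indices and profile, re-propagate the queries sent since the last handshake) comes from the text.

```python
# Illustrative sketch of the adaptation rule described in Section 4.4; the
# dictionary layout and field names are our own. When an acquaintance stops
# replying, the agent retracts every routing-index entry and the profile
# concerning it, and returns the queries sent to it since the last
# successful handshake so that they can be re-propagated to other agents.

def handle_missing_acquaintance(agent, missing):
    agent["routing_index"] = {key: value
                              for key, value in agent["routing_index"].items()
                              if key[0] != missing}
    agent["profiles"].pop(missing, None)
    pending = [q for q in agent["sent_since_handshake"] if q["to"] == missing]
    agent["sent_since_handshake"] = [q for q in agent["sent_since_handshake"]
                                     if q["to"] != missing]
    return pending  # to be re-propagated to the remaining acquaintances
```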
Figure 5 shows that the message gain for the dynamic case is comparable to the message gain for the corresponding static case (TypeC-50), which proves the value of this approach for dynamic settings. The comparison to the case where no routing indices are exploited reveals the same results as in the static case, at the cost of a large number of messages.
Finally, it must be pointed out that the maximum number of messages per query required by the proposed method is nearly 12, which is less than that reported by other efforts.
6. CONCLUSIONS
This paper presents a method for semantic query processing in large networks of agents that combines routing indices with information sharing methods. The presented method enables agents to keep records of acquaintances' interests, to advertise their information provision abilities to those that have a high interest in them, and to maintain indices for routing queries to those agents that have the requested information provision abilities. Specifically, the paper demonstrates through extensive performance experiments: (a) How networks of agents can be 'tuned' so as to provide requested information effectively, increasing the benefit and the efficiency of the system. (b) How different types of local knowledge (number, local information repositories, percentage, interests and information provision abilities of acquaintances) can guide agents to effectively answer queries, balancing between efficiency and efficacy. (c) That the proposed tuning task manages to increase the efficiency of information searching and sharing in highly dynamic and large networks.
(d) That the information gathered and maintained by agents supports efficient and effective information searching and sharing: initial information about acquaintances' information provision abilities is not necessary and a small percentage of acquaintances suffices.
Further work concerns experimenting with real data and ontologies, differences in ontologies between agents, shifts in expertise and the parallel construction of the overlay structure.
7. REFERENCES
[1] Cooper, B.F., Garcia-Molina, H. Ad-hoc, self-supervising peer-to-peer search networks. Volume 23, Issue 2 (April 2005), 169-200.
[2] Crespo, A., Garcia-Molina, H. Routing indices for peer-to-peer systems. July 2002.
[3] Goldman, C., and Zilberstein, S. Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis. 22 (2004), 143-174.
[4] Goldman, C., and Zilberstein, S. Optimizing Information Exchange in Cooperative Multi-agent Systems. July 2003.
[5] Haase, P., Siebes, R., van Harmelen, F. Peer Selection in Peer-to-Peer Networks with Semantic Topologies. Lecture Notes in Computer Science, Springer, Volume 3226/2004, 108-125.
[6] Haase, P., Broekstra, J., Ehrig, M., Menken, M., Mika, P., Plechawski, M., Pyszlak, P., Schnizler, B., Siebes, R., Staab, S., Tempich, T. Bibster - A Semantics-Based Bibliographic Peer-to-Peer System. 122-136.
[7] Li, Y., Bandar, Z., and McLean, D. An approach for measuring semantic similarity between words using multiple information sources. Vol. 15, No. 4, 2003, 871-882.
[8] Loser, A., Staab, S., Tempich, C. Semantic Social Overlay Networks. To appear 2006/2007.
[9] Nejdl, W., Wolpers, M., Siberski, W., Schmitz, C., Schlosser, M., Brunkhorst, I., Loser, A. Super-Peer-Based Routing and Clustering Strategies for RDF-Based Peer-to-Peer Networks. 536-543.
[10] Tempich, C., Staab, S., Wranik, A. REMINDIN': Semantic Query Routing in Peer-to-Peer Networks Based on Social Metaphors.
In 640-649.
[11] Xu, Y., Scerri, P., Yu, B., Lewis, M., and Sycara, K. A POMDP Approach to Token-Based Team Coordination. (July 25-29, Utrecht), ACM Press.
[12] Xu, Y., Lewis, M., Sycara, K., and Scerri, P. Information Sharing in Large Scale Teams. 2004.
[13] Xu, Y., Liao, E., Scerri, P., Yu, B., Lewis, M., and Sycara, K. Towards Flexible Coordination of Large Scale Multi-Agent Systems. Springer, 2005.
[14] Xu, Y., Scerri, P., Yu, B., Okamoto, S., Lewis, M., and Sycara, K. An Integrated Token Based Algorithm for Scalable Coordination. 407-414.
[15] Xuan, P., Lesser, V., Zilberstein, S. Communication Decisions in Multi-agent Cooperation: Model and Experiments. 2001, 616-623.
[16] Yu, B., and Singh, M. Searching Social Networks.
[17] Zhang, Y., Volz, R., Ioerger, T.R., Yen, J. A Decision Theoretic Approach for Designing Proactive Communication in Multi-Agent Teamwork. 2004, 64-71.
[18] Zhang, H., Croft, W.B., Levine, B., Lesser, V. A Multi-Agent Approach for Peer-to-Peer-based Information Retrieval Systems. 2004, 456-464.
[19] Zhang, H., Lesser, V. Multi-Agent Based Peer-to-Peer Information Retrieval Systems with Concurrent Search Sessions. 2006, 305-31.
An Advanced Bidding Agent for Advertisement Selection on Public Displays

ABSTRACT
In this paper we present an advanced bidding agent that participates in first-price sealed bid auctions to allocate advertising space on BluScreen - an experimental public advertisement system that detects users through the presence of their Bluetooth enabled devices. Our bidding agent is able to build probabilistic models of both the behaviour of users who view the adverts, and the auctions that it participates within. It then uses these models to maximise the exposure that its adverts receive. We evaluate the effectiveness of this bidding agent through simulation against a range of alternative selection mechanisms including a simple bidding strategy, random allocation, and a centralised optimal allocation with perfect foresight. Our bidding agent significantly outperforms both the simple bidding strategy and the random allocation, and in a mixed population of agents it is able to expose its adverts to 25% more users than the simple bidding strategy. Moreover, its performance is within 7.5% of that of the centralised optimal allocation despite the highly uncertain environment in which it must operate.

1. INTRODUCTION
Electronic displays are increasingly being used within public environments, such as airports, city centres and retail stores, in order to advertise commercial products, or to entertain and inform passers-by. Recently, researchers have begun to investigate how the content of such displays may be varied dynamically over time in order to increase its variety, relevance and exposure [9]. Particular research attention has focused on the need to take into account the dynamic nature of the display's audience, and to this end, a number of interactive public displays have been proposed.
These displays have typically addressed the needs of a closed set of known users with pre-defined interests and requirements, and have facilitated communication with these users through the active use of handheld devices such as PDAs or phones [3, 7]. As such, these systems assume prior knowledge about the target audience, and require either that a single user has exclusive access to the display, or that users carry specific tracking devices so that their presence can be identified [6, 11]. However, these approaches fail to work in public spaces, where no prior knowledge regarding the users who may view the display exists, and where such displays need to react to the presence of several users simultaneously.
By contrast, Payne et al. have developed an intelligent public display system, named BluScreen, that detects and tracks users through the Bluetooth enabled devices that they carry with them every day [8]. Within this system, a decentralised multi-agent auction mechanism is used to efficiently allocate advertising time on each public display. Each advert is represented by an individual advertising agent that maintains a history of users who have already been exposed to the advert. This agent then seeks to acquire advertising cycles (during which it can display its advert on the public displays) by submitting bids to a marketplace agent who implements a sealed bid auction. The value of these bids is based upon the number of users who are currently present in front of the screen, the history of these users, and an externally derived estimate of the value of exposing an advert to a user.
In this paper, we present an advanced bidding agent that significantly extends the sophistication of this approach.
In particular, we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a user. This is likely to be the case for BluScreen installations within private organisations where the items being advertised are forthcoming events or news items of interest to employees and visitors, and thus have no direct monetary value (indeed in this case bidding is likely to be conducted in some virtual currency). In addition, it is also likely to be the case within new commercial installations where limited market experience makes estimating a valuation impossible. In both cases, it is more appropriate to assume that an advertising agent will be assigned a total advertising budget, and that it will have a limited period of time in which to spend this budget (particularly so where the adverts are for forthcoming events). The advertising agent is then simply tasked with using this budget to maximum effect (i.e. to achieve the maximum possible advert exposure within this time period).

263 978-81-904262-7-5 (RPS) (c) 2007 IFAAMAS

Now, in order to achieve this goal, the advertising agent must be capable of modelling the behaviour of the users in order to predict the number who will be present in any future advertising cycle. In addition, it must also understand the auction environment in which it competes, in order that it may make best use of its limited budget. Thus, in developing an advanced bidding agent that achieves this, we advance the state of the art in four key ways:
1. We enable the advertising agents to model the arrival and departure of users as independent Poisson processes, and to make maximum likelihood estimates of the rates of these processes based on their observations. We show how these agents can then calculate the expected number of users who will be present during any future advertising cycle.
2.
Using a decision theoretic approach we enable the advertising agents to model the probability of winning any given auction when a specific amount is bid. The cumulative form of the gamma distribution is used to represent this probability, and its parameters are fitted using observations of both the closing price of previous auctions, and the bids that the advertising agent itself submits.
3. We show that our explicit assumption that the advertising agent derives no additional benefit by showing an advert to a single user more than once, causes the expected utility of each future advertising cycle to be dependent on the expected outcome of all the auctions that precede it. We thus present a stochastic optimisation algorithm based upon simulated annealing that enables the advertising agent to calculate the optimal sequence of bids that maximises its expected utility.
4. Finally, we demonstrate that this advanced bidding strategy outperforms a simple strategy with none of these features (within a heterogeneous population the advertising agents who use the advanced bidding strategy are able to expose their adverts to 25% more users than those using the simple bidding strategy), and we show that it performs within 7.5% of that of a centralised optimiser with perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.
The remainder of this paper is organised as follows: Section 2 discusses related work where agents and auction-based marketplaces are used to allocate advertising space. Section 3 describes the prototype BluScreen system that motivates our work. In section 4 we present a detailed description of the auction allocation mechanism, and in section 5 we describe our advanced bidding strategy for the advertising agents. In section 6 we present an empirical validation of our approach, and finally, we conclude in section 7.
2.
RELATED WORK
The commercial attractiveness of targeted advertising has been amply demonstrated on the internet, where recommendation systems and contextual banner adverts are the norm [1]. These systems typically select content based upon prior knowledge of the individual viewing the material, and such systems work well on personal devices where the owner's preferences and interests can be gathered and cached locally, or within interactive environments which utilise some form of credential to identify the user (e.g. e-commerce sites such as Amazon.com).
Attempts to apply these approaches within the real world have been much more limited. Gerding et al. present a simulated system (CASy) whereby a Vickrey auction mechanism is used to sell advertising space within a modelled electronic shopping mall [2]. The auction is used to rank a set of possible advertisements provided by different retail outlets, and the top ranking advertisements are selected for presentation on public displays. Feedback is provided through subsequent sales information, allowing the model to build up a profile of a user's preferences.

Figure 1: A deployed BluScreen prototype.

However, unlike the BluScreen system that we consider here, it is not suitable for advertising to many individuals simultaneously, as it requires explicit interaction with a single user to acquire the user's preferences.
By contrast, McCarthy et al. have presented a prototype implementation of a system (GroupCast) that attempts to respond to a group of individuals by assuming a priori profiles of several members of the audience [7]. User identification is based on infrared badges and embedded sensors within an office environment.
When several users pass by the display, a centralised system compares the users' profiles to identify common areas of interest, and content that matches this common interest is shown.
Thus, whilst CASy is a simulated system that allows advertisers to compete for the attention of a single user, GroupCast is a prototype system that detects the presence of groups of users and selects content to match their profiles. Despite their similarities, neither system addresses the setting that interests us here: how to allocate advertising space between competing advertisers who face an audience of multiple individuals about whom there is no a priori profile information. Thus, in the next section we describe the prototype BluScreen system that motivates our work.
3. THE BLUSCREEN PROTOTYPE
BluScreen is based on the notion of a scalable, extendable, advertising framework whereby adverts can be efficiently displayed to as many relevant users as possible, within a knowledge-poor environment. To achieve these goals, several requirements have been identified:
1. Adverts should be presented to as diverse an audience as possible, whilst minimising the number of times the advert is presented to any single user.
2. Users should be identified by existing ubiquitous, consumer devices, so that future deployments within public arenas will not require uptake of new hardware.
3. The number of displays should be scalable, such that adverts appear on different displays at different times.
4. Knowledge about observed behaviour and composition of the audience should be exploited to facilitate inference of user interests which can be exploited by the system.
To date, a prototype system that addresses the first two goals has been demonstrated [8]. This system uses a 23 inch flat-screen display deployed within an office environment to advertise events and news items.
Rather than requiring the deployment of specialised hardware, such as active badges (see [11] for details), BluScreen detects the presence of users in the vicinity of each display through the Bluetooth-enabled devices that they carry with them every day.1 This approach is attractive since the Bluetooth wireless protocol is characterised by its relative maturity, market penetration, and emphasis on short-range communication. Table 1 summarises the number of devices detected by this prototype installation over a six month period. Of the 212 Bluetooth devices detected, approximately 70 were detected regularly, showing that Bluetooth is a suitable proxy for detecting individuals in front of the screen.

Table 1: Number of Bluetooth devices observed at different frequencies over a six month sample period.
Device Type   Unique Samples   Devices
Occasional    < 10             135
Frequent      10 - 1000        70
Persistent    > 1000           6

1 Devices must be in discovery mode to be detectable.

In order to achieve a scalable and extendable solution a multi-agent systems design philosophy is adopted whereby a number of different agent types interact (see figure 2). The interactions of these agents are implemented through a web services protocol2, and they constitute a decentralised marketplace that allocates advertising space in an efficient and timely manner. In more detail, the responsibilities of each agent type are:
Bluetooth Device Detection Agent: This agent monitors the environment in the vicinity of a BluScreen display and determines the number and identity of any Bluetooth devices that are close by. It keeps historical records of the arrival and departure of Bluetooth devices, and makes this information available to advertising agents as requested.
Marketplace Agent: This agent facilitates the sale of advertising space to the advertising agents.
A single marketplace agent represents each BluScreen display, and access to this screen is divided into discrete advertising cycles of fixed duration. Before the start of each advertising cycle, the marketplace agent holds a sealed-bid auction (see section 4 for more details). The winner of this auction is allocated access to the display during the next cycle.
Advertising Agent: This agent represents a single advert and is responsible for submitting bids to the marketplace agent in order that it may be allocated advertising cycles, and thus, display its advert to users. It interacts with the device detection agent in order to collect information regarding the number and identity of users who are currently in front of the display. On the basis of this information, its past experiences, and its bidding strategy, it calculates the value of the bid that it should submit to the marketplace agent.
Thus, having described the prototype BluScreen system, we next go on to describe the details of the auction mechanism that we consider in this work, and then the advanced bidding agent that bids within this auction.
4. THE AUCTION MECHANISM
As described above, BluScreen is designed to efficiently allocate advertising cycles in a distributed and timely manner. Thus, one-shot sealed bid auctions are used as the market mechanism of the marketplace agent. In previous work, each advertising agent was assumed to have an externally derived estimate of the value of exposing an advert to a user.
Under this assumption, a second-price sealed-bid auction was shown to be effective, since advertising agents have a simple strategy of truthfully bidding their valuation in each auction [8].

Figure 2: The BluScreen agent architecture for a single display. (1) Device presence is detected; (2) advertising agents bid based on predicted future device presence; (3) the winning agent displays its advert on the screen.

However, as described earlier, in this paper we consider the more general setting in which it is impossible to determine an a priori valuation for exposing an advert to a single user. This may be because the BluScreen installation is within a private organisation where what is being advertised (e.g. news items or forthcoming events) has no monetary value, or it may be a new commercial installation where limited market experience makes estimating such a valuation impossible. In the absence of such a valuation, the attractive economic properties of the second-price auction cannot be achieved in practice, and thus, in our work there is no need to limit our attention to the second-price auction. Indeed, since these auctions are actually extremely rare within real world settings [10], in this work we consider the more widely adopted first-price auction since this increases the applicability of our results.
Thus, in more detail, we consider an instance of a BluScreen installation with a single display screen that is managed by a single marketplace agent3.

2 This is implemented on a distributed Mac OS X based system using the Bonjour networking protocol for service discovery.
We consider that access to the display screen is divided into discrete advertising cycles, each of length tc, and a first-price sealed-bid auction is held immediately prior to the start of each advertising cycle. The marketplace agent announces the start and deadline of the auction, and collects sealed bids from each advertising agent. At the closing time of the auction the marketplace agent announces to all participants and observers the amount of the winning bid, and informs the winning advertising agent that it was successful (the identity of the winning advertising agent is not announced to all observers). In the case that no bids are placed within any auction, a default advert is displayed.

Footnote 3: This assumption of having a single BluScreen instance is made to simplify our task of validating the correctness and the efficiency of the proposed mechanism and strategy, and generalising these results to the case of multiple screens is the aim of our future work.

Having described the market mechanism that the marketplace agent implements, we now go on to describe and evaluate an advanced bidding strategy for the advertising agents to adopt.

5. ADVANCED BIDDING STRATEGY
As described above, we consider the case that the advertising agents do not have an externally derived estimate of the value of exposing the advert to a single user. Rather, they have a constrained budget, B, and a limited period of interest during which they wish to display their advert. Their goal is then to find the appropriate amount to bid within each auction in this period, in order to maximise the exposure of their advert.

In attempting to achieve this goal the advertising agent is faced with a high level of uncertainty about future events. It will be uncertain of the number of users who will be present during any advertising cycle, since even if the number of users currently present is known, some may leave before the advert commences, and others may arrive. Moreover, the amount that must be bid to ensure that an auction is won is uncertain, since it depends on the number and behaviour of the competing advertising agents.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 265

Thus, we enable the agent to use its observations of the arrival and departure of users to build a probabilistic model, based upon independent Poisson processes, that describes the number of users who are likely to be exposed to any advert. In addition, we enable the agent to observe the outcome of previous advertising cycle auctions, and to use the observations of the closing price, and the success or otherwise of the bids that it itself submitted, to build a probabilistic model of the bid required to win the auction. The agent then uses these two models to calculate its expected utility in each advertising cycle, and in turn, to determine the optimal sequence of bids that maximises this utility given its constrained budget. Having calculated this sequence of bids, the first bid in the sequence is actually used in the auction for the next advertising cycle. However, at the close of this cycle, the process is repeated with a new optimal sequence of bids being calculated in order to take account of what actually happened in the preceding auction (i.e. whether the bid was successful or not, and how many users arrived or departed).

Thus, in the next three subsections we describe these two probabilistic models, and their application within the bidding strategy of the advertising agent.

5.1 Predicting the Number of Users
In order to predict the number of users that will be present in any future advertising cycle, it is necessary to propose a probabilistic model for the behaviour of the users.
Thus, our advanced bidding strategy assumes that their arrivals and departures are determined by two independent Poisson processes (see footnote 4) with arrival rate, λa, and departure rate, λd. This represents a simple model that is commonly applied within queuing theory (see footnote 5) [5], yet is one that we believe well describes the case where BluScreen displays are placed in communal areas where people meet and congregate. Given the history of users' arrivals and departures obtained from the device detection agent, the advertising agent makes a maximum likelihood estimate of the values of λa and λd.

In more detail, if the advertising agent has observed n users arriving within a time period t, then the maximum likelihood estimate for the arrival rate λa is simply given by:

  λa = n / t    (1)

Likewise, if an agent observes n users each with a duration of stay of t1, t2, ..., tn time periods, then the maximum likelihood estimate for the departure rate λd is given by:

  1/λd = (1/n) Σ_{i=1}^{n} ti    (2)

Footnote 4: Given a Poisson process with rate parameter λ, the number of events, n, within an interval of time t is given by P(n) = e^{−λt} (λt)^n / n!. In addition, the probability of having to wait a period of time, t, before the next event is determined by P(t) = λ e^{−λt}.

Footnote 5: Note however that in queuing theory it is typically the arrival rate and service times of customers that are modelled as Poisson processes. Our users are not actually modelled as a queue, since the duration of their stay is independent of that of the other users.

[Figure 3: Example showing how to predict the number of users who see an advert shown in an advertising cycle of length tc, commencing at time t in the future: (i) the n users initially present, (ii) the λa·t users expected to arrive before the cycle starts, and (iii) the λa·tc users expected to arrive during the cycle.]

In environments where these rates are subject to change, the agent can use a limited time window over which observations are used to estimate these rates. Alternatively, in situations where cyclic changes in these rates are likely to occur (i.e. changing arrival and departure rates at different times of the day, as may be seen in areas where commuters pass through), the agent can estimate separate values over each hour-long period.

Having estimated the arrival and departure rates of users, and knowing the number of users who are present at the current time, the advertising agent is then able to predict the number of users who are likely to be present in any future advertising cycle (see footnote 6). Thus, we consider the problem of predicting this number for an advertising cycle of duration tc that starts at a time t in the future, given that n users are currently present (see figure 3). This number will be composed of three factors: (i) the fraction of the n users that are initially present who do not leave in the interval, 0 ≤ τ < t, before the advertising cycle commences, (ii) users that actually arrive in the interval, 0 ≤ τ < t, and are still present when the advertising cycle actually commences, and finally, (iii) users that arrive during the course of the advertising cycle, t ≤ τ < t + tc.

Now, considering case (i) above, the probability of one of the n users still being present when the advertising cycle starts is given by ∫_t^∞ λd e^{−λd τ} dτ = e^{−λd t}.
Thus we expect n e^{−λd t} of these users to be present. In case (ii), we expect λa t new users to arrive before the advertising cycle commences, and the probability that any of these will still be there when it actually does so is given by (1/t) ∫_0^t e^{−λd (t−τ)} dτ = (1/(λd t)) (1 − e^{−λd t}). Thus we expect (λa/λd) (1 − e^{−λd t}) of these users to be present. Finally, in case (iii) we expect λa tc users to arrive during the course of the advertising cycle. Thus, the combination of these three factors gives an expression for the expected number of users who will be present within an advertising cycle of length tc, commencing at time t in the future, given that there are n users currently present:

  N_{n,t} = n e^{−λd t} + (λa/λd) (1 − e^{−λd t}) + λa tc    (3)

Note that as t increases the results become less dependent upon the initial number of users, n. The mean number of users present at any time is simply λa/λd, and the mean number of users exposed to an advert in any advertising cycle is given by λa (tc + 1/λd).

Footnote 6: Note that we do not require a user to be present for the entire advertising cycle in order to be counted as present.

5.2 Predicting the Probability of Winning
In addition to estimating the number of users who will be present in any advertising cycle, an effective bidding agent must also be able to predict the probability of it winning an auction given that it submits any specified bid. This is a common problem within bidding agents, and approaches can generally be classified as game theoretic or decision theoretic. Since our advertising agents are unaware of the number or identity of the competing advertising agents, the game theoretic approach is precluded.
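Stepping back briefly, the user model of section 5.1 (equations 1-3) can be collected into a short sketch. The function and variable names are our own; the paper specifies only the formulas.

```python
import math

def estimate_rates(n_arrivals, period, stay_durations):
    """Maximum-likelihood estimates of equations 1 and 2: the arrival
    rate is arrivals per unit time, and the departure rate is the
    reciprocal of the mean duration of stay."""
    lam_a = n_arrivals / period
    lam_d = len(stay_durations) / sum(stay_durations)
    return lam_a, lam_d

def expected_users(n, t, t_c, lam_a, lam_d):
    """Expected number of users exposed during a cycle of length t_c
    starting at time t, given n users now present (equation 3)."""
    return (n * math.exp(-lam_d * t)                        # (i) initial users who stay
            + (lam_a / lam_d) * (1 - math.exp(-lam_d * t))  # (ii) arrive, then remain
            + lam_a * t_c)                                  # (iii) arrive during the cycle
```

With the paper's later example values λa = 1/120, λd = 1/480, four users present and a 120 s cycle starting now, `expected_users(4, 0, 120, 1/120, 1/480)` gives 5 expected users: the 4 already present plus λa·tc = 1 arrival during the cycle.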
Thus, we take a decision theoretic approach similar to that adopted within continuous double auctions, where bidding agents estimate the market price of goods by observing transaction prices [4].

Thus, our advertising agents use a parameterised function to describe the probability of winning the auction given any submitted bid, P(b). This function must have support [0, ∞), since bids must be positive. In addition, we expect it to exhibit an 's'-shaped curve whereby the probability of winning an auction is small when the submitted bid is very low, the probability is close to one when the bid is very high, and there is a transition point that characterises the change from a losing to a winning bid. To this end, we use the cumulative form of the gamma distribution for this function:

  P(b) = γ(k, b/θ) / Γ(k)    (4)

where Γ(k) is the standard gamma function, and γ(k, b/θ) is the incomplete gamma function. This function has the necessary properties described above, and has two parameters, k and θ. The transition point where P(b) = 0.5 is given by kθ, and the sharpness of the transition is described by kθ². In figure 4 we show examples of this function for three different values of k and θ.

The advertising agent chooses the most appropriate values of k and θ by fitting the probability function to observations of previous auctions. An observation is a pair {bi, oi} consisting of the bid, bi, and an auction outcome, oi. Each auction generates at least one pair in which bi is equal to the closing price of the auction, and oi = 1. In addition, another pair is generated for each unsuccessful bid submitted by the advertising agent itself, and in this case oi = 0.
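The cumulative-gamma model of equation 4 can be evaluated in closed form when the shape parameter k is an integer (the Erlang case, which covers the examples k = 5, 10, 20 shown in figure 4). The sketch below also fits (k, θ) to observed (bid, outcome) pairs by minimising the squared error; the coarse grid search is our own simplification, standing in for the gradient descent the paper actually uses.

```python
import math

def win_probability(b, k, theta):
    """P(b) = gamma(k, b/theta) / Gamma(k): the regularised lower
    incomplete gamma function, in closed form for integer shape k."""
    x = b / theta
    return 1.0 - math.exp(-x) * sum(x**i / math.factorial(i) for i in range(k))

def fit_win_model(observations):
    """Least-squares fit of (k, theta) to (bid, outcome) pairs, by a
    coarse grid search (a simple substitute for gradient descent)."""
    best_err, best_k, best_theta = float("inf"), None, None
    for k in range(1, 30):
        for theta in [0.25 * s for s in range(1, 81)]:
            err = sum((o - win_probability(b, k, theta)) ** 2
                      for b, o in observations)
            if err < best_err:
                best_err, best_k, best_theta = err, k, theta
    return best_k, best_theta
```

For instance, `win_probability(10, 10, 1.0)` is close to 0.5, consistent with the transition point being at kθ = 10.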
Thus, having collected N such pairs (see footnote 7), the agent finds the values of k and θ by evaluating:

  argmin_{k,θ} Σ_{i=1}^{N} (oi − γ(k, bi/θ) / Γ(k))²    (5)

This expression cannot be minimised analytically, but a solution can simply be found using a numerical gradient descent method, whereby the values of k and θ are initially estimated using their relationship to the transition point described above. The gradient of this expression is then numerically evaluated at these points, and new estimates of k and θ are calculated by making a fixed-size move in the direction of maximum gradient. This process is repeated until k and θ have converged to an appropriate degree of accuracy.

Footnote 7: In the case that no unsuccessful bids have been observed, there is no evidence of where the transition point between successful and unsuccessful bids is likely to occur. Thus, in this case, an additional pair with value {α min(b1 . . . bn), 0} is automatically created. Here α ∈ [0, 1] determines how far below the lowest successful bid the advertising agent believes the transition point to be. We have typically used α = 0.5 within our experiments.

[Figure 4: Cumulative gamma distribution representing the probability of winning an auction, P(b), plotted against the bid b (θ = 1 and k = 5, 10 & 20).]

5.3 Expected Utility of an Advertising Cycle
The goal of the advertising agent is to gain the maximum exposure for its advert given its constrained budget. We define the utility of any advertising cycle as the expected number of users who will see the advert for the first time during that cycle, and hence, we explicitly assume that no additional utility is derived by showing the advert to any user more than once (see footnote 8). Thus, we can use the results of the previous two sections to calculate the expected utility of each advertising cycle remaining within the advertising agent's period of interest.

Footnote 8: As noted before, we assume that a user has seen the advert if they are present during any part of the advertising cycle, and we do not differentiate between users who see the entire advert, or users who see a fraction of it.

In the first advertising cycle this is simply determined by the probability of the advertising agent winning the auction, given that it submits a bid b1, and the number of users who are currently in front of the BluScreen display, but have not seen the advert before, is n. Thus, the expected utility of this advertising cycle is simply described by:

  u1 = P(b1) N_{n,0}    (6)

Now, in the second advertising cycle, the expected utility will clearly depend on the outcome of the auction for the first. If the first auction was indeed won by the agent, then there will be no users who have yet to see the advert present at the start of the second advertising cycle. Thus, in this case, the expected number of new users who will see the advert in the second advertising cycle is described by N_{0,0} (i.e. only newly arriving users will contribute any utility). By contrast, if the first auction was not won by the agent, then the expected number of users who have yet to see the advert is given by N_{n,tc}, where tc is the length of the preceding advertising cycle (i.e. exactly the case described in section 5.1 where there are n users initially present and the advertising cycle starts at a time tc in the future). Thus, the expected utility of the second advertising cycle is given by:

  u2 = P(b2) [P(b1) N_{0,0} + (1 − P(b1)) N_{n,tc}]    (7)

We can generalise this result by noting that the number of users expected to be present within any future advertising cycle will depend on the number of cycles since an auction was last won (since at this point the number of users who are present but have not seen the advert must be equal to zero). Thus, we must sum over all possible ways in which this can occur, and weight each by its probability. Hence, the general case for any advertising cycle is described by the rather complex expression:

  ui = P(bi) [ Σ_{j=1}^{i−1} N_{0,(i−j−1)tc} P(bj) Π_{m=j+1}^{i−1} (1 − P(bm)) + N_{n,(i−1)tc} Π_{m=1}^{i−1} (1 − P(bm)) ]    (8)

Thus, given this expression, the goal of the advertising agent is to calculate the sequence of bids over the c remaining auctions, such that the total expected utility is maximised, whilst ensuring that the remaining budget, B, is not exceeded:

  argmax_{b1...bc} Σ_{i=1}^{c} ui   such that   Σ_{i=1}^{c} bi = B    (9)

[Figure 5: Total expected utility of the advertising agent over a continuous range of values of b1 (shown as b1/B) for a number of discrete values of budget, B = 5, 10, 20, 30 and 40, when there are just two auction cycles.]

Having calculated this sequence, a bid of b1 is submitted in the next auction. Once the outcome of this auction is known, the process repeats, with a new optimal sequence of bids being calculated for the remaining advertising cycles of the agent's period of interest.

5.4 Optimal Sequence of Bids
Solving for the optimal sequence of bids expressed in equation 9 cannot be performed analytically. Instead we develop a numerical routine to perform this maximisation.
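Equation 8 translates directly into code. The sketch below repeats the user model of equation 3 so that it is self-contained; `win_p` is any fitted win-probability function, and all names are our own assumptions.

```python
import math

def expected_users(n, t, t_c, lam_a, lam_d):
    """Equation 3, repeated here for self-containment."""
    return (n * math.exp(-lam_d * t)
            + (lam_a / lam_d) * (1 - math.exp(-lam_d * t))
            + lam_a * t_c)

def cycle_utilities(bids, n, t_c, lam_a, lam_d, win_p):
    """Expected utility of each advertising cycle (equations 6-8): the
    expected number of users seeing the advert for the first time,
    summed over every possible 'last auction won' history."""
    p = [win_p(b) for b in bids]
    utils = []
    for i in range(len(bids)):          # cycle i (0-based)
        u = 0.0
        for j in range(i):              # the last win was cycle j ...
            prob = p[j]
            for m in range(j + 1, i):   # ... and every later auction was lost
                prob *= 1.0 - p[m]
            u += prob * expected_users(0, (i - j - 1) * t_c, t_c, lam_a, lam_d)
        none = 1.0                      # probability no auction won so far
        for m in range(i):
            none *= 1.0 - p[m]
        u += none * expected_users(n, i * t_c, t_c, lam_a, lam_d)
        utils.append(p[i] * u)
    return utils
```

For two cycles this reduces exactly to equations 6 and 7, and the sum of the returned values is the objective maximised in equation 9.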
However, it is informative to initially consider the simple case of just two auctions.

5.4.1 Two Auction Example
In this case the expected utility of the advertising agent is simply given by u1 + u2 (as described in equations 6 and 7), and the bidding sequence is solely dependent on b1 (since b2 = B − b1). Thus, we can plot the total expected utility against b1 and graphically determine the optimal value of b1 (and thus also b2).

To this end, figure 5 shows an example calculated using parameter values λa = 1/120, λd = 1/480 and tc = 120. In this case, we assume that k = 10 and θ = 1, and thus, given that kθ describes the midpoint of the cumulative gamma distribution, a bid of 10 represents a 50% chance of winning any auction (i.e. P(10) = 0.5). In addition, we assume that n = λa/λd = 4, and thus the initial number of users present is equal to the mean number that we expect to find present at any time. The plot indicates that when the budget is small, the maximum utility is achieved at the extreme values of b1. This corresponds to bidding in just one of the two auctions (i.e. b1 = 0 and b2 = B, or b1 = B and b2 = 0). However, as the budget increases, the plot passes through a transition whereby the maximum utility occurs at the midpoint of the x-axis, corresponding to bidding equally in both auctions (i.e. b1 = b2 = B/2). This is simply understood by the fact that continuing to allocate the budget to a single auction results in diminishing returns as the probability of actually winning this auction approaches one.

In this case, the plot is completely symmetrical, since the number of users present at the start is equal to its expected value (i.e. n = λa/λd). If however n < λa/λd, the plot is skewed such that when the budget is small, it should be allocated to the second auction (since more users are expected to arrive before this advertising cycle commences). Conversely, when n > λa/λd the entire budget should be allocated to the first auction (since the users who are currently present are likely to depart in the near future). However, in both cases, a transition occurs whereby given sufficient budget it is preferable to allocate the budget evenly between both auctions (see footnote 9).

Footnote 9: In fact, one auction is still slightly preferred, but the difference in expected utility between this and an even allocation is negligible.

Figure 6: Stochastic optimisation algorithm to calculate the optimal sequence of bids in the general case of multiple auctions:

  temp ← 1
  rate ← 0.995
  b_old ← initial random allocation
  U_old ← Evaluate(b_old)
  WHILE temp > 0.0001
      i, j ← random integer indices within b
      t ← random real number between 0 and b_i
      b_new ← b_old
      b_new_i ← b_old_i − t
      b_new_j ← b_old_j + t
      U_new ← Evaluate(b_new)
      IF rand < exp((U_new − U_old) / temp) THEN
          b_old ← b_new
          U_old ← U_new
      ENDIF
      temp ← temp × rate
  ENDWHILE

5.4.2 General Case
In general, the behaviour seen in the previous example characterises the optimal bidding behaviour of the advertising agent. If there is sufficient budget, bidding equally in all auctions results in the maximum expected utility. However, typically this is not possible, and thus utility is maximised by concentrating what budget is available into a subset of the available auctions. The choice of this subset is determined by a number of factors. If there are very few users currently present, it is optimal to allocate the budget to later auctions in the expectation that more users will arrive. Conversely, if there are many users present, a significant proportion of the budget should be allocated to the first auction to ensure that it is indeed won, and these users see the advert. Finally, since no utility is derived by showing the advert to a single user more than once, the budget should be allocated such that there are intervals between showings of the advert, in order that new users may arrive.

Now, due to the complex form of the expression for the expected utility of the agent (shown in equation 8), it is not possible to analytically calculate the optimal sequence of bids. However, the inverse problem (that of calculating the expected utility for any given sequence of bids) is easy. Thus, we can use a stochastic optimisation routine based on simulated annealing to solve the maximisation problem. This algorithm starts by assuming some initial random allocation of bids (normalised such that the total of all the bids is equal to the budget B). It then makes small adjustments to this allocation by randomly transferring budget from one auction to another. If this transfer results in an increase in expected utility, then it is accepted. If it results in a decrease in expected utility, it might still be accepted, but with a probability that is determined by a temperature parameter. This temperature parameter is annealed such that the probability of accepting such transfers decreases over time. In figure 6 we present this algorithm in pseudo-code.

6. EVALUATION
In order to evaluate the effectiveness of the advanced bidding strategy developed within this paper we compare its performance to three alternative mechanisms. One of these mechanisms represents a simple alternative bidding strategy for the advertising agents, whilst the other two are centralised allocation mechanisms that represent
the upper and lower bounds to the overall performance of the system. In more detail, the four mechanisms that we compare are:

Random Allocation: Rather than implementing the auction mechanism, the advertising cycle is randomly allocated to one of the advertising agents.

Simple Bidding Strategy: We implement the full auction mechanism but with a population of advertising agents that employ a simple bidding strategy. These advertising agents do not attempt to model the users or the auction environment in which they bid, but rather, they simply evenly allocate their remaining budget over the remaining advertising cycles.

Advanced Bidding Strategy: We implement the full auction mechanism with a population of advertising agents using the probabilistic models and the bidding strategy described here.

Optimal Allocation: Rather than implementing the auction mechanism, the advertising cycle is allocated to the advertising agent that will derive the maximum utility from it, given perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.

[Figure 7: Comparison of four different allocation mechanisms (random allocation, simple bidding strategy, advanced bidding strategy, and optimal allocation) for allocating advertising cycles to advertising agents, showing mean normalised exposure against the number of advertising agents. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.]

Using these four alternative allocation mechanisms, we ran repeated simulations of two hours of operation of the entire BluScreen environment for a default set of parameters, whereby the arrival and departure rates of the users are given by λa = 1/120 s and λd = 1/480 s, and the length of an advertising cycle is 120 s.
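The stochastic optimisation routine of figure 6 (section 5.4.2), which computes the bid sequences used by the advanced bidding strategy evaluated here, translates almost line for line into a runnable sketch. `evaluate` is any function returning the total expected utility Σ ui of a bid sequence; the guard inside `math.exp` (avoiding overflow on large improvements) and the parameter names are our own additions.

```python
import math
import random

def optimise_bids(budget, n_cycles, evaluate, rate=0.995, t_min=1e-4):
    """Simulated annealing over bid sequences (figure 6).  Bids stay
    non-negative and always sum to `budget`, as required by equation 9."""
    raw = [random.random() for _ in range(n_cycles)]
    total = sum(raw)
    bids = [budget * r / total for r in raw]   # initial random allocation
    u = evaluate(bids)
    temp = 1.0
    while temp > t_min:
        i = random.randrange(n_cycles)
        j = random.randrange(n_cycles)
        t = random.uniform(0.0, bids[i])       # transfer budget from i to j
        cand = list(bids)
        cand[i] -= t
        cand[j] += t
        u_new = evaluate(cand)
        # improvements are always accepted; worsenings with probability
        # exp((u_new - u) / temp), which shrinks as temp is annealed
        if random.random() < math.exp(min(0.0, (u_new - u) / temp)):
            bids, u = cand, u_new
        temp *= rate
    return bids, u
```

Because every move conserves the total, the budget constraint of equation 9 holds by construction throughout the search.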
Each advertising agent is assigned an advert with a period of interest drawn from a Poisson distribution with a mean of 8 advertising cycles, and these agents are initially allocated a budget equal to 10 times their period of interest. For each simulation run, we measure the mean normalised exposure of each advert. That is, the fraction of users who were detected by the BluScreen display during the period of interest of the advertising agent who were actually exposed to the agent's advert. Thus a mean normalised exposure of 1 indicates that the agent managed to expose its advert to all of the users who were present during its period of interest (and a mean normalised exposure of 0 means that no users were exposed to the advert).

Figure 7 shows the results of this experiment. We first observe the general result that as the number of advertising agents increases, and thus the competition between them increases, the mean normalised exposure of all allocation mechanisms decreases. We then observe that in all cases, there is no statistically significant improvement in using the simple bidding strategy compared to random allocation (p > 0.25 in Student's t-test). Since this simple bidding strategy does not take account of the number of users present, and in general, simply increases its bid price in each auction until it does in fact win one, this is not unexpected. However, in all cases the advanced bidding strategy does indeed significantly outperform the simple bidding agent (p < 0.0005 in Student's t-test), and its performance is within 7.5% of that of the optimal allocation that has perfect knowledge of the number of users who will arrive and depart in all future advertising cycles.

In addition, we present results of experiments performed over a range of parameter values, and also with a mixed population of advertising agents using both the advanced and simple bidding strategies. This is an important scenario, since advertisers may wish to supply their own bidding agents, and thus, a homogeneous population is not guaranteed. In each case, keeping all other parameters fixed, we varied one parameter, and these results are shown in figure 8. In general, we see similar trends as before. Increasing the departure rate causes a decrease in the mean normalised exposure, since advertising agents have fewer opportunities to expose users to their adverts. Increasing the period of interest of each agent decreases the mean normalised exposure, since more advertising agents are now competing for the same users. Finally, increasing the arrival rate of the users causes the results of the simple and advanced bidding strategies to approach one another, since the variance in the number of users who are present during any advertising cycle decreases, and thus, modelling their behaviour provides less gain. However, in all cases, the advanced bidding strategy significantly outperforms the simple one (p < 0.0005 in Student's t-test). On average, we observe that advertising agents who use the advanced bidding strategy are able to expose their adverts to 25% more users than those using the simple bidding strategy.

Finally, we show that a rational advertising agent, who has a choice of bidding strategy, would always opt to use the advanced bidding strategy over the simple bidding strategy, regardless of the composition of the population that it finds itself in. Figure 9 shows the average normalised exposure of the advertising agents when the population is composed of different fractions of the two bidding strategies. In each case, the advanced bidding strategy shows a significant gain in performance compared to the simple bidding strategy (p < 0.0005 in Student's t-test), and thus, gains improved exposure over all population compositions.

7.
CONCLUSIONS
In this paper, we presented an advanced bidding strategy for use by advertising agents within the BluScreen advertising system. This bidding strategy enabled advertising agents to model and predict the arrival and departure of users, and also to model their success within a first-price sealed-bid auction by observing both the bids that they themselves submitted and the winning bid. The expected utility, measured as the number of users to whom the advertising agent exposes its advert, was shown to depend on these factors, and resulted in a complex expression where the expected utility of each auction depended on the success or otherwise of earlier auctions. We presented an algorithm based upon simulated annealing to solve for the optimal bidding strategy, and in simulation, this bidding strategy was shown to significantly outperform a simple bidding strategy that had none of these features. Its performance closely approached that of a central optimal allocation, with perfect knowledge of the arrival and departure of users, despite the uncertain environment in which the strategy must operate.

[Figure 8: Comparison of an evenly mixed population of advertising agents using simple and advanced bidding strategies over a range of parameter settings: (a) departure rate λd, (b) mean period of interest in cycles, and (c) arrival rate λa. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.]

[Figure 9: Comparison of an unevenly mixed population of advertising agents using simple and advanced bidding strategies. Results are averaged over 50 simulation runs and error bars indicate the standard error in the mean.]

Our future work in this area consists of extending this bidding strategy to richer environments where there are multiple interrelated display screens, where maintaining profiles of users allows a richer matching of user to advert, and where alternative auction mechanisms are applied (we are particularly interested in introducing a 'pay per user' auction setting similar to the 'pay per click' auctions employed by internet search websites). This work will continue to be done in conjunction with the deployment of more BluScreen prototypes in order to gain further real-world experience.

8. ACKNOWLEDGEMENTS
The authors would like to thank Heather Packer and Matthew Sharifi (supported by the ALADDIN project - www.aladdinproject.org) for their help in developing the deployed prototype.

9. REFERENCES
[1] A. Amiri and S. Menon. Efficient scheduling of internet banner advertisements. ACM Transactions on Internet Technology, 3(4):334-346, 2003.
[2] S. M. Bohte, E. Gerding, and H. L. Poutre. Market-based recommendation: Agents that compete for consumer attention. ACM Transactions on Internet Technology, 4(4):420-448, 2004.
[3] K. Cheverst, A. Dix, D. Fitton, C. Kray, M. Rouncefield, C. Sas, G. Saslis-Lagoudakis, and J. G. Sheridan. Exploring bluetooth based mobile phone interaction with the hermes photo display. In Proc. of the 7th Int. Conf. on Human Computer Interaction with Mobile Devices & Services, pages 47-54, Salzburg, Austria, 2005.
[4] S. Gjerstad and J. Dickhaut. Price formation in double auctions. Games and Economic Behavior, (22):1-29, 1998.
[5] D. Gross and C. M. Harris. Fundamentals of Queueing Theory. Wiley, 1998.
[6] J. Hightower and G. Borriella. Location systems for ubiquitous computing. IEEE Computer, 34(8):57-66, 2001.
[7] J. F. McCarthy, T. J. Costa, and E. S. Liongosari. Unicast, outcast & groupcast: Three steps toward ubiquitous, peripheral displays. In Proc. of the 3rd Int. Conf. on Ubiquitous Computing, pages 332-345, Atlanta, USA, 2001.
[8] T. R. Payne, E. David, M. Sharifi, and N. R. Jennings. Auction mechanisms for efficient advertisement selection on public displays. In Proc. of the 17th European Conf. on Artificial Intelligence, pages 285-289, Trentino, Italy, 2006.
[9] A. Ranganathan and R. H. Campbell. Advertising in a pervasive computing environment. In Proc. of the 2nd Int. Workshop on Mobile Commerce, pages 10-14, Atlanta, Georgia, USA, 2002.
[10] M. Rothkopf, T. Teisberg, and E. Kahn. Why are Vickrey auctions rare? Journal of Political Economy, 98(1):94-109, 1990.
[11] R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1):91-102, 1992.
", "keywords": "distributed artificial intelligence;decision theoretic approach;bluscreen;stochastic optimisation algorithm;decentralised multi-agent auction mechanism;experimental public advertisement system;independent poisson process;probabilistic model;bluetooth;auction;public display;bid agent;centralised optimal allocation;advanced bidding agent"}
Collaboration Among a Satellite Swarm
Abstract: The paper deals with on-board planning for a satellite swarm via communication and negotiation. We aim at defining individual behaviours that result in a global behaviour that meets the mission requirements. We will present the formalization of the problem, a communication protocol, a solving method based on reactive decision rules, and first results.
1. INTRODUCTION
Much research has been undertaken to increase satellite autonomy, such as enabling satellites to solve by themselves problems that may occur during a mission, to adapt their behaviour to new events and to transfer planning on-board; even if the development cost of such a satellite is increased, there is an increase in performance and mission possibilities [34]. Moreover, the use of satellite swarms - sets of satellites flying in formation or in constellation around the Earth - makes it possible to consider joint activities, to distribute skills and to ensure robustness.
Multi-agent architectures have been developed for satellite swarms [36, 38, 42], but strong assumptions on deliberation and communication capabilities are made in order to build a collective plan.
Mono-agent planning [4, 18, 28] and task allocation [20] are widely studied. In a multi-agent context, agents that build a collective plan must be able to change their goals, reallocate resources and react to environment changes and to the others' choices. A coordination step must be added to the planning step [40, 30, 11]. However, this step needs high communication and computation capabilities. For instance, coalition-based [37], contract-based [35] and all negotiation-based [25] mechanisms need these capabilities, especially in dynamic environments.
In order to relax communication constraints, coordination based on norms and conventions [16] or on strategies [17] is considered.
Norms constrain agents in their decisions in such a way that the possibilities of conflicts are reduced. Strategies are private decision rules that allow an agent to draw benefit from the knowledgeable world without communication. However, communication is still needed in order to share information and build collective conjectures and plans.
Communication can be achieved through a stigmergic approach (via the environment) or through message exchange and a protocol. A protocol defines interactions between agents and cannot be uncoupled from its goal, e.g. exchanging information, finding a trade-off, allocating tasks and so on. Protocols can be viewed as an abstraction of an interaction [9]. They may be represented in a variety of ways, e.g. AUML [32] or Petri nets [23]. As protocols are originally designed for a single goal, some works aim at endowing them with flexibility [8, 26]. However, an agent cannot always communicate with another agent, or the communication possibilities are restricted to short time intervals.
The objective of this work is to use intersatellite connections, called InterSatellite Links or ISL, in an Earth observation constellation inspired from the Fuego mission [13, 19], in order to increase the system reactivity and to improve the mission global return through a hybrid agent approach. At the individual level, agents are deliberative in order to create a local plan, but at the collective level they use normative decision rules in order to coordinate with one another. We will present the features of our problem, a communication protocol, a method for request allocation and, finally, collaboration strategies.
287
978-81-904262-7-5 (RPS) c 2007 IFAAMAS
2. PROBLEM FEATURES
An observation satellite constellation is a set of satellites in various orbits whose mission is to take pictures of various areas on the Earth surface, for example hot points corresponding to volcanos or forest fires. The ground sends the constellation observation requests characterized by their geographical positions, priorities specifying if the requests are urgent or not, the desired dates of observation and the desired dates for data downloading.
The satellites are equipped with a single observation instrument whose mirror can roll to shift the line of sight. A minimum duration is necessary to move the mirror, so requests that are too close together cannot be realized by the same satellite. The satellites are also equipped with a detection instrument pointed forward that detects hot points and generates observation requests on-board.
The constellations that we consider are such that the orbits of the various satellites meet around the poles. A judicious positioning of the satellites in their orbits makes it possible to consider that two (or more) satellites meet in the polar areas, and thus can communicate without ground intervention. Intuitively, intersatellite communication increases the reactivity of the constellation, since each satellite is within direct view of a ground station (and thus can communicate with it) only 10% of the time.
The features of the problem are the following:
- 3 to 20 satellites in the constellation;
- pair communication around the poles;
- no ground intervention during the planning process;
- asynchronous requests with various priorities.
3. A MULTI-AGENT APPROACH
As each satellite is a single entity that is a piece of the global swarm, a multi-agent system is well suited to model satellite constellations [39]. This approach has been developed through the ObjectAgent architecture [38], TeamAgent [31], DIPS [14] or Prospecting ANTS [12].
3.1 Satellite swarm
An observation satellite swarm¹ is a multi-agent system where the requests do not have to be carried out in a fixed order and the agents (the satellites) do not have any physical interaction.
Carrying out a request cannot prevent another agent from carrying out another one, or even the same one. At most, there will be a waste of resources. Formally, a swarm is defined as follows:
Definition 1 (Swarm). A satellite swarm E is a triplet <S, T, Vicinity>:
- S is a set of n agents {s1 . . . sn};
- T ⊆ R+ or N+ is a set of dates with a total order <;
- Vicinity : S × T → 2^S.
In the sequel, we will assume that the agents share a common clock.
For a given agent and a given time, the vicinity relation returns the set of agents with whom it can communicate at that time. As we have seen previously, this relation exists when the agents meet.
¹ This term will designate a satellite constellation with InterSatellite Links.
3.2 Requests
Requests are the observation tasks that the satellite swarm must achieve. As we have seen previously, the requests are generated both on the ground and on board. Each agent is allocated a set of initial requests. During the mission, new requests are sent to the agents by the ground, or agents can generate new requests by themselves. Formally, a request is defined as follows:
Definition 2 (Request). A request R is a tuple <idR, pos(R), prio(R), tbeg(R), bR>:
- idR is an identifier;
- pos(R) is the geographic position of R;
- prio(R) ∈ R is the request priority;
- tbeg(R) ∈ T is the desired date of observation;
- bR ∈ {true, false} specifies if R has been realized.
The priority prio(R) of a request represents how important it is for the user, namely the request sender, that the request should be carried out. Thus a request with a high priority must be realized at all costs.
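For concreteness, the tuples of Definitions 1 and 2 can be sketched as plain records. This is an illustrative Python sketch, not the authors' implementation (the experiments in Section 7 use Java/JADE); all class and field names are assumptions mirroring the tuples above, and meetings are listed explicitly to stand in for the Vicinity relation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Request:
    """A request R = <idR, pos(R), prio(R), tbeg(R), bR> (Definition 2)."""
    rid: str                   # identifier idR
    pos: tuple                 # geographic position pos(R)
    prio: int                  # priority prio(R), 1 (lowest) to 5 (highest)
    t_beg: float               # desired date of observation tbeg(R)
    realized: bool = False     # bR: has R been realized?

@dataclass
class Swarm:
    """A swarm E = <S, T, Vicinity> (Definition 1); meetings[t] is a set
    of unordered agent pairs in contact at date t."""
    agents: list
    meetings: dict = field(default_factory=dict)

    def vicinity(self, agent, t):
        """Vicinity(agent, t): agents that `agent` can communicate with at t."""
        pairs = self.meetings.get(t, set())
        return ({b for (a, b) in pairs if a == agent}
                | {a for (a, b) in pairs if b == agent})
```

With pair communication around the poles, each entry of `meetings` would typically contain a single pair, matching the problem features above.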
In our application, priorities are comprised between 1 and 5 (the highest).
In the sequel, we will note Rt_si the set of the requests that are known by agent si at time t ∈ T.
For each request R in Rt_si, there is a cost value, noted cost_si(R) ∈ R, representing how far from the desired date of observation tbeg(R) an agent si can realize R. So, the more an agent can carry out a request in the vicinity of the desired date of observation, the lower the cost value.
3.3 Candidacy
An agent may have several intentions about a request, i.e. for a request R, an agent si may:
- propose to carry out R: si may realize R;
- commit to carry out R: si will realize R;
- not propose to carry out R: si may not realize R;
- refuse to carry out R: si will not realize R.
We can notice that these four propositions are modalities of proposition C: "si realizes R":
- ◇C means that si proposes to carry out R;
- □C means that si commits to carry out R;
- ¬◇C means that si does not propose to carry out R;
- ¬□C means that si refuses to carry out R.
More formally:
Definition 3 (Candidacy). A candidacy C is a tuple <idC, modC, sC, RC, obsC, dnlC>:
- idC is an identifier;
- modC ∈ {◇, □, ¬◇, ¬□} is a modality;
- sC ∈ S is the candidate agent;
- RC ∈ Rt_sC is the request on which sC candidates;
- obsC ∈ T is the realization date proposed by sC;
- dnlC ∈ T is the download date.
3.4 Problem formalization
Our problem is then the following: we would like each agent to build request allocations (i.e. a plan) dynamically, such that if these requests are carried out their number is the highest possible or the global cost is minimal. More formally,
Definition 4 (Problem). Let E be a swarm. Agents si in E must build a set {At_s1 . . . At_sn}, where At_si ⊆ Rt_si, such that:
- |⋃_{si∈S} At_si| is maximal;
- ∑_{si∈S} ∑_{R∈At_si} prio(R) is maximal;
- ∑_{si∈S} ∑_{R∈At_si} cost_si(R) is minimal.
Let us notice that these criteria are not necessarily compatible.
As the choices of an agent will be influenced by the choices of the others, it is necessary that the agents should reason on a common knowledge about the requests. It is thus necessary to set up an effective communication protocol.
4. COMMUNICATION PROTOCOL
Communication is commonly associated with cooperation. Deliberative agents need communication to cooperate, whereas it is not necessarily the case for reactive agents [2, 41].
Gossip protocols [22, 24], or epidemic protocols, are used to share knowledge with multicast. Each agent selects a set of agents at a given time in order to share information. The speed of information transmission is contingent upon the length of the discussion round.
4.1 The corridor metaphor
The suggested protocol is inspired from what we name the corridor metaphor, which represents the satellite swarm problem well. Various agents go to and fro in a corridor where objects to collect appear from time to time. Two objects that are too close to each other cannot be collected by the same agent, because the action takes some time and an agent cannot stop its movement. In order to optimize the collection, the agents can communicate when they meet.
[Figure 1: Time t. Figure 2: Time t'. Two snapshots of the corridor example with agents s1, s2, s3 and object A; only the captions survive extraction.]
Example 1. Let us suppose three agents, s1, s2, s3, and an object A to be collected. At time t, s1 did not collect A and s2 does not know that A exists. When s1 meets s2, it communicates the list of the objects it knows, that is to say A.
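The three criteria of Definition 4 can be made concrete as a scoring function over a candidate set of per-agent allocations. A minimal Python sketch with illustrative names; the lexicographic combination of the three criteria is an assumption, since the paper only notes that they are not necessarily compatible:

```python
def evaluate(allocations, priorities, costs):
    """Score per-agent allocations {A_si} against Definition 4's criteria.

    allocations: agent -> set of request ids (the sets A_si)
    priorities:  request id -> prio(R)
    costs:       agent -> {request id -> cost_si(R)}

    Returns (coverage, total priority, -total cost), so that plain tuple
    comparison prefers more covered requests, then higher total priority,
    then lower total cost.  This lexicographic weighting is an assumption.
    """
    covered = set().union(*allocations.values()) if allocations else set()
    total_prio = sum(priorities[r] for a in allocations for r in allocations[a])
    total_cost = sum(costs[a][r] for a in allocations for r in allocations[a])
    return (len(covered), total_prio, -total_cost)
```

Note that coverage counts each request once even when several agents plan it, whereas priority and cost sums count every redundancy: this is exactly the tension between redundancy and resource waste discussed in Section 5.2.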
s2 now believes that A exists and prepares to collect it. It is not certain that A is still there, because another agent may have passed before s2, but s2 can take it into account in its plan.
At time t', s3 collects A. In the vicinity of s2, s3 communicates its list of objects, and A is not in the list. As both agents meet in a place where it is possible for s3 to have collected A, the object would have been in the list if it had not been collected. s2 can thus believe that A does not exist anymore and can withdraw it from its plan.
4.2 Knowledge to communicate
In order to build up their plans, agents need to know the current requests and the other agents' intentions. For each agent, two kinds of knowledge to maintain are defined:
- requests (Definition 2);
- candidacies (Definition 3).
Definition 5 (Knowledge). Knowledge K is a tuple <data(K), SK, tK>:
- data(K) is a request R or a candidacy C;
- SK ⊆ S is the set of agents knowing K;
- tK ∈ T is a temporal timestamp.
In the sequel, we will note Kt_si the knowledge of agent si at time t ∈ T.
4.3 An epidemic protocol
From the corridor metaphor, we can define a communication protocol that benefits from all the communication opportunities. An agent notifies any change within its knowledge, and each agent must propagate these changes to its vicinity, who update their knowledge bases and reiterate the process. This protocol is a variant of epidemic protocols [22] inspired from the work on overhearing [27].
Protocol 1 (Communication). Let si be an agent in S. ∀ t ∈ T:
- ∀ sj ∈ Vicinity(si, t), si executes:
1. ∀ K ∈ Kt_si such that sj ∉ SK:
a. si communicates K to sj;
b. if sj acknowledges receipt of K, SK ← SK ∪ {sj}.
- ∀ K ∈ Kt_si received by sj at time t:
1. sj updates Kt_sj with K;
2. sj acknowledges receipt of K to si.
Two kinds of updates exist for an agent:
- an internal update from a knowledge modification by the agent itself;
- an external update from received knowledge.
For an internal update, updating K depends on data(K): a candidacy C is modified when its modality changes, and a request R is modified when an agent realizes it. When K is updated, the timestamp is updated too.
Protocol 2 (Internal update). Let si ∈ S be an agent. An internal update by si at time t ∈ T is performed:
- when knowledge K is created;
- when data(K) is modified.
In both cases:
1. tK ← t;
2. SK ← {si}.
For an external update, only the most recent knowledge K is taken into account, because timestamps change only when data(K) is modified. If K is already known, it is updated if the content or the set of agents knowing it have been modified. If K is unknown, it is simply added to the agent's knowledge.
Protocol 3 (External update). Let si be an agent and 𝒦 the knowledge transmitted by agent sj. ∀ K ∈ 𝒦, the external update at time t ∈ T is defined as follows:
1. if ∃ K' ∈ Kt_si such that id_data(K) = id_data(K') then
a. if tK ≥ tK' then
i. if tK > tK' then SK ← SK ∪ {si}
ii. if tK = tK' then SK ← SK ∪ SK'
iii. Kt_si ← (Kt_si \ {K'}) ∪ {K}
2. else
a. Kt_si ← Kt_si ∪ {K}
b. SK ← SK ∪ {si}
If the incoming information has a more recent timestamp, it means that the receiver agent has obsolete information. Consequently, it replaces the old information with the new one and adds itself to the set of agents knowing K (1.a.i).
If both timestamps are the same, both pieces of information are the same.
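Protocol 3 is essentially a timestamped merge of one received knowledge item into the receiver's base. A minimal Python sketch with illustrative field names (the agents' actual payloads are the requests and candidacies of Definitions 2 and 3):

```python
from dataclasses import dataclass

@dataclass
class Knowledge:
    """K = <data(K), S_K, t_K> (Definition 5)."""
    data_id: str     # identifier of the request or candidacy carried by K
    payload: object  # data(K): the request or candidacy itself
    knowers: set     # S_K: agents known to hold K
    stamp: float     # t_K: timestamp of the last internal update

def external_update(base, incoming, me):
    """Protocol 3 sketch: merge one received Knowledge into `base`
    (a dict data_id -> Knowledge) held by agent `me`."""
    mine = base.get(incoming.data_id)
    if mine is None:
        # Step 2: unknown knowledge is simply added, and we now know it.
        incoming.knowers |= {me}
        base[incoming.data_id] = incoming
    elif incoming.stamp > mine.stamp:
        # Step 1.a.i: newer content replaces the obsolete copy.
        incoming.knowers |= {me}
        base[incoming.data_id] = incoming
    elif incoming.stamp == mine.stamp:
        # Step 1.a.ii: same content, only the knower sets may differ.
        mine.knowers |= incoming.knowers
    # Otherwise our copy is newer: keep it unchanged.
```

Because timestamps only advance on internal updates (Protocol 2), the newest stamp always identifies the current content, which is what makes this merge safe to repeat at every meeting.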
Only the sets of the agents knowing K may have changed, because agents si and sj may have already transmitted the information to other agents. Consequently, the sets of agents knowing K are unified (1.a.ii).
4.4 Properties
Communication between two agents when they meet is made of the conjunction of Protocol 1 and Protocol 3. In the sequel, we call this conjunction a communication occurrence.
4.4.1 Convergence
The structure of the transmitted information and the internal update mechanism (Protocol 2) allow the process to converge. Indeed, a request R can only be in two states (realized or not), given by the boolean bR. Once an internal update is made - i.e. R is realized - R cannot go back to its former state. Consequently, an internal update can only be performed once.
As far as candidacies are concerned, updates only modify the modalities, which may change many times and go back to previous states. Then it seems that livelocks² would be likely to appear. However, a candidacy C is associated to a request and a realization date (the deadline given by obsC). After the deadline, the candidacy becomes meaningless. Thus for each candidacy, there exists a date t ∈ T when changes will propagate no more.
² Communicating endlessly without converging.
4.4.2 Complexity
It has been shown that in a set of N agents where a single one has a new piece of information, an epidemic protocol takes O(log N) steps to broadcast the information [33]. During one step, each agent has a communication occurrence. As agents do not have much time to communicate, such a communication occurrence must not have a too big temporal complexity, which we can prove formally:
Proposition 1. The temporal complexity of a communication occurrence at time t ∈ T between two agents si and sj is, for agent si, O(|Rt_si|·|Rt_sj|·|S|²).
Proof 1. In the worst case, each agent sk sends |Rt_sk| pieces of information on requests and |Rt_sk|·|S| pieces of information on candidacies (one candidacy for each request and for each agent of the swarm). Let si and sj be two agents meeting at time t ∈ T. For agent si, the complexity of Protocol 1 is
O(|Rt_si| + |Rt_si|·|S| [emission] + |Rt_sj| + |Rt_sj|·|S| [reception]).
For each received piece of information, agent si uses Protocol 3 and searches through its knowledge bases: |Rt_si| pieces of information for each received request and |Rt_si|·|S| pieces of information for each received candidacy. Consequently, the complexity of Protocol 3 is
O(|Rt_sj|·|Rt_si| + |Rt_sj|·|Rt_si|·|S|²).
Thus, the temporal complexity of a communication occurrence is
O(|Rt_si| + |Rt_si|·|S| + |Rt_sj|·|Rt_si| + |Rt_sj|·|Rt_si|·|S|²),
that is, O(|Rt_si|·|Rt_sj|·|S|²).
5. ON-BOARD PLANNING
In space contexts, [5, 21, 6] present multi-agent architectures for on-board planning. However, they assume high communication and computation capabilities [10]. [13] relaxes these constraints by splitting the planning modules: on the one hand, satellites have a planner that builds plans on a large horizon, and on the other hand, they have a decision module that enables them to choose whether or not to realize a planned observation.
In an uncertain environment such as the one of satellite swarms, it may be advantageous to delay the decision until the last moment (i.e. the realization date), especially if there are several possibilities for a given request.
The main idea in contingency planning [15, 29] is to determine the nodes in the initial plan where the risks of failure are most important and to incrementally build contingency branches for these situations.
5.1 A deliberative approach
Inspired from both approaches, we propose to build allocations made up of a set of unquestionable requests and a set of uncertain disjunctive requests on which a decision will be made at the end of the decision horizon. This horizon corresponds to the request realization date. Proposing such partial allocations allows conflicts to be solved locally without propagating them through the whole plan.
In order to build the agents' initial plans, let us assume that each agent is equipped with an on-board planner. A plan is defined as follows:
Definition 6 (Plan). Let si be an agent, Rt_si a set of requests and Ct_si a set of candidacies. Let us define three sets:
- the set of potential requests: Rp = {R ∈ Rt_si | bR = false};
- the set of mandatory requests: Rm = {R ∈ Rp | ∃ C ∈ Ct_si : modC = □, sC = si, RC = R};
- the set of given-up requests: Rg = {R ∈ Rp | ∃ C ∈ Ct_si : modC = ¬□, sC = si, RC = R}.
A plan At_si generated at time t ∈ T is a set of requests such that Rm ⊆ At_si ⊆ Rp and ∄ R ∈ Rg such that R ∈ At_si.
Building a plan generates candidacies.
Definition 7 (Generating candidacies). Let si be an agent and At1_si a (possibly empty) plan at time t1. Let At2_si be the plan generated at time t2, with t2 > t1.
- ∀ R ∈ At1_si such that R ∉ At2_si, a candidacy C such that modC = ¬◇, sC = si and RC = R is generated;
- ∀ R ∈ At2_si such that R ∉ At1_si, a candidacy C such that modC = ◇, sC = si and RC = R is generated;
- Protocol 2 is used to update Kt1_si into Kt2_si.
5.2 Conflicts
When two agents compare their respective plans, some conflicts may appear. It is a matter of redundancies between allocations on a given request, i.e. several agents stand as candidates to carry out this request. Whereas such redundancies may sometimes be useful to ensure the realization of a request (the realization may fail, e.g. because of clouds), they may also lead to a loss of opportunity. Consequently, conflict has to be defined:
Definition 8 (Conflict). Let si and sj be two agents with, at time t, candidacies Csi and Csj respectively (sCsi = si and sCsj = sj). si and sj are in conflict if and only if:
- RCsi = RCsj;
- modCsi and modCsj ∈ {□, ◇}.
Let us notice that the agents have the means to know whether they are in conflict with another one during the communication process. Indeed, they exchange information not only concerning their own plan, but also concerning what they know about the other agents' plans.
All conflicts do not have the same strength, meaning that they can be solved with more or less difficulty according to the agents' communication capacities. A conflict is soft when the concerned agents can communicate before one or the other carries out the request in question. A conflict is hard when the agents cannot communicate before the realization of the request.
Definition 9 (Soft/Hard conflict). Let si and sj (i < j) be two agents in conflict with, at time t, candidacies Csi and Csj respectively (sCsi = si and sCsj = sj). If ∃ V ⊆ S such that V = {si . . . sj} and if ∃ T' ⊆ T such that T' = {ti−1 . . . tj−1} (ti−1 = t) where ∀ i ≤ k [...]
[...] dnlCsj and |cost_si(R) − cost_sj(R)| < ε, then modCsi = ¬□ and modCsj = □.
Strategy 3 (Insurance). Let si and sj be two agents in conflict on their respective candidacies Csi and Csj, such that si is the expert agent.
Let α ∈ R be a priority threshold. The insurance strategy is: if prio(R)/(cardc(R) − 1) > α, then modCsi = ◇ and modCsj = ◇.
³ i.e. the agent using memory resources during a shorter time.
In the insurance strategy, redundancy triggering is adjusted by the conflict cardinality cardc(R). The reason is the following: the more redundancies on a given request, the less a new redundancy on this request is needed.
The three strategies are implemented in a negotiation protocol dedicated to soft conflicts. The protocol is based on a subsumption architecture [7] over the strategies: the insurance strategy (1) is the major strategy, because it ensures the redundancy for which the swarm is implemented. Then comes the altruist strategy (2), in order to allocate the resources so as to enhance the mission return. Finally, the expert strategy, which does not have preconditions (3), enhances the cost of the plan.
Protocol 4 (Soft conflict solving). Let R be a request in a soft conflict between two agents, si and sj. These agents have Csi and Csj as respective candidacies. Let si be the expert agent. The agents apply the strategies as follows:
1. insurance strategy (α);
2. altruist strategy (ε);
3. expert strategy.
The choice of parameters α and ε allows the protocol results to be adjusted. For example, if ε = 0, the altruist strategy is never used.
6.3 Hard conflict solving strategies
In case of a hard conflict, the agent that is not aware of it will necessarily realize the request (with success or not). Consequently, a redundancy is useful only if the other agent is more expert or if the priority of the request is high enough to need redundancy. Therefore, we will use the insurance strategy (refer to Section 6.2) and define a competitive strategy. The latter is defined for two agents, si and sj, in a hard conflict on a request R.
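The subsumption ordering of Protocol 4 can be sketched as a chain of guarded rules. In this illustrative Python sketch, "propose", "commit" and "refuse" stand for the modalities ◇, □ and ¬□; the insurance test follows Strategy 3, but the altruist and expert rules survive only partially in the text above, so their encodings here (cost-gap test, cheaper-agent-commits default) are assumptions:

```python
def solve_soft_conflict(prio, card, cost_expert, cost_other, alpha, eps):
    """Protocol 4 sketch: apply the strategies in subsumption order and
    return the (expert, other) modalities for a soft conflict on R.

    prio: prio(R); card: conflict cardinality cardc(R) (>= 2);
    cost_expert / cost_other: cost of R for the expert and other agent;
    alpha, eps: the priority and cost thresholds of the strategies.
    """
    # 1. Insurance (Strategy 3): keep the redundancy if the priority,
    #    discounted by the existing redundancies, exceeds alpha.
    if prio / (card - 1) > alpha:
        return ("propose", "propose")
    # 2. Altruist (assumed form): if both costs are within eps, the
    #    expert withdraws and the other agent commits.
    if abs(cost_expert - cost_other) < eps:
        return ("refuse", "commit")
    # 3. Expert (assumed form, no precondition): the cheaper agent
    #    commits and the other withdraws.
    if cost_expert <= cost_other:
        return ("commit", "refuse")
    return ("refuse", "commit")
```

Setting eps to 0 disables the altruist rule, and raising alpha makes redundancies rarer, which is exactly how the neutral, drastic and lax politics of Section 7 are obtained.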
Let si be the agent that is aware of the conflict⁴.
Strategy 4 (Competitive). Let λ ∈ R+ be a cost threshold. The competitive strategy is: if cost_si(R) < cost_sj(R) − λ, then modCsi = ◇.
⁴ i.e. the agent that must make a decision on R.
Protocol 5 (Hard conflict solving). Let si be an agent in a hard conflict with an agent sj on a request R. si applies the strategies as follows:
1. insurance strategy (α);
2. competitive strategy (λ);
3. withdrawal: modCsi = ¬□.
6.4 Generalization
Although agents use pair communication, they may have information about several agents, and conflict cardinality may be more than 2. Therefore, we define a k-conflict as a conflict with a cardinality of k on a set of agents proposing or committing to realize the same request. Formally,
Definition 13 (k-conflict). Let S = {s1 . . . sk} be a set of agents with respective candidacies Cs1 . . . Csk at time t. The set S is in a k-conflict if and only if:
- ∀ 1 ≤ i ≤ k, sCsi = si;
- ∃! R such that ∀ 1 ≤ i ≤ k, RCsi = R;
- ∀ 1 ≤ i ≤ k, modCsi ∈ {□, ◇};
- S is maximal (⊆) among the sets that satisfy these properties.
As previously, a k-conflict can be soft or hard. A k-conflict is soft if each pair conflict in the k-conflict is a soft conflict with respect to Definition 9.
As conflicts bear on sets of agents, expertise is a total order on agents. We define rank-i-expertise, where the concerned agent is the i-th expert.
In case of a soft k-conflict, the rank-i-expert agent makes its decision with respect to the rank-(i + 1)-expert agent according to Protocol 4. The protocol is applied recursively, and the α and ε parameters are updated at each step in order to avoid cost explosion⁵.
In case of a hard conflict, the set S of agents in conflict can be split into S^S (the subset of agents in a soft conflict) and S^H (the subset of unaware agents). Only agents in S^S can take a decision and must adapt themselves to the agents in S^H. The rank-i-expert agent in S^S uses Protocol 5 on the whole set S^H and on the rank-(i − 1)-expert agent in S^S. If an agent in S^S applies the competitive strategy, all the others withdraw.
7. EXPERIMENTS
Satellite swarm simulations have been implemented in JAVA with the JADE platform [3]. The on-board planner is implemented with linear programming using ILOG CPLEX [1]. The simulation scenario implements 3 satellites on 6-hour orbits. Two scenarios have been considered: the first one with a set of 40 requests with low mutual exclusion and conflict rates, and the second one with a set of 74 requests with high mutual exclusion and conflict rates.
For each scenario, six simulations have been performed: one with centralized planning (all requests are planned by the ground station before the simulation), one where agents are isolated (they can neither communicate nor coordinate with one another), one informed simulation (agents only communicate requests) and three other simulations implementing the instantiated collaboration strategies (politics):
- neutral politics: α, ε and λ are set to average values;
- drastic politics: α and λ are set to higher values, i.e. agents will ensure redundancy only if the priorities are high and, in case of a hard conflict, if the cost payoff is much higher;
- lax politics: α is set to a lower value, i.e. redundancies are more frequent.
In the case of low mutual exclusion and conflict rates (Table 1), the centralized and isolated simulations lead to the same number of observations, with the same average priorities. Isolation leading to a lower cost is due to the high number of redundancies: many agents carry out the same request at different costs. The informed simulation reduces the number of redundancies but slightly increases the average cost for the same reason.
We can notice that the use of collaboration strategies allows the number of redundancies to be reduced much more, but the number of observations decreases, owing to the constraint created by commitments. Furthermore, the average cost is increased too. Nevertheless, each avoided redundancy corresponds to saved resources that can be used to realize on-board generated requests during the simulation.
⁵ For instance, the rank-1-expert agent withdraws due to the altruist strategy and the cost increases by ε in the worst case; then the rank-2-expert agent withdraws due to the altruist strategy and the cost increases by ε in the worst case. So the cost has increased by 2ε in the worst case.

Table 1: Scenario 1 - the 40-request simulation results
Simulation        Observations  Redundancies  Messages  Average priority  Average cost
Centralized       34            0             0         2.76              176.06
Isolated          34            21            0         2.76              160.88
Informed          34            6             457       2.65              165.21
Neutral politics  31            4             1056      2.71              191.16
Drastic politics  24            1             1025      2.71              177.42
Lax politics      33            5             1092      2.70              172.88

Table 2: Scenario 2 - the 74-request simulation results
Simulation        Observations  Redundancies  Messages  Average priority  Average cost
Centralized       59            0             0         2.95              162.88
Isolated          37            37            0         3.05              141.62
Informed          55            27            836       2.93              160.56
Neutral politics  48            25            1926      3.13              149.75
Drastic politics  43            21            1908      3.19              139.70
Lax politics      53            28            1960      3.00              154.02

In the case of high mutual exclusion and conflict rates (Table 2), noteworthy differences exist between the centralized and isolated simulations. We can notice that all informed simulations (with or without strategies) allow more observations to be performed than isolated agents do, with fewer redundancies. Likewise, we can notice that all politics reduce the average cost, contrary to the first scenario.
The drastic politics is interesting because not only does it allow more observations to be performed than isolated agents do, but it also highly reduces the average cost, with the lowest number of redundancies.
As far as the number of exchanged messages is concerned, there are 12 meetings between 2 agents during the simulations. In the worst case, at each meeting each agent sends N pieces of information on the requests, plus 3N pieces of information on the agents' intentions, plus 1 message for the end of communication, where N is the total number of requests. Consequently, 3864 messages are exchanged in the worst case for the 40-request simulations and 7128 messages for the 74-request simulations. These numbers are much higher than the number of messages that are actually exchanged. We can notice that the informed simulations, which communicate only requests, allow a higher reduction.
In the general case, using communication and strategies reduces redundancies and saves resources, but increases the average cost: if a request is realized, agents that know it do not plan it, even if its cost could be reduced afterwards. This is not the case with isolated agents. Using strategies on little-constrained problems such as scenario 1 constrains the agents too much and causes an additional cost increase. Strategies are more useful on highly constrained problems such as scenario 2. Although agents constrain themselves on the number of observations, the average cost is widely reduced.
8. CONCLUSION AND FUTURE WORK
An observation satellite swarm is a cooperative multi-agent system with strong constraints in terms of communication and computation capabilities. In order to increase the global mission outcome, we propose a hybrid approach: deliberative for individual planning and reactive for collaboration.
Agents reason both on the requests to carry out and on the other agents' intentions (candidacies).
An epidemic communication protocol uses all communication opportunities to update this information. Reactive decision rules (strategies) are proposed to solve conflicts that may arise between agents. Through the tuning of the strategies (α and λ) and their plastic interlacing within the protocol, it is possible to coordinate agents without additional communication: the number of exchanged messages remains nearly the same between informed simulations and simulations implementing strategies.

Some simulations have been made to experimentally validate these protocols, and the first results are promising but raise many questions. What is the trade-off between the constraint rate of the problem and the need for strategies? To what extent are the number of redundancies and the average cost affected by the tuning of the strategies?

Future work will focus on new strategies to solve new conflicts, especially those arising when relaxing the independence assumption between the requests. A second point is to take into account the complexity of the initial planning problem. Indeed, the chosen planning approach results in a combinatorial explosion with big sets of requests: an anytime or a fully reactive approach has to be considered for more complex problems.

Acknowledgements
We would like to thank Marie-Claire Charmeau (CNES⁶), Serge Rainjonneau and Pierre Dago (Alcatel Space Alenia) for their relevant comments on this work.

⁶ The French Space Agency

9. REFERENCES
[1] ILOG inc. CPLEX. http://www.ilog.com/products/cplex.
[2] T. Balch and R. Arkin. Communication in reactive multiagent robotic systems. Autonomous Robots, pages 27-52, 1994.
[3] F. Bellifemine, A. Poggi, and G. Rimassa. JADE - a FIPA-compliant agent framework. In Proceedings of PAAM'99, pages 97-108, 1999.
[4] A. Blum and M. Furst. Fast planning through planning graph analysis. Artificial Intelligence, Vol. 90:281-300, 1997.
[5] E. Bornschlegl, C. Guettier, G. L. Lann, and J.-C. Poncet. Constraint-based layered planning and distributed control for an autonomous spacecraft formation flying. In Proceedings of the 1st ESA Workshop on Space Autonomy, 2001.
[6] E. Bornschlegl, C. Guettier, and J.-C. Poncet. Automatic planning for autonomous spacecraft constellation. In Proceedings of the 2nd NASA Intl. Workshop on Planning and Scheduling for Space, 2000.
[7] R. Brooks. A robust layered control system for a mobile robot. MIT AI Lab Memo, Vol. 864, 1985.
[8] A. Chopra and M. Singh. Nonmonotonic commitment machines. Lecture Notes in Computer Science: Advances in Agent Communication, Vol. 2922:183-200, 2004.
[9] A. Chopra and M. Singh. Contextualizing commitment protocols. In Proceedings of the 5th AAMAS, 2006.
[10] B. Clement and A. Barrett. Continual coordination through shared activities. In Proceedings of the 2nd AAMAS, pages 57-64, 2003.
[11] J. Cox and E. Durfee. Efficient mechanisms for multiagent plan merging. In Proceedings of the 3rd AAMAS, 2004.
[12] S. Curtis, M. Rilee, P. Clark, and G. Marr. Use of swarm intelligence in spacecraft constellations for the resource exploration of the asteroid belt. In Proceedings of the Third International Workshop on Satellite Constellations and Formation Flying, pages 24-26, 2003.
[13] S. Damiani, G. Verfaillie, and M.-C. Charmeau. An Earth watching satellite constellation: How to manage a team of watching agents with limited communications. In Proceedings of the 4th AAMAS, pages 455-462, 2005.
[14] S. Das, P. Gonzales, R. Krikorian, and W. Truszkowski. Multi-agent planning and scheduling environment for enhanced spacecraft autonomy. In Proceedings of the 5th ISAIRAS, 1999.
[15] R. Dearden, N. Meuleau, S. Ramakrishnan, D. Smith, and R. Washington. Incremental contingency planning. In Proceedings of the ICAPS'03 Workshop on Planning under Uncertainty and Incomplete Information, pages 1-10, 2003.
[16] F. Dignum. Autonomous agents with norms. Artificial Intelligence and Law, Vol. 7:69-79, 1999.
[17] E. Durfee. Scaling up agent coordination strategies. IEEE Computer, Vol. 34(7):39-46, 2001.
[18] K. Erol, J. Hendler, and D. Nau. HTN planning: Complexity and expressivity. In Proceedings of the 12th AAAI, pages 1123-1128, 1994.
[19] D. Escorial, I. F. Tourne, and F. J. Reina. Fuego: a dedicated constellation of small satellites to detect and monitor forest fires. Acta Astronautica, Vol. 52(9-12):765-775, 2003.
[20] B. Gerkey and M. Matarić. A formal analysis and taxonomy of task allocation in multi-robot systems. Journal of Robotics Research, Vol. 23(9):939-954, 2004.
[21] C. Guettier and J.-C. Poncet. Multi-level planning for spacecraft autonomy. In Proceedings of the 6th ISAIRAS, pages 18-21, 2001.
[22] I. Gupta, A.-M. Kermarrec, and A. Ganesh. Efficient epidemic-style protocols for reliable and scalable multicast. In Proceedings of the 21st IEEE Symposium on Reliable Distributed Systems, pages 180-189, 2002.
[23] G. Gutnik and G. Kaminka. Representing conversations for scalable overhearing. Journal of Artificial Intelligence Research, Vol. 25:349-387, 2006.
[24] K. Jenkins, K. Hopkinson, and K. Birman. A gossip protocol for subgroup multicast. In Proceedings of the 21st International Conference on Distributed Computing Systems Workshops, pages 25-30, 2001.
[25] N. Jennings, S. Parsons, P. Noriega, and C. Sierra. On argumentation-based negotiation. In Proceedings of the International Workshop on Multi-Agent Systems, pages 1-7, 1998.
[26] J.-L. Koning and M.-P. Huget. A semi-formal specification language dedicated to interaction protocols. Information Modeling and Knowledge Bases XII: Frontiers in Artificial Intelligence and Applications, pages 375-392, 2001.
[27] F. Legras and C. Tessier. LOTTO: group formation by overhearing in large teams. In Proceedings of the 2nd AAMAS, 2003.
[28] D. McAllester and D. Rosenblitt. Systematic nonlinear planning. In Proceedings of the 9th AAAI, pages 634-639, 1991.
[29] N. Meuleau and D. Smith. Optimal limited contingency planning. In Proceedings of the 19th AAAI, pages 417-426, 2003.
[30] P. Modi and M. Veloso. Bumping strategies for the multiagent agreement problem. In Proceedings of the 4th AAMAS, pages 390-396, 2005.
[31] J. B. Mueller, D. M. Surka, and B. Udrea. Agent-based control of multiple satellite formation flying. In Proceedings of the 6th ISAIRAS, 2001.
[32] J. Odell, H. Parunak, and B. Bauer. Extending UML for agents. In Proceedings of the Agent-Oriented Information Systems Workshop at the 17th AAAI, 2000.
[33] B. Pittel. On spreading a rumor. SIAM Journal of Applied Mathematics, Vol. 47:213-223, 1987.
[34] B. Polle. Autonomy requirement and technologies for future constellation. Astrium Summary Report, 2002.
[35] T. Sandholm. Contract types for satisficing task allocation. In Proceedings of the AAAI Spring Symposium: Satisficing Models, pages 23-25, 1998.
[36] T. Schetter, M. Campbell, and D. M. Surka. Multiple agent-based autonomy for satellite constellations. Artificial Intelligence, Vol. 145:147-180, 2003.
[37] O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, Vol. 101(1-2):165-200, 1998.
[38] D. M. Surka. ObjectAgent for robust autonomous control. In Proceedings of the AAAI Spring Symposium, 2001.
[39] W. Truszkowski, D. Zoch, and D. Smith. Autonomy for constellations. In Proceedings of the SpaceOps Conference, 2000.
[40] R. VanDerKrogt and M. deWeerdt. Plan repair as an extension of planning. In Proceedings of the 15th ICAPS, pages 161-170, 2005.
[41] B. Werger. Cooperation without deliberation: A minimal behavior-based approach to multi-robot teams. Artificial Intelligence, Vol. 110:293-320, 1999.
[42] P. Zetocha. Satellite cluster command and control. IEEE Aerospace Conference, Vol. 7:49-54, 2000.
Bidding Optimally in Concurrent Second-Price Auctions of Perfectly Substitutable Goods

ABSTRACT
We derive optimal bidding strategies for a global bidding agent that participates in multiple, simultaneous second-price auctions with perfect substitutes. We first consider a model where all other bidders are local and participate in a single auction. For this case, we prove that, assuming free disposal, the global bidder should always place non-zero bids in all available auctions, irrespective of the local bidders' valuation distribution. Furthermore, for non-decreasing valuation distributions, we prove that the problem of finding the optimal bids reduces to two dimensions. These results hold both in the case where the number of local bidders is known and when this number is determined by a Poisson distribution. This analysis extends to online markets where, typically, auctions occur both concurrently and sequentially. In addition, by combining analytical and simulation results, we demonstrate that similar results hold in the case of several global bidders, provided that the market consists of both global and local bidders. Finally, we address the efficiency of the overall market, and show that information about the number of local bidders is an important determinant of the way in which a global bidder affects efficiency.

1. INTRODUCTION
The recent surge of interest in online auctions has resulted in an increasing number of auctions offering very similar or even identical goods and services [9, 10]. On eBay alone, for example, there are often hundreds or sometimes even thousands of concurrent auctions running worldwide selling such substitutable items¹. Against this background, it is essential to develop bidding strategies that autonomous agents can use to operate effectively across a wide number of auctions.
To this end, in this paper we devise and analyse optimal bidding strategies for an important yet barely studied setting: an agent that participates in multiple, concurrent (i.e., simultaneous) second-price auctions for goods that are perfect substitutes. As we will show, however, this analysis is also relevant to a wider context where auctions are conducted sequentially as well as concurrently.

To date, much of the existing literature on multiple auctions focuses either on sequential auctions [6] or on simultaneous auctions for complementary goods, where the value of the items together is greater than the sum of the individual items (see Section 2 for related research on simultaneous auctions). In contrast, here we consider bidding strategies for markets with multiple concurrent auctions and perfect substitutes. In particular, our focus is on Vickrey, or second-price sealed-bid, auctions. We choose these because they require little communication and are well known for their capacity to induce truthful bidding, which makes them suitable for many multi-agent system settings. However, our results generalise to settings with English auctions, since these are strategically equivalent to second-price auctions. Within this setting, we are able to characterise, for the first time, a bidder's utility-maximising strategy for bidding simultaneously in any number of such auctions and for any type of bidder valuation distribution. In more detail, we first consider a market where a single bidder, called the global bidder, can bid in any number of auctions, whereas the other bidders, called the local bidders, are assumed to bid only in a single auction.
For this case, we find the following results:

• Whereas in the case of a single second-price auction a bidder's best strategy is to bid its true value, the best strategy for a global bidder is to bid below it.
• We are able to prove that, even if a global bidder requires only one item, the expected utility is maximised by participating in all the auctions that are selling the desired item.
• Finding the optimal bid for each auction can be an arduous task when considering all possible combinations. However, for most common bidder valuation distributions, we are able to significantly reduce this search space and thus the computation required.
• Empirically, we find that a bidder's expected utility is maximised by bidding relatively high in one of the auctions, and equal or lower in all other auctions.

We then go on to consider markets with more than one global bidder. Due to the complexity of the problem, we combine analytical results with a discrete simulation in order to numerically derive the optimal bidding strategy. By doing so, we find that, in a market with only global bidders, the dynamics of the best response do not converge to a pure strategy; in fact, they fluctuate between two states. If the market consists of both local and global bidders, however, the global bidders' strategy quickly reaches a stable solution, and we approximate a symmetric Nash equilibrium.

The remainder of the paper is structured as follows. Section 2 discusses related work. In Section 3 we describe the bidders and the auctions in more detail. In Section 4 we investigate the case of a single global bidder and characterise its optimal bidding behaviour. Section 5 considers the case of multiple global bidders, and in Section 6 we address market efficiency. Finally, Section 7 concludes.

2. RELATED WORK
Research in the area of simultaneous auctions can be segmented along two broad lines.
On the one hand, there is the game-theoretic and decision-theoretic analysis of simultaneous auctions, which concentrates on studying the equilibrium strategies of rational agents [3, 7, 8, 9, 12, 11]. Such analyses are typically used when the auction format employed in the concurrent auctions is the same (e.g. there are M Vickrey auctions or M first-price auctions). On the other hand, heuristic strategies have been developed for more complex settings, where the sellers offer different types of auctions or the buyers need to buy bundles of goods over distributed auctions [1, 13, 5]. This paper adopts the former approach in studying a market of M simultaneous Vickrey auctions, since this approach yields provably optimal bidding strategies.

In this context, the seminal paper by Engelbrecht-Wiggans and Weber provides one of the starting points for the game-theoretic analysis of distributed markets where buyers have substitutable goods. Their work analyses a market consisting of couples with equal valuations who want to bid for a dresser. Thus, a couple's bid space can contain at most two bids, since the husband and wife can be at no more than two geographically distributed auctions simultaneously. They derive a mixed-strategy Nash equilibrium for the special case where the number of buyers is large. Our analysis differs from theirs in that we study concurrent auctions in which bidders have different valuations and the global bidder can bid in all the auctions concurrently (which is entirely possible given autonomous agents).

Following this, [7] studied the case of simultaneous auctions with complementary goods. They analyse the case of both local and global bidders and characterise the bidding behaviour of the buyers and the resultant market efficiency. The setting of [7] is further extended to the case of common values in [9]. However, neither of these works extends easily to the case of substitutable goods, which we consider.
This case is studied in [12], but the scenario considered is restricted to three sellers and two global bidders, with each bidder having the same value (and thereby knowing the values of the other bidders). The space of symmetric mixed equilibrium strategies is derived for this special case, but again our result is more general. Finally, [11] considers the case of concurrent English auctions, for which he develops bidding algorithms for buyers with different risk attitudes. However, he forces the bids to be the same across auctions, which we show in this paper is not always optimal.

3. BIDDING IN MULTIPLE AUCTIONS
The model consists of M sellers, each of whom acts as an auctioneer. Each seller auctions one item; these items are complete substitutes (i.e., they are equal in terms of value, and a bidder obtains no additional benefit from winning more than one item). The M auctions are executed concurrently; that is, they end simultaneously and no information about the outcome of any of the auctions becomes available until the bids are placed². However, we briefly address markets with both sequential and concurrent auctions in Section 4.4. We also assume that all the auctions are equivalent (i.e., a bidder does not prefer one auction over another). Finally, we assume free disposal (i.e., a winner of multiple items incurs no additional costs by discarding unwanted ones) and risk-neutral bidders.

3.1 The Auctions
The seller's auction is implemented as a Vickrey auction, where the highest bidder wins but pays the second-highest price. This format has several advantages for an agent-based setting. Firstly, it is communication efficient. Secondly, for the single-auction case (i.e., where a bidder places a bid in at most one auction), the optimal strategy is to bid the true value, and thus requires no computation (once the valuation of the item is known).
This strategy is also weakly dominant (i.e., it is independent of the other bidders' decisions), and therefore it requires no information about the preferences of other agents (such as the distribution of their valuations).

3.2 Global and Local Bidders
We distinguish between global and local bidders. The former can bid in any number of auctions, whereas the latter bid only in a single one. Local bidders are assumed to bid according to the weakly dominant strategy and bid their true valuation³. We consider two ways of modelling local bidders: static and dynamic. In the first model, the number of local bidders is assumed to be known and equal to N for each auction. In the latter model, on the other hand, the average number of bidders is equal to N, but the exact number is unknown and may vary per auction. This uncertainty is modelled using a Poisson distribution (more details are provided in Section 4.1).

As we will show later, a global bidder who bids optimally has a higher expected utility than a local bidder, even though the items are complete substitutes and a bidder requires only one of them. However, we can identify a number of compelling reasons why not all bidders would choose to bid globally. Firstly, participation costs such as entry fees and time to set up an account may encourage occasional users to participate in auctions that they are already familiar with. Secondly, bidders may simply not be aware of other auctions selling the same type of item.

² Although this paper focuses on sealed-bid auctions, where this is the case, the conditions are similar for last-minute bidding in English auctions such as eBay [10].
³ Note that, since bidding the true value is optimal for local bidders irrespective of what others are bidding, their strategy is not affected by the presence of global bidders.
Even if this is known, however, additional information, such as the distribution of the valuations of other bidders and the number of participating bidders, is required for bidding optimally across multiple auctions. This lack of expert information often drives a novice to bid locally. Thirdly, an optimal global strategy is harder to compute than a local one; an agent with bounded rationality may therefore not have the resources to compute such a strategy. Lastly, even though a global bidder profits on average, such a bidder may incur a loss when inadvertently winning multiple auctions. This deters bidders who are either risk averse or have budget constraints from participating in multiple auctions. As a result, in most marketplaces we expect a combination of global and local bidders.

In view of the above considerations, human buyers are more likely to bid locally. The global strategy, however, can be effectively executed by autonomous agents, since they can gather data from many auctions and perform the required calculations within the desired time frame.

4. A SINGLE GLOBAL BIDDER
In this section, we provide a theoretical analysis of the optimal bidding strategy for a global bidder, given that all other bidders are local and simply bid their true valuations. After we describe the global bidder's expected utility in Section 4.1, we show in Section 4.2 that it is always optimal for a global bidder to participate in the maximum number of auctions available. In Section 4.3 we discuss how to significantly reduce the complexity of finding the optimal bids for the multi-auction problem, and we then apply these methods to find optimal strategies for specific examples. Finally, in Section 4.4 we extend our analysis to sequential auctions.

4.1 The Global Bidder's Expected Utility
In what follows, the number of sellers (auctions) is M ≥ 2 and the number of local bidders is N ≥ 1.
A bidder's valuation v ∈ [0, vmax] is randomly drawn from a cumulative distribution F with probability density f, where f is continuous, strictly positive and has support [0, vmax]. F is assumed to be identical for and common knowledge to all bidders. A global bid B is a set containing a bid bi ∈ [0, vmax] for each auction 1 ≤ i ≤ M (the bids may be different for different auctions). For ease of exposition, we introduce the cumulative distribution function for the first-order statistic G(b) = F(b)^N ∈ [0, 1], denoting the probability of winning a specific auction conditional on placing bid b in this auction, and its probability density g(b) = dG(b)/db = N F(b)^(N−1) f(b). Now, the expected utility U for a global bidder with global bid B and valuation v is given by:

  U(B, v) = v [ 1 − ∏_{bi ∈ B} (1 − G(bi)) ] − ∑_{bi ∈ B} ∫₀^{bi} y g(y) dy    (1)

Here, the left part of the equation is the valuation multiplied by the probability that the global bidder wins at least one of the M auctions, and thus corresponds to the expected benefit. In more detail, note that 1 − G(bi) is the probability of not winning auction i when bidding bi, ∏_{bi ∈ B} (1 − G(bi)) is the probability of not winning any auction, and thus 1 − ∏_{bi ∈ B} (1 − G(bi)) is the probability of winning at least one auction. The right part of equation 1 corresponds to the total expected costs or payments. To see the latter, note that the expected payment in a single second-price auction when bidding b equals ∫₀^b y g(y) dy (see [6]) and is independent of the expected payments in the other auctions. Clearly, equation 1 applies to the model with static local bidders, i.e., where the number of bidders is known and equal for each auction (see Section 3.2).
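As an illustration of equation 1, the following sketch evaluates the closed form for uniform local valuations on [0, 1] (so G(b) = b^N and the expected payment per auction is N b^(N+1)/(N+1)) and cross-checks it against a direct Monte Carlo simulation of the M concurrent Vickrey auctions. The helper names are ours, not the paper's:

```python
import random

def expected_utility(bids, v, n_local):
    """Equation 1 specialised to uniform valuations on [0, 1]:
    G(b) = b**N and the expected payment in one auction is
    N*b**(N+1)/(N+1)."""
    p_lose_all, cost = 1.0, 0.0
    for b in bids:
        p_lose_all *= 1.0 - b ** n_local          # lose this auction too
        cost += n_local * b ** (n_local + 1) / (n_local + 1)
    return v * (1.0 - p_lose_all) - cost

def simulated_utility(bids, v, n_local, trials=200_000, seed=1):
    """Monte Carlo: win an auction if the bid beats all N local bids;
    pay the second-highest price; free disposal, one item needed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        won = False
        for b in bids:
            top_local = max(rng.random() for _ in range(n_local))
            if b > top_local:
                won = True
                total -= top_local    # second-highest price
        if won:
            total += v                # value obtained at most once
    return total / trials

analytic = expected_utility([0.4, 0.4], v=0.6, n_local=5)
estimate = simulated_utility([0.4, 0.4], v=0.6, n_local=5)
assert abs(analytic - estimate) < 0.01
```

The agreement between the closed form and the simulation is a useful sanity check before optimising over B.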
However, we can use the same equation to model dynamic local bidders in the following way:

Lemma 1 By replacing the first-order statistic G(y) with

  Ĝ(y) = e^{N(F(y)−1)},    (2)

and the corresponding density function g(y) with ĝ(y) = dĜ(y)/dy = N f(y) e^{N(F(y)−1)}, equation 1 becomes the expected utility where the number of local bidders in each auction is described by a Poisson distribution with average N (i.e., where the probability that n local bidders participate is given by P(n) = N^n e^{−N}/n!).

Proof To prove this, we first show that G(·) and F(·) can be modified such that the number of bidders per auction is given by a binomial distribution (where a bidder's decision to participate is a Bernoulli trial) as follows:

  G′(y) = F′(y)^Ñ = (1 − p + p F(y))^Ñ,    (3)

where p is the probability that a bidder participates in the auction, and Ñ is the total number of bidders. To see this, note that not participating is equivalent to bidding zero. As a result, F′(0) = 1 − p, since there is a 1 − p probability that a bidder bids zero in a specific auction, and F′(y) = F′(0) + p F(y), since there is a probability p that a bidder bids according to the original distribution F(y). Now, the average number of participating bidders is given by N = p Ñ. Replacing p with N/Ñ, equation 3 becomes G′(y) = (1 − N/Ñ + (N/Ñ) F(y))^Ñ. Note that a Poisson distribution is the limit of a binomial distribution. Keeping N constant and taking the limit Ñ → ∞, we then obtain G′(y) = e^{N(F(y)−1)} = Ĝ(y). This concludes our proof.

The results that follow apply to both the static and the dynamic model unless stated otherwise.

4.2 Participation in Multiple Auctions
We now show that, for any valuation 0 < v < vmax, a utility-maximising global bidder should always place non-zero bids in all available auctions.
To prove this, we show that the expected utility increases when placing an arbitrarily small bid, compared to not participating in an auction. More formally:

Theorem 1 Consider a global bidder with valuation 0 < v < vmax and global bid B, where bi ≤ v for all bi ∈ B. Suppose B contains no bid for auction j ∈ {1, 2, . . . , M}; then there exists a bj > 0 such that U(B ∪ {bj}, v) > U(B, v).

Proof Using equation 1, the marginal expected utility for participating in an additional auction can be written as:

  U(B ∪ {bj}, v) − U(B, v) = v G(bj) ∏_{bi ∈ B} (1 − G(bi)) − ∫₀^{bj} y g(y) dy

Now, using integration by parts, we have ∫₀^{bj} y g(y) dy = bj G(bj) − ∫₀^{bj} G(y) dy, and the above equation can be rewritten as:

  U(B ∪ {bj}, v) − U(B, v) = G(bj) [ v ∏_{bi ∈ B} (1 − G(bi)) − bj ] + ∫₀^{bj} G(y) dy    (4)

Let bj = ε, where ε is an arbitrarily small, strictly positive value. Clearly, G(bj) and ∫₀^{bj} G(y) dy are then both strictly positive (since f(y) > 0). Moreover, given that bi ≤ v < vmax for bi ∈ B and that v > 0, it follows that v ∏_{bi ∈ B} (1 − G(bi)) > 0. Now, suppose bj = ½ v ∏_{bi ∈ B} (1 − G(bi)); then U(B ∪ {bj}, v) − U(B, v) = G(bj) · ½ v ∏_{bi ∈ B} (1 − G(bi)) + ∫₀^{bj} G(y) dy > 0, and thus U(B ∪ {bj}, v) > U(B, v). This completes our proof.

4.3 The Optimal Global Bid
A general solution for the optimal global bid requires the maximisation of equation 1 in M dimensions, an arduous task even when applying numerical methods. In this section, however, we show how to reduce the entire bid space to two dimensions in most cases (one continuous, and one discrete), thereby significantly simplifying the problem at hand.
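Theorem 1 is easy to check numerically. Using the closed form of equation 1 for uniform valuations on [0, 1] (G(b) = b^N; the helper below is our own illustration, not the paper's code), adding even a tiny bid in a previously unused auction strictly increases the expected utility:

```python
def expected_utility(bids, v, n_local):
    """Equation 1 specialised to uniform valuations on [0, 1]."""
    p_lose_all, cost = 1.0, 0.0
    for b in bids:
        p_lose_all *= 1.0 - b ** n_local
        cost += n_local * b ** (n_local + 1) / (n_local + 1)
    return v * (1.0 - p_lose_all) - cost

v, n_local = 0.5, 5
one_auction = [0.4]
# An arbitrarily small extra bid in a second auction helps (Theorem 1)...
assert expected_utility(one_auction + [0.01], v, n_local) > \
       expected_utility(one_auction, v, n_local)
# ...and so does a second full-sized bid.
assert expected_utility([0.4, 0.4], v, n_local) > \
       expected_utility([0.4], v, n_local)
```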
First, however, in order to find the optimal solutions of equation 1, we set the partial derivatives to zero:

  ∂U/∂bi = g(bi) [ v ∏_{bj ∈ B\{bi}} (1 − G(bj)) − bi ] = 0    (5)

Now, equality 5 holds either when g(bi) = 0 or when v ∏_{bj ∈ B\{bi}} (1 − G(bj)) − bi = 0. In the dynamic model, g(bi) is always greater than zero and can therefore be ignored (since g(0) = N f(0) e^{−N} and we assume f(y) > 0). In the static model, g(bi) = 0 only when bi = 0. However, Theorem 1 shows that the optimal bid is non-zero for 0 < v < vmax. Therefore, we can ignore the first part, and the second part yields:

  bi = v ∏_{bj ∈ B\{bi}} (1 − G(bj))    (6)

In other words, the optimal bid in auction i is equal to the bidder's valuation multiplied by the probability of not winning any of the other auctions. It is straightforward to show that the second partial derivative is negative, confirming that the solution is indeed a maximum when keeping all other bids constant. Thus, equation 6 provides a means to derive the optimal bid for auction i, given the bids in all other auctions.

4.3.1 Reducing the Search Space
In what follows, we show that, for non-decreasing probability density functions (such as the uniform and logarithmic distributions), the optimal global bid consists of at most two different values for any M ≥ 2. That is, the search space for finding the optimal bid can then be reduced to two continuous values. Let these values be bhigh and blow, where bhigh ≥ blow. More formally:

Theorem 2 Suppose the probability density function f is non-decreasing within the range [0, vmax]. Then the following proposition holds: given v > 0, for any bi ∈ B, either bi = bhigh, bi = blow, or bi = bhigh = blow.

Proof Using equation 6, we can produce M equations, one for each auction, with M unknowns. By combining these equations, we obtain the following relationship: b1 (1 − G(b1)) = b2 (1 − G(b2)) = . . . = bM (1 − G(bM)). By defining H(b) = b (1 − G(b)), we can rewrite this as:

  H(b1) = H(b2) = . . . = H(bM) = v ∏_{bj ∈ B} (1 − G(bj))    (7)

In order to prove that there exist at most two different bids, it is sufficient to show that b = H⁻¹(y) has at most two solutions satisfying 0 ≤ b ≤ vmax for any y. To see this, suppose H⁻¹(y) has at most two solutions, but there exists a third bid bj with bj ≠ blow and bj ≠ bhigh. From equation 7 it then follows that there exists a y such that H(bj) = H(blow) = H(bhigh) = y. Therefore, H⁻¹(y) must have at least three solutions, which is a contradiction.

Now, note that, in order to prove that H⁻¹(y) has at most two solutions, it is necessary and sufficient to show that H(b) has at most one local maximum for 0 ≤ b ≤ vmax. A sufficient condition, however, is for H(b) to be strictly concave⁴. The function H is strictly concave if and only if the following condition holds:

  H″(b) = d/db (1 − b g(b) − G(b)) = −( b g′(b) + 2 g(b) ) < 0    (8)

where H″(b) = d²H/db² and g′(b) = dg/db. By performing standard calculations, we obtain the following condition for the static model:

  b [ (N − 1) f(b)/F(b) + f′(b)/f(b) ] > −2 for 0 ≤ b ≤ vmax,    (9)

and similarly for the dynamic model we have:

  b [ N f(b) + f′(b)/f(b) ] > −2 for 0 ≤ b ≤ vmax,    (10)

where f′(b) = df/db. Since both f and F are positive, conditions 9 and 10 clearly hold when f′(b) ≥ 0. In other words, conditions 9 and 10 show that H(b) is strictly concave when the probability density function is non-decreasing on [0, vmax]. This completes our proof.

Note from conditions 9 and 10 that the requirement of a non-decreasing density function is sufficient, but far from necessary.
Moreover, condition 8 requiring H(b) to be strictly\nconcave is also stronger than necessary to guarantee only two\nsolutions. As a result, in practice we find that the reduction\nof the search space applies to most cases.\nGiven there are at most 2 possible bids, blow and bhigh, we\ncan further reduce the search space by expressing one bid in\nterms of the other. Suppose the buyer places a bid of blow in\nMlow auctions and bhigh for the remaining Mhigh = M\u2212Mlow\nauctions, equation 6 then becomes:\nblow = v(1 \u2212 G(blow))Mlow\u22121\n(1 \u2212 G(bhigh))Mhigh\n,\nand can be rearranged to give:\nbhigh = G\u22121\n1 \u2212\nblow\nv(1 \u2212 G(blow))Mlow\u22121\n1\nMhigh\n(11)\nHere, the inverse function G\u22121\n(\u00b7) can usually be obtained\nquite easily. Furthermore, note that, if Mlow = 1 or Mhigh =\n1, equation 6 can be used directly to find the desired value.\nUsing the above, we are able to reduce the bid search space\nto a single continuous dimension, given Mlow or Mhigh.\nHowever, we do not know the number of auctions in which to bid\nblow and bhigh, and thus we need to search M different\ncombinations to find the optimal global bid. Moreover, for each\n4\nMore precisely, H(b) can be either strictly convex or strictly\nconcave. However, it is easy to see that H is not convex since\nH(0) = H(vmax) = 0, and H(b) \u2265 0 for 0 < b < vmax.\n282 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n0 0.5 1\n0\n0.2\n0.4\n0.6\n0.8\n1\nvaluation (v)\nbidfraction(x)\n0 0.5 1\n0\n0.05\n0.1\n0.15\nlocal\nM=2\nM=4\nM=6\nvaluation (v)\nexpectedutility\nFigure 1: The optimal bid fractions x = b/v and\ncorresponding expected utility for a single global bidder\nwith N = 5 static local bidders and varying number of\nauctions (M). 
In addition, for comparison, the dark solid line in the right figure depicts the expected utility when bidding locally in a randomly selected auction, given there are no global bidders (note that, in case of local bidders only, the expected utility is not affected by M).

combination, the optimal blow and bhigh can vary. Therefore, in order to find the optimal bid for a bidder with valuation v, it is sufficient to search along one continuous variable blow ∈ [0, v], and a discrete variable Mlow = M − Mhigh ∈ {1, 2, . . . , M}.

4.3.2 Empirical Evaluation

In this section, we present results from an empirical study and characterise the optimal global bid for specific cases. Furthermore, we measure the actual utility improvement that can be obtained when using the global strategy. The results presented here are based on a uniform distribution of the valuations with vmax = 1, and the static local bidder model, but they generalise to the dynamic model and other distributions (not shown due to space limitations). Figure 1 illustrates the optimal global bids and the corresponding expected utility for various M and N = 5, but again the bid curves for different values of M and N follow a very similar pattern. Here, the bid is normalised by the valuation v to give the bid fraction x = b/v. Note that, when x = 1, a bidder bids its true value.

As shown in Figure 1, for bidders with a relatively low valuation, the optimal strategy is to submit M equal bids at, or very close to, the true value. The optimal bid fraction then gradually decreases for higher valuations. Interestingly, in most cases, placing equal bids is no longer the optimal strategy after the valuation reaches a certain point. A so-called pitchfork bifurcation is then observed and the optimal bids split into two values: a single high bid and M − 1 low ones. This transition is smooth for M = 2, but exhibits an abrupt jump for M ≥ 3.
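The reduced search can be carried out with a simple grid search. The sketch below assumes uniform valuations with vmax = 1 and the static model, so G(b) = b^N and the expected payment in one auction is ∫_0^b y·N·y^{N−1} dy = N·b^{N+1}/(N+1); the function names and grid resolution are our own, not the paper's implementation:

```python
import numpy as np

def expected_utility(bids, v, N):
    # U(B) = v * P(win at least one auction) - sum_i int_0^{b_i} y g(y) dy,
    # with G(b) = b**N and g(y) = N y**(N-1) (uniform valuations, vmax = 1).
    bids = np.asarray(bids, dtype=float)
    p_none = np.prod(1.0 - bids**N)
    payment = np.sum(N * bids**(N + 1) / (N + 1))
    return v * (1.0 - p_none) - payment

def best_two_level_bid(v, M, N, steps=50):
    # Search the reduced space: M_low auctions receive b_low, the rest b_high.
    grid = np.linspace(0.0, v, steps + 1)
    best_u, best_bids = -np.inf, None
    for m_low in range(1, M + 1):
        for b_low in grid:
            for b_high in grid[grid >= b_low]:
                bids = [b_low] * m_low + [b_high] * (M - m_low)
                u = expected_utility(bids, v, N)
                if u > best_u:
                    best_u, best_bids = u, bids
    return best_u, best_bids

best_u, best_bids = best_two_level_bid(0.2, M=2, N=5)
```

For a low valuation such as v = 0.2 with M = 2 and N = 5, the search returns two nearly equal bids at (or just below) the true value, matching the low-valuation region of Figure 1.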
In all experiments, however, we\nconsistently observe that the optimal strategy is always to\nplace a high bid in one auction, and an equal or lower bid in\nall others. In case of a bifurcation and when the valuation\napproaches vmax, the optimal high bid goes to the true value\nand the low bids go to zero.\nAs illustrated in Figure 1, the utility of a global bidder\nbecomes progressively higher with more auctions. In absolute\nterms, the improvement is especially high for bidders that\nhave an above average valuation, but not too close to vmax.\nThe bidders in this range thus benefit most from bidding\nglobally. This is because bidders with very low valuations\nhave a very small chance of winning any auction, whereas\nbidders with a very high valuation have a high probability of\nwinning a single auction and benefit less from participating\nin more auctions. In contrast, if we consider the utility\nrelative to bidding in a single auction, this is much higher for\nbidders with relatively low valuations (this effect cannot be\nseen clearly in Figure 1 due to the scale). In particular, we\nnotice that a global bidder with a low valuation can improve\nits utility by up to M times the expected utility of bidding\nlocally. Intuitively, this is because the chance of winning one\nof the auctions increases by up to a factor M, whereas the\nincrease in the expected cost is negligible. For high valuation\nbuyers, however, the benefit is not that obvious because the\nchances of winning are relatively high even in case of a single\nauction.\n4.4 Sequential and Concurrent Auctions\nIn this section we extend our analysis of the optimal bidding\nstrategy to sequential auctions. Specifically, the auction\nprocess consists of R rounds, and in each round any number of\nauctions are running simultaneously. Such a combination of\nsequential and concurrent auctions is very common in\npractice, especially online5\n. 
It turns out that the analysis for the case of simultaneous auctions is quite general and can be easily extended to include sequential auctions. In the following, the number of simultaneous auctions in round r is denoted by Mr, and the set of bids in that round by Br. As before, the analysis assumes that all other bidders are local and bid in a single auction. Furthermore, we assume that the global bidders have complete knowledge about the number of rounds and the number of auctions in each round.

The expected utility in round r, denoted by Ur, is similar to before (equation 1 in Section 4.1) except that now additional benefit can be obtained from future auctions if the desired item is not won in one of the current set of simultaneous auctions. For convenience, Ur(Br, Mr) is abbreviated to Ur in the following. The expected utility thus becomes:

Ur = v·Pr(Br) − Σ_{bri∈Br} ∫_0^{bri} y·g(y) dy + Ur+1·(1 − Pr(Br))
   = Ur+1 + (v − Ur+1)·Pr(Br) − Σ_{bri∈Br} ∫_0^{bri} y·g(y) dy,   (12)

where Pr(Br) = 1 − ∏_{bri∈Br} (1 − G(bri)) is the probability of winning at least one auction in round r. Now, we take the partial derivative of equation 12 in order to find the optimal bid brj for auction j in round r:

∂Ur/∂brj = g(brj)·[ (v − Ur+1)·∏_{bri∈Br\{brj}} (1 − G(bri)) − brj ]   (13)

⁵ Rather than being purely sequential in nature, online auctions also often overlap (i.e., new auctions can start while others are still ongoing). In that case, however, it is optimal to wait and bid in the new auctions only after the outcome of the earlier auctions is known, thereby reducing the chance of unwittingly winning multiple items. Using this strategy, overlapping auctions effectively become sequential and can thus be analysed using the results in this section.
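Equation 12 expresses Ur in terms of Ur+1, so with the boundary condition UR+1 = 0 the whole sequence can be evaluated backwards. The sketch below is our own simplification: uniform valuations on [0, 1], the static model G(b) = b^N, and an equal bid in all auctions of a round (rather than the full two-level search), which keeps the example short while still illustrating the recursion:

```python
import numpy as np

def round_value(v_eff, M, N, steps=400):
    # Best surplus from bidding an equal amount b in all M auctions of one
    # round (a simplification of the full two-level search), where the
    # effective valuation v_eff = v - U_{r+1} accounts for future rounds.
    b = np.linspace(0.0, v_eff, steps + 1)
    p_win = 1.0 - (1.0 - b**N) ** M               # win at least one auction
    payment = M * N * b**(N + 1) / (N + 1)        # sum of expected payments
    return np.max(v_eff * p_win - payment)

def sequential_utility(v, rounds_M, N):
    # Backward induction with U_{R+1} = 0, following equation 12.
    u_next = 0.0
    for M in reversed(rounds_M):
        u_next = u_next + round_value(v - u_next, M, N)
    return u_next

u_two_rounds = sequential_utility(0.8, [2, 2], N=5)
u_one_round = sequential_utility(0.8, [2], N=5)
assert u_two_rounds >= u_one_round > 0
```

Adding a second round can only help the bidder here, since the round surplus is always non-negative and future opportunity is folded into the reduced effective valuation v − Ur+1.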
Note that equation 13 is almost identical to equation 5 in Section 4.3, except that the valuation v is now replaced by v − Ur+1. The optimal bidding strategy can thus be found by backward induction (where UR+1 = 0) using the procedure outlined in Section 4.3.

5. MULTIPLE GLOBAL BIDDERS

As argued in section 3.2, we expect a real-world market to exhibit a mix of global and local bidders. Whereas so far we assumed a single global bidder, in this section we consider a setting where multiple global bidders interact with one another and with local bidders as well. The analysis of this problem is complex, however, as the optimal bidding strategy of a global bidder depends on the strategy of other global bidders. A typical analytical approach is to find the symmetric Nash equilibrium solution [9, 12], which occurs when all global bidders use the same strategy to produce their bids, and no (global) bidder has any incentive to unilaterally deviate from the chosen strategy. Due to the complexity of the problem, however, here we combine a computational simulation approach with analytical results. The simulation works by iteratively finding the best response to the optimal bidding strategies in the previous iteration. If this should result in a stable outcome (i.e., when the optimal bidding strategies remain unchanged for two subsequent iterations), the solution is by definition a (symmetric) Nash equilibrium.

5.1 The Global Bidder's Expected Utility

In order to find a global bidder's best response, we first need to calculate the expected utility given the global bid B and the strategies of both the other global bidders as well as the local bidders. In the following, let Ng denote the number of other global bidders.
Furthermore, let the strategies of the other global bidders be represented by the set of functions βk(v), 1 ≤ k ≤ M, producing a bid for each auction given a bidder's valuation v. Note that all other global bidders use the same set of functions since we consider symmetric equilibria. However, we assume that the assignment of functions to auctions by each global bidder occurs in a random fashion without replacement (i.e., each function is assigned exactly once by each global bidder). Let Ω denote the set of all possible assignments. Each such assignment ω ∈ Ω is an (M, Ng) matrix, where each entry ωi,j identifies the function used by global bidder j in auction i. Note that the cardinality of Ω, denoted by |Ω|, is equal to (M!)^{Ng}. Now, the expected utility is the average expected utility over all possible assignments and is given by:

U(B, v) = (1/|Ω|)·Σ_{ω∈Ω} v·( 1 − ∏_{bi∈B} (1 − G̃ωi(bi)) ) − (1/|Ω|)·Σ_{ω∈Ω} Σ_{bi∈B} ∫_0^{bi} y·g̃ωi(y) dy,   (14)

where G̃ωi(b) = G(b)·∏_{j=1}^{Ng} ∫_0^{b} βωi,j(y)·f(y) dy denotes the probability of winning auction i, given that each global bidder 1 ≤ j ≤ Ng bids according to the function βωi,j, and g̃ωi(y) = dG̃ωi(y)/dy. Here, G(b) is the probability of winning an auction with only local bidders as described in Section 4.1, and f(y) is the probability density of the bidder valuations as before.

5.2 The Simulation

The simulation works by discretising the space of possible valuations and bids and then finding a best response to an initial set of bidding functions. The best response is found by maximising equation 14 for each discrete valuation, which, in turn, results in a new set of bidding functions.
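The discretised best-response loop can be sketched as follows. This is a heavily simplified version of the procedure: one other global bidder (Ng = 1), equal bids across all M auctions, a coarse grid instead of the full 1000 × 300 discretisation, and payments approximated by the local-bidders-only expected second price; all helper names and parameter values are ours:

```python
import numpy as np

V, K = 50, 30          # discrete valuations and bid levels (coarse for speed)
N, M = 5, 2            # local bidders per auction, number of auctions
vals = np.arange(1, V + 1) / V

def win_prob(b, beta):
    # P(bid b beats the N uniform local bids and the one opposing global
    # bidder), where beta[j] is the opponent's bid at valuation vals[j].
    return (b ** N) * np.mean(beta < b)

def best_response(beta):
    new = np.zeros_like(beta)
    for j, v in enumerate(vals):
        bids = v * np.arange(K + 1) / K
        p = np.array([win_prob(b, beta) for b in bids])
        # Payment approximated by the local-bidders-only expected second
        # price, as a simplification of equation 14.
        u = v * (1 - (1 - p) ** M) - M * N * bids ** (N + 1) / (N + 1)
        new[j] = bids[np.argmax(u)]
    return new

beta = vals.copy()                  # initial strategy: bid truthfully
for _ in range(10):                 # iterate best responses
    beta = best_response(beta)
```

Each pass recomputes the utility-maximising bid at every discrete valuation against the opponent's previous-iteration strategy, which is exactly the loop structure described above, though the simplifications mean its numbers should not be read as reproducing the paper's results.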
These functions then affect the probabilities of winning in the next iteration, for which the new best response strategy is calculated. This process is then repeated for a fixed number of iterations or until a stable solution has been found.⁶

Clearly, due to the large search space, finding the utility-maximising global bid quickly becomes infeasible as the number of auctions and global bidders increases. Therefore, we reduce the search space by limiting the global bid to two dimensions, where a global bidder bids high in one of the auctions and low in all the others.⁷ This simplification is justified by the results in Section 4.3.1, which show that, for a large number of commonly used distributions, the optimal global bid consists of at most two different values.

The results reported here are based on the following settings.⁸ In order to emphasize that the valuations are discrete, we use integer values ranging from 1 to 1000. Each valuation occurs with equal probability, equivalent to a uniform valuation distribution in the continuous case. A bidder can select between 300 different equally-spaced bid levels. Thus, a bidder with valuation v can place bids b ∈ {0, v/300, 2v/300, . . . , v}. The local bidders are static and bid their valuation as before. The initial set of functions can play an important role in the experiments. Therefore, to ensure our results are robust, experiments are repeated with different random initial functions.

5.3 The Results

First, we describe the results with no local bidders. For this case, we find that the simulation does not converge to a stable state. That is, when there is at least one other global bidder, the best response strategy keeps fluctuating, irrespective of the number of iterations and of the initial state. The fluctuations, however, show a distinct pattern and alternate between two states. Figure 2 depicts these two states for Ng = 10 and M = 5.
The two states vary most when there are at least as many auctions as there are global bidders. In that case, one of the best response states is to bid truthfully in one auction and zero in all others. The best response to that, however, is to bid an equal positive amount close to zero in all auctions; this strategy guarantees at least one object at a very low payment. The best response is then again to bid truthfully in a single auction since this appropriates the object in that particular auction. As a result, there exists no stable solution. The same result is observed when the number of global bidders is less than the number of auctions. This occurs since global bidders randomise over auctions, and thus they cannot coordinate and choose to bid high in different auctions.

As shown in Figure 2, a similar fluctuation is observed when the number of global bidders increases relative to the number of auctions. However, the bids in the equal-bid state (state 2 in Figure 2), as well as the low bids of the other state, increase. Moreover, if the number of global bidders is increased even further, a bifurcation occurs in the equal-bid state similar to the case without local bidders.

We now consider the best response strategies when both local and global bidders participate and each auction contains the same number of local bidders. To this end, Figure 3 shows the average variance of the best response strategies. This is measured as the variance of an actual best-response bid over different iterations, and then taking the average over the discrete bidder valuations. Here, the variance is a gauge for the amount of fluctuation and thus the instability of the strategy. As can be seen from this figure, local bidders have a large stabilising effect on the global bidder strategies. As a result, the best response strategy approximates a pure symmetric Nash equilibrium. We note that the results converge after only a few iterations.

The results show that the principal conclusions in the case of a single global bidder carry over to the case of multiple global bidders.

⁶ This approach is similar to an alternating-move best-response process with pure strategies [4], although here we consider symmetric strategies within a setting where an opponent's best response depends on its valuation.

⁷ Note that the number of possible allocations still increases with the number of auctions and global bids. However, by merging all utility-equivalent permutations, we significantly increase computation speed, allowing experiments with relatively large numbers of auctions and bidders to be performed (e.g., a single iteration with 50 auctions and 10 global bidders takes roughly 30 seconds on a 3.00 GHz PC).

⁸ We also performed experiments with different precision, other valuation distributions, and dynamic local bidders. We find that the principal conclusions generalise to these different settings, and therefore we omit the results to avoid repetitiveness.

[plot omitted] Figure 2: The two states of the best response strategy for M = 5 and Ng = 10 without local bidders.

[plot omitted] Figure 3: The variance of the best response strategy over 10 iterations and 10 experiments with different initial settings and M = 5. The error bars show the (small) standard deviations.
That is, the optimal strategy is to bid\npositive in all auctions (as long as there are at least as many\nbidders as auctions). Furthermore, a similar bifurcation point\nis observed. These results are very robust to changes to the\nauction settings and the parameters of the simulation.\nTo conclude, even though a theoretical analysis proves\ndifficult in case of several global bidders, we can approximate\na (symmetric) Nash equilibrium for specific settings using a\ndiscrete simulation in case the system consists of both local\nand global bidders. Thus, our simulation can be used as a\ntool to predict the market equilibrium and to find the\noptimal bidding strategy for practical settings where we expect\na combination of local and global bidders.\n6. MARKET EFFICIENCY\nEfficiency is an important system-wide property since it\ncharacterises to what extent the market maximises social welfare\n(i.e. the sum of utilities of all agents in the market). To this\nend, in this section we study the efficiency of markets with\neither static or dynamic local bidders, and the impact that a\nglobal bidder has on the efficiency in these markets.\nSpecifically, efficiency in this context is maximised when the bidders\nwith the M highest valuations in the entire market obtain a\nsingle item each. More formally, we define the efficiency of\nan allocation as:\nDefinition 1 Efficiency of Allocation. 
The efficiency ηK of an allocation K is the obtained social welfare proportional to the maximum social welfare that can be achieved in the market and is given by:

ηK = ( Σ_{i=1}^{NT} vi(K) ) / ( Σ_{i=1}^{NT} vi(K*) ),   (15)

where K* = arg max_{K∈𝒦} Σ_{i=1}^{NT} vi(K) is an efficient allocation, 𝒦 is the set of all possible allocations, vi(K) is bidder i's utility for the allocation K ∈ 𝒦, and NT is the total number of bidders participating across all auctions (including any global bidders).

Now, in order to measure the efficiency of the market and the impact of a global bidder, we run simulations for the markets with the different types of local bidders. The experiments are carried out as follows. Each bidder's valuation is drawn from a uniform distribution with support [0, 1]. The local bidders bid their true valuations, whereas the global bidder bids optimally in each auction as described in Section 4.3. The experiments are repeated 5000 times for each run to obtain an accurate mean value, and the final average results and standard deviations are taken over 10 runs in order to get statistically significant results.

The results of these experiments are shown in Figure 4. Note that a degree of inefficiency is inherent to a multi-auction market with only local bidders [2].⁹ For example, if there are two auctions selling one item each, and the two bidders with the highest valuations both bid locally in the same auction, then the bidder with the second-highest value does not obtain the good. Thus, the allocation of items to bidders is inefficient. As can be observed from Figure 4, however, the efficiency increases when N becomes larger. This is because the differences between the bidders with the highest valuations become smaller, thereby decreasing the loss of efficiency.

Furthermore, Figure 4 shows that the presence of a global bidder has a slightly positive effect on the efficiency in case the local bidders are static.
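Efficiency in the sense of Definition 1 is straightforward to estimate by simulation. The sketch below is our own and covers only the local-bidders-only market with uniform valuations (not the full experiments with a global bidder); it reproduces the two effects described above, namely η < 1 on average and η rising as N grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_only_efficiency(M, N, trials=5000):
    # Definition 1: realised welfare over best possible welfare, for M
    # auctions with N truthful local bidders each and no global bidder.
    ratios = np.empty(trials)
    for t in range(trials):
        v = rng.uniform(size=(M, N))             # valuations per auction
        welfare = v.max(axis=1).sum()            # each auction to its top bidder
        optimal = np.sort(v.ravel())[-M:].sum()  # top M valuations overall
        ratios[t] = welfare / optimal
    return ratios.mean()

eta_small = local_only_efficiency(M=2, N=2)
eta_large = local_only_efficiency(M=2, N=12)
assert 0 < eta_small < eta_large <= 1
```

The loss comes from exactly the two-auction example in the text: with small N, the two highest-valuation bidders often collide in the same auction and the gap to the next-best bidder is large; with larger N the gap shrinks, so the estimated η moves towards 1.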
In the case of dynamic bidders, however, the effect of a global bidder depends on the number of sellers. If M is low (i.e., for M = 2), a global bidder significantly increases the efficiency, especially for low values of N. For M = 6, on the other hand, the presence of a global bidder has a negative effect on the efficiency (this effect becomes even more pronounced for higher values of M). This result is explained as follows. The introduction of a global bidder potentially leads to a decrease of efficiency since this bidder can unwittingly win more than one item. However, as the number of local bidders increases, this is less likely to happen. Rather, since the global bidder increases the number of bidders, its presence makes an overall positive (albeit small) contribution in case of static bidders. In a market with dynamic bidders, however, the market efficiency depends on two other factors. On the one hand, the efficiency increases since items no longer remain unsold (this situation can occur in the dynamic model when no bidder turns up at an auction). On the other hand, as a result of the uncertainty concerning the actual number of bidders, a global bidder is more likely to win multiple items (we confirmed this analytically).

⁹ Trivial exceptions are when either M = 1 or N = 1 and bidders are static, since the market is then completely efficient without a global bidder.

[plot omitted] Figure 4: Average efficiency for different market settings as shown in the legend (all combinations of M ∈ {2, 6}, static or dynamic local bidders, and with or without a global bidder). The error-bars indicate the standard deviation over the 10 runs.
As M increases, the first effect becomes negligible whereas the second one becomes more prominent, reducing the efficiency on average.

To conclude, the impact of a global bidder on the efficiency clearly depends on the information that is available. In case of static local bidders, the number of bidders is known and the global bidder can bid more accurately. In case of uncertainty, however, the global bidder is more likely to win more than one item, decreasing the overall efficiency.

7. CONCLUSIONS

In this paper, we derive utility-maximising strategies for bidding in multiple, simultaneous second-price auctions. We first analyse the case where a single global bidder bids in all auctions, whereas all other bidders are local and bid in a single auction. For this setting, we find the counter-intuitive result that it is optimal to place non-zero bids in all auctions that sell the desired item, even when a bidder only requires a single item and derives no additional benefit from having more. Thus, a potential buyer can achieve considerable benefit by participating in multiple auctions and employing an optimal bidding strategy. For a number of common valuation distributions, we show analytically that the problem of finding optimal bids reduces to two dimensions. This considerably simplifies the original optimisation problem and can thus be used in practice to compute the optimal bids for any number of auctions.

Furthermore, we investigate a setting with multiple global bidders by combining analytical solutions with a simulation approach. We find that a global bidder's strategy does not stabilise when only global bidders are present in the market, but only converges when there are local bidders as well. We argue, however, that real-world markets are likely to contain both local and global bidders.
The converged results are then very similar to the setting with a single global bidder, and we find that a bidder benefits by bidding optimally in multiple auctions. For the more complex setting with multiple global bidders, the simulation can thus be used to find these bids for specific cases.

Finally, we compare the efficiency of a market with multiple concurrent auctions with and without a global bidder. We show that, if the bidder can accurately predict the number of local bidders in each auction, the efficiency slightly increases. In contrast, if there is much uncertainty, the efficiency significantly diminishes as the number of auctions increases, due to the increased probability that a global bidder wins more than one item. These results show that the way in which the efficiency, and thus social welfare, is affected by a global bidder depends on the information that is available to that global bidder.

In future work, we intend to extend the results to imperfect substitutes (i.e., when a global bidder gains from winning additional items), and to settings where the auctions are no longer identical. The latter arises, for example, when the number of (average) local bidders differs per auction or the auctions have different settings for parameters such as the reserve price.

8. REFERENCES
[1] S. Airiau and S. Sen. Strategic bidding for multiple units in simultaneous and sequential auctions. Group Decision and Negotiation, 12(5):397-413, 2003.
[2] P. Cramton, Y. Shoham, and R. Steinberg. Combinatorial Auctions. MIT Press, 2006.
[3] R. Engelbrecht-Wiggans and R. Weber. An example of a multiobject auction game. Management Science, 25:1272-1277, 1979.
[4] D. Fudenberg and D. Levine. The Theory of Learning in Games. MIT Press, 1999.
[5] A. Greenwald, R. Kirby, J. Reiter, and J. Boyan. Bid determination in simultaneous auctions: A case study. In Proc. of the Third ACM Conference on Electronic Commerce, pages 115-124, 2001.
[6] V.
Krishna. Auction Theory. Academic Press, 2002.
[7] V. Krishna and R. Rosenthal. Simultaneous auctions with synergies. Games and Economic Behaviour, 17:1-31, 1996.
[8] K. Lang and R. Rosenthal. The contractor's game. RAND J. Econ, 22:329-338, 1991.
[9] R. Rosenthal and R. Wang. Simultaneous auctions with synergies and common values. Games and Economic Behaviour, 17:32-55, 1996.
[10] A. Roth and A. Ockenfels. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the internet. The American Economic Review, 92(4):1093-1103, 2002.
[11] O. Shehory. Optimal bidding in multiple concurrent auctions. Int. Journal of Cooperative Information Systems, 11:315-327, 2002.
[12] B. Szentes and R. Rosenthal. Three-object two-bidder simultaneous auctions: chopsticks and tetrahedra. Games and Economic Behaviour, 44:114-133, 2003.
[13] D. Yuen, A. Byde, and N. R. Jennings. Heuristic bidding strategies for multiple heterogeneous auctions. In Proc. 17th European Conference on AI (ECAI), pages 300-304, 2006.
Computing the Banzhaf Power Index in Network Flow Games

Abstract: Preference aggregation is used in a variety of multiagent applications, and as a result, voting theory has become an important topic in multiagent system research. However, power indices (which reflect how much real power a voter has in a weighted voting system) have received relatively little attention, although they have long been studied in political science and economics. The Banzhaf power index is one of the most popular; it is also well-defined for any simple coalitional game. In this paper, we examine the computational complexity of calculating the Banzhaf power index within a particular multiagent domain, a network flow game. Agents control the edges of a graph; a coalition wins if it can send a flow of a given size from a source vertex to a target vertex. The relative power of each edge/agent reflects its significance in enabling such a flow, and in real-world networks could be used, for example, to allocate resources for maintaining parts of the network. We show that calculating the Banzhaf power index of each agent in this network flow domain is #P-complete. We also show that for some restricted network flow domains there exists a polynomial algorithm to calculate agents' Banzhaf power indices.

1. INTRODUCTION

Social choice theory can serve as an appropriate foundation upon which to build multiagent applications.
There is a rich literature on the subject of voting¹ from political science, mathematics, and economics, with important theoretical results, and builders of automated agents can benefit from this work as they engineer systems that reach group consensus.

Interest in the theory of economics and social choice has in fact become widespread throughout computer science, because it is recognized as having direct implications on the building of systems comprised of multiple automated agents [16, 4, 22, 17, 14, 8, 15]. What distinguishes computer science work in these areas is its concern for computational issues: how are results arrived at (e.g., equilibrium points)? What is the complexity of the process? Can complexity be used to guard against unwanted phenomena? Does complexity of computation prevent realistic implementation of a technique?

The practical applications of voting among automated agents are already widespread. Ghosh et al. [6] built a movie recommendation system; a user's preferences were represented as agents, and movies to be suggested were selected through agent voting. Candidates in virtual elections have also been beliefs, joint plans [5], and schedules [7]. In fact, to see the generality of the (automated) voting scenario, consider modern web searching. One of the most massive preference aggregation schemes in existence is Google's PageRank algorithm, which can be viewed as a vote among indexed web pages on candidates determined by a user-input search string; winners are ranked (Tennenholtz and Altman [21] consider the axiomatic foundations of ranking systems such as this).

In this paper, we consider a topic that has been less studied in the context of automated agent voting, namely power indices. A power index is a measure of the power that a subgroup, or equivalently a voter in a weighted voting environment, has over decisions of a larger group.
The Banzhaf power index is one of the most popular measures of voting power, and although it has been used primarily for measuring power in weighted voting games, it is well-defined for any simple coalitional game.

We look at some computational aspects of the Banzhaf power index in a specific environment, namely a network flow game. In this game, a coalition of agents wins if it can send a flow of size k from a source vertex s to a target vertex t, with the relative power of each edge reflecting its significance in allowing such a flow. We show that calculating the Banzhaf power index of each agent in this general network flow domain is #P-complete. We also show that for some restricted network flow domains (specifically, of connectivity games on bounded layer graphs), there does exist a polynomial algorithm to calculate the Banzhaf power index of an agent. There are implications in this scenario to real-world networks; for example, the power index might be used to allocate maintenance resources (a more powerful edge being more critical), in order to maintain a given flow of data between two points.

The paper proceeds as follows. In Section 2 we give some background concerning coalitional games and the Banzhaf power index, and in Section 3 we introduce our specific network flow game. In Section 4 we discuss the Banzhaf power index in network flow games, presenting our complexity result in the general case. In Section 5 we consider a restricted case of the network flow game, and present results. In Section 6 we discuss related work, and we conclude in Section 7.

¹ We use the term in its intuitive sense here, but in the social choice literature, preference aggregation and voting are basically synonymous.

335 978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. TECHNICAL BACKGROUND

A coalitional game is composed of a set of n agents, I, and a function mapping any subset (coalition) of the agents to a real value v : 2^I → R.
In a simple coalitional game, v only takes values of 0 or 1 (v : 2^I → {0, 1}). We say a coalition C ⊆ I wins if v(C) = 1, and say it loses if v(C) = 0. We denote the set of all winning coalitions as W(v) = {C ∈ 2^I | v(C) = 1}.

An agent i is a swinger (or pivot) in a winning coalition C if the agent's removal from that coalition would make it a losing coalition: v(C) = 1, v(C \ {i}) = 0. A swing is a pair ⟨i, S⟩ such that agent i is a swinger in coalition S.

A question that arises in this context is that of measuring the influence a given agent has on the outcome of a simple game. One approach to measuring the power of individual agents in simple coalitional games is the Banzhaf index.

2.1 The Banzhaf Index

A common interpretation of the power an agent possesses is that of its a priori probability of having a significant role in the game. Different assumptions about the formation of coalitions, and different definitions of having a significant role, have caused researchers to define different power indices, one of the most prominent of which is the Banzhaf index [1]. This index has been widely used, though primarily for the purpose of measuring individual power in a weighted voting system. However, it can also easily be applied to any simple coalitional game.

The Banzhaf index depends on the number of coalitions in which an agent is a swinger, out of all possible coalitions.² The Banzhaf index is given by β(v) = (β1(v), ..., βn(v)), where

βi(v) = (1/2^{n−1}) · Σ_{S⊆I : i∈S} [v(S) − v(S \ {i})].

Different probabilistic models on the way a coalition is formed yield different appropriate power indices [20]. The Banzhaf power index reflects the assumption that the agents are independent in their choices.

3. NETWORK FLOW GAMES

3.1 Motivation

Consider a communication network, where it is crucial to be able to send a certain amount of information between two sites.
Given limited resources to maintain network links, which edges should get those resources?

(Footnote 2: Banzhaf actually considered the percentage of such coalitions out of all winning coalitions. This is called the normalized Banzhaf index.)

We model this problem by considering a network flow game. The game consists of agents in a network flow graph, with a certain source vertex s and target vertex t. Each agent controls one of the graph's edges, and a coalition of agents controls all the edges its members control. A coalition of agents wins the game if it manages to send a flow of at least k from source s to target t, and loses otherwise.

To ensure that the network is capable of maintaining the desired flow between s and t, we may choose to allocate our limited maintenance resources to the edges according to their impact on allowing this flow. In other words, resources could be devoted to the links whose failure is most likely to cause us to lose the ability to send the required amount of information between the source and target.

Under a reasonable probabilistic model, the Banzhaf index provides us with a measure of the impact each edge has on enabling this amount of information to be sent between the sites, and thus provides a reasonable basis for allocation of scarce maintenance resources.

3.2 Formal Definition

Formally, a network flow game is defined as follows. The game consists of a network flow graph G = <V, E>, with capacities on the edges c : E → R, a source vertex s, a target vertex t, and a set I of agents, where agent i controls the edge e_i. Given a coalition C, which controls the edges E_C = {e_i | i ∈ C}, we can check whether the coalition allows a flow of k from s to t.
We define the simple coalitional game of network flow as the game where the coalition wins if it allows such a flow, and loses otherwise:

    v(C) = 1 if E_C allows a flow of k from s to t;
    v(C) = 0 otherwise.

A simplified version of the network flow game is the connectivity game; in a connectivity game, a coalition wants to have some path from source to target. More precisely, a connectivity game is a network flow game where each of the edges has identical capacity, c(e) = 1, and the target flow value is k = 1. In such a scenario, the goal of a coalition is to have at least one path from s to t:

    v(C) = 1 if E_C contains a path from s to t;
    v(C) = 0 otherwise.

Given a network flow game (or a connectivity game), we can compute the power indices of the game. When a coalition of edges is chosen at random, and each coalition is equiprobable, the appropriate index is the Banzhaf index. (Footnote 3: When each ordering of edges is equiprobable, the appropriate index is the Shapley-Shubik index.) We can use the Banzhaf value of an agent i ∈ I (or the edge it controls, e_i), β_{e_i}(v) = β_i(v), to measure its impact on allowing a given flow between s and t.

4. THE BANZHAF INDEX IN NETWORK FLOW GAMES

We now define the problem of calculating the Banzhaf index in the network flow game.

DEFINITION 1. NETWORK-FLOW-BANZHAF: We are given a network flow graph G = <V, E> with a source vertex s and a target vertex t, a capacity function c : E → R, and a target flow value k. We consider the network flow game, as defined above in Section 3. We are given an agent i, controlling the edge e_i, and are asked to calculate the Banzhaf index for that agent.

336 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

In the network flow game, let C_{e_i} be the set of all subsets of E that contain e_i: C_{e_i} = {C ⊆ E | e_i ∈ C}.
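In the connectivity game, evaluating v(C) is just a reachability test over the coalition's edges. A minimal sketch (our own encoding of vertices and edges, assuming directed edges):

```python
from collections import deque

def connectivity_value(n, edges, coalition, s, t):
    """v(C) for the connectivity game: 1 iff the coalition's edges
    contain a path from s to t.

    n         -- number of vertices, labelled 0..n-1
    edges     -- list of directed (u, v) pairs, one per agent
    coalition -- set of edge indices controlled by the coalition
    """
    adj = [[] for _ in range(n)]
    for idx in coalition:
        u, v = edges[idx]
        adj[u].append(v)
    # Breadth-first search from s over the coalition's edges only.
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return 1
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return 0

# Diamond graph: s=0, t=3, two parallel two-edge paths.
E = [(0, 1), (1, 3), (0, 2), (2, 3)]
print(connectivity_value(4, E, {0, 1}, 0, 3))  # prints 1 (path 0-1-3)
print(connectivity_value(4, E, {0, 3}, 0, 3))  # prints 0 (no path)
```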
In this game, the Banzhaf index of e_i is:

    β_i(v) = (1 / 2^(|E|−1)) · Σ_{E′ ∈ C_{e_i}} [v(E′) − v(E′ \ {e_i})].

Let W(C_{e_i}) be the set of winning subsets of edges in C_{e_i}, i.e., the subsets E′ ∈ C_{e_i} where a flow of at least k can be sent from s to t using only the edges in E′. The Banzhaf index of e_i is the proportion of subsets in W(C_{e_i}) where e_i is crucial to maintaining the k-flow. All the edge subsets in W(C_{e_i}) contain e_i and are winning, but only for some of them, E′ ∈ W(C_{e_i}), do we have that v(E′ \ {e_i}) = 0 (i.e., E′ is no longer winning if we remove e_i). The Banzhaf index of e_i is the proportion of such subsets.

4.1 #P-Completeness of Calculating the Banzhaf Index in the Network Flow Game

We now show that the general case of NETWORK-FLOW-BANZHAF is #P-complete, by a reduction from #MATCHING.

First, we note that NETWORK-FLOW-BANZHAF is in #P. There are several polynomial algorithms to calculate the maximal network flow, so it is easy to check if a certain subset of edges E′ ⊆ E contains e_i and allows a flow of at least k from s to t. It is also easy to check if a flow of at least k is no longer possible when we remove e_i from E′ (again, by running a polynomial algorithm for calculating the maximal flow). The Banzhaf index of e_i is exactly the number of such subsets E′ ⊆ E, divided by the constant 2^(|E|−1), so NETWORK-FLOW-BANZHAF is in #P. To show that NETWORK-FLOW-BANZHAF is #P-complete, we reduce a #MATCHING problem to a NETWORK-FLOW-BANZHAF problem.

DEFINITION 2. #MATCHING: We are given a bipartite graph G = <U, V, E>, such that |U| = |V| = n, and are asked to count the number of perfect matchings possible in G.

4.2 The Overall Reduction Approach

The reduction is done as follows. From the #MATCHING input, G = <U, V, E>, we build two inputs for the NETWORK-FLOW-BANZHAF problem. The difference between the answers obtained from the NETWORK-FLOW-BANZHAF runs is the answer to the #MATCHING problem.
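This definition can be evaluated naively by enumerating edge subsets and testing each with a maximum-flow computation; that is exponential in |E| (consistent with the hardness result below), but usable on tiny instances. A sketch, where the function names and the small Ford-Fulkerson helper are our own:

```python
from itertools import combinations

def max_flow(n, cap_edges, s, t):
    """Ford-Fulkerson on a residual adjacency matrix; fine for tiny graphs.
    cap_edges is a list of (u, v, capacity) with integer capacities."""
    res = [[0] * n for _ in range(n)]
    for u, v, c in cap_edges:
        res[u][v] += c
    total = 0
    while True:
        # DFS for an augmenting path in the residual graph.
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in range(n):
                if res[u][v] > 0 and v not in parent:
                    parent[v] = u
                    stack.append(v)
        if t not in parent:
            return total
        # Recover the path, push the bottleneck amount along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        total += push

def banzhaf_edge(n, edges, s, t, k, i):
    """Banzhaf index of the agent controlling edges[i] in the network
    flow game with target flow k (brute force, exponential in |E|)."""
    m = len(edges)
    wins = lambda idxs: max_flow(n, [edges[j] for j in idxs], s, t) >= k
    others = [j for j in range(m) if j != i]
    swings = 0
    for r in range(m):
        for rest in combinations(others, r):
            if wins(rest + (i,)) and not wins(rest):
                swings += 1
    return swings / 2 ** (m - 1)

# Tiny example: flow of 2 from 0 to 3 needs both parallel paths,
# so every edge is a swinger only in the grand coalition: beta = 1/8.
edges = [(0, 1, 2), (1, 3, 1), (0, 2, 1), (2, 3, 1)]
print(banzhaf_edge(4, edges, 0, 3, 2, 1))  # prints 0.125
```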
Both runs of the NETWORK-FLOW-BANZHAF problem are constructed with the same graph G′ = <V′, E′>, with the same source vertex s and target vertex t, and with the same edge e_f for which to compute the Banzhaf index. They differ only in the target flow value. The first run is with a target flow of k, and the second run is with a target flow of k + ε.

A choice of subset E_c ⊆ E′ reflects a possible matching in the original graph. G is a subgraph of the constructed G′. We identify an edge in G′, e ∈ E′, with the same edge in G. This edge indicates a particular match between some vertex u ∈ U and another vertex v ∈ V. Thus, if E_c ⊆ E′ is a subset of edges in G′ which contains only edges in the subgraph of G, we identify it with a subset of edges in G, or with some candidate for a matching.

We say E_c ⊆ E matches some vertex v ∈ V if E_c contains some edge that connects to v, i.e., for some u ∈ U we have (u, v) ∈ E_c. E_c is a possible matching if it does not match a vertex v ∈ V with more than one vertex in U, i.e., there are no two vertices u_1 ≠ u_2 in U such that both (u_1, v) ∈ E_c and (u_2, v) ∈ E_c. A perfect matching matches all the vertices in V.

(Footnote 4: #MATCHING is one of the most well-known #P-complete problems.)

If E_c fails to match a vertex in V (the right side of the partition), the maximal possible flow that E_c allows in G′ is less than k. If it matches all the vertices in V, a flow of k is possible. If it matches all the vertices in V, but matches some vertex in V more than once (which means this is not a true matching), a flow of k + ε is possible. The value ε is chosen so that if a single vertex v ∈ V is unmatched, the maximal possible flow would be less than |V|, even if all the other vertices are matched more than once.
In other words, ε is chosen so that matching several vertices in V more than once can never compensate for not matching some vertex in V, in terms of the maximal possible flow.

Thus, when we check the Banzhaf index of e_f when the required flow is at least k, we get (up to the normalizing constant 2^(|E′|−1)) the number of subsets E_c ⊆ E′ that match all the vertices in V at least once. When we check the Banzhaf index of e_f with a required flow of at least k + ε, we get (up to the same constant) the number of subsets E_c ⊆ E′ that match all the vertices in V at least once, and match at least one vertex v ∈ V more than once. The difference between the two is exactly the number of perfect matchings in G. Therefore, if there existed a polynomial algorithm for NETWORK-FLOW-BANZHAF, we could use it to build a polynomial algorithm for #MATCHING, so NETWORK-FLOW-BANZHAF is #P-complete.

4.3 Reduction Details

The reduction takes the #MATCHING input, the bipartite graph G = <U, V, E>, where |U| = |V| = k. It then generates a network flow graph G′ as follows. The graph G is kept as a subgraph of G′, and each edge in G is given a capacity of 1. A new source vertex s is added, along with a new vertex t′ and a new target vertex t. Let ε = 1/(k+1), so that ε · k < 1. The source s is connected to each of the vertices in U, the left partition of G, with an edge of capacity 1 + ε. Each of the vertices in V is connected to t′ with an edge of capacity 1 + ε. Finally, t′ is connected to t with an edge e_f of capacity k + ε.

As mentioned above, we perform two runs of NETWORK-FLOW-BANZHAF, both checking the Banzhaf index of the edge e_f in the flow network G′. We denote the network flow game defined on G′ with target flow k as v_(G′,k). The first run is performed on the game with a target flow of k, v_(G′,k), returning the index β_{e_f}(v_(G′,k)). The second run is performed on the game with a target flow of k + ε, v_(G′,k+ε), returning the index β_{e_f}(v_(G′,k+ε)).
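The construction can be checked numerically on a small instance by brute force. The sketch below is our own illustration, not an algorithm from the paper: it scales all capacities by k + 1 so that arithmetic stays integral (1 + ε becomes k + 2, the capacity k + ε of e_f becomes k(k+1) + 1, and the thresholds k and k + ε become k(k+1) and k(k+1) + 1), and it enumerates only subsets that include the fixed structural edges, since all other subsets lose at both thresholds and contribute equally to both counts.

```python
from itertools import combinations

def max_flow(n, cap_edges, s, t):
    """Small Ford-Fulkerson helper (integer capacities, tiny graphs)."""
    res = [[0] * n for _ in range(n)]
    for u, v, c in cap_edges:
        res[u][v] += c
    total = 0
    while True:
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in range(n):
                if res[u][v] > 0 and v not in parent:
                    parent[v] = u
                    stack.append(v)
        if t not in parent:
            return total
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        total += push

def count_matchings_via_reduction(k, bipartite_edges):
    """Count perfect matchings in a bipartite graph with parts of size k,
    using the two-threshold flow construction (capacities scaled by k+1).
    Vertex layout: 0 = s, 1..k = U, k+1..2k = V, 2k+1 = t', 2k+2 = t."""
    s, tp, t = 0, 2 * k + 1, 2 * k + 2
    fixed = ([(s, 1 + u, k + 2) for u in range(k)] +       # s -> U, cap 1+eps
             [(k + 1 + v, tp, k + 2) for v in range(k)] +  # V -> t', cap 1+eps
             [(tp, t, k * (k + 1) + 1)])                   # e_f, cap k+eps
    middle = [(1 + u, k + 1 + v, k + 1) for u, v in bipartite_edges]
    def wins(subset, target):
        chosen = fixed + [middle[j] for j in subset]
        return max_flow(t + 1, chosen, s, t) >= target
    count_k = count_k_eps = 0
    for r in range(len(middle) + 1):
        for sub in combinations(range(len(middle)), r):
            if wins(sub, k * (k + 1)):          # target flow k
                count_k += 1
            if wins(sub, k * (k + 1) + 1):      # target flow k + eps
                count_k_eps += 1
    return count_k - count_k_eps

# K_{2,2} has exactly 2 perfect matchings.
print(count_matchings_via_reduction(2, [(0, 0), (0, 1), (1, 0), (1, 1)]))  # prints 2
```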
The number of perfect matchings in G is the difference between the answers in the two runs, β_{e_f}(v_(G′,k)) − β_{e_f}(v_(G′,k+ε)) (times the constant 2^(|E′|−1) by which the Banzhaf index normalizes swing counts). This is proven in Theorem 5.

Figure 1 shows an example of constructing G′ from G. On the left is the original graph G, and on the right is the constructed network flow graph G′.

4.4 Proof of the Reduction

We now prove that the reduction above is correct. Throughout this section, we take the input to the #MATCHING problem to be G = <U, V, E> with |U| = |V| = k, the network flow graph constructed in the reduction to be G′ = <V′, E′> with capacities c : E′ → R as defined in Section 4.3, the edge for which to calculate the Banzhaf index to be e_f, and the target flow values to be k and k + ε.

PROPOSITION 1. Let E_c ⊆ E′ be a subset of edges that lacks one or more of the following edges:

1. The edges connected to s;
2. The edges connected to t′;
3. The edge e_f = (t′, t).

We call such a subset a missing subset. The maximal flow between s and t using only the edges in the missing subset E_c is less than k.

[Figure 1: Reducing #MATCHING to NETWORK-FLOW-BANZHAF]

PROOF. The graph is a layer graph, with s being the vertex in the first layer, U the vertices in the second layer, V the vertices in the third, t′ the vertex in the fourth, and t in the fifth. Edges in G′ only go between consecutive layers. The maximal flow in a layer graph is limited by the total capacity of the edges between every two consecutive layers. If any of the edges between s and U is missing, the flow is limited by (|V| − 1)(1 + ε) < k. If any of the edges between V and t′ is missing, the flow is also limited by (|V| − 1)(1 + ε) < k.
If the edge e_f is missing, there are no edges going to the last layer, and the maximal flow is 0.

Since such missing subsets of edges do not affect the Banzhaf index of e_f (they add 0 to the sum), from now on we consider only non-missing subsets. As explained in Section 4.2, we identify the edges in G′ that were copied from G (the edges between U and V in G′) with their counterparts in G. Each such edge (u, v) ∈ E represents a match between u and v in G. E_c is a perfect matching if it matches every vertex u to a single vertex v and vice versa.

PROPOSITION 2. Let E_c ⊆ E′ be a subset of edges that fails to match some vertex v ∈ V. The maximal flow between s and t using only the edges in E_c is less than k. We call such a set a sub-matching, and it is not a perfect matching.

PROOF. If E_c fails to match some vertex v ∈ V, the maximal flow that can reach the vertices in the V layer is (1 + ε)(k − 1) < k, so this is also the maximal flow that can reach t.

PROPOSITION 3. Let E_c ⊆ E′ be a subset of edges that is a perfect matching in G. Then the maximal flow between s and t using only the edges in E_c is exactly k.

PROOF. A flow of k is possible. We send a flow of 1 from s to each of the vertices in U, send a flow of 1 from each vertex u ∈ U to its match v ∈ V, and send a flow of 1 from each v ∈ V to t′. Then t′ gets a total flow of exactly k, and sends it to t. A flow of more than k is not possible, since there are exactly k edges of capacity 1 between the U layer and the V layer, and the maximal flow is limited by the total capacity of the edges between these two consecutive layers.

PROPOSITION 4. Let E_c ⊆ E′ be a subset of edges that contains a perfect matching M ⊆ E in G and at least one more edge e_x between some vertex u_a ∈ U and some vertex v_a ∈ V. Then the maximal flow between s and t using only the edges in E_c is at least k + ε.
We call such a set a super-matching, and it is not a perfect matching.

PROOF. A flow of k is possible, by using the edges of the perfect matching as in Proposition 3. We send a flow of 1 from s to each of the vertices in U, send a flow of 1 from each vertex u ∈ U to its match v ∈ V, and send a flow of 1 from each v ∈ V to t′. Then t′ gets a total flow of exactly k, and sends it to t. After using the edges of the perfect matching, we send a flow of ε from s to u_a (this is possible since the capacity of the edge (s, u_a) is 1 + ε and we have only used up 1). We then send a flow of ε from u_a to v_a. This is possible since we have not used this edge at all; it is the edge which is not a part of the perfect matching. We then send a flow of ε from v_a to t′. Again, this is possible since we have used only 1 out of the total capacity of 1 + ε which that edge has. Now t′ gets a total flow of k + ε, and sends it all to t, so we have achieved a total flow of k + ε. Thus, the maximal possible flow is at least k + ε.

THEOREM 5. Consider a #MATCHING instance G = <U, V, E> reduced to a NETWORK-FLOW-BANZHAF instance G′ as explained in Section 4.3. Let v_(G′,k) be the network flow game defined on G′ with target flow k, and v_(G′,k+ε) be the game defined with a target flow of k + ε. Let the resulting index of the first run be β_{e_f}(v_(G′,k)), and β_{e_f}(v_(G′,k+ε)) be the resulting index of the second run. Then the number of perfect matchings in G is the difference between the answers in the two runs, β_{e_f}(v_(G′,k)) − β_{e_f}(v_(G′,k+ε)), multiplied by the constant 2^(|E′|−1).

PROOF. Consider the game v_(G′,k). According to Proposition 1, in this game, the Banzhaf index of e_f does not count missing subsets E_c ⊆ E′, since they are losing in this game. According to Proposition 2, it does not count subsets E_c ⊆ E′ that are sub-matchings, since they are also losing.
According to Proposition 3, it adds 1 to the count for each perfect matching, since such subsets allow a flow of k and are winning. According to Proposition 4, it adds 1 to the count for each super-matching, since such subsets allow a flow of k (and, in fact, more than k) and are winning.

Consider the game v_(G′,k+ε). Again, according to Proposition 1, in this game the Banzhaf index of e_f does not count missing subsets E_c ⊆ E′, since they are losing in this game. According to Proposition 2, it does not count subsets E_c ⊆ E′ that are sub-matchings, since they are also losing. According to Proposition 3, it adds 0 to the count for each perfect matching, since such subsets allow a flow of k but not of k + ε, and are thus losing. According to Proposition 4, it adds 1 to the count for each super-matching, since such subsets allow a flow of k + ε and are winning.

Thus the difference between the two indices, β_{e_f}(v_(G′,k)) − β_{e_f}(v_(G′,k+ε)), is exactly the number of perfect matchings in G divided by the constant 2^(|E′|−1).

We have reduced a #MATCHING problem to a NETWORK-FLOW-BANZHAF problem. This means that given a polynomial algorithm to calculate the Banzhaf index of an agent in a general network flow game, we could build a polynomial algorithm to solve the #MATCHING problem. Thus, the problem of calculating the Banzhaf index of agents in general network flow games is #P-complete.

5. CALCULATING THE BANZHAF INDEX IN BOUNDED LAYER GRAPH CONNECTIVITY GAMES

We now present a polynomial algorithm to calculate the Banzhaf index of an edge in a connectivity game, where the network is a bounded layer graph. This positive result indicates that for some restricted domains of network flow games, it is possible to calculate the Banzhaf index in a reasonable amount of time.

DEFINITION 3.
A layer graph is a graph G = <V, E>, with source vertex s and target vertex t, where the vertices of the graph are partitioned into n + 1 layers, L_0 = {s}, L_1, ..., L_n = {t}. The edges run only between consecutive layers.

DEFINITION 4. A c-bounded layer graph is a layer graph where the number of vertices in each layer is bounded by some constant number c.

Although there is no limit on the number of layers in a bounded layer graph, the structure of such graphs makes it possible to calculate the Banzhaf index of edges in connectivity games on such graphs. The algorithm provided below is indeed polynomial in the number of vertices, given that the network is a c-bounded layer graph. However, there is a constant factor in the running time which is exponential in c, so this method is only tractable for graphs where the bound c is small. Bounded layer graphs may occur in networks where the nodes are located in several ordered segments, and nodes can be connected only between consecutive segments.

Let v be a vertex in layer L_i. We say an edge e occurs before v if it connects two vertices in v's layer or a previous layer: e = (u, w) connects vertex u ∈ L_j to vertex w ∈ L_{j+1}, with j + 1 ≤ i. Let Pred_v ⊆ E be the subset of edges that occur before v. Consider a subset of these edges, E′ ⊆ Pred_v. E′ may contain a path from s to v, or it may not. We define P_v as the number of subsets E′ ⊆ Pred_v that contain a path from s to v.

Similarly, let V_i ⊆ V be the set of all the vertices in layer L_i. Let Pred_{V_i} ⊆ E be the subset of edges that occur before V_i (all the vertices in V_i are in the same layer, so any edge that occurs before some v ∈ V_i occurs before any other vertex w ∈ V_i). Consider a subset of these edges, E′ ⊆ Pred_{V_i}.
Let V_i(E′) be the subset of vertices in V_i that are reachable from s using only the edges in E′: V_i(E′) = {v ∈ V_i | E′ contains a path from s to v}. We say E′ ⊆ Pred_{V_i} connects exactly the vertices in S_i ⊆ V_i if all the vertices in S_i are reachable from s using the edges in E′ but no other vertices in V_i are reachable from s using E′, so that V_i(E′) = S_i.

Let V′ ⊆ V_i be a subset of the vertices in layer L_i. We define P_{V′} as the number of subsets E′ ⊆ Pred_{V_i} that connect exactly the vertices in V′: P_{V′} = |{E′ ⊆ Pred_{V_i} | V_i(E′) = V′}|.

LEMMA 1. Let S_1, S_2 ⊆ V_i, where S_1 ≠ S_2, be two different subsets of vertices in the same layer. Let E′, E′′ ⊆ Pred_{V_i} be two edge subsets such that E′ connects exactly the vertices in S_1 and E′′ connects exactly the vertices in S_2: V_i(E′) = S_1 and V_i(E′′) = S_2. Then E′ and E′′ do not contain the same edges: E′ ≠ E′′.

PROOF. If E′ = E′′, then both sets of edges allow the same paths from s, so V_i(E′) = V_i(E′′).

Let S_i ⊆ V_i be a subset of vertices in layer L_i. Let E_i ⊆ E be the set of edges between the vertices in layer L_i and layer L_{i+1}. Let E ⊆ E_i be some subset of these edges. We denote by Dests(S_i, E) the set of vertices in layer L_{i+1} that are connected to some vertex in S_i by an edge in E:

    Dests(S_i, E) = {v ∈ V_{i+1} | there exist some w ∈ S_i and some e ∈ E such that e = (w, v)}.

Let S_i ⊆ V_i be a subset of vertices in L_i, and let E ⊆ E_i be some subset of the edges between layer L_i and layer L_{i+1}. P_{S_i} counts the number of edge subsets in Pred_{V_i} that connect exactly the vertices in S_i. Consider such a subset E′ counted in P_{S_i}. Then E′ ∪ E is a subset of edges in Pred_{V_{i+1}} that connects exactly the vertices in Dests(S_i, E). According to Lemma 1, if we iterate over the different S_i's in layer L_i, the P_{S_i}'s count different subsets of edges, and thus every expansion using the edges in E is also different.

Algorithm 1 calculates P_{t}.
It iterates through the layers, and updates the data for the next layer given the data for the current layer. For each layer L_i and every subset of vertices S_i ⊆ V_i, it calculates P_{S_i}, using the values calculated in the previous layer. The algorithm considers every subset of possible vertices in the current layer, and every possible subset of expanding edges to the next layer, and updates the value of the appropriate subset in the next layer.

Algorithm 1
1: procedure CONNECTING-EXACTLY-SUBSETS(G)
2:   P_{s} ← 1                                  (initialization)
3:   for all other subsets of vertices S do     (initialization)
4:     P_S ← 0
5:   end for
6:   for i ← 0 to n − 1 do                      (iterate through layers)
7:     for all vertex subsets S_i ⊆ L_i do
8:       for all edge subsets E between L_i and L_{i+1} do
9:         D ← Dests(S_i, E)                    (a subset of L_{i+1})
10:        P_D ← P_D + P_{S_i}
11:      end for
12:    end for
13:  end for
14: end procedure

A c-bounded layer graph contains at most c vertices in each layer, so each layer has at most 2^c different subsets of vertices. There are also at most c² edges between two consecutive layers, and thus at most 2^(c²) edge subsets between two layers. If the graph contains k layers, the running time of the algorithm is bounded by k · 2^c · 2^(c²). Since c is a constant, this is a polynomial algorithm.

Consider the connectivity game on a layer graph G, with a single source vertex s and target vertex t. The Banzhaf index of the edge e is the number of subsets of edges that allow a path between s and t, but do not allow such a path when e is removed (divided by a constant). We can calculate P_{t} = P_{t}(G) for G using the algorithm, counting the number of subsets of edges that allow a path from s to t. We can then remove e from G to obtain the graph G′ = <V, E \ {e}>, and calculate P_{t}(G′).
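Algorithm 1 admits a direct transcription. The sketch below is our own (the layer/edge encoding and function names are assumptions, and vertex labels are assumed distinct across layers); the returned dictionary maps each subset of the final layer to the number of predecessor edge subsets connecting exactly that subset, so P_{t} is the entry for {t}:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as tuples."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def count_connecting_subsets(layers, edges):
    """Algorithm 1 on a layer graph.

    layers -- list of vertex layers, layers[0] = [s], layers[-1] = [t]
    edges  -- list of (u, w) pairs running between consecutive layers

    Returns a dict mapping frozensets S of final-layer vertices to
    P_S, the number of edge subsets connecting exactly S.
    """
    s = layers[0][0]
    P = {frozenset([s]): 1}                      # only the empty subset precedes L_0
    for i in range(len(layers) - 1):
        level_edges = [(u, w) for (u, w) in edges
                       if u in layers[i] and w in layers[i + 1]]
        nextP = {}
        for S, count in P.items():               # subsets connecting exactly S
            for E in powerset(level_edges):      # expansions to the next layer
                D = frozenset(w for (u, w) in E if u in S)   # Dests(S, E)
                nextP[D] = nextP.get(D, 0) + count
        P = nextP
    return P

# Diamond graph: s -> {a, b} -> t; 7 of the 16 edge subsets reach t.
layers = [["s"], ["a", "b"], ["t"]]
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
print(count_connecting_subsets(layers, edges)[frozenset(["t"])])  # prints 7
```

One subtlety when turning these counts into a Banzhaf index: P_{t}(G) − P_{t}(G′) counts every winning subset containing e, including those that stay winning without e; subtracting P_{t}(G′) once more removes those (each corresponds to adding e to a winning subset of G′), so the swing count works out to P_{t}(G) − 2·P_{t}(G′) for this monotone game. In the diamond graph, for e = (a, t): (7 − 2·2)/2³ = 3/8.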
The difference P_{t}(G) − P_{t}(G′) is the number of winning subsets of edges that contain e, since the winning subsets that do not contain e are exactly those counted by P_{t}(G′). Among these, the subsets that remain winning when e is removed are not swings; each is obtained by adding e to a winning subset of G′, so there are P_{t}(G′) of them. The Banzhaf index for e is therefore

    β_e(v) = (P_{t}(G) − 2 · P_{t}(G′)) / 2^(|E|−1).

Thus, this algorithm allows us to calculate the Banzhaf index of an edge in connectivity games on bounded layer graphs.

6. RELATED WORK

Measuring the power of individual players in coalitional games has been studied for many years. The most popular indices suggested for such measurement are the Banzhaf index [1] and the Shapley-Shubik index [19].

In his seminal paper, Shapley [18] considered coalitional games and the fair allocation of the utility gained by the grand coalition (the coalition of all agents) to its members. The Shapley-Shubik index [19] is the direct application of the Shapley value to simple coalitional games.

The Banzhaf index emerged directly from the study of voting in decision-making bodies. The normalized Banzhaf index measures the proportion of coalitions in which a player is a swinger, out of all winning coalitions. This index is similar to the Banzhaf index discussed in Section 2, and is defined as:

    β̄_i = β_i(v) / Σ_{k ∈ I} β_k(v).

The Banzhaf index was mathematically analyzed in [3], where it was shown that this normalization lacks certain desirable properties, and the more natural Banzhaf index was introduced.

Both the Shapley-Shubik and the Banzhaf indices have been widely studied, and Straffin [20] has shown that each index reflects specific conditions in a voting body. [11] considers these two indices along with several others, and describes the axioms that characterize the different indices.

The naive implementation of an algorithm for calculating the Banzhaf index of an agent i enumerates all coalitions containing i.
There are 2^(n−1) such coalitions, so the performance is exponential in the number of agents. [12] contains a survey of algorithms for calculating power indices of weighted majority games. Deng and Papadimitriou [2] show that computing the Shapley value in weighted majority games is #P-complete, using a reduction from KNAPSACK. Since the Shapley value of any simple game has the same value as its Shapley-Shubik index, this shows that calculating the Shapley-Shubik index in weighted majority games is #P-complete.

Matsui and Matsui [13] have shown that calculating both the Banzhaf and Shapley-Shubik indices in weighted voting games is NP-complete.

The problem of computing power indices in simple games depends on the chosen representation of the game. Since the number of possible coalitions is exponential in the number of agents, calculating power indices in time polynomial in the number of agents can only be achieved in specific domains.

In this paper, we have considered the network flow domain, where a coalition of agents must achieve a flow beyond a certain value. The network flow game we have defined is a simple game. [10, 9] have considered a similar network flow domain, where each agent controls an edge of a network flow graph. However, they introduced a non-simple game, where the value a coalition of agents achieves is the maximal total flow. They have shown that certain families of network flow games and similar games have nonempty cores.

7. CONCLUSIONS AND FUTURE DIRECTIONS

We have considered network flow games, where a coalition of agents wins if it manages to send a flow of at least some value k between two vertices. We have assessed the relative power of each agent in this scenario using the Banzhaf index.
This power index may be used to decide how to allocate maintenance resources in real-world networks, in order to maximize our ability to maintain a certain flow of information between two sites.

Although the Banzhaf index theoretically allows us to measure the power of the agents in the network flow game, we have shown that the problem of calculating the Banzhaf index in this domain is #P-complete. Despite this discouraging result for the general network flow domain, we have also provided a more encouraging result for a restricted domain. In the case of connectivity games (where it is only required for a coalition to contain a path from the source to the destination) played on bounded layer graphs, it is possible to calculate the Banzhaf index of an agent in polynomial time.

It remains an open problem to find ways to tractably approximate the Banzhaf index in the general network flow domain. It might also be possible to find other useful restricted domains where it is possible to exactly calculate the Banzhaf index. We have only considered the complexity of calculating the Banzhaf index; it remains an open problem to find the complexity of calculating the Shapley-Shubik or other indices in the network flow domain. Finally, we believe that there are many additional interesting domains other than weighted voting games and network flow games, and it would be worthwhile to investigate the complexity of calculating the Banzhaf index or other power indices in such domains.

8. ACKNOWLEDGMENT

This work was partially supported by grant #898/05 from the Israel Science Foundation.

9. REFERENCES

[1] J. F. Banzhaf. Weighted voting doesn't work: a mathematical analysis. Rutgers Law Review, 19:317-343, 1965.
[2] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19(2):257-266, 1994.
[3] P. Dubey and L. Shapley. Mathematical properties of the Banzhaf power index.
Mathematics of Operations Research, 4(2):99-131, 1979.
[4] E. Ephrati and J. S. Rosenschein. The Clarke Tax as a consensus mechanism among automated agents. In Proceedings of the Ninth National Conference on Artificial Intelligence, pages 173-178, Anaheim, California, July 1991.
[5] E. Ephrati and J. S. Rosenschein. A heuristic technique for multiagent planning. Annals of Mathematics and Artificial Intelligence, 20:13-67, Spring 1997.
[6] S. Ghosh, M. Mundhe, K. Hernandez, and S. Sen. Voting for movies: the anatomy of a recommender system. In Proceedings of the Third Annual Conference on Autonomous Agents, pages 434-435, 1999.
[7] T. Haynes, S. Sen, N. Arora, and R. Nadella. An automated meeting scheduling system that utilizes user preferences. In Proceedings of the First International Conference on Autonomous Agents, pages 308-315, 1997.
[8] E. Hemaspaandra, L. Hemaspaandra, and J. Rothe. Anyone but him: The complexity of precluding an alternative. In Proceedings of the 20th National Conference on Artificial Intelligence, Pittsburgh, July 2005.
[9] E. Kalai and E. Zemel. On totally balanced games and games of flow. Discussion Papers 413, Northwestern University, Center for Mathematical Studies in Economics and Management Science, Jan. 1980. Available at http://ideas.repec.org/p/nwu/cmsems/413.html.
[10] E. Kalai and E. Zemel. Generalized network problems yielding totally balanced games. Operations Research, 30:998-1008, September 1982.
[11] A. Laruelle. On the choice of a power index. Papers 99-10, Valencia - Instituto de Investigaciones Economicas, 1999.
[12] Y. Matsui and T. Matsui. A survey of algorithms for calculating power indices of weighted majority games. Journal of the Operations Research Society of Japan, 43, 2000.
[13] Y. Matsui and T. Matsui. NP-completeness for calculating power indices of weighted majority games.
Theoretical Computer Science, 263(1-2):305-310, 2001.
[14] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001.
[15] A. D. Procaccia and J. S. Rosenschein. Junta distributions and the average-case complexity of manipulating elections. In The Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 497-504, Hakodate, Japan, May 2006.
[16] J. S. Rosenschein and M. R. Genesereth. Deals among rational agents. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 91-99, Los Angeles, California, August 1985.
[17] T. Sandholm and V. Lesser. Issues in automated negotiation and electronic commerce: Extending the contract net framework. In Proceedings of the First International Conference on Multiagent Systems (ICMAS-95), pages 328-335, San Francisco, 1995.
[18] L. S. Shapley. A value for n-person games. Contributions to the Theory of Games, pages 31-40, 1953.
[19] L. S. Shapley and M. Shubik. A method for evaluating the distribution of power in a committee system. American Political Science Review, 48:787-792, 1954.
[20] P. Straffin. Homogeneity, independence and power indices. Public Choice, 30:107-118, 1977.
[21] M. Tennenholtz and A. Altman. On the axiomatic
-{"name": "test_I-21", "title": "Interactions between Market Barriers and Communication Networks in Marketing Systems", "abstract": "We investigate a framework where agents search for satisfying products by using referrals from other agents. Our model of a mechanism for transmitting word-of-mouth and the resulting behavioural effects is based on integrating a module governing the local behaviour of agents with a module governing the structure and function of the underlying network of agents. Local behaviour incorporates a satisficing model of choice, a set of rules governing the interactions between agents, including learning about the trustworthiness of other agents over time, and external constraints on behaviour that may be imposed by market barriers or switching costs. Local behaviour takes place on a network substrate across which agents exchange positive and negative information about products. We use various degree distributions dictating the extent of connectivity, and incorporate both small-world effects and the notion of preferential attachment in our network models. We compare the effectiveness of referral systems over various network structures for easy and hard choice tasks, and evaluate how this effectiveness changes with the imposition of market barriers.", "fulltext": "1. INTRODUCTION\nDefection behaviour, that is, why people might stop\nusing a particular product or service, largely depends on the\npsychological affinity or satisfaction that they feel toward\nthe currently-used product [14] and the availability of more\nattractive alternatives [17]. 
However, in many cases the\ndecision about whether to defect or not is also dependent on\nvarious external constraints that are placed on switching\nbehaviour, either by the structure of the market, by the\nsuppliers themselves (in the guise of formal or informal contracts),\nor other so-called \u2018switching costs\" or market barriers [12, 5].\nThe key feature of all these cases is that the extent to which\npsychological affinity plays a role in actual decision-making\nis constrained by market barriers, so that agents are\nprevented from pursuing those courses of action which would\nbe most satisfying in an unconstrained market.\nWhile the level of satisfaction with a currently-used\nproduct will largely be a function of one\"s own experiences of\nthe product over the period of use, knowledge of any\npotentially more satisfying alternatives is likely to be gained\nby augmenting the information gained from personal\nexperiences with information about the experiences of others\ngathered from casual word-of-mouth communication. Moreover,\nthere is an important relationship between market barriers\nand word-of-mouth communication. In the presence of\nmarket barriers, constrained economic agents trapped in\ndissatisfying product relationships will tend to disseminate this\ninformation to other agents. In the absence of such\nbarriers, agents are free to defect from unsatisfying products\nand word-of-mouth communication would thus tend to be\nof the positive variety. Since the imposition of at least some\nforms of market barriers is often a strategic decision taken\nby product suppliers, these relationships may be key to the\nsuccess of a particular supplier.\nIn addition, the relationship between market barriers and\nword-of-mouth communication may be a reciprocal one. 
The\nstructure and function of the network across which\nword-of-mouth communication is conducted, and particularly the\nway in which the network changes in response to the\nimposition of market barriers, also plays a role in determining\nwhich market barriers are most effective. These are complex\nquestions, and our main interest in this paper is to address\nthe simpler problems of investigating (a) the extent to which\nnetwork structure influences the ways in which information\nis disseminated across a network of decision makers, (b) the\nextent to which market barriers affect this dissemination,\nand (c) the consequent implications for overall system\nperformance, in terms of the proportion of agents who are\nsatisfied, and the speed with which the system moves towards\nequilibrium, which we term stability.\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nAn agent-based model framework allows for an\ninvestigation at the level of the individual decision maker, at the\nproduct-level, or at the level of the entire system; we are\nparticularly interested in the implications of market barriers for\nthe latter two. The model presented here allows for an\ninvestigation into the effects of market barriers to be carried\nout in a complex environment where at every time period\neach agent in a population must decide which one of a set of\nproducts to purchase. These decisions are based on\nmultiattribute information gathered by personal product trials as\nwell as from the referrals of agents. Agents use this gathered\ninformation to search for a product that exceeds their\nsatisfaction thresholds on all attributes - so that the agents may\nbe said to be satisficing rather than optimising (e.g. [15]).\nMarket barriers may act to influence an agent to continue\nto use a product that is no longer offering satisfactory\nperformance. 
We allow agents to hold different opinions about\nthe performance of a product, so that as a result a referral\nfrom another agent may not lead to a satisfying experience.\nAgents therefore adjust their evaluations of the validity of\nother agents\" referrals according to the success of past\nreferrals, and use these evaluations to judge whether or not\nto make use of any further referrals. The level of\nsatisfaction provided to an agent by a product is itself inherently\ndynamic, being subject to random fluctuations in product\nperformance as well as a tendency for an agent to discount\nthe performance of a product they have used for a long time\n- a process akin to habituation.\n2. BACKGROUND\n2.1 Word-of-mouth communication\nMuch of the work done on word-of-mouth communication\nin the context of social psychology and marketing research\nhas focused on its forms and determinants, suggesting that\nword-of-mouth arises in three possible ways: it may be\ninduced by a particular transaction or product experience [11],\nparticularly when that transaction has been an especially\ngood or bad one [1]; it may be solicited from others [10],\nusually when the task involved is difficult, ambiguous, or\nnew [7]; and it may come about when talk of products and\nbrands arise in the course of informal conversation,\nparticularly when a \u2018passion for the subject\" is present [4].\nWordof-mouth becomes more influential when the source of the\ncommunication is credible, with credibility decisions based\nlargely on one or a combination of evaluations of professional\nqualification, informal training, social distance [7], and\nsimilarity of views and experiences [3].\nThe role of word-of-mouth communication on the behaviour\nof complex systems has been studied in both analytical and\nsimulation models. The analytical work in [8] investigates\nthe conditions under which word-of-mouth leads to\nconformity in behaviour and the adoption of socially efficient\noutcomes (e.g. 
choosing an alternative that is on average better\nthan another), finding that conformity of behaviour arises\nwhen agents are exposed to word-of-mouth communication\nfrom only a small number of other agents, but that this\nconformity may result in socially inefficient outcomes where the\ntendency toward conformity is so strong that it overwhelms\nthe influence of the superior payoffs provided by the socially\nefficient outcome. Simulation-based investigations of\nwordof-mouth [6, 13] have focused on developing strategies for\nensuring that a system reaches an equilibrium level where\nall agents are satisfied, largely by learning about the\neffectiveness of others\" referrals or by varying the degree of\ninertia in individual behaviour. These studies have found that,\ngiven a sufficient number of service providers, honest\nreferrals lead to faster convergence to satisfactory distributions\nthan deceitful ones, and that both forms of word-of-mouth\nprovide better performance than none at all. The\nsimulation framework allows for a more complex modelling of the\nenvironment than the analytical models, in which referrals\nare at random and only two choices are available, and the\nwork in [6] in particular is a close antecedent of the work\npresented in this paper, our main contribution being to include\nnetwork structure and the constraints imposed by market\nbarriers as additional effects.\n2.2 Market barriers\nThe extent to which market barriers are influential in\naffecting systems behaviour draws attention mostly from\neconomists interested in how barriers distort competition\nand marketers interested in how barriers distort consumer\nchoices. While the formalisation of the idea that\nsatisfaction drives purchase behaviour can be traced back to the\nwork of Fishbein and Ajzen [9] on reasoned choice, nearly\nall writers, including Fishbein and Ajzen, recognise that this\nrelationship can be thwarted by circumstances (e.g. 
[17]).\nA useful typology of market barriers distinguishes\n\u2018transactional\" barriers associated with the monetary cost of\nchanging (e.g. in financial services), \u2018learning\" barriers associated\nwith deciding to replace well-known existing products, and\n\u2018contractual\" barriers imposing legal constraints for the term\nof the contract [12]. A different typology [5] introduces the\nadditional aspect of \u2018relational\" barriers arising from\npersonal relationships that may be interwoven with the use of\na particular product.\nThere is generally little empirical evidence on the\nrelationship between the creation of barriers to switching and the\nretention of a customer base, and to the best of our knowledge\nno previous work using agent-based modelling to generate\nempirical findings. Burnham et al. [5] find that perceived\nmarket barriers account for nearly twice the variance in\nintention to stay with a product than that explained by\nsatisfaction with the product (30% and 16% respectively), and\nthat so-called relational barriers are considerably more\ninfluential than either transactional or learning barriers. Further,\nthey find that switching costs are perceived by consumers\nto exist even in markets which are fluid and where barriers\nwould seem to be weak. Simply put, market barriers appear\nto play a greater role in what people do than satisfaction;\nand their presence may be more pervasive than is generally\nthought.\n3. MODEL FRAMEWORK\n3.1 Product performance evaluations\nWe use a problem representation in which, at each time\nperiod, every agent must decide which one of a set of\nproducts to choose. Let A = {ak}k=1...p be the set of agents,\nB = {bi}i=1...n be the set of products, and C = {cj }j=1...m\nbe the set of attributes on which the choice decision is to be\nbased i.e. the decision to be made is a multiattribute choice\none. 
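As a purely illustrative sketch of the sets just defined, the agents A, products B and attributes C can be laid out as a minimal data model; all names, the uniform-random intrinsic scores, and the fixed thresholds below are assumptions of this sketch, not the paper's implementation (the sizes echo the Section 4 experiments):

```python
import random

# Illustrative sizes for |C|, |B| and |A|; Section 4 uses 4 attributes,
# 50 or 500 products, and 200 agents.
M_ATTRIBUTES = 4
N_PRODUCTS = 50
P_AGENTS = 200

random.seed(0)

# Intrinsic performance f_j(b_i) in [0, 1] for each product/attribute pair
# (drawn uniformly here purely for illustration).
intrinsic = [[random.random() for _ in range(M_ATTRIBUTES)]
             for _ in range(N_PRODUCTS)]

# Satisfaction thresholds g_jk in [0, 1] for each agent (0.5 matches the
# 'easy' condition of Section 4.1).
thresholds = [[0.5] * M_ATTRIBUTES for _ in range(P_AGENTS)]
```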
Let fj : B \u2192 [0, 1] be an increasing function\nproviding the intrinsic performance of a product on attribute j (so\nthat 0 and 1 are the worst- and best-possible performances\nrespectively), and Sij : A \u00d7 [0, 1] \u2192 [0, 1] be a subjective\nopinion function of agents. The intrinsic performance of\nproduct i on attribute j is given by fj (bi). However, the\nsubjective opinion of the level of performance (of product i\non attribute j) given by agent k is given by sij(ak, fj (bi)).\nAll subsequent modelling is based on these subjective\nperformance ratings. For the purposes of this paper, each agent\nbelongs to one of three equally-sized groups, with each group\npossessing its own subjective performance ratings.\nWe assume that the subjective performance ratings are\nnot known a priori by the agents, and it is their task to\ndiscover these ratings by a combination of personal exploration\nand referral gathering. In order to model this process we\nintroduce the notion of perceived performance ratings at time\nt, denoted by pij(ak, fj (bi), t). Initially, all perceived\nperformance ratings are set to zero, so that the initial selection of a\nproduct is done randomly. Subsequent variation in product\nperformance over time is modelled using two quantities: a\nrandom perturbation jkt applied at each purchase occasion\nensures that the experience of a particular product can vary\nover purchase occasions for the same agent, and a\nhabituation discounting factor Hikt tends to decrease the perceived\nperformance of a product over time as boredom creeps in\nwith repeated usage. Our habituation mechanism supposes\nthat habituation builds up with repeated use of a product,\nand is used to discount the performance of the product.\nIn most cases i.e. 
unless the habituation factor is one or\nextremely close to one, this habituation-based discounting\neventually leads to defection, after which the level of\nhabituation dissipates as time passes without the product being\nused. More formally, once a product i\u2217\nhas been chosen by\nagent k, the subjective level of performance is perceived and\npi\u2217j(ak, fj (b\u2217\ni ), t) is set equal to si\u2217j(ak, fj (b\u2217\ni ))Hi\u2217kt + jkt,\nwhere jkt is distributed as N(0, \u03c3) and Hi\u2217kt is a\ndecreasing function of the number of time periods that agent k has\nbeen exposed to i\u2217\n.\nIn evaluating the performance of a product, agents make\nuse of a satisficing framework by comparing the perceived\nperformance of the chosen product with their satisfaction\nthresholds \u0393k = {g1k, . . . , gmk}, with 0 \u2264 gik \u2264 1. Agent\nk will be satisfied with a product i\u2217\nselected in time t if\npi\u2217j(ak, fj (b\u2217\ni ), t) \u2265 gjk, \u2200j.\n3.2 Decision processes\nIn designing the mechanism by which agents make their\nchoice decisions, we allow for the possibility that satisfied\nagents defect from the products that are currently satisfying\nthem. Satisfied agents stay with their current product with\nprobability Pr(stay), with a strategy prohibiting satisfied\nagents from moving (e.g. [6]) obtained as a special case when\nPr(stay) = 1.\nA defecting satisfied agent decides on which product to\nchoose by considering all other products for which it has\ninformation, either by previous personal exploration or by\nreferrals from other agents. The selection of a new\nproduct begins by the agent identifying those products from\nwhich he or she expects to gain a satisfactory performance\non all attributes i.e. those products for which \u03b4ik < 0, where\n\u03b4ik = maxj [gjk \u2212 pij(ak, fj(bi), t)], and selecting a\nproduct from this set with selection probabilities proportional to\n\u2212\u03b4ik. 
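The evaluation \u03b4ik = maxj[gjk \u2212 pij] and the selection rule among satisfying products can be sketched as follows; the function names and data layout are my own, and the fallback rules for merely "acceptable" products described next are omitted:

```python
import random

def delta(agent_thresholds, perceived):
    # delta_ik = max_j (g_jk - p_ij): negative iff the product satisfies
    # the agent on every attribute.
    return max(g - p for g, p in zip(agent_thresholds, perceived))

def select_product(agent_thresholds, known_products):
    # known_products maps product id -> list of perceived attribute scores
    # (gathered from personal trials and referrals).
    deltas = {i: delta(agent_thresholds, p) for i, p in known_products.items()}
    satisfying = {i: -d for i, d in deltas.items() if d < 0}
    if satisfying:
        # choose with probability proportional to -delta_ik
        ids = list(satisfying)
        return random.choices(ids, weights=[satisfying[i] for i in ids])[0]
    return None  # caller falls through to the 'minimally acceptable' rule
```

With a single satisfying product the weighted draw is deterministic; with several, products that clear the thresholds by a wider margin are proportionally more likely to be chosen.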
If no satisfactory product exists (or at least the agent\nis unaware of any such product) the agent identifies those\nproducts that offer at least a minimum level of \u2018acceptable\"\nperformance \u03b3\u2212\nk . The minimum level of acceptability is\ndefined as the maximum deviation from his or her aspirations\nacross all attributes that the agent is willing to accept i.e.\na product is minimally acceptable if and only if \u03b4ik < \u03b3\u2212\nk .\nAgents then select a product at random from the set of\nminimally acceptable products. If the set of minimally\nacceptable products is empty, agents select a product from the full\nset of products B at random.\nThe decision process followed by unsatisfied agents is largely\nsimilar to that of defecting satisfied agents, with the\nexception that at the outset of the decision process agents will\nchose to explore a new product, chosen at random from the\nset of remaining products, with probability \u03b1. With\nprobability 1 \u2212 \u03b1, they will use a decision process like the one\noutlined above for satisfied agents.\n3.3 Constraints on decision processes\nIn some circumstances market barriers may exist that\nmake switching between products more difficult,\nparticularly where some switching costs are incurred as a result of\nchanging one\"s choice of product. When barriers are present,\nagents do not switch when they become unsatisfied, but\nrather only when the performance evaluation drops below\nsome critical level i.e. when \u03b4ik > \u03b2, where \u03b2 > 0 measures\nthe strength of the market barriers. Although in this\npaper market barriers do not vary over products or time, it is\nstraightforward to allow this to occur by allowing barriers\ntake the general form \u03b2 = max(\u03b2\u2217 +\u0394tuse, \u03b2\u2217\n), where \u03b2\u2217 is a\nbarrier to defection that is applied when the product is\npurchased for the first time (e.g. 
a contractual agreement), \u0394 is\nthe increase in barriers that are incurred for every additional\ntime period the product is used for, and \u03b2\u2217\nis the maximum\npossible barrier, and all three quantities are allowed to vary\nover products i.e. be a function of i.\n3.4 Referral processes\nEach agent is assumed to be connected to qk < p agents\ni.e. to give and receive information from qk other agents.\nThe network over which word-of-mouth communication\ntravels is governed by the small-world effect [18], by which\nnetworks simultaneously exhibit a high degree of clustering of\nagents into \u2018communities\" and a short average path length\nbetween any two agents in the network, and preferential\nattachment [2], by which agents with greater numbers of\nexisting connections are more likely to receive new ones.\nThis is easily achieved by building a one-dimensional lattice\nwith connections between all agent pairs separated by \u03ba or\nfewer lattice spacings, and creating a small-world network\nby choosing at random a small fraction r of the connections\nin the network and moving one end of each to a new agent,\nwith that new agent chosen with probability proportional to\nits number of existing connections. This results in a\ndistribution of the number of connections possessed by each agent\ni.e. 
a distribution of qk, that is strongly skewed to the right.\nIn fact, if the construction of the network is slightly modified\nso that new connections are added with preferential\nattachment (but no connections are removed), the distribution of\nqk follows a power-law distribution, but a distribution with a\nnon-zero probability of an agent having less than the modal\nnumber of connections seems more realistic in the context\nof word-of-mouth communication in marketing systems.\nWhen an agent purchases a product, they inform each\nof the other agents in their circle with probability equal to\nPr(spr)k\u2217 + |\u03b4ik\u2217 |, where Pr(spr)k\u2217 is the basic propensity\nof agent k\u2217\nto spread word of mouth and \u03b4ik\u2217 captures the\nextent to which the agent\"s most recent experience was\nsatisfying or dissatisfying. Agents are thus more likely to spread\nword-of-mouth about products that they have just\nexperienced as either very good or very bad. If an agent receives\ninformation on the same product from more than one agent,\nhe or she selects the referral of only one of these agents, with\nselection probabilities proportional to Tt(k\u2217\n, k), the degree\nto which previous referrals from k\u2217\nto k were successful i.e.\nresulted in satisfying experiences for agent k. Thus agents\nhave the capacity to learn about the quality of other agents\"\nreferrals and use this information to accept or block future\nreferrals. In this paper, we employ a learning condition in\nwhich Tt(k\u2217\n, k) is multiplied by a factor of 0.1 following an\nunsatisfying referral and a factor of 3 following a satisfying\nreferral. 
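The lattice-plus-rewiring construction described above can be sketched as follows, under stated assumptions: the handling of duplicate edges and the exact rewiring bookkeeping are simplifications of mine, and kappa is assumed smaller than p/2 so the initial lattice is regular:

```python
import random

def build_network(p, kappa, r, seed=0):
    # One-dimensional ring lattice of p agents, connecting every pair
    # within kappa spacings; a fraction r of connections is rewired to a
    # new endpoint chosen with probability proportional to its current
    # degree (preferential attachment).
    rng = random.Random(seed)
    edge_set = set()
    for i in range(p):
        for d in range(1, kappa + 1):
            edge_set.add(frozenset((i, (i + d) % p)))
    edges = list(edge_set)
    degree = {i: 2 * kappa for i in range(p)}  # regular lattice degree
    for idx in rng.sample(range(len(edges)), int(r * len(edges))):
        a, b = tuple(edges[idx])
        nodes = list(degree)
        new_b = rng.choices(nodes, weights=[degree[v] for v in nodes])[0]
        if new_b not in (a, b):  # avoid self-loops; edge count is preserved
            degree[b] -= 1
            degree[new_b] += 1
            edges[idx] = frozenset((a, new_b))
    return edges
```

With r = 0 this is the tight-community lattice, small r gives the small-world regime, and r = 1 approaches a random network with degree-biased attachment, matching the three structures compared in Section 4.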
The asymmetry in the weighting is similar to that\nemployed in [16], and is motivated by the fact that an\nunsatisfying referral is likely to be more reliable evidence that a\nreferring agent k\u2217\ndoes not possess the same subjective\npreferences as agent k than a positive referral is of indicating the\nconverse.\nOther referral process are certainly possible, for\nexample one integrating multiple sources of word-of-mouth rather\nthan choosing only the most-trusted source: our main\nreason for employing the process described above is\nsimplicity. Integrating different sources considerably complicates\nthe process of learning about the trustworthiness of others,\nand raises further questions about the precise nature of the\nintegration.\nAfter determining who contacts whom, the actual\nreferral is modelled as a transmittance of information about the\nperceived level of performance of an experience of product\ni\u2217\nfrom the referring agent k\u2217\nto the accepting agent k i.e.\npi\u2217j(ak, fj (bi), t) takes on the value pi\u2217j(ak\u2217 , fj(bi), t\u22121), \u2200j,\nprovided that agent k is not currently using i\u2217\n. Information\nabout other products is not transmitted, and an agent will\nignore any word-of-mouth about the product he or she is\ncurrently using. In effect, the referral creates an expected\nlevel of performance in the mind of an accepting agent for\nthe product referred to, which that agent may then use when\nmaking choice decision in subsequent time periods using the\ndecision processes outlined in the previous section. Once an\nagent has personally experienced a product, any expected\nperformance levels suggested by previous referrals are\nreplaced by the experienced (subjective) performance levels\nsij(ak, fj(bi)) + jkt and Tt(k\u2217\n, k) is adjusted depending on\nwhether the experience was a satisfying one or not.\n4. 
EXPERIMENTAL RESULTS\nWe examine the behaviour of a system of 200 agents\nconsisting of three groups of 67, 67, and 66 agents respectively.\nAgents in each of the three groups have homogeneous\nsubjective opinion functions Sij. Simulations were run for 500\ntime periods, and twenty repetitions of each condition were\nused in order to generate aggregate results.\n4.1 Choice task difficulty\nWe begin by examining the effect of task difficulty on\nthe ability of various network configurations to converge to\na state in which an acceptable proportion of the\npopulation are satisfied. In the \u2018easy\" choice condition, there are\n50 products to choose from in the market, evaluated over\n4 attributes with all satisfaction thresholds set to 0.5 for\nall groups. There are therefore on average approximately\n3 products that can be expected to satisfy any particular\nagent. In the \u2018hard\" choice condition, there are 500\nproducts to choose from in the market, still evaluated over 4\nattributes but with all satisfaction thresholds now set to\n0.7 for all groups, so there are on average approximately\n4 products that can be expected to satisfy any particular\nagent. Locating a satisfactory product is therefore far more\ndifficult under the \u2018hard\" condition. The effect of task\ndifficulty is evaluated on three network structures\ncorresponding to r = 1 (random network), r = 0.05 (small-world\nnetwork), and r = 0 (tight \u2018communities\" of agents), with\nresults shown in Figure 1 for the case of \u03ba = 3.\n[Figure omitted: two panels plotting (a) proportion of agents satisfied and (b) market share for the leading product over time, for r = 1, r = 0.05, and r = 0.]\nFigure 1: Moderating effect of task difficulty on the relationship between network structure (r) and system behaviour\nGiven a relatively easy task, the system very quickly i.e. 
in\nlittle over 50 time periods, converges to a state in which\njust less than 60% of agents are satisfied at any one time.\nFurthermore, different network structures have very little\ninfluence on results, so that only a single (smoothed) series\nis given for comparison with the \u2018hard\" condition. Clearly,\nthere are enough agents independently solving the task i.e.\nfinding a satisfying brand, to make the dissemination of\ninformation relatively independent of the ways in which\nconnections are made. However, when it is more difficult to\nlocate a satisfying product, the structure of the network\nbecomes integral to the speed at which the system converges\nto a stable state. Importantly, the overall satisfaction level\nto which the system converges remains just below 60%\nregardless of which network structure is used, but convergence\nis considerably speeded by the random rewiring of even a\nsmall proportion of connections. Thus while the random\nnetwork (r = 1) converges quickest, the small-world\nnetwork (r = 0.05) also shows a substantial improvement over\nthe tight communities represented by the one-dimensional\nring lattice. This effect of the rewiring parameter r is much\nless pronounced for more highly-connected networks (e.g.\n\u03ba = 9), which suggests that the degree distribution of a\nnetwork is a more important determinant of system behaviour\nthan the way in which agents are connected to one another.\nSimilar results are observed when looking at the market\nshare achieved by the market leading product under each\nlevel of choice difficulty: market share is essentially\nindependent of network structure for the easy task, with\naverage share converging quickly to around 35%. 
Given a more\ndifficult task, the convergence of market share to an\napproximate long-run equilibrium is in fact fastest for the\nsmall-world network, with the random and tight community\nnetworks taking different paths but similar times to reach their\nequilibrium levels. Also interesting is the finding that\nequilibrium market shares for the market leader appear to be\nslightly (of the order of 5-10%) higher when network\nconnections are non-random - the random network seems to\nsuffer more from the effects of habituation than the other\nnetworks as a result of the rapid early adoption of the\nmarket leading product.\n4.2 Market barriers\nIn the remainder of this paper we focus on the effect of\nvarious forms of market barriers on the ability of a system of\nagents to reach a state of acceptable satisfaction. For\nsimplicity, we concentrate on the smaller set of 50 products i.e.\nthe \u2018easy\" choice task discussed above, but vary the\nnumber of connections that each agent begins with in order to\nsimultaneously investigate the effect of degree distribution\non system behaviour. Tables 1 and 2 show the effect of\ndifferent degree distributions on the equilibrium proportion of\nagents that are satisfied at any one time, and the\nequilibrium proportion of agents switching products (moving) in\nany one time period, under various levels of market barriers\nconstraining their behaviour. 
In these two tables,\nequilibrium results have been calculated by averaging over time\nperiods 450 to 500, when the system is in equilibrium or\nextremely close to equilibrium (Table 3 and 4 make use of\nall time periods).\nNo WoM \u03ba = 1 \u03ba = 3 \u03ba = 9\n\u03b2 = 0 0.27 0.44 0.56 0.58\n\u03b2 = 0.05 0.26 0.39 0.50 0.52\n\u03b2 = 0.2 0.14 0.27 0.32 0.34\n\u03b2 = 0.4 0.07 0.17 0.22 0.25\nTable 1: Effect of degree distribution and market\nbarriers on proportion of market satisfied\nNo WoM \u03ba = 1 \u03ba = 3 \u03ba = 9\n\u03b2 = 0 0.74 0.51 0.45 0.45\n\u03b2 = 0.05 0.66 0.43 0.38 0.37\n\u03b2 = 0.2 0.41 0.21 0.21 0.21\n\u03b2 = 0.4 0.17 0.09 0.09 0.09\nTable 2: Effect of degree distribution and market\nbarriers on proportion of market moving\nThree aspects are worth noting. Firstly, there is a strong\ndiminishing marginal return of additional connections\nbeyond a small number. The first few connections one makes\nincreases the probability of finding a satisfying product 60%\nfrom 0.27 to 0.44 (for the first two contacts), followed by a\nfurther increase of roughly 25% to 0.56 for the next four.\nIn contrast, adding a further 12 contacts improves relative\nsatisfaction levels by less than 4%. Secondly, word-of-mouth\ncommunication continues to play an important role in\nimproving the performance of the system even when market\nbarriers are high. In fact, the role may even be more\nimportant in constrained conditions, since the relative gains\nobtained from word-of-mouth are greater the higher market\nbarriers are - just having two contacts more than doubles\nthe aggregate satisfaction level under the most extreme\nbarriers (\u03b2 = 0.4). 
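The percentage gains quoted above can be checked directly against the \u03b2 = 0 and \u03b2 = 0.4 rows of Table 1:

```python
# Satisfaction proportions from Table 1, beta = 0 row.
no_wom, k1, k3, k9 = 0.27, 0.44, 0.56, 0.58

gain_first_two = (k1 - no_wom) / no_wom   # ~0.63, i.e. about 60%
gain_next_four = (k3 - k1) / k1           # ~0.27, roughly 25%
gain_last_twelve = (k9 - k3) / k3         # ~0.036, less than 4%

# Under the strongest barriers (beta = 0.4 row), just two contacts more
# than double aggregate satisfaction: 0.07 -> 0.17.
ratio_strong_barriers = 0.17 / 0.07       # ~2.4
```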
Finally, it is clear that the mechanism by\nwhich barriers reduce satisfaction is by restricting movement\n(reflected in the lower proportion of agents moving in any\nparticular column of Tables 1 and 2), but that increases in\ndegree distribution act to increase satisfaction by precisely\nthe same mechanism of reducing movement - this time by\nreducing the average amount of time required to find a\nsatisfying brand.\nPositive referrals Negative referrals\n\u03ba = 1 \u03ba = 3 \u03ba = 9 \u03ba = 1 \u03ba = 3 \u03ba = 9\n\u03b2 = 0 0.21 0.93 3.27 0.00 0.00 0.00\n\u03b2 = 0.05 0.19 0.85 2.96 0.06 0.19 0.60\n\u03b2 = 0.2 0.13 0.57 2.04 0.30 0.92 2.83\n\u03b2 = 0.4 0.08 0.40 1.49 0.60 1.81 5.44\nTable 3: Median number of positive and negative\nreferrals made per agent per time period\nPerhaps the most interesting effects exerted by market\nbarriers are those exerted over the market shares of leading\nproducts. Figure 2 shows the cumulative market share\ncaptured by the top three products in the market over time, for\nall types of market barriers using different degree\ndistributions. Again, two comments can be made. Firstly, in the\nabsence of market barriers, a greater proportion of the\nmarket is captured by the market leading products when\nmarkets are highly-connected relative to when they are\npoorlyconnected. This difference can amount to as much as 15%,\nand is explained by positive feedback within the more\nhighlyconnected networks that serves to increase the probability\nthat, once a set of satisfying products have emerged, one\nis kept informed about these leading products because at\nleast one of one\"s contacts is using it. Secondly, the\nrelatively higher market share enjoyed by market leaders in\nhighly-connected networks is eroded by market barriers. 
In\nmoving from \u03b2 = 0 to \u03b2 = 0.2 to \u03b2 = 0.4, market leaders\ncollectively lose an absolute share of 15% and 10% under\nthe larger degree distributions \u03ba = 9 and \u03ba = 3 respectively.\n[Figure omitted: four panels, (a) \u03b2 = 0, (b) \u03b2 = 0.05, (c) \u03b2 = 0.2, (d) \u03b2 = 0.4, each plotting share of market over time for \u03ba = 9, \u03ba = 3, \u03ba = 1, and no word-of-mouth.]\nFigure 2: Effect of market barriers on the share of the market captured by the leading 3 products\nIn contrast, no change in collective market share is observed\nwhen \u03ba = 1, although convergence to equilibrium conditions\nis slower. It seems reasonable to suggest that increases in\nnegative word-of-mouth, which occurs when an unsatisfied\nagent is prevented from switching to another product, are\nparticularly damaging to leading products when agents are\nwell-connected, and that under moderate to strong market\nbarriers these effects more than offset any gains achieved by\nthe spread of positive word-of-mouth through the network.\nTable 3 displays the number of attempted referrals, both\npositive and negative, as a function of degree distribution\nand extent of market barriers, and shows that stronger\nmarket barriers act to simultaneously depress positive\nword-of-mouth communication and increase negative\ncommunication from those trapped in unsatisfying product\nrelationships, and that this effect is particularly pronounced for\nmore highly-connected networks. 
The reduction in the\nnumber of positive referrals as market barriers impose\nincreasingly severe constraints is also reflected in Table 4, which\nshows the median number of product trials each agent makes\nper time period based on a referral from another agent.\nWhereas under few or no barriers agents in a highly-connected\nnetwork make substantially more reference-based product\ntrials than agents in poorly-connected networks, when\nbarriers are severe both types of network carry only very little\npositive referral information. This clearly has a relatively\ngreater impact on the highly-connected network, which\nrelies on the spread of positive referral information to achieve\nhigher satisfaction levels. Moreover, this result might be\neven stronger in reality if agents in poorly-connected\nnetworks attempt to compensate for the relative sparsity of\nconnections by making more forceful or persuasive referrals\nwhere they do occur.\n\u03ba = 1 \u03ba = 3 \u03ba = 9\n\u03b2 = 0 0.13 0.27 0.35\n\u03b2 = 0.05 0.11 0.22 0.28\n\u03b2 = 0.2 0.05 0.10 0.14\n\u03b2 = 0.4 0.02 0.05 0.06\nTable 4: Median number of referrals leading to a\nproduct trial received per agent per time period\n5. CONCLUSIONS AND RELATED WORK\nPurchasing behaviour in many markets takes place on\na substrate of networks of word-of-mouth communication\nacross which agents exchange information about products\nand their likes and dislikes. Understanding the ways in\nwhich flows of word-of-mouth communication influence\naggregate market behaviour requires one to study both the\nunderlying structural properties of the network and the\nlocal rules governing the behaviour of agents on the network\nwhen making purchase decisions and when interacting with\nother agents. These local rules are often constrained by the\nnature of a particular market, or else imposed by\nstrategic suppliers or social customs. 
The proper modelling of a\nmechanism for word-of-mouth transmittal and resulting\nbehavioural effects thus requires a consideration of a number of\ncomplex and interacting components: networks of\ncommunication, source credibility, learning processes, habituation\nand memory, external constraints on behaviour, theories of\ninformation transfer, and adaptive behaviour. In this\npaper we have attempted to address some of these issues in\na manner which reflects how agents might act in the real\nworld.\nUsing the key notions of a limited communication\nnetwork, a simple learning process, and a satisficing heuristic\nthat may be subject to external constraints, we showed (1)\nthe importance of word-of-mouth communication to both\nsystem effectiveness and stability, (2) that the degree\ndistribution of a network is more influential than the way in\nwhich agents are connected, but that both are important in\nmore complex environments, (3) that rewiring even a small\nnumber of connections to create a small-world network can\nhave dramatic results for the speed of convergence to\nsatisficing distributions and market share allocations, (4) that\nword-of-mouth continues to be effective when movements\nbetween products are constrained by market barriers, and\n(5) that increases in negative word-of-mouth incurred as a\nresult of market barriers can reduce the market share\ncollectively captured by leading brands, but that this is dependent\non the existence of a suitably well-connected network\nstructure.\nIt is the final finding that is likely to be most\nsurprising and practically relevant for the marketing research field,\nand suggests that it may not always be in the best\ninterests of a market leader to impose barriers that prevent\ncustomers from leaving. In poorly-connected networks, the\neffect of barriers on market shares is slight. 
In contrast, in well-connected networks, negative word-of-mouth can prevent agents from trying a product that they might otherwise have found satisfying, and this can inflict significant harm on market share. Products with small market share (which, in the context of our simulations, is generally due to the product offering poor performance) are relatively unaffected by negative word-of-mouth, since most product trials are likely to be unsatisfying in any case.

Agent-based modelling provides a natural way to begin investigating the types of dynamics that occur in marketing systems. Naturally the usefulness of results is for the most part dependent on the quality of the modelling of the two 'modules' comprising network structure and local behaviour. On the network side, future work might investigate the relationship between degree distributions, the way connections are created and destroyed over time, whether preferential attachment is influential, and the extent to which social identity informs network structure, all in larger networks of more heterogeneous agents. On the behavioural side, one might look at the adaptation of satisfaction thresholds during the course of communication, responses to systematic changes in product performances over time, the integration of various information sources, and different market barrier structures. All these areas provide fascinating opportunities to introduce psychological realities into models of marketing systems and to observe the resulting behaviour of the system under increasingly realistic scenario descriptions.

6. REFERENCES
[1] E. Anderson. Customer satisfaction and word-of-mouth. Journal of Service Research, 1(Aug):5-17, 1998.
[2] A. Barabási. Emergence of scaling in random networks. Science, 286:509-512, 1999.
[3] J. Brown and P. Reingen. Social ties and word-of-mouth referral behaviour. Journal of Consumer Research, 14(Dec):350-362, 1987.
[4] T. Brown, T.
Berry, P. Dacin, and R. Gunst. Spreading the word: investigating positive word-of-mouth intentions and behaviours in a retailing context. Journal of the Academy of Marketing Sciences, 33(2):123-139, 2005.
[5] T. Burnham, J. Frels, and V. Mahajan. Consumer switching costs: a typology, antecedents, and consequences. Journal of the Academy of Marketing Sciences, 31(2):109-126, 2003.
[6] T. Candale and S. Sen. Effect of referrals on convergence to satisficing distributions. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 347-354. ACM Press, New York, 2005.
[7] D. Duhan, S. Johnson, J. Wilcox, and G. Harrell. Influences on consumer use of word-of-mouth recommendation sources. Journal of the Academy of Marketing Sciences, 25(4):283-295, 1997.
[8] G. Ellison and D. Fudenberg. Word-of-mouth communication and social learning. Quarterly Journal of Economics, 110(1):93-125, 1995.
[9] M. Fishbein and I. Ajzen. Belief, Attitude, Intention, and Behaviour: An Introduction to the Theory and Research. Addison-Wesley, Reading, 1975.
[10] R. Fisher and L. Price. An investigation into the social context of early adoption behaviour. Journal of Consumer Research, 19(Dec):477-486, 1992.
[11] S. Keaveney. Customer switching behaviour in service industries: an exploratory study. Journal of Marketing Research, 59(Apr):71-82, 1995.
[12] P. Klemperer. Markets with consumer switching costs. Quarterly Journal of Economics, 102:375-394, 1979.
[13] D. McDonald. Recommending collaboration with social networks: a comparative evaluation. In CHI '03: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 593-600. ACM Press, New York, 2003.
[14] R. Oliver. A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing, 17:460-469, 1980.
[15] H. Simon. Administrative Behaviour. The Free Press, New York, 1976.
[16] T. Tran and R. Cohen. Improving user satisfaction in agent-based electronic marketplaces by reputation modelling and adjustable product quality. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pages 826-833. ACM Press, New York, 2004.
[17] P. Warshaw. A New Model for Predicting Behavioural Intentions: An Alternative to Fishbein. Journal of Marketing Research, 17:153-172, 1980.
[18] D. Watts. Networks, dynamics, and the small world phenomenon. American Journal of Sociology, 105:493-592, 1999.
Realistic Cognitive Load Modeling for Enhancing Shared Mental Models in Human-Agent Collaboration

ABSTRACT
Human team members often develop shared expectations to predict each other's needs and coordinate their behaviors. In this paper the concept Shared Belief Map is proposed as a basis for developing realistic shared expectations among a team of Human-Agent-Pairs (HAPs). The establishment of shared belief maps relies on inter-agent information sharing, the effectiveness of which highly depends on agents' processing loads and the instantaneous cognitive loads of their human partners. We investigate HMM-based cognitive load models to facilitate team members to share the right information with the right party at the right time. The shared belief map concept and the cognitive/processing load models have been implemented in a cognitive agent architecture, SMMall. A series of experiments were conducted to evaluate the concept, the models, and their impacts on the evolving of shared mental models of HAP teams.

1. INTRODUCTION
The entire movement of the agent paradigm was spawned, at least in part, by the perceived importance of fostering human-like adjustable autonomy. Human-centered multi-agent teamwork has thus attracted increasing attention in the multi-agent systems field [2, 10, 4]. Humans and autonomous systems (agents) are generally thought to be complementary: while humans are limited by their cognitive capacity in information processing, they are superior in spatial, heuristic, and analogical reasoning; autonomous systems can continuously learn expertise and tacit problem-solving knowledge from humans to improve system performance.
In short,\nhumans and agents can team together to achieve better\nperformance, given that they could establish certain mutual\nawareness to coordinate their mixed-initiative activities.\nHowever, the foundation of human-agent collaboration\nkeeps being challenged because of nonrealistic modeling of\nmutual awareness of the state of affairs. In particular, few\nresearchers look beyond to assess the principles of modeling\nshared mental constructs between a human and his/her\nassisting agent. Moreover, human-agent relationships can go\nbeyond partners to teams. Many informational processing\nlimitations of individuals can be alleviated by having a group\nperform tasks. Although groups also can create additional\ncosts centered on communication, resolution of conflict, and\nsocial acceptance, it is suggested that such limitations can\nbe overcome if people have shared cognitive structures for\ninterpreting task and social requirements [8]. Therefore, there\nis a clear demand for investigations to broaden and deepen\nour understanding on the principles of shared mental\nmodeling among members of a mixed human-agent team.\nThere are lines of research on multi-agent teamwork, both\ntheoretically and empirically. For instance, Joint Intention\n[3] and SharedPlans [5] are two theoretical frameworks for\nspecifying agent collaborations. One of the drawbacks is\nthat, although both have a deep philosophical and\ncognitive root, they do not accommodate the modeling of\nhuman team members. Cognitive studies suggested that teams\nwhich have shared mental models are expected to have\ncommon expectations of the task and team, which allow them\nto predict the behavior and resource needs of team\nmembers more accurately [14, 6]. Cannon-Bowers et al. [14]\nexplicitly argue that team members should hold compatible\nmodels that lead to common expectations. 
We agree on this and believe that the establishment of shared expectations among human and agent team members is a critical step to advance human-centered teamwork research. It has to be noted that the concept of shared expectation can broadly include role assignment and its dynamics, teamwork schemas and progresses, communication patterns and intentions, etc. While the long-term goal of our research is to understand how shared cognitive structures can enhance human-agent team performance, the specific objective of the work reported here is to develop a computational cognitive capacity model to facilitate the establishment of shared expectations.

395 978-81-904262-7-5 (RPS) c 2007 IFAAMAS

In particular, we argue that to favor human-agent collaboration, an agent system should be designed to allow the estimation and prediction of human teammates' (relative) cognitive loads, and use that to offer improvised, unintrusive help. Ideally, being able to predict the cognitive/processing capacity curves of teammates could allow a team member to help the right party at the right time, avoiding unbalanced work/cognitive loads among the team. The last point is on the modeling itself. Although an agent's cognitive model of its human peer need not be descriptively accurate, having at least a realistic model can be beneficial in offering unintrusive help, bias reduction, as well as trustable and self-adjustable autonomy. For example, although humans' use of cognitive simplification mechanisms (e.g., heuristics) does not always lead to errors in judgment, it can lead to predictable biases in responses [8]. It is feasible to develop agents as cognitive aids to alleviate humans' biases, as long as an agent can be trained to obtain a model of a human's cognitive inclination. With a realistic human cognitive model, an agent can also better adjust its automation level.
When its human peer is becoming overloaded, an agent can take over resource-consuming tasks, shifting the human's limited cognitive resources to tasks where a human's role is indispensable. When its human peer is underloaded, an agent can take the chance to observe the human's operations to refine its cognitive model of the human. Many studies have documented that human choices and behaviors do not agree with predictions from rational models. If agents could make recommendations in ways that humans appreciate, it would be easier to establish trust relationships between agents and humans; this, in turn, will encourage humans' use of automation.

The rest of the paper is organized as follows. In Section 2 we review cognitive load theories and measurements. A HMM-based cognitive load model is given in Section 3 to support resource-bounded teamwork among human-agent-pairs. Section 4 describes the key concept shared belief map as implemented in SMMall, and Section 5 reports the experiments for evaluating the cognitive models and their impacts on the evolving of shared mental models.

2. COGNITIVE CAPACITY - OVERVIEW
People are information processors. Most cognitive scientists [8] believe that the human information-processing system consists of an executive component and three main information stores: (a) sensory store, which receives and retains information for one second or so; (b) working (or short-term) memory, which refers to the limited capacity to hold (approximately seven elements at any one time [9]), retain (for several seconds), and manipulate (two or three information elements simultaneously) information; and (c) long-term memory, which has virtually unlimited capacity [1] and contains a huge amount of accumulated knowledge organized as schemata.
Cognitive load studies are, by and large, concerned with working memory capacity and how to circumvent its limitations in human problem-solving activities such as learning and decision making.

According to the cognitive load theory [11], cognitive load is defined as a multidimensional construct representing the load that a particular task imposes on the performer. It has a causal dimension including causal factors that can be characteristics of the subject (e.g. expertise level), the task (e.g. task complexity, time pressure), the environment (e.g. noise), and their mutual relations. It also has an assessment dimension reflecting the measurable concepts of mental load (imposed exclusively by the task and environmental demands), mental effort (the cognitive capacity actually allocated to the task), and performance.

Lang's information-processing model [7] consists of three major processes: encoding, storage, and retrieval. The encoding process selectively maps messages in sensory stores that are relevant to a person's goals into working memory; the storage process consolidates the newly encoded information into chunks, and forms associations and schema to facilitate subsequent recalls; the retrieval process searches the associated memory network for a specific element/schema and reactivates it into working memory. The model suggests that processing resources (cognitive capacity) are independently allocated to the three processes. In addition, working memory is used both for holding and for processing information [1]. Due to limited capacity, when greater effort is required to process information, less capacity remains for the storage of information. Hence, the allocation of the limited cognitive resources has to be balanced in order to enhance human performance.
This comes to the issue of measuring cognitive load, which has proven difficult for cognitive scientists.

Cognitive load can be assessed by measuring mental load, mental effort, and performance using rating scales, psychophysiological techniques (e.g. measures of heart activity, brain activity, eye activity), and secondary task techniques [12]. Self-ratings may appear questionable and restricted, especially when instantaneous load needs to be measured over time. Although physiological measures are sometimes highly sensitive for tracking fluctuating levels of cognitive load, costs and workplace conditions often favor task- and performance-based techniques, which involve the measure of a secondary task as well as the primary task under consideration. Secondary task techniques are based on the assumption that performance on a secondary task reflects the level of cognitive load imposed by a primary task [15]. From the resource allocation perspective, assuming a fixed cognitive capacity, any increase in cognitive resources required by the primary task must inevitably decrease resources available for the secondary task [7]. Consequently, performance in a secondary task deteriorates as the difficulty or priority of the primary task increases. The level of cognitive load can thus be manifested by the secondary task performance: the subject is getting overloaded if the secondary task performance drops. A secondary task can be as simple as detecting a visual or auditory signal but requires sustained attention. Its performance can be measured in terms of reaction time, accuracy, and error rate. However, one important drawback of secondary task performance, as noted by Paas [12], is that it can interfere considerably with the primary task (competing for limited capacity), especially when the primary task is complex.
To better understand and measure cognitive load, Xie and Salvendy [16] introduced a conceptual framework which distinguishes instantaneous load, peak load, accumulated load, average load, and overall load. It seems that the notion of instantaneous load, which represents the dynamics of cognitive load over time, is especially useful for monitoring the fluctuation trend so that free capacity can be exploited at the most appropriate time to enhance the overall performance in human-agent collaborations.

[Figure 1: Human-centered teamwork model. Each Human-Agent Pair couples a human partner, via a human-agent interface (HAI), with an agent processing model and an agent communication model; the pairs are connected to their teammates.]

3. HUMAN-CENTERED TEAMWORK MODEL
People are limited information processors, and so are intelligent agent systems; this is especially true when they act under hard or soft timing constraints imposed by the domain problems. In respect to our goal to build realistic expectations among teammates, we take two important steps. First, agents are resource-bounded; their processing capacity is limited by computing resources, inference knowledge, concurrent tasking capability, etc. We withdraw the assumption that an agent knows all the information/intentions communicated from other teammates. Instead, we contend that due to limited processing capacity, an agent may only have opportunities to process (make sense of) a portion of the incoming information, with the rest ignored. Taking this approach will largely change the way in which an agent views (models) the involvement and cooperativeness of its teammates in a team activity.
In other words, the establishment of shared mental models regarding team members' beliefs, intentions, and responsibilities can no longer rely on inter-agent communication only. This being said, we are not dropping the assumption that teammates are trustable. We still stick to this, only that team members cannot overtrust each other; an agent has to consider the possibility that the information it shares with others might not be as effective as expected due to the recipients' limited processing capacities. Second, human teammates are bounded by their cognitive capacities. As far as we know, the research reported here is the first attempt in the area of human-centered multi-agent teamwork that really considers building and using a human's cognitive load model to facilitate teamwork involving both humans and agents.

We use ⟨Hi, Ai⟩ to denote Human-Agent-Pair (HAP) i.

3.1 Computational Cognitive Capacity Model
An intelligent agent being a cognitive aid, it is desirable that the model of its human partner implemented within the agent is cognitively acceptable, if not descriptively accurate. Of course, building a cognitive load model that is cognitively acceptable is not trivial; there exist a variety of cognitive load theories and different measuring techniques. We here choose to focus on the performance variables of secondary tasks, given the ample evidence supporting secondary task performance as a highly sensitive and reliable technique for measuring a human's cognitive load [12]. It is worth noting that, just for the purpose of estimating a human subject's cognitive load, any artificial task (e.g., pressing a button in response to unpredictable stimuli) can be used as a secondary task to force the subject to go through. However, in a realistic application, we have to make sure that the selected secondary task interacts with the primary task in meaningful ways, which is not easy and often depends on the domain problem at hand.
For example, in the experiment below, we used the number of newly available information items correctly recalled as the secondary task, and the effectiveness of information sharing as the primary task. This is realistic to intelligence workers because in time-stress situations they have to know what information to share in order to effectively establish an awareness of the global picture.

In the following, we adopt the Hidden Markov Model (HMM) approach [13] to model human cognitive capacity. It is thus assumed that at each time step the secondary task performance of a human subject in a team composed of human-agent-pairs (HAPs) is observable to all the team members. Human team members' secondary task performance is used for estimating their hidden cognitive loads. A HMM is denoted by λ = ⟨N, V, A, B, π⟩, where N is a set of hidden states, V is a set of observation symbols, A is a set of state transition probability distributions, B is a set of observation symbol probability distributions (one for each hidden state), and π is the initial state distribution. We consider a 5-state HMM model of human cognitive load as follows (Figure 2). The hidden states are 0 (negligibly-loaded), 1 (slightly-loaded), 2 (fairly-loaded), 3 (heavily-loaded), and 4 (overly loaded). The observable states are tied with secondary task performance, which, in this study, is measured in terms of the number of items correctly recalled.

[Figure 2: A HMM cognitive load model. The transition diagram over the five hidden states (negligibly, slightly, fairly, heavily, overly loaded) is only partially recoverable here; the example observation matrix, with rows indexed by hidden state 0-4 and columns by the number of items correctly recalled (0, 1, ..., 8, ≥9), is

B =
  0:  0     0     0     0     0     0.02  0.03  0.05  0.1   0.8
  1:  0     0     0     0     0     0.05  0.05  0.1   0.7   0.1
  2:  0     0     0     0     0.01  0.02  0.45  0.4   0.1   0.02
  3:  0.02  0.03  0.05  0.15  0.4   0.3   0.03  0.02  0     0
  4:  0.1   0.3   0.3   0.2   0.1   0     0     0     0     0  ]
According to Miller's 7±2 rule, the observable states take integer values from 0 to 9 (the state is 9 when the number of items correctly recalled is no less than 9). For the example B matrix given in Figure 2, it is very likely that the cognitive load of the subject is negligible when the number of items correctly recalled is no less than 9.

However, determining the current hidden load status of a human partner is not trivial. The model might be oversensitive if we only consider the last-step secondary task performance to locate the most likely hidden state. There is ample evidence suggesting that human cognitive load is a continuous function over time and does not manifest sudden shifts unless there is a fundamental change in tasking demands. To address this issue, we place a constraint on the state transition coefficients: no jumps of more than 2 states are allowed. In addition, we take the position that a human subject is very likely overloaded if his secondary task performance is mostly low in recent time steps, while he is very likely not overloaded if his secondary task performance is mostly high recently. This leads to the following Windowed-HMM approach.

Given a pre-trained HMM λ of human cognitive load and the recent observation sequence Ot of length w, let parameter w be the effective window size, and ε_t^λ be the estimated hidden state at time step t. First apply the HMM to the observation sequence to find the optimal sequence of hidden states S_t^λ = s_1 s_2 ⋯ s_w (Viterbi algorithm). Then, compute the estimated hidden state ε_t^λ for the current time step, viewing it as a function of S_t^λ. We consider all the hidden states in S_t^λ, weighted by their respective distance to ε_{t-1}^λ (the estimated state of the last step): the closer a state in S_t^λ
to ε_{t-1}^λ, the higher the probability of the state being ε_t^λ. ε_t^λ is set to be the state with the highest probability (note that a state may have multiple appearances in S_t^λ). More formally, the probability of state s ∈ S_t^λ being ε_t^λ is given by:

    p_λ(s, t) = Σ_{s_j ∈ S_t^λ, s_j = s} η(s_j) e^{−|s_j − ε_{t−1}^λ|},    (1)

where η(s_j) = e^j / Σ_{k=1}^{w} e^k is the weight of s_j ∈ S_t^λ (the most recent hidden state has the most significant influence in predicting the next state). The estimated state for the current step is the state with maximum likelihood:

    ε_t^λ = argmax_{s ∈ S_t^λ} p_λ(s, t)    (2)

3.2 Agent Processing Load Model
According to schema theory [11], multiple elements of information can be chunked as single elements in cognitive schemas. A schema can hold a huge amount of information, yet is processed as a single unit. We adapt this idea and assume that agent i's estimation of agent j's processing load at time step t is a function of two factors: the number of chunks c_j(t) and the total number s_j(t) of information items being considered by agent j. If c_j(t) and s_j(t) are observable to agent i, agent i can employ a Windowed-HMM approach as described in Section 3.1 to model and estimate agent j's instantaneous processing load.

In the study reported below, we also used 5-state HMM models for agent processing load. With the 5 hidden states similar to the HMM models adopted for human cognitive load, we employed multivariate Gaussian observation probability distributions for the hidden states.

3.3 HAP's Processing Load Model
As discussed above, a Human-Agent-Pair (HAP) is viewed as a unit when teaming up with other HAPs.
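The Windowed-HMM estimator of Section 3.1 (Eqs. 1-2) can be sketched as follows. The B matrix copies the example from Figure 2; the transition matrix A, the uniform prior, and the test windows are illustrative assumptions of ours (A only respects the paper's "no jumps of more than 2 states" constraint), not values from the paper.

```python
import numpy as np

# Hidden load states: 0=negligibly, 1=slightly, 2=fairly, 3=heavily, 4=overly.
# A and pi are ASSUMED placeholders; B is the Figure 2 example (columns:
# number of items correctly recalled, 0..8 and >=9).
A = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],
    [0.2, 0.5, 0.2, 0.1, 0.0],
    [0.1, 0.2, 0.4, 0.2, 0.1],
    [0.0, 0.1, 0.2, 0.5, 0.2],
    [0.0, 0.0, 0.1, 0.3, 0.6],
])
B = np.array([
    [0,    0,    0,    0,    0,    0.02, 0.03, 0.05, 0.1,  0.8 ],
    [0,    0,    0,    0,    0,    0.05, 0.05, 0.1,  0.7,  0.1 ],
    [0,    0,    0,    0,    0.01, 0.02, 0.45, 0.4,  0.1,  0.02],
    [0.02, 0.03, 0.05, 0.15, 0.4,  0.3,  0.03, 0.02, 0,    0   ],
    [0.1,  0.3,  0.3,  0.2,  0.1,  0,    0,    0,    0,    0   ],
])
pi = np.full(5, 0.2)  # assumed uniform initial distribution

def viterbi(obs, A, B, pi):
    """Most likely hidden state sequence for the observation window."""
    T, N = len(obs), A.shape[0]
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A      # trans[prev, cur]
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

def estimate_state(window, prev_est, A=A, B=B, pi=pi):
    """Eqs. (1)-(2): recency-weight the Viterbi states, discounted by
    distance to the previous estimate, and return the argmax state."""
    S = viterbi(window, A, B, pi)
    w = len(S)
    eta = np.exp(np.arange(1, w + 1))
    eta /= eta.sum()                           # eta(s_j) = e^j / sum_k e^k
    score = {}
    for j, s in enumerate(S):
        score[s] = score.get(s, 0.0) + eta[j] * np.exp(-abs(s - prev_est))
    return max(score, key=score.get)
```

For instance, a window of consistently high recall counts with a low previous estimate yields a low-load state, while a window of low recall counts with a high previous estimate yields the overloaded state.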
The processing load of a HAP can thus be modeled as the co-effect of the processing load of the agent and the cognitive load of the human partner as captured by the agent. Suppose agent Ai has models for its processing load and its human partner Hi's cognitive load. Denote the agent processing load and human cognitive load of HAP ⟨Hi, Ai⟩ at time step t by μ_t^i and ν_t^i, respectively. Agent Ai can use μ_t^i and ν_t^i to estimate the load of ⟨Hi, Ai⟩ as a whole. Similarly, if μ_t^j and ν_t^j are observable to agent Ai, it can estimate the load of ⟨Hj, Aj⟩. For model simplicity, we still used 5-state HMM models for HAP processing load, with the estimated hidden states of the corresponding agent processing load and human cognitive load as the input observation vectors.

Building a load estimation model is the means. The goal is to use the model to enhance information sharing performance so that a team can form better shared mental models (e.g., to develop inter-agent role expectations in their collaboration), which is the key to high team performance.

3.4 Load-Sensitive Information Processing
Each agent has to adopt a certain strategy to process the incoming information. As far as resource-bounded agents are concerned, it is of no use for an agent to share information with teammates who are already overloaded; they simply do not have the capacity to process the information. Consider the incoming information processing strategy as shown in Table 1. Agent Ai (of HAPi) ignores all the incoming information when it is overloaded, and processes all the incoming information when it is negligibly loaded.
When it is heavily loaded, Ai randomly processes half of the messages from those agents which are the first 1/q teammates appearing in its communication queue; when it is fairly loaded, Ai randomly processes half of the messages from any teammates; when it is slightly loaded, Ai processes all the messages from those agents which are the first 1/q teammates appearing in its communication queue, and randomly processes half of the messages from other teammates.

Table 1: Incoming information processing strategy

HAPi load    Strategy
Overly       Ignore all shared info
Heavily      Consider every teammate A ∈ [1, |Q|/q]: randomly process half of the info from A; ignore info from any teammate B ∈ (|Q|/q, |Q|]
Fairly       Process half of the shared info from any teammate
Slightly     Process all info from any A ∈ [1, |Q|/q]; for any teammate B ∈ (|Q|/q, |Q|], randomly process half of the info from B
Negligibly   Process all shared info
HAPj         Process all info from HAPj if it is overloaded

*Q is a FIFO queue of agents from whom this HAP has received information at the current step; q is a constant known to all.

To further encourage sharing information at the right time, the last row of Table 1 says that HAPi, if having not sent information to HAPj, who is currently overloaded, will process all the information from HAPj. This can be justified from a resource allocation perspective: an agent can reallocate its computing resource reserved for communication to enhance its capacity of processing information. This strategy favors never sending information to an overloaded teammate, and it suggests that estimating and exploiting others' loads can be critical to enable an agent to share the right information with the right party at the right time.

4. SYSTEM IMPLEMENTATION
SMMall (Shared Mental Models for all) is a cognitive agent architecture developed for supporting human-centric collaborative computing.
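The load-conditional rules of Table 1 can be sketched as a per-message processing probability. The function name and the default value of q are illustrative assumptions; only the five load levels, the front-of-queue split at |Q|/q, and the probabilities come from the table.

```python
# Sketch of the Table 1 strategy. Load levels 0..4 mirror the HMM states
# (0=negligibly .. 4=overly loaded); sender_rank is the sender's 1-based
# position in the FIFO queue Q of size queue_len.

def process_probability(load, sender_rank, queue_len, q=2):
    """Probability that this HAP processes a message from the given sender."""
    front = sender_rank <= queue_len / q   # sender among the first 1/q of Q
    if load == 4:                          # overly loaded: ignore all
        return 0.0
    if load == 3:                          # heavily: half from the front 1/q
        return 0.5 if front else 0.0
    if load == 2:                          # fairly: half from anyone
        return 0.5
    if load == 1:                          # slightly: all from front, half from rest
        return 1.0 if front else 0.5
    return 1.0                             # negligibly loaded: process all
```

The special last row of Table 1 (always processing messages from an overloaded HAPj one has not sent to) would be layered on top of this per-load rule.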
It stresses the human's role in team activities by means of novel collaborative concepts and multiple representations of context woven through all aspects of team work. Here we describe two components pertinent to the experiment reported in Section 5: multi-party communication and shared mental maps (a complete description of the SMMall system is beyond the scope of this paper).

4.1 Multi-Party Communication
Multi-party communication refers to conversations involving more than two parties. Aside from the speaker, the listeners involved in a conversation can be classified into various roles such as addressees (the direct listeners), auditors (the intended listeners), overhearers (the unintended but anticipated listeners), and eavesdroppers (the unanticipated listeners). Multi-party communication is one of the characteristics of human teams. SMMall agents, which can form Human-Agent-Pairs with human partners, support multi-party communication with the following features.
1. SMMall supports a collection of multi-party performatives such as MInform (multi-party inform), MAnnounce (multi-party announce), and MAsk (multi-party ask). The listeners of a multi-party performative can be addressees, auditors, and overhearers, which correspond to 'to', 'cc', and 'bcc' in e-mail terms, respectively.
2. SMMall supports channelled communication. There
This is particularly useful if an agent wants to know the communication load imposed on a teammate.
4.2 Shared Belief Map & Load Display
A concept called the shared belief map has been proposed and implemented in SMMall; this responds to the need for innovative perspectives or concepts that allow group members to effectively represent and reason about shared mental models at different levels of abstraction. As described in Section 5, humans and agents interacted through shared belief maps in the evaluation of the HMM-based load models.
A shared belief map is a table with color-coded info-cells (cells associated with information). Each row captures the belief model of one team member, and each column corresponds to a specific information type (all columns together define the boundary of the information space being considered). Thus, info-cell Cij of a map encodes all the beliefs (instances) of information type j held by agent i. Color coding is applied to each info-cell to indicate the number of information instances held by the corresponding agent.
The concept of a shared belief map helps maintain and present to a human partner a synergy view of the shared mental models evolving within a team. Briefly, SMMall has implemented the concept with the following features:
1. A context menu can be popped up for each info-cell to view and share the associated information instances. It allows selective (selected-subset) or holistic info-sharing.
2. Mixed-initiative info-sharing: both agents and human partners can initiate a multi-party conversation. It also allows third-party info-sharing, say, A sharing the information held by B with C.
3. Information types that are semantically related (e.g., by inference rules) can be organized close together. Hence, nearby info-cells can form meaningful plateaus (or contour lines) of similar colors. Colored plateaus indicate those sections of a shared mental model that bear high overlapping degrees.
4.
The perceptible color (hue) difference manifested in a shared belief map indicates the information difference among team members, and hence visually represents the potential information needs of each team member (see Figure 3).
SMMall has also implemented the HMM-based models (Section 3) to allow an agent to estimate its human partner's and other team members' cognitive/processing loads. As shown in Fig. 3, below the shared belief map there is a load display for each team member. There are two curves in a display: the blue (dark) one plots the human's instantaneous cognitive loads, and the red one plots the processing loads of the HAP as a whole. If there are n team members, each agent needs to maintain 2n HMM-based models to support the load displays. The human partner of a HAP can adjust her cognitive load at runtime, as well as monitor another HAP's agent processing load and its probability of processing the information she sends at the current time step. Thus, the more closely a HAP can estimate the actual processing loads of other HAPs, the better information-sharing performance the HAP can achieve.
Figure 3: Shared Mental Map Display
In sum, shared belief maps allow the inference of who needs what, and load displays allow the judgment of when to share information. Together they allow us to investigate the impact of sharing the right information with the right party at the right time on the evolution of shared mental models.
4.3 Metrics for Shared Mental Models
Here we describe how we measure team performance in our experiment. We use the mental model overlapping percentage (MMOP) as the basis for measuring shared mental models. The MMOP of a group is defined as the intersection of all the individual mental states relative to the union of the individual mental states of the group.
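The intersection-over-union measure just described can be sketched as follows; the belief representation (one dict per agent mapping an information type to a set of instances) is our assumption for illustration, not SMMall's actual data structure.

```python
def mmop(beliefs):
    """Mental model overlapping percentage of a group, in [0, 100].
    beliefs: list over agents of {info_type: set of instances}."""
    types = sorted({t for b in beliefs for t in b})   # shared information space
    total = 0.0
    for t in types:
        union = set().union(*(b.get(t, set()) for b in beliefs))
        inter = set.intersection(*(b.get(t, set()) for b in beliefs))
        if union:                                     # per-type |∩| / |∪|
            total += len(inter) / len(union)
    return 100.0 * total / len(types)
```

Two agents that hold identical beliefs score 100; disjoint beliefs score 0.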
Formally, given a group of k agents G = {A_i | 1 <= i <= k}, let B_i = {I_im | 1 <= m <= n} be the beliefs (information) held by agent A_i, where each I_im is a set of information of the same type, and n (the size of the information space) is fixed for the agents in G. Then

MMOP(G) = \frac{100}{n} \sum_{1 \le m \le n} \frac{|\cap_{1 \le i \le k} I_{im}|}{|\cup_{1 \le i \le k} I_{im}|}.   (3)

First, a shared mental model can be measured in terms of the distance of the averaged subgroup MMOPs from the MMOP of the whole group. Without loss of generality, we define the paired SMM distance (subgroups of size 2) D_2 as:

D_2(G) = \sum_{1 \le i < j \le k} \cdots

... (TH2-6 > TH1-6, TH2-8 > TH1-8, TH2-10 > TH1-10), and the performance difference between TH1 and TH2 teams increased as communication capacity increased. This indicates that, other things being equal, the benefit of exploiting load estimation when sharing information becomes more significant when communication capacity is larger. From Fig. 4 the same findings can be derived for the performance of agent teams.
In addition, the results also show that the SMMs of each team type were maintained steadily at a certain level after about 20 time steps. However, maintaining an SMM steadily at a certain level is a non-trivial team task: the performance of teams that did not share any information (the 'NoSharing' curve in Fig. 4) decreased steadily as time proceeded.
5.4 Multi-Party Communication for SMM
We now compare teams of type 2 and type 3 (which splits multi-party messages by receivers' loads). As plotted in Fig. 4, for HAP teams, the performance of team type 2 for each fixed communication capacity was consistently better than that of team type 3 (TH3-6 <= TH2-6, TH3-8 ... TH2 > TH1 holds in Fig. 6(c) (larger distances indicate better subgroup SMMs), and TH3 ... TH1 > TH2 holds in Fig. 6(a), and TH2 ...

5: A* <- {Aj in A | U(Ai, Aj) > U(resume)}
6: Send a reject message to each agent in the set A \ A*
7: while A* != {} do
8:   Send a commit message to Aj = argmax_{Al in A*} U(Ai, Al)
9:   Remove Aj from A*
10:  Wait for Aj's decision
11:  if Aj responded commit then
12:    Send reject messages to the remaining agents in A*
13:    Terminate search
14:  end if
15: end while
16: end loop

where U(resume) denotes the expected utility of continuing the search (in the following paragraphs we show that U(resume) is fixed throughout the search and derives from the agent's strategy).
In the above algorithm, any agent Ai first identifies the set A* of other agents it is willing to accept out of those reviewed in the current search stage and sends a reject message to the rest. It then sends a commit message to the agent Aj in A* associated with the partnership yielding the highest utility. If a reject message is received from agent Aj, then this agent is removed from A* and a new commit message is sent according to the same criterion. The process continues until either (a) the set A* becomes empty, in which case the agent initiates another search stage; or (b) a dual commitment is obtained, in which case the agent sends reject messages to the remaining agents in A*. The method differs from the one used in the I-DM model in the way it handles the commitment messages: in the I-DM model, after evaluating the set of utilities (step 4), the agent instantaneously sends a commit message to the agent associated with the greatest utility and a reject message to all the other agents it interacted with (as a replacement for steps 5-15 in the above procedure). Our proposed S-DM model is more natural, as it allows an agent to hold, and possibly exploit, relatively beneficial opportunities even if its first-priority partnership is rejected by the other agent.
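The stage handling of steps 5-15 can be sketched as follows. This is a minimal sketch, not the paper's implementation: `utilities`, `u_resume`, and `responds_commit` are our placeholder names, and waiting for each partner's decision (step 10) is abstracted into a callback.

```python
def sdm_stage(utilities, u_resume, responds_commit):
    """One S-DM search stage for agent Ai.
    utilities: {candidate: U(Ai, candidate)}; u_resume: U(resume);
    responds_commit(candidate) -> bool stands in for step 10.
    Returns (partner or None, list of rejected candidates)."""
    accepted = {a: u for a, u in utilities.items() if u > u_resume}  # the set A*
    rejected = [a for a in utilities if a not in accepted]           # step 6
    while accepted:                                                  # steps 7-15
        best = max(accepted, key=accepted.get)   # commit to best remaining (step 8)
        del accepted[best]                       # step 9
        if responds_commit(best):                # dual commitment (steps 11-13)
            rejected.extend(accepted)            # reject everyone still pending
            return best, rejected
        rejected.append(best)                    # partner declined; try the next
    return None, rejected                        # A* exhausted: start a new stage
```

Note how a rejection from the top candidate does not force a new search round: the agent simply falls through to the next-best member of A*.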
In the I-DM model, on the other hand, since reject messages are sent alongside the commit message, simultaneously, a reject message from the agent associated with the best partnership forces a new search round.
Notice that the two-sided search mechanism above aligns with most other two-sided search mechanisms in the sense that it is based on random matching (i.e., in each search round the agent encounters a random sample of agents). While the maintenance of the random-matching infrastructure is an interesting research question, it is beyond the scope of this paper. Notwithstanding, we do wish to emphasize that, given the large number of agents in the environment and the substantial turnover rate in MAS (due to the open nature of the environment and the interoperability between environments), the probability of ending up interacting with the same agent more than once, when initiating a random interaction, is practically negligible.
THEOREM 1. The S-DM agent's decision-making process: (a) is the optimal one (maximizes the utility) for any individual agent in the environment; and (b) guarantees a zero deadlock probability for any given agent in the environment.
Proof:
(a) The method is optimal since it cannot be changed in a way that produces a better utility for the agent. Since bargaining is not applicable here (benefits are non-divisible), the agent's strategy is limited to accepting or rejecting offers. The decision to reject a partnership in step 6 is based only on the immediate utility that can be gained from this partnership in comparison to the expected utility of resuming the search (i.e., moving on to the next search stage), and is not affected by the willingness of the other agents to commit to or reject a partnership with Ai.
As for partnerships that yield a utility greater than the expected utility of resuming the search (i.e., the partnerships with agents from the set A*), the agent always prefers to delay its decision concerning partnerships of this type until receiving all notifications concerning potential partnerships that are associated with a greater immediate utility. The delay never results in a loss of opportunity, since the other agent's decision concerning this opportunity is not affected by agent Ai's willingness to commit to or reject it (but rather by the other agent's estimation of its expected utility if it resumes the search, and by the rejection messages it receives for more beneficial potential partnerships). Finally, the agent cannot benefit from delaying a commit message to the agent associated with the highest utility in A*, and thus will always send it a commit message.
(b) We first prove the following lemma, which states that the probability of having two partnering opportunities associated with an identical utility is zero.

LEMMA 2.1. When f is a continuous distribution function,

\lim_{y \to x} \left( \int_{z=x}^{y} f(z)\,dz \right)^2 = 0.

Proof: since f is continuous and the interval between x and y is finite, by the intermediate value theorem (found in most calculus texts) there exists a c between x and y such that

\int_{z=x}^{y} f(z)\,dz = f(c)(y - x)

(intuitively, a rectangle with base from z = x to z = y and height f(c) has the same area as the integral on the left-hand side). Therefore

\left( \int_{z=x}^{y} f(z)\,dz \right)^2 = |f(c)|^2 |y - x|^2.

When y -> x, f(c) stays bounded due to the continuity of f; moreover, \lim_{y \to x} f(c) = f(x). Hence

\lim_{y \to x} \left( \int_{z=x}^{y} f(z)\,dz \right)^2 = f(x)^2 \lim_{y \to x} |y - x|^2 = 0. □
An immediate consequence of the above lemma is that no tie-breaking procedures are required, and an agent in a waiting state is always waiting for a reply from the single agent associated with the highest utility among the agents in the set A* (i.e., no other agent in the set A* is associated with an equal utility). A deadlock can be formed only if we can create a cyclic sequence of agents in which every agent is waiting for a reply from the subsequent agent in the sequence. However, in our method any agent Ai will be waiting for a reply from another agent Aj, to which it sent a commit message, only if: (1) every agent Ak in A associated with a utility U(Ai, Ak) > U(Ai, Aj) has already rejected the partnership with agent Ai; and (2) agent Aj itself is waiting for a reply from an agent Al where U(Al, Aj) > U(Aj, Ai). Therefore, if we have a sequence of waiting agents, then the utility associated with the partnerships between any two subsequent agents in the sequence must increase along the sequence. If the sequence is cyclic, then we have a pattern of the form U(Ai, Al) > U(Al, Aj) > U(Aj, Ai). Since U(Ai, Al) > U(Aj, Ai), agent Ai can be waiting for agent Aj only if it has already been rejected by Al (see (1) above). However, if agent Al has rejected agent Ai, then it has also rejected agent Aj. Therefore, agent Aj cannot be waiting for agent Al to make a decision. The same logic applies to any longer sequence. □
The search activity is assumed to be costly [11, 1, 16]: an agent needs to consume some of its resources in order to locate other agents to interact with and to maintain the interactions themselves.
We assume utilities and costs are additive and that the agents try to maximize their overall utility, defined as the utility from the partnership formed minus the aggregated search costs along the search process. The agent's cost of interacting with N other agents (in parallel) is given by the function c(N). The search cost structure is principally a parameter of the environment and is thus shared by all agents.
An agent's strategy S(A') -> {commit to Aj in A', reject A'' subset of A', N} defines, for any given set of partnership opportunities A', the subset of opportunities that should be immediately declined, the agent to which a commit message should be sent (if no pending notification from another agent is expected), or the number of new interactions to initiate (N). Since the search process is two-sided, our goal is to find an equilibrium set of strategies for the agents.
2.1 Strategy Structure
Recall that each agent declines partnerships based on (a) the partnerships' immediate utility in comparison to the agent's expected utility from resuming the search; and (b) achieving a mutual commitment (thus declining pending partnerships that were not rejected in (a)). Therefore, an agent's strategy can be represented by a pair (N^t, x^t), where N^t is the number of agents with whom it chooses to interact in search stage t and x^t is its reservation value^5 (a threshold) for accepting or rejecting the resulting N potential partnerships. The subset A* thus includes all partnership opportunities of search stage t that are associated with a utility equal to or greater than x^t. The reservation value x^t is actually the expected utility of resuming the search at time t (i.e., U(resume)).
The agent always prefers committing to an opportunity greater than the expected utility of resuming the search, and always prefers to resume the search otherwise.
Since the agents are not limited by a decision horizon, and their search process does not reveal any new information about the market structure (e.g., about the utility distribution of future partnership opportunities), their strategy is stationary: an agent will not accept an opportunity it has rejected beforehand (i.e., x^1 = x^2 = ... = x) and will use the same sample size, N^1 = N^2 = ... = N, along its search.
2.2 Calculating Acceptance Probabilities
The transition from an instantaneous decision-making process to a sequential one introduces several new difficulties in extracting the agents' strategies. Now, in order to estimate the probability of being accepted by any of the other agents, the agent needs to recursively model, while setting its strategy, the probabilities of rejections other agents might face from the agents they interact with. In the following paragraphs we introduce several complementary definitions and notations, facilitating the formal introduction of the acceptance probabilities. Consider an agent Ai, using a strategy (N, xN), while operating in an environment where all other agents are using a strategy (k, xk). The probability that agent Ai will receive a commitment message from an agent Aj it interacted with depends on the utility associated with the potential partnership between them, x.
^5 Notice that the reservation value used here differs from a reservation price concept (which is usually used as a buyer's private evaluation). The use of reservation-value-based strategies is common in economic search models [21, 17].
This probability, denoted Gk(x), can be calculated as:^6

G_k(x) = \begin{cases} \left(1 - \int_{y=x}^{\infty} f(y) G_k(y)\,dy\right)^{k-1}, & x \ge x_k \\ 0, & \text{otherwise.} \end{cases}   (1)

The case where x < xk above is trivial: none of the other agents will accept agent Ai if the utility of such a partnership is smaller than their reservation value xk. However, even when the partnership's utility is greater than or equal to xk, commitment is not guaranteed. In the latter scenario, a commitment message from agent Aj will be received only if agent Aj has been rejected by all other agents in its set A* that were associated with a utility greater than the utility of a partnership with agent Ai.
The unique solution to the recursive Equation 1 is:

G_k(x) = \begin{cases} \left(1 + (k-2)\int_{y=x}^{\infty} f(y)\,dy\right)^{\frac{1-k}{k-2}}, & k > 2,\ x \ge x_k \\ \exp\left(-\int_{y=x}^{\infty} f(y)\,dy\right), & k = 2,\ x \ge x_k \\ 1, & k = 1,\ x \ge x_k \\ 0, & x < x_k. \end{cases}   (2)

Notice that, as expected, a partnership opportunity that yields the maximum mutual utility is necessarily accepted by both agents, i.e., \lim_{x \to \infty} G_k(x) = 1. On the other hand, when the utility associated with a potential partnership opportunity is zero (x = 0), the acceptance probability is non-negligible:

\lim_{x \to 0} G_k(x) = (k-1)^{\frac{1-k}{k-2}}   (3)

This non-intuitive result derives from the fact that there is still a non-negligible probability that the other agent is rejected by all the other agents it interacts with.
2.3 Setting the Agents' Strategies
Using the function Gk(x), we can now formulate and explore the agents' expected utility under their search strategies. Consider again an agent Ai that is using a sample of size N while all other agents are using a strategy (k, xk). We denote by RN(x) the probability that the maximum utility agent Ai can be guaranteed when interacting with N agents (i.e., the highest utility for which a commit message will be received) is at most x.
This can be calculated as the probability that none of the N agents send agent Ai a commit message for a partnership associated with a utility greater than x:

R_N(x) = \left(1 - \int_{\max(x, x_k)}^{\infty} f(y) G_k(y)\,dy\right)^N   (4)

Notice that RN(x) is in fact a cumulative distribution function, satisfying \lim_{x \to \infty} R_N(x) = 1 and dR_N(x)/dx > 0 (the function never takes a zero value simply because there is always a positive probability that none of the agents commit at all to a partnership with agent Ai). Therefore, the derivative of the function RN(x), denoted rN(x), is in fact the probability density function of the maximum utility that can be guaranteed for agent Ai when sampling N other agents:

r_N(x) = \frac{dR_N(x)}{dx} = \begin{cases} N f(x) G_k(x)^{\frac{N+k-2}{k-1}}, & x \ge x_k \\ 0, & x < x_k \end{cases}   (5)

This function rN(x) is essential for calculating VN(xN), the expected utility of agent Ai when using a strategy (N, xN), given the strategy (k, xk) used by the other agents:

V_N(x_N) = \int_{y=\max(x_N, x_k)}^{\infty} y\, r_N(y)\,dy + \left(1 - \int_{y=\max(x_N, x_k)}^{\infty} r_N(y)\,dy\right) V_N(x_N) - c(N)   (6)

The right-hand side of the above equation represents the expected utility of agent Ai from taking an additional search stage. The first term represents the expected utility from mutual-commitment scenarios, whereas the second term is the expected utility associated with resuming the search (which equals VN(xN), since nothing has changed for the agent).
^6 The use of the recursive Equation 1 is enabled by our assumption that the number of agents is infinite (thus the probability of an overlap between the interacting agents, and the effect of such an overlap on the probabilities we calculate, become insignificant).
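For concreteness, the closed form (2) and the density (5) can be evaluated numerically. This is a sketch under assumptions of ours: utilities uniform on [0, 1] (so f(y) = 1 and the tail integral of f from x to 1 is 1 - x) and k > 2.

```python
def G(x, k, xk):
    """Acceptance probability of Eq. (2), uniform f on [0, 1], k > 2."""
    if x < xk:
        return 0.0
    return (1.0 + (k - 2) * (1.0 - x)) ** ((1.0 - k) / (k - 2))

def r(x, N, k, xk):
    """Density of the best guaranteed utility, Eq. (5), with f(x) = 1."""
    if x < xk:
        return 0.0
    return N * G(x, k, xk) ** ((N + k - 2) / (k - 1))
```

As a sanity check, G reaches 1 at the top of the utility support, and with xk = 0 its value at x = 0 matches the limit (k-1)^((1-k)/(k-2)) of Eq. (3).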
Using simple mathematical manipulations and substituting rN(x), Equation 6 transforms into:

V_N(x) = \frac{\int_{y=\max(x, x_k)}^{\infty} y N f(y) G_k(y)^{\frac{N+k-2}{k-1}}\,dy - c(N)}{\int_{y=\max(x, x_k)}^{\infty} N f(y) G_k(y)^{\frac{N+k-2}{k-1}}\,dy}   (7)

and can be further simplified into:

V_N(x) = \max(x, x_k) + \frac{\int_{\max(x, x_k)}^{\infty} \left(1 - G_k(y)^{\frac{N}{k-1}}\right) dy - c(N)}{1 - G_k(\max(x, x_k))^{\frac{N}{k-1}}}   (8)

Equation 8 allows us to prove some important characteristics of the model, as summarized in the following Theorem 2.
THEOREM 2. When the other agents use strategy (k, xk):
(a) An agent's expected utility function VN(xN), when using a strategy (N, x), is quasi-concave in x with a unique maximum, obtained for the value xN satisfying:

V_N(x_N) = x_N   (9)

(b) The value xN satisfies:

c(N) = \left(\max(x_N, x_k) - x_N\right)\left(1 - G_k(x_k)^{\frac{N}{k-1}}\right) + \int_{\max(x_N, x_k)}^{\infty} \left(1 - G_k(y)^{\frac{N}{k-1}}\right) dy   (10)

The proof is obtained by differentiating VN(xN) in Equation 8 and setting the derivative to zero. After applying further mathematical manipulations we obtain (9) and (10).
Both parts of Theorem 2 can be used as an efficient means for extracting the optimal reservation value xN of an agent, given the strategies of the other agents in the environment and the number of parallel interactions it uses. Furthermore, in the case of complex distribution functions where extracting xN from Equation 10 is not immediate, a simple algorithm (principally based on binary search) can be constructed for calculating the agent's optimal reservation value (which equals its expected utility, according to (9)), with a complexity of O(\log(\hat{x}/\rho)), where \rho is the required precision level for x_N and \hat{x} is the solution to \int_{y=\hat{x}}^{\infty} y N f(y) F(y)^{N-1}\,dy = c(N).
Having the ability to calculate xN, we can now prove the following Proposition 2.1.
PROPOSITION 2.1.
An agent operating in an environment where all agents are using a strategy according to the instantaneous parallel search equilibrium (i.e., according to the I-DM model [21]) can only benefit from deviating to the proposed S-DM strategy.
Sketch of proof: For the I-DM model the following holds [21]:

c(N) = \frac{N}{2N-1} \int_{y=x_N^{I\text{-}DM}}^{\infty} \left(1 - F(y)^{2N-1}\right) dy   (11)

We apply the methodology used above in this subsection to construct the expected utility of an agent using the S-DM strategy as a function of its reservation value, assuming all other agents are using the I-DM search strategy. This results in an optimal reservation value for the agent using S-DM, satisfying:

c(N) = \int_{y=x_N^{S\text{-}DM}}^{\infty} \left(1 - \left(1 - \frac{1}{N} + \frac{F(y)^N}{N}\right)^N\right) dy   (12)

Finally, we prove that the integrand in Equation 11 is smaller than the integrand in Equation 12. Given that both terms equal c(N), we obtain x_N^{S-DM} > x_N^{I-DM} and consequently (according to Theorem 2) a similar relationship in terms of expected utilities.
Figure 1 illustrates the superiority of the proposed S-DM search strategy, as well as the characteristics of the expected utility function (as reflected in Theorem 2). For comparative reasons we use the same synthetic environment that was used for the I-DM model [21]. Here the utilities are assumed to be drawn from a uniform distribution function, and the cost function is taken to be c(N) = 0.05 + 0.005N. The agent is using N = 3 while the other agents are using k = 25 and xk = 0.2. The different curves depict the expected utility of the agent as a function of the reservation value x that it uses, when: (a) all agents are using the I-DM strategy (marked as I-DM); (b) the agent is using the S-DM strategy while the other agents are using the I-DM strategy (marked as I-DM/S-DM); and (c) all agents are using the S-DM strategy (marked as S-DM).
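Equation 8 can also be evaluated directly in this synthetic environment. The sketch below assumes uniform utilities on [0, 1] (so f(y) = 1 and Gk takes the closed form of Eq. (2) for k > 2) and uses a simple trapezoidal rule plus bisection; the function names are ours.

```python
def G(y, k):
    # Eq. (2) for uniform f on [0, 1] and y >= x_k (k > 2 assumed)
    return (1.0 + (k - 2) * (1.0 - y)) ** ((1.0 - k) / (k - 2))

def V(x, N, k, xk, cost, steps=2000):
    """Expected utility V_N(x) of Eq. (8) for uniform utilities."""
    m = max(x, xk)
    b = N / (k - 1.0)
    h = (1.0 - m) / steps                      # trapezoid over [m, 1]
    vals = [1.0 - G(m + i * h, k) ** b for i in range(steps + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return m + (integral - cost(N)) / (1.0 - G(m, k) ** b)

def optimal_reservation_value(N, k, xk, cost, tol=1e-6):
    """Bisection for the unique x with V_N(x) = x (Eq. 9)."""
    lo, hi = xk, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if V(mid, N, k, xk, cost) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with Eq. 8, V is flat for any reservation value at or below xk (the max(x, xk) terms absorb it), and the bisection recovers the fixed point of Eq. (9).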
As expected, according to Equation 8 and Theorem 2, the agent's expected utility remains constant until its reservation value exceeds xk. It then reaches a global maximum when the reservation value satisfies VN(x) = x. From the graph we can see that the agent always has an incentive to deviate from the I-DM strategy to the S-DM strategy (as was proven in Proposition 2.1).
Figure 1: The expected utility as a function of the reservation value used by the agent
3. EQUILIBRIUM DYNAMICS
Since all agents are subject to similar search costs, and their perceived utilities are drawn from the same distribution function, they all share the same strategy in equilibrium. A multi-equilibria scenario may occur; however, as we discuss in the following paragraphs, since all agents share the same preferences/priorities (unlike, for example, in the famous battle-of-the-sexes scenario), we can always identify which equilibrium strategy will be used.
Notice that if all agents are using the same sample size N, then the value xN resulting from solving Equation 10 by substituting k = N and xk = xN is a stable reservation value (i.e., none of the agents can benefit from changing just the value of xN).
An equilibrium strategy (N, xN) can be found by identifying an N value for which no single agent has an incentive to use a different number of parallel interactions, k (and the new optimal reservation value that is associated with k according to Equation 10). While this implies an infinite solution space, we can always bound it using Equations 8 and 10. Within the framework of this paper, we demonstrate such a bounding methodology for the common case where c(N) is linear^7 or convex, by using the following Theorem 3.
THEOREM 3.
When c(N) is linear (or convex), then: (a) when all other agents sample k potential partners over a search round, if an agent's expected utility of sampling k+1 potential partners, V_{k+1}(x_{k+1}), is smaller than V_k(x_k), then the expected utility of sampling N potential partners, V_N(x_N), where N > k+1, is also smaller than V_k(x_k). (b) Similarly, when all other agents sample k potential partners over a search round, if an agent's expected utility of using k-1 potential partners, V_{k-1}(x_{k-1}), is smaller than the expected utility of using k potential partners, V_k(x_k), then the expected utility of using N potential partners, where N < k-1, is also smaller than V_k(x_k).
Proof: Let us use the notation c_i for c(i). Since V_k(x_k) = x_k for all k (according to Equation 9), the claims are: (a) if x_{k+1} < x_k then x_N < x_k for all N >= k+1; and (b) if x_{k-1} < x_k then x_N < x_k for all N <= k-1.
(a) We start by proving that if x_{k+1} < x_k then x_{k+2} < x_k. Assume otherwise, i.e., x_{k+1} < x_k and x_{k+2} > x_k. Then, according to Equation 10, the following holds:

0 < c_{k+2} - 2c_{k+1} + c_k < \int_{x_{k+2}}^{\infty} \left(1 - G_k(y)^{\frac{k+2}{k-1}}\right) dy - 2\int_{x_k}^{\infty} \left(1 - G_k(y)^{\frac{k+1}{k-1}}\right) dy + \int_{x_k}^{\infty} \left(1 - G_k(y)^{\frac{k}{k-1}}\right) dy

where the transition to the inequality is valid since c(i) is convex. Since the assumption in this proof is that x_{k+2} > x_k, the above can be transformed into:

\int_{x_k}^{\infty} \left(2 G_k(y)^{\frac{k+1}{k-1}} - G_k(y)^{\frac{k+2}{k-1}} - G_k(y)^{\frac{k}{k-1}}\right) dy > 0   (13)

Now notice that the integrand is actually -G_k(y)^{\frac{k}{k-1}}\left(1 - G_k(y)^{\frac{1}{k-1}}\right)^2, which is obviously negative, contradicting the initial assumption; thus if x_{k+1} < x_k then necessarily x_{k+2} < x_k.
Now we need to prove the same for any x_{k+j}. We prove this in two steps: first, if x_{k+i} < x_k then x_{k+2i} < x_k; second, if x_{k+i} < x_k and x_{k+i+1} < x_k, then x_{k+2i+1} < x_k.
Together these constitute the necessary induction arguments to prove case (a). We start with the even case, using a similar methodology. Assume otherwise, i.e., x_{k+l} < x_k for l = 1, ..., j-1 and x_{k+2i} > x_k. According to Equation 10, and the fact that c(i) is convex, the following holds:

\int_{x_k}^{\infty} \left(2 G_k(y)^{\frac{k+i}{k-1}} - G_k(y)^{\frac{k+2i}{k-1}} - G_k(y)^{\frac{k}{k-1}}\right) dy > 0   (14)

Again the integrand is actually -G_k(y)^{\frac{k}{k-1}}\left(1 - G_k(y)^{\frac{i}{k-1}}\right)^2, which is obviously negative, contradicting the initial assumption; thus x_{k+2i} < x_k.
As for the odd case, we use Equation 10 once for k+i+1 parallel interactions and once for k+2i+1. From the convexity of c_i, we obtain c_{k+2i+1} - c_{k+i} - c_{k+i+1} + c_k > 0, and thus:

\int_{x_k}^{\infty} \left(G_k(y)^{\frac{k+i}{k-1}} + G_k(y)^{\frac{k+i+1}{k-1}} - G_k(y)^{\frac{k+2i+1}{k-1}} - G_k(y)^{\frac{k}{k-1}}\right) dy > 0   (15)

This time the integrand in Equation 15 can be rewritten as G_k(y)^{\frac{k}{k-1}}\left(1 - G_k(y)^{\frac{i}{k-1}}\right)\left(G_k(y)^{\frac{i+1}{k-1}} - 1\right), which is obviously negative, contradicting the initial assumption; thus x_{k+2i+1} < x_k. Now, using induction, one can prove that if x_{k+1} < x_k then x_{k+i} < x_k. This concludes part (a) of the proof.
The proof of part (b) of the theorem is obtained in a similar manner. In this case, c_k - 2c_{k-i} + c_{k-2i} > 0 and c_k - c_{k-i-1} - c_{k-i} + c_{k-2i-1} > 0.
^7 A linear cost function is most common in agent-based two-sided search applications, since the cost function can often be divided into fixed costs (e.g., operating the agent per time unit) and variable costs (i.e., the cost of processing a single interaction's data).
The above theorem supplies us with a powerful tool for eliminating non-equilibrium N values.
It suggests that we can check the stability of a sample size N and the appropriate reservation value xN simply by calculating the optimal reservation values of a single agent when deviating to samples of sizes N-1 and N+1 (keeping the other agents at strategy (N, xN)). If both of the reservation values associated with these two sample sizes are smaller than xN, then according to Theorem 3 the same holds when deviating to any other sample size k. The above process can be further simplified by using V_{N+1}(x_N) > x_N and V_{N-1}(x_N) > x_N as the two elimination rules. This derives from Theorem 3 and the properties of the function VN(x) found in Theorem 2.
Notice that a multi-equilibria scenario may occur, but it can easily be resolved: if several strategies satisfy the stability condition defined above, then the agents will always prefer the one associated with the highest expected utility. Therefore an algorithm that goes over the different N values and checks them according to the rules above can be applied, assuming that we can bound the interval for searching for the equilibrium N. The following Theorem 4 suggests such an upper bound.
THEOREM 4. An upper bound for the equilibrium number of partners to be considered over a search round is the solution of the equation

A(N) = c(N)   (16)

provided A(N-1) > c(N-1), where we denote

A(N) := \int_{y=0}^{\infty} y N f(y) G_k(y)^{\frac{N+k-2}{k-1}}\,dy.

Proof: We denote

A(N, x) = \int_{y=x}^{\infty} y N f(y) G_k(y)^{\frac{N+k-2}{k-1}}\,dy,

so that A(N) = A(N, 0). From Equation 7,

V_N(x) = \frac{A(N, x) - c(N)}{N \int_{x}^{\infty} f(y) G_k(y)^{\frac{N+k-2}{k-1}}\,dy},

where the denominator is positive. Clearly A(N) >= A(N, x) for all x, since the integrand is positive.
Hence if A(N) - c(N) < 0, then A(N, x) - c(N) < 0 for all x, and VN(x) < 0 for all x. Next we prove that once A(N) - c(N) becomes negative, it stays negative. Recalling that for any g(y),

\frac{d}{dN}\left(g(y)^{b(N)}\right) = g(y)^{b(N)} \log(g(y)) \frac{db}{dN},

we get:

A''(N) = \frac{-1}{(k-1)^2} \int_{0}^{\infty} G_k(y)^{\frac{N}{k-1}} \left(\log G_k(y)\right)^2 dy,

which is always negative, since the integrand is non-negative. Therefore A(N) is concave. Since c(N) is convex, -c(N) is concave, and a sum of concave functions is concave, so A(N) - c(N) is concave. This guarantees that once the concave expression A(N) - c(N) shifts from a positive value to a negative one (with the increase in N), it cannot become positive again. Therefore, having N* such that A(N*) = c(N*), and A(N**) > c(N**) for some N** < N*, yields an upper bound for N, i.e., VN(x) < 0 for all N >= N*. The condition we specify for N** merely ensures that VN switches from a positive value to a negative one (and not vice versa), and is trivial to implement.
Given the existence of the upper bound, we can design an algorithm for finding the equilibrium strategy (if one exists). The algorithm extracts the upper bound, N̂, for the equilibrium number of parallel interactions according to Theorem 4. Out of the set of values satisfying the stability condition defined above, the algorithm chooses the one associated with the highest reservation value according to Equation 10.
This is the equilibrium associated with the highest expected utility to all agents according to Theorem 2.

Figure 2: The incentive to deviate from strategy (N, x_N). [The plot shows a single agent's expected utility V_N(x) against the number of parallel interactions N, with curves V_{N+1}(x_N), V_N(x_N) and V_{N−1}(x_N); an enlarged detail marks the crossing region.]

The process is illustrated in Figure 2 for an artificial environment where partnerships' utilities are associated with a uniform distribution. The cost function used is c(N) = 0.2 + 0.02N. The graph depicts a single agent's expected utility when all other agents are using N parallel interactions (on the horizontal axis) and the appropriate reservation value x_N (calculated according to Equation 10). The different curves depict the expected utility of the agent when it uses a strategy: (a) (N, x_N), similar to the other agents (marked as V_N(x_N)); (b) (N + 1, x_N) (marked as V_{N+1}(x_N)); and (c) (N − 1, x_N) (marked as V_{N−1}(x_N)). According to the discussion following Theorem 3, a stable equilibrium satisfies: V_N(x_N) > max{V_{N+1}(x_N), V_{N−1}(x_N)}. The strategy satisfying the latter condition in our example is (9, 0.437).

4. RELATED WORK

The two-sided economic search for partnerships in AI literature is a sub-domain of coalition formation.8 While coalition formation models usually consider general coalition sizes [24], the partnership formation model (often referred to as matchmaking) considers environments where agents have a benefit only when forming a partnership and this benefit cannot be improved by extending the partnership to more than two agents [12, 23] (e.g., in the case of buyers and sellers or peer-to-peer applications). As in the general coalition formation case, agents have the incentive to form partnerships when they are incapable of executing a task on their own or when the partnership can improve their individual utilities [14]. Various centralized matching mechanisms can be found in the literature [6, 2, 8]. However, in many MAS environments, in the absence of any reliable central matching mechanism, the matching process is completely distributed.

While search in agent-based environments is well recognized to be costly [11, 21, 1], most of the proposed coalition formation mechanisms assume that an agent can scan as many partnership opportunities in its environment as needed or has access to central matchers or middle agents [6]. The incorporation of costly search in this context is quite rare [21] and, to the best of our knowledge, a distributed two-sided search for partners model similar to the S-DM model has not been studied to date.

Classical economic search theory ([15, 17], and references therein) widely addresses the problem of a searcher operating in a costly environment, seeking to maximize his long-term utility. In these models, classified as one-sided search, the focus is on establishing the optimal strategies for the searcher, assuming no mutual search activities (i.e., no influence on the environment). Here the sequential search procedure is often applied, allowing the searcher to investigate a single [15] or multiple [7, 19] opportunities at a time. While the latter method is proven to be beneficial for the searcher, it was never used in the two-sided search models that followed (where dual search activities are modeled) [22, 5, 18]. Therefore, in these models, the equilibrium strategies are always developed based on the assumption that the agents interact with others sequentially (i.e., with one agent at a time).

8 The use of the term partnership in this context refers to the agreement between two individual agents to cooperate in a pre-defined manner. For example, in the buyer-seller application a partnership is defined as an agreed transaction between the two parties [9].
A first attempt to integrate the parallel search into a two-sided search model is given in [21], as detailed in the introduction section.

Several essences of the two-sided search can be found in the strategic theory of bargaining [3]: both coalition formation and matching can be represented as a sequential bargaining game [4] in which payoffs are defined as a function of the coalition structure and can be divided according to a fixed or negotiated division rule. Nevertheless, in the sequential bargaining literature, most emphasis is put on specifying the details of the sequential negotiating process over the division of the utility (or cost) jointly owned by the parties, or the strategy the coalition needs to adopt [20, 4]. The models presented in this area do not associate the coalition formation process with search costs, which is the essence of the analysis that economic search theory aims to supply. Furthermore, even in repeated pairwise bargaining models [10] the agents are always limited to initiating a single bargaining interaction at a time.

5. DISCUSSION AND CONCLUSIONS

The phenomenal growth evidenced in recent years in the number of software agent-based applications, alongside the continuous improvement in agents' processing and communication capabilities, suggests various incentives for agents to improve their search performance by applying advanced search strategies such as parallel search. The multiple-interactions technique is known to be beneficial for agents both in one-sided and two-sided economic search [7, 16, 21], since it allows the agents to decrease their average cost of learning about potential partnerships and their values. In this paper we propose a new parallel two-sided search mechanism that differs from the existing one in the sense that it allows the agents to delay their decision making process concerning the acceptance and rejection of potential partnerships as necessary.
This is in contrast to the existing instantaneous model [21], which forces each agent to make a simultaneous decision concerning each of the potential partnerships revealed to it during the current search stage. As discussed throughout the paper, the new method is much more intuitive to the agent than the existing model: an agent will always prefer to keep all options available. Furthermore, as we prove in the former sections, an agent's transition to the new search method always results in a better utility.

As we prove in Section 2, in spite of the transition to sequential decision making, deadlocks never occur in the proposed method as long as all agents use the proposed strategies. Since our analysis is equilibrium-based, a deviation from the proposed strategies is not beneficial. Similarly, we show that a deviation of a single agent (back) to the instantaneous decision making strategy is not beneficial. The only problem that may arise in the transition from instantaneous to sequential decision making is when an agent fails (technically) to function (endlessly delaying the notification to the agents it interacted with). While equilibrium analyses normally do not consider malfunction as a legitimate strategy, we do wish to emphasize that the malfunctioning agent problem can be resolved by using a simple timeout for receiving responses and skipping this agent in the sequential decision process if the timeout is exceeded.

Our analysis covers all aspects of the new two-sided search technique, from individual strategy construction through the dynamics that lead to stability (equilibrium). The difficulty in extracting the agents' equilibrium strategies in the new model derives from the need to recursively model, while setting an agent's strategy, the rejection other agents might face from the agents they interact with.
This complexity (which does not exist in former models) is resolved by the introduction of the recursive function G_k(x) in Section 2. Using the different theorems and propositions we prove, we proffer efficient tools for calculating the agents' equilibrium strategies. Our capabilities to produce an upper bound for the number of parallel interactions used in equilibrium (Theorem 4) and to quickly identify (and eliminate) non-equilibrium strategies (Theorem 3) resolve the problem of the computational complexity associated with having to deal with a theoretically infinite strategy space.

While the analysis we present is given in the context of software agents, the model we suggest is general, and can be applied to any two-sided economic search environment where the searchers can search in parallel. In particular, in addition to weakly dominating the instantaneous decision making model (as we prove in the analysis section), the proposed method weakly dominates the purely sequential two-sided search model (where each agent interacts with only one other agent at a time) [5]. This derives from the fact that the proposed method is a generalization of the latter (i.e., in the worst-case scenario, the agent can interact with one other agent at a time).

Naturally the attempt to integrate search theory techniques into day-to-day applications brings up the applicability question. Justification and legitimacy considerations for this integration were discussed in the wide literature we refer to throughout the paper. The current paper is not focused on re-arguing applicability, but rather on improving the core two-sided search model. We see great importance in future research that will combine bargaining as part of the interaction process. We believe such research can result in many rich variants of our two-sided search model.

6. REFERENCES

[1] Y. Bakos. Reducing buyer search costs: Implications for electronic marketplaces.
Management Science, 42(12):1676-1692, June 1997.
[2] G. Becker. A theory of marriage. Journal of Political Economy, 81:813-846, 1973.
[3] K. Binmore, M. Osborne, and A. Rubinstein. Non-cooperative models of bargaining. In Handbook of Game Theory with Economic Applications, pages 180-220. Elsevier, New York, 1992.
[4] F. Bloch. Sequential formation of coalitions in games with externalities and fixed payoff division. Games and Economic Behavior, 14(1):90-123, 1996.
[5] K. Burdett and R. Wright. Two-sided search with nontransferable utility. Review of Economic Dynamics, 1:220-245, 1998.
[6] K. Decker, K. Sycara, and M. Williamson. Middle-agents for the internet. In Proc. of IJCAI, pages 578-583, 1997.
[7] S. Gal, M. Landsberger, and B. Levykson. A compound strategy for search in the labor market. Int. Economic Review, 22(3):597-608, 1981.
[8] D. Gale and L. Shapley. College admissions and the stability of marriage. American Math. Monthly, 69:9-15, 1962.
[9] M. Hadad and S. Kraus. Sharedplans in electronic commerce. In M. Klusch, editor, Intelligent Information Agents, pages 204-231. Springer Publisher, 1999.
[10] M. Jackson and T. Palfrey. Efficiency and voluntary implementation in markets with repeated pairwise bargaining. Econometrica, 66(6):1353-1388, 1998.
[11] J. Kephart and A. Greenwald. Shopbot economics. JAAMAS, 5(3):255-287, 2002.
[12] M. Klusch. Agent-mediated trading: Intelligent agents and e-business. J. on Data and Knowledge Engineering, 36(3), 2001.
[13] S. Kraus, O. Shehory, and G. Taase. Coalition formation with uncertain heterogeneous information. In Proc. of AAMAS '03, pages 1-8, 2003.
[14] K. Lermann and O. Shehory. Coalition formation for large scale electronic markets. In Proc. of ICMAS'2000, pages 216-222, Boston, 2000.
[15] S. A. Lippman and J. J. McCall. The economics of job search: A survey. Economic Inquiry, 14:155-189, 1976.
[16] E. Manisterski, D. Sarne, and S. Kraus.
Integrating parallel interactions into cooperative search. In AAMAS, pages 257-264, 2006.
[17] J. McMillan and M. Rothschild. Search. In R. Aumann and S. Hart, editors, Handbook of Game Theory with Economic Applications, pages 905-927. 1994.
[18] J. M. McNamara and E. J. Collins. The job search problem as an employer-candidate game. Journal of Applied Probability, 27(4):815-827, 1990.
[19] P. Morgan. Search and optimal sample size. Review of Economic Studies, 50(4):659-675, 1983.
[20] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, 1982.
[21] D. Sarne and S. Kraus. Agents strategies for the dual parallel search in partnership formation applications. In Proc. of AMEC2004, LNCS 3435, pages 158-172, 2004.
[22] R. Shimer and L. Smith. Assortative matching and search. Econometrica, 68(2):343-370, 2000.
[23] K. Sycara, S. Widoff, M. Klusch, and J. Lu. Larks: Dynamic matchmaking among heterogeneous software agents in cyberspace. JAAMAS, 5:173-203, 2002.
[24] N. Tsvetovat, K. Sycara, Y. Chen, and J. Ying. Customer coalitions in electronic markets. In Proc. of AMEC2000, pages 121-138, 2000.
Distributed Management of Flexible Times Schedules

ABSTRACT

We consider the problem of managing schedules in an uncertain, distributed environment. We assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution. The goal is to maximize the joint quality obtained from the activities executed by all agents, given that, during execution, unexpected events will force changes to some prescribed activities and reduce the utility of executing others. We describe an agent architecture for solving this problem that couples two basic mechanisms: (1) a flexible times representation of the agent's schedule (using a Simple Temporal Network) and (2) an incremental rescheduling procedure. The former hedges against temporal uncertainty by allowing execution to proceed from a set of feasible solutions, and the latter acts to revise the agent's schedule when execution is forced outside of this set of solutions or when execution events reduce the expected value of this feasible solution set. Basic coordination with other agents is achieved simply by communicating schedule changes to those agents with inter-dependent activities. Then, as time permits, the core local problem solving infrastructure is used to drive an inter-agent option generation and query process, aimed at identifying opportunities for solution improvement through joint change. Using a simulator to model the environment, we compare the performance of our multi-agent system with that of an expected optimal (but non-scalable) centralized MDP solver.

1. INTRODUCTION

The practical constraints of many application environments require distributed management of executing plans and schedules.
Such factors as geographical separation of executing agents, limitations on communication bandwidth, constraints relating to chain of command and the high tempo of execution dynamics may all preclude any single agent from obtaining a complete global view of the problem, and hence necessitate collaborative yet localized planning and scheduling decisions. In this paper, we consider the problem of managing and executing schedules in an uncertain and distributed environment as defined by the DARPA Coordinators program. We assume a team of collaborative agents, each responsible for executing a portion of a globally pre-established schedule, but none possessing a global view of either the problem or solution. The team goal is to maximize the total quality of all activities executed by all agents, given that unexpected events will force changes to pre-scheduled activities and alter the utility of executing others as execution unfolds. To provide a basis for distributed coordination, each agent is aware of dependencies between its scheduled activities and those of other agents. Each agent is also given a pre-computed set of local contingency (fall-back) options.

Central to our approach to solving this multi-agent problem is an incremental flexible-times scheduling framework. In a flexible-times representation of an agent's schedule, the execution intervals associated with scheduled activities are not fixed, but instead are allowed to float within imposed time and activity sequencing constraints. This representation allows the explicit use of slack as a hedge against simple forms of executional uncertainty (e.g., activity durations), and its underlying implementation as a Simple Temporal Network (STN) model provides efficient updating and consistency enforcement mechanisms. The advantages of flexible times frameworks have been demonstrated in various centralized planning and scheduling contexts (e.g., [12, 8, 9, 10, 11]).
However, their use in distributed problem solving settings has been quite sparse ([7] is one exception), and prior approaches to multi-agent scheduling (e.g., [6, 13, 5]) have generally operated with fixed-times representations of agent schedules.

We define an agent architecture centered around incremental management of a flexible times schedule. The underlying STN-based representation is used (1) to loosen the coupling between executor and scheduler threads, (2) to retain a basic ability to absorb unexpected executional delays (or speedups), and (3) to provide a basic criterion for detecting the need for schedule change. Local change is accomplished by an incremental scheduler, designed to maximize quality while attempting to minimize schedule change.

484 978-81-904262-7-5 (RPS) c 2007 IFAAMAS

Figure 1: A two agent C TAEMS problem.

To this schedule management infrastructure, we add two mechanisms for multi-agent coordination. Basic coordination with other agents is achieved by simple communication of local schedule changes to other agents with interdependent activities. Layered over this is a non-local option generation and evaluation process (similar in some respects to [5]), aimed at identification of opportunities for global improvement through joint changes to the schedules of multiple agents. This latter process uses analysis of detected conflicts in the STN as a basis for generating options.

The remainder of the paper is organized as follows. We begin by briefly summarizing the general distributed scheduling problem of interest in our work. Next, we introduce the agent architecture we have developed to solve this problem and sketch its operation. In the following sections, we describe the components of the architecture in more detail, considering in turn issues relating to executing agent schedules, incrementally revising agent schedules and coordinating schedule changes among multiple agents.
We then give some experimental results to indicate current system performance. Finally, we conclude with a brief discussion of current research plans.

2. THE COORDINATORS PROBLEM

As indicated above, the distributed schedule management problem that we address in this paper is that put forth by the DARPA Coordinators program. The Coordinators problem is concerned generally with the collaborative execution of a joint mission by a team of agents in a highly dynamic environment. A mission is formulated as a network of tasks, which are distributed among the agents by the MASS simulator such that no agent has a complete, objective view of the whole problem. Instead, each agent receives only a subjective view containing just the portion of the task network that relates to ground tasks that it is responsible for and any remote tasks that have interdependencies with these local tasks. A pre-computed initial schedule is also distributed to the agents, and each agent's schedule indicates which of its local tasks should be executed and when. Each task has an associated quality value which accrues if it is successfully executed within its constraints, and the overall goal is to maximize the quality obtained during execution.

Figure 2: Subjective view for Agent 2.

As execution proceeds, agents must react to unexpected results (e.g., task delays, failures) and changes to the mission (e.g., new tasks, deadline changes) generated by the simulator, recognize when scheduled tasks are no longer feasible or desirable, and coordinate with each other to take corrective, quality-maximizing rescheduling actions that keep execution of the overall mission moving forward.

Problems are formally specified using a version of the TAEMS language (Task Analysis, Environment Modeling and Simulation) [4] called C TAEMS [1]. Within C TAEMS, tasks are represented hierarchically, as shown in the example in Figure 1.
At the highest, most abstract level, the root of the tree is a special task called the task group. On successive levels, tasks constitute aggregate activities, which can be decomposed into sets of subtasks and/or primitive activities, termed methods. Methods appear at the leaf level of C TAEMS task structures and are those that are directly executable in the world. Each declared method m can only be executed by a specified agent (denoted by ag: AgentN in Figure 1), and each agent can be executing at most one method at any given time (i.e., agents are unit-capacity resources). Method durations and quality are typically specified as discrete probability distributions, and hence known with certainty only after they have been executed.1 It is also possible for a method to fail unexpectedly in execution, in which case the reported quality is zero.

For each task, a quality accumulation function qaf is defined, which specifies when and how a task accumulates quality as its subtasks (methods) are executed. For example, a task with a min qaf will accrue the quality of its child with lowest quality if all its children execute and accumulate positive quality. Tasks with sum or max qafs acquire quality as soon as one child executes with positive quality; as their qaf names suggest, their respective values ultimately will be the total or maximum quality of all children that executed. A sync-sum task will accrue quality only for those children that commence execution concurrently with the first child that executes, while an exactly-one task accrues quality only if precisely one of its children executes.

Inter-dependencies between tasks/methods in the problem are modeled via non-local effects (nles). Two types of nles can be specified: hard and soft.

1 For simplicity, Figures 1 and 2 show only fixed values for method quality and duration.

Hard nles express
causal preconditions: for example, the enables nle in Figure 1 stipulates that the target method M5 cannot be executed until the source M4 accumulates quality. Soft nles, which include facilitates and hinders, are not required constraints; however, when they are in play, they amplify (or dampen) the quality and duration of the target task.

Any given task or method a can also be constrained by an earliest start time and a deadline, specifying the window in which a can be feasibly executed. a may also inherit these constraints from ancestor tasks at any higher level in the task structure, and its effective execution window will be defined by the tightest of these constraints.

Figure 1 shows the complete objective view of a simple 2-agent problem. Figure 2 shows the subjective view available to agent 2 for the same problem. In what follows, we will sometimes use the term activity to refer generically to both task and method nodes.

3. OVERVIEW OF APPROACH

Our solution framework combines two basic principles for coping with the problem of managing multi-agent schedules in an uncertain and time-stressed execution environment. First is the use of an STN-based flexible times representation of solution constraints, which allows execution to be driven by a set of schedules rather than a single point solution. This provides a basic hedge against temporal uncertainty and can be used to modulate the need for solution revision. The second principle is to first respond locally to exceptional events, and then, as time permits, explore non-local options (i.e., options involving change by 2 or more agents) for global solution improvement. This provides a means for keeping pace with execution, and for tying the amount of effort spent in more global multi-agent solution improvement to the time available.
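As a toy illustration of the quality accumulation semantics described in Section 2, the following sketch combines child quality values under a qaf. It is a hypothetical helper, not part of the Coordinators implementation; children that did not execute are represented by quality 0, and the timing-dependent sync-sum qaf is omitted since it depends on start times.

```python
def accumulate(qaf, child_qualities):
    # Illustrative sketch of C TAEMS-style quality accumulation.
    # child_qualities: one value per child; 0 means the child did not
    # execute (or failed, reporting zero quality).
    executed = [q for q in child_qualities if q > 0]
    if qaf == "min":
        # Accrues only if ALL children executed with positive quality.
        if child_qualities and len(executed) == len(child_qualities):
            return min(child_qualities)
        return 0
    if qaf == "sum":
        # Quality as soon as one child executes; value is the total.
        return sum(executed)
    if qaf == "max":
        # Value is the maximum quality among executed children.
        return max(executed, default=0)
    if qaf == "exactly-one":
        # Accrues only if precisely one child executed.
        return executed[0] if len(executed) == 1 else 0
    raise ValueError("unknown qaf: %s" % qaf)
```

For instance, a min task over children with qualities [3, 0, 5] yields 0 (one child never executed), while a sum task over the same children yields 8.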
Both local and non-local problem solving time is further minimized by the use of a core incremental scheduling procedure.

Figure 3: Agent Architecture.

Our solution framework is made concrete in the agent architecture depicted in Figure 3. In its most basic form, an agent comprises four principal components: an Executor, a Scheduler, a Distributed State Manager (DSM), and an Options Manager, all of which share a common model of the current problem and solution state that couples a domain-level representation of the subjective C TAEMS task structure to an underlying STN. At any point during operation, the currently installed schedule dictates the timing and sequence of domain-level activities that will be initiated by the agent. The Executor, running in its own thread, continually monitors the enabling conditions of various pending activities, and activates the next pending activity as soon as all of its causal and temporal constraints are satisfied.

When execution results are received back from the environment (MASS) and/or changes to assumed external constraints are received from other agents, the agent's model of current state is updated. In cases where this update leads to inconsistency in the STN or it is otherwise recognized that the current local schedule might now be improved, the Scheduler, running on a separate thread, is invoked to revise the current solution and install a new schedule. Whenever local schedule constraints change, either in response to a current state update or through manipulation by the Scheduler, the DSM is invoked to communicate these changes to interested agents (i.e., those agents that share dependencies and have overlapping subjective views).

After responding locally to a given state update and communicating consequences, the agent will use any remaining computation time to explore possibilities for improvement through joint change.
The Option Manager utilizes the Scheduler (in this case in hypothetical mode) to generate one or more non-local options, i.e., identifying changes to the schedule of one or more other agents that will enable the local agent to raise the quality of its schedule. These options are formulated and communicated as queries to the appropriate remote agents, who in turn hypothetically evaluate the impact of proposed changes from their local perspective. In those cases where global improvement is verified, joint changes are committed to.

In the following sections we consider the mechanics of these components in more detail.

4. THE SCHEDULER

As indicated above, our agent scheduler operates incrementally. Incremental scheduling frameworks are ideally suited for domains requiring tight scheduler-execution coupling: rather than recomputing a new schedule in response to every change, they respond quickly to execution events by localizing changes and making adjustments to the current schedule to accommodate the event. There is an inherent bias toward schedule stability, which provides better support for continuity in execution. This latter property is also advantageous in multi-agent settings, since solution stability tends to minimize the ripple across different agents' schedules.

The coupling of incremental scheduling with flexible times scheduling adds additional leverage in an uncertain, multi-agent execution environment. As mentioned earlier, slack can be used as a hedge against uncertain method execution times. It also provides a basis for softening the impact of inter-dependencies across agents.

In this section, we summarize the core scheduler that we have developed to solve the Coordinators problem.
In subsequent sections we discuss its use in managing execution and coordinating with other agents.

4.1 STN Solution Representation

To maintain the range of admissible values for the start and end times of various methods in a given agent's schedule, all problem and scheduling constraints impacting these
Otherwise a conflict has\nbeen detected, and some amount of constraint retraction is\nnecessary to restore feasibility.\n4.2 Maintaining High-Quality Schedules\nThe scheduler consists of two basic components: a quality\npropagator and an activity allocator that work in a tightly\nintegrated loop. The quality propagator analyzes the activity\nhierarchy and collects a set of methods that (if scheduled)\nwould maximize the quality of the agent\"s local problem.\nThe methods are collected without regard for resource\ncontention; in essence, the quality propagator optimally solves\na relaxed problem where agents are capable of performing\nan infinite number of activities at once. The allocator\nselects methods from this list and attempts to install them in\nthe agent\"s schedule. Failure to do so reinvokes the quality\npropagator with the problematic activity excluded.\nThe Quality Propagator - The quality propagator\nperforms the following actions on the C TAEMS task structure:\n\u2022 Computes the quality of all activities in the task\nstructure: The expected quality qual(m) of a method m is\ncomputed from the probability distribution of the\nexecution outcomes. The quality qual(t) of a task t is\ncomputed by applying its qaf to the assessed quality\nof its children.\n\u2022 Generates a list of contributors for each task: methods\nthat, if scheduled, will maximize the quality obtained\nby the task.\n\u2022 Generates a list of activators for each task: methods\nthat, if scheduled, are sufficient to qualify the task as\nscheduled. 
Methods in the activators list are chosen to minimize demands on the agent's timeline, without regard to quality.

The first time the quality propagator is invoked, the qualities of all tasks and methods are calculated and the initial lists of contributors and activators are determined. Subsequent calls to the propagator occur as the allocator installs methods on the agent's timeline: failure of the allocator to install a method causes the propagator to recompute a new list of contributors and activators.

The Activity Allocator - The activity allocator seeks to install the contributors of the taskgroup identified by the quality propagator onto the agent's timeline. Any currently scheduled methods that do not appear in the contributors list are first unscheduled and removed from the timeline. The contributors are then preprocessed using a quality-centric heuristic to create an agenda sorted in decreasing quality order. In addition, methods associated with an and task (i.e., min, sum_and) are grouped consecutively within the agenda. Since an and task accumulates quality only if all its children are scheduled, this biases the scheduling process towards failing early (and regenerating contributors) when the methods chosen for the and task cannot all be allocated together.

The allocator iteratively pops the first method mnew from the agenda and attempts to install it. This entails first checking that all activities that enable mnew have been scheduled, while attempting to install any enabler that is not. If any of the enabler activities fails to install, the allocation pass fails. When successful, the enables constraints linking the enabler activities to mnew are activated. The STN rejects an infeasible enables constraint by returning a conflict; in this event, any enabler activities it has scheduled are uninstalled and the allocator returns failure.
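The bottom-up quality computation the propagator performs can be sketched as follows; this is an illustration, not the paper's implementation, and the qaf set shown (min, max, sum) is a simplified subset of C TAEMS:

```python
# A leaf (method) carries an outcome distribution; an interior node (task)
# carries a qaf and children. Quality propagates bottom-up through the tree.

def expected_quality(outcomes):
    """outcomes: list of (probability, quality) pairs for a method."""
    return sum(p * q for p, q in outcomes)

QAFS = {
    "min": min,   # an 'and' task: limited by its weakest child
    "max": max,   # an 'or'-like task: its best child suffices
    "sum": sum,   # accumulates quality over scheduled children
}

def qual(node):
    if "outcomes" in node:                       # leaf: a method
        return expected_quality(node["outcomes"])
    child_quals = [qual(c) for c in node["children"]]
    return QAFS[node["qaf"]](child_quals)

task = {"qaf": "min", "children": [
    {"outcomes": [(0.5, 10), (0.5, 20)]},        # expected quality 15
    {"qaf": "sum", "children": [
        {"outcomes": [(1.0, 8)]},
        {"outcomes": [(1.0, 4)]},
    ]},                                          # sum = 12
]}
# qual(task) applies min(15, 12) at the root
```

A real propagator would also record, per task, which children achieve these values, yielding the contributors and activators lists described above.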
Once scheduling of enablers is ensured, a feasible slot on the agent's timeline within mnew's time window is sought, and the allocator attempts to insert mnew between two currently scheduled methods. At the STN level, mnew's insertion breaks the sequencing constraint between the two extant timeline methods and attempts to insert two new sequencing constraints that chain mnew to these methods. If these insertions succeed, the routine returns success; otherwise the two extant timeline methods are relinked and allocation attempts the next possible slot for mnew's insertion.

5. THE DYNAMICS OF EXECUTION

Maintaining a flexible-times schedule enables us to use a conflict-driven approach to schedule repair: rather than reacting to every execution event that may impact the existing schedule by computing an updated solution, the STN can absorb any change that does not cause a conflict. Consequently, computation (producing a new schedule) and communication costs (informing other agents of changes that affect them) are minimized.

One basic mechanism needed to model execution in the STN is a dynamic model of current time. We employ a model proposed by [7] that establishes a 'current-time' time point and includes a link between it and the calendar-zero time point. As each method is scheduled, a simple precedence constraint between the current-time time point and the method is established. When the scheduler receives a current-time update, the link between calendar-zero and current-time is modified to reflect the new time, and the constraint propagates to all scheduled methods.

A second issue concerns synchronization between the executor and the scheduler, which run as producer and consumer of the schedule on different threads within a given agent. This coordination must be robust despite the fact that the executor needs to start methods for execution in real time even while the scheduler may be reassessing the schedule to maximize quality and/or transmitting a revised schedule. If, for example, the executor slates a method for execution based on current time while the scheduler is instantiating a revised schedule in which that method is no longer next-to-be-executed, an inconsistent state may arise within the agent architecture. This is addressed in part by introducing a freeze window: a short (and adjustable) time period beyond current time within which any activity slated as eligible to start in the current schedule cannot be rescheduled by the scheduler.

The scheduler is triggered in response to various environmental messages. Two classes of environmental message are discussed here as execution dynamics: 1) feedback as a result of method execution, both the agent's own and that of other agents, and 2) changes in the C TAEMS model corresponding to a set of simulator-directed evolutions of the problem and environment. Such messages are termed updates and are treated by the scheduler as directives to permanently modify parameters in its model. We discuss these update types in turn here, and defer until later the discussion of queries to the scheduler, a "what-if" mode initiated by a remote agent that is pursuing higher global quality.

Whether it is invoked via an update or a query, the scheduler's response is an option: essentially a complete schedule of activities the agent can execute, along with associated quality metrics. We define a local option as a valid schedule for an agent's activities which does not require change to any other agent's schedule.
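The freeze-window rule described above can be illustrated with a small sketch (the function name and data layout are assumptions for illustration, not the agent's actual API): anything slated to start within the freeze window is pinned, and only the rest of the timeline is open to revision.

```python
# Partition a timeline into the frozen prefix the scheduler must not touch
# and the revisable remainder, per the freeze-window rule.

def split_at_freeze(timeline, current_time, freeze):
    """timeline: list of (method, est) pairs sorted by earliest start time.
    Returns (frozen, revisable)."""
    cutoff = current_time + freeze
    frozen = [(m, est) for m, est in timeline if est < cutoff]
    revisable = [(m, est) for m, est in timeline if est >= cutoff]
    return frozen, revisable

timeline = [("M1", 12), ("M2", 15), ("M3", 40)]
frozen, revisable = split_at_freeze(timeline, current_time=10, freeze=8)
# M1 and M2 start before tick 18 and are frozen; M3 remains revisable
```

This is the guard that lets the executor safely start near-term methods while the scheduler reworks the rest of the schedule on another thread.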
The overarching design for handling execution dynamics aims at anytime scheduling behavior, in which a local option maximizing the local view of quality is returned quickly, possibly followed by globally higher-quality schedules that entail inter-agent coordination, if available scheduler cycles permit. As such, the default scheduling mode for updates is to seek the highest-quality local option according to the scheduler's search strategy, instantiate that option as the current schedule, and notify the executor of the revision.

5.1 Responding to Activity Execution

As suggested earlier, a committed schedule consists of a sequence of methods, each with a designated [est, lst] start-time window (as provided by the underlying STN representation). The executor is free to execute a method at any time within its start-time window, once any additional enabling conditions have been confirmed. These scheduled start-time windows are established using the expected duration of each scheduled method (derived from the associated method duration distributions during schedule construction). Of course, as execution unfolds, actual method durations may deviate from these expectations. In these cases, the flexibility retained in the schedule can be used to absorb some of this unpredictability and modulate invocation of a schedule revision process.

Consider the case of a method completion message, one of the environmental messages that can be communicated to the scheduler as an execution state update.
If the completion time is coincident with the expected duration (i.e., the method completes exactly as expected), then the scheduler's response is simply to mark it as 'completed', and the agent can proceed to communicate the time at which it has accumulated quality to any remote agents linked to this method.

However, if the method completes with a duration shorter than expected, a rescheduling action might be warranted. Posting the actual duration in the STN introduces no potential for conflict in this case, either with the latest start times (lsts) of local or remote methods that depend on this method as an enabler, or with successively scheduled methods on the agent's timeline. However, it may present an opportunity to exploit the unanticipated scheduling slack. The flexible-times representation afforded by the STN provides a quick means of assessing whether the next method on the timeline can begin execution immediately instead of waiting for its previously established earliest start time (est). If indeed the est of the next scheduled method can spring back to current time once the actual duration constraint is substituted for the expected duration constraint, then the schedule can be left intact and simply communicated back to the executor. If, alternatively, other problem constraints prevent this relaxation of the est, then there is forced idle time that may be exploited by revising the schedule, and the scheduler is invoked (always respecting the freeze period).

If the method completes later than expected, then there is no need for rescheduling under flexible-times scheduling unless 1) the method finishes later than the lst of the subsequent scheduled activity, or 2) it finishes later than its deadline.
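The early-completion decision above reduces to a simple test once propagation has run (the function and its arguments are hypothetical names for illustration): compare the successor's post-propagation earliest start time with the current time, and reschedule only if forced idle time remains.

```python
# After substituting the actual duration and propagating, the successor's est
# either springs back to current time (no action needed) or is held later by
# other constraints (forced idle time: worth invoking the scheduler).

def forced_idle(current_time, next_est_after_propagation):
    """Idle time the flexible-times schedule could not absorb."""
    return max(0, next_est_after_propagation - current_time)

# The successor sprang back fully: leave the schedule intact.
assert forced_idle(current_time=20, next_est_after_propagation=20) == 0
# A release time holds the successor at tick 23: 3 ticks of forced idle time.
assert forced_idle(current_time=20, next_est_after_propagation=23) == 3
```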
Thus we only invoke the scheduler if, upon posting the late finish in the STN, a constraint violation occurs. In the latter case no quality is accrued, and rescheduling is mandated even if there are no conflicts with subsequent scheduled activities.

Other execution status updates the agent may receive include:

• method start - If a method sent for execution is started within its [est, lst] window, the response is to mark it as "executing". A method cannot start earlier than when it is transmitted by the executor, but it is possible for it to start later than requested. If the posted start time causes an inconsistency in the STN (e.g., because the expected method duration can no longer be accommodated), the duration constraint in the STN is shortened based on the known distribution until either consistency is restored or rescheduling is mandated.

• method failure - Any method under execution may fail unexpectedly, garnering no quality for the agent. At this point rescheduling is mandated, as the method may enable other activities or significantly impact quality in the absence of local repair. Again, the executor will proceed with execution of the next method if its start time arrives before the revised schedule is committed, and the scheduler accommodates this by respecting the freeze window.

• current time advances - An update on current time may arrive either alone or as part of any of the previously discussed updates. If, when updating the current-time link in the STN (as described above), a conflict results, the execution state is inconsistent with the schedule. In this case, the scheduler proceeds as if execution were consistent with its expectations, subject to possible later updates.

5.2 Responding to Model Updates

The agent can also dynamically receive changes to its underlying C TAEMS model.
Dynamic revisions in the outcome distributions for methods already in an agent's subjective view may impact the assessed quality and/or duration values that shaped the current schedule. Similarly, dynamic revisions in the designated release times and deadlines for methods and tasks already in an agent's subjective view can invalidate an extant schedule or present opportunities to boost quality. It is also possible during execution to receive updates in which new methods, and possibly entire task structures, are given to the agent for inclusion in its subjective view. Model changes that involve temporal constraints are handled in much the same fashion as described for method starts and completions, i.e., rescheduling is required only when the posting of the revised constraints leads to an STN conflict. In the case of non-temporal model changes, rescheduling is currently always initiated.

6. INTER-AGENT COORDINATION

Having responded locally to an unexpected execution result or model change, it is necessary to communicate the consequences to agents with inter-dependent activities so that they can align their decisions accordingly. Responses that look good locally may have a sub-optimal global effect once alignments are made, and hence agents must have the ability to seek mutually beneficial joint schedule changes. In this section we summarize the coordination mechanisms provided in the agent architecture to address these issues.

6.1 Communicating Non-Local Constraints

A basic means of coordination with other agents is provided by the Distributed State Mechanism (DSM), which is responsible for communicating changes made to the model or schedule of a given agent to other interested agents. More specifically, the DSM of a given agent acts to push any changes made to the time bounds, quality, or status of a local task/method to all other agents that have that same task/method as a remote node in their subjective views.
A recipient agent treats any communicated changes as additional forms of updates, in this case updates that modify the current constraints associated with non-local (but inter-dependent) tasks or methods. These changes are handled identically to updates reflecting schedule execution results, potentially triggering the local scheduler if the need to reschedule is detected.

6.2 Generating Non-Local Options

As mentioned in the previous section, the agent's first response to any given query or update (whether from execution or from another agent) is to generate one or more local options. Such options represent local schedule changes that are consistent with all currently known constraints originating from other agents' schedules, and hence can be implemented without interaction with other agents. In many cases, however, a larger-scoped change to the schedules of two or more agents can produce a higher-quality response.

Exploration of opportunities for such coordinated action by two or more agents is the responsibility of the Options Manager. Running at lower priority than the Executor and Scheduler, the Options Manager initiates a non-local option generation and evaluation process in response to any local schedule change made by the agent, if computation time constraints permit. Generally speaking, a non-local option identifies certain relaxations (to one or more constraints imposed by methods that are scheduled by one or more remote agents) that enable the generation of a higher-quality local schedule. When found, a non-local option is used by a coordinating agent to formulate queries to the other involved agents in order to determine the impact of such constraint relaxations on their local schedules. If the combined quality change reported back from a set of one or more relevant queries is a net gain, then the issuing agent signals the other involved agents to commit to this joint set of schedule changes.
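The commit decision just described is, at its core, a net-gain test over the reported quality deltas. The sketch below is a toy illustration (names and numbers are assumptions); it mirrors the optimistic-synchronization example worked through below, where one agent gains 20 while its remote enabler loses 10:

```python
# A coordinating agent sums its own quality improvement with the deltas each
# queried remote agent reports for the requested relaxations, and commits to
# the joint change only on a net gain.

def should_commit(local_gain, remote_deltas):
    """local_gain: quality improvement of the issuing agent's new schedule.
    remote_deltas: per-agent quality changes reported back (negative = loss).
    """
    return local_gain + sum(remote_deltas) > 0

# Agent2's schedule improves by 20 (10 -> 30) if Agent1 schedules M4, which
# costs Agent1 10 (15 -> 5): net gain of 10, so the option is committed.
assert should_commit(local_gain=20, remote_deltas=[-10]) is True
assert should_commit(local_gain=20, remote_deltas=[-25]) is False
```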
The Options Manager currently employs two basic search strategies for generating non-local options, each exploiting the local scheduler in hypothetical mode.

Optimistic Synchronization - Optimistic synchronization is a non-local option generation strategy in which search is used to explore the impact on quality if optimistic assumptions are made about currently unscheduled remote enablers. More specifically, the strategy looks for would-be contributor methods that are currently unscheduled because one or more remote enabling (source) tasks or methods are not currently scheduled. For each such local method, the set of remote enablers is hypothetically activated, and the scheduler attempts to construct a new local schedule under these optimistic assumptions. If successful, a non-local option is generated, specifying the value of the new, higher-quality local schedule, the temporal constraints on the local target activity, and the set of must-schedule enabler activities that must be scheduled by remote agents in order to achieve this local quality. The needed queries requesting the quality impact of scheduling these activities are then formulated and sent to the relevant remote agents.

To illustrate, consider again the example in Figure 1. The maximum quality that Agent1 can contribute to the task group is 15 (by scheduling M1, M2 and M3). Assume that this is Agent1's current schedule. Given this state, the maximum quality that Agent2 can contribute to the task group is 10, and the total task group quality would then be 15 + 10 = 25. Using optimistic synchronization, Agent2 will generate a non-local option indicating that if M5 becomes enabled, both M5 and M6 would be scheduled, and the quality contributed by Agent2 to the task group would become 30. Agent2 sends a must-schedule M4 query to Agent1.
Because of the time window constraints, Agent1 must remove M3 from its schedule to get M4 on, resulting in a new, lower-quality schedule worth 5. However, when Agent2 receives this option response from Agent1, it determines that the total quality accumulated for the task group would be 5 + 30 = 35, a net gain of 10. Hence, Agent2 signals Agent1 to commit to this non-local option.

Conflict-Driven Relaxation - A second strategy for generating non-local options, referred to as conflict-directed relaxation, utilizes analysis of STN conflicts to identify and prioritize external constraints to relax in the event that a particular method that would increase local quality is found to be unschedulable. Recall that if a method cannot be feasibly inserted into the schedule, an attempt to do so will generate a negative cycle. Given this cycle, the mechanism proceeds in three steps. First, the constraints involved in the cycle are collected. Second, by virtue of the connections in the STN to the domain-level C TAEMS model, this set is filtered to identify the subset associated with remote nodes. Third, constraints in this subset are selectively retracted to determine whether STN consistency is restored. If successful, a non-local option is generated indicating which remote constraint(s) must be relaxed, and by how much, to allow installation of the new, higher-quality local schedule.

Figure 4: A high quality task is added to the task structure of Agent2.
Figure 5: If M4, M5 and M7 are scheduled, a conflict is detected by the STN.

To illustrate this strategy, consider Figure 5, where Agent1 has M1, M2 and M4 on its timeline, and therefore est(M4) = 21. Agent2 has M5 and M6 on its timeline, with est(M5) = 31 (M6 could be scheduled before or after M5). Suppose that Agent2 receives a new task M7 with deadline 55 (see Figure 4).
If Agent2 could schedule M7, the quality contributed by Agent2 to the task group would be 70. However, an attempt to schedule M7 together with M5 and M6 leads to a conflict, since est(M7) = 46, dur(M7) = 10 and lft(M7) = 55 (see Figure 5). Conflict-directed relaxation by Agent2 suggests relaxing lft(M4) by 1 tick to 30, and this query is communicated to Agent1. In fact, by retracting either method M1 or M2 from the schedule, this relaxation can be accommodated with no quality loss to Agent1 (due to the min qaf). Upon communication of this fact, Agent2 signals to commit.

7. EXPERIMENTAL RESULTS

An initial version of the agent described in this paper was developed in collaboration with SRI International and subjected to the independently conducted Coordinators programmatic evaluation. This evaluation involved over 2000 problem instances randomly generated by a scenario generator configured to produce scenarios of varying types.

Problem Class        Description                                          Agent Quality
OD     (390 probs)   'Only Dynamics'. No NLEs. Actual task duration       97.9%
                     & quality vary according to distribution.
INT    (360 probs)   'Interdependent'. Frequent & random NLEs             100%
                     (esp. facilitates).
CHAINS (360 probs)   Activities chained together via sequences of         99.5%
                     enables NLEs (1-4 chains/prob).
TT     (360 probs)   'Temporal Tightness'. Release-deadline windows       94.9%
                     preclude preferred high-quality (longest-
                     duration) tasks from all being scheduled.
SYNC   (360 probs)   Problems contain a range of different                97.1%
                     sync_sum tasks.
NTA    (360 probs)   'New Task Arrival'. The C TAEMS model is             99.0%
                     augmented with new tasks dynamically during
                     the run.
OVERALL (2190 probs)                                                      Avg: 98.1%
                                                                          Std dev: 6.96

Table 1: Performance of the year 1 agent over the Coordinators evaluation. 'Agent Quality' is the % of 'optimal' achieved within the six experiment classes.
These classes, summarized in Table 1, were designed to evaluate key aspects of a set of Coordinators distributed scheduling agents, such as their ability to handle unexpected execution results, chains of NLEs involving multiple agents, and effective scheduling of new activities that arise unexpectedly at some point during the problem run. Year 1 evaluation problems were constrained to be small enough (3-10 agents, 50-100 methods) that comparison against an optimal centralized solver was feasible. The evaluation team employed an MDP-based solver capable of unrolling the entire search space for these problems, choosing for an agent, at each execution decision point, the activity most likely to produce maximum global quality. This established a challenging benchmark for the distributed agent systems to compare against. The hardware configuration used by the evaluators instantiated and ran one agent per machine, dedicating a separate machine to the MASS simulator.

As reported in Table 1, the year 1 prototype agent clearly compares favorably to the benchmark on all classes, coming within 2% of the MDP optimal averaged over the entire set of 2190 problems. These results are particularly notable given that each agent's STN-based scheduler does very little reasoning over the success probability of the activity sequences it selects to execute. Only simple tactics were adopted to explicitly address such uncertainty, such as the use of expected durations and quality for activities, and a policy of excluding from consideration those activities with failure likelihood > 75%.
The very respectable agent performance can be at least partially credited to the fact that the flexible-times representation employed by the scheduler affords it an important buffer against the uncertainty of execution and exogenous events.

The agent turns in its lowest performance on the TT (Temporal Tightness) experiment class, and an examination of the agent trace logs reveals possible reasons. In about half of the TT problems the year 1 agent under-performs on, the specified time windows within which an agent's activities must be scheduled are so tight that any scheduled activity that executes with a longer duration than its expected value causes a deadline failure. This constitutes a case where more sophisticated reasoning over success probability would benefit the agent. The other half of the under-performing TT problems involve activities that depend on facilitation relationships in order to fit in their time windows (recall that facilitation increases quality and decreases duration). The limited facilitates reasoning performed by the year 1 scheduler sometimes causes failures to install a heavily facilitated initial schedule. Even when such activities are successfully installed, they tend to be prone to deadline failures: if a source-side activity fails or exceeds its expected duration, the resulting longer duration of the target activity can violate its time-window deadline.

8. STATUS AND DIRECTIONS

Our current research efforts are aimed at extending the capabilities of the year 1 agent and scaling up to significantly larger problems. Year 2 programmatic evaluation goals call for solving problems on the order of 100 agents and 10,000 methods. This scale places much higher computational demands on all of the agent's components.
We have recently completed a re-implementation of the prototype agent designed to address some recognized performance issues. In addition to verifying that performance on year 1 problems is matched or exceeded, we have recently run some successful tests with the agent on a few 100-agent problems.

To fully address various scale-up issues, we are investigating a number of more advanced coordination mechanisms. To provide a more global perspective to local scheduling decisions, we are introducing mechanisms for computing, communicating and using estimates of the non-local impact of remote nodes. To better address the problem of establishing inter-agent synchronization points, we are expanding the use of task owners and qaf-specific protocols as a means of directing coordination activity. Finally, we plan to explore the use of more advanced STN-driven coordination mechanisms, including the use of temporal decoupling [7] to insulate the actions of inter-dependent agents and the introduction of probability-sensitive contingency schedules.

9. ACKNOWLEDGEMENTS

The year 1 agent architecture was developed in collaboration with Andrew Agno, Roger Mailler and Regis Vincent of SRI International. This paper is based on work supported by the Department of Defense Advanced Research Projects Agency (DARPA) under Contract # FA8750-05-C0033. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA.

10. REFERENCES

[1] M. Boddy, B. Horling, J. Phelps, R. Goldman, R. Vincent, A. Long, and B. Kohout. C TAEMS language specification v. 1.06, October 2005.
[2] A. Cesta and A. Oddi. Gaining efficiency and flexibility in the simple temporal problem. In Proc. 3rd Int. Workshop on Temporal Representation and Reasoning, Key West, FL, May 1996.
[3] R. Dechter, I. Meiri, and J. Pearl. Temporal constraint networks. Artificial Intelligence, 49:61-95, May 1991.
[4] K.
Decker. TÆMS: A framework for environment centered analysis & design of coordination mechanisms. In G. O'Hare and N. Jennings, editors, Foundations of Distributed Artificial Intelligence, chapter 16, pages 429-448. Wiley Inter-Science, 1996.
[5] K. Decker and V. Lesser. Designing a family of coordination algorithms. In Proc. 1st Int. Conference on Multi-Agent Systems, San Francisco, 1995.
[6] A. J. Garvey. Design-To-Time Real-Time Scheduling. PhD thesis, Univ. of Massachusetts, Feb. 1996.
[7] L. Hunsberger. Algorithms for a temporal decoupling problem in multi-agent planning. In Proc. 18th National Conference on AI, 2002.
[8] S. Lemai and F. Ingrand. Interleaving temporal planning and execution in robotics domains. In Proc. 19th National Conference on AI, 2004.
[9] N. Muscettola, P. P. Nayak, B. Pell, and B. C. Williams. Remote Agent: To boldly go where no AI system has gone before. Artificial Intelligence, 103(1-2):5-47, 1998.
[10] W. Ruml, M. B. Do, and M. Fromherz. On-line planning and scheduling of high-speed manufacturing. In Proc. ICAPS-05, Monterey, 2005.
[11] I. Shu, R. Effinger, and B. Williams. Enabling fast flexible planning through incremental temporal reasoning with conflict extraction. In Proc. ICAPS-05, Monterey, 2005.
[12] S. Smith and C. Cheng. Slack-based heuristics for constraint satisfaction scheduling. In Proc. 12th National Conference on AI, Washington, DC, July 1993.
[13] T. Wagner, A. Garvey, and V. Lesser. Criteria-directed heuristic task scheduling. International Journal of Approximate Reasoning, 19(1):91-118, 1998.
Distributed Task Allocation in Social Networks

Abstract: This paper proposes a new variant of the task allocation problem, where the agents are connected in a social network and tasks arrive at the agents distributed over the network. We show that the complexity of this problem remains NP-hard; moreover, it is not approximable within some factor. We develop an algorithm based on the contract-net protocol. Our algorithm is completely distributed, and it assumes that agents have only local knowledge about tasks and resources. We conduct a set of experiments to evaluate the performance and scalability of the proposed algorithm in terms of solution quality and computation time. Three different types of networks, namely small-world, random and scale-free networks, are used to represent various social relationships among agents in realistic applications. The results demonstrate that our algorithm works well and that it scales well to large-scale applications.

1. INTRODUCTION

Recent years have seen a great deal of work on task and resource allocation methods, which can potentially be applied to many real-world applications. However, some interesting applications in which relations between agents play a role require a slightly more general model. Such situations appear very frequently in real-world scenarios, and recent technological developments are bringing more of them within the range of task allocation methods. Especially in business applications, preferential partner selection and interaction is very common, and this aspect becomes more important for task allocation research, to the extent that technological developments need to be able to support it.

For example, the development of semantic web and grid technologies has led to increased and renewed attention to the potential of the web to support business processes [7, 15].
As an example, virtual organizations (VOs) are being re-invented in the context of the grid, where they are composed of a number of autonomous entities (representing different individuals, departments and organizations), each of which has a range of problem-solving capabilities and resources at its disposal [15, p. 237]. The question is how VOs are to be dynamically composed and re-composed from individual agents when different tasks and subtasks need to be performed. This would be done by allocating them to different agents, each of whom may be capable of performing a different subset of those tasks. Similarly, supply chain formation (SCF) is concerned with the, possibly ad-hoc, allocation of services to providers in the supply chain, in such a way that overall profit is optimized [6, 21].

Traditionally, such allocation decisions have been analyzed using transaction cost economics (TCE) [4], which takes the transaction between consecutive stages of development as its basic unit of analysis, and considers the firm and the market as alternative structural forms for organizing transactions. (Transaction cost) economics has traditionally built on analysis of comparative statics: the central problem of economic organization is considered to be the adaptation of organizational forms to the characteristics of transactions. More recently, TCE's founding father, Ronald Coase, acknowledged that this is too simplistic an approach [5, p. 245]: "The analysis cannot be confined to what happens within a single firm. (...) What we are dealing with is a complex interrelated structure."

In this paper, we study the problem of task allocation from the perspective of such a complex interrelated structure. In particular, 'the market' cannot be considered as an organizational form without considering the specific partners to interact with on the market [11]. Specifically, therefore, we consider agents to be connected to each other in a social network.
Furthermore, this network is not fully connected: as informed by the business literature, firms typically have established working relations with limited numbers of preferred partners [10]; these are the ones they consider when new tasks arrive and they have to form supply chains to allocate those tasks [19].\n500 978-81-904262-7-5 (RPS) \u00a9 2007 IFAAMAS\nOther than modeling the interrelated structure between business partners, the social network introduced in this paper can also be used to represent other types of connections or constraints among autonomous entities that arise from other application domains.\nThe next section gives a formal description of the task allocation problem on social networks. In Section 3, we prove that the complexity of this problem remains NP-hard. We then proceed to develop a distributed algorithm in Section 4, and perform a series of experiments with this algorithm, as described in Section 5. Section 6 discusses related work, and Section 7 concludes.\n2. PROBLEM DESCRIPTION\nWe formulate the social task allocation problem in this section. There is a set A of agents: A = {a1, . . . , am}. Agents need resources to complete tasks. Let R = {r1, . . . , rk} denote the collection of the resource types available to the agents A. Each agent a \u2208 A controls a fixed amount of resources for each resource type in R, which is defined by a resource function: rsc : A \u00d7 R \u2192 N. Moreover, we assume agents are connected by a social network.\nDefinition 1 (Social network). An agent social network SN = (A, AE) is an undirected graph, where vertices A are agents, and each edge (ai, aj) \u2208 AE indicates the existence of a social connection between agents ai and aj.\nSuppose a set of tasks T = {t1, t2, . . . , tn} arrives at such an agent social network.
Each task t \u2208 T is then defined by a tuple \u27e8u(t), rsc(t), loc(t)\u27e9, where u(t) is the utility gained if task t is accomplished, and the resource function rsc : T \u00d7 R \u2192 N specifies the amount of resources required for the accomplishment of task t. Furthermore, a location function loc : T \u2192 A defines the locations (i.e., agents) at which the tasks arrive in the social network. An agent a that is the location of a task t, i.e. loc(t) = a, is called the manager of this task.\nEach task t \u2208 T needs some specific resources from the agents in order to complete the task. The exact assignment of tasks to agents is defined by a task allocation.\nDefinition 2 (Task allocation). Given a set of tasks T = {t1, . . . , tn} and a set of agents A = {a1, . . . , am} in a social network SN, a task allocation is a mapping \u03c6 : T \u00d7 A \u00d7 R \u2192 N. A valid task allocation in SN must satisfy the following constraints:\n\u2022 A task allocation must be correct. Each agent a \u2208 A cannot use more than its available resources, i.e. for each r \u2208 R, \u03a3t\u2208T \u03c6(t, a, r) \u2264 rsc(a, r).\n\u2022 A task allocation must be complete. For each task t \u2208 T, either all allocated agents\u2019 resources are sufficient, i.e. for each r \u2208 R, \u03a3a\u2208A \u03c6(t, a, r) \u2265 rsc(t, r), or t is not allocated, i.e. \u03c6(t, \u00b7, \u00b7) = 0.\n\u2022 A task allocation must obey the social relationships. Each task t \u2208 T can only be allocated to agents that are (direct) neighbors of agent loc(t) in the social network SN. Each such agent that can contribute to a task is called a contractor.\nWe write T\u03c6 to represent the tasks that are fully allocated in \u03c6. The utility of \u03c6 is then the summation of the utilities of each task in T\u03c6, i.e., U\u03c6 = \u03a3t\u2208T\u03c6 u(t). Using this notation, we define the efficient task allocation below.\nDefinition 3 (Efficient task allocation).
We say a task allocation \u03c6 is efficient if it is valid and U\u03c6 is maximized, i.e., U\u03c6 = max(\u03a3t\u2208T\u03c6 u(t)).\nWe are now ready to define the task allocation problem in a social network that we study in this paper.\nDefinition 4 (Social task allocation problem). Given a set of agents A connected by a social network SN = (A, AE), and a finite set of tasks T, the social task allocation problem (or STAP for short) is the problem of finding the efficient task allocation \u03c6, such that \u03c6 is valid and the social welfare U\u03c6 is maximized.\n3. COMPLEXITY RESULTS\nThe traditional task allocation problem, TAP (without the condition of the social network SN), is NP-complete [18], and the complexity comes from the fact that we need to evaluate the exponential number of subsets of the task set. Although we may consider the TAP as a special case of the STAP by assuming agents are fully connected, we cannot directly use the complexity results from the TAP, since we study the STAP in an arbitrary social network, which, as we argued in the introduction, should be partially connected. We now show that the TAP with an arbitrary social network is also NP-complete, even when the utility of each task is 1, and the quantity of all required and available resources is 1.\nTheorem 1. Given the social task allocation problem with an arbitrary social network, as defined in Definition 4, the problem of deciding whether a task allocation \u03c6 with utility more than k exists is NP-complete.\nProof. We first show that the problem is in NP. Given an instance of the problem and an integer k, we can verify in polynomial time whether an allocation \u03c6 is a valid allocation and whether the utility of \u03c6 is greater than k. We now prove that the STAP is NP-hard by showing that MAXIMUM INDEPENDENT SET \u2264P STAP.
Given an undirected graph G = (V, E) and an integer k, we construct a network G\u2032 = (V\u2032, E\u2032) which has an efficient task allocation with k tasks of utility 1 allocated if and only if G has an independent set (IS) of size k.\nAn instance of the following construction is shown in Figure 1. For each node v \u2208 V and each edge e \u2208 E in the graph G, we create a vertex agent av and an edge agent ae in G\u2032. When v is incident to e in G, we correspondingly add an edge e\u2032 in G\u2032 between av and ae. We assign each agent in G\u2032 one resource, which is related to the node or the edge in the graph G, i.e., for each v \u2208 V, rsc(av) = {v} (here we write rsc(a) and rsc(t) to represent the set of resources available to/required by a and t), and for each e \u2208 E, rsc(ae) = {e}. Each vertex agent avi in G\u2032 has a task ti that requires a set of neighboring resources ti = {vi} \u222a {e | e = (u, vi) \u2208 E}. There is no task on the edge agents in G\u2032. We define utility 1 for each task, and the quantity of all required and available resources to be 1.\n[Figure 1: The MIS problem can be reduced to the STAP. The left figure is an undirected graph G, which has the optimal solution {v1, v4} or {v2, v3}; the right figure is the constructed instance of the STAP, where the optimal allocation is {t1, t4} or {t2, t3}.]\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 501\nGiven an instance of the IS problem, suppose there is a solution of size k, i.e., a subset N \u2286 V such that no two vertices in N are joined by an edge in E and |N| = k. N specifies a set of vertex agents AN in the corresponding graph G\u2032.
Given two agents a1, a2 \u2208 AN, we now know that there is no edge agent ae connected to both a1 and a2. Thus, for each agent a \u2208 AN, a assigns its task to the edge agents which are connected to a. All other vertex agents a \u2209 AN are not able to assign their tasks, since the required resources of the edge agents are already used by the agents a \u2208 AN. The set of tasks of the agents AN (|AN| = k) is thus the maximum set of tasks that can be allocated. The utility of this allocation is k.\nSimilarly, if there is a solution for the STAP with the utility value k, and the allocated task set is N, then for the IS problem, there exists a maximum independent set N of size k in G. An example can be found in Figure 1.\nWe just proved that the STAP is NP-hard for an arbitrary graph. In our proof, the complexity comes from the introduction of a social network. One may expect that the complexity of this problem can be reduced for some networks where the number of neighbors of the agents is bounded by a fixed constant. We now give a complexity result on this class of networks as follows.\nTheorem 2. Let the number of neighbors of each agent in the social network SN be bounded by \u0394 for \u0394 \u2265 3. Computing the efficient task allocation given such a network is NP-complete. In addition, it is not approximable within \u0394\u03b5 for some \u03b5 > 0.\nProof. It has been shown in [2] that the maximum independent set problem in the case of the degree bounded by \u0394 for \u0394 \u2265 3 is NP-complete and is not approximable within \u0394\u03b5 for some \u03b5 > 0. Using a similar reduction as in the proof of Theorem 1, this result also holds for the STAP. Since our problem is as hard as MIS as shown in Theorem 1, it is not possible to give a worst case bound better than \u0394\u03b5 for any polynomial time algorithm, unless P = NP.\n4.
ALGORITHMS\nTo deal with the problem of allocating tasks in a social network, we present a distributed algorithm. We introduce this algorithm by describing the protocol for the agents. After that we give the optimal, centralized algorithm and an upper bound algorithm, which we use in Section 5 to benchmark the quality of our distributed algorithm.\n4.1 Protocol for distributed task allocation\nAlgorithm 1 Greedy distributed allocation protocol (GDAP).\nEach manager a calculates the efficiency e(t) for each of their tasks t \u2208 Ta, and then while Ta \u2260 \u2205:\n1. Each manager a selects the most efficient task t \u2208 Ta such that for each task t\u2032 \u2208 Ta: e(t\u2032) \u2264 e(t).\n2. Each manager a requests help for t from all its neighbors by informing these neighbors of the efficiency e(t) and the required resources for t.\n3. Contractors receive and store all requests, and then offer all relevant resources to the manager for the task with the highest efficiency.\n4. The managers that have received sufficient offers allocate their tasks, and inform each contractor which part of the offer is accepted. When a task is allocated, or when a manager has received offers from all neighbors, but still cannot satisfy its task, the task is removed from the task list Ta.\n5. Contractors update their used resources.\nWe can summarize the description of the task allocation problem in social networks from Section 2 as follows. We have a (social) network of agents. Each agent has a set of resources of different types at its disposal. We also have a set of tasks. Each task requires some resources, has a fixed benefit, and is located at a certain agent. This agent is called a manager. We only allow neighboring agents to help with a task. These agents are called contractors. Agents can fulfill the role of manager as well as contractor.
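The three validity constraints of Definition 2 translate directly into code. The following is a minimal sketch (not from the paper); the data layout — plain dictionaries for resources, locations, neighbors, and the allocation \u03c6 — is an illustrative assumption:

```python
# Hypothetical data model for the STAP (Definition 2); names are illustrative.
# rsc_agent[a][r]: resources of type r held by agent a
# rsc_task[t][r]: resources of type r required by task t
# loc[t]: manager agent of task t; neighbors[a]: agents adjacent to a in SN
# phi[(t, a, r)]: amount of resource r that agent a contributes to task t

def is_valid(phi, rsc_agent, rsc_task, loc, neighbors):
    # Correctness: no agent spends more of a resource than it owns.
    for a in rsc_agent:
        for r in rsc_agent[a]:
            spent = sum(q for (t2, a2, r2), q in phi.items() if a2 == a and r2 == r)
            if spent > rsc_agent[a][r]:
                return False
    # Completeness: every task is either fully resourced or untouched.
    for t in rsc_task:
        contributed = {r: sum(q for (t2, a2, r2), q in phi.items()
                              if t2 == t and r2 == r)
                      for r in rsc_task[t]}
        fully = all(contributed[r] >= rsc_task[t][r] for r in rsc_task[t])
        untouched = all(q == 0 for (t2, a2, r2), q in phi.items() if t2 == t)
        if not (fully or untouched):
            return False
    # Social constraint: contributors must be direct neighbors of the manager
    # (as in Definition 2; whether the manager itself may contribute is not
    # modeled here).
    for (t, a, r), q in phi.items():
        if q > 0 and a not in neighbors[loc[t]]:
            return False
    return True
```

A checker like this is useful for testing any allocation procedure: an allocation produced by whatever protocol can be passed through `is_valid` before its utility is credited.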
The problem is to find out which tasks to execute, and which resources of which contractors to use for these tasks.\nThe idea of the protocol is as follows. All manager agents a \u2208 A try to find neighboring contractors to help them with their task(s) Ta = {ti \u2208 T | loc(ti) = a}. They start with offering the task that is most efficient in terms of the ratio between benefit and required resources. Out of all tasks offered, contractors select the task with the highest efficiency, and send a bid to the related manager. A bid consists of all the resources the agent is able to supply for this task. If sufficient resources have been offered, the manager selects the required resources and informs all contractors of its choice. The efficiency of a task is defined as follows:\nDefinition 5. The efficiency e of a task t \u2208 T is defined by the utility of this task divided by the sum of all required resources: e(t) = u(t) / \u03a3r\u2208R rsc(t, r).\nA more detailed description of this protocol can be found in Algorithm 1. Here it is also defined how to determine when a task should not be offered anymore, because it is impossible to fulfill locally. Obviously, a task is also not offered anymore when it has been allocated. This protocol is such that, when no two tasks have exactly the same efficiency, in every iteration at least one task is removed from a task list.\u00b9 From this the computation and communication property of the algorithm follows.\nProposition 1. For a STAP with n tasks and m agents, the run time of the distributed algorithm is O(nm), and the number of communication messages is O(n\u00b2m).\n\u00b9 Even when some tasks have the same efficiency, it is straightforward to make this result work. For example, the implementation can ensure that the contractors choose the task with the lowest task-id.
Proof. In the worst case, in each iteration exactly one task is removed from a task list, so there are n iterations. In each iteration in the worst case (i.e., a fully connected network), for each of the O(n) managers, O(m) messages are sent. Next the task with the highest efficiency can be selected by each contractor in O(n). Assigning an allocation can be done in O(m). This leads to a total of O(n + m) operations for each iteration, and thus O(n\u00b2 + nm) operations in total. The number of messages sent is O(n(nm + nm + nm)) = O(n\u00b2m).\nAlgorithm 2 Optimal social task allocation (OPT).\nRepeat the following for each combination of tasks:\n1. If the total reward for this combination is higher than any previous combination, test if this combination is feasible as follows:\n2. Create a network flow problem for each resource type r \u2208 R (separately) as follows:\n(a) Create a source s and a sink s\u2032.\n(b) For each agent a \u2208 A create an agent node and an edge from s to this node with capacity equal to the amount of resources of type r agent a has.\n(c) For each task t \u2208 T create a task node and an edge from this node to s\u2032 with capacity equal to the amount of resources of type r task t requires.\n(d) For each agent a connect the agent node to all task nodes of neighboring tasks, i.e., t \u2208 {t \u2208 T | (a, loc(t)) \u2208 AE}. Give this connection unlimited capacity.\n3. Solve the maximum flow problem for the created flow networks. If the maximum flow in each network is equal to the total required resources of that type, the current combination of tasks is feasible. In that case, this is the current best combination of tasks.\nWe establish the quality of this protocol experimentally (in Section 5).
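As a concrete illustration of Algorithm 1, the greedy protocol can be collapsed into a sequential pass that processes tasks in decreasing efficiency order, with each manager drawing resources from its neighbors. This sketch is not the authors' Java implementation, and the dictionary-based data layout is an assumption for illustration; it approximates the distributed rounds rather than reproducing the message exchange:

```python
# Sequential sketch of the GDAP (not the authors' code). Tasks are handled in
# decreasing efficiency; commitments mutate the agents' resources in place,
# mirroring step 5 of Algorithm 1 ("contractors update their used resources").

def efficiency(task):
    # Definition 5: e(t) = u(t) / sum of all required resources.
    return task["u"] / sum(task["rsc"].values())

def gdap(agents, tasks, neighbors):
    """agents: {agent: {rtype: amount}}; tasks: {t: {"u", "rsc", "loc"}};
    neighbors: {agent: iterable of neighboring agents}.
    Returns (total_utility, list of allocated task names)."""
    total, allocated = 0, []
    for name in sorted(tasks, key=lambda t: efficiency(tasks[t]), reverse=True):
        task = tasks[name]
        need = dict(task["rsc"])          # resources still missing
        plan = []                         # tentative offers: (contractor, rtype, amount)
        for a in neighbors[task["loc"]]:  # only direct neighbors may contribute
            for r in list(need):
                give = min(agents[a].get(r, 0), need[r])
                if give > 0:
                    plan.append((a, r, give))
                    need[r] -= give
        if all(v == 0 for v in need.values()):
            # Sufficient offers: commit the allocation.
            for a, r, give in plan:
                agents[a][r] -= give
            total += task["u"]
            allocated.append(name)
        # Otherwise the task is removed from the list unallocated.
    return total, allocated
```

On a toy instance with one manager and two neighbors, the higher-efficiency task is allocated first and may starve a later, less efficient task of resources, which is exactly the greedy behavior discussed in the experiments.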
Preferably, we compare the results to the optimal solution.\n4.2 Optimal social task allocation\nThe optimal task allocation algorithm should deal with the restrictions posed by the social network. For this NP-complete problem we used an exponential brute-force algorithm to consider relevant combinations of tasks to execute. For each combination we use a maximum-flow algorithm to check whether the resources are sufficient for the selected subset of tasks. The flow network describes which resources can be used for which tasks, depending on the social network. If the maximum flow is equal to the sum of all resources required by the subset of tasks, we know that a feasible solution exists (see Algorithm 2). Clearly, we cannot expect this optimal algorithm to be able to find solutions for larger problem sizes. To establish the quality of our protocol for large instances, we use the following method to determine an upper bound.\n4.3 Upper bound for social task allocation\nGiven a social task allocation problem, if the number of resource types for every task t \u2208 T is bounded by 1, the problem is polynomially solvable by transforming it to a flow network problem. Our method for efficiently calculating an upper bound for the STAP makes use of this special case by converting any given STAP instance P into a new problem P\u2032 where each task only has one resource type.\nMore specifically, for every task t \u2208 T with utility u(t), we do the following. Let m be the number of resource types {r1, . . . , rm} required by t. We then split t into a set of m tasks {t1, . . . , tm} where each task ti only has one unique resource type (of {r1, . . . , rm}) and each task has a fair share of the utility, i.e., the efficiency of t from Definition 5 times the amount of this resource type rsc(ti, ri). After polynomially performing this conversion for every task in T, the original problem P becomes the special case P\u2032. Note that the set of valid allocations in P is only a subset of the set of valid allocations in P\u2032, because it is now possible to partially allocate a task. From this it is easy to see that the solution of P\u2032 gives an upper bound of the solution of the original problem P.\nTo compute the optimal solution for P\u2032, we transform it to a minimum cost flow problem. We model the cost in the flow network by the negation of the new task\u2019s utility. A polynomial-time implementation of a scaling minimum cost flow algorithm [9] is used for the computation. The resulting minimum cost flow represents a maximum allocation of the tasks for P\u2032. The detailed modeling is described in Algorithm 3.\nAlgorithm 3 An upper bound for social task allocation (UB).\nCreate a network flow problem with costs as follows:\n1. Create a source s and a sink s\u2032.\n2. For each agent a \u2208 A and each resource type ri \u2208 R, create an agent-resource node ai, and an edge from s to this node with capacity equal to the amount of resources of type ri agent a has available and with costs 0.\n3. For each task t \u2208 T and each resource type ri \u2208 R, create a task-resource node ti, and an edge from this node to s\u2032 with capacity equal to the amount of resources of type ri task t requires and costs \u2212e(t).\n4. For each resource type ri \u2208 R and for each agent a, connect the agent-resource node ai to all task-resource nodes ti for neighboring tasks t \u2208 {t \u2208 T | (a, loc(t)) \u2208 AE or a = loc(t)}. Give this connection unlimited capacity and zero costs.\n5. Create an edge directly from s to s\u2032 with unlimited capacity and zero costs.\nSolve the minimum cost flow problem for this network. The cost of the resulting flow is an upper bound for the social task allocation problem.\nIn the next section, we use this upper bound to estimate the quality of the GDAP for large-scale instances.\n5.
EXPERIMENTS\nWe implemented the greedy distributed allocation protocol (GDAP), the optimal allocation algorithm (OPT), and the upper bound algorithm (UB) in Java, and tested them on a Linux PC. The purpose of these experiments is to study the performance of the distributed algorithm in different problem settings using different social networks. The performance measurements are the solution quality and computation time, where the solution quality (SQ) is computed as follows. When the number of tasks is small, we compare the output of the distributed algorithm with the optimal solution, i.e., SQ = GDAP/OPT, but if it is not feasible to compute the optimal solution, we use the value returned by the upper bound algorithm for evaluation, i.e., SQ = GDAP/UB. To see whether the latter is a good measure, we also compare the quality of the upper bound to the optimal solution for smaller problems. In the following, we describe the setup of all experiments, and present the results.\n[Figure 2: The solution qualities of the GDAP and the upper bound depend on the resource ratio.]\n[Figure 3: The histogram of the degrees.]\n5.1 Experimental settings\nWe consider several experimental environments. In all environments the agents are connected by a social network. In the experiments, three different networks are used to simulate the social relationships among agents in potential real-world problems.\nSmall-world networks are networks where most neighbors of an agent are also connected to each other.
For the experiments we use a method for generating random small-world networks proposed by Watts et al. [22], with a fixed rewiring probability p = 0.05.\nScale-free networks have the property that a couple of nodes have many connections, and many nodes have only a small number of connections. To generate these we use the implementation in the JUNG library of the generator proposed by Barab\u00e1si and Albert [3].\nWe also generate random networks as follows. First we connect each agent to another agent such that all agents are connected. Next, we randomly add connections until the desired average degree has been reached.\nWe now describe the different settings used in our experiments with both small and large-scale problems.\nSetting 1. The number of agents is 40, and the number of tasks is 20. The number of different resource types is bounded by 5, and the average number of resources required by a task is 30. Consequently, the total number of resources required by the tasks is fixed. However, the resources available to the agents are varied. We define the resource ratio to refer to the ratio between the total number of available resources and the total number of required resources. Resources are allocated uniformly to the agents. The average degrees of the networks may also change. In this setting the task benefits are distributed normally around the number of resources required.\nSetting 2. This setting is similar to Setting 1, but here we let the benefits of the tasks vary dramatically: 40% of the tasks have around 10 times higher benefit than the other 60% of the tasks.\nSetting 3. This setting is for large-scale problems. The ratio between the number of agents and the number of tasks is set to 5/3, and the number of agents varies from 100 to 2000. We also fix the resource ratio to 1.2 and the average degree to 6. The number of different resource types is 20, and the average resource requirement of a task is 100.
The task benefits are again normally distributed.\n5.2 Experimental results\nThe experiments are done with the three different settings in the three different networks mentioned before, where each recorded data point is the average over 20 random instances.\n5.2.1 Experiment 1\nExperimental setting 1 is used for this set of experiments. We would like to see how the GDAP behaves in the different networks when the number of resources available to the agents is changing. We also study the behavior of our upper bound algorithm. For this experiment we fix the average number of neighbors (degree) in each network type to six.\nIn Figure 2 we see how the quality of both the upper bound and the GDAP algorithm depends on the resource ratio. Remarkably, for lower resource ratios our GDAP is much closer to the optimal allocation than the upper bound. When the resource ratio grows above 1.5, the graphs of the upper bound and the GDAP converge, meaning that both are really close to the optimal solution. This can be explained by the fact that when plenty of resources are available, all tasks can be allocated without any conflicts. However, when resources are very scarce, the upper bound is much too optimistic, because it is based on the allocation of sub-tasks per resource type, and does not reason about how many of the tasks can actually be allocated completely. We also notice from the graph that the solution quality of the GDAP on all three networks is quite high (over 0.8) when the available resources are very limited (resource ratio 0.3). It drops below 0.8 as the ratio increases and goes up again once there are plenty of resources available (resource ratio above 0.9). Clearly, if
resources are really scarce, only a few tasks can be successfully allocated even by the optimal algorithm. Therefore, the GDAP is able to give quite a good allocation.\nAlthough the differences are minor, it can also be seen that the results for the small-world network are consistently slightly better than those of random networks, which in turn outperform scale-free networks. This can be understood by looking at the distribution of the agents\u2019 degree, as shown in Figure 3. In this experiment, in the small-world network almost every manager has a degree of six. In random networks, the degree varies between one and about ten. However, in the scale-free network, most nodes have only three or four connections, and only a very few have up to twenty connections. As we will see in the next experiment, having more connections means getting better results.\nFor the next experiment we fix the resource ratio to 1.0 and study the quality of both the upper bound and the GDAP algorithm related to the degree of the social network. The result can be found in Figure 4.\n[Figure 4: The quality of the GDAP and the upper bound depend on the network degree.]\nIn this figure we can see that a high average degree also leads to convergence of the upper bound and the GDAP. Obviously, when managers have many connections, it becomes easier to allocate tasks. An exception is, similar to what we have seen in Figure 2, that the solution of the GDAP is also very good if the connections are extremely limited (degree 2), due to the fact that the number of possibly allocated tasks is very small. Again we see that the upper bound is not that good for problems where resources are hard to reach, i.e.
in social networks with a low average degree.\u00b2\nSince the solution quality clearly depends on the resource ratio as well as on the degree of the social network, we study the effect of changing both, to see whether they influence each other. Figure 5 shows how the solution quality depends on both the resource ratio and the network degree.\n[Figure 5: The quality of the GDAP depends on both the resource ratio and the network degree.]\nThis graph confirms the results that the GDAP performs better for problems with higher degree and higher resource ratio. However, it is now also more clear that it performs better for very low degree and resource availability. For this experiment with 40 agents and 20 tasks, the worst performance occurs between instances with degree six and resource ratio 0.6 and instances with degree twelve and resource ratio 0.3. But even for those instances, the performance lies above 0.7.\n\u00b2 The consistent standard deviation of about 15% over the 20 problem instances is not displayed as error-bars in these first graphs, because it would obfuscate the interesting correlations that can now be seen.\n5.2.2 Experiment 2\nTo study the robustness of the GDAP against different problem settings, we generate instances where the task benefit distribution is different: 40% of the tasks get a 10 times higher benefit (as described in Setting 2). The effect of this different distribution can be seen in Figure 6. These two graphs show that the results for the skewed task benefit distribution are slightly better on average, both when varying the resource ratio, and when varying the average degree of the network.
We argue that this can be explained by the greedy nature of the GDAP, which causes the tasks with high efficiency to be allocated first, and makes the algorithm perform better in this heterogeneous setting.\n5.2.3 Experiment 3\nThe purpose of this final experiment is to test whether the algorithm can be scaled to large problems, like applications running on the internet. We therefore generate instances where the number of agents varies from 100 to 2000, and simultaneously increase the number of tasks from 166 to 3333 (Setting 3). Figure 7 shows the run time for these instances on a Linux machine with an AMD Opteron 2.4 GHz processor. These graphs confirm the theoretical analysis from the previous section, which states that both the upper bound and the GDAP are polynomial. In fact, the graphs show that the GDAP almost behaves linearly. Here we see that the locality of the GDAP really helps in reducing the computation time. Also note that the GDAP requires even less computation time than the upper bound.\nThe quality of the GDAP for these large instances cannot be compared to the optimal solution. Therefore, in Figure 8 the upper bound is used instead. This result shows that the GDAP behaves stably and consistently well with the increasing problem size. It also shows once more that the GDAP performs better in a small-world network.\n6. RELATED WORK\nTask allocation in multiagent systems has been investigated by many researchers in recent years with different assumptions and emphases. However, most of the research to date on task allocation does not consider social connections among agents, and studies the problem in a centralized
setting.\n[Figure 6: The quality of the GDAP algorithm for a uniform and a skewed task benefit distribution related to the resource ratio (the first graph), and the network degree (the second graph).]\nFor example, Kraus et al. [12] develop an auction protocol that enables agents to form coalitions with time constraints. It assumes each agent knows the capabilities of all others. The proposed protocol is centralized, where one manager is responsible for allocating the tasks to all coalitions. Manisterski et al. [14] discuss the possibilities of achieving efficient allocations in both cooperative and non-cooperative settings. They propose a centralized algorithm to find the optimal solution. In contrast to this work, we also introduce an efficient, completely distributed protocol that takes the social network into account.\nTask allocation has also been studied in distributed settings by for example Shehory and Kraus [18] and by Lerman and Shehory [13]. They propose distributed algorithms with low communication complexity for forming coalitions in large-scale multiagent systems. However, they do not assume the existence of any agent network. The work of Sander et al. [16] introduces computational geometry-based algorithms for distributed task allocation in geographical domains. Agents are then allowed to move and actively search for tasks, and the capability of agents to perform tasks is homogeneous.
In order to apply their approach, agents need to have some knowledge about the geographical positions of tasks and of some other agents. Other work [17] proposes a location mechanism for open multiagent systems to allocate tasks to unknown agents. In this approach each agent caches a list of agents it knows. The analysis of the communication complexity of this method is based on lattice-like graphs, while we investigate how to efficiently solve task allocation in a social network, whose topology can be arbitrary.\nNetworks have been employed in the context of task allocation in some other works as well, for example to limit the interactions between agents and mediators [1].\n[Figure 7: The run time of the GDAP algorithm. The graph plots run time (ms) against the number of agents (0 to 2000) for the upper bound and the GDAP on small-world, random, and scale-free networks.]\n[Figure 8: The quality of the GDAP algorithm compared to the upper bound. The graph plots reward relative to the upper bound against the number of agents for small-world, random, and scale-free networks.]\nMediators in this context are agents who receive the task and have connections to other agents. They break up the task into subtasks, and negotiate with other agents to obtain commitments to execute these subtasks. Their focus is on modeling the decision process of just a single mediator. Another approach is to partition the network into cliques of nodes, representing coalitions which the agents involved may use as a coordination mechanism [20]. The focus of that work is distributed coalition formation among agents; in our approach, we do not need agents to form groups before allocating tasks. Easwaran and Pitt [6] study \"complex tasks\" that require \"services\" for their accomplishment.
The problem concerns the allocation of subtasks to service providers in a supply chain. Another study of task allocation in supply chains is [21], where it is argued that the defining characteristic of Supply Chain Formation is hierarchical subtask decomposition (HSD). HSD is implemented using task dependency networks (TDN), with agents and goods as nodes, and I/O relations between them as edges. Here, the network is given, and the problem is to select a subgraph, for which the authors propose a market-based algorithm, in particular a series of auctions. Compared to these works, our approach is more general in the sense that we are able to model different types of connections or constraints among agents for different problem domains, in addition to supply chain formation.\nFinally, social networks have been used in the context of team formation. Previous work has shown how to learn which relations are more beneficial in the long run [8], and how to adapt the social network accordingly. We believe these results can be transferred to the domain of task allocation as well, and leave this as a topic for further study.\n7. CONCLUSIONS\nIn this paper we studied the task allocation problem in a social network (STAP), which can be seen as a new, more general variant of the TAP. We believe it has great potential for realistic problems. We provided complexity results on computing the efficient solution for the STAP, as well as a bound on possible approximation algorithms. Next, we presented a distributed protocol, related to the contract-net protocol. We also introduced an exponential algorithm to compute the optimal solution, as well as a fast upper-bound algorithm.
Finally, we used the optimal solution and the upper bound (for larger instances) to conduct an extensive set of experiments to assess the solution quality and the computational efficiency of the proposed distributed algorithm in different types of networks, namely small-world networks, random networks, and scale-free networks.\nThe results presented in this paper show that the distributed algorithm performs well in small-world, scale-free, and random networks, and for many different settings. Other experiments (e.g., on grid networks) were conducted as well, and the results held up over this wider range of scenarios. Furthermore, we showed that the algorithm scales well to large networks, both in terms of quality and required computation time. The results also suggest that small-world networks are slightly better suited for local task allocation, because there are no nodes with very few neighbors.\nThere are many interesting extensions to our current work. In this paper, we focus on the computational aspects of the design of the distributed algorithm. In future work, we would also like to address some of the related issues in game theory, such as strategic agents, and show desirable properties of a distributed protocol in such a context.\nIn the current algorithm we assume that agents can only contact their neighbors to request resources, which may explain why our algorithm does not perform as well in scale-free networks as in small-world networks. Future work may allow agents to reallocate (sub)tasks. We are interested in seeing how such interactions will affect the performance of task allocation in different social networks.\nA third interesting topic for further work is the addition of reputation information among the agents.
This may help to model changing business relations and incentivize agents to follow the protocol.\nFinally, it would be interesting to study real-life instances of the social task allocation problem, and to see how they relate to the randomly generated networks of the different types studied in this paper.\nAcknowledgments. This work is supported by the Technology Foundation STW, applied science division of NWO, and the Ministry of Economic Affairs.\n8. REFERENCES\n[1] S. Abdallah and V. Lesser. Modeling Task Allocation Using a Decision Theoretic Model. In Proc. AAMAS, pages 719-726. ACM, 2005.\n[2] N. Alon, U. Feige, A. Wigderson, and D. Zuckerman. Derandomized Graph Products. Computational Complexity, 5(1):60-75, 1995.\n[3] A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.\n[4] R. H. Coase. The Nature of the Firm. Economica NS, 4(16):386-405, 1937.\n[5] R. H. Coase. My Evolution as an Economist. In W. Breit and R. W. Spencer, editors, Lives of the Laureates, pages 227-249. MIT Press, 1995.\n[6] A. M. Easwaran and J. Pitt. Supply Chain Formation in Open, Market-Based Multi-Agent Systems. International J. of Computational Intelligence and Applications, 2(3):349-363, 2002.\n[7] I. Foster, N. R. Jennings, and C. Kesselman. Brain Meets Brawn: Why Grid and Agents Need Each Other. In Proc. AAMAS, pages 8-15, Washington, DC, USA, 2004. IEEE Computer Society.\n[8] M. E. Gaston and M. desJardins. Agent-organized networks for dynamic team formation. In Proc. AAMAS, pages 230-237, New York, NY, USA, 2005. ACM Press.\n[9] A. Goldberg. An Efficient Implementation of a Scaling Minimum-Cost Flow Algorithm. J. of Algorithms, 22:1-29, 1997.\n[10] R. Gulati. Does Familiarity Breed Trust? The Implications of Repeated Ties for Contractual Choice in Alliances. Academy of Management Journal, 38(1):85-112, 1995.\n[11] T. Klos and B. Nooteboom.
Agent-based Computational Transaction Cost Economics. J. of Economic Dynamics and Control, 25(3-4):503-526, 2001.\n[12] S. Kraus, O. Shehory, and G. Taase. Coalition formation with uncertain heterogeneous information. In Proc. AAMAS, pages 1-8. ACM, 2003.\n[13] K. Lerman and O. Shehory. Coalition formation for large-scale electronic markets. In Proc. ICMAS, pages 167-174. IEEE Computer Society, 2000.\n[14] E. Manisterski, E. David, S. Kraus, and N. Jennings. Forming Efficient Agent Groups for Completing Complex Tasks. In Proc. AAMAS, pages 257-264. ACM, 2006.\n[15] J. Patel et al. Agent-Based Virtual Organizations for the Grid. Multi-Agent and Grid Systems, 1(4):237-249, 2005.\n[16] P. V. Sander, D. Peleshchuk, and B. J. Grosz. A scalable, distributed algorithm for efficient task allocation. In Proc. AAMAS, pages 1191-1198, New York, NY, USA, 2002. ACM Press.\n[17] O. Shehory. A scalable agent location mechanism. In Proc. ATAL, volume 1757 of LNCS, pages 162-172. Springer, 2000.\n[18] O. Shehory and S. Kraus. Methods for Task Allocation via Agent Coalition Formation. Artificial Intelligence, 101(1-2):165-200, 1998.\n[19] R. M. Sreenath and M. P. Singh. Agent-based service selection. Web Semantics, 1(3):261-279, 2004.\n[20] P. T. Tošić and G. A. Agha. Maximal Clique Based Distributed Coalition Formation for Task Allocation in Large-Scale Multi-Agent Systems. In Proc. MMAS, volume 3446 of LNAI, pages 104-120. Springer, 2005.\n[21] W. E. Walsh and M. P. Wellman. Modeling Supply Chain Formation in Multiagent Systems. In Proc. AMEC II, volume 1788 of LNAI, pages 94-101. Springer, 2000.\n[22] D. J. Watts and S. H. Strogatz. Collective dynamics of \"small-world\" networks. Nature, 393:440-442, 1998.
", "keywords": "algorithm;agent;strategic agent;task allocation;social network;utility;communication message;interaction;resource;computational complexity;social relationship;multiagent system;behavior;allocation"}
-{"name": "test_I-31", "title": "Reasoning about Judgment and Preference Aggregation", "abstract": "Agents that must reach agreements with other agents need to reason about how their preferences, judgments, and beliefs might be aggregated with those of others by the social choice mechanisms that govern their interactions. The recently emerging field of judgment aggregation studies aggregation from a logical perspective, and considers how multiple sets of logical formulae can be aggregated to a single consistent set. As a special case, judgment aggregation can be seen to subsume classical preference aggregation. We present a modal logic that is intended to support reasoning about judgment aggregation scenarios (and hence, as a special case, about preference aggregation): the logical language is interpreted directly in judgment aggregation rules. We present a sound and complete axiomatisation of such rules. We show that the logic can express aggregation rules such as majority voting; rule properties such as independence; and results such as the discursive paradox, Arrow's theorem and Condorcet's paradox - which are derivable as formal theorems of the logic. The logic is parameterised in such a way that it can be used as a general framework for comparing the logical properties of different types of aggregation - including classical preference aggregation.", "fulltext": "1. INTRODUCTION\nIn this paper, we are interested in knowledge representation formalisms for systems in which agents need to aggregate their preferences, judgments, beliefs, etc. For example, an agent may need to reason about majority voting in a group he is a member of.\nPreference aggregation - combining individuals' preference relations over some set of alternatives into a preference relation which represents the joint preferences of the group, by so-called social welfare functions - has been extensively studied in social choice theory [2].
The recently emerging field of judgment aggregation studies aggregation from a logical perspective, and discusses how, given a consistent set of logical formulae for each agent, representing the agent's beliefs or judgments, we can aggregate these to a single consistent set of formulae. A variety of judgment aggregation rules have been developed to this end. As a special case, judgment aggregation can be seen to subsume preference aggregation [5].\nIn this paper we present a logic, called Judgment Aggregation Logic (jal), for reasoning about judgment aggregation. The formulae of the logic are interpreted as statements about judgment aggregation rules, and we give a sound and complete axiomatisation of all such rules. The axiomatisation is parameterised in such a way that we can instantiate it to get a range of different judgment aggregation logics. For example, one instance is an axiomatisation, in our language, of all social welfare functions - thus we get a logic of classical preference aggregation as well. And this is one of the main contributions of this paper: we identify the logical properties of judgment aggregation, and we can compare the logical properties of different classes of judgment aggregation - and of general judgment aggregation and preference aggregation in particular.\nOf course, a logic is only interesting as long as it is expressive. One of the goals of this paper is to investigate the representational and logical capabilities an agent needs for judgment and preference aggregation; that is, what kind of logical language might be used to represent and reason about judgment aggregation? An agent's knowledge representation language should be able to express: common aggregation rules such as majority voting; commonly discussed properties of judgment aggregation rules and social welfare functions, such as independence; paradoxes commonly used to illustrate judgment aggregation and preference aggregation, viz.
the discursive paradox and Condorcet's paradox respectively; and other important properties such as Arrow's theorem. In order to illustrate in more detail what such a language would need to be able to express, take the example of a potential property of social welfare functions (SWFs) called independence of irrelevant alternatives (IIA): given two preference profiles (each consisting of one preference relation for each agent) and two alternatives, if for each agent the two alternatives have the same order in the two preference profiles, then the two alternatives must have the same order in the two preference relations resulting from applying the SWF to the two preference profiles, respectively. From this example it seems that a formal language for SWFs should be able to express:\n978-81-904262-7-5 (RPS) (c) 2007 IFAAMAS\n\u2022 Quantification on several levels: over alternatives; over preference profiles, i.e., over relations over alternatives (second-order quantification); and over agents.\n\u2022 Properties of preference relations for different agents, and properties of several different preference relations for the same agent in the same formula.\n\u2022 Comparison of different preference relations.\n\u2022 The preference relation resulting from applying a SWF to other preference relations.\nFrom these points it might seem that such a language would be rather complex (in particular, these requirements seem to rule out a standard propositional modal logic). Perhaps surprisingly, the language of jal is syntactically and semantically rather simple; and yet the language is, nevertheless, expressive enough to give elegant and succinct expressions of, e.g., IIA, majority voting, the discursive dilemma, Condorcet's paradox and Arrow's theorem.
This means, for example, that Arrow's theorem is a formal theorem of jal, i.e., a derivable formula; we thus have a formal proof theory for social choice.\nThe structure of the rest of the paper is as follows. In the next section we review the basics of judgment aggregation as well as preference aggregation, and mention some commonly discussed properties of judgment aggregation rules and social welfare functions. In Section 3 we introduce the syntax and semantics of jal, and study the complexity of the model checking problem. Formulae of jal are interpreted directly by, and thus represent properties of, judgment aggregation rules. In Section 4 we demonstrate that the logic can express commonly discussed properties of judgment aggregation rules, such as the discursive paradox. We give a sound and complete axiomatisation of the logic in Section 5, under the assumption that the agenda the agents make judgments over is finite. As mentioned above, preference aggregation can be seen as a special case of judgment aggregation, and in Section 6 we introduce an alternative interpretation of jal formulae directly in social welfare functions. We obtain a sound and complete axiomatisation of the logic for preference aggregation as well. Sections 7 and 8 discuss related work and conclude.\n2. JUDGMENT AND PREFERENCE AGGREGATION\nJudgment aggregation is concerned with judgment aggregation rules aggregating sets of logical formulae; preference aggregation is concerned with social welfare functions aggregating preferences over some set of alternatives. Let n be the number of agents; we write \u03a3 for the set {1, . . . , n}.\n2.1 Judgment Aggregation Rules\nLet L be a logic with language L(L). We require that the language has negation and material implication, with the usual semantics. We will sometimes refer to L as the underlying logic.
An agenda over L is a non-empty set A \u2286 L(L), where for every formula \u03c6 that does not start with a negation, \u03c6 \u2208 A iff \u00ac\u03c6 \u2208 A. We sometimes call a member of A an agenda item. A subset A \u2286 A is consistent unless A entails both \u00ac\u03c6 and \u03c6 in L for some \u03c6 \u2208 L(L); A is complete if either \u03c6 \u2208 A or \u00ac\u03c6 \u2208 A for every \u03c6 \u2208 A which does not start with a negation. An (admissible) individual judgment set is a complete and consistent subset Ai \u2286 A of the agenda. The idea here is that a judgment set Ai represents the choices from A made by agent i. Two rationality criteria demand that an agent's choices at least be internally consistent, and that each agent makes a decision between every item and its negation. An (admissible) judgment profile is an n-tuple A1, . . . , An , where Ai is the individual judgment set of agent i. J(A, L) denotes the set of all individual (complete and L-consistent) judgment sets over A, and J(A, L)^n the set of all judgment profiles over A. When \u03b3 \u2208 J(A, L)^n, we use \u03b3i to denote the ith element of \u03b3, i.e., agent i's individual judgment set in judgment profile \u03b3.\nA judgment aggregation rule (JAR) is a function f that maps each judgment profile A1, . . . , An to a complete and consistent collective judgment set f(A1, . . . , An) \u2208 J(A, L). Such a rule hence is a recipe to enforce a rational group decision, given a tuple of rational choices by the individual agents. Of course, such a rule should to a certain extent be \"fair\". Some possible properties of a judgment aggregation rule f over an agenda A:\nNon-dictatorship (ND1) There is no agent i such that for every judgment profile A1, . . . , An , f(A1, . . . , An) = Ai\nIndependence (IND) For any p \u2208 A and judgment profiles A1, . . . , An and B1, . . . , Bn , if for all agents i (p \u2208 Ai iff p \u2208 Bi), then p \u2208 f(A1, . . .
, An) iff p \u2208 f(B1, . . . , Bn)\nUnanimity (UNA) For any judgment profile A1, . . . , An and any p \u2208 A, if p \u2208 Ai for all agents i, then p \u2208 f(A1, . . . , An)\n2.2 Social Welfare Functions\nSocial welfare functions (SWFs) are usually defined in terms of ordinal preference structures, rather than cardinal structures such as utility functions. An SWF takes a preference relation, a binary relation over some set of alternatives, for each agent, and outputs another preference relation representing the aggregated preferences. The most well-known result about SWFs is Arrow's theorem [1]. Many variants of the theorem appear in the literature, differing in assumptions about the preference relations. In this paper, we make the assumption that all preference relations are linear orders, i.e., that neither agents nor the aggregated preference can be indifferent between distinct alternatives. This gives one of the simplest formulations of Arrow's theorem (Theorem 1 below). Cf., e.g., [2] for a discussion and more general formulations.\nFormally, let K be a set of alternatives. We henceforth implicitly assume that there are always at least two alternatives. A preference relation (over K) is, here, a total (linear) order on K, i.e., a relation R over K which is antisymmetric (i.e., (a, b) \u2208 R and (b, a) \u2208 R implies that a = b), transitive (i.e., (a, b) \u2208 R and (b, c) \u2208 R implies that (a, c) \u2208 R), and total (i.e., either (a, b) \u2208 R or (b, a) \u2208 R). We sometimes use the infix notation aRb for (a, b) \u2208 R. The set of preference relations over alternatives K is denoted L(K). Alternatively, we can view L(K) as the set of all permutations of K. Thus, we shall sometimes use a permutation of K to denote a member of L(K). For example, when K = {a, b, c}, we will sometimes use the expression acb to denote the relation {(a, c), (a, b), (c, b), (a, a), (b, b), (c, c)}.
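The identification of L(K) with the permutations of K can be made concrete in a short sketch (Python, with illustrative names of our own choosing): build the relation encoded by a permutation, then check the three total-order properties.

```python
from itertools import permutations

def relation_from_permutation(perm):
    """Build the total (linear) order R encoded by a permutation of K.

    Following the convention in the text, 'acb' encodes
    {(a,c),(a,b),(c,b),(a,a),(b,b),(c,c)}: (x, y) is in R iff
    x appears no later than y in the permutation.
    """
    pos = {x: i for i, x in enumerate(perm)}
    return {(x, y) for x in perm for y in perm if pos[x] <= pos[y]}

def is_total_order(R, K):
    """Check antisymmetry, transitivity and totality of R over K."""
    antisym = all(a == b for (a, b) in R if (b, a) in R)
    trans = all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)
    total = all((a, b) in R or (b, a) in R for a in K for b in K)
    return antisym and trans and total

K = {"a", "b", "c"}
L_K = [relation_from_permutation(p) for p in permutations(sorted(K))]
assert len(L_K) == 6                      # |L(K)| = |K|! = 6
assert all(is_total_order(R, K) for R in L_K)
assert relation_from_permutation(("a", "c", "b")) == {
    ("a", "c"), ("a", "b"), ("c", "b"), ("a", "a"), ("b", "b"), ("c", "c")}
```

The last assertion reproduces the acb example from the text.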
aRb means that b is preferred over a if a and b are different. R^s denotes the irreflexive version of R, i.e., R^s = R \\ {(a, a) : a \u2208 K}. aR^s b means that b is preferred over a and that a \u2260 b.\nA preference profile for \u03a3 over alternatives K is a tuple (R1, . . . , Rn) \u2208 L(K)^n, consisting of one preference relation Ri for each agent i. A social welfare function (SWF) is a function\nF : L(K)^n \u2192 L(K)\nmapping each preference profile to an aggregated preference relation. The class of all SWFs over alternatives K is denoted F (K).\nProperties of SWFs F corresponding to the judgment aggregation rule properties discussed in Section 2.1 are:\nNon-dictatorship (ND2) \u00ac\u2203i \u2208 \u03a3 \u2200(R1, . . . , Rn) \u2208 L(K)^n : F(R1, . . . , Rn) = Ri (corresponds to ND1)\nIndependence of irrelevant alternatives (IIA) \u2200(R1, . . . , Rn) \u2208 L(K)^n \u2200(S1, . . . , Sn) \u2208 L(K)^n \u2200a \u2208 K \u2200b \u2208 K ((\u2200i \u2208 \u03a3 (aRib \u21d4 aSib)) \u21d2 (aF(R1, . . . , Rn)b \u21d4 aF(S1, . . . , Sn)b)) (corresponds to IND)\nPareto Optimality (PO) \u2200(R1, . . . , Rn) \u2208 L(K)^n \u2200a \u2208 K \u2200b \u2208 K ((\u2200i \u2208 \u03a3 aR^s_i b) \u21d2 aF(R1, . . . , Rn)^s b) (corresponds to UNA)\nArrow's theorem says that the three properties above are inconsistent if there are more than two alternatives.\nTheorem 1 (Arrow). If there are more than two alternatives, no SWF has all the properties PO, ND2 and IIA.\n3. JUDGMENT AGGREGATION LOGIC: SYNTAX AND SEMANTICS\nThe language of Judgment Aggregation Logic (jal) is parameterised by a set of agents \u03a3 = {1, 2, . . . , n} (we will assume that there are at least two agents) and an agenda A.
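The three SWF properties above can be checked by brute force on small instances. The following sketch (our own illustrative code, for two agents and three alternatives, with orders encoded as permutations) verifies that the SWF copying agent 0's order satisfies PO and IIA and is, consistent with Theorem 1, a dictatorship:

```python
from itertools import permutations, product

K = ("a", "b", "c")
L_K = list(permutations(K))              # the 6 linear orders over K
PROFILES = list(product(L_K, repeat=2))  # all preference profiles for n = 2

def weak(R, a, b):
    """aRb: b is weakly preferred over a in the order R."""
    return R.index(a) <= R.index(b)

def strict(R, a, b):
    """aR^s b: b is strictly preferred over a."""
    return R.index(a) < R.index(b)

def dictator(profile):
    """The SWF that simply copies agent 0's preference relation."""
    return profile[0]

def has_PO(F):
    # if every agent strictly prefers b over a, so must the outcome
    return all(strict(F(prof), a, b)
               for prof in PROFILES for a in K for b in K
               if a != b and all(strict(R, a, b) for R in prof))

def has_IIA(F):
    # profiles agreeing on (a, b) must yield outcomes agreeing on (a, b)
    return all(weak(F(p1), a, b) == weak(F(p2), a, b)
               for p1 in PROFILES for p2 in PROFILES
               for a in K for b in K
               if all(weak(R, a, b) == weak(S, a, b) for R, S in zip(p1, p2)))

def is_dictatorial(F):
    return any(all(F(prof) == prof[i] for prof in PROFILES) for i in range(2))

assert has_PO(dictator) and has_IIA(dictator) and is_dictatorial(dictator)
```

Exhaustively proving Arrow's theorem this way is infeasible even here (there are 6^36 SWFs for these parameters); the sketch only illustrates the definitions on one rule.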
The following atomic propositions are used:\n\u03a0 = {i, \u03c3, hp | p \u2208 A, i \u2208 \u03a3}\nThe language L(\u03a3, A) of jal is defined by the following grammar:\n\u03c6 ::= \u03b1 | \u25a1\u03c6 | \u22a1\u03c6 | \u03c6 \u2227 \u03c6 | \u00ac\u03c6\nwhere \u03b1 \u2208 \u03a0. This language will be formally interpreted in structures consisting of an agenda item, a judgment profile and a judgment aggregation function; informally, i means that the agenda item is in agent i's judgment set in the current judgment profile; \u03c3 means that the agenda item is in the aggregated judgment set of the current judgment profile; hp means that the agenda item is p; \u25a1\u03c6 means that \u03c6 is true in every judgment profile; \u22a1\u03c6 means that \u03c6 is true for every agenda item.\nWe define \u25c7\u03c8 = \u00ac\u25a1\u00ac\u03c8, intuitively meaning that \u03c8 is true for some judgment profile, and \u27d0\u03c8 = \u00ac\u22a1\u00ac\u03c8, intuitively meaning that \u03c8 is true for some agenda item, as usual, in addition to the usual derived propositional connectives.\nWe now define the formal semantics of L(\u03a3, A). A model wrt. L(\u03a3, A) and underlying logic L is a judgment aggregation rule f over A. Recall that J(A, L)^n denotes the set of complete and L-consistent judgment profiles over A. A table is a tuple T = (f, \u03b3, p) such that f is a model, \u03b3 \u2208 J(A, L)^n and p \u2208 A. A formula is interpreted on a table as follows.\nf, \u03b3, p |=L hq \u21d4 p = q\nf, \u03b3, p |=L i \u21d4 p \u2208 \u03b3i\nf, \u03b3, p |=L \u03c3 \u21d4 p \u2208 f(\u03b3)\nf, \u03b3, p |=L \u25a1\u03c8 \u21d4 \u2200\u03b3' \u2208 J(A, L)^n : f, \u03b3', p |=L \u03c8\nf, \u03b3, p |=L \u22a1\u03c8 \u21d4 \u2200p' \u2208 A : f, \u03b3, p' |=L \u03c8\nf, \u03b3, p |=L \u03c6 \u2227 \u03c8 \u21d4 f, \u03b3, p |=L \u03c6 and f, \u03b3, p |=L \u03c8\nf, \u03b3, p |=L \u00ac\u03c6 \u21d4 f, \u03b3, p |=L \u03c6 does not hold\nSo, e.g., we have that f, \u03b3, p |=L \u22c0i\u2208\u03a3 i if everybody chooses p in \u03b3.\nExample 1.
A committee of three agents is voting on the following three propositions: the candidate is qualified (p), if the candidate is qualified he will get an offer (p \u2192 q), and the candidate will get an offer (q). One possible voting scenario is illustrated in the left part of Table 1. In the table, the results of proposition-wise majority voting, i.e., the JAR fmaj accepting a proposition iff it is accepted by a majority of the agents, are also\nTable 1: Examples\n       p    p \u2192 q   q\n1      yes  yes     yes\n2      no   yes     yes\n3      yes  no      no\nfmaj   yes  yes     yes\n1      mdc\n2      mcd\n3      cmd\nFmaj   mcd\nshown. This example can be modelled by taking the agenda to be A = {p, p \u2192 q, q, \u00acp, \u00ac(p \u2192 q), \u00acq} (recall that agendas are closed under single negation) and L to be propositional logic. The agents' votes can be modelled by the following judgment profile: \u03b3 = \u03b31, \u03b32, \u03b33 , where \u03b31 = {p, p \u2192 q, q}, \u03b32 = {\u00acp, p \u2192 q, q}, \u03b33 = {p, \u00ac(p \u2192 q), \u00acq}. We then have that:\n\u2022 fmaj, \u03b3, p |=L 1 \u2227 \u00ac2 \u2227 3 (agents 1 and 3 judge p to be true in the profile \u03b3, while agent 2 does not)\n\u2022 fmaj, \u03b3, p |=L \u03c3 (majority voting on p given the preference profile \u03b3 leads to acceptance of p)\n\u2022 fmaj, \u03b3, p |=L \u27d0(1 \u2227 2) (agents 1 and 2 agree on some agenda item, under the judgment profile \u03b3. Note that this formula does not depend on which agenda item is on the table.)\n\u2022 fmaj, \u03b3, p |=L \u25c7((1 \u2194 2) \u2227 (2 \u2194 3) \u2227 (1 \u2194 3)) (there is some judgment profile on which all agents agree on p. Note that this formula does not depend on which judgment profile is on the table.)\n\u2022 fmaj, \u03b3, p |=L \u25c7\u22a1((1 \u2194 2) \u2227 (2 \u2194 3) \u2227 (1 \u2194 3)) (there is some judgment profile on which all agents agree on all agenda items.
Note that this formula does not depend on any of the elements on the table.)\n\u2022 fmaj, \u03b3, p |=L \u03c3 \u2194 \u22c1G\u2286{1,2,3},|G|\u22652 \u22c0i\u2208G i (the JAR fmaj implements majority voting)\nWe write f |=L \u03c6 iff f, \u03b3, p |=L \u03c6 for every \u03b3 over A and every p \u2208 A; |=L \u03c6 iff f |=L \u03c6 for all models f. Given a possible property of a JAR, such as, e.g., independence, we say that a formula expresses the property if the formula is true in an aggregation rule f iff f has the property.\nNote that when we are given a formula \u03c6 \u2208 L(\u03a3, A), validity, i.e., |=L \u03c6, is defined with respect to models of the particular language L(\u03a3, A) defined over the particular agenda A (and similarly for validity with respect to a JAR, i.e., f |=L \u03c6). The agenda, like the set of agents \u03a3, is given when we define the language, and is thus implicit in the interpretation of the language1.\nLet an outcome o be a maximal conjunction of literals (\u00ac)1, . . . , (\u00ac)n. The set O is the set of all possible outcomes. Note that the decision of the society is not incorporated here: an outcome only collects the votes of the agents from \u03a3.\n3.1 Model Checking\nModel checking is currently one of the most active areas of research with respect to reasoning in modal logics [4], and it is natural to investigate the complexity of this problem for judgment aggregation logic. Intuitively, the model checking problem for judgment aggregation logic is as follows:\nGiven f, \u03b3, p and a formula \u03c6 of jal, is it the case that f, \u03b3, p |= \u03c6 or not?\n1 Likewise, in classical modal logic the language is parameterised with a set of primitive propositions, and validity is defined with respect to all models with valuations over that particular set.
While this problem is easy to understand mathematically, it presents some difficulties if we want to analyse it from a computational point of view. Specifically, the problem lies in the representation of the judgment aggregation rule, f. Recall that this function maps judgment profiles to complete and consistent judgment sets. A JAR must be defined for all judgment profiles over some agenda, i.e., it must produce an output for all these possible inputs. But how are we to represent such a rule? The simplest representation of a function f : X \u2192 Y is as the set of ordered pairs {(x, y) | x \u2208 X & y = f(x)}. However, this is not a feasible representation for JARs, as there will be exponentially many judgment profiles in the size of the agenda, and so the representation would be infeasibly large in practice. If we did assume this representation for JARs, then it is not hard to see that model checking for our logic would be decidable in polynomial time: the naive algorithm, derivable from the semantics, serves this purpose.\nHowever, we emphasise that this result is of no practical significance, since it assumes an unreasonable representation for models - a representation that simply could not be used in practice for examples of anything other than trivial size.\nSo, what is a more realistic representation for JARs? Let us say a representation Rf of a JAR f is reasonable if: (i) the size of Rf is polynomial in the size of the agenda; and (ii) there is a polynomial-time algorithm A, which takes as input a representation Rf and a judgment profile \u03b3, and produces as output f(\u03b3). There are, of course, many such representations Rf for JARs f. Here, we will look at a very general one, where the JAR is represented as a polynomially bounded two-tape Turing machine Tf , which takes on its first tape a judgment profile, and writes on its second tape the resulting judgment set.
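As a concrete illustration, the proposition-wise majority rule fmaj of Example 1 admits a trivially reasonable representation: its code is small and it runs in time polynomial in the agenda size and the number of agents. A sketch (the agenda encoding and names are our own):

```python
# Proposition-wise majority voting as a polynomial-time computable JAR,
# applied to the judgment profile of Example 1.

AGENDA = ["p", "p -> q", "q", "~p", "~(p -> q)", "~q"]

def f_maj(profile):
    """Accept an agenda item iff a strict majority of the agents accept it.

    Runs in O(|AGENDA| * n) time, so it meets the 'reasonable
    representation' requirements discussed above. (In general the
    output of majority voting need not be consistent - this is the
    discursive dilemma; for this particular profile it is.)
    """
    n = len(profile)
    return {item for item in AGENDA
            if sum(item in A_i for A_i in profile) > n / 2}

gamma = [
    {"p", "p -> q", "q"},      # agent 1
    {"~p", "p -> q", "q"},     # agent 2
    {"p", "~(p -> q)", "~q"},  # agent 3
]
assert f_maj(gamma) == {"p", "p -> q", "q"}  # matches Table 1
```

The assertion reproduces the fmaj row of Table 1.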
The requirement that the Turing machine should be polynomially bounded roughly corresponds to the requirement that a JAR is reasonable to compute; if there is some JAR that cannot be represented by such a machine, then it is arguably of little value, since it could not be used in practice2. With such a representation, we can investigate the complexity of our model checking problem.\nIn modal logics, the usual source of complexity, over and above the classical logic connectives, is the modal operators. With respect to judgment aggregation logic, the \u25a1 operator quantifies over all judgment profiles, and hence over all consistent subsets of the agenda. It follows that this is a rather powerful operator: as we will see, it can be used as an np oracle [9, p.339]. In contrast, the \u22a1 operator quantifies over members of the agenda, and is hence much weaker from a computational perspective (we can think of it as a conjunction over elements of the agenda).\nThe power of the \u25a1 quantifier suggests that the complexity of model checking judgment aggregation logic over relatively succinct representations of JARs is going to be relatively high; we now prove that the complexity of model checking judgment aggregation logic is as hard as solving a polynomial number of np-hard problems [9, pp.424-429].\nTheorem 2. The model checking problem for judgment aggregation logic, assuming the representation of JARs described above, is \u0394p2-hard; it is np-hard even if the formula to be checked is of the form \u25c7\u03c8, where \u03c8 contains no further \u25a1 or \u22a1 operators.\nProof. For \u0394p2-hardness, we reduce snsat (sequentially nested\n2 Of course, we have no general way of checking whether any given Turing machine is guaranteed to terminate in polynomial time; the problem is undecidable. As a consequence, we cannot always check whether a particular Turing machine representation of a JAR meets our requirements.
However, this does not prevent\nspecific JARs being so represented, with corresponding proofs that\nthey terminate in polynomial time.\nsatisfiability). An instance is given by a series of equations of the\nform\nz1 = \u2203X1.\u03c61(X1) z2 = \u2203X2.\u03c62(X2, z1) z3 = \u2203X3.\u03c63(X3, z1, z2)\n. . .\nzk = \u2203Xk.\u03c6k(Xk, z1, . . . , zk\u22121)\nwhere X1, . . . , Xk are disjoint sets of variables, and each \u03c6i(Y) is a\npropositional logic formula over the variables Y; the idea is we first\ncheck whether \u03c61(X1) is satisfiable, and if it is, we assign z1 the\nvalue true, otherwise assign it false; we then check whether \u03c62 is\nsatisfiable under the assumption that z1 takes the value just derived,\nand so on. Thus the result of each equation depends on the value of\nthe previous one. The goal is to determine whether zk is true.\nTo reduce this problem to judgment aggregation logic model\nchecking, we first fix the JAR: this rule simply copies whatever\nagent 1\"s judgment set is. (Clearly this can be implemented by a\npolynomially bounded Turing machine.) The agenda is assumed to\ncontain the variables X1 \u222a \u00b7 \u00b7 \u00b7 \u222a Xk \u222a {z1, . . . , zk} and their negations.\nWe fix the initial judgment profile \u03b3 to be X1 \u222a\u00b7 \u00b7 \u00b7\u222aXk \u222a{z1, . . . , zk},\nand fix p = x1. Given a variable xi, define x\u2217\ni to be (hxi \u22271). If \u03c6i is\none of the formulae \u03c61, . . . 
, \u03c6k, define \u03c6\u2217_i to be the formula obtained from \u03c6i by systematically substituting x\u2217_i for each variable xi, and z\u2217_i similarly.\nNow, we define the function \u03be_i for natural numbers i > 0 as:\n\u03be_i = z\u2217_1 \u2194 \u03c6\u2217_1 if i = 1, and \u03be_i = z\u2217_i \u2194 (\u03c6\u2217_i \u2227 \u22c0_{j=1}^{i-1} \u03be_j) otherwise.\nAnd we define the formula to be model checked as:\n\u03c6\u2217_k \u2227 \u22c0_{j=1}^{k-1} \u03be_j\nIt is now straightforward from construction that this formula is true under the interpretation iff zk is true in the snsat instance. The proof of the latter half of the theorem is immediate from the special case where k = 1.\n3.2 Some Properties\nWe have thus defined a language which can be used to express properties of judgment aggregation rules. An interesting question is then: what are the universal properties of aggregation rules expressible in the language; which formulae are valid? Here, in order to illustrate the logic, we discuss some of these logical properties. In Section 5 we give a complete axiomatisation of all of them.\nRecall that we defined the set O of outcomes as the set of all conjunctions with exactly one, possibly negated, atom from \u03a3. Let P = {o \u2227 \u03c3, o \u2227 \u00ac\u03c3 : o \u2208 O}; p \u2208 P completely describes the decisions of the agents and the aggregation function. Let \u2295 denote exclusive or.\nWe have that:\n|=L \u2295_{p\u2208P} p - any agent and the JAR always have to make a decision\n|=L (i \u2227 \u00acj) \u2192 \u00aci - if some agent can think differently about an item than i does, then also i can change his mind about it. 
In\nfact this principle can be strengthened to\n|=L ( i \u2227 \u00acj) \u2192 (\u00aci \u2227 j)\n|=L x - for any x \u2208 {i, \u00aci, \u03c3, \u00ac\u03c3 : i \u2208 \u03a3} - both the individual\nagents and the JAR will always judge some agenda item to\nbe true, and conversely, some agenda item to be false\n|=L (i \u2227 j) - there exist admissible judgment sets such that agents\ni and j agree on some judgment.\n|=L (i \u2194 j) - there exist admissible judgment sets such that agents\ni and j always agree.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 569\nThe interpretation of formulae depends on the agenda A and the\nunderlying logic L, in the quantification over the set J(A, L)n\nof\nadmissible, e.g., complete and L-consistent, judgment profiles. Note\nthat this means that some jal formula might be valid under one\nunderlying logic, while not under another. For example, if the agenda\ncontains some formula which is inconsistent in the underlying logic\n(and, by implication, some tautology), then the following hold:\n|=L (i \u2227 \u03c3) - for every judgment profile, there is some agenda\nitem (take a tautology) which both agent i and the JAR judges\nto be true\nBut this property does not hold when every agenda item is\nconsistent with respect to the underlying logic. One such agenda and\nunderlying logic will be discussed in Section 6.\n4. EXPRESSIVITY EXAMPLES\nNon-dictatorship can be expressed as follows:\nND =\ni\u2208\u03a3\n\u00ac(\u03c3 \u2194 i) (1)\nLemma 1. f |=L ND iff f has the property ND1.\nIndependence can be expressed as follows:\nIND =\no\u2208O\n((o \u2227 \u03c3) \u2192 (o \u2192 \u03c3)) (2)\nLemma 2. f |=L IND iff f has the property IND.\nUnanimity can be expressed as follows:\nUNA = ((1 \u2227 \u00b7 \u00b7 \u00b7 \u2227 n) \u2192 \u03c3) (3)\nLemma 3. 
f |=L UNA iff f has the property UNA.\n4.1 The Discursive Paradox\nAs illustrated in Example 1, the following formula expresses proposition-wise majority voting over some proposition p:\nMV = \u03c3 \u2194 \u22c1_{G\u2286\u03a3,|G|>n/2} \u22c0_{i\u2208G} i (4)\ni.e., the following property of a JAR f and admissible profile A1, . . . , An:\np \u2208 f(A1, . . . , An) \u21d4 |{i : p \u2208 Ai}| > |{i : p \u2209 Ai}|\nf |= MV iff f has the above property for all judgment profiles and propositions.\nHowever, we have the following in our logic. Assume that the agenda contains at least two distinct formulae and their material implication (i.e., A contains p, q, p \u2192 q for some p, q \u2208 L(L)).\nProposition 1 (Discursive Paradox).\n|=L (( MV) \u2192 \u22a5)\nwhen there are at least three agents and the agenda contains at least two distinct formulae and their material implication.\nProof. Assume the opposite, e.g., that A = {p, p \u2192 q, q, \u00acp, \u00ac(p \u2192 q), \u00acq, . . .} and there exists an aggregation rule f over A such that f |=L (\u03c3 \u2194 \u22c1_{G\u2286\u03a3,|G|>n/2} \u22c0_{i\u2208G} i). Let \u03b3 be the judgment profile \u03b3 = A1, A2, A3 where A1 = {p, p \u2192 q, q, . . .}, A2 = {p, \u00ac(p \u2192 q), \u00acq, . . .} and A3 = {\u00acp, p \u2192 q, \u00acq, . . .}. We have that f, \u03b3, p |=L (\u03c3 \u2194 \u22c1_{G\u2286\u03a3,|G|>n/2} \u22c0_{i\u2208G} i) for any p, so f, \u03b3, p |=L \u03c3 \u2194 \u22c1_{G\u2286\u03a3,|G|>n/2} \u22c0_{i\u2208G} i. Because f, \u03b3, p |=L 1 \u2227 2, it follows that f, \u03b3, p |=L \u03c3. In a similar manner it follows that f, \u03b3, p \u2192 q |=L \u03c3 and f, \u03b3, q |=L \u00ac\u03c3. In other words, p \u2208 f(\u03b3), p \u2192 q \u2208 f(\u03b3) and q \u2209 f(\u03b3). Since f(\u03b3) is complete, \u00acq \u2208 f(\u03b3). 
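The inconsistency derived in this proof can also be checked mechanically. The following sketch (a hypothetical encoding of ours, not part of the formal development) computes the proposition-wise majority outcome for the profile A1, A2, A3 and confirms that, although each individual judgment set is consistent, the majority outcome is not:

```python
from itertools import product

# Each agenda item is evaluated against a truth assignment to (p, q).
ITEMS = {
    'p':          lambda p, q: p,
    'p->q':       lambda p, q: (not p) or q,
    'q':          lambda p, q: q,
    'not p':      lambda p, q: not p,
    'not (p->q)': lambda p, q: not ((not p) or q),
    'not q':      lambda p, q: not q,
}

def consistent(judgment_set):
    # A set of items is consistent iff some assignment satisfies them all.
    return any(all(ITEMS[item](p, q) for item in judgment_set)
               for p, q in product([False, True], repeat=2))

A1 = {'p', 'p->q', 'q'}
A2 = {'p', 'not (p->q)', 'not q'}
A3 = {'not p', 'p->q', 'not q'}
profile = [A1, A2, A3]

majority = {item for item in ITEMS
            if sum(item in A for A in profile) >= 2}

print(sorted(majority))      # ['not q', 'p', 'p->q']
print(consistent(majority))  # False: the majority outcome is inconsistent
```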
But that contradicts\nthe fact that f(\u03b3) is required to be consistent.\nProposition 1 is a logical statement of a variant of the well-known\ndiscursive dilemma: if three agents are voting on propositions p, q\nand p \u2192 q, proposition-wise majority voting might not yield a\nconsistent result.\n5. AXIOMATISATION\nGiven an underlying logic L, a finite agenda A over L, and a set\nof agents \u03a3, Judgment Aggregation Logic (jal(L), or just jal when\nL is understood) for the language L(\u03a3, A), is defined in Table 2.\n\u00ac(hp \u2227 hq) if p q Atmost\np\u2208A hp Atleast\nhp p \u2208 A Agenda\n(hp \u2227 \u03d5) \u2192 (hp \u2192 \u03d5) Once\n(hp \u2227 x) \u2228 (hp \u2227 x) CpJS\nall instantiations of propositional tautologies taut\n(\u03c81 \u2192 \u03c82) \u2192 ( \u03c81 \u2192 \u03c82) K\n\u03c8 \u2192 \u03c8 T\n\u03c8 \u2192 \u03c8 4\n\u00ac \u03c8 \u2192 \u00ac \u03c8 5\n( i \u2227 \u00acj) \u2192 o\u2208O o C\n\u03c8 \u2194 \u03c8 (COMM)\nFrom p1, . . . pn L q infer\n(hp1\n\u2227 x) \u2227 \u00b7 \u00b7 \u00b7 \u2227 (hpn \u2227 x) \u2192\n(hq \u2192 x) \u2227 (hq \u2192 \u00acx) Closure\nFrom \u03d5 \u2192 \u03c8 and \u03d5 infer \u03c8 MP\nFrom \u03c8 infer \u03c8 Nec\nTable 2: The logic jal(L) for the language L(\u03a3, A). p, pi, q range\nover the agenda A; \u03c6,\u03c8,\u03c8i over L(\u03a3, A); x over {\u03c3, i : i \u2208 \u03a3};\nover { , }; i, j over \u03a3; o over the set of outcomes O. hp means\nhq when p = \u00acq for some q, otherwise it means h\u00acp. L is the\nunderlying logic.\nThe first 5 axioms represent properties of a table and of judgment\nsets. Axiom Atmost says that there is at most one item on the table\nat a time, and Atleast says that we always have an item on the table.\nAxiom Agenda says that every agenda item will appear on the table,\nwhereas Once says that every item of the agenda only appears on\nthe table once. 
Note that a conjunction hp \u2227 x reads: item p is on\nthe agenda, and x is in favour of it, or x judges it true. Axiom CpJS\ncorresponds to the requirement that judgment sets are complete.\nNote that from Agenda, CsJS and CpJS we derive the scheme x \u2227\n\u00acx, which says that everybody should at least express one opinion\nin favour of something, and against something.\nThe axioms taut \u2212 5 are well familiar from modal logic: they\ndirectly reflect the unrestricted quantification in the truth definition\nof and . Axiom C says that for any agenda item for which it\nis possible to have opposing opinions, every possible outcome for\nthat item should be achievable. COMM says that everything that\nis true for an arbitrary profile and item, is also true for an arbitrary\nitem and profile. Closure guarantees that agents behave\nconsistently with respect to consequence in the logic L. MP and Nec are\nstandard. We use JAL(L) to denote derivability in jal(L).\nTheorem 3. If the agenda is finite, we have that for any formula\n\u03c8 \u2208 L(\u03a3, A), JAL(L) \u03c8 iff |=L \u03c8.\nProof. Soundness is straightforward. For completeness (we\nfocus on the main idea here and leave out trivial details), we build a\n570 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\njal table for a consistent formula \u03c8 as follows. In fact, our\naxiomatisation completely determines a table, except for the behaviour of\nf. To be more precise, let a table description be a conjunction of\nthe form hp \u2227 o \u2227 (\u00ac)\u03c3. It is easy to see that table descriptions are\nmutually exclusive, and, moreover, we can derive \u03c4\u2208T \u03c4, where T\nis the set of all table descriptions. Let D be the set of all\nmaximal consistent sets \u0394. We don\"t want all of those: it might well\nbe that \u03c8 requires \u03c3 to be in a certain way, which is incompatible\nwith some \u0394\"s. 
We define two accessibility relations in the standard\nway: R \u03941\u03942 iff for all \u03c8: \u03c8 \u2208 \u03941 \u21d2 \u03c8 \u2208 \u03942. Similarly for R\nwith respect to . Both relations are equivalences (due to taut-5),\nand moreover, when R \u03941\u03942 and R \u03942\u03943 then for some \u03942, also\nR \u03941\u03942 and R \u03942\u03943 (because of axiom COMM).\nLet \u03940 be a MCS containing \u03c8. We now define the set Tables =\n{\u03940} \u222a {\u03941, \u03942 | (R \u03940\u03941 and R \u03941\u03942) or (R \u03940\u03941 and R \u03941\u03942)}\nEvery \u0394 \u2208 Tables can be conceived as a pair \u03b3, p, since every \u0394\ncontains a unique (hq \u2227 o \u2227 (\u00ac)\u03c3) for every hq and a unique hp.\nIt is then easy to verify that, for every \u0394 \u2208 Tables, and every\nformula \u03d5, \u0394 |= \u03d5 iff \u03d5 \u2208 \u0394, where |= here means truth in the\nordinary modal logic sense when the set of states is taken to be\nTables. Now, we extract an aggregation function f and pairs \u03b3, p as\nfollows:\nFor every \u0394 \u2208 Tables, find a conjunction hp \u2227 o \u2227 (\u00ac)\u03c3. There\nwill be exactly one such p. This defines the p we are looking for.\nFurthermore, the \u03b3 is obtained, for every agent i, by finding all q for\nwhich (hq \u2227 i) is currently true. Finally, the function f is a table\nof all tuples hp, o(p), \u03c3 for which (hp \u2227 o(o) \u2227 \u03c3) is contained in\nsome set in Tables.\nWe point out that jal has all the axioms taut, K, T, 4, 5 and the\nrules MP and Nec of the modal logic S5. However, uniform\nsubstitution, a principle of all normal modal logics (cf., e.g., [3]), does\nnot hold. 
A counter example is the fact that the following is valid:\n\u03c3 (5)\n- no matter what preferences the agents have, the JAR will always\nmake some judgment - while this is not valid:\n(\u03c3 \u2227 i) (6)\n- the JAR will not necessarily make the same judgments as agent i.\nSo, for example, we have that the discursive paradox is provable\nin jal(L): JAL(L) (( MV) \u2192 \u22a5). An example of a derivation of\nthe less complicated (valid) property (i \u2227 j) is shown in Table 3.\n6. PREFERENCE AGGREGATION\nRecently, Dietrich and List [5] showed that preference\naggregation can be embedded in judgment aggregation. In this section we\nshow that our judgment aggregation logic also can be used to\nreason about preference aggregation.\nGiven a set K of alternatives, [5] defines a simple predicate logic\nLK\nwith language L(LK\n) as follows:\n\u2022 L(LK\n) has one constant a for each alternative a \u2208 K,\nvariables v1, v2, . . ., a binary identity predicate =, a binary\npredicate P for strict preference, and the usual propositional and\nfirst order connectives\n\u2022 Z is the collection of the following axioms:\n- \u2200v1 \u2200v2 (v1Pv2 \u2192 \u00acv2Pv1)\n- \u2200v1 \u2200v2 \u2200v3 ((v1Pv2 \u2227 v2Pv3) \u2192 v1Pv3)\n- \u2200v1 \u2200v2 (\u00acv1 = v2 \u2192 (v1Pv2 \u2228 v2Pv1))\n\u2022 When \u0393 \u2286 L(LK\n) and \u03c6 is a formula, \u0393 |= \u03c6 is defined to hold\niff \u0393 \u222a Z entails \u03c6 in the standard sense of predicate logic\n1 (hp \u2227 i) \u2228 (hp \u2227 i) CpJS(i)\n2 (hp \u2227 j) \u2228 (hp \u2227 j) CpJS(j)\n3 Call 1 A \u2228 B and 2 C \u2228 D abbreviation, 1, 2\n4 (A \u2227 C) \u2228 (A \u2227 D) \u2228 (B \u2227 C) \u2228 (B \u2227 D) taut, 3\n5 derive (i \u2227 j) from every disjunct of 4 strategy is \u2228 elim\n6 (hp \u2227 i) \u2227 (hp \u2227 j) assume A \u2227 C\n7 (hp \u2192 (i \u2227 j)) Once, 6, K( )\n8 (i \u2227 j) 7, Agenda\n9 (i \u2227 j) 8, T( )\n10 (hp \u2227 i) \u2227 (hp \u2227 j) assume A \u2227 D\n11 
(hp \u2227 x) \u2194 (hp \u2227 \u00acx) Agenda, Closure\n12 (hp \u2227 i) \u2227 (hp \u2227 \u00acj) 10, 11\n13 (hp \u2227 i \u2227 \u00acj) 12, Once, K( )\n14 (i \u2227 \u00acj) 13, taut\n15 (i \u2227 \u00acj) 14, K( )\n16 (i \u2227 \u00acj) 15, COMM\n17 ( i \u2227 D\u00acj) 16, K( )\n18 (i \u2227 j) 17, C\n19 (hp \u2227 i) \u2227 (hp \u2227 j) assume B \u2227 D\n20 goes as 6-9\n21 (hp \u2227 i) \u2227 (hp \u2227 j) assume B \u2227 C\n22 goes as 10 - 18\n23 (i \u2227 j) \u2228-elim, 1, 2, 9, 18,\n20, 22\nTable 3: jar derivation of (i \u2227 j)\nIt is easy to see that there is an one-to-one correspondence between\nthe set of preference relations (total linear orders) over K and the set\nof LK\n-consistent and complete judgment sets over the preference\nagenda\nAK\n= {aPb, \u00acaPb : a, b \u2208 K, a b}\nGiven a SWF F over K, the corresponding JAR fF\nover the\npreference agenda AK\nis defined as follows fF\n(A1, . . . , An) = A, where\nA is the consistent and complete judgment set corresponding to\nF(L1, . . . , Ln) where Li is the preference relation corresponding to\nthe consistent and complete judgment set Ai.\nThus we can use jal to reason about preference aggregation as\nfollows. Take the logical language L(\u03a3, AK\n), for some set of agents\n\u03a3, and take the underlying logic to be LK\n. We can then interpret our\nformulae in an SWF F over K, a preference profile L \u2208 L(K) and a\npair (a, b) \u2286 K \u00d7 K, a b, as follows:\nF, L, (a, b) |=swf\n\u03c6 \u21d4 fF\n, \u03b3L\n, aPb |=LK \u03c6\nwhere \u03b3L\nis the judgment profile corresponding to the preference\nprofile L.\nWhile in the general judgment aggregation case a formula is\ninterpreted in the context of an agenda item, in the preference\naggregation case a formula is thus interpreted in the context of a pair of\nalternatives.\nExample 2. Three agents must decide between going to dinner\n(d), a movie (m) or a concert (c). 
Their individual preferences\nare illustrated on the right in Table 1 in Section 3, along with the\nresult of a SWF Fmaj implementing pair-wise majority voting. Let\nL = mdc, mcd, cmd be the preference profile corresponding to the\npreferences in the example. We have the following:\n\u2022 Fmaj, L, (m, d) |=swf\n1 \u2227 2 \u2227 3 (all agents agree, under the\nindividual rankings L, on the relative ranking of m and\ndthey agree that d is better than m)\n\u2022 Fmaj, L, (m, d) |=swf\n\u00ac(1 \u2194 2) (under the individual\nrankings L, there is some pair of alternatives on which agents 1\nand 2 disagree)\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 571\n\u2022 Fmaj, L, (m, d) |=swf\n(1 \u2227 2) (agents 1 and 2 can choose\ntheir preferences such that they will agree on some pair of\nalternatives)\n\u2022 Fmaj, L, (m, d) |=swf\n\u03c3 \u2194 G\u2286{1,2,3},|G|\u22652 i\u2208G i (the SWF Fmaj\nimplements pair-wise majority voting)\nAs usual, we write F |=swf\n\u03c6 when F, L, (a, b) |=swf\n\u03c6 for any L\nand (a, b), and so on. Thus, our formulae can be seen as expressing\nproperties of social welfare functions.\nExample 3. Take the formula (i \u2194 \u03c3). When this formula is\ninterpreted as a statement about a social welfare function, it says\nthat there exists a preference profile such that for all pairs (a, b) of\nalternatives, b is preferred over a in the aggregation (by the SWF)\nof the preference profile if and only if agent i prefers b over a.\n6.1 Expressivity Examples\nWe make precise the claim in Section 2.2 that the three\nmentioned SWF properties correspond to the three mentioned JAR\nproperties, respectively. Recall the formulae defined in Section 4.\nProposition 2.\nF |=swf\nND iff F has the property ND2\nF |=swf\nIND iff F has the property IIA\nF |=swf\nUNA iff F has the property PO\nThe properties expressed above are properties of SWFs. 
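The claims in Example 2 can be checked computationally. The sketch below is an illustrative encoding of ours: it assumes each ranking string lists alternatives from most to least preferred (the table's own convention may differ, which would flip the direction of the agreed ranking but not the agreement itself):

```python
L = ['mdc', 'mcd', 'cmd']   # preference profile for agents 1, 2, 3

def prefers(ranking, a, b):
    # True iff the ranking places a strictly above b
    # (assumed: earlier position = more preferred).
    return ranking.index(a) < ranking.index(b)

def pairwise_majority(profile, a, b):
    # a is collectively preferred to b iff a strict majority prefer a.
    return sum(prefers(r, a, b) for r in profile) > len(profile) / 2

# All three agents rank m and d in the same relative order:
print(all(prefers(r, 'm', 'd') == prefers(L[0], 'm', 'd') for r in L))

# Agents 1 and 2 disagree on the pair (c, d):
print(prefers(L[0], 'c', 'd'), prefers(L[1], 'c', 'd'))  # False True

# The pairwise-majority SWF outcome on each pair of alternatives:
for a, b in [('m', 'd'), ('m', 'c'), ('c', 'd')]:
    print(a, b, pairwise_majority(L, a, b))
```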
Let us\nnow look at properties of the set of alternatives K we can express.\nProperties involving cardinality is often of interest, for example in\nArrow\"s theorem. Let:\nMT2 = ( (1 \u2227 2) \u2227 (1 \u2227 \u00ac2))\nProposition 3. Let F \u2208 F (K). |K| > 2 iff F |=swf\nMT2.\nProof. For the direction to the left, let F |=swf\nMT2. Thus, there\nis a \u03b3 such that there exists (a1\n, b1\n), (a2\n, b2\n) \u2208 K \u00d7 K, where a1\nb1\n, and a2\nb2\n, such that (i) a1\nPb1\n\u2208 \u03b31, (ii) a1\nPb1\n\u2208 \u03b32, (iii)\na2\nPb2\n\u2208 \u03b31 and (iv) a2\nPb2\n\u03b32. From (ii) and (iv) we get that\n(a1\n, b1\n) (a2\n, b2\n), and from that and (i) and (iii) it follows that\n\u03b31 contains two different pairs a1\nPb1\nand a2\nPb2\neach having two\ndifferent elements. But that is not possible if |K| = 2, because if K =\n{a, b} then AK\n= {aPb, \u00acaPb, bPa, \u00acbPa} and thus it is impossible\nthat \u03b31 \u2286 AK\nsince we cannot have aPb, bPa \u2208 \u03b31.\nFor the direction to the right, let |K| > 2; let a, b, c be three\ndistinct elements of K. Let \u03b31 be the judgment set corresponding\nto the ranking abc and \u03b32 the judgment set corresponding to acb.\nNow, for any aggregation rule f, f, \u03b3, aPb |= 1 \u2227 2 and f, \u03b3, bPc |=\n1 \u2227 \u00ac2. Thus, F |=swf\nMT2, for any SWF F.\nWe now have everything we need to express Arrow\"s statement\nas a formula. It follows from his theorem that the formula is valid\non the class of all social welfare functions.\nTheorem 4. |=swf\nMT2 \u2192 \u00ac(PO \u2227 ND \u2227 IIA)\nProof. Note that MT2, PO, ND and IIA are true SWF properties,\ntheir truth value wrt. a table is determined solely by the SWF. For\nexample, F, L, (a, b) |=swf\nMT2 iff F |= MT2, for any F, L, a, b.\nLet F \u2208 F (K), and F, L, (a, b) |=swf\nMT2 for some L and a, b. By\nProposition 3, K has more than two alternatives. By Arrow\"s\ntheorem, F cannot have all the properties PO, ND2 and IIA. 
W.l.o.g\nassume that F does not have the PO property. By Proposition 2,\nF |=swf\nPO. Since PO is a SWF property, this means that\nF, L, (a, b) |=swf\nPO (satisfaction of PO is independent of L, a, b),\nand thus that F, L, (a, b) |=swf\n\u00acPO \u2228 \u00acND \u2228 \u00acIIA.\nNote that the formula in Theorem 4 does not mention any agenda\nitems (i.e., pairs of alternatives) such as haPb directly in an\nexpression. This means that the formula is a member of L(\u03a3, AK\n) for any\nset of alternatives K, and is valid no matter which set of alternatives\nwe assume.\nThe formula MV which in the general judgment aggregation case\nexpresses proposition-wise majority voting, expresses in the\npreference aggregation case pair-wise majority voting, as illustrated in\nExample 2. The preference aggregation correspondent to the\ndiscursive paradox of judgment aggregation is the well known\nCondorcet\"s voting paradox, stating that pair-wise majority voting can\nlead to aggregated preferences which are cyclic (even if the\nindividual preferences are not). We can express Condorcet\"s paradox\nas follows, again as a universally valid logical property of SWFs.\nProposition 4. |=swf\nMT2 \u2192 \u00acMV, when there are at least\nthree agents.\nProof. The proof is similar to the proof of the discursive\nparadox. Let fF\n, \u03b3, aPb |=LK MT2; there are thus three distinct\nelements a, b, c \u2208 K. Assume that fF\n, \u03b3, aPb |=LK MV. Let\n\u03b3 be the judgment profile corresponding to the preference\nprofile X = (abc, cab, bca). We have that fF\n, \u03b3 , aPb |=LK 1 \u2227 2 and,\nsince fF\n, \u03b3 , aPb |=LK MV, we have that fF\n, \u03b3 , aPb |=LK \u03c3 and thus\nthat aPb \u2208 fF\n(\u03b3 ) and (a, b) \u2208 F(X). In a similar manner we get\nthat (c, a) \u2208 F(X) and (b, c) \u2208 F(X). But that is impossible, since\nby transitivity we would also have that (a, c) \u2208 F(X) which\ncontradicts the fact that F(X) is antisymmetric. 
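The cyclic outcome used in this proof can be reproduced directly. The following sketch (an illustrative encoding of ours, with rankings listed from most to least preferred) shows that pair-wise majority voting over the profile X = (abc, cab, bca) yields a majority cycle, so the aggregated relation cannot be a transitive order:

```python
X = ['abc', 'cab', 'bca']   # the preference profile from the proof

def beats(profile, a, b):
    # a beats b under pair-wise majority voting iff a strict majority
    # of rankings place a above b (earlier position = more preferred).
    wins = sum(r.index(a) < r.index(b) for r in profile)
    return wins > len(profile) / 2

# a beats b, b beats c, and c beats a: Condorcet's cycle.
print(beats(X, 'a', 'b'), beats(X, 'b', 'c'), beats(X, 'c', 'a'))  # True True True
```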
Thus, it follows that\nfF\n, \u03b3, aPb |=LK MV.\n6.2 Axiomatisation and Logical Properties\nWe immediately get, from Theorem 3, a sound and complete\naxiomatisation of preference aggregation over a finite set of\nalternatives.\nCorollary 1. If the set of alternatives K is finite, we have that\nfor any formula \u03c8 \u2208 L(\u03a3, AK\n), JAL(LK ) \u03c8 iff |=swf\n\u03c8.\nProof. Follows immediately from Theorem 3 and the fact that\nfor any JAR f, there is a SWF F such that f = fF\n.\nSo, for example, Arrow\"s theorem is provable in jal(LK\n): JAL(LK )\nMT2 \u2192 \u00ac(PO \u2227 ND \u2227 IIA).\nEvery formula which is valid with respect to judgment\naggregation rules is also valid with respect to social welfare functions, so\nall general logical properties of JARs are also properties of SWFs.\nDepending on the agenda, SWFs may have additional properties,\ninduced by the logic LK\n, which are not always shared by JARs with\nother underlying logics. One such property is i. While we have\n|=swf\ni,\nfor other agendas there are underlying logics L such that\n|=L i\nTo see the latter, take an agenda with a formula p which is\ninconsistent in the underlying logic L - p can never be included in a\njudgment set. To see the former, take an arbitrary pair of\nalternatives (a, b). There exists some preference profile in which agent i\nprefers b over a.\nTechnically speaking, the formula i holds in SWFs because the\nagenda AK\ndoes not contain a formula which (alone) is inconsistent\nwrt. the underlying logic LK\n. By the same reason, the following\nproperties also hold in SWFs but not in JARs in general.\n|=swf\no\u2208O\no\n572 The Sixth Intl. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n- for any pair of alternatives (a, b), any possible combination of the\nrelative ranking of a and b among the agents is possible.\n|=swf\ni \u2192 \u00aci\n- given an alternative b which is preferred over some other\nalternative a by agent i, there is some other pair of alternatives c and d\nsuch that d is not preferred over c - namely (c, d) = (b, a).\n|=swf\n( (i \u2228 j) \u2192 (i \u2227 \u00acj))\n- if, given preferences of agents and a SWF, for any two alternatives\nit is always the case that either agent i or agent j prefers the second\nalternative over the first, then there must exist a pair of alternatives\nfor which the two agents disagree. A justification is that no single\nagent can prefer the second alternative over the first for every pair\nof alternatives, so in this case if i prefers b over a then j must prefer\na over b. Again, this property does not necessarily hold for other\nagendas, because the agenda might contain an inconsistency the\nagents could not possibly disagree upon.\nProof theoretically, these additional properties of SWFs are\nderived using the Closure rule.\n7. RELATED WORK\nFormal logics related to social choice have focused mostly on the\nlogical representation of preferences when the set of alternatives is\nlarge and on the computation properties of computing aggregated\npreferences for a given representation [6, 7, 8].\nA notable and recent exception is a logical framework for\njudgment aggregation developed by Marc Pauly in [10], in order to be\nable to characterise the logical relationships between different\njudgment aggregation rules. 
While the motivation is similar to the work\nin this paper, the approaches are fundamentally different: in [10],\nthe possible results from applying a rule to some judgment\nprofile are taken as primary and described axiomatically; in our\napproach the aggregation rule and its possible inputs, i.e., judgment\nprofiles, are taken as primary and described axiomatically. The two\napproaches do not seem to be directly related to each other in the\nsense that one can be embedded in the other.\nThe modal logic arrow logic [11] is designed to reason about any\nobject that can be graphically represented as an arrow, and has\nvarious modal operators for expressing properties of and relationships\nbetween these arrows. In the preference aggregation logic jal(LK\n)\nwe interpreted formulae in pairs of alternatives - which can be seen\nas arrows. Thus, (at least) the preference aggregation variant of our\nlogic is related to arrow logic. However, while the modal operators\nof arrow logic can express properties of preference relations such\nas transitivity, they cannot directly express most of the properties\nwe have discussed in this paper. Nevertheless, the relationship to\narrow logic could be investigated further in future work. In\nparticular, arrow logics are usually proven complete wrt. an algebra.\nThis could mean that it might be possible to use such algebras as\nthe underlying structure to represent individual and collective\npreferences. Then, changing the preference profile takes us from one\nalgebra to another, and a SWF determines the collective preference,\nin each of the algebras.\n8. DISCUSSION\nWe have presented a sound and complete logic jal for\nrepresenting and reasoning about judgment aggregation. jal is expressive:\nit can express judgment aggregation rules such as majority voting;\ncomplicated properties such as independence; and important results\nsuch as the discursive paradox, Arrow\"s theorem and Condorcet\"s\nparadox. 
We argue that these results show exactly which logical\ncapabilities an agent needs in order to be able to reason about\njudgment aggregation. It is perhaps surprising that a relatively simple\nlanguage provides these capabilities. jal provides a proof theory, in\nwhich results such as those mentioned above can be derived3\n.\nThe axiomatisation describes the logical principles of judgment\naggregation, and can also be instantiated to reason about specific\ninstances of judgment aggregation, such as classical Arrovian\npreference aggregation. Thus our framework sheds light on the\ndifferences between the logical principles behind general judgment\naggregation on the one hand and classical preference aggregation\non the other.\nIn future work it would be interesting to relax the completeness\nand consistency requirements of judgment sets, and try to\ncharacterise these in the logical language, as properties of general\njudgment sets, instead.\n9. ACKNOWLEDGMENTS\nWe thank the anonymous reviewers for their helpful remarks.\nThomas \u00c5gotnes\" work on this paper was supported by grants\n166525/V30 and 176853/S10 from the Research Council of\nNorway.\n10. REFERENCES\n[1] K. J. Arrow. Social Choice and Individual Values. Wiley,\n1951.\n[2] K. J. Arrow, A. K. Sen, and K. Suzumura, eds. Handbook of\nSocial Choice and Welfare, volume 1. North-Holland, 2002.\n[3] P. Blackburn, M. de Rijke, and Y. Venema. Modal Logic.\nCambridge University Press, 2001.\n[4] E. M. Clarke, O. Grumberg, and D. A. Peled. Model\nChecking. The MIT Press: Cambridge, MA, 2000.\n[5] F. Dietrich and C. List. Arrow\"s theorem in judgment\naggregation. Social Choice and Welfare, 2006. Forthcoming.\n[6] C. Lafage and J. Lang. Logical representation of preferences\nfor group decision making. In Proceedings of the Conference\non Principles of Knowledge Representation and Reasoning\n(KR-00), pages 457-470. Morgan Kaufman, 2000.\n[7] J. Lang. From preference representation to combinatorial\nvote. 
Proceedings of the Eighth International Conference on\nPrinciples and Knowledge Representation and Reasoning\n(KR-02), pages 277-290. Morgan Kaufmann, 2002.\n[8] J. Lang. Logical preference representation and combinatorial\nvote. Ann. Math. Artif. Intell, 42(1-3):37-71, 2004.\n[9] C. H. Papadimitriou. Computational Complexity.\nAddison-Wesley: Reading, MA, 1994.\n[10] M. Pauly. Axiomatizing collective judgment sets in a\nminimal logical language, 2006. Manuscript.\n[11] Y. Venema. A crash course in arrow logic. In M. Marx,\nM. Masuch, and L. Polos, editors, Arrow Logic and\nMulti-Modal Logic, pages 3-34. CSLI Publications,\nStanford, 1996.\n3\nDietrich and List [5] prove a general version of Arrow\"s theorem\nfor JARs: for a strongly connected agenda, a JAR has the IND\nand UNA properties iff it does not have the ND1 property, where\nstrong connectedness is an algebraic and logical condition on\nagendas. Thus, if we assume that the agenda is strongly connected then\n(ND \u2227 UNA) \u2194 \u00acND1 is valid, and derivable in jar. An interesting\npossibility for future work is to try to characterise conditions such\nas strong connectedness directly as a logical formula.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 573", "keywords": "syntax and semantics of jal;arrow logic;discursive paradox;jal syntax and semantics;arrow's theorem;modal logic;judgment aggregation rule;judgment aggregation;preference aggregation;social welfare function;complete axiomatisation;knowledge representation formalism;expressivity;jal;unanimity;non-dictatorship"}
-{"name": "test_I-32", "title": "An Adversarial Environment Model for Bounded Rational Agents in Zero-Sum Interactions", "abstract": "Multiagent environments are often not cooperative nor collaborative; in many cases, agents have conflicting interests, leading to adversarial interactions. This paper presents a formal Adversarial Environment model for bounded rational agents operating in a zero-sum environment. In such environments, attempts to use classical utility-based search methods can raise a variety of difficulties (e.g., implicitly modeling the opponent as an omniscient utility maximizer, rather than leveraging a more nuanced, explicit opponent model). We define an Adversarial Environment by describing the mental states of an agent in such an environment. We then present behavioral axioms that are intended to serve as design principles for building such adversarial agents. We explore the application of our approach by analyzing log files of completed Connect-Four games, and present an empirical analysis of the axioms\" appropriateness.", "fulltext": "1. INTRODUCTION\nEarly research in multiagent systems (MAS) considered\ncooperative groups of agents; because individual agents had\nlimited resources, or limited access to information (e.g.,\nlimited processing power, limited sensor coverage), they worked\ntogether by design to solve problems that individually they\ncould not solve, or at least could not solve as efficiently.\nMAS research, however, soon began to consider\ninteracting agents with individuated interests, as representatives of\ndifferent humans or organizations with non-identical\ninterests. When interactions are guided by diverse interests,\nparticipants may have to overcome disagreements,\nuncooperative interactions, and even intentional attempts to damage\none another. When these types of interactions occur,\nenvironments require appropriate behavior from the agents\nsituated in them. 
We call these environments Adversarial Environments, and call the clashing agents Adversaries. Models of cooperation and teamwork have been extensively explored in MAS through the axiomatization of mental states (e.g., [8, 4, 5]). However, none of this research dealt with adversarial domains and their implications for agent behavior. Our paper addresses this issue by providing a formal, axiomatized mental state model for a subset of adversarial domains, namely simple zero-sum adversarial environments.

Simple zero-sum encounters exist, of course, in various two-player games (e.g., Chess, Checkers), but they also exist in n-player games (e.g., Risk, Diplomacy), auctions for a single good, and elsewhere. In these latter environments especially, using a utility-based adversarial search (such as the Min-Max algorithm) does not always provide an adequate solution; the payoff function might be quite complex or difficult to quantify, and there are natural computational limitations on bounded rational agents. In addition, traditional search methods (like Min-Max) do not make use of a model of the opponent, which has proven to be a valuable addition to adversarial planning [9, 3, 11].

In this paper, we develop a formal, axiomatized model for bounded rational agents that are situated in a zero-sum adversarial environment. The model uses different modality operators, and its main foundation is the SharedPlans [4] model for collaborative behavior. We explore environment properties and the mental states of agents to derive behavioral axioms; these behavioral axioms constitute a formal model that serves as a specification and design guideline for agent design in such settings.

We then investigate the behavior of our model empirically using the Connect-Four board game.
We show that this game conforms to our environment definition, and analyze players' behavior using a large set of completed match log files. In addition, we use the results presented in [9] to discuss the importance of opponent modeling in our Connect-Four adversarial domain.

The paper proceeds as follows. Section 2 presents the model's formalization. Section 3 presents the empirical analysis and its results. We discuss related work in Section 4, and conclude and present future directions in Section 5.

550  978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. ADVERSARIAL ENVIRONMENTS
The adversarial environment model (denoted as AE) is intended to guide the design of agents by providing a specification of the capabilities and mental attitudes of an agent in an adversarial environment. We focus here on specific types of adversarial environments, specified as follows:
1. Zero-Sum Interactions: positive and negative utilities of all agents sum to zero;
2. Simple AEs: all agents in the environment are adversarial agents;
3. Bilateral AEs: AEs with exactly two agents;
4. Multilateral AEs: AEs with three or more agents.

We will work on both bilateral and multilateral instantiations of zero-sum and simple environments. In particular, our adversarial environment model will deal with interactions that consist of N agents (N ≥ 2), where all agents are adversaries, and only one agent can succeed. Examples of such environments range from board games (e.g., Chess, Connect-Four, and Diplomacy) to certain economic environments (e.g., N-bidder auctions over a single good).

2.1 Model Overview
Our approach is to formalize the mental attitudes and behaviors of a single adversarial agent; we consider how a single agent perceives the AE. The following list specifies the conditions and mental states of an agent in a simple, zero-sum AE:
1. The agent has an individual intention that its own goal will be completed;
2. The agent has an individual belief that it and its adversaries are pursuing full conflicting goals (defined below); there can be only one winner;
3. The agent has an individual belief that each adversary has an intention to complete its own full conflicting goal;
4. The agent has an individual belief in the (partial) profile of its adversaries.

Item 3 is required, since it might be the case that some agent has a full conflicting goal, and is currently considering adopting the intention to complete it, but is, as of yet, not committed to achieving it. This might occur because the agent has not yet deliberated about the effects that adopting that intention might have on the other intentions it is currently holding. In such cases, it might not consider itself to even be in an adversarial environment.

Item 4 states that the agent should hold some belief about the profiles of its adversaries. The profile represents all the knowledge the agent has about its adversary: its weaknesses, strategic capabilities, goals, intentions, trustworthiness, and more. It can be given explicitly or can be learned from observations of past encounters.

2.2 Model Definitions for Mental States
We use Grosz and Kraus's definitions of the modal operators, predicates, and meta-predicates, as defined in their SharedPlans formalization [4]. We recall here some of the predicates and operators that are used in that formalization: Int.To(Ai, α, Tn, Tα, C) represents Ai's intention at time Tn to do an action α at time Tα in the context of C. Int.Th(Ai, prop, Tn, Tprop, C) represents Ai's intention at time Tn that a certain proposition prop holds at time Tprop in the context of C. The potential intention operators, Pot.Int.To(...) and Pot.Int.Th(...), are used to represent the mental state when an agent considers adopting an intention, but has not deliberated about the interaction of the other intentions it holds.
The operator Bel(Ai, f, Tf) represents agent Ai believing in the statement expressed in formula f at time Tf. MB(A, f, Tf) represents mutual belief for a group of agents A.

A snapshot of the system finds our environment to be in some state e ∈ E of environmental variable states, and each adversary in any LAi ∈ L of possible local states. At any given time step, the system will be in some world w of the set of all possible worlds w ∈ W, where W = E × LA1 × LA2 × ... × LAn, and n is the number of adversaries. For example, in a Texas Hold'em poker game, an agent's local state might be its own set of cards (which is unknown to its adversary), while the environment will consist of the betting pot and the community cards (which are visible to both players).

A utility function under this formalization is defined as a mapping from a possible world w ∈ W to an element in ℝ, which expresses the desirability of the world from a single agent's perspective. We usually normalize the range to [0,1], where 0 represents the least desirable possible world, and 1 is the most desirable world. The implementation of the utility function is dependent on the domain in question.

The following list specifies new predicates, functions, variables, and constants used in conjunction with the original definitions for the adversarial environment formalization:
1. φ is a null action (the agent does not do anything).
2. G_Ai is the set of agent Ai's goals. Each goal is a set of predicates whose satisfaction makes the goal complete (we use G*_Ai ∈ G_Ai to represent an arbitrary goal of agent Ai).
3. g_Ai is the set of agent Ai's subgoals. Subgoals are predicates whose satisfaction represents an important milestone toward achievement of the full goal.
g_{G*_Ai} ⊆ g_Ai is the set of subgoals that are important to the completion of goal G*_Ai (we will use g*_{G*_Ai} ∈ g_{G*_Ai} to represent an arbitrary subgoal).
4. P^Aj_Ai is the profile object agent Ai holds about agent Aj.
5. C_A is a general set of actions for all agents in A which are derived from the environment's constraints. C_Ai ⊆ C_A is the set of agent Ai's possible actions.
6. Do(Ai, α, Tα, w) holds when Ai performs action α over time interval Tα in world w.
7. Achieve(G*_Ai, α, w) is true when goal G*_Ai is achieved following the completion of action α in world w ∈ W, where α ∈ C_Ai.
8. Profile(Aj, P^Aj_Ai) is true when agent Ai holds a profile object P^Aj_Ai for agent Aj.

Definition 1. Full conflict (FulConf) describes a zero-sum interaction where only a single goal of the goals in conflict can be completed.
FulConf(G*_Ai, G*_Aj) ⇒ (∃α ∈ C_Ai, ∀w, β ∈ C_Aj)(Achieve(G*_Ai, α, w) ⇒ ¬Achieve(G*_Aj, β, w)) ∨ (∃β ∈ C_Aj, ∀w, α ∈ C_Ai)(Achieve(G*_Aj, β, w) ⇒ ¬Achieve(G*_Ai, α, w))

Definition 2. Adversarial Knowledge (AdvKnow) is a function returning a value which represents the amount of knowledge agent Ai has on the profile of agent Aj, at time Tn. The higher the value, the more knowledge agent Ai has.
AdvKnow : P^Aj_Ai × Tn → ℝ

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 551

Definition 3. Eval - This evaluation function returns an estimated expected utility value for an agent in A, after completing an action from C_A in some world state w.
Eval : A × C_A × w → ℝ

Definition 4. TrH (Threshold) is a numerical constant in the [0,1] range that represents an evaluation function (Eval) threshold value.
An action that yields an estimated utility evaluation above TrH is regarded as a highly beneficial action.

The Eval value is an estimation, and not the real utility function, which is usually unknown. Using the real utility value for a rational agent would easily yield the best outcome for that agent. However, agents usually do not have the real utility function, but rather a heuristic estimate of it.

There are two important properties that should hold for the evaluation function:

Property 1. The evaluation function should state that the most desirable world state is one in which the goal is achieved. Therefore, after the goal has been satisfied, there can be no future action that can put the agent in a world state with a higher Eval value.
(∀Ai, G*_Ai, α, β ∈ C_Ai, w ∈ W) Achieve(G*_Ai, α, w) ⇒ Eval(Ai, α, w) ≥ Eval(Ai, β, w)

Property 2. The evaluation function should project an action that causes a completion of a goal or a subgoal to a value which is greater than TrH (a highly beneficial action).
(∀Ai, G*_Ai ∈ G_Ai, α ∈ C_Ai, w ∈ W, g*_{G_Ai} ∈ g_{G_Ai}) Achieve(G*_Ai, α, w) ∨ Achieve(g*_{G_Ai}, α, w) ⇒ Eval(Ai, α, w) ≥ TrH.

Definition 5. SetAction - We define a set action (SetAction) as a set of action operations (either complex or basic actions) from some action sets C_Ai and C_Aj which, according to agent Ai's belief, are attached together by a temporal and consequential relationship, forming a chain of events (an action, and its following consequent action).
(∀α1, ..., αu ∈ C_Ai, β1, ..., βv ∈ C_Aj, w ∈ W) SetAction(α1, ..., αu, β1, ..., βv, w) ⇒ ((Do(Ai, α1, Tα1, w) ⇒ Do(Aj, β1, Tβ1, w)) ⇒ Do(Ai, α2, Tα2, w) ⇒ ... ⇒ Do(Ai, αu, Tαu, w))

The consequential relation might exist due to various environmental constraints (when one action forces the adversary to respond with a specific action) or due to the agent's knowledge about the profile of its adversary.

Property 3. As the knowledge we have about our adversary increases, we will have additional beliefs about its behavior in different situations, which in turn creates new set actions. Formally, if our AdvKnow at time Tn+1 is greater than AdvKnow at time Tn, then every SetAction known at time Tn is also known at time Tn+1.
AdvKnow(P^Aj_Ai, Tn+1) > AdvKnow(P^Aj_Ai, Tn) ⇒ (∀α1, ..., αu ∈ C_Ai, β1, ..., βv ∈ C_Aj) Bel(Aag, SetAction(α1, ..., αu, β1, ..., βv), Tn) ⇒ Bel(Aag, SetAction(α1, ..., αu, β1, ..., βv), Tn+1)

2.3 The Environment Formulation
The following axioms provide the formal definition for a simple, zero-sum Adversarial Environment (AE). Satisfaction of these axioms means that the agent is situated in such an environment. It provides specifications for agent Aag to interact with its set of adversaries A with respect to goals G*_Aag and G*_A at time TCo in some world state w.

AE(Aag, A, G*_Aag, A1, ..., Ak, G*_A1, ..., G*_Ak, Tn, w)
1. Aag has an Int.Th that its goal will be completed:
(∃α ∈ C_Aag, Tα) Int.Th(Aag, Achieve(G*_Aag, α), Tn, Tα, AE)
2. Aag believes that it and each of its adversaries Ao are pursuing full conflicting goals:
(∀Ao ∈ {A1, ..., Ak}) Bel(Aag, FulConf(G*_Aag, G*_Ao), Tn)
3. Aag believes that each of its adversaries Ao has the Int.Th that its conflicting goal G*_Ao will be completed:
(∀Ao ∈ {A1, ..., Ak})(∃β ∈ C_Ao, Tβ) Bel(Aag, Int.Th(Ao, Achieve(G*_Ao, β), TCo, Tβ, AE), Tn)
4. Aag has beliefs about the (partial) profiles of its adversaries:
(∀Ao ∈ {A1, ..., Ak})(∃P^Ao_Aag ∈ P_Aag) Bel(Aag, Profile(Ao, P^Ao_Aag), Tn)

To build an agent that will be able to operate successfully within such an AE, we must specify behavioral guidelines for its interactions. Using a naive Eval maximization strategy to a certain search depth will not always yield satisfactory results, for several reasons: (1) the search horizon problem when searching to a fixed depth; (2) the strong assumption of an optimally rational, unbounded-resources adversary; (3) the use of an estimated evaluation function, which will not give optimal results in all world states, and can be exploited [9].

The following axioms specify the behavioral principles that can be used to differentiate between successful and less successful agents in the above Adversarial Environment. These axioms should be used as specification principles when designing and implementing agents that should perform well in such Adversarial Environments. The behavioral axioms represent situations in which the agent will adopt potential intentions to (Pot.Int.To(...)) perform an action, which will typically require some means-end reasoning to select a possible course of action. This reasoning will lead to the adoption of an Int.To(...) (see [4]).

A1. Goal Achieving Axiom.
The first axiom is the simplest case: when the agent Aag believes that it is one action (α) away from achieving its conflicting goal G*_Aag, it should adopt the potential intention to do α and complete its goal.
(∀Aag, α ∈ C_Aag, Tn, Tα, w ∈ W) Bel(Aag, Do(Aag, α, Tα, w) ⇒ Achieve(G*_Aag, α, w)) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

This somewhat trivial behavior is the first and strongest axiom. In any situation when the agent is an action away from completing the goal, it should complete the action. Any fair Eval function would naturally classify α as the maximal-value action (Property 1). However, without explicit axiomatization of such behavior, there might be situations where the agent decides on taking another action, due to its bounded decision resources.

A2. Preventive Act Axiom. Being in an adversarial situation, agent Aag might decide to take actions that will damage one of its adversary's plans to complete its goal, even if those actions do not explicitly advance Aag toward its conflicting goal G*_Aag. Such a preventive action will take place when agent Aag has a belief about the possibility of its adversary Ao doing an action β that will give it a high utility evaluation value (> TrH). Believing that taking action α will prevent the opponent from doing its β, it will adopt a potential intention to do α.
(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, Tn, Tβ, w ∈ W) Bel(Aag, Do(Ao, β, Tβ, w) ∧ Eval(Ao, β, w) > TrH, Tn) ∧ Bel(Aag, Do(Aag, α, Tα, w) ⇒ ¬Do(Ao, β, Tβ, w), Tn) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

This axiom is a basic component of any adversarial environment.
For example, looking at a Chess board game, a player could realize that it is about to be checkmated by its opponent, and thus make a preventive move. Another example is a Connect-Four game: when a player has a row of three chips, its opponent must block it, or lose.

A specific instance of A2 occurs when the adversary is one action away from achieving its goal, and immediate preventive action needs to be taken by the agent. Formally, we have the same beliefs as stated above, with a changed belief that doing action β will cause agent Ao to achieve its goal.

Proposition 1: Prevent-or-lose case.
(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, G*_Ao, Tn, Tα, Tβ, w ∈ W) Bel(Aag, Do(Ao, β, Tβ, w) ⇒ Achieve(G*_Ao, β, w), Tn) ∧ Bel(Aag, Do(Aag, α, Tα, w) ⇒ ¬Do(Ao, β, Tβ, w)) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

Sketch of proof: Proposition 1 can be easily derived from axiom A2 and Property 2 of the Eval function, which states that any action that causes the completion of a goal is a highly beneficial action.

The preventive act behavior will occur implicitly when the Eval function is equal to the real-world utility function. However, being bounded rational agents dealing with an estimated evaluation function, we need to explicitly axiomatize such behavior, for it will not always arise implicitly from the evaluation function.

A3. Suboptimal Tactical Move Axiom.
In many scenarios, a situation may occur where an agent decides not to take the currently most beneficial action it can take (the action with the maximal utility evaluation value), because it believes that taking another action (with a lower utility evaluation value) might yield (depending on the adversary's response) a future possibility for a highly beneficial action. This will occur most often when the Eval function is inaccurate and differs to a large extent from the Utility function. Put formally, agent Aag believes in a certain SetAction that will evolve from its initial action and will yield a highly beneficial value (> TrH) solely for itself.
(∀Aag, Ao ∈ A, Tn, w ∈ W)(∃α1, ..., αu ∈ C_Aag, β1, ..., βv ∈ C_Ao, Tα1) Bel(Aag, SetAction(α1, ..., αu, β1, ..., βv), Tn) ∧ Bel(Aag, Eval(Ao, βv, w) < TrH < Eval(Aag, αu, w), Tn) ⇒ Pot.Int.To(Aag, α1, Tn, Tα1, w)

An agent might believe that a chain of events will occur for various reasons, due to the inevitable nature of the domain. For example, in Chess we often observe the following: a move causes a check position, which in turn limits the opponent's moves to avoiding the check, to which the first player might react with another check, and so on. The agent might also believe in a chain of events based on its knowledge of its adversary's profile, which allows it to foresee the adversary's movements with high accuracy.

A4. Profile Detection Axiom. The agent can adjust its adversary's profile through observation and pattern study (specifically, if there are repeated encounters with the same adversary). However, instead of waiting for profile information to be revealed, an agent can also initiate actions that will force its adversary to react in a way that reveals profile knowledge about it. Formally, the axiom states that if all actions (γ) are not highly beneficial actions (< TrH), the agent can do action α at time Tα if it believes that this will result in a non-highly beneficial action β from its adversary, which in turn teaches it about the adversary's profile, i.e., gives a higher AdvKnow(P^Ao_Aag, Tβ).
(∀Aag, Ao ∈ A, α ∈ C_Aag, β ∈ C_Ao, Tn, Tα, Tβ, w ∈ W) Bel(Aag, (∀γ ∈ C_Aag) Eval(Aag, γ, w) < TrH, Tn) ∧ Bel(Aag, Do(Aag, α, Tα, w) ⇒ Do(Ao, β, Tβ, w), Tn) ∧ Bel(Aag, Eval(Ao, β, w) < TrH) ∧ Bel(Aag, AdvKnow(P^Ao_Aag, Tβ) > AdvKnow(P^Ao_Aag, Tn), Tn) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

For example, going back to the Chess board game scenario, consider starting a game versus an opponent about whom we know nothing, not even whether it is a human or a computerized opponent. We might start playing a strategy that is suitable versus an average opponent, and adjust our game according to its level of play.

A5. Alliance Formation Axiom. The following behavioral axiom is relevant only in a multilateral instantiation of the adversarial environment (obviously, an alliance cannot be formed in a bilateral, zero-sum encounter). In different situations during a multilateral interaction, a group of agents might believe that it is in their best interests to form a temporary alliance. Such an alliance is an agreement that constrains its members' behavior, but is believed by its members to enable them to achieve a higher utility value than the one achievable outside of the alliance.

As an example, we can look at the classical Risk board game, where each player has an individual goal of being the sole conqueror of the world, a zero-sum game.
However, in order to achieve this goal, it might be strategically wise to make short-term ceasefire agreements with other players, or to join forces and attack an opponent who is stronger than the rest.

An alliance's terms define the way its members should act. They are a set of predicates, denoted as Terms, that is agreed upon by the alliance members, and should remain true for the duration of the alliance. For example, the set Terms in the Risk scenario could contain the following predicates:
1. Alliance members will not attack each other on territories X, Y and Z;
2. Alliance members will contribute C units per turn for attacking adversary Ao;
3. Members are obligated to stay as part of the alliance until time Tk or until adversary Ao's army is smaller than Q.

The set Terms specifies inter-group constraints on each alliance member's (∀A^al_i ∈ A^al ⊆ A) set of actions C^al_i ⊆ C.

Definition 6. Al_val - the total evaluation value that agent Ai will achieve while being part of A^al is the sum of Eval_i (Eval for Ai) over the alliance members' own α actions (via the agent(α) predicate):
Al_val(Ai, C^al, A^al, w) = Σ_{α ∈ C^al} Eval_i(agent(α), α, w)

Definition 7. Al_TrH - a number representing an Al_val threshold; above it, the alliance can be said to be a highly beneficial alliance.

The value of Al_TrH will be calculated dynamically according to the progress of the interaction, as can be seen in [7]. After an alliance is formed, its members are working in their normal adversarial environment, as well as according to the mental states and axioms required for their interactions as part of the alliance.
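Definitions 6 and 7 can be sketched in a few lines of code. Here `eval_i`, the `(agent, action)` pairs, and the threshold argument are our own stand-ins for the paper's Eval_i, C^al, and Al_TrH, not part of the original formalization:

```python
# Sketch (hypothetical names, not the paper's code) of Definitions 6-7:
# an alliance is highly beneficial for agent i when the sum of i's
# evaluations of the allied actions reaches the Al_TrH threshold.

def al_val(eval_i, allied_actions, world):
    """Al_val: agent i's total evaluation of the alliance members' actions."""
    return sum(eval_i(agent, action, world) for agent, action in allied_actions)

def highly_beneficial(eval_i, allied_actions, world, al_trh):
    """Item 3 of the AL model below: membership must stay above Al_TrH."""
    return al_val(eval_i, allied_actions, world) >= al_trh
```

`allied_actions` plays the role of C^al, with the agent(α) predicate made explicit as the first element of each pair.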
The following Alliance model (AL) specifies the conditions under which the group A^al can be said to be in an alliance, working with a new and constrained set of actions C^al, at time Tn.

AL(A^al, C^al, w, Tn)
1. A^al has a MB that all members are part of A^al:
MB(A^al, (∀A^al_i ∈ A^al) member(A^al_i, A^al), Tn)
2. A^al has a MB that the group be maintained:
MB(A^al, (∀A^al_i ∈ A^al) Int.Th(Ai, member(Ai, A^al), Tn, Tn+1, Co), Tn)
3. A^al has a MB that being members gives them a high utility value:
MB(A^al, (∀A^al_i ∈ A^al) Al_val(A^al_i, C^al, A^al, w) ≥ Al_TrH, Tn)

Members' profiles are a crucial part of successful alliances. We assume that agents that have more accurate profiles of their adversaries will be more successful in such environments. Such agents will be able to predict when a member is about to breach the alliance's contract (item 2 in the above model), and take countermeasures (when item 3 becomes false). The robustness of the alliance is partly a function of its members' trustfulness measure, objective position estimation, and other profile properties. We should note that an agent can simultaneously be part of more than one alliance.

Such a temporary alliance, where the group members do not have a joint goal but act collaboratively in the interest of their own individual goals, is classified as a Treatment Group by modern psychologists [12] (in contrast to a Task Group, whose members have a joint goal). The Shared Activity model presented in [5] modeled Treatment Group behavior using the same SharedPlans formalization.

When comparing the definitions of an alliance and a Treatment Group, we found an unsurprising resemblance between the two models: the environment models' definitions are almost identical (see SA's definitions in [5]), and their Selfish-Act and Cooperative-Act axioms conform to our adversarial agent's behavior.
The main distinction between the two models is the integration of a Helpful-behavior act axiom in the Shared Activity, which cannot be part of ours. That axiom states that an agent will consider taking an action that lowers its Eval value (down to a certain lower bound) if it believes that a group partner will gain a significant benefit. Such behavior cannot occur in a pure adversarial environment (as a zero-sum game is), where the alliance members are constantly on watch to manipulate their alliance to their own advantage.

A6. Evaluation Maximization Axiom. In a case where all other axioms are inapplicable, we proceed with the action that maximizes the heuristic value as computed by the Eval function.
(∀Aag, Ao ∈ A, α ∈ C_Aag, Tn, w ∈ W) Bel(Aag, (∀γ ∈ C_Aag) Eval(Aag, α, w) ≥ Eval(Aag, γ, w), Tn) ⇒ Pot.Int.To(Aag, α, Tn, Tα, w)

T1. Optimality when Eval = Utility. The above axiomatic model handles situations where the Utility is unknown and the agents are bounded rational agents. The following theorem shows that in bilateral interactions, where the agents have the real Utility function (i.e., Eval = Utility) and are rational agents, the axioms provide the same optimal result as classic adversarial search (e.g., Min-Max).

Theorem 1. Let Ae_ag be an unbounded rational AE agent using the Eval heuristic evaluation function, Au_ag be the same agent using the true Utility function, and Ao be a sole unbounded utility-based rational adversary. Given that Eval = Utility:
(∀α ∈ C_{Au_ag}, α′ ∈ C_{Ae_ag}, Tn, w ∈ W) Pot.Int.To(Au_ag, α, Tn, Tα, w) → Pot.Int.To(Ae_ag, α′, Tn, Tα, w) ∧ ((α = α′) ∨ (Utility(Au_ag, α, w) = Eval(Ae_ag, α′, w)))

Sketch of proof - Given that Au_ag has the real utility function and unbounded resources, it can generate the full game tree and run the optimal Min-Max algorithm to choose the highest-utility action, which we denote by α. The proof shows that Ae_ag, using the AE axioms, will select the same α, or an equal-utility α′ (when there is more than one action with the same maximal utility), when Eval = Utility.

(A1) Goal achieving axiom - suppose there is an α whose completion will achieve Au_ag's goal. It will obtain the highest utility by Min-Max for Au_ag. The Ae_ag agent will select α, or another action with the same utility value, via A1. If such an α does not exist, Ae_ag cannot apply this axiom, and proceeds to A2.

(A2) Preventive act axiom - (1) Looking at the basic case (see Proposition 1), if there is a β which leads Ao to achieve its goal, then a preventive action α will yield the highest utility for Au_ag. Au_ag will choose it through the utility, while Ae_ag will choose it through A2. (2) In the general case, β is a highly beneficial action for Ao, and thus yields low utility for Au_ag, which will guide it to select an α that prevents β, while Ae_ag will choose it through A2.¹ If no such β exists for Ao, then A2 is not applicable, and Ae_ag proceeds to A3.

(A3) Suboptimal tactical move axiom - When using a heuristic Eval function, Ae_ag has a partial belief in the profile of its adversary (item 4 in the AE model), which may lead it to believe in SetActions (Proposition 1).
In our case, Ae_ag holds a full profile of its optimal adversary and knows that Ao will behave optimally according to the real utility values on the complete search tree; therefore, any belief about a suboptimal SetAction cannot exist, rendering this axiom inapplicable. Ae_ag will proceed to A4.

(A4) Profile detection axiom - Given that Ae_ag has the full profile of Ao, none of Ae_ag's actions can increase its knowledge. The axiom will not be applied, and the agent will proceed with A6 (A5 is disregarded because the interaction is bilateral).

(A6) Evaluation maximization axiom - This axiom will select the maximal Eval action for Ae_ag. Given that Eval = Utility, the same α that was selected by Au_ag will be selected.

¹ A case where, following the completion of β, there exists a γ which gives high utility for agent Au_ag cannot occur, because Ao uses the same utility, and γ's existence would cause it to classify β as a low-utility action.

3. EVALUATION
The main purpose of our experimental analysis is to evaluate the model's behavior and performance in a real adversarial environment. This section investigates whether bounded rational agents situated in such adversarial environments will be better off applying our suggested behavioral axioms.

3.1 The Domain
To explore the use of the above model and its behavioral axioms, we decided to use the Connect-Four game as our adversarial environment. Connect-Four is a 2-player, zero-sum game which is played on a 6x7 matrix-like board. Each turn, a player drops a disc into one of the 7 columns (the two sets of 21 discs are usually colored yellow for player 1 and red for player 2; we will use White and Black, respectively, to avoid confusion). The winner is the first player to complete a horizontal, vertical, or diagonal set of four discs of its color.
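The rules just described are easy to make concrete. The following minimal board model is our own illustration (the names and representation are assumptions, not code from the paper):

```python
# A minimal sketch of the Connect-Four rules: a 6x7 board, discs dropped
# into columns, and a win test for any 4-in-a-line group. Names are ours.

ROWS, COLS = 6, 7

def new_board():
    """An empty board; '.' marks an empty square."""
    return [['.'] * COLS for _ in range(ROWS)]

def drop(board, col, color):
    """Drop a disc into col; it falls to the lowest empty square."""
    for r in range(ROWS - 1, -1, -1):
        if board[r][col] == '.':
            board[r][col] = color
            return r, col
    raise ValueError("column is full")

def wins(board, color):
    """True if color owns a horizontal, vertical, or diagonal 4-group."""
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(4)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS
                       and board[rr][cc] == color for rr, cc in cells):
                    return True
    return False
```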
On very rare occasions, the game might end in a tie\nif all the empty grids are filled, but no player managed to\ncreate a 4-disc set.\nThe Connect-Four game was solved in [1], where it is\nshown that the first player (playing with the white discs)\ncan force a win by starting in the middle column (column 4)\nand playing optimally However, the optimal strategy is very\ncomplex, and difficult to follow even for complex bounded\nrational agents, such as human players.\nBefore we can proceed checking agent behavior, we must\nfirst verify that the domain conforms to the adversarial\nenvironment\"s definition as given above (which the behavioral\naxioms are based on). First, when playing a Connect-Four\ngame, the agent has an intention to win the game (item 1).\nSecond (item 2), our agent believes that in Connect-Four\nthere can only be one winner (or no winner at all in the rare\noccurrence of a tie). In addition, our agent believes that its\nopponent to the game will try to win (item 3), and we hope\nit has some partial knowledge (item 4) about its adversary\n(this knowledge can vary from nothing, through simple facts\nsuch as age, to strategies and weaknesses).\nOf course, not all Connect-Four encounters are\nadversarial. For example, when a parent is playing the game with its\nchild, the following situation might occur: the child, having\na strong incentive to win, treats the environment as\nadversarial (it intends to win, understands that there can only\nbe one winner, and believes that its parent is trying to beat\nhim). However, the parent\"s point of view might see the\nenvironment as an educational one, where its goal is not to\nwin the game, but to cause enjoyment or practice strategic\nreasoning. 
In such an educational environment, a new set of behavioral axioms might be more beneficial to the parent's goals than our suggested adversarial behavioral axioms.
3.2 Axiom Analysis
After showing that the Connect-Four game is indeed a zero-sum, bilateral adversarial environment, the next step is to look at players' behaviors during the game and check whether behaving according to our model does improve performance. To do so we collected log files from completed Connect-Four games that were played by human players over the Internet. Our collected log file data came from Play by eMail (PBeM) sites. These are web sites that host email games, where each move is taken by an email exchange between the server and the players. Many such sites' archives contain real competitive interactions, and they also maintain a ranking system for their members. Most of the data we used can be found in [6].
As can be learned from [1], Connect-Four has an optimal strategy and confers a considerable advantage on the player who starts the game (whom we call the White player). We will concentrate in our analysis on the second player's moves (to be called Black). The White player, being the first to act, has the so-called initiative advantage. Having the advantage and a good strategy will keep the Black player busy reacting to its moves, instead of initiating threats. A threat is a combination of three discs of the same color, with an empty spot for the fourth winning disc. An open threat is a threat that can be realized on the opponent's next move. In order for the Black player to win, it must somehow turn the tide, take the advantage, and start presenting threats to the White player. We will explore Black players' behavior and their conformance to our axioms.
To do so, we built an application that reads log files and analyzes the Black player's moves.
The application contains two main components: (1) a Min-Max algorithm for the evaluation of moves; (2) an open-threat detector for discovering open threats. The Min-Max algorithm works to a given depth d and, for each move alpha, outputs the heuristic value of the action taken by the player as recorded in the log file, h(alpha), alongside the maximum heuristic value, maxh(alpha), that could have been achieved at that point (obviously, if h(alpha) < maxh(alpha), then the player did not make the heuristically optimal move). The threat detector's job is to notify whether an action was taken in order to block an open threat (not blocking an open threat will probably cause the player to lose on the opponent's next move).
The heuristic function used by Min-Max to evaluate the player's utility is the following function, which is simple to compute yet provides a reasonable challenge to human opponents:
Definition 8. Let a Group be an adjacent set of four squares that are horizontal, vertical, or diagonal, and let Group^n_b (Group^n_w) be a Group with n pieces of the black (white) color and 4-n empty squares. Then
  h = (Group^1_b * alpha + Group^2_b * beta + Group^3_b * gamma + Group^4_b * infinity)
    - (Group^1_w * alpha + Group^2_w * beta + Group^3_w * gamma + Group^4_w * infinity)
The values of alpha, beta, and gamma can vary to form any desired linear combination; however, it is important to choose them with the ordering alpha < beta < gamma in mind (we used 1, 4, and 8 as their respective values). A group of 4 discs of the same color means victory, thus the discovery of such a group results in an infinite value to ensure an extreme score.
We now use our estimated evaluation function to evaluate the Black player's actions during the Connect-Four adversarial interaction.
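The two components of the analysis application can be sketched concretely. Below is a minimal Python sketch of both: the heuristic h of Definition 8 (with the paper's weights 1, 4, 8) and an open-threat detector in the sense of Section 3.2. The board encoding, the function names, and the "playable square" rule are our own illustrative assumptions, not the paper's implementation.

```python
import math

# Illustrative encoding: board[row][col] holds 'B', 'W', or None;
# row 5 is the bottom row of the 6x7 board.
ROWS, COLS = 6, 7
WEIGHTS = {1: 1, 2: 4, 3: 8, 4: math.inf}  # alpha, beta, gamma, "infinity"

def groups(board):
    """Yield the cells of every adjacent 4-square line
    (horizontal, vertical, or diagonal) on the board."""
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + k * dr, c + k * dc) for k in range(4)]
                if all(0 <= x < ROWS and 0 <= y < COLS for x, y in cells):
                    yield cells

def h(board):
    """Definition 8: weighted black groups minus weighted white groups.
    Mixed groups (both colors present) can no longer be completed by
    either player, so they contribute nothing."""
    score = 0.0
    for cells in groups(board):
        vals = [board[r][c] for r, c in cells]
        b, w = vals.count('B'), vals.count('W')
        if b and not w:
            score += WEIGHTS[b]
        elif w and not b:
            score -= WEIGHTS[w]
    return score

def open_threats(board, color):
    """Empty squares that are immediately playable and would complete
    a 4-group for `color`, i.e. open threats."""
    def playable(r, c):
        # Empty, and either on the floor or on top of an occupied square.
        return board[r][c] is None and (r == ROWS - 1 or board[r + 1][c] is not None)
    found = set()
    for cells in groups(board):
        vals = [board[r][c] for r, c in cells]
        if vals.count(color) == 3 and vals.count(None) == 1:
            r, c = cells[vals.index(None)]
            if playable(r, c):
                found.add((r, c))
    return found
```

As a sanity check, the enumeration finds the well-known 69 winning lines of the 6x7 board, and a single black disc on the bottom center square scores h = 7 (it participates in seven potential groups).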
Each game from the log file was input into the application, which processed and output a reformatted log file containing the h value of the current move, the maxh value that could have been achieved, and a notification if an open threat was detected. A total of 123 games were analyzed (57 with White winning, and 66 with Black winning). A few additional games were manually ignored in the experiment, due to these problems: a player abandoning the game while the outcome was not final, or a bluntly irrational move in the early stages of the game (e.g., not blocking an obvious winning group in the first opening moves). In addition, a single tie game was also removed. The simulator was run to a search depth of 3 moves. We now proceed to analyze the games with respect to each behavioral axiom.

Table 1: Average heuristic difference analysis

                                        Black losses   Black won
  Avg. min_h                                -17.62       -12.02
  Avg. of 3 lowest h moves (min3_h)         -13.20        -8.70

3.2.1 Affirming the Suboptimal Tactical Move Axiom
The following section presents the heuristic evaluations of the Min-Max algorithm for each action, and checks the amount and extent of suboptimal tactical actions and their implications on performance.
Table 1 shows results and insights from the games' heuristic analysis, with search depth equal to 3 (this search depth was selected so that the results are comparable to [9]; see Section 3.2.3).
The table\"s heuristic data is the difference\nbetween the present maximal heuristic value and the heuristic\nvalue of the action that was eventually taken by the player\n(i.e., the closer the number is to 0, the closer the action was\nto the maximum heuristic action).\nThe first row presents the difference values of the\naction that had the maximal difference value among all the\nBlack player\"s actions in a given game, as averaged over all\nBlack\"s winning and losing games (see respective columns).\nIn games in which the Black player loses, its average\ndifference value was -17.62, while in games in which the Black\nplayer won, its average was -12.02. The second row expands\nthe analysis by considering the 3 highest heuristic difference\nactions, and averaging them. In that case, we notice an\naverage heuristic difference of 5 points between games which the\nBlack player loses and games in which it wins. Nevertheless,\nthe importance of those numbers is that they allowed us to\ntake an educated guess on a threshold number of 11.5, as\nthe value of the TrH constant, which differentiates between\nnormal actions and highly beneficial ones.\nAfter finding an approximated TrH constant, we can\nproceed with an analysis of the importance of suboptimal moves.\nTo do so we took the subset of games in which the minimum\nheuristic difference value for Black\"s actions was 11.5. As\npresented in Table 2, we can see the different min3\nh\naverage of the 3 largest ranges and the respective percentage of\ngames won. The first row shows that the Black player won\nonly 12% of the games in which the average of its 3 highest\nheuristically difference actions (min3\nh) was smaller than the\nsuggested threshold, TrH = 11.5.\nThe second row shows a surprising result: it seems that\nwhen min3\nh > \u22124 the Black player rarely wins. 
Intuition would suggest that games in which the action evaluation values were closer to the maximal values would result in more winning games for Black. However, it seems that in the Connect-Four domain, merely responding with somewhat easily expected actions, without initiating a few surprising and suboptimal moves, does not yield good results. The last row sums up the main insight from the analysis: most of Black's wins (83%) came when its min3_h was in the range of -11.5 to -4. A close inspection of those Black winning games shows the following pattern behind the numbers: after standard opening moves, Black suddenly drops a disc into an isolated column, which seems a waste of a move. White continues to build its threats, usually disregarding Black's last move, which in turn uses the isolated disc as an anchor for a future winning threat.

Table 2: Black's winning percentages

                              % of games
  min3_h < -11.5                  12%
  min3_h > -4                      5%
  -11.5 <= min3_h <= -4           83%

The results show that it was beneficial for the Black player to take suboptimal actions that do not yield the current highest possible heuristic value, but that are not too harmful to its position (i.e., do not give highly beneficial value to its adversary). As it turned out, learning the threshold is an important aspect of success: taking wildly risky moves (min3_h < -11.5) or trying to avoid them altogether (min3_h > -4) reduces the Black player's winning chances by a large margin.
3.2.2 Affirming the Profile Monitoring Axiom
In the task of showing the importance of monitoring one's adversaries' profiles, our log files could not be used, because they did not contain repeated interactions between players, which are needed to infer the players' knowledge about their adversaries.
However, the importance of opponent\nmodeling and its use in attaining tactical advantages was already\nstudied in various domains ([3, 9] are good examples).\nIn a recent paper, Markovitch and Reger [9] explored the\nnotion of learning and exploitation of opponent weakness in\ncompetitive interactions. They apply simple learning\nstrategies by analyzing examples from past interactions in a\nspecific domain. They also used the Connect-Four adversarial\ndomain, which can now be used to understand the\nimportance of monitoring the adversary\"s profile.\nFollowing the presentation of their theoretical model, they\ndescribe an extensive empirical study and check the agent\"s\nperformance after learning the weakness model with past\nexamples. One of the domains used as a competitive\nenvironment was the same Connect-Four game (Checkers was\nthe second domain). Their heuristic function was identical\nto ours with three different variations (H1, H2, and H3) that\nare distinguished from one another in their linear\ncombination coefficient values. The search depth for the players was\n3 (as in our analysis). Their extensive experiments check\nand compare various learning strategies, risk factors,\npredefined feature sets and usage methods. The bottom line is\nthat the Connect-Four domain shows an improvement from\na 0.556 winning rate before modeling to a 0.69 after\nmodeling (page 22). Their conclusions, showing improved\nperformance when holding and using the adversary\"s model,\njustify the effort to monitor the adversary profile for\ncontinuous and repeated interactions.\nAn additional point that came up in their experiments is\nthe following: after the opponent weakness model has been\nlearned, the authors describe different methods of\nintegrating the opponent weakness model into the agent\"s decision\nstrategy. 
Nevertheless, regardless of the specific method\nthey chose to work with, all integration methods might cause\nthe agent to take suboptimal decisions; it might cause the\nagent to prefer actions that are suboptimal at the present\ndecision junction, but which might cause the opponent to\nreact in accordance with its weakness model (as represented\nby our agent) which in turn will be beneficial for us in the\nfuture. The agent\"s behavior, as demonstrated in [9] further\nconfirms and strengthens our Suboptimal Tactical Axiom as\ndiscussed in the previous section.\n556 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.2.3 Additional Insights\nThe need for the Goal Achieving, Preventive Act, and\nEvaluation Maximization axioms are obvious, and need no\nfurther verification. However, even with respect to those\naxioms, a few interesting insights came up in the log analysis.\nThe Goal achieving and Preventive Act axioms, though\ntheoretically trivial, seem to provide some challenge to a human\nplayer. In the initial inspection of the logs, we encountered\nfew games2\nwhere a player, for inexplicable reasons, did not\nblock the other from winning or failed to execute its own\nwinning move. We can blame those faults on the human\"s\nlack of attention, or a typing error in its move reply;\nnevertheless, those errors might occur in bounded rational agents,\nand the appropriate behavior needs to be axiomatized.\nA typical Connect-Four game revolves around generating\nthreats and blocking them. In our analysis we looked for\nexplicit preventive actions, i.e., moves that block a group of\n3 discs, or that remove a future threat (in our limited search\nhorizon). We found that in 83% of the total games there was\nat least one preventive action taken by the Black player. It\nwas also found that Black averaged 2.8 preventive actions\nper game on the games in which it lost, while averaging 1.5\npreventive actions per game when winning. 
It seems that\nBlack requires 1 or 2 preventive actions to build its initial\ntaking position, before starting to present threats. If it did\nnot manage to win, it will usually prevent an extra threat\nor two before succumbing to White.\n4. RELATED WORK\nMuch research deals with the axiomatization of teamwork\nand mental states of individuals: some models use\nknowledge and belief [10], others have models of goals and\nintentions [8, 4]. However, all these formal theories deal with\nagent teamwork and cooperation. As far as we know, our\nmodel is the first to provide a formalized model for explicit\nadversarial environments and agents\" behavior in it.\nThe classical Min-Max adversarial search algorithm was\nthe first attempt to integrate the opponent into the search\nspace with a weak assumption of an optimally playing\nopponent. Since then, much effort has gone into integrating\nthe opponent model into the decision procedure to predict\nfuture behavior. The M\u2217 algorithm presented by Carmel\nand Markovitch [2] showed a method of incorporating\nopponent models into adversary search, while in [3] they used\nlearning to provide a more accurate opponent model in a\n2player repeated game environment, where agents\" strategies\nwere modeled as finite automata. Additional Adversarial\nplanning work was done by Willmott et al. [13], which\nprovided an adversarial planning approach to the game of GO.\nThe research mentioned above dealt with adversarial search\nand the integration of opponent models into classical\nutilitybased search methods. That work shows the importance of\nopponent modeling and the ability to exploit it to an agent\"s\nadvantage. However, the basic limitations of those search\nmethods still apply; our model tries to overcome those\nlimitations by presenting a formal model for a new, mental\nstate-based adversarial specification.\n5. 
CONCLUSIONS\nWe presented an Adversarial Environment model for a\n2\nThese were later removed from the final analysis.\nbounded rational agent that is situated in an N-player,\nzerosum environment. We used the SharedPlans formalization\nto define the model and the axioms that agents can apply\nas behavioral guidelines.\nThe model is meant to be used as a guideline for\ndesigning agents that need to operate in such adversarial\nenvironments. We presented empirical results, based on\nConnectFour log file analysis, that exemplify the model and the\naxioms for a bilateral instance of the environment.\nThe results we presented are a first step towards an\nexpanded model that will cover all types of adversarial\nenvironments, for example, environments that are non-zero-sum,\nand environments that contain natural agents that are not\npart of the direct conflict. Those challenges and more will\nbe dealt with in future research.\n6. ACKNOWLEDGMENT\nThis research was supported in part by Israel Science\nFoundation grants #1211/04 and #898/05.\n7. REFERENCES\n[1] L. V. Allis. A knowledge-based approach of\nConnect-Four - the game is solved: White wins.\nMaster\"s thesis, Free University, Amsterdam, The\nNetherlands, 1988.\n[2] D. Carmel and S. Markovitch. Incorporating opponent\nmodels into adversary search. In Proceedings of the\nThirteenth National Conference on Artificial\nIntelligence, pages 120-125, Portland, OR, 1996.\n[3] D. Carmel and S. Markovitch. Opponent modeling in\nmulti-agent systems. In G. Wei\u00df and S. Sen, editors,\nAdaptation and Learning in Multi-Agent Systems,\npages 40-52. Springer-Verlag, 1996.\n[4] B. J. Grosz and S. Kraus. Collaborative plans for\ncomplex group action. Artificial Intelligence,\n86(2):269-357, 1996.\n[5] M. Hadad, G. Kaminka, G. Armon, and S. Kraus.\nSupporting collaborative activity. In Proc. of\nAAAI-2005, pages 83-88, Pittsburgh, 2005.\n[6] http://www.gamerz.net/\u02dcpbmserv/.\n[7] S. Kraus and D. Lehmann. 
Designing and building a negotiating automated agent. Computational Intelligence, 11:132-171, 1995.
[8] H. J. Levesque, P. R. Cohen, and J. H. T. Nunes. On acting together. In Proc. of AAAI-90, pages 94-99, Boston, MA, 1990.
[9] S. Markovitch and R. Reger. Learning and exploiting relative weaknesses of opponent agents. Autonomous Agents and Multi-Agent Systems, 10(2):103-130, 2005.
[10] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge, Mass., 1995.
[11] P. Thagard. Adversarial problem solving: Modeling an opponent using explanatory coherence. Cognitive Science, 16(1):123-149, 1992.
[12] R. W. Toseland and R. F. Rivas. An Introduction to Group Work Practice. Prentice Hall, Englewood Cliffs, NJ, 2nd edition, 1995.
[13] S. Willmott, J. Richardson, A. Bundy, and J. Levine. An adversarial planning approach to Go. Lecture Notes in Computer Science, 1558:93-112, 1999.
A Formal Road from Institutional Norms to Organizational Structures

ABSTRACT
Up to now, the way institutions and organizations have been used in the development of open systems has not often gone further than a useful heuristic. In order to develop systems actually implementing institutions and organizations, formal methods should take the place of heuristic ones. The paper presents a formal semantics for the notion of institution and its components (abstract and concrete norms, empowerment of agents, roles) and defines a formal relation between institutions and organizational structures. As a result, it is shown how institutional norms can be refined to constructs (organizational structures) which are closer to an implemented system. It is also shown how such a refinement process can be fully formalized, and that it is therefore amenable to rigorous verification.

1. INTRODUCTION
The opportunity of a technology transfer from the field of organizational and social theory to distributed AI and multiagent systems (MASs) has long been advocated ([8]). In MASs, the application of the organizational and institutional metaphors to system design has proven to be useful for the development of methodologies and tools. In many cases, however, the application of these conceptual apparatuses amounts to mere heuristics guiding the high-level design of the systems. It is our thesis that the application of those apparatuses can be pushed further once their key concepts are treated formally, that is, once notions such as norm, role, structure, etc. obtain a formal semantics. This has been the case for agent programming languages after the relevant concepts borrowed from folk psychology (belief, intention, desire, knowledge, etc.) were addressed in comprehensive formal logical theories such as, for instance, BDICTL ([22]) and KARO ([17]).
As a matter of\nfact, those theories have fostered the production of architectures\nand programming languages.\nWhat is lacking at the moment for the design and development of\nopen MASs is, in our opinion, something that can play the role that\nBDI-like formalisms have played for the design and development\nof single-agent architectures. Aim of the present paper is to fill this\ngap with respect to the notion of institution providing formal\nfoundations for the application of the institutional metaphor and for its\nrelation to the organizational one. The main result of the paper\nconsists in showing how abstract constraints (institutions) can be step\nby step refined to concrete structural descriptions (organizational\nstructures) of the to-be-implemented system, bridging thus the gap\nbetween abstract norms and concrete system specifications.\nConcretely, in Section 2, a logical framework is presented which\nprovides a formal semantics for the notions of institution, norm,\nrole, and which supports the account of key features of institutions\nsuch as the translation of abstract norms into concrete and\nimplementable ones, the institutional empowerment of agents, and some\naspects of the design of norm enforcement. In Section 3 the\nframework is extended to deal with the notion of the infrastructure of an\ninstitution. The extended framework is then studied in relation to\nthe formalism for representing organizational structures presented\nin [11]. In Section 4 some conclusions follow.\n2. INSTITUTIONS\nSocial theory usually thinks of institutions as the rules of the\ngame ([18, 23]). From an agent perspective institutions are, to\nparaphrase this quote, the rules of the various games agents can\nplay in order to interact with one another. To assume an\ninstitutional perspective on MASs means therefore to think of MASs in\nnormative terms:\n[. . . 
] law, computer systems, and many other kinds of\norganizational structure may be viewed as instances of\nnormative systems. We use the term to refer to any set\nof interacting agents whose behavior can usefully be\nregarded as governed by norms ([15], p.276).\nThe normative system perspective on institutions is, as such,\nnothing original and it is already a quite acknowledged position within\nthe community working on electronic institutions, or eInstitutions\n([26]). What has not been sufficiently investigated and understood\nwith formal methods is, in our view, the question: what does it\n628\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\namount to, for a MAS, to be put under a set of norms? Or in other\nwords: what does it mean for a designer of an eInstitution to state\na set of norms? We advance a precise thesis on this issue, which is\nalso inspired by work in social theory:\nNow, as the original manner of producing physical\nentities is creation, there is hardly a better way to\ndescribe the production of moral entities than by the word\n\u2018imposition\" [impositio]. For moral entities do not arise\nfrom the intrinsic substantial principles of things but\nare superadded to things already existent and\nphysically complete ([21], pp. 100-101).\nBy ignoring for a second the philosophical jargon of the\nSeventeenth century we can easily extract an illuminating message from\nthe excerpt: what institutions do is to impose properties on already\nexisting entities. That is to say, institutions provide descriptions of\nentities by making use of conceptualizations that are not proper of\nthe common descriptions of those entities. For example, that cars\nhave wheels is a common factual property, whereas the fact that\ncars count as vehicles in some technical legal sense is a property\nthat law imposes on the concept car. To say it with [25], the\nfact that cars have wheels is a brute fact, while the fact that cars\nare vehicles is an institutional fact. 
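The car/vehicle example can be made operational: counts-as bridge rules map brute state descriptions to the institutional predicates they support. A minimal sketch, in which the predicate names and the dictionary-based state encoding are purely illustrative:

```python
# Minimal sketch of "counts-as" bridge rules: institutional predicates
# are imposed on brute state descriptions (all names are illustrative).
BRIDGE_RULES = {
    # Brute facts about cars (wheels, motor) count as the institutional
    # fact of being a vehicle in the technical legal sense.
    'vehicle': lambda state: bool(state.get('has_wheels') and state.get('motorized')),
}

def institutional_facts(brute_state):
    """Return the institutional predicates that a brute state counts as."""
    return {pred for pred, rule in BRIDGE_RULES.items() if rule(brute_state)}
```

The point of the sketch is only the direction of the mapping: the institutional description is superadded to an already complete brute description, never the other way around.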
Institutions build structured\ndescriptions of institutional properties upon brute descriptions of a\ngiven domain.\nAt this point, the step toward eInstitutions is natural.\neInstitutions impose properties on the possible states of a MAS: they\nspecify what are the states in which an agent i enacts a role r; what are\nthe states in which a certain agent is violating the norms of the\ninstitution, etc. They do this via linking some institutional properties\nof the possible states and transitions of the system (e.g., agent i\nenacts role r) to some brute properties of those states and transitions\n(e.g., agent i performs protocol No.56). An institutional property\nis therefore a property of system states or system transitions (i.e.,\na state type or a transition type) that does not belong to a merely\ntechnical, or factual, description of the system.\nTo sum up, institution are viewed as sets of norms (normative\nsystem perspective), and norms are thought of as the imposition\nof an institutional description of the system upon its description in\nterms of brute properties. In a nutshell, institutions are impositions\nof institutional terminologies upon brute ones. The following\nsections provide a formal analysis of this thesis and show its\nexplanatory power in delivering a rigorous understanding of key features\nof institutions. Because of its suitability for representing complex\ndomain descriptions, the formal framework we will make use of\nis the one of Description Logics (DL). The use of such formalism\nwill also stress the idea of viewing institutions as the impositions\nof domain descriptions.\n2.1 Preliminaries: a very expressive DL\nThe description logic language enabling the necessary\nexpressivity expands the standard description logic language ALC ([3])\nwith relational operators ( ,\u25e6,\u00ac,id) to express complex transition\ntypes, and relational hierarchies (H) to express inclusion between\ntransition types. 
Following a notational convention common within DL, we denote this language by ALCH(⊔,∘,¬,id).

DEFINITION 1. (Syntax of ALCH(⊔,∘,¬,id))
Transition types and state types are defined by the following BNF:

  α := a | α ∘ α | α ⊔ α | ¬α | id(γ)
  γ := c | ⊥ | ¬γ | γ ⊓ γ | ∀α.γ

where a and c are atomic transition types and atomic state types, respectively.

It is worth providing the intuitive reading of a couple of the operators and constructs just introduced. In particular, ∀α.γ is to be read as: after all executions of transitions of type α, states of type γ are reached. The operator ∘ denotes the concatenation of transition types. The operator id applies to a state description γ and yields a transition description, namely the transition ending in γ states; it is the description-logic variant of the test operator of Dynamic Logic ([5]). Notice that boolean operators such as ¬ are written the same way for both state and transition types. Atomic state types c are often indexed by an agent identifier i in order to express agent properties (e.g., dutch(i)), and atomic transition types a are often indexed by a pair of agent identifiers (i, j) (e.g., PAY(i, j)) denoting the actor and, respectively, the recipient of the transition. By removing the agent identifiers from state types and transition types we obtain state type forms (e.g., dutch or rea(r)) and transition type forms (e.g., PAY).

A terminological box (henceforth TBox) T = ⟨Γ, A⟩ consists of a finite set Γ of state type inclusion assertions (γ1 ⊑ γ2) and a finite set A of transition type inclusion assertions (α1 ⊑ α2).

The semantics of ALCH(⊔,∘,¬,id) is model-theoretical and is given in terms of interpreted transition systems.
As usual, state types are interpreted as sets of states and transition types as sets of state pairs.

DEFINITION 2. (Semantics of ALCH(⊔,∘,¬,id))
An interpreted transition system m for ALCH(⊔,∘,¬,id) is a structure ⟨S, I⟩ where S is a non-empty set of states and I is a function such that:

  I(c) ⊆ S
  I(a) ⊆ S × S
  I(⊥) = ∅
  I(¬γ) = S \ I(γ)
  I(γ1 ⊓ γ2) = I(γ1) ∩ I(γ2)
  I(∀α.γ) = {s ∈ S | ∀t, (s, t) ∈ I(α) ⇒ t ∈ I(γ)}
  I(α1 ⊔ α2) = I(α1) ∪ I(α2)
  I(¬α) = (S × S) \ I(α)
  I(α1 ∘ α2) = {(s, s'') | ∃s', (s, s') ∈ I(α1) & (s', s'') ∈ I(α2)}
  I(id(γ)) = {(s, s) | s ∈ I(γ)}

An interpreted transition system m is a model of a state type inclusion assertion γ1 ⊑ γ2 if I(γ1) ⊆ I(γ2). It is a model of a transition type inclusion assertion α1 ⊑ α2 if I(α1) ⊆ I(α2). An interpreted transition system m is a model of a TBox T = ⟨Γ, A⟩ if m is a model of each inclusion assertion in Γ and A.

REMARK 1. (Derived constructs) The correspondence between description logic and dynamic logic is well-known ([3]). In fact, the language presented in Definitions 1 and 2 is a notational variant of the language of Dynamic Logic ([5]) without the iteration operator on transition types. As a consequence, some key constructs are still definable in ALCH(⊔,∘,¬,id). In particular, we will make use of the following definition of the if-then-else transition type:

  if γ then α1 else α2 = (id(γ) ∘ α1) ⊔ (id(¬γ) ∘ α2)

Boolean operators are defined as usual.

We will come back to some complexity features of this logic in Section 2.5.
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 629\n2.2 Institutions as terminologies\nWe have upheld that institutions impose new system\ndescriptions which are formulated in terms of sets of norms. The step\ntoward a formal grounding of this view of institutions is now short:\nnorms can be thought of as terminological axioms, and institutions\nas sets of terminological axioms, i.e., terminological boxes.\nAn institution can be specified as a terminological box Ins =\n\u0393ins, Ains , where each inclusion statement in \u0393ins and Ains\nmodels a norm of the institution. Obviously, not every TBox can\nbe considered to be an institution specification. In particular, an\ninstitution specification Ins must have some precise linguistic\nrelationship with the \u2018brute\" descriptions upon which the institution is\nspecified. We denote by Lins the non-logical alphabet containing\nonly institutional state and transition types, and by Lbrute the\nnonlogical alphabet containing those types taken to talk about, instead,\n\u2018brute\" states and transitions1\n.\nDEFINITION 3. (Institutions as TBoxes)\nA TBox Ins = \u0393ins, Ains is an institution specification if:\n1. The non-logical alphabet on which Ins is specified contains\nelements of both Lins and Lbrute. In symbols: L(Ins) \u2286\nLins \u222a Lbrute.\n2. There exist sets of terminological axioms \u0393bridge \u2286 \u0393ins\nand Abridge \u2286 Ains such that either the left-hand side of\nthese axioms is always a description expressed in Lbrute and\nthe right-hand side a description expressed in Lins, or those\naxioms are definitions. In symbols: if \u03b31 \u03b32 \u2208 \u0393bridge\nthen either \u03b31 \u2208 Lbrute and \u03b32 \u2208 Lins or it is the case\nthat also \u03b32 \u03b31 \u2208 \u0393bridge. The clause for Abridge is\nanalogous.\n3. The remaining sets of terminological axioms \u0393ins\\\u0393bridge\nand Ains\\Abridge are all expressed in Lins. 
In symbols:\nL(\u0393ins\\\u0393bridge) \u2286 Lins and L(Ains\\Abridge) \u2286 Lins.\nThe definition states that an institution specification needs to be\nexpressed on a language including institutional as well as brute terms\n(1); that a part of the specification concerns a description of mere\ninstitutional terms (3); and that there needs to be a part of the\nspecification which connects institutional terms to brute ones (2).\nTerminological axioms in \u0393bridge and Abridge formalize in DL the\nSearlean notion of counts-as conditional ([25]), that is, rules\nstating what kind of meaning an institution gives to certain brute facts\nand transitions (e.g., checking box No.4 in form No.2 counts as\naccepting your personal data to be used for research purposes). A\nformal theory of counts-as statements has been thoroughly\ndeveloped in a series of papers among which [10, 13]. The technical\ncontent of the present paper heavily capitalizes on that work.\nNotice also that given the semantics presented in Definition 2,\nif institutions can be specified via TBoxes then the meaning of\nsuch specifications is a set of interpreted transition systems, i.e.,\nthe models of those TBoxes. These transitions systems can be in\nturn thought of as all the possible MASs which model the specified\ninstitution.\nREMARK 2. (Lbrute from a designer\"s perspective) From a\ndesign perspective language Lbrute has to be thought of as the\nlanguage on which a designer would specify a system instantiating a\ngiven institution2\n. Definition 3 shows that for such a design task\n1\nSymbols from Lins and Lbrute will be indexed (especially with\nagent identifiers) to add some syntactic sugar.\n2\nTo make a concrete example, the AMELI middleware [7] can be\nviewed as a specification tool at a Lbrute level.\nit is needed to formally specify an explicit bridge between the\nconcepts used in the description of the actual system and the\ninstitutional \u2018abstract\" concepts. 
We will come back to this issue in Section 3.

2.3 From abstract to concrete norms

To illustrate Definition 3, and to show its explanatory power, an example follows which depicts an essential phenomenon of institutions.

EXAMPLE 1. (From abstract to concrete norms) Consider an institution supposed to regulate access to a set of public web services. It may contain the following norm: it is forbidden to discriminate access on the basis of citizenship. Suppose now a system has to be built which complies with this norm. The first question is: what does it mean, concretely, to discriminate on the basis of citizenship? The system designer should make some concrete choices for interpreting the norm, and these choices should be kept track of in order to explicitly link the abstract norm to its concrete interpretation. The problem can be represented as follows. The abstract norm is formalized by Formula 1 by making use of a standard reduction technique for deontic notions (see [16]): the statement "it is forbidden to discriminate on the basis of citizenship" amounts to the statement "after every execution of a transition of type DISCR(i, j) the system always ends up in a violation state". Together with the norm, some intuitive background knowledge about the discrimination action also needs to be formalized. Here, as well as in the rest of the examples in the paper, we provide just that part of the formalization which is strictly functional to showing how the formalism works in practice. Formulae 2 and 3 express two effect laws: if the requester j is Dutch then after all executions of transitions of type DISCR(i, j), j is accepted by i, whereas if j is not Dutch, all executions of transitions of the same type have as an effect that j is not accepted.
All formulae have to be read as schemata determining a finite number of subsumption expressions depending on the number of agents i, j considered.

∀DISCR(i, j).viol ≡ ⊤ (1)
dutch(j) ⊑ ∀DISCR(i, j).accepted(j) (2)
¬dutch(j) ⊑ ∀DISCR(i, j).¬accepted(j) (3)

The rest of the axioms concern the translation of the abstract type DISCR(i, j) to concrete transition types. Formula 4 refines it by making explicit that a precise if-then-else procedure counts as a discriminatory act of agent i. Formulae 5 and 6 specify which messages of i to j count as acceptance and rejection. If the designer uses transition types SEND(msg33, i, j) and SEND(msg38, i, j) for the concrete system specification, then Formulae 5 and 6 can be thought of as bridge axioms connecting notions belonging to the institutional alphabet (to accept, and to reject) to concrete ones (to send specific messages). Finally, Formulae 7 and 8 state two intuitive effect laws concerning the ACCEPT(i, j) and REJECT(i, j) types.

if dutch(j) then ACCEPT(i, j) else REJECT(i, j) ⊑ DISCR(i, j) (4)
SEND(msg33, i, j) ⊑ ACCEPT(i, j) (5)
SEND(msg38, i, j) ⊑ REJECT(i, j) (6)
∀ACCEPT(i, j).accepted(j) ≡ ⊤ (7)
∀REJECT(i, j).¬accepted(j) ≡ ⊤ (8)

It is easy to see, on the grounds of the semantics exposed in Definition 2, that the following concrete inclusion statement holds w.r.t. the specified institution:

if dutch(j) then SEND(msg33, i, j) else SEND(msg38, i, j) ⊑ DISCR(i, j) (9)

This scenario exemplifies a pervasive feature of human institutions which, as extensively argued in [10], should be incorporated by electronic ones.
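The step from Formulae 4-6 to Formula 9 is a substitution of concrete subtypes for abstract branches. A small illustrative sketch (our encoding, not the paper's calculus):

```python
# Illustrative sketch of why Formula 9 follows from Formulae 4-6:
# transition-type inclusions let concrete types be substituted for the
# abstract branches of the if-then-else procedure of Formula 4.

subsumptions = {                       # child subsumed by parent
    "SEND(msg33,i,j)": "ACCEPT(i,j)",  # Formula 5
    "SEND(msg38,i,j)": "REJECT(i,j)",  # Formula 6
}

def ancestors(t):
    """Return t together with every transition type subsuming it."""
    seen = {t}
    while t in subsumptions:
        t = subsumptions[t]
        seen.add(t)
    return seen

# Formula 4: "if dutch(j) then ACCEPT(i,j) else REJECT(i,j)" is subsumed by
# DISCR(i,j). Replacing each branch by one of its subtypes yields a procedure
# still subsumed by DISCR(i,j) -- which is exactly Formula 9:
concrete_ok = ("ACCEPT(i,j)" in ancestors("SEND(msg33,i,j)")
               and "REJECT(i,j)" in ancestors("SEND(msg38,i,j)"))
print(concrete_ok)  # True
```

The same check fails for a substitute type that is not below the required branch, which is the situation Example 2 below turns on.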
Current formal approaches to institutions, such as ISLANDER [6], do not allow for the formal specification of explicit translations of abstract norms into concrete ones, and focus only on norms that can be specified at the concrete system specification level. What Example 1 shows is that the problem of the abstractness of norms in institutions can be formally addressed and can be given a precise formal semantics.

The scenario suggests that it is possible for the designer to obtain a different institution by just modifying the sets of bridge axioms, without touching the terminological axioms expressed only in the institutional language Lins. In fact, a same set of abstract norms can be translated to different and even incompatible sets of concrete norms. This translation can nevertheless not be arbitrary ([1]).

EXAMPLE 2. (Acceptable and unacceptable translations of abstract norms) Reconsider the scenario sketched in Example 1. The transition type DISCR(i, j) has been translated to a complex procedure composed of concrete transition types. Would any translation do? Consider an alternative institution specification Ins′ containing Formulae 1-3 and the following translation rule:

PAY(j, i, e10) ⊑ DISCR(i, j) (10)

Would this formula be an acceptable translation of the abstract norm expressed in Formula 1? The axiom states that transitions where i receives e10 from j count as transitions of type DISCR(i, j). Needless to say, this is not intuitive, because the abstract transition type DISCR(i, j) obeys some intuitive conceptual constraints (Formulae 2 and 3) that all its translations should also obey.
In fact, the following inclusions would then hold in Ins′:

dutch(j) ⊑ ∀PAY(j, i, e10).accepted(j) (11)
¬dutch(j) ⊑ ∀PAY(j, i, e10).¬accepted(j) (12)

These properties of the transition type PAY(j, i, e10) look at least awkward: if an agent is Dutch then by paying e10 it would be accepted, while if it were not Dutch the same action would make it not accepted. The problem is that the meaning of 'paying' is not intuitively subsumed by the meaning of 'discriminating'. In other words, a transition type PAY(j, i, e10) does not intuitively yield the effects that a sub-type of DISCR(i, j) yields. It is on the contrary perfectly intuitive that Formula 9 obeys the constraints in Formulae 2 and 3, as can be easily checked on the grounds of the semantics.

It is worth stressing that without providing a model-theoretic semantics for the translation rules linking the institutional notions to the brute ones, it would not be so straightforward to model the logical constraints to which the translations are subjected (Example 2). This is precisely the advantage of viewing translation rules as specific terminological axioms, i.e., Γbridge and Abridge, working as a bridge between two languages (Definition 3). In [12], we have thoroughly compared this approach with approaches such as [9] which conceive of translation rules as inference rules.

The two examples have shown how our approach can account for some essential features of institutions. In the next section the same framework is applied to provide a formal analysis of the notion of role.

2.4 Institutional modules and roles

Viewing institutions as the impositions of institutional descriptions on systems' states and transitions allows for analyzing the normative system perspective itself (i.e., institutions are sets of norms) at a finer granularity.
We have seen that the terminological axioms specifying an institution concern complex descriptions of new institutional notions. Some of the institutional state types occurring in the institution specification play a key role in structuring the specification of the institution itself. The paradigmatic example in this sense ([25]) are facts such as "agent i enacts role r", which will be denoted by state types rea(i, r). By stating how an agent can enact and 'deact' a role r, and what normative consequences follow from the enactment of r, an institution describes expected forms of agents' behavior while at the same time abstracting from the concrete agents taking part in the system.

The sets of norms specifying an institution can be clustered on the grounds of the rea state types. For each relevant institutional state type (e.g., rea(i, r)), the terminological axioms which define an institution, i.e., its norms, can be clustered in (possibly overlapping) sets of three different types: the axioms specifying how states of that institutional type can be reached (e.g., how an agent i can enact the role r); how states of that type can be left (e.g., how an agent i can 'deact' the role r); and what kind of institutional consequences those states bear (e.g., what rights and powers agent i acquires by enacting role r). Borrowing the terminology from work in legal and institutional theory ([23, 25]), these clusters of norms can be called, respectively, institutive, terminative and status modules.

Status modules. We call status modules those sets of terminological axioms which specify the institutional consequences of the occurrence of a given institutional state-of-affairs, for instance, the fact that agent i enacts role r.

EXAMPLE 3.
(A status module for roles) Enacting a role within an institution bears some institutional consequences that are grouped under the notion of status: by playing a role an agent acquires a specific status. Some of these consequences are deontic and concern the obligations, rights, and permissions under which the agent puts itself once it enacts the role. An example which pertains to the normative description of the status of both the buyer and the seller roles is the following:

rea(i, buyer) ⊓ rea(j, seller) ⊓ win_bid(i, j, b) ⊑ ∀¬PAY(i, j, b).viol(i) (13)

If agent i enacts the buyer role and j the seller role and i wins bid b, then if i does not perform a transition of type PAY(i, j, b), i.e., does not pay to j the price corresponding to bid b, the system ends up in a state that the institution classifies as a violation state with i being the violator. Notice that Formula 13 formalizes at the same time an obligation pertaining to the role buyer and a right pertaining to the role seller. Of particular interest are then those consequences that attribute powers to agents enacting specific roles:

rea(i, buyer) ⊓ rea(j, seller) ⊑ ∀BID(i, j, b).bid(i, j, b) (14)
SEND(i, j, msg49) ⊑ BID(i, j, b) (15)

If agent i enacts the buyer role and j the seller role, every time agent i bids b to j this action results in an institutional state testifying that the corresponding bid has been placed by i (Formula 14). Formula 15 states how the bidding action can be executed by sending a specific message to j (SEND(i, j, msg49)).

Some observations are in order. As readers acquainted with deontic logic have probably already noticed, our treatment of the notion of obligation (Formula 13) makes again use of a standard reduction approach ([16]). More interesting is instead how the notion of institutional power is modeled.
Essentially, the empowerment phenomenon is analyzed in terms of two rules: one specifying the institutional effects of an institutional action (Formula 14), and one translating the institutional transition type into a brute one (Formula 15). Systems of rules of this type empower the agents enacting some relevant role by establishing a connection between the brute actions of the agents and some institutional effect. Whether the agents are actually able to execute the required 'brute' actions is a different issue, since agent i can be in some states (or even all states) unable to effectuate a SEND(i, j, msg49) transition. This is the case also in human societies: priests are empowered to give rise to marriages, but if a priest is not in a position to perform the required speech acts he is actually unable to marry anybody. There is a difference between being entitled to make a bid and being in a position to make a bid ([4]). In other words, Formulae 14 and 15 express only that agents playing the buyer role are entitled to make bids. The actual possibility of performing the required 'brute' actions is not an institutional issue, but rather an issue concerning the implementation of an institution in a concrete system. We address this issue extensively in Section 3³.

Institutive modules. We call institutive modules those sets of terminological axioms of an institution specification describing how states with certain institutional properties can be reached, for instance, how an agent i can reach a state in which it enacts role r. They can be seen as procedures that the institution defines in order for the agents to bring institutional states of affairs about.

EXAMPLE 4.
(An institutive module for roles) The fact that an agent i enacts a role r (rea(i, r)) is the effect of a corresponding enactment action ENACT(i, r) performed under certain circumstances (Formula 16), namely that the agent does not already enact the role, and that the agent satisfies given conditions (cond(i, r)), which might for instance pertain to the computational capabilities required for an agent to play the chosen role, or to its capability to interact with some specific system's infrastructures. Formula 17 specifies instead the procedure counting as an action of type ENACT(i, r). Such a procedure is performed through a system infrastructure s, which notifies i that it has been registered as enacting role r after i sends the necessary piece of data d (SEND(i, s, d)), e.g., a valid credit card number.

¬rea(i, r) ⊓ cond(i, r) ⊑ ∀ENACT(i, r).rea(i, r) (16)
SEND(i, s, d) ◦ NOTIFY(s, i) ⊑ ENACT(i, r) (17)

Terminative modules. Analogously, we call terminative modules those sets of terminological axioms stating how a state with certain institutional properties can be left. Rules of this kind state for instance how an agent can stop enacting a certain role. They can thus be thought of as procedures that the institution defines in order for the agent to see to it that certain institutional states stop holding.

EXAMPLE 5.
(A terminative module for roles) Terminative modules for roles specify, for instance, how a transition of type DEACT(i, r) can be executed which has as a consequence the reaching of a state of type ¬rea(i, r):

rea(i, r) ⊑ ∀DEACT(i, r).¬rea(i, r) (18)
SEND(i, s, msg9) ⊑ DEACT(i, r) (19)

That is to say, i deacting a role r always leads to a state where i does not enact role r; and i sending message No. 9 to a specific interface infrastructure s counts as i deacting role r.

Examples 3-5 have shown how roles can be formalized in our framework, thereby getting a formal semantics: roles are also sets of terminological axioms, concerning state types of the sort rea(i, r). It is worth noticing that this modeling option is aligned with work on social theory addressing the concept of role, such as [20].

2.5 Tractable specifications of institutions

In the previous sections we fully deployed the expressivity of the language introduced in Section 2.1 and used its semantics to provide a formal understanding of many essential aspects of institutions in terms of transition systems. This section spends a few words on the viability of performing automated reasoning in the logic presented. The satisfiability problem⁴ in logic ALCH(⊓,◦,¬,id) is undecidable, since transition type inclusion axioms correspond to a version of what in Description Logic are known as role-value maps, and logics extending ALC with role-value maps are known to be undecidable ([3]).

Tractable (i.e., polynomial time decidable) fragments of logic ALCH(⊓,◦,¬,id) can however be isolated which still exhibit some key expressive features. One of them is logic ELH(◦). It is obtained from description logic EL, which contains only state type intersection ⊓, existential restriction ∃ and ⊤⁵, extended with the ⊥ state type and with transition type inclusion axioms of a complex form: a1 ◦ ⋯ ◦ an ⊑ a (with n a finite number). Logic ELH(◦) is also a fragment of the well investigated description logic EL++, whose satisfiability problem has been shown in [2] to be decidable in polynomial time. Despite the very limited expressivity of this fragment, some rudimentary institutional specifications can still be successfully represented. Specifically, institutive and terminative modules can be represented which contain transition type inclusion axioms. Restricted versions of status modules can also be represented, enabling two essential deontic notions: it is possible (respectively, impossible) to reach a violation state by performing a transition of a certain type, and it is possible (respectively, impossible) to reach a legal state by performing a transition of a certain type. To this aim, language Lins would need to be expanded with a set of state types {legal(i)}0≤i≤n whose intuitive meaning is to denote legal states as opposed to states of type viol(i).

Fragments like ELH(◦) could be used as target logics within theory approximation approaches ([24]), by aiming at compiling TBoxes expressed in ALCH(⊓,◦,¬,id) into approximations in those fragments.

3. FROM NORMS TO STRUCTURES

3.1 Infrastructures

In discussing Example 3 we observed how being entitled to make a bid does not imply being in a position to make a bid. In other words, an institution can empower agents by means of appropriate rules, but this empowerment can remain dead letter. Similar

3 See in particular Example 6 and Definition 5.
4 This problem amounts to checking whether a state description γ is satisfiable w.r.t. a given TBox T, i.e., to checking if there exists a model m of T such that ∅ ⊂ I(γ). Notice that language ALCH(⊓,◦,¬,id) contains negation and intersection of arbitrary state types.
It is well-known that if these operators are available then all the most typical reasoning tasks at the TBox level can be reduced to the satisfiability problem.

5 Notice therefore that EL is a seriously restricted fragment of ALC, since it does not contain the negation operator for state types (operators ⊔ and ∀ thus remain undefinable).

observations apply also to deontic notions: agents might be allowed to perform certain transactions under some relevant conditions, but they might be unable to do so under those same conditions. We refer to this kind of problem as infrastructural. The implementation of an institution in a concrete system calls therefore for the design of appropriate infrastructures or artifacts ([19]). The formal specification of an infrastructure amounts to the formal specification of interaction requirements, that is to say, the specification of which relevant transition types are executable and under what conditions.

DEFINITION 4. (Infrastructures as TBoxes)
An infrastructure Inf = ⟨Γinf, Ainf⟩ for institution Ins is a TBox on Lbrute such that for all a ∈ L(Abridge) there exist terminological axioms in Γinf of the following form: γ ≡ ∃a.⊤ (a is executable exactly in γ states) and γ′ ≡ ∃¬a.⊤ (the negation of a is executable exactly in γ′ states).

In other words, an infrastructure specification states all and only the conditions under which an atomic brute transition type, and its negation, occurring in the brute alphabet of the bridge axioms of Ins are executable. It states what can concretely be done and under what conditions.

EXAMPLE 6. (Infrastructure specification) Consider the institution specified in Example 1.
A simple infrastructure Inf for that institution could contain, for instance, the following terminological axiom for any pair of different agents i, j and message type msg:

⊤ ≡ ∃SEND(msg33, i, j).⊤ (20)

The formula states that it is always within the possibilities of agent i to send message No. 33 to agent j. It then follows, on the grounds of Example 1, that agent i can always accept agent j:

⊤ ⊑ ∃ACCEPT(i, j).⊤ (21)

Notice that the executability condition is just ⊤.

We call a concrete institution specification CIns an institution specification Ins coupled with an infrastructure specification Inf.

DEFINITION 5. (Concrete institution)
The concrete institution obtained by joining the institution Ins = ⟨Γins, Ains⟩ and the infrastructure Inf = ⟨Γinf, Ainf⟩ is the TBox CIns = ⟨Γ, A⟩ such that Γ = Γins ∪ Γinf and A = Ains ∪ Ainf.

Obviously, different infrastructures can be devised for a same institution, giving rise to different concrete institutions, which makes precise implementation choices explicit. Of particular relevance are the implementation choices concerning abstract norms like the one represented in Formula 13. A designer can choose to regiment such a norm ([15]), i.e., make violation states unreachable, via an appropriate infrastructure.

EXAMPLE 7. (Regimentation via infrastructure specification) Consider Example 3 and suppose the following translation rule to be also part of the institution:

BNK(i, j, b) ⊔ CC(i, j, b) ≡ PAY(i, j, b) (22)
condition_pay(i, j, b) ≡ rea(i, buyer) ⊓ rea(j, seller) ⊓ win_bid(i, j, b) (23)

The first formula states how the payment can be concretely carried out (via bank transfer or credit card) and the second just provides a concrete label grouping the institutional state types relevant for the norm. In order to specify a regimentation at the infrastructural level it is enough to state that:

condition_pay(i, j, b) ≡ ∃(BNK(i, j, b) ⊔ CC(i, j, b)).⊤ (24)
¬condition_pay(i, j, b) ≡ ∃¬(BNK(i, j, b) ⊔ CC(i, j, b)).⊤ (25)

In other words, in states of type condition_pay(i, j, b) the only executable brute actions are BNK(i, j, b) or CC(i, j, b) and, therefore, PAY(i, j, b) would necessarily be executed. As a result, the following inclusion does not hold with respect to the corresponding concrete institution: condition_pay(i, j, b) ⊑ ∃¬PAY(i, j, b).viol(i).

3.2 Organizational Structures

This section briefly summarizes and adapts the perspective and results on organizational structures presented in [14, 11]. We refer to that work for a more comprehensive exposition.

Organizational structures typically concern the way agents interact within organizations. These interactions can be depicted as the links of a graph defined on the set of roles of the organization. Such links are then to be labeled on the basis of the type of interaction they stand for. First of all, it should be clear whether a link denotes that a certain interaction between two roles can, or ought to, or may, etc., take place. Secondly, links should be labeled according to the transition type α they refer to and the conditions γ in which that transition can, ought to, may, etc., take place. Links in a formal specification of an organizational structure stand therefore for statements of the kind: role r can (ought to, may) execute α w.r.t. role s if γ is the case. For the sake of simplicity, the following definition will consider only the can and ought-to interaction modalities. State and transition types in Lins ∪ Lbrute will be used to label the links of the structure. Interaction modalities can therefore be of an institutional kind or of a brute kind.

DEFINITION 6.
(Organizational structure)
An organizational structure is a multi-graph:

OS = ⟨Roles, {Cp}p∈Mod, {Op}p∈Mod⟩

where:

• Mod denotes a set of pairs p = γ : α, that is, a set of state type (condition) and transition type (action) pairs of Lins ∪ Lbrute, with α being an atomic transition type indexed with a pair (i, j) denoting placeholders for the actor and the recipient of the transition;

• C (can) denotes links to be interpreted in terms of the executability of the related α in γ, whereas O (ought) denotes links to be interpreted in terms of the obligation to execute the related α in γ.

By the expressions (r, s) ∈ Cγ:α and (r, s) ∈ Oγ:α we therefore mean: agents enacting role r can and, respectively, ought to interact with agents enacting role s by performing α in states of type γ.

As shown in [11], such formal representations of organizational structures are of use for investigating the structural properties (robustness, flexibility, etc.) that a given organization exhibits.

At this point all the formal means are put in place which allow us to formally represent institutions as well as organizational structures. The next and final step of the work consists in providing a formal relation between the two frameworks. This formal relation will make explicit how institutions are related to organizational structures and vice versa. In particular, it will become clear how a normative conception of the notion of role relates to a structural one, that is, how the view of roles as sets of norms (specifying how an agent can enact and deact the role, and what social status it obtains by doing that) relates to the view of roles as positions within social structures.

3.3 Relating institutions to organizations
To translate a given concrete institution into a corresponding organizational structure we need a function t assigning pairs of roles to axioms. Let us denote by Sub the set of all state type inclusion statements γ1 ⊑ γ2 that can be expressed on Lins ∪ Lbrute. Function t is a partial function Sub ⇀ Roles × Roles such that, for any x ∈ Sub, if x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability) or x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) (obligation), then t(x) = (r, s), where α is an atomic transition type indexed with a pair (i, j). That is to say, executability and obligation laws containing the enactment configuration rea(i, r) ⊓ rea(j, s) as a premise and concerning transitions of type α, with i the actor and j the recipient of the α transition, are translated into role pairs (r, s).

DEFINITION 7. (Correspondence of specifications)
A concrete institution CIns = ⟨Γ, A⟩ is said to correspond to an organizational structure OS (and vice versa) if, for every x ∈ Γ:

• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ iff t(x) ∈ Cγ:α
• x = rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) iff t(x) ∈ Oγ:α

Intuitively, function t takes axioms from Γ (i.e., the set of state type terminological axioms of CIns) and yields pairs of roles. Definition 7 labels the yielded pairs according to the syntactic form of the translated axioms. More concretely, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∃α.⊤ (executability laws) are translated into the pair (r, s) belonging to the executability dimension (i.e., C) of the organizational structure w.r.t. the execution of α under circumstances γ.
Analogously, axioms of the form rea(i, r) ⊓ rea(j, s) ⊓ γ ⊑ ∀¬α.viol(i) (obligation laws) are translated into the pair (r, s) belonging to the obligation dimension (i.e., O) of the organizational structure w.r.t. the execution of α under circumstances γ. Leaving technicalities aside, function t thus distills the terminological and infrastructural constraints of CIns into structural ones. The institutive, terminative and status modules of roles are translated into definitions of positions within an OS.

From a design perspective the interpretation of Definition 7 is twofold. On the one hand (from left to right), it can make explicit what the structural consequences are of a given institution supported by a given infrastructure. On the other hand (from right to left), it can make explicit what kind of institution is actually implemented by a given organizational structure. Let us see this in some more detail.

Given a concrete institution CIns, Definition 7 allows a designer to be aware of the impact that specific terminological choices (in particular, the choice of certain bridge axioms) and infrastructural ones have at a structural level. Notice that Definition 7 supports the inference of links in a structure. By checking whether a given inclusion statement of the relevant syntactic form follows from CIns (i.e., the so-called subsumption problem of DL) it is possible, via t, to add new links to the corresponding organizational structure. This can be done recursively by just adding any newly inferred inclusion x to the previous set of axioms Γ, thus obtaining an updated institutional specification containing Γ ∪ {x}. This process can be thought of as the inference of structural links from institutional specifications. In other words, it is possible to use institution specifications as inference tools for structural specifications.
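The translation t of Definition 7 can be sketched as a simple mapping from axioms to labeled links. The encoding below is ours and purely illustrative (the paper gives t abstractly, not as code):

```python
# Hypothetical encoding of the translation function t (Definition 7); axiom
# and role names are illustrative, not prescribed by the paper.
from collections import defaultdict

def translate(axioms):
    """Map executability laws ("exec": premise rea(i,r), rea(j,s), condition
    gamma, action alpha) and obligation laws ("oblig": same shape) to the
    C and O link families of an organizational structure OS = (Roles, C, O)."""
    C, O = defaultdict(set), defaultdict(set)
    for kind, r, s, gamma, alpha in axioms:
        target = C if kind == "exec" else O   # C: can-links, O: ought-links
        target[(gamma, alpha)].add((r, s))
    return C, O

# The executability law discussed for Example 7 ("TOP" stands for the trivial
# condition):
C, O = translate([("exec", "buyer", "seller", "TOP", "PAY(i,j,b)")])
print(("buyer", "seller") in C[("TOP", "PAY(i,j,b)")])  # True
```

Running the inferred inclusions of a concrete institution through such a map yields exactly the link set that Definition 7 requires of a corresponding OS.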
For instance, the infrastructural choice formalized in Example 7 implies that for the pair of roles (buyer, seller) it is always the case that (buyer, seller) ∈ C⊤:PAY(i,j,b). This link follows from the links (buyer, seller) ∈ C⊤:BNK(i,j,b) and (buyer, seller) ∈ C⊤:CC(i,j,b) on the grounds of the bridge axioms of the institution (Formula 22).

Suppose now that a designer is interested in a system which, besides implementing an institution, also incorporates an organizational structure enjoying desirable structural properties such as flexibility or robustness⁶. By relating structural links to state type inclusions it is therefore possible to check whether adding a link in OS results in a stronger institutional specification, that is, whether the corresponding inclusion statement is not already implied by Ins. To draw a parallel with what was just said in the previous paragraph, this process can be thought of as the inference of norms and infrastructural constraints from the specification of organizational structures. To give a simple example, consider again Example 6 but from a reversed perspective. Suppose a designer wants a fully connected graph in the dimension C⊤:SEND(i,j) of the organizational structure. Exploiting Definition 7, we would obtain a number of executability laws in the fashion of Formula 20 for all roles in Roles (thus 2|Roles| axioms).

Definition 7 establishes a correspondence between two essentially different perspectives on the design of open systems, allowing feedbacks between the two to be formally analyzed. One last observation is in order. While given a concrete institution an organizational structure can in principle be fully specified (by checking, for all (finitely many) relevant inclusion statements, whether they are implied or not by the institution), it is not possible to obtain a full terminological specification from an organizational structure.
This is due to the fact that in Definition 6 the strictly terminological information contained in the specification of an institution (eminently, the set of transition type axioms A and therefore the bridge axioms) is lost in moving to a structural description. This shows, in turn, that the added value of the specification of institutions lies precisely in the terminological link they establish between institutional and brute, i.e., system level, notions.

4. CONCLUSIONS

The paper aimed at providing a comprehensive formal analysis of the institutional metaphor and its relation to the organizational one. The predominant formal tool has been description logic. TBoxes have been used to represent the specifications of institutions (Definition 3) and their infrastructures (Definition 4), providing therefore a transition system semantics for a number of institutional notions (Examples 1-7). Multi-graphs have then been used to represent the specification of organizational structures (Definition 6). The last result presented concerned the definition of a formal correspondence between institution and organization specifications (Definition 7), which provides a formal way of switching between the two paradigms. All in all, these results deliver a way of relating abstract system specifications (i.e., institutions as sets of norms) to specifications that are closer to an implemented system (i.e., organizational structures).

5. REFERENCES

[1] G. Azzoni. Il cavallo di Caligola. In Ontologia sociale, potere deontico e regole costitutive, pages 45-54. Quodlibet, Macerata, Italy, 2003.
[2] F. Baader, S. Brandt, and C. Lutz. Pushing the EL envelope. In Proceedings of IJCAI'05, Edinburgh, UK, 2005. Morgan-Kaufmann Publishers.
[3] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. Patel-Schneider. The Description Logic Handbook. Cambridge Univ. Press, Cambridge, 2002.
[4] C. Castelfranchi.
The micro-macro constitution of power. ProtoSociology, 18:208-268, 2003.\n6 In [11] it is shown how these and analogous properties can be precisely measured within the type of structures presented in\nDefinition 6.\n634 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n[5] D. Harel, D. Kozen, and J. Tiuryn. Dynamic logic. In D. Gabbay and F. Guenthner, editors, Handbook of\nPhilosophical Logic: Volume II, pages 497-604. Reidel, Dordrecht, 1984.\n[6] M. Esteva, D. de la Cruz, and C. Sierra. ISLANDER: an electronic institutions editor. In Proceedings of AAMAS'02,\npages 1045-1052, New York, NY, USA, 2002. ACM Press.\n[7] M. Esteva, J. Rodr\u00edguez-Aguilar, B. Rosell, and J. Arcos. AMELI: An agent-based middleware for electronic\ninstitutions. In Proceedings of AAMAS'04, New York, US, July 2004.\n[8] M. S. Fox. An organizational view of distributed systems. IEEE Trans. Syst. Man Cyber., 11(1):70-80, 1981.\n[9] C. Ghidini and F. Giunchiglia. A semantics for abstraction. In R. de M\u00e1ntaras and L. Saitta, editors, Proceedings of\nECAI'04, pages 343-347, 2004.\n[10] D. Grossi, H. Aldewereld, J. V\u00e1zquez-Salceda, and F. Dignum. Ontological aspects of the implementation of\nnorms in agent-based electronic institutions. Computational & Mathematical Organization Theory, 12(2-3):251-275,\nApril 2006.\n[11] D. Grossi, F. Dignum, V. Dignum, M. Dastani, and L. Royakkers. Structural evaluation of agent organizations.\nIn Proceedings of AAMAS'06, pages 1110-1112, Hakodate, Japan, May 2006. ACM Press.\n[12] D. Grossi, F. Dignum, and J.-J. C. Meyer. Context in categorization. In L. Serafini and P. Bouquet, editors,\nProceedings of CRR'05, volume 136 of CEUR Workshop Proceedings, Paris, June 2005.\n[13] D. Grossi, J.-J. Meyer, and F. Dignum. Classificatory aspects of counts-as: An analysis in modal logic. Journal of Logic\nand Computation, October 2006. doi: 10.1093/logcom/exl027.\n[14] J. F. 
H\u00fcbner, J. S. Sichman, and O. Boissier. Moise+: Towards a structural, functional and deontic model for MAS\norganization. In Proceedings of AAMAS'02, Bologna, Italy, July 2002. ACM Press.\n[15] A. J. I. Jones and M. Sergot. On the characterization of law and computer systems: The normative systems perspective.\nDeontic Logic in Computer Science, pages 275-307, 1993.\n[16] J. Krabbendam and J.-J. C. Meyer. Contextual deontic logics. In P. McNamara and H. Prakken, editors, Norms, Logics and\nInformation Systems, pages 347-362, Amsterdam, 2003. IOS Press.\n[17] J.-J. Meyer, F. de Boer, R. M. van Eijk, K. V. Hindriks, and W. van der Hoek. On programming KARO agents. Logic\nJournal of the IGPL, 9(2), 2001.\n[18] D. C. North. Institutions, Institutional Change and Economic Performance. Cambridge University Press, Cambridge, 1990.\n[19] A. Omicini, A. Ricci, A. Viroli, C. Castelfranchi, and L. Tummolini. Coordination artifacts: Environment-based\ncoordination for intelligent agents. In Proceedings of AAMAS'04, 2004.\n[20] I. P\u00f6rn. Action Theory and Social Science. Some Formal Models. Reidel Publishing Company, Dordrecht, The\nNetherlands, 1977.\n[21] S. Pufendorf. De Jure Naturae et Gentium. Amsterdam, 1688. English translation, Clarendon, 1934.\n[22] A. S. Rao and M. P. Georgeff. Modeling rational agents within a BDI-architecture. In J. Allen, R. Fikes, and\nE. Sandewall, editors, Proceedings of KR'91, pages 473-484. Morgan Kaufmann: San Mateo, CA, USA, 1991.\n[23] D. W. P. Ruiter. A basic classification of legal institutions. Ratio Juris, 10:357-371, 1997.\n[24] M. Schaerf and M. Cadoli. Tractable reasoning via approximation. Artificial Intelligence, 74(2):249-310, 1995.\n[25] J. Searle. The Construction of Social Reality. Free Press, 1995.\n[26] J. V\u00e1zquez-Salceda. The Role of Norms and Electronic Institutions in Multi-Agent Systems. Birkh\u00e4user Verlag AG,\n2004.\nThe Sixth Intl. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 635", "keywords": "dynamic logic;terminological axiom;institutional norm;formal method;institution;formalism for representing organizational structure;property;role;organizational structure;infrastructure;norm;logic;description logic;abstract constraint;entity"}
-{"name": "test_I-34", "title": "Resolving Conflict and Inconsistency in Norm-Regulated Virtual Organizations", "abstract": "Norm-governed virtual organizations define, govern and facilitate coordinated resource sharing and problem solving in societies of agents. With an explicit account of norms, openness in virtual organizations can be achieved: new components, designed by various parties, can be seamlessly accommodated. We focus on virtual organizations realised as multi-agent systems, in which human and software agents interact to achieve individual and global goals. However, any realistic account of norms should address their dynamic nature: norms will change as agents interact with each other and their environment. Due to the changing nature of norms or due to norms stemming from different virtual organizations, there will be situations when an action is simultaneously permitted and prohibited, that is, a conflict arises. Likewise, there will be situations when an action is both obliged and prohibited, that is, an inconsistency arises. We introduce an approach, based on first-order unification, to detect and resolve such conflicts and inconsistencies. In our proposed solution, we annotate a norm with the set of values their variables should not have in order to avoid a conflict or an inconsistency with another norm. Our approach neatly accommodates the domain-dependent interrelations among actions and the indirect conflicts/inconsistencies these may cause. More generally, we can capture a useful notion of inter-agent (and inter-role) delegation of actions and norms associated to them, and use it to address conflicts/inconsistencies caused by action delegation. We illustrate our approach with an e-Science example in which agents support Grid services.", "fulltext": "1. INTRODUCTION\nVirtual organizations (VOs) facilitate coordinated resource\nsharing and problem solving involving various parties geographically\nremote [9]. 
VOs define and regulate interactions (thus facilitating coordination) among software and/or human agents that\ncommunicate to achieve individual and global goals [16]. VOs are realised as multi-agent systems, and a most desirable feature of such systems is\nopenness, whereby new components designed by other parties are seamlessly accommodated. The use of norms, that is, prohibitions,\npermissions and obligations, in the specification and operation of multi-agent systems (MASs) is a promising approach to achieving\nopenness [2, 4, 5, 6]. Norms regulate the observable behaviour of self-interested, heterogeneous software agents, designed by\nvarious parties who may not entirely trust each other [3, 24]. However, norm-regulated VOs may experience problems when norms\nassigned to their agents are in conflict (i.e., an action is simultaneously prohibited and permitted) or inconsistent (i.e., an action is\nsimultaneously prohibited and obliged).\nWe propose a means to automatically detect and solve conflict and inconsistency in norm-regulated VOs. We make use of\nfirst-order term unification [8] to find out if and how norms overlap in their influence (i.e., the agents and values of parameters in agents'\nactions that norms may affect). This allows for a fine-grained solution whereby the influence of conflicting or inconsistent norms is\ncurtailed for particular sets of values. For instance, the norms agent x is permitted to send_bid(ag1, 20) and agent ag2 is prohibited\nfrom doing send_bid(y, z) (where x, y, z are variables and ag1, ag2, 20 are constants) are in conflict because their agents, actions\nand terms (within the actions) unify. We solve the conflict by annotating norms with sets of values their variables cannot have, thus\ncurtailing their influence. In our example, the conflict is avoided if we require that variable y cannot be ag1 and that z cannot be 20.\nThis paper is organized as follows. 
In the next section we\nprovide a minimalistic definition for norm-regulated VOs. In section 3\nwe formally define norm conflicts, and explain how they are\ndetected and resolved. In section 4 we describe how the machinery\nof the previous section can be adapted to detect and resolve norm\ninconsistencies. In section 5 we describe how our curtailed norms\nare used in norm-aware agent societies. In section 6 we explain\nhow our machinery can be used to detect and solve indirect\nconflicts/inconsistencies, that is, those caused via relationships among\nactions; we extend and adapt the machinery to accommodate the\ndelegation of norms. In section 7 we illustrate our approach with\nan example of norm-regulated software agents serving the Grid. In\nsection 8 we survey related work and in section 9 we discuss our\ncontributions and give directions for future work.\n644\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\n2. VIRTUAL ORGANIZATIONS\nVirtual organizations [17] allow various parties to come together\nto share resources and engage in problem solving. This paradigm\nhas found strong applications in Web-service orchestration [14],\ne-Science [16] and the Grid [9]. VOs, in their most generic\nformulation, can be seen as coordination artifacts, allowing software and\nhuman agents to engage in sophisticated forms of interaction.\nWe formally represent our VOs as finite-state machines in which\nthe actions of individual agents label the edges between discrete\nstates. This provides us with a lowest common denominator:\nthere are much more sophisticated, convenient and expressive ways\nto represent interactions among agents (e.g., AUML [19] and\nelectronic institutions [20], to name a few), but for the sake of\ngeneralising our approach, we shall assume any higher-level formalism\ncan be mapped onto a finite-state machine (possibly with some loss\nof expressiveness). We show in Figure 1 a simple VO graphically\nrepresented as a finite-state machine1\n. 
[Figure 1: Sample VO as a Finite-State Machine -- states 0, 1 and 2 (2 final); a loop labelled p(X) on state 0, an edge labelled q(Y,Z) from state 0 to state 1, and an edge labelled s(A,B) from state 1 to state 2.]\nThe labels on the edges connecting the states are first-order atomic formulae, denoted generically as \u03d5; they stand for actions performed by individual agents.\nWe define our virtual organizations as follows:\nDEF. 1. A virtual organization I is the tuple S, s0, E, T where S = {s1, . . . , sn} is a finite and non-empty set of states, s0 \u2208 S\nis the initial state, E is a finite set of edges (s, s', \u03d5), s, s' \u2208 S, connecting s to s' with a first-order atomic formula \u03d5 as a label,\nand T \u2286 S is the set of terminal states.\nNotice that edges are directed, so (s, t, \u03d5) \u2260 (t, s, \u03d5). The sample VO of Figure 1 is formally represented as I = {0, 1, 2}, 0, {(0, 0,\np(X)), (0, 1, q(Y, Z)), (1, 2, s(A, B))}, {2} . We assume an implicit existential quantification on any variables in \u03d5, so that, for\ninstance, s(A, B) stands for \u2203A, B s(A, B).\nVOs should allow for two kinds of non-determinism, corresponding to choices autonomous agents can make, viz., i) the one\narising when there is more than one edge leaving a state; and ii) the one arising from variables in the formulae \u03d5 labelling an edge, which\nthe agent carrying out the action instantiates. These kinds of non-determinism are desirable as they help define generic and\nflexible coordination mechanisms.\nAnother important concept we use is the roles of agents in VOs. Roles, as exploited in, for instance, [18] and [20], help us\nabstract from individual agents and define a pattern of behaviour to which any agent that adopts a role ought to conform. Moreover,\nall agents with the same role are guaranteed the same rights, duties and opportunities. We shall make use of two finite, non-empty sets,\nAgents = {ag1, . . . , agn} and Roles = {r1, . . . , rm}, representing, respectively, the sets of agent identifiers and role labels. 
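As an illustration, Definition 1 and the sample VO of Figure 1 can be encoded directly. The Python rendering below is our own sketch (the paper does not prescribe an implementation); variables are written as capitalised strings, following the Prolog convention adopted later in the text.

```python
from typing import NamedTuple

class VO(NamedTuple):
    """Definition 1: a VO is a tuple (S, s0, E, T) -- illustrative encoding only."""
    states: frozenset     # S, finite non-empty set of states
    initial: int          # s0, a member of S
    edges: frozenset      # triples (s, s', phi), phi a first-order atomic formula
    terminals: frozenset  # T, subset of S

# Atomic formulae as tuples: ("p", "X") stands for p(X); capitalised
# strings are variables (Prolog convention).
vo = VO(
    states=frozenset({0, 1, 2}),
    initial=0,
    edges=frozenset({(0, 0, ("p", "X")),
                     (0, 1, ("q", "Y", "Z")),
                     (1, 2, ("s", "A", "B"))}),
    terminals=frozenset({2}),
)

def outgoing(vo, s):
    """Edges leaving state s: the first kind of non-determinism (edge choice)."""
    return {e for e in vo.edges if e[0] == s}
```

Edges are directed triples, so (0, 1, phi) and (1, 0, phi) are distinct values, matching the remark after Definition 1.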
We\nrefer generically to first-order terms, i.e., constants, variables, and\n(nested) functions as \u03c4.\n2.1 Semantics of VOs\nThe specification of a VO as a finite-state machine gives rise\nto a possibly infinite set of histories of computational behaviours,\nin which the actions labelling the paths from the initial state to a\nfinal state are recorded. Although the actions comprising a VO are\ncarried out distributedly, we propose an explicit global account of\nall events. In practice, this can be achieved if we require individual\n1\nWe adopt Prolog\"s convention [1] and use strings starting with a capital letter to\nrepresent variables and strings starting with a small letter to represent constants.\nagents to declare/inform whatever actions they have carried out;\nthis assumes trustworthy agents, naturally2\n.\nIn order to record the authorship of the action, we annotate the\nformulae with the agents\" unique identification. Our explicit global\naccount of all events is a set of ground atomic formulae \u03d5, that\nis, we only allow constants to appear as terms of formulae. Each\nformula is a truthful record of an action specified in the VO. Notice,\nhowever, that in the VO specification we do not restrict the syntax\nof the formulae: variables may appear in them, and when an agent\nperforms an actual action then any variables of the specified action\nmust be assigned values. We thus define:\nDEF. 2. A global execution state of a VO, denoted as \u039e, is a\nfinite, possibly empty, set of tuples a : r, \u00af\u03d5, t where a \u2208 Agents\nis an agent identifier, r \u2208 Roles is a role label, \u00af\u03d5 is a ground\nfirst-order atomic formula, and t \u2208 IN is a time stamp.\nFor instance, ag1:buyer, p(a, 34), 20 states that agent ag1\nadopting role buyer performed action p(a, 34) at instant 20. 
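Definition 2's execution states, and the successor function h* described next, can be sketched as follows (our encoding, not the paper's; instantiation of edge-label variables by the acting agent is elided for brevity):

```python
# Global execution state Xi: a set of tuples (agent, role, formula, time).
# Formulae are tuples; we record the edge label as-is, eliding the
# instantiation of its variables by the acting agent.
edges = {(0, 0, ("p", "X")),
         (0, 1, ("q", "Y", "Z")),
         (1, 2, ("s", "A", "B"))}

def h_star(edges, xi, s, agent, role, t):
    """All possible next execution states from VO-state s, one per outgoing
    edge -- a sketch of h*(I, Xi, s) with the acting agent, role and time
    stamp fixed by the caller."""
    return [frozenset(xi) | {(agent, role, phi, t)}
            for (s1, _s2, phi) in edges if s1 == s]

# From the initial state, agent ag1 (role buyer, at instant 20) has two
# possible successor execution states, one per edge leaving state 0.
nexts = h_star(edges, set(), 0, "ag1", "buyer", 20)
```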
Given a VO\nI = S, s0, E, T , an execution state \u039e and a state s \u2208 S, we can\ndefine a function which obtains a possible next execution state, viz.,\nh(I, \u039e, s) = \u039e \u222a { a:r, \u00af\u03d5, t }, for one (s, s , \u03d5) \u2208 E. Such\nfunction h must address the two kinds of non-determinism above, as\nwell as the choice on the potential agents that can carry out the\naction and their adopted roles. We also define a function to compute\nthe set of all possible execution states, h\u2217\n(I, \u039e, s) = {\u039e \u222a { a:\nr, \u00af\u03d5, t }|(s, s , \u03d5) \u2208 E}.\n2.2 Norm-Regulated VOs\nWe advocate a separation of concerns whereby the virtual\norganization is complemented with an explicit and separate set of norms\nthat further regulates the behaviour of agents as they take part in\nthe enactment of an organization. The freedom of choice given to\nagents (captured via the non-determinism of VOs, explained above)\nmust be curtailed in some circumstances. For instance, we might\nneed to describe that whoever carried out \u03d5 is obliged to carry out\n\u03d5 , so that if there is a choice point in which \u03d5 appears as a label\nof an edge, then that edge should be followed.\nRather than embedding such normative aspects into the agents\"\ndesign (say, by explicitly encoding normative aspects in the agents\"\nbehaviour) or into the VO itself (say, by addressing exceptions and\ndeviant behaviour in the mechanism itself), we adopt the view that\na VO should be supplemented with a separate set of norms that\nfurther regulates the behaviour of agents as they take part in the\nenactment of the organization. This separation of concerns should\nfacilitate the design of MASs; however, the different components\n(VOs and norms) must come together at some point in the design\nprocess. Our norms are defined as below:\nDEF. 3. 
A norm, generically referred to as \u03bd, is any construct\nof the form O\u03c4:\u03c4 \u03d5, P\u03c4:\u03c4 \u03d5, or F\u03c4:\u03c4 \u03d5, where \u03c4, \u03c4 are either\nvariables or constants and \u03d5 is a first-order atomic formula.\nWe adopt the notation of [18]: O\u03c4:\u03c4 \u03d5 represents an obligation on\nagent \u03c4 taking up role \u03c4 to bring about \u03d5; we recall that \u03c4, \u03c4 are\nvariables, constants and functions applied to (nested) terms. P\u03c4:\u03c4 \u03d5\nand F\u03c4:\u03c4 \u03d5 stand for, respectively, a permission and a prohibition\non agent \u03c4, playing role \u03c4 to bring about \u03d5. We shall assume that\nsorts are used to properly manipulate variables for agent identifiers\nand role labels.\nWe propose to formally represent the normative positions of all\nagents enacting a VO. By normative position we mean the\nsocial burden associated to individuals [12], that is, their obligations,\npermissions and prohibitions:\n2\nNon-trustworthy agents can be accommodated in this proposal, if we associate to\neach of them a governor agent which supervises the actions of the external agent and\nreports on them. This approach was introduced in [12] and is explained in section 5.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 645\nDEF. 4. A global normative state \u03a9 is a finite and possibly\nempty set of tuples \u03c9 = \u03bd, td, ta, te where \u03bd is a norm as above\nand td, ta, te \u2208 IN are, respectively, the time when \u03bd was\ndeclared (introduced), when \u03bd becomes active and when \u03bd expires,\ntd \u2264 ta < te.\nIt is worth noticing that we do not require the atomic formulae\nof norms to be ground: there could be variables in them. 
We\nassume an implicit universal quantification on the variables A, R\nof norms XA:R\u03d5 (for the deontic modalities X \u2208 {O, P, F}), so\nthat, for instance, PA:Rp(X, b, c) stands for \u2200A \u2208 Agents.\u2200R \u2208\nRoles.\u2203X.PA:Rp(X, b, c). We also refer to the tuples in \u03a9 as\nnorms.\nGlobal normative states complement the execution states of VOs\nwith information on the normative positions of individual agents.\nWe can relate them via a function to obtain a norm-regulated next\nexecution state of a VOs, that is, g(I, \u039e, s, \u03a9, t) = \u039e , t\nstanding for the time of the update. For instance, we might want all\nprohibited actions to be excluded from the next execution state,\nthat is, g(I, \u039e, s, \u03a9, t) = \u039e \u222a { a :r, \u00af\u03d5, t }, (s, s , \u03d5) \u2208 E and\nFa:r\u03d5, td, ta, te \u2208 \u03a9, ta \u2264 t \u2264 te. We might equally wish that\nonly permitted actions be chosen for the next execution state. We\ndo not legislate, or indeed recommend, any particular way to\nregulate VOs. We do, however, offer simple underpinnings to allow\narbitrary policies to be put in place.\nIn the same way that a normative state is useful to obtain the\nnext execution state of a VO, we can use an execution state to\nupdate a normative state. For instance, we might want to remove any\nobligation specific to an agent and role, which has been carried\nout by that specific agent and role, that is, f(\u039e, \u03a9) = \u03a9 \u2212 Obls,\nObls = { Oa:r\u03d5, td, ta, te \u2208 \u03a9| a:r, \u00af\u03d5, t \u2208 \u039e}.\nThe management (i.e., creation and updating) of global\nnormative states is an interesting area of research. A simple but useful\napproach is reported in [11]: production rules generically depict\nhow norms should be updated to reflect what agents have done and\nwhich norms currently hold. 
In this paper our focus is not to\npropose how \u03a9\"s should be managed; we assume some mechanism\nwhich does that.\n3. NORM CONFLICTS\nWe now define means to detect and resolve norm conflicts and\ninconsistencies. We make use of the concept of unification [1, 8]\nof first-order terms \u03c4, i.e., constants, variables or (nested) functions\nwith terms as parameters. Initially we define substitutions:\nDEF. 5. A substitution \u03c3 is a finite and possibly empty set of\npairs x/\u03c4, where x is a variable and \u03c4 is a term.\nWe define the application of a substitution as:\n1. c \u00b7 \u03c3 = c for a constant c\n2. x \u00b7 \u03c3 = \u03c4 \u00b7 \u03c3 if x/\u03c4 \u2208 \u03c3; otherwise x \u00b7 \u03c3 = x\n3. pn\n(\u03c40, . . . , \u03c4n) \u00b7 \u03c3 = pn\n(\u03c40 \u00b7 \u03c3, . . . , \u03c4n \u00b7 \u03c3).\n4. (X\u03c41:\u03c42 \u03d5) \u00b7 \u03c3 = X(\u03c41\u00b7\u03c3):(\u03c42\u00b7\u03c3)(\u03d5 \u00b7 \u03c3)\n5. \u03bd, td, ta, te \u00b7 \u03c3 = (\u03bd \u00b7 \u03c3), td, ta, te\nWhere X generically refers to any of the deontic modalities O, P, F.\nUnification between two terms \u03c4, \u03c4 consists of finding a\nsubstitution \u03c3 (also called, in this context, the unifier of \u03c4 and \u03c4 ) such\nthat \u03c4 \u00b7 \u03c3 = \u03c4 \u00b7 \u03c3. Many algorithms have been proposed to solve\nthe unification problem, a fundamental issue in automated theorem\nproving [8], and more recent work provides very efficient ways to\nobtain unifiers. We shall make use of the following definition:\nDEF. 6. Relationship unify(\u03c4, \u03c4 , \u03c3) holds iff there is a\npossibly empty \u03c3 such that \u03c4 \u00b7 \u03c3 = \u03c4 \u00b7 \u03c3.\nWe also define the unification of atomic formulae as unify(pn\n(\u03c40,\n. . . , \u03c4n), pn\n(\u03c40, . . . , \u03c4n), \u03c3) which holds iff \u03c4i \u00b7 \u03c3 = \u03c4i \u00b7 \u03c3, 0 \u2264\ni \u2264 n. 
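Definitions 5 and 6 can be rendered executably. The sketch below is our illustration, not the paper's code: terms are constants, capitalised-string variables, or tuples (functor, arg1, ..., argn), and the unifier omits the occurs check that a production implementation would need.

```python
def is_var(t):
    # Prolog convention: strings starting with a capital letter are variables.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, sigma):
    # Follow variable bindings x/tau recorded in the substitution sigma.
    while is_var(t) and t in sigma:
        t = sigma[t]
    return t

def unify(t1, t2, sigma):
    """Return a substitution extending sigma under which t1 and t2 are equal,
    or None -- a minimal sketch of Definition 6 (no occurs check)."""
    t1, t2 = walk(t1, sigma), walk(t2, sigma)
    if t1 == t2:
        return sigma
    if is_var(t1):
        return {**sigma, t1: t2}
    if is_var(t2):
        return {**sigma, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):          # the functor unifies like a constant
            sigma = unify(a, b, sigma)
            if sigma is None:
                return None
        return sigma
    return None

# The introductory example: send_bid(ag1, 20) and send_bid(Y, Z) unify.
sigma = unify(("send_bid", "ag1", 20), ("send_bid", "Y", "Z"), {})
```

Here sigma is {Y/ag1, Z/20}; requiring that Y cannot be ag1 and Z cannot be 20 is precisely the curtailment discussed in the introduction.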
The unify relationship checks if a substitution \u03c3 is indeed\na unifier for \u03c4, \u03c4 but it can also be used to find such \u03c3. We assume\nthat unify is a suitable implementation of a unification algorithm\nwhich i) always terminates (possibly failing, if a unifier cannot be\nfound); ii) is correct; and iii) has a linear computational complexity.\n3.1 Conflict Detection\nA norm conflict arises when an atomic formula labelling an edge\nin the VO, i.e. an action, is simultaneously permitted and\nprohibited [13]. In this case, both norms are in conflict with regard to\ntheir agents, roles and parameters (terms) of specific actions. We\npropose to use unification to detect when a prohibition and a\npermission overlap and to employ the unifier to resolve the conflict.\nFor instance, PA:Rp(c, X) and Fa:bp(Y, Z) are in conflict as they\nunify under \u03c3 = {A/a, R/b, Y/c, X/d}). If, however, the\nvariables in Fa:bp(Y, Z) do not get the values in \u03c3 then there will be\nno conflicts. We thus propose to annotate the prohibitions in \u03a9\nwith unifiers, called here conflict sets, and use these annotations\nto determine what the variables of the prohibition cannot be in\nfuture unifications in order to avoid a conflict. Each prohibition\nis henceforth regarded as having such an annotation, denoted as\n(F\u03c41:\u03c42 \u03d5) \u03a3c, td, ta, te . Initially, this annotation is empty.\nWe propose to curtail the influence of prohibitions, thus giving\nagents more choices in the actions they may perform. A similar\napproach could be taken whereby permissions are curtailed, thus\nlimiting the available agents\" actions. Each of these policies is\npossible: we do not legislate over any of them nor do we give\npreference over any. In this paper we are interested in formalising such\npolicies within a simple mathematical framework. A prohibition\ncan be in conflict with various permissions in \u03a9. 
We, therefore,\nhave to find the maximal set of conflicting pairs of permissions and\nprohibitions in \u03a9, by performing a pairwise inspection. This\nrequires identifying the substitution between two pairs of norms that\ncharacterises a conflict. This is formally captured by the following\ndefinition:\nDEF. 7. A conflict arises between two tuples \u03c9, \u03c9 \u2208 \u03a9 under\na substitution \u03c3, denoted as cflct(\u03c9, \u03c9 , \u03c3), iff the following\nconditions hold:\n1. \u03c9 = (F\u03c41:\u03c42 \u03d5) \u03a3c, td, ta, te , \u03c9 = P\u03c41:\u03c42\n\u03d5 , td, ta, te\n2. unify(\u03c41, \u03c41, \u03c3), unify(\u03c42, \u03c42, \u03c3), and unify(\u03d5, \u03d5 , \u03c3)\n3. |te \u2212 te| \u2264 |ta \u2212 ta|\nThat is, a prohibition and a permission conflict (condition 1) if,\nand only if, the agents and roles they apply to and their actions,\nrespectively, unify under \u03c3 (condition 2) and their activation\nperiods overlap (condition 3). Substitution \u03c3, the conflict set,\nunifies the agents, roles and atomic formulae of a permission and a\nprohibition. The annotation \u03a3c does not play any role when\ndetecting conflicts, but, as we show below, we have to update the\nannotation to reflect new curtailments to solve conflicts. For\ninstance, cflct( (Fa:bp(Y, d)) \u2205, 1, 3, 5 , PA:Rp(c, X), 2, 3, 4 ,\n{A/a, R/b, Y/c, Z/X}) holds. We define below how we obtain\nthe set of conflicting norms of a normative state \u03a9:\nDEF. 8. The finite, possibly empty set of conflicting norms of a\nnormative state \u03a9, denoted as CFLS(\u03a9), is defined as\nCFLS(\u03a9) = { \u03c9, \u03c9 , \u03c3 |\u03c9, \u03c9 \u2208 \u03a9, cflct(\u03c9, \u03c9 , \u03c3)}\n3.2 Conflict Resolution\nA fine-grained way of resolving conflict can be done via\nunification. We detect the overlapping of the norms\" influences, i.e. how\nthey affect the behaviours of agents in the VO, and we curtail the\n646 The Sixth Intl. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\ninfluence of the prohibition. We illustrate with Venn diagrams in Figure 2 the overlap of norm influences (left), which characterises a\nconflict, and the curtailment necessary to resolve the conflict (right).\n[Figure 2: Overlap of Influence (Left) and Curtailment (Right) -- two boxes within the space of possible values for p(X, Y), one for PA:Rp(c, X) and one for Fa:bp(Y, Z); on the left they intersect, on the right the prohibition's box has been moved out of the intersection.]\nThe illustration shows the space of possible values for p(X, Y) and two portions of this space defining the scope of influence of norms\nPA:Rp(c, X) and Fa:bp(Y, Z). The scopes of these norms overlap, illustrated by the intersection of boxes on the left, in actions with\nvalues, for instance, a, b, p(c, 2) , . . . , a, b, p(c, n) . The curtailment of the prohibition eliminates the intersection: it moves the\nscope of the norm influence to outside the influence of the permission. If there were multiple overlaps among one prohibition and\nvarious permissions, which is likely to happen, then the prohibition will be multiply curtailed to move the scope of the norm to avoid\nall intersections.\nThe algorithm shown in Figure 3 depicts how we obtain a conflict-free set of norms. It maps an existing set \u03a9 possibly with conflicting norms onto a new set \u03a9' in which the conflicts (if any) are resolved.\nalgorithm conflictResolution(\u03a9, \u03a9')\ninput \u03a9\noutput \u03a9'\nbegin\n\u03a9' := \u03a9\nfor each \u03c9 \u2208 \u03a9' s.t. \u03c9 = (Fa:r \u00af\u03d5) \u03a3c, td, ta, te do\nif \u03c9, \u03c9', \u03c3 \u2208 CFLS(\u03a9') then \u03a9' := \u03a9' \u2212 {\u03c9}\nend for\nfor each \u03c9 \u2208 \u03a9' s.t. \u03c9 = (F\u03c41:\u03c42 \u03d5) \u03a3c, td, ta, te do\n\u03a3MAXc := the union of all {\u03c3c} such that \u03c9, \u03c9', \u03c3c \u2208 CFLS(\u03a9')\n\u03a9' := (\u03a9' \u2212 {\u03c9}) \u222a { (F\u03c41:\u03c42 \u03d5) (\u03a3c \u222a \u03a3MAXc), td, ta, te }\nend for\nend\nFigure 3: Algorithm to Resolve Conflicts in a Set of Norms\n
The algorithm forms \u03a9' as a set that is conflict-free: this means that prohibitions are annotated with a conflict set that\nindicates which bindings for variables have to be avoided.\nInitially, \u03a9' is set to be \u03a9. The algorithm operates in two stages. In the first stage (first for each loop), we remove all conflicting\nprohibitions \u03c9 = (Fa:r \u00af\u03d5) \u03a3c, td, ta, te with ground agent/role pairs a : r and ground formulae \u00af\u03d5: the only way to resolve\nconflicts arising from such prohibitions is to remove them altogether, as we cannot curtail a fully ground norm. In the second stage\n(second for each loop), the remaining prohibitions in \u03a9' are examined: the set CFLS(\u03a9') contains all conflicts between permissions and\nthe remaining prohibitions in \u03a9', represented as tuples \u03c9, \u03c9', \u03c3c , with \u03c3c representing the conflict set. As a prohibition may have\nconflicts with various permissions, the set CFLS(\u03a9') may contain more than one tuple for each prohibition. In order to provide an \u03a9'\nthat reflects all these conflicts for a specific prohibition, we have to form \u03a3MAXc containing all conflict sets \u03c3c for a given\nprohibition \u03c9. The maximal set is used to update the annotation of the prohibition.\nIt is important to explain the need for updating the conflict set of prohibitions. Normative states change as a result of agents'\nactions [11]: existing permissions, prohibitions and obligations are revoked and/or new ones are put in place as a result of agents'\ninteractions with the environment and other agents. Whenever new norms are added we must check for new conflicts and\ninconsistencies. If we only apply our algorithm to a pair consisting of an old and a new norm, then some re-processing of pairs of old norms\n(which were dealt with before) can be saved.
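The two stages can be prototyped compactly. The following Python sketch is our rendering of the Figure 3 algorithm, not the authors' code: norm tuples become dictionaries, the temporal condition of Definition 7 is elided, and the unifier is a minimal, occurs-check-free one.

```python
def is_var(t):
    # Prolog convention: capitalised strings are variables.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    # Minimal first-order unification (no occurs check); returns dict or None.
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

def norm(mod, ag, role, phi, td, ta, te):
    # mod is "F", "P" or "O"; sigma_c is the annotation (list of conflict sets).
    return {"mod": mod, "ag": ag, "role": role, "phi": phi,
            "td": td, "ta": ta, "te": te, "sigma_c": []}

def is_ground(w):
    return all(not is_var(t) for t in (w["ag"], w["role"], *w["phi"][1:]))

def cfls(omega):
    # CFLS(Omega): <prohibition, permission, sigma> triples (Definitions 7/8);
    # the activation-period condition is elided in this sketch.
    out = []
    for w in omega:
        for w2 in omega:
            if w["mod"] == "F" and w2["mod"] == "P":
                s = unify((w["ag"], w["role"], w["phi"]),
                          (w2["ag"], w2["role"], w2["phi"]), {})
                if s is not None:
                    out.append((w, w2, s))
    return out

def conflict_resolution(omega):
    # First loop: ground conflicting prohibitions cannot be curtailed -- drop them.
    bad = [w for (w, _, _) in cfls(omega) if is_ground(w)]
    omega = [w for w in omega if w not in bad]
    # Second loop: annotate each remaining prohibition with all its conflict
    # sets (the prohibition dictionaries are updated in place).
    for (w, _, s) in cfls(omega):
        w["sigma_c"].append(s)
    return omega
```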
The removal of norms from the set \u03a9 is dealt with efficiently: each permission to be removed must first be checked for conflicts with any existing\nprohibition (re-processing can be avoided if we record the conflict, instead of detecting it again). If there is a conflict, then the conflict set will\nhave been recorded in the prohibition's annotation; this conflict set is thus removed from the prohibition's annotation. The removal\nof obligations follows a similar process. Prohibitions are removed without the need to consider their relationships with other norms.\nOur algorithm is correct in that it provides, for a given \u03a9, a new \u03a9' in which i) all ground prohibitions which conflict with\npermissions have been removed; and ii) all remaining annotated prohibitions (F\u03c4:\u03c4' \u00af\u03d5) \u03a3c, td, ta, te will not unify with any of the\npermissions in \u03a9', provided the unifier does not appear in \u03a3c. The first requirement is addressed by the first for each loop, which does\nprecisely this: it removes all ground prohibitions which unify with a permission. The second requirement is addressed by the second\nfor each loop: each prohibition has its annotation \u03a3c extended with \u03a3MAXc, thus accommodating the unifiers from all permissions that\nunify with the prohibition. It is easy to see that the algorithm always terminates: each of its two loops goes through a finite set,\nprocessing one element at a time. The set CFLS(\u03a9) is computed in a finite number of steps, as are the set operations performed within\neach loop. The algorithm has, however, quadratic complexity3, as the computation of CFLS(\u03a9) requires a pairwise comparison of\nall elements in \u03a9.\nWe illustrate our algorithm with the following example. 
Let there be the following global normative state \u03a9:\n{ (FA:Rp(X, Y)) {}, 2, 2, 9 , Pa:bp(c, d), 3, 4, 8 , (Fa:bp(c, d)) {}, 2, 4, 12 , Pe:f p(g, h), 3, 4, 9 }\nThe first loop removes the ground prohibition, thus obtaining the following \u03a9':\n{ (FA:Rp(X, Y)) {}, 2, 2, 9 , Pa:bp(c, d), 3, 4, 8 , Pe:f p(g, h), 3, 4, 9 }\nWe then have the following set of conflicting norms CFLS(\u03a9'):\n{ (FA:Rp(X, Y)) {}, 2, 2, 9 , Pa:bp(c, d), 3, 4, 8 , {A/a, R/b, X/c, Y/d} ,\n(FA:Rp(X, Y)) {}, 2, 2, 9 , Pe:f p(g, h), 3, 4, 9 , {A/e, R/f, X/g, Y/h} }\nFor each prohibition \u03c9 \u2208 \u03a9' we retrieve all elements \u03c9, \u03c9', \u03c3 \u2208 CFLS(\u03a9') and collect their \u03c3's in \u03a3MAXc. The final \u03a9' is thus:\n{ (FA:Rp(X, Y)) { {A/a, R/b, X/c, Y/d}, {A/e, R/f, X/g, Y/h} }, 2, 2, 9 , Pa:bp(c, d), 3, 4, 8 , Pe:f p(g, h), 3, 4, 9 }\nThe annotated set of conflict sets should be understood as a record of past unifications, which informs how prohibitions should be used\nin the future in order to avoid any conflicts with permissions. We show in Section 5.1 how annotations are used by norm-aware agents.\n4. NORM INCONSISTENCIES\nIf a substitution \u03c3 can be found that unifies an obligation and a prohibition, then a situation of norm inconsistency occurs [13].\nThe obligation demands that an agent performs an action that is forbidden. We can reuse the machinery, introduced above for\nresolving conflicts between permissions and prohibitions, in order to a) detect and b) resolve such inconsistencies. With Definition 7, we\n3 The combinatorial effort is not necessary anymore if instead we maintain a set of norms conflict-free: each time a new norm is to be introduced, we compare it\nwith the existing ones, thus making the maintenance process of linear complexity.\nThe Sixth Intl. Joint Conf. 
on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 647

express the nature of a conflict between a prohibition and a permission. Similarly, a situation of inconsistency can be defined by reusing this definition and replacing the P deontic modality with O. We can reuse the machinery for conflict resolution, developed previously, for resolving inconsistency. The conflict resolution algorithm can be applied without change to accumulate a maximal conflict set Σc^MAX for each prohibition in Ω that unifies with obligations.

5. NORM-AWARE AGENT SOCIETIES

We now describe how our norm-regulated VOs give rise to norm-aware agent societies. We address open and heterogeneous MASs: we accommodate external agents by providing each of them with a corresponding governor agent [12]. This is a kind of chaperon that interacts with an external agent, and observes and reports on its behaviour. We show our architecture in Figure 4 below: a number of external agents ag1, . . . , agn interact with their corresponding governor agents gov1, . . . , govn.

[Figure 4: Architecture for Norm-Aware Agent Societies - external agents ag1, . . . , agn paired with governor agents gov1, . . . , govn, which read from and write to a shared tuple space holding ⟨I, s, Ξ, Ω⟩, ⟨I, s', Ξ', Ω'⟩, · · ·]

The governor agents have access to the VO description I, the current state s of the VO enactment, the global execution state Ξ and the global normative state Ω. Governor agents are able to write to and read from a shared memory space (e.g., a blackboard-like solution implemented as a tuple space), updating the global configuration to reflect the dynamics of the VO enactment. Governor agents are necessary because we cannot anticipate or legislate over the design or behaviour of external agents.
We depict below how the pairs of governor/external agents work together: any non-deterministic choices on the VO are decided by the external agent; any normative aspects are considered by the governor agent. The governor agent represents the external agent within the VO. As such, it has the unique identifier of the external agent. The governor agent keeps an account of all roles the external agent is currently playing: in our VOs, it is possible for agents to take up more than one role simultaneously. We define in Figure 5 how governor agents work - we use a logic program for this purpose, showing the lines of our clauses numbered 1-9.

1 main(Id, Roles) ←
2     get_tuple(⟨I, s, Ξ, Ω⟩) ∧
3     terminate(Id, Roles, I, Ξ, Ω)
4 main(Id, Roles) ←
5     get_tuple(⟨I, s, Ξ, Ω⟩) ∧
6     filter_norms(Id, Roles, Ω, Ω_Id) ∧
7     discuss_norms(Id, Roles, I, s, Ξ, Ω_Id, Actions) ∧
8     update_tuple(Roles, Actions, NewRoles) ∧
9     main(Id, NewRoles)

Figure 5: Governor Agent as a Logic Program

The first clause (lines 1-3) depicts the termination condition: get_tuple/1 (line 2) retrieves ⟨I, s, Ξ, Ω⟩ from the shared tuple space and terminate/5 checks if the current VO enactment (recorded in Ξ) has come to an end. The team of governor agents synchronise their access to the tuple space [12], thus ensuring each has a chance to function.

The second clause (lines 4-9) depicts a generic loop when the termination condition of the first clause does not hold. In this case, the tuple is again retrieved (line 5) and the governor agent proceeds (line 6) to analyse the current global normative state Ω with a view to obtaining the subset Ω_Id ⊆ Ω of norms referring to agent Id under roles Roles. Predicate filter_norms/4 collects the norms which apply to agent Id (the governor agent's external agent).
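Read imperatively, the two clauses of Figure 5 amount to a fetch-filter-act-update loop. The sketch below is a free interpretation, not the paper's code: the tuple space and the external agent are stubbed, and every helper name (TupleSpace, applies_to, finished, and so on) is ours.

```python
class TupleSpace:
    # Toy stand-in for the shared space holding the tuple <I, s, Xi, Omega>.
    def __init__(self, tup):
        self.tup = tup
    def get(self):
        return self.tup
    def update(self, actions):
        I, s, xi, omega = self.tup
        self.tup = (I, s, xi + tuple(actions), omega)  # record actions in Xi

def governor(agent_id, roles, space, external, applies_to, finished):
    # Clause 1 (lines 1-3): stop when the enactment recorded in Xi has ended.
    # Clause 2 (lines 4-9): filter norms, decide actions, update, loop.
    while True:
        I, s, xi, omega = space.get()                   # get_tuple, lines 2/5
        if finished(xi):                                # terminate, line 3
            return roles
        omega_id = [n for n in omega
                    if applies_to(n, agent_id, roles)]  # filter_norms, line 6
        actions, roles = external(I, s, xi, omega_id)   # discuss_norms, line 7
        space.update(actions)                           # update_tuple, line 8
        # line 9: loop again with the new roles

# A one-step enactment: the external agent acts once, then Xi signals the end.
space = TupleSpace(("I", "s0", (), [("obl", "ag1"), ("prh", "ag2")]))
finished = lambda xi: len(xi) > 0      # pretend one recorded action ends the VO
applies_to = lambda norm, aid, roles: norm[1] == aid
external = lambda I, s, xi, norms: ([("pay", "ag1")], ["client", "payer"])
final_roles = governor("ag1", ["client"], space, external, applies_to, finished)
```

The loop returns the roles held when the enactment ends, just as the logic program terminates via its first clause.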
In line 7 the governor agent, in possession of the applicable norms as well as other relevant information, interacts with the external agent to decide on a set of Actions which are norm-compliant - these actions will be used to update (line 8) the global execution state Ξ. In the process of updating the state of execution, a new set of roles must be assigned to the external agent, represented as NewRoles. The governor agent keeps looping (line 9) using the identifier of the external agent and its new set of roles.

5.1 Using Annotated Norms

We now explain how annotated norms are used by norm-aware agents. We do so via the definition of predicate check/2, which holds if its first argument, a candidate action (in the format of the elements of Ξ of Def. 2), is within the influence of an annotated prohibition ω, its second parameter. The definition, as a logic program, is shown in Figure 6.

1 check(Action, ω) ←
2     Action = ⟨a:r, φ̄, t⟩ ∧
3     ω = ⟨(F_{τ1:τ2} φ) Σc, td, ta, te⟩ ∧
4     unify(a, τ1, σ) ∧ unify(r, τ2, σ) ∧ unify(φ̄, φ, σ) ∧
5     forall(σ', (σc ∈ Σc, unify(σc, σ, σ')), MGUs) ∧
6     MGUs = ∅ ∧
7     ta ≤ t ≤ te

Figure 6: Check if Action is within Influence of Curtailed Norm

It checks (line 4) if the agent identifier and role of the action unify with the appropriate terms τ1, τ2 of ω and if the actions φ̄, φ themselves unify, all under the same unifier σ. It then verifies (lines 5-6) that σ does not unify with any of the conflict sets in Σc. Finally, in line 7 it checks if the time of the action is within the norm's temporal influence. The verification of non-unification of σ with any element of Σc deserves an explanation.
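In code, check/2 can be approximated as follows. This is our own rendering (nested-tuple terms, uppercase-initial variables, a textbook unifier), and it simplifies lines 5-6 to a compatibility test between σ and each recorded conflict set, which suffices for ground actions.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, s):
    # Return an extension of substitution s unifying a and b, or None.
    while is_var(a) and a in s: a = s[a]
    while is_var(b) and b in s: b = s[b]
    if a == b: return s
    if is_var(a): return {**s, a: b}
    if is_var(b): return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None: return None
        return s
    return None

def check(action, prohibition):
    # Line 4: the action unifies with the prohibition under one sigma.
    # Lines 5-6: sigma must fall outside every recorded conflict set.
    # Line 7: the action's time lies within [ta, te].
    agent, role, phi, t = action
    (tau1, tau2, phibar), sigma_c, td, ta, te = prohibition
    sigma = unify((tau1, tau2, phibar), (agent, role, phi), {})
    if sigma is None:
        return False
    in_gap = any(all(sigma.get(v) == val for v, val in sc.items())
                 for sc in sigma_c)
    return (not in_gap) and ta <= t <= te

# F_{A:R} p(X,Y), curtailed so it no longer covers p(c,d) by agent a in role b
proh = (('A', 'R', ('p', 'X', 'Y')),
        [{'A': 'a', 'R': 'b', 'X': 'c', 'Y': 'd'}], 2, 2, 9)
```

Here check(('a', 'b', ('p', 'c', 'd'), 5), proh) fails - that action sits in the gap carved out by the annotation - while check(('a', 'b', ('p', 'z', 'w'), 5), proh) succeeds.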
The elements of Σc are unifiers stating what values the variables of the norm cannot have, that is, they represent gaps in the original scope of the norm's influence. The test thus equates to asking if the action is outside such gaps, that is, if the action is within the curtailed scope of influence of the norm.

6. ACTION CONFLICT & INCONSISTENCY

In our previous discussion, norm conflict and inconsistency were detected via a direct comparison of the atomic formulae representing the action. However, conflicts and inconsistencies may also arise indirectly via relationships among actions. For instance, if p(X) amounts to q(X, X), then norms P_{A:R} p(X) and F_{A:R} q(X, X) are in conflict, since P_{A:R} p(X) can be rewritten as P_{A:R} q(X, X) and we thus have both P_{A:R} q(X, X) and F_{A:R} q(X, X). In the discussion below we concentrate on norm conflict, but norm inconsistency can be dealt with similarly, if we change the deontic modality P for O.

Relationships among actions are domain-dependent. Different domains have distinct ways of relating their actions; engineers build ontologies to represent such relationships. We propose a simple means to account for such relationships and show how these can be connected to the mechanisms introduced above. Rather than making use of sophisticated formalisms for ontology construction, we employ a set of domain axioms, defined below:

Def. 9. The domain axioms, denoted as Δ, are a finite and possibly empty set of formulae φ → (φ1 ∧ · · · ∧ φn) where φ, φi, 1 ≤ i ≤ n, are atomic first-order formulae.

Our example above can be captured by Δ = {(p(X) → q(X, X)), (q(X, X) → p(X))}. By explicitly representing and manipulating domain knowledge we achieve generality: the very same machinery can be used with different domains.
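Detection modulo Δ can be sketched by closing each formula under the axioms before unification. Again a toy encoding of ours (tuples, uppercase-initial variables, a textbook unifier); the sketch performs a single rewriting step, enough for this Δ, and assumes the compared formulae are ground or renamed apart so no variable capture arises.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, s):
    while is_var(a) and a in s: a = s[a]
    while is_var(b) and b in s: b = s[b]
    if a == b: return s
    if is_var(a): return {**s, a: b}
    if is_var(b): return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None: return None
        return s
    return None

def substitute(t, s):
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(x, s) for x in t)
    return t

# Delta from the text: p(X) -> q(X, X) and q(X, X) -> p(X)
DELTA = [(('p', 'X'), [('q', 'X', 'X')]),
         (('q', 'X', 'X'), [('p', 'X')])]

def closure(formula):
    # The formula plus what the domain axioms rewrite it into (one step).
    out = [formula]
    for head, body in DELTA:
        s = unify(head, formula, {})
        if s is not None:
            out += [substitute(b, s) for b in body]
    return out

def indirect_conflict(prohibited, obliged):
    # Conflict detection modulo Delta: compare all rewritings pairwise.
    return any(unify(f, g, {}) is not None
               for f in closure(prohibited) for g in closure(obliged))
```

So a prohibition on q(c, c) clashes with an obligation on p(c), even though the two formulae do not unify directly.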
A set of norms can have different conflicts and inconsistencies, for distinct domains of application.", "keywords": "conflicting prohibition;external agent;governor agent;norm conflict;agent;norm-regulated vo;norm inconsistency;virtual organization;artificial social systems;multi-agent system"}
-{"name": "test_I-35", "title": "Distributed Norm Management in Regulated Multi-Agent Systems", "abstract": "Norms are widely recognised as a means of coordinating multi-agent systems. The distributed management of norms is a challenging issue and we observe a lack of truly distributed computational realisations of normative models. In order to regulate the behaviour of autonomous agents that take part in multiple, related activities, we propose a normative model, the Normative Structure (NS), an artifact that is based on the propagation of normative positions (obligations, prohibitions, permissions) as consequences of agents' actions. Within a NS, conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions. However, ensuring conflict-freedom of a NS at design time is computationally intractable. We show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets and borrowing well-known theoretical results from that field. Since online conflict resolution is required, we present a tractable algorithm to be employed distributedly. We then demonstrate that this algorithm is paramount for the distributed enactment of a NS.", "fulltext": "1. INTRODUCTION

A fundamental feature of open, regulated multi-agent systems in which autonomous agents interact is that participating agents are meant to comply with the conventions of the system. Norms can be used to model such conventions and hence as a means to regulate the observable behaviour of agents [6, 29]. There are many contributions on the subject of norms from sociologists, philosophers and logicians (e.g., [15, 28]). However, there are very few proposals for computational realisations of normative models - the way norms can be integrated in the design and execution of MASs. The few that exist (e.g. [10, 13, 24]) operate in a centralised manner which creates bottlenecks and single points-of-failure.
To our knowledge, no proposal truly supports the distributed enactment of normative environments. In our paper we approach that problem and propose means to handle conflicting commitments in open, regulated, multi-agent systems in a distributed manner. The type of regulated MAS we envisage consists of multiple, concurrent, related activities where agents interact. Each agent may concurrently participate in several activities, and change from one activity to another. An agent's actions within an activity may have consequences in the form of normative positions (i.e. obligations, permissions, and prohibitions) [26] that may constrain its future behaviour. For instance, a buyer agent who runs out of credit may be forbidden to make further offers, or a seller agent is obliged to deliver after closing a deal. We assume that agents may choose not to fulfill all their obligations and hence may be sanctioned by the MAS. Notice that, when activities are distributed, normative positions must flow from the activities in which they are generated to those in which they take effect. For instance, the seller's obligation above must flow (or be propagated) from a negotiation activity to a delivery activity.

Since in an open, regulated MAS one cannot embed normative aspects into the agents' design, we adopt the view that the MAS should be supplemented with a separate set of norms that further regulates the behaviour of participating agents. In order to model the separation of concerns between the coordination level (agents' interactions) and the normative level (propagation of normative positions), we propose an artifact called the Normative Structure (NS).

Within a NS conflicts may arise due to the dynamic nature of the MAS and the concurrency of agents' actions. For instance, an agent may be obliged and prohibited to do the very same action in an activity.

636 978-81-904262-7-5 (RPS) c 2007 IFAAMAS
Since the regulation of a MAS entails that participating agents need to be aware of the validity of those actions that take place within it, such conflicts ought to be identified and possibly resolved if a claim of validity is needed for an agent to engage in an action or be sanctioned. However, ensuring conflict-freedom of a NS at design time is computationally intractable. We show this by formalising the notion of conflict, providing a mapping of NSs into Coloured Petri Nets (CPNs) and borrowing well-known theoretical results from the field of CPNs.

We believe that online conflict detection and resolution is required. Hence, we present a tractable algorithm for conflict resolution. This algorithm is paramount for the distributed enactment of a NS.

The paper is organised as follows. In Section 2 we detail a scenario to serve as an example throughout the paper. Next, in Section 3 we formally define the normative structure artifact. Further on, in Section 4 we formalise the notion of conflict to subsequently analyse the complexity of conflict detection in terms of CPNs in Section 5. Section 6 describes the computational management of NSs by describing their enactment and presenting an algorithm for conflict resolution. Finally, we comment on related work, draw conclusions and report on future work in Section 7.

2. SCENARIO

We use a supply-chain scenario in which companies and individuals come together at an online marketplace to conduct business. The overall transaction procedure may be organised as six distributed activities, represented as nodes in the diagram in Figure 1. They involve different participants whose behaviour is coordinated through protocols.

[Figure 1: Activity Structure of the Scenario - a coordination model connecting the activities Registration, Negotiation, Contract, Payment, Delivery and Exit]

In this scenario agents can play one of four roles: marketplace accountant (acc), client, supplier (supp) and warehouse managers (wm). The arrows connecting the activities represent how agents can move from one activity to another. After registering at the marketplace, clients and suppliers get together in an activity where they negotiate the terms of their transaction, i.e. prices, amounts of goods to be delivered, deadlines and other details. In the contract activity, the order becomes established and an invoice is prepared. The client will then participate in a payment activity, verifying his credit-worthiness and instructing his bank to transfer the correct amount of money. The supplier in the meantime will arrange for the goods to be delivered (e.g. via a warehouse manager) in the delivery activity. Finally, agents can leave the marketplace conforming to a predetermined exit protocol. The marketplace accountant participates in most of the activities as a trusted provider of auditing tools.

In the rest of the paper we shall build on this scenario to exemplify the notion of normative structure and to illustrate our approach to conflict detection and resolution in a distributed setting.

3. NORMATIVE STRUCTURE

In MASs agents interact according to protocols which naturally are distributed. We advocate that actions in one such protocol may have an effect on the enactment of other protocols. Certain actions can become prohibited or obligatory, for example. We take normative positions to be obligations, prohibitions and permissions akin to work described in [26]. The intention of adding or removing a normative position we call normative command. Occurrences of normative positions in one protocol may also have consequences for other protocols¹.

In order to define our norm language and specify how normative positions are propagated, we have been inspired by multi-context systems [14]. These systems allow the structuring of knowledge into distinct formal theories and the definition of relationships between them.
The relationships are expressed as bridge rules - deducibility of formulae in some contexts leads to the deduction of other formulae in other contexts. Recently, these systems have been successfully used to define agent architectures [11, 23]. The metaphor translates to our current work as follows: the utterance of illocutions and/or the existence of normative positions in some normative scenes leads to the deduction of normative positions in other normative scenes. We are concerned with the propagation and distribution of normative positions within a network of distributed, normative scenes as a consequence of agents' actions. We take normative scenes to be sets of normative positions and utterances that are associated with an underlying interaction protocol corresponding to an activity.

In this section, we first present a simple language capturing these aspects and formally introduce the notions of normative scene, normative transition rule and normative structure. We give the intended semantics of these rules and show how to control a MAS via norms in an example.

3.1 Basic Concepts

The building blocks of our language are terms and atomic formulae:

Def. 1. A term, denoted as t, is (i) any constant expressed using lowercase (with or without subscripts), e.g. a, b0, c; or (ii) any variable expressed using uppercase (with or without subscripts), e.g. X, Y, Zb; or (iii) any function f(t1, . . . , tn), where f is an n-ary function symbol and t1, . . . , tn are terms.

Some examples of terms and functions are Credit, price or offer(bible, 30), being respectively a variable, a constant and a function. We will be making use of identifiers throughout the paper, which are constant terms, and also need the following definition:

Def. 2. An atomic formula is any construct p(t1, . . . , tn), where p is an n-ary predicate symbol and t1, . . .
, tn are terms. The set of all atomic formulae is denoted as Δ.

We focus on an expressive class of MASs in which interaction is carried out by means of illocutionary speech acts exchanged among participating agents:

Def. 3. Illocutions I are ground atomic formulae which have the form p(ag, r, ag', r', δ, t) where p is an element of a set of illocutionary particles (e.g. inform, request, offer); ag, ag' are agent identifiers; r, r' are role identifiers; δ, an arbitrary ground term, is the content of the message, built from a shared content language; t ∈ N is a time stamp.

¹Here, we abstract from protocols and refer to them generically as activities.

The intuitive meaning of p(ag, r, ag', r', m, t) is that agent ag playing role r sent message m to agent ag' playing role r' at time t. An example of an illocution is inform(ag4, supp, ag3, client, offer(wire, 12), 10). Sometimes it is useful to refer to illocutions that are not fully grounded, that is, those that may contain uninstantiated (free) variables. In the description of a protocol, for instance, the precise values of the message exchanged can be left unspecified. During the enactment of the protocol, agents will produce the actual values which will give rise to a ground illocution. We can thus define illocution schemata:

Def. 4. An illocution schema Ī is any atomic formula p(ag, r, ag', r', δ, t) in which some of the terms may either be variables or may contain variables.

3.2 Formal Definition of the Notion of NS

We first define normative scenes as follows:

Def. 5. A normative scene is a tuple s = ⟨ids, Δs⟩ where ids is a scene identifier and Δs is the set of atomic formulae δ (i.e.
utterances and normative positions) that hold in s. We will also refer to Δs as the state of normative scene s. For instance, a snapshot of the state of the delivery normative scene of our scenario could be represented as:

Δs = { utt(request(sean, client, kev, wm, receive(wire, 200), 20)),
       utt(accept(kev, wm, sean, client, receive(wire, 200), 30)),
       obl(inform(kev, wm, sean, client, delivered(wire, 200), 30)) }

That is, agent Sean taking up the client role has requested agent Kev (taking up the warehouse manager role wm) to receive 200kg of wire, and agent Kev is obliged to deliver 200kg of wire to Sean since he accepted the request. Note that the state of a normative scene Δs evolves over time.

These normative scenes are connected to one another via normative transitions that specify how utterances and normative positions in one scene affect other normative scenes. As mentioned above, activities are not independent, since illocutions uttered in some of them may have an effect on other ones. Normative transition rules define the conditions under which a normative command is generated. These conditions are either utterances or normative positions associated with a given protocol (denoted e.g. activity : utterance) which yield a normative command, i.e. the addition or removal of another normative position, possibly related to a different activity. Our transition rules are thus defined:

Def. 6. A normative transition rule R is of the form:

R ::= V ⇒ C
V ::= ids : D | V, V
D ::= N | utt(Ī)
N ::= per(Ī) | prh(Ī) | obl(Ī)
C ::= add(ids : N) | remove(ids : N)

where Ī is an illocution schema, N is a normative position (i.e. permission, prohibition or obligation), ids is an identifier for activity s and C is a normative command.

We endow our language with the usual semantics of rule-based languages [19].
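Defs. 5 and 6 translate into straightforward data structures; the sketch below uses our own names, with formulae encoded as tagged tuples such as ('obl', illocution):

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

Formula = tuple  # ('utt', I) | ('obl', I) | ('per', I) | ('prh', I)

@dataclass
class NormativeScene:
    id: str
    state: Set[Formula] = field(default_factory=set)  # Delta_s, evolves over time

@dataclass
class TransitionRule:
    lhs: List[Tuple[str, Formula]]      # V: conjuncts (scene id, utterance/norm)
    command: Tuple[str, str, Formula]   # C: ('add' | 'remove', scene id, norm N)

# The delivery snapshot shown above, as a NormativeScene value
delivery = NormativeScene("delivery", {
    ("utt", ("request", "sean", "client", "kev", "wm", ("receive", "wire", 200), 20)),
    ("utt", ("accept", "kev", "wm", "sean", "client", ("receive", "wire", 200), 30)),
    ("obl", ("inform", "kev", "wm", "sean", "client", ("delivered", "wire", 200), 30)),
})
```

Representing the state as a set of ground formulae makes rule triggering a matter of unifying LHS patterns against it.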
Rules map an existing normative structure to a new normative structure where only the state of the normative scenes changes. In the definitions below we rely on the standard concept of substitution [9].

Def. 7. A normative transition is a tuple b = ⟨idb, rb⟩ where idb is an identifier and rb is a normative transition rule.

We are proposing to extend the notion of MAS, regulated by protocols, with an extra layer consisting of normative scenes and normative transitions. This layer is represented as a bi-partite graph that we term normative structure. A normative structure relates normative scenes and normative transitions, specifying which normative positions are to be generated or removed in which normative scenes.

Def. 8. A normative structure is a labelled bi-partite graph NS = ⟨Nodes, Edges, L^in, L^out⟩. Nodes is a set S ∪ B where S is a set of normative scenes and B is a set of normative transitions. Edges is a set A^in ∪ A^out where A^in ⊆ S × B is a set of input arcs labelled with an atomic formula using the labelling function L^in : A^in → D; and A^out ⊆ B × S is a set of output arcs labelled with a normative position using the labelling function L^out : A^out → N. The following must hold:

1. Each atomic formula appearing in the LHS of a rule rb must be of the form (ids : D) where s ∈ S and D ∈ Δ, and ∃a^in ∈ A^in such that a^in = (s, b) and L^in(a^in) = D.

2. The atomic formula appearing in the RHS of a rule rb must be of the form add(ids : N) or remove(ids : N) where s ∈ S, and ∃a^out ∈ A^out such that a^out = (b, s) and L^out(a^out) = N.

3. ∀a ∈ A^in such that a = (s, b) and b = ⟨idb, rb⟩ and L^in(a) = D, (ids : D) must occur in the LHS of rb.

4.
∀a ∈ A^out such that a = (b, s) and b = ⟨idb, rb⟩ and L^out(a) = N, add(ids : N) or remove(ids : N) must occur in the RHS of rb.

The first two points ensure that every atomic formula appearing on the LHS of a normative transition rule labels an arc entering the appropriate normative transition in the normative structure, and that the atomic formula on the RHS labels the corresponding outgoing arc. Points three and four ensure that labels from all incoming arcs are used in the LHS of the normative transition rule that these arcs enter into, and that the labels from all outgoing arcs are used in the RHS of the normative transition rule that these arcs leave.

3.3 Intended Semantics

The formal semantics will be defined via a mapping to Coloured Petri Nets in Section 5.1. Here we start defining the intended semantics of normative transition rules by describing how a rule changes a normative scene of an existing normative structure, yielding a new normative structure. Each rule is triggered once for each substitution that unifies the left-hand side V of the rule with the state of the corresponding normative scenes. An atomic formula (i.e. an utterance or a normative position) holds iff it is unifiable with an utterance or normative position that belongs to the state of the corresponding normative scene. Every time a rule is triggered, the normative command specified on the right-hand side of that rule is carried out, intending to add or remove a normative position from the state of the corresponding normative scene. However, addition is not unconditional, as conflicts may arise. This topic will be treated in Sections 4 and 6.1.

3.4 Example

In our running example we have the following exemplary normative transition rule:

payment : obl(inform(X, client, Y, acc, pay(Z, P, Q), T)),
payment : utt(inform(X, client, Y, acc, pay(Z, P, Q), T))
⇒ delivery : add(obl(inform(Y, wm, X, client, delivered(Z, Q), T)))

That is, during the payment activity, an obligation on client X to inform accountant Y about the payment P of item Z at time T, together with the corresponding utterance which fulfills this obligation, allows the flow of a norm to the delivery activity. The norm is an obligation on agent Y (this time taking up the role of the warehouse manager wm) to send a message to client X that item Z has been delivered. We show in Figure 2 a diagrammatic representation of how activities and a normative structure relate:

[Figure 2: Activities and Normative Structure - the coordination level contains the activities Registration, Negotiation, Contract, Payment, Delivery and Exit; the normative level contains normative scenes for Contract, Payment and Delivery linked by a normative transition nt]

As illocutions are uttered during activities, normative positions arise. Utterances and normative positions are combined in transition rules, causing the flow of normative positions between normative scenes. The connection between the two levels is described in Section 6.2.

4. CONFLICT DEFINITION

The terms deontic conflict and deontic inconsistency have been used interchangeably in the literature. However, in this paper we adopt the view of [7], in which the authors suggest that a deontic inconsistency arises when an action is simultaneously permitted and prohibited - since a permission may not be acted upon, no real conflict occurs. The situations when an action is simultaneously obliged and prohibited are, however, deontic conflicts, as both obligations and prohibitions influence behaviours in a conflicting fashion. In this paper, the contents of normative positions are illocutions.
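To make concrete how such obliged-and-prohibited situations can come about, here is a sketch of the triggering semantics of Section 3.3 applied to the payment/delivery rule of Section 3.4: one firing per substitution that unifies the LHS with the scene states. The encoding (nested tuples, uppercase-initial variables, a textbook unifier) and all names are ours, not the paper's implementation.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, s):
    while is_var(a) and a in s: a = s[a]
    while is_var(b) and b in s: b = s[b]
    if a == b: return s
    if is_var(a): return {**s, a: b}
    if is_var(b): return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None: return None
        return s
    return None

def substitute(t, s):
    if is_var(t):
        return substitute(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(substitute(x, s) for x in t)
    return t

def fire(lhs, command, scenes):
    # Apply the normative command once per substitution unifying every
    # LHS conjunct (scene_id, pattern) with a fact in that scene's state.
    def matches(conds, s):
        if not conds:
            yield s
            return
        (scene_id, pattern), rest = conds[0], conds[1:]
        for fact in scenes[scene_id]:
            s2 = unify(pattern, fact, s)
            if s2 is not None:
                yield from matches(rest, s2)
    op, target, norm = command
    for s in list(matches(lhs, {})):
        ground = substitute(norm, s)
        (scenes[target].add if op == 'add' else scenes[target].discard)(ground)

pay = ('inform', 'X', 'client', 'Y', 'acc', ('pay', 'Z', 'P', 'Q'), 'T')
lhs = [('payment', ('obl', pay)), ('payment', ('utt', pay))]
cmd = ('add', 'delivery',
       ('obl', ('inform', 'Y', 'wm', 'X', 'client', ('delivered', 'Z', 'Q'), 'T')))

scenes = {'payment': {('obl', ('inform', 'sean', 'client', 'kev', 'acc',
                               ('pay', 'wire', 90, 200), 25)),
                      ('utt', ('inform', 'sean', 'client', 'kev', 'acc',
                               ('pay', 'wire', 90, 200), 25))},
          'delivery': set()}
fire(lhs, cmd, scenes)
```

The single unifying substitution {X/sean, Y/kev, Z/wire, P/90, Q/200, T/25} adds the corresponding delivery obligation; were a matching prohibition already present in that scene, the conflict defined below would ensue.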
Therefore, a normative conflict arises when an illocution is simultaneously obliged and prohibited.

We propose to use the standard notion of unification [9] to detect when a prohibition and an obligation overlap. For instance, an obligation obl(inform(A1, R1, A2, R2, p(c, X), T)) and a prohibition prh(inform(a1, r1, a2, r2, p(Y, d), T')) are in conflict as they unify under σ = {A1/a1, R1/r1, A2/a2, R2/r2, Y/c, X/d, T/T'}. We formally capture this notion:

Def. 9. A (deontic) conflict arises between two normative positions N and N' under a substitution σ, denoted as conflict(N, N', σ), if and only if N = prh(Ī), N' = obl(Ī') and unify(Ī, Ī', σ).

That is, a prohibition and an obligation are in conflict if, and only if, their illocutions unify under σ. The substitution σ, called here the conflict set, unifies the agents, roles and atomic formulae. We assume that unify is a suitable implementation of a unification algorithm which i) always terminates (possibly failing, if a unifier cannot be found); ii) is correct; and iii) has linear computational complexity.

Inconsistencies caused by the same illocution being simultaneously permitted and prohibited can be formalised similarly. In this paper we focus on prohibition/obligation conflicts, but the computational machinery introduced in Section 6.1 can equally be used to detect prohibition/permission inconsistencies, if we substitute modality obl for per.

5. FORMALISING CONFLICT-FREEDOM

In this section we introduce some background knowledge on CPNs, assuming a basic understanding of ordinary Petri nets. For technical details we refer the reader to [16]. We then map NSs to CPNs and analyse their properties.

CPNs combine the strength of Petri nets with the strength of functional programming languages. On the one hand, Petri nets provide the primitives for the description of the synchronisation of concurrent processes.
As noticed in [16], CPNs have a semantics which builds upon true concurrency, instead of interleaving. In our opinion, a true-concurrency semantics is easier to work with because it is the way we envisage the connection between the coordination level and the normative level of a multi-agent system. On the other hand, the functional programming languages used by CPNs provide the primitives for the definition of data types and the manipulation of their data values. Thus, we can readily translate expressions of a normative structure. Last but not least, CPNs have a well-defined semantics which unambiguously defines the behaviour of each CPN. Furthermore, CPNs have a large number of formal analysis methods and tools by which properties of CPNs can be proved. Summing up, CPNs provide us with all the necessary features to formally reason about normative structures, given that an adequate mapping is provided.

In accordance with Petri nets, the states of a CPN are represented by means of places. But unlike Petri nets, each place has an associated data type determining the kind of data which the place may contain. A state of a CPN is called a marking. It consists of a number of tokens positioned on the individual places. Each token carries a data value which has the type of the corresponding place. In general, a place may contain two or more tokens with the same data value. Thus, a marking of a CPN is a function which maps each place into a multi-set² of tokens of the correct type. One often refers to the token values as token colours and one also refers to the data types as colour sets. The types of a CPN can be arbitrarily complex.

Actions in a CPN are represented by means of transitions. An incoming arc into a transition from a place indicates that the transition may remove tokens from the corresponding place, while an outgoing arc indicates that the transition may add tokens.
The exact number of tokens and their data values are determined by the arc expressions, which are encoded using the programming language chosen for the CPN. A transition is enabled in a CPN if and only if all the variables in the expressions of its incoming arcs are bound to some value(s) (each one of these bindings is referred to as a binding element). If so, the transition may occur by removing tokens from its input places and adding tokens to its output places. In addition to the arc expressions, it is possible to attach a boolean guard expression (with variables) to each transition. Putting all the elements above together, we obtain a formal definition of CPN that shall be employed further ahead for mapping purposes.

²A multi-set (or bag) is an extension to the notion of set, allowing the possibility of multiple appearances of the same element.

Def. 10. A CPN is a tuple ⟨Σ, P, T, A, N, C, G, E, I⟩ where: (i) Σ is a finite set of non-empty types, also called colour sets; (ii) P is a finite set of places; (iii) T is a finite set of transitions; (iv) A is a finite set of arcs; (v) N is a node function defined from A into P × T ∪ T × P; (vi) C is a colour function from P into Σ; (vii) G is a guard function from T into expressions; (viii) E is an arc expression function from A into expressions; (ix) I is an initialisation function from P into closed expressions.

Notice that the informal explanation of the enabling and occurrence rules given above provides the foundations to understand the behaviour of a CPN. In accordance with ordinary Petri nets, the concurrent behaviour of a CPN is based on the notion of step. Formally, a step is a non-empty and finite multi-set over the set of all binding elements. Let step S be enabled in a marking M. Then, S may occur, changing the marking M to M'.
Moreover, we say that marking M′ is directly reachable from marking M by the occurrence of step S, and we denote it by M[S⟩M′. A finite occurrence sequence is a finite sequence of steps and markings M1[S1⟩M2 … Mn[Sn⟩Mn+1 such that n ∈ N and Mi[Si⟩Mi+1 for all i ∈ {1, …, n}. The set of all markings reachable for a net Net from a marking M is called its reachability set, denoted R(Net, M).

5.1 Mapping to Coloured Petri Nets

Our normative structure is a labelled bi-partite graph. The same is true for a Coloured Petri Net. We present a mapping f from one to the other in order to provide semantics for the normative structure and to prove properties about it using well-known theoretical results from work on CPNs. The mapping f makes use of correspondences between normative scenes and CPN places, between normative transitions and CPN transitions, and finally between arc labels and CPN arc expressions:

S → P
B → T
Lin ∪ Lout → E

The set of types is the singleton set containing the colour NP (i.e. Σ = {NP}). This complex type is structured as follows (we use CPN-ML [4] syntax):

color NPT = with Obl | Per | Prh | NoMod
color IP = with inform | declare | offer
color UTT = record
    illp : IP
    ag1, role1, ag2, role2 : string
    content : string
    time : int
color NP = record
    mode : NPT
    illoc : UTT

Modelling illocutions as norms without modality (NoMod) is a formal trick we use to ensure that sub-nets can be combined as explained below. Arcs are mapped almost directly: A is a finite set of arcs and N is a node function such that ∀a ∈ A. ∃a′ ∈ Ain ∪ Aout with N(a) = a′. The initialisation function I is defined as I(p) = Δs for every s ∈ S, where p is the place obtained from s under the mapping (recall that s = ⟨ids, Δs⟩). Finally, the colour function C assigns the colour NP to every place: C(p) = NP for all p ∈ P.
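A hypothetical Python rendering of this mapping may help fix ideas. The `UTT` and `NP` records mirror the CPN-ML declarations above; `map_structure` and its argument shapes are our own invention, standing in for the mapping f (scenes become places of colour NP, initialised with the scene's normative state Δs).

```python
from typing import NamedTuple

class UTT(NamedTuple):               # the CPN-ML record UTT, rendered in Python
    illp: str
    ag1: str; role1: str; ag2: str; role2: str
    content: str
    time: int

class NP(NamedTuple):                # colour NP: a deontic mode plus an illocution
    mode: str                        # 'Obl' | 'Per' | 'Prh' | 'NoMod'
    illoc: UTT

def map_structure(scenes, normative_transitions):
    """Mapping f, sketched: scenes -> places (all of colour NP, initialised
    with the scene's Delta_s); normative transitions -> CPN transitions."""
    places = set(scenes)
    colour = {s: NP for s in places}                           # C(p) = NP
    initial = {s: list(delta) for s, delta in scenes.items()}  # I(p) = Delta_s
    return places, colour, initial, set(normative_transitions)
```

The single colour set NP is what lets tokens from any normative scene flow through any normative transition, which is the point of the NoMod trick mentioned above.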
We are not making use of the guard function G. In future work, this function can be used to model constraints when we extend the expressiveness of our norm language.

5.2 Properties of Normative Structures

Having defined the mapping from normative structures to Coloured Petri Nets, we now look at properties of CPNs that help us understand the complexity of conflict detection. One question we would like to answer is whether, at a given point in time, a given normative structure is conflict-free. Such a snapshot of a normative structure corresponds to a marking in the mapped CPN.

Def. 11. A marking Mi is conflict-free if ¬∃p ∈ P. ∃np1, np2 ∈ Mi(p) such that np1.mode = Obl, np2.mode = Prh, and np1.illoc and np2.illoc unify under a valid substitution.

Another interesting question is whether a conflict will occur from such a snapshot of the system by propagating the normative positions. To answer this question, we first translate the snapshot of the normative structure to the corresponding CPN and then execute the finite occurrence sequence of markings and steps, verifying the conflict-freedom of each marking as we go along.

Def. 12. Given a marking Mi, a finite occurrence sequence Si, Si+1, …, Sn is called conflict-free if and only if Mi[Si⟩Mi+1 … Mn[Sn⟩Mn+1 and Mk is conflict-free for all k such that i ≤ k ≤ n + 1.

However, the main question we would like to investigate is whether or not a given normative structure is conflict-resistant, that is, whether or not the agents enacting the MAS are able to bring about conflicts through their actions. As soon as one includes the possibility of actions (or utterances) from autonomous agents, one loses determinism. Having mapped the normative structure to a CPN, we now add CPN models of the agents' interactions. Each form of agent interaction (i.e.
each activity) can be modelled using CPNs along the lines of Cost et al. [5]. These non-deterministic CPNs feed tokens into the CPN that models the normative structure, thereby introducing non-determinism into the combined CPN.

The lower half of Figure 3 shows part of a CPN model of an agent protocol where the arc denoted with '1' represents some utterance of an illocution by an agent. The target transition of this arc not only moves a token on to the next state of this CPN, but also places a token in the place corresponding to the appropriate normative scene in the CPN model of the normative structure (via arc '2'). Transition '3' could finally propagate that token, for example, in the form of an obligation. Thus, from a given marking, many different occurrence sequences are possible depending on the agents' actions. We make use of the reachability set R to define a situation in which agents cannot cause conflicts.

Figure 3: Constructing the combined CPN

Def. 13. Given a net N, a marking M is conflict-resistant if and only if all markings in R(N, M) are conflict-free.

Checking conflict-freedom of a marking can be done in polynomial time by checking all places of the CPN for conflicting tokens. Conflict-freedom of an occurrence sequence in the CPN that represents the normative structure can also be checked in polynomial time, since this sequence is deterministic given a snapshot.

Whether or not a normative structure is designed safely corresponds to checking the conflict-resistance of the initial marking M0. Verifying conflict-resistance of a marking, however, is a very difficult task. It corresponds to the reachability problem in a CPN: can a state be reached, or a marking achieved, that contains a conflict?
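The two checks, conflict-freedom of a single marking (Def. 11) and conflict-resistance via the reachability set (Def. 13), can be sketched as follows. Two simplifications are ours, not the paper's: "unify" is plain equality on ground illocutions, and the reachability set is enumerated through an explicit successor function.

```python
def conflict_free(marking):
    """Def. 11, simplified: no place holds an obligation and a prohibition
    over the same (here: identical, ground) illocution."""
    for tokens in marking.values():
        obls = {illoc for mode, illoc in tokens if mode == 'Obl'}
        prhs = {illoc for mode, illoc in tokens if mode == 'Prh'}
        if obls & prhs:
            return False
    return True

def conflict_resistant(m0, successors):
    """Def. 13: every marking reachable from m0 is conflict-free.
    `successors(m)` enumerates the markings reachable from m in one step."""
    seen, frontier = set(), [m0]
    while frontier:
        m = frontier.pop()
        key = frozenset((p, frozenset(ts)) for p, ts in m.items())
        if key in seen:
            continue                 # already explored this marking
        seen.add(key)
        if not conflict_free(m):
            return False
        frontier.extend(successors(m))
    return True
```

Note the asymmetry the text points out: `conflict_free` is a cheap per-place scan, while `conflict_resistant` must explore the whole reachability set, which is where the intractability lies.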
This reachability problem is known to be NP-complete for ordinary Petri nets [22] and, since CPNs are functionally identical, we cannot hope to verify conflict-resistance of a normative structure off-line in a reasonable amount of time. Therefore, distributed, run-time mechanisms are needed to ensure that a normative structure maintains consistency. We present one such mechanism in the following section.

6. MANAGING NORMATIVE STRUCTURES

Once a conflict (as defined in Section 4) has been detected, we propose to employ the unifier to resolve the conflict. In our example, if the variables in prh(inform(a1, r1, a2, r2, p(Y, d), T)) do not get the values specified in substitution σ, then there will not be a conflict. However, rather than computing the complement set of a substitution (which can be an infinite set), we propose to annotate the prohibition with the unifier itself and to use it to determine what the variables of that prohibition cannot be in future unifications, in order to avoid a conflict. We therefore denote annotated prohibitions as prh(Ī) Σ, where Σ = {σ1, …, σn} is a set of unifiers. Annotated norms³ are interpreted as deontic constructs with curtailed influence, that is, their effect (on agents, roles and illocutions) has been limited by the set Σ of unifiers. A prohibition may be in conflict with various obligations in a given normative scene s = ⟨id, Δ⟩, and we need to record (and possibly avoid) all these conflicts. We define below an algorithm which ensures that a normative position will be added to a normative scene in such a way that it will not cause any conflicts.

³ Although we propose to curtail prohibitions, the same machinery can be used to define the curtailment of obligations instead. These different policies depend on the intended deontic semantics and requirements of the systems addressed.
For instance, some MASs may require that their agents should not act in the presence of conflicts, that is, the obligation should be curtailed.

6.1 Conflict Resolution

We propose a fine-grained way of resolving normative conflicts via unification. We detect the overlapping of the influences of norms, i.e. how they affect the behaviour of the agents concerned, and we curtail the influence of the normative position by appropriately using the annotations when checking if the norm applies to illocutions. The algorithm shown in Figure 4 depicts how we maintain a conflict-free set of norms. It adds a given norm N to an existing, conflict-free normative state Δ, obtaining a resulting new normative state Δ′ which is conflict-free, that is, its prohibitions are annotated with a set of conflict sets indicating which bindings for variables have to be avoided for conflicts not to take place.

algorithm addNorm(N, Δ)
begin
1   timestamp(N)
2   case N of
3   per(Ī): Δ′ := Δ ∪ {N}
4   prh(I): if ∃N′ ∈ Δ s.t. conflict(N, N′, σ) then Δ′ := Δ
5           else Δ′ := Δ ∪ {N}
6   prh(Ī):
7     begin
8       Σ := ∅
9       for each N′ ∈ Δ do
10        if conflict(N, N′, σ) then Σ := Σ ∪ {σ}
11      Δ′ := Δ ∪ {N Σ}
12    end
13  obl(Ī):
14    begin
15      Δ1 := ∅; Δ2 := ∅
16      for each (N′ Σ) ∈ Δ do
17        if N′ = prh(I) then
18          if conflict(N′, N, σ) then Δ1 := Δ1 ∪ {N′ Σ}
19          else nil
20        else
21          if conflict(N′, N, σ) then
22            begin
23              Δ1 := Δ1 ∪ {N′ Σ}
24              Δ2 := Δ2 ∪ {N′ (Σ ∪ {σ})}
25            end
26      Δ′ := (Δ − Δ1) ∪ Δ2 ∪ {N}
27    end
28  end case
29  return Δ′
end

Figure 4: Algorithm to Preserve Conflict-Freedom

The algorithm uses a case structure to differentiate the possibilities for a given norm N.
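A runnable rendering of Figure 4 may make the case analysis easier to follow. This is a simplified sketch, not the paper's implementation: norms are `(kind, illoc)` pairs, the state is a list of `(kind, illoc, annotation)` triples, and `unify` is a toy one-level unification (variables start with `?`) standing in for the full first-order machinery behind `conflict`.

```python
def unify(i1, i2):
    """Toy unification over tuples; returns a substitution or None on failure."""
    if len(i1) != len(i2):
        return None
    sigma = {}
    for a, b in zip(i1, i2):
        if a == b:
            continue
        if a.startswith('?'):
            sigma[a] = b
        elif b.startswith('?'):
            sigma[b] = a
        else:
            return None
    return sigma

def is_ground(illoc):
    return not any(t.startswith('?') for t in illoc)

def add_norm(norm, delta):
    """Figure 4, simplified: add `norm` to the conflict-free state `delta`."""
    kind, illoc = norm
    delta = list(delta)
    if kind == 'per':                                   # line 3: permissions go in as-is
        return delta + [(kind, illoc, frozenset())]
    if kind == 'prh' and is_ground(illoc):              # lines 4-5: ground prohibition
        for k2, i2, _ in delta:
            if k2 == 'obl' and unify(illoc, i2) is not None:
                return delta                            # conflicting ground prh discarded
        return delta + [(kind, illoc, frozenset())]
    if kind == 'prh':                                   # lines 6-12: non-ground prohibition
        sigmas = frozenset(
            tuple(sorted(unify(illoc, i2).items()))
            for k2, i2, _ in delta
            if k2 == 'obl' and unify(illoc, i2) is not None)
        return delta + [(kind, illoc, sigmas)]          # annotated with all unifiers
    out = []                                            # obligation, lines 13-27
    for k2, i2, ann in delta:
        s = unify(i2, illoc) if k2 == 'prh' else None
        if s is None:
            out.append((k2, i2, ann))                   # unaffected norm kept
        elif is_ground(i2):
            pass                                        # conflicting ground prh removed
        else:
            out.append((k2, i2, ann | {tuple(sorted(s.items()))}))  # annotation grows
    return out + [(kind, illoc, frozenset())]
```

The annotation set plays the role of Σ: it records the bindings under which the prohibition would clash with an obligation, so later unifications can steer around them.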
Line 3 addresses the case when the given norm is a permission: N is simply added to Δ. Lines 4-5 address the case when we attempt to add a ground prohibition to a normative state: if it conflicts with any obligation, then it is discarded; otherwise it is added to the normative state. Lines 6-12 describe the situation when the normative position to be added is a non-ground prohibition. In this case, the algorithm initialises Σ to an empty set and loops (lines 9-10) through the norms N′ in the old normative state Δ. Upon finding one that conflicts with N, the algorithm updates Σ by adding the newly found conflict set σ to it (line 10). By looping through Δ, we are able to check any conflicts between the new prohibition and the existing obligations, adequately building the annotation Σ to be used when adding N to Δ in line 11.

Lines 13-27 describe how a new obligation is accommodated into an existing normative state. We make use of two initially empty, temporary sets, Δ1 and Δ2. The algorithm loops through Δ (lines 16-25), picking up those annotated prohibitions N′ Σ which conflict with the new obligation. There are, however, two cases to deal with: the one where a ground prohibition is found (line 17), and its exception, covering non-ground prohibitions (line 20). In both cases, the old prohibition is stored in Δ1 (lines 18 and 23) to be later removed from Δ (line 26). In the case of a non-ground prohibition, however, the algorithm also updates its annotation of conflict sets (line 24). The loop guarantees that an exhaustive (linear) search through the normative state takes place, checking whether the new obligation is in conflict with any existing prohibitions and possibly updating the annotations of these conflicting prohibitions.
In line 26 the algorithm builds the new updated Δ′ by removing the old prohibitions stored in Δ1 and adding the updated prohibitions stored in Δ2 (if any), as well as the new obligation N.

Our proposed algorithm is correct in that, for a given normative position N and a normative state Δ, it provides a new normative state Δ′ in which all prohibitions have annotations recording how they unify with existing obligations. The annotations can be empty, though: this is the case when we have a ground prohibition or a prohibition which does not unify/conflict with any obligation. Permissions do not affect our algorithm and are appropriately dealt with (line 3). Any attempt to insert a ground prohibition which conflicts yields the same normative state (line 4). When a new obligation is being added, the algorithm guarantees that all prohibitions are considered (lines 14-27), leading to the removal of conflicting ground prohibitions or the update of the annotations of non-ground prohibitions. The algorithm always terminates: the loops are over a finite set Δ, and the conflict checks and set operations always terminate. The complexity of the algorithm is linear: the set Δ is only examined once for each possible case of norm to be added.

When managing normative states we may also need to remove normative positions. This is straightforward: permissions can be removed without any problems; annotated prohibitions can also be removed without further considerations; obligations, however, require some housekeeping. When an obligation is to be removed, we must check it against all annotated prohibitions in order to update their annotations: we apply the conflict check, obtain a unifier, and then remove this unifier from the prohibition's annotation.
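The removal housekeeping can be sketched in the same simplified representation used above (norms as `(kind, illoc)` pairs, states as `(kind, illoc, annotation)` triples, a toy unification with `?`-prefixed variables; none of this is the paper's actual code).

```python
def unify(i1, i2):
    """Toy unification over tuples; returns a substitution or None on failure."""
    if len(i1) != len(i2):
        return None
    sigma = {}
    for a, b in zip(i1, i2):
        if a == b:
            continue
        if a.startswith('?'):
            sigma[a] = b
        elif b.startswith('?'):
            sigma[b] = a
        else:
            return None
    return sigma

def remove_norm(norm, delta):
    """removeNorm, sketched: drop `norm` from the state; when an obligation is
    removed, also retract its unifier from every annotated prohibition."""
    kind, illoc = norm
    out = []
    for k2, i2, ann in delta:
        if (k2, i2) == (kind, illoc):
            continue                                    # the norm being removed
        if kind == 'obl' and k2 == 'prh':
            s = unify(i2, illoc)
            if s is not None:                           # conflict check succeeded:
                ann = ann - {tuple(sorted(s.items()))}  # retract this unifier
        out.append((k2, i2, ann))
    return out
```

Permissions and prohibitions fall through the loop untouched, matching the text: only removing an obligation forces annotation updates elsewhere.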
We invoke the removal algorithm as removeNorm(N, Δ): it returns a new normative state Δ′ in which N has been removed, with possible alterations to other normative positions as explained above.

6.2 Enactment of a Normative Structure

The enactment of a normative structure amounts to the parallel, distributed execution of normative scenes and normative transitions. For illustrative purposes, we describe hereafter the interplay between the payment and delivery normative scenes and the normative transition nt linking them in the upper half of Figure 2. With this aim, consider for instance that obl(inform(jules, client, rod, acc, pay(copper, 400, 350), T)) ∈ Δpayment and that Δdelivery holds prh(inform(rod, wm, jules, client, delivered(Z, Q), T)). These states indicate that client Jules is obliged to pay £400 for 350kg of copper to accountant Rod according to the payment normative scene, whereas Rod, taking up the role of warehouse manager this time, is prohibited from delivering anything to client Jules according to the delivery normative scene.

For each normative scene, the enactment process goes as follows. Firstly, it processes its incoming message queue, which contains three types of messages: utterances from the activity it is linked to, and normative commands either to add or to remove normative positions. For instance, in our example, the payment normative scene collects the illocution I = utt(inform(jules, client, rod, acc, pay(copper, 400, 350), 35)) standing for client Jules' pending payment for copper (via arrow A in Figure 2). Utterances are time-stamped and subsequently added to the normative state; in our example, we would have Δ′payment = Δpayment ∪ {I}. Upon receiving normative commands to either add or remove a normative position, the normative scene invokes the corresponding addition or removal algorithm described in Section 6.1.
Secondly, the normative scene acknowledges its state change by sending a trigger message to every outgoing normative transition it is connected to. In our example, the payment normative scene would signal its state change to normative transition nt.

For normative transitions, the process works differently. Because each normative transition controls the operation of a single rule, upon receiving a trigger message it polls every incoming normative scene for substitutions for the relevant illocution schemata on the LHS of its rule. In our example, nt (being responsible for the rule described in Section 3.4) would poll the payment normative scene (via arrow B) for substitutions. Upon receiving replies (in the form of sets of substitutions together with time-stamps), it has to unify the substitutions from each of these normative scenes. For each unification it finds, the rule is fired, and the corresponding normative command is sent along to the output normative scene. The normative transition then keeps track of the firing message it sent on and of the time-stamps of the normative positions that triggered the firing. This is done to ensure that the very same normative positions in the LHS of a rule only trigger its firing once.

In our example, nt would receive σ = {X/jules, Y/rod, Z/copper, Q/350} from the payment normative scene. Since the substitutions in σ unify with nt's rule, the rule is fired, and the normative command add(delivery : obl(rod, wm, jules, client, delivered(copper, 350), T)) is sent along to the delivery normative scene to oblige Rod to deliver 350kg of copper to client Jules. After that, the delivery normative scene would invoke the addNorm algorithm from Figure 4 with Δdelivery and N = obl(rod, wm, jules, client, delivered(copper, 350)) as arguments.

7. RELATED WORK AND CONCLUSIONS

Our contributions in this paper are three-fold.
Firstly, we introduce an approach for the management of, and reasoning about, norms in a distributed manner. To our knowledge, there is little published work in this direction. In [8, 21], two languages are presented for the distributed enforcement of norms in MAS. However, in both works each agent has a local message interface that forwards legal messages according to a set of norms. Since these interfaces are local to each agent, norms can only be expressed in terms of the actions of that agent. This is a serious disadvantage, e.g. when one needs to activate an obligation on one agent due to a certain message of another one.

The second contribution is the proposal of a normative structure. The notion is fruitful because it allows the separation of normative and procedural concerns. The normative structure we propose makes evident the similarity between the propagation of normative positions and the propagation of tokens in Coloured Petri Nets. That similarity readily suggests a mapping between the two and gives grounds to a convenient analytical treatment of the normative structure in general, and of the complexity of conflict detection in particular. The idea of modelling interactions (in the form of conversations) via Petri nets has been investigated in [18], where the interaction medium and individual agents are modelled as CPN sub-nets that are subsequently combined for analysis. In [5], conversations are first designed and analysed at the level of CPNs and thereafter translated into protocols. Lin et al. [20] map conversation schemata to CPNs. To our knowledge, the use of this representation in support of conflict detection in regulated MAS has not been reported elsewhere.

Finally, we present a distributed mechanism to resolve normative conflicts.
Sartor [25] treats normative conflicts from the point of view of legal theory and suggests a way to order the norms involved. His idea is implemented in [12], but it requires a central resource for norm maintenance. Our approach to conflict detection and resolution is an adaptation and extension of the work on instantiation graphs reported in [17] and of a related algorithm in [27]. The algorithm presented in the current paper can be used to manage normative states in a distributed fashion: normative scenes that happen in parallel have an associated normative state Δ to which the algorithm is independently applied each time a new norm is to be introduced.

The three contributions of this paper open many possibilities for future work. We should first mention that, as a broad strategy, we are working on a generalisation of the notion of normative structure to make it operate with different coordination models, with richer deontic content, and on top of different computational realisations of regulated MAS. As a first step in this direction, we are taking advantage of the de-coupling between interaction protocols and declarative normative guidance that the normative structure makes available, in order to provide a normative layer for electronic institutions (as defined in [1]). We expect such a coupling to endow electronic institutions with a more flexible (and more expressive) normative environment.

Furthermore, we want to extend our model along several directions: (1) to handle negation and constraints as part of the norm language, and in particular the notion of time; (2) to accommodate multiple, hierarchical norm authorities based on roles, along the lines of Cholvy and Cuppens [3], and power relationships as suggested by Carabelea et al.
[2]; (3) to capture in the conflict resolution algorithm different semantics relating the deontic notions, by supporting different axiomatisations (e.g., relative strength of prohibition versus obligation, default deontic notions, deontic inconsistencies). On the theoretical side, we intend to use analysis techniques of CPNs in order to characterise classes of CPNs (e.g., acyclic, symmetric, etc.) corresponding to families of normative structures that are susceptible to tractable off-line conflict detection. The combination of these techniques with our online conflict resolution mechanisms is intended to endow MAS designers with the ability to incorporate norms into their systems in a principled way.

8. REFERENCES
[1] J. L. Arcos, M. Esteva, P. Noriega, J. A. Rodríguez, and C. Sierra. Engineering open environments with electronic institutions. Journal on Engineering Applications of Artificial Intelligence, 18(2):191-204, 2005.
[2] C. Carabelea, O. Boissier, and C. Castelfranchi. Using social power to enable agents to reason about being part of a group. In 5th International Workshop, ESAW 2004, pages 166-177, 2004.
[3] L. Cholvy and F. Cuppens. Solving normative conflicts by merging roles. In Fifth International Conference on Artificial Intelligence and Law, Washington, USA, 1995.
[4] S. Christensen and T. B. Haagh. Design CPN - overview of CPN ML syntax. Technical report, University of Aarhus, 1996.
[5] R. S. Cost, Y. Chen, T. W. Finin, Y. Labrou, and Y. Peng. Using colored petri nets for conversation modeling. In Issues in Agent Communication, pages 178-192, London, UK, 2000.
[6] F. Dignum. Autonomous Agents with Norms. Artificial Intelligence and Law, 7(1):69-79, 1999.
[7] A. Elhag, J. Breuker, and P. Brouwer. On the Formal Analysis of Normative Conflicts. Information & Communications Technology Law, 9(3):207-217, Oct. 2000.
[8] M. Esteva, W. Vasconcelos, C. Sierra, and J. A. Rodríguez-Aguilar. Norm consistency in electronic institutions. Volume 3171 (LNAI), pages 494-505. Springer-Verlag, 2004.
[9] M. Fitting. First-Order Logic and Automated Theorem Proving. Springer-Verlag, New York, USA, 1990.
[10] N. Fornara, F. Viganò, and M. Colombetti. An Event Driven Approach to Norms in Artificial Institutions. In AAMAS05 Workshop: Agents, Norms and Institutions for Regulated Multiagent Systems (ANI@REM), Utrecht, 2005.
[11] D. Gaertner, P. Noriega, and C. Sierra. Extending the BDI architecture with commitments. In Proceedings of the 9th International Conference of the Catalan Association of Artificial Intelligence, 2006.
[12] A. García-Camino, P. Noriega, and J.-A. Rodríguez-Aguilar. An Algorithm for Conflict Resolution in Regulated Compound Activities. In 7th International Workshop, ESAW '06, 2006.
[13] A. García-Camino, J.-A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos. A Distributed Architecture for Norm-Aware Agent Societies. In DALT III, volume 3904 (LNAI), pages 89-105. Springer, 2006.
[14] F. Giunchiglia and L. Serafini. Multi-language hierarchical logics or: How we can do without modal logics. Artificial Intelligence, 65(1):29-70, 1994.
[15] J. Habermas. The Theory of Communicative Action, Volume One: Reason and the Rationalization of Society. Beacon Press, 1984.
[16] K. Jensen. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Uses (Volume 1). Springer, 1997.
[17] M. Kollingbaum and T. Norman. Strategies for resolving norm conflict in practical reasoning. In ECAI Workshop Coordination in Emergent Agent Societies 2004, 2004.
[18] J.-L. Koning, G. Francois, and Y. Demazeau. Formalization and pre-validation for interaction protocols in multi-agent systems. In ECAI, pages 298-307, 1998.
[19] B. Kramer and J. Mylopoulos. Knowledge Representation. In S. C. Shapiro, editor, Encyclopedia of Artificial Intelligence, volume 1, pages 743-759. John Wiley & Sons, 1992.
[20] F. Lin, D. H. Norrie, W. Shen, and R. Kremer. A schema-based approach to specifying conversation policies. In Issues in Agent Communication, pages 193-204, 2000.
[21] N. Minsky. Law Governed Interaction (LGI): A Distributed Coordination and Control Mechanism (An Introduction, and a Reference Manual). Technical report, Rutgers University, 2005.
[22] T. Murata. Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4):541-580, 1989.
[23] S. Parsons, C. Sierra, and N. Jennings. Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8(3):261-292, 1998.
[24] A. Ricci and M. Viroli. Coordination Artifacts: A Unifying Abstraction for Engineering Environment-Mediated Coordination in MAS. Informatica, 29:433-443, 2005.
[25] G. Sartor. Normative conflicts in legal reasoning. Artificial Intelligence and Law, 1(2-3):209-235, June 1992.
[26] M. Sergot. A Computational Theory of Normative Positions. ACM Transactions on Computational Logic, 2(4):581-622, 2001.
[27] W. W. Vasconcelos, M. Kollingbaum, and T. Norman. Resolving Conflict and Inconsistency in Norm-Regulated Virtual Organisations. In Proceedings of AAMAS '07, Hawai'i, USA, 2007. IFAAMAS.
[28] G. H. von Wright. Norm and Action: A Logical Inquiry. Routledge and Kegan Paul, London, 1963.
[29] M. Wooldridge. An Introduction to Multiagent Systems. John Wiley & Sons, Chichester, UK, Feb. 2002.", "keywords": "algorithm;protocol;normative scene;organisation;prohibition;norm conflict;normative position;normative structure;conflict;scenario;electronic institution;bi-partite graph;permission overlap;activity;coordination;normative transition rule;token;regulate multi-agent system"}
-{"name": "test_I-4", "title": "Meta-Level Coordination for Solving Negotiation Chains in Semi-Cooperative Multi-Agent Systems", "abstract": "A negotiation chain is formed when multiple related negotiations are spread over multiple agents. In order to appropriately order and structure the negotiations occurring in the chain so as to optimize the expected utility, we present an extension to a singleagent concurrent negotiation framework. This work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agent\"s cooperation and the system\"s overall performance. We introduce a pre-negotiation phase that allows agents to transfer meta-level information. Using this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship of flexibility and success probability. This more accurate model helps the agent in choosing a better negotiation solution in the global negotiation chain context. The agent can also use this information to allocate appropriate time for each negotiation, hence to find a good ordering of all related negotiations. The experimental data shows that these mechanisms improve the agents\" and the system\"s overall performance significantly.", "fulltext": "1. INTRODUCTION\nSophisticated negotiation for task and resource allocation is\ncrucial for the next generation of multi-agent systems (MAS)\napplications. 
Groups of agents need to efficiently negotiate over multiple related issues concurrently, in a complex, distributed setting where there are deadlines by which the negotiations must be completed. This is an important research area where very little work has been done.

This work is aimed at semi-cooperative multi-agent systems, where each agent has its own goals and works to maximize its local utility; however, the performance of each individual agent is tightly related to other agents' cooperation and the system's overall performance. There is no single global goal in such systems, either because each agent represents a different organization/user, or because it is difficult or impossible to design one single global goal. This issue arises due to multiple concurrent tasks, resource constraints and uncertainties, and thus no agent has sufficient knowledge or computational resources to determine what is best for the whole system [11]. An example of such a system would be a virtual organization [12] (i.e. a supply chain) dynamically formed in an electronic marketplace such as the one developed by the CONOISE project [5]. To accomplish tasks continuously arriving in the virtual organization, cooperation and sub-task relocation are needed and preferred. There is no single global goal since each agent may be involved in multiple virtual organizations; meanwhile, the performance of each individual agent is tightly related to other agents' cooperation and the virtual organization's overall performance. The negotiation in such systems is not a zero-sum game: a deal that increases both agents' utilities can be found through efficient negotiation. Additionally, there are multiple encounters among agents since new tasks are arriving all the time. In such negotiations, price may or may not be important, since it can be fixed as the result of a long-term contract. Other factors, like quality and delivery time, are important too.
Reputation mechanisms in the system make cheating unattractive from a long-term viewpoint, due to the multiple encounters among agents. In such systems, agents are self-interested because they primarily focus on their own goals; but they are also semi-cooperative, meaning they are willing to be truthful and to collaborate with other agents to find solutions that are beneficial to all participants, including themselves, though an agent won't voluntarily sacrifice its own utility in exchange for others' benefits.

Another major difference between this work and other work on negotiation is that negotiation, here, is not viewed as a stand-alone process. Rather, it is one part of the agent's activity, which is tightly interleaved with the planning, scheduling and execution of the agent's activities, which also may relate to other negotiations. Based on this recognition, this work on negotiation is concerned more with the meta-level decision-making process in negotiation than with the basic protocols or languages. The goal of this research is to develop a set of macro-strategies that allow the agents to effectively manage multiple related negotiations, including, but not limited to, the following issues: how much time should be spent on each negotiation, how much flexibility (see the formal definition in Formula 3) should be allocated for each negotiation, and in what order should the negotiations be performed.

978-81-904262-7-5 (RPS) c 2007 IFAAMAS

These macro-strategies are different from the micro-strategies that direct an individual negotiation thread, such as whether the agent should concede, how much the agent should concede, etc. [3].

In this paper we extend a multi-linked negotiation model [10] from a single-agent perspective to a multi-agent perspective, so that a group of agents involved in chains of interrelated negotiations can find nearly-optimal macro negotiation strategies for pursuing their
The remainder of this paper is structured in the following manner. Section 2 describes the basic negotiation process and briefly reviews a single agent's model of multi-linked negotiation. Section 3 introduces a complex supply-chain scenario. Section 4 details how to solve the problems arising in the negotiation chain. Section 5 reports on the experimental work. Section 6 discusses related work, and Section 7 presents conclusions and areas of future work.

2. BACKGROUND ON MULTI-LINKED NEGOTIATION

In this work, the negotiation process between any pair of agents is based on an extended version of the contract net [6]: the initiator agent announces a proposal including multiple features; the responding agent evaluates it and responds with either a yes/no answer or a counter-proposal with some features modified. This process can go back and forth until an agreement is reached or the agents decide to stop. If an agreement is reached and one agent cannot fulfill the commitment, it needs to pay the other party a decommitment penalty as specified in the commitment. A negotiation starts with a proposal, which announces that a task (t) needs to be performed and includes the following attributes:

1. earliest start time (est): the earliest start time of task t; task t cannot be started before time est.
2. deadline (dl): the latest finish time of the task; the task needs to be finished before the deadline dl.
3. minimum quality requirement (minq): the task needs to be finished with a quality achievement no less than minq.
4. regular reward (r): if the task is finished as the contract requested, the contractor agent will get reward r.
5. early finish reward rate (e): if the contractor agent can finish the task earlier than dl, it will get an extra early finish reward proportional to this rate.
6.
decommitment penalty rate (p): if the contractor agent cannot perform the task as promised in the contract, or if the contractee agent needs to cancel the contract after it has been confirmed, the decommitting agent needs to pay a decommitment penalty (p * r) to the other agent.

The above attributes are called attributes-in-negotiation; they are the features of the subject (issue) to be negotiated, and they are domain-dependent. Another type of attribute¹ is the attribute-of-negotiation, which describes the negotiation process itself and is domain-independent, such as:

¹These attributes are similar to those used in project management; however, the multi-linked negotiation problem cannot be reduced to a project management problem or a scheduling problem. The multi-linked negotiation problem has two dimensions: the negotiations and the subjects of the negotiations. The negotiations are interrelated, the subjects are interrelated, and the attributes of the negotiations and the attributes of the subjects are interrelated as well. This two-dimensional complexity of interrelationships distinguishes it from the classic project management or scheduling problem, where all tasks to be scheduled are local tasks and no negotiation is needed.

1. negotiation duration (δ(v)): the maximum time allowed for negotiation v to complete, either reaching an agreed-upon proposal (success) or no agreement (failure).
2. negotiation start time (α(v)): the start time of negotiation v. α(v) is an attribute that needs to be decided by the agent.
3. negotiation deadline (ε(v)): negotiation v needs to be finished before this deadline ε(v). The negotiation is no longer valid after time ε(v), which is the same as a failure outcome of this negotiation.
4. success probability (ps(v)): the probability that v is successful. It depends on a set of attributes, including both attributes-in-negotiation (e.g. reward, flexibility)
and attributes-of-negotiation (e.g. negotiation start time, negotiation deadline).

An agent involved in multiple related negotiation processes needs to reason about how to manage these negotiations in terms of ordering them and choosing appropriate values for their features. This is the multi-linked negotiation problem [10]:

DEFINITION 2.1. A multi-linked negotiation problem is defined as an undirected graph (more specifically, a forest: a set of rooted trees) M = (V, E), where V = {v} is a finite set of negotiations and E = {(u, v)} is a set of binary relations on V. (u, v) ∈ E denotes that negotiation u and negotiation v are directly linked. The relationships among the negotiations are described by a forest, a set of rooted trees {Ti}. There is a relation operator associated with every non-leaf negotiation v (denoted ρ(v)), which describes the relationship between negotiation v and its children. This relation operator has two possible values: AND and OR. The AND relationship associated with a negotiation v means that the successful accomplishment of the commitment on v requires all its children nodes to be accomplished successfully. The OR relationship means that the successful accomplishment of the commitment on v requires at least one child node to be accomplished successfully, where the multiple children represent alternatives for accomplishing the same goal.

The multi-linked negotiation problem is a local optimization problem. To solve it is to find a negotiation solution (φ, ϕ) with optimized expected utility EU(φ, ϕ), defined as:

EU(φ, ϕ) = Σ_{i=1}^{2^n} P(χi, ϕ) * (R(χi, ϕ) − C(χi, φ, ϕ))    (1)

A negotiation ordering φ defines a partial order over all negotiation issues.
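Equation 1 can be evaluated by direct enumeration of the 2^n outcomes. The sketch below is illustrative rather than the paper's implementation: it assumes outcomes are independent given the feature assignment, and the example reward and penalty models merely stand in for R and C.

```python
from itertools import product

def expected_utility(negotiations, reward_fn, cost_fn):
    """Evaluate Eq. 1 by enumerating all 2^n success/failure outcomes.

    negotiations: dict mapping negotiation name -> success probability p_s(v)
    reward_fn(outcome): stands in for R(chi, phi), the utility increase
    cost_fn(outcome):   stands in for C(chi, phi, varphi), penalties paid
    """
    names = list(negotiations)
    eu = 0.0
    for bits in product([True, False], repeat=len(names)):
        outcome = dict(zip(names, bits))
        # P(chi): outcomes are assumed independent given the features
        p = 1.0
        for name, success in outcome.items():
            ps = negotiations[name]
            p *= ps if success else (1.0 - ps)
        eu += p * (reward_fn(outcome) - cost_fn(outcome))
    return eu

# Illustrative numbers (not from the paper): a task worth 6 with one
# AND-subtask; the reward requires both negotiations to succeed, and a
# penalty of 1 is paid if the subtask succeeds but the parent fails.
probs = {"parent": 0.9, "sub": 0.8}
reward = lambda o: 6.0 if o["parent"] and o["sub"] else 0.0
cost = lambda o: 1.0 if o["sub"] and not o["parent"] else 0.0
eu = expected_utility(probs, reward, cost)  # 0.72*6 - 0.08*1 = 4.24
```

The enumeration is exponential in n, which is why the paper relies on a heuristic search rather than exhaustive evaluation for larger problems.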
A feature assignment ϕ is a mapping function that assigns a value to each attribute that needs to be decided in the negotiation. A negotiation outcome χ for a set of negotiations {vj}, (j = 1, ..., n) specifies the result of each negotiation, either success or failure; there are a total of 2^n different outcomes for n negotiations: {χi}, (i = 1, ..., 2^n). P(χi, ϕ) denotes the probability of the outcome χi given the feature assignment ϕ, which is calculated based on the success probability of each negotiation. R(χi, ϕ) denotes the agent's utility increase given the outcome χi and the feature assignment ϕ, and C(χi, φ, ϕ) is the sum of the decommitment penalties of those negotiations that are successful but need to be abandoned because of the failure of other directly related negotiations; these directly related negotiations are performed concurrently with, or after, this negotiation according to the negotiation ordering φ.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Figure 1: A Complex Negotiation Chain Scenario

A heuristic search algorithm [10] has been developed that solves the single agent's multi-linked negotiation problem and produces nearly optimal solutions. This algorithm is used as the core of the decision-making for each individual agent in the negotiation chain scenario. In the rest of the paper, we present our work on how to improve the local solution of a single agent in the global negotiation chain context.

3.
NEGOTIATION CHAIN PROBLEM

The negotiation chain problem occurs in a multi-agent system where each agent represents an individual, a company, or an organization, and there is no absolute authority in the system. Each agent has its own utility function for defining the implications of achieving its goals. The agent is designed to optimize its expected utility given its limited information, computational, and communication resources. Tasks arrive dynamically at individual agents, and most tasks require the coordination of multiple agents. Each agent has the scheduling and planning ability to manage its local activities, and some of these activities are related to other agents' activities. Negotiation is used to coordinate the scheduling of these mutually related activities. The negotiation is tightly connected with the agent's local scheduling/planning processes and is also related to other negotiations. An agent may be involved in multiple related negotiations with multiple other agents, and each of those agents may be involved in related negotiations with others too.

Figure 1 describes a complex negotiation chain scenario. The Store, the PC Manufacturer, the Memory Producer, and the Distribution Center are all involved in multi-linked negotiation problems. Figure 2 shows a distributed model of part of the negotiation chain described in Figure 1. Each agent has a local optimization problem, the multi-linked negotiation problem (represented as an and-or tree), which can be solved using the model and procedures described in Section 2. However, the local optimal solution may not be optimal in the global context, given that the local model is neither complete nor accurate.
The dashed line in Figure 2 represents the connection of these local optimization problems through the common negotiation subject.

A negotiation chain problem O is a group of tightly coupled local optimization problems:

O = {O1, O2, ..., On}, where Oi denotes the local optimization problem (multi-linked negotiation problem) of agent Ai.

Agent Ai's local optimal solution S_i^lo maximizes its expected local utility based on incomplete information about, and assumptions on, the other agents' local strategies; we denote this incomplete information and these imperfect assumptions of agent i as Ii:

U_i^exp(S_i^lo, Ii) ≥ U_i^exp(S_i^x, Ii) for all x ≠ lo.

However, the combination of these local optimal solutions {S_i^lo}: <S_1^lo, S_2^lo, ..., S_n^lo> can be sub-optimal with respect to a set of better local optimal solutions {S_i^blo}: <S_1^blo, S_2^blo, ..., S_n^blo>, if the global utility can be improved without any agent's local utility being decreased by using {S_i^blo}. In other words, {S_i^lo} is dominated by {S_i^blo} ({S_i^lo} ≺ {S_i^blo}) iff:

Ui(<S_1^lo, S_2^lo, ..., S_n^lo>) ≤ Ui(<S_1^blo, S_2^blo, ..., S_n^blo>) for i = 1, ..., n, and
Σ_{i=1}^n Ui(<S_1^lo, ..., S_n^lo>) < Σ_{i=1}^n Ui(<S_1^blo, ..., S_n^blo>).

There can be multiple sets of better local optimal solutions: {S_i^blo1}, {S_i^blo2}, ..., {S_i^blom}, some of which may be dominated by others. A set of better local optimal solutions {S_i^blog} that is not dominated by any other is called best local optimal. If a set of best local optimal solutions {S_i^blog} dominates all others, {S_i^blog} is called globally local optimal. However, sometimes the globally local optimal set does not exist; instead, there exist multiple sets of best local optimal solutions.
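The dominance relation above is straightforward to check once per-agent utilities under each joint solution are known. A minimal sketch (the list-of-utilities representation is our illustration, not the paper's notation):

```python
def dominates(blo, lo):
    """Check {S_lo} is dominated by {S_blo}: no agent is worse off and
    the sum of utilities strictly improves. Inputs are lists of
    per-agent utilities under each joint solution."""
    assert len(blo) == len(lo)
    no_agent_worse = all(b >= l for b, l in zip(blo, lo))
    sum_strictly_better = sum(blo) > sum(lo)
    return no_agent_worse and sum_strictly_better

# A joint solution giving utilities (5, 4, 6) dominates one giving
# (5, 3, 6): agent 2 gains and nobody loses.
dominates([5, 4, 6], [5, 3, 6])  # True
```

Note this is a Pareto-style condition: a solution set that raises the utility sum but lowers some agent's utility does not dominate.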
Even if the globally local optimal solution does exist in theory, finding it may not be realistic, given that the agents are making decisions concurrently; constructing perfect local information and assumptions about the other agents (Ii) in this dynamic environment is a very difficult, and sometimes impossible, task.

The goal of this work is to improve each agent's local model of the other agents (Ii) through meta-level coordination. As Ii becomes more accurate, the agent's local optimal solution to its local multi-linked negotiation problem becomes a better local optimal solution in the context of the global negotiation chain problem. We are not arguing that this is a universally valid statement that holds in all situations, but our experimental work shows that the sum of the agents' utilities in the system is improved by 95% on average when meta-level coordination is used to improve each agent's local model Ii. In this work, we focus on improving the agent's local model in two directions. One direction is to build a better function to describe the relationship between the success probability of a negotiation and the flexibility allocated to it. The other direction is to find out how to allocate time more efficiently to each negotiation in the negotiation chain context.

4. NEW MECHANISM - META-LEVEL COORDINATION

In order for an agent to build a better local model of the other agents in the negotiation chain context, we introduce a pre-negotiation phase into the local negotiation process. During the pre-negotiation phase, agents communicate with the other agents that have task contracting relationships with them, transferring meta-level information before deciding how and when to do the negotiations. Each agent tells the other agents what types of tasks it will ask them to perform, and the probability distributions of some parameters of those tasks, e.g. the earliest start times and the deadlines.
When these probability distributions are not directly available, agents can learn such information from their past experience. In the experiment described later, this distribution information is learned rather than told directly by other agents. Specifically, each agent provides the following information to the other related agents:

Figure 2: Distributed Model of Negotiation Chains

• Whether additional negotiation is needed in order to make a decision on the contracting task, and if so, how many more negotiations are needed. negCount represents the total number of additional negotiations needed for a task, including the additional negotiations needed for its subtasks that happen among other agents. In a negotiation chain situation, this information is propagated and updated through the chain until every agent has accurate information. Let subNeg(T) be the set of subtasks of task T that require additional negotiations; then we have:

negCount(T) = |subNeg(T)| + Σ_{t ∈ subNeg(T)} negCount(t)    (2)

For example, in the scenario described in Figure 1, for the Distribution Center, task Order Hardware consists of three subtasks that need additional negotiations with other agents: Order Chips, Order Memory, and Deliver Hardware. However, no further negotiations are needed for other agents to make decisions on these subtasks, hence the negCount for each of these subtasks is 0.
The following information is sent to the PC Manufacturer by the Distribution Center:

negCount(Order Hardware) = 3

For the PC Manufacturer, task Order Computer contains two subtasks that require additional negotiations: Deliver Computer and Order Hardware. When the PC Manufacturer receives the message from the Distribution Center, it updates its local information:

negCount(Order Computer) = 2 + negCount(Deliver Computer) (0) + negCount(Order Hardware) (3) = 5

and sends the updated information to the Store Agent.

• Whether there are other tasks competing with this task, and what the likelihood of conflict is. Conflict means that, given all constraints, the agent cannot accomplish all tasks on time and needs to reject some of them. The likelihood of conflict Pcij between a task of type i and another task of type j is calculated from the statistical model of each task's parameters, including earliest start time (est), deadline (dl), task duration (dur), and slack time (sl), using a formula [7]:

Pcij = P(dli − estj ≤ duri + durj ∧ dlj − esti ≤ duri + durj)

When there are more than two types of tasks, the likelihood of no conflict between task i and the rest of the tasks is calculated as:

PnoConflict(i) = Π_{j=1, j≠i}^n (1 − Pcij)

For example, the Memory Producer tells the Distribution Center about the task Order Memory: its local decision does not involve additional negotiation with other agents (negCount = 0); however, there is another task from the Store Agent that competes with this task, so the likelihood of no conflict is 0.5 (PnoConflict = 0.5). On the other hand, the CPU Producer tells the Distribution Center about the task Order Chips: its local decision does not involve additional negotiation with other agents, and there are no other tasks competing with this task (PnoConflict = 1.0) given the current environment setting.
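Both pieces of meta-level information are cheap to compute. The sketch below implements Eq. 2 and the PnoConflict product directly; the dictionary layout is our illustration, and the task names match the Figure 1 scenario as described in the text (math.prod requires Python 3.8+).

```python
import math

def neg_count(task, subneg):
    """Eq. 2: total additional negotiations needed for `task`.
    subneg maps a task to its subtasks that require further negotiation;
    tasks absent from subneg need no additional negotiation."""
    subs = subneg.get(task, [])
    return len(subs) + sum(neg_count(t, subneg) for t in subs)

# Propagation through the chain, as in the running example:
subneg = {
    "Order Computer": ["Deliver Computer", "Order Hardware"],
    "Order Hardware": ["Order Chips", "Order Memory", "Deliver Hardware"],
}
neg_count("Order Hardware", subneg)  # 3
neg_count("Order Computer", subneg)  # 2 + 0 + 3 = 5

def p_no_conflict(i, pc):
    """PnoConflict(i) = prod over j != i of (1 - Pc_ij), where pc[i][j]
    holds the pairwise conflict likelihoods estimated from task statistics."""
    return math.prod(1.0 - pij for j, pij in pc[i].items() if j != i)

# Memory Producer's view: one competing task with Pc = 0.5
p_no_conflict("Order Memory", {"Order Memory": {"Purchase Memory": 0.5}})  # 0.5
```

The recursion mirrors how negCount messages propagate up the chain: each agent only needs its children's counts, not the whole tree.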
Based on the above information, the Distribution Center knows that task Order Memory needs more flexibility than task Order Chips in order to be successful in negotiation. Meanwhile, the Distribution Center tells the PC Manufacturer that task Order Hardware involves further negotiation with other agents (negCount = 3), and that its local decision depends on other agents' decisions. This piece of information helps the PC Manufacturer allocate appropriate flexibility for task Order Hardware in negotiation.

Figure 3: Task Structures of PC Manufacturer and Distribution Center

In this work, we introduce a short period for agents to learn the characteristics of the incoming tasks, including est, dl, dur, and sl, which are used to calculate Pcij and PnoConflict for the meta-level coordination. During system performance, agents continually monitor these characteristics; an updated message is sent to the related agents when there is a significant change in the meta-level information.

Next we describe how the agent uses the meta-level information transferred during the pre-negotiation phase. This information is used to improve the agent's local model; more specifically, it is used in the agent's local decision-making process by affecting the values of some features.
In particular, we are concerned with two features that have strong implications for the agent's macro strategy for the multi-linked negotiations, and hence also significantly affect the performance of a negotiation chain. The first is the amount of flexibility specified in the negotiation parameters. The second is the time allocated for the negotiation process to complete; the time allocated for each negotiation affects the possible ordering of the negotiations, and it also affects the negotiation outcome. Details are discussed in the following sections.

4.1 Flexibility and Success Probability

Agents not only need to deal with complex negotiation problems; they also need to handle their own local scheduling and planning processes, which are interleaved with the negotiation process. Figure 3 shows the local task structures of the PC Manufacturer and the Distribution Center. Some of these tasks can be performed locally by the PC Manufacturer, such as Get Software and Install Software, while other tasks (non-local tasks), such as Order Hardware and Deliver Computer, need to be performed by other agents. The PC Manufacturer needs to negotiate with the Distribution Center and the Transporter about whether they can perform these tasks, and if so, when and how they will perform them.

When the PC Manufacturer negotiates with the other agents about a non-local task, it needs to have the other agents' arrangements fit into its local schedule. Since the PC Manufacturer is dealing with multiple non-local tasks simultaneously, it also needs to ensure that the commitments on these non-local tasks are consistent with each other. For example, the deadline of task Order Hardware cannot be later than the start time of task Deliver Computer.
Figure 4: A Sample Local Schedule of the PC Manufacturer

Figure 4 shows a sample local schedule of the PC Manufacturer. According to this schedule, as long as task Order Hardware is performed during time [11, 28] and task Deliver Computer is performed during time [34, 40], there exists a feasible schedule for all tasks, and task Order Computer can be finished by time 40, which is the deadline promised to the customer. The time ranges allocated to task Order Hardware and task Deliver Computer are called consistent ranges; the negotiations on these tasks can be performed independently within these ranges without worrying about conflict. Notice that each task should be allocated a time range large enough to accommodate the estimated task process time. The larger the range, the more likely the negotiation is to succeed, because it is easier for the other agent to find a local schedule for the task. The question, then, is how big this time range should be. We define a quantitative measure called flexibility. Given a task t, suppose the allocated time range for t is [est, dl], where est is the earliest start time and dl is the deadline; then:

flexibility(t) = (dl − est − process_time(t)) / process_time(t)    (3)

Flexibility is an important attribute because it directly affects the possible outcome of the negotiation. The success probability of a negotiation can be described as a function of the flexibility.
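Equation 3 is a one-liner; the numbers below come from the sample schedule in Figure 4 (the function name is ours):

```python
def flexibility(est, dl, process_time):
    # Eq. 3: slack in the allocated range [est, dl], normalized by the
    # task's estimated process time.
    return (dl - est - process_time) / process_time

# Order Hardware is allocated the range [11, 28] with process time 11:
flexibility(11, 28, 11)  # (28 - 11 - 11) / 11, about 0.55
```

A flexibility of 0 means the range exactly fits the process time, leaving the other agent no scheduling slack at all.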
In this work, we adopt the following formula for the success probability as a function of the flexibility of the negotiation issue:

ps(v) = pbs(v) * (2/π) * arctan(f(v) + c)    (4)

This function describes a phenomenon where initially the likelihood of a successful negotiation increases significantly as the flexibility grows, and then levels off, which mirrors our experience from previous experiments. pbs is the basic success probability of negotiation v when the flexibility f(v) is very large, and c is a parameter used to adjust the relationship; different parameter values produce different function patterns, as shown in Figure 5. This function describes the agent's assumption about how the other agent involved in this negotiation would respond to this particular negotiation request when it has flexibility f(v), and it is part of the agent's local model of the other agents. To improve the accuracy of this function and make it closer to reality, the agent adjusts these two values according to the meta-level information transferred during the pre-negotiation phase. The value of c depends on whether further negotiation is involved and whether other tasks compete with this task for common resources. If so, more flexibility is needed for this issue, and hence c should be assigned a smaller value.
In our implementation, the following procedure is used to calculate c from the meta-level information negCount and PnoConflict:

if (PnoConflict > 0.99)  // no other competing task
    c = Clarge − negCount
else  // competing task exists
    c = Csmall

Figure 5: Different Success Probability Functions

This procedure works as follows. When there is no other competing task, c depends on the number of additional negotiations needed: the more additional negotiations are needed, the smaller c is, and hence the more flexibility is assigned to this issue to ensure negotiation success. If no more negotiation is needed, c is assigned a large number Clarge, meaning that less flexibility is needed for this issue. When there are other competing tasks, c is assigned a small number Csmall, meaning that more flexibility is needed. In our experimental work, Clarge is 5 and Csmall is 1. These values were selected according to our experience; a more practical approach would be to have agents learn and dynamically adjust these values, which is part of our future work.

pbs is calculated based on PnoConflict, f(v) (the flexibility of v in the previous negotiation), and c, using the inverse form of Equation 4:

pbs(v) = min(1.0, PnoConflict(v) * (π/2) / arctan(f(v) + c))    (5)

For example, based on the scenario described above, the agents have the following values for c and pbs after the meta-level information is transferred:

• PC Manufacturer, Order Hardware: pbs = 1.0, c = 2;
• Distribution Center, Order Chips: pbs = 1.0, c = 5;
• Store Agent, Order Memory: pbs = 0.79, c = 1.

Figure 5 shows the different patterns of the success probability function for different parameter values. Based on such patterns, the Store Agent would allocate more flexibility to the task Order Memory to increase the likelihood of success in negotiation.
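Equations 4 and 5 and the parameter-selection procedure above can be sketched as follows. The function names are ours; Clarge = 5 and Csmall = 1 are the values used in the experiments, and the example calls reproduce the c values listed above.

```python
import math

C_LARGE, C_SMALL = 5, 1  # values used in the paper's experiments

def choose_c(p_no_conflict, neg_count):
    """Pick the steepness parameter c from the meta-level information."""
    if p_no_conflict > 0.99:       # no other competing task
        return C_LARGE - neg_count
    return C_SMALL                 # a competing task exists

def success_probability(pbs, f, c):
    """Eq. 4: p_s(v) = p_bs(v) * (2/pi) * arctan(f(v) + c)."""
    return pbs * (2.0 / math.pi) * math.atan(f + c)

def basic_success_probability(p_no_conflict, f, c):
    """Eq. 5: Eq. 4 inverted with p_s replaced by PnoConflict, capped at 1."""
    return min(1.0, p_no_conflict * (math.pi / 2.0) / math.atan(f + c))

choose_c(1.0, 3)  # PC Manufacturer, Order Hardware: 5 - 3 = 2
choose_c(1.0, 0)  # Distribution Center, Order Chips: 5 - 0 = 5
choose_c(0.5, 0)  # Store Agent, Order Memory: competing task, so 1
```

Because arctan saturates, adding flexibility beyond a few units of 1/c buys almost no extra success probability, which is what motivates allocating flexibility where c is small.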
In the agent's further negotiation process, Formula 4 with different parameter values is used when reasoning about how much flexibility should be allocated to a certain issue.

The pre-negotiation communication occurs before negotiation, but not before every negotiation session. Agents only need to communicate when the environment changes, for example, when new types of tasks are generated, the characteristics of tasks change, or the negotiation partner changes. If no major change happens, the agent can simply use the current knowledge from previous communications. The communication and computation overhead of this pre-negotiation mechanism is very small, given the simple information collection procedure and the short messages to be transferred. We will discuss the effect of this mechanism in Section 5.

4.2 Negotiation Duration and Deadline

In the agent's local model, there are two attributes that describe how soon the agent expects the other agent to reply to the negotiation v: the negotiation duration δ(v) and the negotiation deadline ε(v).

Table 1: Examples of negotiations (δ(v): negotiation duration, s.p.: success probability)

index  task-name         δ(v)  reward  s.p.  penalty
1      Order Hardware    4     6       0.99  3
2      Order Chips       4     1       0.99  0.5
3      Order Memory      4     1       0.80  0.5
4      Deliver Hardware  4     1       0.70  0.5

These two important attributes affect the negotiation solution. Part of the negotiation solution is a negotiation ordering φ, which specifies in what order the multiple negotiations should be performed. In order to control the negotiation process, every negotiation should be finished before its negotiation deadline, and the negotiation duration is the time allocated for the negotiation. If a negotiation cannot be finished during the allocated time, the agent has to stop this negotiation and consider it a failure.
The decision about the negotiation order depends on the success probability, reward, and decommitment penalty of each negotiation. A good negotiation order should reduce the risk of decommitment and hence reduce the decommitment penalty. A search algorithm, described in [10], has been developed to find such a negotiation order.

For example, Table 1 shows some of the negotiations for the Distribution Center and their related attributes. Given enough time (negotiation deadline greater than 16), the best negotiation order is: 4 → 3 → 2 → 1. The most uncertain negotiation (4: Deliver Hardware) is performed first. The negotiation with the highest penalty (1: Order Hardware) is performed after all related negotiations (2, 3, and 4) have been completed, so as to reduce the risk of decommitment. If the negotiation deadline is less than 12 and greater than 8, the following negotiation order is preferred: (4, 3, 2) → 1, which means negotiations 4, 3, and 2 can be performed in parallel, and 1 needs to be performed after them. If the negotiation deadline is less than 8, then all negotiations have to be performed in parallel, because there is no time for sequencing negotiations.

In the original single-agent model [10], the negotiation deadline ε(v) is assumed to be given by the agent who initiates the contract, and the negotiation duration δ(v) is an estimate, based on experience, of how long the negotiation takes. However, the situation is not that simple in a negotiation chain problem. Consider the following scenario. When the customer posts a contract for task Purchase Computer, it could require the Store Agent to reply by time 20. Time 20 can be considered the negotiation deadline for Purchase Computer. When the Store Agent negotiates with the PC Manufacturer about Order Computer, what negotiation deadline should it specify?
How long the negotiation on Order Computer takes depends on how the PC Manufacturer handles its local multiple negotiations: whether it replies to the Store Agent first or waits until all other related negotiations have been settled. However, the ordering of those negotiations depends on the negotiation deadline on Order Computer, which should be provided by the Store Agent. The negotiation deadline of Order Computer for the PC Manufacturer is actually decided by the negotiation duration of Order Computer for the Store Agent: how much time the Store Agent would like to spend on the negotiation Order Computer is its duration, and this also determines the negotiation deadline for the PC Manufacturer.

The question then arises of how an agent should decide how much time to spend on each negotiation, since that decision affects the other agents' negotiation decisions. The original model does not handle this question, since it assumes the negotiation duration δ(v) is known. Here we propose three different approaches to handle this issue.

1. same-deadline policy. Use the same negotiation deadline for all related negotiations, i.e. allocate all available time to every negotiation:

δ(v) = total available time

For example, if the negotiation deadline for Purchase Computer is 20, the Store Agent will tell the PC Manufacturer to reply by 20 for Order Computer (ignoring the communication delay). This strategy gives every negotiation the largest possible duration, but it also eliminates the possibility of performing negotiations in sequence: all negotiations need to be performed in parallel, because the total available time is the same as the duration of each negotiation.

2. meta-info-deadline policy. Allocate time for each negotiation according to the meta-level information transferred in the pre-negotiation phase. A more complicated negotiation, one that involves further negotiations, should be allocated additional time.
For example, the PC Manufacturer allocates a duration of 12 to the negotiation Order Hardware, and a duration of 4 to Deliver Computer. The reason is that the negotiation with the Distribution Center about Order Hardware is more complicated, because it involves further negotiations between the Distribution Center and other agents. In our implementation, we use the following procedure to decide the negotiation duration δ(v):

if (negCount(v) >= 3)  // three or more additional negotiations needed
    δ(v) = (negCount(v) − 1) * basic_neg_cycle
else if (negCount(v) > 0)  // one or two additional negotiations needed
    δ(v) = 2 * basic_neg_cycle
else  // no additional negotiation
    δ(v) = basic_neg_cycle + 1

basic_neg_cycle represents the minimum time needed for a negotiation cycle (proposal-think-reply), which is 3 in our system setting, including communication delay. One additional time unit is allocated for the simplest negotiation because it allows the agent to perform a more complicated reasoning process while thinking. Again, the structure of this procedure was selected according to experience, and it could be learned and adjusted by agents dynamically.

3. evenly-divided-deadline policy. Evenly divide the available time among the n related negotiations:

δ(v) = total available time / n

For example, if the current time is 0 and the negotiation deadline for Order Computer is 21, given two other related negotiations, Order Hardware and Deliver Computer, each negotiation is allocated a duration of 7.

Intuitively, we feel that strategy 1 may not be a good one, because performing all negotiations in parallel increases the risk of decommitment and hence the decommitment penalties. However, it is not clear how strategies 2 and 3 perform, and we will discuss some experimental results in Section 5.

5.
EXPERIMENTS\nTo verify and evaluate the mechanisms presented for the negotiation chain problem, we implemented the scenario described in Figure 1. New tasks were randomly generated with decommitment penalty rate p \u2208 [0, 1], early finish reward rate e \u2208 [0, 0.3], and deadline dl \u2208 [10, 60] (this range allows different flexibilities for those sub-contracted tasks), and arrived at the Store Agent periodically. We performed two sets of experiments to study how the success probability functions and negotiation deadlines affect the negotiation outcome, the agents' utilities and the system's overall utility.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 55\nTable 2: Parameter Values Without/With Meta-level Information\nnegotiation | fixed-flex pbs | meta-info-flex pbs | meta-info-flex c\nOrder Computer | 0.95 | 1.0 | 0\nOrder Memory (1) | 0.95 | 0.79 | 1\nOrder Hardware | 0.95 | 1.0 | 2\nDeliver Computer | 0.95 | 1.0 | 1\nDeliver Hardware | 0.95 | 1.0 | 5\nOrder Chips | 0.95 | 1.0 | 1\nOrder Memory (2) | 0.95 | 0.76 | 1\nFigure 6: Different Flexibility Policies\nIn this experiment, agents need to make decisions on negotiation ordering and feature assignment for multiple attributes, including: earliest start time, deadline, promised finish time, and the attributes of the negotiation itself. To focus on the study of flexibility, the regular rewards for each type of task are fixed and not under negotiation. Here we only describe how agents handle the negotiation durations and negotiation deadlines, because these are the attributes affected by the pre-negotiation phase. All other attributes involved in negotiation are handled according to how they affect the feasibility of the local schedule (time-related attributes), how they affect the negotiation success probability (time- and cost-related attributes), and how they affect the expected utility.
A search algorithm [10] and a set of partial-order scheduling algorithms are used to handle these attributes.\nWe tried two different flexibility policies.\n1. fixed-flexibility policy: the agent uses a fixed value as the success probability (ps(v) = pbs(v)), according to its local knowledge and estimation.\n2. meta-info-flexibility policy: the agent uses the function ps(v) = pbs(v) \u2217 (2/\u03c0) \u2217 arctan(f(v) + c) to model the success probability. It also adjusts the parameters (pbs(v) and c) according to the meta-level information obtained in the pre-negotiation phase, as described in Section 4. Table 2 shows the values of these parameters for some negotiations.\nFigure 7: Different Negotiation Deadline Policies\nFigure 6 shows the results of this experiment. This set of experiments includes 10 system runs, and each run is for 1000 simulated time units. In the first 200 time units, agents learn about the task characteristics, which are used to calculate the conflict probabilities Pcij. At time 200, agents perform meta-level information communication, and in the next 800 time units, agents use the meta-level information in their local reasoning process. The data was collected over the 800 time units after the pre-negotiation phase\u00b2. One Purchase Computer task is generated every 20 time units, and two Purchase Memory tasks are generated every 20 time units. The deadline for task Purchase Computer is randomly generated in the range of [30, 60], and the deadline for task Purchase Memory in the range of [10, 30]. The decommitment penalty rate is randomly generated in the range of [0, 1].
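The success-probability model of the meta-info-flexibility policy can be sketched as follows (a minimal sketch; the parameter values below come from Table 2, but the function and argument names are our own):

```python
import math

def success_probability(pbs, flexibility, c):
    # ps(v) = pbs(v) * (2/pi) * arctan(f(v) + c):
    # monotonically increasing in the flexibility f(v), and approaching
    # the base probability pbs(v) as f(v) + c grows large.
    return pbs * (2.0 / math.pi) * math.atan(flexibility + c)
```

For example, for Order Hardware (pbs = 1.0, c = 2 in Table 2) the modeled success probability grows toward 1.0 as more flexibility is allocated to the task, which is exactly the behavior the flexibility policies exploit.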
This setting creates multiple concurrent negotiation chain situations; there is one long chain:\nCustomer - Store - PC Manufacturer - Distribution Center - Producers - Transporter\nand two short chains (one for each Memory Producer):\nCustomer - Store - Memory Producer\nThis demonstrates that the mechanism is capable of handling multiple concurrent negotiation chains.\nAll agents perform better in this example (gain more utility) when they use the meta-level information to adjust their local control through the parameters in the success probability function (meta-info-flex policy). Especially for those agents in the middle of the negotiation chain, such as the PC Manufacturer and the Distribution Center, the flexibility policy makes a significant difference. When the agent has a better understanding of the global negotiation scenario, it is able to allocate more flexibility to those tasks that involve complicated negotiations and resource contention. Therefore, the success probability increases and fewer tasks are rejected or canceled (90% of the tasks were successfully negotiated when using meta-level information, compared to 39% when no pre-negotiation is used), resulting in both the agent and the system achieving better performance.\nThe second set of experiments compares the three negotiation deadline policies described in Section 4.2 when using the meta-info flexibility policy described above. The initial result shows that the same-deadline policy and the meta-info-deadline policy perform almost the same when the system workload is moderate: tasks can be accommodated given sufficient flexibility. In this situation, with either policy, most negotiations are successful and there are few decommitment occurrences, so the ordering of negotiations does not make much difference.
Therefore, in this second set of experiments, we increase the number of new tasks generated to raise the average workload in the system. One Purchase Computer task is generated every 15 time units, three Purchase Memory tasks are generated every 15 time units, and one Deliver Gift task (directly from the customer to the Transporter) is generated every 10 time units. This setup generates a higher level of system workload, which results in some tasks not being completed no matter what negotiation ordering is used. In this situation, we found that the meta-info-deadline policy performs much better than the same-deadline policy (see Figure 7). When an agent uses the same-deadline policy, all negotiations have to be performed in parallel. In the case that one negotiation fails, all related tasks have to be canceled, and the agent needs to pay multiple decommitment penalties. When the agent uses the meta-info-deadline policy, complicated negotiations are allocated more time and, correspondingly, simpler negotiations are allocated less time. This also has the effect of allowing some negotiations to be performed in sequence. The consequence of sequencing negotiations is that, if there is a failure, an agent can simply cancel the other related negotiations that have not yet been started. In this way, the agent does not have to pay decommitment penalties for those canceled negotiations, because no commitment has been established yet. The evenly-divided-deadline policy performs much worse than the meta-info-deadline policy.\n2 We only measure the utility collected after the learning phase because the learning phase is relatively short compared to the evaluation phase; also, during the learning phase no meta-level information is used, so some of the policies are invalid.
In the evenly-divided-deadline policy, the agent allocates negotiation time evenly among the related negotiations, hence the complicated negotiation does not get enough time to complete.\nThe above experimental results show that the meta-level information transferred among agents during the pre-negotiation phase is critical for building a more accurate model of the negotiation problem. The reasoning process based on this more accurate model produces an efficient negotiation solution, which improves the agent's and the system's overall utility significantly. This conclusion holds for those environments where the system faces a moderately heavy load and tasks have relatively tight deadlines (our experimental setup produces such an environment); efficient negotiation is especially important in such environments.\n6. RELATED WORK\nFatima, Wooldridge and Jennings [1] studied multiple issues in negotiation in terms of the agenda and negotiation procedure. However, this work is limited since it only involves a single agent's perspective, without any understanding that the agent may be part of a negotiation chain. Mailler and Lesser [4] have presented an approach to a distributed resource allocation problem where the negotiation chain scenario occurs. It models the negotiation problem as a distributed constraint optimization problem (DCOP), and a cooperative mediation mechanism is used to centralize relevant portions of the DCOP. In our work, the negotiation involves more complicated issues such as reward, penalty and utility; also, we adopt a distributed approach where no centralized control is needed.
A mediator-based partially centralized approach has been applied to the coordination and scheduling of complex task networks [8]; this differs from our work since that system is completely cooperative and the individual utility of a single agent is not considered at all. A combinatorial auction [2, 9] could be another approach to solving the negotiation chain problem. However, in a combinatorial auction, the agent does not reason about the ordering of negotiations. This would lead to a problem similar to those we discussed when the same-deadline policy is used.\n7. CONCLUSION AND FUTURE WORK\nIn this paper, we have solved negotiation chain problems by extending our multi-linked negotiation model from the perspective of a single agent to multiple agents. Instead of solving the negotiation chain problem in a centralized manner, we adopt a distributed approach where each agent has an extended local model and decision-making process. We have introduced a pre-negotiation phase that allows agents to transfer meta-level information on related negotiation issues. Using this information, the agent can build a more accurate model of the negotiation in terms of modeling the relationship between flexibility and success probability. This more accurate model helps the agent choose the appropriate negotiation solution. The experimental data show that these mechanisms improve the agent's and the system's overall performance significantly. In future extensions of this work, we would like to develop mechanisms to verify how reliable the agents are. We also recognize that the current approach of applying the meta-level information is mainly heuristic, so we would like to develop a learning mechanism that enables the agent to learn how to use such information to adjust its local model from previous experience.
To further verify this distributed approach, we would like to develop a centralized approach, so that we can evaluate how good the solution from the distributed approach is compared to the optimal solution found by the centralized approach.\n8. REFERENCES\n[1] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Optimal negotiation strategies for agents with incomplete information. In Revised Papers from the 8th International Workshop on Intelligent Agents VIII, pages 377-392. Springer-Verlag, 2002.\n[2] L. Hunsberger and B. J. Grosz. A combinatorial auction for collaborative planning. In Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS-2000), 2000.\n[3] N. R. Jennings, P. Faratin, T. J. Norman, P. O'Brien, B. Odgers, and J. L. Alty. Implementing a business process management system using ADEPT: A real-world case study. Int. Journal of Applied Artificial Intelligence, 2000.\n[4] R. Mailler and V. Lesser. A cooperative mediation-based protocol for dynamic, distributed resource allocation. IEEE Transactions on Systems, Man, and Cybernetics, Part C, Special Issue on Game-theoretic Analysis and Stochastic Simulation of Negotiation Agents, 2004.\n[5] T. J. Norman, A. Preece, S. Chalmers, N. R. Jennings, M. Luck, V. D. Dang, T. D. Nguyen, V. Deora, J. Shao, A. Gray, and N. Fiddian. Agent-based formation of virtual organisations. Int. J. Knowledge Based Systems, 17(2-4):103-111, 2004.\n[6] T. Sandholm and V. Lesser. Issues in automated negotiation and electronic commerce: Extending the contract net framework. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 328-335, 1995.\n[7] J. Shen, X. Zhang, and V. Lesser. Degree of local cooperation and its implication on global utility. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), July 2004.\n[8] M. Sims, H. Mostafa, B. Horling, H. Zhang, V. Lesser, and D. Corkill.
Lateral and hierarchical partial centralization for distributed coordination and scheduling of complex hierarchical task networks. In AAAI 2006 Spring Symposium on Distributed Plan and Schedule Management, 2006.\n[9] W. Walsh, M. Wellman, and F. Ygge. Combinatorial auctions for supply chain formation. In Second ACM Conference on Electronic Commerce, 2000.\n[10] X. Zhang, V. Lesser, and S. Abdallah. Efficient management of multi-linked negotiation based on a formalized model. Autonomous Agents and Multi-Agent Systems, 10(2):165-205, 2005.\n[11] X. Zhang, V. Lesser, and T. Wagner. Integrative negotiation among agents situated in organizations. IEEE Transactions on Systems, Man, and Cybernetics: Part C, Special Issue on Game-theoretic Analysis and Stochastic Simulation of Negotiation Agents, 36(1):19-30, January 2006.\n[12] Q. Zheng and X. Zhang. Automatic formation and analysis of multi-agent virtual organization. Journal of the Brazilian Computer Society: Special Issue on Agents Organizations, 11(1):74-89, July 2005.", "keywords": "distributed setting;complex supply-chain scenario;flexibility;pre-negotiation;multi-link negotiation;sub-task relocation;agent;multiple agent;multi-linked negotiation;virtual organization;reputation mechanism;multiple concurrent task;negotiation framework;semi-cooperative multi-agent system;negotiation chain"}
-{"name": "test_I-5", "title": "Towards Self-organising Agent-based Resource Allocation in a Multi-Server Environment", "abstract": "Distributed applications require distributed techniques for efficient resource allocation. These techniques need to take into account the heterogeneity and potential unreliability of resources and resource consumers in distributed environments. In this paper we propose a distributed algorithm that solves the resource allocation problem in distributed multiagent systems. Our solution is based on the self-organisation of agents, which does not require any facilitator or management layer. The resource allocation in the system is a purely emergent effect. We present results of the proposed resource allocation mechanism in a simulated static and dynamic multi-server environment.", "fulltext": "1. INTRODUCTION\nWith the increasing popularity of distributed computing technologies such as Grid [12] and Web services [20], the Internet is becoming a powerful computing platform where different software peers (e.g., agents) can use existing computing resources to perform tasks. In this sense, each agent is a resource consumer that acquires a certain amount of resources for the execution of its tasks. It is difficult for a central resource allocation mechanism to collect and manage the information about all shared resources and resource consumers in order to effectively perform the allocation of resources. Hence, distributed solutions to the resource allocation problem are required. Researchers have recognised these requirements [10] and proposed techniques for distributed resource allocation. A promising class of such distributed approaches is based on economic market models [4], inspired by the principles of real stock markets.
Even if those approaches are distributed, they usually require a facilitator for pricing, resource discovery and dispatching jobs to resources [5, 9]. Another largely unsolved problem of those approaches is the fine-tuning of price, time and budget constraints to enable efficient resource allocation in large, dynamic systems [22].\nIn this paper we propose a distributed solution to the resource allocation problem based on the self-organisation of resource consumers in a system with limited resources. In our approach, agents dynamically allocate tasks to servers that provide a limited amount of resources, and agents autonomously select the execution platform for a task rather than asking a resource broker to do the allocation. All control needed for our algorithm is distributed among the agents in the system. They continuously optimise the resource allocation process over their lifetime, adapting to changes in the availability of shared resources by learning from past allocation decisions. The only information available to all agents is resource load and allocation success information from past resource allocations; additional resource load information about servers is not disseminated. The basic concept of our solution is inspired by the inductive reasoning and bounded rationality introduced by W. Brian Arthur [2].\nThe proposed mechanism does not require a central controlling authority or resource management layer, and does not introduce additional communication between agents to decide which task is allocated on which server. We demonstrate that this mechanism performs well in dynamic systems with a large number of tasks and can easily be adapted to various system sizes. In addition, the overall system performance is not affected in case agents or servers fail or become unavailable.
The proposed approach provides an easy way to implement distributed resource allocation and takes into account multi-agent system tendencies toward autonomy, heterogeneity and unreliability of resources and agents.\nThe proposed technique can easily be supplemented by techniques for queuing or rejecting resource allocation requests of agents [11]. Such self-managing capabilities of software agents allow reliable resource allocation even in an environment with unreliable resource providers. This can be achieved through the mutual interactions between agents, applying techniques from complex system theory. Self-organisation of all agents leads to a self-organisation of the system resources and is an emergent property of the system [21].\n74\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nThe remainder of the paper is structured as follows: The next section gives an overview of the related work already done in the areas of load balancing, resource allocation and scheduling. Section 3 describes the model of a multi-agent environment that was used to conduct simulations for a performance evaluation. Sections 4 and 5 describe the distributed resource allocation algorithm and present various experimental results. A summary, conclusion and outlook on future work finish this paper.\n2. RELATED WORK\nResource allocation is an important problem in the area of computer science. Over the past years, solutions based on different assumptions and constraints have been proposed by different research groups [7, 3, 15, 10]. Generally speaking, resource allocation is a mechanism or policy for the efficient and effective management of the access to a limited resource or set of resources by its consumers. In the simplest case, resource consumers ask a central broker or dispatcher for available resources where the resource consumer will be allocated. The broker usually has full knowledge about all system resources.
All incoming requests are directed to the broker, who is the sole decision maker. In those approaches, the resource consumer cannot influence the allocation decision process. Load balancing [3] is a special case of the resource allocation problem, using a broker that tries to be fair to all resources by balancing the system load equally among all resource providers. This mechanism works best in a homogeneous system.\nA simple distributed technique for resource management is capacity planning, i.e., refusing or queuing incoming agents to avoid resource overload [11]. From the resource owner's perspective, this technique is important to prevent overload at the resource, but it is not sufficient for effective resource allocation; it can only be a good supplement for distributed resource allocation mechanisms.\nMost of today's techniques for resource allocation in grid computing toolkits like Globus [12] or Condor-G [13] coordinate the resource allocation with an auctioneer, arbitrator, dispatcher, scheduler or manager. Those coordinators usually need to have global knowledge of the state of all system resources. An example of a dynamic resource allocation algorithm is the Cactus project [1] for the allocation of computationally very expensive jobs.\nThe value of distributed solutions to the resource allocation problem has been recognised by research [10]. Inspired by the principles of stock markets, economic market models have been developed for trading resources to regulate supply and demand in the grid. These approaches use different pricing strategies such as posted-price models, different auction methods or a commodity market model. Users try to purchase the cheap resources required to run their jobs, while providers try to make as much profit as possible and operate the available resources at full capacity.
A collection of different distributed resource allocation techniques based on market models is presented in Clearwater [10]. Buyya et al. developed a resource allocation framework based on the regulation of supply and demand [4] for Nimrod-G [6], with the main focus on job deadlines and budget constraints. The Agent based Resource Allocation Model (ARAM) for grids is designed to schedule computationally expensive jobs using agents. A drawback of this model is the extensive use of message exchange between agents for periodic monitoring and information exchange within the hierarchical structure. Subtasks of a job migrate through the network until they find a resource that meets the price constraints; the job's migration itinerary is determined by the resources, which are connected in different topologies [17]. The mechanism proposed in this paper eliminates the need for periodic information exchange about resource loads and does not need a connection topology between the resources.\nThere has been considerable work on decentralised resource allocation techniques using game theory published over recent years. Most of them are formulated as repetitive games in an idealistic and simplified environment. For example, Arthur [2] introduced the so-called El Farol bar problem, which does not allow a perfect, logical and rational solution. It is an ill-defined decision problem that assumes and models inductive reasoning. It is probably one of the most studied examples of complex adaptive systems derived from the human way of deciding ill-defined problems. A variation of the El Farol problem is the so-called minority game [8]. In this repetitive decision game, an odd number of agents have to choose between two resources based on past success information, each trying to allocate itself at the resource chosen by the minority. Galstyan et al. [14] studied a variation with more than two resources, changing resource capacities and information from neighbouring agents.
They showed that agents can adapt effectively to changing capacities in this environment using a set of simple look-up tables (strategies) per agent.\nAnother distributed technique that is employed for solving the resource allocation problem is based on reinforcement learning [18]. Similar to our approach, a set of agents compete for a limited number of resources based only on prior individual experience. In that work, the system objective is to maximise system throughput while ensuring fairness to resources, measured as the average processing time per job unit.\nA resource allocation approach for sensor networks based on self-organisation techniques and reinforcement learning is presented in [16], with the main focus on optimising the energy consumption of network nodes. We [19] proposed a self-organising load balancing approach for a single server, focusing on optimising the communication costs of mobile agents: a mobile agent will reject a migration to a remote agent server if it expects the destination server to be already overloaded by other agents or server tasks. Agents make their decisions themselves, based on forecasts of the server utilisation. In this paper, a solution for a multi-server environment is presented, without consideration of communication or migration costs.\n3. MODEL DESCRIPTION\nWe model a distributed multi-agent system as a network of servers L = {l1, . . . , lm}, agents A = {a1, . . . , an} and tasks T = {T1, ..., Tm}. Each agent has a number of tasks Ti that need to be executed during its lifetime. A task Ti requires U(Ti, t) resources for its execution at time t, independently of its execution server. Resources for the execution of tasks are provided by each server li. The task's execution location in general is specified by the map L : T \u00d7 t \u2192 L. An agent has to know about the existence of server resources in order to allocate tasks at those resources. We write LS(ai) to address the set of resources known by agent ai.\nFigure 1: An illustration of our multi-server model with exclusive and shared resources for the agent execution.\nResources in the system can be used by all agents for the execution of tasks. The amount of resources C(li, t) provided by each server can vary over time. The resource utilisation of a server li at time t is calculated using Equation 1, by adding up the resource consumption U(Tj, t) of each task Tj that is executed at the resource at time t. All resource units used in our model represent real metrics such as memory or processor cycles.\nU(li, t) = \u2211 j=1..n U(Tj, t) | L(Tj, t) = li (1)\nIn addition to the case where the total amount of system resources is sufficient to execute all tasks, we are also interested in the case where not enough system resources are provided to fulfil all allocation requests. That is, the overall shared resource capacity is lower than the amount of resources requested by agents. In this case, some agents must wait with their allocation requests until free resources are expected. The multi-agent system model used for our simulations is illustrated in Fig. 1.\n4. SELF-ORGANISING RESOURCE ALLOCATION\nThe resource allocation algorithm described in this section is integrated in each agent. The only information required in order to make a resource allocation decision for a task is the server utilisation from completed task allocations at those servers. There is no additional dissemination of information about server resource utilisation or free resources. Our solution demonstrates that agents can self-organise in a dynamic environment without active monitoring information, which would cause a lot of network traffic overhead. Additionally, we do not have any central controlling authority.
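Equation 1 amounts to summing the consumption of the tasks currently mapped to a server. A minimal sketch (the list-of-pairs data representation is our own assumption, not the paper's):

```python
def server_utilisation(allocations, server):
    # allocations: list of (task_consumption, assigned_server) pairs,
    # i.e. U(T_j, t) together with L(T_j, t) at one time step t.
    return sum(u for (u, l) in allocations if l == server)

def free_capacity(allocations, server, capacity):
    # C(l_i, t) - U(l_i, t): resources still available at the server.
    return capacity - server_utilisation(allocations, server)
```

With three running tasks `[(2, 'l1'), (3, 'l1'), (4, 'l2')]`, server l1 has a utilisation of 5 and, given a capacity of 10, a free capacity of 5.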
All behaviour that leads to the resource allocation is created by the effective competition of the agents for shared resources and is a purely emergent effect.\nThe agents in our multi-agent system compete for a resource or a set of resources to execute tasks. The collective action of these agents changes the environment and, as time goes by, they have to adapt to these changes to compete more effectively in the newly created environment. Our approach is based on different agent beliefs, represented by predictors, and different information about their environment. Agents prefer a task allocation at a server with free resources. However, there is no way to be sure of the amount of free server resources in advance. All agents have the same preferences, and an agent will allocate a task on a server if it expects enough free resources for its execution. There is no communication between agents; actions taken by agents influence the actions of other agents indirectly. The applied mechanism is inspired by the inductive reasoning and bounded rationality principles [2]. It is derived from the human way of deciding ill-defined problems: humans tend to keep in mind many hypotheses and act on the most plausible one. Therefore, each agent keeps track of the performance of a private collection of predictors and selects the one that is currently most promising for decision making.\n4.1 Resource Allocation Algorithm\nThis section describes the decision mechanism of our self-organising resource allocation. All necessary control is integrated in the agents themselves. There is no higher controlling authority, management layer for decision support, or information distribution. All agents have a set of predictors for each resource to forecast the future resource utilisation of these servers for potential task allocations. To do so, agents use historical information from past task allocations at those resources.
Based on the forecasted resource utilisation, the agent makes its resource allocation decision. After the task has finished its execution and returned the results back to the agent, the predictor performances are evaluated and the history information is updated.\nAlgorithm 1 shows the resource allocation algorithm for each agent. The agent first predicts the next step's resource load for each server from historical information (lines 3-7). If the predicted resource load plus the task's resource consumption is below the last known server capacity, the server is added to the list of candidates for the allocation. The agent then evaluates whether any free shared resources for the task allocation are expected. In the case that no free resources are expected (line 9), the agent will explore resources by allocating the task at a randomly selected server among all not yet predictable servers, to gather resource load information. This is the standard case at the beginning of the agent life-cycle, as no information about the environment is available yet.\nThe resource load prediction itself uses a set of r predictors P(a, l) := {pi | 1 \u2264 i \u2264 r} per server. One predictor pA \u2208 P of each set is called the active predictor; it forecasts the next step's resource load. Each predictor is a function p : H \u2192 \u2115+ \u222a {0} from the space of history data H to a non-negative integer, which is the forecasted value. For example, a predictor could forecast a resource load equal to the average amount of occupied resources during the last execution at this resource. A history H of resource load information is a list of up to m history items hi = (xi, yi), comprising the observation date xi and the observed value yi.
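The history structure and predictors of the kind just described can be sketched as follows (a minimal sketch; the paper does not prescribe these particular predictor functions or names):

```python
from statistics import mean

def record(history, time, load, m=5):
    # A history is a list of (observation_time, observed_load) pairs,
    # most recent first (h0 = history[0]), truncated to at most m items.
    history.insert(0, (time, load))
    del history[m:]

def predict_last(history):
    # Forecast: repeat the most recently observed resource load.
    return history[0][1]

def predict_mean(history):
    # Forecast: average occupied resources over the stored history.
    return mean(y for (_, y) in history)
```

An agent would keep a set of such functions per server, mark one of them active, and re-rank them as new observations arrive.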
The most recent history item is h0.\nHm(li) = ((x0, y0), ..., (xk, yk)) | 0 \u2264 k < m (2)\nOur algorithm uses a set of predictors rather than only one, to avoid all agents making the same decision based on the predicted value, which would lead to an invalidation of their beliefs. Imagine that only one shared resource is known by a number of agents, each using one predictor that forecasts the same value as the last known server resource utilisation. All agents that allocated a task at a server that was slightly overloaded would dismiss another allocation at this server, as they expect the server to be overloaded again based on the predictions. As a result, the server would have a large amount of free resources. A set of different predictors that predict different values avoids this situation of invalidating the beliefs of the agents [19].\nFigure 2: (a) Collected resource load information from previous task allocations that is used for future predictions. (b) The predictors' probability distribution for selecting the new active predictor.\nAn example of the resource load information collected during the last 5 visits of an agent at a shared resource can be seen in Fig. 2(a). It shows that the resource was visited frequently, which means free resources for execution were available and an exploration of other servers was unnecessary. This may change in the future, as the resource load has increased significantly recently.\nIn the case where the set of servers predicted to have free resources available is not empty (line 13), the agent selects one of those for the allocation.
We have implemented two alternative algorithms for the selection of a server for the task allocation.

Algorithm 1 Resource allocation algorithm of an agent
1 L ← ∅ // servers with free resources
2 u ← U(T, t + 1) // task resource consumption
3 for all P(a, l) | l ∈ L^S(a) do
4   U(l) ← resourceLoadPrediction(P(a, l), t + 1)
5   if U(l) + u ≤ C(l) then
6     L ← L ∪ {P(a, l)}
7   end if
8 end for
9 if L = ∅ then
10   // all unpredictable shared resources
11   E ← L^S \ {l ∈ L^S(a) | P(a, l) ∈ L}
12   allocationServer ← a random element of E
13 else
14   allocationServer ← serverSelection(L)
15 end if
16 return allocationServer

Algorithm 2 shows the first method, a non-deterministic selection according to the predictability of the server resource utilisation. A probability distribution is calculated from the confidence levels of the resource predictions. The confidence level depends on three factors: the accuracy of the active predictor, the amount of historical information about the server, and the average age of the history information (see Eq. 3). The server with the highest confidence level has the biggest chance of being selected as the active server.

G(P) = w1 · size(H)/m + w2 · Age(H)/max Age(H) + w3 · g(p)/max g(p)    (3)

where:
w_i - weights
size(H) - number of data items in the history
m - maximal number of history values
Age(H) - average age of the historical data
g(p) - see Eq.
4

Algorithm 2 serverSelection(L) - best predictable server
1 for all P(a, l) ∈ L do
2   calculate G(P)
3 end for
4 transform all G(P) into a probability distribution
5 return l ∈ L^S selected according to the probability distribution

Algorithm 3 serverSelection(L) - most free resources
1 for all P(a, l) ∈ L do
2   c(l) ← C(l) − U(l)
3 end for
4 return l ∈ L^S such that c(l) is maximal

The second alternative method for selecting a server from the set of servers predicted to have free resources is deterministic and shown in Algorithm 3. The server with the most expected free resources from the set L of servers with expected free resources is chosen. In the case where all agents predict the most free resources for one particular server, all agents would allocate the task at this server, which would invalidate the agents' beliefs. However, our experiments show that different individual history information and the non-deterministic active predictor selection usually prevent this situation.

In the case where the resource allocation algorithm does not return any server (Alg. 1, line 16), the allocation at a resource is not recommended, and the agent will not allocate the task at a resource. This case happens only if a resource load prediction for all servers is possible but no free resources are expected.

After the agent execution has finished, the evaluation process described in Algorithm 4 is performed. This process is divided into three cases. First, the task was not allocated at a resource. In this case, the agent cannot decide whether the decision not to allocate the task was correct. The agent then removes old historical data. This is necessary for a successful adaptation in the future: if the agent did not delete old historical information, the prediction would always forecast that no free resources are available.
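The confidence level of Eq. (3) and the two selection strategies of Algorithms 2 and 3 can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the function and parameter names are ours, and the default weights w are an assumption (the paper leaves them unspecified).

```python
import random

def confidence(history_size, m, avg_age, max_age, rating, max_rating,
               w=(0.4, 0.3, 0.3)):
    """Confidence level G(P) of Eq. (3): a weighted sum of history
    fullness, average history age and the active predictor's rating
    g(p), each normalised by its maximum."""
    w1, w2, w3 = w
    return (w1 * history_size / m
            + w2 * avg_age / max_age
            + w3 * rating / max_rating)

def roulette_select(candidates, scores, rng=random):
    """Algorithm 2: transform the confidence levels into a probability
    distribution and draw one server non-deterministically."""
    total = sum(scores)
    if total <= 0:
        return rng.choice(candidates)  # degenerate case: uniform choice
    r = rng.random() * total
    acc = 0.0
    for cand, s in zip(candidates, scores):
        acc += s
        if r <= acc:
            return cand
    return candidates[-1]

def most_free_select(candidates, capacity, predicted_load):
    """Algorithm 3: deterministically pick the server with the most
    expected free resources c(l) = C(l) - U(l)."""
    return max(candidates, key=lambda l: capacity[l] - predicted_load[l])
```

The roulette-wheel draw is the same mechanism used later for choosing the new active predictor.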
The agent would then never allocate a task at one of the resources in the future.

Old historical information is removed from the agent's resource history using a decay rate. The decay rate is a cumulative distribution function that calculates the probability that a history item is deleted after it has reached a certain age.

Figure 3: Decay rate of historical information

The current implementation uses a constant probability density function in a configurable domain. Figure 3 shows an example of such a cumulative distribution function for the decay rate. Depending on the environment, the probability density function must be altered. If the number of potential servers per agent is high, historical information must be kept longer to avoid the exploration of unexplored resources. In addition, a dynamic environment requires more up-to-date information to make more reliable predictions.

The second case in the evaluation process (Alg. 4, line 5) describes the actions taken after a server was visited for the first time. The agent creates a new predictor set for this server and records the historical information. All predictors for this set are chosen randomly from a predefined set.

g(p) = Σ_{i=0}^{l} r_i    (4)

where:
r_i = 1 if the i-th decision was correct, 0 if the i-th outcome is unknown, −1 if the i-th decision was wrong

The general case (Alg. 4, line 8) is the evaluation after the agent allocated the task at a resource. The agent evaluates all predictors of the predictor set for this resource by predicting the resource load with all predictors based on the old historical data. Predictors that made a correct prediction, meaning the resource allocation decision was correct, will receive a positive rating.
This is the case if the resource was not overloaded and free resources for execution were predicted, or if the resource was overloaded and this predictor would have prevented the allocation. All predictors that predicted values which would have led to wrong decisions receive negative ratings. In all other cases, which include the case that no prediction was possible, a neutral rating is given to the predictors. Based on these performance ratings, the confidence levels are calculated using Equation 4. The confidence of all predictors that cannot predict with the current historical information about the server is set to zero, to prevent their selection as the new active predictor. These values are transformed into a probability distribution, and according to this probability distribution the new active predictor is chosen, implemented as a roulette wheel selection. Figure 2(b) illustrates the probabilities of a set of 10 predictors, calculated from the predictor confidence levels. Even though predictor 9 has the highest selection probability, it was not chosen by the roulette wheel selection process as the active predictor. This non-deterministic predictor selection prevents the invalidation of the agents' beliefs in the case that agents have the same set of predictors.

The prediction accuracy, that is, the error of the prediction compared to the observed value, is not taken into consideration. Suppose the active predictor predicts slightly above the resource capacity, which leads to no allocation at a resource, while in fact enough resources for the execution would have been available. A less accurate prediction far below the capacity would have led to the correct decision and is therefore preferred.

The last action of the evaluation algorithm (Alg. 4, line 22) updates the history with the latest resource load information of the server.
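The decision-based rating just described, the rating sum g(p) of Eq. (4), and the decay-based removal of old history items (Fig. 3) can be sketched compactly. A minimal sketch, assuming Python; the function names and the age bounds are illustrative, not from the paper.

```python
import random

def rate_prediction(observed_load, predicted_load, task_demand, capacity):
    """Rate one predictor by whether acting on its forecast would have
    produced the correct allocation decision, not by its numerical
    accuracy. Returns +1 (positive), -1 (negative) or 0 (neutral)."""
    if predicted_load is None:
        return 0  # no prediction possible with the current history
    overloaded = observed_load > capacity
    would_decline = predicted_load + task_demand > capacity
    # Correct if the predictor's decision matches the observed outcome.
    return 1 if overloaded == would_decline else -1

def predictor_rating(ratings):
    """g(p) of Eq. (4): the sum of the recorded +1 / 0 / -1 ratings."""
    return sum(ratings)

def evaporate(history, now, min_age, max_age, rng=random):
    """Decay of old history items: items younger than min_age always
    survive, items older than max_age are always deleted, and the
    deletion probability grows linearly in between (constant density
    on [min_age, max_age], i.e. the CDF shape of Fig. 3)."""
    def keep(age):
        if age <= min_age:
            return True
        if age >= max_age:
            return False
        return rng.random() >= (age - min_age) / (max_age - min_age)
    return [(x, y) for (x, y) in history if keep(now - x)]
```

Note that rate_prediction deliberately rewards an inaccurate forecast that still yields the right decision, matching the discussion of prediction accuracy above.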
The oldest history data is overwritten if m history values are already recorded for the server.

Algorithm 4 Decision evaluation
1 if l ∈ L^E then
2   for all P(a, l) | l ∈ L^S(a) do
3     evaporate old historical data
4   end for
5 else if P(a, l) = null then
6   create P(a, l)
7   update H(l)
8 else
9   for all p ∈ P(a, l) do
10    pred ← resourceLoadPrediction(p)
11    if (U(l) ≤ C(l) AND pred + U(a, t) ≤ C(l)) OR (U(l) > C(l) AND pred + U(a, t) > C(l)) then
12      addPositiveRating(p)
13    else if (U(l) ≤ C(l) AND pred + U(a, t) > C(l)) OR (U(l) > C(l) AND pred + U(a, t) ≤ C(l)) then
14      addNegativeRating(p)
15    else
16      addNeutralRating(p)
17    end if
18  end for
19  calculate all g(p); g(p) ← 0 if p cannot predict
20  transform all g(p) into a probability distribution
21  p_A ← p ∈ P(a, l) selected according to this probability distribution
22  update H(l)
23 end if

4.2 Remarks and Limitations of the Approach

Our prediction mechanism uses a number of different types of simple predictors rather than one sophisticated predictor. This method ensures that agents can compete more effectively in a changing environment, as different types of predictors are suitable for different situations and environments. Therefore, all predictors are evaluated after each decision and a new active predictor is selected. This non-deterministic selection of the new active predictor ensures that the agents' beliefs are not invalidated, which happens when all predictors make the same decision, especially if there is only one shared resource available and all agents only have the choice of going to this one shared resource or not [19].

Our self-organising approach is robust against failures of resources or agents in the system. If they join or leave, the system can self-organise quickly and adapt to the new conditions.
There is no classical bottleneck or single point of failure as in centralised mechanisms. The limitation is the reliance on historical resource utilisation information about other servers: a forecast of the resource utilisation of a remote server is only possible if an agent has a certain amount of historical information about that shared resource. If the number of servers per agent is very large, there is no efficient way to gather historical information about remote servers. This problem occurs if the amount of provided shared resources is limited and not sufficient for all resource consumers. In this case, an agent would randomly try all known servers until it finds one with free resources or none remains. In the worst case, by the time all servers have been tried, the historical information about the servers is already outdated.

5. EXPERIMENTAL EVALUATION

The first part of this section gives a short overview of the setup of our simulation environment. In the rest of the section, results of the experiments are presented and discussed. All experiments are conducted in a special test-bed that simulates and models a multi-agent system. We have implemented this test-bed in the Java programming language, independent of any specific agent toolkit. It allows a variety of experiments in stable as well as dynamic environments with a configurable number of agents, tasks and servers. An event-driven model is used to trigger all activities in the system.

For all simulations, we limited the number of history data items for each server to 10 and the number of performance ratings per predictor to 10, and assigned 10 predictors to every predictor set for each agent. All predictors are chosen randomly from an arbitrary predefined set of 32 predictors of the following types.
Predictors differ in their cycles or window sizes.

- n-cycle predictor: p(n) = y_n uses the n-th-last history value
- n-mean predictor: p(n) = (1/n) · Σ_{i=1}^{n} y_i uses the mean value of the n last history values
- n-linear-regression predictor: p(n, t) = a·t + b uses the linear regression value from the last n history values, where a and b are calculated by least-squares fitting over the last n history data items
- n-distribution predictor: uses a random value from the frequency distribution of the n last history values
- n-mirror predictor: p(n) = 2·H̄ − y_n uses the mirror image of the n-th-last history value around the mean H̄ of all history values

The efficiency of our proposed self-organising resource allocation is assessed by the resource load development of each server over the simulation, as well as the total resource load development accumulated over all shared resources. The resource load of each server is calculated using Equation 1 as the sum of the resource consumption of all agents currently executing at this server. The total resource load of the system is calculated as the sum of the resource loads of all resources. The self-organising resource allocation algorithm has random elements; therefore, the presented results show mean values and standard deviations calculated over 100 repeated experiments.

5.1 Experimental Setup

The following parameters have an impact on the resource allocation process. We give an overview of the parameters with a short description.

- Agents: The number of agents involved in the resource allocation. This number varies in the experiments between 650 and 750, depending on the total amount of available system resources.
- Resource consumption: Each task consumes server resources for its execution. The resource consumption is assigned randomly to each task from an interval prior to its allocation.
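Of the predictor types above, the n-linear-regression predictor is the least obvious; the others are covered by sketches earlier in the text. A minimal sketch, assuming Python, with history items as (date, value) pairs ordered most recent first; the function name is illustrative.

```python
def n_linear_regression_predictor(n):
    """n-linear-regression predictor: least-squares fit p(t) = a*t + b
    through the last n history items and extrapolate to forecast date t."""
    def predict(history, t):
        pts = list(history)[:n]      # the n most recent (date, value) pairs
        if len(pts) < 2:
            return None              # regression needs at least two points
        k = len(pts)
        sx = sum(x for x, _ in pts)
        sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts)
        sxy = sum(x * y for x, y in pts)
        denom = k * sxx - sx * sx
        if denom == 0:
            return sy / k            # all dates equal: fall back to the mean
        a = (k * sxy - sx * sy) / denom   # slope of the fitted line
        b = (sy - a * sx) / k             # intercept
        return a * t + b
    return predict
```

For a linearly growing load history such as ((3, 30), (2, 20), (1, 10)), extrapolating to t = 4 yields 40.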
Resource consumption is specified in resource units, which correspond to real-world metrics like memory or processor cycles.

- Agent home server: All agents are located on a home agent server. The resources of those servers are not considered in our simulation and do not affect the resource allocation performance.
- Server resources: The experiments use servers with different amounts of available shared resources. The first experiment is conducted in a static server environment that provides a constant amount of shared resources, while the second experiment varies the available server resources during the simulation. The total amount of resources remains constant in both experiments.
- Execution time: The time a task needs for its execution, independent of the execution platform. For this time the task consumes the assigned amount of server resources. This parameter is randomly assigned before the execution.
- Task creation time: The time before the next task is created after a successful or unsuccessful completion. This parameter influences the age of the historical information about resources and has a major influence on the length of the initial adaptation phase. It is randomly assigned after a task has completed.

5.2 Experimental Results

This section shows results from selected experiments that demonstrate the performance of our proposed resource allocation mechanism. The first experiment shows the performance in a stable environment where a number of agents allocate tasks to servers that provide a constant amount of resources. The second experiment was conducted in a dynamic server environment with a constant number of agents.

The first experiment runs our model in a stable 3-server environment that provides a total amount of 7000 resource units. The resource capacity of each server remains constant over the experiment.
We used 650 agents with execution times between 1 and 15 time units and task creation times in the interval [0, 30] time units. The tasks' resource consumption is randomly assigned from the interval [1, 45] resource units. Figure 4 shows the results from 100 repetitions of this experiment. Figure 4(a) shows that the total amount of provided resources is, on average, larger than the demand for resources. At the beginning of the experiment, all agents allocate their tasks randomly at one of the available servers and explore the available capacities and resource utilisations for about 150 time units. During this initial exploration phase the average resource load of each server is at a similar level. This causes an overload situation at server 1 because of its low capacity of shared resources, and a large amount of free resources on server 2. Agents that allocated tasks to server 1 detect the overload situation and randomly explore other available servers; they find free resources at server 2. After the learning period, the agents have self-organised themselves in this stable environment and found a stable solution for the allocation of all tasks.

Figure 4: Results of experiment 1 in a static 3-server environment averaged over 100 repetitions. (a) Total resource load versus total shared resource capacity; (b) resource load server 0; (c) resource load server 1; (d) resource load server 2.
The standard deviation of the resource loads is small for each server, which indicates that our distributed approach finds stable solutions in almost every run.

This experiment used Algorithm 2 for the selection of the active server. We also ran the same experiment with the most-free-resources selection mechanism for selecting the active server. The resource allocation for each server is similar, and the absolute amount of free resources per server is almost the same.

Experiment 2 was conducted in a dynamic 3-server environment with 750 agents. The amount of resources of server 0 and server 1 changes periodically, while the total amount of available resources remains constant. Server 0 has an initial capacity of 1000 units; server 1 starts with a capacity of 4000 units. The change in capacity starts after 150 time units, which is approximately the end of the learning phase. Figure 5(b, c, d) shows the behaviour of our self-organising resource allocation in this environment. All agents use the deterministic most-free-resources selection mechanism to select the active server. It can be seen in Fig. 5(b) and 5(c) that the number of resources allocated to server 0 and server 1 changes periodically with the amount of provided resources. This shows that agents can sense available resources in this dynamic environment and are able to adapt to those changes. The resource load development of server 2 (see Fig. 5(d)) shows a periodic change because some agents try to allocate tasks to this server when their previously favoured server reduces the amount of shared resources. The total resource load over all shared resources is constant over the experiments, which indicates that all agents allocate their tasks to one of the shared resources (cf. Fig. 5(a)).

6. CONCLUSIONS AND FUTURE WORK

In this paper a self-organising distributed resource allocation technique for multi-agent systems was presented.
We enable agents to select the execution platform for their tasks themselves before each execution at run-time. In our approach the agents compete for an allocation at one of the available shared resources. Agents sense their server environment and adapt their actions to compete more efficiently in the newly created environment. This process is adaptive and has a strong feedback, as allocation decisions indirectly influence the decisions of other agents. The resource allocation is a purely emergent effect. Our mechanism demonstrates that resource allocation can be achieved through the effective competition of individual, autonomous agents: they neither need coordination or information from a higher authority, nor is additional direct communication between agents required.

Figure 5: Results of experiment 2 in a dynamic server environment averaged over 100 repetitions. (a) Total resource load versus total shared resource capacity; (b) resource load server 1; (c) resource load server 2; (d) resource load server 3.

This mechanism was inspired by inductive reasoning and bounded rationality principles, which enable the agents to adapt their strategies to compete effectively in a dynamic environment. In the case that a server becomes unavailable, the agents can adapt quickly to this new situation by exploring new resources, or remain at the home server if an allocation is not possible.
Especially in dynamic and scalable environments such as grid systems, a robust and distributed mechanism for resource allocation is required. Our self-organising resource allocation approach was evaluated with a number of simulation experiments in a dynamic environment of agents and server resources. The presented results for this new approach to strategic migration optimisation are very promising and justify further investigation in a real multi-agent system environment.

It is a distributed, scalable and easy-to-understand policy for the regulation of supply and demand of resources. All control is implemented in the agents. A simple decision mechanism based on the different beliefs of the agents creates an emergent behaviour that leads to effective resource allocation. This approach can easily be extended or supported by resource balancing/queuing mechanisms provided by the resources.

Our approach adapts to changes in the environment, but it is not evolutionary: there is no discovery of new strategies by the agents, and the set of predictors stays the same over an agent's whole lifetime. We believe that such evolution could further improve the system's behaviour over the long term and could be investigated in the future. The evolution would be very slow and selective and would not influence the system behaviour in the short-term period covered by our experimental results.

In the near future we will investigate whether an automatic adaptation of the decay rate of historical information in our algorithm is possible and can improve the resource allocation performance.
The decay rate is currently predefined and must be altered manually depending on the environment. A large number of shared resources requires older historical information to be kept in order to avoid too frequent resource exploration. In contrast, a dynamic environment with varying capacities requires more up-to-date information to make more reliable predictions.

We are aware of the long learning phase in environments with a large number of shared resources known by each agent. In the case that more resources are requested by agents than are provided by all servers, all agents will randomly explore all known servers. This process of acquiring resource load information about all servers can take a long time when not enough shared resources for all tasks are provided. In the worst case, by the time all servers have been explored, the historical information about some servers could already be outdated and the exploration starts again. In this situation, it is difficult for an agent to efficiently gather historical information about all remote servers. This issue needs more investigation in the future.

7. REFERENCES
[1] G. Allen, W. Benger, T. Dramlitsch, T. Goodale, H.-C. Hege, G. Lanfermann, A. Merzky, T. Radke, E. Seidel, and J. Shalf. Cactus Tools for Grid Applications. Cluster Computing, 4:179-188, 2001. Kluwer Academic Publishers, Hingham, MA, USA.
[2] W. B. Arthur. Inductive Reasoning and Bounded Rationality. American Economic Review (Papers and Proceedings), 84(2):406-411, May 1994.
[3] T. Bourke. Server Load Balancing. O'Reilly Media, 1st edition, August 2001.
[4] R. Buyya. Economic-based Distributed Resource Management and Scheduling for Grid Computing. PhD thesis, Monash University, Melbourne, Australia, May 2002.
[5] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger. Economic Models for Resource Management and Scheduling in Grid Computing.
Special Issue on Grid Computing Environments of the Journal Concurrency and Computation, 13-15(14):1507-1542, 2002.
[6] R. Buyya, S. Chapin, and D. DiNucci. Architectural Models for Resource Management in the Grid. In Proceedings of the First International Workshop on Grid Computing, pages 18-35. Springer LNCS, 2000.
[7] T. L. Casavant and J. G. Kuhl. A taxonomy of scheduling in general-purpose distributed computing systems. IEEE Transactions on Software Engineering, 14(2):141-154, February 1988.
[8] D. Challet and Y. Zhang. Emergence of Cooperation and Organization in an Evolutionary Game. Physica A, 246:407, 1997.
[9] K.-P. Chow and Y.-K. Kwok. On load balancing for distributed multiagent computing. IEEE Transactions on Parallel and Distributed Systems, 13(8):787-801, August 2002.
[10] S. H. Clearwater. Market-Based Control: A Paradigm for Distributed Resource Allocation. World Scientific, Singapore, 1996.
[11] C. Flüs. Capacity Planning of Mobile Agent Systems: Designing Efficient Intranet Applications. PhD thesis, Universität Duisburg-Essen (Germany), Feb. 2005.
[12] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputing Applications, 11(2):115-129, 1997.
[13] J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke. Condor-G: A Computation Management Agent for Multi-Institutional Grids. Cluster Computing, 5(3):237-246, 2002.
[14] A. Galstyan, S. Kolar, and K. Lerman. Resource allocation games with changing resource capacities. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, pages 145-152, Melbourne, Australia, 2003. ACM Press, New York, NY, USA.
[15] C. Georgousopoulos and O. F. Rana. Combining state and model-based approaches for mobile agent load balancing.
In SAC \"03: Proceedings of the 2003 ACM\nsymposium on Applied computing, pages 878-885, New\nYork, NY, USA, 2003. ACM Press.\n[16] G. Mainland, D. C. Parkes, and M. Welsh.\nDecentralized Adaptive Resource Allocation for Sensor\nNetworks. In Proceedings of the 2nd USENIX\nSymposium on Network Systems Design and\nImplementation(NSDI \"05), May 2005.\n[17] S. Manvi, M. Birje, and B. Prasad. An Agent-based\nResource Allocation Model for Computational Grids.\nMultiagent and Grid Systems - An International\nJournal, 1(1):17-27, 2005.\n[18] A. Schaerf, Y. Shoham, and M. Tennenholtz.\nAdaptive Load Balancing: A Study in Multi-Agent\nLearning. In Journal of Artificial Intelligence\nResearch, volume 2, pages 475-500, 1995.\n[19] T. Schlegel, P. Braun, and R. Kowalczyk. Towards\nAutonomous Mobile Agents with Emergent Migration\nBehaviour. In Proceedings of the Fifth International\nJoint Conference on Autonomous Agents & Multi\nAgent Systems (AAMAS 2006), Hakodate (Japan),\npages 585-592. ACM Press, May 2006.\n[20] W3C. Web services activity, 2002.\nhttp://www.w3.org/2002/ws - last visited 23.10.2006.\n[21] M. M. Waldrop. Complexity: The Emerging Science at\nthe Edge of Order and Chaos. Simon & Schuster, 1st\nedition, 1992.\n[22] R. Wolsk, J. S. Plank, J. Brevik, and T. Bryan.\nAnalyzing Market-Based Resource Allocation\nStrategies for the Computational Grid. In\nInternational Journal of High Performance Computing\nApplications, volume 15, pages 258-281. Sage Science\nPress, 2001.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 81", "keywords": "dynamically allocated task;distribute control;self-organisation;network of server;server network;adaptive process;agent;competition;server utilisation;predictor;multi-agent system;resource allocation;distributed algorithm"}
Dynamic Semantics for Agent Communication Languages

Abstract: This paper proposes dynamic semantics for agent communication languages (ACLs) as a method for tackling some of the fundamental problems associated with agent communication in open multiagent systems. Based on the idea of providing alternative semantic variants for speech acts and transition rules between them that are contingent on previous agent behaviour, our framework provides an improved notion of grounding semantics in ongoing interaction, a simple mechanism for distinguishing between compliant and expected behaviour, and a way to specify sanction and reward mechanisms as part of the ACL itself. We extend a common framework for commitment-based ACL semantics to obtain these properties, discuss desiderata for the design of concrete dynamic semantics together with examples, and analyse their properties.

1. INTRODUCTION

The field of agent communication language (ACL) research has long been plagued by problems of verifiability and grounding [10, 13, 17]. Early mentalistic semantics that specify the semantics of speech acts in terms of pre- and post-conditions contingent on mental states of the participants (e.g. [3, 4, 12, 15]) lack verifiability regarding the compliance of agents with the intended semantics, as the mental states of agents cannot be observed in open multiagent systems (MASs). Unable to safeguard themselves against abuse by malicious, deceptive or malfunctioning agents, mentalistic semantics are inherently unreliable and inappropriate for use in open MASs, in which agents with potentially conflicting objectives might deliberately exploit their adversaries' conceptions of message semantics to provoke a certain behaviour.

Commitment-based semantics [6, 8, 14], on the other hand, define the meaning of messages exchanged among agents in terms of publicly observable commitments, i.e.
pledges to bring about a state of affairs or to perform certain actions. Such semantics solve the verifiability problem, as they allow the status of existing commitments to be traced at any point in time given the observed messages and actions, so that any observer can, for example, establish whether an agent has performed a promised action. However, this can only be done a posteriori, and this creates a grounding problem: no expectations regarding what will happen in the future can be formed at the time of uttering or receiving a message purely on the grounds of the ACL semantics. Further, this implies that the semantics specification does not provide an interface to agents' deliberation and planning mechanisms, and hence it is unclear how rational agents would be able to decide whether to subscribe to a suggested ACL semantics when it is deployed.

Finally, none of the existing approaches allows the ACL to specify how to respond to a violation of its semantics by individual agents. This has two implications. Firstly, it is left up to the individual agent to reason about potential violations, i.e. to bear the burden of planning its own reaction to others' non-compliant behaviour (e.g. in order to sanction them) and to anticipate others' reactions to its own misconduct, without any guidance from the ACL specification. Secondly, existing approaches fail to exploit the possibilities of sanctioning and rewarding certain behaviours in a communication-inherent way, by modifying the future meaning of messages uttered or received by compliant or deviant agents.

In this paper, we propose dynamic semantics (DSs) for ACLs as a solution to these problems. Our notion of DS is based on the very simple idea of defining different alternatives for the meaning of individual speech acts (so-called semantic variants) in an ACL semantics specification, together with transition rules between semantic states (i.e.
collections of variants for different speech acts) that describe the current meaning of the ACL. Taken together, these elements result in an FSM-like view of ACL specifications, where each individual state provides a complete ACL semantics and state transitions are triggered by observed agent behaviour in order to (1) reflect future expectations based on previous interaction experience and (2) sanction or reward certain kinds of behaviour.

In defining a DS framework for commitment-based ACLs, this paper makes three contributions:

1. An extension of commitment-based ACL semantics that provides an improved notion of grounding commitments in agent interaction and allows ACL specifications to be directly used for planning-based rational decision making.

2. A simple way of distinguishing between compliant and expected behaviour with respect to an ACL specification, enabling reasoning about the potential behaviour of agents purely from an ACL semantics perspective.

3. A mechanism for specifying how meaning evolves with agent behaviour and how this can be used to describe communication-inherent sanctioning and rewarding mechanisms essential to the design of open MASs.

Furthermore, we discuss desiderata for DS design that can be derived from our framework, present examples and analyse their properties.

The remainder of this paper is structured as follows: Section 2 introduces a formal framework for dynamic ACL semantics. In Section 3 we present an analysis and discussion of this framework and discuss desiderata for the design of ACLs with dynamic semantics. Section 4 reviews related approaches, and Section 5 concludes.

2. FORMAL FRAMEWORK

Our general framework for describing the kind of MASs we are interested in is fairly simple. Let Ag = {1, . . .
, n}\na finite set of agents, {Aci}i\u2208Ag a collection of action sets\n(where Aci are the actions of agent i), A = \u00d7n\ni=1Aci the\njoint action space, and Env a set of environment states. A\nrun is a sequence r = e1\na1\n\u2192 . . .\nat\u22121\n\u2192 et where ai \u2208 A (ai[j]\ndenotes the action of agent j in this tuple), and ei \u2208 Env.\nWe define |r| = t, last(r) = et, r[1 : j] is short for the j-long\ninitial sub-sequence of r, and we write r r for any run r\niff \u2203j \u2208 N.r = r[1 : j].\nWriting R(Env, A) for the set of all possible runs, we can\nview each agent i as a function gi : R(Env, A) \u2192 Aci\ndescribing the agent\"s action choices depending on the\nhistory of previous environment states and joint actions. The\nset of all agent functions for i given A and Env is\ndenoted by Gi(Env, A). The (finite, discrete, stationary,\nfully accessible, deterministic) environment is defined by a\nstate transformer function f : Env \u00d7 A \u2192 Env, so that\nthe system\"s operation for an initial state e1 is defined by\nei+1 = f(ei, g(e1\na1\n\u2192 . . .\nai\u22121\n\u2192 ei)) for all i \u2265 1 (g is the joint\nvector of functions gi). This definition implies that\nexecution of actions is synchronised among agents, so that the\nsystem evolves though an execution of rounds where all\nagents perform their actions simultaneously.\nWe denote the set of all runs given a particular\nconfiguration of agent functions g by R(Env, A, g). We write\ngi \u223c r where gi an agent function and r a run iff \u22001 \u2264\nj \u2264 |r|.gi(r[1 : j]) = aj [i] (i.e. 
g_i is compatible with r in every time step as far as i's actions are concerned).

We use a (standard) propositional logical language L with entailment relation e |= φ for e ∈ Env and φ ∈ L, defined in the usual way.¹ We introduce special propositions Done(i, a) in L for each action a ∈ ∪_{i=1}^{n} Ac_i to denote that action a has just been performed, extending |= to runs r in the following way:

r |= φ           if last(r) |= φ
r |= Done(i, a)  if r = e_1 --a_1--> · · · --a_{t−1}--> e_t and a = a_{t−1}[i]

i.e. Done(i, a) is exactly true for those actions that made up part of the joint action vector a_{t−1} in the predecessor state, and all other formulae that were entailed by the last state of r are still valid. Our model implies that each agent executes exactly one action in each time step.

[Figure 1: Commitment states (unset, pending, cancelled, active, violated, fulfilled) and state transitions in the Fornara and Colombetti model: edges drawn using solid lines indicate transitions brought about by agent communication, dashed lines indicate physical agent actions or environmental events that cause state transitions.]

2.1 Commitments

Our notion of commitments is based on a slight variation of the framework proposed by Fornara and Colombetti [6]: Commitments come into existence as unset, e.g. when a request for achieving χ if a certain condition φ becomes true is issued from i to j. The commitment becomes pending if the debtor j is required to fulfill it, e.g. after having accepted it. A pending commitment will become active if its condition φ becomes true; if χ is then brought about, it becomes fulfilled, otherwise violated. Commitments can become cancelled in different situations, e.g. if an unset commitment is rejected. Also, environmental events can cause χ to become true, in which case the commitment becomes fulfilled without the debtor's contribution.
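The commitment life cycle just described can be sketched as a small state machine. This is purely illustrative: the state and event names below are our own, not part of the formal framework.

```python
# Sketch of the commitment life cycle described above (Fornara/Colombetti
# model extended with deactivation conditions). State and event names are
# ours and purely illustrative, not part of the formal ACL specification.
TRANSITIONS = {
    ("unset", "accept"): "pending",                # debtor accepts the request
    ("unset", "reject"): "cancelled",
    ("unset", "deactivation_true"): "cancelled",   # psi became true (rule D)
    ("pending", "activation_true"): "active",      # phi became true (rule A)
    ("pending", "deactivation_true"): "cancelled",
    ("active", "debitum_achieved"): "fulfilled",   # chi brought about (F/S)
    ("active", "debitum_failed"): "violated",      # rule V
    ("active", "deactivation_true"): "cancelled",
}

def step(state: str, event: str) -> str:
    """Return the successor commitment state, or raise on an illegal move."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")
```

For instance, `step("unset", "accept")` yields `"pending"`; terminal states (fulfilled, violated, cancelled) admit no further transitions.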
Figure 1 provides a graphic representation of commitment state transitions in this framework.

Apart from a slightly different notation used to maintain a more detailed history of commitments, we will extend commitments to also contain a deactivation condition ψ in addition to φ (which we call the activation condition); the deactivation condition causes any commitment to be cancelled if it becomes true.

¹More precisely, L contains atomic propositions P = {p, q, . . .} and the usual connectives ∨ and ¬ (with abbreviations ⇒ and ∧). As for semantics, an interpretation function I : P × Env → {⊤, ⊥} assigns a truth value to each proposition in each environmental state, and the entailment relation e |= φ for e ∈ Env and φ ∈ L is defined inductively: e |= φ if φ ∈ P and I(φ, e) = ⊤; e |= ¬φ if e ̸|= φ; e |= φ ∨ ψ if e |= φ or e |= ψ.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

D : CS ← CS ∪ { ⟨ι, c : χ ⊕ φ ▹ ψ⟩_t | ⟨ι, s : χ ⊕ φ ▹ ψ⟩ ∈ CS, r |= ψ, s ∈ {u, p, a}, ⟨ι, c : χ ⊕ φ ▹ ψ⟩ ∉ CS }
A : CS ← CS ∪ { ⟨ι, a : χ ⊕ φ ▹ ψ⟩_t | ⟨ι, p : χ ⊕ φ ▹ ψ⟩ ∈ CS, r |= φ, ⟨ι, a : χ ⊕ φ ▹ ψ⟩ ∉ CS }
S : CS ← CS ∪ { ⟨ι, f : χ ⊕ φ ▹ ψ⟩_t | ⟨ι, a : χ ⊕ φ ▹ ψ⟩ ∈ CS, r |= χ, ⟨ι, f : χ ⊕ φ ▹ ψ⟩ ∉ CS }
F : CS ← CS ∪ { ⟨ι, f : χ ⊕ φ ▹ ψ⟩^{i→j}_t | ⟨ι, a : χ ⊕ φ ▹ ψ⟩^{i→j}_{t−1} ∈ CS, r |= Done(i, a), causes(a, χ) }
V : CS ← CS ∪ { ⟨ι, v : χ ⊕ φ ▹ ψ⟩^{i→j}_t | ⟨ι, a : χ ⊕ φ ▹ ψ⟩^{i→j}_{t−1} ∈ CS, r |= Done(i, a), ¬causes(a, χ) }

Table 1:
Environmental commitment processing rules for the current run r with |r| = t

Definition 1. A commitment is a structure

⟨ι, s : χ ⊕ φ ▹ ψ⟩^{i→j}_t

where
- ι is a unique commitment identifier,
- s denotes the commitment state (any of unset, pending, active, violated, fulfilled, or cancelled, abbreviated by the respective initial),
- i is the debtor and j is the creditor,
- χ ∈ L is the debitum (i.e. the proposition that i commits to making true in front of j),
- φ, ψ ∈ L are the activation and deactivation conditions,
- and t is the instant (in a run) at which this commitment entered its current state s.

As an example,

⟨x, v : received(5, $500) ⊕ received(3, toys) ▹ returned(3, toys)⟩^{3→5}_{12}

denotes that agent 3 violated commitment x towards agent 5 to pay him $500 in timestep 12. He was supposed to make the payment after receiving the toys, unless he sent back the toys. We introduce deactivation conditions so as to be able to completely revoke existing commitments: sending back the toys does not constitute a fulfillment of the original contract, but rather an annulment thereof. This provides us with the capability to define validity conditions using φ and ψ, which is useful for things like deadlines on unset commitments (if I don't get a response within 3 time-steps, my request will expire).

For brevity, we sometimes omit indices or content elements when clear from the context (in particular, we often write Γ for the content χ ⊕ φ ▹ ψ).
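Definition 1 translates directly into a record type. The following sketch uses field names of our own choosing and keeps the formula slots as opaque strings; it encodes the violated toy-contract example from the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    """A commitment per Definition 1; field names are our own, and the
    formula slots (debitum, activation, deactivation) are opaque strings."""
    ident: str         # iota, unique commitment identifier
    state: str         # "u", "p", "a", "v", "f" or "c"
    debtor: int        # i
    creditor: int      # j
    debitum: str       # chi, what the debtor commits to making true
    activation: str    # phi
    deactivation: str  # psi
    timestamp: int     # instant at which the commitment entered `state`

# The violated toy-contract example from the text:
x = Commitment("x", "v", 3, 5, "received(5, $500)",
               "received(3, toys)", "returned(3, toys)", 12)
```

A commitment store is then simply a set of such records, which the frozen (hashable) dataclass supports directly.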
We write C for the set of all possible commitments and denote sets of commitments (so-called commitment stores) by CS ∈ ℘_fin(C).

To handle the effects of environmental events and agent actions on a commitment store CS, table 1 introduces five commitment transition rules, which are executed in each time step, in the order shown, by the system or by any observer who intends to clarify the status of existing commitments: the deactivation rule D fires first and cancels any unset, pending or active commitment if ψ becomes true. For the remaining pending commitments², the activation rule A describes how these become active if φ becomes true. Note that when φ remains true in subsequent states, we check whether the active commitment is already contained in CS to avoid duplicates (this is because we keep a full record of the commitment history, for reasons which will become clear below).³ Rule S caters for serendipity, i.e. the fulfillment of commitments not brought about by the respective agent but simply by environmental changes that made the debitum true. Finally, the fulfilment/violation rules F/V record whether the action performed by the debtor in the previous step (r |= Done(i, a)) has caused the debitum χ of any commitment which became active in the previous timestep to become true. We need only consider those commitments that became active in the previous step t − 1, since we can verify their fulfilment status in t. This verification hinges on a domain-dependent predicate causes(a, χ) which we have not mentioned so far.

²To avoid problems with contradictory commitment specifications (e.g. when both φ and ψ become true), we give deactivation strict precedence over activation.
It should be true if action a is supposed to bring about χ, and it delineates the existing social notion of what constitutes a reasonable attempt to achieve χ in the given context (its definition may range from requiring that χ has actually been achieved to allowing any action a that does not necessarily result in ¬χ).

2.2 Grounding

In Fornara and Colombetti's and similar approaches, the status of commitments is verifiable, but commitments are not grounded in expectations about interaction. Such semantics (similar in style to what we have just defined in terms of CS update rules) tell us which commitments exist and which state they are in, but not how this will affect future agent behaviour.

To provide such grounding, we introduce notions of compliant and expected behaviour. An agent is behaving in compliance with its commitments if it always immediately fulfills all active commitments. More precisely, the behaviour of agent i is said to be compliant with CS at time t iff

∀k ≤ t. ( ⟨ι, a : Γ⟩^{i→j}_k ∈ CS ⇒ ⟨ι, f : Γ⟩^{i→j}_k ∈ CS )

Though simple, this definition of compliance is not very useful, because it places constraints on CSs but not on actual agent functions. To achieve the latter, we can instead use the contents of the CS to restrict the range of admissible agent functions to those that are in accordance with it, using the following definition:

Definition 2. For any run r ∈ R(Env, A), let CS(r) be the set of commitments that has resulted from the execution of r, assuming that certain actions (including messages) create commitments or change their status. The set of compliant agent functions with respect to a commitment store CS is³ defined as

compliant(CS) := { g_i ∈ G_i(Env, A) | ∀r ∼ g_i with CS(r) = CS, ∀⟨ι, p : χ ⊕ φ ▹ ψ⟩^{i→j} ∈ CS(r):
    ∀r′ ⊒ r. ⟨ι, a : χ ⊕ φ ▹ ψ⟩^{i→j}_{|r′|} ∈ CS(r′) ⇒ ( ∃a ∈ Ac_i. causes(a, χ) ∧ g_i(r′) = a ) }

What this definition captures is the following characterisation of a compliant agent function g_i: for every run r that the agent function g_i contributes to, if r has created a pending commitment regarding χ, then, whenever this commitment becomes active at the end of some extension r′ of r in the future, g_i will cause the agent to perform an action a that causes χ.⁴

³While commitment identifiers adversely affect the readability of our notation, they are necessary here to uniquely determine which pending commitment is activated.

Next, to cater for the anticipation of non-compliant behaviour, we need a notion of expected behaviour that overrides compliant behaviour. For this, we introduce a second type of commitments, which we call expectations to avoid confusion, and distinguish them from ordinary (now called normative) commitments by using round brackets: (ι, s : Γ)^{i→j}_t. They are treated exactly like other commitments in terms of the rules introduced above, but they express what the agent is expected to do (in the non-normative sense of an objective prediction of behaviour) rather than what it is supposed to do in a normative sense.

To define the notions we need below, we introduce the following constructs:

CS↑ := { ⟨ι, s : Γ⟩ ∈ CS | s ∈ {u, p, a, f, v} }
CS⇑ := { (ι, s : Γ) ∈ CS | ⟨ι, s′ : Γ⟩ ∈ CS, s, s′ ∈ {u, p, a, f, v} }

CS↑ simply restricts the commitment store to the normative commitments. Hence, compliant(CS↑) specifies what agents are supposed to do. CS⇑, on the other hand, overrides all normative commitment elements of CS for which an expectation also exists, i.e.
expectations are given precedence over the normative commitments. With this, we can define expected behaviour as

expected(CS) := compliant(CS⇑)

i.e. behaviour that adheres to expectations where such expectations exist and is compliant otherwise. The separate, parallel treatment of compliant and expected behaviour has two advantages: firstly, we can respond to unexpected compliant behaviour, i.e. when we expect that someone will not obey their commitments, we can still respond to it if they do (and, for example, regain trust in them). Secondly, we can cater for a variety of rules for translating commitment stores into actual future events, which a reasoning agent can use in its planning process. For the purposes of this paper, we will assume that agents base their predictions about others on expected behaviour if it differs from compliant behaviour, and that they predict compliant behaviour otherwise.

⁴Note the quantification in this definition: the property has to hold for every run that gave rise to ι and is compatible with g_i. In particular, this must be independent of any part of the history (e.g. other agents' actions and previous environment states) given CS(r). We also quantify over all extensions r′ of r, i.e. fulfillment of the commitment has to happen if the appropriate conditions arise, regardless of other factors.

2.3 Static ACL Semantics

Table 2 shows an example of a small fragment of an ACL semantics defined using our framework, with two alternative definitions (AC and AC2) for the semantics of the accept message type.
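The restriction and override operations from section 2.2 can be sketched as follows. This is a minimal illustration: the representation of commitment stores as lists of tuples, and all names, are our own assumptions rather than the paper's notation.

```python
# Sketch of the normative-restriction and expectation-override operations
# from section 2.2. Representation (tuples in a list) and all names are
# our own assumptions, not the paper's notation.
from typing import NamedTuple

class Entry(NamedTuple):
    ident: str         # commitment identifier iota
    state: str         # u, p, a, f, v or c
    expectation: bool  # True for round-bracket expectations

def normative(cs):
    """Restrict the store to normative (angle-bracket) commitments."""
    return [e for e in cs if not e.expectation]

def with_overrides(cs):
    """Expectations replace normative entries with the same identifier."""
    overridden = {e.ident for e in cs if e.expectation}
    return [e for e in cs if e.expectation or e.ident not in overridden]

# AC2-style store: iota=x is normatively pending but expected to be cancelled.
cs = [Entry("x", "p", False),
      Entry("x", "c", True),
      Entry("y", "a", False)]
```

Compliant behaviour would then be computed from `normative(cs)` and expected behaviour from `with_overrides(cs)`, mirroring compliant(CS↑) and compliant of the overridden store.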
Each of the so-called dialogue operators (similar to AI planning action schemata) is defined using the graphical notation

p
a
q

where p, a, and q are schemata for preconditions, a message (of a certain type), and post-conditions, respectively. Preconditions determine whether an action schema is applicable in a certain situation and may contain formulae from L and/or constraints on the current contents of CS. Post-conditions contain changes to the knowledge base and modifications to CS, i.e. they are interpreted like add/delete-lists in traditional AI planning. For any such operator o = (p, a, q) we define pre(o) = p, action(o) = a and post(o) = q. Dialogue operators can contain logical variables in their pre- and post-conditions and sender/receiver/content variables in the action slot.

In our example fragment, the operator RQ for requests creates an unset commitment with a fresh identifier ι and the current timestamp (we assume that r |= time(t) ⇔ |r| = t, and that there is a global system time that can be inspected by all agents), and AC/RJ add a pending/cancelled equivalent of ι to CS. A fragment consisting of {RQ, RJ, AC} is equivalent to the standard semantics of the respective performative types defined in [6].⁵ Note that our operators only contain objectively verifiable pre- and post-conditions, and agents that want to conform to the specification need to comply with these operators. In the following, we will assume that agents always adhere to the ACL specification syntactically.⁶

Using AC2 instead of AC enables us to exploit the power of our distinction between compliant and expected behaviour, expressing that we do not trust i to adhere to the normal semantics of accept: its postcondition specifies that expected(CS) is not restricted to behaviours that will fulfill the commitment, but instead suggests that the commitment has actually been cancelled.
At the same time, we maintain the normative commitment that ι is pending, so that i's behaviour would be seen to lie within compliant(CS) if i deviates from our (pessimistic) expectation and does the right thing instead.

2.4 Dynamic Semantics

2.4.1 Defining Dynamic Semantics

To define DS for ACLs, we now introduce a state transition system in which each state specifies an ordinary (static) commitment-based semantics and a range of agent pairs for which these semantics are assumed to apply.

⁵Note that we allow for requesting identical things before receiving a response and for responding several times to the same request. Simple additional conditions can be introduced to avoid these effects, which we omit here for lack of space. The same is true of additional constraints to manage control flow issues in actual dialogues (e.g. turn-taking).

⁶This means that, for an appropriate variable substitution ϑ, r |= pre(o)ϑ holds when o is applied at r, and that CS(r) is transformed according to post(o)ϑ after its application.

RQ :  time(t), new(ι)
      request(i, j, ι : Γ)
      CS ← CS ∪ { ⟨ι, u : Γ⟩^{i→j}_t }

RJ :  ⟨ι, u : Γ⟩^{j→i} ∈ CS, time(t)
      reject(i, j, ι : Γ)
      CS ← CS ∪ { ⟨ι, c : Γ⟩^{i→j}_t }

AC :  ⟨ι, u : Γ⟩^{j→i} ∈ CS, time(t)
      accept(i, j, ι : Γ)
      CS ← CS ∪ { ⟨ι, p : Γ⟩^{i→j}_t }

AC2 : ⟨ι, u : Γ⟩^{j→i} ∈ CS, time(t)
      accept(i, j, ι : Γ)
      CS ← CS ∪ { ⟨ι, p : Γ⟩^{i→j}_t } ∪ { (ι, c : Γ)^{i→j}_t }

Table 2: Example commitment-based semantics for a small ACL fragment

[Figure 2: FSM-like state transition diagram describing the Δ-relation in a DS specification, with states s0 and s1 and edge labels ⟨ι, v : Γ⟩^{i→j} ∈ CS : {(i, ∗)} ∪ {(j, i)} (from s0 to s1) and ∀⟨ι, v : Γ⟩^{i→j}_t ∈ CS ∃⟨ι, f : Γ⟩^{i→j}_{t′} ∈ CS. t′ > t : {(i, ∗)} (from s1 to s0).]

Definition 3. A dynamic semantics (DS) is a structure (O, S, s0, Δ) where
- O = {o1, o2, . . . , on} is a set of dialogue operators,
- S ⊆ ℘(O) is a set of semantic states, specified as subsets of dialogue operators which are valid in the respective state,
- s0 ∈ S is the initial semantic state,
- and the transition relation Δ ⊆ S × ℘(C) × ℘(Ag × Ag) × S defines the transitions over S, triggered by conditions expressed as elements of ℘(C) (C is the set of all possible commitments).

The meaning of a transition (s, c, {(i1, j1), . . . , (in, jn)}, s′) ∈ Δ is as follows: assume a mapping act : Ag × Ag → S which specifies that the semantics of the operators in act(i′, j′) holds for messages sent from i′ to j′. Then, if CS ∈ c (i.e. the current CS matches the constraint c, given as a collection of possible CSs), this will trigger a transition to state s′ for all pairs of agents in {(i1, j1), . . .
, (in, jn)} for which the constraint was satisfied, and will update act accordingly. In other words, the act mapping tracks which version of the semantics is valid for which pairs of communication partners over time.

2.4.2 Example

To illustrate these concepts, consider the following example: let O = {RQ, RJ, AC, AC2} and S = {s0, s1} where s0 = {RQ, RJ, AC} and s1 = {RQ, RJ, AC2}, i.e. there are two possible states of the semantics, which differ only in their definition of accept (we call alternative versions of a single dialogue operator, like AC and AC2, semantic variants). We assume that initially act(i, j) = s0 for all agents i, j ∈ Ag. We describe Δ by the transition diagram shown in figure 2. In this diagram, edges carry labels c : A, where c is a constraint on the contents of CS followed by a description of the set A of agent pairs for which the transition should be made to the target state. Writing A(s) = act⁻¹(s) for the so-called range of agent pairs for which s is active, we use agent variables like i and j and the wildcard symbol ∗ that can be bound to any agent in A(s), and we assume that this binding carries over to descriptions of A. For example, the edge with label

⟨ι, v : Γ⟩^{i→j} ∈ CS : {(i, ∗)} ∪ {(j, i)}

can be interpreted as follows: select all pairs (i, j) ∈ A(s0) for which ⟨ι, v : Γ⟩^{i→j} ∈ CS applies (i.e. i has violated some commitment toward j) and make s1 valid for the set of agent pairs {(i, k) | k ∈ A(s0)} ∪ {(j, i)}. This means that for every agent i who has violated a commitment, s1 will become active for all pairs (i, j′) with j′ ∈ A(s0), and s1 will also become active for (j, i).

The way the DS of the diagram above works is as follows: initially, the semantics says (for every agent i) that i will fulfill any commitment truthfully (the use of AC ensures that expected behaviour is equivalent to compliant behaviour).
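The two Δ-transitions of this example can be sketched operationally as follows. This is illustrative only: we reduce commitments to (debtor, creditor, state, timestamp) tuples, represent act as a dictionary over ordered agent pairs, and the function names are our own.

```python
# Sketch of the two-state dynamic semantics s0/s1 of the example (figure 2).
# act maps ordered agent pairs to the semantic state valid for them;
# commitments are reduced to (debtor, creditor, state, t) tuples, of which
# only states "v" (violated) and "f" (fulfilled) are inspected here.

def update_act(act, cs, agents):
    """One pass over the Delta-transitions (our simplification)."""
    new_act = dict(act)
    # s0 -> s1: i violated a commitment towards j => distrust i globally,
    # and allow j to retaliate against i.
    for i, j, state, t in cs:
        if state == "v" and act.get((i, j)) == "s0":
            for k in agents:
                if k != i:
                    new_act[(i, k)] = "s1"
            new_act[(j, i)] = "s1"
    # s1 -> s0: i fulfilled a commitment after its latest violation =>
    # restore s0 for i towards everyone (pairs (j, i) are not touched).
    for i, j, state, t in cs:
        if state == "f" and not any(s == "v" and t2 > t
                                    for i2, j2, s, t2 in cs if i2 == i):
            for k in agents:
                if act.get((i, k)) == "s1":
                    new_act[(i, k)] = "s0"
    return new_act
```

Applying this pass after each round reproduces the narrative below: a violation by i moves all pairs (i, ∗) and the pair (j, i) to s1, and a later fulfillment by i moves the pairs (i, ∗) back to s0 while (j, i) remains in s1.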
If an agent i violates a commitment once, then s1 will become active for i towards all other agents, so that they will not expect i to fulfill any future commitments. Moreover, this will also apply to (j, i), so that the culprit i should not expect the deceived agent j to keep its promises towards i in the future either. However, this will not affect expectations regarding the interactions with i of agents other than j (i.e. they still have no right to violate their own commitments). This reflects the idea that (only) agents that have been fooled are allowed to trespass (only) against those agents who trespassed against them. However, if i ever fulfills a commitment again (after the latest violation; this is ensured by the complex constraint used as a label for the transition from s1 to s0), the semantics in s0 will become valid for i again. In this case, though, s1 will still be valid for the pair (j, i), i.e. agent j will regain trust in i but cannot be expected to be trustworthy toward i ever again.

Rather than suggesting that this is a particularly useful communication-inherent mechanism for sanctioning and rewarding specific kinds of behaviour, this example serves to illustrate the expressiveness of our framework and the kind of distinctions it enables us to make.

2.4.3 Formal Semantics

The semantics of a DS can be defined inductively as follows. Let CS(r) denote the contents of the commitment store after run r, as before. We use the notation

A(δ, CS) = {(i, j) | CS|_{i,j} ∈ c} ∩ A(s) ∩ A

to denote the set of agent pairs that are to be moved from s to s′ due to transition rule δ = (s, c, A, s′) ∈ Δ given CS, where CS|_{i,j} is the set of commitments that mention i and/or j (in their sender/receiver/content slots).
In other words, A(δ, CS) contains those pairs of agents who are (i) mentioned in the commitments covered by the constraint c, (ii) contained in the range of s, and (iii) explicitly listed in A as belonging to those pairs of agents that should be affected by the transition δ.

Definition 4. The state of a dynamic semantics (O, S, s0, Δ) after run r with immediate predecessor r′ is defined as a mapping act_r as follows:

1. r = ε: act_ε(i, j) = s0 for all i, j ∈ Ag

2. r ≠ ε:
   act_r(i, j) = s′              if ∃δ = (s, c, A, s′) ∈ Δ. (i, j) ∈ A(δ, CS(r))
   act_r(i, j) = act_{r′}(i, j)  otherwise

This maintains the property act⁻¹_r(s) = act⁻¹_{r′}(s) − A(δ, CS(r′)), which specifies that the agent pairs to be moved from s to s′ are removed from the range of s and added to the range of s′.

What is not ensured by this definition is the consistency of the state transition system, i.e. making sure that the semantic successor state is uniquely identified for any state of the commitment store and any previous state, so that every agent pair is assigned only one active state in each step, i.e. that act_r is actually a function for any r.⁷

2.4.4 Integration

Once the DS itself has been specified, we need to integrate the different components of our framework to monitor the dynamics of our ACL semantics and its implications for expected agent behaviour.

Starting with an initially empty commitment store CS and an initial semantic state s0 such that act_ε(i, j) = s0 for any two agents i and j, the agent (or an external observer) observes (a partial subset of) everything that is communicated in the system in each step.
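Definition 4 can be read operationally as a fold over the prefixes of a run. In the following sketch, `cs_of` and `A_of` stand in for CS(·) and A(δ, ·), and all names are our own assumptions:

```python
# Operational reading of Definition 4: fold the transition relation over
# the prefixes of a run. cs_of(prefix) plays the role of CS(r); A_of(d, cs,
# act) plays the role of A(d, CS). All names are our own assumptions.

def act_after(run, delta, s0, agents, cs_of, A_of):
    """Compute act_r for the full run by iterating over its prefixes."""
    # Base case: act_epsilon assigns s0 to every ordered agent pair.
    act = {(i, j): s0 for i in agents for j in agents if i != j}
    for t in range(1, len(run) + 1):
        cs = cs_of(run[:t])
        new_act = dict(act)
        for d in delta:                 # d = (s, c, pairs, s_next)
            s, c, pairs, s_next = d
            for pair in A_of(d, cs, act):
                new_act[pair] = s_next  # move pair from range of s to s_next
        act = new_act
    return act
```

Consistency in the sense discussed above corresponds here to no two transition rules writing different successor states to the same pair in one iteration.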
By applying the commitment transition rules (D, A, S, F and V) we can update CS accordingly, ignoring any observed message sent from i to j that does not syntactically match the dialogue operator set defined in act_r(i, j) for the current run r. After this update has been performed for all messages and actions observed in this cycle (it does not depend on the ordering of messages⁸), we can compute for any message sent from i to j the new value of act_{r′}(i, j) according to the semantic transition rules of the DS, where r′ is the successor run of r. With this, we can then determine what the compliant and expected behaviour of agents will be under these new conditions.

Thus, an agent can use information about expected behaviour in its own planning processes by assuming that all agents involved will exhibit their expected (rather than merely compliant) behaviours. This prediction will not always be more accurate than under normal (static) ACL semantics, but since it is common knowledge that agents assume expected behaviour to occur (and, by virtue of the DS-ACL specification, have the right to do so), most reasonable dynamic ACL specifications will make provisions to ensure that it is safer to assume expected rather than fully compliant behaviour if they want to promote their use by agents.

⁷One way of ensuring this is to require that ∀s ∈ S. (∩{c | (s, c, A, s′) ∈ Δ} = ∅), so that no two constraints pertaining to outgoing edges of s can be fulfilled by CS at a time.
In some cases this may be too coarse-grained (it would be sufficient for constraints to be mutually exclusive for the same pair of agents at any point in time), but this would have to be verified for an individual DS on a case-by-case basis.

⁸This is the case for our operators because their pre- and post-conditions never concern or affect any commitments other than those that involve both i and j; avoiding any connection to third parties helps us keep the CS update independent of the order in which observations are processed.

2.4.5 Complexity Issues

The main disadvantage of our approach is the space complexity of the dynamic ACL specification: if d is the number of dialogue operators in a language and b is the maximum number of semantic variants of a single dialogue operator within this language, the DS specification may have to specify O(b^d) states. In many cases, however, most of the speech acts will not have different variants (like RQ and RJ in our example), which may significantly reduce the number of DS states that need to be specified.

As for the run-time behaviour of our semantics processing mechanism, we can assume that n messages/actions are sent/performed in each processing step in a system with n agents. Every commitment processing rule (D, S, etc.) has to perform a pass over the contents of CS. In the worst case, every originally created commitment (of which there may be nt after t steps) may have immediately become pending, active and violated (which requires no further physical actions, so that every agent can create a new commitment in each step). Thus, if any agent creates a new commitment in each step without ever fulfilling it, the total size of CS will be in O(nt).⁹

Regarding semantic state transitions, as many as n different pairs of agents could be affected in a single iteration by n messages.
Assuming that the verification of CS constraints for these transitions takes O(nt), this yields a total update time of O(n²t) for tracking DS evolution. This bound can be reduced to O(n²) if a quasi-stationarity assumption is made by limiting the window of earlier commitments that are considered when verifying transition constraints to a constant size (thus obtaining a finite set of possible commitment stores).¹⁰

3. ANALYSIS AND DISCUSSION

The main strength of our framework is that it allows us to exploit the three main elements of reciprocity:

• Reputation-based adaptation: the DS adapts the expectations toward agent i according to i's previous behaviour by modifying the semantic state to better reflect this behaviour (based on the assumption that it will repeat itself in the future).

• Mutuality of expectations: the DS adapts the expectations toward j's behaviour according to i's previous behaviour toward j to better reflect j's response to i's observed behaviour (in particular, allowing j to behave toward i as i behaved toward j earlier).

• Recovery mechanisms: the DS allows i to revert to an earlier semantic state after having undone a change in expectations by a further, later change of behaviour (e.g. by means of redemption).

In open systems in which we cannot enforce certain behaviours, these are effectively the only available means for indirect sanctions and rewards.

⁹This is actually only a lower bound on the complexity of commitment processing, which could become even worse if dominated by the complexity of verifying entailment |=; however, this would also hold for a static ACL semantics.

¹⁰For example, this could be useful if we want to discard commitments whose status was last modified more than k time steps ago (this is problematic, as it might force us to discard certain unset/pending commitments before they become pending/active).
There are two further dimensions that affect DS-based sanctioning and reward mechanisms and are orthogonal to the above: one concerns the character of the semantic state changes (i.e. whether a change constitutes a reward or a punishment), the other the degree of adaptation (reputation-based mechanisms, for example, need not realistically reflect the behaviour of the culprit, but may instead utilise immediate (exaggerated) stigmatisation of agents as a deterrent).

Albeit simple, our example DS described above makes use of all these aspects, and apart from consistency and completeness it also satisfies some other useful properties:

1. Non-redundancy: no two dialogue operators in O should have identical pre- and post-conditions, and any two semantic variants of an operator must differ in terms of pre- and/or post-conditions:

∀o, o′ ∈ O. ( pre(o) = pre(o′) ∧ post(o) = post(o′) ⇒ o = o′ )
∀o, o′ ∈ O. ( o ≠ o′ ∧ action(o) = action(o′) ⇒ pre(o) ≠ pre(o′) ∨ post(o) ≠ post(o′) )

2. Reachability of all semantic states: any constraint causing a transition must be satisfiable in principle when using the dialogue operators and physical actions that are provided:

∀(s, c, A, s′) ∈ Δ ∃r ∈ R(Env, A). CS(r) ∩ c ≠ ∅

3. Distinction between expected and compliant behaviour: the content of expectations must differ from that of normative commitments at least for some semantic variants (giving rise to non-compliant expectations for some runs):

∃r ∈ R(Env, A). expected(CS(r)) ≠ compliant(CS(r))

4.
Compliance/deviance realisability: It must be possible in principle for agents to comply with normative commitments or to deviate from them:

∃r ∈ R(Env, A) . expected(CS(r)) ≠ ∅ ∧ compliant(CS(r)) ≠ ∅

While not absolutely essential, these constitute desiderata for the design of DS-ACLs, as they add to the simplicity and clarity of a given semantics specification. Our framework raises interesting questions regarding further potential properties of DS, such as:

1. Respect for commitment autonomy: The semantics must not allow an agent to create a pending commitment for another agent or to violate a commitment on behalf of another agent. While in some cases some agents should be able to enforce commitments upon others, this should generally be avoided to ensure agent autonomy.

2. Avoiding commitment inconsistency: The ACL must either disallow commitment to contradictory actions or beliefs, or at least provide operators for rectifying such contradictory claims. Under contradictory commitments, no possible behaviour can be compliant; it is up to the designer to decide to which extent this should be permitted.

3. Unprejudiced judgement: Expected behaviour prediction must not deviate from compliant behaviour prediction if deviant behaviour has not been observed so far (in particular, this must hold for the initial semantic state). This might not always be desirable, as initial distrust is necessary in some systems, but it increases the chances that agents will agree to participate in communication.

4. Convergence: The semantic state of each of the dialogue operators will remain stable after a finite number of transitions, regardless of any further agent behaviour.11 If this property holds, this would imply that agents can stop tracking semantic state transitions after some amount of initial interaction.
The advantage of this is reduced complexity, which of course comes at the price of giving up adaptiveness.

5. Forgiveness: After initial deviance, further compliant behaviour of an agent should lead to a semantic state that predicts compliant behaviour for that agent again. Here, we have to trade off cautiousness against the provision of incentives to resume cooperative behaviour. Trusting an agent makes others vulnerable to exploitation - blacklisting an agent forever, though, might lead that agent to keep up its unpredictable and potentially malicious behaviour.

6. Equality: Unless this is required by domain-specific constraints, the same dynamics of semantics should apply to all parties involved.

Our simple example semantics satisfies all these properties apart from convergence. Many of the above properties are debatable, as we have to trade off cautiousness against the provision of incentives for cooperative behaviour. While we cannot make any general statements here regarding optimal DS-ACL design, our framework provides the tools to test and evaluate the performance of different such communication-inherent sanctioning and rewarding mechanisms (i.e. social rules that do not presuppose the ability to direct punishment or reward through physical actions) in real-world applications.

4. RELATED WORK

Expectation-based reasoning about interaction was first proposed in [2], considering the evolution of expectations described as probabilistic expectations of communication and action sequences. The same authors suggested a more general framework for expectation-based communication semantics [9], and argued for a consequentialist view of semantics that defines the meaning of utterances in terms of their expected consequences and updates these expectations with new observations [11].
However, their approach does not use an explicit notion of commitments, which in our framework mediates between communication and behaviour-based grounding and provides a clear distinction between a normative notion of compliance and a more empirical notion of expectation.

Grounding for (mentalistic) ACL semantics has been investigated in [7], where grounded information is viewed as information that is publicly expressed and accepted as being true by all the agents participating in a conversation. Like [1] (which bases the notion of publicly expressed on roles rather than internal states of agents), these authors' main concern is to provide a verifiable basis for determining the semantics of expressed mental states and commitments. Though our framework is only concerned with commitment to the achievement of states of affairs rather than exchanged information, in a sense DS provides an alternative view by specifying what will happen if the assumptions on which what is publicly accepted is based are violated.

11 In a non-trivial sense, i.e. when some initial transitions are possible in principle.

Our framework is also related to deontic methods for the specification of obligations, norms and sanctions. In this area, [16] is the only framework that we are aware of which considers dynamic obligations, norms and sanctions. However, as we have described above, we solely utilise semantic evolution as a sanctioning and rewarding mechanism, i.e. unlike this work we do not assume that agents can be directly punished or rewarded.

Finally, the FSM-like structure of the DS transition systems in combination with agent communication is reminiscent of work on electronic institutions [5], but there the focus is on providing different means of communication in different scenes of the interaction process (e.g.
different protocols for different phases of market-based interaction), whereas we focus on different semantic variants that are to be used in the same interaction context.

5. CONCLUSION

This paper introduces dynamic semantics for ACLs as a method for dealing with some fundamental problems of agent communication in open systems, the simple underlying idea being that different courses of agent behaviour can give rise to different interpretations of the meaning of the messages exchanged among agents. Based on a common framework of commitment-based semantics, we presented a notion of grounding for commitments based on notions of compliant and expected behaviour. We then defined dynamic semantics as state transition systems over different semantic states that can be viewed as different versions of ACL semantics in the traditional sense, and that can easily be associated with a planning-based view of reasoning about communication. Thereby, our focus was on simplicity and on providing mechanisms for tracking semantic evolution in a down-to-earth, algorithmic fashion to ensure applicability to many different agent designs.

We discussed the properties of our framework, showing how it can be used as a powerful communication-inherent mechanism for rewarding and sanctioning agent behaviour in open systems without compromising agent autonomy, discussed its integration with agents' planning processes and complexity issues, and presented a list of desiderata for the design of ACLs with such semantics.

Currently, we are working on fully-fledged specifications of dynamic semantics for more complex languages and on extending our approach to mentalistic semantics, where we view statements about mental states as commitments regarding the rational implications of these mental states (a simple example of this is that an agent commits itself to dropping an ostensible intention that it claims to maintain if that intention turns out to be unachievable).
In this context, we are particularly interested in appropriate mechanisms to detect and respond to lying by interrogating suspicious agents and forcing them to commit themselves to (sets of) mental states publicly, while sanctioning them when these are inconsistent with their actions.

6. REFERENCES

[1] G. Boella, R. Damiano, J. Hulstijn, and L. van der Torre. ACL Semantics between Social Commitments and Mental Attitudes. In Proceedings of the International Workshop on Agent Communication, 2006.
[2] W. Brauer, M. Nickles, M. Rovatsos, G. Weiß, and K. F. Lorentzen. Expectation-Oriented Analysis and Design. In Proceedings of the 2nd Workshop on Agent-Oriented Software Engineering, LNCS 2222, 2001. Springer-Verlag, Berlin.
[3] P. R. Cohen and H. J. Levesque. Communicative actions for artificial agents. In Proceedings of the First International Conference on Multi-Agent Systems, pages 65-72, 1995.
[4] P. R. Cohen and C. R. Perrault. Elements of a Plan-Based Theory of Speech Acts. Cognitive Science, 3:177-212, 1979.
[5] M. Esteva, J. Rodriguez, J. Arcos, C. Sierra, and P. Garcia. Formalising Agent Mediated Electronic Institutions. In Catalan Congres on AI, pages 29-38, 2000.
[6] N. Fornara and M. Colombetti. Operational specification of a commitment-based agent communication language. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 536-542, Bologna, Italy, 2002. ACM Press.
[7] B. Gaudou, A. Herzig, D. Longin, and M. Nickles. A New Semantics for the FIPA Agent Communication Language based on Social Attitudes. In Proceedings of the 17th European Conference on Artificial Intelligence, Riva del Garda, Italy, 2006. IOS Press.
[8] F. Guerin and J. Pitt. Denotational Semantics for Agent Communication Languages. In Proceedings of the Fifth International Conference on Autonomous Agents, pages 497-504. ACM Press, 2001.
[9] M. Nickles, M. Rovatsos, and G.
Weiss. Empirical-Rational Semantics of Agent Communication. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, New York, NY, 2004.
[10] J. Pitt and A. Mamdani. Some Remarks on the Semantics of FIPA's Agent Communication Language. Autonomous Agents and Multi-Agent Systems, 2:333-356, 1999.
[11] M. Rovatsos, M. Nickles, and G. Weiß. Interaction is Meaning: A New Model for Communication in Open Systems. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 2003.
[12] M. D. Sadek. Dialogue acts are rational plans. In Proceedings of the ESCA/ETRW Workshop on the Structure of Multimodal Dialogue, pages 1-29, 1991.
[13] M. Singh. Agent communication languages: Rethinking the principles. IEEE Computer, 31(12):55-61, 1998.
[14] M. Singh. A social semantics for agent communication languages. In Proceedings of the IJCAI Workshop on Agent Communication Languages, 2000.
[15] M. P. Singh. A semantics for speech acts. Annals of Mathematics and Artificial Intelligence, 8(1-2):47-71, 1993.
[16] G. Weiß, M. Nickles, M. Rovatsos, and F. Fischer. Specifying the Intertwining of Cooperation and Autonomy in Agent-based Systems. Journal of Networks and Computer Applications, 29, 2007.
[17] M. J. Wooldridge. Verifiable semantics for agent communication languages. In Proceedings of the Third International Conference on Multi-Agent Systems, pages 349-356, Paris, France, 1998.
Commitment and Extortion

Abstract: Making commitments, e.g., through promises and threats, enables a player to exploit the strengths of his own strategic position as well as the weaknesses of that of his opponents. Which commitments a player can make with credibility depends on the circumstances. In some, a player can only commit to the performance of an action; in others, he can commit himself conditionally on the actions of the other players. Some situations even allow for commitments on commitments or for commitments to randomized actions. We explore the formal properties of these types of (conditional) commitment and their interrelationships. So as to preclude inconsistencies among conditional commitments, we assume an order in which the players make their commitments. Central to our analyses is the notion of an extortion, which we define, for a given order of the players, as a profile that contains, for each player, an optimal commitment given the commitments of the players that committed earlier. On this basis, we investigate for different commitment types whether it is advantageous to commit earlier rather than later, and how the outcomes obtained through extortions relate to backward induction and Pareto efficiency.

1. INTRODUCTION

On one view, the least one may expect of game theory is that it provides an answer to the question which actions maximize an agent's expected utility in situations of interactive decision making. A slightly divergent view is expounded by Schelling when he states that strategy "[...] is not concerned with the efficient application of force but with the exploitation of potential force" [9, page 5]. From this perspective, the formal model of a game in strategic form only outlines the strategic features of an interactive situation. Apart from merely choosing and performing an action from a set of actions, there may also be other courses open to an agent.
E.g., the strategic lie of the land may be such that a promise, a threat, or a combination of both would be more conducive to his ends.

The potency of a promise, however, essentially depends on the extent to which the promisee can be convinced of the promiser's resolve to see to its fulfillment. Likewise, a threat only succeeds in deterring an agent if the latter can be made to believe that the threatener is bound to execute the threat, should it be ignored. In this sense, promises and threats essentially involve a commitment on the part of the one who makes them, thus purposely restricting his freedom of choice. Promises and threats epitomize one of the fundamental and at first sight perhaps most surprising phenomena in game theory: it may occur that a player can improve his strategic position by limiting his own freedom of action. By commitments we will understand such limitations of one's action space. Action itself could be seen as the ultimate commitment. Performing a particular action means doing so to the exclusion of all other actions.

Commitments come in different forms, and it may depend on the circumstances which ones can and which ones cannot credibly be made. Besides simply committing to the performance of an action, an agent might make his commitment conditional on the actions of other agents, as, e.g., the kidnapper does when he promises to set free a hostage on receiving a ransom, while threatening to cut off another toe otherwise. Some situations even allow for commitments on commitments or for commitments to randomized actions.

By focusing on the selection of actions rather than on commitments, it might seem that the conception of game theory as mere interactive decision theory is too narrow. In this respect, Schelling's view might seem to evince a more comprehensive understanding of what game theory tries to accomplish. One might object that commitments could be seen as the actions of a larger game.
In reply to this criticism Schelling remarks:

"While it is instructive and intellectually satisfying to see how such tactics as threats, commitments, and promises can be absorbed in an enlarged, abstract supergame (game in normal form), it should be emphasized that we cannot learn anything about those tactics by studying games that are already in normal form. [...] What we want is a theory that systematizes the study of the various universal ingredients that make up the move-structure of games; too abstract a model will miss them." [9, pp. 156-7]

108 978-81-904262-7-5 (RPS) © 2007 IFAAMAS

Our concern is with these commitment tactics, be it that our analysis is confined to situations in which the players can commit in a given order and where we assume the commitments the players can make to be given. Despite Schelling's warning against too abstract a framework, our approach will be based on the formal notion of an extortion, which we will propose in Section 4 as a uniform tactic for a comprehensive class of situations in which commitments can be made sequentially. On this basis we tackle such issues as the usefulness of certain types of commitment in different situations (strategic games) and whether it is better to commit early rather than late. We also provide a framework for the assessment of more general game-theoretic matters like the relationship of extortions to backward induction and Pareto efficiency.

Insight into these matters has proved invaluable for a proper understanding of diplomatic policy during the Cold War. Nowadays, we believe, these issues are equally significant for applications and developments in such fields as multiagent systems, distributed computing and electronic markets. For example, commitments have been argued to be of importance for interacting software agents as well as for mechanism design.
In the former setting, the inability to re-program a software agent on the fly can be seen as a commitment to its specification and thus exploited to strengthen its strategic position in a multiagent setting. A mechanism, on the other hand, could be seen as a set of commitments that steers the players' behavior in a certain desired way (see, e.g., [2]).

Our analysis is conceptually similar to that of Stackelberg or leadership games [15], which have been extensively studied in the economic literature (cf. [16]). These games analyze situations with a leader, who commits to a pure or mixed strategy, and a number of followers, who then act simultaneously. Our approach, however, differs in that it is assumed that the players all move in a particular order (first, second, third and so on) and that it is specifically aimed at incorporating a wide range of possible commitments, in particular conditional commitments.

After briefly discussing related work in Section 2, we present the formal game-theoretic framework, in which we define the notions of a commitment type as well as conditional and unconditional commitments (Section 3). In Section 4 we propose the generic concept of an extortion, which for each commitment type captures the idea of an optimal commitment profile. We point out an equivalence between extortions and backward induction solutions, and investigate whether it is advantageous to commit earlier rather than later and how the outcomes obtained through extortions relate to Pareto efficiency. Section 5 briefly reviews some other commitment types, such as inductive, mixed and mixed conditional commitments. The paper concludes with an overview of the results and an outlook for future research in Section 6.

2. RELATED WORK

Commitment is a central concept in game theory. The possibility to make commitments distinguishes cooperative from noncooperative game theory [4, 6].
Leadership games, as mentioned in the introduction, analyze commitments to pure or mixed strategies in what is essentially a two-player setting [15, 16]. Informally, Schelling [9] has emphasized the importance of promises, threats and the like for a proper understanding of social interaction. On a more formal level, threats have also figured in bargaining theory. Nash's threat game [5] and Harsanyi's rational threats [3] are two important early examples. Also, commitments have played a significant role in the theory of equilibrium selection (see, e.g., [13]).

Over the last few years, game theory has become almost indispensable as a research tool for computer science and (multi)agent research. Commitments have by no means gone unnoticed (see, e.g., [1, 11]). Recently, the strategic aspects of commitments have also attracted the attention of computer scientists. Thus, Conitzer and Sandholm [2] have studied the computational complexity of computing the optimal strategy to commit to in normal form and Bayesian games. Sandholm and Lesser [8] employ levelled commitments for the design of multiagent systems in which contractual agreements are not fully binding. Another connection between commitments and computer science has been pointed out by Samet [7] and Tennenholtz [12]. Their point of departure is the observation that programs can be used to formulate commitments that are conditional on the programs of other systems.

Our approach is similar to the Stackelberg setting in that we assume an order in which the players commit. We, however, consider a number of different commitment types, among which conditional commitments, and propose a generic solution concept.

Figure 1: Committing to a dominated strategy can be advantageous.

           left    right
  top     (1, 3)  (3, 2)
  bottom  (0, 0)  (2, 1)

3. COMMITMENTS

By committing, an agent can improve his strategic position.
It may even be advantageous to commit to a strategy that is strongly dominated, i.e., one for which there is another strategy that yields a better payoff no matter how the other agents act. Consider, for example, the 2×2 game in Figure 1, in which one player, Row, chooses rows and another, Col, chooses columns. The entries in the matrix indicate the payoffs to Row and Col, respectively. Then, top-left is the solution obtained by iterative elimination of strongly dominated strategies: for Row, playing top is always better than playing bottom, and assuming that Row will therefore never play bottom, left is always better than right for Col. However, if Row succeeds in convincing Col of his commitment to play bottom, the latter had better choose the right column. Thus, Row attains a payoff of two instead of one. Along a similar line of reasoning, however, Col would wish to commit to the left column, as convincing Row of this commitment guarantees him the most desirable outcome. If, on the other hand, both players actually commit themselves in this way but without convincing the other party of their having done so, the game ends in misery for both.

Important types of commitments, however, cannot simply be analyzed as unconditional commitments to actions. The essence of a threat, for example, is deterrence. If successful, it is not carried out. (This is also the reason why the credibility of a threat is not necessarily undermined if its putting into effect means that the threatener is also harmed.) By contrast, promises are made to entice and, as such, meant to be fulfilled. Thus, both threats and promises would be strategically void if they were unconditional.

Figure 2 shows an example in which Col can guarantee himself a payoff of three by threatening to choose the right column if Row chooses top. (This will suffice to deter Row, and there is no need for an additional promise on the part of Col.)
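This reasoning can be checked by brute force. The sketch below (Python; the payoff table is transcribed from Figure 2, all function names are ours, and ties for Row are assumed to be broken by enumeration order) lets Row best-respond to each of Col's four conditional commitments and to each of his two unconditional ones:

```python
from itertools import product

# Payoffs (u_Row, u_Col) for the game of Figure 2.
U = {('t', 'l'): (2, 2), ('t', 'r'): (0, 0),
     ('b', 'l'): (1, 3), ('b', 'r'): (3, 1)}

def best_conditional_payoff_for_col():
    """Col commits to a function f: Row's action -> Col's action;
    Row then best-responds knowing f. Returns the best outcome for Col."""
    best = None
    for f_t, f_b in product('lr', repeat=2):            # Col's four commitments
        f = {'t': f_t, 'b': f_b}
        row = max('tb', key=lambda a: U[(a, f[a])][0])  # Row's best response
        outcome = U[(row, f[row])]
        if best is None or outcome[1] > best[1]:
            best = outcome
    return best

def best_unconditional_payoff_for_col():
    """Col commits to a single column; Row best-responds."""
    best = None
    for col in 'lr':
        row = max('tb', key=lambda a: U[(a, col)][0])
        outcome = U[(row, col)]
        if best is None or outcome[1] > best[1]:
            best = outcome
    return best

print(best_conditional_payoff_for_col())    # the threat secures Col a payoff of 3
print(best_unconditional_payoff_for_col())  # unconditionally, Col gets at most 2
```

The first call confirms that the threat f(t) = r, f(b) = l induces the outcome (1, 3), giving Col a payoff of three; the second shows that no unconditional commitment by Col yields him more than two.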
He cannot do so by merely committing unconditionally, and neither can Row if he were to commit first.

Figure 2: The column player Col can guarantee himself a payoff of three by threatening to play right if the row player Row plays top.

           left    right
  top     (2, 2)  (0, 0)
  bottom  (1, 3)  (3, 1)

In the case of conditional commitments, however, a particular kind of inconsistency can arise. It is not in general the case that any two commitments can both be credible. In a 2 × 2 game, it could occur that Row commits conditionally on playing top if Col plays left, and bottom otherwise. If now Col were simultaneously able to commit to the conditional strategy of playing right if Row plays top, and left otherwise, there is no strategy profile that can be played without one of the players' bluff being called.

To get around this problem, one can write down conditional commitments in the form of rules and define appropriate fixed-point constructions, as suggested by Samet [7] and Tennenholtz [12]. Since checking the semantic equivalence of two commitments (or commitment conditions) is undecidable in general, Tennenholtz bases his definition of program equilibrium on syntactic equivalence. We, by contrast, try to steer clear of fixed-point constructions by assuming that the players make their commitments in a particular order. Each player can then make his commitments dependent on the actions of the players to commit after him, but not on the commitments of the players that committed before. On the issue of how this order comes about we do not enter here. Rather, we assume it to be determined by the circumstances, which may force or permit some players to commit earlier and others later.
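The inconsistency of two simultaneous conditional commitments can be exhibited exhaustively. In the sketch below (Python; the encoding of the two rules is ours), Row's rule is "top if Col plays left, bottom otherwise" and Col's rule is "right if Row plays top, left otherwise"; no profile of the 2 × 2 game satisfies both rules simultaneously:

```python
from itertools import product

# Row's conditional commitment: top if Col plays left, bottom otherwise.
row_rule = lambda col: 't' if col == 'l' else 'b'
# Col's conditional commitment: right if Row plays top, left otherwise.
col_rule = lambda row: 'r' if row == 't' else 'l'

# A profile is playable without calling anyone's bluff only if it is a
# simultaneous fixed point of both rules.
consistent = [(r, c) for r, c in product('tb', 'lr')
              if row_rule(c) == r and col_rule(r) == c]

print(consistent)  # [] -- no strategy profile honours both commitments
```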
We will find that it is not always beneficial to commit earlier rather than later, or vice versa.

Another point to heed is that we only consider the case in which the commitments are considered absolutely binding. We do not take into account commitments that can be violated. Intuitively, this could be understood as meaning that the possibility of violation fatally undermines the credibility of the commitment. We also assume commitments to be complete, in the sense that they fully lay down a player's behavior in all foreseeable circumstances. These assumptions imply that the outcome of the game is entirely determined by the commitments the players make. Although these might be implausible assumptions for some situations, we had better study the idealized case first, before tackling the complications of the more general case. To make these concepts formally precise, we first have to fix some notation.

3.1 Strategic Games

A strategic game is a tuple (N, (A_i)_i∈N, (u_i)_i∈N), where N = {1, . . . , n} is a finite set of players, A_i is a set of actions available to player i, and u_i a real-valued utility function for player i on the set of (pure) strategy profiles S = A_1 × · · · × A_n. We call a game finite if for all players i the action set A_i is finite. A mixed strategy σ_i for a player i is a probability distribution over A_i. We write Σ_i for the set of mixed strategies available to player i, and Σ = Σ_1 × · · · × Σ_n for the set of mixed strategy profiles. We further write σ(a) and σ_i(a) for the probability of action a in mixed strategy profile σ or mixed strategy σ_i, respectively. In settings involving expected utility, we will generally assume that utility functions represent von Neumann-Morgenstern preferences.
For a player i and (mixed) strategy profiles σ and τ we write σ ≽_i τ if u_i(σ) ≥ u_i(τ).

3.2 Conditional Commitments

Relative to a strategic game (N, (A_i)_i∈N, (u_i)_i∈N) and an ordering π = (π_1, . . . , π_n) of the players, we define the set F_π_i of (pure) conditional commitments of a player π_i as the set of functions from A_π_1 × · · · × A_π_(i-1) to A_π_i. For π_1, the set of conditional commitments coincides with A_π_1. By a conditional commitment profile f we understand any combination of conditional commitments in F_π_1 × · · · × F_π_n.

Intuitively, π reflects the sequential order in which the players can make their commitments, with π_n committing first, π_(n-1) second, and so on. Each player can condition his action on the actions of all players that are to commit after him. In this manner, each conditional commitment profile f can be seen to determine a unique strategy profile, denoted by ⟨f⟩, which will be played if all players stick to their conditional commitments. More formally, the strategy profile ⟨f⟩ = (⟨f⟩_π_1, . . . , ⟨f⟩_π_n) for a conditional commitment profile f is defined inductively as

⟨f⟩_π_1 =df. f_π_1,
⟨f⟩_π_(i+1) =df. f_π_(i+1)(⟨f⟩_π_1, . . . , ⟨f⟩_π_i).

The sequence ⟨f⟩_π_1, (⟨f⟩_π_1, ⟨f⟩_π_2), . . . , (⟨f⟩_π_1, . . . , ⟨f⟩_π_n) will be called the path of f. E.g., in the two-player game of Figure 2 and given the order (Row, Col), Row has two conditional commitments, top and bottom, which we will henceforth denote t and b. Col, on the other hand, has four conditional commitments, corresponding to the different functions mapping strategies of Row to those of Col.
If we consider a conditional commitment f for Col such that f(t) = l and f(b) = r, then (t, f) is a conditional commitment profile and ⟨(t, f)⟩ = (t, f(t)) = (t, l).

There is a natural way in which a strategic game G together with an ordering (π_1, . . . , π_n) of the players can be interpreted as an extensive form game with perfect information (see, e.g., [4, 6]),1 in which π_1 chooses his action first, π_2 second, and so on. Observe that under this assumption the strategies in the extensive form game and the conditional commitments in the strategic game G with ordering π are mathematically the same objects. Applying backward induction to the extensive form game yields subgame perfect equilibria, which arguably provide appropriate solutions in this setting. From the perspective of conditional commitments, however, players move in reverse order. We will argue that under this interpretation other strategy profiles should be singled out as appropriate.

To illustrate this point, consider once more the game in Figure 2 and observe that neither player can improve on the outcome obtained via iterated strong dominance by committing unconditionally to some strategy. Situations like this, in which players can make unconditional commitments in a fixed order, can fruitfully be analyzed as extensive form games, and the most lucrative unconditional commitment can be found through backward induction. Figure 3 shows the extensive form associated with the game of Figure 2. The strategies available to the row player are the same as in the strategic form: choosing the top or the bottom row. The strategies for the column player in the extensive game are given by the four functions that map strategies of the row player in the strategic game to one of his own.
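The inductive definition of the induced strategy profile can be written out directly. The following sketch (Python; function and variable names are ours) folds a conditional commitment profile into the strategy profile it determines, and reproduces the example for the game of Figure 2:

```python
def induced_profile(commitments):
    """Given conditional commitments (f_pi1, ..., f_pin), where f_pi1 is an
    action and each later entry maps the tuple of earlier induced actions
    to an action, compute the induced strategy profile <f>."""
    profile = [commitments[0]]    # <f>_pi1 = f_pi1
    for f in commitments[1:]:     # <f>_pi(i+1) = f_pi(i+1)(<f>_pi1, ..., <f>_pii)
        profile.append(f(*profile))
    return tuple(profile)

# Figure 2 example with order (Row, Col): Row commits to t, and Col's
# conditional commitment f plays l after t and r after b.
f_col = lambda row: 'l' if row == 't' else 'r'
print(induced_profile(('t', f_col)))  # ('t', 'l')
```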
Transforming this extensive form back into a strategic game (see Figure 4), we find that there exists a second equilibrium besides the one found by means of backward induction. This equilibrium with outcome (1, 3), indicated by the thick lines in Figure 3, has been argued to be unacceptable in the sequential game, as it would involve an incredible threat by Col: once Row has played top, Col finds himself confronted with a fait accompli. He had better make the best of a bad bargain and opt for the left column after all. This is in essence the line of thought Selten followed in his famous argument for subgame perfect equilibria [10]. If, however, the strategies of Col in the extensive form are thought of as the conditional commitments he can make in case he moves first, the situation is radically different. Thus we also assume that it is possible for Col to make credible the threat to choose the right column if Row were to play top, so as to ensure that the latter is always better off playing the bottom row.

1 For a formal definition of a game in extensive form, the reader may consult one of the standard textbooks, such as [4] or [6]. In this paper all formal definitions are based on strategic games and orderings of the players only.

Figure 3: Extensive form obtained from the strategic game of Figure 2 when the row player chooses an action first. The backward induction solution is indicated by dashed lines, the conditional commitment solution by solid ones. (The horizontal dotted lines do not indicate information sets, but merely indicate which players are to move when.) [Game tree not reproduced: Row moves at the root and Col at the two successor nodes, with leaf payoffs (2, 2), (0, 0), (1, 3) and (3, 1) as in Figure 2.]
If Col can make a conditional commitment of playing the right column if Row chooses top, and the left column otherwise, this leaves Row with the easy choice between a payoff of zero or one, and Col may expect a payoff of three.

This line of reasoning can be generalized to yield an algorithm for finding optimal conditional commitments in general two-player games:

1. Find a strategy profile s = (s_{π1}, s_{π2}) with maximum payoff to player π2, and set f_{π1} = s_{π1} and f_{π2}(s_{π1}) = s_{π2}.

2. For each t_{π1} ∈ A_{π1} with t_{π1} ≠ s_{π1}, find a strategy t_{π2} ∈ A_{π2} that minimizes u_{π1}(t_{π1}, t_{π2}), and set f_{π2}(t_{π1}) = t_{π2}.

3. If u_{π1}(t_{π1}, f_{π2}(t_{π1})) ≤ u_{π1}(s_{π1}, s_{π2}) for all t_{π1} ≠ s_{π1}, return f.

4. Otherwise, find the strategy profile (s_{π1}, s_{π2}) with the highest payoff to π2 among the ones that have not yet been considered. Set f_{π1} = s_{π1} and f_{π2}(s_{π1}) = s_{π2}, and continue with Step 2.

Generalizing the idea underlying this algorithm, we present in Section 4 the concept of an extortion, which applies to games with any number of players. For any order of the players, an extortion contains, for each player, an optimal commitment given the commitments of the players that committed earlier.

3.3 Commitment Types

So far, we have distinguished between conditional and unconditional commitments. If made sequentially, both of them determine a unique strategy profile in a given strategic game. This notion of sequential commitment allows for generalization and gives rise to the following definition of a (sequential) commitment type.

Definition 3.1. (Sequential commitment type) A (sequential) commitment type τ associates with each strategic game G and each ordering π of its players a tuple ⟨X_{π1}, . . . , X_{πn}, φ⟩, where X_{π1}, . . . , X_{πn} are (abstract) sets of commitments and φ is a function mapping each profile in X = X_{π1} × · · · × X_{πn} to a (mixed) strategy profile of G. A commitment type ⟨X_{π1}, . . . , X_{πn}, φ⟩ is finite whenever X_{πi} is finite for each i with 1 ≤ i ≤ n.

Thus, the type of unconditional commitments associates with a game and an ordering π of its players the tuple ⟨S_{π1}, . . . , S_{πn}, id⟩, where id is the identity function. Similarly, ⟨F_{π1}, . . . , F_{πn}, φ⟩, with φ mapping each conditional commitment profile to the strategy profile it determines, is the tuple associated with the same game by the type of (pure) conditional commitments.

  (2, 2)  (2, 2)  (0, 0)  (0, 0)
  (1, 3)  (3, 1)  (1, 3)  (3, 1)

Figure 4: The strategic game corresponding to the extensive form of Figure 3

4. EXTORTIONS

In the introduction, we argued informally how players could improve their position by conditionally committing. How well they can do could be analyzed by means of an extensive game with the actions of each player being defined as the possible commitments he can make. Here, we introduce for each commitment type a corresponding notion of extortion, which is defined relative to a strategic game and an ordering of the players. Extortions are meant to capture the concept of a profile that contains, for each player, an optimal commitment given the commitments of the players that committed earlier. A complicating factor is that in finding a player's optimal commitment, one should not only take into account how such a commitment affects other players' actions, but also how it enables them to make their commitments.

Definition 4.1. (Extortions) Let G be a strategic game, π an ordering of its players, and τ a commitment type. Let τ(G, π) be given by ⟨X_{π1}, . . . , X_{πn}, φ⟩.
A \u03c4-extortion of order 0 is any\ncommitment profile x \u2208 X\u03c01\n\u00d7 \u00b7 \u00b7 \u00b7 \u00d7 X\u03c0n . For m > 0, a commitment\nprofile x \u2208 X\u03c01\n\u00d7 \u00b7 \u00b7 \u00b7 \u00d7 X\u03c0n is a \u03c4-extortion of order m in G given \u03c0\nif x is an \u03c4-extortion of order m \u2212 1 with\n\u03c6 y\u03c01\n, . . . , y\u03c0m , x\u03c0m+1\n, . . . , x\u03c0n \u03c0m \u03c6 x\u03c01\n, . . . , x\u03c0m , x\u03c0m+1\n, . . . , x\u03c0n\nfor all commitment profiles g in X with (y\u03c01\n, . . . , y\u03c0m , x\u03c0m+1\n, . . . , x\u03c0n )\na \u03c4-extortion of order m \u2212 1. A \u03c4-extortion is a commitment profile\nthat is a \u03c4-extortion of order m for all m with 0 m n.\nFurthermore, we say that a (mixed) strategy profile \u03c3 is \u03c4-extortionable if\nthere is some \u03c4-extortion x with \u03c6(x) = s.\nThus, an extortion of order 1 is a commitment profile in which\nplayer \u03c01, makes a commitment that maximizes his payoff, given\nfixed commitments of the other players. An extortion of order m is\nan extortion of order m \u2212 1 that maximizes player m\"s payoff, given\nfixed commitments of the players \u03c0m+1 through \u03c0n.\nFor the type of conditional commitments we have that any\nconditional commitment profile f is an extortion of order 0 and an\nextortion of an order m greater than 0 is any extortion of order m \u2212 1\nfor which:\ng\u03c01\n, . . . , g\u03c0m , f\u03c0m+1\n, . . . , f\u03c0n \u03c0m f\u03c01\n, . . . , f\u03c0m , f\u03c0m+1\n, . . . , f\u03c0n ,\nfor each conditional commitment profile g such that\ng\u03c01\n, . . . , g\u03c0m , f\u03c0m+1\n, . . . 
, f\u03c0n an extortion of order m \u2212 1.\nTo illustrate the concept of an extortion for conditional\ncommitments consider the three-player game in Figure 5 and assume\n\u23a1\n\u23a2\u23a2\u23a2\u23a2\u23a3\n(1, 4, 0) (1, 4, 0)\n(3, 3, 2) (0, 0, 2)\n\u23a4\n\u23a5\u23a5\u23a5\u23a5\u23a6\n\u23a1\n\u23a2\u23a2\u23a2\u23a2\u23a3\n(4, 1, 1) (4, 0, 0)\n(3, 3, 2) (0, 0, 2)\n\u23a4\n\u23a5\u23a5\u23a5\u23a5\u23a6\nFigure 5: A three-player strategic game\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 111\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n1\n4\n0\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n3\n3\n2\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n4\n1\n1\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n3\n3\n2\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\nRow\nCol\nMat\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n1\n4\n0\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n4\n0\n0\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n0\n0\n2\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n0\n0\n2\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n1\n4\n0\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n3\n3\n2\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n4\n1\n1\n\u239e\n\u239f\u239f\u239f\u239f\u239f\u239f\u239f\u23a0\n\u239b\n\u239c\u239c\u239c\u239c\u239c\u239c\u239c\u239d\n3\n3\n
(Row, Col, Mat) to be the order in which the players commit.

[Figure 6: A conditional extortion f of order 1 (left) and an extortion g of order 3 (right); the payoff trees are omitted here.]

Figure 6 depicts the possible conditional commitments of the players in extensive forms, with the left branch corresponding to Row's strategy of playing the top row. Let f and g be the conditional commitment strategies indicated by the thick lines in the left and right figures, respectively. Both f and g are extortions of order 1. In both f and g, Row guarantees himself the higher payoff given the conditional commitments of Mat and Col. Only g, however, is also an extortion of order 2. To appreciate that f is not, consider the conditional commitment profile h in which Row chooses top and Col chooses right no matter how Row decides, i.e., h is such that h_Row = t and h_Col(t) = h_Col(b) = r. Then (h_Row, h_Col, f_Mat) is also an extortion of order 1, but yields Col a higher payoff than f does. We leave it to the reader to check that, by contrast, g is an extortion of order 3, and therewith an extortion per se.

4.1 Promises and Threats

One way of understanding conditional extortions is by conceiving of them as combinations of precisely one promise and a number of threats.
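In the two-player game of Figure 2, for instance, Col's extortion combines the promise f(b) = l with the threat f(t) = r. This construction can be sketched by brute force; the payoffs are those of Figure 2, and the helper name is illustrative only.

```python
# Payoffs (row, col) of the game in Figure 2.
U = {('t', 'l'): (2, 2), ('t', 'r'): (0, 0),
     ('b', 'l'): (1, 3), ('b', 'r'): (3, 1)}

def promise_and_threats(target_row, target_col):
    """Col's commitment aiming at (target_row, target_col): promise the
    target column on the target row, threaten Row's worst column elsewhere."""
    f = {target_row: target_col}                         # the single promise
    for a in 'tb':
        if a != target_row:
            f[a] = min('lr', key=lambda c: U[(a, c)][0])  # a threat
    return f

f = promise_and_threats('b', 'l')
# Row complies iff no deviation pays him more than the promised path.
complies = all(U[(a, f[a])][0] <= U[('b', f['b'])][0] for a in 'tb')
print(f, 'Row complies:', complies)
```

Here the threat f(t) = r makes deviating to the top row worth 0 to Row, so the promised profile (b, l), worth 3 to Col, is enforced.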
From the strategy profiles that can still be realized given the conditional commitments of players that have committed before him, a player tries to enforce the strategy profile that yields him as much payoff as possible. Hence, he chooses his commitment so as to render deviations from the path that leads to this strategy profile as unattractive as possible ('threats') and the desired strategy profile as appealing as possible ('promises') for the relevant players. If (s_{π1}, . . . , s_{πn}) is such a desirable strategy profile for player π_i and f_{πi} his conditional commitment, the value of f_{πi}(s_{π1}, . . . , s_{πi−1}) could be taken as his promise, whereas the values of f_{πi} for all other (t_{π1}, . . . , t_{πi−1}) could be seen as constituting his threats. The higher the payoff to the other players in a strategy profile a player aims for, the easier it is for him to formulate an effective threat. However, making appropriate threats does not merely come down to minimizing the payoffs of the players to commit later wherever possible. A player should also take into account the commitments, promises and threats that the following players can make on the basis of his and his predecessors' commitments. This is what makes extortionate reasoning sometimes so complicated, especially in situations with more than two players.

For example, in the game of Figure 5, there is no conditional extortion that ensures Mat a payoff of two. To appreciate this, consider the possible commitments Mat can make in case Row plays top and Col plays left (tl) and in case Row plays top and Col plays right (tr). If Mat commits to the right matrix in both cases, he virtually promises Row a payoff of four, leaving himself with a payoff of at most one. Otherwise, he puts Col in a position to deter Row from choosing bottom by threatening to choose the right column if the latter does so.
Again, Mat cannot expect a payoff higher than one. In short, no matter how Mat conditionally commits, he will either enable Col to threaten Row into playing top or fail to lure Row into playing the bottom row.

4.2 Benign Backward Induction

The solutions extortions provide can also be obtained by modeling the situation as an extensive form game and applying a backward inductive type of argument. The actions of the players in any such extensive form game are then given by their conditional commitments, which they choose sequentially. For higher types of commitment, such as conditional commitments, such 'meta-games', however, grow exponentially in the number of strategies available to the players and are generally much larger than the original game. Rather, the correspondence between the backward induction solutions in the meta-game and the extortions of the original strategic game signifies that the concept of an extortion is defined properly. First we define the concept of benign backward induction in general, relative to a game in strategic form together with an ordering of the players. Intuitively it reflects the idea that each player chooses, for each possible combination of actions of his predecessors, the action that yields the highest payoff, given that his successors do similarly. The concept is called benign backward induction because it implies that a player, when indifferent between a number of actions, chooses the one that benefits his predecessors most. For an ordering π of the players, we let π^R denote its reversal (π_n, . . . , π_1).

Definition 4.2. (Benign backward induction) Let G be a strategic game and π an ordering of its players. A benign backward induction of order 0 is any conditional commitment profile f subject to π.
For m > 0, a conditional commitment profile f is a benign backward induction (solution) of order m if f is a benign backward induction of order m − 1 and

(g_{π^R_n}, . . . , g_{π^R_{m+1}}, g_{π^R_m}, . . . , g_{π^R_1}) ≼_{π^R_m} (g_{π^R_n}, . . . , g_{π^R_{m+1}}, f_{π^R_m}, . . . , f_{π^R_1})

for any benign backward induction (g_{π^R_n}, . . . , g_{π^R_1}) of order m − 1, conditional commitment profiles again being compared according to the strategy profiles they determine. A conditional commitment profile f is a benign backward induction if it is a benign backward induction of order k for each k with 0 ≤ k ≤ n.

For games with a finite action set for each player, the following result follows straightforwardly from Kuhn's Theorem (cf. [6, p. 99]). In particular, this result holds if the players' actions are commitments of a finite type.

Fact 4.3. For each finite game and each ordering of the players, benign backward inductions exist.

For each game, each ordering of its players and each commitment type, we can define another game G∗ with the actions of each player i given by his τ-commitments X_i in G. The utility of a strategy profile (x_{π1}, . . . , x_{πn}) for a player i in G∗ can then be equated to his utility of the strategy profile φ(x_{πn}, . . . , x_{π1}) in G. We now find that the extortions of G can be retrieved as the paths of the benign backward induction solutions of the game G∗ for the ordering π^R of the players, provided that the commitment type is finite.

Theorem 4.4. Let G = (N, (A_i)_{i∈N}, (u_i)_{i∈N}) be a game and π an ordering of its players with which the finite commitment type τ associates the tuple ⟨X_{π1}, . . . , X_{πn}, φ⟩. Let further G∗ = ⟨N, (X_{πi})_{i∈N}, (u∗_{πi})_{i∈N}⟩, where u∗_{πi}(x_{πn}, . . . , x_{π1}) = u_{πi}(φ(x_{π1}, . . . , x_{πn})) for each τ-commitment profile (x_{π1}, . . . , x_{πn}). Then, a τ-commitment profile (x_{π1}, . . . , x_{πn}) is a τ-extortion in G given π if and only if there is some benign backward induction f in G∗ given π^R whose path is (x_{πn}, . . . , x_{π1}).

Proof. Assume that f is a benign backward induction in G∗ relative to π^R. Then the path of f is (x_{πn}, . . . , x_{π1}) for some commitment profile (x_{π1}, . . . , x_{πn}) of G relative to π. We show by induction that (x_{π1}, . . . , x_{πn}) is an extortion of order m, for all m with 0 ≤ m ≤ n. For m = 0, the proof is trivial. For the induction step, consider an arbitrary commitment profile (y_{π1}, . . . , y_{πn}) such that (y_{π1}, . . . , y_{πm}, x_{πm+1}, . . . , x_{πn}) is an extortion of order m − 1. In virtue of the induction hypothesis, there is a benign backward induction g of order m − 1 in G∗ with path (x_{πn}, . . . , x_{πm+1}, y_{πm}, . . . , y_{π1}). As f is also a benign backward induction of order m,

(g_{πn}, . . . , g_{π1}) ≼∗_{πm} (g_{πn}, . . . , g_{πm+1}, f_{πm}, . . . , f_{π1}).

Hence, (x_{πn}, . . . , x_{πm+1}, y_{πm}, . . . , y_{π1}) ≼∗_{πm} (x_{πn}, . . . , x_{π1}). By definition of u∗_{πm}, then also

φ(y_{π1}, . . . , y_{πm}, x_{πm+1}, . . . , x_{πn}) ≼_{πm} φ(x_{π1}, . . . , x_{πn}).

We may conclude that x is an extortion of order m.

For the only-if direction, assume that x is an extortion of G given π. We prove that there is a benign backward induction f^(∗) in G∗ for π^R whose path is (x_{πn}, . . . , x_{π1}). In virtue of Fact 4.3, there is a benign backward induction h in G∗ given π^R. Now define f^(∗) in such a way that f^(∗)_{πi}(z_{πn}, . . . , z_{πi+1}) = x_{πi} if (z_{πn}, . . . , z_{πi+1}) = (x_{πn}, . . . , x_{πi+1}), and f^(∗)_{πi}(z_{πn}, . . . , z_{πi+1}) = h_{πi}(z_{πn}, . . . , z_{πi+1}) otherwise. We prove by induction on m that f^(∗) is a benign backward induction of order m, for each m with 0 ≤ m ≤ n. The basis is trivial. So assume that f^(∗) is a benign backward induction of order m − 1 in G∗ given π^R and consider an arbitrary benign backward induction g of order m − 1 in G∗ given π^R with path (y_{πn}, . . . , y_{π1}). Either (y_{πn}, . . . , y_{πm+1}) = (x_{πn}, . . . , x_{πm+1}), or this is not the case. If the latter, it can readily be appreciated that

(g_{πn}, . . . , g_{πm+1}, f^(∗)_{πm}, . . . , f^(∗)_{π1}) = (g_{πn}, . . . , g_{πm+1}, h_{πm}, . . . , h_{π1}).

Having assumed that h is a benign backward induction, subsequently (g_{πn}, . . . , g_{π1}) ≼∗_{πm} (g_{πn}, . . . , g_{πm+1}, h_{πm}, . . . , h_{π1}), and so (g_{πn}, . . . , g_{π1}) ≼∗_{πm} (g_{πn}, . . . , g_{πm+1}, f^(∗)_{πm}, . . . , f^(∗)_{π1}). Hence, f^(∗) is a benign backward induction of order m. In the former case the reasoning is slightly different. Then, the path of g is (x_{πn}, . . . , x_{πm+1}, y_{πm}, . . . , y_{π1}). It follows that

(g_{πn}, . . . , g_{πm+1}, f^(∗)_{πm}, . . . , f^(∗)_{π1}) = (f^(∗)_{πn}, . . . , f^(∗)_{π1}) = (x_{πn}, . . . , x_{π1}).

In virtue of the induction hypothesis, (y_{π1}, . . . , y_{πn}) is an extortion of order m − 1 in G given π. As the reasoning takes place under the assumption that x is an extortion in G given π, we also have

φ(y_{π1}, . . . , y_{πm}, x_{πm+1}, . . . , x_{πn}) ≼_{πm} φ(x_{π1}, . . . , x_{πn}).

Then, (x_{πn}, . . . , x_{πm+1}, y_{πm}, . . . , y_{π1}) ≼∗_{πm} (x_{πn}, . . . , x_{π1}) by definition of u∗. We may conclude that

(g_{πn}, . . . , g_{π1}) ≼∗_{πm} (g_{πn}, . . . , g_{πm+1}, f^(∗)_{πm}, . . . , f^(∗)_{π1}),

signifying that f^(∗) is a benign backward induction of order m.

As an immediate consequence of Theorem 4.4 and Fact 4.3, we also have the following result.

Corollary 4.5. Let τ be a finite commitment type. Then, τ-extortions exist for each strategic game and for each ordering of the players.

4.3 Commitment Order

In the case of unconditional commitments, it is not always favorable to be the first to commit. This is well illustrated by the familiar game rock-paper-scissors. If, on the other hand, the players are in a position to make conditional commitments in this particular game, moving first is an advantage. More generally, we find that it can never harm to move first in a two-player game with conditional commitments.

Theorem 4.6. Let G be a two-player strategic game involving player i. Further let f be an extortion of G in which i commits first, and g an extortion in which i commits second. Then, g ≼_i f, extortions being compared according to the strategy profiles they determine.

Proof sketch. Let f be a conditional extortion in G given π. It suffices to show that there is some conditional extortion h of order 1 in G given π that determines the same strategy profile as f. Assume for a contradiction that there is no such extortion of order 1 in G given π. Then there must be some b∗ ∈ A_j such that f ≺_j (b∗, a) for all a ∈ A_i. (Otherwise we could define (g_j, g_i) such that g_j = f_j(f_i), g_i(g_j) = f_i, and for any other b ∈ A_j, g_i(b) = a∗, where a∗ is an action in A_i such that (b, a∗) ≼_j f. Then g would be an extortion of order 1 in G given π determining the same strategy profile as f.)
Now consider a conditional commitment profile h for G and π such that h_j(a) = b∗ for all a ∈ A_i. Let further h_i be such that (a, b∗) ≼_i (h_i, b∗) for all a ∈ A_i. Then, h is an extortion of order 1 in G given π. Observe that h determines the strategy profile (h_i, b∗). Hence, f ≺_j h, contradicting the assumption that f is an extortion in G given π.

Theorem 4.6 does not generalize to games with more than two players. Consider the three-player game in Figure 7, with extensive forms as in Figure 8. Here, Row and Mat have identical preferences. The latter's extortionate powers relative to Col, however, are very weak if he is to commit first: any conditional commitment he makes puts Col in a situation in which she can enforce a payoff of two, leaving Mat and Row in the cold with a payoff of one. However, if Mat is last to commit and Row first, then the latter can exploit his strategic powers, threaten Col so that she plays left, and guarantee both himself and Mat a payoff of two.

4.4 Pareto Efficiency

Another issue concerns the Pareto efficiency of the strategy profiles extortionable through conditional commitments. We say that a strategy profile s (weakly) Pareto dominates another strategy profile t if t ≼_i s for all players i and t ≺_i s for some. Moreover, a strategy profile s is (weakly) Pareto efficient if it is not (weakly) Pareto dominated by any other strategy profile. We extend this terminology to conditional commitment profiles by saying that a conditional commitment profile f is (weakly) Pareto efficient, or (weakly) Pareto dominates another conditional commitment profile, if the strategy profile it determines is or does so. We now have the following result.

  Left matrix:              Right matrix:
  (0, 1, 0)  (0, 0, 0)      (2, 1, 2)  (0, 0, 0)
  (0, 0, 0)  (1, 2, 1)      (0, 0, 0)  (1, 2, 1)

Figure 7: A three-person game.

[Figure 8: It is not always better to commit early than late, even in the case of conditional or inductive commitments; the payoff trees are omitted here.]

Theorem 4.7. In each game, Pareto efficient conditional extortions exist. Moreover, any strategy profile that Pareto dominates an extortion is also extortionable through a conditional commitment.

Proof sketch. Since, in virtue of Corollary 4.5, extortions generally exist in each game, it suffices to recognize that the second claim holds. Let s be the strategy profile (s_{π1}, . . . , s_{πn}) and let the conditional extortion f be Pareto dominated by s. An extortion g determining s can then be constructed by adopting all threats of f while promising s. I.e., for all players π_i we have g_{πi}(s_{π1}, . . . , s_{πi−1}) = s_{πi} and g_{πi}(t_{π1}, . . . , t_{πi−1}) = f_{πi}(t_{π1}, . . . , t_{πi−1}) for all other (t_{π1}, . . . , t_{πi−1}). As s Pareto dominates the profile f determines, the threats of f remain effective as threats of g given that s is being promised.

This result hints at a difference between (benign) backward induction and extortions. In general, solutions of benign backward inductions can be Pareto dominated by outcomes that are not benign backward induction solutions. Therefore, although every extortion can be seen as a benign backward induction in a larger game, it is not the case that all formal properties of extortions are shared by benign backward inductions in general.

5. OTHER COMMITMENT TYPES

Conditional and unconditional commitments are only two possible commitment types. The definition also provides for types of commitment that allow for committing on commitments, thus achieving a finer adjustment of promises and threats. Similarly, it subsumes commitments on and to mixed strategies.
In this section we comment on some of these possibilities.

5.1 Inductive Commitments

Apart from making commitments conditional on the actions of the players to commit later, one could also commit on the commitments of the following players. Informally, such commitments would have the form of 'if you only dare to commit in such and such a way, then I do such and such; otherwise I promise to act so and so'.

For a strategic game G and an ordering π of the players, we define the inductive commitments of the players inductively. The inductive commitments available to π1 coincide with the actions that are available to him. An inductive commitment for player π_{i+1} is a function mapping each profile of inductive commitments of players π1 through π_i to one of his basic actions. Formally, we define the type of inductive commitments ⟨F_{π1}, . . . , F_{πn}, φ⟩ such that, for each player π_i in a game G and given π,

  F_{π1} =df A_{π1},
  F_{π_{i+1}} =df A_{π_{i+1}}^(F_{π1} × · · · × F_{πi}).

Let f̄_{πi} = f_{πi}(f̄_{π1}, . . . , f̄_{πi−1}) for each player π_i, and let φ map each inductive commitment profile f to the pure strategy profile (f̄_{π1}, . . . , f̄_{πn}).

Inductive commitments have a greater extortionate power than conditional commitments. To appreciate this, consider once more the game in Figure 5. We found that the strategy profile in which Row chooses bottom and Col and Mat both choose left is not extortionable through conditional commitments. By means of inductive commitments, however, this is possible.
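The spaces F_{πi} defined above grow very rapidly. For a game with binary action sets and ordering (Row, Col, Mat), as in Figure 5, their sizes can be computed directly; the sketch below merely counts functions and is illustrative only.

```python
# Sizes of the inductive commitment spaces for binary action sets and
# ordering (Row, Col, Mat):
#   F_Row = A_Row,  F_Col = A_Col^(F_Row),  F_Mat = A_Mat^(F_Row x F_Col).
a_row = a_col = a_mat = 2           # |A_Row| = |A_Col| = |A_Mat| = 2

f_row = a_row                       # 2 inductive commitments for Row
f_col = a_col ** f_row              # 2^2 = 4 functions F_Row -> A_Col
f_mat = a_mat ** (f_row * f_col)    # 2^(2*4) = 256 functions F_Row x F_Col -> A_Mat

print(f_row, f_col, f_mat)          # 2 4 256
```

Already for three players with two actions each, the first committer chooses among 256 inductive commitments, which illustrates why such meta-games are much larger than the original game.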
Let f be the inductive commitment profile such that f_Row is Row choosing the bottom row (b), f_Col is the column player choosing the left column (l) no matter how Row decides, and f_Mat is defined such that

  f_Mat(g_Row, g_Col) = r, if g_Row = t and g_Col(b) = r,
  f_Mat(g_Row, g_Col) = l, otherwise.

Instead of showing formally that f is an inductive extortion of the strategy profile (b, l, l), we point out informally how this can be done. We argued that in order to exact a payoff of two by means of a conditional extortion, Mat would have to lure Row into choosing the bottom row without at the same time putting Col in a position to successfully threaten Row into choosing top. This, we found, is an impossibility if the players can only make conditional commitments. By contrast, if Mat can commit to commitments, he can undermine Col's efforts to threaten Row by playing the right matrix if Col were to do so. Yet, Mat can still force Row to choose the bottom row in case Col desists from making this threat.

As can readily be observed, in any game the inductive commitments of the first two players to commit coincide with their conditional commitments. Hence, as an immediate consequence of Theorem 4.6, it can never harm to be the first to commit to an inductive commitment in the two-player case. Similarly, we find that the game depicted in Figure 7 also serves as an example showing that, in case there are more than two players, it is not always better to commit to an inductive commitment early. In this example the strategic position of Mat is so weak if he is to commit first that even the possibility to commit inductively does not strengthen it, whereas, in a similar fashion as with conditional commitments, Row can enforce a payoff of two for both himself and Mat if he is the first to commit.

5.2 Mixed Commitment Types

So far we have merely considered commitments to and on pure strategies.
A natural extension would be also to consider commitments to and on mixed strategies. We distinguish between conditional, unconditional, as well as inductive mixed commitments. We find that they are generally quite incomparable with their pure counterparts: in some situations a player can achieve more using a mixed commitment, in others using a pure commitment type. A complicating factor with mixed commitment types is that they can result in a mixed strategy profile being played. This means that the distinction between promises and threats, as delineated in Section 4.1, becomes blurred for mixed commitment types.

The type of mixed unconditional commitments associates with each game G and ordering π of its players the tuple ⟨Σ_{π1}, . . . , Σ_{πn}, id⟩. The two-player case has been extensively studied (e.g., [2, 16]). As a matter of fact, von Neumann's famous minimax theorem shows that for two-player zero-sum games it does not matter which player commits first. If the second player to commit plays a mixed strategy that ensures his security level, the first player to commit can do no better than to do so as well [14].

In the game of Figure 5 we found that, with conditional commitments, Mat is unable to enforce an outcome that awards him a payoff of two. Recall that the reason for this failure is that any effort to deter Row from choosing the top row is flawed, as it would put Col in an excellent position to threaten Row not to choose the bottom row. If Mat has inductive commitments at his disposal, however, this is a possibility. We now find that in case the players can dispose of unconditional mixed strategies, Mat is in a similar position. He could randomize uniformly between the left and right matrix. Then, Row's expected utility is 2.5 if he plays the top row, no matter how Col randomizes.
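This expected-utility arithmetic can be checked directly against the payoffs of Figure 5. The sketch below fixes Mat's uniform randomization over the two matrices and computes Row's expected payoff for each pure choice of Row and Col; all names are illustrative.

```python
# Payoffs (row, col, mat) of the game in Figure 5, indexed by
# (row_action, col_action, matrix); 'L' and 'R' are Mat's two matrices.
U = {('t', 'l', 'L'): (1, 4, 0), ('t', 'r', 'L'): (1, 4, 0),
     ('b', 'l', 'L'): (3, 3, 2), ('b', 'r', 'L'): (0, 0, 2),
     ('t', 'l', 'R'): (4, 1, 1), ('t', 'r', 'R'): (4, 0, 0),
     ('b', 'l', 'R'): (3, 3, 2), ('b', 'r', 'R'): (0, 0, 2)}

def expected_row(row, col):
    # Mat randomizes uniformly over the left and right matrix.
    return sum(0.5 * U[(row, col, m)][0] for m in 'LR')

# Row's expected payoff from the top row is 2.5 against either column,
# hence against any randomization by Col ...
print(expected_row('t', 'l'), expected_row('t', 'r'))   # 2.5 2.5
# ... while against Col's pure left column, bottom yields 3.
print(expected_row('b', 'l'))                           # 3.0
```

Since the bottom row against the left column yields Row an expected 3 > 2.5, Col can entice Row to play bottom, as the text goes on to observe.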
The expected payoff of Col does not exceed 2 1/2 either, in case Row chooses top. By purely committing to the left column, Col entices Row to play bottom, as his expected utility then amounts to 3. This ensures an expected utility of three for Col as well.\nHowever, a player is not always better off with unconditional mixed commitments than with pure conditional commitments. For an example, consider the game in Figure 2. Using pure conditional commitments, he can ensure a payoff of three, whereas with unconditional mixed commitments 2 1/2 would be the most he could achieve. Neither is it in general advantageous to commit first to a mixed strategy in a three-player game. To appreciate this, consider once more the game in Figure 7. Again, committing to a mixed strategy will not achieve much for Mat if he is to move first, and as before the other players have no reason to commit to anything other than a pure strategy. This holds for all players if Row commits first, Col second and Mat last, be it that in this case Mat obtains the best payoff he can get.\nAnalogous to conditional and inductive commitments, one can also define the types of mixed conditional and mixed inductive commitments. With the former, a player can condition his mixed strategies on the mixed strategies of the players to commit after him. These tend to be very large objects and, knowing little about them yet, we shelve their formal analysis for future research. Conceptually, it might not be immediately clear how such mixed conditional commitments can be made with credibility. For one, when one's commitments are conditional on a particular mixed strategy being played, how can it be recognized that it was in fact this mixed strategy that was played rather than another one? If this proves to be impossible, how can one know how one's conditional commitment is to be effectuated?
A possible answer would be that all depends on the circumstances in which the commitments were made. E.g., if the different agents can submit their mixed conditional commitments to an independent party, the latter can execute the randomizations and determine the unique mixed strategy profile that their commitments induce.\n6. SUMMARY AND CONCLUSION\nIn some situations agents can strengthen their strategic position by committing themselves to a particular course of action. There are various types of commitment, e.g., pure, mixed and conditional. Which type of commitment an agent is in a position to make essentially depends on the situation under consideration. If the agents commit in a particular order, there is a tactic common to making commitments of any type, which we have formalized by means of the concept of an extortion. This generic concept of extortion can be analyzed in abstracto. Moreover, on its basis the various commitment types can be compared formally and systematically.\nWe have seen that the type of commitment an agent can make has a profound impact on what an agent can achieve in a game-like situation. In some situations a player is much helped if he is in a position to commit conditionally, whereas in others mixed commitments would be more profitable. This raises the question as to the characteristic formal features of the situations in which it is advantageous for a player to be able to make commitments of a particular type.\nAnother issue which we leave for future research is the computational complexity of finding an extortion for the different commitment types.\n7. REFERENCES\n[1] A. K. Chopra and M. Singh. Contextualizing commitment protocols. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1345-1352. ACM Press, 2006.\n[2] V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to.
In Proceedings of the 7th ACM Conference on Electronic Commerce (ACM-EC), pages 82-90. ACM Press, 2006.\n[3] J. C. Harsanyi. A simplified bargaining model for the n-person cooperative game. International Economic Review, 4(2):194-220, 1963.\n[4] R. D. Luce and H. Raiffa. Games and Decisions: Introduction and Critical Survey. Wiley, 1957.\n[5] J. Nash. Two-person cooperative games. Econometrica, 21:128-140, 1953.\n[6] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.\n[7] D. Samet. How to commit to cooperation, 2005. Invited talk at the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS).\n[8] T. Sandholm and V. Lesser. Leveled-commitment contracting. A backtracking instrument for multiagent systems. AI Magazine, 23(3):89-100, 2002.\n[9] T. C. Schelling. The Strategy of Conflict. Harvard University Press, 1960.\n[10] R. Selten. Spieltheoretische Behandlung eines Oligopolmodells mit Nachfragetr\u00e4gheit. Zeitschrift f\u00fcr die gesamte Staatswissenschaft, 121:301-324, 1965.\n[11] M. P. Singh. An ontology for commitments in multiagent systems: Toward a unification of normative concepts. Artificial Intelligence and Law, 7(1):97-113, 1999.\n[12] M. Tennenholtz. Program equilibrium. Games and Economic Behavior, 49:363-373, 2004.\n[13] E. van Damme and S. Hurkens. Commitment robust equilibria and endogenous timing. Games and Economic Behavior, 15:290-311, 1996.\n[14] J. von Neumann and O. Morgenstern. The Theory of Games and Economic Behavior. Princeton University Press, 1944.\n[15] H. von Stackelberg. Marktform und Gleichgewicht. Julius Springer Verlag, 1934.\n[16] B. von Stengel and S. Zamir. Leadership with commitment to mixed strategies. CDAM Research Report LSE-CDAM-2004-01, London School of Economics, 2003.
", "keywords": "game theory;strategic position;credibility;distributed computing;decision making;freedom of action;commitment;electronic market;induction hypothesis;stackleberg setting;optimal conditional commitment;sequential commitment type;extortion;pareto efficient conditional extortion;action freedom;multiagent system;pareto efficiency"}
-{"name": "test_I-9", "title": "Temporal Linear Logic as a Basis for Flexible Agent Interactions", "abstract": "Interactions between agents in an open system such as the Internet require a significant degree of flexibility. A crucial aspect of the development of such methods is the notion of commitments, which provides a mechanism for coordinating interactive behaviors among agents. In this paper, we investigate an approach to model commitments with tight integration with protocol actions. This means that there is no need to have an explicit mapping from protocol actions to operations on commitments and an external mechanism to process and enforce commitments. We show how agents can reason about commitments and protocol actions to achieve the end results of protocols using a reasoning system based on temporal linear logic, which incorporates both temporal and resource-sensitive reasoning. We also discuss the application of this framework to scenarios such as online commerce.", "fulltext": "1. INTRODUCTION AND MOTIVATION\nRecently, software development has evolved toward the development of intelligent, interconnected systems working in a distributed manner. The agent paradigm has become well suited as a design metaphor to deal with complex systems comprising many components, each having their own thread of control and purposes and involved in dynamic and complex interactions.\nIn multi-agent environments, agents often need to interact with each other to fulfill their goals. Protocols are used to regulate interactions. In traditional approaches to protocol specification, like those using Finite State Machines or Petri Nets, protocols are often predetermined legal sequences of interactive behaviors. In frequently changing environments such as the Internet, such fixed sequences can quickly become outdated and are prone to failure.
Therefore, agents are required to adapt their interactive behaviors to succeed, and interactions among agents should not be constructed rigidly.\nTo achieve flexibility, as characterized by Yolum and Singh in [11], interaction protocols should ensure that agents have autonomy over their interactive behaviors, and be free from any unnecessary constraints. Also, agents should be allowed to adjust their interactive actions to take advantage of opportunities or handle exceptions that arise during interaction.\nFor example, consider the scenario below for online sales. A merchant Mer has 200 cricket bats available for sale with a unit price of 10 dollars. A customer Cus has $50. Cus has a goal of obtaining from Mer a cricket bat at some time. There are two options for Cus to pay. If Cus uses credit payment, Mer needs a bank EBank to check Cus's credit. If Cus's credit is approved, EBank will arrange the credit payment. Otherwise, Cus may then take the option to pay via PayPal. The interaction ends when goods are delivered and payment is arranged.\nA flexible approach to this example should include several features. Firstly, the payment method used by Cus should be at Cus's own choice and have the property that if Cus's credit check results in a disapproval, this exception should also be handled automatically by Cus's switching to PayPal. Secondly, there should be no unnecessary constraint on the order in which actions are performed, such as which of making payments and sending the cricket bat should come first. Thirdly, choosing a sequence of interactive actions should be based on reasoning about the intrinsic meanings of protocol actions, which are based on the notion of commitment, i.e. a strong promise to other agent(s) to undertake some course of action.\nCurrent approaches [11, 12, 10, 1] to achieve flexibility using the notion of commitment make use of an abstract layer of commitments.
However, in these approaches, a mapping from protocol actions onto operations on commitments as well as handling and enforcement mechanisms of commitments must be externally provided.\n124\n978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nExecution of protocol actions also requires concurrent execution of operations on related commitments. As a result, the overhead of processing the commitment layer makes specification and execution of protocols more complicated and error prone. There is also a lack of a logic to naturally express aspects of resources, internal and external choices, as well as time, for protocols.\nRather than creating another layer of commitment outside protocol actions, we try to achieve a modeling of commitments that is integrated with protocol actions. Both commitments and protocol actions can then be reasoned about in one consistent system. In order to achieve that, we specify protocols in a declarative manner, i.e. what is to be achieved rather than how agents should interact. A key to this is using logic. Temporal logic, in particular, is suitable for describing and reasoning about temporal constraints, while linear logic [3] is quite suitable for modeling resources. We suggest using a combination of linear logic and temporal logic to construct a commitment based interaction framework which allows both temporal and resource-related reasoning for interaction protocols. This provides a natural manipulation and reasoning mechanism as well as internal enforcement mechanisms for commitments based on proof search.\nThis paper is organized as follows. Section 2 discusses the background material of linear logic, temporal linear logic and commitments. Section 3 introduces our modeling framework and specification of protocols.
Section 4 discusses how our framework can be used for an example of online sale interactions between a merchant, a bank and a customer. We then discuss the advantages and limitations of using our framework to model interaction protocols and achieve flexibility in Section 5. Section 6 presents our conclusions and items of further work.\n2. BACKGROUND\nIn order to increase the agents' autonomy over their interactive behaviors, protocols should be specified in terms of what is to be achieved rather than how the agents should act. In other words, protocols should be specified in a declarative manner. Using logic is central to this specification process.\n2.1 Linear Logic\nLogic has been used as a formalism to model and reason about agent systems. Linear logic [3] is well-known for modeling resources as well as updating processes. It has been considered in agent systems to support agent negotiation and planning by means of proof search [5, 8].\nIn real life, resources are consumed and new resources are created. In logics such as classical or temporal logic, however, a direct mapping of resources onto formulas is troublesome. If we model resources like A as one dollar and B as a chocolate bar, then A \u21d2 B in classical logic is read as from one dollar we can get a chocolate bar. This causes problems, as the implication allows one to get a chocolate bar (B is true) while retaining one dollar (A remains true).\nIn order to resolve such resource-formula mapping issues, Girard proposed constraints under which formulas are used exactly once and can no longer be freely added or removed in derivations, hence treating linear logic formulas as resources.
In linear logic, a linear implication A \u22b8 B, however, allows A to be removed after deriving B, which means the dollar is gone after using one dollar to buy a chocolate bar.\nClassical conjunction (and) and disjunction (or) are recast over different uses of contexts - multiplicative as combining and additive as sharing - to come up with four connectives. A \u2297 A (multiplicative conjunction) means that one has two As at the same time, which is different from A \u2227 A = A. Hence, \u2297 allows a natural expression of proportion. A \u2118 B (multiplicative disjunction) means that if not A then B, or vice versa, but not both A and B.\nThe ability to specify choices via the additive connectives is a particularly useful feature of linear logic. A & B (additive conjunction) stands for one's own choice, either of A or B but not both. A \u2295 B (additive disjunction) stands for the possibility of either A or B, but we don't know which. As remarked in [5], & and \u2295 allow choices to be made clear between internal choices (one's own) and external choices (others' choice). For instance, to specify that the choice of places A or B for goods' delivery is ours as the supplier, we use A & B; if it is the client's, we use A \u2295 B.\nIn agent systems, this duality between inner and outer choices is manifested by one agent having the power to choose between alternatives and the other having to react to whatever choice is made.\nMoreover, during interaction, the ability to match consumption and supply of resources among agents can simplify the specification of resource allocations. Linear logic is a natural mechanism to provide this ability [5].
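The one-dollar/chocolate-bar reading of linear implication can be sketched as a multiset rewrite. This is a simplification we introduce only for illustration (the helper `apply_lolli` and the atom names are ours, not a proof system for linear logic):

```python
from collections import Counter

# A state is a multiset of atoms; A (x) B is multiset union, and a linear
# implication Gamma -o Delta consumes Gamma and produces Delta.

def apply_lolli(state, gamma, delta):
    """Apply Gamma -o Delta: fail unless Gamma is present, then replace it."""
    if any(state[a] < n for a, n in gamma.items()):
        return None                        # premises not available
    new = state.copy()
    new.subtract(gamma)
    new.update(delta)
    return +new                            # unary + drops zero counts

state = Counter({"dollar": 1})
buy = ({"dollar": 1}, {"chocolate": 1})    # dollar -o chocolate
state = apply_lolli(state, *buy)
assert state == Counter({"chocolate": 1})  # the dollar is consumed
assert apply_lolli(state, *buy) is None    # cannot buy again: no dollar left
```

Unlike the classical reading A ⇒ B, the dollar cannot be retained: applying the implication a second time fails because the resource is gone.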
In addition, it is emphasized in [8] that linear logic is used to model agent states as sets of consumable resources; in particular, linear implication is used to model transitions among states and capabilities of agents.\n2.2 Temporal Linear Logic\nWhile linear logic provides advantages for modeling and reasoning about resources, it does not deal naturally with time constraints. Temporal logic, on the other hand, is a formal system which addresses the description of, and reasoning about, the changes of truth values of logic expressions over time [2]. Temporal logic can be used for specification and verification of concurrent and reactive programs [2].\nTemporal Linear Logic (TLL) [6] is the result of introducing temporal logic into linear logic and hence is resource-conscious as well as dealing with time. The temporal operators used are \u25cb (next), \u25a1 (anytime), and \u25c7 (sometime) [6]. Formulas with no temporal operators can be considered as being available only at present. Adding \u25cb to a formula A, i.e. \u25cbA, means that A can be used only at the next time and exactly once. Similarly, \u25a1A means that A can be used at any time and exactly once. \u25c7A means that A can be used once at some time.\nThough both \u25a1 and \u25c7 refer to a point in time, the choice of which time is different. Regarding \u25a1, the choice is an internal choice, as appropriate to one's own capability. With \u25c7, the choice is externally decided by others.\n2.3 Commitment\nThe concept of social commitment has been recognized as fundamental to agent interaction. Indeed, social commitment provides intrinsic meanings of protocol actions and states [11]. In particular, persistence in commitments introduces into agents' consideration a certain level of predictability of other agents' actions, which is important when agents deal with issues of inter-dependencies, global constraints or
resources sharing [7].\nCommitment based approaches associate protocol actions with operations on commitments and protocol states with the set of effective commitments [11]. Completing the protocol is done via means-end reasoning on commitment operations to bring the current state to final states where all commitments are resolved. From there, the corresponding legal sequences of interactive actions are determined. Hence, the approaches systematically enhance a variety of legal computations [11].\nCommitments can be reduced to a more fundamental form known as pre-commitments. A pre-commitment here refers to a potential commitment that specifies what the owner agent is willing to commit to [4], like performing some actions or achieving a particular state. Agents can negotiate about pre-commitments by sending proposals of them to others. The others can respond by agreeing or disagreeing with the proposal, or by proposing another pre-commitment. Once a pre-commitment is agreed, it then becomes a commitment and the process moves from the negotiation phase to the commitment phase, in which the agents act to fulfill their commitments.\n3. MODELING AGENT INTERACTIONS\nProtocols are normally viewed as external to agents and are essentially a set of commitments externally imposed on participating agents. We take an internal view of protocols, i.e. from the view of the participating agents, by putting the specification of commitments locally at the respective agents according to their roles.\nSuch an approach enables agents to manage their own protocol commitments. Indeed, agents no longer accept and follow a given set of commitments but can reason about which commitments of theirs to offer and which commitments of others to take, while considering the current needs and the environment. Protocols arise as commitments are then linked together via agents' reasoning based on proof search during the interaction.
Also, ongoing changes in the environment are taken as input into the generation of protocols by agent reasoning. This is the reverse of other approaches, which try to make the specification flexible to accommodate changes in the environment. Hence, it is a step closer to enabling emergent protocols, which makes protocols more dynamic and flexible to the context.\nIn a nutshell, services are what agents are capable of providing to other agents. Commitments can then be seen to arise from combinations of services, i.e. an agent's capabilities. Hence, our approach shifts specifying a set of protocol commitments to specifying sets of pre-commitments as capabilities for each agent. Commitments can then be reasoned about and manipulated by the same logic mechanism as is used for the agents' actions, resources and goals, which greatly simplifies the system.\nOur framework uses TLL as a means of specifying interaction protocols. We encode various concepts such as resource, capability and commitment in TLL. The symmetry between a formula and its negation in TLL is explored as a way to model resources and commitments. We then discuss the central role of pre-commitments, and how they are specified at each participating agent. It then remains for agents to reason about pre-commitments to form protocol commitments, which are subsequently discharged.\n3.1 Modeling resources and capabilities\nA unit of consumable resources is modeled as a proposition in linear logic. Numeric figures can be used to abbreviate a multiplicative conjunction of the same instances. For example, 2 dollar = dollar \u2297 dollar. Moreover, 3A is a shorthand for A \u2297 A \u2297 A.\nIn order to address the dynamic manipulation of resources, we also include information about location and ownership in the encoding of resources, to address the relocation and changes in possession of resources during agent interaction. That resource A is located at agent \u03b1 and owned by agent \u03b2 is expressed via the shorthand notation A@\u03b1_\u03b2, which is treated as a logic proposition in our framework. This notation can later be extended to a more complex logic construct to reason about changes in location and ownership.\nIn our running example, a cricket bat cricket_b being located at and owned by agent Mer is denoted as cricket_b@M_M. After a successful sale to the customer agent Cus, the cricket bat will be relocated to and owned by agent Cus. The formula cricket_b@C_C will replace the formula cricket_b@M_M to reflect the changes.\nOur treatment of unlimited resources is to model them as a number \u03c3 of copies of the resource's formula, where \u03c3 is chosen to be extremely large relative to the context. For instance, to indicate that the merchant Mer can issue an unlimited number of sale quotes at any time, we use \u03c3 \u25a1sale_quote@M_M.\nDeclaration of actions is also modeled in a similar manner as that of resources.\nThe capabilities of agents refer to producing, consuming, relocating and changing ownership of resources. Capabilities are represented by describing the state before and after performing them. The general representation form is \u0393 \u22b8 \u0394, in which \u0393 describes the conditions before and \u0394 describes the conditions after. The linear implication of linear logic indeed ensures that the conditions before will be transformed into the conditions after. Moreover, some capabilities can be applied any number of times in the interaction context, and their formulas are also preceded by the number \u03c3.\nTo take an example, we consider the capability of agent Mer of selling a cricket bat for 10 dollars. The conditions before are 10 dollars and a payment method from agent Cus: 10$@C_C \u2297 pay_m@C_C. Given these, by applying the capability, Mer will gain 10 dollars (10$@M_M) and commit to providing a cricket bat (cricket_b@M_M\u22a5) so that Cus will get a cricket bat (\u25c7cricket_b@C_C). Together, the capability is encoded as 10$@C_C \u2297 pay_m@C_C \u22b8 10$@M_M \u2297 cricket_b@M_M\u22a5 \u2297 \u25c7cricket_b@C_C.\n3.2 Modeling commitments\nWe discuss the modeling of various types of commitments, their fulfillments and enforcement mechanisms.\nDue to duality in linear logic, positive formulas can be regarded as formulas in supply and negative formulas can be regarded as formulas in demand. Hence, we take the approach of modeling non-conditional or base commitments as negative formulas. In particular, by turning a formula into its negative form, a base commitment to derive the resources or carry out the actions associated with the formula is created. In the above example, a commitment of agent Mer to provide a cricket bat (cricket_b@M_M) is cricket_b@M_M\u22a5.\nA base commitment is fulfilled (discharged) whenever the committing agent successfully brings about the respective resources or carries out the actions as required by the commitment. In TLL modeling, this means that the corresponding positive formula is derived. Resolution of commitments can then be naturally carried out by inference in TLL. For example, cricket_b@M_M will fulfil the commitment cricket_b@M_M\u22a5 and both formulas are automatically removed, as cricket_b@M_M \u2297 cricket_b@M_M\u22a5 \u22b8 \u22a5.\nUnder the further assumption that agents are expected to resolve all formulas in demand (removing negative formulas), this creates a driving pressure on agents to resolve base commitments. This pressure then becomes a natural and internal enforcement mechanism for base commitments.\nA commitment with conditions (or conditional commitment) can be modeled by connecting the conditions to base commitments via a linear implication.
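The capability application and base-commitment discharge just described can be mimicked in the same multiset view. This is an illustrative sketch of ours, not a TLL proof procedure: atom names such as `cricket_b@M_M` follow the paper's notation, and the trailing `^` marker is our ad hoc stand-in for the negation (⊥) superscript.

```python
from collections import Counter

# A state is a multiset of atoms; "X^" stands for the base commitment X-perp.
def apply_capability(state, before, after):
    """Consume `before`, produce `after`, then discharge any commitment
    X^ against a matching positive atom X (X (x) X^ yields bottom)."""
    if any(state[a] < n for a, n in before.items()):
        return None                        # conditions not available
    new = state.copy()
    new.subtract(before)
    new.update(after)
    for atom in [a for a in new if a.endswith("^")]:
        k = min(new[atom], new[atom[:-1]])
        new[atom] -= k
        new[atom[:-1]] -= k
    return +new                            # drop zero counts

sell = ({"10$@C_C": 1, "pay_m@C_C": 1},
        {"10$@M_M": 1, "cricket_b@M_M^": 1, "cricket_b@C_C": 1})
state = apply_capability(Counter({"10$@C_C": 1, "pay_m@C_C": 1}), *sell)
assert state["cricket_b@M_M^"] == 1        # Mer's base commitment is pending
state = apply_capability(state, {}, {"cricket_b@M_M": 1})
assert state == Counter({"10$@M_M": 1, "cricket_b@C_C": 1})  # discharged
```

Producing the positive atom `cricket_b@M_M` cancels the negative one, mirroring how deriving the positive formula fulfils the base commitment in TLL.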
A general form is \u0393 \u22b8 \u0394, where \u0393 is the condition part and \u0394 includes base commitments. If the condition \u0393 is derived, then by consuming \u0393, the linear implication will ensure that \u0394 results, which means the base commitments in \u0394 become effective. If the conditions cannot be achieved, the linear implication cannot be applied and hence the commitment part in the conditional commitment is still inactive.\nIn our approach, conditional commitments are specified in their potential form as pre-commitments of participating agents. Pre-commitments are negotiated among agents via proposals and, upon being accepted, will form conditional commitments among the engaged agents. Conditional commitments are interpreted such that the condition \u0393 is required of the proposed agent and the commitment part \u0394 is the responsibility of the owner (proposing) agent. Indeed, this interpretation and the encoding of \u22b8 realize the notion of a conditional commitment in which the owner agent is willing to commit to deriving \u0394 given that the proposed agent satisfies the conditions \u0393.\nConditional commitments, pre-commitments and capabilities all have similar encodings. However, their differences lie in the phases of commitment that they are in. Capabilities are used internally by the owner agent and do not involve any commitment. Pre-commitments can be regarded as capabilities intended for forming conditional commitments. Upon being accepted, pre-commitments will turn into conditional commitments and bring the two engaged agents into a commitment phase. As an example, consider that Mer has a capability of selling cricket bats: (10$@C_C \u2297 pay_m@C_C) \u22b8 (10$@M_M \u2297 cricket_b@M_M\u22a5 \u2297 \u25c7cricket_b@C_C). When Mer proposes its capability to Cus, the capability acts as a pre-commitment.
When the proposal gets accepted, that pre-commitment will turn into a conditional commitment in which Mer commits to fulfilling the base commitment cricket_b@M_M\u22a5 (which leads to having \u25c7cricket_b@C_C) upon the condition that Cus derives 10$@C_C \u2297 pay_m@C_C (which leads to having 10$@M_M).\nBreakable commitments, which are in place to provide agents with the desired flexibility to remove themselves from their commitments (cancel commitments), are also modeled naturally in our framework. A base commitment Com\u22a5 is turned into a breakable base commitment (cond \u2295 Com)\u22a5. The extra token cond reflects the agent's internal deliberation about when the commitment to derive Com is broken. Once cond is produced, due to the logic deduction cond \u2297 (cond \u2295 Com)\u22a5 \u22b8 \u22a5, the commitment (cond \u2295 Com)\u22a5 is removed, hence breaking the commitment of deriving Com. Moreover, a breakable conditional commitment is modeled as A \u22b8 (1 & B), instead of A \u22b8 B. When the condition A is provided, the linear implication brings about (1 & B) and it is now up to the owner agent's internal choice whether 1 or B results. If the agent chooses 1, which practically means nothing is derived, then the conditional commitment is deliberately broken.\n3.3 Protocol Construction\nGiven the modeling of various interaction concepts like resource, action, capability, and commitment, we now discuss how protocols can be specified.\nIn our framework, each agent is encoded with the resources, actions, capabilities, pre-commitments and any pending commitments that it has. Pre-commitments, which stem from services the agents are capable of providing, are designated to be fair exchanges. In a pre-commitment, all the requirements of the other party are put in the condition part and all the effects to be provided by the owner agent are put in the commitment part to make up a trade-off.
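The breakable base commitment (cond ⊕ Com)⊥ can be sketched the same way. In this ad hoc encoding of ours (again with `^` standing in for ⊥), producing the token `cond` discharges the commitment without `Com` ever being derived:

```python
from collections import Counter

def discharge_choice(state, commitment, parts):
    """Discharge a commitment (p1 (+) p2)^ by consuming either p1 or p2."""
    for p in parts:
        if state[p] > 0 and state[commitment] > 0:
            new = state.copy()
            new[p] -= 1
            new[commitment] -= 1
            return +new                    # drop zero counts
    return state

# The agent deliberates and emits cond: cond (x) (cond (+) Com)^ -o bottom
s = Counter({"(cond+Com)^": 1, "cond": 1})
s = discharge_choice(s, "(cond+Com)^", ["cond", "Com"])
assert s == Counter()                      # commitment broken; no Com derived
```

Deriving `Com` instead of `cond` would discharge the same commitment by fulfilling it, which is exactly the ⊕-choice the encoding leaves to the owner agent.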
Such a design allows agents to freely propose pre-commitments to any interested parties.\nAn example of a pre-commitment is that of agent Merchant regarding the sale of a cricket bat: [10$@C_C \u2297 pay_m@C_C \u22b8 10$@M_M \u2297 \u25c7cricket_b@C_C \u2297 cricket_b@M_M\u22a5]. The condition is the requirement that the customer agent provides 10 dollars, which is assumed to be the price of a cricket bat, via a payment method. The exchange is the cricket bat for the customer (\u25c7cricket_b@C_C) and hence is fair to the merchant.\nProtocols are specified in terms of sets of pre-commitments at participating agents. Given some initial interaction commitments, a protocol emerges as agents reason about which pre-commitments to offer and accept in order to fulfill these commitments.\nGiven such a protocol specification, we then discuss how interaction might take place. An interaction can start with a request or a proposal. When an agent cannot achieve some commitments by itself, it can make a request of them, or propose a relevant pre-commitment to an appropriate agent to fulfill them. The choice of which pre-commitments depends on whether such pre-commitments can produce the formulas to fulfill the agent's pending commitments.\nWhen an agent receives a request, it searches for pre-commitments that can together produce the required formulas of the request. Those pre-commitments found will be used as proposals to the requesting agents. Otherwise, a failure notice will be returned.\nWhen a proposal is received, the recipient agent also performs a search, with the inclusion of the proposal, for a proof of those formulas that can resolve its commitments. If the search result is positive, the proposal is accepted and becomes a commitment. The recipient then attempts to fulfill the conditions of the commitment. Otherwise, the proposal is refused and no commitment is formed.\nThroughout the interaction, proof search plays a vital role in protocol construction.
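The request-handling step above (finding pre-commitments whose effects cover a pending commitment) can be sketched naively. A real implementation would be a TLL proof search, so the flattened data structures and the helper below are illustrative only:

```python
def relevant_precommitments(pending, precommitments):
    """Pre-commitments whose effect part can produce the pending formula."""
    return [(cond, effect) for cond, effect in precommitments
            if pending in effect]

# Mer's sale rule, with formulas flattened to sets of atoms (our encoding):
mer_rules = [
    ({"10$@C_C", "pay_m@C_C"},          # condition part
     {"10$@M_M", "cricket_b@C_C"}),     # effect part
]

# A request for a cricket bat is answered by proposing the sale rule;
# a request Mer cannot serve yields no proposal (a failure notice).
assert relevant_precommitments("cricket_b@C_C", mer_rules) == mer_rules
assert relevant_precommitments("tennis_racquet@C_C", mer_rules) == []
```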
Proof search reveals that some commitments cannot be resolved locally, or that some pre-commitments can be used to resolve pending commitments, which prompts the agent to make a request or a proposal respectively. Proof search also determines which pre-commitments are relevant to the fulfillment of a request, which helps agents to decide which pre-commitments to propose to answer the request. Moreover, whether or not a received proposal is relevant to any pending commitments is also determined by a search for a proof of these commitments with the inclusion of the proposal. Conditions of proposals can be resolved by proof search as it links them with the agents' current resources and capabilities as well as any relevant pre-commitments. Therefore, it can be seen that proof search performed by participating agents can link up their respective pre-commitments and turn them into commitments as appropriate, which gives rise to protocol formation. We will demonstrate this via our running example in Section 4.\n3.4 Interactive Messages\nAgents interact by sending messages. We address agent interaction in a simple model which contains messages of type request, proposal, acceptance, refusal and failure notice.\nWe denote Source to Destination: prior to each message to indicate the source and destination of the message. For example, Cus to Mer: denotes that the message is sent from agent Cus to agent Mer.\nRequest messages start with the keyword REQUEST: REQUEST + formula. Formulas in request messages are of commitments.\nProposal messages are preceded with PROPOSE. Formulas are of capabilities. For example, \u03b1 to \u03b2: PROPOSE \u0393 \u22b8 \u0394 is a proposal from agent \u03b1 to agent \u03b2.\nThere are messages that agents use to respond to a proposal. Agents can indicate an acceptance: ACCEPT, or a refusal: REFUSE.
To signal a failure in fulfilling a request or proposal, agents reply with that request or proposal message appended with FAIL.

3.5 Generating Interactions

As we have seen, temporal linear logic provides an elegant means for encoding the various concepts of agent interaction in a commitment-based specification framework. Appropriate interaction is generated as agents negotiate their specified pre-commitments to fulfill their goals. The association among pre-commitments at participating agents and the monitoring of commitments to ensure that all are discharged are performed by proof search. In the next section, we demonstrate how specification and generation of interactions in our framework might work.

4. EXAMPLE

We return to the online sales scenario introduced in Section 1.

4.1 Specifying Protocol

We design a set of pre-commitments and capabilities to implement the above scenario. For simplicity, we refer to them as rules.

Rules at agent Mer

Mer has available at any time 200 cricket bats for sale and can issue sale quotes at any time: 200 cricket_b@M_M ⊗ σ sale_quote@M_M.

Rule 1: Mer commits to offering a cricket bat ((cricket_b@M_M)⊥) to Cus (cricket_b@C_C) if Cus pays 10 dollars (10$@C_C) either via Paypal or credit card.
The choice is at Cus.
σ[10$@C_C ⊗ (Paypal_paid@M_M ⊕ credit_paid@M_M) ⊸ 10$@M_M ⊗ cricket_b@C_C ⊗ (cricket_b@M_M)⊥]

Rule 2: If EBank carries out the credit payment to Mer then the requirement of credit payment at Mer is fulfilled:
σ[credit_paym@M_B ⊸ credit_paid@M_M]

Rule 3: If EBank informs Mer of its disapproval of Cus's credit then Mer will also let Cus know.
σ[credit_not_appr@M_B ⊸ credit_not_appr@C_B]

Rules at agent EBank

Rule 4: Upon receiving a sale quote from Mer, at the next time point, EBank commits to either informing Mer that Cus's credit is not approved (credit_not_appr@M_B) or arranging a credit payment to Mer (credit_paym@M_B). The decision depends on the credibility of Cus and hence is external (⊕) to EBank and Mer:
σ[sale_quote@M_M ⊸ ○(credit_not_appr@M_B ⊕ credit_paym@M_B)]

Rules at agent Cus

Cus has an amount of 50 dollars available at any time, which can be used for credit payment or cash payment: 50$@C_C.

Cus has a commitment of obtaining a cricket bat at some time: [♦ cricket_b@C_C]⊥.

Rule 5: Cus will pay Mer via Paypal if there is an indication from EBank that Cus's credit is not approved:
σ[credit_not_appr@C_B ⊸ Paypal_paid@M_M]

4.2 Description of the interaction

Cus requests from Mer a cricket bat at some time. Mer replies with a proposal in which each cricket bat costs 10 dollars. Cus needs to prepare 10 dollars, and payment can be made by credit or via Paypal.

Assuming that Cus only pays via Paypal if credit payment fails, Cus will let Mer charge by credit. Mer will then ask EBank to arrange a credit payment. EBank proposes that Mer gives a quote of sale and, depending on Cus's credibility, at the next time point either a credit payment will be arranged or a disapproval of Cus's credit will be reported. Mer accepts and fulfills the conditions. If the first case happens, credit payment is done.
If the second case happens, credit payment has failed, and Cus may backtrack to take the option of paying via Paypal.

Once payment is arranged, Mer will apply its original proposal to satisfy Cus's request of a cricket bat, hence removing one cricket bat from and adding 10 dollars to its set of resources.

4.3 Interaction

1. Cus cannot fulfill its commitment of [♦ cricket_b@C_C]⊥ and hence makes a request to Merchant:
C to M: REQUEST [♦ cricket_b@C_C]⊥

2. To meet the request, Mer searches for applicable rules. One application of rule 1 can derive cricket_b@C_C, and cricket_b@C_C ⊢ ♦ cricket_b@C_C. Mer will propose rule 1 at a time instance n1 to Cus as a pre-commitment.
M to C: PROPOSE ○^n1 [10$@C_C ⊗ (Paypal_paid@M_M ⊕ credit_paid@M_M) ⊸ 10$@M_M ⊗ cricket_b@C_C ⊗ (cricket_b@M_M)⊥]

With similar analysis, Cus determines that, given the conditions can be satisfied, the proposal can help to derive its request. Hence,
C to M: ACCEPT

Cus analyzes the conditions of the accepted proposal by proof search:
○^n1 10$@C_C; ○^n1 Paypal_paid@M_M or ○^n1 credit_paid@M_M; —(*)
○^n1 10$@C_C ⊗ (○^n1 Paypal_paid@M_M ⊕ ○^n1 credit_paid@M_M)
⊢ ○^n1 (10$@C_C ⊗ (Paypal_paid@M_M ⊕ credit_paid@M_M))

From (*), one way to satisfy the conditions is for Cus to derive, at the next n1 time points, 10 dollars (○^n1 10$@C_C), and to choose paying via Paypal (○^n1 Paypal_paid@M_M) OR by credit payment (○^n1 credit_paid@M_M).

3. Deriving ○^n1 10$@C_C: as Cus has 50 dollars, it can make use of 10 dollars: 10$@C_C ⊢ ○^n1 10$@C_C. There are two options for the payment method; the choice is at agent Cus. We assume that Cus prefers credit payment.

4. Deriving ○^n1 credit_paid@M_M: Cus cannot derive this formula by itself; hence, it will make a request to Mer:
C to M: REQUEST [○^n1 credit_paid@M_M]⊥.

5.
Rule 2 at Mer is applicable, but Mer cannot derive its condition (○^n1 credit_paym@M_B). Hence, Mer will further make a request to EBank.
M to E: REQUEST [○^n1 credit_paym@M_B]⊥

EBank searches and finds rule 4 applicable. Because credit_paym@M_B will be available one time point after the rule's application time, EBank proposes to Mer an instance of rule 4 at the next n1−1 time points.

6. B to M: PROPOSE ○^(n1−1) [sale_quote@M_M ⊸ ○(credit_not_appr@M_B ⊕ credit_paym@M_B)]

With similar analysis, Mer accepts the proposal.
M to B: ACCEPT

The rule condition is fulfilled by Mer as sale_quote@M_M ⊢ ○^(n1−1) sale_quote@M_M. Hence, EBank then applies the proposal to derive:
○^(n1−1) ○(credit_not_appr@M_B ⊕ credit_paym@M_B).

⊕ indicates that the choice is external to both agents. There are two cases: Cus's credit is approved or disapproved. For simplicity, we show only the case where Cus's credit is approved. At the next (n1−1) time points,
○^(n1−1) ○(credit_not_appr@M_B ⊕ credit_paym@M_B) becomes ○^(n1−1) ○ credit_paym@M_B ⊢ ○^n1 credit_paym@M_B.
As a result, at the next n1 time points, EBank will arrange the credit payment.

7.
Mer fulfills Cus's initial request. When either ○^n1 Paypal_paid@M_M (if Cus pays via Paypal) or ○^n1 credit_paid@M_M (if Cus pays by credit card) is derived, ○^n1 (Paypal_paid@M_M ⊕ credit_paid@M_M) is also derived; hence the payment method is arranged. Together with the other condition 10$@C_C being satisfied, this allows the initial proposal to be applied by Mer to derive ○^n1 cricket_b@C_C, and a commitment ○^n1 (cricket_b@M_M)⊥ for Mer, which is resolved by the resource cricket_b@M_M available at Mer.

Any value of n1 such that n1 − 1 ≥ 0 ⇔ n1 ≥ 1 will allow Mer to fulfill Cus's initial request of [♦ cricket_b@C_C]⊥. The interaction ends as all commitments are resolved.

4.4 Flexibility

The desired flexibility has been achieved in the example. It is Cus's own decision to proceed with the preferred payment method. Also, the non-determinism of whether Cus's credit is disapproved or credit payment is made to Mer is faithfully represented. If an exception happens in which Cus's credit is not approved, credit_not_appr@C_B is produced and Cus can backtrack to paying via Paypal. Rule 5 will then be utilized to allow Cus to handle the exception by paying via Paypal.

Moreover, in order to specify that making payments and sending cricket bats can occur in any order, we can add ♦ in front of the payment method in rule 1 as follows:
σ[10$@C_C ⊗ ♦(Paypal_paid@M_M ⊕ credit_paid@M_M) ⊸ 10$@M_M ⊗ cricket_b@C_C ⊗ (cricket_b@M_M)⊥]

This addition in the condition of the rule means that the time of payment can be any time of Cus's choice; as long as Cus pays, the time order between making payments and sending goods becomes flexible.

5. ENCODING ISSUES

5.1 Advantages of TLL framework

Our TLL framework has demonstrated natural and expressive specification of agent interaction protocols.
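The resource reading of rules like rule 1 above can be sketched as multiset rewriting: a linear implication consumes its condition atoms and produces its result atoms. The following Python sketch uses illustrative atom names and ignores the temporal structure entirely; it is not the paper's proof-search mechanism, only the resource-accounting intuition behind it:

```python
from collections import Counter

def apply_rule(state: Counter, consumed: Counter, produced: Counter) -> Counter:
    """Apply one linear-implication step: remove the condition atoms
    from the state and add the result atoms."""
    if any(state[a] < n for a, n in consumed.items()):
        raise ValueError("condition not satisfiable from current state")
    new_state = state - consumed   # Counter subtraction keeps positive counts
    new_state.update(produced)
    return new_state

# Rule 1 of the example, read purely as a resource transformation:
# 10$@C (x) credit_paid@M  -o  10$@M (x) cricket_b@C
state = Counter({"10$@C": 1, "credit_paid@M": 1, "cricket_b@M": 200})
rule1_in = Counter({"10$@C": 1, "credit_paid@M": 1})
rule1_out = Counter({"10$@M": 1, "cricket_b@C": 1})
state = apply_rule(state, rule1_in, rule1_out)
# Mer's commitment (cricket_b@M)^ is then resolved against its stock:
state = apply_rule(state, Counter({"cricket_b@M": 1}), Counter())
print(state["cricket_b@M"], state["10$@M"], state["cricket_b@C"])
```

The failed-precondition branch corresponds to a commitment that cannot be resolved locally, which in the framework triggers a REQUEST to another agent.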
Linear implication (⊸) expresses a causal relationship, which makes it natural to model a removal or a consumption, especially of resources, together with its consequences. Hence, in our framework, resource transformation is modeled as a linear implication from the consumed resources to the produced resources. Resource relocation is modeled as a linear implication from a resource at one agent to that resource at the other agent. Linear implication also ensures that fulfillment of the conditions of a conditional commitment will cause the commitment's content to be realized. Moreover, state updates of agents result from a linear implication from the old state to the current state.

Temporal operators (○, □ and ♦) and their combinations help to specify the time of actions and of resource availability, and to express the time order of events. In particular, precise time points are described by the use of the operator ○ or multiple copies of it. Based on this ability to specify correct time points for actions or events, their time order or sequencing can also be captured. Also, a sense of duration is simulated by spreading copies of the resources' or actions' formulas across multiple adjacent time points. Moreover, uncertainty in time can be represented and reasoned about by the use of □ and ♦ and their combinations with ○: ♦ can be used to express outer non-determinism while □ expresses inner non-determinism. These time properties of resources, actions and events are correctly respected throughout the agent reasoning process based on the sequent calculus rules.

Furthermore, the centrality of the notion of commitment in agent interaction has been recognized in many frameworks [11, 12, 1, 10, 4]. However, to the best of our knowledge, modeling commitments directly at the propositional level of such a resource-conscious and time-aware logic as TLL is first investigated in our framework.
Our framework models base commitments as negative formulas and conditional commitments via the use of linear implication and/or negative formulas. This modeling of commitments has a number of advantages:

• Commitments are represented directly at the propositional logic level or via a logic connective, rather than as a non-logical construct as in [11], which makes the treatment of commitments more natural and simple and allows one to make use of readily available proof search systems, such as sequent calculus, for handling commitments. Existing logic connectives like ⊗, &, ⊕ and ⊸ are also readily available for describing the relationships among commitments.

• Fulfillment of commitments then becomes deriving the corresponding positive formulas or condition formulas, which simply reduces to a proof search task. Also, given the required formulas, fulfillment of commitments can be implemented easily and automatically as the deduction com ⊗ com⊥ ⊢ ⊥.

• The enforcement of commitments is also internal and simply implemented via the assumption that agents are driven to remove all negative formulas for base commitments, and via the use of linear implication for conditional commitments.

Regarding making protocol specification more flexible, our approach has marked a number of significant points.

Firstly, flexibility of protocol specifications in our framework comes from the expressive power of the connectives of TLL. & and ⊕ refer to internal and external choices of agents on resources and actions, while □ and ♦ refer to internal and external choices in the time domain.
Given that flexibility includes the ability to make a sensible choice, having the choices expressed explicitly in the specification of interaction protocols provides agents with an opportunity to reason about the right choices during interaction and hence exploit the flexibility in them.

Secondly, instead of being sequences of interactive actions, protocols are structured on commitments, which are more abstract than protocol actions. Execution of protocols is then based on fulfilling commitments. Hence, unnecessary constraints on which particular interactive actions are executed by which agents, and on the order among them, are removed, which is a step forward in flexibility compared to traditional approaches. Furthermore, in the presence of changes introduced externally, agents have the freedom to explore new sets of interactive actions or to skip some interactive actions, as long as they still fulfill the protocol's commitments. This brings more flexibility to the overall level of agents' interactive behaviors, and thus to the protocol.

Thirdly, the protocol is specified in a declarative manner, essentially as a set of pre-commitments at each participating agent. To achieve goals, agents use reasoning based on the TLL sequent calculus to construct proofs of goals from pre-commitments and state formulas. This essentially gives agents autonomy in the utilization of pre-commitments, and hence agents can adapt the ways they use these to deal flexibly with changing environments.

In particular, as proof construction by agents selects a sequence of pre-commitments for interaction, being able to select from all the possible combinations of pre-commitments in proof search gives more chances and flexibility than selecting from only a few fixed and predefined sequences. It is then also more likely to allow agents to handle exceptions or explore opportunities that arise.
Moreover, as the actual order of pre-commitments is determined by the proof construction process rather than predefined, agents can flexibly change the order to suit new situations.

Fourthly, changes in the environment can be regarded as removing formulas from or adding formulas to the state formulas. Because proof construction by agents takes into account the current state formulas when it picks up pre-commitments, changes in the state formulas will be reflected in the choice of which relevant pre-commitments to proceed with. Hence, agents have the flexibility to decide what to do to deal with changes.

Lastly, specifying protocols in our framework takes a modular approach, which adds ease and flexibility to the protocol design process. Protocols are specified by placing a set of pre-commitments at each participating agent according to their roles. Each pre-commitment can indeed be specified as a process in its own right, with its condition formulas as input and its commitment part's formulas as output. Execution of each conditional commitment is a relatively independent thread, and these threads are linked together by the proof search to fulfill agents' commitments. As a result, with such a design of pre-commitments, one pre-commitment can be added or removed without interfering with the others, achieving a modular design of the protocols.

5.2 Limitations of TLL Framework on Modeling

As all the temporal operators in TLL refer to concrete time points, we cannot express durations in time faithfully. One major disadvantage of simulating the duration of an event by spreading copies of that event continuously over adjacent time points (like A ⊗ ○A ⊗ ○^2 A ⊗ ... ⊗ ○^10 A) is that it requires the time range to be provided explicitly. Hence, a notion like until cannot be naturally expressed in TLL.

Commitments of agents can be in conflict, especially when resolving all of them requires more resources or actions than the agents have.
Our work has not covered handling commitments that are in conflict.

Another troublesome aspect of this approach is that the rules for interaction require some detailed knowledge of the formulas of temporal linear logic. Clearly it would be beneficial to have a visually-based tool, similar to UML diagrams, which would allow non-experts to specify the appropriate rules without having to learn the details of the formulas themselves.

6. CONCLUSIONS AND FURTHER WORK

This paper uses TLL for specifying interaction protocols. In particular, TLL is used to model the concepts of resource, capability, pre-commitment and commitment with tight integration, as well as their manipulation with respect to time. Agents then make use of proof search techniques to perform the desired interactions.

In particular, the approach allows protocol specifications to capture the meaning of interactive actions via commitments, and to capture the internal and external choices of agents about resources, commitments and time, as well as updating processes. The proof construction mechanism provides agents with the ability to dynamically select appropriate pre-commitments, and hence helps agents gain flexibility in choosing the interactive actions that are most suitable and in their order, taking into consideration ongoing changes in the environment.

Many other approaches to modeling protocols also use the commitment concept to bring more meaning into agents' interactive actions. Approaches based on commitment machines [11, 12, 10, 1] suffer from a number of issues. These approaches use logic systems that are limited in their expressiveness for modeling resources. Also, as an extra abstract layer of commitments is created, more tasks are created accordingly.
In particular, there must be a human-designed mapping between protocol actions and operations on commitments, as well as between control variables (fluents) and phases of commitment achievement. Moreover, external mechanisms must be in place to comprehend and handle the operations and resolution of commitments, as well as the enforcement of the notion of commitment on its abstract data type representations. This requires another execution in the commitment layer in conjunction with the actual execution of the protocol. Not only do these extra tasks create an overhead, they also make the specification and execution of protocols more error prone.

Similar works in [8] and [9] explore the advantages of linear logic and TLL respectively by using partial deduction techniques to help agents figure out the missing capabilities or resources and, based on that, to negotiate with other agents about cooperation strategies. Our approach differs in bringing the concept of commitment into the modeling of interaction, and in providing a more natural and detailed map for specifying interaction, especially regarding choices, time and updating, using full propositional TLL. Moreover, we emphasize the use of pre-commitments as interaction rules with a full set of TLL inference rules to provide the advantages of proof construction in achieving flexible interaction.

Our further work will include using TLL to verify various properties of interaction protocols such as liveness and safety. Also, we will investigate developing an execution mechanism for such TLL specifications in our framework.

Acknowledgments

We are very thankful to Michael Winikoff for many stimulating and helpful discussions of this material. We would also like to acknowledge the support of the Australian Research Council under grant DP0663147.

7. REFERENCES

[1] A. K. Chopra and M. P. Singh. Contextualizing commitment protocol.
In AAMAS '06: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1345-1352, New York, NY, USA, 2006. ACM Press.
[2] E. A. Emerson. Temporal and modal logic. Handbook of Theoretical Computer Science, B, Chapter 16:995-1072, 1990.
[3] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50:1-102, 1987.
[4] A. Haddadi. Communication and Cooperation in Agent Systems: A Pragmatic Theory. Springer-Verlag, Berlin Heidelberg, 1995.
[5] J. Harland and M. Winikoff. Agent negotiation as proof search in linear logic. In AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 938-939, New York, NY, USA, 2002. ACM Press.
[6] T. Hirai. Temporal Linear Logic and Its Applications. PhD thesis, Graduate School of Science and Technology, Kobe University, 2000.
[7] N. R. Jennings. Commitments and conventions: The foundation of coordination in multi-agent systems. The Knowledge Engineering Review, 8(3):223-250, 1993.
[8] P. Küngas. Linear logic, partial deduction and cooperative problem solving. In J. A. Leite, A. Omicini, L. Sterling, and P. Torroni, editors, Declarative Agent Languages and Technologies, First International Workshop, DALT 2003, Melbourne, Victoria, July 15th, 2003, Workshop Notes, pages 97-112, 2003.
[9] P. Küngas. Temporal linear logic for symbolic agent negotiation. Lecture Notes in Artificial Intelligence, 3157:23-32, 2004.
[10] M. Venkatraman and M. P. Singh. Verifying compliance with commitment protocols. Autonomous Agents and Multi-Agent Systems, 2(3):217-236, 1999.
[11] P. Yolum and M. P. Singh. Commitment machines. In Proceedings of the 8th International Workshop on Agent Theories, Architectures, and Languages (ATAL-01), pages 235-247. Springer-Verlag, 2002.
[12] P. Yolum and M. P. Singh.
Flexible protocol specification and execution: Applying event calculus planning using commitments. In AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, pages 527-534, New York, NY, USA, 2002. ACM Press.

APPENDIX

A. TEMPORAL SEQUENT RULES FOR TLL

[The two-dimensional layout of the temporal sequent calculus rules, giving the left and right introduction rules for the temporal operators over sequents of the form !Γ, Δ ⊢ A, Λ, ?Σ, has not survived extraction and is omitted here.]

Keywords: request message; classical conjunction; multiplicative conjunction; linear logic; multi-agent environment; temporal constraint; interaction protocol; logic and formal model of agency and multi-agent system; agent communication language and protocol; predictability level; pre-commitment; conditional commitment; interactive behavior; causal relationship; emergent protocol; level of predictability; linear implication
Generalized Trade Reduction Mechanisms

Abstract: When designing a mechanism there are several desirable properties to maintain, such as incentive compatibility (IC), individual rationality (IR), and budget balance (BB). It is well known [15] that it is impossible for a mechanism to maximize social welfare whilst also being IR, IC, and BB. There have been several attempts to circumvent [15] by trading welfare for BB, e.g., in domains such as double-sided auctions [13], distributed markets [3] and supply chain problems [2, 4]. In this paper we provide a procedure called Generalized Trade Reduction (GTR) for single-value players which, given an IR and IC mechanism, outputs a mechanism which is IR, IC and BB with a loss of welfare. We bound the welfare achieved by our procedure for a wide range of domains. In particular, our results improve on existing solutions for problems such as double-sided markets with homogeneous goods, distributed markets and several kinds of supply chains. Furthermore, our solution provides budget-balanced mechanisms for several open problems such as combinatorial double-sided auctions and distributed markets with strategic transportation edges.

1. INTRODUCTION

When designing a mechanism there are several key properties that are desirable to maintain. Some of the more important ones are individual rationality (IR) - to make it worthwhile for all players to participate, incentive compatibility (IC) - to give incentive to players to report their true value to the mechanism, and budget balance (BB) - not to run the mechanism at a loss.
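Two of the three properties can be checked mechanically for a concrete outcome; the following is a toy Python sketch (player names and numbers are illustrative, not from the paper), using the conventions that a negative value is a seller's cost and a negative payment is money the mechanism pays out:

```python
def budget_balanced(payments):
    # BB: the mechanism does not run a loss, i.e. the sum of payments
    # collected from the players is non-negative.
    return sum(payments.values()) >= 0

def individually_rational(values, payments, trading):
    # Ex-post IR for a normalized mechanism: every trading player i has
    # utility v_i - p_i >= 0, and every non-trading player pays 0.
    return all(
        (values[i] - payments[i] >= 0) if i in trading else payments[i] == 0
        for i in values
    )

# A double-sided auction outcome: buyer b pays 8, seller s is paid 6.
values = {"b": 10, "s": -5}    # the seller's cost of giving up the good is 5
payments = {"b": 8, "s": -6}
print(budget_balanced(payments), individually_rational(values, payments, {"b", "s"}))
```

Checking IC, by contrast, requires quantifying over all possible misreports, which is why it is a property of the mechanism's rules rather than of a single outcome.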
In many mechanisms the goal function that a mechanism designer attempts to maximize is the social welfare¹ - the total benefit to society. However, it is well known from [15] that any mechanism that maximizes social welfare while maintaining individual rationality and incentive compatibility runs a deficit perforce, i.e., is not budget balanced.

Of course, for many applications of practical importance we lack the will and the capability to allow the mechanism to run a deficit, and hence one must balance the payments made by the mechanism. To maintain the BB property in an IR and IC mechanism it is necessary to compromise on the optimality of the social welfare.

1.1 Related Work and Specific Solutions

There have been several attempts to design budget-balanced mechanisms for particular domains². For instance, consider double-sided auctions where both the buyers and sellers are strategic and the goods are homogeneous [13] (or heterogeneous [5]). [13] developed a mechanism that, given valuations of buyers and sellers, produces an allocation (the set of trading players) and a matching between buyers and sellers such that the mechanism is IR, IC, and BB while retaining most of the social welfare. In the distributed markets problem (and closely related problems) goods are transported between geographic locations while incurring some constant cost for transportation. [16, 9, 3] present mechanisms that approximate the social welfare while achieving an IR, IC and BB mechanism.
For supply chain problems, [2, 4] bound the loss of social welfare that must be inflicted on the mechanism in order to achieve the desired combination of IR, IC, and BB.

Despite the works discussed above, the question of how to design a general mechanism that achieves IR, IC, and BB independently of the problem domain remains open. Furthermore, there are several domains where the question of how to design an IR, IC and BB mechanism which approximates the social welfare remains an open problem. For example, in the important domain of combinatorial double-sided auctions there is no known result that bounds the loss of social welfare needed to achieve budget balance. Another interesting example is the open question left by [3]: how can one bound the loss in social welfare that is needed to achieve budget balance in an IR and IC distributed market where the transportation edges are strategic? Naturally an answer for the BB distributed market with strategic edges has vast practical implications, for example for transportation networks.

¹ Social welfare is also referred to as efficiency in the economics literature.
² A brief reminder of all of the problems used in this paper can be found in Appendix B.

1.2 Our Contribution

In this paper we unify all the problems discussed above (both the solved and the open ones) into one solution procedure, called Generalized Trade Reduction (GTR). GTR accepts an IR and IC mechanism for single-valued players and outputs an IR, IC and BB mechanism. The output mechanism may suffer some welfare loss as a tradeoff for achieving BB. There are problem instances in which no welfare loss is necessary, but by [15] there are problem instances in which there is welfare loss. Nevertheless, for a wide class of problems we are able to bound the loss in welfare.
A particularly interesting case is one in which the input mechanism is an efficient allocation.

In addition to unifying many of the BB problems under a single solution concept, the GTR procedure improves on existing results and solves several open problems in the literature. The existing solutions our GTR procedure improves on are homogeneous double-sided auctions, distributed markets [3], and supply chains [2, 4]. For homogeneous double-sided auctions the GTR procedure improves on the well known solution [13] by allowing for some cases with no trade reduction at all. For the distributed markets [3] and the supply chains [2, 4] the GTR procedure improves the bound on the welfare loss, i.e., it allows one to achieve an IR, IC and BB mechanism with a smaller loss of social welfare. Recently we also learned that the GTR procedure allows one to turn the model newly presented in [6] into a BB mechanism. The open problems that are answered by GTR are distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with bounded size of the trading group (i.e., a buyer and the sellers of the goods in its bundle), and combinatorial double-sided auctions with a bounded number of possible trading groups.

In addition to the main contribution described above, this paper also defines an important classification of problem domains: class-based domains and procurement-class-based domains. These definitions build on the different competition powers of players in a mechanism, called internal and external competition. Most of the studied problem domains are of the more restrictive procurement-class kind, and we believe that the more general setting will inspire further research.

2. PRELIMINARIES

2.1 The Model

In this paper we design a method which, given any IR and IC mechanism, outputs a mechanism that maintains the IC and IR properties while achieving BB.
For some classes of mechanisms we bound the competitive approximation of welfare.

In our model there are N players divided into sets of trade. The sets of trade are called procurement sets and are defined (following [2]) as follows:

Definition 2.1. A procurement set s is the smallest set of players that is required for trade to occur.

For example, in a double-sided auction, a procurement set is a pair consisting of a buyer and a seller. In a combinatorial double-sided auction a procurement set can consist of a buyer and several sellers. We mark the set of all procurement sets as S and assume that any allocation is a disjoint union of procurement sets.

Each player i, 1 ≤ i ≤ n, assigns a real value vi(s) to each possible procurement set s ∈ S. Namely, vi(s) is the valuation of player i for procurement set s. We assume that for each player i, vi(s) is i's private value and that i is a single-value player, meaning that if vi(sj) > 0 then for every other sk, k ≠ j, either vi(sk) = vi(sj) or vi(sk) = 0. For ease of notation we will mark by vi the value of player i for any procurement set s such that vi(s) > 0. The set Vi ⊆ R is the set of all possible valuations vi. The set of all possible valuations of all the players is denoted by V = V1 × ... × Vn. Let v−i = (v1, ..., vi−1, vi+1, ..., vn) be the vector of valuations of all the players besides player i, and let V−i be the set of all possible vectors v−i.

We denote by W(s) the value of a procurement set s ∈ S, defined as W(s) = Σi∈s vi(s) + F(s), where F is some function that assigns a constant to procurement sets. For example, F can be a (non-strategic) transportation cost in a distributed market problem.
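The value W(s) = Σi∈s vi(s) + F(s) can be computed directly; a minimal sketch (player names and the transport-cost function are illustrative assumptions, not from the paper):

```python
def procurement_value(values, s, F=lambda s: 0.0):
    """W(s) = sum of v_i over the players i in s, plus the constant
    term F(s), e.g. a non-strategic transportation cost."""
    return sum(values[i] for i in s) + F(s)

# Double-sided auction: a procurement set is one buyer plus one seller.
values = {"buyer1": 10.0, "seller1": -4.0}   # the seller's cost of the good is 4
transport = lambda s: -1.0                   # constant shipping cost modeled by F
print(procurement_value(values, ["buyer1", "seller1"], transport))
```

With F identically zero (the default here), W(s) reduces to the plain sum of the members' valuations, matching the special case noted below.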
Let the size of a procurement set s be denoted |s|.

It is assumed that any allocation is a disjoint union of procurement sets, and therefore an allocation partitions the players into two sets: a set of players that trade and a set of players that do not trade.

The paper denotes by O the set of possible partitions of an allocation A into procurement sets. The value W(A) of an allocation A is the sum of the values of its most efficient partition into procurement sets, that is W(A) = maxS∈O Σs∈S W(s). This means that W(A) = Σi∈A vi + maxS∈O Σs∈S F(s). In the case where F is identically zero, W(A) = Σi∈A vi. An optimal partition S∗(A) is a partition that maximizes the above sum for an allocation A. Let the value of A be W(S∗(A)) (note that the value can depend on F). We say that the allocation A is efficient if there is no other allocation with a higher value. The efficiency of an allocation Â is W(Â)/W(A), where A is a maximal-valued allocation. We assume w.l.o.g. that there are no two allocations with the same value (ties can be broken using the identities of the players).

A mechanism M defines an allocation and payment rules, M = (R, P). A payment rule P decides i's payment pi, where P is a function P : V → R^N. We work with mechanisms
We also assume that players are rational utility maximizers.

Mechanism M is Budget Balanced (BB) if Σi∈N pi ≥ 0 for any bids b ∈ V. M is Incentive-Compatible (IC) in dominant strategies if for any player i, value vi and any b−i ∈ V−i, ui(vi, b−i) ≥ ui(b), meaning that for any player i, bidding vi maximizes i's utility over all possible bids of the other players. M is (ex-post) Individually Rational (IR) if for any player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning that for all possible bids of the other players, player i's utility is non-negative. Note that since our mechanisms are normalized IR, if a player does not trade then the player pays 0 and has utility 0.

Our algorithm presented in the next section employs a commonly used payment scheme, the critical value payment scheme.

Definition 2.2. Critical value payment scheme: A mechanism uses a critical value payment scheme if, given an allocation, it charges players the minimum value they need to report to the mechanism in order to remain allocated.

We denote by Ci the critical value price computed for player i.

2.2 Competitions and Domains

In this paper we present two generalized trade reduction algorithms. Given an IR and IC mechanism M that solves a problem in some domain (different domains are formally defined below), the algorithms turn M into an IR, IC and BB mechanism. Each algorithm finds procurement sets and removes them in iterations until the right conditions are fulfilled and the mechanism M is turned into a BB one. The right conditions that need to be met are conditions of competition among the players in the given problem. The following definitions lead us to the competition conditions we are looking for.

Definition 2.3.
For any player i ∈ N, we say that the set Ri ⊆ N \ {i} is a replacing set of i if, for any procurement set s ∈ S such that i ∈ s and Ri ∩ s = ∅, we have s \ {i} ∪ Ri ∈ S.

For example, in a (homogeneous) double-sided auction (see problem B.1) the replacement set for any buyer is simply any other buyer. In an auction for transportation slots (see problem B.7), the replacement set of an edge is a path between the endpoints of the edge. Note that a set can replace a single player. Furthermore, this relationship is transitive but not necessarily symmetric: if i is a replacement set for j, it is not necessarily true that j is a replacement set for i.

Definition 2.4. For any allocation A, procurement set s ⊆ A, and any i ∈ s, we say Ri(A, s) is an internal competition for i with respect to A and s if Ri(A, s) ⊆ N \ A is a replacement set for i s.t. T = s \ {i} ∪ Ri(A, s) ∈ S and W(T) ≥ 0.

Definition 2.5. For any allocation A, procurement set s ⊆ A, and any i ∈ s, we say that Ei(A, s) is an external competition for i with respect to A and s if Ei(A, s) ⊆ N \ A is a set s.t. T = {i} ∪ Ei(A, s) ∈ S and W(T) ≥ 0.

We will assume, without loss of generality, that there are no ties between the values of any allocations, and in particular there are no ties between values of procurement sets. In case of ties, these can be broken by using the identities of the players (footnote 4). So for any allocation A, procurement set s and player i with external competition Ei(A, s), there exists exactly one set representing the maximally valued external competition.

Definition 2.6. A set X ⊂ N is closed under replacement if for every i ∈ X, Ri ⊂ X.

The following defines the required competition needed to maintain IC, IR and BB. The set X (footnote 5) denotes this competition and is closed under replacement.
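Definition 2.5 can be checked by brute force on a tiny instance. The sketch below is a hypothetical illustration (all names are invented for the example, and the exhaustive subset search is exponential): it tests whether some set of non-allocated players completes i into a non-negative-value procurement set.

```python
from itertools import combinations
from typing import Callable, FrozenSet, Iterable, Set

def _subsets(pool: FrozenSet[str]) -> Iterable[FrozenSet[str]]:
    for r in range(0, len(pool) + 1):
        for combo in combinations(sorted(pool), r):
            yield frozenset(combo)

def has_external_competition(
    i: str,
    A: FrozenSet[str],                       # current allocation
    N: FrozenSet[str],                       # all players
    S: Set[FrozenSet[str]],                  # all procurement sets
    W: Callable[[FrozenSet[str]], float],    # value of a procurement set
) -> bool:
    """Definition 2.5: some E subset of N \\ A with {i} | E in S and W >= 0."""
    for E in _subsets(N - A):
        T = frozenset({i}) | E
        if T in S and W(T) >= 0:
            return True
    return False

# Toy double-sided auction: procurement sets are (buyer, seller) pairs.
N = frozenset({"b1", "b2", "s1", "s2"})
A = frozenset({"b1", "s1"})  # b1 currently trades with s1
S = {frozenset(p) for p in [("b1", "s1"), ("b1", "s2"), ("b2", "s1"), ("b2", "s2")]}
vals = {"b1": 5.0, "b2": 4.0, "s1": -2.0, "s2": -3.0}
W = lambda T: sum(vals[j] for j in T)
# b1 has external competition: the non-trading seller s2, W({b1, s2}) = 2.0 >= 0.
```

An internal-competition check (Definition 2.4) would be analogous, testing s \ {i} ∪ Ri(A, s) ∈ S instead of {i} ∪ E ∈ S.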
In the remainder of the paper we will assume that all of our sets which define competition in a mechanism are closed under replacement.

Definition 2.7. Let X ⊂ N be a set that is closed under replacement. We say that the mechanism is an X-external mechanism if:
1. Each player i ∈ X has an external competition.
2. Each player i ∉ X has an internal competition.
3. For all players i1, . . . , it ∈ s \ X there exist Ri1(A, s), . . . , Rit(A, s) such that for every iz ≠ iq, Riz(A, s) ∩ Riq(A, s) = ∅.
4. For every procurement set s ∈ S it holds that s ∩ X ≠ ∅.

For general domains the choice of X can be crucial. In fact, even for the same domain the welfare (and revenue) can vary widely depending on how X is defined. In Appendix C we give an example where two possible choices of X yield greatly different results. Although we show that X should be chosen as small as possible, we do not give any characterization of the optimality of X, and this is an important open problem.

Our two generalized trade reduction algorithms will ensure that for any allocation we have the desired types of competition. So given a mechanism M that is IC and IR with allocation A, the goal of the algorithms is to turn M into an X-external mechanism. The two generalized trade reduction algorithms utilize a dividing function D which divides allocation A into disjoint procurement sets. The algorithms order the procurement sets defined by D in order of increasing value. For any procurement set there is a desired type of competition that depends only on the players who compose the procurement set. The generalized trade reduction algorithms go over the procurement sets in order (from the smallest to the largest) and remove any procurement set that does not have the desired competition when the set is reached.
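The iteration just described can be sketched as a small loop. This is only a skeleton under simplifying assumptions: the division into procurement sets is given, and the competition requirement is abstracted as a pluggable test (the real requirement is the one in Definition 2.7). The toy test below, which reduces exactly the minimum-valued set, loosely mimics the X = N special case mentioned later in Remark 3.1:

```python
from typing import Callable, FrozenSet, List

def generalized_trade_reduction(
    proc_sets: List[FrozenSet[str]],
    value: Callable[[FrozenSet[str]], float],
    has_competition: Callable[[FrozenSet[str], List[FrozenSet[str]]], bool],
) -> List[FrozenSet[str]]:
    """Skeleton of the GTR loop: scan procurement sets in increasing value
    and drop each set lacking the required competition when it is reached."""
    kept: List[FrozenSet[str]] = []
    remaining = sorted(proc_sets, key=value)  # increasing value
    for idx, s in enumerate(remaining):
        # The competition test sees the sets still in the allocation.
        current = kept + remaining[idx + 1:]
        if has_competition(s, current):
            kept.append(s)
    return kept

# Toy instance: three buyer-seller pairs with gains from trade 1, 2, 3.
a = frozenset({"b1", "s1"})
b = frozenset({"b2", "s2"})
c = frozenset({"b3", "s3"})
vals = {a: 1.0, b: 2.0, c: 3.0}
# Placeholder competition test: every set except the minimum-valued one
# "has competition", so exactly the lowest-valued set is reduced.
rule = lambda s, current: vals[s] > min(vals.values())
kept = generalized_trade_reduction([c, a, b], lambda s: vals[s], rule)
```

The reduced players are the ones who later supply external competition for the surviving sets; that connection is what the actual competition conditions formalize.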
The reduction of procurement sets will also be referred to as a trade reduction.

Footnote 4: The details of how to break ties in allocations are standard and are omitted.
Footnote 5: We present some tradeoffs between the different possible sets in Appendix C.

Formally:

Definition 2.8. D is a dividing function if for any allocation A and the players' value vector v, D divides the allocation into disjoint procurement sets s1, . . . , sk s.t. ∪sj = A, and for any player i with value vi, if i ∈ sj1 and t ∈ sj2 s.t. j1 ≥ j2, then for any value v′i > vi of player i and division by D into s′1, . . . , s′k such that i ∈ s′j′1 and t ∈ s′j′2, it holds that j′1 > j′2.

The two generalized trade reduction algorithms presented accept problems in different domains. The formal domain definitions follow:

Definition 2.9. A domain is a class domain if for all i ∈ N and all replacing sets Ri of i, |Ri| = 1, and for all i, j, i ≠ j, if j = Ri then i = Rj.

Intuitively, this means that replacement sets are of size 1 and the replacing relationship is symmetric.

We define the class of a player i as the set of the player's replacement sets and denote the class of player i by [i]. It is important to note that since the replacement relationship is transitive, and since class domains also impose symmetry on the replacement sets, the class [i] of a player i is actually an equivalence class for i.

Definition 2.10. A domain is a procurement-class domain if the domain is a class-based domain and if for any player i such that there exist two procurement sets s1, s2 (not necessarily trading simultaneously in any allocation) with i ∈ s1 and i ∈ s2, there exists a bijection f : s1 → s2 such that for any j ∈ s1, f(j) is a replacement set for j in s2.

Example 2.1. A (homogeneous) double-sided auction (see problem B.1) is a procurement-class based domain.
For the (homogeneous) double-sided auction each procurement set consists of a buyer and a seller.

The double-sided combinatorial auction consisting of a single multi-minded buyer and multiple sellers of heterogeneous goods (see problem B.9) is a class-based domain (as we have a single buyer) but not a procurement-class based domain. In this case, the buyer is a class and each set of sellers of the same good is a class. However, for a buyer there is no bijection between the different procurement sets of the bundles of goods the buyer is interested in.

The spatially distributed market with strategic edges (see problem B.6) is not a class-based domain (and therefore not a procurement-class domain). For example, even for a fixed buyer and a fixed seller there are two different procurement sets consisting of different paths between the buyer and the seller.

The next sections present two algorithms, GTR-1 and GTR-2. GTR-1 accepts problems in procurement-class based domains; its properties are proved with a general dividing function D. The GTR-2 algorithm accepts problems in any domain. We prove GTR-2's properties with a specific dividing function D0, which will be defined in section 4. Since the dividing function can have a large practical impact on welfare (and revenue), the generality of GTR-1 (albeit in special domains) can be an important practical consideration.

3. PROCUREMENT-CLASS BASED DOMAINS

This section focuses on problems that are procurement-class based domains. For this domain, we present an algorithm called GTR-1, which given a mechanism that is IR and IC outputs a mechanism with reduced welfare which is IR, IC and budget balanced. Although procurement-class domains appear to be a relatively restricted model, in fact many domains studied in the literature are procurement-class domains.

Example 3.1.
The following domains are procurement class domains:

• Double-sided auctions with homogeneous goods [13] (problem B.1). In this domain there are two classes: the class of buyers and the class of sellers. Each procurement set consists of a single buyer and a single seller. Since every (buyer, seller) pair is a valid procurement set (albeit possibly with negative value), this is a procurement class domain. In this domain the constant assigned to the procurement sets is F = 0.

• Spatially distributed markets with non-strategic edges [3, 9] (problem B.3). As in double-sided auctions with homogeneous goods, there are two classes in the domain: the class of buyers and the class of sellers, with procurement sets consisting of a single buyer and a single seller. The sellers and buyers are nodes in a graph, and the function F is the distance between two nodes (the length of the edge), which represents transport costs. These costs differ between different (buyer, seller) pairs.

• Supply chains [2, 4] (problem B.5). The assumption of a unique manufactory by [2, 4] can best be understood as turning general supply chains (which need not be a procurement class domain) into a procurement class domain.

• Single-minded combinatorial auctions [11] (problem B.8). In this context each seller sells a single good and each buyer wants a set of goods. The classes are the sets of sellers selling the same good, as well as the buyers who desire the same bundle. A procurement set consists of a single buyer as well as a set of sellers who can satisfy that buyer.

A definition of the mechanism follows:

Definition 3.1. The GTR-1 algorithm - given a mechanism M, a set X ⊂ N which is closed under replacement, a dividing function D, and allocation A, GTR-1 operates as follows:
1. Use the dividing function D to divide A into procurement sets s1, . . . , sk ∈ S.
2. Order the procurement sets by increasing value.
3.
For each sj, starting from the lowest value procurement set: if for every i ∈ sj ∩ X there is external competition and for every i ∈ sj \ X there is internal competition, then keep sj; otherwise reduce the trade sj (i.e., remove every i ∈ sj from the allocation) (footnote 6).
4. All trading players are charged the critical value for trading. All non-trading players are charged nothing.

Remark 3.1. The special case where X = N has received attention under different guises in various special cases, such as [13, 3, 4].

3.1 The GTR-1 Produces an X-external Mechanism that is IR, IC and BB

In this subsection we prove that the GTR-1 algorithm produces an X-external mechanism that is IR, IC and BB. To prove GTR-1's properties we make use of theorem 3.1, which is a well known result (e.g., [14, 11]). Theorem 3.1 characterizes necessary and sufficient conditions for a mechanism for single value players to be IR and IC:

Definition 3.2. An allocation rule R is Bid Monotonic if for any player i, any bids of the other players b−i ∈ V−i, and any two possible bids of i, b̂i > bi, if i trades under the allocation rule R when reporting bi, then i also trades when reporting b̂i.

Intuitively, a bid monotonic allocation rule ensures that no trading player can become a non-trading player by improving his bid.

Theorem 3.1. An IR mechanism M with allocation rule R is IC if and only if R is Bid Monotonic and each trading player i pays his critical value Ci (pi = Ci).

So for normalized IR (footnote 7) and IC mechanisms, a bid monotonic allocation rule uniquely defines the critical values for all the players, and thus the payments.

Observation 3.1. Let M1 and M2 be two IR and IC mechanisms with the same allocation rule. Then M1 and M2 must have the same payment rule.

In the following we prove that the X-external GTR-1 algorithm produces an IR, IC and BB mechanism, but first a subsidiary lemma is shown.

Lemma 3.1.
For procurement class domains, if there exists a procurement set sj s.t. i ∈ sj and i has external competition, then every t ∈ sj with t ≠ i has internal competition.

Proof. This follows from the definition of procurement class domains. Suppose that i has external competition; then there exists a set of players Ei(A, s) such that {i} ∪ Ei(A, s) ∈ S. Let us denote s′j = {i} ∪ Ei(A, s). Since the domain is a procurement-class domain, there exists a bijection f between sj and s′j; f defines the required internal competition.

We start by proving IR and IC:

Footnote 6: Although the definition of an X-external mechanism requires that X intersects every procurement set, this is not strictly necessary. It is possible to define an X that does not intersect every possible procurement set. In this case, any procurement set s ∈ S s.t. s ∩ X = ∅ will be reduced.
Footnote 7: Note that this is not true for mechanisms which are not normalized, e.g., [7, 12].

Lemma 3.2. For any X, the X-external mechanism with a critical value pricing scheme produced by the GTR-1 algorithm is an IR and IC mechanism.

Proof. By the definition of the critical value pricing scheme (Definition 2.2) and the GTR-1 algorithm (Definition 3.1), it follows that every trading player i pays pi = Ci ≤ vi and thus has non-negative utility. By the GTR-1 algorithm, non-trading players have a payment of zero.
Thus for every player i, value vi, and any b−i ∈ V−i, ui(vi, b−i) ≥ 0, meaning the produced X-external mechanism is IR.

As the X-external GTR-1 algorithm is IR and applies the critical value payment scheme, by theorem 3.1, in order to show that the produced X-external mechanism with the critical value payment scheme is IC it remains to show that the produced mechanism's allocation rule is bid monotonic.

Since GTR-1 orders the procurement sets according to increasing value, if player i increases his bid from bi to b′i > bi then, for any dividing function D, the procurement set s containing i always appears later with the bid b′i than with the bid bi. So the likelihood of competition can only increase if i appears in later procurement sets. This follows as GTR-1 can reduce more of the lower value procurement sets, which results in more non-trading players. Therefore if s has the required competition and is not reduced with bi, then it will have the required competition with b′i and will not be reduced.

Finally we prove BB:

Lemma 3.3. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-1 algorithm is a BB mechanism.

Proof. In order to show that the produced mechanism is BB, we show that each procurement set that is not reduced has a non-negative budget (i.e., the sum of payments is non-negative). Let s ∈ S be a procurement set that is not reduced, and let i ∈ s ∩ X; then according to the definition of an X-external mechanism (Definition 2.7) and the GTR-1 algorithm (Definition 3.1), i has external competition. Assume w.l.o.g. (footnote 8) that i is the only player with external competition in s and that all other players j ∈ s, j ≠ i, have internal competition.

Let A be the allocation after the procurement set reductions performed by the GTR-1 algorithm.
According to the definition of external competition (Definition 2.5), there exists a set Ei(A, s) ⊂ N \ A such that {i} ∪ Ei(A, s) ∈ S and W({i} ∪ Ei(A, s)) ≥ 0. Since W({i} ∪ Ei(A, s)) = vi + W(Ei(A, s)), we have vi ≥ −W(Ei(A, s)). By the definition of the critical value pricing scheme (Definition 2.2), if player i bids any less than −W(Ei(A, s)) he will not have external competition and therefore will be removed from trading. Thus i pays no less than min(−W(Ei(A, s))). Since all other players j ∈ s have internal competition, their critical price cannot be less than the value of their maximal-value internal competitor (set), i.e., max W(Rj(A, s)): if any player j ∈ s bids less than his maximal internal competitor (set), then he will not be in s but his maximal internal competitor (set) will.

As a possible Ei(A, s) is ∪j∈s Rj(A, s), one can bound the maximal value of i's external competition W(Ei(A, s)) by the sum of the maximal values of the internal competition of the rest of the players in s, i.e., Σj∈s max W(Rj(A, s)). Therefore min(−W(Ei(A, s))) = −(Σj∈s max W(Rj(A, s))). As the F function is defined to be a positive constant, we get that W(s) = min(−W(Ei(A, s))) + (Σj∈s max W(Rj(A, s))) + F(s) ≥ 0, and thus s is at least budget balanced. As each procurement set that is not reduced is at least budget balanced, it follows that the produced X-external mechanism is BB.

Footnote 8: Since the domain is a procurement class domain we can use lemma 3.1.

The above two lemmas yield the following theorem:

Theorem 3.2. For procurement class domains, for any X, the X-external mechanism with critical value pricing scheme produced by the GTR-1 algorithm is an IR, IC and BB mechanism.

Remark 3.2. The proof of the theorem yields bounds on the payments any player has to make to the mechanism.

4.
NON PROCUREMENT-CLASS BASED DOMAINS

The main reason that GTR-1 works for procurement-class domains is that each player's possibility of being reduced is monotonic. By the definition of a dividing function, if a player i ∈ sj increases his value, i can only appear in a later procurement set s′j′ and hence has a higher chance of having the desired competition. Therefore, the chance of i lacking the requisite competition is decreased. Since the domain is a procurement class domain, all other players t ∈ sj, t ≠ i, are also more likely to have competition, since members of their class continue to appear before i; hence the likelihood that i will be reduced is decreased. Since by theorem 3.1 monotonicity is a necessary and sufficient condition for the mechanism to be IC, GTR-1 is IC for procurement-class domains.

However, for domains that are not procurement class domains this does not suffice, even if the domain is a class based domain. Although all members of sj continue to have the required competition, it is possible that there are members of s′j′ who do not have analogues in sj and who do not have competition. Hence i might be reduced after increasing his value, which by theorem 3.1 means the mechanism is not IC. We therefore define a different algorithm for non procurement class domains.

Our modified algorithm requires a special dividing function in order to maintain the IC property. Although our restriction to this special dividing function appears stringent, the dividing function we use is a generalization of the way that procurement sets are chosen in procurement-class based domains, e.g., [13, 16, 9, 3, 2, 4]. For ease of presentation, in this section we assume that F = 0.

The dividing function for general domains is defined by looking at all possible dividing functions. For each dividing function Di and each set of bids, the GTR-1 algorithm yields a welfare that is a function of the bids and the dividing function (footnote 9).
We denote by D0 the dividing function that divides the players into sets s.t. the welfare that GTR-1 finds is maximal (footnote 10).

Footnote 9: Note that for any particular Di this might not be IC, as GTR-1 is IC only for procurement class domains and not for general domains.
Footnote 10: In Appendix A we show how to calculate D0 in polynomial time for procurement-class domains. Calculating D0 in polynomial time for general domains is an important open problem.

Formally, let D be the set of all dividing functions D. Denote the welfare achieved by the mechanism produced by GTR-1 when using dividing function D and a set of bids b̄ by GTR1(D, b̄). Denote D0(b̄) = argmaxD∈D GTR1(D, b̄). For ease of presentation we denote D0(b̄) by D0 when the dependence on b̄ is clear from the context.

Remark 4.1. D0 is an element of the set of dividing functions, and therefore is a dividing function.

The second generalized trade reduction algorithm, GTR-2, follows.

Definition 4.1. The GTR-2 algorithm - given mechanism M, allocation A, and a set X ⊂ N closed under replacement, GTR-2 operates as follows:
1. Calculate the dividing function D0 as defined above.
2. Use the dividing function D0 to divide A into procurement sets s1, . . . , sk ∈ S.
3. For each sj, starting from the lowest value procurement set, do the following: if for every i ∈ sj ∩ X there is external competition and there is at most one i ∈ sj that does not have internal competition, then keep sj; otherwise, reduce the trade sj.
4. All trading players are charged the critical value for trading. All non-trading players are charged zero (footnote 11).

Footnote 11: In the full version GTR-2 is extended such that it suffices that there exists some time at which the condition of the third step holds. That extension is omitted from the current version due to lack of space.

We will prove that the mechanism produced by GTR-2 maintains the desired properties of IR, IC, and BB. The following lemma shows that the mechanism produced by GTR-2 is IR and IC.

Lemma 4.1. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is an IR and IC mechanism.

Proof. By theorem 3.1 it suffices to prove that the mechanism produced by the GTR-2 algorithm is bid monotonic for every player i. Suppose that i was not reduced when bidding bi; we need to prove that i will not be reduced when bidding b′i > bi. Denote by D1 = D0(b) the dividing function used by GTR-2 when i reported bi and the rest of the players reported b−i. Denote by D′1 = D0(b′i, b−i) the dividing function used by GTR-2 when i reported b′i and the rest of the players reported b−i. Denote by D̄1 a maximal-welfare dividing function that results in GTR-1 reducing i. Assume to the contrary that GTR-2 reduced i from the trade when i reported b′i; then GTR1(D′1, (b′i, b−i)) = GTR1(D̄1, b). Since D1 ∈ D it follows that GTR1(D1, b) > GTR1(D̄1, b), and therefore GTR1(D1, b) > GTR1(D′1, (b′i, b−i)). However, since D1 ∈ D, GTR-2 should not have reduced i: using the dividing function D1 it would have gained a greater welfare than GTR1(D′1, (b′i, b−i)). Thus a contradiction arises, and GTR-2 does not reduce i from the trade when i reports b′i > bi.

Lemma 4.2. For any X, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is a BB mechanism.

Proof. This proof is similar to the proof of lemma 3.3.

Combining the two lemmas above we get:

Theorem 4.1. For any X closed under replacement, the X-external mechanism with critical value pricing scheme produced by the GTR-2 algorithm is an IR, IC and BB mechanism.

Appendix A shows how to calculate D0 for procurement class domains in polynomial time; it is not generally known how to calculate D0 easily in general domains.
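For intuition, the argmax definition of D0 can be evaluated by brute force on tiny instances: enumerate every division of the allocation into procurement sets and keep the one on which the reduction retains maximal welfare. The sketch below is hypothetical and exponential; `toy_gtr1` is a simplified stand-in for the welfare GTR-1 leaves standing, not the real algorithm:

```python
from typing import Callable, Dict, FrozenSet, List

def all_divisions(
    players: FrozenSet[str],
    S: Dict[FrozenSet[str], float],  # procurement sets with their values W(s)
) -> List[List[FrozenSet[str]]]:
    """Enumerate every partition of `players` into procurement sets from S."""
    if not players:
        return [[]]
    pivot = min(players)  # fix one player so each partition is generated once
    out: List[List[FrozenSet[str]]] = []
    for s in S:
        if pivot in s and s <= players:
            for rest in all_divisions(players - s, S):
                out.append([s] + rest)
    return out

def brute_force_d0(
    players: FrozenSet[str],
    S: Dict[FrozenSet[str], float],
    run_gtr1: Callable[[List[FrozenSet[str]]], float],
) -> List[FrozenSet[str]]:
    """D0 = argmax over divisions of the welfare the reduction leaves standing."""
    return max(all_divisions(players, S), key=run_gtr1)

S = {
    frozenset({"b1", "s1"}): 5.0, frozenset({"b1", "s2"}): 2.0,
    frozenset({"b2", "s1"}): 1.0, frozenset({"b2", "s2"}): 3.0,
}

def toy_gtr1(division: List[FrozenSet[str]]) -> float:
    # Stand-in reduction: drop the lowest-valued set, keep the rest.
    return sum(sorted(S[s] for s in division)[1:])

best = brute_force_d0(frozenset({"b1", "b2", "s1", "s2"}), S, toy_gtr1)
# best pairs b1 with s1 and b2 with s2.
```

The exhaustive search only illustrates the definition; computing D0 efficiently in general domains is the open problem stated above.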
Creating a general method for calculating the needed dividing function in polynomial time remains an open question.

4.1 Bounding the Welfare for Procurement-Class Based Domains and Other General Domain Cases

This section shows that in addition to producing a mechanism with the desired properties, GTR-2 also produces a mechanism that maintains high welfare. Since the GTR-2 algorithm finds a budget balanced mechanism in arbitrary domains, we are unable to bound the welfare in the general case. However, we can bound the welfare for procurement-class based domains and for a wide variety of cases in general domains, which includes many cases previously studied.

Definition 4.2. We write freqk([i], sj) to indicate that a class [i] appears in a procurement set sj k times, i.e., there are k members of [i] in sj.

Definition 4.3. Denote by freqk([i], S) the maximal k s.t. there are k members of [i] in some sj, i.e., freqk([i], S) = maxsj∈S freqk([i], sj).

Let ec be the set of equivalence classes in a procurement class based domain mechanism, and let |ec| be the number of those equivalence classes. Using the definition of class appearance frequency we can bound the welfare achieved by the mechanism produced by GTR-2 for procurement class domains (footnote 12):

Lemma 4.3. For procurement class domains with F = 0, the number of procurement sets that are reduced by GTR-2 (footnote 13) is at most |ec| times the maximal frequency of each class. Formally, the maximal number of procurement sets that is reduced is O(Σ[i]∈ec freqk([i], S)).

Proof. Let D be an arbitrary dividing function. We note that by definition any procurement set sj will not be reduced if every i ∈ sj has both internal competition and external competition.

Footnote 12: The welfare achieved by GTR-1 can also be bounded for the cases presented in this section.
However, we focus on GTR-2 as it always achieves better welfare.

Footnote 13: Or GTR-1.

Every procurement set s that is reduced has at least one player i who has no competition. Once s is reduced, all players of [i] have internal competition. So by reducing |ec| procurement sets (one per equivalence class) we cover all the remaining players with internal competition.

If the maximal frequency of every equivalence class were one, then each remaining player t in procurement set sk would also have external competition, as the internal competitors of the players t̄ ∈ sk, t̄ ≠ t, are an external competition for t. If freqk([t], S) players from class [t] were reduced, then there is sufficient external competition for all players in sk. Therefore it suffices to reduce O(Σ[i]∈ec freqk([i], S)) procurement sets in order to ensure that both the requisite internal and external competition exist.

The next theorem follows as an immediate corollary of lemma 4.3:

Theorem 4.2. Given procurement-class based domain mechanisms with H procurement sets, the efficiency is at least a 1 − O((Σ[i]∈ec freqk([i], S))/H) fraction of the optimal welfare.

The following corollaries are direct results of theorem 4.2. All of these corollaries either improve prior results or achieve the same welfare as prior results.

Corollary 4.1. Using GTR-2 for homogeneous double-sided auctions (problem B.1), at most (footnote 14) one procurement set must be reduced.

Similarly, for spatially distributed markets without strategic edges (problem B.3), using GTR-2 improves the result of [3], where a minimum cycle including a buyer and a seller is reduced.

Corollary 4.2. Using GTR-2 for spatially distributed markets without strategic edges, at most one cycle per connected component (footnote 15) will be reduced.

For supply chains (problem B.5), using GTR-2 improves the result of [2, 4], similarly to corollary 4.2.

Corollary 4.3.
Using GTR-2 for supply chains, at most one cycle per connected component (footnote 16) will be reduced.

The following corollary solves the open problem of [3].

Corollary 4.4. For distributed markets on n nodes with strategic agents and paths of bounded length K (problem B.6), it suffices to remove at most K·n procurement sets.

Proof sketch: These will create at least K spanning trees; hence we can disjointly cover every remaining procurement set. This improves on the naive algorithm of reducing n² procurement sets.

We provide results for two special cases of double-sided CA with single value players (problem B.8).

Footnote 14: It is possible that no reductions will be made, for instance when there is a non-trading player who provides the requisite external competition.
Footnote 15: Similar to double-sided auctions, sometimes there will be enough competition without a reduction.
Footnote 16: Similar to double-sided auctions, sometimes there will be enough competition without a reduction.

Corollary 4.5. If there are at most M different kinds of procurement sets, it suffices to remove M procurement sets.

Corollary 4.6. If there are K types of goods and each procurement set consists of at most one of each type, it suffices to remove at most K procurement sets.

5. CONCLUSIONS AND FUTURE WORK

In this paper we presented a general solution procedure called Generalized Trade Reduction (GTR). GTR accepts an IR and IC mechanism as input and outputs mechanisms that are IR, IC and BB.
The output mechanisms achieve welfare that is close to optimal for a wide range of domains.

The GTR procedure improves on existing results such as homogeneous double-sided auctions, distributed markets, and supply chains, and solves several open problems, such as distributed markets with strategic transportation edges and bounded paths, combinatorial double-sided auctions with bounded-size procurement sets, and combinatorial double-sided auctions with a bounded number of procurement sets.

The question of the quality of welfare approximation, both in general domains and in class domains that are not procurement class domains, is an important and interesting open question. We also leave open the question of upper bounds on the quality of the welfare approximation. Although we know that it is impossible to have IR, IC and BB in an efficient mechanism, it would be interesting to have an upper bound on the approximation to welfare achievable by an IR, IC and BB mechanism.

The GTR procedure outputs a mechanism which depends on a set X ⊂ N. Another interesting question is what the quality of approximation is when X is chosen randomly from N before valuations are declared.

Acknowledgements

The authors wish to thank Eva Tardos et al. for sharing their results with us. The authors also wish to express their gratitude for the helpful comments of the anonymous reviewers.

6. REFERENCES

[1] A. Archer and E. Tardos. Frugal path mechanisms. In Proceedings of the Thirteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.
[2] M. Babaioff and N. Nisan. Concurrent Auctions Across the Supply Chain. Journal of Artificial Intelligence Research, 2004.
[3] M. Babaioff, N. Nisan and E. Pavlov. Mechanisms for a Spatially Distributed Market. In Proceedings of the 5th ACM Conference on Electronic Commerce, 2004.
[4] M. Babaioff and W. E. Walsh.
Incentive-Compatible, Budget-Balanced, yet Highly Efficient Auctions for Supply Chain Formation. In Proceedings of the Fourth ACM Conference on Electronic Commerce, 2003.
[5] Y. Bartal, R. Gonen and P. La Mura. Negotiation-range mechanisms: exploring the limits of truthful efficient markets. In Proceedings of the 5th ACM Conference on Electronic Commerce, 2004.
[6] L. Blume, D. Easley, J. Kleinberg and E. Tardos. Trading Networks with Price-Setting Agents. In Proceedings of the 8th ACM Conference on Electronic Commerce, 2007.
[7] R. Cavallo. Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. In Proceedings of the 5th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2006).
[8] E. H. Clarke. Multipart Pricing of Public Goods. Public Choice, vol. 11, pp. 17-33, 1971.
[9] L. Y. Chu and Z.-J. M. Shen. Agent Competition Double Auction Mechanism. Management Science, vol. 52(8), 2006.
[10] T. Groves. Incentives in Teams. Econometrica, vol. 41, pp. 617-631, 1973.
[11] D. Lehmann, L. I. O'Callaghan and Y. Shoham. Truth Revelation in Approximately Efficient Combinatorial Auctions. Journal of the ACM, vol. 49(5), pp. 577-602, 2002.
[12] H. Leonard. Elicitation of Honest Preferences for the Assignment of Individuals to Positions. Journal of Political Economy, 1983.
[13] R. P. McAfee. A Dominant Strategy Double Auction. Journal of Economic Theory, vol. 56, pp. 434-450, 1992.
[14] A. Mu'alem and N. Nisan. Truthful Approximation Mechanisms for Restricted Combinatorial Auctions. In Proceedings of AAAI 2002.
[15] R. B. Myerson and M. A. Satterthwaite. Efficient Mechanisms for Bilateral Trading. Journal of Economic Theory, vol. 29, pp. 265-281, 1983.
[16] R. Roundy, R. Chen, G. Janakiraman and R. Q. Zhang. Efficient Auction Mechanisms for Supply Chain Procurement. School of Operations Research and Industrial Engineering, Cornell University, 2001.
[17] W.
Vickrey. Counterspeculation, Auctions and Competitive Sealed Tenders. Journal of Finance, vol. 16, pp. 8-37, 1961.
APPENDIX
A. CALCULATING THE OPTIMAL DIVIDING FUNCTION IN PROCUREMENT CLASS DOMAINS IN POLYNOMIAL TIME
In this section we show how to calculate the optimal dividing function for procurement class domains in polynomial time. We first define a special dividing function D0 which is easy to calculate.
We define the dividing function D0 recursively as follows: at stage j, D0 divides the trading players into a set Aj and its complement Āj such that
• Aj is a procurement set;
• Āj can be divided into a disjoint union of procurement sets;
• Aj has minimal value among all possible such partitions.
Define sj = Aj and recursively invoke D0 on Āj until Āj = ∅.
We now prove that D0 is the required dividing function.
Lemma A.1. For procurement class domains, D0 is an optimal dividing function.
Proof. Since the domain is a procurement class domain, for every reduced procurement set the set of players which achieve competition (either internal or external) is fixed. Therefore, the number of procurement sets which are reduced is independent of the dividing function D. Welfare is therefore optimized by reducing the procurement sets with the least value, which is exactly what D0 does.
B. PROBLEMS AND EXAMPLES
For completeness we present in this section the formal definitions of the problems that we use to illustrate our mechanism.
The first problem that we define is the double-sided auction with homogeneous goods.
Problem B.1. Double-sided auction with homogeneous goods: There are m sellers, each of which has a single good (all goods are identical), and n buyers, each of which is interested in receiving a good. We denote the set of sellers by S and the set of buyers by B. Every player i ∈ S ∪ B (both buyers and sellers) has a value vi for the good.
In this model a procurement set consists of a single buyer and a single seller, i.e., |s| = 2. The value of a procurement set is W(s) = vj − vi where j ∈ B and i ∈ S, i.e., the gain from trade.
If procurement sets are created by matching the highest value buyer to the lowest value seller, then [13]'s deterministic trade reduction mechanism^17 reduces the lowest value procurement set.
^17 It is also possible to randomize the reduction of procurement sets so as to achieve an expected budget of zero, similar to [13]; the details are obvious and omitted.
A related model is the pair related costs [9] model.
Problem B.2. The pair related costs: A double-sided auction B.1 in which every pair of players i ∈ S and j ∈ B has a related cost F(i, j) ≥ 0 in order to trade. F(i, j) is a friction cost which should be minimized in order to maximize welfare.
[9] defines two budget-balanced mechanisms for this case. One of [9]'s mechanisms has the set of buyers B as the X set for the X-external mechanism, and the other has the set of sellers S as the X set for the X-external mechanism.
A similar model is the spatially distributed markets (SDM) model [3], in which there is a graph imposing relationships on the cost.
Problem B.3. Spatially distributed markets: There is a graph G = (V, E) such that each v ∈ V has a set of sellers Sv and a set of buyers Bv. Each edge e ∈ E has an associated cost, which is the cost to transport a single unit of good along the edge. The edges are non-strategic but all players are strategic.
[3] defines a budget-balanced mechanism for this case. Our paper improves on [3]'s result.
Another graph model is the model defined in [6].
Problem B.4. Trading networks: Given a graph with buyers and sellers situated on its nodes, all trade must pass through a trader. In this case procurement sets are of the form (buyer, seller, trader), where the possible sets of this form are defined by the graph.
The supply chain model [2, 4] can be seen as a generalization of [6] in which procurement sets are of the form (producer, consumer, trader1, . . . , traderk).
Problem B.5. Supply chain: There is a set D of agents, a set G of goods, and a graph G = (V, E) which defines possible trading relationships. Agents can require an input of multiple quantities of goods in order to output a single good. The producer type of player can produce goods out of nothing, the consumer has a valuation, and an entire chain of interim traders is necessary to create a viable procurement set.
[2, 4] consider unique manufacturing technology, in which the graph defining possible relationships is a tree.
All of the above problems are procurement-class domains. We also consider several problems which are not procurement class domains, for which the questions of budget balance have generally been left as open problems.
An open problem raised in [3] is the SDM model in which edges are strategic.
Problem B.6. Spatially distributed markets with strategic edges: There is a graph G = (V, E) such that each v ∈ V has a set of sellers Sv and a set of buyers Bv. Each edge e ∈ E has an associated cost, which is the cost to transport a single unit of good along the edge. Each buyer, seller and edge has a value for the trade, i.e., all entities are strategic.
[2, 4] left open the question of budget-balanced mechanisms for supply chains where there is no unique manufacturing technology. It is easy to see that this problem is not a procurement class domain.
Another interesting problem is transport networks.
Problem B.7. Transport networks: A graph G = (V, E) where the edges are strategic players with costs, and the goal is to find a minimum cost transportation route between a pair of privileged nodes Source, Target ∈ V.
It was shown in [1] that the efficient allocation can have a budget deficit that is linear in the number of players. Clearly, this problem is not a procurement class domain, and [1] left the question of a budget-balanced mechanism open.
Another non-procurement-class based domain mechanism is the double-sided combinatorial auction (CA) with single-value players.
Problem B.8. Double-sided combinatorial auction (CA) with single-value players: There exists a set S of sellers, each selling a single good. There also exists a set B of buyers, each interested in bundles from 2^S.^18
^18 We abuse notation and identify the seller with the good.
There are two variants of this problem. In the single-minded case each buyer has a positive value for only a single subset, whereas in the multi-minded case each buyer can have multiple bundles with positive valuation but all of the values are the same. In both cases we assume free disposal, so that all bundles containing the desired bundle have the same value for the buyer.
We also consider problems that are non class domains.
Problem B.9. Double-sided combinatorial auction (CA) with general multi-minded players: Same as B.8, but each buyer can have multiple bundles with positive valuation which are not necessarily the same.
C.
COMPARING DIFFERENT CHOICES OF X
The choice of X can have a large impact on the welfare (and revenue) of the reduced mechanism, and therefore the question arises of how one should choose the set X.
As the X-external mechanism is required to maintain IC, the choice of X clearly cannot depend on the values of the players, as otherwise the reduced mechanism would not be truthful.
In this section we motivate the choice of small X sets for procurement class domains, and give intuition that this may also be the case for some other domains.
We start by illustrating the effect of the set X on the welfare and revenue in the double-sided auction with homogeneous goods (Problem B.1). Similar examples can be constructed for the other problems defined in Appendix B.
The following example shows an effect on the welfare.
Example C.1. There are two buyers and two sellers, and two non-intersecting (incomparable) sets X = {buyers} and Y = {sellers}. If the values of the buyers are 101, 100 and the sellers are 150, 1, then the X-external mechanism will yield a gain from trade of 0 and the Y-external mechanism will yield a gain from trade of 100.
Conversely, if the buyers' values are 100, 1 and the sellers' are 2, 3, the X-external mechanism will yield a gain from trade of 98 and the Y-external mechanism will yield a gain from trade of zero.
The example clearly shows that the difference between the X-external and the Y-external mechanism is unbounded, although, as shown above, the fraction each of them reduces can be bounded, and therefore the multiplicative ratio between them can be bounded (as a function of the number of trades).
On the revenue side we cannot even bound the ratio, as seen from the following example:
Example C.2. Consider k buyers with value 100 and k + 1 sellers with value 1.
If X = {buyers} then there is no need to reduce any trade: all of the buyers receive the good and pay 1, and each of the k sellers who sell receives 1.
This yields a net revenue of zero.
If Y = {sellers} then one must reduce a trade! This means that all of the buyers pay 100 while all of the sellers still receive 1. The revenue is then 99k.
Similarly, an example can be constructed that yields much higher revenue for the X-external mechanism as compared to the Y-external mechanism.
The above examples refer to sets X and Y which do not intersect and are incomparable. The following theorem compares the X-external and Y-external mechanisms for procurement class domains where X is a subset of Y.
Theorem C.1. For procurement class domains, if X ⊂ Y and for any s ∈ S, s ∩ X ∩ Y = ∅, then:
1. The efficiency of the X-external mechanism in GTR-1 (and hence GTR-2) is at least that of the Y-external mechanism.
2. Any winning player that wins in both the X-external and Y-external mechanisms pays no less in the Y-external than in the X-external, and therefore the ratio of budget to welfare is no worse in the Y-external than in the X-external.
Proof. 1. For any dividing function D, if there is a procurement set sj that is reduced in the X-external mechanism, there are two possible reasons:
(a) sj lacks external competition in the X-external mechanism. In this case sj lacks external competition in the internal mechanism.
(b) sj has all required external competitions in the X-external mechanism. In this case sj has all required internal competitions in the Y-external mechanism by Lemma 3.1, but might lack some external competition for sj ∪ {Y \ X} and be reduced.
2. This follows from the fact that for any ordering D, any procurement set s that is reduced in the X-external mechanism is also reduced in the Y-external mechanism. Therefore, the critical value is no less in the Y-external mechanism than in the X-external mechanism.
Remark C.1.
For any two sets X, Y it is easy to build an example in which the X-external and Y-external mechanisms reduce the same procurement sets, so the inequality is weak.
Theorem C.1 shows an inequality in welfare as well as for payments, but it is easy to construct an example in which the revenue can increase for X as compared to Y, as well as the opposite. This suggests that in general we want X to be as small as possible, although in some domains it is not possible to compare different X's.
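The welfare comparison in Example C.1 can be checked with a short sketch. This is a simplified illustration rather than the full GTR procedure (function names are ours): it computes the efficient gain from trade by matching the highest-value buyers to the lowest-value sellers, and the welfare left after the lowest-value procurement set is reduced, which is the reduction performed in the example's first instance.

```python
def efficient_trades(buyer_values, seller_values):
    """Match the highest-value buyers to the lowest-value sellers and
    return the gains from trade of the pairs that trade efficiently."""
    gains = []
    for b, s in zip(sorted(buyer_values, reverse=True), sorted(seller_values)):
        if b > s:                       # the pair has positive gain from trade
            gains.append(b - s)
    return gains

def welfare_after_reduction(buyer_values, seller_values):
    """Welfare once the lowest-value procurement set has been reduced,
    in the spirit of the deterministic trade reduction of [13]."""
    gains = efficient_trades(buyer_values, seller_values)
    return sum(gains) - min(gains) if gains else 0

# First instance of Example C.1: buyers 101, 100; sellers 150, 1.
print(efficient_trades([101, 100], [150, 1]))         # [100]: one efficient trade
print(welfare_after_reduction([101, 100], [150, 1]))  # 0: the only trade is reduced
```

With the second instance (buyers 100, 1; sellers 2, 3) the efficient gain from trade is 98 and reduction again destroys it, matching the zero-welfare side of the example.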
Understanding User Behavior in Online Feedback Reporting

ABSTRACT
Online reviews have become increasingly popular as a way to judge the quality of various products and services. Previous work has demonstrated that contradictory reporting and underlying user biases make judging the true worth of a service difficult. In this paper, we investigate underlying factors that influence user behavior when reporting feedback. We look at two sources of information besides numerical ratings: linguistic evidence from the textual comment accompanying a review, and patterns in the time sequence of reports. We first show that groups of users who amply discuss a certain feature are more likely to agree on a common rating for that feature. Second, we show that a user's rating partly reflects the difference between true quality and prior expectation of quality as inferred from previous reviews. Both give us a less noisy way to produce rating estimates and reveal the reasons behind user bias. Our hypotheses were validated by statistical evidence from hotel reviews on the TripAdvisor website.

1. MOTIVATIONS
The spread of the internet has made it possible for online feedback forums (or reputation mechanisms) to become an important channel for word-of-mouth regarding products, services or other types of commercial interactions. Numerous empirical studies [10, 15, 13, 5] show that buyers seriously consider online feedback when making purchasing decisions, and are willing to pay reputation premiums for products or services that have a good reputation.
Recent analysis, however, raises important questions regarding the ability of existing forums to reflect the real quality of a product.
In the absence of clear incentives, users\nwith a moderate outlook will not bother to voice their\nopinions, which leads to an unrepresentative sample of reviews.\nFor example, [12, 1] show that Amazon1\nratings of books or\nCDs follow with great probability bi-modal, U-shaped\ndistributions where most of the ratings are either very good,\nor very bad. Controlled experiments, on the other hand,\nreveal opinions on the same items that are normally\ndistributed. Under these circumstances, using the arithmetic\nmean to predict quality (as most forums actually do) gives\nthe typical user an estimator with high variance that is often\nfalse.\nImproving the way we aggregate the information available\nfrom online reviews requires a deep understanding of the\nunderlying factors that bias the rating behavior of users. Hu\net al. [12] propose the Brag-and-Moan Model where users\nrate only if their utility of the product (drawn from a normal\ndistribution) falls outside a median interval. The authors\nconclude that the model explains the empirical distribution\nof reports, and offers insights into smarter ways of estimating\nthe true quality of the product.\nIn the present paper we extend this line of research, and\nattempt to explain further facts about the behavior of users\nwhen reporting online feedback. Using actual hotel reviews\nfrom the TripAdvisor2\nwebsite, we consider two additional\nsources of information besides the basic numerical ratings\nsubmitted by users. The first is simple linguistic evidence\nfrom the textual review that usually accompanies the\nnumerical ratings. We use text-mining techniques similar to\n[7] and [3], however, we are only interested in identifying\nwhat aspects of the service the user is discussing, without\ncomputing the semantic orientation of the text. We find\nthat users who comment more on the same feature are more\nlikely to agree on a common numerical rating for that\nparticular feature. 
Intuitively, lengthy comments reveal the importance of the feature to the user. Since people tend to be more knowledgeable in the aspects they consider important, users who discuss a given feature in more detail might be assumed to have more authority in evaluating that feature.
Second, we investigate the relationship between a review and the reviews that preceded it.

Figure 1: The TripAdvisor page displaying reviews for a popular Boston hotel. Name of hotel and advertisements were deliberately erased.

1 http://www.amazon.com
2 http://www.tripadvisor.com/

A perusal of online reviews shows that ratings are often part of discussion threads, where one post is not necessarily independent of other posts. One may see, for example, users who make an effort to contradict, or vehemently agree with, the remarks of previous users. By analyzing the time sequence of reports, we conclude that past reviews influence future reports, as they create some prior expectation regarding the quality of service. The subjective perception of the user is influenced by the gap between the prior expectation and the actual performance of the service [17, 18, 16, 21], which will later be reflected in the user's rating. We propose a model that captures the dependence of ratings on prior expectations, and validate it using the empirical data we collected.
Both results can be used to improve the way reputation mechanisms aggregate the information from individual reviews. Our first result can be used to determine a feature-by-feature estimate of quality, where for each feature a different subset of reviews (i.e., those with lengthy comments on that feature) is considered. The second leads to an algorithm that outputs a more precise estimate of the real quality.
2. THE DATA SET
We use in this paper real hotel reviews collected from the popular travel site TripAdvisor.
TripAdvisor indexes hotels\nfrom cities across the world, along with reviews written by\ntravelers. Users can search the site by giving the hotel\"s\nname and location (optional). The reviews for a given hotel\nare displayed as a list (ordered from the most recent to the\noldest), with 5 reviews per page. The reviews contain:\n\u2022 information about the author of the review (e.g., dates\nof stay, username of the reviewer, location of the\nreviewer);\n\u2022 the overall rating (from 1, lowest, to 5, highest);\n\u2022 a textual review containing a title for the review, free\ncomments, and the main things the reviewer liked and\ndisliked;\n\u2022 numerical ratings (from 1, lowest, to 5, highest) for\ndifferent features (e.g., cleanliness, service, location,\netc.)\nBelow the name of the hotel, TripAdvisor displays the\naddress of the hotel, general information (number of rooms,\nnumber of stars, short description, etc), the average overall\nrating, the TripAdvisor ranking, and an average rating for\neach feature. Figure 1 shows the page for a popular Boston\nhotel whose name (along with advertisements) was explicitly\nerased.\nWe selected three cities for this study: Boston, Sydney\nand Las Vegas. For each city we considered all hotels that\nhad at least 10 reviews, and recorded all reviews. Table 1\npresents the number of hotels considered in each city, the\ntotal number of reviews recorded for each city, and the\ndistribution of hotels with respect to the star-rating (as\navailable on the TripAdvisor site). 
Note that not all hotels have a star-rating.

Table 1: A summary of the data set.
City        # Reviews   # Hotels   # of Hotels with 1, 2, 3, 4 & 5 stars
Boston      3993        58         1 + 3 + 17 + 15 + 2
Sydney      1371        47         0 + 0 + 9 + 13 + 10
Las Vegas   5593        40         0 + 3 + 10 + 9 + 6

For each review we recorded the overall rating, the textual review (title and body of the review) and the numerical rating on 7 features: Rooms(R), Service(S), Cleanliness(C), Value(V), Food(F), Location(L) and Noise(N). TripAdvisor does not require users to submit anything other than the overall rating, hence a typical review rates few additional features, regardless of the discussion in the textual comment. Only the features Rooms(R), Service(S), Cleanliness(C) and Value(V) are rated by a significant number of users. However, we also selected the features Food(F), Location(L) and Noise(N) because they are referred to in a significant number of textual comments. For each feature we record the numerical rating given by the user, or 0 when the rating is missing. The typical length of the textual comment is approximately 200 words. All data was collected by crawling the TripAdvisor site in September 2006.
2.1 Formal notation
We will formally refer to a review by a tuple (r, T) where:
• r = (rf) is a vector containing the ratings rf ∈ {0, 1, . . . , 5} for the features f ∈ F = {O, R, S, C, V, F, L, N}; note that the overall rating, rO, is abusively recorded as the rating for the feature Overall(O);
• T is the textual comment that accompanies the review.
Reviews are indexed according to the variable i, such that (r^i, T^i) is the i-th review in our database. Since we don't record the username of the reviewer, we will also say that the i-th review in our data set was submitted by user i. When we need to consider only the reviews of a given hotel, h, we will use (r^i(h), T^i(h)) to denote the i-th review about the hotel h.
3.
EVIDENCE FROM TEXTUAL\nCOMMENTS\nThe free textual comments associated to online reviews\nare a valuable source of information for understanding the\nreasons behind the numerical ratings left by the reviewers.\nThe text may, for example, reveal concrete examples of\naspects that the user liked or disliked, thus justifying some of\nthe high, respectively low ratings for certain features. The\ntext may also offer guidelines for understanding the\npreferences of the reviewer, and the weights of different features\nwhen computing an overall rating.\nThe problem, however, is that free textual comments are\ndifficult to read. Users are required to scroll through many\nreviews and read mostly repetitive information. Significant\nimprovements would be obtained if the reviews were\nautomatically interpreted and aggregated. Unfortunately, this\nseems a difficult task for computers since human users often\nuse witty language, abbreviations, cultural specific phrases,\nand the figurative style.\nNevertheless, several important results use the textual\ncomments of online reviews in an automated way. Using well\nestablished natural language techniques, reviews or parts of\nreviews can be classified as having a positive or negative\nsemantic orientation. Pang et al. [2] classify movie reviews\ninto positive/negative by training three different classifiers\n(Naive Bayes, Maximum Entropy and SVM) using\nclassification features based on unigrams, bigrams or part-of-speech\ntags.\nDave et al. [4] analyze reviews from CNet and\nAmazon, and surprisingly show that classification features based\non unigrams or bigrams perform better than higher-order\nn-grams. This result is challenged by Cui et al. 
[3] who\nlook at large collections of reviews crawled from the web.\nThey show that the size of the data set is important, and\nthat bigger training sets allow classifiers to successfully use\nmore complex classification features based on n-grams.\nHu and Liu [11] also crawl the web for product reviews and\nautomatically identify product attributes that have been\ndiscussed by reviewers. They use Wordnet to compute the\nsemantic orientation of product evaluations and summarize\nuser reviews by extracting positive and negative evaluations\nof different product features. Popescu and Etzioni [20]\nanalyze a similar setting, but use search engine hit-counts to\nidentify product attributes; the semantic orientation is\nassigned through the relaxation labeling technique.\nGhose et al. [7, 8] analyze seller reviews from the Amazon\nsecondary market to identify the different dimensions (e.g.,\ndelivery, packaging, customer support, etc.) of reputation.\nThey parse the text, and tag the part-of-speech for each\nword. Frequent nouns, noun phrases and verbal phrases\nare identified as dimensions of reputation, while the\ncorresponding modifiers (i.e., adjectives and adverbs) are used to\nderive numerical scores for each dimension. The enhanced\nreputation measure correlates better with the pricing\ninformation observed in the market. Pavlou and Dimoka [19]\nanalyze eBay reviews and find that textual comments have\nan important impact on reputation premiums.\nOur approach is similar to the previously mentioned\nworks, in the sense that we identify the aspects (i.e.,\nhotel features) discussed by the users in the textual reviews.\nHowever, we do not compute the semantic orientation of the\ntext, nor attempt to infer missing ratings.\nWe define the weight, wi\nf , of feature f \u2208 F in the text\nTi\nassociated with the review (ri\n, Ti\n), as the fraction of Ti\ndedicated to discussing aspects (both positive and negative)\nrelated to feature f. 
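A minimal sketch of this weight computation, assuming tiny stand-in word lists (the study's lists L_f contain roughly 50 words per feature) and omitting the title and root-word adjustments described next:

```python
# Toy stand-ins for the word lists L_f; illustrative only, not the
# actual lists used in the study (see Appendix A for the Rooms list).
WORD_LISTS = {
    "Rooms":    {"room", "bed", "bathroom", "pillow"},
    "Service":  {"staff", "service", "reception"},
    "Location": {"location", "walk", "subway", "area"},
}

def feature_weights(text):
    """w_f = (occurrences of L_f terms in T) / (total occurrences over all features)."""
    tokens = text.lower().split()
    counts = {f: sum(tokens.count(w) for w in words)
              for f, words in WORD_LISTS.items()}
    total = sum(counts.values())
    return {f: c / total if total else 0.0 for f, c in counts.items()}

w = feature_weights("the staff were friendly and the location made it easy to walk")
# Service terms occur once, Location terms twice, Rooms terms not at all,
# so w is {"Rooms": 0.0, "Service": 1/3, "Location": 2/3}.
```

Each feature's weight is thus the share of list-term occurrences attributable to that feature, mirroring the fraction-of-text definition above.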
We propose an elementary method to\napproximate the values of these weights. For each feature\nwe manually construct the word list Lf containing\napproximately 50 words that are most commonly associated to the\nfeature f. The initial words were selected from reading some\nof the reviews, and seeing what words coincide with\ndiscussion of which features. The list was then extended by adding\nall thesaurus entries that were related to the initial words.\nFinally, we brainstormed for missing words that would\nnormally be associated with each of the features.\nLet Lf \u2229Ti\nbe the list of terms common to both Lf and Ti.\nEach term of Lf is counted the number of times it appears\nin Ti\n, with two exceptions:\n\u2022 in cases where the user submits a title to the review,\nwe account for the title text by appending it three\ntimes to the review text Ti\n. The intuitive assumption\nis that the user\"s opinion is more strongly reflected in\nthe title, rather than in the body of the review. For\nexample, many reviews are accurately summarized by\ntitles such as Excellent service, terrible location or\nBad value for money;\n\u2022 certain words that occur only once in the text are\ncounted multiple times if their relevance to that\nfeature is particularly strong. These were \"root\" words for\neach feature (e.g., \"staff\" is a root word for the feature\nService), and were weighted either 2 or 3. 
Each feature was assigned up to 3 such root words, so almost all words are counted only once.
The list of words for the feature Rooms is given for reference in Appendix A.
The weight w^i_f is computed as:

w^i_f = |L_f ∩ T^i| / Σ_{g∈F} |L_g ∩ T^i|     (1)

where |L_f ∩ T^i| is the number of terms common to L_f and T^i. The weight for the feature Overall was set to min{|T^i|/5000, 1}, where |T^i| is the number of characters in T^i.
The following is a TripAdvisor review for a Boston hotel (the name of the hotel is omitted): I'll start by saying that I'm more of a Holiday Inn person than a *** type. So I get frustrated when I pay double the room rate and get half the amenities that I'd get at a Hampton Inn or Holiday Inn. The location was definitely the main asset of this place. It was only a few blocks from the Hynes Center subway stop and it was easy to walk to some good restaurants in the Back Bay area. Boylston isn't far off at all. So I had no trouble with foregoing a rental car and taking the subway from the airport to the hotel and using the subway for any other travel. Otherwise, they make you pay for anything and everything. And when you've already dropped $215/night on the room, that gets frustrating. The room itself was decent, about what I would expect. Staff was also average, not bad and not excellent. Again, I think you're paying for location and the ability to walk to a lot of good stuff. But I think next time I'll stay in Brookline, get more amenities, and use the subway a bit more.
The numerical ratings associated with this review are rO = 3, rR = 3, rS = 3, rC = 4, rV = 2 for the features Overall(O), Rooms(R), Service(S), Cleanliness(C) and Value(V) respectively.
The ratings for the features Food(F), Location(L) and Noise(N) are absent (i.e., rF = rL = rN = 0).
The weights wf are computed from the following lists of common terms:
LR ∩ T = {room}; wR = 0.066
LS ∩ T = {3 × staff, amenities}; wS = 0.267
LC ∩ T = ∅; wC = 0
LV ∩ T = {$, rate}; wV = 0.133
LF ∩ T = {restaurant}; wF = 0.067
LL ∩ T = {2 × center, 2 × walk, 2 × location, area}; wL = 0.467
LN ∩ T = ∅; wN = 0
The root words "staff" and "center" were tripled and doubled respectively. The overall weight of the textual review is wO = 0.197. These values account reasonably well for the weights of different features in the discussion of the reviewer.
One point to note is that some terms in the lists Lf possess an inherent semantic orientation. For example, the word "grime" (belonging to the list LC) would be used most often to assert the presence, and not the absence, of grime. This is unavoidable, but care was taken to ensure words from both sides of the spectrum were used. For this reason, some lists such as LR contain only nouns of objects that one would typically describe in a room (see Appendix A).
The goal of this section is to analyze the influence of the weights w^i_f on the numerical ratings r^i_f. Intuitively, users who spent a lot of their time discussing a feature f (i.e., w^i_f is high) had something to say about their experience with regard to this feature. Obviously, feature f is important for user i. Since people tend to be more knowledgeable in the aspects they consider important, our hypothesis is that the ratings r^i_f corresponding to high weights w^i_f constitute a subset of expert ratings for feature f.
Figure 2 plots the distribution of the ratings r^i(h)_C against the weights w^i(h)_C for the cleanliness of a Las Vegas hotel, h. Here, the high ratings are restricted to the reviews that discuss the cleanliness only a little.
Whenever cleanliness appears in the discussion, the ratings are low. Many hotels exhibit similar rating patterns for various features. Ratings corresponding to low weights span the whole spectrum from 1 to 5, while the ratings corresponding to high weights are more grouped together (either around good or bad ratings). We therefore make the following hypothesis:
Hypothesis 1. The ratings r^i_f corresponding to the reviews where w^i_f is high are more similar to each other than to the overall collection of ratings.
To test the hypothesis, we take the entire set of reviews, and feature by feature, we compute the standard deviation of the ratings with high weights, and the standard deviation of the entire set of ratings. High weights were defined as those belonging to the upper 20% of the weight range for the corresponding feature. If Hypothesis 1 were true, the standard deviation of all ratings should be higher than the standard deviation of the ratings with high weights.

Figure 2: The distribution of ratings against the weight of the cleanliness feature.

We use a standard T-test to measure the significance of the results. City by city and feature by feature, Table 2 presents the average standard deviation of all ratings, and the average standard deviation of ratings with high weights. Indeed, the ratings with high weights have lower standard deviation, and the results are significant at the standard 0.05 significance threshold (although for certain cities taken independently there doesn't seem to be a significant difference, the results are significant for the entire data set). Please note that only the features O, R, S, C and V were considered, since for the others (F, L and N) we didn't have enough ratings.
Table 2: Average standard deviation for all ratings, and average standard deviation for ratings with high weights.
In square brackets, the corresponding p-values for a positive difference between the two.

City            O        R        S        C        V
Boston  all    1.189    0.998    1.144    0.935    1.123
        high   0.948    0.778    0.954    0.767    0.891
        p-val [0.000]  [0.004]  [0.045]  [0.080]  [0.009]
Sydney  all    1.040    0.832    1.101    0.847    0.963
        high   0.801    0.618    0.691    0.690    0.798
        p-val [0.012]  [0.023]  [0.000]  [0.377]  [0.037]
Vegas   all    1.272    1.142    1.184    1.119    1.242
        high   1.072    0.752    1.169    0.907    1.003
        p-val [0.0185] [0.001]  [0.918]  [0.120]  [0.126]

Hypothesis 1 not only provides some basic understanding regarding the rating behavior of online users, it also suggests some ways of computing better quality estimates. We can, for example, construct a feature-by-feature quality estimate with much lower variance: for each feature we take the subset of reviews that amply discuss that feature, and output as a quality estimate the average rating for this subset. Initial experiments suggest that the average feature-by-feature ratings computed in this way are different from the average ratings computed on the whole data set. Given that, indeed, high weights are indicators of expert opinions, the estimates obtained in this way are more accurate than the current ones. Nevertheless, the validation of this underlying assumption requires further controlled experiments.
4. THE INFLUENCE OF PAST RATINGS
Two important assumptions are generally made about reviews submitted to online forums. The first is that ratings truthfully reflect the quality observed by the users; the second is that reviews are independent from one another. While anecdotal evidence [9, 22] challenges the first assumption, in this section we address the second.
A perusal of online reviews shows that reviews are often part of discussion threads, where users make an effort to contradict, or vehemently agree with, the remarks of previous users. Consider, for example, the following review:
I don't understand the negative reviews...
the hotel was a\nlittle dark, but that was the style. It was very artsy. Yes\nit was close to the freeway, but in my opinion the sound\nof an occasional loud car is better than hearing the ding\nding of slot machines all night! The staff on-hand is\nFABULOUS. The waitresses are great (and *** does not deserve\nthe bad review she got, she was 100% attentive to us!), the\nbartenders are friendly and professional at the same time...\nHere, the user was disturbed by previous negative reports,\naddressed these concerns, and set about trying to correct\nthem. Not surprisingly, his ratings were considerably higher\nthan the average ratings up to this point.\nIt seems that TripAdvisor users regularly read the reports\nsubmitted by previous users before booking a hotel, or\nbefore writing a review. Past reviews create some prior\nexpectation regarding the quality of service, and this expectation\nhas an influence on the submitted review. We believe this\nobservation holds for most online forums. The subjective\nperception of quality is directly proportional to how well\nthe actual experience meets the prior expectation, a fact\nconfirmed by an important line of econometric and\nmarketing research [17, 18, 16, 21].\nThe correlation between the reviews has also been\nconfirmed by recent research on the dynamics of online review\nforums [6].\n4.1 Prior Expectations\nWe define the prior expectation of user i regarding the\nfeature f as the average of the previously available ratings\non the feature f:\ne_f(i) = (1/(i\u22121)) \u03a3_{j<i} r^j_f\nThis leads to the following hypothesis:\nHypothesis 2. The ratings submitted when the prior\nexpectation e_f(i) is low tend to be higher than the ratings\nsubmitted when e_f(i) is high.\nTo test it, we split the ratings for each feature f into two\nsets according to the expectation at submission time:\nR^high_f = {r^i_f | e_f(i) > \u03b8};\nR^low_f = {r^i_f | e_f(i) < \u03b8}\nThese sets are specific for each (hotel, feature) pair, and\nin our experiments we took \u03b8 = 4. This rather high value\nis close to the average rating across all features across all\nhotels, and is justified by the fact that our data set contains\nmostly high quality hotels.\nFor each city, we take all hotels and compute the average\nratings in the sets R^high_f and R^low_f (see Table 3).
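As a concrete sketch of this computation (our own illustration; the function and variable names are not from the paper), the running-average expectation and the two sets can be computed as:

```python
def expectations(ratings):
    """e_f(i): mean of the ratings submitted before review i (defined from the second review on)."""
    e, total = [], 0.0
    for i, r in enumerate(ratings):
        if i > 0:
            e.append(total / i)  # average of ratings[0..i-1]
        total += r
    return e  # e[k] is the expectation faced by review k+1

def split_by_expectation(ratings, theta=4.0):
    """Partition ratings (excluding the first) into R_high / R_low by e_f(i)."""
    e = expectations(ratings)
    r_high = [r for r, ex in zip(ratings[1:], e) if ex > theta]
    r_low = [r for r, ex in zip(ratings[1:], e) if ex < theta]
    return r_high, r_low
```

With θ = 4, as in the experiments, a review is assigned to R_high exactly when the mean of all earlier ratings exceeds 4.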
The average\nrating amongst reviews following low prior expectations is\nsignificantly higher than the average rating following high\nexpectations.\nAs further evidence, we consider all hotels for which the\nfunction e_V(i) (the expectation for the feature Value) has\na high value (greater than 4) for some i, and a low value\n(less than 4) for some other i. Intuitively, these are the\nhotels for which there is a minimal degree of variation in the\ntimely sequence of reviews: i.e., the cumulative average of\nratings was at some point high and afterwards became low,\nor vice-versa. Such variations are observed for about half\nof all hotels in each city. Figure 3 plots the median (across\nconsidered hotels) rating, r_V, when e_f(i) is not more than\nx but greater than x \u2212 0.5.\nFigure 3: The ratings tend to decrease as the\nexpectation increases.\nThere are two ways to interpret the function e_f(i):\n\u2022 The expected value for feature f obtained by user i\nbefore his experience with the service, acquired by\nreading reports submitted by past users. In this case, an\noverly high value for e_f(i) would drive the user to\nsubmit a negative report (or vice versa), stemming from\nthe difference between the actual value of the service,\nand the inflated expectation of this value acquired\nbefore his experience.\n\u2022 The expected value of feature f for all subsequent\nvisitors of the site, if user i were not to submit a report.
In\nthis case, the motivation for a negative report\nfollowing an overly high value of e_f is different: user i seeks\nto correct the expectation of future visitors to the site.\nUnlike the interpretation above, this does not require\nthe user to derive an a priori expectation for the value\nof f.\nNote that neither interpretation implies that the average\nup to report i is inversely related to the rating at report i.\nThere might exist a measure of influence exerted by past\nreports that pushes the user behind report i to submit\nratings which to some extent conform with past reports: a low\nvalue for e_f(i) can influence user i to submit a low rating\nfor feature f because, for example, he fears that submitting\na high rating will make him out to be a person with low\nstandards5\n. This, at first, appears to contradict Hypothesis\n2. However, this conformity rating cannot continue\nindefinitely: once the set of reports projects a sufficiently deflated\nestimate for v_f, future reviewers with comparatively positive\nimpressions will seek to correct this misconception.\n4.2 Impact of textual comments on quality\nexpectation\nFurther insight into the rating behavior of TripAdvisor\nusers can be obtained by analyzing the relationship between\nthe weights w_f and the values e_f(i). In particular, we\nexamine the following hypothesis:\nHypothesis 3. When a large proportion of the text of a\nreview discusses a certain feature, the difference between the\nrating for that feature and the average rating up to that point\ntends to be large.\nThe intuition behind this claim is that when the user is\nadamant about voicing his opinion regarding a certain\nfeature, his opinion differs from the collective opinion of\nprevious postings.
This relies on the characteristic of reputation\nsystems as feedback forums where a user is interested in\nprojecting his opinion, with particular strength if this opinion\ndiffers from what he perceives to be the general opinion.\nTo test Hypothesis 3 we measure the average absolute\ndifference between the expectation ef (i) and the rating ri\nf\nwhen the weight wi\nf is high, respectively low. Weights are\nclassified high or low by comparing them with certain cutoff\nvalues: wi\nf is low if smaller than 0.1, while wi\nf is high if\ngreater than \u03b8f . Different cutoff values were used for\ndifferent features: \u03b8R = 0.4, \u03b8S = 0.4, \u03b8C = 0.2, and \u03b8V = 0.7.\nCleanliness has a lower cutoff since it is a feature rarely\ndiscussed; Value has a high cutoff for the opposite reason.\nResults are presented in Table 4.\n5\nThe idea that negative reports can encourage further\nnegative reporting has been suggested before [14]\nTable 4: Average of |ri\nf \u2212ef (i)| when weights are high\n(first value in the cell) and low (second value in the\ncell) with P-values for the difference in sq. brackets.\nCity R S C V\n1.058 1.208 1.728 1.356\nBoston 0.701 0.838 0.760 0.917\n[0.022] [0.063] [0.000] [0.218]\n1.048 1.351 1.218 1.318\nSydney 0.752 0.759 0.767 0.908\n[0.179] [0.009] [0.165] [0.495]\n1.184 1.378 1.472 1.642\nLas Vegas 0.772 0.834 0.808 1.043\n[0.071] [0.020] [0.006] [0.076]\nThis demonstrates that when weights are unusually high,\nusers tend to express an opinion that does not conform to\nthe net average of previous ratings. As we might expect,\nfor a feature that rarely was a high weight in the discussion,\n(e.g., cleanliness) the difference is particularly large. Even\nthough the difference in the feature Value is quite large for\nSydney, the P-value is high. This is because only few reviews\ndiscussed value heavily. 
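The test behind Table 4 can be sketched as follows (a minimal illustration with our own function names; the low and high weight cutoffs mirror the ones quoted above):

```python
def avg_abs_gap(ratings, expectations, weights, theta_f, low_cut=0.1):
    """Average |r_f^i - e_f(i)| over reviews with high vs. low weight w_f^i."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")
    hi = [abs(r - e) for r, e, w in zip(ratings, expectations, weights) if w > theta_f]
    lo = [abs(r - e) for r, e, w in zip(ratings, expectations, weights) if w < low_cut]
    return mean(hi), mean(lo)
```

Hypothesis 3 predicts that the first returned average (high-weight reviews) exceeds the second.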
The reason could be cultural or\nbecause there was less of a reason to discuss this feature.\n4.3 Reporting Incentives\nPrevious models suggest that users who are not highly\nopinionated will not choose to voice their opinions [12]. In\nthis section, we extend this model to account for the\ninfluence of expectations. The motivation for submitting\nfeedback is not only due to extreme opinions, but also to the\ndifference between the current reputation (i.e., the prior\nexpectation of the user) and the actual experience.\nSuch a rating model produces ratings that most of the\ntime deviate from the current average rating. The ratings\nthat confirm the prior expectation will rarely be submitted.\nWe test on our data set the proportion of ratings that\nattempt to correct the current estimate. We define a deviant\nrating as one that deviates from the current expectation by\nat least some threshold \u03b8, i.e., |ri\nf \u2212 ef (i)| \u2265 \u03b8. For each\nof the three considered cities, the following tables, show the\nproportion of deviant ratings for \u03b8 = 0.5 and \u03b8 = 1.\nTable 5: Proportion of deviant ratings with \u03b8 = 0.5\nCity O R S C V\nBoston 0.696 0.619 0.676 0.604 0.684\nSydney 0.645 0.615 0.672 0.614 0.675\nLas Vegas 0.721 0.641 0.694 0.662 0.724\nTable 6: Proportion of deviant ratings with \u03b8 = 1\nCity O R S C V\nBoston 0.420 0.397 0.429 0.317 0.446\nSydney 0.360 0.367 0.442 0.336 0.489\nLas Vegas 0.510 0.421 0.483 0.390 0.472\nThe above results suggest that a large proportion of users\n(close to one half, even for the high threshold value \u03b8 =\n1) deviate from the prior average. This reinforces the idea\nthat users are more likely to submit a report when they\nbelieve they have something distinctive to add to the current\nstream of opinions for some feature. Such conclusions are in\ntotal agreement with prior evidence that the distribution of\nreports often follows bi-modal, U-shaped distributions.\n139\n5. 
MODELLING THE BEHAVIOR OF\nRATERS\nTo account for the observations described in the previous\nsections, we propose a model for the behavior of the users\nwhen submitting online reviews. For a given hotel, we make\nthe assumption that the quality experienced by the users is\nnormally distributed around some value v_f, which represents\nthe objective quality offered by the hotel on the feature\nf. The rating submitted by user i on feature f is:\n\u02c6r^i_f = \u03b4_f v^i_f + (1 \u2212 \u03b4_f) \u00b7 sign(v^i_f \u2212 e_f(i)) \u00b7 [c + d(v^i_f, e_f(i) | w^i_f)] (2)\nwhere:\n\u2022 v^i_f is the (unknown) quality actually experienced by\nthe user. v^i_f is assumed normally distributed around\nsome value v_f;\n\u2022 \u03b4_f \u2208 [0, 1] can be seen as a measure of the bias when\nreporting feedback. High values reflect the fact that\nusers rate objectively, without being influenced by\nprior expectations. The value of \u03b4_f may depend on\nvarious factors; we fix one value for each feature f;\n\u2022 c is a constant between 1 and 5;\n\u2022 w^i_f is the weight of feature f in the textual comment\nof review i, computed according to Eq. (1);\n\u2022 d(v^i_f, e_f(i) | w^i_f) is a distance function between the\nexpectation and the observation of user i. The distance\nfunction satisfies the following properties:\n- d(y, z|w) \u2265 0 for all y, z \u2208 [0, 5], w \u2208 [0, 1];\n- |d(y, z|w)| < |d(z, x|w)| if |y \u2212 z| < |z \u2212 x|;\n- |d(y, z|w1)| < |d(y, z|w2)| if w1 < w2;\n- c + d(v_f, e_f(i) | w^i_f) \u2208 [1, 5];\nThe second term of Eq. (2) encodes the bias of the\nrating. The higher the distance between the true\nobservation v^i_f and the function e_f, the higher the bias.\n5.1 Model Validation\nWe use the data set of TripAdvisor reviews to validate the\nbehavior model presented above.
We split for convenience\nthe rating values in three ranges: bad (B = {1, 2}),\nindifferent (I = {3, 4}), and good (G = {5}), and perform the\nfollowing two tests:\n\u2022 First, we will use our model to predict the ratings that\nhave extremal values. For every hotel, we take the\nsequence of reports, and whenever we encounter a rating\nthat is either good or bad (but not indifferent) we try\nto predict it using Eq. (2).\n\u2022 Second, instead of predicting the value of extremal\nratings, we try to classify them as either good or bad.\nFor every hotel we take the sequence of reports, and\nfor each report (regardless of its value) we classify it as\nbeing good or bad.\nHowever, to perform these tests, we need to estimate the\nobjective value, v_f, that is the average of the true quality\nobservations, v^i_f. The algorithm we are using is based on the\nintuition that the amount of conformity rating is minimized.\nIn other words, the value v_f should be such that as often as\npossible, bad ratings follow expectations above v_f and good\nratings follow expectations below v_f.\nFormally, we define the sets:\n\u0393_1 = {i | e_f(i) < v_f and r^i_f \u2208 B};\n\u0393_2 = {i | e_f(i) > v_f and r^i_f \u2208 G};\nthat correspond to irregularities where even though the\nexpectation at point i is lower than the delivered value, the\nrating is poor, and vice versa. We define v_f as the value\nthat minimizes the size of the union of these two sets:\nv_f = arg min_{v_f} |\u0393_1 \u222a \u0393_2| (3)\nIn Eq. (2) we replace v^i_f by the value v_f computed in Eq.\n(3), and use the following distance function:\nd(v_f, e_f(i) | w^i_f) = (|v_f \u2212 e_f(i)| / (v_f \u2212 e_f(i))) \u00b7 |v_f^2 \u2212 e_f(i)^2| \u00b7 (1 + 2w^i_f);\nThe constant c \u2208 I was set to min{max{e_f(i), 3}, 4}. The\nvalues for \u03b4_f were fixed at {0.7, 0.7, 0.8, 0.7, 0.6} for the\nfeatures {Overall, Rooms, Service, Cleanliness, Value}\nrespectively.
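Under the definitions above, the estimation of v_f and the distance function can be sketched as follows (our own code; we scan a grid of candidate values for the arg min, and return 0 for the case v_f = e_f(i), where the quotient in the text is undefined):

```python
def estimate_vf(ratings, expectations, candidates):
    """Pick v_f minimizing |Gamma_1 union Gamma_2|: bad ratings (1-2) despite
    expectations below v_f, plus good ratings (5) despite expectations above v_f."""
    def irregularities(v):
        g1 = sum(1 for r, e in zip(ratings, expectations) if e < v and r in (1, 2))
        g2 = sum(1 for r, e in zip(ratings, expectations) if e > v and r == 5)
        return g1 + g2
    return min(candidates, key=irregularities)

def distance(v, e, w):
    """d(v_f, e_f(i) | w_f^i) = sign(v - e) * |v^2 - e^2| * (1 + 2w)."""
    if v == e:
        return 0.0
    sign = 1.0 if v > e else -1.0
    return sign * abs(v * v - e * e) * (1.0 + 2.0 * w)
```

The (1 + 2w) factor makes the bias grow with the weight of the feature in the text, matching the third property of d listed earlier.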
The weights are computed as described in Section 3.\nAs a first experiment, we take the sets of extremal\nratings {r^i_f | r^i_f \u2209 I} for each hotel and feature. For every such\nrating, r^i_f, we try to estimate it by computing \u02c6r^i_f using Eq.\n(2). We compare this estimator with the one obtained by\nsimply averaging the ratings over all hotels and features:\ni.e.,\n\u00afr_f = (\u03a3_{j: r^j_f \u2260 0} r^j_f) / (\u03a3_{j: r^j_f \u2260 0} 1);\nTable 7 presents the ratio between the root mean square\nerror (RMSE) when using \u02c6r^i_f and \u00afr_f to estimate the actual\nratings. In all cases the estimate produced by our model is\nbetter than the simple average.\nTable 7: Average of RMSE(\u02c6r_f) / RMSE(\u00afr_f)\nCity O R S C V\nBoston 0.987 0.849 0.879 0.776 0.913\nSydney 0.927 0.817 0.826 0.720 0.681\nLas Vegas 0.952 0.870 0.881 0.947 0.904\nAs a second experiment, we try to distinguish the sets\nB_f = {i | r^i_f \u2208 B} and G_f = {i | r^i_f \u2208 G} of bad, respectively\ngood ratings on the feature f. For example, we compute the\nset B_f using the following classifier (called \u03c3):\nr^i_f \u2208 B_f (\u03c3_f(i) = 1) \u21d4 \u02c6r^i_f \u2264 4;\nTables 8, 9 and 10 present the Precision(p), Recall(r) and\ns = 2pr/(p+r)\nfor classifier \u03c3, and compare it with a naive\nmajority classifier, \u03c4, \u03c4_f(i) = 1 \u21d4 |B_f| \u2265 |G_f|:\nWe see that recall is always higher for \u03c3 and precision is\nusually slightly worse.
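The three scores reported in Tables 8-10 can be computed as follows (an illustration with our own names; the predicted and actual bad reviews are sets of review indices):

```python
def precision_recall_s(predicted_bad, actual_bad):
    """Precision, recall, and the harmonic mean s = 2pr/(p+r) for spotting bad ratings."""
    tp = len(predicted_bad & actual_bad)  # true positives
    p = tp / len(predicted_bad) if predicted_bad else 0.0
    r = tp / len(actual_bad) if actual_bad else 0.0
    s = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, s
```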
For the s metric \u03c3 tends to add a\n140\nTable 8: Precision(p), Recall(r), s= 2pr\np+r\nwhile\nspotting poor ratings for Boston\nO R S C V\np 0.678 0.670 0.573 0.545 0.610\n\u03c3 r 0.626 0.659 0.619 0.612 0.694\ns 0.651 0.665 0.595 0.577 0.609\np 0.684 0.706 0.647 0.611 0.633\n\u03c4 r 0.597 0.541 0.410 0.383 0.562\ns 0.638 0.613 0.502 0.471 0.595\nTable 9: Precision(p), Recall(r), s= 2pr\np+r\nwhile\nspotting poor ratings for Las Vegas\nO R S C V\np 0.654 0.748 0.592 0.712 0.583\n\u03c3 r 0.608 0.536 0.791 0.474 0.610\ns 0.630 0.624 0.677 0.569 0.596\np 0.685 0.761 0.621 0.748 0.606\n\u03c4 r 0.542 0.505 0.767 0.445 0.441\ns 0.605 0.607 0.670 0.558 0.511\n1-20% improvement over \u03c4, much higher in some cases for\nhotels in Sydney. This is likely because Sydney reviews are\nmore positive than those of the American cities and cases\nwhere the number of bad reviews exceeded the number of\ngood ones are rare. Replacing the test algorithm with one\nthat plays a 1 with probability equal to the proportion of\nbad reviews improves its results for this city, but it is still\noutperformed by around 80%.\n6. SUMMARY OF RESULTS AND\nCONCLUSION\nThe goal of this paper is to explore the factors that drive\na user to submit a particular rating, rather than the\nincentives that encouraged him to submit a report in the first\nplace. For that we use two additional sources of information\nbesides the vector of numerical ratings: first we look at the\ntextual comments that accompany the reviews, and second\nwe consider the reports that have been previously submitted\nby other users.\nUsing simple natural language processing algorithms, we\nwere able to establish a correlation between the weight of a\ncertain feature in the textual comment accompanying the\nreview, and the noise present in the numerical rating.\nSpecifically, it seems that users who discuss amply a certain feature\nare likely to agree on a common rating. 
This observation\nallows the construction of feature-by-feature estimators of\nquality that have a lower variance, and are hopefully less\nnoisy. Nevertheless, further evidence is required to support\nthe intuition that ratings corresponding to high weights are\nexpert opinions that deserve to be given higher priority when\ncomputing estimates of quality.\nSecond, we emphasize the dependence of ratings on\nprevious reports. Previous reports create an expectation of\nquality which affects the subjective perception of the user. We\nvalidate two facts about the hotel reviews we collected from\nTripAdvisor: First, the ratings following low expectations\n(where the expectation is computed as the average of the\nprevious reports) are likely to be higher than the ratings\nTable 10: Precision(p), Recall(r), s= 2pr\np+r\nwhile\nspotting poor ratings for Sydney\nO R S C V\np 0.650 0.463 0.544 0.550 0.580\n\u03c3 r 0.234 0.378 0.571 0.169 0.592\ns 0.343 0.452 0.557 0.259 0.586\np 0.562 0.615 0.600 0.500 0.600\n\u03c4 r 0.054 0.098 0.101 0.015 0.175\ns 0.098 0.168 0.172 0.030 0.271\nfollowing high expectations. Intuitively, the perception of\nquality (and consequently the rating) depends on how well\nthe actual experience of the user meets her expectation.\nSecond, we include evidence from the textual comments, and\nfind that when users devote a large fraction of the text to\ndiscussing a certain feature, they are likely to motivate a\ndivergent rating (i.e., a rating that does not conform to the\nprior expectation). Intuitively, this supports the hypothesis\nthat review forums act as discussion groups where users are\nkeen on presenting and motivating their own opinion.\nWe have captured the empirical evidence in a behavior\nmodel that predicts the ratings submitted by the users. 
The\nfinal rating depends, as expected, on the true observation,\nand on the gap between the observation and the expectation.\nThe gap tends to have a bigger influence when an important\nfraction of the textual comment is dedicated to discussing a\ncertain feature. The proposed model was validated on the\nempirical data and provides better estimates of the ratings\nactually submitted.\nOne assumption that we make is about the existence of an\nobjective quality value vf for the feature f. This is rarely\ntrue, especially over large spans of time. Other\nexplanations might account for the correlation of ratings with past\nreports. For example, if ef (i) reflects the true value of f at a\npoint in time, the difference in the ratings following high and\nlow expectations can be explained by hotel revenue models\nthat are maximized when the value is modified accordingly.\nHowever, the idea that variation in ratings is not primarily\na function of variation in value turns out to be a useful one.\nOur approach to approximate this elusive \"objective value\" is\nby no means perfect, but conforms neatly to the idea behind\nthe model.\nA natural direction for future work is to examine\nconcrete applications of our results. Significant improvements\nof quality estimates are likely to be obtained by\nincorporating all empirical evidence about rating behavior. Exactly\nhow different factors affect the decisions of the users is not\nclear. The answer might depend on the particular\napplication, context and culture.\n7. REFERENCES\n[1] A. Admati and P. Pfleiderer. Noisytalk.com:\nBroadcasting opinions in a noisy environment.\nWorking Paper 1670R, Stanford University, 2000.\n[2] P. B., L. Lee, and S. Vaithyanathan. Thumbs up?\nsentiment classification using machine learning\ntechniques. In Proceedings of the EMNLP-02, the\nConference on Empirical Methods in Natural\nLanguage Processing, 2002.\n[3] H. Cui, V. Mittal, and M. Datar. 
Comparative\n141\nExperiments on Sentiment Classification for Online\nProduct Reviews. In Proceedings of AAAI, 2006.\n[4] K. Dave, S. Lawrence, and D. Pennock. Mining the\npeanut gallery:opinion extraction and semantic\nclassification of product reviews. In Proceedings of the\n12th International Conference on the World Wide\nWeb (WWW03), 2003.\n[5] C. Dellarocas, N. Awad, and X. Zhang. Exploring the\nValue of Online Product Ratings in Revenue\nForecasting: The Case of Motion Pictures. Working\npaper, 2006.\n[6] C. Forman, A. Ghose, and B. Wiesenfeld. A\nMulti-Level Examination of the Impact of Social\nIdentities on Economic Transactions in Electronic\nMarkets. Available at SSRN:\nhttp://ssrn.com/abstract=918978, July 2006.\n[7] A. Ghose, P. Ipeirotis, and A. Sundararajan.\nReputation Premiums in Electronic Peer-to-Peer\nMarkets: Analyzing Textual Feedback and Network\nStructure. In Third Workshop on Economics of\nPeer-to-Peer Systems, (P2PECON), 2005.\n[8] A. Ghose, P. Ipeirotis, and A. Sundararajan. The\nDimensions of Reputation in electronic Markets.\nWorking Paper CeDER-06-02, New York University,\n2006.\n[9] A. Harmon. Amazon Glitch Unmasks War of\nReviewers. The New York Times, February 14, 2004.\n[10] D. Houser and J. Wooders. Reputation in Auctions:\nTheory and Evidence from eBay. Journal of\nEconomics and Management Strategy, 15:353-369,\n2006.\n[11] M. Hu and B. Liu. Mining and summarizing customer\nreviews. In Proceedings of the ACM SIGKDD\nInternational Conference on Knowledge Discovery and\nData Mining (KDD04), 2004.\n[12] N. Hu, P. Pavlou, and J. Zhang. Can Online Reviews\nReveal a Product\"s True Quality? In Proceedings of\nACM Conference on Electronic Commerce (EC 06),\n2006.\n[13] K. Kalyanam and S. McIntyre. Return on reputation\nin online auction market. Working Paper\n02/03-10-WP, Leavey School of Business, Santa Clara\nUniversity., 2001.\n[14] L. Khopkar and P. Resnick. 
Self-Selection, Slipping,\nSalvaging, Slacking, and Stoning: the Impacts of\nNegative Feedback at eBay. In Proceedings of ACM\nConference on Electronic Commerce (EC 05), 2005.\n[15] M. Melnik and J. Alm. Does a seller\"s reputation\nmatter? evidence from ebay auctions. Journal of\nIndustrial Economics, 50(3):337-350, 2002.\n[16] R. Olshavsky and J. Miller. Consumer Expectations,\nProduct Performance and Perceived Product Quality.\nJournal of Marketing Research, 9:19-21, February\n1972.\n[17] A. Parasuraman, V. Zeithaml, and L. Berry. A\nConceptual Model of Service Quality and Its\nImplications for Future Research. Journal of\nMarketing, 49:41-50, 1985.\n[18] A. Parasuraman, V. Zeithaml, and L. Berry.\nSERVQUAL: A Multiple-Item Scale for Measuring\nConsumer Perceptions of Service Quality. Journal of\nRetailing, 64:12-40, 1988.\n[19] P. Pavlou and A. Dimoka. The Nature and Role of\nFeedback Text Comments in Online Marketplaces:\nImplications for Trust Building, Price Premiums, and\nSeller Differentiation. Information Systems Research,\n17(4):392-414, 2006.\n[20] A. Popescu and O. Etzioni. Extracting product\nfeatures and opinions from reviews. In Proceedings of\nthe Human Language Technology Conference and\nConference on Empirical Methods in Natural\nLanguage Processing, 2005.\n[21] R. Teas. Expectations, Performance Evaluation, and\nConsumers\" Perceptions of Quality. Journal of\nMarketing, 57:18-34, 1993.\n[22] E. White. Chatting a Singer Up the Pop Charts. The\nWall Street Journal, October 15, 1999.\nAPPENDIX\nA. 
LIST OF WORDS, LR, ASSOCIATED TO\nTHE FEATURE ROOMS\nAll words serve as prefixes: room, space, interior, decor,\nambiance, atmosphere, comfort, bath, toilet, bed, building,\nwall, window, private, temperature, sheet, linen, pillow, hot,\nwater, cold, water, shower, lobby, furniture, carpet, air,\ncondition, mattress, layout, design, mirror, ceiling, lighting,\nlamp, sofa, chair, dresser, wardrobe, closet\n142", "keywords": "utility of the product;the product utility;semantic orientation of product evaluation;brag-and-moan model;great probability bi-modal;clear incentive absence;rating;u-shaped distribution;correlation;reputation mechanism;large span of time;feature-by-feature estimator of quality;absence of clear incentive;online review"}
-{"name": "test_J-11", "title": "Trading Networks with Price-Setting Agents", "abstract": "In a wide range of markets, individual buyers and sellers often trade through intermediaries, who determine prices via strategic considerations. Typically, not all buyers and sellers have access to the same intermediaries, and they trade at correspondingly different prices that reflect their relative amounts of power in the market. We model this phenomenon using a game in which buyers, sellers, and traders engage in trade on a graph that represents the access each buyer and seller has to the traders. In this model, traders set prices strategically, and then buyers and sellers react to the prices they are offered. We show that the resulting game always has a subgame perfect Nash equilibrium, and that all equilibria lead to an efficient (i.e. socially optimal) allocation of goods. We extend these results to a more general type of matching market, such as one finds in the matching of job applicants and employers. Finally, we consider how the profits obtained by the traders depend on the underlying graph - roughly, a trader can command a positive profit if and only if it has an essential connection in the network structure, thus providing a graph-theoretic basis for quantifying the amount of competition among traders. Our work differs from recent studies of how price is affected by network structure through our modeling of price-setting as a strategic activity carried out by a subset of agents in the system, rather than studying prices set via competitive equilibrium or by a truthful mechanism.", "fulltext": "1. INTRODUCTION\nIn a range of settings where markets mediate the interactions of\nbuyers and sellers, one observes several recurring properties:\nIndividual buyers and sellers often trade through intermediaries, not all\nbuyers and sellers have access to the same intermediaries, and not\nall buyers and sellers trade at the same price. 
One example of this\nsetting is the trade of agricultural goods in developing countries.\nGiven inadequate transportation networks, and poor farmers\"\nlimited access to capital, many farmers have no alternative to trading\nwith middlemen in inefficient local markets. A developing country\nmay have many such partially overlapping markets existing\nalongside modern efficient markets [2].\nFinancial markets provide a different example of a setting with\nthese general characteristics. In these markets much of the trade\nbetween buyers and sellers is intermediated by a variety of agents\nranging from brokers to market makers to electronic trading\nsystems. For many assets there is no one market; trade in a single asset\nmay occur simultaneously on the floor of an exchange, on crossing\nnetworks, on electronic exchanges, and in markets in other\ncountries. Some buyers and sellers have access to many or all of these\ntrading venues; others have access to only one or a few of them.\nThe price at which the asset trades may differ across these trading\nvenues. In fact, there is no price as different traders pay or\nreceive different prices. In many settings there is also a gap between\nthe price a buyer pays for an asset, the ask price, and the price a\nseller receives for the asset, the bid price. One of the most striking\nexamples of this phenomenon occurs in the market for foreign\nexchange, where there is an interbank market with restricted access\nand a retail market with much more open access. Spreads, defined\nas the difference between bid and ask prices, differ significantly\nacross these markets, even though the same asset is being traded in\nthe two markets.\nIn this paper, we develop a framework in which such phenomena\nemerge from a game-theoretic model of trade, with buyers, sellers,\nand traders interacting on a network. 
The edges of the network\nconnect traders to buyers and sellers, and thus represent the access that\ndifferent market participants have to one another. The traders serve\nas intermediaries in a two-stage trading game: they strategically\nchoose bid and ask prices to offer to the sellers and buyers they are\nconnected to; the sellers and buyers then react to the prices they\nface. Thus, the network encodes the relative power in the structural\npositions of the market participants, including the implicit levels of\ncompetition among traders. We show that this game always has a\nsubgame perfect Nash equilibrium, and that all equilibria lead to an\nefficient (i.e. socially optimal) allocation of goods. We also\nanalyze how trader profits depend on the network structure, essentially\ncharacterizing in graph-theoretic terms how a trader's payoff is\ndetermined by the amount of competition it experiences with other\ntraders.\nOur work here is connected to several lines of research in\neconomics, finance, and algorithmic game theory, and we discuss these\nconnections in more detail later in the introduction. At a general\nlevel, our approach can be viewed as synthesizing two important\nstrands of work: one that treats buyer-seller interaction using\nnetwork structures, but without attempting to model the processes by\nwhich prices are actually formed [1, 4, 5, 6, 8, 9, 10, 13]; and\nanother strand in the literature on market microstructure that\nincorporates price-setting intermediaries, but without network-type\nconstraints on who can trade with whom [12]. By developing a\nnetwork model that explicitly includes traders as price-setting agents,\nin a system together with buyers and sellers, we are able to capture\nprice formation in a network setting as a strategic process carried\nout by intermediaries, rather than as the result of a centrally\ncontrolled or exogenous mechanism.\nThe Basic Model: Indistinguishable Goods.
Our goal in\nformulating the model is to express the process of price-setting in\nmarkets such as those discussed above, where the participants do not\nall have uniform access to one another. We are given a set B of\nbuyers, a set S of sellers, and a set T of traders. There is an\nundirected graph G that indicates who is able to trade with whom. All\nedges have one end in B \u222a S and the other in T; that is, each edge\nhas the form (i, t) for i \u2208 S and t \u2208 T, or (j, t) for j \u2208 B and\nt \u2208 T. This reflects the constraint that all buyer-seller transactions\ngo through traders as intermediaries.\nIn the most basic version of the model, we consider identical\ngoods, one copy of which is initially held by each seller. Buyers and\nsellers each have a value for one copy of the good, and we assume\nthat these values are common knowledge. We will subsequently\ngeneralize this to a setting in which goods are distinguishable,\nbuyers can value different goods differently, and potentially sellers can\nvalue transactions with different buyers differently as well. Having\ndifferent buyer valuations captures settings like house purchases;\nadding different seller valuations as well captures matching\nmarkets - for example, sellers as job applicants and buyers as\nemployers, with both caring about who ends up with which good\n(and with traders acting as services that broker the job search).\nThus, to start with the basic model, there is a single type of good;\nthe good comes in indivisible units; and each seller initially holds\none unit of the good. All three types of agents value money at the\nsame rate; and each i \u2208 B \u222a S additionally values one copy of the\ngood at \u03b8i units of money. No agent wants more than one copy of\nthe good, so additional copies are valued at 0.
Each agent has an\ninitial endowment of money that is larger than any individual\nvaluation \u03b8i; the effect of this is to guarantee that any buyer who ends\nup without a copy of the good has been priced out of the market\ndue to its valuation and network position, not a lack of funds.\nWe picture each good that is sold flowing along a sequence of\ntwo edges: from a seller to a trader, and then from the trader to a\nbuyer. The particular way in which goods flow is determined by the\nfollowing game. First, each trader offers a bid price to each seller\nit is connected to, and an ask price to each buyer it is connected\nto. Sellers and buyers then choose from among the offers presented\nto them by traders. If multiple traders propose the same price to a\nseller or buyer, then there is no strict best response for the seller or\nbuyer. In this case a selection must be made, and, as is standard\n(see for example [10]), we (the modelers) choose among the best\noffers. Finally, each trader buys a copy of the good from each seller\nthat accepts its offer, and it sells a copy of the good to each buyer\nthat accepts its offer. If a particular trader t finds that more buyers\nthan sellers accept its offers, then it has committed to provide more\ncopies of the good than it has received, and we will say that this\nresults in a large penalty to the trader for defaulting; the effect of\nthis is that in equilibrium, no trader will choose bid and ask prices\nthat result in a default.\nMore precisely, a strategy for each trader t is a specification of a\nbid price \u03b2ti for each seller i to which t is connected, and an ask\nprice \u03b1tj for each buyer j to which t is connected. (We can also\nhandle a model in which a trader may choose not to make an offer\nto certain of its adjacent sellers or buyers.) Each seller or buyer\nthen chooses at most one incident edge, indicating the trader with\nwhom they will transact, at the indicated price. 
(The choice of a single edge reflects the facts that (a) sellers each initially have only one copy of the good, and (b) buyers each only want one copy of the good.) The payoffs are as follows:

For each seller i, the payoff from selecting trader t is βti, while the payoff from selecting no trader is θi. (In the former case, the seller receives βti units of money, while in the latter it keeps its copy of the good, which it values at θi.)

For each buyer j, the payoff from selecting trader t is θj − αtj, while the payoff from selecting no trader is 0. (In the former case, the buyer receives the good but gives up αtj units of money.)

For each trader t, with accepted offers from sellers i1, . . . , is and buyers j1, . . . , jb, the payoff is Σr αtjr − Σr βtir, minus a penalty π if b > s. The penalty is chosen to be large enough that a trader will never incur it in equilibrium, and hence we will generally not be concerned with the penalty.

This defines the basic elements of the game. The equilibrium concept we use is subgame perfect Nash equilibrium.

Some Examples. To help with thinking about the model, we now describe three illustrative examples, depicted in Figure 1. To keep the figures from getting too cluttered, we adopt the following conventions: sellers are drawn as circles in the leftmost column and will be named i1, i2, . . . from top to bottom; traders are drawn as squares in the middle column and will be named t1, t2, . . . from top to bottom; and buyers are drawn as circles in the rightmost column and will be named j1, j2, . . . from top to bottom. All sellers in the examples will have valuations for the good equal to 0; the valuation of each buyer is drawn inside its circle; and the bid or ask price on each edge is drawn on top of the edge.

In Figure 1(a), we show how a standard second-price auction arises naturally from our model.
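The payoff rules above can be sketched in code; the following is a minimal illustration, in which the function names and the numeric penalty constant are our own assumptions, not part of the model.

```python
# Minimal sketch of the payoff rules above. Function names and the
# numeric PENALTY constant are our own assumptions, not the paper's.

PENALTY = 1000.0  # stand-in for the large default penalty (the paper's pi)

def seller_payoff(theta_i, bid=None):
    """Seller keeps the good (worth theta_i) or takes the accepted bid."""
    return theta_i if bid is None else bid

def buyer_payoff(theta_j, ask=None):
    """Buyer gets 0 with no trade, or theta_j minus the accepted ask."""
    return 0.0 if ask is None else theta_j - ask

def trader_payoff(accepted_asks, accepted_bids):
    """Asks received minus bids paid; penalized on default (b > s)."""
    payoff = sum(accepted_asks) - sum(accepted_bids)
    if len(accepted_asks) > len(accepted_bids):  # committed to more buyers than sellers
        payoff -= PENALTY
    return payoff
```

For instance, a trader who buys one copy at bid 0.5 and sells it at ask 0.6 nets the spread, while a trader whose offers are accepted by two buyers but only one seller takes the default penalty.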
Suppose the buyer valuations from top to bottom are w > x > y > z. The bid and ask prices shown are consistent with an equilibrium in which i1 and j1 accept the offers of trader t1, and no other buyer accepts the offer of its adjacent trader: thus, trader t1 receives the good with a bid price of x, and makes w − x by selling the good to buyer j1 for w. In this way, we can consider this particular instance as an auction for a single good in which the traders act as proxies for their adjacent buyers. The buyer with the highest valuation for the good ends up with it, and the surplus is divided between the seller and the associated trader. Note that one can construct a k-unit auction with more than k buyers just as easily, by building a complete bipartite graph on k sellers and k traders, and then attaching each trader to a single distinct buyer.

In Figure 1(b), we show how nodes with different positions in the network topology can achieve different payoffs, even when all buyer valuations are the same numerically.

Figure 1: (a) An auction, mediated by traders, in which the buyer with the highest valuation for the good ends up with it. (b) A network in which the middle seller and buyer benefit from perfect competition between the traders, while the other sellers and buyers have no power due to their position in the network. (c) A form of implicit perfect competition: all bid/ask spreads will be zero in equilibrium, even though no trader directly competes with any other trader for the same buyer-seller pair.

Specifically, seller i2 and buyer j2 occupy powerful positions, because the two traders are competing for their business; on the other hand, the other sellers and buyers are in weak positions, because they each have only one option.
And indeed, in every equilibrium, there is a real number x ∈ [0, 1] such that both traders offer bid and ask prices of x to i2 and j2 respectively, while they offer bids of 0 and asks of 1 to the other sellers and buyers. Thus, this example illustrates a few crucial ingredients that we will identify at a more general level shortly. Specifically, i2 and j2 experience the benefits of perfect competition, in that the two traders drive the bid-ask spreads to 0 in competing for their business. On the other hand, the other sellers and buyers experience the downsides of monopoly - they receive 0 payoff since they have only a single option for trade, and the corresponding trader makes all the profit. Note further how this natural behavior emerges from the fact that traders are able to offer different prices to different agents - capturing the fact that there is no one fixed price in the kinds of markets that motivate the model, but rather different prices reflecting the relative power of the different agents involved.

The previous example shows perhaps the most natural way in which a trader's profit on a particular transaction can drop to 0: when there is another trader who can replicate its function precisely. (In that example, two traders each had the ability to move a copy of the good from i2 to j2.) But as our subsequent results will show, traders make zero profit more generally due to global, graph-theoretic reasons. The example in Figure 1(c) gives an initial indication of this: one can show that for every equilibrium, there is a y ∈ [0, 1] such that every bid and every ask price is equal to y. In other words, all traders make zero profit, whether or not a copy of the good passes through them - and yet, no two traders have any seller-buyer paths in common.
The price spreads have been driven to zero by a global constraint imposed by the long cycle through all the agents; this is an example of implicit perfect competition determined by the network topology.

Extending the Model to Distinguishable Goods. We extend the basic model to a setting with distinguishable goods, as follows. Instead of having each agent i ∈ B ∪ S have a single numerical valuation θi, we index valuations by pairs of buyers and sellers: if buyer j obtains the good initially held by seller i, it gets a utility of θji, and if seller i sells its good to buyer j, it experiences a loss of utility of θij. This generalizes the case of indistinguishable goods, since we can always have these pairwise valuations depend only on one of the indices. A strategy for a trader now consists of offering a bid to each seller that specifies both a price and a buyer, and offering an ask to each buyer that specifies both a price and a seller. (We can also handle a model in which a trader offers bids (respectively, asks) in the form of vectors, essentially specifying a menu with a price attached to each buyer (resp. seller).) Each buyer and seller selects an offer from an adjacent trader, and the payoffs to all agents are determined as before.

This general framework captures matching markets [10, 13]: for example, a job market that is mediated by agents or employment search services (as in hiring for corporate executives, or sports or entertainment figures). Here the sellers are job applicants, buyers are employers, and traders are the agents that mediate the job market. Of course, if one specifies pairwise valuations on buyers but just single valuations for sellers, we model a setting where buyers can distinguish among the goods, but sellers don't care whom they sell to - this (roughly) captures settings like housing markets.

Our Results.
Our results will identify general forms of some of the principles noted in the examples discussed above - including the question of which buyers end up with the good; the question of how payoffs are differently realized by sellers, traders, and buyers; and the question of what structural properties of the network determine whether the traders will make positive profits.

To make these precise, we introduce the following notation. Any outcome of the game determines a final allocation of goods to some of the agents; this can be specified by a collection M of triples (ie, te, je), where ie ∈ S, te ∈ T, and je ∈ B; moreover, each seller and each buyer appears in at most one triple. The meaning is that for each e ∈ M, the good initially held by ie moves to je through te. (Sellers appearing in no triple keep their copy of the good.) We say that the value of the allocation is equal to Σ_{e∈M} (θjeie − θieje). Let θ∗ denote the maximum value of any allocation M that is feasible given the network.

We show that every instance of our game has an equilibrium, and that in every such equilibrium, the allocation has value θ∗ - in other words, it achieves the best value possible. Thus, equilibria in this model are always efficient, in that the market enables the right set of people to get the good, subject to the network constraints. We establish the existence and efficiency of equilibria by constructing a linear program to capture the flow of goods through the network; the dual of this linear program contains enough information to extract equilibrium prices.

By the definition of the game, the value of the equilibrium allocation is divided up as payoffs to the agents, and it is interesting to ask how this value is distributed - in particular how much profit a trader is able to make based on its position in the network.
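The quantity θ∗ defined above can be computed by brute force on small instances; the following sketch uses a made-up toy network (all names, edges, and valuations are our own, chosen only for illustration) and enumerates all feasible sets of (seller, trader, buyer) triples.

```python
# Brute-force computation of theta*, the maximum value of a feasible
# allocation, on a made-up toy network (names and numbers are ours).
from itertools import combinations, product

seller_edges = {('i1', 't1'), ('i2', 't1'), ('i2', 't2')}   # (seller, trader) links
buyer_edges = {('t1', 'j1'), ('t2', 'j2')}                  # (trader, buyer) links
theta_buy = {('j1', 'i1'): 1.0, ('j1', 'i2'): 1.0,          # theta_{ji}
             ('j2', 'i1'): 0.8, ('j2', 'i2'): 0.8}
theta_sell = 0.0                                            # all theta_{ij} = 0 here

sellers, traders, buyers = ('i1', 'i2'), ('t1', 't2'), ('j1', 'j2')

# All triples (i, t, j) the network allows: i can sell to t, t can sell to j.
triples = [(i, t, j)
           for i, t, j in product(sellers, traders, buyers)
           if (i, t) in seller_edges and (t, j) in buyer_edges]

def feasible(alloc):
    """Each seller and each buyer appears in at most one triple."""
    ss = [i for i, _, _ in alloc]
    bs = [j for _, _, j in alloc]
    return len(set(ss)) == len(ss) and len(set(bs)) == len(bs)

theta_star = max(
    sum(theta_buy[(j, i)] - theta_sell for i, _, j in alloc)
    for k in range(len(triples) + 1)
    for alloc in combinations(triples, k)
    if feasible(alloc)
)
```

On this instance the best feasible allocation routes i1's good to j1 through t1 and i2's good to j2 through t2, for θ∗ = 1.8; the linear-programming formulation discussed above replaces this enumeration for larger networks.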
We find that, although all equilibria have the same value, a given trader's payoff can vary across different equilibria. However, we are able to characterize the maximum and minimum amounts that a given trader is able to make, where these maxima and minima are taken over all equilibria, and we give an efficient algorithm to compute this. In particular, our results here imply a clean combinatorial characterization of when a given trader t can achieve non-zero payoff: this occurs if and only if there is some edge e incident to t that is essential, in the sense that deleting e reduces the value of the optimal allocation θ∗. We also obtain results for the sum of all trader profits.

Related Work. The standard baseline approach for analyzing the interaction of buyers and sellers is the Walrasian model, in which anonymous buyers and sellers trade a good at a single market clearing price. This reduced form of trade, built on the idealization of a market price, is a powerful model which has led to many insights. But it is not a good model to use to examine where prices come from or exactly how buyers and sellers trade with each other. The difficulty is that in the Walrasian model there is no agent who sets the price, and agents don't actually trade with each other. In fact there is no market, in the everyday sense of that word, in the Walrasian model. That is, there is no physical or virtual place where buyers and sellers interact to trade and set prices. Thus in this simple model, all buyers and sellers are uniform and trade at the same price, and there is also no role for intermediaries.

There are several literatures in economics and finance which examine how prices are set rather than just determining equilibrium prices. The literature on imperfect competition is perhaps the oldest of these.
Here a monopolist, or a group of oligopolists, choose prices in order to maximize their profits (see [14] for the standard textbook treatment of these markets). A monopolist uses its knowledge of market demand to choose a price, or a collection of prices if it discriminates. Oligopolists play a game in which their payoffs depend on market demand and the actions of their competitors. In this literature there are agents who set prices, but the fiction of a single market is maintained. In the equilibrium search literature, firms set prices and consumers search over them (see [3]). Consumers do end up paying different prices, but all consumers have access to all firms and there are no intermediaries. In the general equilibrium literature there have been various attempts to introduce price determination. A standard proof technique for the existence of competitive equilibrium involves a price adjustment mechanism in which prices respond to excess demand. The Walrasian auctioneer is often introduced as a device to explain how this process works, but this is fundamentally a metaphor for an iterative price-updating algorithm, not for the internals of an actual market. More sophisticated processes have been introduced to study the stability of equilibrium prices or the information necessary to compute them. But again there are no price-setting agents here.

In the finance literature the work on market microstructure does have price-setting agents (specialists), parts of it do determine separate bid and ask prices, and different agents receive different prices for the same asset (see [12] for a treatment of microstructure theory). Work in information economics has identified similar phenomena (see e.g. [7]). But there is little research in these literatures examining the effect of restrictions on who can trade with whom.

There have been several approaches to studying how network structure determines prices.
These have posited price determination through definitions based on competitive equilibrium or the core, or through the use of truthful mechanisms. In briefly reviewing this work, we will note the contrast with our approach, in that we model prices as arising from the strategic behavior of agents in the system.

In recent work, Kakade et al. [8] have studied the distribution of prices at competitive equilibrium in a bipartite graph on buyers and sellers, generated using a probabilistic model capable of producing heavy-tailed degree distributions [11]. Even-Dar et al. [6] build on this to consider the strategic aspects of network formation when prices arise from competitive equilibrium.

Leonard [10], Babaioff et al. [1], and Chu and Shen [4] consider an approach based on mechanism design: buyers and sellers reside at different nodes in a graph, and they incur a given transportation cost to trade with one another. Leonard studies VCG prices in this setting; Babaioff et al. and Chu and Shen additionally provide a budget-balanced mechanism. Since the concern here is with truthful mechanisms that operate on private valuations, there is an inherent trade-off between the efficiency of the allocation and the budget-balance condition.

In contrast, our model has known valuations and prices arising from the strategic behavior of traders. Thus, the assumptions behind our model are in a sense not directly comparable to those underlying the mechanism design approach: while we assume known valuations, we do not require a centralized authority to impose a mechanism.
Rather, price-setting is part of the strategic outcome, as in the real markets that motivate our work, and our equilibria are simultaneously budget-balanced and efficient - something not possible in the mechanism design frameworks that have been used.

Demange, Gale, and Sotomayor [5], and Kranton and Minehart [9], analyze the prices at which trade occurs in a network, working within the framework of mechanism design. Kranton and Minehart use a bipartite graph with direct links between buyers and sellers, and then use an ascending auction mechanism, rather than strategic intermediaries, to determine the prices. Their auction has desirable equilibrium properties, but as Kranton and Minehart note, it is an abstraction of how goods are allocated and prices are determined that is similar in spirit to the Walrasian auctioneer abstraction. In fact, we can show how the basic model of Kranton and Minehart can be encoded as an instance of our game, with traders producing prices at equilibrium matching the prices produced by their auction mechanism.1

1 Kranton and Minehart, however, can also analyze a more general setting in which buyers' values are private and thus buyers and sellers play a game of incomplete information. We deal only with complete information.

Finally, the classic results of Shapley and Shubik [13] on the assignment game can be viewed as studying the result of trade on a bipartite graph in terms of the core. They study the dual of a linear program based on the matching problem, similar to what we use for a reduced version of our model in the next section, but their focus is different as they do not consider agents that seek to set prices.

2. MARKETS WITH PAIR-TRADERS

For understanding the ideas behind the analysis of the general model, it is very useful to first consider a special case with a restricted form of traders that we refer to as pair-traders. In this case, each trader is connected to just one buyer and one seller.
(Thus, it\nessentially serves as a trade route between the two.) The\ntechniques we develop to handle this case will form a useful basis for\nreasoning about the case of traders that may be connected\narbitrarily to the sellers and buyers.\nWe will relate profits in a subgame perfect Nash equilibrium to\noptimal solutions of a certain linear program, use this relation to\nshow that all equilibria result in efficient allocation of the goods,\nand show that a pure equilibrium always exists. First, we consider\nthe simplest model where sellers have indistinguishable items, and\neach buyer is interested in getting one item. Then we extend the\nresults to the more general case of a matching market, as discussed\nin the previous section, where valuations depend on the identity\nof the seller and buyer. We then characterize the minimum and\nmaximum profits traders can make. In the next section, we extend\nthe results to traders that may be connected to any subset of sellers\nand buyers.\nGiven that we are working with pair-traders in this section, we\ncan represent the problem using a bipartite graph G whose node set\nis B \u222a S, and where each trader t, connecting seller i and buyer j,\nappears as an edge t = (i, j) in G. Note, however, that we allow\nmultiple traders to connect the same pair of agents. For each buyer\nand seller i, we will use adj(i) to denote the set of traders who can\ntrade with i.\n2.1 Indistinguishable Goods\nThe socially optimal trade for the case of indistinguishable goods\nis the solution of the transportation problem: sending goods along\nthe edges representing the traders. 
The edges along which trade occurs correspond to a matching in this bipartite graph, and the optimal trade is described by the following linear program.

max SV(x) = Σ_{t∈T: t=(i,j)} xt(θj − θi)
subject to
  xt ≥ 0 for all t ∈ T
  Σ_{t∈adj(i)} xt ≤ 1 for all i ∈ S
  Σ_{t∈adj(j)} xt ≤ 1 for all j ∈ B

Next we consider an equilibrium. Each trader t = (i, j) must offer a bid βt and an ask αt. (We omit the subscript denoting the seller and buyer here since we are dealing with pair-traders.) Given the bid and ask prices, the agents react to these prices, as described earlier. Instead of focusing on prices, we will focus on profits. If a seller i sells to a trader t ∈ adj(i) with bid βt, then his profit is pi = βt − θi. Similarly, if a buyer j buys from a trader t ∈ adj(j) with ask αt, then his profit is pj = θj − αt. Finally, if a trader t trades with ask αt and bid βt, then his profit is yt = αt − βt. All agents not involved in trade make 0 profit. We will show that the profits at equilibrium are an optimal solution to the following linear program.

min sum(p, y) = Σ_{i∈B∪S} pi + Σ_{t∈T} yt
subject to
  yt ≥ 0 for all t ∈ T
  pi ≥ 0 for all i ∈ S ∪ B
  yt ≥ (θj − pj) − (θi + pi) for all t = (i, j) ∈ T

LEMMA 2.1. At equilibrium the profits must satisfy the above inequalities.

Proof. Clearly all profits are nonnegative, as trading is optional for all agents.

To see why the last set of inequalities holds, consider two cases separately. For a trader t who conducted trade, we get equality by definition.
For other traders t = (i, j), the value pi + θi is the price that seller i sold for (or θi if seller i decided to keep the good). Offering a bid βt > pi + θi would get the seller to sell to trader t. Similarly, θj − pj is the price that buyer j bought for (or θj if he didn't buy), and for any ask αt < θj − pj, the buyer will buy from trader t. So unless θj − pj ≤ θi + pi, the trader has a profitable deviation.

Now we are ready to prove our first theorem:

THEOREM 2.2. In any equilibrium the trade is efficient.

Proof. Let x be a flow of goods resulting in an equilibrium, and let the variables p and y be the profits.

Consider the linear program describing the socially optimal trade. We will also add a set of additional constraints xt ≤ 1 for all traders t ∈ T; this can be added to the description, as it is implied by the other constraints. Now we claim that the two linear programs are duals of each other. The variables pi for agents i ∈ B ∪ S correspond to the constraints Σ_{t∈adj(i)} xt ≤ 1. The additional dual variable yt corresponds to the additional inequality xt ≤ 1.

The optimality of the social value of the trade will follow from the claim that the solutions of these two linear programs derived from an equilibrium satisfy the complementary slackness conditions for this pair of linear programs, and hence both x and (p, y) are optimal solutions to the corresponding linear programs.

There are three different complementary slackness conditions we need to consider, corresponding to the three sets of variables x, y and p. Any agent can only make profit if he transacts, so pi > 0 implies Σ_{t∈adj(i)} xt = 1, and similarly, yt > 0 implies that xt = 1 also.
Finally, consider a trader t with xt > 0 that trades between seller i and buyer j, and recall that we have seen above that the inequality yt ≥ (θj − pj) − (θi + pi) is satisfied with equality for those who trade.

Next we argue that equilibria always exist.

THEOREM 2.3. For any efficient trade between buyers and sellers there is a pure equilibrium of bid-ask values that supports this trade.

Proof. Consider an efficient trade; let xt = 1 if t trades and 0 otherwise; and consider an optimal solution (p, y) to the dual linear program.

We would like to claim that all dual solutions correspond to equilibrium prices, but unfortunately this is not exactly true. Before we can convert a dual solution to equilibrium prices, we may need to modify the solution slightly as follows. Consider any agent i that is only connected to a single trader t. Because the agent is only connected to a single trader, the variables yt and pi are dual variables corresponding to the same primal inequality xt ≤ 1, and they always appear together as yt + pi in all inequalities, and also in the objective function. Thus there is an optimal solution in which pi = 0 for all agents i connected only to a single trader.

Assume (p, y) is a dual solution where agents connected only to one trader have pi = 0. For a seller i, let βt = θi + pi be the bid for all traders t adjacent to i. Similarly, for each buyer j, let αt = θj − pj be the ask for all traders t adjacent to j. We claim that this set of bids and asks, together with the trade x, are an equilibrium. To see why, note that all traders t adjacent to a seller or buyer i offer the same ask or bid, and so trading with any trader is equally good for agent i. Also, if i is not trading in the solution x, then by complementary slackness pi = 0, and hence not trading is also equally good for i.
This shows that sellers and buyers don't have an incentive to deviate.

We need to show that traders have no incentive to deviate either. When a trader t is trading with seller i and buyer j, then profitable deviations would involve increasing αt or decreasing βt. But by our construction (and assumption about monopolized agents) all sellers and buyers have multiple identical ask/bid offers, or trade is occurring at valuation. In either case such a deviation cannot be successful.

Finally, consider a trader t = (i, j) who doesn't trade. A deviation for t would involve offering a higher bid to seller i and a lower ask to buyer j than those of their current trades. However, yt = 0 by complementary slackness, and hence pi + θi ≥ θj − pj, so i sells for a price at least as high as the price at which j buys, so trader t cannot create a profitable trade.

Note that a seller or buyer i connected to a single trader t cannot have profit at equilibrium, so possible equilibrium profits are in one-to-one correspondence with dual solutions for which pi = 0 whenever i is monopolized by one trader.

A disappointing feature of the equilibrium created by this proof is that some traders t may have to create ask-bid pairs where βt > αt, offering to buy for more than the price at which they are willing to sell. Traders that make such crossing bid-ask pairs never actually perform a trade, so it does not result in negative profit for the trader, but such pairs are unnatural. Crossing bid-ask pairs are weakly dominated by the strategy of offering a low bid β = 0 and an extremely high ask to guarantee that neither is accepted.

To formulate a way of avoiding such crossing pairs, we say an equilibrium is cross-free if αt ≥ βt for all traders t. We now show there is always a cross-free equilibrium.

THEOREM 2.4. For any efficient trade between buyers and sellers there is a pure cross-free equilibrium.

Proof.
Consider an optimal solution to the dual linear program. To get an equilibrium without crossing bids, we need to do a more general modification than just assuming that pi = 0 for all sellers and buyers connected to only a single trader. Let the set E be the set of edges t = (i, j) that are tight, in the sense that we have the equality yt = (θj − pj) − (θi + pi). This set E contains all the edges where trade occurs, and some more edges. We want to make sure that pi = 0 for all sellers and buyers that have degree at most 1 in E. Consider a seller i that has pi > 0. We must have i involved in a trade, and the edge t = (i, j) along which the trade occurs must be tight. Suppose this is the only tight edge adjacent to agent i; then we can decrease pi and increase yt until one of the following happens: either pi = 0 or the constraint of some other trader t′ ∈ adj(i) becomes tight. This change only increases the set of tight edges E, keeps the solution feasible, and does not change the objective function value. So after doing this for all sellers, and analogously changing yt and pj for all buyers, we get an optimal solution where all sellers and buyers i either have pi = 0 or have at least two adjacent tight edges.

Now we can set asks and bids to form a cross-free equilibrium. For all traders t = (i, j) associated with an edge t ∈ E we set αt and βt as before: we set the bid βt = pi + θi and the ask αt = θj − pj. For a trader t = (i, j) ∉ E we have that pi + θi > θj − pj, and we set αt = βt to be any value in the range [θj − pj, pi + θi]. This guarantees that for each seller or buyer the best sell or buy offer is along the edge where trade occurs in the solution. The ask-bid values along the tight edges guarantee that traders who trade cannot increase their spread.
Traders t = (i, j) who do not trade cannot make a profit due to the constraint pi + θi ≥ θj − pj.

Figure 2: Left: an equilibrium with crossing bids where traders make no money. Right: an equilibrium without crossing bids for any value x ∈ [0, 1]. Total trader profit ranges between 1 and 2.

2.2 Distinguishable Goods

We now consider the case of distinguishable goods. As in the previous section, we can write a transshipment linear program for the socially optimal trade, with the only change being in the objective function.

max SV(x) = Σ_{t∈T: t=(i,j)} xt(θji − θij)

We can show that the dual of this linear program corresponds to trader profits. Recall that we needed to add the constraints xt ≤ 1 for all traders. The dual is then:

min sum(p, y) = Σ_{i∈B∪S} pi + Σ_{t∈T} yt
subject to
  yt ≥ 0 for all t ∈ T
  pi ≥ 0 for all i ∈ S ∪ B
  yt ≥ (θji − pj) − (θij + pi) for all t = (i, j) ∈ T

It is not hard to extend the proofs of Theorems 2.2 - 2.4 to this case. Profits in an equilibrium satisfy the dual constraints, and profits and trade satisfy complementary slackness. This shows that trade is socially optimal. Taking an optimal dual solution where pi = 0 for all agents that are monopolized, we can convert it to an equilibrium, and with a bit more care, we can also create an equilibrium with no crossing bid-ask pairs.

THEOREM 2.5. All equilibria for the case of pair-traders with distinguishable goods result in socially optimal trade. Pure non-crossing equilibria exist.

2.3 Trader Profits

We have seen that all equilibria are efficient. However, it turns out that equilibria may differ in how the value of the allocation is spread between the sellers, buyers and traders.
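The duality certificate behind these efficiency results can be made concrete in code. The sketch below uses a toy pair-trader instance of our own devising (three traders, all seller valuations 0 and buyer valuations 1) and checks dual feasibility plus the complementary slackness conditions from the proof of Theorem 2.2.

```python
# Toy check (our own instance) of the certificate used in Theorem 2.2:
# given a trade x and profits (p, y), verify dual feasibility and
# complementary slackness for pair-traders with indistinguishable goods.

theta = {'i1': 0.0, 'i2': 0.0, 'j1': 1.0, 'j2': 1.0}   # agent valuations
traders = {'t1': ('i1', 'j1'), 't2': ('i2', 'j1'), 't3': ('i2', 'j2')}

# Candidate equilibrium outcome: t1 and t3 trade, t2 does not.
x = {'t1': 1, 't2': 0, 't3': 1}
p = {'i1': 0.0, 'i2': 0.0, 'j1': 1.0, 'j2': 0.0}       # seller/buyer profits
y = {'t1': 0.0, 't2': 0.0, 't3': 1.0}                  # trader profits

EPS = 1e-9

def certifies_efficiency(x, p, y):
    for t, (i, j) in traders.items():
        slack = y[t] - ((theta[j] - p[j]) - (theta[i] + p[i]))
        if slack < -EPS:                     # dual feasibility
            return False
        if y[t] > EPS and x[t] != 1:         # y_t > 0 forces x_t = 1
            return False
        if x[t] > 0 and abs(slack) > EPS:    # x_t > 0 forces a tight constraint
            return False
    for a in p:                              # p_i > 0 forces agent a to trade
        deals = sum(x[t] for t, (i, j) in traders.items() if a in (i, j))
        if p[a] > EPS and deals != 1:
            return False
    return True
```

Here the total profit p + y equals 2, the optimal social value, as the duality argument requires; zeroing out all profits, by contrast, violates dual feasibility on edge t1.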
Figure 2 depicts a simple example of this phenomenon.

Our goal is to understand how a trader's profit is affected by its position in the network; we will use the characterization we obtained to work out the range of profits a trader can make. To maximize the profit of a trader t (or a subset of traders T′), all we need to do is to find an optimal solution to the dual linear program maximizing the value of yt (or the sum Σ_{t∈T′} yt). Such dual solutions will then correspond to equilibria with non-crossing prices.

THEOREM 2.6. For any trader t or subset of traders T′ the maximum total profit they can make in any equilibrium can be computed in polynomial time. This maximum profit can be obtained by a non-crossing equilibrium.

One way to think about the profit of a trader t = (i, j) is as a subtraction from the value of the corresponding edge (i, j). The value of the edge is the social value θji − θij if the trader makes no profit, and decreases to θji − θij − yt if the trader t insists on making yt profit. Trader t gets yt profit in equilibrium if, after this decrease in the value of the edge, the edge is still included in the optimal transshipment.

THEOREM 2.7. A trader t can make profit in an equilibrium if and only if t is essential for the social welfare, that is, if deleting agent t decreases social welfare. The maximum profit he can make is exactly his value to society, that is, the increase his presence causes in the social welfare.

If we allow crossing equilibria, then we can also find the minimum possible profit. Recall that in the proof of Theorem 2.3, traders only made money off of sellers or buyers that they have a monopoly over.
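The characterization in Theorem 2.7 can be checked by brute force on a toy pair-trader instance (the instance and names are our own; this enumeration is for illustration only, not the polynomial-time dual-LP method of Theorem 2.6).

```python
# Brute-force illustration (our own toy instance) of Theorem 2.7: a
# trader's maximum equilibrium profit equals the drop in optimal social
# value caused by deleting that trader.
from itertools import combinations

theta = {'i1': 0.0, 'i2': 0.0, 'j1': 1.0, 'j2': 1.0}
traders = {'t1': ('i1', 'j1'), 't2': ('i2', 'j1'), 't3': ('i2', 'j2')}

def optimal_value(available):
    """Max total theta_j - theta_i over matchings using available traders."""
    edges = list(available.values())
    best = 0.0
    for k in range(len(edges) + 1):
        for chosen in combinations(edges, k):
            agents = [a for (i, j) in chosen for a in (i, j)]
            if len(set(agents)) == len(agents):        # no seller/buyer reused
                best = max(best, sum(theta[j] - theta[i] for (i, j) in chosen))
    return best

def max_profit(t):
    rest = {name: e for name, e in traders.items() if name != t}
    return optimal_value(traders) - optimal_value(rest)
```

In this instance t1 and t3 are each essential (deleting either loses one unit of trade value), so each can make profit 1, while t2 can be bypassed and therefore can never profit.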
Allowing such equilibria with crossing bids, we can find the minimum profit a trader or set of traders can make by minimizing the value yt (or the sum Σ_{t∈T′} yt) over all optimal solutions that satisfy pi = 0 whenever i is connected to only a single trader.

THEOREM 2.8. For any trader t or subset of traders T′ the minimum total profit they can make in any equilibrium can be computed in polynomial time.

3. GENERAL TRADERS

Next we extend the results to a model where traders may be connected to an arbitrary number of sellers and buyers. For a trader t ∈ T we will use S(t) and B(t) to denote the sets of sellers and buyers connected to trader t. In this section we focus on the general case when goods are distinguishable (i.e. both buyers and sellers have valuations that are sensitive to the identity of the agent they are paired with in the allocation). In the full version of the paper we also discuss the special case of indistinguishable goods in more detail.

To get the optimal trade, we consider the bipartite graph G = (S ∪ B, E) connecting sellers and buyers, where an edge e = (i, j) connects a seller i and a buyer j if there is a trader adjacent to both: E = {(i, j) : adj(i) ∩ adj(j) ≠ ∅}. On this graph, we then solve the instance of the assignment problem that was also used in Section 2.2, with the value of edge (i, j) equal to θji − θij (since the value of trading between i and j is independent of which trader conducted the trade).
We will also use the dual of this linear\nprogram:\nmin val(z) =\nX\ni\u2208B\u222aS\nzi\nzi \u2265 0 \u2200i \u2208 S \u222a B.\nzi + zj \u2265 \u03b8ji \u2212 \u03b8ij \u2200i \u2208 S, j \u2208 B :\nadj(i) \u2229 adj(j) = \u2205.\n3.1 Bids and Asks and Trader Optimization\nFirst we need to understand what bidding model we will use.\nEven when goods are indistinguishable, a trader may want to\npricediscriminate, and offer different bid and ask values to different\nsellers and buyers. In the case of distinguishable goods, we have to deal\nwith a further complication: the trader has to name the good she is\nproposing to sell or buy, and can possibly offer multiple different\nproducts.\nThere are two variants of our model depending whether a trader\nmakes a single bid or ask to a seller or buyer, or she offers a menu\nof options.\n(i) A trader t can offer a buyer j a menu of asks \u03b1tji, a vector of\nvalues for all the products that she is connected to, where \u03b1tji\nis the ask for the product of seller i. Symmetrically, a trader\nt can offer to each seller i a menu of bids \u03b2tij for selling to\ndifferent buyers j.\n(ii) Alternatively, we can require that each trader t can make at\nmost one ask to each seller and one bid for each buyer, and\nan ask has to include the product sold, and a bid has to offer\na particular buyer to sell to.\nOur results hold in either model. For notational simplicity we will\nuse the menu option here.\nNext we need to understand the optimization problem of a trader\nt. Suppose we have bid and ask values for all other traders t \u2208 T,\nt = t. What are the best bid and ask offers trader t can make as a\nbest response to the current set of bids and asks? For each seller i\nlet pi be the maximum profit seller i can make using bids by other\ntraders, and symmetrically assume pj is the maximum profit buyer\nj can make using asks by other traders (let pi = 0 for any seller or\nbuyer i who cannot make profit). 
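The bipartite graph G and the dual program above can be sketched mechanically. The following helpers assume an ad-hoc dict encoding (agent names and the `adj` map from agents to adjacent traders are illustrative, not from the paper): one builds the edge set E, the other checks dual feasibility.

```python
def connectable_pairs(sellers, buyers, adj):
    # Edges of the bipartite graph G = (S u B, E): a seller i and a buyer j
    # are joined iff some trader is adjacent to both; `adj` maps each
    # agent to the set of traders adjacent to it.
    return {(i, j) for i in sellers for j in buyers if adj[i] & adj[j]}

def dual_feasible(z, pairs, theta_sell, theta_buy):
    # Feasibility of the dual program above: z_i >= 0 for every agent, and
    # z_i + z_j >= theta_ji - theta_ij on every connectable pair (i, j).
    if any(v < 0 for v in z.values()):
        return False
    return all(z[i] + z[j] >= theta_buy[j] - theta_sell[i]
               for (i, j) in pairs)
```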
Now consider a seller-buyer pair\n(i, j) that trader t can connect. Trader t will have to make a bid of at\nleast \u03b2tij = \u03b8ij +pi to seller i and an ask of at most \u03b1tji = \u03b8ji \u2212pj\nto buyer j to get this trade, so the maximum profit she can make\non this trade is vtij = \u03b1tji \u2212 \u03b2tij = \u03b8ji \u2212 pj \u2212 (\u03b8ij + pi). The\noptimal trade for trader t is obtained by solving a matching problem\nto find the matching between the sellers S(t) and buyers B(t) that\nmaximizes the total value vtij for trader t.\nWe will need the dual of the linear program of finding the trade of\nmaximum profit for the trader t. We will use qti as the dual variable\nassociated with the constraint of seller or buyer i. The dual is then\nthe following problem.\nmin val(qt) =\nX\ni\u2208B(t)\u222aS(t)\nqti\nqti \u2265 0 \u2200i \u2208 S(t) \u222a B(t).\nqti + qtj \u2265 vtij \u2200i \u2208 S(t), j \u2208 B(t).\nWe view qti as the profit made by t from trading with seller or buyer\ni. Theorem 3.1 summarizes the above discussion.\nTHEOREM 3.1. For a trader t, given the lowest bids \u03b2tij and\nhighest asks \u03b1tji that can be accepted for sellers i \u2208 S(t) and\nbuyers j \u2208 B(t), the best trade t can make is the maximum value\nmatching between S(t) and B(t) with value vtij = \u03b1tji \u2212 \u03b2tij for\nthe edge (i, j). This maximum value is equal to the minimum of the\ndual linear program above.\n3.2 Efficient Trade and Equilibrium\nNow we can prove trade at equilibrium is always efficient.\nTHEOREM 3.2. Every equilibrium results in an efficient\nallocation of the goods.\nProof. Consider an equilibrium, with xe = 1 if and only if trade\noccurs along edge e = (i, j). Trade is a solution to the\ntransshipment linear program used in Section 2.2.\nLet pi denote the profit of seller or buyer i. Each trader t\ncurrently has the best solution to his own optimization problem. 
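The trader's best-response computation of Theorem 3.1 can be sketched by brute force (a real implementation would solve the matching linear program; the function and parameter names below are illustrative):

```python
from itertools import combinations, permutations

def best_response_profit(S_t, B_t, theta_sell, theta_buy, p):
    # Edge (i, j) is worth v_tij = theta_ji - p_j - (theta_ij + p_i);
    # the trader's optimal trade is the maximum-value matching between
    # S(t) and B(t), found here by enumerating all partial matchings.
    def v(i, j):
        return theta_buy[j] - p[j] - (theta_sell[i] + p[i])
    best = 0
    for r in range(min(len(S_t), len(B_t)) + 1):
        for ss in combinations(S_t, r):
            for bs in permutations(B_t, r):
                best = max(best, sum(v(i, j) for i, j in zip(ss, bs)))
    return best
```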
A\ntrader t finds his optimal trade (given bids and asks by all other\n149\ntraders) by solving a matching problem. Let qti for i \u2208 B(t)\u222aS(t)\ndenote the optimal dual solution to this matching problem as\ndescribed by Theorem 3.1.\nWhen setting up the optimization problem for a trader t above,\nwe used pi to denote the maximum profit i can make without the\noffer of trader t. Note that this pi is exactly the same pi we use\nhere, the profit of agent i. This is clearly true for all traders t that\nare not trading with i in the equilibrium. To see why it is true for the\ntrader t that i is trading with we use that the current set of bid-ask\nvalues is an equilibrium. If for any agent i the bid or ask of trader\nt were the unique best option, then t could extract more profit by\noffering a bit larger ask or a bit smaller bid, a contradiction.\nWe show the trade x is optimal by considering the dual solution\nzi = pi +\nP\nt qti for all agents i \u2208 B \u222a S. We claim z is a dual\nsolution, and it satisfies complementary slackness with trade x. To\nsee this we need to show a few facts.\nWe need that zi > 0 implies that i trades. If zi > 0 then either\npi > 0 or qti > 0 for some trader t. Agent i can only make\nprofit pi > 0 if he is involved in a trade. 
If qti > 0 for some t,\nthen trader t must trade with i, as his solution is optimal, and\nby complementary slackness for the dual solution, qti > 0\nimplies that t trades with i.\nFor an edge (i, j) associated with a trader t we need to show\nthe dual solution is feasible, that is zi + zj \u2265 \u03b8ji \u2212 \u03b8ij .\nRecall vtij = \u03b8ji \u2212pj \u2212(\u03b8ij +pi), and the dual constraint of\nthe trader\"s optimization problem requires qti + qtj \u2265 vtij.\nPutting these together, we have\nzi + zj \u2265 pi + qti + pj + qtj \u2265 vtij + pi + pj = \u03b8ji \u2212 \u03b8ij .\nFinally, we need to show that the trade variables x also\nsatisfy the complementary slackness constraint: when xe > 0\nfor an edge e = (i, j) then the corresponding dual constraint\nis tight. Let t be the trader involved in the trade. By\ncomplementary slackness of t\"s optimization problem we have\nqti + qtj = vtij. To see that z satisfies complementary\nslackness we need to argue that for all other traders t = t we have\nboth qt i = 0 and qt j = 0. This is true as qt i > 0 implies by\ncomplementary slackness of t \"s optimization problem that t\nmust trade with i at optimum, and t = t is trading.\nNext we want to show that a non-crossing equilibrium always\nexists. We call an equilibrium non-crossing if the bid-ask offers\na trader t makes for a seller-buyer pair (i, j) never cross, that is\n\u03b2tij \u2264 \u03b1tji for all t, i, j.\nTHEOREM 3.3. There exists a non-crossing equilibrium\nsupporting any socially optimal trade.\nProof. Consider an optimal trade x and a dual solution z as before.\nTo find a non-crossing equilibrium we need to divide the profit zi\nbetween i and the trader t trading with i. We will use qti as the\ntrader t\"s profit associated with agent i for any i \u2208 S(t) \u222a B(t).\nWe will need to guarantee the following properties:\nTrader t trades with agent i whenever qti > 0. 
This is one\nof the complementary slackness conditions to make sure the\ncurrent trade is optimal for trader t.\nFor all seller-buyer pairs (i, j) that a trader t can trade with,\nwe have\npi + qti + pj + qtj \u2265 \u03b8ji \u2212 \u03b8ij , (1)\nwhich will make sure that qt is a feasible dual solution for the\noptimization problem faced by trader t.\nWe need to have equality in (1) when trader t is trading\nbetween i and j. This is one of the complementary slackness\nconditions for trader t, and will ensure that the trade of t is\noptimal for the trader.\nFinally, we want to arrange that each agent i with pi > 0 has\nmultiple offers for making profit pi, and the trade occurs at\none of his best offers. To guarantee this in the corresponding\nbids and asks we need to make sure that whenever pi > 0\nthere are multiple t \u2208 adj(i) that have equation in the above\nconstraint (1).\nWe start by setting pi = zi for all i \u2208 S \u222a B and qti = 0\nfor all i \u2208 S \u222a B and traders t \u2208 adj(i). This guarantees all\ninvariants except the last property about multiple t \u2208 adj(t) having\nequality in (1). We will modify p and q to gradually enforce the last\ncondition, while maintaining the others.\nConsider a seller with pi > 0. By optimality of the trade and\ndual solution z, seller i must trade with some trader t, and that\ntrader will have equality in (1) for the buyer j that he matches with\ni. If this is the only trader t that has a tight constraint in (1)\ninvolving seller i then we increase qti and decrease pi till either pi = 0 or\nanother trader t = t will be achieve equality in (1) for some buyer\nedge adjacent to i (possibly a different buyer j ). This change\nmaintains all invariants, and increases the set of sellers that also satisfy\nthe last constraint. We can do a similar change for a buyer j that\nhas pj > 0 and has only one trader t with a tight constraint (1)\nadjacent to j. 
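The dual solution z_i = p_i + Σ_t q_ti used in the proofs of Theorems 3.2 and 3.3 can be validated mechanically on small instances. A sketch, assuming an illustrative encoding in which q is keyed by (trader, agent) pairs and value[(i, j)] = θ_ji − θ_ij:

```python
def supports_equilibrium(p, q, trades, value, pairs):
    # Check the facts from the proof of Theorem 3.2 for z_i = p_i + sum_t q_ti:
    # (a) z is feasible on every connectable seller-buyer pair,
    # (b) z_i > 0 only for agents involved in a trade,
    # (c) every traded pair has a tight dual constraint.
    z = {a: p[a] + sum(w for (t, b), w in q.items() if b == a) for a in p}
    trading = {a for e in trades for a in e}
    feasible = all(z[i] + z[j] >= value[(i, j)] for (i, j) in pairs)
    slack = all(z[a] == 0 or a in trading for a in p)
    tight = all(z[i] + z[j] == value[(i, j)] for (i, j) in trades)
    return feasible and slack and tight
```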
After possibly repeating this for all sellers and buyers,\nwe get profits satisfying all constraints.\nNow we get equilibrium bid and ask values as follows. For a\ntrader t that has equality for the seller-buyer pair (i, j) in (1) we\noffer \u03b1tji = \u03b8ji \u2212 pj and \u03b2tij = \u03b8ij + pi. For all other traders\nt and seller-buyer pairs (i, j) we have the invariant (1), and using\nthis we know we can pick a value \u03b3 in the range \u03b8ij +pi+qti \u2265 \u03b3 \u2265\n\u03b8ji \u2212 (pj + qtj ). We offer bid and ask values \u03b2tij = \u03b1tji = \u03b3.\nNeither the bid nor the ask will be the unique best offer for the\nbuyer, and hence the trade x remains an equilibrium.\n3.3 Trader Profits\nFinally we turn to the goal of understanding, in the case of\ngeneral traders, how a trader\"s profit is affected by its position in the\nnetwork.\nFirst, we show how to maximize the total profit of a set of traders.\nThe profit of trader t in an equilibrium is\nP\ni qti. To find the\nmaximum possible profit for a trader t or a set of traders T , we\nneed to do the following: Find profits pi \u2265 0 and qti > 0 so\nthat zi = pi +\nP\nt\u2208adj(i) qti is an optimal dual solution, and also\nsatisfies the constraints (1) for any seller i and buyer j connected\nthrough a trader t \u2208 T. Now, subject to all these conditions, we\nmaximize the sum\nP\nt\u2208T\nP\ni\u2208S(t)\u222aB(t) qti. Note that this\nmaximization is a secondary objective function to the primary objective\nthat z is an optimal dual solution. Then we use the proof of\nTheorem 3.3 shows how to turn this into an equilibrium.\nTHEOREM 3.4. The maximum value for\nP\nt\u2208T\nP\ni qti above\nis the maximum profit the set T of traders can make.\nProof. By the proof of Theorem 3.2 the profits of trader t can\nbe written in this form, so the set of traders T cannot make more\nprofit than claimed in this theorem.\nTo see that T can indeed make this much profit, we use the proof\nof Theorem 3.3. 
We modify that proof to start with the profit vectors p and qt for t \u2208 T, and set qt = 0 for all traders t \u2209 T. We verify that this starting solution satisfies the first three of the four required properties, and then we can follow the proof to make the fourth property true. We omit the details of this in the present version.\nIn Section 2.3 we showed that in the case of pair traders, a trader t can make money if he is essential for efficient trade. This is not true for the type of more general traders we consider here, as shown by the example in Figure 3.\nFigure 3: The top trader is essential for social welfare. Yet the only equilibrium is to have bid and ask values equal to 0, and the trader makes no profit.\nHowever, we still get a characterization for when a trader t can make a positive profit.\nTHEOREM 3.5. A trader t can make profit in an equilibrium if and only if there is a seller or buyer i adjacent to t such that the connection of trader t to agent i is essential for social welfare, that is, if deleting agent t from adj(i) decreases the value of the optimal allocation.\nProof. First we show the direction that if a trader t can make money there must be an agent i so that t's connection to i is essential to social welfare. Let p, q be the profits in an equilibrium where t makes money, as described by Theorem 3.2, with \u03a3_{i\u2208S(t)\u222aB(t)} qti > 0. So we have some agent i with qti > 0. We claim that the connection between agent i and trader t must be essential; in particular, we claim that social welfare must decrease by at least qti if we delete t from adj(i). To see why, note that decreasing the value of all edges of the form (i, j) associated with trader t by qti keeps the same trade optimum, as we get a matching dual solution by simply resetting qti to zero.\nTo see the opposite, assume deleting t from adj(i) decreases social welfare by some value \u03b3. 
Assume i is a seller (the case of\nbuyers is symmetric), and decrease by \u03b3 the social value of each\nedge (i, j) for any buyer j such that t is the only agent connecting\ni and j. By assumption the trade is still optimal, and we let z be\nthe dual solution for this matching. Now we use the same process\nas in the proof of Theorem 3.3 to create a non-crossing equilibrium\nstarting with pi = zi for all i \u2208 S \u222aB, and qti = \u03b3, and all other q\nvalues 0. This creates an equilibrium with non-crossing bids where\nt makes at least \u03b3 profit (due to trade with seller i).\nFinally, if we allow crossing equilibria, then we can find the\nminimum possible profit by simply finding a dual solution\nminimizing the dual variables associated with agents monopolized by some\ntrader.\nTHEOREM 3.6. For any trader t or subset of traders T , the\nminimum total profit they can make in any equilibrium can be\ncomputed in polynomial time.\n4. REFERENCES\n[1] M. Babaioff, N. Nisan, E. Pavlov. Mechanisms for a\nSpatially Distributed Market. ACM EC Conference, 2005.\n[2] C. Barrett, E. Mutambatsere. Agricultural markets in\ndeveloping countries. The New Palgrave Dictionary of\nEconomics, 2nd edition, forthcoming.\n[3] Kenneth Burdett and Kenneth Judd. Equilibrium Price\nDisperison. Econometrica, 51/4, July 1983, 955-969.\n[4] L. Chu, Z.-J. Shen. Agent Competition Double Auction\nMechanism. Management Science, 52/8, 2006.\n[5] G. Demange, D. Gale, M. Sotomayor. Multi-item auctions. J.\nPolitical Econ. 94(1986).\n[6] E. Even-Dar, M. Kearns, S. Suri. A Network Formation\nGame for Bipartite Exchange Economies. ACM-SIAM\nSymp. on Discrete Algorithms (SODA), 2007.\n[7] J. Kephart, J. Hanson, A. Greenwald. Dynamic Pricing by\nSoftware Agents. Computer Networks, 2000.\n[8] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, S. Suri.\nEconomic Properties of Social Networks. NIPS 2004.\n[9] R. Kranton, D. Minehart. A Theory of Buyer-Seller\nNetworks. 
American Economic Review 91(3), June 2001.\n[10] H. Leonard. Elicitation of Honest Preferences for the Assignment of Individuals to Positions. J. Pol. Econ., 1983.\n[11] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45:167-256, 2003.\n[12] M. O'Hara. Market Microstructure Theory. Blackwell Publishers, Cambridge, MA, 1995.\n[13] L. Shapley, M. Shubik. The Assignment Game I: The Core. Intl. J. Game Theory 1/2, 111-130, 1972.\n[14] Jean Tirole. The Theory of Industrial Organization. The MIT Press, Cambridge, MA, 1988.", "keywords": "initial endowment of money;bid price;economics and finance;maximum and minimum amount;market;perfect competition;algorithmic game theory;interaction of buyer and seller;trader strategic behavior;benefit;strategic behavior of trader;monopoly;trade network;buyer and seller interaction;trading network;money initial endowment;complementary slackness"}
-{"name": "test_J-13", "title": "On The Complexity of Combinatorial Auctions: Structured Item Graphs and Hypertree Decompositions", "abstract": "The winner determination problem in combinatorial auctions is the problem of determining the allocation of the items among the bidders that maximizes the sum of the accepted bid prices. While this problem is in general NPhard, it is known to be feasible in polynomial time on those instances whose associated item graphs have bounded treewidth (called structured item graphs). Formally, an item graph is a graph whose nodes are in one-to-one correspondence with items, and edges are such that for any bid, the items occurring in it induce a connected subgraph. Note that many item graphs might be associated with a given combinatorial auction, depending on the edges selected for guaranteeing the connectedness. In fact, the tractability of determining whether a structured item graph of a fixed treewidth exists (and if so, computing one) was left as a crucial open problem. In this paper, we solve this problem by proving that the existence of a structured item graph is computationally intractable, even for treewidth 3. Motivated by this bad news, we investigate different kinds of structural requirements that can be used to isolate tractable classes of combinatorial auctions. We show that the notion of hypertree decomposition, a recently introduced measure of hypergraph cyclicity, turns out to be most useful here. Indeed, we show that the winner determination problem is solvable in polynomial time on instances whose bidder interactions can be represented with (dual) hypergraphs having bounded hypertree width. Even more surprisingly, we show that the class of tractable instances identified by means of our approach properly contains the class of instances having a structured item graph.", "fulltext": "1. INTRODUCTION\nCombinatorial auctions. 
Combinatorial auctions are\nwell-known mechanisms for resource and task allocation\nwhere bidders are allowed to simultaneously bid on\ncombinations of items. This is desirable when a bidder\"s valuation\nof a bundle of items is not equal to the sum of her valuations\nof the individual items. This framework is currently used to\nregulate agents\" interactions in several application domains\n(cf., e.g., [21]) such as, electricity markets [13], bandwidth\nauctions [14], and transportation exchanges [18].\nFormally, a combinatorial auction is a pair I, B , where\nI = {I1, ..., Im} is the set of items the auctioneer has\nto sell, and B = {B1, ..., Bn} is the set of bids from the\nbuyers interested in the items in I. Each bid Bi has\nthe form item(Bi), pay(Bi) , where pay(Bi) is a rational\nnumber denoting the price a buyer offers for the items in\nitem(Bi) \u2286 I. An outcome for I, B is a subset b of B\nsuch that item(Bi)\u2229item(Bj) = \u2205, for each pair Bi and Bj\nof bids in b with i = j.\nThe winner determination problem. A crucial\nproblem for combinatorial auctions is to determine the outcome\nb\u2217\nthat maximizes the sum of the accepted bid prices (i.e.,\nBi\u2208b\u2217 pay(Bi)) over all the possible outcomes. This\nproblem, called winner determination problem (e.g., [11]), is\nknown to be intractable, actually NP-hard [17], and even\nnot approximable in polynomial time unless NP = ZPP [19].\nHence, it comes with no surprise that several efforts have\nbeen spent to design practically efficient algorithms for\ngeneral auctions (e.g., [20, 5, 2, 8, 23]) and to identify classes of\ninstances where solving the winner determination problem\nis feasible in polynomial time (e.g., [15, 22, 12, 21]). In fact,\nconstraining bidder interaction was proven to be useful for\nidentifying classes of tractable combinatorial auctions.\nItem graphs. 
Currently, the most general class of\ntractable combinatorial auctions has been singled out by\nmodelling interactions among bidders with the notion of\nitem graph, which is a graph whose nodes are in one-to-one\ncorrespondence with items, and edges are such that for any\n152\nFigure 1: Example MaxWSP problem: (a) Hypergraph\nH I0,B0\n, and a packing h for it; (b) Primal graph for\nH I0,B0\n; and, (c,d) Two item graphs for H I0,B0\n.\nbid, the items occurring in it induce a connected subgraph.\nIndeed, the winner determination problem was proven to be\nsolvable in polynomial time if interactions among bidders\ncan be represented by means of a structured item graph,\ni.e., a tree or, more generally, a graph having tree-like\nstructure [3]-formally bounded treewidth [16].\nTo have some intuition on how item graphs can be built,\nwe notice that bidder interaction in a combinatorial auction\nI, B can be represented by means of a hypergraph H I,B\nsuch that its set of nodes N(H I,B ) coincides with set of\nitems I, and where its edges E(H I,B ) are precisely the bids\nof the buyers {item(Bi) | Bi \u2208 B}. A special item graph for\nI, B is the primal graph of H I,B , denoted by G(H I,B ),\nwhich contains an edge between any pair of nodes in some\nhyperedge of H I,B . Then, any item graph for H I,B can be\nviewed as a simplification of G(H I,B ) obtained by deleting\nsome edges, yet preserving the connectivity condition on the\nnodes included in each hyperedge.\nExample 1. The hypergraph H I0,B0\nreported in\nFigure 1.(a) is an encoding for a combinatorial auction I0, B0 ,\nwhere I0 = {I1, ..., I5}, and item(Bi) = hi, for each\n1 \u2264 i \u2264 3. The primal graph for H I0,B0\nis reported in\nFigure 1.(b), while two example item graphs are reported in\nFigure 1.(c) and (d), where edges required for maintaining\nthe connectivity for h1 are depicted in bold. \u00a1\nOpen Problem: Computing structured item\ngraphs efficiently. 
The above mentioned tractability result\non structured item graphs turns out to be useful in practice\nonly when a structured item graph either is given or can be\nefficiently determined. However, exponentially many item\ngraphs might be associated with a combinatorial auction,\nand it is not clear how to determine whether a structured\nitem graph of a certain (constant) treewidth exists, and if\nso, how to compute such a structured item graph efficiently.\nPolynomial time algorithms to find the best\nsimplification of the primal graph were so far only known for the cases\nwhere the item graph to be constructed is a line [10], a cycle\n[4], or a tree [3], but it was an important open problem (cf.\n[3]) whether it is tractable to check if for a combinatorial\nauction, an item graph of treewidth bounded by a fixed\nnatural number k exists and can be constructed in polynomial\ntime, if so.\nWeighted Set Packing. Let us note that the hypergraph\nrepresentation H I,B of a combinatorial auction I, B is\nalso useful to make the analogy between the winner\ndetermination problem and the maximum weighted-set packing\nproblem on hypergraphs clear (e.g., [17]).\nFormally, a packing h for a hypergraph H is a set of\nhyperedges of H such that for each pair h, h \u2208 h with h = h , it\nholds that h \u2229 h = \u2205. Letting w be a weighting function for\nH, i.e., a polynomially-time computable function from E(H)\nto rational numbers, the weight of a packing h is the\nrational number w(h) = h\u2208h w(h), where w({}) = 0. Then,\nthe maximum-weighted set packing problem for H w.r.t. w,\ndenoted by MaxWSP(H, w), is the problem of finding a\npacking for H having the maximum weight over all the packings\nfor H. To see that MaxWSP is just a different formulation\nfor the winner determination problem, given a\ncombinatorial auction I, B , it is sufficient to define the weighting\nfunction w I,B (item(Bi)) = pay(Bi). 
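The equivalence between winner determination and MaxWSP can be made concrete with a tiny brute-force solver (exponential time, for intuition only; hyperedges are encoded as frozensets of items, and this is not the paper's polynomial-time algorithm):

```python
from itertools import combinations

def max_wsp(hyperedges, w):
    # Brute-force MaxWSP(H, w): the maximum-weight set of pairwise-disjoint
    # hyperedges. With w(item(B_i)) = pay(B_i) this is exactly the winner
    # determination problem for the auction <I, B>.
    best_weight, best_packing = 0, ()
    E = list(hyperedges)
    for r in range(len(E) + 1):
        for packing in combinations(E, r):
            if all(not (a & b) for a, b in combinations(packing, 2)):
                weight = sum(w[h] for h in packing)
                if weight > best_weight:
                    best_weight, best_packing = weight, packing
    return best_weight, set(best_packing)
```

On an instance shaped like Example 2 (h1 overlapping two mutually disjoint bids of equal price), the solver picks the two disjoint bids rather than h1.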
Then, the set of the\nsolutions for the weighted set packing problem for H I,B\nw.r.t. w I,B coincides with the set of the solutions for the\nwinner determination problem on I, B .\nExample 2. Consider again the hypergraph H I0,B0\n\nreported in Figure 1.(a). An example packing for H I0,B0\nis\nh = {h1}, which intuitively corresponds to an outcome for\nI0, B0 , where the auctioneer accepted the bid B1. By\nassuming that bids B1, B2, and B3 are such that pay(B1) =\npay(B2) = pay(B3), the packing h is not a solution for\nthe problem MaxWSP(H I0,B0\n, w I0,B0\n). Indeed, the packing\nh\u2217\n= {h2, h3} is such that w I0,B0\n(h\u2217\n) > w I0,B0\n(h). \u00a1\nContributions\nThe primary aim of this paper is to identify large tractable\nclasses for the winner determination problem, that are,\nmoreover polynomially recognizable. Towards this aim, we\nfirst study structured item graphs and solve the open\nproblem in [3]. The result is very bad news:\nIt is NP complete to check whether a combinatorial\nauction has a structured item graph of treewidth 3. More\nformally, letting C(ig, k) denote the class of all the\nhypergraphs having an item tree of treewidth bounded by k,\nwe prove that deciding whether a hypergraph (associated\nwith a combinatorial auction problem) belongs to C(ig, 3)\nis NP-complete.\nIn the light of this result, it was crucial to assess whether\nthere are some other kinds of structural requirement that\ncan be checked in polynomial time and that can still be\nused to isolate tractable classes of the maximum\nweightedset packing problem or, equivalently, the winner\ndetermination problem. Our investigations, this time, led to very good\nnews which are summarized below:\nFor a hypergraph H, its dual \u00afH = (V, E) is such that\nnodes in V are in one-to-one correspondence with\nhyperedges in H, and for each node x \u2208 N(H), {h | x \u2208 h \u2227 h \u2208\n153\nE(H)} is in E. 
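The dual-hypergraph construction just defined is a one-liner (sketch; hyperedges encoded as frozensets):

```python
def dual_hypergraph(nodes, hyperedges):
    # Dual hypergraph of H: one node per hyperedge of H, and for each
    # node x of H the set {h : x in h} becomes a hyperedge of the dual.
    return [frozenset(h for h in hyperedges if x in h) for x in nodes]
```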
We show that MaxWSP is tractable on the class of those instances whose dual hypergraphs have hypertree width [7] bounded by k (short: class C(hw, k) of hypergraphs). Note that a key to the tractability is to consider the hypertree width of the dual hypergraph \u00afH instead of the auction hypergraph H. In fact, we can show that MaxWSP remains NP-hard even when H is acyclic (i.e., when it has hypertree width 1), and even when each node is contained in at most 3 hyperedges.\nFor some relevant special classes of hypergraphs in C(hw, k), we design a highly parallelizable algorithm for MaxWSP. Specifically, if the weighting functions can be computed in logarithmic space and weights are polynomial (e.g., when all the hyperedges have unitary weights and one is interested in finding the packing with the maximum number of edges), we show that MaxWSP can be solved by a LOGCFL algorithm. Recall, in fact, that LOGCFL is the class of decision problems that are logspace reducible to context-free languages, and that LOGCFL \u2286 NC2 \u2286 P (see, e.g., [9]).\nSurprisingly, nothing is lost in terms of generality when considering the hypertree decomposition of dual hypergraphs instead of the treewidth of item graphs. On the contrary, the proposed hypertree-based decomposition method is strictly more general than the method of structured item graphs: we show that strictly larger classes of instances are tractable according to our new approach than according to the structured item graphs approach. Intuitively, the NP-hardness of recognizing bounded-width structured item graphs is thus not due to its great generality, but rather to some peculiarities in its definition.
Indeed, we\nshow that structured item graphs are in one-to-one\ncorrespondence with some special kinds of hypertree\ndecomposition of the dual hypergraph, which we call strict hypertree\ndecompositions. A game-characterization for the notion of\nstrict hypertree width is also proposed, which specializes\nthe Robber and Marshals game in [6] (proposed to\ncharacterize the hypertree width), and which makes it clear the\nfurther requirements on hypertree decompositions.\nThe rest of the paper is organized as follows. Section 2\ndiscusses the intractability of structured item graphs. Section 3\npresents the polynomial-time algorithm for solving MaxWSP\non the class of those instances whose dual hypergraphs have\nbounded hypertree width, and discusses the cases where the\nalgorithm is also highly parallelizable. The comparison\nbetween the classes C(ig, k) and C(hw, k) is discussed in\nSection 4. Finally, in Section 5 we draw our conclusions by also\noutlining directions for further research.\n2. COMPLEXITY OF STRUCTURED\nITEM GRAPHS\nLet H be a hypergraph. A graph G = (V, E) is an item\ngraph for H if V = N(H) and, for each h \u2208 E(H), the\nsubgraph of G induced over the nodes in h is connected.\nAn important class of item graphs is that of structured item\ngraphs, i.e., of those item graphs having bounded treewidth\nas formalized below.\nA tree decomposition [16] of a graph G = (V, E) is a pair\nT, \u03c7 , where T = (N, F) is a tree, and \u03c7 is a labelling\nfunction assigning to each vertex p \u2208 N a set of vertices\n\u03c7(p) \u2286 V , such that the following conditions are satisfied:\n(1) for each vertex b of G, there exists p \u2208 N such that\nb \u2208 \u03c7(p); (2) for each edge {b, d} \u2208 E, there exists p \u2208 N\nsuch that {b, d} \u2286 \u03c7(p); (3) for each vertex b of G, the\nset {p \u2208 N | b \u2208 \u03c7(p)} induces a connected subtree of T.\nThe width of T, \u03c7 is the number maxp\u2208N |\u03c7(p) \u2212 1|. 
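The three conditions of a tree decomposition, together with its width max_p |\u03c7(p)| \u2212 1, can be checked mechanically. A sketch, assuming bags and the tree adjacency are given as dicts (names are illustrative):

```python
from collections import deque

def tree_decomposition_width(graph_edges, tree_adj, bags):
    # Check conditions (1)-(3) of a tree decomposition <T, chi> and return
    # its width max_p |chi(p)| - 1, or None if some condition fails.
    vertices = {v for e in graph_edges for v in e}
    if not vertices <= set().union(*bags.values()):      # condition (1)
        return None
    if not all(any(set(e) <= bag for bag in bags.values())
               for e in graph_edges):                    # condition (2)
        return None
    for v in vertices:                                   # condition (3)
        holding = {node for node, bag in bags.items() if v in bag}
        start = next(iter(holding))
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for nxt in tree_adj[u]:
                if nxt in holding and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if seen != holding:  # bags containing v must form a subtree
            return None
    return max(len(bag) for bag in bags.values()) - 1
```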
The\ntreewidth of G, denoted by tw(G), is the minimum width\nover all its tree decompositions.\nThe winner determination problem can be solved in\npolynomial time on item graphs having bounded treewidth [3].\nTheorem 1 (cf. [3]). Assume a k-width tree\ndecomposition T, \u03c7 of an item graph for H is given. Then,\nMaxWSP(H, w) can be solved in time O(|T|2\n\u00d7(|E(H)|+1)k+1\n).\nMany item graphs can be associated with a hypergraph.\nAs an example, observe that the item graph in Figure 1.(c)\nhas treewidth 1, while Figure 1.(d) reports an item graph\nwhose treewidth is 2. Indeed, it was an open question\nwhether for a given constant k it can be checked in\npolynomial time if an item graph of treewidth k exists, and if so,\nwhether such an item graph can be efficiently computed.\nLet C(ig, k) denote the class of all the hypergraphs having\nan item graph G such that tw(G) \u2264 k. The main result of\nthis section is to show that the class C(ig, k) is hard to\nrecognize.\nTheorem 2. Deciding whether a hypergraph H belongs to\nC(ig, 3) is NP-hard.\nThe proof of this result relies on an elaborate reduction from\nthe Hamiltonian path problem HP(s, t) of deciding whether\nthere is an Hamiltonian path from a node s to a node t in a\ndirected graph G = (N, E). To help the intuition, we report\nhere a high-level overview of the main ingredients exploited\nin the proof1\n.\nThe general idea it to build a hypergraph HG such that\nthere is an item graph G for HG with tw(G ) \u2264 3 if and only\nif HP(s, t) over G has a solution. First, we discuss the way\nHG is constructed. See Figure 2.(a) for an illustration, where\nthe graph G consists of the nodes s, x, y, and t, and the set of\nits edges is {e1 = (s, x), e2 = (x, y), e3 = (x, t), e4 = (y, t)}.\nFrom G to HG. 
Let G = (N, E) be a directed graph.\nThen, the set of the nodes in HG is such that: for each\nx \u2208 N, N(HG) contains the nodes bsx, btx, bx, bx, bdx; for\neach e = (x, y) \u2208 E, N(HG) contains the nodes nsx, nsx,\nnty, nty , nse\nx and nte\ny. No other node is in N(HG).\nHyperedges in HG are of three kinds:\n1) for each x \u2208 N, E(HG) contains the hyperedges:\n\u2022 Sx = {bsx} \u222a {nse\nx | e = (x, y) \u2208 E};\n\u2022 Tx = {btx} \u222a {nte\nx | e = (z, x) \u2208 E};\n\u2022 A1\nx = {bdx, bx}, A2\nx = {bdx, bx}, and A3\nx = {bx, bx}\n-notice that these hyperedges induce a clique on\nthe nodes {bx, bx, bdx};\n1\nDetailed proofs can be found in the Appendix, available at\nwww.mat.unical.it/\u223cggreco/papers/ca.pdf.\n154\nFigure 2: Proof of Theorem 2: (a) from G to HG - hyperedges in 1) and 2) are reported only; (b) a skeleton\nfor a tree decomposition TD for HG.\n\u2022 SA1\nx = {bsx, bx}, SA2\nx = {bsx, bx}, SA3\nx =\n{bsx, bdx} -notice that these hyperedges plus\nA1\nx, A2\nx, and A3\nx induce a clique on the nodes\n{bsx, bx, bx, bdx};\n\u2022 TA1\nx = {btx, bx}, TA2\nx = {btx, bx}, and TA3\nx =\n{btx, bdx} -notice that these hyperedges plus\nA1\nx, A2\nx, and A3\nx induce a clique on the nodes\n{btx, bx, bx, bdx};\n2) for each e = (x, y) \u2208 E, E(HG) contains the hyperedges:\n\u2022 SHx = {nsx, nsx};\n\u2022 THy = {nty, nty };\n\u2022 SEe = {nsx, nse\nx} and SEe = {nsx, nse\nx} -notice\nthat these two hyperedges plus SHx induce a clique\non the nodes {nsx, nsx, nse\nx};\n\u2022 TEe = {nty, nte\ny} and TEe = {nty , nte\ny} -notice\nthat these two hyperedges plus THy induce a clique\non the nodes {nty, nty , nte\ny}.\nNotice that each of the above hyperedges but those of the\nform Sx and Tx contains exactly two nodes. 
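As a concrete illustration (not part of the formal proof), the kind-1 hyperedges S_x, T_x and the kind-2 hyperedges of H_G can be generated mechanically; the two-node clique hyperedges A/SA/TA and the set D_G are omitted for brevity, and the ad-hoc string names below only loosely follow the paper's notation.

```python
def core_reduction_hyperedges(nodes, edges):
    # Kind-1 hyperedges S_x, T_x and kind-2 hyperedges of H_G.
    # `edges` maps an edge name e to its (tail, head) pair.
    H = {}
    for x in nodes:
        H['S_' + x] = ({'bs_' + x} |
                       {f'nse_{e}_{x}' for e, (u, _) in edges.items() if u == x})
        H['T_' + x] = ({'bt_' + x} |
                       {f'nte_{e}_{x}' for e, (_, v) in edges.items() if v == x})
    for e, (x, y) in edges.items():
        H['SH_' + x] = {'ns_' + x, "ns'_" + x}
        H['TH_' + y] = {'nt_' + y, "nt'_" + y}
        H['SE_' + e] = {'ns_' + x, f'nse_{e}_{x}'}
        H["SE'_" + e] = {"ns'_" + x, f'nse_{e}_{x}'}
        H['TE_' + e] = {'nt_' + y, f'nte_{e}_{y}'}
        H["TE'_" + e] = {"nt'_" + y, f'nte_{e}_{y}'}
    return H
```

On the four-node graph of Figure 2.(a), this reproduces, e.g., Sx = {bsx, nse2_x, nse3_x} and Tt = {btt, nte3_t, nte4_t}.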
As an example of the hyperedges of kind 1) and 2), the reader may refer to the example construction reported in Figure 2.(a), and notice, for instance, that S_x = {bs_x, ns^e2_x, ns^e3_x} and that T_t = {bt_t, nt^e4_t, nt^e3_t}.

3) finally, we denote by DG the set containing the hyperedges in E(HG) of the third kind. In the reduction we are exploiting, DG can be an arbitrary set of hyperedges satisfying the four conditions that are discussed below. Let PG be the set of the following |PG| ≤ |N| + 3 × |E| pairs: PG = {(b_x, b'_x) | x ∈ N} ∪ {(ns_x, ns'_x), (nt_y, nt'_y), (ns^e_x, nt^e_y) | e = (x, y) ∈ E}. Also, let I(v) denote the set {h ∈ E(H) | v ∈ h} of the hyperedges of H that are touched by v; and, for a set V ⊆ N(H), let I(V) = ∪_{v∈V} I(v). Then, DG has to be a set such that:

(c1) ∀(α, β) ∈ PG, I(α) ∩ I(β) ∩ DG = ∅;
(c2) ∀(α, β) ∈ PG, I(α) ∪ I(β) ⊇ DG;
(c3) ∀α ∈ N(HG) such that there is no β ∈ N(HG) with (α, β) ∈ PG or (β, α) ∈ PG, it holds: I(α) ∩ DG = ∅; and,
(c4) ∀S ⊆ N(HG) such that |S| ≤ 3 and where there are no α, β ∈ S with (α, β) ∈ PG, it is the case that: I(S) ⊉ DG.

Intuitively, the set DG is such that each of its hyperedges is touched by exactly one of the two nodes in every pair of PG - cf. (c1) and (c2). Moreover, hyperedges in DG touch only vertices included in at least a pair of PG - cf. (c3); and, any triple of nodes is not capable of touching all the elements of DG if none of the pairs that can be built from it belongs to PG - cf. (c4).

The reader may now ask whether a set DG exists at all satisfying (c1), (c2), (c3) and (c4). In the following lemma, we positively answer this question and refer the reader to its proof for an example construction.

Lemma 1.
A set DG, with |DG| = 2 × |PG| + 2, satisfying conditions (c1), (c2), (c3), and (c4) can be built in time O(|PG|^2).

Key Ingredients. We are now in the position of presenting an overview of the key ingredients of the proof. Let G' be an arbitrary item graph for HG, and let TD = ⟨T, χ⟩ be a 3-width tree decomposition of G' (note that, because of the cliques, e.g., on the nodes {bs_x, b_x, b'_x, bd_x}, any item graph for HG has treewidth 3 at least).

There are three basic observations serving the purpose of proving the correctness of the reduction.

Blocks of TD: First, we observe that TD must contain some special kinds of vertex. Specifically, for each node x ∈ N, TD contains a vertex bs(x) such that χ(bs(x)) ⊇ {bs_x, b_x, b'_x, bd_x}, and a vertex bt(x) such that χ(bt(x)) ⊇ {bt_x, b_x, b'_x, bd_x}. And, for each edge e = (x, y) ∈ E, TD contains a vertex ns(x,e) such that χ(ns(x,e)) ⊇ {ns^e_x, ns_x, ns'_x}, and a vertex nt(y,e) such that χ(nt(y,e)) ⊇ {nt^e_y, nt_y, nt'_y}.

Intuitively, these vertices are required to cover the cliques of HG associated with the hyperedges of kind 1) and 2). Each of these vertices plays a specific role in the reduction. Indeed, each directed edge e = (x, y) ∈ E is encoded in TD by means of the vertices: ns(x,e), representing precisely that e starts from x; and, nt(y,e), representing precisely that e terminates into y. Also, each node x ∈ N is encoded in TD by means of the vertices: bs(x), representing the starting point of edges originating from x; and, bt(x), representing the terminating point of edges ending into x. As an example, Figure 2.(b) reports the skeleton of a tree decomposition TD.
The reader may notice in it the blocks defined above and how they are related with the hypergraph HG in Figure 2.(a) - other blocks in it (of the form w(x,y)) are defined next.

Connectedness between blocks, and uniqueness of the connections: The second crucial observation is that in the path connecting a vertex of the form bs(x) (resp., bt(y)) with a vertex of the form ns(x,e) (resp., nt(y,e)) there is one special vertex of the form w(x,y) such that: χ(w(x,y)) ⊇ {ns^e'_x, nt^e'_y}, for some edge e' = (x, y) ∈ E.

Guaranteeing the existence of one such vertex is precisely the role played by the hyperedges in DG. The arguments for the proof are as follows. First, we observe that I(χ(bs(x))) ∩ I(χ(ns(x,e))) ⊇ DG ∪ {S_x} and I(χ(bt(y))) ∩ I(χ(nt(y,e))) ⊇ DG ∪ {T_y}. Then, we show a property stating that for a pair of consecutive vertices p and q in the path connecting bs(x) and ns(x,e) (resp., bt(y) and nt(y,e)), I(χ(p) ∩ χ(q)) ⊇ I(χ(bs(x))) ∩ I(χ(ns(x,e))) (resp., I(χ(p) ∩ χ(q)) ⊇ I(χ(bt(y))) ∩ I(χ(nt(y,e)))). Thus, we have: I(χ(p) ∩ χ(q)) ⊇ DG ∪ {S_x} (resp., I(χ(p) ∩ χ(q)) ⊇ DG ∪ {T_y}).

Based on this observation, and by exploiting the properties of the hyperedges in DG, it is not difficult to show that any pair of consecutive vertices p and q must share two nodes of HG forming a pair in PG, and must both touch S_x (resp., T_y). When the treewidth of G' is 3, we can conclude that a vertex, say w(x,y), in this path is such that χ(w(x,y)) ⊇ {ns^e'_x, nt^e'_y}, for some edge e' = (x, y) ∈ E - to this end, note that ns^e'_x ∈ S_x, nt^e'_y ∈ T_y, and I(χ(w(x,y))) ⊇ DG.
In particular, w(x,y) is the only kind of vertex satisfying these conditions, i.e., in the path there is no further vertex of the form w(x,z), for z ≠ y (resp., w(z,y), for z ≠ x).

To help the intuition, we observe that having a vertex of the form w(x,y) in TD corresponds to the selection of an edge from node x to node y in the Hamiltonian path. In fact, given the uniqueness of these vertices selected for ensuring the connectivity, a one-to-one correspondence can be established between the existence of a Hamiltonian path for G and the vertices of the form w(x,y). As an example, in Figure 2.(b), the vertices of the form w(s,x), w(x,y), and w(y,t) are in TD, and G_TD shows the corresponding Hamiltonian path.

Unused blocks: Finally, the third ingredient of the proof is the observation that if a vertex of the form w(x,y), for an edge e' = (x, y) ∈ E, is not in TD (i.e., if the edge (x, y) does not belong to the Hamiltonian path), then the corresponding block ns(x,e') (resp., nt(y,e')) can be arbitrarily appended in the subtree rooted at the block ns(x,e) (resp., nt(y,e)), where e is the edge of the form e = (x, z) (resp., e = (z, y)) such that w(x,z) (resp., w(z,y)) is in TD.

E.g., Figure 2.(a) shows w(x,t), which is not used in TD, and Figure 2.(b) shows how the blocks ns(x,e3) and nt(t,e3) can be arranged in TD for ensuring the connectedness condition.

3.
TRACTABLE CASES VIA HYPERTREE DECOMPOSITIONS

Since constructing structured item graphs is intractable, it is relevant to assess whether other structural restrictions can be used to single out classes of tractable MaxWSP instances. To this end, we focus on the notion of hypertree decomposition [7], which is a natural generalization of hypergraph acyclicity and which has been profitably used in other domains, e.g., constraint satisfaction and database query evaluation, to identify islands of tractability for NP-hard problems.

A hypertree for a hypergraph H is a triple ⟨T, χ, λ⟩, where T = (N, E) is a rooted tree, and χ and λ are labelling functions which associate each vertex p ∈ N with two sets χ(p) ⊆ N(H) and λ(p) ⊆ E(H). If T' = (N', E') is a subtree of T, we define χ(T') = ∪_{v∈N'} χ(v). We denote the set of vertices N of T by vertices(T). Moreover, for any p ∈ N, T_p denotes the subtree of T rooted at p.

Definition 1. A hypertree decomposition of a hypergraph H is a hypertree HD = ⟨T, χ, λ⟩ for H which satisfies all the following conditions:
1. for each edge h ∈ E(H), there exists p ∈ vertices(T) such that h ⊆ χ(p) (we say that p covers h);
2. for each node Y ∈ N(H), the set {p ∈ vertices(T) | Y ∈ χ(p)} induces a (connected) subtree of T;
3. for each p ∈ vertices(T), χ(p) ⊆ N(λ(p));
4. for each p ∈ vertices(T), N(λ(p)) ∩ χ(T_p) ⊆ χ(p).

The width of a hypertree decomposition ⟨T, χ, λ⟩ is max_{p∈vertices(T)} |λ(p)|. The hypertree width hw(H) of H is the minimum width over all its hypertree decompositions. A hypergraph H is acyclic if hw(H) = 1.

Figure 3: Example MaxWSP problem: (a) Hypergraph H1; (b) Hypergraph H̄1; (c) A 2-width hypertree decomposition of H̄1.

Example 3.
The hypergraph H⟨I0,B0⟩ reported in Figure 1.(a) is an example of an acyclic hypergraph. Instead, both the hypergraphs H1 and H̄1 shown in Figure 3.(a) and Figure 3.(b), respectively, are not acyclic, since their hypertree width is 2. A 2-width hypertree decomposition for H̄1 is reported in Figure 3.(c). In particular, observe that H1 has been obtained by adding the two hyperedges h4 and h5 to H⟨I0,B0⟩ to model, for instance, that two new bids, B4 and B5, respectively, have been proposed to the auctioneer.

In the following, rather than working on the hypergraph H associated with a MaxWSP problem, we shall deal with its dual H̄, i.e., with the hypergraph such that its nodes are in one-to-one correspondence with the hyperedges of H, and where, for each node x ∈ N(H), {h | x ∈ h ∧ h ∈ E(H)} is in E(H̄). As an example, the reader may want to check again the hypergraph H1 in Figure 3.(a) and notice that the hypergraph in Figure 3.(b) is in fact its dual.

The rationale for this choice is that issuing restrictions on the original hypergraph guarantees tractability only in very simple scenarios.

Theorem 3. On the class of acyclic hypergraphs, MaxWSP is (1) in P if each node occurs in two hyperedges at most; and, (2) NP-hard, even if each node is contained in three hyperedges at most.

3.1 Hypertree Decomposition on the Dual Hypergraph and Tractable Packing Problems

For a fixed constant k, let C(hw, k) denote the class of all the hypergraphs whose dual hypergraphs have hypertree width bounded by k.
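The dual hypergraph just defined can be computed directly from its definition. A minimal Python sketch (our own illustration: hyperedges of H are assumed to be given as a dict mapping a name such as 'h1' to its set of nodes):

```python
def dual_hypergraph(nodes, hyperedges):
    """Build the dual of H = (nodes, hyperedges).

    The dual has one node per hyperedge name of H, and one hyperedge per
    node x of H, collecting the names of the hyperedges of H containing x.
    """
    return {x: frozenset(h for h, members in hyperedges.items() if x in members)
            for x in nodes}

# Toy example: node 2 is shared by h1 and h2, so the dual edge for node 2
# is {h1, h2}.
H = {"h1": {1, 2}, "h2": {2, 3}, "h3": {3, 4}}
dual = dual_hypergraph({1, 2, 3, 4}, H)
```

The dict returned here keeps, for each dual edge, the node of H it originates from; forgetting the keys yields the plain edge set of H̄.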
The maximum weighted-set packing problem can be solved in polynomial time on the class C(hw, k) by means of the algorithm ComputeSetPacking_k, shown in Figure 4.

The algorithm receives in input a hypergraph H, a weighting function w, and a k-width hypertree decomposition HD = ⟨T = (N, E), χ, λ⟩ of H̄. For each vertex v ∈ N, let Hv be the hypergraph whose set of nodes N(Hv) ⊆ N(H) coincides with λ(v), and whose set of edges E(Hv) ⊆ E(H) coincides with χ(v). In an initialization step, the algorithm equips each vertex v with all the possible packings for Hv, which are stored in the set ℋv. Note that the size of ℋv is bounded by (|E(H)| + 1)^k, since each node in λ(v) is either left uncovered in a packing or is covered with precisely one of the hyperedges in χ(v) ⊆ E(H). Then, ComputeSetPacking_k is designed to filter these packings by retaining only those that conform with some packing for Hc, for each child c of v in T, as formalized next. Let hv and hc be two packings for Hv and Hc, respectively. We say that hv conforms with hc, denoted by hv ≈ hc, if: for each h ∈ hc ∩ E(Hv), h is in hv; and, for each h ∈ (E(Hc) − hc), h is not in hv.

Example 4. Consider again the hypertree decomposition of H̄1 reported in Figure 3.(c).
Input: H, w, and a k-width hypertree decomposition HD = ⟨T = (N, E), χ, λ⟩ of H̄;
Output: A solution to MaxWSP(H, w);
var ℋv: set of packings for Hv, for each v ∈ N;
    h*: packing for H;
    ω^v_{hv}: rational number, for each partial packing hv for Hv;
    h_{hv,c}: partial packing for Hc, for each partial packing hv for Hv, and for each (v, c) ∈ E;

Procedure BottomUp;
begin
  Done := the set of all the leaves of T;
  while ∃v ∈ T such that (i) v ∉ Done, and (ii) {c | c is a child of v} ⊆ Done do
    for each c such that (v, c) ∈ E do
      ℋv := ℋv − {hv | there is no hc ∈ ℋc s.t. hv ≈ hc};
    for each hv ∈ ℋv do
      ω^v_{hv} := w(hv);
      for each c such that (v, c) ∈ E do
        h̄c := arg max_{hc ∈ ℋc | hv ≈ hc} ( ω^c_{hc} − w(hc ∩ hv) );
        h_{hv,c} := h̄c; (* set best packing *)
        ω^v_{hv} := ω^v_{hv} + ω^c_{h̄c} − w(h̄c ∩ hv);
      end for
    end for
    Done := Done ∪ {v};
  end while
end;

begin (* MAIN *)
  for each vertex v in T do
    ℋv := {hv | hv is a packing for Hv};
  BottomUp;
  let r be the root of T;
  h̄r := arg max_{hr ∈ ℋr} ω^r_{hr};
  h* := h̄r; (* include packing *)
  TopDown(r, h̄r);
  return h*;
end.

Procedure TopDown(v: vertex of N, h̄v ∈ ℋv);
begin
  for each c ∈ N s.t. (v, c) ∈ E do
    h̄c := h_{h̄v,c};
    h* := h* ∪ h̄c; (* include packing *)
    TopDown(c, h̄c);
  end for
end;

Figure 4: Algorithm ComputeSetPacking_k.

Figure 5: Example application of Algorithm ComputeSetPacking_k.

Then, the set of all the possible packings (which are built in the initialization step of ComputeSetPacking_k), for each of its vertices, is reported in Figure 5.(a). For instance, the root v1 is such that ℋv1 = { {}, {h1}, {h3}, {h5} }. Moreover, an arrow from a packing hc to hv denotes that hv conforms with hc.
For instance, the reader may check that the packing {h3} ∈ ℋv1 conforms with the packing {h2, h3} ∈ ℋv3, but does not conform with {h1} ∈ ℋv3.

ComputeSetPacking_k builds a solution by traversing T in two phases. In the first phase, vertices of T are processed from the leaves to the root r, by means of the procedure BottomUp. For each node v being processed, the set ℋv is preliminarily updated by removing all the packings hv that do not conform with any packing for some of the children of v. After this filtering is performed, the weight ω^v_{hv} is updated. Intuitively, ω^v_{hv} stores the weight of the best partial packing for H computed by using only the hyperedges occurring in χ(T_v). Indeed, if v is a leaf, then ω^v_{hv} = w(hv). Otherwise, for each child c of v in T, ω^v_{hv} is updated with the maximum of ω^c_{hc} − w(hc ∩ hv) over all the packings hc that conform with hv (resolving ties arbitrarily). The packing h̄c for which this maximum is achieved is stored in the variable h_{hv,c}.

In the second phase, the tree T is processed starting from the root. Firstly, the packing h* is selected that maximizes the weight associated with the packings in ℋr. Then, procedure TopDown is used to extend h* to all the other partial packings for vertices of T. In particular, at each vertex v, h* is extended with the packing h_{hv,c}, for each child c of v.

Example 5. Assume that, in our running example, w(h1) = w(h2) = w(h3) = w(h4) = 1. Then, an execution of ComputeSetPacking_k is graphically depicted in Figure 5.(b), where an arrow from a packing hc to a packing hv is used to denote that hc = h_{hv,c}.
Specifically, the choices made during the computation are such that the packing {h2, h3} is computed.

In particular, during the bottom-up phase, we have that: (1) v4 is processed, and we set ω^{v4}_{{h2}} = ω^{v4}_{{h4}} = 1 and ω^{v4}_{{}} = 0; (2) v3 is processed, and we set ω^{v3}_{{h1}} = ω^{v3}_{{h3}} = 1 and ω^{v3}_{{}} = 0; (3) v2 is processed, and we set ω^{v2}_{{h1}} = ω^{v2}_{{h2}} = ω^{v2}_{{h3}} = ω^{v2}_{{h4}} = 1, ω^{v2}_{{h2,h3}} = 2 and ω^{v2}_{{}} = 0; (4) v1 is processed, and we set ω^{v1}_{{h1}} = 1, ω^{v1}_{{h5}} = ω^{v1}_{{h3}} = 2 and ω^{v1}_{{}} = 0. For instance, note that ω^{v1}_{{h5}} = 2 since {h5} conforms with the packing {h4} of ℋv2 such that ω^{v2}_{{h4}} = 1.

Then, at the beginning of the top-down phase, ComputeSetPacking_k selects {h3} as a packing for ℋv1 and propagates this choice in the tree. Equivalently, the algorithm may have chosen {h5}.

As a further example, the way the solution {h1} is obtained by the algorithm when w(h1) = 5 and w(h2) = w(h3) = w(h4) = 1 is reported in Figure 5.(c). Notice that, this time, in the top-down phase, ComputeSetPacking_k starts by selecting {h1} as the best packing for ℋv1.

Theorem 4. Let H be a hypergraph and w be a weighting function for it. Let HD = ⟨T, χ, λ⟩ be a complete k-width hypertree decomposition of H̄. Then, ComputeSetPacking_k on input H, w, and HD correctly outputs a solution for MaxWSP(H, w) in time O(|T| × (|E(H)| + 1)^(2k)).

Proof. [Sketch] We observe that h* (computed by ComputeSetPacking_k) is a packing for H. Indeed, consider a pair of hyperedges h1 and h2 in h*, and assume, for the sake of contradiction, that h1 ∩ h2 ≠ ∅. Let v1 (resp., v2) be an arbitrary vertex of T for which ComputeSetPacking_k included h1 (resp., h2) in h* in the top-down computation. By construction, we have h1 ∈ χ(v1) and h2 ∈ χ(v2). Let I be an element in h1 ∩ h2. In the dual hypergraph H̄, I is a hyperedge in E(H̄) which covers both the nodes h1 and h2.
Hence, by condition (1) in Definition 1, there is a vertex v ∈ vertices(T) such that {h1, h2} ⊆ χ(v). Note that, because of the connectedness condition in Definition 1, we can also assume, w.l.o.g., that v is in the path connecting v1 and v2 in T.

Let hv ∈ ℋv denote the element added by ComputeSetPacking_k into h* during the top-down phase. Since the elements in ℋv are packings for Hv, it is the case that either h1 ∉ hv or h2 ∉ hv. Assume, w.l.o.g., that h1 ∉ hv, and notice that each vertex w in T in the path connecting v to v1 is such that h1 ∈ χ(w), because of the connectedness condition. Hence, because of the definition of conformance, the packing hw selected by ComputeSetPacking_k to be added at vertex w in h* must be such that h1 ∉ hw. This holds in particular for w = v1. Contradiction with the definition of v1.

Therefore, h* is a packing for H. It remains then to show that it has the maximum weight over all the packings for H. To this aim, we can use structural induction on T to prove that, in the bottom-up phase, the variable ω^v_{hv} is updated to contain the weight of the packing on the edges in χ(T_v) which contains hv and which has the maximum weight over all such packings for the edges in χ(T_v). Then, the result follows, since in the top-down phase, the packing h̄r giving the maximum weight over χ(T_r) = E(H) is first included in h*, and then extended at each node c with the packing h_{hv,c} conforming with hv and such that the maximum value of ω^v_{hv} is achieved.

As for the complexity, observe that the initialization step requires the construction of the set ℋv, for each vertex v, and each set has size (|E(H)| + 1)^k at most. Then, the function BottomUp checks for the conformance between strategies in ℋv and strategies in ℋc, for each pair (v, c) ∈ E, and updates the weight ω^v_{hv}.
These tasks can be carried out in time O((|E(H)| + 1)^(2k)) and must be repeated for each edge in T, i.e., O(|T|) times. Finally, the function TopDown can be implemented in linear time in the size of T, since it just requires updating h* by accessing the variable h_{hv,c}.

The above result shows that if a hypertree decomposition of width k is given, the MaxWSP problem can be efficiently solved. Moreover, differently from the case of structured item graphs, it is well known that deciding the existence of a k-bounded hypertree decomposition and computing one (if any) are problems which can be solved in polynomial time [7]. Therefore, Theorem 4 witnesses that the class C(hw, k) actually constitutes a tractable class for the winner determination problem.

As the following theorem shows, for large subclasses (that depend only on how the weight function is specified), MaxWSP(H, w) is even highly parallelizable. Let us call a weighting function smooth if it is logspace computable and if all weights are polynomial (and thus just require O(log n) bits for their representation). Recall that LOGCFL is a parallel complexity class contained in NC2, cf. [9]. The functional version of LOGCFL is L^LOGCFL, which is obtained by equipping a logspace transducer with an oracle in LOGCFL.

Theorem 5. Let H be a hypergraph in C(hw, k), and let w be a smooth weighting function for it. Then, MaxWSP(H, w) is in L^LOGCFL.

4. HYPERTREE DECOMPOSITIONS VS STRUCTURED ITEM GRAPHS

Given that the class C(hw, k) has been shown to be an island of tractability for the winner determination problem, and given that the class C(ig, k) has been shown not to be efficiently recognizable, one may be inclined to think that there are instances having unbounded hypertree width, but admitting an item graph of bounded treewidth (so that the intractability of structured item graphs would lie in their generality). Surprisingly, we establish that this is not the case.
The line of the proof is to first show that structured item graphs are in one-to-one correspondence with a special kind of hypertree decompositions of the dual hypergraph, which we shall call strict. Then, the result will follow by proving that k-width strict hypertree decompositions are less powerful than k-width hypertree decompositions.

4.1 Strict Hypertree Decompositions

Let H be a hypergraph, and let V ⊆ N(H) be a set of nodes and X, Y ∈ N(H). X is [V]-adjacent to Y if there exists an edge h ∈ E(H) such that {X, Y} ⊆ (h − V). A [V]-path π from X to Y is a sequence X = X0, ..., Xℓ = Y of variables such that Xi is [V]-adjacent to Xi+1, for each i ∈ [0...ℓ−1]. A set W ⊆ N(H) of nodes is [V]-connected if ∀X, Y ∈ W there is a [V]-path from X to Y. A [V]-component is a maximal [V]-connected non-empty set of nodes W ⊆ (N(H) − V). For any [V]-component C, let E(C) = {h ∈ E(H) | h ∩ C ≠ ∅}.

Definition 2. A hypertree decomposition HD = ⟨T, χ, λ⟩ of H is strict if the following conditions hold:
1. for each pair of vertices r and s in vertices(T) such that s is a child of r, and for each [χ(r)]-component C_r s.t. C_r ∩ χ(T_s) ≠ ∅, C_r is a [χ(r) ∩ N(λ(r) ∩ λ(s))]-component;
2. for each edge h ∈ E(H), there is a vertex p such that h ∈ λ(p) and h ⊆ χ(p) (we say p strongly covers h);
3. for each edge h ∈ E(H), the set {p ∈ vertices(T) | h ∈ λ(p)} induces a (connected) subtree of T.

The strict hypertree width shw(H) of H is the minimum width over all its strict hypertree decompositions.

The basic relationship between strict hypertree decompositions and structured item graphs is shown in the following theorem.

Theorem 6. Let H be a hypergraph such that for each node v ∈ N(H), {v} is in E(H).
Then, a k-width tree decomposition of an item graph for H exists if and only if H̄ has a (k + 1)-width strict hypertree decomposition².

Note that, as far as the maximum weighted-set packing problem is concerned, given a hypergraph H, we can always assume that for each node v ∈ N(H), {v} is in E(H). In fact, if this hyperedge is not in the hypergraph, then it can be added without loss of generality by setting w({v}) = 0. Therefore, letting C(shw, k) denote the class of all the hypergraphs whose dual hypergraphs (associated with maximum weighted-set packing problems) have strict hypertree width bounded by k, we have that C(shw, k + 1) = C(ig, k).

² The term +1 only plays the technical role of taking care of the different definition of width for tree decompositions and hypertree decompositions.

By definition, strict hypertree decompositions are special hypertree decompositions. In fact, we are able to show that the additional conditions in Definition 2 induce an actual restriction on the decomposition power.

Theorem 7. C(ig, k) = C(shw, k + 1) ⊂ C(hw, k + 1).

A Game Theoretic View. We shed further light on strict hypertree decompositions by discussing an interesting characterization based on the strict Robber and Marshals Game, defined by adapting the Robber and Marshals game defined in [6], which characterizes hypertree width.

The game is played on a hypergraph H by a robber against k marshals which act in coordination. Marshals move on the hyperedges of H, while the robber moves on the nodes of H. The robber sees where the marshals intend to move, and reacts by moving to another node which is connected with its current position through a path in G(H) which does not use any node contained in a hyperedge that is occupied by the marshals before and after their move - we say that these hyperedges are blocked.
Note that in the basic game defined in [6], the robber is not allowed to move on vertices that are occupied by the marshals before and after their move, even if they do not belong to blocked hyperedges. Importantly, marshals are required to play monotonically, i.e., they cannot occupy an edge that was previously occupied in the game and which is currently not. The marshals win the game if they capture the robber, by occupying an edge covering a node where the robber is. Otherwise, the robber wins.

Theorem 8. Let H be a hypergraph such that for each node v ∈ N(H), {v} is in E(H). Then, H̄ has a k-width strict hypertree decomposition if and only if k marshals can win the strict Robber and Marshals Game on H̄, no matter what the robber's moves are.

5. CONCLUSIONS

We have solved the open question of determining the complexity of computing a structured item graph associated with a combinatorial auction scenario. The result is bad news, since it turned out that it is NP-complete to check whether a combinatorial auction has a structured item graph, even for treewidth 3. Motivated by this result, we investigated the use of hypertree decompositions (on the dual hypergraph associated with the scenario), and we showed that the problem is tractable on the class of those instances whose dual hypergraphs have bounded hypertree width. For some special, yet relevant cases, a highly parallelizable algorithm is also discussed. Interestingly, it also emerged that the class of structured item graphs is properly contained in the class of instances having bounded hypertree width (hence, the reason for their intractability is not their generality).

In particular, the latter result is established by showing a precise relationship between structured item graphs and restricted forms of hypertree decompositions (on the dual hypergraph), called query decompositions (see, e.g., [7]).
In the light of this observation, we note that proving approximability results for structured item graphs requires a deep understanding of the approximability of query decompositions, which is currently missing in the literature.

As a further avenue of research, it would be relevant to enhance the algorithm ComputeSetPacking_k, e.g., by using specialized data structures, in order to avoid the quadratic dependency on (|E(H)| + 1)^k.

Finally, another interesting question is to assess whether the structural decomposition techniques discussed in the paper can be used to efficiently deal with generalizations of the winner determination problem. For instance, it might be relevant in several application scenarios to design algorithms that can find a selling strategy when several copies of the same item are available for sale, and when, moreover, the auctioneer is satisfied when at least a given number of copies is actually sold.

Acknowledgement

G. Gottlob's work was supported by the EC3 - E-Commerce Competence Center (Vienna) and by a Royal Society Wolfson Research Merit Award. In particular, this Award allowed Gottlob to invite G. Greco for a research visit to Oxford. In addition, G. Greco is supported by ICAR-CNR, and by M.I.U.R. under project TOCAI.IT.

6. REFERENCES

[1] I. Adler, G. Gottlob, and M. Grohe. Hypertree-width and related hypergraph invariants. In Proc. of EUROCOMB'05, pages 5-10, 2005.
[2] C. Boutilier. Solving concisely expressed combinatorial auction problems. In Proc. of AAAI'02, pages 359-366, 2002.
[3] V. Conitzer, J. Derryberry, and T. Sandholm. Combinatorial auctions with structured item graphs. In Proc. of AAAI'04, pages 212-218, 2004.
[4] E. M. Eschen and J. P. Spinrad. An O(n^2) algorithm for circular-arc graph recognition. In Proc. of SODA'93, pages 128-137, 1993.
[5] Y. Fujishima, K. Leyton-Brown, and Y.
Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. of IJCAI'99, pages 548-553, 1999.
[6] G. Gottlob, N. Leone, and F. Scarcello. Robbers, marshals, and guards: game theoretic and logical characterizations of hypertree width. Journal of Computer and System Sciences, 66(4):775-808, 2003.
[7] G. Gottlob, N. Leone, and F. Scarcello. Hypertree decompositions and tractable queries. Journal of Computer and System Sciences, 63(3):579-627, 2002.
[8] H. H. Hoos and C. Boutilier. Solving combinatorial auctions using stochastic local search. In Proc. of AAAI'00, pages 22-29, 2000.
[9] D. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, pages 67-161. 1990.
[10] N. Korte and R. H. Möhring. An incremental linear-time algorithm for recognizing interval graphs. SIAM Journal on Computing, 18(1):68-81, 1989.
[11] D. Lehmann, R. Müller, and T. Sandholm. The winner determination problem. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions. MIT Press, 2006.
[12] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002.
[13] R. McAfee and J. McMillan. Analyzing the airwaves auction. Journal of Economic Perspectives, 10(1):159-175, 1996.
[14] J. McMillan. Selling spectrum rights. Journal of Economic Perspectives, 8(3):145-162, 1994.
[15] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. of EC'00, pages 1-12, 2000.
[16] N. Robertson and P. Seymour. Graph minors II. Algorithmic aspects of tree-width. Journal of Algorithms, 7:309-322, 1986.
[17] M. H. Rothkopf, A. Pekec, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44:1131-1147, 1998.
[18] T. Sandholm.
An implementation of the contract net protocol based on marginal cost calculations. In Proc. of AAAI'93, pages 256-262, 1993.
[19] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, 2002.
[20] T. Sandholm. Winner determination algorithms. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions. MIT Press, 2006.
[21] T. Sandholm and S. Suri. BOB: Improved winner determination in combinatorial auctions and generalizations. Artificial Intelligence, 145:33-58, 2003.
[22] M. Tennenholtz. Some tractable combinatorial auctions. In Proc. of AAAI'00, pages 98-103, 2000.
[23] E. Zurel and N. Nisan. An efficient approximate allocation algorithm for combinatorial auctions. In Proc. of EC'01, pages 125-136, 2001.
-{"name": "test_J-14", "title": "Computing Good Nash Equilibria in Graphical Games \u2217", "abstract": "This paper addresses the problem of fair equilibrium selection in graphical games. Our approach is based on the data structure called the best response policy, which was proposed by Kearns et al. [13] as a way to represent all Nash equilibria of a graphical game. In [9], it was shown that the best response policy has polynomial size as long as the underlying graph is a path. In this paper, we show that if the underlying graph is a bounded-degree tree and the best response policy has polynomial size then there is an efficient algorithm which constructs a Nash equilibrium that guarantees certain payoffs to all participants. Another attractive solution concept is a Nash equilibrium that maximizes the social welfare. We show that, while exactly computing the latter is infeasible (we prove that solving this problem may involve algebraic numbers of an arbitrarily high degree), there exists an FPTAS for finding such an equilibrium as long as the best response policy has polynomial size. These two algorithms can be combined to produce Nash equilibria that satisfy various fairness criteria.", "fulltext": "1. INTRODUCTION\nIn a large community of agents, an agent's behavior is not likely to have a direct effect on most other agents: rather, it is just the agents who are close enough to him that will be affected. However, as these agents respond by adapting their behavior, more agents will feel the consequences and eventually the choices made by a single agent will propagate throughout the entire community.\nThis is the intuition behind graphical games, which were introduced by Kearns, Littman and Singh in [13] as a compact representation scheme for games with many players.
In an n-player graphical game, each player is associated with a vertex of an underlying graph G, and the payoffs of each player depend on his action as well as on the actions of his neighbors in the graph. If the maximum degree of G is Δ, and each player has two actions available to him, then the game can be represented using n·2^(Δ+1) numbers. In contrast, we need n·2^n numbers to represent a general n-player 2-action game, which is only practical for small values of n. For graphical games with constant Δ, the size of the game is linear in n.\nOne of the most natural problems for a graphical game is that of finding a Nash equilibrium, the existence of which follows from Nash's celebrated theorem (as graphical games are just a special case of n-player games). The first attempt to tackle this problem was made in [13], where the authors consider graphical games with two actions per player in which the underlying graph is a bounded-degree tree. They propose a generic algorithm for finding Nash equilibria that can be specialized in two ways: an exponential-time algorithm for finding an (exact) Nash equilibrium, and a fully polynomial time approximation scheme (FPTAS) for finding an approximation to a Nash equilibrium. For any ε > 0 this algorithm outputs an ε-Nash equilibrium, which is a strategy profile in which no player can improve his payoff by more than ε by unilaterally changing his strategy.\nWhile ε-Nash equilibria are often easier to compute than exact Nash equilibria, this solution concept has several drawbacks. First, the players may be sensitive to a small loss in payoffs, so the strategy profile that is an ε-Nash equilibrium will not be stable.
This will be the case even if there is only a small subset of players who are extremely price-sensitive, and for a large population of players it may be difficult to choose a value of ε that will satisfy everyone. Second, the strategy profiles that are close to being Nash equilibria may be much better with respect to the properties under consideration than exact Nash equilibria. Therefore, the (approximation to the) value of the best solution that corresponds to an ε-Nash equilibrium may not be indicative of what can be achieved under an exact Nash equilibrium. This is especially important if the purpose of the approximate solution is to provide a good benchmark for a system of selfish agents, as the benchmark implied by an ε-Nash equilibrium may be unrealistic. For these reasons, in this paper we focus on the problem of computing exact Nash equilibria.\nBuilding on ideas of [14], Elkind et al. [9] showed how to find an (exact) Nash equilibrium in polynomial time when the underlying graph has degree 2 (that is, when the graph is a collection of paths and cycles). By contrast, finding a Nash equilibrium in a general degree-bounded graph appears to be computationally intractable: it has been shown (see [5, 12, 7]) to be complete for the complexity class PPAD. [9] extends this hardness result to the case in which the underlying graph has bounded pathwidth.\nA graphical game may not have a unique Nash equilibrium; indeed, it may have exponentially many. Moreover, some Nash equilibria are more desirable than others. Rather than having an algorithm which merely finds some Nash equilibrium, we would like to have algorithms for finding Nash equilibria with various socially-desirable properties, such as maximizing overall payoff or distributing profit fairly.\nA useful property of the data structure of [13] is that it simultaneously represents the set of all Nash equilibria of the underlying game.
If this representation has polynomial size (as is the case for paths, as shown in [9]), one may hope to extract from it a Nash equilibrium with the desired properties. In fact, in [13] the authors mention that this is indeed possible if one is interested in finding an (approximate) ε-Nash equilibrium. The goal of this paper is to extend this to exact Nash equilibria.\n1.1 Our Results\nIn this paper, we study n-player 2-action graphical games on bounded-degree trees for which the data structure of [13] has size poly(n). We focus on the problem of finding exact Nash equilibria with certain socially-desirable properties. In particular, we show how to find a Nash equilibrium that (nearly) maximizes the social welfare, i.e., the sum of the players' payoffs, and we show how to find a Nash equilibrium that (nearly) satisfies prescribed payoff bounds for all players.\nGraphical games on bounded-degree trees have a simple algebraic structure. One attractive feature, which follows from [13], is that every such game has a Nash equilibrium in which the strategy of every player is a rational number. Section 3 studies the algebraic structure of those Nash equilibria that maximize social welfare. We show (Theorems 1 and 2) that, surprisingly, the set of Nash equilibria that maximize social welfare is more complex. In fact, for any algebraic number α ∈ [0, 1] with degree at most n, we exhibit a graphical game on a path of length O(n) such that, in the unique social welfare-maximizing Nash equilibrium of this game, one of the players plays the mixed strategy α.¹ This result shows that it may be difficult to represent an optimal Nash equilibrium.
It seems to be a novel feature of the setting we consider here that an optimal Nash equilibrium is hard to represent, in a situation where it is easy to find and represent a Nash equilibrium.\nAs the social welfare-maximizing Nash equilibrium may be hard to represent efficiently, we have to settle for an approximation. However, the crucial difference between our approach and that of previous papers [13, 16, 19] is that we require our algorithm to output an exact Nash equilibrium, though not necessarily the optimal one with respect to our criteria. In Section 4, we describe an algorithm that satisfies this requirement. Namely, we propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal. It runs in polynomial time (Theorems 3 and 4) for any graphical game on a bounded-degree tree for which the data structure proposed by [13] (the so-called best response policy, defined below) is of size poly(n) (note that, as shown in [9], this is always the case when the underlying graph is a path). More precisely, the running time of our algorithm is polynomial in n, Pmax, and 1/ε, where Pmax is the maximum absolute value of an entry of a payoff matrix, i.e., it is a pseudopolynomial algorithm, though it is fully polynomial with respect to ε. We show (Section 4.1) that under some restrictions on the payoff matrices, the algorithm can be transformed into a (truly) polynomial-time algorithm that outputs a Nash equilibrium whose total payoff is within a 1 − ε factor of the optimal.\n¹ A related result in a different context was obtained by Datta [8], who shows that n-player 2-action games are universal in the sense that any real algebraic variety can be represented as the set of totally mixed Nash equilibria of such games.\nIn Section 5, we consider the problem of finding a Nash equilibrium in which the expected payoff of each player Vi exceeds a prescribed threshold Ti.
Using the idea from Section 4 we give (Theorem 5) a fully polynomial time approximation scheme for this problem. The running time of the algorithm is bounded by a polynomial in n, Pmax, and 1/ε. If the instance has a Nash equilibrium satisfying the prescribed thresholds then the algorithm constructs a Nash equilibrium in which the expected payoff of each player Vi is at least Ti − ε.\nIn Section 6, we introduce other natural criteria for selecting a good Nash equilibrium and we show that the algorithms described in the two previous sections can be used as building blocks in finding Nash equilibria that satisfy these criteria. In particular, in Section 6.1 we show how to find a Nash equilibrium that approximates the maximum social welfare, while guaranteeing that each individual payoff is close to a prescribed threshold. In Section 6.2 we show how to find a Nash equilibrium that (nearly) maximizes the minimum individual payoff. Finally, in Section 6.3 we show how to find a Nash equilibrium in which the individual payoffs of the players are close to each other.\n1.2 Related Work\nOur approximation scheme (Theorem 3 and Theorem 4) shows a contrast between the games that we study and two-player n-action games, for which the corresponding problems are usually intractable. For two-player n-action games, the problem of finding Nash equilibria with special properties is typically NP-hard. In particular, this is the case for Nash equilibria that maximize the social welfare [11, 6]. Moreover, it is likely to be intractable even to approximate such equilibria. In particular, Chen, Deng and Teng [4] show that there exists some ε, inverse polynomial in n, for which computing an ε-Nash equilibrium in 2-player games with n actions per player is PPAD-complete.\nLipton and Markakis [15] study the algebraic properties of Nash equilibria, and point out that standard quantifier elimination algorithms can be used to solve them.
Note that these algorithms are not polynomial-time in general. The games we study in this paper have polynomial-time computable Nash equilibria in which all mixed strategies are rational numbers, but an optimal Nash equilibrium may necessarily include mixed strategies with high algebraic degree.\nA correlated equilibrium (CE) (introduced by Aumann [2]) is a distribution over vectors of players' actions with the property that if any player is told his own action (the value of his own component) from a vector generated by that distribution, then he cannot increase his expected payoff by changing his action. Any Nash equilibrium is a CE but the converse does not hold in general. In contrast with Nash equilibria, correlated equilibria can be found for low-degree graphical games (as well as other classes of concisely represented multiplayer games) in polynomial time [17]. But, for graphical games it is NP-hard to find a correlated equilibrium that maximizes total payoff [18]. However, the NP-hardness results apply to more general games than the one we consider here; in particular, the graphs are not trees. From [2] it is also known that there exist 2-player, 2-action games for which the expected total payoff of the best correlated equilibrium is higher than the best Nash equilibrium, and we discuss this issue further in Section 7.\n2. PRELIMINARIES AND NOTATION\nWe consider graphical games in which the underlying graph G is an n-vertex tree, in which each vertex has at most Δ children. Each vertex has two actions, which are denoted by 0 and 1. A mixed strategy of a player V is represented as a single number v ∈ [0, 1], which denotes the probability that V selects action 1.\nFor the purposes of the algorithm, the tree is rooted arbitrarily. For convenience, we assume without loss of generality that the root has a single child, and that its payoff is independent of the action chosen by the child.
This can be achieved by first choosing an arbitrary root of the tree, and then adding a dummy parent of this root, giving the new parent a constant payoff function, e.g., 0.\nGiven an edge (V, W) of the tree G, and a mixed strategy w for W, let G_{(V,W),W=w} be the instance obtained from G by (1) deleting all nodes Z which are separated from V by W (i.e., all nodes Z such that the path from Z to V passes through W), and (2) restricting the instance so that W is required to play mixed strategy w.\nDefinition 1. Suppose that (V, W) is an edge of the tree, that v is a mixed strategy for V and that w is a mixed strategy for W. We say that v is a potential best response to w (denoted by v ∈ pbr_V(w)) if there is an equilibrium in the instance G_{(V,W),W=w} in which V has mixed strategy v. We define the best response policy for V, given W, as B(W, V) = {(w, v) | v ∈ pbr_V(w), w ∈ [0, 1]}.\nThe upstream pass of the generic algorithm of [13] considers every node V (other than the root) and computes the best response policy for V given its parent. With the above assumptions about the root, the downstream pass is straightforward. The root selects a mixed strategy w for the root W and a mixed strategy v ∈ B(W, V) for each child V of W. It instructs each child V to play v. The remainder of the downward pass is recursive. When a node V is instructed by its parent to adopt mixed strategy v, it does the following for each child U: it finds a pair (v, u) ∈ B(V, U) (with the same v value that it was given by its parent) and instructs U to play u.\nThe best response policy for a vertex U given its parent V can be represented as a union of rectangles, where a rectangle is defined by a pair of closed intervals (I_V, I_U) and consists of all points in I_V × I_U; it may be the case that one or both of the intervals I_V and I_U consists of a single point.
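The downstream pass described above can be sketched as follows. This is a simplified illustration with our own naming: in the actual algorithm B(V, U) is a union of rectangles over [0, 1]², whereas here it is discretized to a finite set of (v, u) pairs.

```python
# Simplified sketch of the downstream pass (our own naming; B(V, U) is
# discretized here to a finite set of (v, u) pairs for illustration).

def downstream_pass(root, root_strategy, B, children):
    """B[(V, U)] is a set of (v, u) pairs; children[V] lists V's children."""
    assignment = {root: root_strategy}
    stack = [root]
    while stack:
        V = stack.pop()
        v = assignment[V]
        for U in children.get(V, []):
            # Find any pair in B(V, U) matching the strategy V was told to play.
            u = next(u2 for (v2, u2) in B[(V, U)] if v2 == v)
            assignment[U] = u
            stack.append(U)
    return assignment

B = {("W", "V"): {(0.5, 1.0)}, ("V", "U"): {(1.0, 0.0)}}
children = {"W": ["V"], "V": ["U"]}
print(downstream_pass("W", 0.5, B, children))  # {'W': 0.5, 'V': 1.0, 'U': 0.0}
```

The recursion only ever looks pairs up with the v value handed down by the parent, which is exactly why consistency of the upstream pass guarantees a pair always exists.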
In order to perform computations on B(V, U), and to bound the number of rectangles, [9] used the notion of an event point, which is defined as follows. For any set A ⊆ [0, 1]² that is represented as a union of a finite number of rectangles, we say that a point u ∈ [0, 1] on the U-axis is a U-event point of A if u = 0 or u = 1 or the representation of A contains a rectangle of the form I_V × I_U and u is an endpoint of I_U; V-event points are defined similarly.\nFor many games considered in this paper, the underlying graph is an n-vertex path, i.e., a graph G = (V, E) with V = {V1, . . . , Vn} and E = {(V1, V2), . . . , (Vn−1, Vn)}. In [9], it was shown that for such games, the best response policy has only polynomially-many rectangles. The proof that the number of rectangles in B(Vj+1, Vj) is polynomial proceeds by first showing that the number of event points in B(Vj+1, Vj) cannot exceed the number of event points in B(Vj, Vj−1) by more than 2, and using this fact to bound the number of rectangles in B(Vj+1, Vj).\nLet P0(V) and P1(V) be the expected payoffs to V when it plays 0 and 1, respectively. Both P0(V) and P1(V) are multilinear functions of the strategies of V's neighbors. In what follows, we will frequently use the following simple observation.\nCLAIM 1. For a vertex V with a single child U and parent W, given any A, B, C, D ∈ Q and A', B', C', D' ∈ Q, one can select the payoffs to V so that P0(V) = Auw + Bu + Cw + D and P1(V) = A'uw + B'u + C'w + D'. Moreover, if all A, B, C, D, A', B', C', D' are integer, the payoffs to V are integer as well.\nPROOF. We will give the proof for P0(V); the proof for P1(V) is similar. For i, j = 0, 1, let Pij be the payoff to V when U plays i, V plays 0 and W plays j. We have P0(V) = P00(1 − u)(1 − w) + P10u(1 − w) + P01(1 − u)w + P11uw.
We have to select the values of Pij so that P00 − P10 − P01 + P11 = A, −P00 + P10 = B, −P00 + P01 = C, P00 = D. It is easy to see that the unique solution is given by P00 = D, P01 = C + D, P10 = B + D, P11 = A + B + C + D.\nThe input to all algorithms considered in this paper includes the payoff matrices for each player. We assume that all elements of these matrices are integer. Let Pmax be the greatest absolute value of any element of any payoff matrix. Then the input consists of at most n·2^(Δ+1) numbers, each of which can be represented using log Pmax bits.\n3. NASH EQUILIBRIA THAT MAXIMIZE THE SOCIAL WELFARE: SOLUTIONS IN R \\ Q\nFrom the point of view of social welfare, the best Nash equilibrium is the one that maximizes the sum of the players' expected payoffs. Unfortunately, it turns out that computing such a strategy profile exactly is not possible: in this section, we show that even if all players' payoffs are integers, the strategy profile that maximizes the total payoff may have irrational coordinates; moreover, it may involve algebraic numbers of an arbitrary degree.\n3.1 Warm-up: quadratic irrationalities\nWe start by providing an example of a graphical game on a path of length 3 with integer payoffs such that in the Nash equilibrium that maximizes the total payoff, one of the players has a strategy in R \\ Q. In the next subsection, we will extend this example to algebraic numbers of arbitrary degree n; to do so, we have to consider paths of length O(n).\nTHEOREM 1. There exists an integer-payoff graphical game G on a 3-vertex path UVW such that, in any Nash equilibrium of G that maximizes social welfare, the strategy, u, of the player U and the total payoff, p, satisfy u, p ∈ R \\ Q.\nPROOF. The payoffs to the players in G are specified as follows. The payoff to U is identically 0, i.e., P0(U) = P1(U) = 0.
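Since the construction below leans on Claim 1, here is a quick numerical check of that claim's payoff choice (a throwaway sketch; the values of A, B, C, D are arbitrary test inputs, not from the paper):

```python
# Check of Claim 1's construction (A, B, C, D are arbitrary test values):
# with P00 = D, P01 = C + D, P10 = B + D and P11 = A + B + C + D,
# the expected payoff when V plays 0 is A*u*w + B*u + C*w + D.
from itertools import product

def expected_payoff_when_playing_0(P00, P10, P01, P11, u, w):
    # V plays 0; U plays 1 with probability u, W plays 1 with probability w.
    return (P00 * (1 - u) * (1 - w) + P10 * u * (1 - w)
            + P01 * (1 - u) * w + P11 * u * w)

A, B, C, D = 2, -3, 5, 1
P00, P01, P10, P11 = D, C + D, B + D, A + B + C + D
for u, w in product([0.0, 0.25, 0.7, 1.0], repeat=2):
    target = A * u * w + B * u + C * w + D
    assert abs(expected_payoff_when_playing_0(P00, P10, P01, P11, u, w) - target) < 1e-12
print("Claim 1 construction verified")
```

Expanding the multilinear form shows why the solution is unique: the four coefficients of 1, u, w, uw pin down the four payoff entries.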
Using Claim 1, we select the payoffs to V so that P0(V) = −uw + 3w and P1(V) = P0(V) + w(u + 2) − (u + 1), where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = (u + 1)/(u + 2). Observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. The payoff to W is 0 if it selects the same action as V and 1 otherwise.\nCLAIM 2. All Nash equilibria of the game G are of the form (u, 1/2, f(u)). That is, in any Nash equilibrium, V plays v = 1/2 and W plays w = f(u). Moreover, for any value of u, the vector of strategies (u, 1/2, f(u)) constitutes a Nash equilibrium.\nPROOF. It is easy to check that for any u ∈ [0, 1], the vector (u, 1/2, f(u)) is a Nash equilibrium. Indeed, U is content to play any mixed strategy u no matter what V and W do. Furthermore, V is indifferent between 0 and 1 as long as w = f(u), so it can play 1/2. Finally, if V plays 0 and 1 with equal probability, W is indifferent between 0 and 1, so it can play f(u).\nConversely, suppose that v > 1/2. Then W strictly prefers to play 0, i.e., w = 0. Then for V we have P1(V) = P0(V) − (u + 1), i.e., P1(V) < P0(V), which implies v = 0, a contradiction. Similarly, if v < 1/2, player W prefers to play 1, so we have w = 1. Hence, P1(V) = P0(V) + (u + 2) − (u + 1), i.e., P1(V) > P0(V), which implies v = 1, a contradiction. Finally, if v = 1/2 but w ≠ f(u), player V is not indifferent between 0 and 1, so he would deviate from playing 1/2. This completes the proof of Claim 2.\nBy Claim 2, the total payoff in any Nash equilibrium of this game is a function of u. More specifically, the payoff to U is 0, the payoff to V is −uf(u) + 3f(u), and the payoff to W is 1/2.
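As a numerical illustration (a sketch, not part of the proof), one can verify the indifference condition and locate the payoff-maximizing u from the formulas above; the grid search lands next to the irrational point √5 − 2:

```python
# Numerical illustration (not part of the proof): V is indifferent exactly when
# w = f(u) = (u + 1)/(u + 2), and the equilibrium total payoff
# 0 + (-u*f(u) + 3*f(u)) + 1/2 is maximized at an irrational u.
import math

def f(u):
    return (u + 1) / (u + 2)

def P0_V(u, w):
    return -u * w + 3 * w

def P1_V(u, w):
    return P0_V(u, w) + w * (u + 2) - (u + 1)

for u in [0.0, 0.3, 1.0]:
    assert abs(P0_V(u, f(u)) - P1_V(u, f(u))) < 1e-12  # indifference at w = f(u)

def total_payoff(u):
    return 0.0 + (-u * f(u) + 3 * f(u)) + 0.5

# Grid search: the best u on [0, 1] is close to the irrational sqrt(5) - 2.
u_best = max((j / 10**5 for j in range(10**5 + 1)), key=total_payoff)
assert abs(u_best - (math.sqrt(5) - 2)) < 1e-4
```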
Therefore, the Nash equilibrium with the maximum total payoff corresponds to the value of u that maximizes\ng(u) = −u(u + 1)/(u + 2) + 3(u + 1)/(u + 2) = −(u − 3)(u + 1)/(u + 2).\nTo find extrema of g(u), we compute h(u) = −(d/du) g(u). We have\nh(u) = [(2u − 2)(u + 2) − (u − 3)(u + 1)]/(u + 2)² = (u² + 4u − 1)/(u + 2)².\nHence, h(u) = 0 if and only if u ∈ {−2 + √5, −2 − √5}. Note that −2 + √5 ∈ [0, 1].\nThe function g(u) changes sign at −2, −1, and 3. We have g(u) < 0 for u > 3 and g(u) > 0 for u < −2, so the extremum of g(u) that lies between −1 and 3, i.e., u = −2 + √5, is a local maximum. We conclude that the social welfare-maximizing Nash equilibrium for this game is given by the vector of strategies (−2 + √5, 1/2, (5 − √5)/5). The respective total payoff is\n0 − (√5 − 5)(√5 − 1)/√5 + 1/2 = 13/2 − 2√5.\nThis concludes the proof of Theorem 1.\n3.2 Strategies of arbitrary degree\nWe have shown that in the social welfare-maximizing Nash equilibrium, some players' strategies can be quadratic irrationalities, and so can the total payoff. In this subsection, we will extend this result to show that we can construct an integer-payoff graphical game on a path whose social welfare-maximizing Nash equilibrium involves arbitrary algebraic numbers in [0, 1].\nTHEOREM 2. For any degree-n algebraic number α ∈ [0, 1], there exists an integer-payoff graphical game on a path of length O(n) such that, in all social welfare-maximizing Nash equilibria of this game, one of the players plays α.\nPROOF. Our proof consists of two steps. First, we construct a rational expression R(x) and a segment [x', x''] such that x', x'' ∈ Q and α is the only maximum of R(x) on [x', x''].
Second, we construct a graphical game whose Nash equilibria can be parameterized by u ∈ [x', x''], so that at the equilibrium that corresponds to u the total payoff is R(u) and, moreover, some player's strategy is u. It follows that to achieve the payoff-maximizing Nash equilibrium, this player has to play α. The details follow.\nLEMMA 1. Given an algebraic number α ∈ [0, 1], deg(α) = n, there exist K2, . . . , K2n+2 ∈ Q and x', x'' ∈ (0, 1) ∩ Q such that α is the only maximum of\nR(x) = K2/(x + 2) + · · · + K2n+2/(x + 2n + 2)\non [x', x''].\nPROOF. Let P(x) be the minimal polynomial of α, i.e., a polynomial of degree n with rational coefficients whose leading coefficient is 1 such that P(α) = 0. Let A = {α1, . . . , αn} be the set of all roots of P(x). Consider the polynomial Q1(x) = −P²(x). It has the same roots as P(x), and moreover, for any x ∉ A we have Q1(x) < 0. Hence, A is the set of all maxima of Q1(x). Now, set R(x) = Q1(x)/((x + 2) · · · (x + 2n + 1)(x + 2n + 2)). Observe that R(x) ≤ 0 for all x ∈ [0, 1] and R(x) = 0 if and only if Q1(x) = 0. Hence, the set A is also the set of all maxima of R(x) on [0, 1].\nLet d = min{|αi − α| | αi ∈ A, αi ≠ α}, and set α' = max{α − d/2, 0}, α'' = min{α + d/2, 1}. Clearly, α is the only zero (and hence, the only maximum) of R(x) on [α', α''].\nLet x' and x'' be some rational numbers in (α', α) and (α, α''), respectively; note that by excluding the endpoints of the intervals we ensure that x', x'' ≠ 0, 1.
As [x', x''] ⊂ [α', α''], we have that α is the only maximum of R(x) on [x', x''].\nAs R(x) is a proper rational expression and all roots of its denominator are simple, by the partial fraction decomposition theorem, R(x) can be represented as\nR(x) = K2/(x + 2) + · · · + K2n+2/(x + 2n + 2),\nwhere K2, . . . , K2n+2 are rational numbers.\nConsider a graphical game on the path\nU−1 V−1 U0 V0 U1 V1 . . . Uk−1 Vk−1 Uk,\nwhere k = 2n + 2. Intuitively, we want each triple (Ui−1, Vi−1, Ui) to behave similarly to the players U, V, and W from the game described in the previous subsection. More precisely, we define the payoffs to the players in the following way.\n• The payoff to U−1 is 0 no matter what everyone else does.\n• The expected payoff to V−1 is 0 if it plays 0 and u0 − (x'' − x')u−1 − x' if it plays 1, where u0 and u−1 are the strategies of U0 and U−1, respectively.\n• The expected payoff to V0 is 0 if it plays 0 and u1(u0 + 1) − u0 if it plays 1, where u0 and u1 are the strategies of U0 and U1, respectively.\n• For each i = 1, . . . , k − 1, the expected payoff to Vi when it plays 0 is P0(Vi) = Ai·ui·ui+1 − Ai·ui+1, and the expected payoff to Vi when it plays 1 is P1(Vi) = P0(Vi) + ui+1(2 − ui) − 1, where Ai = −Ki+1 and ui+1 and ui are the strategies of Ui+1 and Ui, respectively.\n• For each i = 0, . . . , k, the payoff to Ui does not depend on Vi and is 1 if Ui and Vi−1 select different actions and 0 otherwise.\nWe will now characterize the Nash equilibria of this game using a sequence of claims.\nCLAIM 3. In all Nash equilibria of this game V−1 plays 1/2, and the strategies u−1 and u0 satisfy u0 = (x'' − x')u−1 + x'. Consequently, in all Nash equilibria we have u0 ∈ [x', x''].\nPROOF. The proof is similar to that of Claim 2.
Let f(u−1) = (x'' − x')u−1 + x'. Clearly, the player V−1 is indifferent between playing 0 and 1 if and only if u0 = f(u−1). Suppose that v−1 < 1/2. Then U0 strictly prefers to play 1, i.e., u0 = 1, so we have\nP1(V−1) = P0(V−1) + 1 − (x'' − x')u−1 − x'.\nAs\n1 − x'' ≤ 1 − (x'' − x')u−1 − x' ≤ 1 − x'\nfor u−1 ∈ [0, 1] and x'' < 1, we have P1(V−1) > P0(V−1), so V−1 prefers to play 1, a contradiction. Similarly, if v−1 > 1/2, the player U0 strictly prefers to play 0, i.e., u0 = 0, so we have\nP1(V−1) = P0(V−1) − (x'' − x')u−1 − x'.\nAs x' < x'' and x' > 0, we have P1(V−1) < P0(V−1), so V−1 prefers to play 0, a contradiction. Finally, if V−1 plays 1/2 but u0 ≠ f(u−1), player V−1 is not indifferent between 0 and 1, so he would deviate from playing 1/2.\nAlso, note that f(0) = x', f(1) = x'', and, moreover, f(u−1) ∈ [x', x''] if and only if u−1 ∈ [0, 1]. Hence, in all Nash equilibria of this game we have u0 ∈ [x', x''].\nCLAIM 4. In all Nash equilibria of this game, for each i = 0, . . . , k − 1, we have vi = 1/2, and the strategies of the players Ui and Ui+1 satisfy ui+1 = fi(ui), where f0(u) = u/(u + 1) and fi(u) = 1/(2 − u) for i > 0.\nPROOF. The proof of this claim is also similar to that of Claim 2. We use induction on i to prove that the statement of the claim is true and, additionally, ui ≠ 1 for i > 0.\nFor the base case i = 0, note that u0 ≠ 0 by the previous claim (recall that x', x'' are selected so that x', x'' ≠ 0, 1) and consider the triple (U0, V0, U1). Let v0 be the strategy of V0. First, suppose that v0 > 1/2. Then U1 strictly prefers to play 0, i.e., u1 = 0. Then for V0 we have P1(V0) = P0(V0) − u0.
As u0 ≠ 0, we have P1(V0) < P0(V0), which implies v0 = 0, a contradiction. Similarly, if v0 < 1/2, player U1 prefers to play 1, so we have u1 = 1. Hence, P1(V0) = P0(V0) + 1. It follows that P1(V0) > P0(V0), which implies v0 = 1, a contradiction. Finally, if v0 = 1/2 but u1 ≠ u0/(u0 + 1), player V0 is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as u1 = u0/(u0 + 1) and u0 ∈ [0, 1], we have u1 ≠ 1.\nThe argument for the inductive step is similar. Namely, suppose that the statement is proved for all i' < i and consider the triple (Ui, Vi, Ui+1).\nLet vi be the strategy of Vi. First, suppose that vi > 1/2. Then Ui+1 strictly prefers to play 0, i.e., ui+1 = 0. Then for Vi we have P1(Vi) = P0(Vi) − 1, i.e., P1(Vi) < P0(Vi), which implies vi = 0, a contradiction. Similarly, if vi < 1/2, player Ui+1 prefers to play 1, so we have ui+1 = 1. Hence, P1(Vi) = P0(Vi) + 1 − ui. By the inductive hypothesis, we have ui < 1. Consequently, P1(Vi) > P0(Vi), which implies vi = 1, a contradiction. Finally, if vi = 1/2 but ui+1 ≠ 1/(2 − ui), player Vi is not indifferent between 0 and 1, so he would deviate from playing 1/2. Moreover, as ui+1 = 1/(2 − ui) and ui < 1, we have ui+1 < 1.\nCLAIM 5. Any strategy profile of the form\n(u−1, 1/2, u0, 1/2, u1, 1/2, . . . , uk−1, 1/2, uk),\nwhere u−1 ∈ [0, 1], u0 = (x'' − x')u−1 + x', u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i ≥ 1, constitutes a Nash equilibrium.\nPROOF. First, the player U−1's payoffs do not depend on other players' actions, so he is free to play any strategy in [0, 1]. As long as u0 = (x'' − x')u−1 + x', player V−1 is indifferent between 0 and 1, so he is content to play 1/2; a similar argument applies to players V0, . . . , Vk−1. Finally, for each i = 0, . . . , k, the payoffs of player Ui only depend on the strategy of player Vi−1.
In particular, as long as vi−1 = 1/2, player Ui is indifferent between playing 0 and 1, so he can play any mixed strategy ui ∈ [0, 1]. To complete the proof, note that (x'' − x')u−1 + x' ∈ [0, 1] for all u−1 ∈ [0, 1], u0/(u0 + 1) ∈ [0, 1] for all u0 ∈ [0, 1], and 1/(2 − ui) ∈ [0, 1] for all ui ∈ [0, 1], so we have ui ∈ [0, 1] for all i = 0, . . . , k.\nNow, let us compute the total payoff under a strategy profile of the form given in Claim 5. The payoff to U−1 is 0, and the expected payoff to each of the Ui, i = 0, . . . , k, is 1/2. The expected payoffs to V−1 and V0 are 0. Finally, for any i = 1, . . . , k − 1, the expected payoff to Vi is Ti = Ai·ui·ui+1 − Ai·ui+1. It follows that to find a Nash equilibrium with the highest total payoff, we have to maximize the sum T1 + · · · + Tk−1 subject to the conditions u−1 ∈ [0, 1], u0 = (x'' − x')u−1 + x', u1 = u0/(u0 + 1), and ui+1 = 1/(2 − ui) for i = 1, . . . , k − 1.\nWe would like to express this sum as a function of u0. To simplify notation, set u = u0.\nLEMMA 2. For i = 1, . . . , k, we have ui = (u + i − 1)/(u + i).\nPROOF. The proof is by induction on i. For i = 1, we have u1 = u/(u + 1). Now, for i ≥ 2 suppose that ui−1 = (u + i − 2)/(u + i − 1). We have ui = 1/(2 − ui−1) = (u + i − 1)/(2u + 2i − 2 − u − i + 2) = (u + i − 1)/(u + i).\nIt follows that for i = 1, . . . , k − 1 we have\nTi = Ai · [(u + i − 1)/(u + i)] · [(u + i)/(u + i + 1)] − Ai · (u + i)/(u + i + 1) = −Ai/(u + i + 1) = Ki+1/(u + i + 1).\nObserve that as u−1 varies from 0 to 1, u varies from x' to x''. Therefore, to maximize the total payoff, we have to choose u ∈ [x', x''] so as to maximize\nK2/(u + 2) + · · · + Kk/(u + k) = R(u).\nBy construction, the only maximum of R(u) on [x', x''] is α.
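The closed form of Lemma 2 and the resulting expression for Ti can be checked with exact rational arithmetic (a standalone sketch; the values of u and K are arbitrary test inputs):

```python
# Exact-arithmetic check of Lemma 2 (u and K are arbitrary test values):
# iterating u_1 = u/(u+1), u_{i+1} = 1/(2 - u_i) gives u_i = (u+i-1)/(u+i),
# and T_i = A_i*u_i*u_{i+1} - A_i*u_{i+1} with A_i = -K_{i+1}
# equals K_{i+1}/(u+i+1).
from fractions import Fraction

u = Fraction(3, 7)           # plays the role of u = u_0
K = Fraction(5, 2)           # plays the role of K_{i+1}; A_i = -K
ui = u / (u + 1)             # u_1
for i in range(1, 11):
    assert ui == (u + i - 1) / (u + i)          # Lemma 2
    ui_next = 1 / (2 - ui)                      # u_{i+1}
    Ti = (-K) * ui * ui_next - (-K) * ui_next
    assert Ti == K / (u + i + 1)                # T_i = K_{i+1}/(u+i+1)
    ui = ui_next
print("Lemma 2 verified for i = 1..10")
```

Telescoping through the closed form is exactly what turns the total payoff into the rational expression R(u) from Lemma 1.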
It follows that in the payoff-maximizing Nash equilibrium of our game U_0 plays α.

Finally, note that the payoffs in our game are rational rather than integer. However, it is easy to see that we can multiply all payoffs to a player by a common denominator without affecting his strategy. In the resulting game, all payoffs are integer. This concludes the proof of Theorem 2.

4. APPROXIMATING THE SOCIALLY OPTIMAL NASH EQUILIBRIUM

We have seen that the Nash equilibrium that maximizes the social welfare may involve strategies that are not in Q. Hence, in this section we focus on finding a Nash equilibrium that is almost optimal from the social welfare perspective. We propose an algorithm that for any ε > 0 finds a Nash equilibrium whose total payoff is within ε of optimal. The running time of this algorithm is polynomial in 1/ε, n and Pmax (recall that Pmax is the maximum absolute value of an entry of a payoff matrix).

While the negative result of the previous section is for graphical games on paths, our algorithm applies to a wider range of scenarios. Namely, it runs in polynomial time on bounded-degree trees as long as the best response policy of each vertex, given its parent, can be represented as a union of a polynomial number of rectangles. Note that path graphs always satisfy this condition: in [9] we showed how to compute such a representation, given a graph with maximum degree 2. Consequently, for path graphs the running time of our algorithm is guaranteed to be polynomial. (Note that [9] exhibits a family of graphical games on bounded-degree trees for which the best response policies of some of the vertices, given their parents, have exponential size, when represented as unions of rectangles.)

Due to space restrictions, in this version of the paper we present the algorithm for the case where the graph underlying the graphical game is a path.
We then state our result for the general case; the proof can be found in the full version of this paper [10].

Suppose that s is a strategy profile for a graphical game G. That is, s assigns a mixed strategy to each vertex of G. Let EP_V(s) be the expected payoff of player V under s and let EP(s) = Σ_V EP_V(s). Let

M(G) = max{EP(s) | s is a Nash equilibrium for G}.

THEOREM 3. Suppose that G is a graphical game on an n-vertex path. Then for any ε > 0 there is an algorithm that constructs a Nash equilibrium s′ for G that satisfies EP(s′) ≥ M(G) − ε. The running time of the algorithm is O(n⁴P³max/ε³).

PROOF. Let {V_1, . . . , V_n} be the set of all players. We start by constructing the best response policies for all V_i, i = 1, . . . , n − 1. As shown in [9], this can be done in time O(n³).

Let N > 5n be a parameter to be selected later, set δ = 1/N, and define X = {jδ | j = 0, . . . , N}. We say that v is an event point for a player V_i if it is a V_i-event point for B(V_i, V_{i−1}) or B(V_{i+1}, V_i). For each player V_i, consider a finite set of strategies X_i given by

X_i = X ∪ {v | v is an event point for V_i}.

It has been shown in [9] that for any i = 2, . . . , n, the best response policy B(V_i, V_{i−1}) has at most 2n + 4 V_i-event points. As we require N > 5n, we have |X_i| ≤ 2N; assume without loss of generality that |X_i| = 2N. Order the elements of X_i in increasing order as x_i^1 = 0 < x_i^2 < · · · < x_i^{2N}.
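As a quick illustration of this discretization, the sketch below builds X_i from the uniform grid plus (made-up, hypothetical) event points, and checks that rounding any strategy down to the nearest grid point loses less than δ, since the uniform part of the grid already has step δ:

```python
import bisect

def round_down(grid, s):
    """Largest grid point not exceeding s, i.e. max{x in grid | x <= s}.
    Assumes grid is sorted and grid[0] <= s."""
    return grid[bisect.bisect_right(grid, s) - 1]

N = 20
delta = 1.0 / N
X = [j * delta for j in range(N + 1)]        # uniform grid {j * delta}
event_points = [0.137, 0.55]                  # hypothetical event points of V_i
Xi = sorted(set(X) | set(event_points))       # X_i = X  union  {event points}

s_i = 0.6177                                  # some mixed strategy of V_i
t_i = round_down(Xi, s_i)
# the rounded strategy is below s_i but less than delta away
assert t_i <= s_i < t_i + delta
```

Adding event points only refines the grid, so the δ bound on the rounding error is preserved.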
We will refer to the strategies in X_i as discrete strategies of player V_i; a strategy profile in which each player has a discrete strategy will be referred to as a discrete strategy profile.

We will now show that even if we restrict each player V_i to strategies from X_i, the players can still achieve a Nash equilibrium, and moreover, the best such Nash equilibrium (with respect to the social welfare) has total payoff at least M(G) − ε as long as N is large enough.

Let s be a strategy profile that maximizes social welfare. That is, let s = (s_1, . . . , s_n), where s_i is the mixed strategy of player V_i and EP(s) = M(G). For i = 1, . . . , n, let t_i = max{x_i^j | x_i^j ≤ s_i}. First, we will show that the strategy profile t = (t_1, . . . , t_n) is a Nash equilibrium for G.

Fix any i, 1 < i ≤ n, and let R = [v_1, v_2]×[u_1, u_2] be the rectangle in B(V_i, V_{i−1}) that contains (s_i, s_{i−1}). As v_1 is a V_i-event point of B(V_i, V_{i−1}), we have v_1 ≤ t_i, so the point (t_i, s_{i−1}) is inside R. Similarly, u_1 is a V_{i−1}-event point of B(V_i, V_{i−1}), so we have u_1 ≤ t_{i−1}, and therefore the point (t_i, t_{i−1}) is inside R. This means that for any i, 1 < i ≤ n, we have t_{i−1} ∈ pbr_{V_{i−1}}(t_i), which implies that t = (t_1, . . . , t_n) is a Nash equilibrium for G.

Now, let us estimate the expected loss in social welfare caused by playing t instead of s.

LEMMA 3. For any pair of strategy profiles t, s such that |t_i − s_i| ≤ δ we have |EP_{V_i}(s) − EP_{V_i}(t)| ≤ 24·Pmax·δ for any i = 1, . . . , n.

PROOF. Let P^i_{klm} be the payoff of the player V_i when he plays k, V_{i−1} plays l, and V_{i+1} plays m. Fix i = 1, . . . , n and for k, l, m ∈ {0, 1}, set

t^{klm} = t_{i−1}^k (1 − t_{i−1})^{1−k} · t_i^l (1 − t_i)^{1−l} · t_{i+1}^m (1 − t_{i+1})^{1−m},
s^{klm} = s_{i−1}^k (1 − s_{i−1})^{1−k} · s_i^l (1 − s_i)^{1−l} · s_{i+1}^m (1 − s_{i+1})^{1−m}.

We have

|EP_{V_i}(s) − EP_{V_i}(t)| ≤ Σ_{k,l,m=0,1} |P^i_{klm}(t^{klm} − s^{klm})| ≤ 8·Pmax·max_{klm} |t^{klm} − s^{klm}|.

We will now show that for any k, l, m ∈ {0, 1} we have |t^{klm} − s^{klm}| ≤ 3δ; clearly, this implies the lemma. Indeed, fix k, l, m ∈ {0, 1}. Set

x = t_{i−1}^k (1 − t_{i−1})^{1−k}, x′ = s_{i−1}^k (1 − s_{i−1})^{1−k},
y = t_i^l (1 − t_i)^{1−l}, y′ = s_i^l (1 − s_i)^{1−l},
z = t_{i+1}^m (1 − t_{i+1})^{1−m}, z′ = s_{i+1}^m (1 − s_{i+1})^{1−m}.

Observe that if k = 0 then x − x′ = (1 − t_{i−1}) − (1 − s_{i−1}), and if k = 1 then x − x′ = t_{i−1} − s_{i−1}, so |x − x′| ≤ δ. A similar argument shows |y − y′| ≤ δ and |z − z′| ≤ δ. Also, we have x, x′, y, y′, z, z′ ∈ [0, 1]. Hence, |t^{klm} − s^{klm}| = |xyz − x′y′z′| = |xyz − x′yz + x′yz − x′y′z + x′y′z − x′y′z′| ≤ |x − x′|yz + |y − y′|x′z + |z − z′|x′y′ ≤ 3δ.

Lemma 3 implies Σ_{i=1}^n |EP_{V_i}(s) − EP_{V_i}(t)| ≤ 24·n·Pmax·δ, so by choosing δ < ε/(24nPmax), or, equivalently, setting N > 24nPmax/ε, we can ensure that the total expected payoff for the strategy profile t is within ε of optimal.

We will now show that we can find the best discrete Nash equilibrium (with respect to the social welfare) using dynamic programming. As t is a discrete strategy profile, this means that the strategy profile found by our algorithm will be at least as good as t.

Define m_i^{l,k} to be the maximum total payoff that V_1, . . . , V_{i−1} can achieve if each V_j, j ≤ i, chooses a strategy from X_j, for each j < i the strategy of V_j is a potential best response to the strategy of V_{j+1}, and, moreover, V_{i−1} plays x_{i−1}^l and V_i plays x_i^k. If there is no way to choose the strategies for V_1, . . . , V_{i−1} to satisfy these conditions, we set m_i^{l,k} = −∞. The values m_i^{l,k}, i = 1, . . . , n; k, l = 1, . . . , 2N, can be computed inductively, as follows.

We have m_1^{l,k} = 0 for k, l = 1, . . . , 2N. Now, suppose that we have already computed m_j^{l,k} for all j < i; k, l = 1, . . . , 2N. To compute m_i^{l,k}, we first check if (x_i^k, x_{i−1}^l) ∈ B(V_i, V_{i−1}). If this is not the case, we have m_i^{l,k} = −∞. Otherwise, consider the set Y = X_{i−2} ∩ pbr_{V_{i−2}}(x_{i−1}^l), i.e., the set of all discrete strategies of V_{i−2} that are potential best responses to x_{i−1}^l. The proof of Theorem 1 in [9] implies that the set pbr_{V_{i−2}}(x_{i−1}^l) is non-empty: the player V_{i−2} has a potential best response to any strategy of V_{i−1}, in particular, x_{i−1}^l. By construction of the set X_{i−2}, this implies that Y is not empty. For each x_{i−2}^j ∈ Y, let p_{jlk} be the payoff that V_{i−1} receives when V_{i−2} plays x_{i−2}^j, V_{i−1} plays x_{i−1}^l, and V_i plays x_i^k. Clearly, p_{jlk} can be computed in constant time. Then we have m_i^{l,k} = max{m_{i−1}^{j,l} + p_{jlk} | x_{i−2}^j ∈ Y}.

Finally, suppose that we have computed m_n^{l,k} for l, k = 1, . . . , 2N. We still need to take into account the payoff of player V_n. Hence, we consider all pairs (x_n^k, x_{n−1}^l) that satisfy x_{n−1}^l ∈ pbr_{V_{n−1}}(x_n^k), and pick the one that maximizes the sum of m_n^{l,k} and the payoff of V_n when he plays x_n^k and V_{n−1} plays x_{n−1}^l.
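The m_i^{l,k} dynamic program can be sketched as follows. Everything named here is a hypothetical stand-in for quantities the paper derives from the best response policies (`in_brp`, `pbr`, `payoff`), the indexing is 0-based rather than 1-based, and boundary players are handled by passing `None` for a missing neighbour; this is a sketch of the recurrence, not the paper's implementation.

```python
import itertools

NEG_INF = float("-inf")

def best_discrete_welfare(X, in_brp, pbr, payoff):
    """Sketch of the m_i^{l,k} dynamic program (players indexed 0..n-1).

    X[i]               : sorted discrete strategies of player i
    in_brp(i, a, b)    : True iff (a, b) lies in B(V_i, V_{i-1})   [hypothetical]
    pbr(i, b)          : potential best responses of V_i to b      [hypothetical]
    payoff(i, l, o, r) : payoff of V_i playing o with neighbours at l and r
                         (l or r is None at an end of the path)    [hypothetical]
    Returns the best total payoff over discrete profiles in which every
    strategy is a potential best response to the successor's strategy."""
    n = len(X)
    # m[(l, k)]: best payoff of V_0..V_{i-2} when V_{i-1} plays X[i-1][l]
    # and V_i plays X[i][k]; initialised for i = 1 with V_0's payoff.
    m = {}
    for l, k in itertools.product(range(len(X[0])), range(len(X[1]))):
        ok = X[0][l] in pbr(0, X[1][k])
        m[(l, k)] = payoff(0, None, X[0][l], X[1][k]) if ok else NEG_INF
    for i in range(2, n):
        m_new = {}
        for l, k in itertools.product(range(len(X[i-1])), range(len(X[i]))):
            best = NEG_INF
            if in_brp(i, X[i][k], X[i-1][l]):
                for j, a in enumerate(X[i-2]):
                    # Y: discrete potential best responses of V_{i-2}
                    if a in pbr(i-2, X[i-1][l]) and m[(j, l)] > NEG_INF:
                        best = max(best, m[(j, l)]
                                   + payoff(i-1, a, X[i-1][l], X[i][k]))
            m_new[(l, k)] = best
        m = m_new
    # finally fold in the payoff of the last player V_{n-1}
    return max(m[(l, k)] + payoff(n-1, X[n-2][l], X[n-1][k], None)
               for (l, k) in m
               if m[(l, k)] > NEG_INF and X[n-2][l] in pbr(n-2, X[n-1][k]))

# Tiny sanity check: 3 players, strategies {0, 1}, every pair admissible,
# each player's payoff equal to its own strategy; the best total is 3.
X = [(0.0, 1.0)] * 3
result = best_discrete_welfare(
    X,
    in_brp=lambda i, a, b: True,
    pbr=lambda i, b: {0.0, 1.0},
    payoff=lambda i, left, own, right: own,
)
assert result == 3.0
```

The table has O(n·N²) entries and each maximisation scans O(N) candidates, matching the O(nN³) bound derived below.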
This results in the maximum total payoff the players can achieve in a Nash equilibrium using discrete strategies; the actual strategy profile that produces this payoff can be reconstructed using standard dynamic programming techniques.

It is easy to see that each m_i^{l,k} can be computed in time O(N), i.e., all of them can be computed in time O(nN³). Recall that we have to select N ≥ 24nPmax/ε to ensure that the strategy profile we output has total payoff that is within ε of optimal. We conclude that we can compute an ε-approximation to the best Nash equilibrium in time O(n⁴P³max/ε³). This completes the proof of Theorem 3.

To state our result for the general case (i.e., when the underlying graph is a bounded-degree tree rather than a path), we need additional notation. If G has n players, let q(n) be an upper bound on the number of event points in the representation of any best response policy. That is, we assume that for any vertex U with parent V, B(V, U) has at most q(n) event points. We will be interested in the situation in which q(n) is polynomial in n.

THEOREM 4. Let G be an n-player graphical game on a tree in which each node has at most Δ children. Suppose we are given a set of best-response policies for G in which each best-response policy B(V, U) is represented by a set of rectangles with at most q(n) event points. For any ε > 0, there is an algorithm that constructs a Nash equilibrium s′ for G that satisfies EP(s′) ≥ M(G) − ε. The running time of the algorithm is polynomial in n, Pmax and ε⁻¹ provided that the tree has bounded degree (that is, Δ = O(1)) and q(n) is a polynomial in n.
In particular, if

N = max((Δ + 1)q(n) + 1, n^{2Δ+2}(Δ + 2)Pmax·ε⁻¹)

and Δ > 1, then the running time is O(nΔ(2N)^Δ).

For the proof of this theorem, see [10].

4.1 A polynomial-time algorithm for multiplicative approximation

The running time of our algorithm is pseudopolynomial rather than polynomial, because it includes a factor which is polynomial in Pmax, the maximum (in absolute value) entry in any payoff matrix. If we are interested in a multiplicative approximation rather than an additive one, this can be improved to polynomial.

First, note that we cannot expect a multiplicative approximation for all inputs. That is, we cannot hope to have an algorithm that computes a Nash equilibrium with total payoff at least (1 − ε)M(G). If we had such an algorithm, then for graphical games G with M(G) = 0, the algorithm would be required to output the optimal solution. To show that this is infeasible, observe that we can use the techniques of Section 3.2 to construct two integer-coefficient graphical games on paths of length O(n) such that for some X ∈ R the maximal total payoff in the first game is X, the maximal total payoff in the second game is −X, and for both games, the strategy profiles that achieve the maximal total payoffs involve algebraic numbers of degree n. By combining the two games so that the first vertex of the second game becomes connected to the last vertex of the first game, but the payoffs of all players do not change, we obtain a graphical game in which the best Nash equilibrium has total payoff 0, yet the strategies that lead to this payoff have high algebraic complexity.

However, we can achieve a multiplicative approximation when all entries of the payoff matrices are positive and the ratio between any two entries is polynomially bounded. Recall that we assume that all payoffs are integer, and let Pmin > 0 be the smallest entry of any payoff matrix.
In this case, for any strategy profile the payoff to player i is at least Pmin, so the total payoff in the social-welfare maximizing Nash equilibrium s satisfies M(G) ≥ nPmin. Moreover, Lemma 3 implies that by choosing δ < εPmin/(24Pmax), we can ensure that the Nash equilibrium t produced by our algorithm satisfies

Σ_{i=1}^n EP_{V_i}(s) − Σ_{i=1}^n EP_{V_i}(t) ≤ 24Pmax·δ·n ≤ εnPmin ≤ εM(G),

i.e., for this value of δ we have Σ_{i=1}^n EP_{V_i}(t) ≥ (1 − ε)M(G). Recall that the running time of our algorithm is O(nN³), where N has to be selected to satisfy N > 5n, N = 1/δ. It follows that if Pmin > 0 and Pmax/Pmin = poly(n), we can choose N so that our algorithm provides a multiplicative approximation guarantee and runs in time polynomial in n and 1/ε.

5. BOUNDED PAYOFF NASH EQUILIBRIA

Another natural way to define a good Nash equilibrium is to require that each player's expected payoff exceeds a certain threshold. These thresholds do not have to be the same for all players. In this case, in addition to the payoff matrices of the n players, we are given n numbers T_1, . . . , T_n, and our goal is to find a Nash equilibrium in which the payoff of player i is at least T_i, or report that no such Nash equilibrium exists. It turns out that we can design an FPTAS for this problem using the same techniques as in the previous section.

THEOREM 5. Given a graphical game G on an n-vertex path and n rational numbers T_1, . . . , T_n, suppose that there exists a strategy profile s such that s is a Nash equilibrium for G and EP_{V_i}(s) ≥ T_i for i = 1, . . . , n. Then for any ε > 0 we can find in time O(max{nP³max/ε³, n⁴/ε³}) a strategy profile s′ such that s′ is a Nash equilibrium for G and EP_{V_i}(s′) ≥ T_i − ε for i = 1, . . . , n.

PROOF. The proof is similar to that of Theorem 3. First, we construct the best response policies for all players, choose N > 5n, and construct the sets X_i, i = 1, . . . , n, as described in the proof of Theorem 3.

Consider a strategy profile s such that s is a Nash equilibrium for G and EP_{V_i}(s) ≥ T_i for i = 1, . . . , n. We construct a strategy profile t by setting t_i = max{x_i^j | x_i^j ≤ s_i} and use the same argument as in the proof of Theorem 3 to show that t is a Nash equilibrium for G. By Lemma 3, we have |EP_{V_i}(s) − EP_{V_i}(t)| ≤ 24Pmax·δ, so choosing δ < ε/(24Pmax), or, equivalently, N > max{5n, 24Pmax/ε}, we can ensure EP_{V_i}(t) ≥ T_i − ε for i = 1, . . . , n.

Now, we will use dynamic programming to find a discrete Nash equilibrium that satisfies EP_{V_i}(t) ≥ T_i − ε for i = 1, . . . , n. As t is a discrete strategy profile, our algorithm will succeed whenever there is a discrete strategy profile s with EP_{V_i}(s) ≥ T_i − ε for i = 1, . . . , n.

Let z_i^{l,k} = 1 if there is a discrete strategy profile such that for any j < i the strategy of the player V_j is a potential best response to the strategy of V_{j+1}, the expected payoff of V_j is at least T_j − ε, and, moreover, V_{i−1} plays x_{i−1}^l and V_i plays x_i^k. Otherwise, let z_i^{l,k} = 0. We can compute z_i^{l,k}, i = 1, . . . , n; k, l = 1, . . . , 2N, inductively, as follows.

We have z_1^{l,k} = 1 for k, l = 1, . . . , 2N. Now, suppose that we have already computed z_j^{l,k} for all j < i; k, l = 1, . . . , 2N. To compute z_i^{l,k}, we first check if (x_i^k, x_{i−1}^l) ∈ B(V_i, V_{i−1}). If this is not the case, clearly, z_i^{l,k} = 0. Otherwise, consider the set Y = X_{i−2} ∩ pbr_{V_{i−2}}(x_{i−1}^l), i.e., the set of all discrete strategies of V_{i−2} that are potential best responses to x_{i−1}^l. It has been shown in the proof of Theorem 3 that Y ≠ ∅. For each x_{i−2}^j ∈ Y, let p_{jlk} be the payoff that V_{i−1} receives when V_{i−2} plays x_{i−2}^j, V_{i−1} plays x_{i−1}^l, and V_i plays x_i^k. Clearly, p_{jlk} can be computed in constant time.
If there exists an x_{i−2}^j ∈ Y such that z_{i−1}^{j,l} = 1 and p_{jlk} ≥ T_{i−1} − ε, we set z_i^{l,k} = 1. Otherwise, we set z_i^{l,k} = 0.

Having computed z_n^{l,k}, l, k = 1, . . . , 2N, we check if z_n^{l,k} = 1 for some pair (l, k). If such a pair of indices exists, we instruct V_n to play x_n^k and use dynamic programming techniques (or, equivalently, the downstream pass of the algorithm of [13]) to find a Nash equilibrium s′ that satisfies EP_{V_i}(s′) ≥ T_i − ε for i = 1, . . . , n (recall that V_n is a dummy player, i.e., we assume T_n = 0 and EP_{V_n}(s′) = 0 for any choice of s′). If z_n^{l,k} = 0 for all l, k = 1, . . . , 2N, there is no discrete Nash equilibrium s′ that satisfies EP_{V_i}(s′) ≥ T_i − ε for i = 1, . . . , n, and hence no Nash equilibrium s (not necessarily discrete) such that EP_{V_i}(s) ≥ T_i for i = 1, . . . , n.

The running time analysis is similar to that for Theorem 3; we conclude that the running time of our algorithm is O(nN³) = O(max{nP³max/ε³, n⁴/ε³}).

REMARK 1. Theorem 5 can be extended to trees of bounded degree in the same way as Theorem 4.

5.1 Exact Computation

Another approach to finding Nash equilibria with bounded payoffs is based on inductively computing the subsets of the best response policies of all players so as to exclude the points that do not provide sufficient payoffs to some of the players. Formally, we say that a strategy v of the player V is a potential best response to a strategy w of its parent W with respect to a threshold vector T = (T_1, . . . , T_n) (denoted by v ∈ pbr_V(w, T)) if there is an equilibrium in the instance G_{(V,W),W=w} in which V plays mixed strategy v and the payoff to any player V_i downstream of V (including V) is at least T_i. The best response policy for V with respect to a threshold vector T is defined as B(W, V, T) = {(w, v) | v ∈ pbr_V(w, T), w ∈ [0, 1]}.

It is easy to see that if any of the sets B(V_j, V_{j−1}, T), j = 1, . . . , n, is empty, then it is impossible to provide all players with the expected payoffs prescribed by T. Otherwise, one can apply the downstream pass of the original algorithm of [13] to find a Nash equilibrium. As we assume that V_n is a dummy vertex whose payoff is identically 0, the Nash equilibrium with these payoffs exists as long as T_n ≤ 0 and B(V_n, V_{n−1}, T) is not empty.

Using the techniques developed in [9], it is not hard to show that for any j = 1, . . . , n, the set B(V_j, V_{j−1}, T) consists of a finite number of rectangles, and one can compute B(V_{j+1}, V_j, T) given B(V_j, V_{j−1}, T). The advantage of this approach is that it allows us to represent all Nash equilibria that provide the required payoffs to the players. However, it is not likely to be practical, since it turns out that the rectangles that appear in the representation of B(V_j, V_{j−1}, T) may have irrational coordinates.

CLAIM 6. There exists a graphical game G on a 3-vertex path UVW and a vector T = (T_1, T_2, T_3) such that B(W, V, T) cannot be represented as a union of a finite number of rectangles with rational coordinates.

PROOF. We define the payoffs to the players in G as follows. The payoff to U is identically 0, i.e., P^0(U) = P^1(U) = 0. Using Claim 1, we select the payoffs to V so that P^0(V) = uw and P^1(V) = P^0(V) + w − .8u − .1, where u and w are the (mixed) strategies of U and W, respectively. It follows that V is indifferent between playing 0 and 1 if and only if w = f(u) = .8u + .1; observe that for any u ∈ [0, 1] we have f(u) ∈ [0, 1]. It is not hard to see that we have

B(W, V) = [0, .1]×{0} ∪ [.1, .9]×[0, 1] ∪ [.9, 1]×{1}.

The payoffs to W are not important for our construction; for example, set P^0(W) = P^1(W) = 0.

Now, set T = (0, 1/8, 0), i.e., we are interested in Nash equilibria in which V's expected payoff is at least 1/8. Suppose w ∈ [0, 1].
The player V can play a mixed strategy v when W is playing w as long as U plays u = f⁻¹(w) = 5w/4 − 1/8 (to ensure that V is indifferent between 0 and 1) and P^0(V) = P^1(V) = uw = w(5w/4 − 1/8) ≥ 1/8. The latter condition is satisfied if w ≤ (1 − √41)/20 < 0 or w ≥ (1 + √41)/20. Note that we have .1 < (1 + √41)/20 < .9. For any other value of w, any strategy of U either makes V prefer one of the pure strategies or does not provide it with a sufficient expected payoff. There are also some values of w for which V can play a pure strategy (0 or 1) as a potential best response to W and guarantee itself an expected payoff of at least 1/8; it can be shown that these values of w form a finite number of segments in [0, 1]. We conclude that any representation of B(W, V, T) as a union of a finite number of rectangles must contain a rectangle of the form [(1 + √41)/20, w′]×[v′, v″] for some w′, v′, v″ ∈ [0, 1].

On the other hand, it can be shown that for any integer payoff matrices and threshold vectors and any j = 1, . . . , n − 1, the sets B(V_{j+1}, V_j, T) contain no rectangles of the form [u′, u″]×{v} or {v}×[w′, w″], where v ∈ R∖Q. This means that if B(V_n, V_{n−1}, T) is non-empty, i.e., there is a Nash equilibrium with payoffs prescribed by T, then the downstream pass of the algorithm of [13] can always pick a strategy profile that forms a Nash equilibrium, provides a payoff of at least T_i to the player V_i, and has no irrational coordinates. Hence, unlike in the case of the Nash equilibrium that maximizes the social welfare, working with irrational numbers is not necessary, and the fact that the algorithm discussed in this section has to do so can be seen as an argument against using this approach.

6. OTHER CRITERIA FOR SELECTING A NASH EQUILIBRIUM

In this section, we consider several other criteria that can be useful in selecting a Nash equilibrium.

6.1 Combining welfare maximization with bounds on payoffs

In many real life scenarios, we want to maximize the social welfare subject to certain restrictions on the payoffs to individual players. For example, we may want to ensure that no player gets a negative expected payoff, or that the expected payoff to player i is at least P^i_max − ξ, where P^i_max is the maximum entry of i's payoff matrix and ξ is a fixed parameter. Formally, given a graphical game G and a vector T_1, . . . , T_n, let S be the set of all Nash equilibria s of G that satisfy T_i ≤ EP_{V_i}(s) for i = 1, . . . , n, and let ŝ = argmax_{s∈S} EP(s).

If the set S is non-empty, we can find a Nash equilibrium that is ε-close to satisfying the payoff bounds and is within ε of ŝ with respect to the total payoff by combining the algorithms of Section 4 and Section 5.

Namely, for a given ε > 0, choose δ as in the proof of Theorem 3, and let X_i be the set of all discrete strategies of player V_i (for a formal definition, see the proof of Theorem 3). Combining the proofs of Theorem 3 and Theorem 5, we can see that the strategy profile t̂ given by t̂_i = max{x_i^j | x_i^j ≤ ŝ_i} satisfies EP_{V_i}(t̂) ≥ T_i − ε and |EP(ŝ) − EP(t̂)| ≤ ε.

Define m̂_i^{l,k} to be the maximum total payoff that V_1, . . . , V_{i−1} can achieve if each V_j, j ≤ i, chooses a strategy from X_j, for each j < i the strategy of V_j is a potential best response to the strategy of V_{j+1} and the payoff to player V_j is at least T_j − ε, and, moreover, V_{i−1} plays x_{i−1}^l and V_i plays x_i^k. If there is no way to choose the strategies for V_1, . . . , V_{i−1} to satisfy these conditions, we set m̂_i^{l,k} = −∞.
The m̂_i^{l,k} can be computed by dynamic programming similarly to the m_i^{l,k} and z_i^{l,k} in the proofs of Theorems 3 and 5. Finally, as in the proof of Theorem 3, we use the m̂_n^{l,k} to select the best discrete Nash equilibrium subject to the payoff constraints.

Even more generally, we may want to maximize the total payoff to a subset of players (who are assumed to be able to redistribute the profits fairly among themselves) while guaranteeing certain expected payoffs to (a subset of) the other players. This problem can be handled similarly.

6.2 A minimax approach

A more egalitarian measure of the quality of a Nash equilibrium is the minimal expected payoff to a player. The optimal solution with respect to this measure is a Nash equilibrium in which the minimal expected payoff to a player is maximal. To find an approximation to such a Nash equilibrium, we can combine the algorithm of Section 5 with binary search on the space of potential lower bounds. Note that the expected payoff to any player V_i under a strategy profile s always satisfies −Pmax ≤ EP_{V_i}(s) ≤ Pmax.

For a fixed ε > 0, we start by setting T′ = −Pmax, T″ = Pmax, and T* = (T′ + T″)/2. We then run the algorithm of Section 5 with T_1 = · · · = T_n = T*. If the algorithm succeeds in finding a Nash equilibrium s′ that satisfies EP_{V_i}(s′) ≥ T* − ε for all i = 1, . . . , n, we set T′ = T* and T* = (T′ + T″)/2; otherwise, we set T″ = T* and T* = (T′ + T″)/2, and loop. We repeat this process until |T″ − T′| ≤ ε. It is not hard to check that for any p ∈ R, if there is a Nash equilibrium s such that min_{i=1,...,n} EP_{V_i}(s) ≥ p, then our algorithm outputs a Nash equilibrium s′ that satisfies min_{i=1,...,n} EP_{V_i}(s′) ≥ p − 2ε.
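The bisection loop above can be sketched as follows; `solve` is a hypothetical stand-in for the Section 5 algorithm, and the toy instance at the bottom is invented purely to exercise the loop:

```python
def minimax_threshold(solve, p_max, eps):
    """Binary search over the common lower bound T*, as in Section 6.2.

    solve(T) stands in for the Section 5 algorithm: it returns a Nash
    equilibrium whose minimum expected payoff is at least T - eps, or
    None when it reports failure.  Returns the last successful result."""
    lo, hi = -p_max, p_max            # T' and T''
    best = None
    while hi - lo > eps:
        mid = (lo + hi) / 2.0         # T* = (T' + T'')/2
        eq = solve(mid)
        if eq is not None:
            best, lo = eq, mid        # success: raise the lower bound
        else:
            hi = mid                  # failure: lower the upper bound
    return best

# Toy stand-in: pretend the true maximin payoff is 0.3, so the oracle
# succeeds exactly for thresholds T <= 0.3.
toy_solve = lambda T: ("eq", T) if T <= 0.3 else None
eq = minimax_threshold(toy_solve, 1.0, 0.01)
assert eq is not None and eq[1] >= 0.3 - 2 * 0.01
```

Each iteration halves the interval [T′, T″], which is where the log ε⁻¹ factor in the running time below comes from.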
The running time of our algorithm is O(max{nP³max·log(ε⁻¹)/ε³, n⁴·log(ε⁻¹)/ε³}).

6.3 Equalizing the payoffs

When the players' payoff matrices are not very different, it is reasonable to demand that the expected payoffs to the players do not differ by much either. We will now show that Nash equilibria in this category can be approximated in polynomial time as well.

Indeed, observe that the algorithm of Section 5 can be easily modified to deal with upper bounds on individual payoffs rather than lower bounds. Moreover, we can efficiently compute an approximation to a Nash equilibrium that satisfies both the upper bound and the lower bound for each player. More precisely, suppose that we are given a graphical game G, 2n rational numbers T_1, . . . , T_n, T′_1, . . . , T′_n, and ε > 0. Then if there exists a strategy profile s such that s is a Nash equilibrium for G and T_i ≤ EP_{V_i}(s) ≤ T′_i for i = 1, . . . , n, we can find a strategy profile s′ such that s′ is a Nash equilibrium for G and T_i − ε ≤ EP_{V_i}(s′) ≤ T′_i + ε for i = 1, . . . , n. The modified algorithm also runs in time O(max{nP³max/ε³, n⁴/ε³}).

This observation allows us to approximate Nash equilibria in which all players' expected payoffs differ by at most ξ for any fixed ξ > 0. Given an ε > 0, we set T_1 = · · · = T_n = −Pmax and T′_1 = · · · = T′_n = −Pmax + ξ + ε, and run the modified version of the algorithm of Section 5. If it fails to find a solution, we increment all T_i, T′_i by ε and loop. We continue until the algorithm finds a solution, or T_i ≥ Pmax.

Suppose that there exists a Nash equilibrium s that satisfies |EP_{V_i}(s) − EP_{V_j}(s)| ≤ ξ for all i, j = 1, . . . , n. Set r = min_{i=1,...,n} EP_{V_i}(s); we have r ≤ EP_{V_i}(s) ≤ r + ξ for all i = 1, . . . , n. There exists a k ≥ 0 such that −Pmax + (k − 1)ε ≤ r ≤ −Pmax + kε.
During the k-th step of the algorithm, we set T_1 = · · · = T_n = −Pmax + (k − 1)ε, i.e., we have r − ε ≤ T_i ≤ r and r + ξ ≤ T′_i ≤ r + ξ + ε. That is, the Nash equilibrium s satisfies T_i ≤ r ≤ EP_{V_i}(s) ≤ r + ξ ≤ T′_i, which means that when T_i is set to −Pmax + (k − 1)ε, our algorithm is guaranteed to output a Nash equilibrium t that satisfies r − 2ε ≤ T_i − ε ≤ EP_{V_i}(t) ≤ T′_i + ε ≤ r + ξ + 2ε. We conclude that whenever such a Nash equilibrium s exists, our algorithm outputs a Nash equilibrium t that satisfies |EP_{V_i}(t) − EP_{V_j}(t)| ≤ ξ + 4ε for all i, j = 1, . . . , n. The running time of this algorithm is O(max{nP³max/ε⁴, n⁴/ε⁴}).

Note also that we can find the smallest ξ for which such a Nash equilibrium exists by combining this algorithm with binary search over the space ξ ∈ [0, 2Pmax]. This identifies an approximation to the fairest Nash equilibrium, i.e., one in which the players' expected payoffs differ by the smallest possible amount.

Finally, note that all results in this section can be extended to bounded-degree trees.

7. CONCLUSIONS

We have studied the problem of equilibrium selection in graphical games on bounded-degree trees. We considered several criteria for selecting a Nash equilibrium, such as maximizing the social welfare, ensuring a lower bound on the expected payoff of each player, etc. First, we focused on the algebraic complexity of a social welfare-maximizing Nash equilibrium, and proved strong negative results for that problem. Namely, we showed that even for graphical games on paths, any algebraic number α ∈ [0, 1] may be the only strategy available to some player in all social welfare-maximizing Nash equilibria.
This is in sharp contrast with the fact that graphical games on trees always possess a Nash equilibrium in which all players' strategies are rational numbers.

We then provided approximation algorithms for selecting Nash equilibria with special properties. While the problem of finding approximate Nash equilibria for various classes of games has received a lot of attention in recent years, most of the existing work aims to find ε-Nash equilibria that satisfy (or are ε-close to satisfying) certain properties. Our approach is different in that we insist on outputting an exact Nash equilibrium, which is ε-close to satisfying a given requirement. As argued in the introduction, there are several reasons to prefer a solution that constitutes an exact Nash equilibrium.

Our algorithms are fully polynomial time approximation schemes, i.e., their running time is polynomial in the inverse of the approximation parameter ε, though they may be pseudopolynomial with respect to the input size. Under mild restrictions on the inputs, they can be modified to be truly polynomial. This is the strongest positive result one can derive for a problem whose exact solutions may be hard to represent, as is the case for many of the problems considered here. While we prove our results for games on a path, they can be generalized to any tree for which the best response policies have compact representations as unions of rectangles. In the full version of the paper we describe our algorithms for the general case.

Further work in this vein could include extensions to the kinds of guarantees sought for Nash equilibria, such as guaranteeing total payoffs for subsets of players, selecting equilibria in which some players receive significantly higher payoffs than their peers, etc.
At the moment, however, it is perhaps more important to investigate whether Nash equilibria of graphical games can be computed in a decentralized manner, in contrast to the algorithms we have introduced here.

It is natural to ask if our results or those of [9] can be generalized to games with three or more actions. However, it seems that this will make the analysis significantly more difficult. In particular, note that one can view the bounded-payoff games as a very limited special case of games with three actions per player. Namely, given a two-action game with payoff bounds, consider a game in which each player Vi has a third action that guarantees him a payoff of Ti no matter what everyone else does. Then checking if there is a Nash equilibrium in which none of the players assigns a nonzero probability to his third action is equivalent to checking if there exists a Nash equilibrium that satisfies the payoff bounds in the original game, and Section 5.1 shows that finding an exact solution to this problem requires new ideas.

Alternatively, it may be interesting to look for similar results in the context of correlated equilibria (CE), especially since the best CE may have a higher value (total expected payoff) than the best NE. The ratio between these values is called the mediation value in [1]. It is known from [1] that the mediation value of 2-player, 2-action games with non-negative payoffs is at most 4/3, and they exhibit a 3-player game for which it is infinite. Furthermore, a 2-player, 3-action example from [1] also has infinite mediation value.

8. REFERENCES
[1] I. Ashlagi, D. Monderer and M. Tennenholtz, On the Value of Correlation, Proceedings of Dagstuhl Seminar 05011 (2005)
[2] R. Aumann, Subjectivity and Correlation in Randomized Strategies, Journal of Mathematical Economics 1, pp. 67-96 (1974)
[3] B. Blum, C. R. Shelton, and D. Koller, A Continuation Method for Nash Equilibria in Structured Games, Proceedings of IJCAI'03
[4] X. Chen, X. Deng and S. Teng, Computing Nash Equilibria: Approximation and Smoothed Complexity, Proceedings of FOCS'06
[5] X. Chen, X. Deng, Settling the Complexity of 2-Player Nash Equilibrium, Proceedings of FOCS'06
[6] V. Conitzer and T. Sandholm, Complexity Results about Nash Equilibria, Proceedings of IJCAI'03
[7] C. Daskalakis, P. W. Goldberg and C. H. Papadimitriou, The Complexity of Computing a Nash Equilibrium, Proceedings of STOC'06
[8] R. S. Datta, Universality of Nash Equilibria, Mathematics of Operations Research 28:3 (2003)
[9] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Nash Equilibria in Graphical Games on Trees Revisited, Proceedings of ACM EC'06
[10] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Computing Good Nash Equilibria in Graphical Games, http://arxiv.org/abs/cs.GT/0703133
[11] I. Gilboa and E. Zemel, Nash and Correlated Equilibria: Some Complexity Considerations, Games and Economic Behavior 1, pp. 80-93 (1989)
[12] P. W. Goldberg and C. H. Papadimitriou, Reducibility Among Equilibrium Problems, Proceedings of STOC'06
[13] M. Kearns, M. Littman, and S. Singh, Graphical Models for Game Theory, Proceedings of UAI'01
[14] M. Littman, M. Kearns, and S. Singh, An Efficient Exact Algorithm for Singly Connected Graphical Games, Proceedings of NIPS'01
[15] R. Lipton and E. Markakis, Nash Equilibria via Polynomial Equations, Proceedings of LATIN'04
[16] L. Ortiz and M. Kearns, Nash Propagation for Loopy Graphical Games, Proceedings of NIPS'03
[17] C. H. Papadimitriou, Computing Correlated Equilibria in Multi-Player Games, Proceedings of STOC'05
[18] C. H. Papadimitriou and T. Roughgarden, Computing Equilibria in Multi-Player Games, Proceedings of SODA'05
[19] D. Vickrey and D. Koller, Multi-agent Algorithms for Solving Graphical Games, Proceedings of AAAI'02
Generalized Value Decomposition and Structured Multiattribute Auctions

ABSTRACT
Multiattribute auction mechanisms generally either remain agnostic about traders' preferences, or presume highly restrictive forms, such as full additivity. Real preferences often exhibit dependencies among attributes, yet may possess some structure that can be usefully exploited to streamline communication and simplify operation of a multiattribute auction. We develop such a structure using the theory of measurable value functions, a cardinal utility representation based on an underlying order over preference differences. A set of local conditional independence relations over such differences supports a generalized additive preference representation, which decomposes utility across overlapping clusters of related attributes. We introduce an iterative auction mechanism that maintains prices on local clusters of attributes rather than the full space of joint configurations. When traders' preferences are consistent with the auction's generalized additive structure, the mechanism produces approximately optimal allocations, at approximate VCG prices.

1. INTRODUCTION
Multiattribute trading mechanisms extend traditional, price-only mechanisms by facilitating negotiation over a set of predefined attributes representing various non-price aspects of the deal. Rather than negotiating over a fully defined good or service, a multiattribute mechanism delays commitment to specific configurations until the most promising candidates are identified. For example, a procurement department of a company may use a multiattribute auction to select a supplier of hard drives. Supplier offers may be evaluated not only over the price they offer, but also over various qualitative attributes such as volume, RPM, access time, latency, transfer rate, and so on.
In addition, suppliers may offer different contract conditions such as warranty, delivery time, and service.

In order to account for traders' preferences, the auction mechanism must extract evaluative information over a complex domain of multidimensional configurations. Constructing and communicating a complete preference specification can be a severe burden for even a moderate number of attributes; therefore practical multiattribute auctions must either accommodate partial specifications, or support compact expression of preferences assuming some simplified form. By far the most popular multiattribute form to adopt is the simplest: an additive representation where overall value is a linear combination of values associated with each attribute. For example, several recent proposals for iterative multiattribute auctions [2, 3, 8, 19] require additive preference representations.

Such additivity reduces the complexity of preference specification exponentially (compared to the general discrete case), but precludes expression of any interdependencies among the attributes. In practice, however, interdependencies among natural attributes are quite common. For example, the buyer may exhibit complementary preferences for size and access time (since the performance effect is more salient if much data is involved), or may view a strong warranty as a good substitute for high reliability ratings. Similarly, the seller's production characteristics (such as increasing access time being harder to achieve for larger hard drives) can easily violate additivity. In such cases an additive value function may not be able to provide even a reasonable approximation of real preferences.

On the other hand, fully general models are intractable, and it is reasonable to expect multiattribute preferences to exhibit some structure.
Our goal, therefore, is to identify subtler yet more widely applicable structured representations, and to exploit these properties of preferences in trading mechanisms.

We propose an iterative auction mechanism based on just such a flexible preference structure. Our approach is inspired by the design of an iterative multiattribute procurement auction for additive preferences, due to Parkes and Kalagnanam (PK) [19]. PK propose two types of iterative auctions: the first (NLD) makes no assumptions about traders' preferences, and lets sellers bid on the full multidimensional attribute space. Because NLD maintains an exponential price structure, it is suitable only for small domains. The other auction (AD) assumes additive buyer valuation and seller cost functions. It collects sell bids per attribute level and for a single discount term. The price of a configuration is defined as the sum of the prices of the chosen attribute levels minus the discount.

The auction we propose also supports compact price spaces, albeit for levels of clusters of attributes rather than singletons. We employ a preference decomposition based on generalized additive independence (GAI), a model flexible enough to accommodate interdependencies to the exact degree of accuracy desired, yet providing a compact functional form to the extent that interdependence can be limited. Given its roots in multiattribute utility theory [13], the GAI condition is defined with respect to the expected utility function.
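To make the AD price structure concrete, here is a minimal sketch (attribute names, levels, and prices are all invented for illustration) of the rule "price of a configuration = sum of chosen attribute-level prices minus the discount":

```python
# AD-style configuration pricing: additive over attribute levels,
# minus a single discount term.  All names and numbers are made up.
def ad_price(level_prices, config, discount):
    return sum(level_prices[attr][lvl] for attr, lvl in config.items()) - discount

level_prices = {
    "rpm":      {"7200": 40.0, "10000": 65.0},
    "warranty": {"1yr": 10.0, "3yr": 25.0},
}
config = {"rpm": "10000", "warranty": "3yr"}
print(ad_price(level_prices, config, discount=5.0))  # 85.0
```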
To apply it for modeling values for certain outcomes, therefore, requires a reinterpretation for preference under certainty. To this end, we exploit the fact that auction outcomes are associated with continuous prices, which provide a natural scale for assessing magnitude of preference.

We first lay out a representation framework for preferences that captures, in addition to simple orderings among attribute configuration values, the difference in the willingness to pay (wtp) for each. That is, we should be able not only to compare outcomes but also to decide whether the difference in quality is worth a given difference in price. Next, we build a direct, formally justified link from preference statements over priced outcomes to a generalized additive decomposition of the wtp function. After laying out this infrastructure, we employ this representation tool for the development of a multiattribute iterative auction mechanism that allows traders to express their complex preferences in GAI format. We then study the auction's allocational, computational, and practical properties.

In Section 2 we present essential background on our representation framework, the measurable value function (MVF). Section 3 develops new multiattribute structures for MVF, supporting generalized additive decompositions. Next, we show the applicability of the theoretical framework to preferences in trading. The rest of the paper is devoted to the proposed auction mechanism.

2. MULTIATTRIBUTE PREFERENCES
As mentioned, most tools facilitating expression of multiattribute value for trading applications assume that agents' preferences can be represented in an additive form. By way of background, we start by introducing the formal prerequisites justifying the additive representation, as provided by multiattribute utility theory.
We then present the generalized additive form, and develop the formal underpinnings for measurable value needed to extend this model to the case of choice under certainty.

2.1 Preferential Independence
Let Θ denote the space of possible outcomes, with a preference relation ⪰ (weak total order) over Θ. Let A = {a0, . . . , am} be a set of attributes describing Θ. Capital letters denote subsets of variables, small letters (with or without numeric subscripts) denote specific variables, and X̄ denotes the complement of X with respect to A. We indicate specific variable assignments with prime signs or superscripts. To represent an instantiation of subsets X, Y at the same time we use a sequence of instantiation symbols, as in X'Y'.

DEFINITION 1. A set of attributes Y ⊂ A is preferentially independent (PI) of its complement Z = A \ Y if the conditional preference order over Y given a fixed level Z0 of Z is the same regardless of the choice of Z0.

In other words, the preference order over the projection of A on the attributes in Y is the same for any instantiation of the attributes in Z.

DEFINITION 2. A = {a1, . . . , am} is mutually preferentially independent (MPI) if any subset of A is preferentially independent of its complement.

The preference relation when no uncertainty is modeled is usually represented by a value function v [17]. The following fundamental result greatly simplifies the value function representation.

THEOREM 1 ([9]). A preference order over a set of attributes A has an additive value function representation

    v(a1, . . . , am) = Σ_{i=1}^{m} vi(ai)

iff A is mutually preferentially independent.

Essentially, the additive forms used in trading mechanisms assume mutual preferential independence over the full set of attributes, including the money attribute.
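As a toy illustration of the additive form in Theorem 1 (the attributes and part-worth numbers below are invented, not from the paper):

```python
# A fully additive value function over three hard-drive attributes,
# in the Theorem 1 form v(a1,...,am) = sum_i vi(ai).
parts = {
    "size":     {"250GB": 0.0, "500GB": 8.0},
    "rpm":      {"7200": 0.0, "10000": 5.0},
    "warranty": {"1yr": 0.0, "3yr": 3.0},
}

def v(config):
    # overall value = sum of per-attribute part-worths
    return sum(parts[a][config[a]] for a in parts)

print(v({"size": "500GB", "rpm": "10000", "warranty": "1yr"}))  # 13.0
```

Note that under this form the value added by a 3yr warranty is 3.0 regardless of the other attribute levels; this is exactly the interdependence the paper argues real preferences often violate.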
Intuitively, that means that the willingness to pay for the value of an attribute or attributes cannot be affected by the value of other attributes.

A cardinal value function representing an ordering over certain outcomes need not in general coincide with the cardinal utility function that represents preference over lotteries, or expected utility (EU). Nevertheless, EU functions may possess structural properties analogous to those for value functions, such as additive decomposition. Since the present work does not involve decisions under uncertainty, we do not provide a full exposition of the EU concept. However, we do make frequent reference to the following additive independence relations.

DEFINITION 3. Let X, Y, Z be a partition of the set of attributes A. X and Y are conditionally additive independent given Z, denoted as CAI(X, Y | Z), if preferences over lotteries on A depend only on their marginal conditional probability distributions over X and Y.

DEFINITION 4. Let I1, . . . , Ig ⊆ A such that ∪_{i=1}^{g} Ii = A. I1, . . . , Ig are called generalized additive independent (GAI) if preferences over lotteries on A depend only on their marginal distributions over I1, . . . , Ig.

An (expected) utility function u(·) can be decomposed additively according to its (possibly overlapping) GAI sub-configurations.

THEOREM 2 ([13]). Let I1, . . . , Ig be GAI. Then there exist functions f1, . . . , fg such that

    u(a1, . . . , am) = Σ_{r=1}^{g} fr(Ir).    (1)

What is now known as the GAI condition was originally introduced by Fishburn [13] for EU, and was named GAI and brought to the attention of AI researchers by Bacchus and Grove [1]. Graphical models and elicitation procedures for GAI decomposable utility were developed for EU [4, 14, 6], for a cardinal representation of the ordinal value function [15], and for ordinal preference relations corresponding to a TCP-net structure by Brafman et al. [5].

Apart from the work on GAI in the context of preference handling discussed above, GAI has recently been used in the context of mechanism design by Hyafil and Boutilier [16], as an aid in direct revelation mechanisms.

As shown by Bacchus and Grove [1], GAI structure can be identified based on a set of CAI conditions, which are much easier to detect and verify. In general, utility functions may exhibit GAI structure not based on CAI. However, to date all proposals for reasoning about and eliciting utility in GAI form take advantage of the GAI structure primarily to the extent that it represents a collection of CAI conditions. For example, GAI trees [14] employ triangulation of the CAI map, and Braziunas and Boutilier's [6] conditional set Cj of a set Ij corresponds to the CAI separating set of Ij.

Since the CAI condition is also defined based on preferences over lotteries, we cannot apply Bacchus and Grove's result without first establishing an alternative framework based on priced outcomes. We develop such a framework using the theory of measurable value functions, ultimately producing a GAI decomposition (Eq. 1) of the wtp function. Readers interested primarily in the multiattribute auction and willing to grant the well-foundedness of the preference structure may skip down to Section 5.

2.2 Measurable Value Functions
Trading decisions represent a special case of decisions under certainty, where choices involve multiattribute outcomes and corresponding monetary payments.
In such problems, the key decision often hinges on relative valuations of price differences compared to differences in alternative configurations of goods and services. Theoretically, price can be treated as just another attribute; however, such an approach fails to exploit the special character of the money dimension, and can significantly add to complexity due to the inherent continuity and typically wide range of possible monetary outcome values.

We build on the fundamental work of Dyer and Sarin [10, 11] on measurable value functions (MVFs). As we show below, wtp functions in a quasi-linear setting can be interpreted as MVFs. However, we first present the MVF framework in a more generic way, where the measurement is not necessarily monetary. We present the essential definitions and refer to Dyer and Sarin for more detailed background and axiomatic treatment. The key concept is that of preference difference. Let θ1, θ2, ϑ1, ϑ2 ∈ Θ such that θ2 ⪰ θ1 and ϑ2 ⪰ ϑ1. [θ2, θ1] denotes the preference difference between θ2 and θ1, interpreted as the strength, or degree, to which θ2 is preferred over θ1. Let ⪰* denote a preference order over Θ × Θ. We interpret the statement

    [θ2, θ1] ⪯* [ϑ2, ϑ1]

as "the preference of ϑ2 over ϑ1 is at least as strong as the preference of θ2 over θ1." We use the symbol ∼* to represent equality of preference differences.

DEFINITION 5. u : D → ℝ is a measurable value function (MVF) wrt ⪰* if for any θ1, θ2, ϑ1, ϑ2 ∈ D,

    [θ2, θ1] ⪯* [ϑ2, ϑ1] ⇔ u(θ2) − u(θ1) ≤ u(ϑ2) − u(ϑ1).

Note that an MVF can also be used as a value function representing ⪰, since [θ', θ] ⪰* [θ'', θ] iff θ' ⪰ θ''.

DEFINITION 6 ([11]). Attribute set X ⊂ A is called difference independent of X̄ if for any two assignments X1 X̄', X2 X̄',

    [X1 X̄', X2 X̄'] ∼* [X1 X̄'', X2 X̄'']

for any assignment X̄''. Or, in words, the preference differences on assignments to X given a fixed level of X̄ do not depend on the particular level chosen for X̄.

As with additive independence for EU, this condition is stronger than preferential independence of X. Also analogously to EU, mutual preferential independence combined with other conditions leads to additive decomposition of the MVF. Moreover, Dyer and Sarin [11] have defined analogs of utility independence [17] for MVF, and worked out a parallel set of decomposition results.

3. ADVANCED MVF STRUCTURES
3.1 Conditional Difference Independence
Our first step is to generalize Definition 6 to a conditional version.

DEFINITION 7. Let X, Y, Z be a partition of the set of attributes A. X is conditionally difference independent of Y given Z, denoted as CDI(X, Y | Z), if for all instantiations Ẑ, X1, X2, Y1, Y2,

    [X1 Y1 Ẑ, X2 Y1 Ẑ] ∼* [X1 Y2 Ẑ, X2 Y2 Ẑ].

Since the conditional set is always the complement, we sometimes leave it implicit, using the abbreviated notation CDI(X, Y).

CDI leads to a decomposition similar to that obtained from CAI [17].

LEMMA 3. Let u(A) be an MVF representing preference differences. Then CDI(X, Y | Z) iff

    u(A) = u(X0, Y, Z) + u(X, Y0, Z) − u(X0, Y0, Z).

To complete the analogy with CAI, we generalize Lemma 3 as follows.

PROPOSITION 4. CDI(X, Y | Z) iff there exist functions ψ1(X, Z) and ψ2(Y, Z) such that

    u(X, Y, Z) = ψ1(X, Z) + ψ2(Y, Z).    (2)

An immediate result of Proposition 4 is that CDI is a symmetric relation.

The conditional independence condition is much more applicable than the unconditional one. For example, if attributes a ∈ X and b ∉ X are complements or substitutes, X cannot be difference independent of X̄. However, X \ {a} may still be CDI of X̄ given a.

3.2 GAI Structure for MVF
A single CDI condition decomposes the value function into two parts. We seek a finer-grain global decomposition of the utility function, similar to that obtained from mutual preferential independence. For this purpose we are now ready to employ the results of Bacchus and Grove [1], who establish that the CAI condition has a perfect map [20]; that is, there exists a graph whose nodes correspond to the set A, and whose node separation reflects exactly the complete set of CAI conditions on A. Moreover, they show that the utility function decomposes over the set of maximal cliques of the perfect map. Their proofs can be easily adapted to CDI, since they rely only on the decomposition property of CAI that is also implied by CDI according to Proposition 4.

THEOREM 5. Let G = (A, E) be a perfect map for the CDI conditions on A. Then

    u(A) = Σ_{r=1}^{g} fr(Ir),    (3)

where I1, . . . , Ig are (overlapping) subsets of A, each corresponding to a maximal clique of G.

Given Theorem 5, we can now identify an MVF GAI structure from a collection of CDI conditions. The CDI conditions, in turn, are particularly intuitive to detect when the preference differences carry a direct interpretation, as is the case with the monetary differences discussed below.
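The decomposition in Lemma 3 and Proposition 4 can be checked numerically on a toy wtp function. The sketch below (with invented tables) builds u with the GAI structure f1(a, b) + f2(b, c) over cliques I1 = {a, b} and I2 = {b, c}, and verifies the Lemma 3 identity for CDI(a, c | b) over all outcomes:

```python
import itertools

# Toy wtp function over binary attributes a, b, c with GAI structure
# u = f1(a,b) + f2(b,c); the cliques overlap on b.  Numbers invented.
f1 = {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 5.0, (1, 1): 9.0}
f2 = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 4.0, (1, 1): 3.0}

def u(a, b, c):
    return f1[(a, b)] + f2[(b, c)]

# Lemma 3 identity for CDI(a, c | b), with reference levels a0 = c0 = 0:
# u(a,b,c) = u(0,b,c) + u(a,b,0) - u(0,b,0) for every outcome.
ok = all(
    abs(u(a, b, c) - (u(0, b, c) + u(a, b, 0) - u(0, b, 0))) < 1e-9
    for a, b, c in itertools.product((0, 1), repeat=3)
)
print(ok)  # True
```

The identity holds for any tables f1, f2 of this shape, which is the Proposition 4 direction: the GAI form with Z = {b} separating X = {a} from Y = {c} implies the CDI condition.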
Moreover, the assumption or detection of CDI conditions can be performed incrementally, until the MVF is decomposed to a reasonable dimension. This is in contrast with the fully additive decomposition of MVF, which requires mutual preferential independence [11].

Theorem 5 defines a decomposition structure, but to represent the actual MVF we need to specify the functions over the cliques. The next theorem establishes that the functional constituents of MVF are the same as those for GAI decompositions as defined by Fishburn [13] for EU. We adopt the following conventional notation. Let (a0_1, . . . , a0_m) be a predefined vector called the reference outcome. For any I ⊆ A, the function u([I]) stands for the projection of u(A) to I where the rest of the attributes are fixed at their reference levels.

THEOREM 6. Let G = (A, E) be a perfect map for the CDI conditions on A, and {I1, . . . , Ig} a set of maximal cliques as defined in Theorem 5. Then the functional decomposition from that theorem can be defined as f1 = u([I1]), and for r = 2, . . . , g,

    fr = u([Ir]) + Σ_{k=1}^{r−1} (−1)^k Σ_{1≤i1<···<ik<r} u([Ir ∩ Ii1 ∩ · · · ∩ Iik]).    (4)

… fb,r(θr) for all θr. The discount Δ is initialized to zero. The auction has the dynamics of a descending clock auction: at each round t, bids are collected for current prices and then prices are reduced according to the price rules. A seller is considered active in a round if she submits at least one full bid. In round t > 1, only sellers who were active in round t − 1 are allowed to participate, and the auction terminates when no more than a single seller is active. We denote the set of sub-bids submitted by si by B^t_i, and the corresponding set of full bids by

    B̄^t_i = {θ = (θ1, . . . , θg) ∈ Θ | ∀r. θr ∈ B^t_i}.

In our example, a seller could submit sub-bids on a set of sub-configurations such as a1b1 and b1c1, which combine to a full bid on a1b1c1.

The auction proceeds in two phases. In the first phase (A), at each round t the auction computes a set of preferred sub-configurations M^t. Section 5.4 shows how to define M^t to ensure convergence, and Section 5.5 shows how to compute it efficiently. In phase A, the auction adjusts prices after each round, reducing the price of every sub-configuration that has received a bid but is not in the preferred set. Let ε be the prespecified price increment parameter. Specifically, the phase A price change rule is applied to all θr ∈ (∪_{i=1}^{n} B^t_i) \ M^t:

    p^{t+1}(θr) ← max(p^t(θr) − ε/g, fb,r(θr)).    [A]

The RHS maximum ensures that prices do not get reduced below the buyer's valuation in phase A.

Let M̄^t denote the set of configurations that are consistent covers in M^t:

    M̄^t = {θ = (θ1, . . . , θg) ∈ Θ | ∀r. θr ∈ M^t}.

The auction switches to phase B when all active sellers have at least one full bid in the buyer's preferred set:

    ∀i. B̄^t_i = ∅ ∨ B̄^t_i ∩ M̄^t ≠ ∅.    [SWITCH]

Let T be the round at which [SWITCH] becomes true. At this point, the auction selects the buyer-optimal full bid ηi for each seller si:

    ηi = arg max_{θ ∈ B̄^T_i} (ub(θ) − p^T(θ)).    (6)

In phase B, si may bid only on ηi. The prices of sub-configurations are fixed at p^T(·) during this phase. The only adjustment in phase B is to Δ, which is increased in every round by ε. The auction terminates when at most one seller (if exactly one, designate it s_î) is active. There are four distinct cases:

1. All sellers drop out in phase A (i.e., before rule [SWITCH] holds).
The auction returns with no allocation.

2. All active sellers drop out in the same round in phase B. The auction selects the best seller (s_î) from the preceding round, and applies the applicable case below.

3. The auction terminates in phase B with a final price above the buyer's valuation, p^T(η_î) − Δ > ub(η_î). The auction offers the winner s_î an opportunity to supply η_î at price ub(η_î).

4. The auction terminates in phase B with a final price p^T(η_î) − Δ ≤ ub(η_î). This is the ideal situation, where the auction allocates the chosen configuration and seller at this resulting price.

(The discount term could be replaced with a uniform price reduction across all sub-configurations.)

The overall auction is described by high-level pseudocode in Algorithm 1. As explained in Section 5.4, the role of phase A is to guide the traders to their efficient configurations. Phase B is a one-dimensional competition over the surplus that the remaining seller candidates can provide to the buyer. In Section 5.5 we discuss the computational tasks associated with the auction, and Section 5.6 provides a detailed example.

Algorithm 1 GAI-based multiattribute auction
  collect a reported valuation, v̂, from the buyer
  set high initial prices, p^1(θr), on each level θr, and set Δ = 0
  while not [SWITCH] do
    collect sub-bids from sellers
    compute M^t
    apply price change by [A]
  end while
  compute ηi
  while more than one active seller do
    increase Δ by ε
    collect bids on (ηi, Δ) from sellers
  end while
  implement allocation and payment to winning seller

5.4 Economic Analysis
When the optimal solution to MAP (5) provides negative welfare and sellers do not bid below their cost, the auction terminates in phase A, no trade occurs, and the auction is trivially efficient.
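A minimal sketch of one application of the phase A price rule [A] follows; the sub-configuration names, prices, and buyer values are invented for illustration:

```python
# One phase-A price update: every sub-configuration that received a bid
# but is not in the preferred set M has its price cut by eps/g, floored
# at the buyer's valuation f_b,r for that sub-configuration.
def phase_a_update(prices, buyer_vals, bids, preferred, eps, g):
    return {
        theta_r: (max(p - eps / g, buyer_vals[theta_r])
                  if theta_r in bids and theta_r not in preferred else p)
        for theta_r, p in prices.items()
    }

prices     = {"a1b1": 10.0, "a2b1": 9.0, "b1c1": 7.0}
buyer_vals = {"a1b1": 6.0, "a2b1": 8.8, "b1c1": 5.0}
bids       = {"a1b1", "a2b1"}   # sub-bids received this round
preferred  = {"a1b1"}           # the preferred set M^t
print(phase_a_update(prices, buyer_vals, bids, preferred, eps=1.0, g=2))
# a2b1 drops to max(9.0 - 0.5, 8.8) = 8.8; the other prices are unchanged
```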
We therefore assume throughout the analysis that the optimal (seller, configuration) pair provides non-negative welfare.

The buyer's profit from a configuration θ is defined as

    πb(θ) = ub(θ) − p(θ),

and similarly πi(θ) = p(θ) − ci(θ) is the profit of si. (We drop the t superscript in generic statements involving price and profit functions, understanding that all usage is with respect to the currently applicable prices.) In addition, for μ ⊆ {1, . . . , g} we denote the corresponding set of sub-configurations by θμ, and define the profit from a configuration θ over the subset μ as

    πb(θμ) = Σ_{r∈μ} (fb,r(θr) − p(θr)).

πi(θμ) is defined similarly for si. Crucially, for any μ and its complement μ̄, and for any trader τ,

    πτ(θ) = πτ(θμ) + πτ(θμ̄).

The function σi : Θ → ℝ represents the welfare, or surplus, function ub(·) − ci(·). For any price system p,

    σi(θ) = πb(θ) + πi(θ).

Since we do not assume anything about the buyer's strategy, the analysis refers to profit and surplus with respect to the face value of the buyer's report. The functions πi and σi refer to the true cost functions of si.

DEFINITION 10. A seller is called a straightforward bidder (SB) if at each round t she bids on B^t_i as follows: if max_{θ∈Θ} π^t_i(θ) < 0, then B^t_i = ∅. Otherwise let

    Ω^t_i ⊆ arg max_{θ∈Θ} π^t_i(θ),
    B^t_i = {θr | θ ∈ Ω^t_i, r ∈ {1, . . . , g}}.

Intuitively, an SB seller follows a myopic best response (MBR) strategy, meaning she bids myopically rather than strategically, optimizing her profit with respect to the current prices.
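On small domains, straightforward bidding reduces to a profit maximization that can be brute-forced. The sketch below (all prices and costs invented) computes the maximizer set Ω and the induced sub-bids for a seller on a tiny GAI domain with clusters I1 = {a, b} and I2 = {b, c}:

```python
import itertools

# Myopic best response (Definition 10) for one seller: binary
# attributes a, b, c; prices and costs decompose over the clusters.
p1 = {(0, 0): 4.0, (0, 1): 5.0, (1, 0): 6.0, (1, 1): 7.0}   # price on I1
p2 = {(0, 0): 3.0, (0, 1): 2.0, (1, 0): 4.0, (1, 1): 5.0}   # price on I2
c1 = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 5.0, (1, 1): 4.0}   # cost on I1
c2 = {(0, 0): 2.0, (0, 1): 3.0, (1, 0): 1.0, (1, 1): 2.0}   # cost on I2

def profit(a, b, c):
    # seller profit pi_i(theta) = p(theta) - c_i(theta)
    return (p1[(a, b)] + p2[(b, c)]) - (c1[(a, b)] + c2[(b, c)])

configs = list(itertools.product((0, 1), repeat=3))
best = max(profit(*t) for t in configs)
omega = [t for t in configs if profit(*t) == best]          # Omega^t_i
# the seller's sub-bids B^t_i are the cluster projections of Omega^t_i
sub_bids = ({("I1", (t[0], t[1])) for t in omega}
            | {("I2", (t[1], t[2])) for t in omega})
print(best, sorted(omega))   # 7.0 [(0, 1, 0), (0, 1, 1)]
```

In the spirit of Lemma 8, every consistent cover of these sub-bids (here, both covers share b = 1) is itself profit-maximizing.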
To calculate B^t_i, sellers need to optimize their current profit function, as discussed in Section 4.2.

The following lemma bridges the apparent gap between the compact pricing and bid structure and the global optimization performed by the traders.

LEMMA 8. Let Ψ be a set of configurations, all maximizing profit for a trader τ (seller or buyer) at the relevant prices. Let Φ = {θr | θ ∈ Ψ, r ∈ {1, . . . , g}}. Then any consistent cover in Φ is also a profit-maximizing configuration for τ.

Proof sketch (full proof in the online appendix): A source of an element θr is a configuration θ̃ ∈ Ψ from which it originated (meaning, θ̃r = θr). Starting from the supposedly suboptimal cover θ1, we build a series of covers θ1, . . . , θL. At each θj we flip the value of a set of sub-configurations μj corresponding to a subtree, with the sub-configurations of the configuration θ̂j ∈ Ψ which is the source of the parent γj of μj. That ensures that all elements in μj ∪ {γj} have a mutual source θ̂j. We show that all θj are consistent and that they must all be suboptimal as well, and since all elements of θL have a mutual source, meaning θL = θ̂L ∈ Ψ, this contradicts the optimality of Ψ.

COROLLARY 9. For an SB seller si,

    ∀t, ∀θ' ∈ B̄^t_i, π^t_i(θ') = max_{θ∈Θ} π^t_i(θ).

Next we consider combinations of configurations that are only within some δ of optimality.

LEMMA 10. Let Ψ be a set of configurations, all within δ of maximizing profit for a trader τ at the relevant prices, and let Φ be defined as in Lemma 8. Then any consistent cover in Φ is within δg of maximizing utility for τ.

This bound is tight; that is, for any GAI tree and a non-trivial domain we can construct a set Ψ as above in which there exists a consistent cover whose utility is exactly δg below the maximal.

Next we formally define M^t. For connected GAI trees, M^t is the set of sub-configurations that are part of a configuration within ε of optimal. When the GAI tree is in fact a forest, we apportion the error proportionally across the disconnected trees. Let G be comprised of trees G1, . . . , Gh. We use θj to denote the projection of a configuration θ on the tree Gj, and gj denotes the number of GAI elements in Gj. Then

    M^t_j = {θr | π^t_b(θj) ≥ max_{θ'j ∈ Θj} π^t_b(θ'j) − (gj/g)ε, r ∈ Gj},

and define M^t = ∪_{j=1}^{h} M^t_j.

Let ej = gj − 1 denote the number of edges in Gj. We define the connectivity parameter e = max_{j=1,...,h} ej. As shown below, this connectivity parameter is an important factor in the performance of the auction.

COROLLARY 11.

    ∀θ' ∈ M̄^t, π^t_b(θ') ≥ max_{θ∈Θ} π^t_b(θ) − (e + 1)ε.

In the fully additive case this loss of efficiency reduces to ε. On the other extreme, if the GAI network is connected, then e + 1 = g. We also note that without assuming any preference structure, meaning that the CDI map is fully connected, g = 1 and the efficiency loss is again ε.

Lemmas 12 through 15 show that through the price system, the choice of buyer-preferred configurations, and the price change rules, phase A leads the buyer and each of the sellers to their mutually efficient configuration.

LEMMA 12. max_{θ∈Θ} π^t_b(θ) does not change in any round t of phase A.

PROOF. We prove the lemma for each tree Gj.
The optimal\nvalues for disconnected components are independent of each other\nhence if the maximal profit for each component does not change\nthe combined maximal profit does not change as well. If the price\nof \u03b8j was reduced during phase A, that is pt+1\n(\u03b8j) = pt\n(\u03b8j ) \u2212 \u03b4,\nit must be the case that some w \u2264 gj sub-configurations of \u03b8j are\nnot in Mt\nj, and \u03b4 = w\ng\n. The definition of Mt\nj ensures\n\u03c0t\nb(\u03b8j ) < max\n\u03b8\u2208\u0398\n\u03c0t\nb(\u03b8j) \u2212 gj\ng\n.\nTherefore,\n\u03c0t+1\nb (\u03b8 ) = \u03c0t\n(\u03b8 ) + \u03b4 = \u03c0t\n(\u03b8 ) +\nw\ng\n\u2264 max\n\u03b8\u2208\u0398\n\u03c0t\nb(\u03b8j).\nThis is true for any configuration whose profit improves,\ntherefore the maximal buyer profit does not change during phase A.\nLEMMA 13. The price of at least one sub-configuration must\nbe reduced at every round in phase A.\nPROOF. In each round t < T of phase A there exists an active\nseller i for whom Bt\ni \u2229 Mt\n= \u2205. However to be active in round t,\nBt\ni = \u2205. Let \u02c6\u03b8 \u2208 Bt\ni . If \u2200r.\u02c6\u03b8r \u2208 Mt\n, then \u02c6\u03b8 \u2208 Mt\nby definition\nof Mt\n. Therefore there must be \u02c6\u03b8r \u2208 Mt\n. We need to prove that\nfor at least one of these sub-configurations, \u03c0t\nb(\u02c6\u03b8r) < 0 to ensure\nactivation of rule [A].\nAssume for contradiction that for any \u02c6\u03b8r \u2208 \u00afMt\n, \u03c0t\nb(\u02c6\u03b8r) \u2265 0.\nFor simplicity we assume that for any \u03b8r, \u03c01\nb (\u03b8r) is some product\nof g\n(that can be easily done), and that ensures that \u03c0t\nb(\u02c6\u03b8r) = 0\nbecause once profit hits 0 it cannot increase by rule [A].\nIf \u02c6\u03b8r \u2208 \u00afMt\n, \u2200r = 1, . . . , g then \u03c0t\nb(\u02c6\u03b8) = 0. This contradicts\nLemma 12 since we set high initial prices. Therefore some of the\nsub-configurations of \u02c6\u03b8 are in Mt\n, and WLOG we assume it is\n\u02c6\u03b81, . . . 
, θ̂_k. To be in M^t, these k sub-configurations must have been in some preferred full configuration, meaning there exists θ' ∈ M^t such that

θ' = (θ̂_1, . . . , θ̂_k, θ'_{k+1}, . . . , θ'_g).

Since θ̂ ∉ M^t, it must be the case that π_b^t(θ̂) < π_b^t(θ'). Therefore

π_b^t(θ'_{k+1}, . . . , θ'_g) > π_b^t(θ̂_{k+1}, . . . , θ̂_g) = 0.

Hence for at least one r ∈ {k + 1, . . . , g}, π_b^t(θ'_r) > 0, contradicting rule [A].

LEMMA 14. When the solution to MAP provides positive surplus, and at least the best seller is SB, the auction must reach phase B.

PROOF. By Lemma 13, prices must go down in every round of phase A. Rule [A] sets a lower bound on all prices, therefore the auction either terminates in phase A or must reach condition [SWITCH].
We set the initial prices high enough that max_{θ∈Θ} π_b^1(θ) < 0, and by Lemma 12, max_{θ∈Θ} π_b^t(θ) < 0 during phase A. We assume that the efficient allocation (θ*, i*) provides positive welfare, that is, σ_{i*}(θ*) = π_b^t(θ*) + π_{i*}^t(θ*) > 0. Seller s_{i*} is SB, therefore she will leave the auction only when π_{i*}^t(θ*) < 0. This can happen only when π_b^t(θ*) > 0, therefore s_{i*} does not drop in phase A, hence the auction cannot terminate before reaching condition [SWITCH].

LEMMA 15. For an SB seller s_i, η_i is (e + 1)ε-efficient.

PROOF. η_i is chosen to maximize the buyer's surplus out of B^t_i at the end of phase A. Since B^t_i ∩ M^t ≠ ∅, clearly η_i ∈ M^t.
From Corollary 11 and Corollary 9, for any θ̃,

π_b^T(η_i) ≥ π_b^T(θ̃) − (e + 1)ε
π_i^T(η_i) ≥ π_i^T(θ̃)
⇒ σ_i(η_i) ≥ σ_i(θ̃) − (e + 1)ε.

This establishes the approximate bilateral efficiency of the results of Phase A (at this point under the assumption of SB). Based on Phase B's simple role as a single-dimensional bidding competition over the discount, we next assert that the overall result is efficient under SB, which in turn proves to be an approximately ex-post equilibrium strategy in the two phases.

LEMMA 16. If sellers s_i and s_j are SB, and s_i is active at least as long as s_j is active in phase B, then

σ_i(η_i) ≥ max_{θ∈Θ} σ_j(θ) − (e + 2)ε.

THEOREM 17. Given a truthful buyer and SB sellers, the auction is (e + 2)ε-efficient: the surplus of the final allocation is within (e + 2)ε of the maximal surplus.

Following PK, we rely on an equivalence to the one-sided VCG auction to establish incentive properties for the sellers. In the one-sided multiattribute VCG auction, buyer and sellers report valuation and cost functions û_b, ĉ_i, and the buyer pays the sell-side VCG payment to the winning seller.

DEFINITION 11. Let (θ*, i*) be the optimal solution to MAP. Let (θ̃, ĩ) be the best solution to MAP when i* does not participate. The sell-side VCG payment is

VCG(û_b, ĉ_i) = û_b(θ*) − (û_b(θ̃) − ĉ_ĩ(θ̃)).

It is well known that truthful bidding is a dominant strategy for sellers in the one-sided VCG auction.
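Definition 11 can be checked numerically by brute-force enumeration. The following sketch recomputes the sell-side VCG payment on the example data of Table 1 (Section 5.6); all identifiers (the dictionaries, `value`, `solve_map`) are illustrative, not from the paper:

```python
# Brute-force instantiation of the sell-side VCG payment (Definition 11)
# on the Section 5.6 example. Identifiers are illustrative.
from itertools import product

# GAI utilities over I1 = {a, b} and I2 = {b, c} (Table 1).
fb = {('a1','b1'): 65, ('a2','b1'): 50, ('a1','b2'): 55, ('a2','b2'): 70,
      ('b1','c1'): 50, ('b2','c1'): 85, ('b1','c2'): 60, ('b2','c2'): 75}
costs = {
  's1': {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 30, ('a2','b2'): 70,
         ('b1','c1'): 65, ('b2','c1'): 65, ('b1','c2'): 70, ('b2','c2'): 61},
  's2': {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 25, ('a2','b2'): 25,
         ('b1','c1'): 55, ('b2','c1'): 110, ('b1','c2'): 70, ('b2','c2'): 95},
}

def value(table, theta):
    a, b, c = theta
    return table[(a, b)] + table[(b, c)]

def solve_map(sellers):
    """Return (surplus, configuration, seller) maximizing ub - ci."""
    best = None
    for theta in product(('a1','a2'), ('b1','b2'), ('c1','c2')):
        for i in sellers:
            s = value(fb, theta) - value(costs[i], theta)
            if best is None or s > best[0]:
                best = (s, theta, i)
    return best

surplus, theta_star, i_star = solve_map(['s1', 's2'])
s2_surplus, theta_2, i_2 = solve_map([i for i in costs if i != i_star])
# Definition 11: VCG = ub(theta*) - (ub(theta~) - c_i~(theta~))
vcg = value(fb, theta_star) - (value(fb, theta_2) - value(costs[i_2], theta_2))
```

On these numbers the optimal allocation is (s_1, a1b2c1) with surplus 45, and the winner's VCG profit, vcg − c_1(θ*), comes to 20, the figure quoted in the example of Section 5.6.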
It is also shown by PK that the maximal regret for buyers from bidding truthfully in this mechanism is u_b(θ*) − c_{i*}(θ*) − (u_b(θ̃) − ĉ_ĩ(θ̃)), that is, the marginal product of the efficient seller.
Usually in iterative auctions the VCG outcome is only nearly achieved, but the deviation is bounded by the minimal price change. We show a similar result, and therefore define δ-VCG payments.

DEFINITION 12. A sell-side δ-VCG payment for MAP is a payment p such that

VCG(û_b, ĉ_i) − δ ≤ p ≤ VCG(û_b, ĉ_i) + δ.

When the payment is guaranteed to be δ-VCG, sellers can only affect their payment within that range, therefore their gain from falsely reporting their cost is bounded by 2δ.

LEMMA 18. When sellers are SB, the payment at the end of the GAI auction is sell-side (e + 2)ε-VCG.

THEOREM 19. SB is a (3e + 5)ε ex-post Nash equilibrium for sellers in the GAI auction. That is, sellers cannot gain more than (3e + 5)ε by deviating.

In practice, however, sellers are unlikely to have the information that would let them exploit that potential gain. They are much more likely to lose from bidding on their less attractive configurations.

5.5 Computation and Complexity

The size of the price space maintained in the auction is equal to the total number of sub-configurations, meaning it is exponential in max_r |I_r|. This is also equivalent to the tree-width (plus one) of the original CDI-map. For the purpose of the computational analysis, let d_j denote the domain of attribute a_j, and let I = ⋃_{r=1}^{g} ∏_{j∈I_r} d_j, the collection of all sub-configurations. The first purpose of this sub-section is to show that the complexity of all the computations required for the auction depends only on |I|, i.e., no computation depends on the size of the full exponential domain.
We are first concerned with the computation of M^t.
Since M^t grows monotonically with t, a naive application of an optimization algorithm to generate the best outcomes sequentially might end up enumerating significant portions of the fully exponential domain. However, as shown below, this plain enumeration can be avoided.

PROPOSITION 20. The computation of M^t can be done in time O(|I|²). Moreover, the total time spent on this task throughout the auction is O(|I|(|I| + T)).

The bounds are in practice significantly lower, based on results on similar problems from the probabilistic reasoning literature [18].
One of the benefits of the compact pricing structure is the compact representation it lends to bids: sellers submit only sub-bids, and therefore the number of them submitted and stored per seller is bounded by |I|. Since the computational tasks (testing B^t_i ≠ ∅, rule [SWITCH], and the choice of η_i) all involve the set B^t_i, it is important to note that their performance depends only on the size of B^t_i, since they are all subsumed by the combinatorial optimization task over B^t_i or B^t_i ∩ M^t.
Next, we analyze the number of rounds it takes for the auction to terminate. Phase B requires max_{i=1,...,n} π_i^T(η_i)/ε rounds. Since this is equivalent to price-only auctions, the concern is only with the time complexity of phase A. Since prices cannot go below f_{b,r}(θ_r), an upper bound on the number of rounds required is

T ≤ Σ_{θ_r ∈ I} (p^1(θ_r) − f_{b,r}(θ_r)) / (ε/g).

However, phase A may converge faster. Let the initial negative profit chosen by the auctioneer be m = max_{θ∈Θ} π_b^1(θ). In the worst case phase A needs to run until π_b(θ) = m for all θ ∈ Θ. This happens, for example, when p^t(θ_r) = f_{b,r}(θ_r) + m/g for all θ_r ∈ I. In general, the closer the initial prices reflect the buyer's valuation, the faster phase A converges.
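The round bound for phase A is easy to instantiate; the following sketch plugs in the initial prices and buyer values of the Section 5.6 example (ε = 8, g = 2). Variable names are ours.

```python
# Instantiating the phase-A round bound
#   T <= sum_{theta_r in I} (p1(theta_r) - f_{b,r}(theta_r)) / (eps/g)
# with the Section 5.6 example data. Variable names are ours.
fb = {'a1b1': 65, 'a2b1': 50, 'a1b2': 55, 'a2b2': 70,   # clique I1 = {a, b}
      'b1c1': 50, 'b2c1': 85, 'b1c2': 60, 'b2c2': 75}   # clique I2 = {b, c}
# Initial prices: 75 on every I1 sub-configuration, 90 on every I2 one.
p1 = {r: 75 if r[0] == 'a' else 90 for r in fb}
eps, g = 8, 2
slack = sum(p1[r] - fb[r] for r in fb)  # total removable price mass
rounds_bound = slack / (eps / g)        # each round removes at least eps/g
```

Here the bound evaluates to 37.5, i.e., at most 37 rounds, while the example actually converges after 9 rounds of phase A, which illustrates the remark that the bound is loose when the initial prices track the buyer's valuation.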
One extreme is to choose p^1(θ_r) = f_{b,r}(θ_r) + m/g. That would make phase A redundant, at the cost of full initial revelation of the buyer's valuation, as done in other mechanisms discussed below. Between this option and the other extreme, which is p^1(α) = p^1(α̂) for all α, α̂ ∈ I, the auctioneer has a range of choices to determine the right tradeoff between convergence time and information revelation. In the example below, the choice of a lower initial price for the domain of I_1 provides some speedup by revealing a harmless amount of information.

            I_1                       I_2
      a1b1  a2b1  a1b2  a2b2    b1c1  b2c1  b1c2  b2c2
f_b    65    50    55    70      50    85    60    75
f_1    35    20    30    70      65    65    70    61
f_2    35    20    25    25      55   110    70    95

Table 1: GAI utility functions for the example domain. f_b represents the buyer's valuation, and f_1 and f_2 the costs of the sellers s_1 and s_2.

Another potential concern is the communication cost associated with the Japanese auction style. The sellers need to send their bids over and over again at each round. A simple change can be made to avoid much of this redundant communication: the auction can retain sub-bids from previous rounds on sub-configurations whose price did not change. Since combinations of sub-bids from different rounds can yield sub-optimal configurations, each sub-bid should be tagged with the number of the latest round in which it was submitted, and only consistent combinations from the same round are considered to be full bids. With this implementation, sellers need not resubmit their bids until the price of at least one sub-configuration has changed.

5.6 Example

We use the example settings introduced in Section 5.2. Recall that the GAI structure is I_1 = {a, b}, I_2 = {b, c} (note that e = 1). Table 1 shows the GAI utilities for the buyer and the two sellers s_1, s_2.
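Table 1 is small enough that the surplus figures used in the example can be re-derived by enumerating all eight configurations; a verification sketch (identifiers ours, using the ε = 8, e = 1 parameters of the example):

```python
# Enumerating all eight configurations of Table 1 to re-derive the
# example's surplus claims. Identifiers are ours.
from itertools import product

fb = {('a1','b1'): 65, ('a2','b1'): 50, ('a1','b2'): 55, ('a2','b2'): 70,
      ('b1','c1'): 50, ('b2','c1'): 85, ('b1','c2'): 60, ('b2','c2'): 75}
f1 = {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 30, ('a2','b2'): 70,
      ('b1','c1'): 65, ('b2','c1'): 65, ('b1','c2'): 70, ('b2','c2'): 61}
f2 = {('a1','b1'): 35, ('a2','b1'): 20, ('a1','b2'): 25, ('a2','b2'): 25,
      ('b1','c1'): 55, ('b2','c1'): 110, ('b1','c2'): 70, ('b2','c2'): 95}

def surplus(cost, theta):
    a, b, c = theta
    return (fb[(a, b)] + fb[(b, c)]) - (cost[(a, b)] + cost[(b, c)])

thetas = list(product(('a1','a2'), ('b1','b2'), ('c1','c2')))
s1_best = max(surplus(f1, t) for t in thetas)   # best surplus with s1
s2_best = max(surplus(f2, t) for t in thetas)   # best surplus with s2
s2_argmax = [t for t in thetas if surplus(f2, t) == s2_best]
# Configurations within (e+1)*eps = 2*8 = 16 of s1's optimum:
near_opt = [t for t in thetas if surplus(f1, t) >= s1_best - 16]
```

This reproduces the figures quoted next: s_1's optimum is 45 (at a1b2c1), s_2's is 25 (attained by three configurations), and exactly two configurations (a1b2c1 and a1b2c2) are within (e + 1)ε = 16 of s_1's optimum.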
The efficient allocation is (s_1, a1b2c1), with a surplus of 45. The maximal surplus of the second-best seller, s_2, is 25, achieved by a1b1c1, a2b1c1, and a2b2c2. We set all initial prices over I_1 to 75, and all initial prices over I_2 to 90. We set ε = 8, meaning that the price reduction for sub-configurations is ε/g = 4. Though with these numbers it is not guaranteed by Theorem 17, we expect s_1 to win on either the efficient allocation or on a1b2c2, which provides a surplus of 39. The reason is that these are the only two configurations which are within (e + 1)ε = 16 of being efficient for s_1 (therefore one of them must be chosen by Phase A), and both provide more than ε surplus over s_2's most efficient configuration (and this is sufficient in order to win in Phase B).
Table 2 shows the progress of phase A. Initially all configurations have the same cost (165), so the sellers bid on their lowest-cost configuration, which is a2b1c1 for both (with profit 80 to s_1 and 90 to s_2), and that translates to sub-bids on a2b1 and b1c1. M^1 contains the sub-configurations a2b2 and b2c1 of the highest-value configuration a2b2c1. The price is therefore decreased on a2b1 and b1c1. After the price change, s_1 has higher profit (74) on a1b2c2 and therefore bids on a1b2 and b2c2. Now (round 2) their prices go down, reducing the profit on a1b2c2 to 66, and therefore in round 3 s_1 prefers a2b1c2 (profit 67). After the next price change, the configurations a1b2c1 and a1b2c2 both become optimal (profit 66), and the sub-bids a1b2, b2c1, and b2c2 capture the two. These configurations stay optimal for another round (5), with profit 62.
At this point s_1 has a full bid (in fact two full bids: a1b2c2 and a1b2c1) in M^5, and therefore she no longer changes her bids, since the price of her optimal configurations does not decrease. s_2 sticks to a2b1c1 during the first four rounds, switching to a1b1c1 in round 5. It takes four more rounds for s_2 and M^t to converge (M^10 ∩ B^10_2 = {a1b1c1}).

 t    a1b1   a2b1   a1b2   a2b2   b1c1   b2c1   b1c2   b2c2
 1     75     75     75     75     90     90     90     90
              s1,s2         *      s1,s2  *
 2     75     71     75     75     86     90     90     90
              s2     s1     *      s2     *             s1
 3     75     67     71     75     82     90     90     86
              s1,s2         *      s2     *      s1     *
 4     75     63     71     75     78     90     86     86
              s2     s1     *      s2     *,s1          *,s1
 5     75     59     67     75     74     90     86     86
       s2            *,s1   *      s2     *,s1          *,s1
 6     71     59     67     75     70     90     86     86
              s2     *,s1   *             *,s1   s2     *,s1
 7     71     55     67     75     70     90     82     86
       s2            *,s1   *      s2     *,s1          *,s1
 8     67     55     67     75     66     90     82     86
       *      s2     *,s1   *      *      *,s1   s2     *,s1
 9     67     51     67     75     66     90     78     86
       *,s2          *,s1   *      *,s2   *,s1          *,s1

Table 2: Auction progression in phase A. Sell-bids and designation of M^t (marked *) are shown below the price of each sub-configuration.

After round 9 the auction sets η_1 = a1b2c1 (which yields more buyer profit than a1b2c2) and η_2 = a1b1c1. For the next round (10), Δ = 8, increased by 8 for each subsequent round. Note that p^9(a1b1c1) = 133 and c_2(a1b1c1) = 90, therefore π_2^T(η_2) = 43. In round 15, Δ = 48, meaning p^15(a1b1c1) = 85, and that causes s_2 to drop out, setting the final allocation to (s_1, a1b2c1) with p^15(a1b2c1) = 157 − 48 = 109. That leaves the buyer with a profit of 31 and s_1 with a profit of 14, less than ε below the VCG profit of 20.
The welfare achieved in this case is optimal. To illustrate how some efficiency loss could occur, consider the case that c_1(b2c2) = 60.
In that case, in round 3 the configuration a1b2c2 provides the same profit (67) as a2b1c2, and s_1 bids on both. While a2b1c2 is no longer optimal after the price change, a1b2c2 remains optimal on subsequent rounds because b2c2 ∈ M^t, and the price change of a1b2 affects both a1b2c2 and the efficient configuration a1b2c1. When phase A ends, B^10_1 ∩ M^10 = {a1b2c2}, so the auction terminates with the slightly suboptimal configuration and surplus 40.

6. DISCUSSION

6.1 Preferential Assumptions

A key aspect in implementing GAI-based auctions is the choice of the preference structure, that is, the elements {I_1, . . . , I_g}. In some domains the structure can be more or less robust over time and over different decision makers. When this is not the case, extracting reliable structure from sellers (in the form of CDI conditions) is a serious challenge. This could have been a deal breaker for such domains, but in fact it can be overcome. It turns out that we can run this auction without any assumptions on the sellers' preference structure. The only place where this assumption is used in our analysis is in Lemma 8. If sellers whose preference structure does not agree with the one used by the auction are guided to submit only one full bid at each round, or a set of bids that does not yield undesired consistent combinations, all the properties of the auction still hold. Locally, the sellers can optimize their profit functions using the union of their GAI structure with the auction's structure. It is therefore essential only that the buyer's preference structure is accurately modeled.
Of course, capturing the sellers' structures as well is still preferred, since it can speed up the execution and let sellers take advantage of the compact bid representation.
In both cases the choice of clusters may significantly affect the complexity of the price structure and the runtime of the auction. It is sometimes better to ignore some weaker interdependencies in order to reduce dimensionality. The complexity of the structure also affects the efficiency of the auction through the value of e.

6.2 Information Revelation Properties

In considering the information properties of this mechanism, we compare to the standard approach for iterative multiattribute auctions, which is based on the theoretical foundations of Che [7]. In most of these mechanisms the buyer reveals a scoring function and then the mechanism solicits bids from the sellers [3, 22, 8, 21] (the mechanism suggested by Beil and Wein [2] is different, since buyers can modify their scoring function each round, but the goal there is to maximize the buyer's profit). Whereas these iterative procurement mechanisms tend to relieve the burden of information revelation from the sellers, a major drawback is that the buyer's utility function must be revealed to the sellers before receiving any commitment. In the mechanisms suggested by PK and in our GAI auction above, buyer information is revealed only in exchange for sell commitments. In particular, sellers learn nothing (beyond the initial price upper bound, which can be arbitrarily loose) about the utility of configurations for which no bid was submitted. When bids are submitted for a configuration θ, sellers would be able to infer its utility relative to the current preferred configurations only after the price of θ is driven down sufficiently to make it a preferred configuration as well.

6.3 Conclusions

We propose a novel exploitation of preference structure in multiattribute auctions.
Rather than assuming full additivity, or no\nstructure at all, we model preferences using the GAI decomposition.\nWe developed an iterative auction mechanism directly relying on\nthe decomposition, and also provided direct means of constructing\nthe representation from relatively simple statements of\nwillingnessto-pay. Our auction mechanism generalizes PK\"s preference\nmodeling, while in essence retaining their information revelation\nproperties. It allows for a range of tradeoffs between accuracy of\npreference representation and both the complexity of the pricing structure\nand efficiency of the auction, as well as tradeoffs between buyer\"s\ninformation revelation and the time required for convergence.\n7. ACKNOWLEDGMENTS\nThis work was supported in part by NSF grants IIS-0205435\nand IIS-0414710, and the STIET program under NSF IGERT grant\n0114368. We are grateful to comments from anonymous reviewers.\n8. REFERENCES\n[1] F. Bacchus and A. Grove. Graphical models for preference\nand utility. In Eleventh Conference on Uncertainty in\nArtificial Intelligence, pages 3-10, Montreal, 1995.\n[2] D. R. Beil and L. M. Wein. An inverse-optimization-based\nauction for multiattribute RFQs. Management Science,\n49:1529-1545, 2003.\n[3] M. Bichler. The Future of e-Markets: Multi-Dimensional\nMarket Mechanisms. Cambridge University Press, 2001.\n[4] C. Boutilier, F. Bacchus, and R. I. Brafman. UCP-networks:\nA directed graphical representation of conditional utilities. In\nSeventeenth Conference on Uncertainty in Artificial\nIntelligence, pages 56-64, Seattle, 2001.\n[5] R. I. Brafman, C. Domshlak, and T. Kogan. Compact\nvalue-function representations for qualitative preferences. In\nTwentieth Conference on Uncertainty in Artificial\nIntelligence, pages 51-59, Banff, 2004.\n[6] D. Braziunas and C. Boutilier. Local utility elicitation in GAI\nmodels. In Twenty-first Conference on Uncertainty in\nArtificial Intelligence, pages 42-49, Edinburgh, 2005.\n[7] Y.-K. Che. 
Design competition through multidimensional\nauctions. RAND Journal of Economics, 24(4):668-680,\n1993.\n[8] E. David, R. Azoulay-Schwartz, and S. Kraus. An English\nauction protocol for multi-attribute items. In Agent Mediated\nElectronic Commerce IV: Designing Mechanisms and\nSystems, volume 2531 of Lecture Notes in Artificial\nIntelligence, pages 52-68. Springer, 2002.\n[9] G. Debreu. Topological methods in cardinal utility theory. In\nK. Arrow, S. Karlin, and P. Suppes, editors, Mathematical\nMethods in the Social Sciences. Stanford Univ. Press, 1959.\n[10] J. S. Dyer and R. K. Sarin. An axiomatization of cardinal\nadditive conjoint measurement theory. Working Paper 265,\nWMSI, UCLA, February 1977.\n[11] J. S. Dyer and R. K. Sarin. Measurable multiattribute value\nfunctions. Operations Research, 27:810-822, 1979.\n[12] Y. Engel, M. P. Wellman, and K. M. Lochner. Bid\nexpressiveness and clearing algorithms in multiattribute\ndouble auctions. In Seventh ACM Conference on Electronic\nCommerce, pages 110-119, Ann Arbor, MI, 2006.\n[13] P. C. Fishburn. Interdependence and additivity in\nmultivariate, unidimensional expected utility theory. Intl.\nEconomic Review, 8:335-342, 1967.\n[14] C. Gonzales and P. Perny. GAI networks for utility\nelicitation. In Ninth Intl. Conf. on the Principles of\nKnowledge Representation and Reasoning, pages 224-234,\nWhistler, BC, 2004.\n[15] C. Gonzales and P. Perny. GAI networks for decision making\nunder certainty. In IJCAI-05 Workshop on Advances in\nPreference Handling, Edinburgh, 2005.\n[16] N. Hyafil and C. Boutilier. Regret-based incremental partial\nrevelation mechanisms. In Twenty-first National Conference\non Artificial Intelligence, pages 672-678, Boston, MA, 2006.\n[17] R. L. Keeney and H. Raiffa. Decisions with Multiple\nObjectives: Preferences and Value Tradeoffs. Wiley, 1976.\n[18] D. Nilsson. 
An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2):159-173, 1998.
[19] D. C. Parkes and J. Kalagnanam. Models for iterative multiattribute procurement auctions. Management Science, 51:435-451, 2005.
[20] J. Pearl and A. Paz. Graphoids: A graph based logic for reasoning about relevance relations. In B. Du Boulay, editor, Advances in Artificial Intelligence II. 1989.
[21] J. Shachat and J. T. Swarthout. Procurement auctions for differentiated goods. IBM Research Report RC22587, IBM T.J. Watson Research Laboratory, 2002.
[22] N. Vulkan and N. R. Jennings. Efficient mechanisms for the supply of services in multi-agent environments. Decision Support Systems, 28:5-19, 2000.", "keywords": "mvf;multiattribute auction;gai based auction;auction;measurable value function theory;iterative auction mechanism;preference handling;theory of measurable value function;gaus"}
-{"name": "test_J-17", "title": "Truthful Mechanism Design for Multi-Dimensional Scheduling via Cycle Monotonicity", "abstract": "We consider the problem of makespan minimization on m unrelated machines in the context of algorithmic mechanism design, where the machines are the strategic players. This is a multidimensional scheduling domain, and the only known positive results for makespan minimization in such a domain are O(m)-approximation truthful mechanisms [22, 20]. We study a well-motivated special case of this problem, where the processing time of a job on each machine may either be low or high, and the low and high values are public and job-dependent. This preserves the multidimensionality of the domain, and generalizes the restricted-machines (i.e., {pj, \u221e}) setting in scheduling. We give a general technique to convert any c-approximation algorithm to a 3capproximation truthful-in-expectation mechanism. This is one of the few known results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms in a black-box fashion. When the low and high values are the same for all jobs, we devise a deterministic 2-approximation truthful mechanism. These are the first truthful mechanisms with non-trivial performance guarantees for a multidimensional scheduling domain. Our constructions are novel in two respects. First, we do not utilize or rely on explicit price definitions to prove truthfulness; instead we design algorithms that satisfy cycle monotonicity. Cycle monotonicity [23] is a necessary and sufficient condition for truthfulness, is a generalization of value monotonicity for multidimensional domains. However, whereas value monotonicity has been used extensively and successfully to design truthful mechanisms in singledimensional domains, ours is the first work that leverages cycle monotonicity in the multidimensional setting. 
Second, our randomized mechanisms are obtained by first constructing a fractional truthful mechanism for a fractional relaxation of the problem, and then converting it into a truthfulin-expectation mechanism. This builds upon a technique of [16], and shows the usefulness of fractional mechanisms in truthful mechanism design.", "fulltext": "1. INTRODUCTION\nMechanism design studies algorithmic constructions\nunder the presence of strategic players who hold the inputs\nto the algorithm. Algorithmic mechanism design has\nfocused mainly on settings were the social planner or designer\nwishes to maximize the social welfare (or equivalently,\nminimize social cost), or on auction settings where\nrevenuemaximization is the main goal. Alternative optimization\ngoals, such as those that incorporate fairness criteria (which\nhave been investigated algorithmically and in social choice\ntheory), have received very little or no attention.\nIn this paper, we consider such an alternative goal in the\ncontext of machine scheduling, namely, makespan\nminimization. There are n jobs or tasks that need to be assigned to\nm machines, where each job has to be assigned to exactly\none machine. Assigning a job j to a machine i incurs a load\n(cost) of pij \u2265 0 on machine i, and the load of a machine is\nthe sum of the loads incurred due to the jobs assigned to it;\nthe goal is to schedule the jobs so as to minimize the\nmaximum load of a machine, which is termed the makespan of the\nschedule. Makespan minimization is a common objective in\nscheduling environments, and has been well studied\nalgorithmically in both the Computer Science and Operations\nResearch communities (see, e.g., the survey [12]). Following\nthe work of Nisan and Ronen [22], we consider each machine\nto be a strategic player or agent who privately knows its own\nprocessing time for each job, and may misrepresent these\nvalues in order to decrease its load (which is its incurred\ncost). 
Hence, we approach the problem via mechanism\ndesign: the social designer, who holds the set of jobs to be\nassigned, needs to specify, in addition to a schedule, suitable\npayments to the players in order to incentivize them to\nreveal their true processing times. Such a mechanism is called\na truthful mechanism. The makespan-minimization\nobjective is quite different from the classic goal of social-welfare\nmaximization, where one wants to maximize the total\nwelfare (or minimize the total cost) of all players. Instead, it\n252\ncorresponds to maximizing the minimum welfare and the\nnotion of max-min fairness, and appears to be a much harder\nproblem from the viewpoint of mechanism design. In\nparticular, the celebrated VCG [26, 9, 10] family of mechanisms\ndoes not apply here, and we need to devise new techniques.\nThe possibility of constructing a truthful mechanism for\nmakespan minimization is strongly related to assumptions\non the players\" processing times, in particular, the\ndimensionality of the domain. Nisan and Ronen considered the\nsetting of unrelated machines where the pij values may be\narbitrary. This is a multidimensional domain, since a player\"s\nprivate value is its entire vector of processing times (pij)j.\nVery few positive results are known for multidimensional\ndomains in general, and the only positive results known for\nmultidimensional scheduling are O(m)-approximation\ntruthful mechanisms [22, 20]. We emphasize that regardless of\ncomputational considerations, even the existence of a\ntruthful mechanism with a significantly better (than m)\napproximation ratio is not known for any such scheduling domain.\nOn the negative side, [22] showed that no truthful\ndeterministic mechanism can achieve approximation ratio better\nthan 2, and strengthened this lower bound to m for two\nspecific classes of deterministic mechanisms. 
Recently, [20]\nextended this lower bound to randomized mechanisms, and [8]\nimproved the deterministic lower bound.\nIn stark contrast with the above state of affairs, much\nstronger (and many more) positive results are known for a\nspecial case of the unrelated machines problem, namely, the\nsetting of related machines. Here, we have pij = pj/si for\nevery i, j, where pj is public knowledge, and the speed si\nis the only private parameter of machine i. This\nassumption makes the domain of players\" types single-dimensional.\nTruthfulness in such domains is equivalent to a convenient\nvalue-monotonicity condition [21, 3], which appears to make\nit significantly easier to design truthful mechanisms in such\ndomains. Archer and Tardos [3] first considered the related\nmachines setting and gave a randomized 3-approximation\ntruthful-in-expectation mechanism. The gap between the\nsingle-dimensional and multidimensional domains is perhaps\nbest exemplified by the fact that [3] showed that there\nexists a truthful mechanism that always outputs an optimal\nschedule. (Recall that in the multidimensional unrelated\nmachines setting, it is impossible to obtain a truthful\nmechanism with approximation ratio better than 2.) Various\nfollow-up results [2, 4, 1, 13] have strengthened the notion\nof truthfulness and/or improved the approximation ratio.\nSuch difficulties in moving from the single-dimensional to\nthe multidimensional setting also arise in other mechanism\ndesign settings (e.g., combinatorial auctions). Thus, in\naddition to the specific importance of scheduling in strategic\nenvironments, ideas from multidimensional scheduling may\nalso have a bearing in the more general context of truthful\nmechanism design for multidimensional domains.\nIn this paper, we consider the makespan-minimization\nproblem for a special case of unrelated machines, where\nthe processing time of a job is either low or high on\neach machine. 
More precisely, in our setting, pij \u2208 {Lj, Hj}\nfor every i, j, where the Lj, Hj values are publicly known\n(Lj \u2261low, Hj \u2261high). We call this model the\njobdependent two-values case. This model generalizes the\nclassic restricted machines setting, where pij \u2208 {Lj, \u221e} which\nhas been well-studied algorithmically. A special case of our\nmodel is when Lj = L and Hj = H for all jobs j, which we\ndenote simply as the two-values scheduling model. Both\nof our domains are multidimensional, since the machines are\nunrelated: one job may be low on one machine and high on\nthe other, while another job may follow the opposite\npattern. Thus, the private information of each machine is a\nvector specifying which jobs are low and high on it. Thus,\nthey retain the core property underlying the hardness of\ntruthful mechanism design for unrelated machines, and by\nstudying these special settings we hope to gain some insights\nthat will be useful for tackling the general problem.\nOur Results and Techniques We present various\npositive results for our multidimensional scheduling domains.\nOur first result is a general method to convert any\ncapproximation algorithm for the job-dependent two values\nsetting into a 3c-approximation truthful-in-expectation\nmechanism. This is one of the very few known results that use\nan approximation algorithm in a black-box fashion to obtain\na truthful mechanism for a multidimensional problem. Our\nresult implies that there exists a 3-approximation\ntruthfulin-expectation mechanism for the Lj-Hj setting.\nInterestingly, the proof of truthfulness is not based on supplying\nexplicit prices, and our construction does not necessarily yield\nefficiently-computable prices (but the allocation rule is\nefficiently computable). Our second result applies to the\ntwovalues setting (Lj = L, Hj = H), for which we improve both\nthe approximation ratio and strengthen the notion of\ntruthfulness. 
We obtain a deterministic 2-approximation truthful\nmechanism (along with prices) for this problem. These are\nthe first truthful mechanisms with non-trivial performance\nguarantees for a multidimensional scheduling domain.\nComplementing this, we observe that even this seemingly simple\nsetting does not admit truthful mechanisms that return an\noptimal schedule (unlike in the case of related machines).\nBy exploiting the multidimensionality of the domain, we\nprove that no truthful deterministic mechanism can obtain\nan approximation ratio better than 1.14 to the makespan\n(irrespective of computational considerations).\nThe main technique, and one of the novelties,\nunderlying our constructions and proofs, is that we do not rely on\nexplicit price specifications in order to prove the\ntruthfulness of our mechanisms. Instead we exploit certain\nalgorithmic monotonicity conditions that characterize\ntruthfulness to first design an implementable algorithm, i.e., an\nalgorithm for which prices ensuring truthfulness exist, and then\nfind these prices (by further delving into the proof of\nimplementability). This kind of analysis has been the method\nof choice in the design of truthful mechanisms for\nsingledimensional domains, where value-monotonicity yields a\nconvenient characterization enabling one to concentrate on the\nalgorithmic side of the problem (see, e.g., [3, 7, 4, 1, 13]).\nBut for multidimensional domains, almost all positive\nresults have relied on explicit price specifications in order to\nprove truthfulness (an exception is the work on unknown\nsingle-minded players in combinatorial auctions [17, 7]), a\nfact that yet again shows the gap in our understanding of\nmultidimensional vs. single-dimensional domains.\nOur work is the first to leverage monotonicity conditions\nfor truthful mechanism design in arbitrary domains. 
The monotonicity condition we use, which is sometimes called cycle monotonicity, was first proposed by Rochet [23] (see also [11]). It is a generalization of value-monotonicity and completely characterizes truthfulness in every domain. Our methods and analyses demonstrate the potential benefits of this characterization, and show that cycle monotonicity can be effectively utilized to devise truthful mechanisms for multidimensional domains. Consider, for example, our first result showing that any c-approximation algorithm can be exported to a 3c-approximation truthful-in-expectation mechanism. At the level of generality of an arbitrary approximation algorithm, it seems unlikely that one would be able to come up with prices to prove truthfulness of the constructed mechanism. But cycle monotonicity does allow us to prove such a statement. In fact, some such condition based only on the underlying algorithm (and not on the prices) seems necessary to prove such a general statement.

The method for converting approximation algorithms into truthful mechanisms involves another novel idea. Our randomized mechanism is obtained by first constructing a truthful mechanism that returns a fractional schedule. Moving to a fractional domain allows us to plug truthfulness into the approximation algorithm in a rather simple fashion, while losing a factor of 2 in the approximation ratio. We then use a suitable randomized rounding procedure to convert the fractional assignment into a random integral assignment. For this, we use a recent rounding procedure of Kumar et al. [14] that is tailored to unrelated-machine scheduling. This preserves truthfulness, but we lose another additive factor equal to the approximation ratio.
Our construction uses and extends some observations of Lavi and Swamy [16], and further demonstrates the benefits of fractional mechanisms in truthful mechanism design.

Related Work. Nisan and Ronen [22] first considered the makespan-minimization problem for unrelated machines. They gave an m-approximation positive result and proved various lower bounds. Recently, Mu'alem and Schapira [20] proved a lower bound of 2 on the approximation ratio achievable by truthful-in-expectation mechanisms, and Christodoulou, Koutsoupias, and Vidali [8] proved a (1 + √2) lower bound for deterministic truthful mechanisms. Archer and Tardos [3] first considered the related-machines problem and gave a 3-approximation truthful-in-expectation mechanism. This has been improved in [2, 4, 1, 13] to: a 2-approximation randomized mechanism [2]; an FPTAS for any fixed number of machines, given by Andelman, Azar, and Sorani [1]; and a 3-approximation deterministic mechanism, by Kovács [13].

The algorithmic problem (i.e., without requiring truthfulness) of makespan minimization on unrelated machines is well understood, and various 2-approximation algorithms are known. Lenstra, Shmoys, and Tardos [18] gave the first such algorithm. Shmoys and Tardos [25] later gave a 2-approximation algorithm for the generalized assignment problem, a generalization where there is a cost c_ij for assigning job j to machine i, and the goal is to minimize the cost subject to a bound on the makespan. Recently, Kumar, Marathe, Parthasarathy, and Srinivasan [14] gave a randomized rounding algorithm that yields the same bounds. We use their procedure in our randomized mechanism.

The characterization of truthfulness for arbitrary domains in terms of cycle monotonicity seems to have been first observed by Rochet [23] (see also Gui et al. [11]). This generalizes the value-monotonicity condition for single-dimensional domains, which was given by Myerson [21] and rediscovered by [3].
As mentioned earlier, this condition has been exploited numerous times to obtain truthful mechanisms for single-dimensional domains [3, 7, 4, 1, 13]. For convex domains (i.e., when each player's set of private values is convex), it is known that cycle monotonicity is implied by a simpler condition, called weak monotonicity [15, 6, 24]. But even this simpler condition has not found much application in truthful mechanism design for multidimensional problems.

Objectives other than social-welfare maximization and revenue maximization have received very little attention in mechanism design. In the context of combinatorial auctions, the problems of maximizing the minimum value received by a player and of computing an envy-minimizing allocation have been studied briefly. Lavi, Mu'alem, and Nisan [15] showed that the former objective cannot be implemented truthfully; Bezakova and Dani [5] gave a 0.5-approximation mechanism for two players with additive valuations. Lipton et al. [19] showed that the latter objective cannot be implemented truthfully. These lower bounds were strengthened in [20].

2. PRELIMINARIES

2.1 The scheduling domain

In our scheduling problem, we are given n jobs and m machines, and each job must be assigned to exactly one machine. In the unrelated-machines setting, each machine i is characterized by a vector of processing times (p_ij)_j, where p_ij ∈ R≥0 ∪ {∞} denotes i's processing time for job j, with the value ∞ specifying that i cannot process j. We consider two special cases of this problem:

1. The job-dependent two-values case, where p_ij ∈ {L_j, H_j} for every i, j, with L_j ≤ H_j, and the values L_j, H_j are known. This generalizes the classic scheduling model of restricted machines, where H_j = ∞.

2.
The two-values case, which is a special case of the above where L_j = L and H_j = H for all jobs j, i.e., p_ij ∈ {L, H} for every i, j.

We say that a job j is low on machine i if p_ij = L_j, and high if p_ij = H_j. We will use the terms schedule and assignment interchangeably. We represent a deterministic schedule by a vector x = (x_ij)_{i,j}, where x_ij is 1 if job j is assigned to machine i; thus we have x_ij ∈ {0, 1} for every i, j, and Σ_i x_ij = 1 for every job j. We will also consider randomized algorithms and algorithms that return a fractional assignment. In both these settings, we will again specify an assignment by a vector x = (x_ij)_{i,j} with Σ_i x_ij = 1 for every job j, but now x_ij ∈ [0, 1] for every i, j. For a randomized algorithm, x_ij is simply the probability that j is assigned to i (thus, x is a convex combination of integer assignments).

We denote the load of machine i (under a given assignment) by l_i = Σ_j x_ij p_ij, and the makespan of a schedule is defined as the maximum load on any machine, i.e., max_i l_i. The goal in the makespan-minimization problem is to assign the jobs to the machines so as to minimize the makespan of the schedule.

2.2 Mechanism design

We consider the makespan-minimization problem in the above scheduling domains in the context of mechanism design. Mechanism design studies strategic settings where the social designer needs to ensure the cooperation of the different entities involved in the algorithmic procedure. Following the work of Nisan and Ronen [22], we consider the machines to be the strategic players or agents. The social designer holds the set of jobs that need to be assigned, but does not know the (true) processing times of these jobs on the different machines. Each machine is a selfish entity that privately knows its own processing time for each job.
Scheduling a job on a machine incurs a cost to the machine equal to the true processing time of the job on that machine, and a machine may choose to misrepresent its vector of processing times, which is private, in order to decrease its cost.

We consider direct-revelation mechanisms: each machine reports its (possibly false) vector of processing times, and the mechanism then computes a schedule and hands out payments to the players (i.e., the machines) to compensate them for the cost they incur in processing their assigned jobs. A (direct-revelation) mechanism thus consists of a tuple (x, P): x specifies the schedule, and P = {P_i} specifies the payments handed out to the machines, where both x and the P_i's are functions of the reported processing times p = (p_ij)_{i,j}. The mechanism's goal is to compute a schedule that has near-optimal makespan with respect to the true processing times; a machine i, however, is only interested in maximizing its own utility, P_i − l_i, where l_i is its load under the output assignment, and may declare false processing times if this could increase its utility. The mechanism must therefore incentivize the machines/players to truthfully reveal their processing times via the payments.
This is made precise using the notion of dominant-strategy truthfulness.

Definition 2.1 (Truthfulness) A scheduling mechanism is truthful if, for every machine i, every vector of processing times of the other machines p_{−i}, every true processing-time vector p^1_i, and any other vector p^2_i of machine i, we have:

P^1_i − Σ_j x^1_ij p^1_ij ≥ P^2_i − Σ_j x^2_ij p^1_ij,  (1)

where (x^1, P^1) and (x^2, P^2) are, respectively, the schedule and payments when the other machines declare p_{−i} and machine i declares p^1_i or p^2_i, i.e., x^1 = x(p^1_i, p_{−i}), P^1_i = P_i(p^1_i, p_{−i}) and x^2 = x(p^2_i, p_{−i}), P^2_i = P_i(p^2_i, p_{−i}).

To put it in words: in a truthful mechanism, no machine can improve its utility by declaring a false processing time, no matter what the other machines declare.

We will also consider fractional mechanisms, which return a fractional assignment, and randomized mechanisms, which are allowed to toss coins and where the assignment and the payments may be random variables. The notion of truthfulness for a fractional mechanism is the same as in Definition 2.1, where x^1, x^2 are now fractional assignments. For a randomized mechanism, we will consider the notion of truthfulness in expectation [3], which means that a machine (player) maximizes her expected utility by declaring her true processing-time vector. Inequality (1) also defines truthfulness-in-expectation for a randomized mechanism, where P^1_i, P^2_i now denote the expected payments made to player i, and x^1, x^2 are the fractional assignments describing the randomized algorithm's schedule (i.e., x^k_ij is the probability that j is assigned to i in the schedule output for (p^k_i, p_{−i})).

For our two scheduling domains, the informational assumption is that the values L_j, H_j are publicly known. The private information of a machine is which jobs have value L_j (or L) and which ones have value H_j (or H) on it.
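The quantities that appear in Definition 2.1 are straightforward to state in code. The following is a minimal illustrative sketch (ours, not from the paper; the instance `p`, the assignment `x`, and the payment values are made-up toy data), computing machine loads, the makespan, and the quasi-linear utility that inequality (1) compares:

```python
def machine_loads(x, p):
    """Load of each machine i under assignment x: l_i = sum_j x_ij * p_ij.
    x[i][j] may be integral (0/1) or fractional in [0, 1]."""
    return [sum(xij * pij for xij, pij in zip(xi, pi))
            for xi, pi in zip(x, p)]

def makespan(x, p):
    """Makespan of a schedule: the maximum machine load, max_i l_i."""
    return max(machine_loads(x, p))

def utility(payment, x_row, true_p_row):
    """Machine i's quasi-linear utility, P_i - l_i, where the load is
    computed with i's TRUE processing times (both sides of (1) use
    the true vector p^1_i)."""
    return payment - sum(xj * pj for xj, pj in zip(x_row, true_p_row))

# Two machines, three jobs, in the two-values case with L = 1, H = 3.
p = [[1, 3, 1],
     [3, 1, 3]]
x = [[1, 0, 1],   # jobs 0 and 2 on machine 0 (both low there)
     [0, 1, 0]]   # job 1 on machine 1 (low there)
# machine_loads(x, p) -> [2, 1]; makespan(x, p) -> 2
```

Truthfulness then amounts to the assertion that, for every fixed p_{−i} and every false declaration, `utility` evaluated at the truthful outcome is at least `utility` evaluated at the outcome of the lie.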
We emphasize that both of our domains are multidimensional, since each machine i needs to specify a vector saying which jobs are low and which are high on it.

3. CYCLE MONOTONICITY

Although truthfulness is defined in terms of payments, it turns out that truthfulness actually boils down to a certain algorithmic condition of monotonicity. This seems to have been first observed for multidimensional domains by Rochet [23] in 1987, and has been used successfully in algorithmic mechanism design several times, but for single-dimensional domains. For multidimensional domains, however, the monotonicity condition is more involved, and there has been no success in employing it in the design of truthful mechanisms. Most positive results for multidimensional domains have relied on explicit price specifications in order to prove truthfulness. One of the main contributions of this paper is to demonstrate that the monotonicity condition for multidimensional settings, which is sometimes called cycle monotonicity, can indeed be effectively utilized to devise truthful mechanisms. We include a brief exposition on it for completeness; the exposition here is largely based on [11].

Cycle monotonicity is best described in the abstract social choice setting: there is a finite set A of alternatives, there are m players, and each player has a private type (valuation function) v_i : A → R, where v_i(a) should be interpreted as i's value for alternative a. In the scheduling domain, A represents all the possible assignments of jobs to machines, and v_i(a) is the negative of i's load in the schedule a. Let V_i denote the set of all possible types of player i.
A mechanism is a tuple (f, {P_i}), where f : V_1 × ··· × V_m → A is the algorithm for choosing the alternative, and P_i : V_1 × ··· × V_m → R is the price charged to player i (in the scheduling setting, the mechanism pays the players, which corresponds to negative prices). The mechanism is truthful if for every i, every v_{−i} ∈ V_{−i} = Π_{i'≠i} V_{i'}, and any v_i, v'_i ∈ V_i, we have v_i(a) − P_i(v_i, v_{−i}) ≥ v_i(b) − P_i(v'_i, v_{−i}), where a = f(v_i, v_{−i}) and b = f(v'_i, v_{−i}). A basic question that arises is: given an algorithm f : V_1 × ··· × V_m → A, do there exist prices that make the resulting mechanism truthful? It is well known (see, e.g., [15]) that the price P_i can only depend on the alternative chosen and the others' declarations; that is, we may write P_i : V_{−i} × A → R. Thus, truthfulness implies that for every i, every v_{−i} ∈ V_{−i}, and any v_i, v'_i ∈ V_i with f(v_i, v_{−i}) = a and f(v'_i, v_{−i}) = b, we have v_i(a) − P_i(a, v_{−i}) ≥ v_i(b) − P_i(b, v_{−i}).

Now fix a player i, and fix the declarations v_{−i} of the others. We seek an assignment to the variables {P_a}_{a∈A} such that v_i(a) − v_i(b) ≥ P_a − P_b for every a, b ∈ A and v_i ∈ V_i with f(v_i, v_{−i}) = a. (Strictly speaking, we should use f(V_i, v_{−i}) instead of A here.) Define δ_{a,b} := inf{v_i(a) − v_i(b) : v_i ∈ V_i, f(v_i, v_{−i}) = a}. We can now rephrase the above price-assignment problem: we seek an assignment to the variables {P_a}_{a∈A} such that

P_a − P_b ≤ δ_{a,b}  for all a, b ∈ A.  (2)

This is easily solved by looking at the allocation graph and applying a standard basic result of graph theory.

Definition 3.1 (Gui et al.
[11]) The allocation graph of f is a directed weighted graph G = (A, E), where E = A × A and the weight of an edge b → a (for any a, b ∈ A) is δ_{a,b}.

Theorem 3.2 There exists a feasible assignment to (2) iff the allocation graph has no negative-length cycles. Furthermore, if all cycles are non-negative, a feasible assignment is obtained as follows: fix an arbitrary node a* ∈ A and set P_a to be the length of the shortest path from a* to a.

This leads to the following definition, which is another way of phrasing the condition that the allocation graph have no negative cycles.

Definition 3.3 (Cycle monotonicity) A social choice function f satisfies cycle monotonicity if for every player i, every v_{−i} ∈ V_{−i}, every integer K, and every v^1_i, ..., v^K_i ∈ V_i,

Σ_{k=1..K} [v^k_i(a_k) − v^k_i(a_{k+1})] ≥ 0,

where a_k = f(v^k_i, v_{−i}) for 1 ≤ k ≤ K, and a_{K+1} = a_1.

Corollary 3.4 There exist prices P such that the mechanism (f, P) is truthful iff f satisfies cycle monotonicity.¹

We now consider our specific scheduling domain. Fix a player i, the declarations p_{−i}, and any p^1_i, ..., p^K_i. Let x(p^k_i, p_{−i}) = x^k for 1 ≤ k ≤ K, and let x^{K+1} = x^1, p^{K+1} = p^1. Here x^k may be a {0, 1}-assignment or a fractional assignment. We have v^k_i(x^k) = −Σ_j x^k_ij p^k_ij, so cycle monotonicity translates to Σ_{k=1..K} [−Σ_j x^k_ij p^k_ij + Σ_j x^{k+1}_ij p^k_ij] ≥ 0. Rearranging, we get

Σ_{k=1..K} Σ_j x^{k+1}_ij (p^k_ij − p^{k+1}_ij) ≥ 0.  (3)

Thus (3) reduces our mechanism design problem to a concrete algorithmic problem. For most of this paper, we will consequently ignore any strategic considerations and focus on designing an approximation algorithm for minimizing makespan that satisfies (3).

4.
A GENERAL TECHNIQUE TO OBTAIN RANDOMIZED MECHANISMS

In this section, we consider the case of job-dependent L_j, H_j values (with L_j ≤ H_j), which generalizes the classical restricted-machines model (where H_j = ∞). We show the power of randomization by providing a general technique that converts any c-approximation algorithm into a 3c-approximation, truthful-in-expectation mechanism. This is one of the few results that shows how to export approximation algorithms for a multidimensional problem into truthful mechanisms when the algorithm is given as a black box.

Our construction and proof are simple, and based on two ideas. First, as outlined above, we prove truthfulness using cycle monotonicity. It seems unlikely that, for an arbitrary approximation algorithm given only as a black box, one would be able to come up with payments in order to prove truthfulness; but cycle monotonicity allows us to prove precisely this. Second, we obtain our randomized mechanism by (a) first moving to a fractional domain and constructing a fractional truthful mechanism that is allowed to return fractional assignments, and then (b) using a rounding procedure to express the fractional schedule as a convex combination of integer schedules. This builds upon a theme introduced by Lavi and Swamy [16], namely that of using fractional mechanisms to obtain truthful-in-expectation mechanisms.

¹It is not clear if Theorem 3.2, and hence this statement, holds if A is not finite.

We should point out, however, that one cannot simply plug in the results of [16]. Their results hold for social-welfare-maximization problems and rely on using VCG to obtain a fractional truthful mechanism. VCG, however, does not apply to makespan minimization, and in our case even the existence of a near-optimal fractional truthful mechanism is not known. We use the following result, adapted from [16].

Lemma 4.1 (Lavi and Swamy [16]) Let M = (x, P) be a fractional truthful mechanism.
Let A be a randomized rounding algorithm that, given a fractional assignment x, outputs a random assignment X such that E[X_ij] = x_ij for all i, j. Then there exist payments P' such that the mechanism M' = (A, P') is truthful in expectation. Furthermore, if M is individually rational, then M' is individually rational for every realization of the coin tosses.

Let OPT(p) denote the optimal makespan (over integer schedules) for instance p. As our first step, we take a c-approximation algorithm and convert it to a 2c-approximation fractional truthful mechanism. This conversion works even when the approximation algorithm returns only a fractional schedule (satisfying certain properties) of makespan at most c · OPT(p) for every instance p. We prove truthfulness by showing that the fractional algorithm satisfies cycle monotonicity (3). Notice that the alternative set of our fractional mechanism is finite (although the set of all fractional assignments is infinite): its cardinality is at most that of the input domain, which is at most 2^{mn} in the two-values case. Thus, we can apply Corollary 3.4 here. To convert this fractional truthful mechanism into a randomized truthful mechanism, we need a randomized rounding procedure satisfying the requirements of Lemma 4.1. Fortunately, such a procedure is already provided by Kumar, Marathe, Parthasarathy, and Srinivasan [14].

Lemma 4.2 (Kumar et al. [14]) Given a fractional assignment x and a processing-time vector p, there exists a randomized rounding procedure that yields a (random) assignment X such that:

1. for any i, j, E[X_ij] = x_ij;

2. for any i, Σ_j X_ij p_ij < Σ_j x_ij p_ij + max_{j : x_ij ∈ (0,1)} p_ij with probability 1.

Property 1 will be used to obtain truthfulness in expectation, and property 2 will allow us to prove an approximation guarantee.
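Before condition (3) is used in the analysis, it may help to see the two algorithmic halves of Section 3 concretely. The sketch below is ours and purely illustrative (the `delta` matrix and the example cycles are made-up inputs): `cycle_lhs` evaluates the left-hand side of (3) for one machine on a cycle of declarations, and `theorem_32_prices` computes the shortest-path prices of Theorem 3.2 by Bellman-Ford, returning `None` when the allocation graph has a negative cycle:

```python
def cycle_lhs(x_rows, p_rows):
    """LHS of condition (3) for a fixed machine i and fixed p_{-i}:
    sum_k sum_j x^{k+1}_ij (p^k_ij - p^{k+1}_ij), indices cyclic.
    x_rows[k] is machine i's row of the output on declaration p_rows[k].
    Cycle monotonicity requires a non-negative value on every cycle."""
    K = len(x_rows)
    return sum(x_rows[(k + 1) % K][j] * (p_rows[k][j] - p_rows[(k + 1) % K][j])
               for k in range(K) for j in range(len(p_rows[0])))

def theorem_32_prices(delta, start=0):
    """Bellman-Ford on the allocation graph: delta[a][b] is the weight
    of the edge b -> a (i.e. delta_{a,b}).  Returns prices P with
    P[a] = shortest-path length from `start` to a, so P[a] - P[b] <=
    delta[a][b] as in (2); returns None if some cycle is negative
    (then no truthful prices exist, per Theorem 3.2)."""
    n, INF = len(delta), float("inf")
    dist = [INF] * n
    dist[start] = 0.0
    for _ in range(n - 1):                      # standard relaxation rounds
        for b in range(n):
            if dist[b] < INF:
                for a in range(n):
                    if dist[b] + delta[a][b] < dist[a]:
                        dist[a] = dist[b] + delta[a][b]
    for b in range(n):                          # extra pass: negative cycle?
        if dist[b] < INF:
            for a in range(n):
                if dist[b] + delta[a][b] < dist[a] - 1e-12:
                    return None
    return dist

# One job, m = 2, L = 1, H = 5.  An algorithm in the spirit of Lemma 4.3
# (x <= 1/m on a high declaration, x >= 1/m on a low one): declaring L
# gives x = 0.75, declaring H gives x = 0.25.
print(cycle_lhs([[0.75], [0.25]], [[1], [5]]))   # 2.0 >= 0: monotone cycle
print(cycle_lhs([[0.25], [0.75]], [[1], [5]]))   # -2.0 < 0: violates (3)
```

The same two checks, run over all cycles of declarations, are exactly what the proofs of Lemma 4.3 and Theorem 5.3 establish analytically.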
We first show that any algorithm that returns a fractional assignment having certain properties satisfies cycle monotonicity.

Lemma 4.3 Let A be an algorithm that for any input p outputs a (fractional) assignment x such that, if p_ij = H_j then x_ij ≤ 1/m, and if p_ij = L_j then x_ij ≥ 1/m. Then A satisfies cycle monotonicity.

Proof. Fix a player i and the vector of processing times of the other players p_{−i}. We need to prove (3), that is, Σ_{k=1..K} Σ_j x^{k+1}_ij (p^k_ij − p^{k+1}_ij) ≥ 0 for every p^1_i, ..., p^K_i, where index k = K + 1 is taken to be k = 1. We will show that for every job j, Σ_{k=1..K} x^{k+1}_ij (p^k_ij − p^{k+1}_ij) ≥ 0.

If p^k_ij is the same for all k (either always L_j or always H_j), then the above inequality clearly holds. Otherwise, we can divide the indices 1, ..., K into maximal segments, where a maximal segment is a maximal set of consecutive indices k', k' + 1, ..., k'' − 1, k'' (indices taken cyclically, so K + 1 ≡ 1) such that p^{k'}_ij = H_j ≥ p^{k'+1}_ij ≥ ··· ≥ p^{k''}_ij = L_j. Such a division exists because there must be some k such that p^k_ij = H_j > p^{k−1}_ij = L_j: we take k' = k and keep including indices in the segment until we reach a k such that p^k_ij = L_j and p^{k+1}_ij = H_j; we set k'' = k and start a new maximal segment at index k + 1. Continuing recursively, all indices are included in some maximal segment. We will show that for every such maximal segment k', k' + 1, ..., k'',

Σ_{k'−1 ≤ k < k''} x^{k+1}_ij (p^k_ij − p^{k+1}_ij) ≥ 0.

Summing over all maximal segments then proves the claim, since these ranges tile the whole cycle. Within a segment, the only nonzero term with k' ≤ k < k'' is the single drop from H_j to L_j, where p^k_ij = H_j and p^{k+1}_ij = L_j; there x^{k+1}_ij ≥ 1/m, so this term is at least (H_j − L_j)/m. The boundary term k = k' − 1 has p^{k'−1}_ij = L_j and p^{k'}_ij = H_j with x^{k'}_ij ≤ 1/m, so it is at least −(H_j − L_j)/m. The segment total is therefore non-negative.

Algorithm 1 Let A be an algorithm that for every input p returns a fractional assignment x (with Σ_i x_ij = 1 for every job j) such that x_ij > 0 implies that p_ij ≤ T, where T is the makespan of x. (In particular, note that any algorithm that returns an integral assignment has these properties.) Our algorithm, which we call A', returns the following assignment x^F. Initialize x^F_ij = 0 for all i, j. For every i, j:

1. if p_ij = H_j, set x^F_ij = Σ_{i' : p_{i'j} = H_j} x_{i'j} / m;

2.
if p_ij = L_j, set x^F_ij = x_ij + Σ_{i'≠i : p_{i'j} = L_j} (x_{i'j} − x_ij)/m + Σ_{i' : p_{i'j} = H_j} x_{i'j}/m.

Theorem 4.4 Suppose algorithm A satisfies the conditions in Algorithm 1 and returns a schedule of makespan at most c · OPT(p) for every p. Then the algorithm A' constructed above is a 2c-approximation, cycle-monotone fractional algorithm. Moreover, if x^F_ij > 0 on input p, then p_ij ≤ c · OPT(p).

Proof. First, note that x^F is a valid assignment: for every job j, Σ_i x^F_ij = Σ_i x_ij + Σ_{i, i'≠i : p_ij = p_{i'j} = L_j} (x_{i'j} − x_ij)/m = Σ_i x_ij = 1. We also have that if p_ij = H_j, then x^F_ij = Σ_{i' : p_{i'j} = H_j} x_{i'j}/m ≤ 1/m. If p_ij = L_j, then x^F_ij = x_ij (1 − ℓ/m) + Σ_{i'≠i} x_{i'j}/m, where ℓ = |{i' ≠ i : p_{i'j} = L_j}| ≤ m − 1; so x^F_ij ≥ Σ_{i'} x_{i'j}/m ≥ 1/m. Thus, by Lemma 4.3, A' satisfies cycle monotonicity.

The total load on any machine i under x^F is at most Σ_{j : p_ij = H_j} Σ_{i' : p_{i'j} = H_j} H_j · x_{i'j}/m + Σ_{j : p_ij = L_j} L_j (x_ij + Σ_{i'≠i} x_{i'j}/m), which is at most Σ_j p_ij x_ij + Σ_{i'≠i} Σ_j p_{i'j} x_{i'j}/m ≤ 2c · OPT(p). Finally, if x^F_ij > 0 and p_ij = L_j, then p_ij ≤ OPT(p). If p_ij = H_j, then for some i' (possibly i) with p_{i'j} = H_j we have x_{i'j} > 0, so by assumption, p_{i'j} = H_j = p_ij ≤ c · OPT(p).

Theorem 4.4, combined with Lemmas 4.1 and 4.2, gives a 3c-approximation, truthful-in-expectation mechanism. The computation of payments will depend on the actual approximation algorithm used.
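The construction of x^F in Algorithm 1 is mechanical enough to sketch directly. The fragment below is an illustrative implementation of ours (function and variable names are not from the paper; it assumes L_j < H_j and that the input x is a valid fractional assignment, i.e., each column sums to 1):

```python
def make_xF(x, p, L, H):
    """Compute x^F from x as in steps 1 and 2 of Algorithm 1.
    p[i][j] is either L[j] or H[j] (assumed L[j] < H[j]); x is a
    fractional assignment whose columns sum to 1.  High entries get a
    1/m share of the total flow on high machines; low entries keep
    x_ij plus the averaged corrections.  Per the proof of Theorem 4.4,
    the result has x^F_ij <= 1/m on high entries and >= 1/m on low
    entries, i.e., the hypothesis of Lemma 4.3."""
    m, n = len(p), len(p[0])
    xF = [[0.0] * n for _ in range(m)]
    for j in range(n):
        high_mass = sum(x[i2][j] for i2 in range(m) if p[i2][j] == H[j])
        for i in range(m):
            if p[i][j] == H[j]:
                xF[i][j] = high_mass / m
            else:
                low_corr = sum(x[i2][j] - x[i][j] for i2 in range(m)
                               if i2 != i and p[i2][j] == L[j])
                xF[i][j] = x[i][j] + low_corr / m + high_mass / m
    return xF

# One job, m = 2, L_0 = 1, H_0 = 5: low on machine 0, high on machine 1.
xF = make_xF([[0.5], [0.5]], [[1], [5]], L=[1], H=[5])
# xF == [[0.75], [0.25]]: the column still sums to 1, the low entry is
# >= 1/m, and the high entry is <= 1/m, as the proof requires.
```

Note how the example matches the proof: half of the probability mass sat on a high machine, and the smoothing pushes the high entry down to 1/4 ≤ 1/2 while keeping the job fully assigned.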
Section 3 does, however, give an explicit procedure to compute payments ensuring truthfulness, though perhaps not in polynomial time.

Theorem 4.5 The procedure in Algorithm 1 converts any c-approximation fractional algorithm into a 3c-approximation, truthful-in-expectation mechanism.

Taking A in Algorithm 1 to be an algorithm that returns an LP-optimal assignment satisfying the required conditions (see [18, 25]), we obtain a 3-approximation mechanism.

Corollary 4.6 There is a truthful-in-expectation mechanism with approximation ratio 3 for the L_j-H_j setting.

5. A DETERMINISTIC MECHANISM FOR THE TWO-VALUES CASE

We now present a deterministic 2-approximation truthful mechanism for the case where p_ij ∈ {L, H} for all i, j. In the sequel, we will often say that j is assigned to a low machine to denote that j is assigned to a machine i where p_ij = L. We will call a job j a low job of machine i if p_ij = L; the low-load of i is the load on i due to its low jobs, i.e., Σ_{j : p_ij = L} x_ij p_ij.

As in Section 4, our goal is to obtain an approximation algorithm that satisfies cycle monotonicity. We first obtain a simplification of condition (3) for our two-values {L, H} scheduling domain (Proposition 5.1) that will be convenient to work with. We describe our algorithm in Section 5.1. In Section 5.2, we bound its approximation guarantee and prove that it satisfies cycle monotonicity. In Section 5.3, we compute explicit payments, giving a truthful mechanism. Finally, in Section 5.4, we show that no deterministic mechanism can achieve the optimal makespan. Define

n^{k,ℓ}_H = |{j : x^k_ij = 1, p^k_ij = L, p^ℓ_ij = H}|  (4)

n^{k,ℓ}_L = |{j : x^k_ij = 1, p^k_ij = H, p^ℓ_ij = L}|
(5)

Then Σ_j x^{k+1}_ij (p^k_ij − p^{k+1}_ij) = (n^{k+1,k}_H − n^{k+1,k}_L)(H − L). Plugging this into (3) and dividing by (H − L), we get the following.

Proposition 5.1 Cycle monotonicity in the two-values scheduling domain is equivalent to the condition that, for every player i, every p_{−i}, every integer K, and every p^1_i, ..., p^K_i,

Σ_{k=1..K} (n^{k+1,k}_H − n^{k+1,k}_L) ≥ 0.  (6)

5.1 A cycle-monotone approximation algorithm

We now describe an algorithm that satisfies condition (6) and achieves a 2-approximation. We will assume that L and H are integers, which is without loss of generality.

A core component of our algorithm is a procedure that takes an integer load threshold T and computes an integer partial assignment x of jobs to machines such that (a) a job is only assigned to a low machine; (b) the load on any machine is at most T; and (c) the number of jobs assigned is maximized. Such an assignment can be computed by solving a max-flow problem: we construct a directed bipartite graph with a node for every job j and every machine i, and an edge (j, i) of infinite capacity if p_ij = L. We also add a source node s with edges (s, j) having capacity 1, and a sink node t with edges (i, t) having capacity ⌊T/L⌋. Clearly, any integer flow in this network corresponds to a valid integer partial assignment x of makespan at most T, where x_ij = 1 iff there is a flow of 1 on the edge from j to i. We will therefore use the terms assignment and flow interchangeably. Moreover, there is always an integral max-flow (since all capacities are integers). We will often refer to such a max-flow as the max-flow for (p, T).

We need one additional concept before describing the algorithm. There could potentially be many max-flows, and we will be interested in the most balanced ones, which we formally define as follows. Fix some max-flow.
Let n^i_{p,T} be the amount of flow on edge (i, t) (or, equivalently, the number of jobs assigned to i in the corresponding schedule), and let n_{p,T} be the total size of the max-flow, i.e., n_{p,T} = Σ_i n^i_{p,T}. For any T' ≤ T, define n^i_{p,T}|_{T'} = min(n^i_{p,T}, ⌊T'/L⌋); that is, we truncate the flow/assignment on i so that the total load on i is at most T'. Define n_{p,T}|_{T'} = Σ_i n^i_{p,T}|_{T'}. We define a prefix-maximal flow or assignment for T as follows.

Definition 5.2 (Prefix-maximal flow) A flow for the above network with threshold T is prefix-maximal if for every integer T' ≤ T, we have n_{p,T}|_{T'} = n_{p,T'}.

That is, in a prefix-maximal flow for (p, T), if we truncate the flow at some T' ≤ T, we are left with a max-flow for (p, T'). An elementary fact about flows is that if an assignment/flow x is not a maximum flow for (p, T), then there must be an augmenting path P = (s, j_1, i_1, ..., j_K, i_K, t) in the residual graph that allows us to increase the size of the flow. The interpretation is that in the current assignment, j_1 is unassigned; x_{i_ℓ j_ℓ} = 0, which is denoted by the forward edges (j_ℓ, i_ℓ); and x_{i_ℓ j_{ℓ+1}} = 1, which is denoted by the reverse edges (i_ℓ, j_{ℓ+1}). Augmenting x using P changes the assignment so that each j_ℓ is assigned to i_ℓ in the new assignment, which increases the value of the flow by 1. A simple augmenting path does not decrease the load of any machine; thus, one can argue that a prefix-maximal flow for a threshold T always exists: we first compute a max-flow for threshold 1, use simple augmenting paths to augment it to a max-flow for threshold 2, and repeat, each time augmenting the max-flow for the previous threshold t to a max-flow for threshold t + 1 using simple augmenting paths.

Algorithm 2 Given a vector of processing times p, construct an assignment of jobs to machines as follows.

1.
Compute T*(p) = min{T ≥ H, T a multiple of L : n_{p,T} · L + (n − n_{p,T}) · H ≤ m · T}. Note that n_{p,T} · L + (n − n_{p,T}) · H − m · T is a decreasing function of T, so T*(p) can be computed in polynomial time via binary search.

2. Compute a prefix-maximal flow for threshold T*(p) and the corresponding partial assignment (i.e., j is assigned to i iff there is 1 unit of flow on edge (j, i)).

3. Assign the remaining jobs, i.e., the jobs unassigned in the flow phase, in a greedy manner: consider these jobs in an arbitrary order and assign each job to the machine with the currently lowest load (where the load includes the jobs assigned in the flow phase).

Our algorithm needs to compute a prefix-maximal assignment for the threshold T*(p). The proof showing the existence of a prefix-maximal flow only yields a pseudo-polynomial-time algorithm for computing it. But notice that the max-flow remains the same for any T ≥ T' = n · L, so a prefix-maximal flow for T' is also prefix-maximal for any T ≥ T'. Thus, we only need to compute a prefix-maximal flow for T = min{T*(p), T'}. This can be done in polynomial time by using the iterative augmenting-paths algorithm from the existence proof to compute, iteratively, the max-flow for the polynomially many multiples of L up to (and including) T.

Theorem 5.3 One can efficiently compute payments that, when combined with Algorithm 2, yield a deterministic 2-approximation truthful mechanism for the two-values scheduling domain.

5.2 Analysis

Let OPT(p) denote the optimal makespan for p. We now prove that Algorithm 2 is a 2-approximation algorithm that satisfies cycle monotonicity. This will then allow us to compute payments in Section 5.3 and prove Theorem 5.3.

5.2.1 Proof of approximation

Claim 5.4 If OPT(p) < H, the makespan of the returned schedule is at most OPT(p).

Proof.
If OPT(p) < H, it must be that the optimal schedule assigns all jobs to low machines, so n_{p,OPT(p)} = n. Thus, we have T*(p) = L · ⌈H/L⌉. Furthermore, since we compute a prefix-maximal flow for threshold T*(p), we have n_{p,T*(p)}|_{OPT(p)} = n_{p,OPT(p)} = n, which implies that the load on each machine is at most OPT(p). So in this case the makespan is at most (and hence exactly) OPT(p).

Claim 5.5 If OPT(p) ≥ H, then T*(p) ≤ L · ⌈OPT(p)/L⌉ ≤ OPT(p) + L.

Proof. Let n_{OPT(p)} be the number of jobs assigned to low machines in an optimal schedule. The total load on all machines is exactly n_{OPT(p)} · L + (n − n_{OPT(p)}) · H, and is at most m · OPT(p), since every machine has load at most OPT(p). So taking T = L · ⌈OPT(p)/L⌉ ≥ H, since n_{p,T} ≥ n_{OPT(p)} we have that n_{p,T} · L + (n − n_{p,T}) · H ≤ m · T. Hence T*(p), the smallest such T, is at most L · ⌈OPT(p)/L⌉.

Claim 5.6 Each job assigned in step 3 of the algorithm is assigned to a high machine.

Proof. Suppose j is assigned to machine i in step 3. If p_ij = L, then we must have n^i_{p,T*(p)} = T*(p)/L; otherwise we could have assigned j to i in step 2 to obtain a flow of larger value. So at the point just before j is assigned in step 3, the load of each machine must be at least T*(p) (machine i has load T*(p), and by the greedy rule every other machine has load at least that of i). Hence, the total load after j is assigned is at least m · T*(p) + L > m · T*(p). But the total load is also at most n_{p,T*(p)} · L + (n − n_{p,T*(p)}) · H ≤ m · T*(p), yielding a contradiction.

Lemma 5.7 The above algorithm returns a schedule with makespan at most OPT(p) + max{L, H(1 − 1/m)} ≤ 2 · OPT(p).

Proof. If OPT(p) < H, then by Claim 5.4 we are done. So suppose OPT(p) ≥ H. By Claim 5.5, we know that T*(p) ≤ OPT(p) + L.
If there are no unassigned jobs after step 2 of the algorithm, then the makespan is at most T*(p) and we are done. So assume that there are some unassigned jobs after step 2. We will show that the makespan after step 3 is at most T + H(1 − 1/m), where T = min{T*(p), OPT(p)}. Suppose the claim is false. Let i be the machine with the maximum load, so l_i > T + H(1 − 1/m). Let j be the last job assigned to i in step 3, and consider the point just before it is assigned to i. So l_i > T − H/m at this point. Also, since j is assigned to i, by our greedy rule, the load on all the other machines must be at least l_i. So the total load after j is assigned is at least H + m·l_i > m·T (since p_{ij} = H by Claim 5.6). Also, for any assignment of jobs to machines in step 3, the total load is at most n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H, since there are n_{p,T*(p)} jobs assigned to low machines. Therefore, we must have m·T < n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H. But we will argue that m·T ≥ n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H, which yields a contradiction.

If T = T*(p), this follows from the definition of T*(p). If T = OPT(p), then letting n_{OPT(p)} denote the number of jobs assigned to low machines in an optimum schedule, we have n_{p,T*(p)} ≥ n_{OPT(p)}. So n_{p,T*(p)}·L + (n − n_{p,T*(p)})·H ≤ n_{OPT(p)}·L + (n − n_{OPT(p)})·H. This is exactly the total load in an optimum schedule, which is at most m·OPT(p).

5.2.2 Proof of cycle monotonicity
Lemma 5.8 Consider any two instances p = (p_i, p_{−i}) and p' = (p'_i, p_{−i}), where p'_i ≥ p_i, i.e., p'_{ij} ≥ p_{ij} for all j. If T is a threshold such that n_{p,T} > n_{p',T}, then every maximum flow x' for (p', T) must assign all jobs j such that p'_{ij} = L.

Proof.
Let G_{p'} denote the residual graph for (p', T) and flow x'. Suppose by contradiction that there exists a job j* with p'_{ij*} = L that is unassigned by x'. Since p'_i ≥ p_i, all edges (j, i) that are present in the network for (p', T) are also present in the network for (p, T). Thus, x' is a valid flow for (p, T). But it is not a max-flow, since n_{p,T} > n_{p',T}. So there exists an augmenting path P in the residual graph for (p, T) and flow x'. Observe that node i must be included in P, otherwise P would also be an augmenting path in the residual graph G_{p'}, contradicting the fact that x' is a max-flow. In particular, this implies that there is a path P' ⊆ P from i to the sink t. Let P' = (i, j_1, i_1, ..., j_K, i_K, t). All the edges of P' are also present as edges in G_{p'}: all reverse edges (i_ℓ, j_{ℓ+1}) are present, since such an edge implies that x'_{i_ℓ j_{ℓ+1}} = 1; all forward edges (j_ℓ, i_ℓ) are present, since i_ℓ ≠ i, so p'_{i_ℓ j_ℓ} = p_{i_ℓ j_ℓ} = L, and x'_{i_ℓ j_ℓ} = 0. But then there is an augmenting path (j*, i, j_1, i_1, ..., j_K, i_K, t) in G_{p'}, which contradicts the maximality of x'.

Let L denote the all-low processing-time vector. Define T^L_i(p_{−i}) = T*(L, p_{−i}). Since we are focusing on machine i, and p_{−i} is fixed throughout, we abbreviate T^L_i(p_{−i}) to T^L. Also, let p^L = (L, p_{−i}). Note that T*(p) ≥ T^L for every instance p = (p_i, p_{−i}).

Corollary 5.9 Let p = (p_i, p_{−i}) be any instance and let x be any prefix-maximal flow for (p, T*(p)). Then the low-load on machine i is at most T^L.

Proof. Let T* = T*(p). If T* = T^L, then this is clearly true. Otherwise, consider the assignment x truncated at T^L. Since x is prefix-maximal, we know that this constitutes a max-flow for (p, T^L). Also, n_{p,T^L} < n_{p^L,T^L}, because T* > T^L. So by Lemma 5.8, this truncated flow must assign all the low jobs of i.
Hence, there cannot be a job j with p_{ij} = L that is assigned to i after the T^L threshold, since then j would not be assigned by this truncated flow. Thus, the low-load of i is at most T^L.

Using these properties, we will prove the following key inequality: for any p^1 = (p_{−i}, p^1_i) and p^2 = (p_{−i}, p^2_i),

n_{p^1,T^L} ≥ n_{p^2,T^L} − n^{2,1}_H + n^{2,1}_L   (7)

where n^{2,1}_H and n^{2,1}_L are as defined in (4) and (5), respectively. Notice that this immediately implies cycle monotonicity, since if we take p^1 = p^k and p^2 = p^{k+1}, then (7) implies that n_{p^k,T^L} ≥ n_{p^{k+1},T^L} − n^{k+1,k}_H + n^{k+1,k}_L; summing this over all k = 1, ..., K gives (6).

Lemma 5.10 If T*(p^1) > T^L, then (7) holds.

Proof. Let T^1 = T*(p^1) and T^2 = T*(p^2). Take the prefix-maximal flow x^2 for (p^2, T^2), truncate it at T^L, and remove from this assignment all the jobs that are counted in n^{2,1}_H, that is, all jobs j such that x^2_{ij} = 1, p^2_{ij} = L, p^1_{ij} = H. Denote this flow by x. Observe that x is a valid flow for (p^1, T^L), and the size of this flow is exactly n_{p^2,T^2}|_{T^L} − n^{2,1}_H = n_{p^2,T^L} − n^{2,1}_H. Also, none of the jobs that are counted in n^{2,1}_L are assigned by x, since each such job j is high on i in p^2. Since T^1 > T^L, we must have n_{p^1,T^L} < n_{p^L,T^L}. So if we augment x to a max-flow for (p^1, T^L), then by Lemma 5.8 (with p = p^L and p' = p^1), all the jobs corresponding to n^{2,1}_L must be assigned in this max-flow. Thus, the size of this max-flow is at least (size of x) + n^{2,1}_L, that is, n_{p^1,T^L} ≥ n_{p^2,T^L} − n^{2,1}_H + n^{2,1}_L, as claimed.

Lemma 5.11 Suppose T*(p^1) = T^L. Then (7) holds.

Proof. Again let T^1 = T*(p^1) = T^L and T^2 = T*(p^2). Let x^1, x^2 be the complete assignments, i.e., the assignments after both steps 2 and 3, computed by our algorithm for p^1, p^2 respectively.
Let S = {j : x^2_{ij} = 1 and p^2_{ij} = L} and S' = {j : x^2_{ij} = 1 and p^1_{ij} = L}. Therefore, |S'| = |S| − n^{2,1}_H + n^{2,1}_L and |S| = n^i_{p^2,T^2} = n^i_{p^2,T^2}|_{T^L} (by Corollary 5.9). Let T' = |S'|·L. We consider two cases.

Suppose first that T' ≤ T^L. Consider the following flow for (p^1, T^L): assign to every machine other than i the low-assignment of x^2 truncated at T^L, and assign the jobs in S' to machine i. This is a valid flow for (p^1, T^L), since the load on i is T' ≤ T^L. Its size is equal to Σ_{i'≠i} n^{i'}_{p^2,T^2}|_{T^L} + |S'| = n_{p^2,T^2}|_{T^L} − n^{2,1}_H + n^{2,1}_L = n_{p^2,T^L} − n^{2,1}_H + n^{2,1}_L. The size of the max-flow for (p^1, T^L) is no smaller, and the claim follows.

Now suppose T' > T^L. Since |S|·L ≤ T^L (by Corollary 5.9), it follows that n^{2,1}_L > n^{2,1}_H ≥ 0. Let T̂ = T' − L ≥ T^L, since T' and T^L are both multiples of L. Let M = n_{p^2,T^2} − n^{2,1}_H + n^{2,1}_L = |S'| + Σ_{i'≠i} n^{i'}_{p^2,T^2}. We first show that

m·T̂ < M·L + (n − M)·H.   (8)

Let N be the number of jobs assigned to machine i in x^2. The load on machine i is |S|·L + (N − |S|)·H ≥ |S'|·L − n^{2,1}_L·L + (N − |S|)·H, which is at least |S'|·L > T̂, since n^{2,1}_L ≤ N − |S|. Thus we get the inequality |S'|·L + (N − |S'|)·H > T̂.

Now consider the point in the execution of the algorithm on instance p^2 just before the last high job is assigned to i in step 3 (there must be such a job since n^{2,1}_L > 0). The load on i at this point is |S|·L + (N − |S| − 1)·H, which is at least |S'|·L − L = T̂ by a similar argument as above. By the greedy property, every i' ≠ i also has at least this load at this point, so Σ_j p^2_{i'j}x^2_{i'j} ≥ T̂.
Adding these inequalities for all i' ≠ i, and the earlier inequality for i, we get that |S'|·L + (N − |S'|)·H + Σ_{i'≠i} Σ_j p^2_{i'j}x^2_{i'j} > m·T̂. But the left-hand side is exactly M·L + (n − M)·H.

On the other hand, since T^1 = T^L, we have

m·T̂ ≥ m·T^L ≥ n_{p^1,T^L}·L + (n − n_{p^1,T^L})·H.   (9)

Combining (8) and (9), we get that n_{p^1,T^L} > M = n_{p^2,T^2} − n^{2,1}_H + n^{2,1}_L ≥ n_{p^2,T^L} − n^{2,1}_H + n^{2,1}_L.

Lemma 5.12 Algorithm 2 satisfies cycle monotonicity.

Proof. Taking p^1 = p^k and p^2 = p^{k+1} in (7), we get that n_{p^k,T^L} ≥ n_{p^{k+1},T^L} − n^{k+1,k}_H + n^{k+1,k}_L. Summing this over all k = 1, ..., K (where K + 1 ≡ 1) yields (6).

5.3 Computation of prices
Lemmas 5.7 and 5.12 show that our algorithm is a 2-approximation algorithm that satisfies cycle monotonicity. Thus, by the discussion in Section 3, there exist prices that yield a truthful mechanism. To obtain a polynomial-time mechanism, we also need to show how to compute these prices (or payments) in polynomial time. It is not clear if the procedure outlined in Section 3, based on computing shortest paths in the allocation graph, yields a polynomial-time algorithm, since the allocation graph has an exponential number of nodes (one for each output assignment). Instead of analyzing the allocation graph, we will leverage our proof of cycle monotonicity, in particular inequality (7), and simply spell out the payments.

Recall that the utility of a player is u_i = P_i − l_i, where P_i is the payment made to player i. For convenience, we will first specify negative payments (i.e., the P_i's will actually be prices charged to the players) and then show that these can be modified so that players have non-negative utilities (if they act truthfully). Let H^i denote the number of jobs assigned to machine i in step 3.
By Claim 5.6, we know that all these jobs are assigned to high machines (according to the declared p_i's). Let H^{−i} = Σ_{i'≠i} H^{i'} and n^{−i}_{p,T} = Σ_{i'≠i} n^{i'}_{p,T}. The payment P_i to player i is defined as:

P_i(p) = −L·n^{−i}_{p,T*(p)} − H·H^{−i}(p) − (H − L)·(n_{p,T*(p)} − n_{p,T^L_i(p_{−i})})   (10)

We can interpret our payments as equating the player's cost to a careful modification of the total load (in the spirit of VCG prices). The first and second terms in (10), when subtracted from i's load l_i, equate i's cost to the total load. The term n_{p,T*(p)} − n_{p,T^L_i(p_{−i})} is in fact equal to n^{−i}_{p,T*(p)} − n^{−i}_{p,T*(p)}|_{T^L_i(p_{−i})}, since the low-load on i is at most T^L_i(p_{−i}) (by Corollary 5.9). Thus the last term in equation (10) implies that we treat the low jobs that were assigned beyond the T^L_i(p_{−i}) threshold (to machines other than i) effectively as high jobs for the total-utility calculation from i's point of view. It is not clear how one could have conjured up these payments a priori in order to prove the truthfulness of our algorithm. However, by relying on cycle monotonicity, we were not only able to argue the existence of payments, but our proof also paved the way for actually inferring these payments. The following lemma explicitly verifies that the payments defined above do indeed give a truthful mechanism.

Lemma 5.13 Fix a player i and the other players' declarations p_{−i}. Let i's true type be p^1_i. Then, under the payments defined in (10), i's utility when she declares her true type p^1_i is at least her utility when she declares any other type p^2_i.

Proof. Let c^1_i, c^2_i denote i's total cost, defined as the negative of her utility, when she declares p^1_i and p^2_i, respectively (and the others declare p_{−i}).
Since p_{−i} is fixed, we omit p_{−i} from the expressions below for notational clarity. The true load of i when she declares her true type p^1_i is L·n^i_{p^1,T*(p^1)} + H·H^i(p^1), and therefore

c^1_i = L·n_{p^1,T*(p^1)} + H·(n − n_{p^1,T*(p^1)}) + (H − L)·(n_{p^1,T*(p^1)} − n_{p^1,T^L_i}) = n·H − (H − L)·n_{p^1,T^L_i}   (11)

On the other hand, i's true load when she declares p^2_i is L·(n^i_{p^2,T*(p^2)} − n^{2,1}_H + n^{2,1}_L) + H·(H^i + n^{2,1}_H − n^{2,1}_L) (since i's true processing-time vector is p^1_i), and thus

c^2_i = n·H − (H − L)·n_{p^2,T^L_i} + (H − L)·n^{2,1}_H − (H − L)·n^{2,1}_L.

Thus, (7) implies that c^1_i ≤ c^2_i.

Price specifications are commonly required to satisfy, in addition to truthfulness, individual rationality, i.e., a player's utility should be non-negative if she reveals her true value. The payments given by (10) are not individually rational, as they actually charge a player a certain amount. However, it is well known that this problem can be easily solved by adding a large-enough constant to the price definition. In our case, for example, letting H denote the vector of all H's, we can add the term n·H − (H − L)·n_{(H,p_{−i}),T^L_i(p_{−i})} to (10). Note that this is a constant for player i. Thus, the new payments are

P'_i(p) = n·H − L·n^{−i}_{p,T*(p)} − H·H^{−i}(p) − (H − L)·(n_{p,T*(p)} − n_{p,T^L_i(p_{−i})} + n_{(H,p_{−i}),T^L_i(p_{−i})}).

As shown by (11), this will indeed result in a non-negative utility for i (since n_{(H,p_{−i}),T^L_i(p_{−i})} ≤ n_{(p_i,p_{−i}),T^L_i(p_{−i})} for any type p_i of player i).
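The bookkeeping in (10) can be checked with concrete numbers. The sketch below is illustrative only: the counts passed to `payment` (low jobs on other machines, step-3 jobs on other machines, total low jobs, and low jobs within the T^L_i threshold) are hypothetical values, assumed to come from the algorithm's flow phase rather than computed here.

```python
def payment(L, H, n_low_minus_i, n_high_minus_i, n_low_total, n_low_TL):
    """Price charged per equation (10):
    P_i = -L*n^{-i}_{p,T*} - H*H^{-i} - (H-L)*(n_{p,T*} - n_{p,T^L_i})."""
    return (-L * n_low_minus_i
            - H * n_high_minus_i
            - (H - L) * (n_low_total - n_low_TL))

# hypothetical counts: 3 low jobs on other machines, 1 high (step-3) job on
# other machines, 4 low jobs overall, all 4 within the T^L_i threshold
print(payment(L=2, H=5, n_low_minus_i=3, n_high_minus_i=1,
              n_low_total=4, n_low_TL=4))  # -> -11
```

With n_low_total equal to n_low_TL the last term of (10) vanishes, so the charge is just the total load placed on the other machines, matching the VCG-style interpretation above.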
This modification also ensures the additionally desired normalization property that if a player receives no jobs then she receives zero payment: if player i receives the empty set for some type p_i, then she will also receive the empty set for the type H (this is easy to verify for our specific algorithm), and for the type H her utility equals zero; thus, by truthfulness, this must also be the utility of every other declaration that results in i receiving the empty set. This completes the proof of Theorem 5.3.

5.4 Impossibility of exact implementation
We now show that, irrespective of computational considerations, there does not exist a cycle-monotone algorithm for the L-H case with an approximation ratio better than 1.14. Let H = α·L for some 2 < α < 2.5 that we will choose later. There are two machines, I and II, and seven jobs. Consider the following two scenarios:

Scenario 1. Every job has the same processing time on both machines: jobs 1-5 are L, and jobs 6, 7 are H. Any optimal schedule assigns jobs 1-5 to one machine and jobs 6, 7 to the other, and has makespan OPT1 = 5L. The second-best schedule has makespan at least Second1 = 2H + L.

Scenario 2. If the algorithm chooses an optimal schedule for scenario 1, assume without loss of generality that jobs 6, 7 are assigned to machine II. In scenario 2, machine I has the same processing-time vector. Machine II lowers jobs 6, 7 to L and increases 1-5 to H. An optimal schedule has makespan OPT2 = 2L + H, where machine II gets jobs 6, 7 and one of the jobs 1-5. The second-best schedule for this scenario has makespan at least Second2 = 5L.

Theorem 5.14 No deterministic truthful mechanism for the two-value scheduling problem can obtain an approximation ratio better than 1.14.

Proof. We first argue that a cycle-monotone algorithm cannot choose the optimal schedule in both scenarios. This follows because otherwise cycle monotonicity is violated for machine II.
Taking p^1_{II}, p^2_{II} to be machine II's processing-time vectors for scenarios 1 and 2 respectively, we get Σ_j (p^1_{II,j} − p^2_{II,j})(x^2_{II,j} − x^1_{II,j}) = (L − H)(1 − 0) < 0. Thus, any truthful mechanism must return a sub-optimal makespan in at least one scenario, and therefore its approximation ratio is at least min{Second1/OPT1, Second2/OPT2} ≥ 1.14 for α = 2.364.

We remark that for the {L_j, H_j}-case, where there is a common ratio r = H_j/L_j for all jobs (this generalizes the restricted-machines setting), one can obtain a fractional truthful mechanism (with efficiently computable prices) that returns a schedule of makespan at most OPT(p) for every p. One can view each job j as consisting of L_j sub-jobs, of size 1 on a machine i if p_{ij} = L_j, and of size r if p_{ij} = H_j. For this new instance p̃, note that p̃_{ij} ∈ {1, r} for every i, j. Notice also that any assignment x̃ for the instance p̃ translates to a fractional assignment x for p, where p_{ij}x_{ij} = Σ_{j': sub-job of j} p̃_{ij'}x̃_{ij'}. Thus, if we use Algorithm 2 to obtain a schedule for the instance p̃, equation (6) translates precisely to (3) for the assignment x; moreover, the prices for p̃ translate to prices for the instance p. The number of sub-jobs assigned to low machines in the flow phase is simply the total work assigned to low machines. Thus, we can implement the above reduction by setting up a max-flow problem that seeks to maximize the total work assigned to low machines. Moreover, since we have a fractional domain, we can use a more efficient greedy rule for packing the unassigned portions of jobs and argue that the fractional assignment has makespan at most OPT(p). The assignment x need not, however, satisfy the condition that x_{ij} > 0 implies p_{ij} ≤ OPT(p) for arbitrary r; therefore, the rounding procedure of Lemma 4.2 does not yield a 2-approximation truthful-in-expectation mechanism.
But if r > OPT(p) (as in the restricted-machines setting), this condition does hold, so we get a 2-approximation truthful mechanism.

Acknowledgments
We thank Elias Koutsoupias for his help in refining the analysis of the lower bound in Section 5.4, and the reviewers for their helpful comments.

6. REFERENCES
[1] N. Andelman, Y. Azar, and M. Sorani. Truthful approximation mechanisms for scheduling selfish related machines. In Proc. 22nd STACS, pages 69-82, 2005.
[2] A. Archer. Mechanisms for discrete optimization with rational agents. PhD thesis, Cornell University, 2004.
[3] A. Archer and É. Tardos. Truthful mechanisms for one-parameter agents. In Proc. 42nd FOCS, pages 482-491, 2001.
[4] V. Auletta, R. De-Prisco, P. Penna, and G. Persiano. Deterministic truthful approximation mechanisms for scheduling related machines. In Proc. 21st STACS, pages 608-619, 2004.
[5] I. Bezáková and V. Dani. Allocating indivisible goods. In ACM SIGecom Exchanges, 2005.
[6] S. Bikhchandani, S. Chatterjee, R. Lavi, A. Mu'alem, N. Nisan, and A. Sen. Weak monotonicity characterizes deterministic dominant-strategy implementation. Econometrica, 74:1109-1132, 2006.
[7] P. Briest, P. Krysta, and B. Vöcking. Approximation techniques for utilitarian mechanism design. In Proc. 37th STOC, pages 39-48, 2005.
[8] G. Christodoulou, E. Koutsoupias, and A. Vidali. A lower bound for scheduling mechanisms. In Proc. 18th SODA, pages 1163-1170, 2007.
[9] E. Clarke. Multipart pricing of public goods. Public Choice, 8:17-33, 1971.
[10] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[11] H. Gui, R. Müller, and R. V. Vohra. Characterizing dominant strategy mechanisms with multi-dimensional types. Working paper, 2004.
[12] L. A. Hall. Approximation algorithms for scheduling. In D. Hochbaum, editor, Approximation Algorithms for NP-Hard Problems. PWS Publishing, MA, 1996.
[13] A. Kovács.
Fast monotone 3-approximation algorithm for scheduling related machines. In Proc. 13th ESA, pages 616-627, 2005.
[14] V. S. A. Kumar, M. V. Marathe, S. Parthasarathy, and A. Srinivasan. Approximation algorithms for scheduling on multiple machines. In Proc. 46th FOCS, pages 254-263, 2005.
[15] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In Proc. 44th FOCS, pages 574-583, 2003.
[16] R. Lavi and C. Swamy. Truthful and near-optimal mechanism design via linear programming. In Proc. 46th FOCS, pages 595-604, 2005.
[17] D. Lehmann, L. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49:577-602, 2002.
[18] J. K. Lenstra, D. B. Shmoys, and É. Tardos. Approximation algorithms for scheduling unrelated parallel machines. Math. Prog., 46:259-271, 1990.
[19] R. J. Lipton, E. Markakis, E. Mossel, and A. Saberi. On approximately fair allocations of indivisible goods. In Proc. 5th EC, pages 125-131, 2004.
[20] A. Mu'alem and M. Schapira. Setting lower bounds on truthfulness. In Proc. 18th SODA, pages 1143-1152, 2007.
[21] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[22] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Econ. Behavior, 35:166-196, 2001.
[23] J. C. Rochet. A necessary and sufficient condition for rationalizability in a quasilinear context. Journal of Mathematical Economics, 16:191-200, 1987.
[24] M. Saks and L. Yu. Weak monotonicity suffices for truthfulness on convex domains. In Proc. 6th EC, pages 286-293, 2005.
[25] D. B. Shmoys and É. Tardos. An approximation algorithm for the generalized assignment problem. Mathematical Programming, 62:461-474, 1993.
[26] W. Vickrey. Counterspeculations, auctions, and competitive sealed tenders. J.
Finance, 16:8-37, 1961.
Mediators in Position Auctions

ABSTRACT
A mediator is a reliable entity, which can play on behalf of agents in a given game. A mediator, however, can not enforce the use of its services, and each agent is free to participate in the game directly. In this paper we introduce a study of mediators for games with incomplete information, and apply it to the context of position auctions, a central topic in electronic commerce. VCG position auctions, which are currently not used in practice, possess some nice theoretical properties, such as the optimization of social surplus and having dominant strategies. These properties may not be satisfied by current position auctions and their variants. We therefore concentrate on the search for mediators that will allow us to transform current position auctions into VCG position auctions. We require that accepting the mediator's services, and reporting honestly to the mediator, will form an ex post equilibrium, which satisfies the following rationality condition: an agent's payoff can not be negative regardless of the actions taken by the agents who did not choose the mediator's services, or by the agents who report false types to the mediator. We prove the existence of such desired mediators for the next-price (Google-like) position auctions, as well as for a richer class of position auctions, including all k-price position auctions, k > 1. For k = 1, the self-price position auction, we show that the existence of such mediator depends on the tie breaking rule used in the auction.

1. INTRODUCTION
Consider an interaction in a multi-agent system, in which every player holds some private information, which is called the player's type. For example, in an auction interaction the type of a player is its valuation, or, in more complex auctions, its valuation function.
Every player has a set of actions, and a strategy of a player is a function that maps each of its possible types to an action. This interaction is modeled as a game with incomplete information. This game is called a Bayesian game when a commonly known probability measure on the profiles of types is added to the system. Otherwise it is called a pre-Bayesian game. In this paper we deal only with pre-Bayesian games. The leading solution concept for pre-Bayesian games is the ex post equilibrium: a profile of strategies, one for each player, such that no player has a profitable deviation independently of the types of the other players. Consider the following simple example of a pre-Bayesian game, which possesses an ex post equilibrium. The game is denoted by H.

Type A:
        a       b
a      5, 2    3, 0
b      0, 0    4, 2

Type B:
        a       b
a      2, 2    0, 0
b      3, 3    5, 2

In the game H there are two players. Both players can choose among two actions: a and b. The column player, player 2, has a private type, A or B (player 1 has only one possible type). A strategy of player 1 is g_1, where g_1 = a or g_1 = b. A strategy of player 2 is a function g_2 : {A, B} → {a, b}. That is, player 2 has 4 strategies. In this game the strategy profile (g_1, g_2) is an ex post equilibrium, where g_1 = b and g_2(A) = b, g_2(B) = a.

Unfortunately, pre-Bayesian games do not, in general, possess ex post equilibria, even if we allow mixed strategies. In order to address this problem we suggest in this paper the use of mediators. A mediator is a reliable entity that can interact with the players and perform on their behalf actions in a given game. However, a mediator can not enforce behavior. Indeed, an agent is free to participate in the game without the help of the mediator. The mediator's behavior on behalf of the agents that give it the right of play is pre-specified, and is conditioned on information the agents would provide to the mediator.
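Returning to the example game H, the claimed ex post equilibrium can be verified mechanically by enumerating unilateral deviations at every type. A small sketch (the payoff tables are transcribed from the figures above; the helper name is ours):

```python
# payoff[type][(r, c)] = (u1, u2); r is player 1's action, c is player 2's
payoff = {
    'A': {('a','a'): (5,2), ('a','b'): (3,0), ('b','a'): (0,0), ('b','b'): (4,2)},
    'B': {('a','a'): (2,2), ('a','b'): (0,0), ('b','a'): (3,3), ('b','b'): (5,2)},
}

def is_ex_post_eq(g1, g2):
    """Check that (g1, g2) survives every unilateral deviation, at every type."""
    for t in ('A', 'B'):
        u1, u2 = payoff[t][(g1, g2[t])]
        if any(payoff[t][(d, g2[t])][0] > u1 for d in ('a', 'b')):
            return False   # player 1 has a profitable deviation
        if any(payoff[t][(g1, d)][1] > u2 for d in ('a', 'b')):
            return False   # player 2 has a profitable deviation
    return True

# the profile from the text: g1 = b, g2(A) = b, g2(B) = a
print(is_ex_post_eq('b', {'A': 'b', 'B': 'a'}))  # -> True
```

The same enumeration rejects, e.g., the constant profile (a, a), which player 1 would abandon when player 2's type is B.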
This notion is highly natural; in many systems there is some form of reliable party or administrator that can be used as a mediator. The simplest form of a mediator discussed in the game theory literature is captured by the notion of correlated equilibrium [1]. This notion was generalized to communication equilibrium by [5, 15]. Another type of mediators is discussed in [13]. However, in all these settings the mediator can not perform actions on behalf of the agents that allow it to do so. Mediators that can obtain the right of play but can not enforce the use of their services have already been defined and discussed for games with complete information in [14] (Footnote 1). The topic of mediators for games with complete information has been further generalized and analyzed in [16]. In this paper we introduce another use of mediators, in establishing behaviors which are stable against unilateral deviations in games with incomplete information. Notice that we assume that the multi-agent interaction (formalized as a game) is given, and all the mediator can do is to perform actions on behalf of the agents that explicitly allow it to do so (Footnote 2).

In order to illustrate the power of mediators for games with incomplete information, consider the following pre-Bayesian game G that does not possess an ex post equilibrium. In G, the column player has two possible types: A and B.

Type A:
        a       b
a      5, 2    3, 0
b      0, 0    2, 2

Type B:
        a       b
a      2, 2    0, 0
b      3, 0    5, 2

A mediator for G should specify the actions it will choose on behalf of the players that give it the right to play. If player 2 wants to give the mediator the right to play, it should also report a type. Consider the following mediator: if both players give the mediator the right of play, then the mediator will play on their behalf (a, a) if player 2 reports A and (b, b) if player 2 reports B. If only player 1 gives the mediator the right of play, then the mediator will choose a on his behalf.
If only player 2 gives the mediator the right of play, the mediator will choose action a (resp. b) on its behalf if B (resp. A) has been reported.

The mediator generates a new pre-Bayesian game, which is called the mediated game. In the mediated game player 1 has three actions: give the mediator the right of play, denoted by m, or play directly a or b. Player 2 has four actions: m−A, m−B, a, b, where m−A (m−B) means reporting A (B) to the mediator and giving it the right of play. The mediated game is described in the following figure:

Type A:
        m−A     m−B     a       b
m      5, 2    2, 2    5, 2    3, 0
a      3, 0    5, 2    5, 2    3, 0
b      2, 2    0, 0    0, 0    2, 2

Type B:
        m−A     m−B     a       b
m      2, 2    5, 2    2, 2    0, 0
a      0, 0    2, 2    2, 2    0, 0
b      5, 2    3, 0    3, 0    5, 2

Footnote 1: For games with complete information the main interest is in leading agents to behaviors which are stable against deviations by coalitions. A special case of mediators was already discussed in [8]. In this paper the authors discussed mediators for a two-person game, which is known to the players but not to the mediators, and they looked for a Nash equilibrium in the new game generated by the mediator.

Footnote 2: This natural setting is different from the one discussed in the classical theories of implementation and mechanism design, where a designer designs a new game from scratch in order to yield some desired behavior.

It is now easy to verify that giving the mediator the right of play, and reporting truthfully, is an ex post equilibrium of the mediated game. That is, (f_1, f_2) is an ex post equilibrium, where f_1 = m, and f_2(A) = m−A, f_2(B) = m−B.

The aim of this paper is twofold. We introduce mediators for games with incomplete information, and apply them in the context of position auctions.
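The verification that (f_1, f_2) is an ex post equilibrium of the mediated game can likewise be done by enumeration. A small sketch, with the payoff tables transcribed from the figure for the mediated game:

```python
rows = ('m', 'a', 'b')              # player 1's actions
cols = ('m-A', 'm-B', 'a', 'b')     # player 2's actions
# payoff tables of the mediated game, one per type of player 2
G = {
    'A': {('m','m-A'): (5,2), ('m','m-B'): (2,2), ('m','a'): (5,2), ('m','b'): (3,0),
          ('a','m-A'): (3,0), ('a','m-B'): (5,2), ('a','a'): (5,2), ('a','b'): (3,0),
          ('b','m-A'): (2,2), ('b','m-B'): (0,0), ('b','a'): (0,0), ('b','b'): (2,2)},
    'B': {('m','m-A'): (2,2), ('m','m-B'): (5,2), ('m','a'): (2,2), ('m','b'): (0,0),
          ('a','m-A'): (0,0), ('a','m-B'): (2,2), ('a','a'): (2,2), ('a','b'): (0,0),
          ('b','m-A'): (5,2), ('b','m-B'): (3,0), ('b','a'): (3,0), ('b','b'): (5,2)},
}
f1 = 'm'
f2 = {'A': 'm-A', 'B': 'm-B'}       # truthful reporting to the mediator

ok = True
for t in ('A', 'B'):
    u1, u2 = G[t][(f1, f2[t])]
    ok &= all(G[t][(r, f2[t])][0] <= u1 for r in rows)  # player 1 can't gain
    ok &= all(G[t][(f1, c)][1] <= u2 for c in cols)     # player 2 can't gain
print(ok)  # -> True
```

At both types the truthful profile yields payoffs (5, 2), and no unilateral deviation improves either player's payoff, confirming the claim in the text.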
Our choice of position auctions as the domain of application is not an accident; indeed, position auctions have become a central issue in advertisement, and the selection of appropriate position auctions for that task is the subject of a considerable amount of study [17, 3, 9, 4] (Footnote 3). Current position auctions, however, do not possess an ex post equilibrium, i.e., solutions which are stable against unilateral deviations regardless of the agents' private information, nor do they guarantee optimal social surplus. In contrast, in the VCG position auction, which is currently not used in practice, there is a truth-revealing ex post equilibrium, which yields optimal surplus. We therefore suggest the use of mediators in order to attempt to implement the output of the VCG position auction, by transforming other (and in particular current) position auctions into a VCG position auction (Footnote 4). More specifically, the mediated game will have an ex post equilibrium which generates the outcome of the VCG position auction. One such mediator has already been discussed for other purposes in the literature: an English-auction type of algorithm was constructed in [3] that takes as input the valuations of the players and outputs bids for the next-price position auction. It was proved there that reporting the true type to this algorithm by each player forms an ex post equilibrium, which generates the VCG outcome. In our language this algorithm can almost be considered as a mediator for the next-price position auction that implements the VCG outcome function. What is missing is a component that punishes players who send their bids directly to the auctioneer, and a proof that using the mediator's services and reporting the true type by each player is an ex post equilibrium in the mediated game defined by the algorithm and by the additional component.
A mediator may generate a desired outcome by punishing the players who do not use its services with very high bids by the players that use its services. We believe that such mediators are not realistic, and therefore we concentrate on the search for valid mediators that generate an ex post equilibrium and satisfy the additional rationality condition: an agent's payoff can not be negative regardless of the actions taken by the agents who did not choose the mediator's services, or agents who report false types to the mediator. We prove the existence of such desired mediators for the next-price (Google-like) position auctions (Footnote 5), as well as for a richer class of position auctions, including all k-price position auctions, k > 1. For k = 1, the self-price position auction, we show that the existence of such mediator depends on the tie breaking rule used in the auction.

Footnote 3: See also [12], where position auctions are titled ad auctions.

Footnote 4: In general, except for the VCG position auction we do not expect position auctions to possess an ex post equilibrium (see Footnote 7).

Footnote 5: Our proof uses an algorithm, which is different from the algorithm in [3] discussed earlier.

Mediators in one-item auctions (in particular first-price and second-price auctions) have already been discussed in [6, 11, 2]; however, they all used a Bayesian model. Position auctions are a restricted type of general pre-Bayesian games. In this conference version we make the formal definition of mediators and implementing by mediation for the special case of position auctions, and only in the full version do we present the general theory of mediators for pre-Bayesian games. Most of the proofs are omitted from this conference version.

2. POSITION AUCTIONS
In a position auction there is a seller who sells a finite number of positions j ∈ K = {1, ..., m}. There is a finite number of (potential) bidders i ∈ N = {1, ..., n}.
We assume that there are more bidders than positions, i.e. n > m. The positions are sold for a fixed period of time. For each position j there is a commonly-known number α_j > 0, which is interpreted as the expected number of visitors at that position; α_j is called the click-through rate of position j. We assume that α_1 > α_2 > ··· > α_m > 0. If i holds a position then every visitor to this position gives i a revenue of v_i > 0, where v_i is called the valuation of i. The set of possible valuations of i is V_i = (0, ∞).

We assume that the players' utility functions are quasi-linear. That is, if player i is assigned to position j and pays p_i per click then his utility is α_j(v_i − p_i).

Every player is required to submit a bid, b_i ∈ B_i = [0, ∞). We assume that bidding 0 is a symbol for non-participation. Therefore, a player with a bid of 0 is not assigned to any position, and pays 0.

In all position auctions we consider, the player with the highest positive bid receives the first position, the player with the second highest positive bid receives the second position, and so on. It is useful to define for every position auction two dummy positions, m + 1 and −1, to which more than one player may be assigned. All players who participate in the auction but do not get a position in K are assigned to position m + 1, and all players who choose not to participate are assigned to position −1. We also define α_{m+1} = α_{−1} = 0.

An assignment of players to positions is called an allocation. Hence, an allocation is a vector s = (s_1, s_2, ···, s_n) with s_i ∈ K ∪ {−1, m + 1} such that if s_i ∈ K then s_i ≠ s_l for every l ≠ i; s_i is the position of player i. Given the above, a position auction is defined by its tie-breaking rule, which determines the allocation in case of ties, and by its payment scheme.
These are discussed below.

2.1 Tie breaking rules

In practice, the most commonly used tie-breaking rule is the first-arrival rule: if a set of players submit the same bid, their priority in receiving the positions is determined by the times at which their bids were recorded; an earlier bid receives a higher priority. In auction theory this tie-breaking rule is typically modelled by assuming that the auctioneer is using a random priority rule. More specifically, let Γ be the set of all permutations γ = (γ_1, ..., γ_n) of N. Every such γ defines a priority rule as follows: i has a higher priority than k if and only if γ_i < γ_k. Every vector of bids b and a permutation γ uniquely determine an allocation. An auctioneer who is using the random priority rule chooses a fixed priority rule γ by randomizing uniformly over Γ. However, the resulting priority rule is not told to the players before they make their bids. When the priority rule γ is told to the players before they make their bids, the tie-breaking rule is called a fixed priority rule. Dealing with a fixed priority rule simplifies notation and proofs, and in most cases, and in particular in this paper, results obtained with this tie-breaking rule are identical to the results obtained with the random priority rule. Therefore, unless we specifically say otherwise, we assume a fixed priority rule in this paper. In contrast, in Section 7 we discuss a non-standard approach for analyzing the first-arrival tie-breaking rule directly.

Without loss of generality we assume that the fixed priority rule is defined by the natural order, γ̃ = (1, 2, ..., n). That is, bidder i has a higher priority than bidder k if and only if i < k.
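To make the allocation rule concrete, here is a minimal Python sketch of the assignment under the natural fixed priority rule γ̃ = (1, ..., n). This is our own illustration, not part of the paper: the function name `allocate` and the 0-indexed player representation are conventions of the sketch.

```python
# Sketch of the allocation rule under the natural fixed priority
# rule (ties are won by the lower-indexed bidder). Players are
# 0-indexed here; positions are 1..m, with m+1 the dummy position
# for unassigned participants and -1 for non-participants.
def allocate(bids, m):
    n = len(bids)
    # participants (bid > 0), ordered by descending bid, then index
    order = sorted((i for i in range(n) if bids[i] > 0),
                   key=lambda i: (-bids[i], i))
    s = [-1] * n                      # non-participants stay at -1
    for rank, i in enumerate(order, start=1):
        s[i] = rank if rank <= m else m + 1
    return s
```

For the bid profile b = (3, 7, 3, 0, 2) used below in Section 2.2 and m = 2 positions, this yields the allocation (2, 1, 3, −1, 3): the tied bidders are ordered by priority, the bidder who bid 0 does not participate, and the bidder who bid 2 is left at the dummy position 3 = m + 1.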
Given this fixed priority rule we can make the following definitions, which apply to all position auctions: we denote by s(b, i) the position player i is assigned to when the bid profile is b. The allocation determined by b is denoted by

s(b) = (s(b, 1), s(b, 2), ···, s(b, n)).

For every j ∈ K ∪ {−1, m + 1} we denote by δ(b, j) the set of players assigned to position j. Note that for j ∈ K, δ(b, j) contains at most one player.

2.2 The payment schemes

Let α be a click-through rate vector. Each position j ∈ K ∪ {−1, m + 1} is associated with a payment function p^α_j : B → R₊, where p^α_j(b) is the payment for position j when the bid profile is b. Naturally we assume that p^α_{−1} is identically zero. However, we also assume that p^α_{m+1} is identically zero. Hence, a participant who is not assigned a real position pays nothing.

We call the vector of payment functions p^α = (p^α_j)_{j∈K} the position payment scheme.

Remark: Whenever α is fixed or its value is clear from the context we will allow ourselves to omit the superscript α from the payment and other functions.

We deal with anonymous position payment schemes, i.e. the players' payments to the auctioneer are not influenced by their identities. This is modeled as follows: let b ∈ B = B_1 × B_2 × ··· × B_n be a bid profile. We denote by b_(j) the j-th highest bid in b. For j > n we let b_(j) = 0. For example, if b = (3, 7, 3, 0, 2) then b_(1) = 7, b_(2) = 3, b_(3) = 3, b_(4) = 2, b_(5) = 0. We denote b* = (b_(1), ···, b_(n)). Anonymity is modeled by the requirement that for every two bid profiles b, d ∈ B, p(b) = p(d) whenever b* = d*.
That is, for every position j there exists a real-valued function p̃_j, defined over all ordered vectors of bids, such that for every b ∈ B, p_j(b) = p̃_j(b*).

We further assume that a player never pays more than his bid. That is, p_j(b) ≤ b_(j) for every b ∈ B and for every j ∈ K.

It is convenient in certain cases to describe the payment functions indexed by the players. Let G be a position auction with a position payment scheme p. For every player i we denote

q_i(b) = p_{s(b,i)}(b),

and

q(b) = (q_1(b), q_2(b), ···, q_n(b)).

Note that the correspondence p → q is one-to-one. We call q the player payment scheme. All our assumptions about the position payment schemes can be transformed to analogous assumptions about the player payment schemes. For convenience, a position auction will be described either by its position payment scheme or by its player payment scheme.

The utility function for player i, w_i : V_i × B → R₊, is defined as follows:

w_i(v_i, b) = α_{s(b,i)}(v_i − q_i(b)) = α_{s(b,i)}(v_i − p_{s(b,i)}(b)).

2.3 Central position auctions

We next describe the payment schemes of three central position auctions.

Self-price position auctions: Each player who is assigned to a position with a positive click-through rate pays his own bid. That is, for every j ∈ K and every b ∈ B,

p_j(b) = b_(j). (1)

Next-price position auctions: In this auction (run with a slight variation by Google), every player who is assigned to a position with a positive click-through rate pays the bid of the player assigned to the position right after him if there is such a player, and zero otherwise.
That is, for every j ∈ K and for every b ∈ B,

p_j(b) = b_(j+1). (2)

VCG position auctions: In a Vickrey-Clarke-Groves (VCG) position auction the payment function for position j ∈ K is defined as follows.⁶ For every b ∈ B,

p^vcg_j(b) = ( Σ_{k=j+1}^{m+1} b_(k)(α_{k−1} − α_k) ) / α_j. (3)

Note that the VCG position auction is not the next-price position auction unless there is only one position and α_1 = 1.

⁶ We use the standard payment function of the VCG mechanism. A general VCG mechanism may be obtained from the standard one by adding to each player an additional payment function, which depends only on the types of the other players. Some authors (see e.g. [7]) call the standard VCG mechanism the VC mechanism. According to this terminology we actually deal with VC position auctions. However, we decided to use the more common terminology.

2.4 Mediators for position auctions

We denote by G = G(α, p) the position auction with the click-through rate vector α and the payment scheme p. Recall that the set of types of i is V_i = (0, ∞). Let V = V_1 × V_2 × ··· × V_n be the set of profiles of types, and for every S ⊆ N let V_S = ×_{i∈S} V_i.

A mediator for G is a vector of functions m = (m_S)_{S⊆N}, where m_S : V_S → B_S. The mediator m generates a pre-Bayesian game G_m, which is called the mediated game. In this game every player i receives his type v_i and can either send a type v̂_i (not necessarily the true type) to the mediator, or send a bid directly to the auction. If S is the set of players that send a type to the mediator, the mediator bids on their behalf m_S(v̂_S). Hence, the action set of player i in the mediated game is B_i ∪ V_i, where conveniently V_i denotes both (0, ∞) and a copy of (0, ∞) which is disjoint from B_i.
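For concreteness, the three payment schemes of Section 2.3 can be computed directly. The sketch below is our own illustration (the helper names are ours); it follows the convention b_(j) = 0 for j > n.

```python
def bstar(bids, j):
    """The j-th highest bid b_(j), with b_(j) = 0 for j > n."""
    ordered = sorted(bids, reverse=True)
    return ordered[j - 1] if j <= len(ordered) else 0.0

def p_self(bids, j):
    """Eq. (1): the holder of position j pays his own bid."""
    return bstar(bids, j)

def p_next(bids, j):
    """Eq. (2): the holder of position j pays the next bid down."""
    return bstar(bids, j + 1)

def p_vcg(bids, alpha, j):
    """Eq. (3): p^vcg_j(b) = sum_{k=j+1}^{m+1} b_(k)(a_{k-1} - a_k) / a_j,
    where alpha = (a_1, ..., a_m) and a_{m+1} = 0."""
    m = len(alpha)
    a = list(alpha) + [0.0]                      # append a_{m+1} = 0
    total = sum(bstar(bids, k) * (a[k - 2] - a[k - 1])
                for k in range(j + 1, m + 2))
    return total / alpha[j - 1]
```

With one position and α_1 = 1, the VCG payment reduces to the second-highest bid, as the text notes; for α = (10, 4, 1) and b = (3, 7, 3, 0, 2), the VCG payments for positions 1, 2, 3 come out as 2.9, 2.75, 2.0, decreasing in the position and never exceeding the corresponding next bid down.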
We introduce the following terminology: the T-strategy for a player in the mediated game is the strategy in which this player uses the mediator's services and reports his true value to the mediator. The T-strategy profile is the profile of strategies in which every player uses the T-strategy. The T-strategy profile is an ex post equilibrium in the mediated game if for every player i and type v_i, and for every vector of types of the other players, v_{−i}, the following two conditions hold:

E1: i is not better off when he gives the mediator the right of play and reports a false type. That is, for every v̂_i ∈ V_i,

w_i(v_i, m_N(v_i, v_{−i})) ≥ w_i(v_i, m_N(v̂_i, v_{−i})).

E2: i is not better off when he bids directly. That is, for every b_i ∈ B_i,

w_i(v_i, m_N(v_i, v_{−i})) ≥ w_i(v_i, (b_i, m_{N∖i}(v_{−i}))).

Whenever the T-strategy profile is an ex post equilibrium in G_m, the mediator m implements an outcome function in G. This outcome function is denoted by φ^m, and it is defined as follows:

φ^m(v) = (s(m_N(v)), q(m_N(v))).

Hence, the range of the function φ^m is the Cartesian product of the set of allocations with R^n₊.

3. IMPLEMENTING THE VCG OUTCOME FUNCTION BY MEDIATION

In general, except for the VCG position auction we do not expect position auctions to possess an ex post equilibrium.⁷ Therefore, the behavior of the participants in most position auctions cannot be analytically predicted, and in practice it can form a non-efficient allocation: an allocation that does not maximize social surplus. In contrast, in the VCG position auction the truth-reporting strategy is a dominant strategy for every player, and the resulting allocation is efficient. Given a position auction G, our goal is to construct a mediator that would implement the outcome function of the VCG position auction.
This outcome function is defined as follows:

φ^vcg(v) = (s(v), q^vcg(v)).

Definition: Let G be a position auction and let m be a mediator for G. We say that m implements the VCG outcome function in G, or that it implements φ^vcg in G, if the T-strategy profile is an ex post equilibrium in G_m and φ^m = φ^vcg.

We demonstrate our definitions so far by a simple example:

Example 1. Consider a self-price auction G = G(α, p) with 2 players and one position, with α_1 = 1. That is, G is a standard two-person first-price auction. The corresponding VCG position auction is a standard second-price auction. We define a family of mediators m^c, c ≥ 1, each of which implements the VCG position auction. If both players use the mediator's services and send it the types v̂ = (v̂_1, v̂_2), then all the mediators act identically, as follows: if v̂_1 ≥ v̂_2 the mediator makes the following bids on behalf of the players: b_1 = v̂_2 and b_2 = 0. If v̂_2 > v̂_1, the mediator makes the bids b_1 = 0, b_2 = v̂_1. If only one player uses the mediator's services, say player i, then mediator m^c bids b_i = c·v̂_i on behalf of i. We claim that for every c ≥ 1, the T-strategy profile is an ex post equilibrium in the mediated game generated by m^c. Indeed, assume player 2 reports his type v_2 to the mediator, and consider player 1. If v_1 ≥ v_2, then by using the T-strategy player 1 receives the position and pays v_2. Hence, 1's utility is v_1 − v_2.

⁷ Actually, it can be shown that if a strategy profile b in a position auction is an ex post equilibrium then for every player i, b_i is a dominant strategy. It is commonly conjectured that except for some extremely artificial combinatorial auctions, the VCG combinatorial auctions are the only ones with dominant strategies (see [10]).
If player 1 deviates by using the mediator's services and reporting v̂_1 ≥ v_2, his utility is still v_1 − v_2. If he reports v̂_1 < v_2, his utility will be 0. If player 1 does not use the mediator, he should bid at least c·v_2 in order to get the position, and therefore his utility cannot exceed v_1 − v_2.

If v_1 < v_2, then the T-strategy yields 0 to player 1, and any other strategy yields a non-positive utility.

Obviously each of the mediators m^c implements the VCG outcome function. Note, however, that the T-strategy is not a dominant strategy when c > 1; e.g. if v_1 > v_2 and player 2 bids v_2 directly (without using the mediator's services), then bidding v_1 directly is better for player 1 than using the T-strategy: in the former case player 1's utility is 0 and in the latter case her utility is negative.

It is interesting to note that this simple example cannot be extended to general self-price position auctions, as will be discussed in Section 6.

While each of the mediators m^c in Example 1 implements the VCG outcome function, the mediator with c = 1 has a distinct characteristic: a player who uses the T-strategy cannot get a negative utility. In contrast, for every c > 1, if say player 2 does not use the mediator's services, participates directly, and bids less than c·v_1, then the T-strategy yields a negative utility of (1 − c)v_1 to player 1. This motivates our definition of valid mediators:

Let G be a position auction. A mediator for G is valid if, for every player, using the T-strategy guarantees a non-negative level of utility. Formally, a mediator m for G is valid if for every subset S ⊆ N and every player i ∈ S, w_i(v_i, m_S(v_S), b_{−S}) ≥ 0 for every b_{−S} ∈ B_{−S} and every v_S ∈ V_S.

4. MEDIATORS IN NEXT-PRICE POSITION AUCTIONS

We now show that there exists a valid mediator which implements the VCG outcome function in next-price position auctions.
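Returning briefly to Example 1: the mediator m^c and the validity failure for c > 1 can be replayed numerically. This is only a sketch (the function names are ours), with ties in the first-price auction going to player 1, the higher-priority bidder.

```python
def first_price_utility(values, bids, i):
    """Player i's utility in the 2-player first-price auction of
    Example 1; ties are won by player 1 (index 0)."""
    if bids[0] >= bids[1] and bids[0] > 0:
        winner = 0
    elif bids[1] > 0:
        winner = 1
    else:
        winner = None                   # nobody participated
    return values[i] - bids[i] if winner == i else 0.0

def mediator_mc(c, reports):
    """Bids submitted by mediator m^c; `reports` maps each player
    using the mediator to his reported type."""
    if len(reports) == 2:               # both players use the mediator
        v1, v2 = reports[0], reports[1]
        return {0: v2, 1: 0.0} if v1 >= v2 else {0: 0.0, 1: v1}
    (i, vi), = reports.items()          # a single user: bid c * v_i
    return {i: c * vi}
```

With v = (5, 3) and both players truthful, the mediator bids (3, 0) and player 1's utility is 5 − 3 = 2, the second-price outcome. If only player 1 uses m² and player 2 bids 4 directly, the mediator bids 10 on player 1's behalf and his utility is 5 − 10 = −5, so m² is not valid; under m¹ the same scenario gives utility 0.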
Although in the following section we prove a more general result, we present this result first, given the importance of next-price position auctions in the literature and in practice. Our proof makes use of the following technical lemma.

Lemma 1. Let p^vcg be the VCG payment scheme.
1. p^vcg_j(b) ≤ b_(j+1) for every j ∈ K.
2. p^vcg_j(b) ≥ p^vcg_{j+1}(b) for every j = 1, ..., m − 1 and for every b ∈ B, where for every j, equality holds if and only if b_(j+1) = b_(j+2) = ··· = b_(m+1).

The proof of Lemma 1 is given in the full version. We can now show:

Theorem 2. Let G be a next-price position auction. There exists a valid mediator that implements φ^vcg in G.

Theorem 2 follows from a more general theorem given in the next section. However, we still provide a proof, since a simpler and more intuitive mediator is constructed for this case.

Proof of Theorem 2. We define a mediator m, which will implement the VCG outcome function in G. For every v ∈ V let m_N(v) = b(v), where b(v) is defined as follows:

For every player i such that 2 ≤ s(v, i) ≤ m, let b_i(v) = p^vcg_{s(v,i)−1}(v).⁸
For every i ∈ δ(v, m + 1), b_i(v) = p^vcg_m(v).
Let b_{δ(v,1)}(v) = 1 + max_{i : s(v,i)≥2} b_i(v).
For every S ⊆ N such that S ≠ N and for every v_S ∈ V_S, let m_S(v_S) = v_S.

This completes the description of the mediator m.

We show that φ^m(v) = φ^vcg(v) for every v ∈ V. Let v ∈ V be an arbitrary valuation vector. We have to show that s(b(v)) = s(v) and that q(b(v)) = q^vcg(v).

We begin by showing that s(b(v)) = s(v). It is sufficient to show that whenever 1 ≤ s(v, i) < s(v, l) ≤ m + 1 for some i ≠ l, then s(b(v), i) < s(b(v), l). We first show it for s(v, i) = 1, that is, δ(v, 1) = i. In this case b_{δ(v,1)}(v) > b_j(v) for every j ≠ i, so s(b(v), i) = 1, and therefore s(b(v), i) < s(b(v), l). If s(v, i) > 1, we distinguish between two cases.

1. v_i = v_l. Since s(v, i) < s(v, l), the fixed priority rule implies that i < l. By the second part of Lemma 1, p^vcg_{s(v,i)−1}(v) ≥ p^vcg_{s(v,l)−1}(v). Therefore b_i(v) ≥ b_l(v), which yields s(b(v), i) < s(b(v), l).

2. v_i > v_l. Let j + 1 = s(v, i). That is, v_(j+1) = v_i, and therefore by the second part of Lemma 1, p^vcg_{s(v,i)−1}(v) > p^vcg_{s(v,i)}(v). Since s(v, i) ≤ s(v, l) − 1, by the second part of Lemma 1, p^vcg_{s(v,i)}(v) ≥ p^vcg_{s(v,l)−1}(v). Therefore p^vcg_{s(v,i)−1}(v) > p^vcg_{s(v,l)−1}(v), which yields b_i(v) > b_l(v). Therefore s(b(v), i) < s(b(v), l).

This completes the proof that s(b(v)) = s(v) for all v ∈ V. Observe that for every player i such that s(b(v), i) ∈ K,

p_{s(b(v),i)}(b(v)) = p^vcg_{s(v,i)}(v).

Therefore q_i(b(v)) = q^vcg_i(v) for every i ∈ N. This shows that q(b(v)) = q^vcg(v) for all v ∈ V. Hence, φ^m = φ^vcg.

We proceed to prove that the T-strategy profile is an ex post equilibrium. Note that by the truthfulness of VCG, it is not beneficial for any player i to misreport her value to the mediator, given that all other players use the T-strategy. Next we show that it is not beneficial for a single player i ∈ N to participate in the auction directly if all other players use the T-strategy. Fix some v ∈ V. Assume that player i is the only player that participates directly in the auction. Hence, v_{−i} is the vector of bids submitted by the mediator. Let b_i be player i's bid. Let k = s(v, i). Therefore, since φ^m = φ^vcg, s(b(v), i) = k. Let j be player i's position after the deviation; hence j = s((v_{−i}, b_i), i). If j ∉ K then player i's utility is zero, and therefore deviating is not worthwhile for i. Suppose j ∈ K. Then

α_k(v_i − p_k(b(v))) = α_k(v_i − p^vcg_k(v)) ≥ α_j(v_i − p^vcg_j(v_{−i}, b_i)) ≥ α_j(v_i − v_(j+1)),

where the first equality follows from φ^m = φ^vcg, the first inequality follows since VCG is truthful, and the second inequality follows from the first part of Lemma 1. Since p_j is position j's payment function in the next-price position auction, α_j(v_i − v_(j+1)) = α_j(v_i − p_j(v_{−i}, b_i)). Therefore

α_k(v_i − p_k(b(v))) ≥ α_j(v_i − p_j(v_{−i}, b_i)).

Hence, player i does not gain from participating directly in the auction.

Finally we show that m is valid. If all players choose the mediator, then by the first part of Lemma 1 each player who uses the T-strategy will not pay more than his value. Consider the situation in which a subset of players S participates directly in the auction. Since the mediator submits the reported values on behalf of the other players, these other players will not pay more than their reported values. Hence a player who used the T-strategy will not pay more than his value. □

⁸ Recall that s(b, i) denotes the position of player i under the bid profile b, and δ(b, j) denotes the set of players assigned to position j. Whenever j ∈ K, we slightly abuse notation and also refer to δ(b, j) as the player that is assigned to position j.

5. MEDIATORS IN GENERALIZED NEXT-PRICE POSITION AUCTIONS

In the previous section we discussed the implementation of the VCG outcome function in the next-price position auction. In this section we deal with a more general family of position auctions, in which the payment of each player who has been assigned a position is a function of the bids of players assigned to lower positions than his own.
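The bid profile b(v) constructed in the proof of Theorem 2 can be instantiated directly. The sketch below is our own illustration (names are ours; it assumes all players report truthfully and, for simplicity, breaks ties in v by the natural priority): it builds b(v) from the VCG payments of Eq. (3), and running the next-price auction on b(v) then reproduces the VCG allocation and payments.

```python
def vcg_payment(vals, alpha, j):
    """p^vcg_j of Eq. (3), with the k-th highest entry of vals as b_(k)."""
    m = len(alpha)
    a = list(alpha) + [0.0]                       # a_{m+1} = 0
    ordered = sorted(vals, reverse=True) + [0.0] * (m + 1)
    return sum(ordered[k - 1] * (a[k - 2] - a[k - 1])
               for k in range(j + 1, m + 2)) / alpha[j - 1]

def mediator_bids(vals, alpha):
    """The profile b(v) from the proof of Theorem 2: the player at
    position p in {2..m} bids p^vcg_{p-1}(v), players below position m
    bid p^vcg_m(v), and the top player bids 1 plus the largest other bid."""
    m = len(alpha)
    rank = sorted(range(len(vals)), key=lambda i: (-vals[i], i))
    bids = [0.0] * len(vals)
    for pos, i in enumerate(rank, start=1):       # pos = s(v, i)
        if 2 <= pos <= m:
            bids[i] = vcg_payment(vals, alpha, pos - 1)
        elif pos > m:
            bids[i] = vcg_payment(vals, alpha, m)
    top = rank[0]
    bids[top] = 1 + max(b for i, b in enumerate(bids) if i != top)
    return bids
```

For v = (10, 8, 6, 4) and α = (10, 4, 1), this yields b(v) = (8, 7, 5.5, 4): the next-price payments b_(2), b_(3), b_(4) = 7, 5.5, 4 equal p^vcg_1(v), p^vcg_2(v), p^vcg_3(v), and the strictly descending bids preserve the efficient allocation, as the proof requires.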
The payment scheme p of such a position auction satisfies the following condition:

N1: For every j ∈ K and every b¹, b² ∈ B such that b¹_(l) = b²_(l) for every l > j, we have that p_j(b¹) = p_j(b²).

We next provide sufficient conditions for implementing the VCG outcome function by a valid mediator in position auctions whose payment schemes satisfy N1. We need the following notation and definitions. For every position auction G and every b ∈ B let φ^G(b) = (s(b), q(b)). We say that G is a VCG cover if for every v ∈ V there exists b ∈ B such that φ^G(b) = φ^vcg(v). We say that G is monotone if p_j(b) ≥ p_j(b′) for every j ∈ K and for every b ≥ b′, where b ≥ b′ if and only if b_i ≥ b′_i for every i ∈ N.

We are now able to show:

Theorem 3. Let G = G(α, p) be a position auction such that p satisfies N1. If the following conditions hold then there exists a valid mediator that implements φ^vcg in G:
1. G is a VCG cover.
2. G is monotone.

The proof of Theorem 3 is given in the full version.
We next provide the construction of the valid mediator which implements the VCG outcome function in a position auction G satisfying the conditions of Theorem 3.

Algorithm for building m for G:
• For every v ∈ V let m_N(v) = b(v), where b(v) is some bid profile such that φ^G(b(v)) = φ^vcg(v).
• For every i and for every v_{−i} ∈ V_{−i}, let v^i = (v_{−i}, M(v_{−i})), where M(v_{−i}) = 1 + max_{j≠i} v_j.
• For every i ∈ N and every v_{−i} ∈ V_{−i}, let m_{N∖{i}}(v_{−i}) = b_{−i}(v^i), where b(v^i) is some bid profile such that φ^G(b(v^i)) = φ^vcg(v^i).
• For every S ⊆ N such that 1 ≤ |S| ≤ n − 2, let m_S(v_S) = v_S.

Remark: As noted, Theorem 3 applies in particular to the next-price position auctions discussed in Section 4. However, the theorem applies to many other interesting position auctions, as will be shown later. Moreover, the mediator constructed for this general case is different from the one in the proof of Theorem 2.

We now show that condition N1, as well as the requirement that G is a VCG cover and the requirement that G is monotone, are all necessary for establishing our result. It is easy to see that if G is not a VCG cover then Theorem 3 does not hold. The following example shows the necessity of the monotonicity condition.

Example 4. Let G = G(α, p) be the following position auction. Let n = 4, m = 3, α = (100, 10, 1), p_1(b) = b_(2) − b_(3), p_2(b) = (b_(3) + b_(4))/2, and p_3(b) = b_(4). Notice that G is not monotone. Observe that condition N1 is satisfied. In the full version we show that G is a VCG cover, and that it is not possible to implement the VCG outcome function in G with a valid mediator.

The next example shows that Theorem 3 does not hold when condition N1 is not satisfied.

Example 5. Let G = G(α, p) be the following position auction. Let N = {1, 2, 3}, K = {1, 2} and α = (2, 1).
Let p_1(b) = b_(1)/4 and p_2(b) = b_(2). It is immediate to see that the monotonicity condition is satisfied. We next show that G is a VCG cover. Let v ∈ V be an arbitrary valuation vector. We need to find a bid profile b(v) such that φ^G(b(v)) = φ^vcg(v). Note that p^vcg_1(v) = (v_(2) + v_(3))/2 and p^vcg_2(v) = v_(3). We define the bid profile b(v) as follows. Let b_{δ(v,3)}(v) = v_(3)/2, b_{δ(v,2)}(v) = v_(3), and b_{δ(v,1)}(v) = 2v_(2) + 2v_(3). By the construction of b(v), s(b(v), i) = s(v, i) for i = 1, 2, 3. In addition, observe that p_j(b(v)) = p^vcg_j(v) for every j ∈ K. Therefore φ^G(b(v)) = φ^vcg(v). Since v is arbitrary, G is a VCG cover.

Naturally, N1 is not satisfied. Suppose by negation that there exists a valid mediator m which implements the VCG outcome function in G. Consider the following vector of valuations: v = (12, 10, 8). If all players use the mediator, then player 2 (with valuation 10) gets position 2, pays 8, and therefore her utility is 1·(10 − 8) = 2. Player 2 can always bid more than the other players, and by that cause some other player to be positioned second; since the mediator is required to be valid, it must be that the mediator submits not more than 12 on behalf of both players 1 and 3. But then player 2 can bid 13 and win the first position; therefore player 2's utility will be 2(10 − 13/4) = 13.5 > 2. This contradicts the assumption that m is a valid mediator that implements the VCG outcome function in G.

To summarize, we have shown sufficient conditions for transforming a large class of position auctions into the VCG position auction by mediation. Moreover, by dropping any of our conditions we get that such a transformation might not be feasible. In the next subsections we provide classes of interesting position auctions which can be transformed to the VCG position auction by mediation. These auctions satisfy the conditions of Theorem 3.
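The VCG-cover construction in Example 5 can be verified mechanically. The sketch below is our own (helper names ours): it builds the bid profile b(v) from the example and checks that G's payments reproduce p^vcg_1(v) = (v_(2) + v_(3))/2 and p^vcg_2(v) = v_(3).

```python
def example5_bids(v):
    """b(v) from Example 5: the highest-value player bids
    2*v_(2) + 2*v_(3), the second bids v_(3), the third bids v_(3)/2."""
    order = sorted(range(3), key=lambda i: (-v[i], i))
    vo = sorted(v, reverse=True)              # v_(1) >= v_(2) >= v_(3)
    b = [0.0] * 3
    b[order[0]] = 2 * vo[1] + 2 * vo[2]
    b[order[1]] = vo[2]
    b[order[2]] = vo[2] / 2
    return b

def example5_payments(b):
    """G's payment scheme in Example 5: p1(b) = b_(1)/4, p2(b) = b_(2)."""
    bo = sorted(b, reverse=True)
    return bo[0] / 4, bo[1]
```

For v = (12, 10, 8) this gives b(v) = (36, 8, 4) and payments (9, 8), exactly the VCG payments ((10 + 8)/2, 8). The deviation in the example is equally easy to check by hand: bidding 13 directly wins position 1 at price 13/4, for utility 2(10 − 13/4) = 13.5, which exceeds the T-strategy utility 2.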
However, in order to use Theorem 3, one has to check that a given position auction G is a VCG cover. In the full version of the paper, before we apply this theorem, we present another useful theorem that gives sufficient conditions guaranteeing that G is a VCG cover.

5.1 Generalized next-price position auctions

In a generalized next-price auction the payment scheme is of the following form: for every j ∈ K and for every b ∈ B, p_j(b) = b_(l(j)), where l(j) is an integer such that l(j) > j.⁹ We show:

Proposition 1. Let G be a generalized next-price position auction. There exists a valid mediator that implements φ^vcg in G if and only if the following two conditions hold: (i) l(j + 1) > l(j) for j = 1, ..., m − 1, and (ii) l(m) ≤ n.

5.2 k-next-price position auctions

In k-next-price position auctions the payment scheme is defined as follows: for every j ∈ K and for every b, p_j(b) = b_(j+k). k-next-price position auctions are, in particular, generalized next-price position auctions. Therefore Proposition 1 yields as a corollary:

Proposition 2. Let k ≥ 1, and let G be a k-next-price position auction. There exists a valid mediator that implements φ^vcg in G if and only if n ≥ m + k − 1.

5.3 Weighted next-price position auctions

In weighted next-price position auctions the payment schemes are of the following form: for every j ∈ K and for every b ∈ B, p_j(b) = b_(j+1)/c_j, where c_j ≥ 1.

Proposition 3. Let G be a weighted next-price position auction with the weights c_1, c_2, ..., c_m. There exists a valid mediator that implements φ^vcg in G if and only if c_1 ≥ ··· ≥ c_m.

5.4 Google-like position auctions

Google-like ad auctions are slightly different from next-price auctions.
In these auctions the click-through rate of an ad i in position j is the product of the quality of ad i, β_i > 0, and the position click-through rate α_j > 0.¹⁰ Players are ranked in the positions by b_iβ_i.

Let b ∈ B. Let δ̃(b, j) be defined as follows: for every j ∈ K, let δ̃(b, j) be the player i that obtains position j; in case there is more than one candidate with the same score for position m, the player is chosen among them via the tie-breaking rule γ̃. If player i obtains position j ∈ K then she pays p_j(b) = β_{δ̃(b,j+1)} b_{δ̃(b,j+1)} / β_i. Therefore player i's utility will be

α_j β_i (v_i − b_{δ̃(b,j+1)} β_{δ̃(b,j+1)} / β_i) = α_j (v_i β_i − b_{δ̃(b,j+1)} β_{δ̃(b,j+1)}).

Hence, by denoting ṽ_i = v_iβ_i for every i ∈ N and applying Theorem 2, we obtain:

Proposition 4. There exists a valid mediator which implements the VCG outcome function in the Google-like position auction.

⁹ Recall that b_(j) = 0 for every j > n.
¹⁰ See e.g. [17].

6. SELF-PRICE POSITION AUCTIONS

Let G be a self-price position auction as described in Section 2. In Example 1 we showed that when there is one position and two players, the VCG outcome function is implemented by a valid mediator in this auction. The proof in this example can easily be generalized to show that the VCG outcome function can be implemented by a valid mediator in a self-price position auction in which there is one position and an arbitrary number of players, n ≥ 2. Next we show that it is impossible to implement the VCG outcome function, even by a non-valid mediator, in a self-price position auction which has more than one position (m > 1).

Theorem 6. Let G be a self-price position auction with more than one position.
There is no mediator that implements the VCG outcome function in G.

Proof. Let v ∈ V be the following valuation profile: v_n = 10 and v_1 = v_2 = ··· = v_{n−1} = 5. The VCG outcome function assigns to this v an allocation in which player n receives position 1 and player 1 receives position 2; the payments of players n and 1 are both equal to 5. In order to implement such an outcome, a mediator must bid 5 on behalf of player n (so that this player pays 5), and it must bid less than 5 on behalf of every other player, because otherwise another player receives position 1. Note that the bid of any other player cannot equal 5, because every other player has a higher priority than n. In particular, even if player 1 does get position 2, he will pay less than 5. Hence, no mediator can implement the VCG outcome function in G. □

The proof of Theorem 6 heavily uses the fixed priority rule assumption. However, as we have already said, all our results, including this theorem, hold also for the tie-breaking rule defined by the random priority rule. The proof of the impossibility theorem for the random priority rule uses the fact that the particular bad priority rule used in the proof of Theorem 6 has positive probability.

As we previously discussed, the fixed and random priority rules are just convenient ways to model the first-arrival rule, which is common in practice. When one attempts to directly model position auctions that use the first-arrival rule without these modeling choices, one faces many modeling problems. In particular, it is not clear how to model a position auction with the first-arrival rule as a game with incomplete information. To do this, one has to allow a player not only to submit a bid but also to decide about the time of the bid.
This raises many additional modeling problems, such as determining the relationship between the time a player decides to submit a bid and the time at which this bid is actually recorded. Hence, efficient modeling as a game may be intractable. Nevertheless, in the next section we analyze mediators in position auctions which use the first-arrival rule. We will define ex post equilibrium and the notion of implementation by mediation without explicitly modeling well-defined games. We will show that in this case there is a way to implement the VCG outcome function in a self-price position auction. Moreover, we will find a valid mediator that does the job.

7. POSITION AUCTIONS WITH THE FIRST-ARRIVAL RULE

Let G be a position auction with the first-arrival rule. Every mediator for G has the ability to determine the order in which the bids it submits on behalf of the players are recorded; it can simply submit the bids sequentially, waiting for a confirmation before submitting the next bid. We need the following notation. Every order of bidding can be described by some γ ∈ Γ: i bids before k if and only if γ_i < γ_k. Hence, an order of bids can serve as a priority rule. For every order of bids γ and a vector of bids b, we define s(b, γ, i) as the position assigned to i. We denote the payment of i, when the vector of bids is b and the order of bidding is γ, by q_i(b, γ) = p_{s(b,γ,i)}(b), and we denote by w_i(v_i, b, γ) the utility of i.

A mediator for G should determine the bids of the players who use its services and also the order of bids, as a function of the reported types.
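The order-dependent allocation s(b, γ, ·) is the usual ranking of positive bids, with the recorded order γ acting as the priority rule. A minimal sketch (the function name is ours):

```python
def allocate_with_order(bids, gamma, m):
    """s(b, gamma, .): the allocation under bid profile b when ties
    are broken by the recorded order gamma (gamma_i < gamma_k means
    i's bid was recorded earlier). Positions are 1..m, with m+1 for
    unassigned participants and -1 for non-participants."""
    n = len(bids)
    order = sorted((i for i in range(n) if bids[i] > 0),
                   key=lambda i: (-bids[i], gamma[i]))
    s = [-1] * n
    for rank, i in enumerate(order, start=1):
        s[i] = rank if rank <= m else m + 1
    return s
```

With bids (6, 7, 6), m = 2 positions, and recorded order γ = (2, 1, 3), the tie between the first and third bidders is resolved in favor of the earlier-recorded bid, giving the allocation (2, 1, 3); under γ = (3, 1, 2) it becomes (3, 1, 2).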
However, all mediators discussed in this paper will use the same rule to determine the order of bids: if all players report the vector of types ˆv, the mediator uses the order of bids γˆv, defined as follows: γˆv i < γˆv k if and only if ˆvi > ˆvk, or ˆvi = ˆvk and i < k. For example, if n = 3 and the reported types are ˆv = (6, 7, 6), then γˆv = (2, 1, 3). If only a strict subset of the players use the mediator's services, the mediator applies the same order-of-bids rule to this subset. A mediator for a position auction with the first-arrival rule is therefore defined by a vector m = (mS)S⊆N. However, such a mediator is called a directed mediator, in order to stress the fact that it determines not only the bids but also the order of bids. To summarize: if all players use the directed mediator m, and the reported types are ˆv, then the directed mediator bids mN(ˆv)i on behalf of i, and i receives the position s(ˆv, γˆv, i) and pays qi(mN(ˆv), γˆv). If only the subset S uses the mediator's services, the reported types are ˆvS, and the other players bid b−S directly, then the actual order of bids is not uniquely determined. If this order is γ, then the position of i ∈ N is s(b, γ, i), and its payment is qi(b, γ), where b = (mS(ˆvS), b−S). In particular, if every player uses the T-strategy and the players' profile of types is v, then the outcome generated by the directed mediator is
ψm(v) = (s(v, γv), q(mN(v), γv)).
But why should the players use the T-strategy? Assume all players but i use the T-strategy. If player i deviates from the T-strategy by reporting a false type to the directed mediator, the resulting outcome is well-defined.
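The order-of-bids rule γˆv defined above is easy to make concrete. The following is a minimal sketch (the function name `bid_order` is ours, not the paper's): higher reported types get earlier priority ranks, with ties broken in favor of the lower index.

```python
def bid_order(reported):
    """Priority vector gamma for a profile of reported types:
    gamma[i] < gamma[k] iff reported[i] > reported[k], or they are
    equal and i < k (ties broken in favor of the lower index)."""
    order = sorted(range(len(reported)), key=lambda i: (-reported[i], i))
    gamma = [0] * len(reported)
    for rank, i in enumerate(order, start=1):
        gamma[i] = rank  # 1-based priority rank of player i
    return gamma

print(bid_order([6, 7, 6]))  # [2, 1, 3], as in the example above
```

The same function applies unchanged to a strict subset S of the players, since the mediator uses the same rule restricted to the reported types of S.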
On the other hand, when this player sends a bid directly to the auctioneer, the resulting outcome is not clear, because the order of bids is not clear.11
11 It is clear, however, that the resulting order γ is consistent with the well-defined order of bids of N \ i.
A good directed mediator would be one under which no player wants to deviate from the T-strategy, regardless of the order in which the bids are recorded as a result of his deviation. More specifically:
Definition: Let G be a position auction with the first-arrival rule, and let m be a directed mediator for G. The T-strategy profile is an ex post equilibrium with respect to m if for every player i and type vi, and for every vector of types of the other players, v−i, the following two conditions hold:
F1: i is not better off when he gives the directed mediator the right of play and reports a false type. That is, for every ˆvi ∈ Vi,
wi(vi, mN(vi, v−i), γ(vi,v−i)) ≥ wi(vi, mN(ˆvi, v−i), γ(ˆvi,v−i)).
F2: i is not better off when he bids directly, independently of the resulting order of recorded bids. That is, for every bi ∈ Bi, and for every γ ∈ Γ that is consistent with the order of bids of the members of N \ i resulting from the vector of types v−i,
wi(vi, mN(vi, v−i), γ(vi,v−i)) ≥ wi(vi, (bi, mN\i(v−i)), γ).
The notion of valid directed mediators is analogously defined:
Definition: Let G be a position auction with the first-arrival rule.
A directed mediator for G is valid if, for every player, using the T-strategy guarantees a non-negative level of utility. Formally, a directed mediator m for G is valid if for every player i with type vi, for every subset S ⊆ N such that i ∈ S, for every vS\i, and for every b−S, wi(vi, (mS(vS), b−S), γ) ≥ 0 for every γ ∈ Γ that is consistent with the standard order of bids of S determined by the mediator when the reported types are vS.
The notion of implementation by mediation remains as before: the directed mediator m implements the VCG outcome function in G if ψm = φvcg.
Our previous results remain true for directed mediators for position auctions with the first-arrival rule. Next we show that, in contrast to Theorem 6, it is possible to implement the VCG outcome function in every self-price position auction with the first-arrival rule.
Theorem 7. Let G = G(α, p) be the self-price position auction with the first-arrival rule. There exists a valid directed mediator that implements the VCG outcome function in G.
In the following theorem we provide sufficient conditions for implementing the VCG outcome function in a position auction with the first-arrival rule. A special characteristic of auctions satisfying these sufficient conditions is that players' payments may depend also on their own bid, in contrast to the auctions discussed in Theorem 3. The long proof of this theorem is in the spirit of all previous proofs, and is therefore omitted.
Theorem 8. Let G = G(α, p) be a position auction with the first-arrival rule. If the following conditions hold, then there exists a valid directed mediator for G that implements the VCG outcome function in G.
1. For every v ∈ V there exists b ∈ B such that pj(b) = v(j) for every j ∈ K.
2. G is monotone.
3. pj(b) ≥ pj+1(b) for every j ∈ K and every b ∈ B.
4.
For every j ∈ K and every b1, b2 ∈ B such that b1(l) = b2(l) for every l ≥ j, pj(b1) = pj(b2).
8. REFERENCES
[1] R.J. Aumann. Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1:67-96, 1974.
[2] N.A.R. Bhat, K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding rings revisited. Working paper, 2005.
[3] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. NBER working paper 11765, November 2005.
[4] J. Feng, H.K. Bhargava, and D.M. Pennock. Implementing sponsored search in web search engines: Computational evaluation of alternative mechanisms. INFORMS Journal on Computing, 2006.
[5] F.M. Forges. An approach to communication equilibria. Econometrica, 54(6):1375-1385, 1986.
[6] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:1217-1239, 1987.
[7] R. Holzman, N. Kfir-Dahav, D. Monderer, and M. Tennenholtz. Bundling equilibrium in combinatorial auctions. Games and Economic Behavior, 47:104-123, 2004.
[8] E. Kalai and R.W. Rosenthal. Arbitration of two-party disputes under ignorance. International Journal of Game Theory, 7:65-72, 1976.
[9] S. Lahaie. An analysis of alternative slot auction designs for sponsored search. In Proceedings of the 7th ACM Conference on Electronic Commerce, pages 218-227, 2006.
[10] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2003.
[11] R. McAfee and J. McMillan. Bidding rings. American Economic Review, 82:579-599, 1992.
[12] A. Mehta, A. Saberi, V. Vazirani, and U. Vazirani. AdWords and generalized online matching.
In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2005.
[13] D. Monderer and M. Tennenholtz. K-Implementation. Journal of Artificial Intelligence Research (JAIR), 21:37-62, 2004.
[14] D. Monderer and M. Tennenholtz. Strong mediated equilibrium. In Proceedings of AAAI, 2006.
[15] R.B. Myerson. Multistage games with communication. Econometrica, 54(2):323-358, 1986.
[16] O. Rozenfeld and M. Tennenholtz. Routing mediators. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 1488-1493, 2007.
[17] H. Varian. Position auctions. Technical report, UC Berkeley, 2006.", "keywords": "mediator;richer class of position auction;electronic commerce;vcg outcome function;t-strategy;self-price position auction;next-price position auction;agent;auction;position auction;equilibrium;ex post equilibrium;multi-agent system"}
-{"name": "test_J-2", "title": "Worst-Case Optimal Redistribution of VCG Payments", "abstract": "For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0. If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the non-deficit property. In this paper, we extend this result in a restricted setting. We study allocation settings where there are multiple indistinguishable units of a single good, and agents have unit demand. (For this specific setting, Cavallo's mechanism coincides with a mechanism proposed by Bailey in 1997 [2].) Here we propose a family of mechanisms that redistribute some of the VCG payment back to the agents. All mechanisms in the family are efficient, strategy-proof, individually rational, and never incur a deficit. The family includes the Bailey-Cavallo mechanism as a special case. We then provide an optimization model for finding the optimal mechanism inside the family (that is, the mechanism that maximizes redistribution in the worst case), and show how to cast this model as a linear program. We give both numerical and analytical solutions of this linear program, and the (unique) resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism (in the worst case).
Finally, we prove that the obtained mechanism is optimal among all anonymous deterministic mechanisms that satisfy the above properties.", "fulltext": "1. INTRODUCTION
Many important problems in computer science and electronic commerce can be modeled as resource allocation problems. In such problems, we want to allocate the resources (or items) to the agents that value them the most. Unfortunately, agents' valuations are private knowledge, and self-interested agents will lie about their valuations if this is to their benefit. One solution is to auction off the items, possibly in a combinatorial auction where agents can bid on bundles of items. There exist ways of determining the payments that the agents make in such an auction that incentivize the agents to report their true valuations; that is, the payments make the auction strategy-proof. One very general way of doing so is to use the VCG mechanism [23, 4, 12]. (The VCG mechanism is also known as the Clarke mechanism or, in the specific context of auctions, the Generalized Vickrey Auction.)
Besides strategy-proofness, the VCG mechanism has several other nice properties in the context of resource allocation problems. It is efficient: the chosen allocation always maximizes the sum of the agents' valuations. It is also (ex post) individually rational: participating in the mechanism never makes an agent worse off than not participating. Finally, it has a no-deficit property: the sum of the agents' payments is always nonnegative.
In many settings, another property that would be desirable is (strong) budget balance, meaning that the payments sum to exactly 0. Suppose the agents are trying to distribute some resources among themselves that do not have a previous owner.
For example, the agents may be trying to allocate the right to use a shared good on a given day. Or, the agents may be trying to allocate a resource that they have collectively constructed, discovered, or otherwise obtained. If the agents use an auction to allocate these resources, and the sum of the agents' payments in the auction is positive, then this surplus payment must leave the system of the agents (for example, the agents must give the money to an outside party, or burn it). Naïve redistribution of the surplus payment (e.g. each of the n agents receives 1/n of the surplus) will generally result in a mechanism that is not strategy-proof (e.g. in a Vickrey auction, the second-highest bidder would want to increase her bid to obtain a larger redistribution payment). Unfortunately, the VCG mechanism is not budget balanced: typically, there is surplus payment. Moreover, in general settings, it is in fact impossible to design mechanisms that satisfy budget balance in addition to the other desirable properties [16, 11, 21].
In light of this impossibility result, several authors have obtained budget balance by sacrificing some of the other desirable properties [2, 6, 22, 5]. Another approach, which is perhaps preferable, is to use a mechanism that is more budget balanced than the VCG mechanism and maintains all the other desirable properties. One way of trying to design such a mechanism is to redistribute some of the VCG payment back to the agents in a way that will not affect the agents' incentives (so that strategy-proofness is maintained), and that will maintain the other properties. In 2006, Cavallo [3] pursued exactly this idea, and designed a mechanism that redistributes a large amount of the total VCG payment while maintaining all of the other desirable properties of the VCG mechanism.
For example, in a single-item auction (where the VCG mechanism coincides with the second-price sealed-bid auction), the amount redistributed to bidder i by Cavallo's mechanism is 1/n times the second-highest bid among bids other than i's bid. The total amount redistributed is at most the second-highest bid overall, and the redistribution to agent i does not affect i's incentives because it does not depend on i's own bid.
In this paper, we restrict our attention to a limited setting, and in this setting we extend Cavallo's result. We study allocation settings where there are multiple indistinguishable units of a single good, and all agents have unit demand, i.e. they want only a single unit. For this specific setting, Cavallo's mechanism coincides with a mechanism proposed by Bailey in 1997 [2]. Here we propose the family of linear VCG redistribution mechanisms. All mechanisms in this family are efficient, strategy-proof, individually rational, and never incur a deficit. The family includes the Bailey-Cavallo mechanism as a special case (with the caveat that we only study allocation settings with multiple indistinguishable units of a single good and unit demand, while Bailey's and Cavallo's mechanisms can be applied outside these settings as well). We then provide an optimization model for finding the optimal mechanism inside the family, based on worst-case analysis. Both numerical and analytical solutions of this model are provided, and the resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism (in the worst case). For example, for the problem of allocating a single unit, when the number of agents is 10, our mechanism always redistributes more than 98% of the total VCG payment back to the agents (whereas the Bailey-Cavallo mechanism redistributes only 80% in the worst case).
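For concreteness, the single-item redistribution just described can be sketched as follows (a minimal illustration; the function name is ours). Each bidder receives 1/n times the second-highest bid among the other bidders' bids, which does not depend on the bidder's own bid:

```python
def cavallo_single_item(bids):
    """Redistribution payments in a single-item (second-price) auction:
    bidder i receives 1/n times the second-highest bid among the other
    bidders' bids, so the payment is independent of i's own bid."""
    n = len(bids)
    z = []
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        z.append(others[1] / n)
    return z

bids = [10, 8, 5, 3]
print(cavallo_single_item(bids))  # [1.25, 1.25, 2.0, 2.0]
# the total redistributed (6.5) is at most the second-highest bid (8)
```

Note that the two highest bidders receive less than the others: for them, the second-highest other bid is the overall third-highest bid.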
Finally, we prove that our mechanism is in fact optimal among all anonymous deterministic mechanisms (even nonlinear ones) that satisfy the desirable properties.
Around the same time, the same mechanism was independently derived by Moulin [19].1 Moulin actually pursues a different objective (also based on worst-case analysis): whereas our objective is to maximize the percentage of VCG payments that are redistributed, Moulin tries to minimize the overall payments from agents as a percentage of efficiency. It turns out that the resulting mechanisms are the same. Towards the end of this paper, we consider dropping the individual rationality requirement, and show that this does not change the optimal mechanism for our objective. For Moulin's objective, dropping individual rationality does change the optimal mechanism (but only if there are multiple units).
2. PROBLEM DESCRIPTION
Let n denote the number of agents, and let m denote the number of units. We only consider the case where m < n (otherwise the problem becomes trivial). We also assume that m and n are always known. (This assumption is not harmful: in environments where anyone can join the auction, running a redistribution mechanism is typically not a good idea anyway, because everyone would want to join to collect part of the redistribution.)
Let the set of agents be {a1, a2, . . . , an}, where ai is the agent with the ith highest reported value ˆvi; that is, we have ˆv1 ≥ ˆv2 ≥ . . . ≥ ˆvn ≥ 0. Let vi denote the true value of ai. Given that the mechanism is strategy-proof, we can assume vi = ˆvi.
Under the VCG mechanism, each agent among a1, . . . , am wins a unit, and pays ˆvm+1 for this unit. Thus, the total VCG payment equals mˆvm+1. When m = 1, this is the second-price or Vickrey auction.
We modify the mechanism as follows.
After running the original VCG mechanism, the center returns to each agent ai some amount zi, agent ai's redistribution payment. We do not allow zi to depend on ˆvi; because of this, ai's incentives are unaffected by this redistribution payment, and the mechanism remains strategy-proof.
3. LINEAR VCG REDISTRIBUTION MECHANISMS
We are now ready to introduce the family of linear VCG redistribution mechanisms. Such a mechanism is defined by a vector of constants c0, c1, . . . , cn−1. The amount that the mechanism returns to agent ai is zi = c0 + c1ˆv1 + c2ˆv2 + . . . + ci−1ˆvi−1 + ciˆvi+1 + . . . + cn−1ˆvn. That is, an agent receives c0, plus c1 times the highest bid other than the agent's own bid, plus c2 times the second-highest other bid, etc. The mechanism is strategy-proof, because for all i, zi is independent of ˆvi. Also, the mechanism is anonymous. It is helpful to see the entire list of redistribution payments:
z1 = c0 + c1ˆv2 + c2ˆv3 + c3ˆv4 + . . . + cn−2ˆvn−1 + cn−1ˆvn
z2 = c0 + c1ˆv1 + c2ˆv3 + c3ˆv4 + . . . + cn−2ˆvn−1 + cn−1ˆvn
z3 = c0 + c1ˆv1 + c2ˆv2 + c3ˆv4 + . . . + cn−2ˆvn−1 + cn−1ˆvn
z4 = c0 + c1ˆv1 + c2ˆv2 + c3ˆv3 + . . . + cn−2ˆvn−1 + cn−1ˆvn
. . .
zi = c0 + c1ˆv1 + c2ˆv2 + . . . + ci−1ˆvi−1 + ciˆvi+1 + . . . + cn−1ˆvn
. . .
zn−2 = c0 + c1ˆv1 + c2ˆv2 + c3ˆv3 + . . . + cn−2ˆvn−1 + cn−1ˆvn
zn−1 = c0 + c1ˆv1 + c2ˆv2 + c3ˆv3 + . . . + cn−2ˆvn−2 + cn−1ˆvn
zn = c0 + c1ˆv1 + c2ˆv2 + c3ˆv3 + . . . + cn−2ˆvn−2 + cn−1ˆvn−1
1 We thank Rakesh Vohra for pointing us to Moulin's working paper.
Not all choices of the constants c0, . . .
, cn−1 produce a mechanism that is individually rational, and not all choices of the constants produce a mechanism that never incurs a deficit. Hence, to obtain these properties, we need to place some constraints on the constants.
To satisfy the individual rationality criterion, each agent's utility should always be non-negative. An agent that does not win a unit obtains a utility that is equal to the agent's redistribution payment. An agent that wins a unit obtains a utility that is equal to the agent's valuation for the unit, minus the VCG payment ˆvm+1, plus the agent's redistribution payment.
Consider agent an, the agent with the lowest bid. Since this agent does not win an item (m < n), her utility is just her redistribution payment zn. Hence, for the mechanism to be individually rational, the ci must be such that zn is always nonnegative. If the ci have this property, then it actually follows that zi is nonnegative for every i, for the following reason. Suppose there exists some i < n and some vector of bids ˆv1 ≥ ˆv2 ≥ . . . ≥ ˆvn ≥ 0 such that zi < 0. Then, consider the bid vector that results from replacing ˆvj by ˆvj+1 for all j ≥ i, and letting ˆvn = 0. If we omit ˆvn from this vector, the resulting vector is the same as the one that results from omitting ˆvi from the original vector. Therefore, an's redistribution payment under the new vector should be the same as ai's redistribution payment under the old vector; but this payment is negative.
If all redistribution payments are always nonnegative, then the mechanism must be individually rational (because the VCG mechanism is individually rational, and the redistribution payment only increases an agent's utility).
Therefore, the mechanism is individually rational if and only if for any bid vector, zn ≥ 0.
To satisfy the non-deficit criterion, the sum of the redistribution payments should be less than or equal to the total VCG payment. So for any bid vector ˆv1 ≥ ˆv2 ≥ . . . ≥ ˆvn ≥ 0, the constants ci should make z1 + z2 + . . . + zn ≤ mˆvm+1.
We define the family of linear VCG redistribution mechanisms to be the set of all redistribution mechanisms corresponding to constants ci that satisfy the above constraints (so that the mechanisms will be individually rational and have the no-deficit property). We now give two examples of mechanisms in this family.
Example 1 (Bailey-Cavallo mechanism): Consider the mechanism corresponding to cm+1 = m/n and ci = 0 for all other i. Under this mechanism, each agent receives a redistribution payment of m/n times the (m+1)th highest bid from another agent. Hence, a1, . . . , am+1 receive a redistribution payment of (m/n)ˆvm+2, and the others receive (m/n)ˆvm+1. Thus, the total redistribution payment is (m+1)(m/n)ˆvm+2 + (n−m−1)(m/n)ˆvm+1. This redistribution mechanism is individually rational, because all the redistribution payments are nonnegative, and never incurs a deficit, because (m+1)(m/n)ˆvm+2 + (n−m−1)(m/n)ˆvm+1 ≤ n(m/n)ˆvm+1 = mˆvm+1. (We note that for this mechanism to make sense, we need n ≥ m + 2.)
Example 2: Consider the mechanism corresponding to cm+1 = m/(n−m−1), cm+2 = −m(m+1)/((n−m−1)(n−m−2)), and ci = 0
In this mechanism, each agent receives a\nredistribution payment of m\nn\u2212m\u22121\ntimes the (m + 1)th\nhighest reported value from other agents, minus m(m+1)\n(n\u2212m\u22121)(n\u2212m\u22122)\ntimes the (m+2)th highest reported value from other agents.\nThus, the total redistribution payment is m\u02c6vm+1 \u2212\nm(m+1)(m+2)\n(n\u2212m\u22121)(n\u2212m\u22122)\n\u02c6vm+3. If n \u2265 2m+3 (which is equivalent to\nm\nn\u2212m\u22121\n\u2265 m(m+1)\n(n\u2212m\u22121)(n\u2212m\u22122)\n), then each agent always receives\na nonnegative redistribution payment, thus the mechanism\nis individually rational. Also, the mechanism never incurs\na deficit, because the total VCG payment is m\u02c6vm+1, which\nis greater than the amount m\u02c6vm+1 \u2212 m(m+1)(m+2)\n(n\u2212m\u22121)(n\u2212m\u22122)\n\u02c6vm+3\nthat is redistributed.\nWhich of these two mechanisms is better? Is there another\nmechanism that is even better? This is what we study in\nthe next section.\n4. OPTIMAL REDISTRIBUTION\nMECHANISMS\nAmong all linear VCG redistribution mechanisms, we would\nlike to be able to identify the one that redistributes the\ngreatest percentage of the total VCG payment.2\nThis is not\na well-defined notion: it may be that one mechanism\nredistributes more on some bid vectors, and another more on\nother bid vectors. We emphasize that we do not assume that\na prior distribution over bidders\" valuations is available, so\nwe cannot compare them based on expected redistribution.\nBelow, we study three well-defined ways of comparing\nredistribution mechanisms: best-case performance, dominance,\nand worst-case performance.\nBest-case performance. One way of evaluating a\nmechanism is by considering the highest redistribution\npercentage that it achieves. Consider the previous two examples.\nFor the first example, the total redistribution payment is\n(m + 1)m\nn\n\u02c6vm+2 + (n \u2212 m \u2212 1)m\nn\n\u02c6vm+1. 
When ˆvm+2 = ˆvm+1, this is equal to the total VCG payment mˆvm+1. Thus, this mechanism redistributes 100% of the total VCG payment in the best case. For the second example, the total redistribution payment is mˆvm+1 − (m(m+1)(m+2)/((n−m−1)(n−m−2)))ˆvm+3. When ˆvm+3 = 0, this is equal to the total VCG payment mˆvm+1. Thus, this mechanism also redistributes 100% of the total VCG payment in the best case.
Moreover, there are actually infinitely many mechanisms that redistribute 100% of the total VCG payment in the best case; for example, any convex combination of the above two will redistribute 100% if both ˆvm+2 = ˆvm+1 and ˆvm+3 = 0.
Dominance. Inside the family of linear VCG redistribution mechanisms, we say that one mechanism dominates another mechanism if the first one redistributes at least as much as the other for any bid vector. For the previous two examples, neither dominates the other, because they each redistribute 100% in different cases. It turns out that there is no mechanism in the family that dominates all other mechanisms in the family. For suppose such a mechanism exists. Then, it should dominate both examples above. Consider the remaining VCG payment (the VCG payment that failed to be redistributed). The remaining VCG payment of the dominant mechanism should be 0 whenever ˆvm+2 = ˆvm+1 or ˆvm+3 = 0. Now, the remaining VCG payment is a linear function of the ˆvi (linear redistribution), and therefore also a polynomial function. The above implies that this function can be written as (ˆvm+2 − ˆvm+1)(ˆvm+3)P(ˆv1, ˆv2, . . .
, ˆvn), where P is a polynomial function. But since the function must be linear (it has degree at most 1), it follows that P = 0. Thus, a dominant mechanism would always redistribute all of the VCG payment, which is not possible. (If it were possible, then our worst-case optimal redistribution mechanism would also always redistribute all of the VCG payment, and we will see later that it does not.)
2 The percentage redistributed seems the natural criterion to use, among other things because it is scale-invariant: if we multiply all bids by the same positive constant (for example, if we change the units by re-expressing the bids in euros instead of dollars), we would not want the behavior of our mechanism to change.
Worst-case performance. Finally, we can evaluate a mechanism by considering the lowest redistribution percentage that it guarantees. For the first example, the total redistribution payment is (m+1)(m/n)ˆvm+2 + (n−m−1)(m/n)ˆvm+1, which is greater than or equal to (n−m−1)(m/n)ˆvm+1. So in the worst case, which occurs when ˆvm+2 = 0, the percentage redistributed is (n−m−1)/n. For the second example, the total redistribution payment is mˆvm+1 − (m(m+1)(m+2)/((n−m−1)(n−m−2)))ˆvm+3, which is greater than or equal to mˆvm+1(1 − (m+1)(m+2)/((n−m−1)(n−m−2))). So in the worst case, which occurs when ˆvm+3 = ˆvm+1, the percentage redistributed is 1 − (m+1)(m+2)/((n−m−1)(n−m−2)). Since we assume that the number of agents n and the number of units m are known, we can determine which example mechanism has better worst-case performance by comparing the two quantities.
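The two closed-form worst-case fractions can be compared directly. A small sketch using exact rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def wc_bailey_cavallo(n, m):
    # worst-case fraction redistributed by Example 1: (n - m - 1) / n
    return Fraction(n - m - 1, n)

def wc_example2(n, m):
    # worst-case fraction redistributed by Example 2:
    # 1 - (m + 1)(m + 2) / ((n - m - 1)(n - m - 2))
    return 1 - Fraction((m + 1) * (m + 2), (n - m - 1) * (n - m - 2))

print(wc_bailey_cavallo(6, 1), wc_example2(6, 1))    # 2/3 1/2
print(wc_bailey_cavallo(12, 1), wc_example2(12, 1))  # 5/6 14/15
```

As the printed values show, which example wins depends on the pair (n, m), matching the comparison carried out in the text.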
When n = 6 and m = 1, for the first example (the Bailey-Cavallo mechanism), the percentage redistributed in the worst case is 2/3, and for the second example, this percentage is 1/2, which implies that for this pair of n and m, the first mechanism has better worst-case performance. On the other hand, when n = 12 and m = 1, for the first example, the percentage redistributed in the worst case is 5/6, and for the second example, this percentage is 14/15, which implies that this time the second mechanism has better worst-case performance.
Thus, it seems most natural to compare mechanisms by the percentage of total VCG payment that they redistribute in the worst case. This percentage is undefined when the total VCG payment is 0. To deal with this, technically, we define the worst-case redistribution percentage as the largest k such that the total amount redistributed is at least k times the total VCG payment, for all bid vectors. (Hence, as long as the total amount redistributed is at least 0 when the total VCG payment is 0, these cases do not affect the worst-case percentage.) This corresponds to the following optimization problem:
Maximize k (the percentage redistributed in the worst case)
Subject to:
For every bid vector ˆv1 ≥ ˆv2 ≥ . . . ≥ ˆvn ≥ 0:
zn ≥ 0 (individual rationality)
z1 + z2 + . . . + zn ≤ mˆvm+1 (non-deficit)
z1 + z2 + . . . + zn ≥ kmˆvm+1 (worst-case constraint)
We recall that zi = c0 + c1ˆv1 + c2ˆv2 + . . . + ci−1ˆvi−1 + ciˆvi+1 + . . . + cn−1ˆvn.
5. TRANSFORMATION TO LINEAR PROGRAMMING
The optimization problem given in the previous section can be rewritten as a linear program, based on the following observations.
Claim 1. If c0, c1, . . . , cn−1 satisfy both the individual rationality and the non-deficit constraints, then ci = 0 for i = 0, . . . , m.
Proof. First, let us prove that c0 = 0.
Consider the bid vector in which ˆvi = 0 for all i. To obtain individual rationality, we must have c0 ≥ 0. To satisfy the non-deficit constraint, we must have c0 ≤ 0. Thus we know c0 = 0. Now, if ci = 0 for all i, there is nothing to prove. Otherwise, let j = min{i | ci ≠ 0}. Assume that j ≤ m. We recall that we can write the individual rationality constraint as follows: zn = c0 + c1ˆv1 + c2ˆv2 + c3ˆv3 + . . . + cn−2ˆvn−2 + cn−1ˆvn−1 ≥ 0 for any bid vector. Let us consider the bid vector in which ˆvi = 1 for i ≤ j and ˆvi = 0 for the rest. In this case zn = cj, so we must have cj ≥ 0. The non-deficit constraint can be written as follows: z1 + z2 + . . . + zn ≤ mˆvm+1 for any bid vector. Consider the same bid vector as above. We have zi = 0 for i ≤ j, because for these bids, the jth highest other bid has value 0, so all the ci that are nonzero are multiplied by 0. For i > j, we have zi = cj, because the jth highest other bid has value 1, and all lower bids have value 0. So the non-deficit constraint tells us that cj(n − j) ≤ mˆvm+1. Because j ≤ m, ˆvm+1 = 0, so the right-hand side is 0. We also have n − j > 0 because j ≤ m < n. So cj ≤ 0. Because we have already established that cj ≥ 0, it follows that cj = 0; but this is contrary to assumption. So j > m.
Incidentally, this claim also shows that if m = n − 1, then ci = 0 for all i. Thus, we are stuck with the VCG mechanism. From here on, we only consider the case where m < n − 1.
Claim 2. The individual rationality constraint can be written as follows: Σ_{i=m+1}^{j} ci ≥ 0 for j = m + 1, . . . , n − 1.
Before proving this claim, we introduce the following lemma.
Lemma 1. Given a positive integer k and a set of real constants s1, s2, . . . , sk, we have (s1t1 + s2t2 + . . . + sktk ≥ 0 for any t1 ≥ t2 ≥ . . .
≥ t_k ≥ 0) if and only if (Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k).

Proof. Let d_i = t_i − t_{i+1} for i = 1, 2, ..., k−1, and d_k = t_k. Then (s_1 t_1 + s_2 t_2 + ... + s_k t_k ≥ 0 for any t_1 ≥ t_2 ≥ ... ≥ t_k ≥ 0) is equivalent to ((Σ_{i=1}^{1} s_i) d_1 + (Σ_{i=1}^{2} s_i) d_2 + ... + (Σ_{i=1}^{k} s_i) d_k ≥ 0 for any set of arbitrary nonnegative d_j). When Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k, the above inequality is obviously true. If for some j, Σ_{i=1}^{j} s_i < 0, then if we set d_j > 0 and d_i = 0 for all i ≠ j, the above inequality becomes false. So Σ_{i=1}^{j} s_i ≥ 0 for j = 1, 2, ..., k is both necessary and sufficient.

We are now ready to present the proof of Claim 2.

Proof. The individual rationality constraint can be written as z_n = c_0 + c_1 v̂_1 + c_2 v̂_2 + c_3 v̂_3 + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1} ≥ 0 for any bid vector v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_{n−1} ≥ v̂_n ≥ 0. We have already shown that c_i = 0 for i ≤ m. Thus, the above can be simplified to z_n = c_{m+1} v̂_{m+1} + c_{m+2} v̂_{m+2} + ... + c_{n−2} v̂_{n−2} + c_{n−1} v̂_{n−1} ≥ 0 for any bid vector. By the above lemma, this is equivalent to Σ_{i=m+1}^{j} c_i ≥ 0 for j = m+1, ..., n−1.

Claim 3. The non-deficit constraint and the worst-case constraint can also be written as linear inequalities involving only the c_i and k.

Proof. The non-deficit constraint requires that for any bid vector, z_1 + z_2 + ... + z_n ≤ m v̂_{m+1}, where z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n for i = 1, 2, ..., n. Because c_i = 0 for i ≤ m, we can simplify this inequality to

q_{m+1} v̂_{m+1} + q_{m+2} v̂_{m+2} + ... + q_n v̂_n ≥ 0, where
q_{m+1} = m − (n−m−1) c_{m+1}
q_i = −(i−1) c_{i−1} − (n−i) c_i, for i = m+2, ...
, n−1 (when m+2 > n−1, this set of equalities is empty), and
q_n = −(n−1) c_{n−1}.

By the above lemma, this is equivalent to Σ_{i=m+1}^{j} q_i ≥ 0 for j = m+1, ..., n. So, we can simplify further as follows:

q_{m+1} ≥ 0 ⟺ (n−m−1) c_{m+1} ≤ m
q_{m+1} + ... + q_{m+i} ≥ 0 ⟺ n Σ_{j=m+1}^{m+i−1} c_j + (n−m−i) c_{m+i} ≤ m, for i = 2, ..., n−m−1
q_{m+1} + ... + q_n ≥ 0 ⟺ n Σ_{j=m+1}^{n−1} c_j ≤ m

So, the non-deficit constraint can be written as a set of linear inequalities involving only the c_i.

The worst-case constraint can also be written as a set of linear inequalities, by the following reasoning. The worst-case constraint requires that for any bid vector, z_1 + z_2 + ... + z_n ≥ km v̂_{m+1}, where z_i = c_0 + c_1 v̂_1 + c_2 v̂_2 + ... + c_{i−1} v̂_{i−1} + c_i v̂_{i+1} + ... + c_{n−1} v̂_n for i = 1, 2, ..., n. Because c_i = 0 for i ≤ m, we can simplify this inequality to

Q_{m+1} v̂_{m+1} + Q_{m+2} v̂_{m+2} + ... + Q_n v̂_n ≥ 0, where
Q_{m+1} = (n−m−1) c_{m+1} − km
Q_i = (i−1) c_{i−1} + (n−i) c_i, for i = m+2, ..., n−1
Q_n = (n−1) c_{n−1}

By the above lemma, this is equivalent to Σ_{i=m+1}^{j} Q_i ≥ 0 for j = m+1, ..., n. So, we can simplify further as follows:

Q_{m+1} ≥ 0 ⟺ (n−m−1) c_{m+1} ≥ km
Q_{m+1} + ... + Q_{m+i} ≥ 0 ⟺ n Σ_{j=m+1}^{m+i−1} c_j + (n−m−i) c_{m+i} ≥ km, for i = 2, ..., n−m−1
Q_{m+1} + ... + Q_n ≥ 0 ⟺ n Σ_{j=m+1}^{n−1} c_j ≥ km

So, the worst-case constraint can also be written as a set of linear inequalities involving only the c_i and k.

Combining all the claims, we see that the original optimization problem can be transformed into the following linear program.

Variables: c_{m+1}, c_{m+2}, ...
, c_{n−1}, k
Maximize k (the percentage redistributed in the worst case)
Subject to:
Σ_{i=m+1}^{j} c_i ≥ 0 for j = m+1, ..., n−1
km ≤ (n−m−1) c_{m+1} ≤ m
km ≤ n Σ_{j=m+1}^{m+i−1} c_j + (n−m−i) c_{m+i} ≤ m, for i = 2, ..., n−m−1
km ≤ n Σ_{j=m+1}^{n−1} c_j ≤ m

6. NUMERICAL RESULTS

For selected values of n and m, we solved the linear program using GLPK (the GNU Linear Programming Kit). In the table below, we present the results for a single unit (m = 1). In the second column, we present 1 − k (the percentage of the total VCG payment that is not redistributed by the worst-case optimal mechanism in the worst case) instead of k, because writing k would require too many significant digits. Correspondingly, the third column displays the percentage of the total VCG payment that is not redistributed by the Bailey-Cavallo mechanism in the worst case (which is equal to 2/n).

[Figure 1: A comparison of the worst-case optimal mechanism (WO) and the Bailey-Cavallo mechanism (BC). The plot shows the worst-case redistribution percentage against the number of agents (5 to 30), for 1, 2, 3, and 4 units.]

n     1 − k       Bailey-Cavallo mechanism
3     66.7%       66.7%
4     42.9%       50.0%
5     26.7%       40.0%
6     16.1%       33.3%
7     9.52%       28.6%
8     5.51%       25.0%
9     3.14%       22.2%
10    1.76%       20.0%
20    3.62e−5     10.0%
30    5.40e−8     6.67e−2
40    7.09e−11    5.00e−2

The worst-case optimal mechanism significantly outperforms the Bailey-Cavallo mechanism in the worst case. Perhaps more surprisingly, the worst-case optimal mechanism sometimes does better in the worst case than the Bailey-Cavallo mechanism does on average, as the following example shows.

Recall that the total redistribution payment of the Bailey-Cavallo mechanism is (m+1)(m/n) v̂_{m+2} + (n − m −
1)(m/n) v̂_{m+1}. For the single-unit case, this simplifies to (2/n) v̂_3 + ((n−2)/n) v̂_2. Hence the percentage of the total VCG payment that is not redistributed is

(v̂_2 − (2/n) v̂_3 − ((n−2)/n) v̂_2) / v̂_2 = 2/n − (2/n)(v̂_3/v̂_2),

which has an expected value of E(2/n − (2/n)(v̂_3/v̂_2)) = 2/n − (2/n) E(v̂_3/v̂_2). Suppose the bid values are drawn from a uniform distribution over [0, 1]. The theory of order statistics tells us that the joint probability density function of v̂_2 and v̂_3 is f(v̂_3, v̂_2) = n(n−1)(n−2) v̂_3^{n−3} (1 − v̂_2) for v̂_2 ≥ v̂_3. Now, E(v̂_3/v̂_2) = ∫_0^1 ∫_0^{v̂_2} (v̂_3/v̂_2) f(v̂_3, v̂_2) dv̂_3 dv̂_2 = (n−2)/(n−1). So, the expected value of the remaining percentage is 2/n − (2/n)(n−2)/(n−1) = 2/(n(n−1)). For n = 20, this is 5.26e−3, whereas the remaining percentage for the worst-case optimal mechanism is 3.62e−5 in the worst case.

Let us present the optimal solution for the case n = 5 in detail. By solving the above linear program, we find that the optimal values for the c_i are c_2 = 11/45, c_3 = −1/9, and c_4 = 1/15. That is, the redistribution payment received by each agent is: 11/45 times the second highest bid among the other agents, minus 1/9 times the third highest bid among the other agents, plus 1/15 times the fourth highest bid among the other agents. The total amount redistributed is (11/15) v̂_2 + (4/15) v̂_3 − (4/15) v̂_4 + (4/15) v̂_5; in the worst case, (11/15) v̂_2 is redistributed. Hence, the percentage of the total VCG payment that is not redistributed is never more than 4/15 = 26.7%.

Finally, we compare the worst-case optimal mechanism to the Bailey-Cavallo mechanism for m = 1, 2, 3, 4 and n = m+2, ..., 30.
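The linear program above is small enough to verify with any off-the-shelf LP solver. The following sketch is our own illustration, not the GLPK code used for the experiments; it assumes Python with NumPy and SciPy available. It builds the constraints exactly as written above, with variables c_{m+1}, ..., c_{n−1}, k:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_optimal(n, m):
    """Solve the worst-case LP; variables are c_{m+1}, ..., c_{n-1}, k."""
    nc = n - m - 1                       # number of c_i variables
    A_ub, b_ub = [], []
    # individual rationality: prefix sums of the c_i are nonnegative
    for j in range(1, nc + 1):
        row = np.zeros(nc + 1)
        row[:j] = -1.0                   # -(c_{m+1} + ... + c_{m+j}) <= 0
        A_ub.append(row)
        b_ub.append(0.0)
    # for each i, the amount L_i redistributed on the i-th "step" bid vector
    # must satisfy k*m <= L_i <= m (worst-case and non-deficit constraints)
    for i in range(1, nc + 2):
        a = np.zeros(nc + 1)
        a[: i - 1] = n                   # n * (c_{m+1} + ... + c_{m+i-1})
        if i <= nc:
            a[i - 1] = n - m - i         # + (n-m-i) * c_{m+i}
        A_ub.append(a.copy())            # L_i <= m
        b_ub.append(float(m))
        lo = -a
        lo[-1] = m                       # k*m - L_i <= 0
        A_ub.append(lo)
        b_ub.append(0.0)
    obj = np.zeros(nc + 1)
    obj[-1] = -1.0                       # linprog minimizes, so minimize -k
    res = linprog(obj, A_ub=np.vstack(A_ub), b_ub=b_ub,
                  bounds=[(None, None)] * nc + [(None, 1.0)])
    return res.x[:nc], res.x[-1]

cs, k = worst_case_optimal(5, 1)
print(cs, k)   # approximately [11/45, -1/9, 1/15] and 11/15
```

Since the optimum is unique (Theorem 1 below), any LP solver recovers the same solution; for n = 5, m = 1 this matches the coefficients given above.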
These results are shown in Figure 1. We see that for any m, when n = m+2, the worst-case optimal mechanism has the same worst-case performance as the Bailey-Cavallo mechanism (actually, in this case, the worst-case optimal mechanism is identical to the Bailey-Cavallo mechanism). When n > m+2, the worst-case optimal mechanism outperforms the Bailey-Cavallo mechanism (in the worst case).

7. ANALYTICAL CHARACTERIZATION OF THE WORST-CASE OPTIMAL MECHANISM

We recall that our linear program has the following form:

Variables: c_{m+1}, c_{m+2}, ..., c_{n−1}, k
Maximize k (the percentage redistributed in the worst case)
Subject to:
Σ_{i=m+1}^{j} c_i ≥ 0 for j = m+1, ..., n−1
km ≤ (n−m−1) c_{m+1} ≤ m
km ≤ n Σ_{j=m+1}^{m+i−1} c_j + (n−m−i) c_{m+i} ≤ m, for i = 2, ..., n−m−1
km ≤ n Σ_{j=m+1}^{n−1} c_j ≤ m

A linear program has no optimal solution if and only if either the objective is unbounded or the constraints are contradictory (there is no feasible solution). It is easy to see that k is bounded above by 1 (redistributing more than 100% would violate the non-deficit constraint). Also, a feasible solution always exists, for example, k = 0 and c_i = 0 for all i. So an optimal solution always exists. Observe that the linear program depends only on the number of agents n and the number of units m; hence the optimal solution is a function of n and m. It turns out that this optimal solution can be analytically characterized as follows.

Theorem 1. For any m and n with n ≥ m+2, the worst-case optimal mechanism (among linear VCG redistribution mechanisms) is unique.
For this mechanism, the percentage redistributed in the worst case is

k* = 1 − C(n−1, m) / Σ_{j=m}^{n−1} C(n−1, j),

where C(a, b) denotes the binomial coefficient. The worst-case optimal mechanism is characterized by the following values of the c_i:

c*_i = (−1)^{i+m−1} · [(n−m) C(n−1, m−1)] / [i Σ_{j=m}^{n−1} C(n−1, j)] · [Σ_{j=i}^{n−1} C(n−1, j)] / C(n−1, i)

for i = m+1, ..., n−1. It should be noted that we have proved c_i = 0 for i ≤ m in Claim 1.

Proof. We first rewrite the linear program as follows. We introduce new variables x_{m+1}, x_{m+2}, ..., x_{n−1}, defined by x_j = Σ_{i=m+1}^{j} c_i for j = m+1, ..., n−1. The linear program then becomes:

Variables: x_{m+1}, x_{m+2}, ..., x_{n−1}, k
Maximize k
Subject to:
km ≤ (n−m−1) x_{m+1} ≤ m
km ≤ (m+i) x_{m+i−1} + (n−m−i) x_{m+i} ≤ m, for i = 2, ..., n−m−1
km ≤ n x_{n−1} ≤ m
x_i ≥ 0 for i = m+1, m+2, ..., n−1

We will prove that for any optimal solution to this linear program, k = k*. Moreover, we will prove that when k = k*, we have x_j = Σ_{i=m+1}^{j} c*_i for j = m+1, ..., n−1. This will prove the theorem.

We first make the following observations:

(n−m−1) c*_{m+1}
= (n−m−1) · [(n−m) C(n−1, m−1)] / [(m+1) Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, m+1)] · Σ_{j=m+1}^{n−1} C(n−1, j)
= (n−m−1) · [(n−m) C(n−1, m−1)] / [(m+1) Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, m+1)] · (Σ_{j=m}^{n−1} C(n−1, j) − C(n−1, m))
= (n−m−1) · m/(n−m−1) − (n−m−1) · m C(n−1, m) / [(n−m−1) Σ_{j=m}^{n−1} C(n−1, j)]
= m − (1−k*) m = k* m

For i = m+1, ...
, n−2,

i c*_i + (n−i−1) c*_{i+1}
= i · (−1)^{i+m−1} · [(n−m) C(n−1, m−1)] / [i Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, i)] · Σ_{j=i}^{n−1} C(n−1, j)
  + (n−i−1) · (−1)^{i+m} · [(n−m) C(n−1, m−1)] / [(i+1) Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, i+1)] · Σ_{j=i+1}^{n−1} C(n−1, j)
= (−1)^{i+m−1} · [(n−m) C(n−1, m−1)] / [Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, i)] · Σ_{j=i}^{n−1} C(n−1, j)
  − (n−i−1) · (−1)^{i+m−1} · [(n−m) C(n−1, m−1)] / [(i+1) Σ_{j=m}^{n−1} C(n−1, j)] · [(i+1) / (C(n−1, i)(n−i−1))] · Σ_{j=i+1}^{n−1} C(n−1, j)
= (−1)^{i+m−1} · [(n−m) C(n−1, m−1)] / [Σ_{j=m}^{n−1} C(n−1, j)]
= (−1)^{i+m−1} m(1−k*)

Finally,

(n−1) c*_{n−1}
= (n−1) · (−1)^{n+m} · [(n−m) C(n−1, m−1)] / [(n−1) Σ_{j=m}^{n−1} C(n−1, j)] · [1/C(n−1, n−1)] · Σ_{j=n−1}^{n−1} C(n−1, j)
= (−1)^{m+n} m(1−k*)

Summarizing the above, we have:

(n−m−1) c*_{m+1} = k* m
(m+1) c*_{m+1} + (n−m−2) c*_{m+2} = m(1−k*)
(m+2) c*_{m+2} + (n−m−3) c*_{m+3} = −m(1−k*)
(m+3) c*_{m+3} + (n−m−4) c*_{m+4} = m(1−k*)
...
(n−3) c*_{n−3} + 2 c*_{n−2} = (−1)^{m+n−2} m(1−k*)
(n−2) c*_{n−2} + c*_{n−1} = (−1)^{m+n−1} m(1−k*)
(n−1) c*_{n−1} = (−1)^{m+n} m(1−k*)

Let x*_j = Σ_{i=m+1}^{j} c*_i for j = m+1, m+2, ...
, n−1. The first equation in the above tells us that

(n−m−1) x*_{m+1} = k* m.

By adding the first two equations, we get

(m+2) x*_{m+1} + (n−m−2) x*_{m+2} = m.

By adding the first three equations, we get

(m+3) x*_{m+2} + (n−m−3) x*_{m+3} = k* m.

In general, by adding the first i equations, where i = 2, ..., n−m−1, we get

(m+i) x*_{m+i−1} + (n−m−i) x*_{m+i} = m if i is even,
(m+i) x*_{m+i−1} + (n−m−i) x*_{m+i} = k* m if i is odd.

Finally, by adding all the equations, we get

n x*_{n−1} = m if n−m is even;
n x*_{n−1} = k* m if n−m is odd.

Thus, for all of the constraints other than the nonnegativity constraints, we have shown that they are satisfied by setting x_j = x*_j = Σ_{i=m+1}^{j} c*_i and k = k*. We next show that the nonnegativity constraints are satisfied by these settings as well.

For m+1 ≤ i, i+1 ≤ n−1, we have

(1/i) Σ_{j=i}^{n−1} C(n−1, j) / C(n−1, i)
= (1/i) Σ_{j=i}^{n−1} [i! (n−1−i)!] / [j! (n−1−j)!]
≥ (1/(i+1)) Σ_{j=i}^{n−2} [i! (n−1−i)!] / [j! (n−1−j)!]
≥ (1/(i+1)) Σ_{j=i}^{n−2} [(i+1)! (n−2−i)!] / [(j+1)! (n−2−j)!]
= (1/(i+1)) Σ_{j=i+1}^{n−1} C(n−1, j) / C(n−1, i+1)

This implies that the absolute value of c*_i is decreasing as i increases (when there is more than one coefficient). We further observe that the sign of c*_i alternates, with the first element c*_{m+1} positive. So x*_j = Σ_{i=m+1}^{j} c*_i ≥ 0 for all j. Thus, we have shown that these x_i = x*_i together with k = k* form a feasible solution of the linear program. We proceed to show that it is in fact the unique optimal solution.

First we prove the following claim:

Claim 4. If k̂ and x̂_i, i = m+1, m+2, ...
, n−1 satisfy the following inequalities:

k̂ m ≤ (n−m−1) x̂_{m+1} ≤ m
k̂ m ≤ (m+i) x̂_{m+i−1} + (n−m−i) x̂_{m+i} ≤ m, for i = 2, ..., n−m−1
k̂ m ≤ n x̂_{n−1} ≤ m
k̂ ≥ k*

then we must have x̂_i = x*_i and k̂ = k*.

Proof of claim. Consider the first inequality. We know that (n−m−1) x*_{m+1} = k* m, so (n−m−1) x̂_{m+1} ≥ k̂ m ≥ k* m = (n−m−1) x*_{m+1}. It follows that x̂_{m+1} ≥ x*_{m+1} (since n−m−1 ≠ 0).

Now, consider the next inequality, for i = 2. We know that (m+2) x*_{m+1} + (n−m−2) x*_{m+2} = m. It follows that (n−m−2) x̂_{m+2} ≤ m − (m+2) x̂_{m+1} ≤ m − (m+2) x*_{m+1} = (n−m−2) x*_{m+2}, so x̂_{m+2} ≤ x*_{m+2} (i = 2 ≤ n−m−1 implies n−m−2 ≠ 0).

Now consider the next inequality, for i = 3. We know that (m+3) x*_{m+2} + (n−m−3) x*_{m+3} = k* m. It follows that (n−m−3) x̂_{m+3} ≥ k̂ m − (m+3) x̂_{m+2} ≥ k* m − (m+3) x*_{m+2} = (n−m−3) x*_{m+3}, so x̂_{m+3} ≥ x*_{m+3} (i = 3 ≤ n−m−1 implies n−m−3 ≠ 0).

Proceeding like this all the way up to i = n−m−1, we get that x̂_{m+i} ≥ x*_{m+i} if i is odd and x̂_{m+i} ≤ x*_{m+i} if i is even. Moreover, if one inequality is strict, then all subsequent inequalities are strict. Now, if we can prove that x̂_{n−1} = x*_{n−1}, it follows that the x̂_i are equal to the x*_i (which also implies that k̂ = k*). We consider two cases:

Case 1: n−m is even. We have: n−m even ⇒ n−m−1 odd ⇒ x̂_{n−1} ≥ x*_{n−1}.
We also have: n−m even ⇒ n x*_{n−1} = m. Combining these two, we get m = n x*_{n−1} ≤ n x̂_{n−1} ≤ m, so x̂_{n−1} = x*_{n−1}.

Case 2: n−m is odd. In this case, we have x̂_{n−1} ≤ x*_{n−1}, and n x*_{n−1} = k* m. Then, we have: k* m ≤ k̂ m ≤ n x̂_{n−1} ≤ n x*_{n−1} = k* m, so x̂_{n−1} = x*_{n−1}.

This completes the proof of the claim.

It follows that if k̂, x̂_i, i = m+1, m+2, ..., n−1 is a feasible solution with k̂ ≥ k*, then, since all the inequalities in Claim 4 are satisfied, we must have x̂_i = x*_i and k̂ = k*. Hence no other feasible solution is as good as the one described in the theorem.

Knowing the analytical characterization of the worst-case optimal mechanism provides us with at least two major benefits. First, using these formulas is computationally more efficient than solving the linear program with a general-purpose solver. Second, we can derive the following corollary.

Corollary 1. If the number of units m is fixed, then as the number of agents n increases, the worst-case percentage redistributed converges to 1 linearly, with a rate of convergence 1/2. (That is, lim_{n→∞} (1−k*_{n+1})/(1−k*_n) = 1/2.
That is, in the limit, the percentage that is not redistributed halves for every additional agent.)

We note that this is consistent with the experimental data for the single-unit case, where the worst-case remaining percentage roughly halves each time we add another agent. The worst-case percentage that is redistributed under the Bailey-Cavallo mechanism also converges to 1 as the number of agents goes to infinity, but the convergence is much slower: it does not converge linearly. (That is, letting k^C_n be the percentage redistributed by the Bailey-Cavallo mechanism in the worst case for n agents, lim_{n→∞} (1−k^C_{n+1})/(1−k^C_n) = lim_{n→∞} n/(n+1) = 1.) We now present the proof of the corollary.

Proof. When the number of agents is n, the worst-case percentage redistributed is k*_n = 1 − C(n−1, m) / Σ_{j=m}^{n−1} C(n−1, j). When the number of agents is n+1, this percentage becomes k*_{n+1} = 1 − C(n, m) / Σ_{j=m}^{n} C(n, j). For n sufficiently large, we will have 2^n − m n^{m−1} > 0, and hence

(1−k*_{n+1}) / (1−k*_n)
= [C(n, m) Σ_{j=m}^{n−1} C(n−1, j)] / [C(n−1, m) Σ_{j=m}^{n} C(n, j)]
= [n/(n−m)] · [2^{n−1} − Σ_{j=0}^{m−1} C(n−1, j)] / [2^n − Σ_{j=0}^{m−1} C(n, j)],

and

[n/(n−m)] · [2^{n−1} − m(n−1)^{m−1}] / 2^n ≤ (1−k*_{n+1})/(1−k*_n) ≤ [n/(n−m)] · 2^{n−1} / [2^n − m n^{m−1}]

(because C(n, j) ≤ n^i if j ≤ i). Since we have

lim_{n→∞} [n/(n−m)] · [2^{n−1} − m(n−1)^{m−1}] / 2^n = 1/2, and
lim_{n→∞} [n/(n−m)] · 2^{n−1} / [2^n − m n^{m−1}] = 1/2,

it follows that lim_{n→∞} (1−k*_{n+1})/(1−k*_n) = 1/2.
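Both the closed form of Theorem 1 and the convergence behavior of Corollary 1 are easy to check numerically. A small sketch of ours (assuming Python 3.8+, for math.comb):

```python
from math import comb

def k_star(n, m):
    """Worst-case redistribution percentage of the worst-case optimal
    mechanism (the closed form from Theorem 1)."""
    return 1 - comb(n - 1, m) / sum(comb(n - 1, j) for j in range(m, n))

# For m = 1 the formula reduces to 1 - k* = (n-1)/(2^(n-1) - 1), which
# reproduces the second column of the table in Section 6:
for n, remaining in [(3, 2/3), (5, 4/15), (10, 9/511)]:
    assert abs((1 - k_star(n, 1)) - remaining) < 1e-12

# Corollary 1: the fraction NOT redistributed roughly halves per extra agent.
ratios = [(1 - k_star(n + 1, 1)) / (1 - k_star(n, 1)) for n in (20, 30, 40)]
print(ratios)   # each ratio is close to 1/2
```

By contrast, the corresponding ratio for the Bailey-Cavallo mechanism, (2/(n+1))/(2/n) = n/(n+1), tends to 1, matching the discussion above.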
8. WORST-CASE OPTIMALITY OUTSIDE THE FAMILY

In this section, we prove that the worst-case optimal redistribution mechanism among linear VCG redistribution mechanisms is in fact optimal (in the worst case) among all redistribution mechanisms that are deterministic, anonymous, strategy-proof, efficient, and satisfy the non-deficit constraint. Thus, restricting our attention to linear VCG redistribution mechanisms did not come at a loss.

To prove this theorem, we need the following lemma. This lemma is not new: it was informally stated by Cavallo [3]. For completeness, we present it here with a detailed proof.

Lemma 2. A VCG redistribution mechanism is deterministic, anonymous, and strategy-proof if and only if there exists a function f : R^{n−1} → R such that the redistribution payment z_i received by a_i satisfies

z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n)

for all i and all bid vectors.

Proof. First, let us prove the "only if" direction, that is: if a VCG redistribution mechanism is deterministic, anonymous, and strategy-proof, then there exists a function f : R^{n−1} → R such that z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n) for all i and all bid vectors.

If a VCG redistribution mechanism is deterministic and anonymous, then for any bid vector v̂_1 ≥ v̂_2 ≥ ... ≥ v̂_n, the mechanism outputs a unique redistribution payment list z_1, z_2, ..., z_n. Let G : R^n → R^n be the function that maps v̂_1, v̂_2, ..., v̂_n to z_1, z_2, ..., z_n for all bid vectors. Let H(i, x_1, x_2, ..., x_n) be the i-th element of G(x_1, x_2, ..., x_n), so that z_i = H(i, v̂_1, v̂_2, ..., v̂_n) for all bid vectors and all 1 ≤ i ≤ n. Because the mechanism is anonymous, two agents should receive the same redistribution payment if their bids are the same.
So, if v̂_i = v̂_j, then H(i, v̂_1, v̂_2, ..., v̂_n) = H(j, v̂_1, v̂_2, ..., v̂_n). Hence, if we let j = min{t | v̂_t = v̂_i}, then H(i, v̂_1, v̂_2, ..., v̂_n) = H(j, v̂_1, v̂_2, ..., v̂_n).

Let us define K : R^n → N × R^n as follows: K(y, x_1, x_2, ..., x_{n−1}) = [j, w_1, w_2, ..., w_n], where w_1, w_2, ..., w_n are y, x_1, x_2, ..., x_{n−1} sorted in descending order, and j = min{t | w_t = y}. (The set {t | w_t = y} is nonempty because y ∈ {w_1, w_2, ..., w_n}.) Also, let us define F : R^n → R by

F(v̂_i, v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n)
= H ∘ K(v̂_i, v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n)
= H(min{t | v̂_t = v̂_i}, v̂_1, v̂_2, ..., v̂_n)
= H(i, v̂_1, v̂_2, ..., v̂_n) = z_i.

That is, F is the redistribution payment to an agent that bids v̂_i when the other bids are v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n. Since our mechanism is required to be strategy-proof, and the space of valuations is unrestricted, z_i must be independent of v̂_i, by Lemma 1 in Cavallo [3]. Hence, we can simply ignore the first input to F; let f(x_1, x_2, ..., x_{n−1}) = F(0, x_1, x_2, ..., x_{n−1}). So, for all bid vectors and all i, we have z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n). This completes the proof of the "only if" direction.

For the "if" direction: if the redistribution payment received by a_i satisfies z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n) for all bid vectors and all i, then this is clearly a deterministic and anonymous mechanism.
To prove strategy-proofness, we observe that because an agent's redistribution payment is not affected by her own bid, her incentives are the same as in the VCG mechanism, which is strategy-proof.

Now we are ready to introduce the next theorem:

Theorem 2. For any m and n with n ≥ m+2, the worst-case optimal mechanism among the family of linear VCG redistribution mechanisms is worst-case optimal among all mechanisms that are deterministic, anonymous, strategy-proof, efficient, and satisfy the non-deficit constraint.

While we needed individual rationality earlier in the paper, this theorem does not mention it; that is, we cannot find a mechanism with better worst-case performance even if we sacrifice individual rationality. (The worst-case optimal linear VCG redistribution mechanism is, of course, individually rational.)

Proof. Suppose there is a redistribution mechanism (for m units and n agents) that satisfies all of the above properties and has better worst-case performance than the worst-case optimal linear VCG redistribution mechanism, that is, its worst-case redistribution percentage k̂ is strictly greater than k*.

By Lemma 2, for this mechanism there is a function f : R^{n−1} → R such that z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n) for all i and all bid vectors. We first prove that f has the following properties.

Claim 5. f(1, 1, ..., 1, 0, 0, ..., 0) = 0 if the number of 1s is less than or equal to m.

Proof of claim. We assumed that for this mechanism, the worst-case redistribution percentage satisfies k̂ > k* ≥ 0. If the total VCG payment is x, the total redistribution payment must lie in [k̂ x, x] (by the worst-case and non-deficit constraints). Consider the case where all agents bid 0, so that the total VCG payment is also 0. Hence, the total redistribution payment must lie in [k̂ · 0, 0], that is, it must be 0.
Hence every agent's redistribution payment f(0, 0, ..., 0) must be 0.

Now, let t_i = f(1, 1, ..., 1, 0, 0, ..., 0), where the number of 1s equals i. We have proved that t_0 = 0. If t_{n−1} = 0, consider the bid vector where everyone bids 1. The total VCG payment is m, and the total redistribution payment is n f(1, 1, ..., 1) = n t_{n−1} = 0. This corresponds to 0% redistribution, which is contrary to our assumption that k̂ > k* ≥ 0. Hence t_{n−1} ≠ 0. Now, consider j = min{i | t_i ≠ 0} (which is well-defined because t_{n−1} ≠ 0). If j > m, the property is satisfied. If j ≤ m, consider the bid vector where v̂_i = 1 for i ≤ j and v̂_i = 0 for all other i. Under this bid vector, the first j agents each get redistribution payment t_{j−1} = 0, and the remaining n−j agents each get t_j. Thus, the total redistribution payment is (n−j) t_j. Because the total VCG payment for this bid vector is 0, we must have (n−j) t_j = 0. So t_j = 0 (since j ≤ m < n), but this is contrary to the definition of j. Hence f(1, 1, ..., 1, 0, 0, ..., 0) = 0 if the number of 1s is less than or equal to m.

Claim 6. f satisfies the following inequalities:

k̂ m ≤ (n−m−1) t_{m+1} ≤ m
k̂ m ≤ (m+i) t_{m+i−1} + (n−m−i) t_{m+i} ≤ m, for i = 2, 3, ..., n−m−1
k̂ m ≤ n t_{n−1} ≤ m

Here the t_i are defined as in the proof of Claim 5.

Proof of claim. For j = m+1, ..., n, consider the bid vectors where v̂_i = 1 for i ≤ j and v̂_i = 0 for all other i. These bid vectors, together with the non-deficit constraint and the worst-case constraint, produce the above set of inequalities. For example, when j = m+1, we consider the bid vector with v̂_i = 1 for i ≤ m+1 and v̂_i = 0 for all other i. The first m+1 agents each receive a redistribution payment of t_m = 0, and all other agents each receive t_{m+1}.
Thus, the total redistribution payment is (n−m−1) t_{m+1}. The non-deficit constraint gives (n−m−1) t_{m+1} ≤ m (because the total VCG payment is m). The worst-case constraint gives (n−m−1) t_{m+1} ≥ k̂ m. Combining these two, we get the first inequality. The other inequalities can be obtained in the same way.

We now observe that the inequalities in Claim 6, together with k̂ ≥ k*, are the same as those in Claim 4 (with the t_i in place of the x̂_i). Thus, we can conclude that k̂ = k*, which is contrary to our assumption k̂ > k*. Hence no mechanism satisfying all the listed properties has a redistribution percentage greater than k* in the worst case.

So far we have only discussed the case where n ≥ m+2. For completeness, we provide the following claim for the n = m+1 case.

Claim 7. For any m and n with n = m+1, the original VCG mechanism (that is, redistributing nothing) is (uniquely) worst-case optimal among all redistribution mechanisms that are deterministic, anonymous, strategy-proof, efficient, and satisfy the non-deficit constraint.

We recall that when n = m+1, Claim 1 tells us that the only mechanism inside the family of linear redistribution mechanisms is the original VCG mechanism, so this mechanism is automatically worst-case optimal inside the family. However, to prove the above claim, we need to show that it is worst-case optimal among all redistribution mechanisms that have the desired properties.

Proof. Suppose there exists a redistribution mechanism that satisfies all of the above properties and has worst-case performance at least as good as that of the original VCG mechanism, that is, its worst-case redistribution percentage is greater than or equal to 0.
This implies that the total redistribution payment of this mechanism is always nonnegative.

By Lemma 2, for this mechanism there is a function f : R^{n−1} → R such that z_i = f(v̂_1, v̂_2, ..., v̂_{i−1}, v̂_{i+1}, ..., v̂_n) for all i and all bid vectors. We will prove that f(x_1, x_2, ..., x_{n−1}) = 0 for all x_1 ≥ x_2 ≥ ... ≥ x_{n−1} ≥ 0.

First, consider the bid vector where v̂_i = 0 for all i. Here, each agent receives a redistribution payment of f(0, 0, ..., 0). The total redistribution payment is then n f(0, 0, ..., 0), which must be both greater than or equal to 0 (by the above observation) and less than or equal to 0 (by the non-deficit constraint and the fact that the total VCG payment is 0). It follows that f(0, 0, ..., 0) = 0. Now, let us consider the bid vector where v̂_1 = x_1 ≥ 0 and v̂_i = 0 for all other i. For this bid vector, the agent with the highest bid receives a redistribution payment of f(0, 0, ..., 0) = 0, and the other n−1 agents each receive f(x_1, 0, ..., 0). By the same reasoning as above, the total redistribution payment must be both greater than or equal to 0 and less than or equal to 0; hence f(x_1, 0, ..., 0) = 0 for all x_1 ≥ 0.

Proceeding by induction, let us assume that f(x_1, x_2, ..., x_k, 0, ..., 0) = 0 for all x_1 ≥ x_2 ≥ ... ≥ x_k ≥ 0, for some k < n−1. Consider the bid vector where v̂_i = x_i for i ≤ k+1 and v̂_i = 0 for all other i, where the x_i are arbitrary numbers satisfying x_1 ≥ x_2 ≥ ... ≥ x_k ≥ x_{k+1} ≥ 0. For the agents with the highest k+1 bids, the redistribution payment is given by f applied to an input with only k nonzero entries; hence, by the induction assumption, they all receive 0. The other n−k−1 agents each receive f(x_1, x_2, ..., x_k, x_{k+1}, 0, ..., 0).
The total redistribution payment is then (n−k−1) f(x_1, x_2, ..., x_k, x_{k+1}, 0, ..., 0), which must be both greater than or equal to 0 and less than or equal to the total VCG payment. Now, in this bid vector, the lowest bid is 0 because k+1 < n. But since n = m+1, the total VCG payment is m v̂_n = 0. So we have f(x_1, x_2, ..., x_k, x_{k+1}, 0, ..., 0) = 0 for all x_1 ≥ x_2 ≥ ... ≥ x_k ≥ x_{k+1} ≥ 0. By induction, this statement holds for all k < n−1; when k+1 = n−1, we have f(x_1, x_2, ..., x_{n−2}, x_{n−1}) = 0 for all x_1 ≥ x_2 ≥ ... ≥ x_{n−2} ≥ x_{n−1} ≥ 0. Hence, in this mechanism, the redistribution payment is always 0; that is, the mechanism is just the original VCG mechanism.

Incidentally, we obtain the following corollary:

Corollary 2. No VCG redistribution mechanism satisfies all of the following: determinism, anonymity, strategy-proofness, efficiency, and (strong) budget balance. This holds for any n ≥ m+1.

Proof. For the case n ≥ m+2: if such a mechanism existed, its worst-case performance would be better than that of the worst-case optimal linear VCG redistribution mechanism, which by Theorem 1 obtains a redistribution percentage strictly less than 1. But Theorem 2 shows that it is impossible to outperform this mechanism in the worst case.

For the case n = m+1: if such a mechanism existed, it would perform as well as the original VCG mechanism in the worst case, which by Claim 7 implies that it is identical to the VCG mechanism. But the VCG mechanism is not (strongly) budget balanced.

9. CONCLUSIONS

For allocation problems with one or more items, the well-known Vickrey-Clarke-Groves (VCG) mechanism is efficient, strategy-proof, individually rational, and does not incur a deficit. However, the VCG mechanism is not (strongly) budget balanced: generally, the agents' payments will sum to more than 0.
If there is an auctioneer who is selling the items, this may be desirable, because the surplus payment corresponds to revenue for the auctioneer. However, if the items do not have an owner and the agents are merely interested in allocating the items efficiently among themselves, any surplus payment is undesirable, because it will have to flow out of the system of agents. In 2006, Cavallo [3] proposed a mechanism that redistributes some of the VCG payment back to the agents, while maintaining efficiency, strategy-proofness, individual rationality, and the non-deficit property. In this paper, we extended this result in a restricted setting. We studied allocation settings where there are multiple indistinguishable units of a single good and agents have unit demand. (For this specific setting, Cavallo's mechanism coincides with a mechanism proposed by Bailey in 1997 [2].) Here we proposed a family of mechanisms that redistribute some of the VCG payment back to the agents. All mechanisms in the family are efficient, strategy-proof, individually rational, and never incur a deficit. The family includes the Bailey-Cavallo mechanism as a special case. We then provided an optimization model for finding the optimal mechanism inside the family (that is, the mechanism that maximizes redistribution in the worst case), and showed how to cast this model as a linear program. We gave both numerical and analytical solutions of this linear program, and the (unique) resulting mechanism shows significant improvement over the Bailey-Cavallo mechanism in the worst case. Finally, we proved that the obtained mechanism is optimal among all anonymous deterministic mechanisms that satisfy the above properties.

One important direction for future research is to extend these results beyond multi-unit auctions with unit demand. However, it turns out that in sufficiently general settings, the worst-case optimal redistribution percentage is 0.
In such settings, the worst-case criterion provides no guidance in determining a good redistribution mechanism (even redistributing nothing achieves the optimal worst-case percentage), so it becomes necessary to pursue other criteria. Alternatively, one can try to identify other special settings in which positive redistribution in the worst case is possible.

Another direction for future research is to consider whether this mechanism has applications to collusion. For example, in a typical collusive scheme there is a bidding ring consisting of a number of colluders, who submit only a single bid [10, 17]. If this bid wins, the colluders must allocate the item amongst themselves, perhaps using payments, but of course they do not want payments to flow out of the ring.

This work is part of a growing literature on designing mechanisms that obtain good results in the worst case. Traditionally, economists have mostly focused either on designing mechanisms that always obtain certain properties (such as the VCG mechanism), or on designing mechanisms that are optimal with respect to some prior distribution over the agents' preferences (such as the Myerson auction [20] and the Maskin-Riley auction [18] for maximizing expected revenue). Some more recent papers have focused on designing mechanisms for profit maximization using worst-case competitive analysis (e.g., [9, 1, 15, 8]). There has also been growing interest in the design of online mechanisms [7], where the agents arrive over time and decisions must be made before all the agents have arrived. Such work often also takes a worst-case competitive analysis approach [14, 13]. There do not appear to be direct connections between our work and these other works that focus on designing mechanisms that perform well in the worst case.
Nevertheless, it seems likely that future research will continue to investigate mechanism design for the worst case, and hopefully a coherent framework will emerge.

10. REFERENCES

[1] G. Aggarwal, A. Fiat, A. Goldberg, J. Hartline, N. Immorlica, and M. Sudan. Derandomization of auctions. STOC, 619-625, 2005.
[2] M. J. Bailey. The demand revealing process: to distribute the surplus. Public Choice, 91:107-126, 1997.
[3] R. Cavallo. Optimal decision-making with minimal waste: Strategyproof redistribution of VCG payments. AAMAS, 882-889, 2006.
[4] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[5] B. Faltings. A budget-balanced, incentive-compatible scheme for social choice. AMEC, 30-43, 2005.
[6] J. Feigenbaum, C. Papadimitriou, and S. Shenker. Sharing the cost of multicast transmissions. JCSS, 63:21-41, 2001.
[7] E. Friedman and D. Parkes. Pricing WiFi at Starbucks: Issues in online mechanism design. EC, 240-241, 2003.
[8] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. Wright. Competitive auctions. Games and Economic Behavior, 2006.
[9] A. Goldberg, J. Hartline, and A. Wright. Competitive auctions and digital goods. SODA, 735-744, 2001.
[10] D. A. Graham and R. C. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95(6):1217-1239, 1987.
[11] J. Green and J.-J. Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica, 45:427-438, 1977.
[12] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[13] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and D. C. Parkes. Online auctions with re-usable goods. EC, 165-174, 2005.
[14] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive limited-supply online auctions. EC, 71-80, 2004.
[15] J. Hartline and R. McGrew. From optimal limited to unlimited supply auctions. EC, 175-182, 2005.
[16] L. Hurwicz. On the existence of allocation systems whose manipulative Nash equilibria are Pareto optimal, 1975. Presented at the 3rd World Congress of the Econometric Society.
[17] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. AAAI, 373-378, 2002.
[18] E. Maskin and J. Riley. Optimal multi-unit auctions. In F. Hahn, editor, The Economics of Missing Markets, Information, and Games, chapter 14, 312-335. Clarendon Press, Oxford, 1989.
[19] H. Moulin. Efficient and strategy-proof assignment with a cheap residual claimant. Working paper, March 2007.
[20] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[21] R. Myerson and M. Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.
[22] D. Parkes, J. Kalagnanam, and M. Eso. Achieving budget-balance with Vickrey-based payment schemes in exchanges. IJCAI, 1161-1168, 2001.
[23] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges

ABSTRACT

In barter-exchange markets, agents seek to swap their items with one another, in order to improve their own utilities. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. We focus mainly on the upcoming national kidney-exchange market, where patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease. The clearing problem involves finding a social welfare maximizing exchange when the maximum length of a cycle is fixed. Long cycles are forbidden, since, for incentive reasons, all transplants in a cycle must be performed simultaneously. Also, in barter exchanges generally, more agents are affected if one drops out of a longer cycle. We prove that the clearing problem with this cycle-length constraint is NP-hard. Solving it exactly is one of the main challenges in establishing a national kidney exchange. We present the first algorithm capable of clearing these markets on a nationwide scale. The key is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation. For each, we develop techniques that dramatically improve both runtime and memory usage. We conclude that column generation scales drastically better than constraint generation. Our algorithm also supports several generalizations, as demanded by real-world kidney exchanges. Our algorithm replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges.
The match runs are conducted every two weeks, and transplants based on our optimizations have already been conducted.

1. INTRODUCTION

The role of kidneys is to filter waste from blood. Kidney failure results in accumulation of this waste, which leads to death in months. One treatment option is dialysis, in which the patient goes to a hospital to have his/her blood filtered by an external machine. Several visits are required per week, and each takes several hours. The quality of life on dialysis can be extremely low, and in fact many patients opt to withdraw from dialysis, leading to a natural death. Only 12% of dialysis patients survive 10 years [23].

Instead, the preferred treatment is a kidney transplant. Kidney transplants are by far the most common transplant. Unfortunately, the demand for kidneys far outstrips supply. In the United States in 2005, 4,052 people died waiting for a life-saving kidney transplant. During this time, almost 30,000 people were added to the national waiting list, while only 9,913 people left the list after receiving a deceased-donor kidney. The waiting list currently has over 70,000 people, and the median waiting time ranges from 2 to 5 years, depending on blood type.¹

For many patients with kidney disease, the best option is to find a living donor, that is, a healthy person willing to donate one of his/her two kidneys. Although there are marketplaces for buying and selling living-donor kidneys, the commercialization of human organs is almost universally regarded as unethical, and the practice is often explicitly illegal, such as in the US. However, in most countries live donation is legal, provided it occurs as a gift with no financial compensation.
In 2005, there were 6,563 live donations in the US.

¹ Data from the United Network for Organ Sharing [21].

The number of live donations would have been much higher if it were not for the fact that, frequently, a potential donor and his intended recipient are blood-type or tissue-type incompatible. In the past, the incompatible donor was sent home, leaving the patient to wait for a deceased-donor kidney. However, there are now a few regional kidney exchanges in the United States, in which patients can swap their incompatible donors with each other, in order to each obtain a compatible donor.

These markets are examples of barter exchanges. In a barter-exchange market, agents (patients) seek to swap their items (incompatible donors) with each other. These swaps consist of cycles of agents, with each agent receiving the item of the next agent in the cycle. Barter exchanges are ubiquitous: examples include Peerflix (DVDs) [11], Read It Swap It (books) [12], and Intervac (holiday houses) [9]. For many years, there has even been a large shoe exchange in the United States [10]. People with different-sized feet use this to avoid having to buy two pairs of shoes. Leg amputees have a separate exchange to share the cost of buying a single pair of shoes.

We can encode a barter exchange market as a directed graph G = (V, E) in the following way. Construct one vertex for each agent. Add a weighted edge e from one agent v_i to another v_j if v_i wants the item of v_j. The weight w_e of e represents the utility to v_i of obtaining v_j's item. A cycle c in this graph represents a possible swap, with each agent in the cycle obtaining the item of the next agent. The weight w_c of a cycle c is the sum of its edge weights. An exchange is a collection of disjoint cycles. The weight of an exchange is the sum of its cycle weights. A social welfare maximizing exchange is one with maximum weight.

Figure 1 illustrates an example market with 5 agents, {v1, v2, . . . , v5}, in which all edges have weight 1. The market has 4 cycles, c1 = ⟨v1, v2⟩, c2 = ⟨v2, v3⟩, c3 = ⟨v3, v4⟩ and c4 = ⟨v1, v2, v3, v4, v5⟩, and two (inclusion) maximal exchanges, namely M1 = {c4} and M2 = {c1, c3}. Exchange M1 has both maximum weight and maximum cardinality (i.e., it includes the most edges/vertices).

[Figure 1: Example barter exchange market.]

The clearing problem is to find a maximum-weight exchange consisting of cycles with length at most some small constant L. This cycle-length constraint arises naturally for several reasons. For example, in a kidney exchange, all operations in a cycle have to be performed simultaneously; otherwise a donor might back out after his incompatible partner has received a kidney. (One cannot write a binding contract to donate an organ.) This gives rise to a logistical constraint on cycle size: even if all the donors are operated on first, and the same personnel and facilities are used to then operate on the donees, a k-cycle requires between 3k and 6k doctors, around 4k nurses, and almost 2k operating rooms.

Due to such resource constraints, the upcoming national kidney exchange market will likely allow only cycles of length 2 and 3. Another motivation for short cycles is that if a cycle fails to exchange, fewer agents are affected. For example, last-minute testing in a kidney exchange often reveals new incompatibilities that were not detected in the initial testing (based on which the compatibility graph was constructed).
More generally, an agent may drop out of a cycle if his preferences have changed, or if he/she simply fails to fulfill his obligations (such as sending a book to another agent in the cycle) due to forgetfulness.

In Section 3, we show that (the decision version of) the clearing problem is NP-complete for L ≥ 3. One approach then might be to look for a good heuristic or approximation algorithm. However, for two reasons, we aim for an exact algorithm based on an integer-linear program (ILP) formulation, which we solve using specialized tree search.

• First, any loss of optimality could lead to unnecessary patient deaths.

• Second, an attractive feature of using an ILP formulation is that it allows one to easily model a number of variations on the objective, and to add additional constraints to the problem. For example, if 3-cycles are believed to be more likely to fail than 2-cycles, then one can simply give them a weight that is appropriately lower than 3/2 the weight of a 2-cycle. Or, if for various (e.g., ethical) reasons one requires a maximum cardinality exchange, one can at least in a second pass find the solution (out of all maximum cardinality solutions) that has the fewest 3-cycles. Other variations one can solve for include finding various forms of fault-tolerant (non-disjoint) collections of cycles in the event that certain pairs that were thought to be compatible turn out to be incompatible after all.

In this paper, we present the first algorithm capable of clearing these markets on a nationwide scale. Straightforward ILP encodings are too large to even construct on current hardware, let alone solve. The key, then, is incremental problem formulation. We adapt two paradigms for the task: constraint generation and column generation.
For each, we develop a host of (mainly problem-specific) techniques that dramatically improve both runtime and memory usage.

1.1 Prior Work

Several recent papers have used simulations and market-clearing algorithms to explore the impact of a national kidney exchange [13, 20, 6, 14, 15, 17]. For example, using Edmonds's maximum-matching algorithm [4], [20] shows that a national pairwise-exchange market (using length-2 cycles only) would result in more transplants, reduced waiting time, and savings of $750 million in health care costs over 5 years. Those results are conservative in two ways. First, the simulated market contained only 4,000 initial patients, with 250 patients added every 3 months. It has been reported to us that the market could be almost double this size. Second, the exchanges were restricted to length-2 cycles (because that is all that can be modeled as maximum matching and solved using Edmonds's algorithm). Allowing length-3 cycles leads to additional significant gains. This has been demonstrated on kidney exchange markets with 100 patients by using CPLEX to solve an integer-program encoding of the clearing problem [15]. In this paper, we present an alternative algorithm for this integer program that can clear markets with over 10,000 patients (and that same number of willing donors).

Allowing cycles of length more than 3 often leads to no improvement in the size of the exchange [15]. (Furthermore, in a simplified theoretical model, any kidney exchange can be converted into one with cycles of length at most 4 [15].) While this does not hold for general barter exchanges, or even for all kidney exchange markets, in Section 5.2.3 we make use of the observation that short cycles suffice to dramatically increase the speed of our algorithm.

At a high level, the clearing problem for barter exchanges is similar to the clearing problem (aka the winner determination problem) in combinatorial auctions.
In both settings, the idea is to gather all the pertinent information about the agents into a central clearing point and to run a centralized clearing algorithm to determine the allocation. Both problems are NP-hard. Both are best solved using tree search techniques. Since 1999, significant work has been done in computer science and operations research on faster optimal tree search algorithms for clearing combinatorial auctions. (For a recent review, see [18].) However, the kidney exchange clearing problem (with a limit of 3 or more on cycle size) differs from the combinatorial auction clearing problem in significant ways. The most important difference is that the natural formulations of the combinatorial auction problem tend to easily fit in memory, so time is the bottleneck in practice. In contrast, the natural formulations of the kidney exchange problem (with L = 3) take at least cubic space in the number of patients to even model, and therefore memory becomes a bottleneck much before time does when using standard tree search, such as branch-and-cut in CPLEX, to tackle the problem. (On a 1GB computer and a realistic standard instance generator, discussed later, CPLEX 10.010 runs out of memory on five of the ten 900-patient instances and ten of the ten 1,000-patient instances that we generated.) Therefore, the approaches that have been developed for combinatorial auctions cannot handle the kidney exchange problem.

1.2 Paper Outline

The rest of the paper is organized as follows. Section 2 discusses the process by which we generate realistic kidney exchange market data, in order to benchmark the clearing algorithms. Section 3 contains a proof that the market clearing decision problem is NP-complete. Sections 4 and 5 each contain an ILP formulation of the clearing problem. We also detail in those sections the techniques used to solve those programs on large instances. Section 6 presents experiments on the various techniques.
Section 7 discusses recent\nfielding of our algorithm. Finally, we present our conclusions in\nSection 8, and suggest future research directions.\n2. MARKET CHARACTERISTICS AND\nINSTANCE GENERATOR\nWe test the algorithms on simulated kidney exchange\nmarkets, which are generated by a process described in Saidman\net al. [17]. This process is based on the extensive nationwide\ndata maintained by the United Network for Organ Sharing\n(UNOS) [21], so it generates a realistic instance\ndistribution. Several papers have used variations of this process to\ndemonstrate the effectiveness of a national kidney exchange\n(extrapolating from small instances or restricting the\nclearing to 2-cycles) [6, 20, 14, 13, 15, 17].\nBriefly, the process involves generating patients with a\nrandom blood type, sex, and probability of being tissue-type\nincompatible with a randomly chosen donor. These\nprobabilities are based on actual real-world population data. Each\npatient is assigned a potential donor with a random blood\ntype and relation to the patient. If the patient and potential\ndonor are incompatible, the two are entered into the\nmarket. Blood type and tissue type information is then used to\ndecide on which patients and donors are compatible. One\ncomplication, handled by the generator, is that if the\npatient is female, and she has had a child with her potential\ndonor, then the probability that the two are incompatible\nincreases. (This is because the mother develops antibodies\nto her partner during pregnancy.) Finally, although our\nalgorithms can handle more general weight functions, patients\nhave a utility of 1 for compatible donors, since their survival\nprobability is not affected by the choice of donor [3]. This\nmeans that the maximum-weight exchange has maximum\ncardinality.\nTable 1 gives lower and upper bounds on the size of a\nmaximum-cardinality exchange in the kidney-exchange\nmarket. 
The lower bounds were found by clearing the market with length-2 cycles only, while the upper bounds had no restriction on cycle length. For each market size, the bounds were computed over 10 randomly generated markets. Note that there can be a large amount of variability in the markets: in one 5,000-patient market, fewer than 1,000 patients were in the maximum-cardinality exchange.

               Length-2 cycles only     Arbitrary cycles
  Patients     Mean        Max          Mean        Max
  100          4.00e+1     4.60e+1      5.30e+1     6.10e+1
  500          2.58e+2     2.80e+2      2.79e+2     2.97e+2
  1000         5.35e+2     6.22e+2      5.61e+2     6.30e+2
  2000         1.05e+3     1.13e+3      1.09e+3     1.16e+3
  3000         1.63e+3     1.70e+3      1.68e+3     1.73e+3
  4000         2.15e+3     2.22e+3      2.20e+3     2.27e+3
  5000         2.53e+3     2.87e+3      2.59e+3     2.92e+3
  6000         3.26e+3     3.32e+3      3.35e+3     3.39e+3
  7000         3.80e+3     3.86e+3      3.89e+3     3.97e+3
  8000         4.35e+3     4.45e+3      4.46e+3     4.55e+3
  9000         4.90e+3     4.96e+3      5.01e+3     5.07e+3
  10000        5.47e+3     5.61e+3      5.59e+3     5.73e+3

Table 1: Upper and lower bounds on exchange size.

Table 2 gives additional characteristics of the kidney-exchange market. Note that a market with 5,000 patients can already have more than 450 million cycles of length 2 and 3.

               Edges                    Length-2 & 3 cycles
  Patients     Mean        Max          Mean        Max
  100          2.38e+3     2.79e+3      2.76e+3     5.90e+3
  500          6.19e+4     6.68e+4      3.96e+5     5.27e+5
  1000         2.44e+5     2.68e+5      3.31e+6     4.57e+6
  2000         9.60e+5     1.02e+6      2.50e+7     3.26e+7
  3000         2.19e+6     2.28e+6      8.70e+7     9.64e+7
  4000         3.86e+6     3.97e+6      1.94e+8     2.14e+8
  5000         5.67e+6     6.33e+6      3.60e+8     4.59e+8
  6000         8.80e+6     8.95e+6      -           -
  7000         1.19e+7     1.21e+7      -           -
  8000         1.56e+7     1.59e+7      -           -
  9000         1.98e+7     2.02e+7      -           -
  10000        2.44e+7     2.51e+7      -           -

Table 2: Market characteristics.

3. PROBLEM COMPLEXITY

In this section, we prove that (the decision version of) the market clearing problem with short cycles is NP-complete.

Theorem 1. Given a graph G = (V, E) and an integer L ≥ 3, the problem of deciding if G admits a perfect cycle cover containing cycles of length at most L is NP-complete.

Proof.
It is clear that this problem is in NP. For NP-hardness, we reduce from 3D-Matching, which is the problem of, given disjoint sets X, Y and Z of size q and a set of triples T ⊆ X × Y × Z, deciding whether there is a disjoint subset M of T of size q.

One straightforward idea is to construct a tripartite graph with vertex sets X ∪ Y ∪ Z and directed edges (x_a, y_b), (y_b, z_c), and (z_c, x_a) for each triple t_i = {x_a, y_b, z_c} ∈ T. However, it is not too hard to see that this encoding fails, because a perfect cycle cover may include a cycle with no corresponding triple.

Instead, we use the following reduction. Given an instance of 3D-Matching, construct one vertex for each element in X, Y and Z. For each triple t_i = {x_a, y_b, z_c}, construct the gadget in Figure 2, which is similar to one in Garey and Johnson [5, pp. 68-69]. Note that the gadgets intersect only on vertices in X ∪ Y ∪ Z. It is clear that this construction can be done in polynomial time.

[Figure 2: NP-completeness gadget for triple t_i and maximum cycle length L.]

Let M be a perfect 3D-Matching. We will show the construction admits a perfect cycle cover by short cycles. If t_i = {x_a, y_b, z_c} ∈ M, add from t_i's gadget the three length-L cycles containing x_a, y_b and z_c, respectively. Also add the cycle ⟨x_a^i, y_b^i, z_c^i⟩. Otherwise, if t_i ∉ M, add the three length-L cycles containing x_a^i, y_b^i and z_c^i, respectively. It is clear that all vertices are covered, since M partitions X × Y × Z.

Conversely, suppose we have a perfect cover by short cycles. Note that the construction only has short cycles of lengths 3 and L, and no short cycle involves distinct vertices from two different gadgets.
It is easy to see then that in a perfect cover, each gadget t_i contributes cycles according to the cases above: t_i ∈ M, or t_i ∉ M. Hence, there exists a perfect 3D-Matching in the original instance.

4. SOLUTION APPROACHES BASED ON AN EDGE FORMULATION

In this section, we consider a formulation of the clearing problem as an ILP with one variable for each edge. This encoding is based on the following classical algorithm for solving the directed cycle cover problem with no cycle-length constraints.

Given a market G = (V, E), construct a bipartite graph with one vertex for each agent and one vertex for each item. Add an edge e_v with weight 0 between each agent v and its own item. At this point, the encoding is a perfect matching. Now, for each edge e = (v_i, v_j) in the original market, add an edge e with weight w_e between agent v_i and the item of v_j. Perfect matchings in this encoding correspond exactly with cycle covers, since whenever an agent's item is taken, it must receive some other agent's item. It follows that the unrestricted clearing problem can be solved in polynomial time by finding a maximum-weight perfect matching. Figure 3 contains the bipartite graph encoding of the example market from Figure 1. The weight-0 edges are encoded by dashed lines, while the market edges are in bold.

[Figure 3: Perfect matching encoding of the market in Figure 1.]

Alternatively, we can solve the problem by encoding it as an ILP with one variable for each edge in the original market graph G. This ILP, given below, has the advantage that it can be extended naturally to deal with cycle-length constraints.
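Before turning to that ILP, the perfect-matching encoding can be sanity-checked on the Figure 1 market by brute force. This is a sketch only: a real solver would use a polynomial-time matching algorithm (e.g. the Hungarian method) rather than enumerating permutations, and the edge list is our reading of the figure.

```python
from itertools import permutations

# Figure 1 market, agents/items indexed 0..4 (for v1..v5); edge weights 1.
# Self-edges of weight 0 let an agent keep its own item, so every perfect
# matching of agents to items corresponds to a cycle cover.
n = 5
w = {(i, i): 0 for i in range(n)}
for u, v in [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 0)]:
    w[(u, v)] = 1  # agent u takes the item of agent v

def best_cycle_cover(n, w):
    """Maximum-weight perfect matching = unrestricted cycle cover."""
    best = 0
    for perm in permutations(range(n)):   # perm[i] = item assigned to agent i
        if all((i, perm[i]) in w for i in range(n)):
            best = max(best, sum(w[i, perm[i]] for i in range(n)))
    return best

print(best_cycle_cover(n, w))  # 5: the 5-cycle covers every agent
```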
Therefore, for the rest of this section, this is the approach we will pursue.

  max  Σ_{e∈E} w_e · e

such that for all v_i ∈ V, the conservation constraint

  Σ_{e_out=(v_i,v_j)} e_out − Σ_{e_in=(v_j,v_i)} e_in = 0

and the capacity constraint

  Σ_{e_out=(v_i,v_j)} e_out ≤ 1

are satisfied.

If cycles are allowed to have length at most L, it is easy to see that we only need to make the following changes to the ILP. For each length-L path (throughout the paper, we do not include cycles in the definition of path) p = ⟨e_{p1}, e_{p2}, . . . , e_{pL}⟩, add a constraint

  e_{p1} + e_{p2} + . . . + e_{pL} ≤ L − 1,

which precludes path p from being in any feasible solution. Unfortunately, in a market with only 1,000 patients, the number of length-3 paths is in excess of 400 million, and so we cannot even construct this ILP without running out of memory.

Therefore, we use a tree search with an incremental formulation approach. Specifically, we use CPLEX, though we add constraints as cutting planes during the tree search process. We begin with only a small subset of the constraints in the ILP. Since this ILP is small, CPLEX can solve its LP relaxation. We then check whether any of the missing constraints are violated by the fractional solution. If so, we generate a set of these constraints, add them to the ILP, and repeat. Even once all constraints are satisfied, there may be no integral solution matching the fractional upper bound, and even if there were, the LP solver might not find it.

In these cases, CPLEX branches on a variable (we used CPLEX's default branching strategy) and generates one new search node corresponding to each of the children. At each node of the search tree that is visited, this process of solving the LP and adding constraints is repeated.
Clearly, this approach yields an optimal solution once the tree search finishes.

We still need to explain the details of the constraint seeder (i.e., selecting which constraints to begin with) and the constraint generation (i.e., selecting which violated constraints to include). We describe these briefly in the next two subsections, respectively.

4.1 Constraint Seeder

The main constraint seeder we developed forbids any path of length L − 1 that does not have an edge closing the cycle from its head to its tail. While it is computationally expensive to find these constraints, their addition focuses the search away from paths that cannot be in the final solution. We also tried seeding the LP with a random collection of constraints from the ILP.

4.2 Constraint Generation

We experimented with several constraint generators. In each, given a fractional solution, we construct the subgraph of edges with positive value. This graph is much smaller than the original graph, so we can perform the following computations efficiently.

In our first constraint generator, we simply search for length-L paths with value sum more than L − 1. For any such path, we restrict its sum to be at most L − 1. Note that if there is a cycle c with length |c| > L, it could contain as many as |c| violating paths.

In our second constraint generator, we only add one constraint for such cycles: the sum of edges in the cycle can be at most ⌊|c|(L − 1)/L⌋.

This generator made the algorithm slower, so we went in the other direction in developing our final generator. It adds one constraint per violating path p, and furthermore, it adds a constraint for each path with the same interior vertices (not counting the endpoints) as p.
This improved the overall speed.

4.3 Experimental performance

It turned out that even with these improvements, the edge formulation approach cannot clear a kidney exchange with 100 vertices in the time the cycle formulation (described later in Section 5) can clear one with 10,000 vertices. In other words, column generation based approaches turned out to be drastically better than constraint generation based approaches. Therefore, in the rest of the paper, we will focus on the cycle formulation and the column generation based approaches.

5. SOLUTION APPROACHES BASED ON A CYCLE FORMULATION

In this section, we consider a formulation of the clearing problem as an ILP with one variable for each cycle. This encoding is based on the following classical algorithm for solving the directed cycle cover problem when cycles have length 2.

Given a market G = (V, E), construct a new graph on V with a weight-w_c edge for each cycle c of length 2. It is easy to see that matchings in this new graph correspond to cycle covers by length-2 cycles in the original market graph. Hence, the market clearing problem with L = 2 can be solved in polynomial time by finding a maximum-weight matching.

[Figure 4: Maximum-weight matching encoding of the market in Figure 1.]

We can generalize this encoding for arbitrary L. Let C(L) be the set of all cycles of G with length at most L. Then the following ILP finds the maximum-weight cycle cover by C(L) cycles:

  max  Σ_{c∈C(L)} w_c · c

subject to

  Σ_{c: v_i∈c} c ≤ 1   for all v_i ∈ V,

with

  c ∈ {0, 1}   for all c ∈ C(L).

5.1 Edge vs Cycle Formulation

In this section, we consider the merits of the edge formulation and the cycle formulation. The edge formulation can be solved in polynomial time when there are no constraints on the cycle size.
The cycle formulation can be solved in polynomial time when the cycle size is at most 2.\nWe now consider the case of short cycles of length at most L, where L \u2265 3. Our tree search algorithms use the LP relaxation of these formulations to provide upper bounds on the optimal solution. These bounds help prune subtrees and guide the search in the usual ways.\nTheorem 2. The LP relaxation of the cycle formulation weakly dominates the LP relaxation of the edge formulation.\nProof. Consider an optimal solution to the LP relaxation of the cycle formulation. We show how to construct an equivalent solution in the edge formulation. For each edge in the graph, set its value as the sum of values of all the cycles of which it is a member. Also, define the value of a vertex in the same manner. Because of the cycle constraints, the conservation and capacity constraints of the edge encoding are clearly satisfied. It remains to show that none of the path constraints are violated.\nLet p be any length-L path in the graph. Since p has L \u2212 1 interior vertices (not counting the endpoints), and each vertex has value at most 1, the value sum of these interior vertices is at most L \u2212 1. Now, for any cycle c of length at most L, the number of edges it has in p, which we denote by ec(p), is at most the number of interior vertices it has in p, which we denote by vc(p). Hence,\n\u03a3_{e \u2208 p} e = \u03a3_{c \u2208 C(L)} c \u00b7 ec(p) \u2264 \u03a3_{c \u2208 C(L)} c \u00b7 vc(p) = \u03a3_{v interior to p} v \u2264 L \u2212 1.\nThe converse of this theorem is not true. Consider a graph which is simply a cycle with n edges, where n > L. Clearly, the LP relaxation of the cycle formulation has optimal value 0, since there are no cycles of size at most L. However, the edge formulation has a solution of size n/2, with each edge having value 1/2.\nHence, the cycle formulation is tighter than the edge formulation.
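The counterexample above is easy to check numerically. The sketch below (illustrative only; unit edge weights are assumed) verifies that on a single directed n-cycle with n > L, setting every edge to 1/2 satisfies the conservation, capacity, and path constraints of the edge formulation's LP relaxation, giving value n/2, while the cycle formulation's LP has value 0 because C(L) is empty.

```python
def edge_lp_feasible(n, L, x):
    """Check the edge-formulation LP constraints on a single directed
    n-cycle 0 -> 1 -> ... -> n-1 -> 0, where x[i] is the value of edge i
    (the edge leaving vertex i).

    Constraints checked (from the edge formulation):
      - conservation: flow into each vertex equals flow out of it;
      - capacity: the flow through each vertex is at most 1;
      - path: every length-L path (L consecutive edges) sums to <= L - 1.
    """
    eps = 1e-9
    for v in range(n):
        inflow, outflow = x[(v - 1) % n], x[v]
        if abs(inflow - outflow) > eps or outflow > 1 + eps:
            return False
    for start in range(n):
        if sum(x[(start + k) % n] for k in range(L)) > L - 1 + eps:
            return False
    return True

n, L = 7, 3
half = [0.5] * n
assert edge_lp_feasible(n, L, half)            # fractional solution of value n/2
assert not edge_lp_feasible(n, L, [1.0] * n)   # the all-ones point violates paths
print(sum(half))  # 3.5 -- but the cycle formulation's LP value is 0 here,
                  # since the only simple cycle has length n > L
```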
Additionally, for a graph with m edges, the edge formulation requires O(m^3) constraints, while the cycle formulation requires only O(m^2).\n5.2 Column Generation for the LP\nTable 2 shows how the number of cycles of length at most 3 grows with the size of the market. With one variable per cycle in the cycle formulation, CPLEX cannot even clear markets with 1,000 patients without running out of memory (see Figure 6). To address this problem, we used an incremental formulation approach.\nThe first step in LP-guided tree search is to solve the LP relaxation. Since the cycle formulation does not fit in memory, this LP stage would fail immediately without an incremental formulation approach. However, motivated by the observation that an exchange solution can include only a tiny fraction of the cycles, we explored the approach of using column (i.e., cycle) generation.\nThe idea of column generation is to start with a restricted LP containing only a small number of columns (variables, i.e., cycles), and then to repeatedly add columns until an optimal solution to this partially formulated LP is an optimal solution to the original (aka master) LP. We explain this further by way of an example.\nConsider the market in Figure 1 with L = 2. Figure 5 gives the corresponding master LP, P, and its dual, D.\nPrimal P:\nmax 2c1 + 2c2 + 2c3\ns.t. c1 \u2264 1 (v1)\nc1 + c2 \u2264 1 (v2)\nc2 + c3 \u2264 1 (v3)\nc3 \u2264 1 (v4)\nwith c1, c2, c3 \u2265 0\nDual D:\nmin v1 + v2 + v3 + v4\ns.t. v1 + v2 \u2265 2 (c1)\nv2 + v3 \u2265 2 (c2)\nv3 + v4 \u2265 2 (c3)\nwith v1, v2, v3, v4 \u2265 0\nFigure 5: Cycle formulation.\nLet P' be the restriction of P containing columns c1 and c3 only. Let D' be the dual of P'; that is, D' is just D without the constraint c2.
Because P' and D' are small, we can solve them to obtain OPT(P') = OPT(D') = 4, with the optimal primal solution c_OPT(P') given by c1 = c3 = 1, and the optimal dual solution v_OPT(D') given by v1 = v2 = v3 = v4 = 1.\nWhile c_OPT(P') must be a feasible solution of P, it turns out (fortunately) that v_OPT(D') is feasible for D, so that OPT(D') \u2265 OPT(D). We can verify this by checking that v_OPT(D') satisfies the constraints of D not already in D', i.e., constraint c2. It follows that OPT(P') = OPT(D') \u2265 OPT(D) = OPT(P), and so c_OPT(P') is provably an optimal solution for P, even though P' contains only a strict subset of the columns of P.\nOf course, it may turn out (unfortunately) that v_OPT(D') is not feasible for D. This can happen above if v_OPT(D') is given by v1 = 2, v2 = 0, v3 = 0, v4 = 2. Although we can still see that OPT(D') = OPT(D), in general we cannot prove this because D and P are too large to solve. Instead, because constraint c2 is violated, we add column c2 to P', update D', and repeat. The problem of finding a violated constraint is called the pricing problem. Here, the price of a column (cycle in our setting) is the difference between its weight and the dual-value sum of the cycle's vertices. If any column of P has a positive price, its corresponding constraint is violated and we have not yet proven optimality. In this case, we must continue generating columns to add to P'.\n5.2.1 Pricing Problem\nFor smaller instances, we can maintain an explicit collection of all feasible cycles. This makes the pricing problem easy and efficient to solve: we simply traverse the collection of cycles, and look for cycles with positive price. We can even find cycles with the most positive price, which are the ones most likely to improve the objective value of the restricted LP [1]. This approach does not scale, however. A market with 5000 patients can have as many as 400 million cycles of length at most 3 (see Table 2).
This is too many cycles to keep in memory.\nHence, for larger instances, we have to generate feasible cycles while looking for one with a positive price. We do this using a depth-first search algorithm on the market graph (see Figure 1). In order to make this search faster, we explore vertices in non-decreasing value order, as these vertices are more likely to belong to cycles with positive price. We also use several pruning rules to determine if the current search path can lead to a positive-price cycle. For example, at a given vertex in the search, we can prune based on the fact that every vertex we visit from this point onwards will have value at least as great as that of the current vertex.\nEven with these pruning rules, column generation is a bottleneck. Hence, we also implemented the following optimizations.\nWhenever the search exhaustively proves that a vertex belongs to no positive-price cycle, we mark the vertex and do not use it as the root of a depth-first search until its dual value decreases. In this way, we avoid unnecessarily repeating our computational efforts from a previous column generation iteration.\nFinally, it can sometimes be beneficial for column generation to include several positive-price columns in one iteration, since it may be faster to generate a second column once the first one is found. However, we avoid this for the following reason. If we attempt to find more positive-price columns than there are to be found, or if the columns are far apart in the search space, we end up having to generate and check a large part of the collection of feasible cycles. In our experiments, we have seen this occur in markets with hundreds of millions of cycles, resulting in prohibitively expensive computation costs.\n5.2.2 Column Seeding\nEven if there is only a small gap to the master LP relaxation, column generation requires many iterations to improve the objective value of the restricted LP.
Each of these iterations is expensive, as we must solve the pricing problem, and re-solve the restricted LP. Hence, although we could begin with no columns in the restricted LP, it is much faster to seed the LP with enough columns that the optimal objective value is not too far from the master LP. Of course, we cannot include so many columns that we run out of memory.\nWe experimented with several column seeders. In one class of seeder, we use a heuristic to find an exchange, and then add the cycles of that exchange to the initial restricted LP. We implemented two heuristics. The first is a greedy algorithm: for each vertex in a random order, if it is uncovered, we attempt to include a cycle containing it and other uncovered vertices. The other heuristic uses specialized maximum-weight matching code [16] to find an optimal cover by length-2 cycles.\nThese heuristics perform extremely well, especially taking into account the fact that they only add a small number of columns. For example, Table 1 shows that an optimal cover by length-2 cycles has almost as much weight as the exchange with unrestricted cycle size. However, we have enough memory to include hundreds of thousands of additional columns and thereby get closer still to the upper bound.\nOur best column seeder constructs a random collection of feasible cycles. Since a market with 5000 patients can have as many as 400 million feasible cycles, it takes too long to generate and traverse all feasible cycles, and so we do not include a uniformly random collection. Instead, we perform a random walk on the market graph (see, for example, Figure 1), in which, after each step of the walk, we test whether there is an edge back onto our path that forms a feasible cycle. If we find a cycle, it is included in the restricted LP, and we start a new walk from a random vertex.
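The random-walk seeder just described can be sketched as follows (an illustrative sketch under simplifying assumptions, not the paper's implementation: here a walk restarts as soon as it closes any cycle, whether or not the cycle is short enough to keep).

```python
import random

def random_walk_seeder(graph, L, num_cycles, max_walks=10000, seed=0):
    """Collect feasible cycles (length <= L) to seed the restricted LP.

    graph: dict mapping vertex -> list of successors. Performs random
    walks; after each step, if the chosen successor is already on the
    path, the walk has found a cycle, which is kept when short enough.
    """
    rng = random.Random(seed)
    vertices = list(graph)
    seeds = set()
    for _ in range(max_walks):
        if len(seeds) >= num_cycles:
            break
        path = [rng.choice(vertices)]
        while True:
            succs = graph.get(path[-1], [])
            if not succs:
                break                      # dead end: abandon this walk
            nxt = rng.choice(succs)
            if nxt in path:
                cycle = path[path.index(nxt):]
                if len(cycle) <= L:
                    # canonical form: rotate the smallest vertex to the front
                    i = cycle.index(min(cycle))
                    seeds.add(tuple(cycle[i:] + cycle[:i]))
                break                      # start a new walk
            path.append(nxt)
    return seeds

# Same toy market as before; with L = 2 the feasible cycles are the
# three pairwise swaps (1,2), (2,3), (3,4).
market = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(random_walk_seeder(market, 2, num_cycles=3))
```

In the full algorithm the same idea runs on a graph with millions of edges, and the walk restarts from a fresh random vertex after each cycle is banked.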
In our\nexperiments (see Section 6), we use this algorithm to seed\nthe LP with 400,000 cycles.\nThis last approach outperforms the heuristic seeders\ndescribed above. However, in our algorithm, we use a\ncombination that takes the union of all columns from all three\nseeders. In Figure 6, we compare the performance of the\ncombination seeder against the combination without the random\ncollection seeder. We do not plot the performance of the\nalgorithm without any seeder at all, because it can take hours\nto clear markets we can otherwise clear in a few minutes.\n5.2.3 Proving Optimality\nRecall that our aim is to find an optimal solution to the\nmaster LP relaxation. Using column generation, we can\nprove that a restricted-primal solution is optimal once all\ncolumns have non-positive prices. Unfortunately though,\nour clearing problem has the so-called tailing-off effect [1,\nSection 6.3], in which, even though the restricted primal is\noptimal in hindsight, a large number of additional iterations\nare required in order to prove optimality (i.e., eliminate all\npositive-price columns). There is no good general solution\nto the tailing-off effect.\nHowever, to mitigate this effect, we take advantage of\nthe following problem-specific observation. Recall from\nSection 1.1 that, almost always, a maximum-weight exchange\nwith cycles of length at most 3 has the same weight as an\nunrestricted maximum-weight exchange. (This does not mean\nthat the solver for the unrestricted case will find a\nsolution with short cycles, however.) Furthermore, the\nunrestricted clearing problem can be solved in polynomial time\n(recall Section 4). 
Hence, we can efficiently compute an upper bound on the master LP relaxation, and, whenever the restricted primal achieves this upper bound, we have proven optimality without necessarily having to eliminate all positive-price columns!\nIn order for this to improve the running time of the overall algorithm, we need to be able to clear the unrestricted market in less time than it takes column generation to eliminate all the positive-price cycles. Even though the first problem is polynomial-time solvable, this is not trivial for large instances. For example, for a market with 10,000 patients and 25 million edges, specialized maximum-weight matching code [16] was too slow, and CPLEX ran out of memory on the edge formulation encoding from Section 4. To make this idea work then, we used column generation to solve the edge formulation.\nThis involves starting with a small random subset of the edges, and then adding positive-price edges one-by-one until none remain. We conduct this secondary column generation not in the original market graph G, but in the perfect matching bipartite graph of Figure 3. We do this so that we only need to solve the LP, not the ILP, since the integrality gap in the perfect matching bipartite graph is 1, i.e., there always exists an integral solution that achieves the fractional upper bound.\nThe resulting speedup to the overall algorithm is dramatic, as can be seen in Figure 6.\n5.2.4 Column Management\nIf the optimal value of the initial restricted LP P' is far from that of the master LP P, then a large number of columns are generated before the gap is closed. This leads to memory problems on markets with as few as 4,000 patients.
Also, even before memory becomes an issue, the column generation iterations become slow because of the additional overhead of solving a larger LP.\nTo address these issues, we implemented a column management scheme to limit the size of the restricted LP. Whenever we add columns to the LP, we check to see if it contains more than a threshold number of columns. If this is the case, we selectively remove columns until it is again below the threshold (based on memory size, we set the threshold at 400,000). As we discussed earlier, only a tiny fraction of all the cycles will end up in the final solution. It is unlikely that we delete such a cycle, and even if we do, it can always be generated again. Of course, we must not be too aggressive with the threshold, because doing so may offset the per-iteration performance gains by significantly increasing the number of iterations required to get a suitable column set into the LP.\nThere are some columns we never delete, for example those we have branched on (see Section 5.3.2), or those with a non-zero LP value. Amongst the rest, we delete those with the lowest price, since those correspond to the dual constraints that are most satisfied. This column management scheme works well and has enabled us to clear markets with 10,000 patients, as seen in Figure 6.\n5.3 Branch-and-Price Search for the ILP\nGiven a large market clearing problem, we can successfully solve its LP relaxation to optimality by using the column generation enhancements described above. However, the solutions we find are usually fractional. Thus the next step involves performing a branch-and-price tree search [1] to find an optimal integral solution.\nBriefly, this is the idea of branch-and-price. Whenever we set a fractional variable to 0 or 1 (branch), both the master LP, and the restriction we are working with, are changed (constrained).
By default then, we need to perform column generation (go through the effort of pricing) at each node of the search tree to prove that the constrained restriction is optimal for the constrained master LP. (However, as discussed in Section 5.2.3, we compute the integral upper bound for the root node based on relaxing the cycle length constraint completely, and whenever any node's LP in the tree achieves that value, we do not need to continue pricing columns at that node.)\nFor the clearing problem with cycles of length at most 3, we have found that there is rarely a gap between the optimal integral and fractional solutions. This means we can largely avoid the expensive per-node pricing step: whenever the constrained restricted LP has the same optimal value as its parent in the tree search, we can prove LP optimality, as in Section 5.2.3, without having to include any additional columns in the restricted LP.\nAlthough CPLEX can solve ILPs, it does not support branch-and-price (for example, because there can be problem-specific complications involving the interaction between the branching rule and the pricing problem). Hence, we implemented our own branch-and-price algorithm, which explores the search tree in depth-first order. We also experimented with the A* node selection order [7, 2]. However, this search strategy requires significantly more memory, which we found was better employed in making the column generation phase faster (see Section 5.2.2). The remaining major components of the algorithm are described in the next two subsections.\n5.3.1 Primal Heuristics\nBefore branching on a fractional variable, we use primal heuristics to construct a feasible integral solution. These solutions are lower bounds on the final optimal integral solution. Hence, whenever a restricted fractional solution is no better than the best integral solution found so far, we prune the current subtree.
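In miniature, that pruning rule looks like the following sketch. Note the assumption: in the real branch-and-price search the upper bound comes from the restricted LP relaxation, whereas this toy version substitutes a crude combinatorial bound (the incumbent value plus the total weight of all still-undecided cycles).

```python
def branch_and_bound(cycles, weights):
    """Depth-first search over cycle variables with incumbent pruning.

    At each node we either include the next cycle (if vertex-disjoint
    from the cycles chosen so far) or exclude it. A subtree is pruned
    whenever its upper bound cannot beat the best solution found so far.
    """
    best = {"value": 0, "cover": []}

    def dfs(i, used_vertices, value, chosen):
        if value > best["value"]:
            best["value"], best["cover"] = value, list(chosen)
        if i == len(cycles):
            return
        # crude upper bound: current value plus every undecided cycle's weight
        upper = value + sum(weights[c] for c in cycles[i:])
        if upper <= best["value"]:
            return  # prune: this subtree cannot beat the incumbent
        c = cycles[i]
        if not (used_vertices & set(c)):
            dfs(i + 1, used_vertices | set(c), value + weights[c], chosen + [c])
        dfs(i + 1, used_vertices, value, chosen)

    dfs(0, set(), 0, [])
    return best["value"], best["cover"]

# The toy market's three 2-cycles, weight 2 each: the optimal cover
# takes the two disjoint cycles.
cycles = [(1, 2), (2, 3), (3, 4)]
weights = {c: 2 for c in cycles}
print(branch_and_bound(cycles, weights))  # -> (4, [(1, 2), (3, 4)])
```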
A primal heuristic is effective if it is efficient and constructs tight lower bounds.\nWe experimented with two primal heuristics. The first is a simple rounding algorithm [8]: include all cycles with fractional value at least 0.5, and then, ensuring feasibility, greedily add the remaining cycles. Whilst this heuristic is efficient, we found that the lower bounds it constructs rarely enable much pruning.\nWe also tried using CPLEX as a primal heuristic. At any given node of the search tree, we can convert the restricted LP relaxation back to an ILP by reintroducing the integrality constraints. CPLEX has several built-in primal heuristics, which we can apply to this ILP. Moreover, we can use CPLEX's own tree search to find an optimal integral solution. In general, this tree search is much faster than our own.\nIf CPLEX finds an integral solution that matches the fractional upper bound at the root node, we are done. Otherwise, either no such integral solution exists, or we don't yet have the right combination of cycles in the restricted LP. For kidney-exchange markets, it is usually the second reason that applies (see Sections 5.2.2 and 5.2.4). Hence, at some point in the tree search, once more columns have been generated as a result of branching, the CPLEX heuristic will find an optimal integral solution.\nAlthough CPLEX tree search is faster than our own, it is not so fast that we can apply it to every node in our search tree. Hence, we make the following optimizations. Firstly, we add a constraint that requires the objective value of the ILP to be as large as the fractional target. If this is not the case, we want to abort and proceed to generate more columns with our branch-and-price search. Secondly, we limit the number of nodes in CPLEX's search tree. This is because we have observed that, when no integral solution exists, CPLEX can take a very long time to prove that.
Finally,\nwe only apply the CPLEX heuristic at a node if it has a\nsufficiently different set of cycles from its parent.\nUsing CPLEX as a primal heuristic has a large impact\nbecause it makes the search tree smaller, so all the\ncomputationally expensive pricing work is avoided at nodes that\nare not generated in this smaller tree.\n5.3.2 Cycle Brancher\nWe experimented with two branching strategies, both of\nwhich select one variable per node. The first strategy,\nbranching by certainty, randomly selects a variable from those\nwhose LP value is closest to 1. The second strategy,\nbranching by uncertainty, randomly selects a variable whose LP\nvalue is closest to 0.5. In either case, two children of the\nnode are generated corresponding to two subtrees, one in\nwhich the variable is set to 0, the other in which it is set\nto 1. Our depth-first search always chooses to explore first\nthe subtree in which the value of the variable is closest to\nits fractional value.\nFor our clearing problem with cycles of length at most\n3, we found branching by uncertainty to be superior, rarely\nrequiring any backtracking.\n6. EXPERIMENTAL RESULTS\nAll our experiments were performed in Linux (Red Hat\n9.0), using a Dell PC with a 3GHz Intel Pentium 4\nprocessor, and 1GB of RAM. Wherever we used CPLEX (e.g., in\nsolving the LP and as a primal heuristic, as discussed in the\nprevious sections), we used CPLEX 10.010.\nFigure 6 shows the runtime performance of four clearing\nalgorithms. For each market size listed, we randomly\ngenerated 10 markets, and attempted to clear them using each\nof the algorithms.\nThe first algorithm is CPLEX on the full cycle\nformulation. This algorithm fails to clear any markets with 1000\npatients or more. Also, its running time on markets smaller\nthan this is significantly worse than the other algorithms.\nThe other algorithms are variations of the incremental\ncolumn generation approach described in Section 5. 
We begin with the following settings (all optimizations are switched on):\nColumn Seeder: combination of greedy exchange and maximum-weight matching heuristics, and random walk seeder (400,000 cycles).\nColumn Generation: one column at a time.\nColumn Management: on, with 400,000 column limit.\nOptimality Prover: on.\nPrimal Heuristic: rounding and CPLEX tree search.\nBranching Rule: uncertainty.\nThe combination of these optimizations allows us to easily clear markets with over 10,000 patients. In each of the next two algorithms, we turn one of these optimizations off to highlight its effectiveness.\nFirst, we restrict the seeder so that it only begins with 10,000 cycles. This setting is faster for smaller instances, since the LP relaxations are smaller and faster to solve. However, at 5000 vertices, this effect starts to be offset by the additional column generation that must be performed. For larger instances, this restricted seeder is clearly worse.\nFinally, we restore the seeder to its optimized setting, but this time remove the optimality prover described in Section 5.2.3. As in many column generation problems, the tailing-off effect is substantial. By taking advantage of the properties of our problem, we manage to clear a market with 10,000 patients in about the same time it would otherwise have taken to clear a 6000 patient market.\n7. FIELDING THE TECHNOLOGY\nOur algorithm and implementation replaced CPLEX as the clearing algorithm of the Alliance for Paired Donation, one of the leading kidney exchanges, in December 2006. We conduct a match run every two weeks, and the first transplants based on our solutions have already been conducted.\nWhile there are (for political/inter-personal reasons) at least four kidney exchanges in the US currently, everyone understands that a unified, unfragmented national exchange would save more lives.
We are in discussions with additional kidney exchanges that are interested in adopting our technology. This way our technology (and the processes around it) will hopefully serve as a substrate that will eventually help in unifying the exchanges. At least computational scalability is no longer an obstacle.\n8. CONCLUSION AND FUTURE RESEARCH\nIn this work we have developed the most scalable exact algorithms for barter exchanges to date, with special focus on the upcoming national kidney-exchange market in which patients with kidney disease will be matched with compatible donors by swapping their own willing but incompatible donors. With over 70,000 patients already waiting for a cadaver kidney in the US, this market is seen as the only ethical way to significantly reduce the 4,000 deaths per year attributed to kidney disease.\nOur work presents the first algorithm capable of clearing these markets on a nationwide scale. It optimally solves the kidney exchange clearing problem with 10,000 donor-donee pairs. Thus there is no need to resort to approximate solutions. The best prior technology (vanilla CPLEX) cannot handle instances beyond about 900 donor-donee pairs because it runs out of memory. The key to our improvement is incremental problem formulation. We adapted two paradigms for the task: constraint generation and column generation. For each, we developed a host of techniques that substantially improve both runtime and memory usage. Some of the techniques use domain-specific observations while others are domain independent. We conclude that column generation scales dramatically better than constraint generation. For column generation in the LP, our enhancements include pricing techniques, column seeding techniques, techniques for proving optimality without having to bring in all positive-price columns (and using another column-generation process in a different formulation to do so), and column removal techniques.
For the branch-and-price search in the integer program that surrounds the LP, our enhancements include primal heuristics, and we also compared branching strategies. Undoubtedly, further parameter tuning and perhaps additional speed improvement techniques could be used to make the algorithm even faster.\nOur algorithm also supports several generalizations, as desired by real-world kidney exchanges. These include multiple alternative donors per patient, weighted edges in the market graph (to encode differences in expected life years added based on degrees of compatibility, patient age and weight, etc., as well as the probability of last-minute incompatibility), angel-triggered chains (chains of transplants triggered by altruistic donors who do not have patients associated with them, each chain ending with a left-over kidney), and additional issues (such as different scores for saving different altruistic donors or left-over kidneys for future match runs based on blood type, tissue type, and likelihood that the organ would not disappear from the market by the donor getting second thoughts). Because we use an ILP methodology, we can also support a variety of side constraints, which often play an important role in markets in practice [19]. We can also support forcing part of the allocation, for example: "This acutely sick teenager has to get a kidney if possible."\nOur work has treated the kidney exchange as a batch problem with full information (at least in the short run, kidney exchanges will most likely continue to run in batch mode every so often). Two important directions for future work are to explicitly address both online and limited-information aspects of the problem.\nThe online aspect is that donees and donors will be arriving into the system over time, and it may be best to not execute the myopically optimal exchange now, but rather save part of the current market for later matches.
In fact, some work has been done on this in certain restricted settings [22, 24].\nThe limited-information aspect is that even in batch mode, the graph provided as input is not completely correct: a number of donor-donee pairs believed to be compatible turn out to be incompatible when more expensive last-minute tests are performed. Therefore, it would be desirable to perform an optimization with this in mind, such as outputting a low-degree robust subgraph to be tested before the final match is produced, or to output a contingency plan in case of failure. We are currently exploring a number of questions along these lines but there is certainly much more to be done.\nAcknowledgments\nWe thank economists Al Roth and Utku Unver, as well as kidney transplant surgeon Michael Rees, for alerting us to the fact that prior technology was inadequate for the clearing problem on a national scale, supplying initial data sets, and discussions on details of the kidney exchange process. We also thank Don Sheehy for bringing to our attention the idea of shoe exchange. This work was supported in part by the National Science Foundation under grants IIS-0427858 and CCF-0514922.\nFigure 6: Experimental results: average runtime with standard deviation bars (clearing time in seconds versus number of patients, for our algorithm, our algorithm with restricted column seeder, our algorithm with no optimality prover, and the CPLEX cycle formulation).\n9. REFERENCES\n[1] C. Barnhart, E. L. Johnson, G. L. Nemhauser, M. W. P. Savelsbergh, and P. H. Vance. Branch-and-price: Column generation for solving huge integer programs. Operations Research, 46:316-329, May-June 1998.\n[2] R. Dechter and J. Pearl. Generalized best-first search strategies and the optimality of A*. Journal of the ACM, 32(3):505-536, 1985.\n[3] F. L. Delmonico. Exchanging kidneys - advances in living-donor transplantation.
New England Journal of Medicine, 350:1812-1814, 2004.\n[4] J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 17:449-467, 1965.\n[5] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. 1990.\n[6] S. E. Gentry, D. L. Segev, and R. A. Montgomery. A comparison of populations served by kidney paired donation and list paired donation. American Journal of Transplantation, 5(8):1914-1921, August 2005.\n[7] P. Hart, N. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100-107, 1968.\n[8] K. Hoffman and M. Padberg. Solving airline crew-scheduling problems by branch-and-cut. Management Science, 39:657-682, 1993.\n[9] Intervac. http://intervac-online.com/.\n[10] National odd shoe exchange. http://www.oddshoe.org/.\n[11] Peerflix. http://www.peerflix.com.\n[12] Read it swap it. http://www.readitswapit.co.uk/.\n[13] A. E. Roth, T. Sonmez, and M. U. Unver. Kidney exchange. Quarterly Journal of Economics, 119(2):457-488, May 2004.\n[14] A. E. Roth, T. Sonmez, and M. U. Unver. A kidney exchange clearinghouse in New England. American Economic Review, 95(2):376-380, May 2005.\n[15] A. E. Roth, T. Sonmez, and M. U. Unver. Efficient kidney exchange: Coincidence of wants in a market with compatibility-based preferences. American Economic Review, forthcoming.\n[16] E. Rothberg. Gabow's N^3 maximum-weight matching algorithm: an implementation. The First DIMACS Implementation Challenge, 1990.\n[17] S. L. Saidman, A. E. Roth, T. Sonmez, M. U. Unver, and F. L. Delmonico. Increasing the opportunity of live kidney donation by matching for two and three way exchanges. Transplantation, 81(5):773-782, 2006.\n[18] T. Sandholm. Optimal winner determination algorithms. In Combinatorial Auctions, Cramton, Shoham, and Steinberg, eds. MIT Press, 2006.\n[19] T. Sandholm and S. Suri.
Side constraints and non-price attributes in markets. In IJCAI-2001 Workshop on Distributed Constraint Reasoning, pages 55-61, Seattle, WA, 2001. To appear in Games and Economic Behavior.\n[20] D. L. Segev, S. E. Gentry, D. S. Warren, B. Reeb, and R. A. Montgomery. Kidney paired donation and optimizing the use of live donor organs. Journal of the American Medical Association, 293(15):1883-1890, April 2005.\n[21] United Network for Organ Sharing (UNOS). http://www.unos.org/.\n[22] M. U. Unver. Dynamic kidney exchange. Working paper.\n[23] United States Renal Data System (USRDS). http://www.usrds.org/.\n[24] S. A. Zenios. Optimal control of a paired-kidney exchange program. Management Science, 48(3):328-342, March 2002.", "keywords": "barter;instance generator;column generation;matching;edge formulation;kidney;cycle formulation;match;branch-and-price;exchange;transplant;barter-exchange market;solution approach;market characteristic"}
-{"name": "test_J-21", "title": "A Strategic Model for Information Markets", "abstract": "Information markets, which are designed specifically to aggregate traders' information, are becoming increasingly popular as a means for predicting future events. Recent research in information markets has resulted in two new designs, market scoring rules and dynamic parimutuel markets. We develop an analytic method to guide the design and strategic analysis of information markets. Our central contribution is a new abstract betting game, the projection game, that serves as a useful model for information markets. We demonstrate that this game can serve as a strategic model of dynamic parimutuel markets, and also captures the essence of the strategies in market scoring rules. The projection game is tractable to analyze, and has an attractive geometric visualization that makes the strategic moves and interactions more transparent. We use it to prove several strategic properties about the dynamic parimutuel market. We also prove that a special form of the projection game is strategically equivalent to the spherical scoring rule, and it is strategically similar to other scoring rules. Finally, we illustrate two applications of the model to analysis of complex strategic scenarios: we analyze the precision of a market in which traders have inertia, and a market in which a trader can profit by manipulating another trader's beliefs.", "fulltext": "1. INTRODUCTION\nMarkets have long been used as a medium for trade. As a side effect of trade, the participants in a market reveal something about their preferences and beliefs. For example, in a financial market, agents would buy shares which they think are undervalued, and sell shares which they think are overvalued. It has long been observed that, because the market price is influenced by all the trades taking place, it aggregates the private information of all the traders.
Thus, in a situation in which future events are uncertain, and each trader might have a little information, the aggregated information contained in the market prices can be used to predict future events. This has motivated the creation of information markets, which are mechanisms for aggregating the traders' information about an uncertain event.

Information markets can be modeled as a game in which the participants bet on a number of possible outcomes, such as the results of a presidential election, by buying shares of the outcomes and receiving payoffs when the outcome is realized. As in financial markets, the participants aim to maximize their profit by buying low and selling high. In this way, the players' behavior transmits their personal information and beliefs about the possible outcomes, and can be used to predict the event more accurately. The benefit of well-designed information markets goes beyond information aggregation; they can also be used as a hedging instrument, to allow traders to insure against risk.

Recently, researchers have turned to the problem of designing market structures specifically to achieve better information aggregation properties than traditional markets. Two designs for information markets have been proposed: the Dynamic Parimutuel Market (DPM) by Pennock [10] and the Market Scoring Rules (MSR) by Hanson [6]. Both the DPM and the MSR were designed with the goal of giving informed traders an incentive to trade, and to reveal their information as soon as possible, while also controlling the subsidy that the market designer needs to pump into the market.

The DPM was created as a combination of a pari-mutuel market (which is commonly used for betting on horses) and a continuous double auction, in order to simultaneously obtain the first one's infinite buy-in liquidity and the latter's ability to react continuously to new information. One version of the DPM was implemented in the Yahoo!
Buzz market [8] to experimentally test the market's prediction properties. The foundations of the MSR lie in the idea of a proper scoring rule, which is a technique to reward forecasters in a way that encourages them to give their best prediction. The innovation in the MSR is to use these scoring rules as instruments that can be traded, thus providing traders who have new information an incentive to trade. The MSR was to be used in a policy analysis market in the Middle East [15], which was subsequently withdrawn.

Information markets rely on informed traders trading for their own profit, so it is critical to understand the strategic properties of these markets. This is not an easy task, because markets are complex, and traders can influence each other's beliefs through their trades, and hence, can potentially achieve long term gains by manipulating the market. For the MSR, it has been shown that, if we exclude the possibility of achieving gain through misleading other traders, it is optimal for each trader to honestly reflect her private belief in her trades. For the DPM, we are not aware of any prior strategic analysis of this nature; in fact, a strategic hole was discovered while testing the DPM in the Yahoo! Buzz market [8].

1.1 Our Results

In this paper, we seek to develop an analytic method to guide the design and strategic analysis of information markets. Our central contribution is a new abstract betting game, the projection game¹, that serves as a useful model for information markets. The projection game is conceptually simpler than the MSR and DPM, and thus it is easier to analyze. In addition it has an attractive geometric visualization, which makes the strategic moves and interactions more transparent. We present an analysis of the optimal strategies and profits in this game.

We then undertake an analysis of traders' costs and profits in the dynamic parimutuel market.
Remarkably, we find that the cost of a sequence of trades in the DPM is identical to the cost of the corresponding moves in the projection game. Further, if we assume that the traders' beliefs at the end of trading match the true probability of the event being predicted, the traders' payoffs and profits in the DPM are identical to their payoffs and profits in a corresponding projection game. We use the equivalence between the DPM and the projection game to prove that the DPM is arbitrage-free, deduce profitable strategies in the DPM, and demonstrate that constraints on the agents' trades are necessary to prevent a strategic breakdown.

We also prove an equivalence between the projection game and the MSR: we show that play in the MSR is strategically equivalent to play in a restricted projection game, at least for myopic strategies and small trades. In particular, the profitability of any move under the spherical scoring rule is exactly proportional to the profitability of the corresponding move in the projection game restricted to a circle, with slight distortion of the prior probabilities. This allows us to use the projection game as a conceptual model for market scoring rules.

We note that while the MSR with the spherical scoring rule somewhat resembles the projection game, due to the mathematical similarity of their profit expressions, the DPM model is markedly different and thus its equivalence to the projection game is especially striking.
Further, because the restricted projection game corresponds to a DPM with a natural trading constraint, this sheds light on an intriguing connection between the MSR and the DPM.

¹ In an earlier version of this paper, we called this the segment game.

Lastly, we illustrate how the projection game model can be used to analyze the potential for manipulation of information markets for long-term gain.² We present an example scenario in which such manipulation can occur, and suggest additional rules that might mitigate the possibility of manipulation. We also illustrate another application to analyzing how a market maker can improve the prediction accuracy of a market in which traders will not trade unless their expected profit is above a threshold.

1.2 Related Work

Numerous studies have demonstrated empirically that market prices are good predictors of future events, and seem to aggregate the collected wisdom of all the traders [2, 3, 12, 1, 5, 16]. This effect has also been demonstrated in laboratory studies [13, 14], and has theoretical support in the literature of rational expectations [9].

A number of recent studies have addressed the design of the market structure and trading rules for information markets, as well as the incentive to participate and other strategic issues. The two papers most closely related to our work are the papers by Hanson [6] and Pennock [10]. However, strategic issues in information markets have also been studied by Mangold et al. [8] and by Hanson, Oprea and Porter [7]. An upcoming survey paper [11] discusses cost-function formulations of automated market makers.

Organization of the paper: The rest of this paper is organized as follows. In Section 2, we describe the projection game, and analyze the players' costs, profits, and optimal strategies in this game. In Section 3, we study the dynamic parimutuel market, and show that trade in a DPM is equivalent to a projection game.
We establish a connection between the projection game and the MSR in Section 4. In Section 5, we illustrate how the projection game can be used to analyze non-myopic, and potentially manipulative, actions. We present our conclusions, and suggestions for future work, in Section 6.

2. THE PROJECTION GAME

In this section, we describe an abstract betting game, the projection game; in the following sections, we will argue that both the MSR and the DPM are strategically similar to the projection game. The projection game is conceptually simpler than MSR and DPM, and hence should prove easier to analyze. For clarity of exposition, here and in the rest of the paper we assume the space is two dimensional, i.e., there are only two possible events. Our results easily generalize to more than two dimensions. We also assume throughout that players are risk-neutral.

Suppose there are two mutually exclusive and exhaustive events, A and B. (In other words, B is the same as not A.) There are n agents who may have information about the likelihood of A and B, and we (the designers) would like to aggregate their information. We invite them to play the game described below.

At any point in the game, there is a current state described by a pair of parameters, (x, y), which we sometimes write in vector form as x. Intuitively, x corresponds to the total holding of shares in A, and y corresponds to the holding of shares in B. In each move of the game, one player (say i) plays an arrow (or segment) from (x, y) to (x', y'). We use the notation [(x, y) → (x', y')] or [x, x'] to denote this move. The game starts at (0, 0), but the market maker makes the first move; without loss of generality, we can assume the move is to (1, 1).

² Here, we are referring only to manipulation of the information market for later gain from the market itself; we do not consider the possibility of traders having vested interests in the underlying events.
All subsequent moves are made by players, in an arbitrary (and potentially repeating) sequence. Each move has a cost associated with it, given by

    C[x, x'] = |x'| − |x|,

where |·| denotes the Euclidean norm, |x| = sqrt(x² + y²). Note that none of the variables are constrained to be nonnegative, and hence, the cost of a move can be negative.

The cost can be expressed in an alternative form, that is also useful. Suppose player i moves from (x, y) to (x', y'). We can write (x', y') as (x + l·e_x, y + l·e_y), such that l ≥ 0 and e_x² + e_y² = 1. We call l the volume of the move, and e = (e_x, e_y) the direction of the move. At any point (x̂, ŷ), there is an instantaneous price charged, defined as follows:

    c((x̂, ŷ), (e_x, e_y)) = (x̂·e_x + ŷ·e_y) / |(x̂, ŷ)| = (x̂ · e) / |x̂|.

Note that the price depends only on the angle between the vector (x̂, ŷ) and the segment [(x, y) → (x', y')], and not on the lengths. The total cost of the move is the price integrated over the segment [(x, y) → (x', y')], i.e.,

    C[(x, y) → (x', y')] = ∫₀ˡ c((x + w·e_x, y + w·e_y), (e_x, e_y)) dw.

We assume that the game terminates after a finite number of moves. At the end of the game, the true probability p of event A is determined, and the agents receive payoffs for the moves they made. Let q = (q_x, q_y) = (p, 1 − p) / |(p, 1 − p)|. The payoff to agent i for a segment [(x, y) → (x', y')] is given by:

    P([(x, y) → (x', y')]) = q_x(x' − x) + q_y(y' − y) = q · (x' − x).

We call the line through the origin with slope (1 − p)/p = q_y/q_x the p-line. Note that the payoff, too, may be negative.

One drawback of the definition of a projection game is that implementing the payoffs requires us to know the actual probability p.
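As a concrete numerical sketch of these definitions (the function names are our own, not from the paper), the cost and payoff of a single move can be computed directly:

```python
import math

def cost(x, x_new):
    # Cost of a move [x -> x']: the difference of Euclidean norms.
    return math.hypot(*x_new) - math.hypot(*x)

def payoff(x, x_new, p):
    # Payoff of a move [x -> x'] once the true probability p of A is known:
    # the projection of the move segment onto the unit vector q on the p-line.
    norm = math.hypot(p, 1 - p)
    qx, qy = p / norm, (1 - p) / norm
    return qx * (x_new[0] - x[0]) + qy * (x_new[1] - x[1])

# A single move from the market maker's opening state (1, 1):
move_cost = cost((1, 1), (2, 1))          # sqrt(5) - sqrt(2)
move_payoff = payoff((1, 1), (2, 1), 0.8)
```

Both quantities can be negative, matching the remarks above.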
This is feasible if the probability can eventually be determined statistically, such as when predicting the relative frequency of different recurring events, or vote shares. It is also feasible for one-off events in which there is reason to believe that the true probability is either 0 or 1. For other one-off events, it cannot be implemented directly (unlike scoring rules, which can be implemented in expectation). However, we believe that even in these cases, the projection game can be useful as a conceptual and analytical tool.

The moves, costs and payoffs have a natural geometric representation, which is shown in Figure 1 for three players with one move each. The players append directed line segments in turn, and the payoff player i finally receives for a move is the projection of her segment onto the line with slope (1 − p)/p. Her cost is the difference of distances of the endpoints of her move to the origin.

[Figure 1: A projection game with three players]

2.1 Strategic properties of the projection game

We begin our strategic analysis of the projection game by observing the following simple path-independence property.

Lemma 1. [Path-Independence] Suppose there is a sequence of moves leading from (x, y) to (x', y'). Then, the total cost of all the moves is equal to the cost of the single move [(x, y) → (x', y')], and the total payoff of all the moves is equal to the payoff of the single move [(x, y) → (x', y')].

Proof. The proof follows trivially from the definition of the costs and payoffs: if we consider a path from point x to point x', both the net change in the vector lengths and the net projection onto the p-line are completely determined by x and x'.

Although simple, path independence of profits is vitally important, because it implies (and is implied by) the absence of arbitrage in the market.
In other words, there is no sequence of moves that starts and ends at the same point, but results in a positive profit. On the other hand, if there were two paths from (x, y) to (x', y') with different profits, there would be a cyclic path with positive profit.

For ease of reference, we summarize some more useful properties of the cost and payoff functions in the projection game.

Lemma 2.
1. The instantaneous price for moving along a line through the origin is 1 or −1, when the move is away from or toward the origin respectively. The instantaneous price along a circle centered at the origin is 0.
2. When x moves along a circle centered at the origin to a point x̄ on the positive p-line, the corresponding payoff is P(x, x̄) = |x| − x · q, and the cost is C[x, x̄] = 0.
3. The two cost function formulations are equivalent:

    C[x, x'] = ∫₀ˡ cos(x + w·e, e) dw = |x'| − |x|  for all x, x',

where e is the unit vector giving the direction of the move. In addition, when x moves along the positive p-line, the payoff is equal to the cost, P(x, x') = |x'| − |x|.

Proof.
1. The instantaneous price is c(x, e) = x · e / |x| = cos(x, e), where e is the direction of movement, and the result follows.
2. Since x̄ is on the positive p-line, q · x̄ = |x̄| = |x|, hence P(x, x̄) = q · (x̄ − x) = |x| − x · q; the cost is 0 from the definition.
3. From Part 1, the cost of moving from x to the origin is

    C[x, 0] = ∫₀ˡ cos(x + w·e, e) dw = ∫₀ˡ (−1) dw = −|x|,

where l = |x| and e = −x/|x|. By the path-independence property, C[x, x'] = C[x, 0] + C[0, x'] = |x'| − |x|. Finally, a point on the positive p-line gets projected to itself, namely q · x = |x|, so when the movement is along the positive p-line, P(x, x') = q · (x' − x) = |x'| − |x| = C[x, x'].

We now consider the question of which moves are profitable in this game.
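Before doing so, we note that Lemma 1 and the circular-arc property of Lemma 2 are easy to check numerically; the sketch below (our own illustration, not code from the paper) sums the costs along a multi-move path and compares the total with the single direct move:

```python
import math

def cost(a, b):
    # Cost of a move [a -> b] in the projection game: |b| - |a|.
    return math.hypot(*b) - math.hypot(*a)

# Path independence (Lemma 1): the total cost depends only on the endpoints.
path = [(1.0, 1.0), (2.5, 0.5), (0.7, 3.0), (2.0, 2.0)]
total = sum(cost(a, b) for a, b in zip(path, path[1:]))
direct = cost(path[0], path[-1])
assert abs(total - direct) < 1e-12

# Lemma 2, part 1: moving along a circle centered at the origin costs nothing.
r, t1, t2 = 2.0, 0.3, 1.2
on_circle = cost((r * math.cos(t1), r * math.sin(t1)),
                 (r * math.cos(t2), r * math.sin(t2)))
assert abs(on_circle) < 1e-12
```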
The eventual profit of a move [x, x'], where x' = x + l·(e_x, e_y), is

    profit[x, x'] = P[x, x'] − C[x, x'] = l·(q · e) − C[x, x'].

Differentiating with respect to l, we get

    d(profit)/dl = q · e − c(x + l·e, e) = q · e − ((x + l·e) / |x + l·e|) · e.

We observe that this is 0 if p(y + l·e_y) = (1 − p)(x + l·e_x), in other words, when the vectors q and (x + l·e) are exactly aligned. Further, we observe that the price is non-decreasing with increasing l. Thus, along the direction e, the profit is maximized at the point of intersection with the p-line.

By Lemma 2, there is always a path from x to the positive p-line with 0 cost, which is given by an arc of the circle with center at the origin and radius |x|. Also, any movement along the p-line has 0 additional profit. Thus, for any point x, we can define the profit potential φ(x, p) by

    φ(x, p) = |x| − x · q.

Note that the potential is positive for x off the positive p-line and zero for x on the line. Next we show that a move to a lower potential is always profitable.

Lemma 3. The profit of a move [x, x'] is equal to the difference in potential φ(x, p) − φ(x', p).

Proof.
Denote z = |x|·q and z' = |x'|·q, i.e., these are the points of intersection of the positive p-line with the circles centered at the origin with radii |x| and |x'| respectively. By the path-independence property and Lemma 2, the profit of the move [x, x'] is

    profit(x, x') = profit(x, z) + profit(z, z') + profit(z', x')
                  = (|x| − x · q) + 0 + (x' · q − |x'|)
                  = φ(x, p) − φ(x', p).

Thus, the profit of the move is equal to the change in profit potential between the endpoints.

This lemma offers another way of seeing that it is optimal to move to the point of lowest potential, namely to the p-line.

[Figure 2: The profit of move [x, x'] is equal to the change in profit potential from x to x'.]

3. DYNAMIC PARIMUTUEL MARKETS

The dynamic parimutuel market (DPM) was introduced by Pennock [10] as an information market structure that encourages informed traders to trade early, has guaranteed liquidity, and requires a bounded subsidy. This market structure was used in the Yahoo! Buzz market [8]. In this section, we show that the dynamic parimutuel market is also remarkably similar to the projection game. Coupled with Section 4, this also demonstrates a strong connection between the DPM and MSR.

In a two-event DPM, users can place bets on either event A or B at any time, by buying a share in the appropriate event. The price of a share is variable, determined by the total amount of money in the market and the number of shares currently outstanding. Further, existing shares can be sold at the current price. After it is determined which event really happens, the shares are liquidated for cash. In the total-money-redistributed variant of DPM, which is the variant used in the Yahoo! market, the total money is divided equally among the shares of the winning event; shares of the losing event are worthless.
Note that the payoffs are undefined if the event has zero outstanding shares; the DPM rules should preclude this possibility.

We use the following notation. Let x be the number of outstanding shares of A (totalled over all traders), and y be the number of outstanding shares of B. Let M denote the total money currently in the market. Let c_A and c_B denote the prices of shares in A and B respectively. The price of a share in the Yahoo! DPM is determined by the share-ratio principle:

    c_A / c_B = x / y.   (1)

The form of the prices can be fully determined by stipulating that, for any given value of M, x, and y, there must be some probability p_A such that, if a trader believes that p_A is the probability that A will occur and that the market will liquidate in the current state, she cannot expect to profit from either buying or selling either share. This gives us

    c_A = p_A · (M / x),   c_B = p_B · (M / y).

Since p_A + p_B = 1, we have:

    x·c_A + y·c_B = M.   (2)

Finally, combining Equations 1 and 2, we get

    c_A = x·M / (x² + y²),   c_B = y·M / (x² + y²).

Cost of a trade in the DPM: Consider a trader who comes to a DPM in state (M, x, y), and buys or sells shares such that the eventual state is (M', x', y'). What is the net cost, M' − M, of her move?

Theorem 4. The cost of the move from (x, y) to (x', y') is

    M' − M = M₀ [ sqrt(x'² + y'²) − sqrt(x² + y²) ]

for some constant M₀. In other words, it is a constant multiple of the corresponding cost in the projection game.

Proof. Consider the function G(x, y) = M₀ · sqrt(x² + y²). The function G is differentiable for all (x, y) ≠ (0, 0), and its partial derivatives are:

    ∂G/∂x = M₀ · x / sqrt(x² + y²) = x · G(x, y) / (x² + y²),
    ∂G/∂y = M₀ · y / sqrt(x² + y²) = y · G(x, y) / (x² + y²).

Now, compare these equations to the prices in the DPM, and observe that, as a trader buys or sells in the DPM, the instantaneous price is the derivative of the money.
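This observation, that the share price equals the partial derivative of the market capitalization G, can be checked numerically; the sketch below uses our own helper names and takes M₀ = 1:

```python
import math

def dpm_prices(M, x, y):
    # Share-ratio DPM prices: c_A = x*M/(x^2+y^2), c_B = y*M/(x^2+y^2).
    s = x * x + y * y
    return x * M / s, y * M / s

M0, x, y = 1.0, 3.0, 4.0
M = M0 * math.hypot(x, y)          # state satisfying M = G(x, y)
cA, cB = dpm_prices(M, x, y)

# Finite-difference check that c_A = dG/dx:
h = 1e-6
dG_dx = (M0 * math.hypot(x + h, y) - M0 * math.hypot(x, y)) / h
assert abs(cA - dG_dx) < 1e-5

# Theorem 4: the net cost of a trade is M0 * (|x'| - |x|).
trade_cost = M0 * (math.hypot(6.0, 8.0) - math.hypot(x, y))
```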
It follows that, if at any point of time the DPM is in a state (M, x, y) such that M = G(x, y), then, at all subsequent points of time, the state (M', x', y') of the DPM will satisfy M' = G(x', y'). Finally, note that we can pick the constant M₀ such that the equation is satisfied for the initial state of the DPM, and hence, it will always be satisfied.

One important consequence of Theorem 4 is that the dynamic parimutuel market is arbitrage-free (using Lemma 1). It is interesting to note that the original Yahoo! Buzz market used a different pricing rule, which did permit arbitrage; the price rule was changed to the share-ratio rule after traders started exploiting the arbitrage opportunities [8]. Another somewhat surprising consequence is that the numbers of outstanding shares x, y completely determine the total capitalization M of the DPM.

Constraints in the DPM: Although it might seem, based on the costs, that any move in the projection game has an equivalent move in the DPM, the DPM places some constraints on trades. Firstly, no trader is allowed to have a net negative holding in either share. This is important, because it ensures that the total holdings in each share are always positive. However, this is a boundary constraint, and does not impact the strategic choices of a player with a sufficiently large positive holding in each share. Thus, we can ignore this constraint in a first-order strategic analysis of the DPM. Secondly, for practical reasons a DPM will probably have a minimum unit of trade, but we assume here that arbitrarily small quantities can be traded.

Payoffs in the DPM: At some point, trading in the DPM ceases and shares are liquidated.
We assume here that the true probability becomes known at liquidation time, and describe the payoffs in terms of the probability; however, if the probability is not revealed, only the event that actually occurs, these payoffs can be implemented in expectation. Suppose the DPM terminates in a state (M, x, y), and the true probability of event A is p. When the dynamic parimutuel market is liquidated, the shares are paid off in the following way: each owner of a share of A receives pM/x, and each owner of a share of B receives (1 − p)M/y, for each share owned.

The payoffs in the DPM, although given by a fairly simple form, are conceptually complex, because the payoff of a move depends on the subsequent moves before the market liquidates. Thus, a fully rational choice of move in the DPM for player i should take into account the actions of subsequent players, including player i himself.

Here, we restrict the analysis to myopic, infinitesimal strategies: given that the market position is (M, x, y), in which direction should a player make an infinitesimal move in order to maximize her profit? We show that the infinitesimal payoffs and profits of a DPM with true probability p correspond strategically to the infinitesimal payoffs and profits of a projection game with odds sqrt(p/(1 − p)), in the following sense:

Lemma 5. Suppose player i is about to make a move in a dynamic parimutuel market in a state (M, x, y), and the true probability of event A is p. Then, assuming the market is liquidated after i's move:

• If x/y < sqrt(p/(1 − p)), player i profits by buying shares in A, or selling shares in B.
• If x/y > sqrt(p/(1 − p)), player i profits by selling shares in A, or buying shares in B.

Proof. Consider the cost and payoff of buying a small quantity Δx of shares of A. The cost is C[(x, y) → (x + Δx, y)] = Δx · xM/(x² + y²), and the payoff is Δx · pM/x.
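This comparison of cost and payoff for a small purchase is simple to evaluate numerically (a sketch with our own variable names):

```python
import math

# Myopic profitability of buying a small quantity dx of A (as in Lemma 5):
# cost = dx * x*M/(x^2 + y^2), payoff = dx * p*M/x.
p, M, x, y = 0.8, 10.0, 1.0, 1.0      # here x/y = 1 < sqrt(p/(1-p)) = 2
dx = 1e-3
cost_buy_A = dx * x * M / (x * x + y * y)
payoff_buy_A = dx * p * M / x
assert payoff_buy_A > cost_buy_A      # buying A is profitable, as Lemma 5 predicts
```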
Thus, buying the shares is profitable iff

    Δx · xM/(x² + y²) < Δx · pM/x
    ⇔ x²/(x² + y²) < p
    ⇔ (x² + y²)/x² > 1/p
    ⇔ 1 + (y/x)² > 1/p
    ⇔ y/x > sqrt((1 − p)/p)
    ⇔ x/y < sqrt(p/(1 − p)).

Thus, buying A is profitable if x/y < sqrt(p/(1 − p)), and selling A is profitable if x/y > sqrt(p/(1 − p)). The analysis for buying or selling B is similar, with p and (1 − p) interchanged.

It follows from Lemma 5 that it is myopically profitable for players to move towards the line with slope sqrt((1 − p)/p). Note that there is a one-to-one mapping between (1 − p)/p and sqrt((1 − p)/p)
Then, the final payoff for any move [x \u2192 x ]\nmade in the course of trading is (x \u2212 x) \u00b7 (\n\u221a\np,\n\u221a\n1 \u2212 p), i.e.,\nit is the same as the payoff in the projection game with oddsq\np\n1\u2212p\n.\nProof. First, observe that X\nM\n=\n\u221a\np and Y\nM\n=\n\u221a\n1 \u2212 p.\nThe final payoff is the liquidation value of (x \u2212 x) shares of\nA and (y \u2212 y) shares of B, which is\nPayoffDP M [x \u2212 x] = p\nM\nX\n(x \u2212 x) + (1 \u2212 p)\nM\nY\n(y \u2212 y)\n= p\n1\n\u221a\np\n(x \u2212 x) + (1 \u2212 p)\n1\n\u221a\n1 \u2212 p\n(y \u2212 y)\n=\n\u221a\np(x \u2212 x) +\np\n1 \u2212 p(y \u2212 y).\nStrategic Analysis for the DPM Theorems 4 and 6\ngive us a very strong equivalence between the projection\ngame and the dynamic parimutuel market, under the\nassumption that the DPM converges to the optimal value for\nthe true probability. A player playing in a DPM with true\nodds p/(1 \u2212 p), can imagine himself playing in the\nprojection game with odds\nq\np\n1\u2212p\n, because both the costs and the\npayoffs of any given move are identical.\nUsing this equivalence, we can transfer all the strategic\nproperties proven for the projection game directly to the\nanalysis of the dynamic parimutuel market. One\nparticularly interesting conclusion we can draw is as follows: In\nthe absence of any constraint that disallows it, it is always\nprofitable for an agent to move towards the origin, by selling\nshares in both A and B while maintaining the ratio x/y. In\nthe DPM, this is limited by forbidding short sales, so\nplayers can never have negative holdings in either share. As a\nresult, when their holding in one share (say A) is 0, they\ncan\"t use the strategy of moving towards the origin. 
We can conclude that a rational player should never hold shares of both A and B simultaneously, regardless of her beliefs and the market position.

This discussion leads us to consider a modified DPM, in which this strategic loophole is addressed directly: instead of disallowing all short sales, we place a constraint that no agent ever reduce the total market capitalization M (or, alternatively, that any agent's total investment in the market is always non-negative). We call this the nondecreasing market capitalization constraint for the DPM. This corresponds to a restriction that no move in the projection game reduces the radius. However, we can conclude from the preceding discussion that players have no incentive to ever increase the radius. Thus, the moves of the projection game would all lie on the quarter circle in the positive quadrant, with radius determined by the market maker's move. In Section 4, we show that the projection game on this quarter circle is strategically equivalent (at least myopically) to trade in a Market Scoring Rule. Thus, the DPM and MSR appear to be deeply connected to each other, like different interfaces to the same underlying game.

4. MARKET SCORING RULES

The Market Scoring Rule (MSR) was introduced by Hanson [6]. It is based on the concept of a proper scoring rule, a technique which rewards forecasters in a way that encourages them to give their best prediction. Hanson's innovation was to turn the scoring rules into instruments that can be traded, thereby providing traders who have new information an incentive to trade. One positive effect of this design is that a single trader would still have an incentive to trade, which is equivalent to updating the scoring rule report to reflect her information, thereby eliminating the problem of thin markets and illiquidity.
In this section, we show that, when the scoring rule used is the spherical scoring rule [4], there is a strong strategic equivalence between the projection game and the market scoring rule.

Proper scoring rules are tools used to reward forecasters who predict the probability distribution of an event. In the simple setting of two exhaustive, mutually exclusive events A and B, proper scoring rules are defined as follows. Suppose the forecaster predicts that the probabilities of the events are r = (r_A, r_B), with r_A + r_B = 1. The scoring rule is specified by functions s_A(r_A, r_B) and s_B(r_A, r_B), which are applied as follows: if the event A occurs, the forecaster is paid s_A(r_A, r_B), and if the event B occurs, the forecaster is paid s_B(r_A, r_B). The key property that a proper scoring rule satisfies is that the expected payment is maximized when the report is identical to the true probability distribution.

4.1 Equivalence with Spherical Scoring Rule

In this section, we focus on one specific scoring rule: the spherical scoring rule [4].

Definition 1. The spherical scoring rule [4] is defined by s_i(r) := r_i / ||r||. For two events, this can be written as:

    s_A(r_A, r_B) = r_A / sqrt(r_A² + r_B²);   s_B(r_A, r_B) = r_B / sqrt(r_A² + r_B²).

The spherical scoring rule is known to be a proper scoring rule. The definition generalizes naturally to higher dimensions.

We now demonstrate a close connection between the projection game restricted to a circular arc and a market scoring rule that uses the spherical scoring rule. At this point, it is convenient to use vector notation. Let x = (x, y) denote a position in the projection game. We consider the projection game restricted to the circle |x| = 1.

Restricted projection game: Consider a move in this restricted projection game from x to x'.
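As a quick numerical illustration of Definition 1 and of properness (the helper names below are ours, not from the paper), the expected payment p·s_A(r) + (1 − p)·s_B(r) is maximized by the honest report r_A = p:

```python
import math

def spherical_score(r):
    # Spherical scoring rule: s_i(r) = r_i / ||r||.
    norm = math.hypot(*r)
    return tuple(ri / norm for ri in r)

def expected_payment(report_pA, true_p):
    # Expected payment for reporting (report_pA, 1 - report_pA)
    # when the true probability of A is true_p.
    sA, sB = spherical_score((report_pA, 1 - report_pA))
    return true_p * sA + (1 - true_p) * sB

# Grid search over reports: the honest report is (numerically) optimal.
true_p = 0.3
best = max(expected_payment(r / 1000, true_p) for r in range(1, 1000))
assert abs(best - expected_payment(true_p, true_p)) < 1e-5
```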
Recall that q = (p, 1 − p) / sqrt(p² + (1 − p)²), where p is the true probability of the event. Then, the projection-game profit of a move [x, x'] is q · (x' − x) (noting that |x| = |x'|).

We can extend this to an arbitrary collection³ of (not necessarily contiguous) moves X = {[x₁, x₁'], [x₂, x₂'], ..., [x_l, x_l']}:

    SEG-PROFIT_p(X) = Σ_{[x,x'] ∈ X} q · (x' − x) = q · [ Σ_{[x,x'] ∈ X} (x' − x) ].

Spherical scoring rule profit: We now turn our attention to the MSR with the spherical scoring rule (SSR). Consider a player who changes the report from r to r'. Then, if the true probability of A is p, her expected profit is

    SSR-PROFIT([r, r']) = p(s_A(r') − s_A(r)) + (1 − p)(s_B(r') − s_B(r)).

Now, let us represent the initial and final positions in terms of circular coordinates. For r = (r_A, r_B), define the corresponding coordinates x = (r_A, r_B) / sqrt(r_A² + r_B²). Note that the coordinates satisfy |x| = 1, and thus correspond to valid coordinates for the restricted projection game.

Now, let p denote the vector (p, 1 − p). Then, expanding the spherical scoring functions s_A, s_B, the player's profit for a move from r to r' can be rewritten in terms of the corresponding coordinates x, x' as:

    SSR-PROFIT([x, x']) = p · (x' − x).

For any collection X of moves, the total payoff in the SSR market is given by:

    SSR-PROFIT_p(X) = Σ_{[x,x'] ∈ X} p · (x' − x) = p · [ Σ_{[x,x'] ∈ X} (x' − x) ].

Finally, we note that p and q are related by q = μ_p · p, where μ_p = 1 / sqrt(p² + (1 − p)²) is a scalar that depends only on p. This immediately gives us the following strong strategic equivalence for the restricted projection game and the SSR market:

Theorem 7.
Any collection of moves X yields a positive (negative) payoff in the restricted projection game iff X yields a positive (negative) payoff in the Spherical Scoring Rule market.\nProof. As derived above,\nSEG-PROFIT_p(X) = μ_p SSR-PROFIT_p(X).\nFor all p, 1 ≤ μ_p ≤ √2 (or, more generally, for an n-dimensional probability vector p, 1 ≤ μ_p = 1/|p| ≤ √n, by the arithmetic mean-root mean square inequality), and the result follows immediately.\n³We allow the collection to contain repeated moves, i.e., it is a multiset.\nAlthough Theorem 7 is stated in terms of the sign of the payoff, it extends to relative payoffs of two collections of moves:\nCorollary 8. Consider any two collections of moves X, X′. Then, X yields a greater payoff than X′ in the projection game iff X yields a greater payment than X′ in the SSR market.\nProof. Every move [x, x′] has a corresponding inverse move [x′, x]. In both the projection game and the SSR, the profit of the inverse move is simply the negative of the profit of the move (the moves are reversible). We can define a collection of moves X″ = X − X′ by adding the inverse of X′ to X. Note that\nSEG-PROFIT_p(X″) = SEG-PROFIT_p(X) − SEG-PROFIT_p(X′)\nand\nSSR-PROFIT_p(X″) = SSR-PROFIT_p(X) − SSR-PROFIT_p(X′);\napplying Theorem 7 completes the proof.\nIt follows that the ex post optimality of a move (or set of moves) is the same in both the projection game and the SSR market. On its own, this strong ex post equivalence is not completely satisfying, because in any non-trivial game there is uncertainty about the value of p, and the different scaling ratios for different p could lead to different ex ante optimal behavior. We can extend the correspondence to settings with uncertain p, as follows:\nTheorem 9. Consider the restricted projection game with some prior probability distribution F over possible values of p.
Then, there is a probability distribution G with the same support as F, and a strictly positive constant c that depends only on F, such that:\n• (i) For any collection X of moves, the expected profits are related by:\nE_F(SEG-PROFIT(X)) = c E_G(SSR-PROFIT(X))\n• (ii) For any collection X, and any measurable information set I ⊆ [0, 1], the expected profits conditioned on knowing that p ∈ I satisfy\nE_F(SEG-PROFIT(X) | p ∈ I) = c E_G(SSR-PROFIT(X) | p ∈ I)\nThe converse also holds: for any probability distribution G, there is a distribution F such that both these statements are true.\nProof. For simplicity, assume that F has a density function f. (The result holds even for non-continuous distributions.) Then, let c = ∫_0^1 μ_p f(p) dp, and define the density function g of distribution G by\ng(p) = μ_p f(p)/c.\nNow, for a collection of moves X,\nE_F(SEG-PROFIT(X)) = ∫ SEG-PROFIT_p(X) f(p) dp = ∫ SSR-PROFIT_p(X) μ_p f(p) dp = ∫ SSR-PROFIT_p(X) c g(p) dp = c E_G(SSR-PROFIT(X)).\n[Figure 3: Sample score curves for the log scoring rule s_i(r) = a_i + b log r_i and the quadratic scoring rule s_i(r) = a_i + b(2r_i − Σ_k r_k^2).]\nTo prove part (ii), we simply restrict the integral to values in I. The converse follows similarly by constructing F from G.\nAnalysis of MSR strategies. Theorem 9 provides the foundation for analysis of strategies in scoring rule markets. To the extent that strategies in these markets are independent of the specific scoring rule used, we can use the spherical scoring rule as the market instrument.
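The change of measure in the proof of Theorem 9 can be checked numerically. The sketch below assumes a uniform prior F on [0, 1], discretizes the integrals on a grid, and verifies E_F(SEG-PROFIT) = c · E_G(SSR-PROFIT) for a single move on the unit circle; all function names are ours, not the paper's:

```python
import math

def mu(p):
    """The scalar mu_p = 1 / sqrt(p^2 + (1-p)^2) relating q and p."""
    return 1 / math.hypot(p, 1 - p)

# Uniform prior F on [0, 1], discretized on a midpoint grid.
N = 10000
grid = [(i + 0.5) / N for i in range(N)]
f = [1.0] * N                                     # density of F
c = sum(mu(p) * fp for p, fp in zip(grid, f)) / N  # c = integral of mu_p f(p) dp
g = [mu(p) * fp / c for p, fp in zip(grid, f)]     # density of G

def ssr_profit(p, move):
    """SSR market profit p . (x' - x) for a single move on the unit circle."""
    (x1, x2) = move
    return p * (x2[0] - x1[0]) + (1 - p) * (x2[1] - x1[1])

def seg_profit(p, move):
    """Projection-game profit q . (x' - x) = mu_p * SSR profit."""
    return mu(p) * ssr_profit(p, move)

move = ((1.0, 0.0), (math.cos(1.0), math.sin(1.0)))
ef = sum(seg_profit(p, move) * fp for p, fp in zip(grid, f)) / N
eg = sum(ssr_profit(p, move) * gp for p, gp in zip(grid, g)) / N
assert abs(ef - c * eg) < 1e-9
```

Because g absorbs the factor μ_p pointwise, the two expectations agree up to the single constant c for every move, which is exactly what part (i) asserts.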
Then, analysis\nof strategies in the projection game with a slightly distorted\ndistribution over p can be used to understand the strategic\nproperties of the original market situation.\nImplementation in expectation Another important\nconsequence of Theorem 9 is that the restricted projection\ngame can be implemented with a small distortion in the\nprobability distribution over values of p, by using a Spherical\nScoring Rule to implement the payoffs. This makes the\nprojection game valuable as a design tool; for example, we can\nanalyze new constraints and rules in the projection game,\nand then implement them via the SSR. Unfortunately, the\nresult does not extend to unrestricted projection games,\nbecause the relative profit of moving along the circle versus\nchanging radius is not preserved through this\ntransformation. However, it is possible to extend the transformation\nto projection games in which the radius ri after the ith move\nis a fixed function of i (not necessarily constant), so that it\nis not within the strategic control of the player making the\nmove; such games can also be strategically implemented via\nthe spherical scoring rule (with distortion of priors).\n4.2 Connection to other scoring rules\nIn this section, we show a weaker similarity between the\nprojection game and the MSR with other scoring rules. We\nprove an infinitesimal similarity between the restricted\nprojection game and the MSR with log scoring rule; the result\ngeneralizes to all proper scoring rules that have a unique\nlocal and global maximum.\nA geometric visualization of some common scoring rules in\ntwo dimensions is depicted in Figure 3. The score curves in\nthe figure are defined by {(s1(r), s2(r)) | r = (r, 1 \u2212 r), r \u2208\n[0, 1]}. Similarly to the projection game, define the profit\npotential of a probability r in MSR to be the change in\nprofit for moving from r to the optimum p, \u03c6MSR(s(r), p) =\nprofitMSR[s(r), s(p)]. 
We will show that the profit potentials in the two games play analogous roles in analyzing the optimal strategies; in particular, both potential functions have a global minimum of 0 at r = p.\nTheorem 10. Consider the projection game restricted to the non-negative unit circle, where strategies x have the natural one-to-one correspondence to probability distributions r = (r, 1−r) given by x = (r/|r|, (1−r)/|r|). Trade in a log market scoring rule is strategically similar to trade in the projection game on the quarter-circle, in that\nd/dr φ(s(r), p) < 0 for r < p, and d/dr φ(s(r), p) > 0 for r > p,\nboth for the projection game and MSR potentials φ(·).\nProof. (sketch) The derivative of the MSR potential is\nd/dr φ(s(r), p) = −p · d/dr s(r) = −Σ_i p_i s_i′(r).\nFor the log scoring rule s_i(r) = a_i + b log r_i with b > 0,\nd/dr φ_MSR(s(r), p) = −p · (b/r, −b/(1−r)) = −b(p/r − (1−p)/(1−r)) = b(r − p)/(r(1−r)).\nSince r = (r, 1−r) is a probability distribution, this expression is positive for r > p and negative for r < p, as desired.\nNow, consider the projection game on the non-negative unit circle. The potential for any x = (r/|r|, (1−r)/|r|) is given by\nφ(x(r), p) = |x| − q · x(r).\nIt is easy to show that d/dr φ(x(r), p) < 0 for r < p and the derivative is positive for r > p, so the potential function along the circle is decreasing and then increasing with r, similar to an energy function, with a global minimum at r = p, as desired.\nTheorem 10 establishes that the log market scoring rule is strategically similar to the projection game played on a circle, in the sense that the optimal direction of movement at the current state is the same in both games.
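The sign pattern asserted in Theorem 10 can be verified numerically for both potentials. The sketch below uses finite differences; `proj_potential` and `log_msr_potential` are our own names, with the log rule's constants set to a_i = 0 and b = 1 for concreteness:

```python
import math

def proj_potential(r, p):
    """Projection-game potential on the quarter circle: |x| - q . x(r) = 1 - q . x(r)."""
    nq = math.hypot(p, 1 - p)
    nx = math.hypot(r, 1 - r)
    q = (p / nq, (1 - p) / nq)
    x = (r / nx, (1 - r) / nx)
    return 1 - (q[0] * x[0] + q[1] * x[1])

def log_msr_potential(r, p, b=1.0):
    """Log-MSR potential: expected profit lost by reporting r instead of the truth p."""
    def expected_score(rr):
        return p * b * math.log(rr) + (1 - p) * b * math.log(1 - rr)
    return expected_score(p) - expected_score(r)

p, h = 0.6, 1e-6
for phi in (proj_potential, log_msr_potential):
    # Decreasing for r < p, increasing for r > p, with minimum 0 at r = p.
    assert phi(0.3 + h, p) < phi(0.3, p)
    assert phi(0.8 + h, p) > phi(0.8, p)
    assert abs(phi(p, p)) < 1e-12
```

Both potentials behave like an energy function along the arc: strictly decreasing up to r = p and strictly increasing after it, so the profitable direction of an infinitesimal move is identical in the two games.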
For example, if the current state is r < p, it is profitable to move to r′ = r + dr, since the effective profit of that move is profit(r, r′) = φ(s(r), p) − φ(s(r + dr), p) > 0. Although stated for log scoring rules, the theorem holds for any scoring rule that induces a potential with a unique local and global minimum at p, such as the quadratic scoring rule and others.\n5. USING THE PROJECTION-GAME MODEL\nThe chief advantages of the projection game are that it is analytically tractable, and also easy to visualize. In Section 3, we used the projection-game model of the DPM to prove the absence of arbitrage, and to infer strategic properties that might have been difficult to deduce otherwise. In this section, we provide two examples that illustrate the power of projection-game analysis for gaining insight about more complex strategic settings.\n5.1 Traders with inertia\nThe standard analysis of trader behavior in any of the market forms we have studied asserts that traders who disagree with the market probabilities will expect to gain from changing the probability, and thus have a strict incentive to trade in the market. The expected gain may, however, be very small. A plausible model of real trader behavior might include some form of inertia or ε-optimality: we assume that traders will trade only if their expected profit is greater than some constant ε. We do not attempt to justify this model here; rather, we illustrate how the projection game may be used to analyze such situations, and shed some light on how to modify the trading rules to alleviate this problem.\nConsider the simple projection game restricted to a circular arc with unit radius; as we have seen, this corresponds closely to the spherical market scoring rule, and to the dynamic parimutuel market under a reasonable constraint. Now, suppose the market probability is p, and a trader believes the true probability is p′.
Then, his expected gain can be calculated as follows. Let q and q′ be the unit vectors in the directions of p and p′, respectively. The expected profit is given by E = φ(q, p′) = 1 − q · q′. Thus, the trader will trade only if 1 − q · q′ > ε. If we let θ and θ′ be the angles of the p-line and p′-line respectively (from the x-axis), we get E = 1 − cos(θ − θ′); when θ is close to θ′, a Taylor series approximation gives us that E ≈ (θ − θ′)^2/2. Thus, we can derive a bound on the limit of the market accuracy: the market price will not change as long as (θ − θ′)^2 ≤ 2ε.\nNow, suppose a market operator faced with this situation wanted to sharpen the accuracy of the market. One natural approach is simply to multiply all payoffs by a constant. This corresponds to using a larger circle in the projection game, and would indeed improve the accuracy. However, it will also increase the market-maker's exposure to loss: the market-maker would have to pump in more money to achieve this.\nThe projection game model suggests a natural approach to improving the accuracy while retaining the same bounds on the market maker's loss. The idea is that, instead of restricting all moves to being on the unit circle, we force each move to have a slightly larger radius than the previous move. Suppose we insist that, if the current radius is r, the next trader has to move to radius r + 1. Then, the trader's expected profit would be E = r(1 − cos(θ − θ′)). Using the same approximation as above, the trader would trade as long as (θ − θ′)^2 > 2ε/r.
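The inertia analysis above can be made concrete. The sketch below uses our own function names, with illustrative values for ε and the angles; it shows a disagreement too small to trade on at unit radius becoming profitable once the enforced radius has grown:

```python
import math

def expected_gain(radius, theta_market, theta_trader):
    """Expected profit E = r(1 - cos(theta - theta')) of correcting the market."""
    return radius * (1 - math.cos(theta_market - theta_trader))

def will_trade(radius, theta_market, theta_trader, eps):
    """Inertia model: trade only if the expected profit exceeds epsilon."""
    return expected_gain(radius, theta_market, theta_trader) > eps

eps = 1e-4
dtheta = 0.01          # small angular disagreement with the market
# On the unit circle the disagreement is too small to overcome inertia...
assert not will_trade(1.0, 0.5, 0.5 + dtheta, eps)
# ...but once the enforced radius has grown, the same disagreement triggers a trade.
assert will_trade(5.0, 0.5, 0.5 + dtheta, eps)
```

This matches the bound in the text: at radius r the no-trade region shrinks to (θ − θ′)^2 ≤ 2ε/r, so accuracy sharpens as the radius grows with each move.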
Now, even if the market maker seeded the market with r = 1, the radius would increase with each trade, and the incentives to sharpen the estimate increase with every trade.\n5.2 Analyzing long-term strategies\nUp to this point, our analysis has been restricted to trader strategies that are myopic in the sense that traders do not consider the impact of their trades on other traders' beliefs. In practice, an informed trader can potentially profit by playing a suboptimal strategy to mislead other traders, in a way that allows her to profit later. In this section, we illustrate how the projection game can be used to analyze an instance of this phenomenon, and to design market rules that mitigate this effect.\nThe scenario we consider is as follows. There are two traders speculating on the probability of an event E, who each get a 1-bit signal. The optimal probability for each 2-bit signal pair is as follows: if trader 1 gets the signal 0 and trader 2 gets signal 0, the optimal probability is 0.3; if trader 1 gets a 0 but trader 2 gets a 1, the optimal probability is 0.9; if trader 1 gets a 1 and trader 2 gets a 0, the optimal probability is 0.7; and if trader 1 gets a 1 and trader 2 also gets a 1, the optimal probability is 0.1. (Note that the impact of trader 2's signal is in a different direction, depending on trader 1's signal.) Suppose that the prior distribution of the signals is that trader 1 is equally likely to get a 0 or a 1, but trader 2 gets a 0 with probability 0.55 and a 1 with probability 0.45. The traders are playing the projection game restricted to a circular arc. This setup is depicted in Figure 4.\n[Figure 4: Example illustrating non-myopic deception. The figure shows points A, B, C, D, X, Y on the arc, with a table mapping signal pairs to optimal points (00 maps to C, 11 to D), and axes for the event happening or not happening.]\nSuppose that, for some exogenous reason, trader 1 has the opportunity to trade, followed by trader 2. Then, trader 1 has the option of placing a last-minute trade just before the market closes.
If traders were playing their myopically optimal strategies, here is how the market would run: if trader 1 sees a 0, he would move to some point Y that is between A and C, but closer to C. Trader 2 would then infer that trader 1 received a 0 signal, and move to A or C if she got a 1 or a 0, respectively. Trader 1 would have no reason to move again. If trader 1 had got a 1, he would move to a different point X instead, and trader 2 would move to D if she saw a 1 and B if she saw a 0. Again, trader 1 would not want to move again.\nUsing the projection game, it is easy to show that, if traders consider non-myopic strategies, this set of strategies is not an equilibrium. The exact position of the points does not matter; all we need is their relative position, and the observation that, because of the perfect symmetry in the setup, the segments XY, BC, and AD are all parallel to each other. Now, suppose trader 1 got a 0. He could move to X instead of Y, to mislead trader 2 into thinking he got a 1. Then, when trader 2 moved to, say, D, trader 1 could correct the rating to A. To show that this is a profitable deviation, observe that this strategy is equivalent to playing two additional moves over trader 1's myopic strategy of moving to Y. The first move, YX, may move either toward or away from the optimal final position. The second move, DA or BC, is always in the correct direction. Further, because DA and BC are longer than XY, and parallel to XY, their projection on the final p-line will always be greater in absolute value than the projection of XY, regardless of what the true p-line is! Thus, the deception results in a strictly higher expected profit for trader 1. Note that this problem is not specific to the projection game form: our equivalence results show that it could arise in the MSR or DPM (perhaps with a different prior distribution and different numerical values).
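The key geometric step in the deception argument is that, for parallel segments, the magnitude of the projection onto any direction scales with segment length, so the longer chords DA and BC dominate XY regardless of the true p-line. A minimal numeric check (the vectors and names are illustrative, not taken from the paper's figure):

```python
import math

def proj(seg, u):
    """Signed length of the segment vector seg projected on the unit direction u."""
    return seg[0] * u[0] + seg[1] * u[1]

# Two parallel segment vectors; 'da' is twice as long as 'xy'.
xy = (0.2, 0.1)
da = (0.4, 0.2)

# For every direction u, |proj(da, u)| = 2 * |proj(xy, u)|:
# the longer parallel chord always projects farther, whatever the true p-line is.
for k in range(8):
    u = (math.cos(k), math.sin(k))
    assert abs(abs(proj(da, u)) - 2 * abs(proj(xy, u))) < 1e-12
```

This is why the extra moves YX followed by DA (or BC) net a strictly positive expected projection: the possibly wrong move YX is always outweighed by the longer, always-correct second move.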
Observe also that a strategy profile in which neither trader moved in the first two rounds, and trader 1 moved to either X or Y, would be a subgame-perfect equilibrium in this setup.\nWe suggest that one approach to mitigating this problem might be to reduce the radius at every move. This essentially provides a form of discounting that motivates trader 1 to take his profit early rather than mislead trader 2. Graphically, the right reduction factor would make the segments AD and BC shorter than XY (as they are chords of a smaller circle), thus making the myopic strategy optimal.\n6. CONCLUSIONS AND FUTURE WORK\nWe have presented a simple geometric game, the projection game, that can serve as a model for strategic behavior in information markets, as well as a tool to guide the design of new information markets. We have used this model to analyze the cost, profit, and strategies of a trader in a dynamic parimutuel market, and shown that both the dynamic parimutuel market and the spherical market scoring rule are strategically equivalent to the restricted projection game under slight distortion of the prior probabilities.\nThe general analysis was based on the assumption that traders do not actively try to mislead other traders for future profit. In Section 5, however, we analyzed a small example market without this assumption. We demonstrated that the projection game can be used to analyze traders' strategies in this scenario, and potentially to help design markets with better strategic properties.\nOur results raise several very interesting open questions. First, the payoffs of the projection game cannot be directly implemented in situations in which the true probability is not ultimately revealed.
It would be very useful to have an automatic transformation of a given projection game into another game in which the payoffs can be implemented in expectation without knowing the probability, and which preserves the strategic properties of the projection game. Second, given the tight connection between the projection game and the spherical market scoring rule, it is natural to ask whether we can find as strong a connection to other scoring rules or, if not, to understand what strategic differences are implied by the form of the scoring rule used in the market. Finally, the existence of long-range manipulative strategies in information markets is of great interest. The example we studied in Section 5 merely scratches the surface of this area. A general study of this class of manipulations, together with a characterization of markets in which it can or cannot arise, would be very useful for the design of information markets.\n7. REFERENCES\n[1] S. Debnath, D. M. Pennock, S. Lawrence, E. J. Glover, and C. L. Giles. Information incorporation in online in-game sports betting markets. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC'03), pages 258-259, June 2003.\n[2] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.\n[3] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.\n[4] D. Friedman. Effective scoring rules for probabilistic forecasts. Management Science, 29(4):447-454, 1983.\n[5] J. M. Gandar, W. H. Dare, C. R. Brown, and R. A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.\n[6] R. Hanson.
Combinatorial information market design. Information Systems Frontiers, 5(1):107-119, 2003.\n[7] R. Hanson, R. Oprea, and D. Porter. Information aggregation and manipulation in an experimental market. Journal of Economic Behavior and Organization, to appear, 2006.\n[8] B. Mangold, M. Dooley, G. W. Flake, H. Hoffman, T. Kasturi, D. M. Pennock, and R. Dornfest. The tech buzz game. IEEE Computer, 38(7):94-97, July 2005.\n[9] J. A. Muth. Rational expectations and the theory of price movements. Econometrica, 29(6):315-335, 1961.\n[10] D. Pennock. A dynamic parimutuel market for information aggregation. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC'04), June 2004.\n[11] D. Pennock and R. Sami. Computational aspects of prediction markets. In N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory. Cambridge University Press, 2007 (to appear).\n[12] D. M. Pennock, S. Debnath, E. J. Glover, and C. L. Giles. Modeling information incorporation in markets, with application to detecting and explaining events. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 405-413, 2002.\n[13] C. R. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56(5):1085-1118, 1988.\n[14] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Technical Report, Social Science Working Paper 986, California Institute of Technology, Apr. 1997.\n[15] C. Polk, R. Hanson, J. Ledyard, and T. Ishikida. Policy analysis market: An electronic commerce application of a combinatorial information market. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC'03), pages 272-273, June 2003.\n[16] C. Schmidt and A. Werwatz. How accurate do markets predict the outcome of an event?
The Euro 2000 soccer championships experiment. Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.", "keywords": "liquidation time;dpm;msr;projection game model;social and behavioral sciences-economics;strategic analysis;prediction market;dynamic parimutuel market;market scoring rule;information market;long-range manipulative strategy;projection game;spherical scoring rule"}
-{"name": "test_J-22", "title": "Betting on Permutations", "abstract": "We consider a permutation betting scenario, where people wager on the final ordering of n candidates: for example, the outcome of a horse race. We examine the auctioneer problem of risklessly matching up wagers or, equivalently, finding arbitrage opportunities among the proposed wagers. Requiring bidders to explicitly list the orderings that they'd like to bet on is both unnatural and intractable, because the number of orderings is n! and the number of subsets of orderings is 2^(n!). We propose two expressive betting languages that seem natural for bidders, and examine the computational complexity of the auctioneer problem in each case. Subset betting allows traders to bet either that a candidate will end up ranked among some subset of positions in the final ordering, for example, horse A will finish in positions 4, 9, or 13-21, or that a position will be taken by some subset of candidates, for example horse A, B, or D will finish in position 2. For subset betting, we show that the auctioneer problem can be solved in polynomial time if orders are divisible. Pair betting allows traders to bet on whether one candidate will end up ranked higher than another candidate, for example horse A will beat horse B. We prove that the auctioneer problem becomes NP-hard for pair betting. We identify a sufficient condition for the existence of a pair betting match that can be verified in polynomial time. We also show that a natural greedy algorithm gives a poor approximation for indivisible orders.", "fulltext": "1. INTRODUCTION\nBuying or selling a financial security in effect is a wager on the security's value. For example, buying a stock is a bet that the stock's value is greater than its current price. Each trader evaluates his expected profit to decide the quantity to buy or sell according to his own information and subjective probability assessment.
The collective interaction of all bets leads to an equilibrium that reflects an aggregation of all the traders' information and beliefs. In practice, this aggregate market assessment of the security's value is often more accurate than other forecasts relying on experts, polls, or statistical inference [16, 17, 5, 2, 15].\nConsider buying a security at price fifty-two cents that pays $1 if and only if a Democrat wins the 2008 US Presidential election. The transaction is a commitment to accept a fifty-two cent loss if a Democrat does not win, in return for a forty-eight cent profit if a Democrat does win. In this case of an event-contingent security, the price (the market's value of the security) corresponds directly to the estimated probability of the event.\nAlmost all existing financial and betting exchanges pair up bilateral trading partners. For example, one trader willing to accept an x dollar loss if a Democrat does not win in return for a y dollar profit if a Democrat wins is matched up with a second trader willing to accept the opposite. However, in many scenarios, even if no bilateral agreements exist among traders, multilateral agreements may be possible. For example, if one trader bets that the Democratic candidate will receive more votes than the Republican candidate, a second trader bets that the Republican candidate will receive more votes than the Libertarian candidate, and a third trader bets that the Libertarian candidate will receive more votes than the Democratic candidate, then, depending on the odds they each offer, there may be a three-way agreeable match even though no two-way matches exist.\nWe propose an exchange where traders have considerable flexibility to naturally and succinctly express their wagers, and examine the computational complexity of the auctioneer's resulting matching problem of identifying bilateral and multilateral agreements.
In particular, we focus on a setting where traders bet on the outcome of a competition among n candidates. For example, suppose that there are n candidates in an election (or n horses in a race, etc.) and thus n! possible orderings of candidates after the final vote tally. Traders may like to bet on arbitrary properties of the final ordering, for example: candidate D will win; candidate D will finish in either first place or last place; candidate D will defeat candidate R; candidates D and R will both defeat candidate L; etc. The goal of the exchange is to search among all the offers to find two or more that together form an agreeable match. As we shall see, the matching problem can be set up as a linear or integer program, depending on whether orders are divisible or indivisible, respectively. Attempting to reduce the problem to a bilateral matching problem by explicitly creating n! securities, one for each possible final ordering, is both cumbersome for the traders and computationally infeasible even for modest-sized n. Moreover, traders' attention would be spread among n! independent choices, making the likelihood of two traders converging at the same time and place seem remote.\nThere is a tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem. We want to offer traders the most expressive bidding language possible while maintaining computational feasibility. We explore two bidding languages that seem natural from a trader perspective. Subset betting, described in Section 3.2, allows traders to bet on which positions in the ranking a candidate will fall, for example candidate D will finish in position 1, 3-5, or 10. Symmetrically, traders can also bet on which candidates will fall in a particular position. In Section 4, we derive a polynomial-time algorithm for matching (divisible) subset bets.
The key to the result is showing that the exponentially big linear program has a corresponding separation problem that reduces to maximum weighted bipartite matching, and consequently we can solve it in time polynomial in the number of orders.\nPair betting, described in Section 3.3, allows traders to bet on the final ranking of any two candidates, for example candidate D will defeat candidate R. In Section 5, we show that optimal matching of (divisible or indivisible) pair bets is NP-hard, via a reduction from the unweighted minimum feedback arc set problem. We also provide a polynomially verifiable sufficient condition for the existence of a pair-betting match, and show that a greedy algorithm offers a poor approximation for indivisible pair bets.\n2. BACKGROUND AND RELATED WORK\nWe consider permutation betting, or betting on the outcome of a competition among n candidates. The final outcome or state s ∈ S is an ordinal ranking of the n candidates. For example, the candidates could be horses in a race and the outcome the list of horses in increasing order of their finishing times. The state space S contains all n! mutually exclusive and exhaustive permutations of candidates.\nIn a typical horse race, people bet on properties of the outcome, like horse A will win, horse A will show (finish in either first or second place), or horses A and B will finish in first and second place, respectively. In practice at the racetrack, each of these different types of bets is processed in a separate pool or group. In other words, all the win bets are processed together, and all the show bets are processed together, but the two types of bets do not mix. This separation can hurt liquidity and information aggregation.
For example, even though horse A is heavily favored to win, that may not directly boost the horse's odds to show.\nInstead, we describe a central exchange where all bets on the outcome are processed together, thus aggregating liquidity and ensuring that informational inference happens automatically.\nIdeally, we'd like to allow traders to bet on any property of the final ordering they like, stated in exactly the language they prefer. In practice, allowing too flexible a language creates a computational burden for the auctioneer attempting to match willing traders. We explore the tradeoff between the expressiveness of the bidding language and the computational complexity of the matching problem.\nWe consider a framework where people propose to buy securities that pay $1 if and only if some property of the final ordering is true. Traders state the price they are willing to pay per share and the number of shares they would like to purchase. (Sell orders may not be explicitly needed, since buying the negation of an event is equivalent to selling the event.) A divisible order permits the trader to receive fewer shares than requested, as long as the price constraint is met; an indivisible order is an all-or-nothing order. The description of bets in terms of prices and shares is without loss of generality: we can also allow bets to be described in terms of odds, payoff vectors, or any of the diverse array of approaches practiced in financial and gambling circles.\nIn principle, we can do everything we want by explicitly offering n! securities, one for every state s ∈ S (or in fact any set of n! linearly independent securities). This is the so-called complete Arrow-Debreu securities market [1] for our setting. In practice, traders do not want to deal with low-level specification of complete orderings: people think more naturally in terms of high-level properties of orderings. Moreover, operating n!
securities is infeasible in practice from a computational point of view as n grows.\nA very simple bidding language might allow traders to bet only on who wins the competition, as is done in the win pool at racetracks. The corresponding matching problem is polynomial; however, the language is not very expressive. A trader who believes that A will defeat B, but that neither will win outright, cannot usefully impart his information to the market. The price space of the market reveals the collective estimates of win probabilities but nothing else. Our goal is to find languages that are as expressive and intuitive as possible and reveal as much information as possible, while maintaining computational feasibility.\nOur work is in direct analogy to work by Fortnow et al. [6]. Whereas we explore permutation combinatorics, Fortnow et al. explore Boolean combinatorics. The authors consider a state space of the 2^n possible outcomes of n binary variables. Traders express bets in Boolean logic. The authors show that divisible matching is co-NP-complete and indivisible matching is Σ_2^p-complete.\nHanson [9] describes a market scoring rule mechanism which can allow betting on a combinatorial number of outcomes. The market starts with a joint probability distribution across all outcomes. It works like a sequential version of a scoring rule. Any trader can change the probability distribution as long as he agrees to pay the most recent trader according to the scoring rule. The market maker pays the last trader. Hence, he bears risk and may incur a loss. Market scoring rule mechanisms have a nice property that the worst-case loss of the market maker is bounded. However, the computational aspects of how to operate the mechanism have not been fully explored.
Our mechanisms have an auctioneer who does not bear any risk and only matches orders.

Research on bidding languages and winner determination in combinatorial auctions [4, 14, 18] considers similar computational challenges in finding an allocation of items to bidders that maximizes the auctioneer's revenue. Combinatorial auctions allow bidders to place distinct values on bundles of goods rather than just on individual goods. Uncertainty and risk are typically not considered, and the central auctioneer problem is to maximize social welfare. Our mechanisms allow traders to construct bets for an event with n! outcomes. Uncertainty and risk are considered, and the auctioneer problem is to exploit arbitrage opportunities and risklessly match up wagers.

3. PERMUTATION BETTING

In this section, we define the matching and optimal matching problems that an auctioneer needs to solve in a general permutation betting market. We then illustrate the problem definitions in the context of the subset-betting and pair-betting markets.

3.1 Securities, Orders and Matching Problems

Consider an event with n competing candidates where the outcome (state) is a ranking of the n candidates. The bidding language of a market offering securities in the future outcomes determines the type and number of securities available and directly affects what information can be aggregated about the outcome. A fully expressive bidding language can capture any possible information that traders may have about the final ranking; a less expressive language limits the type of information that can be aggregated, though it may enable a more efficient solution to the matching problem. For any bidding language and number of securities in a permutation betting market, we can succinctly represent the auctioneer's problem of risklessly matching offers as follows.

Consider an index set of bets or orders O which traders submit to the auctioneer.
Each order i ∈ O is a triple (bi, qi, φi), where bi denotes how much the trader is willing to pay for a unit share of security φi and qi is the number of shares of the security he wants to purchase at price bi. Naturally, bi ∈ (0, 1), since a unit of the security pays off at most $1 when the event is realized. Since order i is defined for a single security φi, we will omit the security variable whenever it is clear from the context.

The auctioneer can accept or reject each order, or in a divisible world accept a fraction of the order. Let xi be the fraction of order i ∈ O accepted. In the indivisible version of the market xi = 0 or 1, while in the divisible version xi ∈ [0, 1]. Further let Ii(s) be the indicator variable for whether order i is winning in state s, that is, Ii(s) = 1 if the order is paid back $1 in state s and Ii(s) = 0 otherwise.

There are two possible problems that the auctioneer may want to solve. The simpler one is to find a subset of orders that can be matched risk-free, namely a subset of orders which, accepted together, give a nonnegative profit to the auctioneer in every possible outcome. We call this problem the existence of a match or sometimes simply the matching problem. The more complex problem is for the auctioneer to find the optimal match with respect to some criterion such as profit, trading volume, etc.

Definition 1 (Existence of match, indivisible orders). Given a set of orders O, does there exist a set of xi ∈ {0, 1}, i ∈ O, with at least one xi = 1, such that

Σ_i (bi − Ii(s)) qi xi ≥ 0, ∀s ∈ S?  (1)

Similarly we can define the existence of a match with divisible orders.

Definition 2 (Existence of match, divisible orders). Given a set of orders O, does there exist a set of xi ∈ [0, 1], i ∈ O, with at least one xi > 0, such that

Σ_i (bi − Ii(s)) qi xi ≥ 0, ∀s ∈ S?  (2)

The existence of a match is a decision problem.
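For tiny instances, Definition 1 can be checked directly by brute force over all n! states and all acceptance vectors. The sketch below is our own reference implementation (names and the example orders are ours), exponential and usable only as a correctness check:

```python
from itertools import permutations, product

def match_exists_indivisible(orders, candidates):
    """Brute-force check of Definition 1 for tiny instances.

    orders: list of (b_i, q_i, phi_i), where phi_i(state) plays the role
    of the indicator I_i(s) for the security's payoff in a ranking.
    """
    states = list(permutations(candidates))        # S: all n! rankings
    for accept in product([0, 1], repeat=len(orders)):
        if not any(accept):
            continue                               # need at least one x_i = 1
        worst = min(sum((b - phi(s)) * q * x
                        for (b, q, phi), x in zip(orders, accept))
                    for s in states)
        if worst >= 0:                             # riskless in every state
            return True
    return False

# "alpha finishes first" and its negation, both priced at $0.6: accepting
# both pays out $1 exactly once but collects $1.2, so a match exists.
orders = [(0.6, 1, lambda s: s[0] == 'a'),
          (0.6, 1, lambda s: s[0] != 'a')]
print(match_exists_indivisible(orders, 'abc'))     # True
```

Either order alone has negative worst-case profit (−0.4 when it wins), so the match only exists through the combination.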
It only returns whether trade can occur at no risk to the auctioneer. In addition to the risk-free requirement, the auctioneer can optimize some criterion in determining the orders to accept. Some reasonable objectives include maximizing the total trading volume in the market or the worst-case profit of the auctioneer. The following optimal matching problems are defined for an auctioneer who maximizes his worst-case profit.

Definition 3 (Optimal match, indivisible orders). Given a set of orders O, choose xi ∈ {0, 1} such that the following mixed integer programming problem achieves its optimum:

max_{xi, c}  c  (3)
s.t.  Σ_i (bi − Ii(s)) qi xi ≥ c, ∀s ∈ S
      xi ∈ {0, 1}, ∀i ∈ O.

Definition 4 (Optimal match, divisible orders). Given a set of orders O, choose xi ∈ [0, 1] such that the following linear programming problem achieves its optimum:

max_{xi, c}  c  (4)
s.t.  Σ_i (bi − Ii(s)) qi xi ≥ c, ∀s ∈ S
      0 ≤ xi ≤ 1, ∀i ∈ O.

The variable c is the worst-case profit for the auctioneer. Note that, strictly speaking, the optimal matching problems do not require solving the optimization problems (3) and (4), because only the optimal set of orders is needed; the optimal worst-case profit may remain unknown.

3.2 Subset Betting

A subset betting market allows two different types of bets. Traders can bet on a subset of positions a candidate may end up at, or they can bet on a subset of candidates that will occupy a particular position. A security α|Φ, where Φ is a subset of positions, pays off $1 if candidate α stands at a position that is an element of Φ and pays $0 otherwise. For example, security α|{2, 4} pays $1 when candidate α is ranked second or fourth. Similarly, a security Ψ|j, where Ψ is a subset of candidates, pays off $1 if any of the candidates in the set Ψ ranks at position j.
For instance, security {α, γ}|2 pays off $1 when either candidate α or candidate γ is ranked second.

The auctioneer in a subset betting market faces a nontrivial matching problem: to determine which orders to accept among all submitted orders i ∈ O. Note that although there are only n candidates and n possible positions, the number of available securities to bet on is exponential, since a trader may bet on any of the 2^n subsets of candidates or positions. With this, it is not immediately clear whether one can even find a trading partner or a match for trade to occur, or whether the auctioneer can solve the matching problem in polynomial time. In the next section, we will show that, somewhat surprisingly, there is an elegant polynomial solution to both the matching and optimal matching problems, based on classic combinatorial problems.

When an order is accepted, the corresponding trader pays the submitted order price bi to the auctioneer, and the auctioneer pays the winning orders $1 per share after the outcome is revealed. The auctioneer has to carefully choose which orders, and what fractions of them, to accept so as to be guaranteed a nonnegative profit in any future state. The following example illustrates the matching problem for indivisible orders in the subset-betting market.

Example 1. Suppose n = 3. Objects α, β, and γ compete for positions 1, 2, and 3 in a competition. The auctioneer receives the following 4 orders: (1) buy 1 share α|{1} at price $0.6; (2) buy 1 share β|{1, 2} at price $0.7; (3) buy 1 share γ|{1, 3} at price $0.8; and (4) buy 1 share β|{3} at price $0.7. There are 6 possible states of ordering: αβγ, αγβ, βαγ, βγα, γαβ, and γβα.
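The state-dependent profit vectors of this example can be reproduced mechanically by enumerating the six states; a short sketch (variable names are ours):

```python
from itertools import permutations

# Orders of Example 1: (price, candidate, payoff positions) for alpha|Phi.
orders = [(0.6, 'a', {1}),      # (1) alpha|{1}
          (0.7, 'b', {1, 2}),   # (2) beta|{1,2}
          (0.8, 'c', {1, 3}),   # (3) gamma|{1,3}
          (0.7, 'b', {3})]      # (4) beta|{3}

states = list(permutations('abc'))  # abc, acb, bac, bca, cab, cba

def profit_vector(b, cand, pos):
    # auctioneer's profit per state: collects b, pays $1 if the order wins
    return [round(b - (s.index(cand) + 1 in pos), 1) for s in states]

vecs = [profit_vector(*o) for o in orders]
# Accepting orders (2) and (4) is riskless: profit 0.4 in every state.
print([round(x + y, 1) for x, y in zip(vecs[1], vecs[3])])
# [0.4, 0.4, 0.4, 0.4, 0.4, 0.4]
```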
The corresponding state-dependent profit of the auctioneer for each order can be calculated as the following vectors, whose six columns correspond to the six future states:

c1 = (−0.4, −0.4, 0.6, 0.6, 0.6, 0.6)
c2 = (−0.3, 0.7, −0.3, −0.3, 0.7, −0.3)
c3 = (−0.2, 0.8, −0.2, 0.8, −0.2, −0.2)
c4 = ( 0.7, −0.3, 0.7, 0.7, −0.3, 0.7).

For indivisible orders, the auctioneer can either accept orders (2) and (4) and obtain profit vector

c = (0.4, 0.4, 0.4, 0.4, 0.4, 0.4),

or accept orders (2), (3), and (4) and obtain profit vector

c = (0.2, 1.2, 0.2, 1.2, 0.2, 0.2).

3.3 Pair Betting

A pair betting market allows traders to bet on whether one candidate will rank higher than another candidate, in an outcome which is a permutation of n candidates. A security α > β pays off $1 if candidate α is ranked higher than candidate β and $0 otherwise. There are a total of n(n − 1) different securities offered in the market, each corresponding to an ordered pair of candidates.

Traders place orders of the form "buy qi shares of α > β at price per share no greater than bi". In general, bi should be between 0 and 1.
Again, the order can be either indivisible or divisible, and the auctioneer needs to decide what fraction xi of each order to accept so as not to incur any loss, with xi ∈ {0, 1} for indivisible and xi ∈ [0, 1] for divisible orders. The same definitions for existence of a match and optimal match from Section 3.1 apply.

Figure 1: Every cycle has negative worst-case profit of −0.02 (for the cycles of length 4) or less (for the cycles of length 6); however, accepting all edges in full gives a positive worst-case profit of 0.44. [The figure shows a directed graph on candidates A–F with nine edges, six priced .99 and three priced .5.]

The orders in the pair-betting market have a natural interpretation as a graph, where the candidates are nodes in the graph and each order which ranks a pair of candidates α > β is represented by a directed edge e = (α, β) with price be and weight qe. With this interpretation, it is tempting to assume that a necessary condition for a match is to have a cycle in the graph with a nonnegative worst-case profit. Assuming qe = 1 for all e, this is a cycle C with a total of |C| edges such that the worst-case profit for the auctioneer is

Σ_{e∈C} be − (|C| − 1) ≥ 0,

since in the worst-case state the auctioneer needs to pay $1 to every order in the cycle except one. However, the example in Figure 1 shows that this is not the case: we may have a set of orders in which every single cycle has a negative worst-case profit, and yet there is a positive worst-case match overall. The edge labels in the figure are the prices be; both the optimal divisible and indivisible solutions in this case accept all orders in full, xe = 1.

4. COMPLEXITY OF SUBSET BETTING

The matching problems of the auctioneer in any permutation market, including the subset betting market, have n! constraints. Brute-force methods would take exponential time to solve.
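For a tiny market, though, brute force is still feasible, and it can be used to double-check the Figure 1 claim from the previous section. The figure does not fully pin down which edge carries which price, so the placement below is our own assumption, chosen to be consistent with the stated cycle profits:

```python
from itertools import permutations

# Assumed Figure 1 prices: the three .5 edges are those shared between two
# 4-cycles (A>B, C>D, E>F); the remaining six edges are priced .99.
edges = {('A','B'): .5, ('C','D'): .5, ('E','F'): .5,
         ('B','C'): .99, ('D','A'): .99, ('B','E'): .99,
         ('F','A'): .99, ('D','E'): .99, ('F','C'): .99}

def worst_case_profit(accepted):
    # min over all 6! states of sum_e (b_e - I_e(s)), unit shares, x_e = 1
    return min(sum(b - (s.index(u) < s.index(v))
                   for (u, v), b in accepted.items())
               for s in permutations('ABCDEF'))

cycle_ABCD = {e: edges[e] for e in [('A','B'), ('B','C'), ('C','D'), ('D','A')]}
print(round(worst_case_profit(cycle_ABCD), 2))  # -0.02: this cycle alone loses
print(round(worst_case_profit(edges), 2))       #  0.44: accepting all nine wins
```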
However, given the special form of the securities in the subset betting market, we can show that the matching problems for divisible orders can be solved in polynomial time.

Theorem 1. The existence of a match and the optimal match problems with divisible orders in a subset betting market can both be solved in polynomial time.

Proof. Consider the linear programming problem (4) for finding an optimal match. This linear program has |O| + 1 variables, one variable xi for each order i and the profit variable c. It also has exponentially many constraints. However, we can solve the program in time polynomial in the number of orders |O| by using the ellipsoid algorithm, as long as we can solve its corresponding separation problem in polynomial time [7, 8]. The separation problem for a linear program takes as input a vector of variable values and returns whether the vector is feasible, or otherwise returns a violated constraint.

For given values of the variables, a violated constraint in Eq. (4) asks whether there is a state or permutation s in which the profit is less than c; this can be rewritten as

Σ_i Ii(s) qi xi > Σ_i bi qi xi − c for some s ∈ S.  (5)

Thus it suffices to show how to efficiently find a state s satisfying the above inequality (5), or to verify that the opposite inequality holds for all states s.

We will show that the separation problem can be reduced to the maximum weighted bipartite matching problem [3]. The left-hand side in Eq. (5) is the total money that the auctioneer needs to pay back to the winning traders in state s. The first term on the right-hand side is the total money collected by the auctioneer, and it is fixed for a given solution vector of xi's and c. A weighted bipartite graph can be constructed between the set of candidates and the set of positions. For every order of the form α|Φ there are edges from candidate node α to every position node in Φ.
For orders of the form Ψ|j there are edges from each candidate in Ψ to position j. For each order i we put weight qi xi on each of these edges. All multi-edges with the same end points are then replaced with a single edge that carries the total weight of the multi-edge. Every state s then corresponds to a perfect matching in the bipartite graph. In addition, the auctioneer pays out to the winners the sum of all edge weights in the perfect matching, since every candidate can only stand in one position and every position is taken by one candidate. Thus, the auctioneer's worst-case state and payment are the solution to the maximum weighted bipartite matching problem, which has known polynomial-time algorithms [12, 13]. Hence, the separation problem can be solved in polynomial time.

Naturally, if the optimal solution to (4) gives a worst-case profit of c∗ > 0, there exists a matching. Thus, the matching problem can be solved in polynomial time as well.

5. COMPLEXITY OF PAIR BETTING

In this section we show that a slight change of the bidding language may bring about a dramatic change in the complexity of the auctioneer's optimal matching problem. In particular, we show that finding the optimal match in the pair betting market is NP-hard for both divisible and indivisible orders. We then identify a polynomially verifiable sufficient condition for deciding the existence of a match.

The hardness results are surprising, especially in light of the observation that a pair betting market has a seemingly more restrictive bidding language, which only offers n(n − 1) securities. (The notion of perfect matching in a bipartite graph, which we use only in the proof of Theorem 1, should not be confused with the notion of matching bets which we use throughout the paper.) In contrast, the subset betting market enables traders to bet on an exponential number of securities and yet has a polynomial-time solution for finding the optimal match.
Our hope is that comparing the complexities of the subset and pair betting markets will offer insight into what makes a bidding language expressive while at the same time enabling an efficient matching solution.

In all the analysis that follows, we assume that traders submit unit orders in pair betting markets, that is, qi = 1. A set of orders O received by the auctioneer in a pair betting market with unit orders can be represented by a directed graph, G(V, E), where the vertex set V contains candidates that traders bet on. An edge e ∈ E, denoted (α, β, be), represents an order to buy 1 share of the security α > β at price be. All edges have equal weight of 1.

We adopt the following notations throughout the paper:

• G(V, E): original equally weighted directed graph for the set of unit orders O.
• be: price of the order for edge e.
• G∗(V∗, E∗): a weighted directed graph of accepted orders for optimal matching, where edge weight xe is the quantity of order e accepted by the auctioneer. xe = 1 for indivisible orders and 0 < xe ≤ 1 for divisible orders.
• H(V, E): a generic weighted directed graph of accepted orders.
• k(H): solution to the unweighted minimum feedback arc set problem on graph H. k(H) is the minimum number of edges to remove so that H becomes acyclic.
• l(H): solution to the weighted minimum feedback arc set problem on graph H. l(H) is the minimum total weight of a set of edges which, when removed, leaves H acyclic.
• c(H): worst-case profit of the auctioneer if he accepts all orders in graph H.
• ε: a sufficiently small positive real number. Where not stated otherwise, ε < 1/(2|E|) for a graph H(V, E); in other cases, the value is determined in context.

A feedback arc set of a directed graph is a set of edges which, when removed from the graph, leaves a directed acyclic graph (DAG).
The unweighted minimum feedback arc set problem is to find a feedback arc set with the minimum cardinality, while the weighted minimum feedback arc set problem seeks a feedback arc set with the minimum total edge weight. Both the unweighted and weighted minimum feedback arc set problems have been shown to be NP-complete [10]. We will use this result in our complexity analysis of pair betting markets.

5.1 Optimal Indivisible Matching

The auctioneer's optimal indivisible matching problem is introduced in Definition 3 of Section 3. Assuming unit orders and considering the order graph G(V, E), we restate the auctioneer's optimal matching problem in a pair betting market as picking a subset of edges to accept such that the worst-case profit is maximized in the following optimization problem:

max_{xe, c}  c  (6)
s.t.  Σ_e (be − Ie(s)) xe ≥ c, ∀s ∈ S
      xe ∈ {0, 1}, ∀e ∈ E.

Without loss of generality, we assume that there are no multi-edges in the order graph G.

We show that the optimal matching problem for indivisible orders is NP-hard via a reduction from the unweighted minimum feedback arc set problem. The latter takes as input a directed graph, and asks what is the minimum number of edges to delete from the graph so as to be left with a DAG. Our hardness proof is based on the following lemmas.

Lemma 2. Suppose the auctioneer accepts all edges in an equally weighted directed graph H(V, E) with edge price be = (1 − ε) and edge weight xe = 1. Then the worst-case profit is equal to k(H) − ε|E|, where k(H) is the solution to the unweighted minimum feedback arc set problem on H.

Proof. If the order of an edge gets $1 payoff at the end of the market, we call the edge a winning edge; otherwise it is called a losing edge. For any state s, all winning edges necessarily form a DAG.
Conversely, for every DAG there is a state in which the DAG edges are winners (though the remaining edges in G are not necessarily losers).

Suppose that in state s there are ws winning edges and ls = |E| − ws losing edges. Then ls is the cardinality of a feedback arc set that consists of all losing edges in state s. The edges that remain after deleting the minimum feedback arc set form the maximum DAG for the graph H. Consider the state smax in which all edges of the maximum DAG are winners. This gives the maximum number of winning edges wmax. All other edges are necessarily losers in the state smax, since any edge which is not in the max DAG must form a cycle together with some of the DAG edges. The number of losing edges in state smax is the cardinality of the minimum feedback arc set of H, that is, |E| − wmax = k(H).

The profit of the auctioneer in a state s is

profit(s) = Σ_{e∈E} be − ws = (1 − ε)|E| − ws ≥ (1 − ε)|E| − wmax,

where equality holds when s = smax. Thus, the worst-case profit is achieved at state smax:

profit(smax) = (|E| − wmax) − ε|E| = k(H) − ε|E|.

Consider the graph of accepted orders for optimal matching, G∗(V∗, E∗), which consists of the optimal subset of edges E∗ to be accepted by the auctioneer, that is, edges with xe = 1 in the solution of the optimization problem (6). We have the following lemma.

Lemma 3. If the edge prices are be = (1 − ε), then the optimal indivisible solution graph G∗ has the same unweighted minimum feedback arc set size as the graph of all orders G, that is, k(G∗) = k(G). Furthermore, G∗ is the smallest such subgraph of G, i.e., it is the subgraph of G with the smallest number of edges that has the same size of unweighted minimum feedback arc set as G.

Proof.
G\u2217\nis a subgraph of G, hence the minimum\nnumber of edges to break cycles in G\u2217\nis no more than that in\nG, namely k(G\u2217\n) \u2264 k(G).\nSuppose k(G\u2217\n) < k(G). Since both k(G\u2217\n) and k(G) are\nintegers, for any < 1\n|E|\nwe have that k(G\u2217\n) \u2212 |E\u2217\n| <\nk(G)\u2212 |E|. Hence by Lemma 2, the auctioneer has a higher\nworst-case profit by accepting G than accepting G\u2217\n, which\ncontradicts the optimality of G\u2217\n. Finally, the worst-case\nprofit k(G) \u2212 |E\u2217\n| is maximized when |E\u2217\n| is minimized.\nHence, G\u2217\nis the smallest subgraph of G such that k(G\u2217\n) =\nk(G).\nThe above two lemmas prove that the maximum\nworstcase profit in the optimal indivisible matching is directly\nrelated to the size of the minimum feedback arc set. Thus\ncomputing each automatically gives the other, hence\ncomputing the maximum worst-case profit in the indivisible pair\nbetting problem is NP-hard.\nTheorem 4. Computing the maximum worst-case profit\nin indivisible pair betting is NP-hard.\nProof. By Lemma 3, the maximum worst-case profit\nwhich is the optimum to the mixed integer programming\nproblem (6), is k(G) \u2212 |E\u2217\n|, where |E\u2217\n| is the number of\naccepted edges. Since k(G) is integer and |E\u2217\n| \u2264 |E| < 1,\nsolving (6) will automatically give us the cardinality of the\nminimum feedback arc set of G, k(G). Because the minimum\nfeedback arc set problem is NP-complete [10], computing the\nmaximum worst-case profit is NP-hard.\nTheorem 4 states that solving the optimization problem\nis hard, because even if the optimal set of orders are\nprovided computing the optimal worst-case profit from\naccepting those orders is NP-hard. However, it does not imply\nwhether the optimal matching problem, i.e. finding the\noptimal set of orders to accept, is NP-hard. 
It is possible to be able to determine which edges of a graph participate in the optimal match, yet be unable to compute the corresponding worst-case profit. Next, we prove that the indivisible optimal matching problem is actually NP-hard. We will use the following short fact repeatedly.

Lemma 5 (Edge removal lemma). Given a weighted graph H(V, E), removing a single edge e with weight xe from the graph decreases the weighted minimum feedback arc set solution l(H) by no more than xe, and reduces the unweighted minimum feedback arc set solution k(H) by no more than 1.

Proof. Suppose the weighted minimum feedback arc set for the graph H − {e} is F. Then F ∪ {e} is a feedback arc set for H, and has total edge weight l(H − {e}) + xe. Because l(H) is the solution to the weighted minimum feedback arc set problem on H, we have l(H) ≤ l(H − {e}) + xe, implying that l(H − {e}) ≥ l(H) − xe.

Similarly, suppose the unweighted minimum feedback arc set for the graph H − {e} is F′. Then F′ ∪ {e} is a feedback arc set for H, and has cardinality k(H − {e}) + 1. Because k(H) is the solution to the unweighted minimum feedback arc set problem on H, we have k(H) ≤ k(H − {e}) + 1, giving that k(H − {e}) ≥ k(H) − 1.

Theorem 6. Finding the optimal match in indivisible pair betting is NP-hard.

Proof. We reduce from the unweighted minimum feedback arc set problem again, although with a slightly more complex polynomial transformation involving multiple calls to the optimal match oracle. Consider an instance graph G of the minimum feedback arc set problem. We are interested in computing k(G), the size of the minimum feedback arc set of G.

Suppose we have an oracle which solves the optimal matching problem. Denote by optimal_match(G′) the output of the optimal matching oracle on graph G′ with prices be = (1 − ε) on all its edges.
By Lemma 3, on input G′, the oracle optimal_match returns the subgraph of G′ with the smallest number of edges that has the same size of minimum feedback arc set as G′.

The following procedure finds k(G) by using polynomially many calls to the optimal_match oracle on a sequence of subgraphs of G.

set G′ := G
iterations := 0
while (G′ has a nonempty edge set)
    reset G′ := optimal_match(G′)
    if (G′ has a nonempty edge set)
        increment iterations by 1
        reset G′ by removing any edge e
    end if
end while
return (iterations)

This procedure removes edges from the original graph G layer by layer until the graph is empty, while at the same time computing the minimum feedback arc set size k(G) of the original graph as the number of iterations. In each iteration, we start with a graph G′ and replace it with the smallest subgraph G″ such that k(G″) = k(G′). At this stage, removing an additional edge e necessarily results in k(G″ − {e}) = k(G″) − 1, because k(G″ − {e}) < k(G″) by the optimality of G″, and k(G″ − {e}) ≥ k(G″) − 1 by the edge removal lemma. Therefore, in each iteration the cardinality of the minimum feedback arc set is reduced by exactly 1. Hence the number of iterations is equal to k(G).

Note that this procedure gives a polynomial transformation from the unweighted minimum feedback arc set problem to the optimal matching problem, which calls the optimal matching oracle exactly k(G) ≤ |E| times, where |E| is the number of edges of G.
Hence the optimal matching problem is NP-hard.

5.2 Optimal Divisible Matching

When orders are divisible, the auctioneer's optimal matching problem is described in Definition 4 of Section 3. Assuming unit orders and considering the order graph G(V, E), we restate the auctioneer's optimal matching problem for divisible orders as choosing the quantity of each order to accept, xe ∈ [0, 1], such that the worst-case profit is maximized in the following linear programming problem:

max_{xe, c}  c  (7)
s.t.  Σ_e (be − Ie(s)) xe ≥ c, ∀s ∈ S
      xe ∈ [0, 1], ∀e ∈ E.

We still assume that there are no multi-edges in the order graph G.

When orders are divisible, the auctioneer can be better off by accepting partial orders. Example 2 shows a situation where accepting partial orders generates higher worst-case profit than the optimal indivisible solution.

Example 2. We show that the linear program (7) sometimes has a non-integer optimal solution.

Figure 2: An order graph. Letters on the edges represent order prices. [The figure shows a directed graph on nodes A–F with nine edges, each priced b.]

Consider the graph in Figure 2. There are a total of five cycles in the graph: three four-edge cycles ABCD, ABEF, CDEF, and two six-edge cycles ABCDEF and ABEFCD. Suppose each edge has price b such that 4b − 3 > 0 and 6b − 5 < 0, namely b ∈ (.75, .80), for example b = .78. With this, the optimal indivisible solution consists of at most one four-edge cycle, with worst-case profit (4b − 3). On the other hand, taking a 1/2 fraction of each of the three four-edge cycles yields a higher worst-case profit of (3/2)(4b − 3).

Despite the potential profit increase from accepting divisible orders, the auctioneer's optimal matching problem remains NP-hard for divisible orders, as presented below via several lemmas and theorems.

Lemma 7.
Suppose the auctioneer accepts orders described by a weighted directed graph H(V, E), with edge weight xe being the quantity accepted for edge order e. The worst-case profit for the auctioneer is

c(H) = Σ_{e∈E} (be − 1) xe + l(H).  (8)

Proof. For any state s, the winning edges form a DAG. Thus, the worst-case profit for the auctioneer is achieved at the state(s) where the total quantity of losing orders is minimized. The minimum total quantity of losing orders is the solution to the weighted minimum feedback arc set problem on H, that is, l(H).

Consider the graph of accepted orders for optimal divisible matching, G∗(V∗, E∗), which consists of the optimal subset of edges E∗ to be accepted by the auctioneer, with edge weights xe > 0 obtained from the optimal solution of the linear program (7). We have the following lemmas.

Lemma 8. l(G∗) ≤ k(G∗) ≤ k(G).

Proof. l(G∗) is the solution of the weighted minimum feedback arc set problem on G∗, while k(G∗) is the solution of the unweighted minimum feedback arc set problem on G∗. When all edge weights in G∗ are 1, l(G∗) = k(G∗). When the xe's are less than 1, l(G∗) can be less than or equal to k(G∗). Since G∗ is a subgraph of G, but possibly with different edge weights, k(G∗) ≤ k(G). Hence, we have the above relation.

Lemma 9. There exists some ε such that when all edge prices be are (1 − ε), l(G∗) = k(G).

Proof. From Lemma 8, l(G∗) ≤ k(G). We know that the auctioneer's worst-case profit when accepting G∗ is

c(G∗) = Σ_{e∈E∗} (be − 1) xe + l(G∗) = l(G∗) − ε Σ_{e∈E∗} xe.

When he accepts the original order graph G in full, his worst-case profit is

c(G) = Σ_{e∈E} (be − 1) + k(G) = k(G) − ε|E|.

Suppose l(G∗) < k(G). If |E| − Σ_{e∈E∗} xe = 0, it means that G∗ is G.
Hence, l(G∗) = k(G) regardless of ε, which contradicts the assumption l(G∗) < k(G). If |E| − Σ_{e∈E∗} xe > 0, then when

ε < (k(G) − l(G∗)) / (|E| − Σ_{e∈E∗} xe),

c(G) is strictly greater than c(G∗), contradicting the optimality of c(G∗). Because the xe's are at most 1, l(G∗) > k(G) is impossible. Thus, l(G∗) = k(G).

Theorem 10. Finding the optimal worst-case profit in divisible pair betting is NP-hard.

Proof. Given the optimal set of partial orders to accept for G when edge prices are (1 − ε), if we can calculate the optimal worst-case profit, then by Lemma 9 we can solve the unweighted minimum feedback arc set problem on G, which is NP-hard. Hence, finding the optimal worst-case profit is NP-hard.

Theorem 10 states that solving the linear program (7) is NP-hard. As in the indivisible case, we still need to prove that just finding the optimal divisible match is hard, as opposed to computing the optimal worst-case profit. Since in the divisible case the edges do not necessarily have unit weights, the proof of Theorem 6 does not apply directly. However, with an additional property of the divisible case, we can augment the procedure from the indivisible hardness proof to compute the unweighted minimum feedback arc set size k(G) here as well.

First, note that the optimal divisible subgraph G∗ of a graph G is the weighted subgraph with minimum weighted feedback arc set size l(G∗) = k(G) and smallest sum of edge weights Σ_{e∈E∗} xe, since its corresponding worst-case profit is k(G) − ε Σ_{e∈E∗} xe according to Lemmas 7 and 9.

Lemma 11. Suppose graph H satisfies l(H) = k(H) and we remove from it an edge e with weight xe < 1. Then k(H − {e}) = k(H).

Proof. Assume the contrary, namely k(H − {e}) < k(H). Then by Lemma 5, k(H − {e}) = k(H) − 1.
Since removing a single edge cannot reduce the minimum feedback arc set by more than the edge weight,

l(H) − xe ≤ l(H − {e}).  (9)

On the other hand, H − {e} ⊂ H, so we have

l(H − {e}) ≤ k(H − {e}) = k(H) − 1 = l(H) − 1.  (10)

Combining (9) and (10), we get xe ≥ 1, a contradiction. Therefore, removing any edge with less than unit weight from an optimal divisible graph does not change k(H), the minimum feedback arc set size of the unweighted version of the graph.

We can now augment the procedure for the indivisible case in Theorem 6 to prove hardness of the divisible version, as follows.

Theorem 12. Finding the optimal match in divisible pair betting is NP-hard.

Proof. We reduce from the unweighted minimum feedback arc set problem for graph G. Suppose we have an oracle for the optimal divisible problem called optimal_divisible_match, which on input graph H computes edge weights xe ∈ (0, 1] for the optimal subgraph H∗ of H, satisfying l(H∗) = k(H). The following procedure outputs k(G).

set G′ := G
iterations := 0
while (G′ has a nonempty edge set)
    reset G′ := optimal_divisible_match(G′)
    while (G′ has edges with weight < 1)
        remove an edge with weight < 1 from G′
        reset G′ by setting all edge weights to 1
        reset G′ := optimal_divisible_match(G′)
    end while
    if (G′ has a nonempty edge set)
        increment iterations by 1
        reset G′ by removing any edge e
    end if
end while
return (iterations)

As in the proof of the corresponding Theorem 6 for the indivisible case, we compute k(G) by iteratively removing edges and recomputing the optimal divisible solution on the remaining subgraph, until all edges are deleted. In each iteration of the outer while loop, the minimum feedback arc set is reduced by 1; thus the number of iterations is equal to k(G).

It remains to verify that each iteration reduces k(G) by exactly 1.
Starting from a graph at the beginning of an iteration, we compute its optimal divisible subgraph. We then keep removing one non-unit-weight edge at a time and recomputing the optimal divisible subgraph, until the latter contains only edges with unit weight. By Lemma 11, throughout the iteration so far the minimum feedback arc set of the corresponding unweighted graph remains unchanged. Once the oracle returns a graph G' with unit edge weights, removing any edge would reduce the minimum feedback arc set: otherwise G' is not optimal, since G' \u2212 {e} would have the same minimum feedback arc set but smaller total edge weight. By Lemma 5, removing a single edge cannot reduce the minimum feedback arc set by more than one; thus, as all edges have unit weight, k(G') gets reduced by exactly one. Hence k(G) is equal to the value returned by the procedure, and the optimal matching problem for divisible orders is NP-hard.\n5.3 Existence of a Match\nKnowing that the optimal matching problem is NP-hard for both indivisible and divisible orders in pair betting, we check whether the auctioneer can easily identify the existence of a match. Lemma 13 states a sufficient condition for the matching problem with both indivisible and divisible orders.\nLemma 13. A sufficient condition for the existence of a match for pair betting is that there exists a cycle C in G such that\n\u2211_{e\u2208C} be \u2265 |C| \u2212 1, (11)\nwhere |C| is the number of edges in the cycle C.\nProof. The left-hand side of inequality (11) represents the total payment that the auctioneer receives by accepting every unit order in the cycle C in full. Because the direction of an edge represents the predicted ordering of the two connected nodes in the final ranking, forming a cycle means that there is a logical contradiction among the predicted orderings of candidates. Hence, whichever state is realized, not all of the edges in the cycle can be winning edges.
The worst case for the auctioneer corresponds to a state where every edge in the cycle gets paid $1 except one, with |C| \u2212 1 being the maximum payment to traders. Hence, if inequality (11) is satisfied, the auctioneer has non-negative worst-case profit by accepting the orders in the cycle.\nIt can be shown that identifying such a non-negative worst-case profit cycle in an order graph G can be achieved in polynomial time.\nLemma 14. It takes polynomial time to find a cycle in an order graph G(V, E) that has the highest worst-case profit, that is,\nmax_{C\u2208C} ( \u2211_{e\u2208C} be \u2212 (|C| \u2212 1) ),\nwhere C is the set of all cycles in G.\nProof. Because\n\u2211_{e\u2208C} be \u2212 (|C| \u2212 1) = \u2211_{e\u2208C} (be \u2212 1) + 1 = 1 \u2212 \u2211_{e\u2208C} (1 \u2212 be),\nfinding the cycle that gives the highest worst-case profit in the original order graph G is equivalent to finding the shortest cycle in a converted graph H(V, E), where H is obtained by setting the weight of edge e in G to be (1 \u2212 be).\nFinding the shortest cycle in graph H can be done in polynomial time by resorting to the shortest path problem. For any vertex v in V, we consider every neighbor vertex w such that (v, w) \u2208 E. We then find the shortest path from w to v, denoted path(w, v). The shortest cycle that passes through vertex v is found by choosing the w that minimizes e(v,w) + path(w, v), where e(v,w) denotes the weight of edge (v, w) in H. Comparing the shortest cycles found for every vertex, we can then determine the shortest overall cycle for the graph H. Because the shortest path problem can be solved in polynomial time [3], we can find the solution to our problem in polynomial time.\nIf the worst-case profit for the optimal cycle is non-negative, we know that there exists a match in G. However, the condition in Lemma 13 is not a necessary condition for the existence of a match.
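The search in the proof of Lemma 14 can be sketched as follows. This is our own illustration, not the paper's code, and it iterates over edges rather than vertices (an equivalent way to enumerate cycles): each order (u, v) with price b_e becomes an edge of weight 1 \u2212 b_e, and the best single-cycle worst-case profit is 1 minus the minimum cycle weight.

```python
import heapq

def best_cycle_profit(n, orders):
    """Worst-case profit of the best single cycle of unit orders.

    orders maps a directed edge (u, v) to its price b_e in (0, 1].
    Per Lemma 14, convert each price to weight 1 - b_e; the best cycle
    profit is then 1 minus the minimum cycle weight.  Assumes the order
    graph contains at least one cycle.
    """
    weight = {e: 1.0 - b for e, b in orders.items()}
    adj = {v: [] for v in range(n)}
    for (u, v), w in weight.items():
        adj[u].append((v, w))

    def dijkstra(src):  # weights are non-negative since b_e <= 1
        dist, pq = {src: 0.0}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return dist

    best = float("inf")
    for (u, v), w in weight.items():
        d = dijkstra(v)          # cycle = edge (u, v) + shortest v -> u path
        if u in d:
            best = min(best, w + d[u])
    return 1.0 - best

# 3-cycle with prices 0.9: profit = sum(b_e) - (|C| - 1) = 2.7 - 2 = 0.7
print(round(best_cycle_profit(3, {(0, 1): 0.9, (1, 2): 0.9, (2, 0): 0.9}), 10))  # 0.7
```

A non-negative return value certifies a match by Lemma 13; a negative value, as the text notes next, does not rule one out.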
Even if all single cycles in the order graph have negative worst-case profit, the auctioneer may accept multiple interweaving cycles to achieve positive worst-case profit. Figure 1 exhibits such a situation.\nIf the optimal indivisible match consists only of edge-disjoint cycles, a natural greedy algorithm can find the cycle that gives the highest worst-case profit, remove its edges from the graph, and proceed until no more cycles exist. However, we show that such a greedy algorithm can give a very poor approximation.\nFigure 3: Graph with n vertices and n + \u221an edges on which the greedy algorithm finds only two cycles: the dotted cycle in the center and the unique remaining cycle. The labels in the faces give the number of edges in the corresponding cycle (\u221an for the center cycle, \u221an + 1 for each outer cycle).\nLemma 15. The greedy algorithm gives at most an O(\u221an)-approximation to the maximum number of disjoint cycles.\nProof. Consider the graph in Figure 3, consisting of a cycle with \u221an edges, each of which participates in another (otherwise disjoint) cycle with \u221an + 1 edges. Suppose all edge weights are (1 \u2212 \u03b5). The maximum number of disjoint cycles is clearly \u221an, taking all cycles of length \u221an + 1. Because smaller cycles give higher worst-case profit, the greedy algorithm first selects the cycle of length \u221an, after which there is only one remaining cycle, of length n. Thus the total number of cycles selected by greedy is 2, and the approximation factor in this case is \u221an/2.\nIn light of Lemma 15, one may expect that greedy algorithms would give \u221an-approximations at best. Approximation algorithms for finding the maximum number of edge-disjoint cycles have been considered by Krivelevich, Nutov and Yuster [11, 19].
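The construction in Lemma 15 is easy to reproduce in code. The sketch below is our own illustration (with a brute-force shortest-cycle search standing in for the greedy step): it builds the Figure 3 family for m = \u221an = 4 and confirms that greedy finds only two cycles even though m edge-disjoint cycles exist.

```python
from collections import deque

def shortest_cycle(edges):
    """Return the edge list of a shortest directed cycle, or None."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    best = None
    for u, v in edges:
        prev, queue = {v: None}, deque([v])   # BFS for a shortest v -> u path
        while queue:
            x = queue.popleft()
            if x == u:
                break
            for y in adj.get(x, []):
                if y not in prev:
                    prev[y] = x
                    queue.append(y)
        if u in prev:
            path, x = [], u
            while x is not None:
                path.append(x)
                x = prev[x]
            path.reverse()                    # v, ..., u
            cycle = list(zip(path, path[1:])) + [(u, v)]
            if best is None or len(cycle) < len(best):
                best = cycle
    return best

def greedy_cycle_count(edges):
    """Repeatedly remove a shortest cycle; count how many are found."""
    edges, count = list(edges), 0
    while True:
        cycle = shortest_cycle(edges)
        if cycle is None:
            return count
        count += 1
        edges = [e for e in edges if e not in cycle]

# Figure 3 family for m = 4: a center m-cycle, each of whose edges (u, v)
# also lies on an otherwise disjoint (m + 1)-cycle via a path v -> ... -> u
m = 4
center = [("c%d" % i, "c%d" % ((i + 1) % m)) for i in range(m)]
outer = []
for i, (u, v) in enumerate(center):
    path = [v] + ["p%d_%d" % (i, j) for j in range(m - 1)] + [u]
    outer += list(zip(path, path[1:]))
print(greedy_cycle_count(center + outer))  # 2, although m disjoint cycles exist
```

Greedy takes the center m-cycle first (it is shortest, hence most profitable), which destroys every (m + 1)-cycle and leaves a single cycle of length m\u00b2, exactly as in the proof.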
Indeed, for the case of directed graphs, the authors show that a greedy algorithm gives a \u221an-approximation [11]. When the optimal match does not consist of edge-disjoint cycles, as it does in the example of Figure 3, a greedy algorithm that searches for optimal single cycles obviously fails.\n6. CONCLUSION\nWe consider a permutation betting scenario, where traders wager on the final ordering of n candidates. While it is unnatural and intractable to allow traders to bet directly on the n! different final orderings, we propose two expressive betting languages, subset betting and pair betting. In a subset betting market, traders can bet either on a subset of positions that a candidate may stand at, or on a subset of candidates who may occupy a specific position in the final ordering. Pair betting allows traders to bet on whether one given candidate ranks higher than another given candidate.\nWe examine the auctioneer's problem of matching orders without incurring risk. We find that in a subset betting market an auctioneer can find, in polynomial time, the optimal set and quantity of orders to accept such that his worst-case profit is maximized, provided that orders are divisible. The complexity changes dramatically for pair betting. We prove that the optimal matching problem for the auctioneer is NP-hard for pair betting with both indivisible and divisible orders, via reductions from the minimum feedback arc set problem. We identify a sufficient condition for the existence of a match, which can be verified in polynomial time. A natural greedy algorithm has been shown to give a poor approximation for indivisible pair betting.\nInteresting open questions for our permutation betting include the computational complexity of optimal indivisible matching for subset betting and a necessary condition for the existence of a match in pair betting markets. We are interested in further exploring better approximation algorithms for pair betting markets.\n7.
ACKNOWLEDGMENTS\nWe thank Ravi Kumar, Yishay Mansour, Amin Saberi, Andrew Tomkins, John Tomlin, and members of Yahoo! Research for valuable insights and discussions.\n8. REFERENCES\n[1] K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies, 31(2):91-96, 1964.\n[2] J. E. Berg, R. Forsythe, F. D. Nelson, and T. A. Rietz. Results from a dozen years of election futures markets research. In C. A. Plott and V. Smith, editors, Handbook of Experimental Economic Results (forthcoming). 2001.\n[3] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms (Second Edition). MIT Press and McGraw-Hill, 2001.\n[4] P. Cramton, Y. Shoham, and R. Steinberg. Combinatorial Auctions. MIT Press, Cambridge, MA, 2005.\n[5] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.\n[6] L. Fortnow, J. Kilian, D. M. Pennock, and M. P. Wellman. Betting boolean-style: A framework for trading in securities based on logical formulas. Decision Support Systems, 39(1):87-104, 2004.\n[7] M. Gr\u00f6tschel, L. Lov\u00e1sz, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1(2):169-197, 1981.\n[8] M. Gr\u00f6tschel, L. Lov\u00e1sz, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin Heidelberg, 1993.\n[9] R. D. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):107-119, 2003.\n[10] R. M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations (Proc. Sympos., IBM Thomas J. Watson Res. Center, Yorktown Heights, N.Y.), pages 85-103. Plenum, New York, 1972.\n[11] M. Krivelevich, Z. Nutov, and R.
Yuster. Approximation algorithms for cycle packing problems. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 556-561, 2005.\n[12] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83-97, 1955.\n[13] J. Munkres. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics, 5(1):32-38, 1957.\n[14] N. Nisan. Bidding and allocation in combinatorial auctions. In Proceedings of the 2nd ACM Conference on Electronic Commerce (EC'00), Minneapolis, MN, 2000.\n[15] D. M. Pennock, S. Lawrence, C. L. Giles, and F. A. Nielsen. The real power of artificial markets. Science, 291:987-988, February 2002.\n[16] C. Plott and S. Sunder. Efficiency of experimental security markets with insider information: An application of rational expectations models. Journal of Political Economy, 90:663-98, 1982.\n[17] C. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56:1085-1118, 1988.\n[18] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135:1-54, 2002.\n[19] R. Yuster and Z. Nutov. Packing directed cycles efficiently. In Proceedings of the 29th International Symposium on Mathematical Foundations of Computer Science (MFCS), 2004.", "keywords": "pair-betting market;minimum feedback;permutation combinatoric;greedy algorithm;order match;complex polynomial transformation;expressive bet;permutation betting;prediction market;subset betting;computational complexity;information aggregation;polynomial-time algorithm;bilateral trading partner;bipartite graph"}
-{"name": "test_J-23", "title": "Frugality Ratios And Improved Truthful Mechanisms for Vertex Cover", "abstract": "In set-system auctions, there are several overlapping teams of agents, and a task that can be completed by any of these teams. The auctioneer's goal is to hire a team and pay as little as possible. Examples of this setting include shortest-path auctions and vertex-cover auctions. Recently, Karlin, Kempe and Tamir introduced a new definition of frugality ratio for this problem. Informally, the frugality ratio is the ratio of the total payment of a mechanism to a desired payment bound. The ratio captures the extent to which the mechanism overpays, relative to perceived fair cost in a truthful auction. In this paper, we propose a new truthful polynomial-time auction for the vertex cover problem and bound its frugality ratio. We show that the solution quality is within a constant factor of optimal and the frugality ratio is within a constant factor of the best possible worst-case bound; this is the first auction for this problem to have these properties. Moreover, we show how to transform any truthful auction into a frugal one while preserving the approximation ratio. Also, we consider two natural modifications of the definition of Karlin et al., and we analyse the properties of the resulting payment bounds, such as monotonicity, computational hardness, and robustness with respect to the draw-resolution rule. We study the relationships between the different payment bounds, both for general set systems and for specific set-system auctions, such as path auctions and vertex-cover auctions. We use these new definitions in the proof of our main result for vertex-cover auctions via a bootstrapping technique, which may be of independent interest.", "fulltext": "1. INTRODUCTION\nIn a set system auction there is a single buyer and many vendors that can provide various services.
It is assumed that the buyer's requirements can be satisfied by various subsets of the vendors; these subsets are called the feasible sets. A widely studied class of set-system auctions is path auctions, where each vendor is able to sell access to a link in a network, and the feasible sets are those sets whose links contain a path from a given source to a given destination; the study of these auctions was initiated in the seminal paper by Nisan and Ronen [19] (see also [1, 10, 9, 6, 15, 7, 20]).\nWe assume that each vendor has a cost of providing his services, but submits a possibly larger bid to the auctioneer. Based on these bids, the auctioneer selects a feasible subset of the vendors, and makes payments to the vendors in this subset. Each selected vendor enjoys a profit of payment minus cost. Vendors want to maximise profit, while the buyer wants to minimise the amount he pays. A natural goal in this setting is to design a truthful auction, in which vendors have an incentive to bid their true cost. This can be achieved by paying each selected vendor a premium above her bid in such a way that the vendor has no incentive to overbid. An interesting question in mechanism design is how much the auctioneer will have to overpay in order to ensure truthful bids.\nIn the context of path auctions this topic was first addressed by Archer and Tardos [1]. They define the frugality ratio of a mechanism as the ratio between its total payment and the cost of the cheapest path disjoint from the path selected by the mechanism. They show that, for a large class of truthful mechanisms for this problem, the frugality ratio is as large as the number of edges in the shortest path.
Talwar [21] extends this definition of frugality ratio to general set systems, and studies the frugality ratio of the classical VCG mechanism [22, 4, 14] for many specific set systems, such as minimum spanning trees and set covers.\nWhile the definition of frugality ratio proposed by [1] is well-motivated and has been instrumental in studying truthful mechanisms for set systems, it is not completely satisfactory. Consider, for example, the graph of Figure 1 with the costs cAB = cBC = cCD = 0, cAC = cBD = 1.\nFigure 1: The diamond graph\nThis graph is 2-connected and the VCG payment to the winning path ABCD is bounded. However, the graph contains no A-D path that is disjoint from ABCD, and hence the frugality ratio of VCG on this graph remains undefined. At the same time, there is no monopoly, that is, there is no vendor that appears in all feasible sets. In auctions for other types of set systems, the requirement that there exist a feasible solution disjoint from the selected one is even more severe: for example, for vertex-cover auctions (where vendors correspond to the vertices of some underlying graph, and the feasible sets are vertex covers) the requirement means that the graph must be bipartite. To deal with this problem, Karlin et al. [16] suggest a better benchmark, which is defined for any monopoly-free set system. This quantity, which they denote by \u03bd, intuitively corresponds to the value of a cheapest Nash equilibrium. Based on this new definition, the authors construct new mechanisms for the shortest path problem and show that the overpayment of these mechanisms is within a constant factor of optimal.\n1.1 Our results\nVertex cover auctions. We propose a truthful polynomial-time auction for vertex cover that outputs a solution whose cost is within a factor of 2 of optimal, and whose frugality ratio is at most 2\u0394, where \u0394 is the maximum degree of the graph (Theorem 4).
We complement this result by proving (Theorem 5) that for any \u0394 and n, there are graphs of maximum degree \u0394 and size \u0398(n) for which any truthful mechanism has frugality ratio at least \u0394/2. This means that the solution quality of our auction is within a factor of 2 of optimal and the frugality ratio is within a factor of 4 of the best possible bound for worst-case inputs. To the best of our knowledge, this is the first auction for this problem that enjoys these properties. Moreover, we show how to transform any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.\nFrugality ratios. Our vertex cover results naturally suggest two modifications of the definition of \u03bd in [16]. These modifications can be made independently of each other, resulting in four different payment bounds TUmax, TUmin, NTUmax, and NTUmin, where NTUmin is equal to the original payment bound \u03bd of [16]. All four payment bounds arise as Nash equilibria of certain games (see the full version of this paper [8]); the differences between them can be seen as the price of initiative and the price of cooperation (see Section 3). While our main result about vertex cover auctions (Theorem 4) is with respect to NTUmin = \u03bd, we make use of the new definitions by first comparing the payment of our mechanism to a weaker bound NTUmax, and then bootstrapping from this result to obtain the desired bound.\nInspired by this application, we embark on a further study of these payment bounds. Our results here are as follows:\n1. We observe (Proposition 1) that the four payment bounds always obey a particular order that is independent of the choice of the set system and the cost vector, namely, TUmin \u2264 NTUmin \u2264 NTUmax \u2264 TUmax.
We provide examples (Proposition 5 and Corollaries 1 and 2) showing that for the vertex cover problem any two consecutive bounds can differ by a factor of n \u2212 2, where n is the number of agents. We then show (Theorem 2) that this separation is almost the best possible for general set systems, by proving that for any set system TUmax/TUmin \u2264 n. In contrast, we demonstrate (Theorem 3) that for path auctions TUmax/TUmin \u2264 2. We provide examples (Propositions 2, 3 and 4) showing that this bound is tight. We see this as an argument for the study of vertex-cover auctions, as they appear to be more representative of the general team-selection problem than the widely studied path auctions.\n2. We show (Theorem 1) that for any set system, if there is a cost vector for which TUmin and NTUmin differ by a factor of \u03b1, there is another cost vector that separates NTUmin and NTUmax by the same factor, and vice versa; the same is true for the pairs (NTUmin, NTUmax) and (NTUmax, TUmax). This symmetry is quite surprising, since, e.g., TUmin and NTUmax are obtained from NTUmin by two very different transformations. This observation suggests that the four payment bounds should be studied in a unified framework; moreover, it leads us to believe that the bootstrapping technique of Theorem 4 may have other applications.\n3. We evaluate the payment bounds introduced here with respect to a checklist of desirable features. In particular, we note that the payment bound \u03bd = NTUmin of [16] exhibits some counterintuitive properties, such as nonmonotonicity with respect to adding a new feasible set (Proposition 7), and is NP-hard to compute (Theorem 6), while some of the other payment bounds do not suffer from these problems. This can be seen as an argument in favour of using the weaker but efficiently computable bounds NTUmax and TUmax.\nRelated work\nVertex-cover auctions have been studied in the past by Talwar [21] and Calinescu [5].
Both of these papers are based on the definition of frugality ratio used in [1]; as mentioned before, this means that their results only apply to bipartite graphs. Talwar [21] shows that the frugality ratio of VCG is at most \u0394. However, since finding the cheapest vertex cover is an NP-hard problem, the VCG mechanism is computationally infeasible. The first (and, to the best of our knowledge, only) paper to investigate polynomial-time truthful mechanisms for vertex cover is [5]. This paper studies an auction that is based on the greedy allocation algorithm, which has an approximation ratio of log n. While the main focus of [5] is the more general set cover problem, the results of [5] imply a frugality ratio of 2\u0394\u00b2 for vertex cover. Our results improve on those of [21], as our mechanism is polynomial-time computable, as well as on those of [5], as our mechanism has a better approximation ratio and we prove a stronger bound on the frugality ratio; moreover, this bound also applies to the mechanism of [5].\n2. PRELIMINARIES\nIn most of this paper, we discuss auctions for set systems. A set system is a pair (E, F), where E is the ground set, |E| = n, and F is a collection of feasible sets, which are subsets of E. Two particular types of set systems are of interest to us: shortest path systems, in which the ground set consists of all edges of a network, and the feasible sets are paths between two specified vertices s and t; and vertex cover systems, in which the elements of the ground set are the vertices of a graph, and the feasible sets are vertex covers of this graph.\nIn set system auctions, each element e of the ground set is owned by an independent agent and has an associated non-negative cost ce. The goal of the centre is to select (purchase) a feasible set. Each element e in the selected set incurs a cost of ce.
The elements that are not selected incur no costs.\nThe auction proceeds as follows: all elements of the ground set make their bids, the centre selects a feasible set based on the bids and makes payments to the agents. Formally, an auction is defined by an allocation rule A : R^n \u2192 F and a payment rule P : R^n \u2192 R^n. The allocation rule takes as input a vector of bids and decides which of the sets in F should be selected. The payment rule also takes as input a vector of bids and decides how much to pay to each agent. The standard requirements are individual rationality, i.e., the payment to each agent should be at least as high as his incurred cost (0 for agents not in the selected set and ce for agents in the selected set), and incentive compatibility, or truthfulness, i.e., each agent's dominant strategy is to bid his true cost.\nAn allocation rule is monotone if an agent cannot increase his chance of getting selected by raising his bid. Formally, for any bid vector b and any e \u2208 E, if e \u2208 A(b) then e \u2208 A(b1, . . . , b'e, . . . , bn) for any b'e < be. Given a monotone allocation rule A and a bid vector b, the threshold bid te of an agent e \u2208 A(b) is the highest bid of this agent that still wins the auction, given that the bids of the other participants remain the same. Formally, te = sup{b'e \u2208 R | e \u2208 A(b1, . . . , b'e, . . . , bn)}. It is well known (see, e.g., [19, 13]) that any auction that has a monotone allocation rule and pays each agent his threshold bid is truthful; conversely, any truthful auction has a monotone allocation rule.\nThe VCG mechanism is a truthful mechanism that maximises the social welfare and pays 0 to the losing agents. For set system auctions, this simply means picking a cheapest feasible set, paying each agent in the selected set his threshold bid, and paying 0 to all other agents.
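For set systems small enough to enumerate, this mechanism can be spelled out by brute force. The sketch below is our own illustration, not the paper's implementation; it assumes a monopoly-free system, so that for each winner some feasible set avoids him:

```python
def vcg_set_system(feasible_sets, bids):
    """Brute-force VCG for a monopoly-free set system auction."""
    cost = lambda T: sum(bids[e] for e in T)
    # pick a cheapest feasible set (break ties lexicographically)
    S = min(feasible_sets, key=lambda T: (cost(T), sorted(T)))
    payments = {e: 0.0 for e in bids}  # losing agents are paid 0
    for e in S:
        # threshold bid: e keeps winning as long as the selected set stays
        # no more expensive than the cheapest feasible set avoiding e
        best_without_e = min(cost(T) for T in feasible_sets if e not in T)
        payments[e] = best_without_e - (cost(S) - bids[e])
    return S, payments

# Diamond graph of Figure 1, costs c_AB = c_BC = c_CD = 0, c_AC = c_BD = 1
paths = [{'AB', 'BC', 'CD'}, {'AB', 'BD'}, {'AC', 'CD'}]
bids = {'AB': 0, 'BC': 0, 'CD': 0, 'AC': 1, 'BD': 1}
S, pay = vcg_set_system(paths, bids)
print(sorted(S), sum(pay.values()))  # ['AB', 'BC', 'CD'] 3.0
```

On the diamond graph this selects path ABCD at cost 0 but pays each winner his threshold of 1, a total of 3, illustrating the overpayment the frugality ratio is meant to measure. As the text notes next, the enumeration step is intractable for general set systems.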
Note, however, that the VCG mechanism may be difficult to implement, since finding a cheapest feasible set may be intractable.\nIf U is a set of agents, c(U) denotes \u2211_{w\u2208U} cw. Similarly, b(U) denotes \u2211_{w\u2208U} bw.\n3. FRUGALITY RATIOS\nWe start by reproducing the definition of the quantity \u03bd from [16, Definition 4].\nLet (E, F) be a set system and let S be a cheapest feasible set with respect to the true costs ce. Then \u03bd(c, S) is the solution to the following optimisation problem.\nMinimise B = \u2211_{e\u2208S} be subject to\n(1) be \u2265 ce for all e \u2208 E\n(2) \u2211_{e\u2208S\\T} be \u2264 \u2211_{e\u2208T\\S} ce for all T \u2208 F\n(3) for every e \u2208 S, there is a Te \u2208 F such that e \u2208 Te and \u2211_{e'\u2208S\\Te} be' = \u2211_{e'\u2208Te\\S} ce'\nThe bound \u03bd(c, S) can be seen as the outcome of a two-stage process, where first each agent e \u2208 S makes a bid be stating how much it wants to be paid, and then the centre decides whether to accept these bids. The behaviour of both parties is affected by the following considerations. From the centre's point of view, the set S must remain the most attractive choice, i.e., it must be among the cheapest feasible sets under the new costs c'e = ce for e \u2209 S, c'e = be for e \u2208 S (condition (2)). The reason is that if (2) is violated for some set T, the centre would prefer T to S. On the other hand, no agent would agree to a payment that does not cover his costs (condition (1)), and moreover, each agent tries to maximise his profit by bidding as high as possible, i.e., none of the agents can increase his bid without violating condition (2) (condition (3)). The centre wants to minimise the total payout, so \u03bd(c, S) corresponds to the best possible outcome from the centre's point of view.\nThis definition captures many important aspects of our intuition about 'fair' payments.
However, it can be modified in two ways, both of which are still quite natural, but which result in different payment bounds.\nFirst, we can consider the worst rather than the best possible outcome for the centre. That is, we can consider the maximum total payment that the agents can extract by jointly selecting their bids subject to (1), (2), and (3). Such a bound corresponds to maximising B subject to (1), (2), and (3) rather than minimising it. If it is the agents who make the original bids (rather than the centre), this kind of bidding behaviour is plausible. On the other hand, in a game in which the centre proposes payments to the agents in S and the agents accept them as long as (1), (2) and (3) are satisfied, we would be likely to observe a total payment of \u03bd(c, S). Hence, the difference between these two definitions can be seen as the price of initiative.\nSecond, the agents may be able to make payments to each other. In this case, if they can extract more money from the centre by agreeing on a vector of bids that violates individual rationality (i.e., condition (1)) for some bidders, they might be willing to do so, as the agents who are paid below their costs will be compensated by other members of the group. The bids must still be realistic, i.e., they have to satisfy be \u2265 0. The resulting change in payments can be seen as the price of co-operation and corresponds to replacing condition (1) with the following weaker condition (1\u2217):\nbe \u2265 0 for all e \u2208 E.
(1\u2217)\nBy considering all possible combinations of these modifications, we obtain four different payment bounds, namely:\n\u2022 TUmin(c, S), the solution to the optimisation problem: Minimise B subject to (1\u2217), (2), and (3);\n\u2022 TUmax(c, S), the solution to the optimisation problem: Maximise B subject to (1\u2217), (2), and (3);\n\u2022 NTUmin(c, S), the solution to the optimisation problem: Minimise B subject to (1), (2), and (3);\n\u2022 NTUmax(c, S), the solution to the optimisation problem: Maximise B subject to (1), (2), and (3).\nThe abbreviations TU and NTU correspond, respectively, to transferable utility and non-transferable utility, i.e., the agents' ability/inability to make payments to each other. For concreteness, we will take TUmin(c) to be TUmin(c, S) where S is the lexicographically least amongst the cheapest feasible sets. We define TUmax(c), NTUmin(c), NTUmax(c) and \u03bd(c) similarly, though we will see in Section 6.3 that, in fact, NTUmin(c, S) and NTUmax(c, S) are independent of the choice of S. Note that the quantity \u03bd(c) from [16] is NTUmin(c).\nThe second modification (transferable utility) is more intuitively appealing in the context of the maximisation problem, as both assume some degree of co-operation between the agents. While the second modification can be made without the first, the resulting payment bound TUmin(c, S) is too strong to be a realistic benchmark, at least for general set systems. In particular, it can be smaller than the total cost of the cheapest feasible set S (see Section 6). Nevertheless, we provide the definition as well as some results about TUmin(c, S) in the paper, both for completeness and because we believe that it may help to understand which properties of the payment bounds are important for our proofs.
Another possibility would be to introduce an additional constraint \u2211_{e\u2208S} be \u2265 \u2211_{e\u2208S} ce in the definition of TUmin(c, S) (note that this condition holds automatically for TUmax(c, S), as TUmax(c, S) \u2265 NTUmax(c, S)); however, such a definition would have no direct game-theoretic interpretation, and some of our results (in particular, the ones in Section 4) would no longer be true.\nREMARK 1. For the payment bounds that are derived from maximisation problems (i.e., TUmax(c, S) and NTUmax(c, S)), constraints of type (3) are redundant and can be dropped. Hence, TUmax(c, S) and NTUmax(c, S) are solutions to linear programs, and therefore can be computed in polynomial time as long as we have a separation oracle for the constraints in (2). In contrast, NTUmin(c, S) can be NP-hard to compute even if the size of F is polynomial (see Section 6).\nThe first and third inequalities in the following observation follow from the fact that condition (1\u2217) is strictly weaker than condition (1).\nPROPOSITION 1. TUmin(c, S) \u2264 NTUmin(c, S) \u2264 NTUmax(c, S) \u2264 TUmax(c, S).\nLet M be a truthful mechanism for (E, F). Let pM(c) denote the total payments of M when the actual costs are c. The frugality ratio of M with respect to a payment bound is the ratio between the payment of M and this payment bound. In particular,\n\u03c6_TUmin(M) = sup_c pM(c)/TUmin(c),\n\u03c6_TUmax(M) = sup_c pM(c)/TUmax(c),\n\u03c6_NTUmin(M) = sup_c pM(c)/NTUmin(c),\n\u03c6_NTUmax(M) = sup_c pM(c)/NTUmax(c).\nWe conclude this section by showing that there exist set systems and respective cost vectors for which all four payment bounds are different. In the next section, we quantify this difference, both for general set systems and for specific types of set systems, such as path auctions or vertex cover auctions.\nEXAMPLE 1. Consider the shortest-path auction on the graph of Figure 1. The cheapest feasible sets are all paths from A to D.
It can be verified, using the reasoning of Propositions 2 and 3 below, that for the cost vector cAB = cCD = 2, cBC = 1, cAC = cBD = 5, we have:\n\u2022 TUmax(c) = 10 (with bAB = bCD = 5, bBC = 0),\n\u2022 NTUmax(c) = 9 (with bAB = bCD = 4, bBC = 1),\n\u2022 NTUmin(c) = 7 (with bAB = bCD = 2, bBC = 3),\n\u2022 TUmin(c) = 5 (with bAB = bCD = 0, bBC = 5).\n4. COMPARING PAYMENT BOUNDS\n4.1 Path auctions\nWe start by showing that for path auctions any two consecutive payment bounds can differ by at least a factor of 2.\nPROPOSITION 2. There is an instance of the shortest-path problem for which we have NTUmax(c)/NTUmin(c) \u2265 2.\nPROOF. This construction is due to David Kempe [17]. Consider the graph of Figure 1 with the edge costs cAB = cBC = cCD = 0, cAC = cBD = 1. Under these costs, ABCD is the cheapest path. The inequalities in (2) are bAB + bBC \u2264 cAC = 1 and bBC + bCD \u2264 cBD = 1. By condition (3), both of these inequalities must be tight (the former is the only inequality involving bAB, and the latter is the only inequality involving bCD). The inequalities in (1) are bAB \u2265 0, bBC \u2265 0, bCD \u2265 0. Now, if the goal is to maximise bAB + bBC + bCD, the best choice is bAB = bCD = 1, bBC = 0, so NTUmax(c) = 2. On the other hand, if the goal is to minimise bAB + bBC + bCD, one should set bAB = bCD = 0, bBC = 1, so NTUmin(c) = 1.
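The four values in Example 1 can also be checked numerically with the same tightness reasoning used in these propositions. The sketch below is our own illustration: because the optima in this instance happen to be integral, an exhaustive search over integer bid vectors suffices, and condition (3) is enforced only for the minimisation problems, where it is not redundant.

```python
from itertools import product

# Path auction on the diamond graph (Figure 1), Example 1 costs:
# winning path S = {AB, BC, CD}; the competing paths ACD and ABD
# yield the constraints (2):  b_AB + b_BC <= c_AC = 5  and
#                             b_BC + b_CD <= c_BD = 5
cost = {'AB': 2, 'BC': 1, 'CD': 2}
cons = [(('AB', 'BC'), 5), (('BC', 'CD'), 5)]

def payment_bounds(individually_rational):
    """Return (min, max) total payment over feasible bid vectors."""
    lo, hi = None, 0
    for bids in product(range(6), repeat=3):
        b = dict(zip(('AB', 'BC', 'CD'), bids))
        if individually_rational and any(b[e] < cost[e] for e in b):
            continue  # condition (1)
        if any(sum(b[e] for e in es) > rhs for es, rhs in cons):
            continue  # condition (2)
        total = sum(b.values())
        hi = max(hi, total)  # condition (3) is redundant when maximising
        # condition (3): every winning edge lies in some tight constraint
        if all(any(e in es and sum(b[x] for x in es) == rhs
                   for es, rhs in cons) for e in b):
            lo = total if lo is None else min(lo, total)
    return lo, hi

ntu_min, ntu_max = payment_bounds(True)
tu_min, tu_max = payment_bounds(False)
print(tu_min, ntu_min, ntu_max, tu_max)  # 5 7 9 10
```

The output reproduces the chain TUmin(c) = 5 \u2264 NTUmin(c) = 7 \u2264 NTUmax(c) = 9 \u2264 TUmax(c) = 10 of Proposition 1.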
If we have to respect the inequalities in (1), we have to set bAB = bCD = 0, bBC = 1, so NTUmax(c) = 1. Otherwise, we can set bAB = bCD = 1, bBC = 0, so TUmax(c) ≥ 2.

PROPOSITION 4. There is an instance of the shortest-path problem for which we have NTUmin(c)/TUmin(c) ≥ 2.

PROOF. This construction is also based on the graph of Figure 1. The edge costs are cAB = cCD = 1, cBC = 0, cAC = cBD = 1. ABCD is the lexicographically least cheapest path, so we can assume that S = {AB, BC, CD}. Again, the inequalities in (2) are the same, and both are, in fact, equalities. The inequalities in (1) are bAB ≥ 1, bBC ≥ 0, bCD ≥ 1. Our goal is to minimise bAB + bBC + bCD. If we have to respect the inequalities in (1), we have to set bAB = bCD = 1, bBC = 0, so NTUmin(c) = 2. Otherwise, we can set bAB = bCD = 0, bBC = 1, so TUmin(c) ≤ 1.

In Section 4.4 (Theorem 3), we show that the separation results in Propositions 2, 3, and 4 are optimal.

4.2 Connections between separation results
The separation results for path auctions are obtained on the same graph using very similar cost vectors. It turns out that this is not coincidental. Namely, we can prove the following theorem.

THEOREM 1. For any set system (E, F) and any feasible set S,

max_c TUmax(c, S)/NTUmax(c, S) = max_c NTUmax(c, S)/NTUmin(c, S),
max_c NTUmax(c, S)/NTUmin(c, S) = max_c NTUmin(c, S)/TUmin(c, S),

where the maximum is over all cost vectors c for which S is a cheapest feasible set.

The proof of the theorem follows directly from the four lemmas proved below; more precisely, the first equality in Theorem 1 is obtained by combining Lemmas 1 and 2, and the second equality is obtained by combining Lemmas 3 and 4. We prove Lemma 1 here; the proofs of Lemmas 2-4 are similar and can be found in the full version of this paper [8].

LEMMA 1.
Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and TUmax(c, S)/NTUmax(c, S) = α. Then there is a cost vector c′ such that S is a cheapest feasible set under c′ and NTUmax(c′, S)/NTUmin(c′, S) ≥ α.

PROOF. Suppose that TUmax(c, S) = X and NTUmax(c, S) = Y, where X/Y = α. Assume without loss of generality that S consists of elements 1, . . . , k, and let b1 = (b1_1, . . . , b1_k) and b2 = (b2_1, . . . , b2_k) be the bid vectors that correspond to TUmax(c, S) and NTUmax(c, S), respectively.

Construct the cost vector c′ by setting c′_i = c_i for i ∉ S and c′_i = min{c_i, b1_i} for i ∈ S. Clearly, S is a cheapest set under c′. Moreover, as the costs of elements outside of S remained the same, the right-hand sides of all constraints in (2) did not change, so any bid vector that satisfies (2) and (3) with respect to c also satisfies them with respect to c′. We will construct two bid vectors b3 and b4 that satisfy conditions (1), (2), and (3) for the cost vector c′ and have Σ_{i∈S} b3_i = X and Σ_{i∈S} b4_i = Y. As NTUmax(c′, S) ≥ X and NTUmin(c′, S) ≤ Y, this implies the lemma.

Figure 2: Graph that separates payment bounds for vertex cover, n = 7

We can set b3_i = b1_i: this bid vector satisfies conditions (2) and (3) since b1 does, and we have b1_i ≥ min{c_i, b1_i} = c′_i, which means that b3 satisfies condition (1). Furthermore, we can set b4_i = b2_i. Again, b4 satisfies conditions (2) and (3) since b2 does, and since b2 satisfies condition (1), we have b2_i ≥ c_i ≥ c′_i, which means that b4 satisfies condition (1).

LEMMA 2. Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α. Then there is a cost vector c′ such that S is a cheapest feasible set under c′ and TUmax(c′, S)/NTUmax(c′, S) ≥ α.

LEMMA 3.
Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmax(c, S)/NTUmin(c, S) = α. Then there is a cost vector c′ such that S is a cheapest feasible set under c′ and NTUmin(c′, S)/TUmin(c′, S) ≥ α.

LEMMA 4. Suppose that c is a cost vector for (E, F) such that S is a cheapest feasible set and NTUmin(c, S)/TUmin(c, S) = α. Then there is a cost vector c′ such that S is a cheapest feasible set under c′ and NTUmax(c′, S)/NTUmin(c′, S) ≥ α.

4.3 Vertex-cover auctions
In contrast to the case of path auctions, for vertex-cover auctions the gap between NTUmin(c) and NTUmax(c) (and hence between NTUmax(c) and TUmax(c), and between TUmin(c) and NTUmin(c)) can be proportional to the size of the graph.

PROPOSITION 5. For any n ≥ 3, there is an n-vertex graph and a cost vector c for which TUmax(c)/NTUmax(c) ≥ n − 2.

PROOF. The underlying graph consists of an (n − 1)-clique on the vertices X1, . . . , Xn−1 and an extra vertex X0 adjacent to Xn−1. The costs are cX1 = cX2 = · · · = cXn−2 = 0, cX0 = cXn−1 = 1. We can assume that S = {X0, X1, . . . , Xn−2} (this is the lexicographically first vertex cover of cost 1). For this set system, the constraints in (2) are bXi + bX0 ≤ cXn−1 = 1 for i = 1, . . . , n − 2. Clearly, we can satisfy conditions (2) and (3) by setting bXi = 1 for i = 1, . . . , n − 2 and bX0 = 0. Hence, TUmax(c) ≥ n − 2. For NTUmax(c), there is an additional constraint bX0 ≥ 1, so the best we can do is to set bXi = 0 for i = 1, . . . , n − 2 and bX0 = 1, which implies NTUmax(c) = 1.

Combining Proposition 5 with Lemmas 1 and 3, we derive the following corollaries.

COROLLARY 1. For any n ≥ 3, we can construct an instance of the vertex cover problem on a graph of size n that satisfies NTUmax(c)/NTUmin(c) ≥ n − 2.

COROLLARY 2.
For any n ≥ 3, we can construct an instance of the vertex cover problem on a graph of size n that satisfies NTUmin(c)/TUmin(c) ≥ n − 2.

Figure 3: Proof of Theorem 3: constraints for P̂_ij and P̂_ij+2 do not overlap

4.4 Upper bounds
It turns out that the lower bound proved in the previous subsection is almost tight. More precisely, the following theorem shows that no two payment bounds can differ by more than a factor of n; moreover, this is the case not just for the vertex cover problem, but for general set systems. We bound the gap between TUmax(c) and TUmin(c). Since TUmin(c) ≤ NTUmin(c) ≤ NTUmax(c) ≤ TUmax(c), this bound applies to any pair of payment bounds.

THEOREM 2. For any set system (E, F) and any cost vector c, we have TUmax(c)/TUmin(c) ≤ n.

PROOF. Assume without loss of generality that the winning set S consists of elements 1, . . . , k. Let c1, . . . , ck be the true costs of the elements in S, let b1, . . . , bk be their bids that correspond to TUmin(c), and let b′1, . . . , b′k be their bids that correspond to TUmax(c).

Consider the conditions (2) and (3) for S. One can pick a subset L of at most k inequalities in (2) so that for each i = 1, . . . , k there is at least one inequality in L that is tight for b_i. Suppose that the jth inequality in L is of the form b_{i1} + · · · + b_{it} ≤ c(Tj \ S). For b, all inequalities in L are, in fact, equalities; hence, by adding up all of them (each b_i appears in at most k of them), we obtain k Σ_{i=1,...,k} b_i ≥ Σ_{j=1,...,k} c(Tj \ S). On the other hand, all these inequalities appear in condition (2), so they must hold for b′, i.e., Σ_{i=1,...,k} b′_i ≤ Σ_{j=1,...,k} c(Tj \ S). Combining these two inequalities, we obtain

nTUmin(c) ≥ kTUmin(c) ≥ TUmax(c).

REMARK 2.
The final line of the proof of Theorem 2 shows that, in fact, the upper bound on TUmax(c)/TUmin(c) can be strengthened to the size of the winning set, k. Note that in Proposition 5, as well as in Corollaries 1 and 2, k = n − 1, so these results do not contradict each other.

For path auctions, this upper bound can be improved to 2, matching the lower bounds of Section 4.1.

THEOREM 3. For any instance of the shortest path problem, TUmax(c) ≤ 2 TUmin(c).

PROOF. Given a network (G, s, t), assume without loss of generality that the lexicographically least cheapest s-t path P in G is {e1, . . . , ek}, where e1 = (s, v1), e2 = (v1, v2), . . . , ek = (vk−1, t). Let c1, . . . , ck be the true costs of e1, . . . , ek, and let b = (b1, . . . , bk) and b′ = (b′1, . . . , b′k) be bid vectors that correspond to TUmin(c) and TUmax(c), respectively.

For any i = 1, . . . , k, there is a constraint in (2) that is tight for bi with respect to the bid vector b, i.e., an s-t path Pi that avoids ei and satisfies b(P \ Pi) = c(Pi \ P). We can assume without loss of generality that Pi coincides with P up to some vertex xi, then deviates from P to avoid ei, and finally returns to P at a vertex yi and coincides with P from then on (clearly, it might happen that s = xi or t = yi). Indeed, if Pi deviates from P more than once, one of these deviations is not necessary to avoid ei and can be replaced with the respective segment of P without increasing the cost of Pi. Among all paths of this form, let P̂i be the one with the largest value of yi, i.e., the rightmost one. This path corresponds to an inequality Ii of the form b_{xi+1} + · · · + b_{yi} ≤ c(P̂i \ P).

As in the proof of Theorem 2, we construct a set of tight constraints L such that every variable bi appears in at least one of these constraints; however, now we have to be more careful about the choice of constraints in L.
We construct L inductively as follows. Start by setting L = {I1}. At the jth step, suppose that all variables up to (but not including) b_{ij} appear in at least one inequality in L. Add I_{ij} to L.

Note that for any j we have y_{ij+1} > y_{ij}: this is because the inequalities added to L during the first j steps did not cover b_{ij+1} (see Figure 3). Since y_{ij+2} > y_{ij+1}, we must also have x_{ij+2} > y_{ij}: otherwise, P̂_{ij+1} would not be the rightmost constraint for b_{ij+1}. Therefore, the variables in I_{ij+2} and I_{ij} do not overlap, and hence no b_i can appear in more than two inequalities in L.

Now we follow the argument of the proof of Theorem 2 to finish. By adding up all of the (tight) inequalities in L for b we obtain 2 Σ_{i=1,...,k} b_i ≥ Σ_{j=1,...,k} c(P̂_j \ P). On the other hand, all these inequalities appear in condition (2), so they must hold for b′, i.e., Σ_{i=1,...,k} b′_i ≤ Σ_{j=1,...,k} c(P̂_j \ P), so TUmax(c) ≤ 2 TUmin(c).

5. TRUTHFUL MECHANISMS FOR VERTEX COVER
Recall that for a vertex-cover auction on a graph G = (V, E), an allocation rule is an algorithm that takes as input a bid bv for each vertex and returns a vertex cover Ŝ of G. As explained in Section 2, we can combine a monotone allocation rule with threshold payments to obtain a truthful auction.

Two natural examples of monotone allocation rules are Aopt, i.e., the algorithm that finds an optimal vertex cover, and the greedy algorithm AGR. However, Aopt cannot be guaranteed to run in polynomial time unless P = NP, and AGR has an approximation ratio of log n.

Another approximation algorithm for vertex cover, which has approximation ratio 2, is the local ratio algorithm ALR [2, 3]. This algorithm considers the edges of G one by one. Given an edge e = (u, v), it computes ε = min{bu, bv} and sets bu = bu − ε, bv = bv − ε. After all edges have been processed, ALR returns the set of vertices {v | bv = 0}.
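A minimal sketch of this edge-by-edge pass may make it concrete (our own illustration; the vertex labels and the edge list are arbitrary, and the fixed list order stands in for any bid-independent edge order):

```python
def local_ratio_cover(edges, bids):
    """Local ratio algorithm ALR: for each edge (u, v), taken in an
    order that must not depend on the bids, subtract
    eps = min(b_u, b_v) from both endpoints; the vertices whose
    residual bid reaches 0 form the returned vertex cover."""
    b = dict(bids)
    for u, v in edges:
        eps = min(b[u], b[v])
        b[u] -= eps
        b[v] -= eps
    return {v for v in b if b[v] == 0}

# Triangle with bids 1, 2, 3: the pass drives vertices 0 and 1 to zero,
# and {0, 1} is a vertex cover of cost 3, optimal for this instance.
cover = local_ratio_cover([(0, 1), (1, 2), (0, 2)], {0: 1, 1: 2, 2: 3})
print(cover)  # {0, 1}
```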
It is not hard to check that if the order in which the edges are considered is independent of the bids, then this algorithm is monotone as well. Hence, we can use it to construct a truthful auction that is guaranteed to select a vertex cover whose cost is within a factor of 2 of the optimal.

However, while the quality of the solution produced by ALR is much better than that of AGR, we still need to show that its total payment is not too high. In the next subsection, we bound the frugality ratio of ALR (and, more generally, of all algorithms that satisfy the condition of local optimality, defined later) by 2Δ, where Δ is the maximum degree of G. We then prove a matching lower bound showing that for some graphs the frugality ratio of any truthful auction is at least Δ/2.

5.1 Upper bound
We say that an allocation rule is locally optimal if whenever bv > Σ_{w∼v} bw, the vertex v is not chosen. Note that for any such rule the threshold bid of v satisfies tv ≤ Σ_{w∼v} bw.

CLAIM 1. The algorithms Aopt, AGR, and ALR are locally optimal.

THEOREM 4. Any vertex cover auction M that has a locally optimal and monotone allocation rule and pays each agent his threshold bid has frugality ratio φNTUmin(M) ≤ 2Δ.

To prove Theorem 4, we first show that the total payment of any locally optimal mechanism does not exceed Δc(V). We then demonstrate that NTUmin(c) ≥ c(V)/2. By combining these two results, the theorem follows.

LEMMA 5. Consider a graph G = (V, E) with maximum degree Δ. Let M be a vertex-cover auction on G that satisfies the conditions of Theorem 4. Then for any cost vector c, the total payment of M satisfies pM(c) ≤ Δc(V).

PROOF. First note that any such auction is truthful, so we can assume that each agent's bid is equal to his cost. Let Ŝ be the vertex cover selected by M.
Then, by local optimality,

pM(c) = Σ_{v∈Ŝ} tv ≤ Σ_{v∈Ŝ} Σ_{w∼v} cw ≤ Σ_{w∈V} Δcw = Δc(V).

We now derive a lower bound on TUmax(c); while not essential for the proof of Theorem 4, it helps us build the intuition necessary for that proof.

LEMMA 6. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, TUmax(c, S) ≥ c(V \ S).

PROOF. For a vertex w with at least one neighbour in S, let d(w) denote the number of neighbours that w has in S. Consider the bid vector b in which, for each v ∈ S, bv = Σ_{w∼v, w∉S} cw/d(w). Then Σ_{v∈S} bv = Σ_{v∈S} Σ_{w∼v, w∉S} cw/d(w) = Σ_{w∉S} cw = c(V \ S). To finish, we want to show that b is feasible in the sense that it satisfies (2). Consider a vertex cover T, and extend the bid vector b by assigning bv = cv for v ∉ S. Then

b(T) = c(T \ S) + b(S ∩ T) ≥ c(T \ S) + Σ_{v∈S∩T} Σ_{w∼v, w∈(V\S)\T} cw/d(w),

and since all edges between (V \ S) \ T and S go to S ∩ T, the right-hand side is equal to

c(T \ S) + Σ_{w∈(V\S)\T} cw = c(T \ S) + c((V \ S) \ T) = c(V \ S) = b(S).

Next, we prove a lower bound on NTUmax(c, S); we will then use it to obtain a lower bound on NTUmin(c).

LEMMA 7. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmax(c, S) ≥ c(V \ S).

PROOF. If c(S) ≥ c(V \ S), by condition (1) we are done. Therefore, for the rest of the proof we assume that c(S) < c(V \ S). We show how to construct a bid vector (bv)v∈S that satisfies conditions (1) and (2) such that b(S) ≥ c(V \ S); clearly, this implies NTUmax(c, S) ≥ c(V \ S).

Recall that a network flow problem is described by a directed graph Γ = (VΓ, EΓ), a source node s ∈ VΓ, a sink node t ∈ VΓ, and a vector of capacity constraints ae, e ∈ EΓ.
Consider a network (VΓ, EΓ) such that VΓ = V ∪ {s, t} and EΓ = E1 ∪ E2 ∪ E3, where E1 = {(s, v) | v ∈ S}, E2 = {(v, w) | v ∈ S, w ∈ V \ S, (v, w) ∈ E}, E3 = {(w, t) | w ∈ V \ S}. Since S is a vertex cover for G, no edge of E can have both of its endpoints in V \ S, and by construction, E2 contains no edges with both endpoints in S. Therefore, the graph (V, E2) is bipartite with parts (S, V \ S).

Set the capacity constraints for e ∈ EΓ as follows: a(s,v) = cv, a(w,t) = cw, a(v,w) = +∞ for all v ∈ S, w ∈ V \ S. Recall that a cut is a partition of the vertices in VΓ into two sets C1 and C2 so that s ∈ C1, t ∈ C2; we denote such a cut by C = (C1, C2). Abusing notation, we write e = (u, v) ∈ C if u ∈ C1, v ∈ C2 or u ∈ C2, v ∈ C1, and say that such an edge e = (u, v) crosses the cut C. The capacity of a cut C is computed as cap(C) = Σ_{(v,w)∈C} a(v,w). We have cap({s}, V ∪ {t}) = c(S) and cap({s} ∪ V, {t}) = c(V \ S).

Let Cmin = ({s} ∪ S′ ∪ W′, {t} ∪ S″ ∪ W″) be a minimum cut in Γ, where S′, S″ ⊆ S and W′, W″ ⊆ V \ S. See Figure 4. As cap(Cmin) ≤ cap({s}, V ∪ {t}) = c(S) < +∞, and any edge in E2 has infinite capacity, no edge (u, v) ∈ E2 crosses Cmin.

Consider the network Γ′ = (VΓ′, EΓ′), where VΓ′ = {s} ∪ S′ ∪ W′ ∪ {t} and EΓ′ = {(u, v) ∈ EΓ | u, v ∈ VΓ′}. Clearly, C′ = ({s} ∪ S′ ∪ W′, {t}) is a minimum cut in Γ′ (otherwise, there would exist a smaller cut for Γ). As cap(C′) = c(W′), we have c(S′) ≥ c(W′).

Now, consider the network Γ″ = (VΓ″, EΓ″), where VΓ″ = {s} ∪ S″ ∪ W″ ∪ {t} and EΓ″ = {(u, v) ∈ EΓ | u, v ∈ VΓ″}. Similarly, C″ = ({s}, S″ ∪ W″ ∪ {t}) is a minimum cut in Γ″, and cap(C″) = c(S″).
As the size of a maximum flow from s to t is equal to the capacity of a minimum cut separating s and t, there exists a flow F′ = (f′_e)e∈EΓ″ of size c(S″). This flow has to saturate all edges between s and S″, i.e., f′_(s,v) = cv for all v ∈ S″. Now, increase the capacities of all edges between s and S″ to +∞. In the modified network, the capacity of a minimum cut (and hence the size of a maximum flow) is c(W″), and a maximum flow F″ = (f″_e)e∈EΓ″ can be constructed by greedily augmenting F′.

Set bv = cv for all v ∈ S′ and bv = f″_(s,v) for all v ∈ S″. As F″ is constructed by augmenting F′, we have bv ≥ cv for all v ∈ S, i.e., condition (1) is satisfied. Moreover, b(S) = c(S′) + c(W″) ≥ c(W′) + c(W″) = c(V \ S), as required.

Now, let us check that no vertex cover T ⊆ V can violate condition (2). Set T1 = T ∩ S′, T2 = T ∩ S″, T3 = T ∩ W′, T4 = T ∩ W″; our goal is to show that b(S′ \ T1) + b(S″ \ T2) ≤ c(T3) + c(T4). Consider all edges (u, v) ∈ E such that u ∈ S′ \ T1. If (u, v) ∈ E2, then v ∈ T3 (no edge in E2 can cross the cut), and if u, v ∈ S, then v ∈ T1 ∪ T2. Hence, T1 ∪ T3 ∪ S″ is a vertex cover for G, and therefore c(T1) + c(T3) + c(S″) ≥ c(S) = c(T1) + c(S′ \ T1) + c(S″). Consequently, c(T3) ≥ c(S′ \ T1) = b(S′ \ T1). Now, consider the vertices in S″ \ T2. Any edge in E2 that starts in one of these vertices has to end in T4 (this edge has to be covered by T, and it cannot go across the cut). Therefore, the total flow out of S″ \ T2 is at most the total flow out of T4, i.e., b(S″ \ T2) ≤ c(T4). Hence, b(S′ \ T1) + b(S″ \ T2) ≤ c(T3) + c(T4).

Finally, we derive a lower bound on the payment bound that is of interest to us, namely, NTUmin(c).

LEMMA 8. For a vertex cover instance G = (V, E) in which S is a minimum vertex cover, NTUmin(c, S) ≥ c(V \ S).

PROOF. Suppose for contradiction that c is a cost vector with minimum-cost vertex cover S and NTUmin(c, S) < c(V \ S).
Let b be the corresponding bid vector, and let c′ be a new cost vector with c′_v = bv for v ∈ S and c′_v = cv for v ∉ S. Condition (2) guarantees that S is an optimal solution for the cost vector c′. Now compute a bid vector b′ corresponding to NTUmax(c′, S).

Figure 4: Proof of Lemma 7. Dashed lines correspond to edges in E \ E2

We claim that b′_v = c′_v for any v ∈ S (we have b′_v ≥ c′_v for v ∈ S by condition (1)). Indeed, suppose that b′_v > c′_v for some v ∈ S. As b satisfies conditions (1)-(3), among the inequalities in (2) there is one that is tight for v and the bid vector b, i.e., b(S \ T) = c(T \ S) for some feasible T with v ∈ S \ T. By the construction of c′, c′(S \ T) = c′(T \ S). Now, since b′_w ≥ c′_w for all w ∈ S, b′_v > c′_v implies b′(S \ T) > c′(S \ T) = c′(T \ S). But this violates (2). So we now know that b′ = c′ on S. Hence, we have NTUmax(c′, S) = Σ_{v∈S} b′_v = NTUmin(c, S) < c(V \ S), giving a contradiction to the fact that NTUmax(c′, S) ≥ c′(V \ S) = c(V \ S), which we proved in Lemma 7.

As NTUmin(c, S) satisfies condition (1), it follows that we have NTUmin(c, S) ≥ c(S).
Together with Lemma 8, this implies NTUmin(c, S) ≥ max{c(V \ S), c(S)} ≥ c(V)/2. Combined with Lemma 5, this completes the proof of Theorem 4.

REMARK 3. As NTUmin(c) ≤ NTUmax(c) ≤ TUmax(c), our bound of 2Δ extends to the smaller frugality ratios that we consider, i.e., φNTUmax(M) and φTUmax(M). It is not clear whether it extends to the larger frugality ratio φTUmin(M). However, the frugality ratio φTUmin(M) is not realistic, because the payment bound TUmin(c) is inappropriately low: we show in Section 6 that TUmin(c) can be significantly smaller than the total cost of a cheapest vertex cover.

Extensions
We can also apply our results to monotone vertex-cover algorithms that do not necessarily output locally optimal solutions. To do so, we simply take the vertex cover produced by any such algorithm and transform it into a locally optimal one, considering the vertices in lexicographic order and replacing a vertex v with its neighbours whenever bv > Σ_{u∼v} bu. Note that if a vertex u has been added to the vertex cover during this process, it means that it has a neighbour whose bid is higher than bu, so after one pass all vertices in the vertex cover satisfy bv ≤ Σ_{u∼v} bu. This procedure is monotone in bids, and it can only decrease the cost of the vertex cover. Therefore, using it on top of a monotone allocation rule with approximation ratio α, we obtain a monotone locally optimal allocation rule with approximation ratio α. Combining it with threshold payments, we get an auction with φNTUmin ≤ 2Δ. Since any truthful auction has a monotone allocation rule, this procedure transforms any truthful mechanism for the vertex-cover problem into a frugal one while preserving the approximation ratio.

5.2 Lower bound
In this subsection, we prove that the upper bound of Theorem 4 is essentially optimal.
Our proof uses the techniques of [9], where the authors prove a similar result for shortest-path auctions.

THEOREM 5. For any Δ > 0 and any n, there exist a graph G of maximum degree Δ and size N > n such that for any truthful mechanism M on G we have φNTUmin(M) ≥ Δ/2.

PROOF. Given n and Δ, set k = ⌈n/2Δ⌉. Let G be the graph that consists of k blocks B1, . . . , Bk of size 2Δ each, where each Bi is a complete bipartite graph with parts Li and Ri, |Li| = |Ri| = Δ.

We will consider two families of cost vectors for G. Under a cost vector x ∈ X, each block Bi has one vertex of cost 1; all other vertices cost 0. Under a cost vector y ∈ Y, there is one block that has two vertices of cost 1, one in each part; all other blocks have one vertex of cost 1, and all other vertices cost 0. Clearly, |X| = (2Δ)^k and |Y| = k(2Δ)^{k−1}Δ². We will now construct a bipartite graph W with the vertex set X ∪ Y as follows.

Consider a cost vector y ∈ Y that has two vertices of cost 1 in Bi; let these vertices be vl ∈ Li and vr ∈ Ri. By changing the cost of either of these vertices to 0, we obtain a cost vector in X. Let xl and xr be the cost vectors obtained by changing the cost of vl and vr, respectively. The vertex cover chosen by M(y) must either contain all vertices in Li or contain all vertices in Ri. In the former case, we put in W an edge from y to xl, and in the latter case we put in W an edge from y to xr (if the vertex cover includes all of Bi, W contains both of these edges).

The graph W has at least k(2Δ)^{k−1}Δ² edges, so there must exist an x ∈ X of degree at least kΔ/2. Let y1, . . . , y_{kΔ/2} be the other endpoints of the edges incident to x, and for each i = 1, . . . , kΔ/2, let vi be the vertex whose cost is different under x and yi; note that all the vi are distinct.

It is not hard to see that NTUmin(x) ≤ k: the cheapest vertex cover contains the all-0 part of each block, and we can satisfy conditions (1)-(3) by letting one of the vertices in the all-0 part of each block bid 1, while all the other vertices in the cheapest set bid 0. On the other hand, by monotonicity of M we have vi ∈ M(x) for i = 1, . . . , kΔ/2 (vi is in the winning set under yi, and x is obtained from yi by decreasing the cost of vi), and moreover, the threshold bid of each vi is at least 1, so the total payment of M on x is at least kΔ/2. Hence, φNTUmin(M) ≥ pM(x)/NTUmin(x) ≥ Δ/2.

REMARK 4. The lower bound of Theorem 5 can be generalised to randomised mechanisms, where a randomised mechanism is considered to be truthful if it can be represented as a probability distribution over truthful mechanisms. In this case, instead of choosing the vertex x ∈ X with the highest degree, we put both (y, xl) and (y, xr) into W, label each edge with the probability that the respective part of the block is chosen, and pick x ∈ X with the highest weighted degree. The argument can be further extended to a more permissive definition of truthfulness for randomised mechanisms, but this discussion is beyond the scope of this paper.

6. PROPERTIES OF PAYMENT BOUNDS
In this section we consider several desirable properties of payment bounds and evaluate the four payment bounds proposed in this paper with respect to them.
The particular properties that we are interested in are independence of the choice of S (Section 6.3), monotonicity (Section 6.4.1), computational hardness (Section 6.4.2), and the relationship with other reasonable bounds, such as the total cost of the cheapest set (Section 6.1) or the total VCG payment (Section 6.2).

6.1 Comparison with total cost
Our first requirement is that a payment bound should not be less than the total cost of the selected set. Payment bounds are used to evaluate the performance of set-system auctions. The latter have to satisfy individual rationality, i.e., the payment to each agent must be at least as large as his incurred cost; it is only reasonable to require the payment bound to satisfy the same requirement.

Clearly, NTUmax(c) and NTUmin(c) satisfy this requirement due to condition (1), and so does TUmax(c), since TUmax(c) ≥ NTUmax(c). However, TUmin(c) fails this test. The example of Proposition 4 shows that for path auctions, TUmin(c) can be smaller than the total cost by a factor of 2. Moreover, there are set systems and cost vectors for which TUmin(c) is smaller than the cost of the cheapest set S by a factor of Ω(n). Consider, for example, the vertex-cover auction for the graph of Proposition 5 with the costs cX1 = · · · = cXn−2 = cXn−1 = 1, cX0 = 0. The cost of a cheapest vertex cover is n − 2, and the lexicographically first vertex cover of cost n − 2 is {X0, X1, . . . , Xn−2}. The constraints in (2) are bXi + bX0 ≤ cXn−1 = 1. Clearly, we can satisfy conditions (2) and (3) by setting bX1 = · · · = bXn−2 = 0, bX0 = 1, which means that TUmin(c) ≤ 1.
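This bid vector can be checked mechanically (a small sketch of our own; the function name and the encoding of X0, . . . , Xn−2 as the integers 0, . . . , n−2 are ours):

```python
def tumin_witness(n):
    """Check the bid vector b_X0 = 1, b_Xi = 0 against the constraints
    b_Xi + b_X0 <= c_X{n-1} = 1 of this instance, and report the total
    bid next to the cost n - 2 of the chosen cover {X0, ..., X_{n-2}}."""
    b = {0: 1, **{i: 0 for i in range(1, n - 1)}}
    feasible = all(b[i] + b[0] <= 1 for i in range(1, n - 1))  # condition (2)
    # Condition (3): every winner occurs in some tight inequality;
    # here every inequality is tight, and X0 occurs in all of them.
    tight = all(b[i] + b[0] == 1 for i in range(1, n - 1))
    return feasible and tight, sum(b.values()), n - 2

print(tumin_witness(10))  # (True, 1, 8): TUmin(c) <= 1, cover cost n - 2 = 8
```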
This observation suggests that the payment bound TUmin(c) is too strong to be realistic, since it can be substantially lower than the cost of the cheapest feasible set. Nevertheless, some of the positive results that were proved in [16] for NTUmin(c) go through for TUmin(c) as well. In particular, one can show that if the feasible sets are the bases of a monopoly-free matroid, then φTUmin(VCG) = 1. To show that φTUmin(VCG) is at most 1, one must prove that the VCG payment is at most TUmin(c). This is shown for NTUmin(c) in the first paragraph of the proof of Theorem 5 in [16]; their argument does not use condition (1) at all, so it also applies to TUmin(c). On the other hand, φTUmin(VCG) ≥ 1, since φTUmin(VCG) ≥ φNTUmin(VCG) and φNTUmin(VCG) ≥ 1 by Proposition 7 of [16] (and also by Proposition 6 below).

6.2 Comparison with VCG payments
Another measure of suitability for payment bounds is that they should not result in frugality ratios that are less than 1 for well-known truthful mechanisms. If this is indeed the case, the payment bound may be too weak, as it becomes too easy to design mechanisms that perform well with respect to it. In particular, a reasonable requirement is that a payment bound should not exceed the total payment of the classical VCG mechanism.

The following proposition shows that NTUmax(c), and therefore also NTUmin(c) and TUmin(c), does not exceed the VCG payment pVCG(c). The proof essentially follows the argument of Proposition 7 of [16] and can be found in the full version of this paper [8].

PROPOSITION 6. φNTUmax(VCG) ≥ 1.

Proposition 6 shows that none of the payment bounds TUmin(c), NTUmin(c), and NTUmax(c) exceeds the payment of VCG. However, the payment bound TUmax(c) can be larger than the total VCG payment. In particular, for the instance in Proposition 5, the VCG payment is smaller than TUmax(c) by a factor of n − 2.
We have already seen that TUmax(c) ≥ n − 2. On the other hand, under VCG, the threshold bid of any Xi, i = 1, . . . , n − 2, is 0: if any such vertex bids above 0, it is deleted from the winning set together with X0 and replaced with Xn−1. Similarly, the threshold bid of X0 is 1, because if X0 bids above 1, it can be replaced with Xn−1. So the VCG payment is 1.
This result is not surprising: the definition of TUmax(c) implicitly assumes there is co-operation between the agents, while the computation of VCG payments does not take into account any interaction between them. Indeed, co-operation enables the agents to extract higher payments under VCG. That is, VCG is not group-strategyproof. This suggests that as a payment bound, TUmax(c) may be too liberal, at least in a context where there is little or no co-operation between agents. Perhaps TUmax(c) can be a good benchmark for measuring the performance of mechanisms designed for agents that can form coalitions or make side payments to each other, in particular, group-strategyproof mechanisms.
Another setting in which bounding φTUmax is still of some interest is when, for the underlying problem, the optimal allocation and VCG payments are NP-hard to compute. In this case, finding a polynomial-time computable mechanism with a good frugality ratio with respect to TUmax(c) is a non-trivial task, while bounding the frugality ratio with respect to more challenging payment bounds could be too difficult. To illustrate this point, compare the proofs of Lemma 6 and Lemma 7: both require some effort, but the latter is much more difficult than the former.
6.3 The choice of S
All payment bounds defined in this paper correspond to the total bid of all elements in the cheapest feasible set, where ties are broken lexicographically.
While this definition ensures that our payment bounds are well-defined, the particular choice of the draw-resolution rule appears arbitrary, and one might wonder if our payment bounds are sufficiently robust to be independent of this choice. It turns out that this is indeed the case for NTUmin(c) and NTUmax(c), i.e., these bounds do not depend on the draw-resolution rule. To see this, suppose that two feasible sets S1 and S2 have the same cost. In the computation of NTUmin(c, S1), all vertices in S1 \ S2 would have to bid their true cost, since otherwise S2 would become cheaper than S1. Hence, any bid vector for S1 has be = ce for all e ∈ S1 \ S2, and therefore constitutes a valid bid vector for S2, and vice versa. A similar argument applies to NTUmax(c).
However, for TUmin(c) and TUmax(c) this is not the case. For example, consider the set system
E = {e1, e2, e3, e4, e5},
F = {S1 = {e1, e2}, S2 = {e2, e3, e4}, S3 = {e4, e5}}
with the costs c1 = 2, c2 = c3 = c4 = 1, c5 = 3. The cheapest sets are S1 and S2. Now TUmax(c, S1) ≤ 4, as the total bid of the elements in S1 cannot exceed the total cost of S3. On the other hand, TUmax(c, S2) ≥ 5, as we can set b2 = 3, b3 = 0, b4 = 2. Similarly, TUmin(c, S1) = 4, because the inequalities in (2) are b1 ≤ 2 and b1 + b2 ≤ 4. But TUmin(c, S2) ≤ 3, as we can set b2 = 1, b3 = 2, b4 = 0.
6.4 Negative results for NTUmin(c) and TUmin(c)
The results in [16] and our vertex cover results are proved for the frugality ratio φNTUmin. Indeed, it can be argued that φNTUmin is the best definition of frugality ratio, because among all reasonable payment bounds (i.e., ones that are at least as large as the cost of the cheapest feasible set), it is the most demanding of the algorithm. However, NTUmin(c) is not always the easiest or the most natural payment bound to work with.
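The five-element set system of Section 6.3 above can be checked mechanically. The sketch below is our own (the function name and the encoding of condition (2) as "total bid on S \ T at most total cost of T \ S" are our reading of the text): it confirms that the bid vector b2 = 3, b3 = 0, b4 = 2 is feasible for S2 with total 5, while the constraint against S3 caps any bid vector for S1 at 4.

```python
# Sketch (ours): condition (2) for a winning set S, as read from the text:
# for every feasible set T, sum of bids on S \ T <= total cost of T \ S.
def satisfies_2(bids, S, F, cost):
    return all(sum(bids[e] for e in S - T) <= sum(cost[e] for e in T - S)
               for T in F)

cost = {1: 2, 2: 1, 3: 1, 4: 1, 5: 3}
S1, S2, S3 = {1, 2}, {2, 3, 4}, {4, 5}
F = [S1, S2, S3]

# TUmax(c, S2) >= 5: the bid vector b2 = 3, b3 = 0, b4 = 2 satisfies (2).
b = {2: 3, 3: 0, 4: 2}
assert satisfies_2(b, S2, F, cost) and sum(b.values()) == 5

# TUmax(c, S1) <= 4: the constraint against S3 caps b1 + b2 by c4 + c5 = 4.
assert sum(cost[e] for e in S3 - S1) == 4
```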
In this subsection, we discuss several disadvantages of NTUmin(c) (and also TUmin(c)) as compared to NTUmax(c) and TUmax(c).
6.4.1 Nonmonotonicity
The first problem with NTUmin(c) is that it is not monotone with respect to F, i.e., it may increase when one adds a feasible set to F. (It is, however, monotone in the sense that a losing agent cannot become a winner by raising his cost.) Intuitively, a good payment bound should satisfy this monotonicity requirement, as adding a feasible set increases the competition, so it should drive the prices down. Note that this is indeed the case for NTUmax(c) and TUmax(c), since a new feasible set adds a constraint in (2), thus limiting the solution space of the respective linear program.
PROPOSITION 7. Adding a feasible set to F can increase the value of NTUmin(c) by a factor of Ω(n).
PROOF. Let E = {x, x′, y1, . . . , yn, z1, . . . , zn}. Set Y = {y1, . . . , yn}, S = Y ∪ {x}, Ti = Y \ {yi} ∪ {zi}, i = 1, . . . , n, and suppose that F = {S, T1, . . . , Tn}. The costs are cx = 0, cx′ = 0, cyi = 0, czi = 1 for i = 1, . . . , n. Note that S is the cheapest feasible set. Let F′ = F ∪ {T0}, where T0 = Y ∪ {x′}. For F, the bid vector by1 = · · · = byn = 0, bx = 1 satisfies (1), (2), and (3), so NTUmin(c) ≤ 1. For F′, S is still the lexicographically-least cheapest set. Any optimal solution has bx = 0 (by the constraint in (2) with T0). Condition (3) for yi implies bx + byi = czi = 1, so byi = 1 and NTUmin(c) = n.
For path auctions, it has been shown [18] that NTUmin(c) is non-monotone in a slightly different sense, i.e., with respect to adding a new edge (agent) rather than a new feasible set (a team of existing agents).
REMARK 5. We can also show that NTUmin(c) is non-monotone for vertex cover. In this case, adding a new feasible set corresponds to deleting edges from the graph.
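The construction in the proof of Proposition 7 can also be checked by machine. The sketch below is our own (names and the encodings of conditions (1)-(3) are our reading of the text; `x2` stands in for the element written x′ in the proof): it verifies that the bid vector with bx = 1 is feasible for F with total 1, becomes infeasible once T0 is added, and that the forced solution for F′ totals n.

```python
# Our own sketch of the Proposition 7 construction. We read condition (2)
# as: for each feasible T, total bid on S \ T <= total cost of T \ S; and
# condition (3) as: every winner occurs in some tight constraint of (2).
def valid(bids, S, F, cost, eps=1e-9):
    if any(bids[e] < cost[e] - eps for e in S):                  # condition (1)
        return False
    diffs = [S - T for T in F]
    slack = [sum(cost[e] for e in T - S) - sum(bids[e] for e in D)
             for T, D in zip(F, diffs)]
    if any(s < -eps for s in slack):                             # condition (2)
        return False
    return all(any(abs(s) < eps and e in D for s, D in zip(slack, diffs))
               for e in S)                                       # condition (3)

n = 4
Y = {f"y{i}" for i in range(1, n + 1)}
S = Y | {"x"}
Ts = [(Y - {f"y{i}"}) | {f"z{i}"} for i in range(1, n + 1)]
T0 = Y | {"x2"}                          # "x2" plays the role of x'
cost = dict.fromkeys(S | {"x2"}, 0)
cost.update({f"z{i}": 1 for i in range(1, n + 1)})

b1 = dict.fromkeys(Y, 0); b1["x"] = 1
b2 = dict.fromkeys(Y, 1); b2["x"] = 0
assert valid(b1, S, Ts, cost) and sum(b1.values()) == 1      # NTUmin(c) <= 1 for F
assert not valid(b1, S, Ts + [T0], cost)                     # b1 infeasible for F'
assert valid(b2, S, Ts + [T0], cost) and sum(b2.values()) == n
```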
It turns out that deleting a single edge can increase NTUmin(c) by a factor of n − 2; the construction is similar to that of Proposition 5.
6.4.2 NP-Hardness
Another problem with NTUmin(c, S) is that it is NP-hard to compute even if the number of feasible sets is polynomial in n. Again, this puts it at a disadvantage compared to NTUmax(c, S) and TUmax(c, S) (see Remark 1).
THEOREM 6. Computing NTUmin(c) is NP-hard, even when the lexicographically-least feasible set S is given in the input.
PROOF. We reduce EXACT COVER BY 3-SETS (X3C) to our problem. An instance of X3C is given by a universe G = {g1, . . . , gn} and a collection of subsets C1, . . . , Cm, Ci ⊂ G, |Ci| = 3, where the goal is to decide whether one can cover G by n/3 of these sets. Observe that if this is indeed the case, each element of G is contained in exactly one set of the cover.
LEMMA 9. Consider a minimisation problem P of the following form: Minimise Σi=1,...,n bi under conditions (1) bi ≥ 0 for all i = 1, . . . , n; (2) for any j = 1, . . . , k we have Σbi∈Sj bi ≤ aj, where Sj ⊆ {b1, . . . , bn}; (3) for each bj, one of the constraints in (2) involving it is tight. For any such P, one can construct a set system S and a vector of costs c such that NTUmin(c) is the optimal solution to P.
PROOF. The construction is straightforward: there is an element of cost 0 for each bi, and an element of cost aj for each aj; the feasible solutions are {b1, . . . , bn}, or any set obtained from {b1, . . . , bn} by replacing the elements in Sj by aj.
By this lemma, all we have to do to prove Theorem 6 is to show how to solve X3C by using the solution to a minimisation problem of the form given in Lemma 9. We do this as follows. For each Ci, we introduce 4 variables xi, x̄i, ai, and bi. Also, for each element gj of G there is a variable dj.
We use the following set of constraints:
• In (1), we have constraints xi ≥ 0, x̄i ≥ 0, ai ≥ 0, bi ≥ 0, dj ≥ 0 for all i = 1, . . . , m, j = 1, . . . , n.
• In (2), for all i = 1, . . . , m, we have the following 5 constraints: xi + x̄i ≤ 1, xi + ai ≤ 1, x̄i + ai ≤ 1, xi + bi ≤ 1, x̄i + bi ≤ 1. Also, for all j = 1, . . . , n we have a constraint of the form xi1 + · · · + xik + dj ≤ 1, where Ci1, . . . , Cik are the sets that contain gj.
The goal is to minimize z = Σi (xi + x̄i + ai + bi) + Σj dj.
Observe that for each j, there is only one constraint involving dj, so by condition (3) it must be tight.
Consider the two constraints involving ai. One of them must be tight, and therefore xi + x̄i + ai + bi ≥ xi + x̄i + ai ≥ 1. Hence, for any feasible solution to (1)-(3) we have z ≥ m. Now, suppose that there is an exact set cover. Set dj = 0 for j = 1, . . . , n. Also, if Ci is included in this cover, set xi = 1, x̄i = ai = bi = 0; otherwise set x̄i = 1, xi = ai = bi = 0. Clearly, all inequalities in (2) are satisfied (we use the fact that each element is covered exactly once), and for each variable, one of the constraints involving it is tight. This assignment results in z = m.
Conversely, suppose there is a feasible solution with z = m. As each addend of the form xi + x̄i + ai + bi contributes at least 1, we have xi + x̄i + ai + bi = 1 for all i, and dj = 0 for all j. We will now show that for each i, either xi = 1 and x̄i = 0, or xi = 0 and x̄i = 1. For the sake of contradiction, suppose that xi = δ < 1 and x̄i = δ′ < 1. As one of the constraints involving ai must be tight, we have ai ≥ min{1 − δ, 1 − δ′}. Similarly, bi ≥ min{1 − δ, 1 − δ′}.
Hence, xi + x̄i + ai + bi ≥ δ + δ′ + 2 min{1 − δ, 1 − δ′} > 1, a contradiction. To finish the proof, note that for each j = 1, . . . , n we have xi1 + · · · + xik + dj = 1 and dj = 0, so the subsets that correspond to xi = 1 constitute a set cover.
REMARK 6. In the proofs of Proposition 7 and Theorem 6 all constraints in (1) are of the form be ≥ 0. Hence, the same results are true for TUmin(c).
REMARK 7. For shortest-path auctions, the size of F can be superpolynomial. However, there is a polynomial-time separation oracle for the constraints in (2) (to construct one, use any algorithm for finding shortest paths), so one can compute NTUmax(c) and TUmax(c) in polynomial time. On the other hand, recently and independently it was shown [18] that computing NTUmin(c) for shortest-path auctions is NP-hard.
7. REFERENCES
[1] A. Archer and E. Tardos, Frugal path mechanisms. In Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 991-999, 2002.
[2] R. Bar-Yehuda, K. Bendel, A. Freund, and D. Rawitz, Local ratio: a unified framework for approximation algorithms. In Memoriam: Shimon Even 1935-2004. ACM Comput. Surv., 36(4):422-463, 2004.
[3] R. Bar-Yehuda and S. Even, A local-ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics, 25:27-46, 1985.
[4] E. Clarke, Multipart pricing of public goods. Public Choice, 8:17-33, 1971.
[5] G. Calinescu, Bounding the payment of approximate truthful mechanisms. In Proceedings of the 15th International Symposium on Algorithms and Computation, pages 221-233, 2004.
[6] A. Czumaj and A. Ronen, On the expected payment of mechanisms for task allocation. In Proceedings of the 5th ACM Conference on Electronic Commerce (EC'04), 2004.
[7] E. Elkind, True costs of cheap labor are hard to measure: edge deletion and VCG payments in graphs.
In Proceedings of the\n6th ACM Conference on Electronic Commerce (EC\"05), 2005\n[8] E. Elkind, L. A. Goldberg, and P. W. Goldberg, Frugality\nratios and improved truthful mechanisms for vertex cover.\nAvailable from\nhttp://arxiv.org/abs/cs/0606044, 2006\n[9] E. Elkind, A. Sahai, and K. Steiglitz, Frugality in path\nauctions. In Proceedings of the 15th Annual ACM-SIAM\nSymposium on Discrete Algorithms, pages 694-702, 2004\n[10] J. Feigenbaum, C. H. Papadimitriou, R. Sami, and\nS. Shenker, A BGP-based mechanism for lowest-cost routing.\nIn Proceedings of the 21st Symposium on Principles of\nDistributed Computing, pages 173-182, 2002\n[11] A. Fiat, A. Goldberg, J. Hartline, and A. Karlin, Competitive\ngeneralized auctions. In Proceedings of the 34th Annual ACM\nSymposium on Theory of Computation, pages 72-81, 2002\n[12] R. Garg, V. Kumar, A. Rudra and A. Verma, Coalitional\ngames on graphs: core structures, substitutes and frugality. In\nProceedings of the 4th ACM Conference on Electronic\nCommerce (EC\"03), 2005\n[13] A. Goldberg, J. Hartline, and A. Wright, Competitive\nauctions and digital goods. In Proceedings of the 12th Annual\nACM-SIAM Symposium on Discrete Algorithms, pages\n735-744, 2001\n[14] T. Groves, Incentives in teams. Econometrica,\n41(4):617-631, 1973\n[15] N. Immorlica, D. Karger, E. Nikolova, and R. Sami,\nFirst-price path auctions. In Proceedings of the 6th ACM\nConference on Electronic Commerce (EC\"05), 2005\n[16] A. R. Karlin, D. Kempe, and T. Tamir, Beyond VCG:\nfrugality of truthful mechanisms. In Proceedings of the 46th\nAnnual IEEE Symposium on Foundations of Computer\nScience, pages 615-626, 2005\n[17] D. Kempe, Personal communication, 2006\n[18] N. Chen, A. R. Karlin, Cheap labor can be expensive, In\nProceedings of the 18th Annual ACM-SIAM Symposium on\nDiscrete Algorithms, pages 735-744, 2007\n[19] N. Nisan and A. Ronen, Algorithmic mechanism design. 
In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, pages 129-140, 1999.
[20] A. Ronen and R. Talisman, Towards generic low payment mechanisms for decentralized task allocation. In Proceedings of the 7th International IEEE Conference on E-Commerce Technology, 2005.
[21] K. Talwar, The price of truth: frugality in truthful mechanisms. In Proceedings of the 20th International Symposium on Theoretical Aspects of Computer Science, 2003.
[22] W. Vickrey, Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
Betting Boolean-Style: A Framework for Trading in Securities Based on Logical Formulas

ABSTRACT
We develop a framework for trading in compound securities: financial instruments that pay off contingent on the outcomes of arbitrary statements in propositional logic. Buying or selling securities, which can be thought of as betting on or against a particular future outcome, allows agents both to hedge risk and to profit (in expectation) on subjective predictions. A compound securities market allows agents to place bets on arbitrary boolean combinations of events, enabling them to more closely achieve their optimal risk exposure, and enabling the market as a whole to more closely achieve the social optimum. The tradeoff for allowing such expressivity is in the complexity of the agents' and auctioneer's optimization problems. We develop and motivate the concept of a compound securities market, presenting the framework through a series of formal definitions and examples. We then analyze in detail the auctioneer's matching problem. We show that, with n events, the matching problem is co-NP-complete in the divisible case and Σp2-complete in the indivisible case. We show that the latter hardness result holds even under severe language restrictions on bids. With log n events, the problem is polynomial in the divisible case and NP-complete in the indivisible case. We briefly discuss matching algorithms and tractable special cases.

1. INTRODUCTION
Securities markets effectively allow traders to place bets on the outcomes of uncertain future propositions. Examples include stock markets like NASDAQ, options markets like CBOE [17], futures markets like CME [30], other derivatives markets, insurance markets, political stock markets [11, 12], sports betting markets [7, 13, 32], horse racing markets [33], idea futures markets [16], decision markets [14] and even market games [4, 24, 25].
The economic value of securities markets is two-fold. First, they allow traders to hedge risk, or to insure against undesirable outcomes. For example, the owner of a stock might buy a put option (the right to sell the stock at a particular price) in order to insure against a stock downturn. Or the owner of a house may purchase an insurance contract to hedge against unforeseen damage to the house. Second, securities markets allow traders to speculate, or to obtain a subjective expected profit when market prices do not reflect their assessment of the likelihood of future outcomes. For example, a trader might buy a call option if he believes that the likelihood is high that the price of the underlying stock will go up, regardless of risk exposure to changes in the stock price. Because traders stand to earn a profit if they can make effective probability assessments, prices in financial markets often yield very accurate aggregate forecasts of future events [10, 29, 27, 28].
Real securities markets have complex payoff structures with various triggers. However, these can all be modeled as collections of more basic or atomic Arrow-Debreu securities [1, 8, 20]. One unit of one Arrow-Debreu security pays off one dollar if and only if (iff) a corresponding binary event occurs; it pays nothing if the event does not occur. So, for example, one unit of a security denoted Acme100 might pay $1 iff Acme's stock is above $100 on January 4, 2004. An Acme stock option as it would be defined on a financial exchange can be thought of as a portfolio of such atomic securities.1
In this paper, we develop and analyze a framework for trading in compound securities markets with payoffs contingent on arbitrary logical combinations of events, including conditionals.
For example, given binary events A, B, and\nC, one trader might bid to buy three units of a security\ndenoted A \u2227 \u00afB \u2228 C that pays off $1 iff the compound event\nA \u2227 \u00afB \u2228 C occurs for thirty cents each. Another trader may\nbid to sell six units of a security A|C that pays off $1 iff\nA occurs for fifty-five cents each, conditional on event C\noccurring, meaning that the transaction is revoked if C does\nnot occur (i.e., no payoff is given and the price of the\nsecurity is refunded) [5]. Bids may also be divisible, meaning\nthat bidders are willing to accept less than the requested\nquantity, or indivisible, meaning that bids must be fulfilled\neither completely or not at all. Given a set of such bids,\nthe auctioneer faces a complex matching problem to decide\nwhich bids are accepted for how many units at what price.\nTypically, the auctioneer seeks to take on no risk of its own,\nonly matching up agreeable trades among the bidders, but\nwe also consider alternative formulations where the\nauctioneer acts as a market maker willing to accept some risk.\nWe examine the computational complexity of the\nauctioneer\"s matching problem. Let the length of the description\nof all the available securities be O(n). With n events, the\nmatching problem is co-NP-complete in the divisible case\nand \u03a3p\n2-complete in the indivisible case. This \u03a3p\n2-complete\nhardness holds even when the bidding language is\nsignificantly restricted. With log n events, the problem is\npolynomial in the divisible case and NP-complete in the indivisible\ncase.\nSection 2 presents some necessary background\ninformation, motivation, and related work. Section 3 formally\ndescribes our framework for compound securities, and defines\nthe auctioneer\"s matching problem. Section 4 briefly\ndiscusses natural algorithms for solving the matching problem.\nSection 5 proves our central computational complexity\nresults. 
Section 6 discusses the possibility of tractable special cases. Section 7 concludes with a summary and some ideas of future directions.
2. PRELIMINARIES
2.1 Background and notation
Imagine a world where there are only two future uncertain events of any consequence: (1) the event that one's house is struck by lightning by December 31, 2003, denoted struck, and (2) the event that Acme's stock price goes above $100 by January 4, 2004, denoted acme100. In this simple world there are four possible future states, all possible combinations of the binary events' outcomes:
struck ∧ acme100,
struck ∧ ¬acme100,
¬struck ∧ acme100,
¬struck ∧ ¬acme100.
Hedging risk can be thought of as an action of moving money between various possible future states. For example, insuring one's house transfers money from future states where struck is not true to states where it is. Selling a security denoted acme100 (that pays off $1 iff the event acme100 occurs) transfers money from future states where Acme's price is above $100 on January 4 to states where it's not. Speculating is also an act of transferring money between future states, though usually associated with maximizing expected return rather than reducing risk. For example, betting on a football team moves money from the "team loses" state to the "team wins" state. In practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [18].
All possible future outcomes form a state space Ω, consisting of mutually exclusive and exhaustive states ω ∈ Ω. Often a more natural way to think of possible future outcomes is as an event space A of linearly independent events A ∈ A that may overlap arbitrarily.
1 Technically, an option is a portfolio of infinitely many atomic securities, though it can be approximately modeled with a finite number.
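The correspondence between n events and a state space of size 2^n can be made concrete in a few lines. This is our own illustration (the variable names are ours) using the two-event world above:

```python
from itertools import product

events = ["struck", "acme100"]
# Each state is one assignment of outcomes to the events; n events -> 2^n states.
states = [dict(zip(events, outcome))
          for outcome in product([True, False], repeat=len(events))]
assert len(states) == 2 ** len(events)   # 4 states in the two-event world

# An atomic security on an event pays $1 exactly in the states where it holds.
payoff_acme100 = [1 if s["acme100"] else 0 for s in states]
assert sum(payoff_acme100) == 2          # acme100 holds in half of the states
```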
So in our toy example\nstruck \u2227 acme100 is one of the four disjoint states, while\nstruck is one of the two events. Note that a set of n\nlinearly independent events defines a state space \u2126 of size 2n\nconsisting of all possible combinations of event outcomes.\nConversely, any state space \u2126 can be factored into log |\u2126|\nevents.\nSuppose that A exhaustively covers all meaningful future\noutcomes (i.e., covers all eventualities that agents may wish\nto hedge against and/or speculate upon). Then the\nexistence of 2n\nlinearly independent securities-called a\ncomplete market-allows agents to distribute their wealth\narbitrarily across future states.2\nAn agent may create any hedge\nor speculation it desires. Under classical conditions, agents\ntrading in a complete market form an equilibrium where risk\nis allocated Pareto optimally. If the market is incomplete,\nmeaning it consists of fewer than 2n\nlinearly independent\nsecurities, then in general agents cannot construct arbitrary\nhedges and equilibrium allocations may be nonoptimal [1,\n8, 19, 20].\nIn real-world settings, the number of meaningful events n\nis large and thus the number of securities required for\ncompleteness is intractable. No truly complete market exists or\nwill ever exist. One motivation behind compound securities\nmarkets is to provide a mechanism that supports the most\ntransfer of risk using the least number of transactions\npossible. Compound securities allow a high degree of expressivity\nin constructing bids. The tradeoff for increased expressivity\nis increased computational complexity, from both the\nbidder\"s and auctioneer\"s point of view.\n2.2 Related work\nThe quest to reduce the number of financial instruments\nrequired to support an optimal allocation of risk dates to\nArrow\"s original work [1]. The requirement stated above of\nonly 2n\nlinearly-independent securities is itself a reduction\nfrom the most straightforward formulation. 
In an economy with k standard goods, the most straightforward complete market contains k · 2^n securities, each paying off in one good under one state realization. Arrow [1] showed that a market where securities and goods are essentially separated, with 2^n securities paying off in a single numeraire good plus k spot markets in the standard goods, is also complete. For our purposes, we need consider only the securities market.
Varian [34] shows that a complete market can be constructed using fewer than 2^n securities, replacing the missing securities with options. Still, the number of linearly independent financial instruments (securities plus options) must be 2^n to guarantee completeness.
Though the requirement of 2^n financial instruments cannot be relaxed if one wants to guarantee completeness in all circumstances, Pennock and Wellman [26] explore conditions under which a smaller securities market may be operationally complete, meaning that its equilibrium is Pareto optimal with respect to the agents involved, even if the market contains fewer than 2^n securities. The authors show that in some cases the market can be structured and compacted in analogy to Bayesian network representations of joint probability distributions [23]. They show that, if all agents' risk-neutral independencies agree with the independencies encoded in the market structure, then the market is operationally complete. For collections of agents all with constant absolute risk aversion, agreement on Markov independencies is sufficient.
Bossaerts, Fine, and Ledyard [2] develop a mechanism they call combined-value trading (CVT) that allows traders to order an arbitrary portfolio of securities in one bid, rather than breaking up the order into a sequence of bids on individual securities.
2 By linearly independent securities, we mean that the vectors of payoffs in all future states of these securities are linearly independent.
If the portfolio order is accepted, all of\nthe implied trades on individual securities are executed\nsimultaneously, thus eliminating so-called execution risk that\nprices will change in the middle of a planned sequence of\norders. The authors conduct laboratory experiments showing\nthat, even in thin markets where ordinary sequential\ntrading breaks down, CVT supports efficient pricing and\nallocation. Note that CVT differs significantly from compound\nsecurities trading. CVT allows instantaneous trading of any\nlinear combination of securities, while compound securities\nallow more expressive securities that can encode nonlinear\nboolean combinations of events. For example, CVT may\nallow an agent to order securities A and B in a bundle that\npays off as a linear combination of A and B,3\nbut CVT won\"t\nallow the construction of a compound security A \u2227 B that\npays off $1 iff both A and B occur, or a compound security\nA|B .\nRelated to CVT are combinatorial auctions [6, 21] and\nexchanges [31], mechanisms that have recently received quite\na bit of attention in the economics and computer science\nliteratures. Combinatorial auctions allow bidders to place\ndistinct values on all possible bundles of goods rather than just\non individual goods. In this way bidders can express\nsubstitutability and complementarity relationships among goods\nthat cannot be expressed in standard parallel or sequential\nauctions. Compound securities differ from combinatorial\nauctions in concept and complexity. Compound securities\nallow bidders to construct an arbitrary bet on any of the 22n\npossible compound events expressible as logical functions of\nthe n base events, conditional on any other of the 22n\n\ncompound events. 
Agents optimize based on their own subjective probabilities and risk attitude (and in general, their beliefs about other agents' beliefs and utilities, ad infinitum). The central auctioneer problem is identifying arbitrage opportunities: that is, to match bets together without taking on any risk. Combinatorial auctions, on the other hand, allow bids on any of the 2^n bundles of n goods. Typically, uncertainty, and thus risk, is not considered. The central auctioneer problem is to maximize social welfare. Also note that the problems lie in different complexity classes. While clearing a combinatorial auction is polynomial in the divisible case and NP-complete in the indivisible case, matching in a compound securities market is NP-complete in the divisible case and Σp2-complete in the indivisible case. In fact, even the problem of deciding whether two bids on compound securities match, even in the divisible case, is NP-complete (see Section 5.2).
There is a sense in which it is possible to translate our matching problem for compound securities into an analogous problem for clearing two-sided combinatorial exchanges [31] of exponential size. Specifically, if we regard payoff in a particular state as a good, then compound securities can be viewed as bundles of (fractional quantities of) such goods. The material balance constraint facing the combinatorial auctioneer corresponds to a restriction that the compound-security auctioneer be disallowed from assuming any risk. Note that this translation is not at all useful for addressing the compound-security matching problem, as the resulting combinatorial exchange has an exponential number of goods.
Hanson [15] develops a market mechanism called a market scoring rule that is especially well suited for allowing bets on a combinatorial number of outcomes.
3 Specifically, one unit of each pays off $2 iff both A and B occur, $1 iff A or B occurs (but not both), and $0 otherwise.
The mechanism\nmaintains a joint probability distribution over all 2n\nstates,\neither explicitly or implicitly using a Bayesian network or\nother compact representation. At any time any trader who\nbelieves the probabilities are wrong can change any part\nof the distribution by accepting a lottery ticket that pays\noff according to a scoring rule (e.g., the logarithmic\nscoring rule) [35], as long as that trader also agrees to pay off\nthe most recent person to change the distribution. In the\nlimit of a single trader, the mechanism behaves like a\nscoring rule, suitable for polling a single agent for its\nprobability distribution. In the limit of many traders, it produces a\ncombined estimate. Since the market essentially always has\na complete set of posted prices for all possible outcomes,\nthe mechanism avoids the problem of thin markets, or\nilliquidity, that necessarily plagues any market containing an\nexponential number of alternative investments. The\nmechanism requires a patron to pay off the final person to change\nthe distribution, though the patron\"s payment is bounded.\nThough Hanson offers some initial suggestions, several open\nproblems remain, including efficient methods for\nrepresenting and updating the joint distribution and recording traders\npositions and portfolios, without resorting to exponential\ntime and space algorithms.\nFagin, Halpern, and Megiddo [9] give a sound and\ncomplete axiomatization for deciding whether sets of\nprobabilistic inequalities are consistent. Bids for compound securities\ncan be thought of as expressions of probabilistic\ninequalities: for example, a bid to buy A \u2227 B at price 0.3 is a\nstatement that the probability of A \u2227 B is greater than 0.3.\nIf a set of single-unit bids correspond to a set of inconsistent\nprobabilistic inequalities, then there is a match. However,\nbecause they are interested in a much different framework,\nFagin et al. 
do not consider several complicating factors\nspecific to the securities market framework: namely, handling\nmulti-unit or fractional bid quantities, identifying matches,\nchoosing among multiple matches, and optimizing based on\nprobabilities and risk attitudes. We address these issues\nbelow.\n146\n3. FRAMEWORK FOR TRADING IN\nCOMPOUND SECURITIES\n3.1 High-level description\nCommon knowledge among agents is the set of events A.\nThere are no predefined securities. Instead, agents offer to\nbuy or sell securities of their own design that pay off\ncontingent on logical combinations of events and event negations.\nCombination operators may include conjunctions,\ndisjunctions, and conditionals.\nFor all practical purposes, it is impossible for agents to\ntrade in enough securities (2n\n) to form a complete market,\nso agents must devise their best tradeoff between the\nnumber and complexity of their bids, and the extent to which\ntheir risks are hedged and desirable bets are placed. In its\nmost general form, the problem is game-theoretic in nature,\nsince what an agent should offer depends on what it believes\nother agents will accept. At the other end of the spectrum, a\nsimplified version of the problem is to optimize bids only on\ncurrently available securities at current prices. In between\nthese two formulations are other possible interesting\noptimization problems. Approximation algorithms might also\nbe pursued.\nThe auctioneer faces a nontrivial problem of matching buy\nand sell orders to maximize surplus (the cash and securities\nleft over after accepted bids are fulfilled). For example, offers\nto sell A1A2 at $0.2 and A1\n\u00afA2 at $0.1 can match with an\noffer to buy A1 at $0.4, with surplus $0.1. Or an offer to\nsell A1 at $0.3 can match with an offer to buy A1A2 at\n$0.4, with surplus $0.1 in cash and A1\n\u00afA2 in securities. In\ngeneral, a single security might qualify for multiple matches,\nbut only one can be transacted. 
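The first matching example above can be verified numerically using the payoff-vector semantics defined in Section 3.2.2 (an order for q units of φ|ψ at price p pays q · 1[ψ](1[φ] − p) in each state, and the auctioneer's surplus is the negated sum of accepted payoffs). The sketch below is our own; the helper names are ours.

```python
# Our own check of the matching example: sell A1&A2 at 0.2 and sell A1&~A2
# at 0.1 match a buy of A1 at 0.4, leaving the auctioneer $0.1 in every state.
states = [(a1, a2) for a1 in (True, False) for a2 in (True, False)]

def payoff(q, p, phi, psi=lambda s: True):
    """Payoff vector of an order for q units of phi|psi at price p."""
    return [q * (1 if psi(s) else 0) * ((1 if phi(s) else 0) - p)
            for s in states]

orders = [
    payoff(+1, 0.4, lambda s: s[0]),               # buy  1 unit of A1       at 0.4
    payoff(-1, 0.2, lambda s: s[0] and s[1]),      # sell 1 unit of A1 & A2  at 0.2
    payoff(-1, 0.1, lambda s: s[0] and not s[1]),  # sell 1 unit of A1 & ~A2 at 0.1
]
surplus = [-sum(col) for col in zip(*orders)]      # auctioneer's surplus vector
assert all(abs(x - 0.1) < 1e-9 for x in surplus)   # $0.1 surplus in every state
```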
So the auctioneer must find the optimal set of matches that maximizes surplus, which could be measured in a number of ways. Again, approximation algorithms might be considered. In another formulation, the auctioneer functions as a market maker willing to take on a certain amount of risk.

Informally, our motivation is to provide a mechanism that allows a very high degree of expressivity in placing hedges and bets, and is also capable of approximating the optimal (complete-market) allocation of risk, trading off the number and complexity of securities and transactions needed.

3.2 Formal description

3.2.1 Securities

We use φ and ψ to denote arbitrary boolean formulas, or logical combinations of events in A. We denote securities ⟨φ|ψ⟩. Securities pay off $1 if and only if (iff) φ and ψ are true, pay off $0 iff φ is false and ψ is true, and are canceled (i.e., any price paid is refunded) iff ψ is false. We define T ≡ Ω to be the event true and F ≡ ∅ to be the event false. We abbreviate ⟨φ|T⟩ as ⟨φ⟩.

3.2.2 Orders

Agents place orders, denoted o, of the form "q units of ⟨φ|ψ⟩ at price p per unit", where q > 0 implies a buy order and q < 0 implies a sell order. We assume agents submitting buy (sell) orders will accept any price p* ≤ p (p* ≥ p). We distinguish between divisible and indivisible orders. Agents submitting divisible orders will accept any quantity αq where 0 < α ≤ 1. Agents submitting indivisible orders will accept only exactly q units, or none at all. We believe that, given the nature of what is being traded (state-contingent dollars), most agents will be content to trade using divisible orders.

Every order o can be translated into a payoff vector Υ across all states ω ∈ Ω.
The payoff Υ^ω in state ω is q · 1_{ω∈ψ} (1_{ω∈φ} − p), where 1_{ω∈E} equals 1 iff ω ∈ E and zero otherwise. Recall that the 2^n states correspond to the 2^n possible combinations of event outcomes. We index multiple orders with subscripts (e.g., o_i and Υ_i). Let the set of all orders be O and the set of all corresponding payoff vectors be P.

Example 1. (Translating orders into payoff vectors) Suppose that |A| = 3. Consider an order to buy two units of ⟨A2 ∨ A3 | A1⟩ at price $0.8. The corresponding payoff vector is:

Υ = ⟨Υ^{A1A2A3}, Υ^{A1A2¬A3}, Υ^{A1¬A2A3}, . . . , Υ^{¬A1¬A2¬A3}⟩
  = 2 · ⟨0.2, 0.2, 0.2, −0.8, 0, 0, 0, 0⟩. □

3.2.3 The matching problem

The auctioneer's task, called the matching problem, is to determine which orders to accept among all orders o ∈ O. Let α_i be the fraction of order o_i accepted by the auctioneer (in the indivisible case, α_i must be either 0 or 1; in the divisible case, α_i can range from 0 to 1). If α_i = 0, then order o_i is considered rejected and no transactions take place concerning this order. For accepted orders (α_i > 0), the auctioneer receives the money lost by bidders and pays out the money won by bidders, so the auctioneer's payoff vector is:

Υ_auc = Σ_{Υ_i ∈ P} −α_i Υ_i.

We also call the auctioneer's payoff vector the surplus vector, since it is the (possibly state-contingent) money left over after all accepted orders are filled.

Assume that the auctioneer wants to choose a set of orders so that it is guaranteed not to lose any money in any future state, but that the auctioneer does not necessarily insist on obtaining a positive benefit from the transaction (i.e., the auctioneer is content to break even).

Definition 1.
(Matching problem, indivisible case) Given a set of orders O, does there exist α_i ∈ {0, 1} with at least one α_i = 1 such that

∀ω, Υ^ω_auc ≥ 0?

In other words, does there exist a nonempty subset of orders that the auctioneer can accept without risk? □

If ∀ω, Υ^ω_auc = c where c is nonnegative, then the surplus left over after processing this match is c dollars. Let m = min_ω [Υ^ω_auc]. In general, processing a match leaves m dollars in cash and Υ^ω_auc − m in state-contingent dollars, which can then be translated into securities.

Example 2. (Indivisible order matching) Suppose |A| = 2. Consider an order to buy one unit of ⟨A1A2⟩ at price $0.4 and an order to sell one unit of ⟨A1⟩ at price $0.3. The corresponding payoff vectors are:

Υ1 = ⟨Υ1^{A1A2}, Υ1^{A1¬A2}, Υ1^{¬A1A2}, Υ1^{¬A1¬A2}⟩ = ⟨0.6, −0.4, −0.4, −0.4⟩
Υ2 = ⟨−0.7, −0.7, 0.3, 0.3⟩

The auctioneer's payoff vector (the negative of the componentwise sum of the above two vectors) is:

Υ_auc = −Υ1 − Υ2 = ⟨0.1, 1.1, 0.1, 0.1⟩.

Since all components are nonnegative, the two orders match. The auctioneer can process both orders, leaving a surplus of $0.1 in cash and one unit of ⟨A1¬A2⟩ in securities. □

Now consider the divisible case, where orders can be partially filled.

Definition 2. (Matching problem, divisible case) Given a set of orders O, does there exist α_i ∈ [0, 1] with at least one α_i > 0 such that

∀ω, Υ^ω_auc ≥ 0? □

Example 3. (Divisible order matching) Suppose |A| = 2. Consider an order to sell one unit of ⟨A1⟩ at price $0.5, an order to buy one unit of ⟨A1A2 | A1 ∨ A2⟩ at price $0.5, and an order to buy one unit of ⟨A1 | ¬A2⟩ at price $0.4.
The corresponding payoff vectors are:

Υ1 = ⟨Υ1^{A1A2}, Υ1^{A1¬A2}, Υ1^{¬A1A2}, Υ1^{¬A1¬A2}⟩ = ⟨−0.5, −0.5, 0.5, 0.5⟩
Υ2 = ⟨0.5, −0.5, −0.5, 0⟩
Υ3 = ⟨0, 0.6, 0, −0.4⟩

It is clear by inspection that no nonempty subset of whole orders constitutes a match: in all cases where α_i ∈ {0, 1} (other than all α_i = 0), at least one state sums to a positive amount (negative for the auctioneer). However, if α1 = α2 = 3/5 and α3 = 1, then the auctioneer's payoff vector is:

Υ_auc = −(3/5)Υ1 − (3/5)Υ2 − Υ3 = ⟨0, 0, 0, 0.1⟩,

constituting a match. The auctioneer can process 3/5 of the first and second orders, and all of the third order, leaving a surplus of 0.1 units of ⟨¬A1¬A2⟩. In this example, a divisible match exists even though an indivisible match is not possible; we examine the distinction in detail in Section 5, where we separate the two matching problems into distinct complexity classes. □

The matching problems defined above are decision problems: the task is only to show the existence or nonexistence of a match. However, there may be multiple matches from which the auctioneer can choose. Sometimes the choices are equivalent from the auctioneer's perspective; alternatively, an objective function can be used to find an optimal match according to that objective.

Example 4. (Auctioneer alternatives I) Suppose |A| = 2. Consider an order to sell one unit of ⟨A1⟩ at price $0.7, an order to sell one unit of ⟨A2⟩ at price $0.7, an order to buy one unit of ⟨A1A2⟩ at price $0.4, an order to buy one unit of ⟨A1¬A2⟩ at price $0.4, and an order to buy one unit of ⟨¬A1A2⟩ at price $0.4.
The corresponding payoff vectors are:

Υ1 = ⟨−0.3, −0.3, 0.7, 0.7⟩
Υ2 = ⟨−0.3, 0.7, −0.3, 0.7⟩
Υ3 = ⟨0.6, −0.4, −0.4, −0.4⟩
Υ4 = ⟨−0.4, 0.6, −0.4, −0.4⟩
Υ5 = ⟨−0.4, −0.4, 0.6, −0.4⟩

Consider the indivisible case. The auctioneer could choose to accept bids 1, 3, and 4 together, or the auctioneer could choose to accept bids 2, 3, and 5 together. Both constitute matches, and in fact both yield identical payoffs (Υ_auc = ⟨0.1, 0.1, 0.1, 0.1⟩, or $0.1 in cash) for the auctioneer. □

Example 5. (Auctioneer alternatives II) Suppose |A| = 2. Consider an order to sell two units of ⟨A1⟩ at price $0.6, an order to buy one unit of ⟨A1A2⟩ at price $0.3, and an order to buy one unit of ⟨A1¬A2⟩ at price $0.5. The corresponding payoff vectors are:

Υ1 = ⟨−0.4, −0.4, 0.6, 0.6⟩
Υ2 = ⟨0.7, −0.3, −0.3, −0.3⟩
Υ3 = ⟨−0.5, 0.5, −0.5, −0.5⟩

Consider the divisible case. The auctioneer could choose to accept one unit each of all three bids, yielding a payoff to the auctioneer of $0.2 in cash (Υ_auc = ⟨0.2, 0.2, 0.2, 0.2⟩). Alternatively, the auctioneer could choose to accept 4/3 units of bid 1, and one unit each of bids 2 and 3, yielding a payoff to the auctioneer of 1/3 units of security ⟨A1⟩. Both choices constitute matches (in fact, accepting any number of units of bid 1 between 1 and 4/3 can be part of a match), though depending on the auctioneer's objective, one choice might be preferred over another. For example, if the auctioneer believes that A1 is very likely to occur, it may prefer to accept 4/3 units of bid 1. □

There are many possible criteria for the auctioneer to decide among matches, all of which seem reasonable in some circumstances. One natural quantity to maximize is the volume of trade among bidders; another is the auctioneer's utility, either with or without the arbitrage constraint.

Definition 3.
(Trade maximization problem) Given a set of indivisible (divisible) orders O, choose α_i ∈ {0, 1} (α_i ∈ [0, 1]) to maximize

Σ_i α_i q_i,

under the constraint that ∀ω, Υ^ω_auc ≥ 0. □

Another reasonable variation is to maximize the total percent of orders filled, or Σ_i α_i, under the same (risk-free) constraint that ∀ω, Υ^ω_auc ≥ 0.

Definition 4. (Auctioneer risk-free utility-maximization problem) Let the auctioneer's subjective probability for each state ω be Pr(ω), and let the auctioneer's utility for y dollars be u(y). Given a set of indivisible (divisible) orders O, choose α_i ∈ {0, 1} (α_i ∈ [0, 1]) to maximize

Σ_{ω∈Ω} Pr(ω) u(Υ^ω_auc),

under the constraint that ∀ω, Υ^ω_auc ≥ 0. □

Definition 5. (Auctioneer standard utility-maximization problem) Let the auctioneer's subjective probability for each state ω be Pr(ω), and let the auctioneer's utility for y dollars be u(y). Given a set of indivisible (divisible) orders O, choose α_i ∈ {0, 1} (α_i ∈ [0, 1]) to maximize

Σ_{ω∈Ω} Pr(ω) u(Υ^ω_auc). □

This last objective function drops the risk-free (arbitrage) constraint. In this case, the auctioneer is a market maker with beliefs about the likelihood of outcomes, and the auctioneer may actually lose money in some outcomes. Still other variations and other optimization criteria seem reasonable, including social welfare, etc. It also seems reasonable to suppose that the surplus be shared among bidders and the auctioneer, rather than retained solely by the auctioneer.
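On small instances, the trade-maximization objective of Definition 3 can be evaluated by exhaustive search over α ∈ {0, 1}^m. The sketch below is our own illustration, not the paper's algorithm; it interprets Σ_i α_i q_i as counting unsigned units traded, and runs the search on the five orders of Example 4:

```python
from itertools import product

# Per-unit bidder payoff vectors for Example 4's five orders
# (states ordered A1A2, A1~A2, ~A1A2, ~A1~A2).
payoffs = [(-0.3, -0.3, 0.7, 0.7),   # 1: sell 1 <A1> at $0.7
           (-0.3, 0.7, -0.3, 0.7),   # 2: sell 1 <A2> at $0.7
           (0.6, -0.4, -0.4, -0.4),  # 3: buy 1 <A1 A2> at $0.4
           (-0.4, 0.6, -0.4, -0.4),  # 4: buy 1 <A1 ~A2> at $0.4
           (-0.4, -0.4, 0.6, -0.4)]  # 5: buy 1 <~A1 A2> at $0.4
units = [1, 1, 1, 1, 1]              # |q_i| for each order

best_volume, best_sets = 0, []
for alphas in product([0, 1], repeat=len(payoffs)):
    # Risk-free constraint: the auctioneer's surplus must be nonnegative
    # in every state (rounding suppresses floating-point noise).
    surplus = [round(-sum(a * v[w] for a, v in zip(alphas, payoffs)), 10)
               for w in range(4)]
    if min(surplus) >= 0:
        volume = sum(a * u for a, u in zip(alphas, units))
        if volume > best_volume:
            best_volume, best_sets = volume, [alphas]
        elif volume == best_volume and volume > 0:
            best_sets.append(alphas)

print(best_volume, best_sets)
```

Both matches named in Example 4 come out as volume-maximizing (three units each); the search also turns up a third risk-free set, accepting the three buy orders alone, which likewise trades three units. Whichever objective is adopted, there remains the question of how the resulting surplus is shared among bidders and the auctioneer.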
This is analogous to choosing a common transaction price in a double auction (e.g., the midpoint between the bid and ask prices), rather than the buyer paying the bid price and the seller receiving the ask price, with the difference going to the auctioneer. The problem becomes more complicated when dividing surplus securities, in part because they are valued differently by different agents. Formulating reasonable sharing rules and examining the resulting incentive properties seems a rich and promising avenue for further investigation.

4. MATCHING ALGORITHMS

The straightforward algorithm for solving the divisible matching problem is linear programming; we set up an appropriate linear program in Section 5.1. The straightforward algorithm for solving the indivisible matching problem is integer programming. With n events, to set up the appropriate linear or integer programs, simply writing out the payoff vectors in the straightforward way requires O(2^n) space.

There is some hope that specialized algorithms that exploit structure among bids can perform better in terms of average-case time and space complexity. For example, in some cases matches can be identified using logical reduction techniques, without writing down the full payoff vectors. So a match between the following bids:

• sell 1 of ⟨A1A2⟩ at $0.2
• sell 1 of ⟨A1¬A2⟩ at $0.1
• buy 1 of ⟨A1⟩ at $0.4

can be identified by reducing the first two bids to an equivalent offer to sell ⟨A1⟩ at $0.3 that clearly matches with the third bid. Formalizing a logical-reduction algorithm for matching, or other algorithms that can exploit special structure among the bids, is a promising avenue for future work.

5. THE COMPUTATIONAL COMPLEXITY OF MATCHING

In this section we examine the computational complexity of the auctioneer's matching problem. Here n refers to the problem's input size, which includes descriptions of all of the buy and sell orders.
We also assume that n bounds the number of base securities.

We consider four cases based on two parameters:

1. Whether to allow divisible or indivisible orders.
2. The number of securities. We consider two possibilities:
(a) O(log n) base securities, yielding a polynomial number of states.
(b) An unlimited number of base securities, yielding an exponential number of states.

We show the following results.

Theorem 1. The matching problem is
1. computable in polynomial time for O(log n) base securities with divisible orders.
2. co-NP-complete for unlimited securities with divisible orders.
3. NP-complete for O(log n) base securities with indivisible orders.
4. Σ^p_2-complete for unlimited securities with indivisible orders.

5.1 Small number of securities with divisible orders

We can build a linear program based on Definition 2. We have variables α_i. For each i, we have

0 ≤ α_i ≤ 1,

and for each state ω in Ω we have the constraint

Υ^ω_auc = Σ_i −α_i Υ^ω_i ≥ 0.

Given these constraints we maximize

Σ_i α_i.

A set of orders has a matching exactly when Σ_i α_i > 0. With O(log n) base securities, we have |Ω| bounded by a polynomial, so we can solve this linear program in polynomial time.

Note that one might argue that one should maximize some linear combination of the −Υ^ω_i's to maximize the surplus. However, this approach will not find matchings that have zero surplus.

5.2 Large number of securities with divisible orders

With unlimited base securities, the linear program given in Section 5.1 has an exponential number of constraint equations. Each constraint is short to describe and easily computable given ω.

Let m ≤ n be the total number of buy and sell orders. By the theory of linear programming, an upper bound on the objective function can be forced by a collection of m + 1 constraints.
So if no matching exists there must exist m + 1 constraints that force all the α_i to zero. In nondeterministic polynomial time we can guess these constraints and solve the reduced linear program. This shows that matching is in co-NP.

To show co-NP-completeness we reduce the NP-complete problem of Boolean formula satisfiability to the nonexistence of a matching. Fix a formula φ. Let the base securities be the variables of φ and consider the single security ⟨φ⟩ with a buy order at price 0.5. If the formula φ is satisfiable then there is some state with payoff 0.5, and no fractional unit of the security ⟨φ⟩ is a matching. If the formula φ is not satisfiable then every state has an auctioneer's payoff of 0.5, and a single unit of the security ⟨φ⟩ is a matching.

One could argue that if the formula φ is not satisfiable then no fully rational buyer would want to buy ⟨φ⟩ for a cost of 0.5. We can get around this problem by adding auxiliary base securities, A and B, and defining two securities

τ = (φ ∧ A) ∨ (¬A ∧ B)
τ′ = (φ ∧ A) ∨ (¬A ∧ ¬B)

with separate buy orders of 0.5 on each.

If φ were satisfiable then in the state corresponding to the satisfying assignment with both A and B true, τ and τ′ both have an auctioneer's payoff of −0.5, so no matching, not even a divisible one, can exist.

If φ were not satisfiable then one unit of each would be a matching, since in every state at least one of τ or τ′ is false.

5.3 Small number of securities with indivisible orders

This case is easily seen to be in NP: just nondeterministically guess a nonempty subset S of orders and check for each state ω in Ω that

Υ^ω_auc = Σ_{i∈S} −Υ^ω_i ≥ 0.

Since |Ω| and |S| are bounded by a polynomial in n, the verification can be done in polynomial time.

To show that matching is NP-complete we reduce
the NP-complete problem EXACT COVER BY 3-SETS (X3C) to a matching of securities.

The input to X3C consists of a set X and a collection C of 3-element subsets of X. The input (X, C) is in X3C if C contains an exact cover of X, i.e., there is a subcollection C′ of C such that every element of X occurs in exactly one member of C′. Karp showed that X3C is NP-complete.

Suppose we have an instance (X, C) with X = {x1, . . . , x3q} and C = {c1, . . . , cm}. We set Ω = {e1, . . . , e3q, r, s} and define securities labelled ⟨φ1⟩, . . . , ⟨φm⟩, ⟨ψ1⟩, . . . , ⟨ψq⟩ and ⟨τ⟩, as follows:

• Security ⟨φi⟩ is true in state r, and is true in state ek if k is not in ci.
• Security ⟨ψj⟩ is true only in state s.
• Security ⟨τ⟩ is true in each state ek but not in r or s.

We have buy orders on each ⟨φi⟩ and ⟨ψj⟩ security for 0.5 − 1/(8q), and a buy order on ⟨τ⟩ for 0.5.

We claim that a matching exists if and only if (X, C) is in X3C.

If (X, C) is in X3C, let C′ be the subcollection that covers each element of X exactly once. Note that |C′| = q. We claim the collection consisting of ⟨φi⟩ for each ci in C′, every ⟨ψj⟩, and ⟨τ⟩ has a matching. In each state ek the auctioneer's payoff is

(.5 − 1/(8q)) + (q − 1)(−.5 − 1/(8q)) + q(.5 − 1/(8q)) − .5 = .5 − 2q · 1/(8q) = .25 ≥ 0.

In states r and s the auctioneer's payoffs are

−q(.5 + 1/(8q)) − q(−.5 + 1/(8q)) + .5 = .5 − 2q · 1/(8q) = .25 ≥ 0.

Suppose now that (X, C) is not in X3C but there is a matching. Consider the number q′ of the ⟨φi⟩ in that matching and q′′ the number of ⟨ψj⟩ in the matching.
Since a matching requires a nonempty subset of the orders and ⟨τ⟩ by itself is not a matching, we have q′ + q′′ > 0. We have three cases.

q′ > q′′: In state r, the auctioneer's payoff is

−q′(.5 + 1/(8q)) − q′′(−.5 + 1/(8q)) + .5 ≤ −(q′ + q′′) · 1/(8q) < 0.

q′′ > q′: In state s, the auctioneer's payoff is

−q′′(.5 + 1/(8q)) − q′(−.5 + 1/(8q)) + .5 ≤ −(q′ + q′′) · 1/(8q) < 0.

q′ = q′′: Consider the set C′ consisting of the ci where ⟨φi⟩ is in the matching. There must be some state ek not in any of the ci, or C′ would be an exact cover. The auctioneer's payoff in ek is at most

−q′(.5 + 1/(8q)) − q′′(−.5 + 1/(8q)) ≤ −(q′ + q′′) · 1/(8q) < 0.

5.4 Large number of securities with indivisible orders

The class Σ^p_2 is the second level of the polynomial-time hierarchy. A language L is in Σ^p_2 if there exists a polynomial p and a set A in P such that x is in L if and only if there is a y with |y| = p(|x|) such that for all z with |z| = p(|x|), (x, y, z) is in A. The class Σ^p_2 contains both NP and co-NP. Unless the polynomial-time hierarchy collapses (which is considered unlikely), a problem that is complete for Σ^p_2 is not contained in NP or co-NP.

We will show that computing a matching is Σ^p_2-complete, and remains so even for quite restricted types of securities, and hence is (likely) neither in NP nor co-NP. While it may seem that being NP-complete or co-NP-complete is hard enough, there are further practical consequences of being outside of NP and co-NP. If the matching problem were in NP, one could use heuristics to search for and verify a match if it exists; even if such heuristics fail in the worst case, they may succeed for most examples in practice. Similarly, if the matching problem were in co-NP, one might hope to at least heuristically rule out the possibility of matching.
But for problems outside of NP or co-NP, there is no framework for verifying that a heuristically derived answer is correct. Less formally, for NP- (or co-NP-) complete problems, you have to be lucky; for Σ^p_2-complete problems, you can't even tell if you've been lucky.

We note that the existence of a matching is in Σ^p_2: we use y to choose a subset of the orders and z to represent a state ω, with (x, y, z) in A if the set of orders has a total nonnegative auctioneer's payoff in state ω.

We prove a stronger theorem, which implies that matching is Σ^p_2-hard. Let S1, . . . , Sn be a set of securities, where each security Si has cost ci and pays off pi whenever formula Ci is satisfied. The 0–1-matching problem asks whether one can, by accepting either 0 or 1 of each security, guarantee a worst-case payoff strictly larger than the total cost.

Theorem 2. The 0–1-matching problem is Σ^p_2-complete. Furthermore, the problem remains Σ^p_2-complete under the following two special cases:

1. For all i, Ci is a conjunction of 3 base events (or their negations), pi = 1, and ci = cj for all i and j.
2. For all i, Ci is a conjunction of at most 2 base securities (or their negations).

These hardness results hold even if there is a promise that no subset of the securities guarantees a worst-case payoff identical to their cost.

To prove Theorem 2, we reduce from the standard Σ^p_2 problem that we call T∃∀BF. Given a boolean formula φ with variables x1, . . . , xn and y1, . . . , yn, is the following fully quantified formula true:

∃x1 . . . ∃xn ∀y1 . . . ∀yn φ(x1, . . . , xn, y1, . . . , yn)?

The problem remains Σ^p_2-complete when φ(x1, . . . , xn, y1, . . .
, yn) is restricted to being a disjunction of conjunctions of at most 3 variables (or their negations), e.g.,

φ(x1, . . . , xn, y1, . . . , yn) = (x1 ∧ ¬x3 ∧ y2) ∨ (x2 ∧ y3 ∧ y7) ∨ · · · .

This form, without the bound on the conjunction size, is known as disjunctive normal form (DNF); the restriction to conjunctions of 3 variables is 3-DNF.

We reduce T∃∀BF to finding a matching. For the simplest reduction, we consider the matching problem where one has a set of Arrow-Debreu securities whose payoff events are conjunctions of the base securities, or their negations. The auctioneer has the option of accepting either 0 or 1 of each of the given securities.

We first reduce to the case where the payoff events are conjunctions of arbitrarily many base events (or their negations). By a standard trick we can reduce the number of base events in each conjunction to 3, and with a slight twist we can even ensure that all securities have the same price as well as the same payoff. Finally, we show that the problem remains hard even if only conjunctions of 2 variables are allowed, though with securities that deviate slightly from Arrow-Debreu securities in that they may have varying, non-unit payoffs.

5.4.1 The basic reduction

Before describing the securities, we give some intuition. The T∃∀BF problem may be viewed as a game between a selector and an adversary. The selector sets the xi variables, and then the adversary sets the yi variables so as to falsify the formula φ. We can view the 0–1-matching problem as one in which the auctioneer is a buyer who buys securities corresponding to disjunctions of the base events, and then the adversary sets the values of the base events to minimize the payoff from the securities.

We construct our securities so that the optimal buying strategy is to buy n expensive securities along with a set of cheap securities of negligible cost (for some cases we can modify the construction so that all securities have the same cost).
The total cost of the securities will be just under 1, and each security pays off 1, so the adversary must ensure that none of the securities pays off. Each expensive security forces the adversary to set some variable xi to a particular value to prevent the security from paying off; this corresponds to setting the xi variables in the original game. The cheap securities are such that preventing every one of these securities from paying off is equivalent to falsifying φ in the original game.

Among the technical difficulties we face is how to prevent the buyer from buying conflicting securities, e.g., one that forces xi = 0 and the other that forces xi = 1, allowing for a trivial arbitrage. Secondly, for our analysis we need to ensure that a trader cannot spend more to get more, say by spending 1.5 for a set of securities with the property that at least 2 securities pay off under all possible events.

For each of the variables {xi}, {yi} in φ, we add a corresponding base security (with the same labels). For each existential variable xi we add additional base securities, ni and zi. We also include a base security Q.

In our basic construction, each expensive security costs C and each cheap security costs ε; all securities pay off 1. We require that Cn + ε · (number of cheap securities) < 1 and C(n + 1) > 1. That is, one can buy n expensive securities and all of the cheap securities for less than 1, but one cannot buy n + 1 expensive securities for less than 1. We at times refer to a security by its payoff clause.

Remark: We may loosely think of ε as 0. However, this would allow one to buy a security for nothing that pays (in the worst case) nothing. By making ε > 0, we can show it hard to distinguish portfolios that guarantee a positive profit from those that risk a positive loss.
Setting ε > 0 will also allow us to show hardness results for the case where all securities have the same cost.

For 1 ≤ i ≤ n, we have two expensive securities with payoff clauses (¬xi ∧ Q) and (¬ni ∧ Q), and two cheap securities with payoff clauses (xi ∧ ¬zi) and (ni ∧ ¬zi).

For each clause C ∈ φ, we convert every negated variable ¬xi into ni and add the conjunction z1 ∧ · · · ∧ zn. Thus, for a clause C = (x2 ∧ ¬x7 ∧ ¬y5) we construct a cheap security SC with payoff clause

(z1 ∧ · · · ∧ zn ∧ x2 ∧ n7 ∧ ¬y5).

Finally, we have a cheap security with payoff clause (¬Q).

We now argue that a matching exists iff

∃x1 . . . ∃xn ∀y1 . . . ∀yn φ(x1, . . . , xn, y1, . . . , yn).

We do this by successively constraining the buyer and the adversary, eliminating behaviors that would cause the other player to win. The resulting reasonable strategies correspond exactly to the game version of T∃∀BF.

First, observe that if the adversary sets all of the base securities to false (0), then only the (¬Q) security will pay off. Thus, no buyer can buy more than n expensive securities and guarantee a profit. The problem is thus whether one can buy n expensive securities and all the cheap securities, so that for any setting of the base events at least one security will pay off.

Clearly, the adversary must make Q hold, or the (¬Q) security will pay off. Next, we claim that for each i, 1 ≤ i ≤ n, the auctioneer must buy at least one of the (¬xi ∧ Q) and (¬ni ∧ Q) securities. This follows from the fact that if the adversary sets xi, ni and zi to be false, and every other base event to be true, then only the (¬xi ∧ Q) and (¬ni ∧ Q) securities will pay off.
As no auctioneer can buy more than n expensive securities, it must therefore buy exactly one of (¬xi ∧ Q) or (¬ni ∧ Q), for each i, 1 ≤ i ≤ n. For the rest of the analysis, we assume that the auctioneer follows this constraint.

Suppose that the buyer buys (¬xi ∧ Q). Then the adversary must set xi to be true (since it must set Q to be true), or the security will pay off. It must then set zi to be true or (xi ∧ ¬zi) will pay off. Since the buyer doesn't buy (¬ni ∧ Q) (by the above constraint), and all the other securities pay the same or less when ni is made false, we can assume without loss of generality that the adversary sets ni to be false. Similarly, if the buyer buys (¬ni ∧ Q), then the adversary must set ni and zi to be true, and we can assume without loss of generality that the adversary sets xi to be false. Note that the adversary must in all cases set each zi event to be true.

Summarizing the preceding argument, there is an exact correspondence between the rational strategies of the buyer and settings for the xi variables forced on the adversary. Furthermore, the adversary is also constrained to set the variables Q, z1, . . . , zn to be true, and without loss of generality may be assumed to set ni = ¬xi. Under these constraints, those securities not corresponding to clauses in φ are guaranteed not to pay off.

The adversary also decides the value of the y1, . . . , yn base events. Recall that for each clause C ∈ φ there is a corresponding security SC. Given that zi is true and ni = ¬xi (without loss of generality), it follows from the construction of SC that the setting of the yi's causes SC to pay off iff it satisfies C.
This establishes the reduction from T∃∀BF to the matching problem, when the securities are constrained to be a conjunction of polynomially many base events or their negations.

5.4.2 Reducing to 3-variable conjunctions

There are standard methods for reducing DNF formulae to 3-DNF formulae, which are trivially modifiable to our securities framework; we include the reduction for completeness. Given a security S whose payoff clause is

C = (v1 ∧ v2 ∧ · · · ∧ vk)

(variable negations are irrelevant to this discussion), cost c, and payoff p, introduce a new auxiliary variable, w, and replace the security with two securities, S1 and S2, with payoff clauses

C1 = (v1 ∧ v2 ∧ w) and C2 = (¬w ∧ v3 ∧ · · · ∧ vk).

The securities both have payoff p, and their costs can be any positive values that sum to c. Note that at most one of the securities can pay off at a time. If only one security is bought, then the adversary can always set w so that it won't pay off; hence the auctioneer will buy either both or neither, for a total cost of c (here we use the fact that one is only allowed to buy either 0 or 1 shares of each security). Then, it may be verified that, given the ability to set w arbitrarily, the adversary can cause C to be unsatisfied iff it can cause both C1 and C2 to be unsatisfied. Hence, owning one share each of S1 and S2 is equivalent to owning one share of S.

Note that C1 has three variables and C2 has k − 1 variables. By applying the transformation successively, one obtains an equivalent set of securities, of polynomial size, whose payoff clauses have at most 3 variables.

We note that in the basic construction, all of the clauses with more than 3 variables are associated with cheap securities (cost ε).
Instead of subdividing costs, we can simply make all of the resulting securities have cost ε; the constraints on C and ε must reflect the new, larger number of cheap securities.

One can ensure that all of the payoff clauses have exactly 3 variables, with a similar construction. A security S with cost c, payoff p and defining clause (x ∧ y) can be replaced by securities S1 and S2 with cost c/2, payoff p and defining clauses (x ∧ y ∧ w) and (x ∧ y ∧ ¯w), where w is a new auxiliary variable. Essentially the same analysis as given above applies to this case. The case of single-variable payoff clauses is handled by two applications of this technique.

5.4.3 Reducing to equi-cost securities

By setting C and ε appropriately, one can ensure that in the basic reduction every security costs a polynomially bounded integer multiple of ε; call this ratio r. We now show how to reduce this case to the case where every security costs ε. Recall that the expensive securities have payoff clauses (¯xi ∧ Q) or (¯ni ∧ Q). Assume that security S has payoff clause (¯xi ∧ Q) (the other case is handled identically). Replace S with security S′, with payoff clause (¯xi ∧ Q ∧ w1) (w1, . . . , wr−1 are auxiliary variables; fresh variables are chosen for each clause), and also S1, . . . , Sr−1, with payoff clauses

(¯w1 ∧ w2), (¯w2 ∧ w3), . . . , (¯wr−2 ∧ wr−1), and (¯wr−1 ∧ ¯w1).

Clearly, buying none of the new securities is equivalent to not buying the original security. We show that buying all of the new securities is equivalent to buying the original security, and that buying a proper, nonempty subset of the securities is irrational.

We first note that if the buyer buys securities S1, . . . , Sr−1, then the adversary must set w1 to be true, or one of the securities will pay off.
To see this, note that if wi is set to false, then (¯wi ∧ wi+1) will be true unless wi+1 is set to false; thus, setting w1 to false forces the adversary to set wr−1 to false, causing the final clause to be true. Having set w1 true, the adversary can set w2, . . . , wr−1 to false, ensuring that none of the securities S1, . . . , Sr−1 pays out. If w1 is true, then (¯xi ∧ Q ∧ w1) is equivalent to (¯xi ∧ Q). So buying all of the replacement securities for ε each is equivalent to buying the original security for rε.

It remains to show that buying a proper, nonempty subset of the securities is irrational. If one doesn't buy S′, then the adversary can set the w variables so that none of S1, . . . , Sr−1 will pay off; any money spent on these securities is wasted. If one doesn't buy Sr−1, the adversary can set all w variables to false, in which case none of the purchased securities will pay off, regardless of the value of xi and Q. Similarly, if one doesn't buy Si, for 1 ≤ i ≤ r − 2, the adversary can set wi+1 to be true, all the other w variables to be false, and again there is no payoff, regardless of the value of xi and Q. Thus, buying a proper subset of these securities will not increase one's payoff.

We note that this reduction can be combined trivially with the reduction that ensures that all of the defining clauses have 3 or fewer variables.
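The behavior of the cyclic w-securities can likewise be verified by brute force. A small sketch (our own illustration, with r = 6) checks both halves of the argument: holding all r − 1 securities forces w1 to be true, and once w1 is true the adversary can avoid every payoff:

```python
from itertools import product

def cycle_clauses(r):
    """Payoff clauses (¬w1∧w2), ..., (¬w_{r-2}∧w_{r-1}), (¬w_{r-1}∧¬w1),
    over an assignment tuple (w1, ..., w_{r-1})."""
    clauses = [lambda w, i=i: (not w[i]) and w[i + 1] for i in range(r - 2)]
    clauses.append(lambda w: (not w[r - 2]) and (not w[0]))
    return clauses

r = 6
secs = cycle_clauses(r)

# If the buyer holds all r-1 securities, every setting with w1 false
# makes at least one of them pay off.
w1_forced = all(any(s(w) for s in secs)
                for w in product([False, True], repeat=r - 1) if not w[0])

# With w1 true and w2..w_{r-1} false, none of them pays off.
none_pay = not any(s((True,) + (False,) * (r - 2)) for s in secs)
```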
With a slightly messier argument, all of the defining clauses can be set up to have exactly 3 variables.

5.4.4 Reducing to clauses of at most 2 variables

If we allow securities to have variable payoffs and prices, we can reduce to the case where each security's payoff clause is a conjunction of at most 2 variables or their negations. Given a security S with payoff clause (X ∧ Y ∧ Z), cost c and payoff 1, we introduce fresh auxiliary variables, w1, w2 and w3 (new variables are used for each clause) and replace S with the following securities:

• Securities S1, S2 and S3, each with cost c/3 and payoff 1, with respective payoff clauses (X ∧ w1), (Y ∧ w2) and (Z ∧ w3).

• Securities S′1, . . . , S′6, each with cost ε4 and payoff 2ε4 − ε2, with payoff clauses

(w1 ∧ w2) (w1 ∧ w3) (w2 ∧ w3)
(¯w1 ∧ ¯w2) (¯w1 ∧ ¯w3) (¯w2 ∧ ¯w3)

Here, ε2 is a tiny positive quantity, described later. By a simple case analysis, we have the following.

Observations:
1. For any i, there exists a setting of w1, w2 and w3 such that of the S′ securities only S′i pays off.
2. For any setting of w1, w2 and w3, at least one of the S′ securities will pay off.
3. If w1, w2 and w3 are all false, three of the S′ securities (those with negated variables) will pay off.
4. Setting one of w1, w2 or w3 to be true, and the others to be false, will cause exactly one of the S′ securities to pay off.

By Observation 1, there is no point in buying a nonempty proper subset of the S′ securities: the adversary can ensure that none of the bought securities will pay off, and even if all the S′ securities pay off, it will not be sufficient to recoup the cost of buying a single S security. By Observation 2, if one buys all the S′ securities, one is guaranteed to almost make back one's investment (except for ε2), in which case, by Observations 3 and 4, the adversary's optimal strategy is to make exactly one of w1, w2 or w3 true.
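The "simple case analysis" behind Observations 1-4 can be checked mechanically over all eight settings of (w1, w2, w3). A short verification sketch (ours, not from the paper):

```python
from itertools import product

def sprime_payoffs(w):
    """Which of the six S' payoff clauses are satisfied by w = (w1, w2, w3)."""
    w1, w2, w3 = w
    return [w1 and w2, w1 and w3, w2 and w3,
            not w1 and not w2, not w1 and not w3, not w2 and not w3]

settings = list(product([False, True], repeat=3))

# Obs 1: for each i, some setting makes only S'_i pay off.
obs1 = all(any(pays == [j == i for j in range(6)]
               for pays in map(sprime_payoffs, settings))
           for i in range(6))
# Obs 2: every setting makes at least one S' pay off.
obs2 = all(any(sprime_payoffs(w)) for w in settings)
# Obs 3: the all-false setting makes the three negated clauses pay off.
obs3 = sum(sprime_payoffs((False, False, False))) == 3
# Obs 4: exactly one w true => exactly one S' pays off.
obs4 = all(sum(sprime_payoffs(w)) == 1 for w in settings if sum(w) == 1)
```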
We set C, ε and ε2 so that

Cn + ε(|cheap securities|) + ε2(|clauses|) < 1.

Thus, the accumulated losses of ε2 can never spell the difference between making a guaranteed profit and making no profit at all. Note also that by making ε2 positive we prevent the existence of break-even buying strategies in which the buyer only purchases S′ securities.

Summarizing the previous argument, we may assume without loss of generality that the buyer buys all of the S′ securities (for all clauses), and that for each clause the adversary sets exactly one of that clause's auxiliary variables w1, w2 or w3 to be true. For the rest of the discussion, we assume that the players follow these constraints.

We next claim that a rational buyer will either buy all of S1, S2 and S3, or none of them. If the buyer doesn't buy S1, then if the adversary makes w1 true and w2 and w3 false, neither S2 nor S3 will pay off, regardless of how the adversary sets X, Y and Z. Hence, there is no point in buying either S2 or S3 if one doesn't buy S1. Applying the same argument to S2 and S3 establishes the claim.

Clearly, buying none of S1, S2 and S3 has, up to negligible ε2 factors, the same price/payoff behavior as not buying S. We next argue that, subject to the established constraints on the players' behaviors, buying all of S1, S2 and S3 has the same price/payoff behavior (again ignoring ε2 factors) as buying S, regardless of how the adversary sets X, Y and Z. First, in both cases, the cost is c. If the adversary makes X, Y and Z true, then S pays off 1, and (assuming that exactly one of w1, w2 and w3 is true) exactly one of S1, S2 or S3 will pay off 1. If X is false, then S doesn't pay off, and the adversary can set w1 true (and w2 and w3 false), ensuring that none of S1, S2 and S3 pays off. The same argument holds if Y or Z is false.

6.
TRACTABLE CASES

The logical question to ask in light of these complexity results is whether further, more severe restrictions on the space of securities can enable tractable matching algorithms. Although we have not systematically explored the possibilities, the potential for useful tractable cases certainly exists.

Suppose, for example, that bids are limited to unit quantities of securities of the following two forms:

1. Disjunctions of positive events: A1 ∨ · · · ∨ Ak.
2. Single negative events: ¯Ai.

Let p be the price offered for a disjunction A1 ∨ · · · ∨ Ak, and qi the maximal price offered for the respective negated disjuncts. This disjunction bid is part of a match iff p + Σi qi ≥ k. Evaluating whether this condition is satisfied by a subset of bids is quite straightforward.

Although this example is contrived, its application is not entirely implausible. For example, the disjunctions may correspond to insurance customers, who want an insurance contract to cover all the potential causes of their asset loss. The atomic securities are sold by insurers, each of whom specializes in a different form of disaster cause.

7. CONCLUSIONS AND FUTURE DIRECTIONS

We have analyzed the computational complexity of matching for securities based on logical formulas. Many possible avenues for future work exist, including

1.
Analyzing the agents' optimization problem:

• How to choose quantities and bid/ask prices for a collection of securities to maximize one's expected utility, both for linear and nonlinear utility functions.

• How to choose securities; that is, deciding on what collection of boolean formulas to offer to trade, subject to constraints or penalties on the number or complexity of bids.

• How to make the above choices in a game-theoretically sound way, taking into account the choices of other traders, their reasoning about other traders, etc.

2. Although matching is likely intractable, are there good heuristics that achieve matches in many cases or approximate a matching?

3. Exploring sharing rules for dividing the surplus, and incentive properties of the resulting mechanisms.

4. We may consider a market to be in computational equilibrium if no computationally-bounded player can find a strategy that increases utility. With few exceptions [3, 22], little is known about computational equilibria. A natural question is to determine whether a market can achieve a computational equilibrium that is not a true equilibrium, and under what circumstances this may occur.

Acknowledgments

We thank Rahul Sami for his help with Section 5.4.4. We thank Rahul, Joan Feigenbaum and Robin Hanson for useful discussions.

8. REFERENCES

[1] Kenneth J. Arrow. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies, 31(2):91-96, 1964.
[2] Peter Bossaerts, Leslie Fine, and John Ledyard. Inducing liquidity in thin financial markets through combined-value trading mechanisms. European Economic Review, 46:1671-1695, 2002.
[3] Paul J. Brewer. Decentralized computation procurement and computational robustness in a smart market. Economic Theory, 13:41-92, 1999.
[4] Kay-Yut Chen, Leslie R. Fine, and Bernardo A. Huberman. Forecasting uncertain events with small groups.
In Third ACM Conference on Electronic Commerce (EC'01), pages 58-64, 2001.
[5] Bruno de Finetti. Theory of Probability: A Critical Introductory Treatment, volume 1. Wiley, New York, 1974.
[6] Sven de Vries and Rakesh V. Vohra. Combinatorial auctions: A survey. INFORMS Journal on Computing, 2003.
[7] Sandip Debnath, David M. Pennock, C. Lee Giles, and Steve Lawrence. Information incorporation in online in-game sports betting markets. In Fourth ACM Conference on Electronic Commerce (EC'03), 2003.
[8] Jacques H. Dreze. Market allocation under uncertainty. In Essays on Economic Decisions under Uncertainty, pages 119-143. Cambridge University Press, 1987.
[9] Ronald Fagin, Joseph Y. Halpern, and Nimrod Megiddo. A logic for reasoning about probabilities. Information and Computation, 87(1/2):78-128, 1990.
[10] Robert Forsythe and Russell Lundholm. Information aggregation in an experimental market. Econometrica, 58(2):309-347, 1990.
[11] Robert Forsythe, Forrest Nelson, George R. Neumann, and Jack Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[12] Robert Forsythe, Thomas A. Rietz, and Thomas W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[13] John M. Gandar, William H. Dare, Craig R. Brown, and Richard A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.
[14] Robin Hanson. Decision markets. IEEE Intelligent Systems, 14(3):16-19, 1999.
[15] Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1), 2002.
[16] Robin D. Hanson. Could gambling save science? Encouraging an honest consensus. Social Epistemology, 9(1):3-33, 1995.
[17] Jens Carsten Jackwerth and Mark Rubinstein. Recovering probability distributions from options prices.
Journal of Finance, 51(5):1611-1631, 1996.
[18] Joseph B. Kadane and Robert L. Winkler. Separating probability elicitation from utilities. Journal of the American Statistical Association, 83(402):357-363, 1988.
[19] Michael Magill and Martine Quinzii. Theory of Incomplete Markets, Vol. 1. MIT Press, 1996.
[20] Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[21] Noam Nisan. Bidding and allocation in combinatorial auctions. In Second ACM Conference on Electronic Commerce (EC'00), pages 1-12, 2000.
[22] Noam Nisan and Amir Ronen. Computationally feasible VCG mechanisms. In Second ACM Conference on Electronic Commerce (EC'00), pages 242-252, 2000.
[23] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[24] David M. Pennock, Steve Lawrence, C. Lee Giles, and Finn Årup Nielsen. The real power of artificial markets. Science, 291:987-988, February 9, 2001.
[25] David M. Pennock, Steve Lawrence, Finn Årup Nielsen, and C. Lee Giles. Extracting collective probabilistic forecasts from web games. In Seventh International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.
[26] David M. Pennock and Michael P. Wellman. Compact securities markets for Pareto optimal reallocation of risk. In Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 481-488, 2000.
[27] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Social Science Working Paper 986, California Institute of Technology, April 1997.
[28] Charles R. Plott. Markets as information gathering tools. Southern Economic Journal, 67(1):1-15, 2000.
[29] Charles R. Plott and Shyam Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56(5):1085-1118, 1988.
[30] R. Roll. Orange juice and weather.
American Economic Review, 74(5):861-880, 1984.
[31] Tuomas Sandholm, Subhash Suri, Andrew Gilpin, and David Levine. Winner determination in combinatorial auction generalizations. In First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), July 2002.
[32] Carsten Schmidt and Axel Werwatz. How accurate do markets predict the outcome of an event? The Euro 2000 soccer championships experiment. Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.
[33] Richard H. Thaler and William T. Ziemba. Anomalies: Parimutuel betting markets: Racetracks and lotteries. Journal of Economic Perspectives, 2(2):161-174, 1988.
[34] Hal R. Varian. The arbitrage principle in financial economics. Journal of Economic Perspectives, 1(2):55-72, 1987.
[35] Robert L. Winkler and Allan H. Murphy. Good probability assessors. Journal of Applied Meteorology, 7:751-758, 1968.
Combinatorial Agency

Abstract: Much recent research concerns systems, such as the Internet, whose components are owned and operated by different parties, each with his own selfish goal. The field of Algorithmic Mechanism Design handles the issue of private information held by the different parties in such computational settings. This paper deals with a complementary problem in such settings: handling the hidden actions that are performed by the different parties. Our model is a combinatorial variant of the classical principal-agent problem from economic theory. In our setting a principal must motivate a team of strategic agents to exert costly effort on his behalf, but their actions are hidden from him. Our focus is on cases where complex combinations of the efforts of the agents influence the outcome. The principal motivates the agents by offering to them a set of contracts, which together put the agents in an equilibrium point of the induced game. We present formal models for this setting, suggest and embark on an analysis of some basic issues, but leave many questions open.

1. INTRODUCTION

1.1 Background

One of the most striking characteristics of modern computer networks - in particular the Internet - is that different parts of it are owned and operated by different individuals, firms, and organizations. The analysis and design of protocols for this environment thus naturally needs to take into account the different selfish economic interests of the different participants. Indeed, the last few years have seen much work addressing this issue using game-theoretic notions (see [7] for an influential survey). A significant part of the difficulty stems from underlying asymmetries of information: one participant may not know everything that is known or done by another.
In particular, the field of algorithmic mechanism design [6] uses appropriate incentives to extract the private information from the participants.

This paper deals with the complementary lack of knowledge, that of hidden actions. In many cases the actual behaviors - actions - of the different participants are hidden from others and only influence the final outcome indirectly. "Hidden" here covers a wide range of situations including "not precisely measurable", "costly to determine", or even "non-contractible" - meaning that it cannot be formally used in a legal contract.

An example that was discussed in [3] is Quality of Service routing in a network: every intermediate link or router may exert a different amount of effort (priority, bandwidth, ...) when attempting to forward a packet of information. While the final outcome of whether a packet reached its destination is clearly visible, it is rarely feasible to monitor the exact amount of effort exerted by each intermediate link - how can we ensure that they really do exert the appropriate amount of effort? Many other complex resource allocation problems exhibit similar hidden actions, e.g., a task that runs on a collection of shared servers may be allocated, by each server, an unknown percentage of the CPU's processing power or of the physical memory. How can we ensure that the right combination of allocations is actually made by the different servers? A related class of examples concerns security issues: each link in a complex system may exert different levels of effort for protecting some desired security property of the system. How can we ensure that the desired level of collective security is obtained?

Our approach to this problem is based on the well studied principal-agent problem in economic theory: how can a principal motivate a rational agent to exert costly effort towards the welfare of the principal? The crux of the model is that the agent's action (i.e.
whether he exerts effort or not) is invisible to the principal and only the final outcome, which is probabilistic and also influenced by other factors, is visible. This problem is well studied in many contexts in classical economic theory and we refer the readers to introductory texts on economic theory such as [5], Chapter 14. The solution is based on the observation that a properly designed contract, in which the payments are contingent upon the final outcome, can influence a rational agent to exert the required effort.

In this paper we initiate a general study of handling combinations of agents rather than a single agent. While much work was already done on motivating teams of agents [4], our emphasis is on dealing with the complex combinatorial structure of dependencies between agents' actions. In the general case, each combination of efforts exerted by the n different agents may result in a different expected gain for the principal. The general question asks which conditional payments the principal should offer to which agents so as to maximize his net utility. In our setting, and unlike in previous work (see, e.g., [12]), the main challenge is to determine the optimal amount of effort desired from each agent.

This paper suggests models for and provides some interesting initial results about this combinatorial agency problem. We believe that we have only scratched the surface and leave many open questions, conjectures, and directions for further research.

We believe that this type of analysis may also find applications in regular economic activity. Consider for example a firm that sub-contracts a family of related tasks to many individuals (or other firms). It will often not be possible to exactly monitor the actual effort level of each sub-contractor (e.g., in cases of public-relations activities, consulting activities, or any activities that require cooperation between different sub-contractors.)
When the dependencies between the different subtasks are complex, we believe that combinatorial agency models can offer a foundation for the design of contracts with appropriate incentives.

It may also be useful to view our work as part of a general research agenda stemming from the fact that all types of economic activity are increasingly being handled with the aid of sophisticated computer systems. In general, in such computerized settings, complex scenarios involving multiple agents and goods can naturally occur, and they need to be algorithmically handled. This calls for the study of the standard issues in economic theory in new complex settings. The principal-agent problem is a prime example where such complex settings introduce new challenges.

1.2 Our Models

We start by presenting a general model: in this model each of n agents has a set of possible actions, and the combination of actions by the players results in some outcome, where this happens probabilistically. The main part of the specification of a problem in this model is a function that specifies this distribution for each n-tuple of agents' actions. Additionally, the problem specifies the principal's utility for each possible outcome, and for each agent, the agent's cost for each possible action. The principal motivates the agents by offering to each of them a contract that specifies a payment for each possible outcome of the whole project¹. Key here is that the actions of the players are non-observable and thus the contract cannot make the payments directly contingent on the actions of the players, but rather only on the outcome of the whole project.

Given a set of contracts, the agents will each optimize his own utility: i.e. will choose the action that maximizes his expected payment minus the cost of his action. Since the outcome depends on the actions of all players together, the agents are put in a game and are assumed to reach a Nash equilibrium².
The principal's problem, our problem in this paper, is of designing an optimal set of contracts: i.e. contracts that maximize his expected utility from the outcome, minus his expected total payment. The main difficulty is that of determining the required Nash equilibrium point.

In order to focus on the main issues, the rest of the paper deals with the basic binary case: each agent has only two possible actions ("exert effort" and "shirk") and there are only two possible outcomes ("success" and "failure"). It seems that this case already captures the main interesting ingredients³. In this case, each agent's problem boils down to whether to exert effort or not, and the principal's problem boils down to which agents should be contracted to exert effort. This model is still pretty abstract, and every problem description contains a complete table specifying the success probability for each subset of the agents who exert effort.

We then consider a more concrete model which concerns a subclass of problem instances where this exponential-size table is succinctly represented. This subclass will provide many natural types of problem instances. In this subclass every agent performs a subtask which succeeds with a low probability γ if the agent does not exert effort, and with a higher probability δ > γ if the agent does exert effort. The whole project succeeds as a deterministic Boolean function of the success of the subtasks. This Boolean function can now be represented in various ways. Two basic examples are the "AND" function, in which the project succeeds only if all subtasks succeed, and the "OR" function, which succeeds if any of the subtasks succeeds. A more complex example considers a communication network, where each agent controls a single edge, and success of the subtask means that a message is forwarded by that edge. Effort by the edge increases this success probability.
The complete project succeeds if there is a complete path of successful edges between a given source and sink. Complete definitions of the models appear in Section 2.

¹ One could think of a different model in which the agents have intrinsic utility from the outcome and payments may not be needed, as in [10, 11].
² In this paper our philosophy is that the principal can suggest a Nash equilibrium point to the agents, thus focusing on the best Nash equilibrium. One may alternatively study the worst-case equilibrium as in [12], or, alternatively, attempt modeling some kind of an extensive game between the agents, as in [9, 10, 11].
³ However, some of the more advanced questions we ask for this case can be viewed as instances of the general model.

1.3 Our Results

We address a host of questions and prove a large number of results. We believe that despite the large amount of work that appears here, we have only scratched the surface. In many cases we were not able to achieve the general characterization theorems that we desired and had to settle for analyzing special cases or proving partial results. In many cases, simulations reveal structure that we were not able to formally prove. We present here an informal overview of the issues that we studied, what we were able to do, and what we were not. The full treatment of most of our results appears only in the extended version [2], and only some are discussed, often with associated simulation results, in the body of the paper.

Our first object of study is the structure of the class of sets of agents that can be contracted for a given problem instance. Let us fix a given function describing success probabilities, fix the agents' costs, and let us consider the set of contracted agents for different values of the principal's associated value from success. For very low values, no agent will be contracted, since even a single agent's cost is higher than the principal's value.
For very high values, all agents will always be contracted, since the marginal contribution of an agent multiplied by the principal's value will overtake any associated payment. What happens for intermediate principal's values?

We first observe that there is a finite number of transitions between different sets as the principal's project value increases. These transitions behave very differently for different functions. For example, we show that for the AND function only a single transition occurs: for low enough values no agent will be contracted, while for higher values all agents will be contracted - there is no intermediate range for which only some of the agents are contracted. For the OR function, the situation is opposite: as the principal's value increases, the set of contracted agents increases one by one. We are able to fully characterize the types of functions for which these two extreme types of transition behavior occur. However, the structure of these transitions in general seems quite complex, and we were not able to fully analyze them even in simple cases like the Majority function (the project succeeds if a majority of subtasks succeeds) or very simple networks. We do have several partial results, including a construction with an exponential number of transitions.

During the previous analysis we also study what we term the price of unaccountability: how much is the social utility achieved under the optimal contracts worse than what could be achieved in the non-strategic case⁴, where the socially optimal actions are simply dictated by the principal? We are able to fully analyze this price for the AND function, where it is shown to tend to infinity as the number of agents tends to infinity. More general analysis remains an open problem.

Our analysis of these questions sheds light on the difficulty of the various natural associated algorithmic problems.
In particular, we observe that the optimal contract can be found in time polynomial in the explicit representation of the probability function. We prove a lower bound showing that the optimal contract cannot be found in a number of queries that is polynomial just in the number of agents, in a general black-box model. We also show that when the probability function is succinctly represented as a read-once network, the problem becomes #P-hard. The status of some algorithmic questions remains open, in particular that of finding the optimal contract for technologies defined by serial-parallel networks.

⁴ The non-strategic case is often referred to as the case with contractible actions, or the principal's first-best solution.

In a follow-up paper [1] we deal with equilibria in mixed strategies and show that the principal can gain from inducing a mixed-Nash equilibrium between the agents rather than a pure one. We also show cases where the principal can gain by asking agents to reduce their effort level, even when this effort comes for free. Both phenomena cannot occur in the non-strategic setting.

2. MODEL AND PRELIMINARIES

2.1 The General Setting

A principal employs a set of agents N of size n. Each agent i ∈ N has a possible set of actions Ai, and a cost (effort) ci(ai) ≥ 0 for each possible action ai ∈ Ai (ci : Ai → ℝ+).
Actions of the players are invisible, but the\nfinal outcome o is visible to him and to others (in particular\nthe court), and he may design enforceable contracts based\non the final outcome. Thus the contract for agent i is a\nfunction (payment) pi : O \u2192 ; again, we will also view pi\nas a function on \u0394(O).\nGiven this setting, the agents have been put in a game,\nwhere the utility of agent i under the vector of actions a =\n(a1, . . . , an) is given by ui(a) = pi(t(a))\u2212ci(ai). The agents\nwill be assumed to reach Nash equilibrium, if such\nequilibrium exists. The principal\"s problem (which is our problem\nin this paper) is how to design the contracts pi as to\nmaximize his own expected utility u(a) = v(t(a)) \u2212\nP\ni pi(t(a)),\nwhere the actions a1, . . . , an are at Nash-equilibrium. In the\ncase of multiple Nash equilibria we let the principal choose\nthe equilibrium, thus focusing on the best Nash\nequilibrium. A variant, which is similar in spirit to strong\nimplementation in mechanism design would be to take the worst\nNash equilibrium, or even, stronger yet, to require that only\na single equilibrium exists. Finally, the social welfare for\na \u2208 A is u(a) +\nP\ni\u2208N ui(a) = v(t(a)) \u2212\nP\ni\u2208N ci(ai).\n2.2 The Binary-Outcome Binary-Action Model\nWe wish to concentrate on the complexities introduced\nby the combinatorial structure of the success function t, we\nrestrict ourselves to a simpler setting that seems to focus\nmore clearly on the structure of t. A similar model was\nused in [12]. We first restrict the action spaces to have\nonly two states (binary-action): 0 (low effort) and 1 (high\neffort). The cost function of agent i is now just a scalar ci >\n0 denoting the cost of exerting high effort (where the low\neffort has cost 0). The vector of costs is c = (c1, c2, . . . 
. , cn), and we use the notation (t, c) to denote a technology in such a binary-outcome model.

[Footnote 5: The risk-averse case would obviously be a natural second step in the research of this model, as it has been for non-combinatorial scenarios.]

We then restrict the outcome space to have only two states (binary outcome): 0 (project failure) and 1 (project success). The principal's value for a successful project is given by a scalar v > 0 (the value of project failure is 0). We assume that the principal can pay the agents but not fine them (known as the limited liability constraint). The contract to agent i is thus now given by a scalar value pi ≥ 0 that denotes the payment that i gets in case of project success. If the project fails, the agent gets 0. When the lowest-cost action has zero cost (as we assume), this immediately implies that the participation constraint holds.

At this point the success function t becomes a function t : {0, 1}^n → [0, 1], where t(a1, . . . , an) denotes the probability of project success when players with ai = 0 do not exert effort and incur no cost, and players with ai = 1 do exert effort and incur a cost of ci.

As we wish to concentrate on motivating agents, rather than on the coordination between agents, we assume that more effort by an agent always leads to a better probability of success, i.e., that the success function t is strictly monotone. Formally, if we denote by a−i ∈ A−i the (n − 1)-dimensional vector of the actions of all agents excluding agent i, i.e., a−i = (a1, . . . , ai−1, ai+1, . . . , an), then a success function must satisfy:

∀i ∈ N, ∀a−i ∈ A−i: t(1, a−i) > t(0, a−i)

Additionally, we assume that t(a) > 0 for any a ∈ A (or equivalently, t(0, 0, . . . , 0) > 0).

Definition 1.
The marginal contribution of agent i, denoted by Δi, is the difference between the probability of success when i exerts effort and when he shirks:

Δi(a−i) = t(1, a−i) − t(0, a−i)

Note that since t is monotone, Δi is a strictly positive function. At this point we can already make some simple observations. The best action ai ∈ Ai of agent i can now be easily determined as a function of what the others do, a−i ∈ A−i, and his contract pi.

Claim 1. Given a profile of actions a−i, agent i's best strategy is ai = 1 if pi ≥ ci/Δi(a−i), and ai = 0 if pi ≤ ci/Δi(a−i). (In the case of equality the agent is indifferent between the two alternatives.)

As pi ≥ ci/Δi(a−i) if and only if ui(1, a−i) = pi · t(1, a−i) − ci ≥ pi · t(0, a−i) = ui(0, a−i), agent i's best strategy is to choose ai = 1 in this case.

This allows us to specify the contracts that are optimal for the principal, for inducing a given equilibrium.

Observation 1. The best contracts (for the principal) that induce a ∈ A as an equilibrium are pi = 0 for each agent i who exerts no effort (ai = 0), and pi = ci/Δi(a−i) for each agent i who exerts effort (ai = 1).

In this case, the expected utility of an agent i who exerts effort is ci · (t(1, a−i)/Δi(a−i) − 1), and 0 for an agent who shirks. The principal's expected utility is given by u(a, v) = (v − P) · t(a), where P is the total payment in case of success, given by P = Σ_{i|ai=1} ci/Δi(a−i).

We say that the principal contracts with agent i if pi > 0 (and ai = 1 in the equilibrium a ∈ A). The principal's goal is to maximize his utility given his value v, i.e., to determine the profile of actions a∗ ∈ A which gives the highest value of u(a, v) in equilibrium.
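The payments of Observation 1 and the principal's resulting utility are mechanical to compute. The following is a minimal Python sketch (the function and variable names are our own, not from the paper): the success function t is given explicitly as a map from sets of exerting agents to success probabilities.

```python
def optimal_contract_payments(t, costs, S, v):
    """Best contracts inducing the set S as an equilibrium (Observation 1):
    p_i = c_i / Delta_i for i in S, p_i = 0 for everyone else.

    t     : dict mapping frozensets of exerting agents to success probability
    costs : dict mapping agent -> cost of high effort
    S     : set of agents the principal wants to contract with
    v     : the principal's value for project success
    """
    S = frozenset(S)
    payments = {}
    for i in S:
        delta_i = t[S] - t[S - {i}]       # marginal contribution of agent i
        payments[i] = costs[i] / delta_i  # paid only in case of success
    total = sum(payments.values())        # P, total payment on success
    utility = (v - total) * t[S]          # u(a, v) = (v - P) * t(a)
    return payments, utility
```

On the 2-agent AND technology with γ = 1/4 and c = 1 (analyzed in Example 1 below), contracting with both agents at v = 6 pays each agent 8/3 and leaves the principal a utility of 3/8, matching the hand calculation.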
Choosing a ∈ A corresponds to choosing a set S of agents that exert effort (S = {i | ai = 1}). We call the set of agents S∗ that the principal contracts with in a∗ (S∗ = {i | a∗_i = 1}) an optimal contract for the principal at value v. We sometimes abuse notation and write t(S) instead of t(a), when S is exactly the set of agents that exert effort in a ∈ A.

A natural yardstick by which to measure this decision is the non-strategic case, i.e., when the agents need not be motivated but are rather controlled directly by the principal (who also bears their costs). In this case the principal will simply choose the profile a ∈ A that optimizes the social welfare (global efficiency), t(a) · v − Σ_{i|ai=1} ci. The worst ratio between the social welfare in this non-strategic case and the social welfare for the profile a ∈ A chosen by the principal in the agency case may be termed the price of unaccountability.

Given a technology (t, c), let S∗(v) denote the optimal contract in the agency case and let S∗_ns(v) denote an optimal contract in the non-strategic case, when the principal's value is v. The social welfare for value v when the set S of agents is contracted is t(S) · v − Σ_{i∈S} ci (in both the agency and non-strategic cases).

Definition 2. The price of unaccountability POU(t, c) of a technology (t, c) is defined as the worst ratio (over v) between the total social welfare in the non-strategic case and in the agency case:

POU(t, c) = sup_{v>0} [ t(S∗_ns(v)) · v − Σ_{i∈S∗_ns(v)} ci ] / [ t(S∗(v)) · v − Σ_{i∈S∗(v)} ci ]

In cases where several sets are optimal in the agency case, we take the worst set (i.e., the set that yields the lowest social welfare). When the technology (t, c) is clear from the context we will use POU to denote the price of unaccountability for technology (t, c).
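For anonymous technologies (formally introduced in Section 3, where t_m denotes the success probability when m agents exert effort), the supremum in Definition 2 can be approximated numerically by scanning values of v. The sketch below is our own code, not the paper's; by Lemma 1 the supremum is attained at a transition point, so the scan grid should include those points, and ties in the agency case are resolved to the worst (lowest-welfare) optimal contract as the definition requires.

```python
def pou_anonymous(tm, c, values):
    """Numerically approximate the price of unaccountability.

    tm     : list of success probabilities, tm[m] = success prob. when
             m agents exert effort (strictly increasing in m)
    c      : identical cost of effort
    values : grid of candidate values v > 0
    """
    n = len(tm) - 1

    def welfare(m, v):               # social welfare with m agents contracted
        return tm[m] * v - m * c

    def agency_welfare(v):
        # principal's utility with m contracted agents (Observation 1):
        # u_m = t_m * (v - m * c / (t_m - t_{m-1}))
        utils = [tm[0] * v] + [
            tm[m] * (v - m * c / (tm[m] - tm[m - 1])) for m in range(1, n + 1)
        ]
        best = max(utils)
        # worst optimal set: among (near-)optimal contracts take lowest welfare
        cands = [m for m, u in enumerate(utils) if u >= best - 1e-12]
        return min(welfare(m, v) for m in cands)

    def ns_welfare(v):               # non-strategic case optimizes welfare
        return max(welfare(m, v) for m in range(n + 1))

    return max(ns_welfare(v) / agency_welfare(v) for v in values)
```

On the 2-agent AND technology of Example 1 (γ = 1/4, c = 1), a grid containing the transition value v = 6 recovers the POU of 11/3 computed there.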
Note that the POU is at least 1 for any technology.

As we would like to focus on results that derive from properties of the success function, in most of the paper we will deal with the case where all agents have an identical cost c, that is, ci = c for all i ∈ N. We denote a technology (t, c) with identical costs by (t, c). For simplicity of presentation, we sometimes use the term technology function to refer to the success function of the technology.

2.3 Structured Technology Functions
In order to be more concrete, we will especially focus on technology functions whose structure can be described easily as being derived from independent agent tasks; we call these structured technology functions. This subclass will first give us some natural examples of technology functions, and will also provide a succinct and natural way to represent them.

In a structured technology function, each individual succeeds or fails in his own task independently. The project's success or failure depends, possibly in a complex way, on the set of successful sub-tasks. Thus we will assume a monotone Boolean function f : {0, 1}^n → {0, 1} which denotes whether the project succeeds as a function of the success of the n agents' tasks (and is not determined by any set of n − 1 agents). Additionally, there are constants 0 < γi < δi < 1, where γi denotes the probability of success for agent i if he does not exert effort, and δi (> γi) denotes the probability of success if he does exert effort. In order to reduce the number of parameters, we will restrict our attention to the case where γ1 = . . . = γn = γ and δ1 = . . . = δn = 1 − γ, thus leaving ourselves with a single parameter γ s.t. 0 < γ < 1/2.

Under this structure, the technology function t is defined by t(a1, . . . , an) being the probability that f(x1, . . . , xn) = 1, where the bits x1, . .
. , xn are chosen according to the following distribution: if ai = 0 then xi = 1 with probability γ and xi = 0 with probability 1 − γ; otherwise, i.e., if ai = 1, then xi = 1 with probability 1 − γ and xi = 0 with probability γ. We denote x = (x1, . . . , xn).

The question of the representation of the technology function is now reduced to that of representing the underlying monotone Boolean function f. In the most general case, the function f can be given by a general monotone Boolean circuit. An especially natural sub-class of functions in the structured technologies setting would be functions that can be represented as a read-once network: a graph with a given source and sink, where every edge is labeled by a different player. The project succeeds if the edges that belong to players whose tasks succeeded form a path between the source and the sink^6.

A few simple examples are in order here:

1. The AND technology: f(x1, . . . , xn) is the logical conjunction of the xi (f(x) = ∧_{i∈N} xi). Thus the project succeeds only if all agents succeed in their tasks. This is shown graphically as a read-once network in Figure 1(a). If m agents exert effort (Σ_i ai = m), then t(a) = tm = γ^(n−m) (1 − γ)^m. E.g., for two players, the technology function t(a1a2) = t_{a1+a2} is given by t0 = t(00) = γ², t1 = t(01) = t(10) = γ(1 − γ), and t2 = t(11) = (1 − γ)².

2. The OR technology: f(x1, . . . , xn) is the logical disjunction of the xi (f(x) = ∨_{i∈N} xi). Thus the project succeeds if at least one of the agents succeeds in his task. This is shown graphically as a read-once network in Figure 1(b). If m agents exert effort, then tm = 1 − γ^m (1 − γ)^(n−m). E.g.
for two players, the technology function is given by t(00) = 1 − (1 − γ)², t(01) = t(10) = 1 − γ(1 − γ), and t(11) = 1 − γ².

3. The Or-of-Ands (OOA) technology: f(x) is the logical disjunction of conjunctions. In the simplest case of equal-length clauses (denote by nc the number of clauses and by nl their length), f(x) = ∨_{j=1}^{nc} (∧_{k=1}^{nl} x^j_k). Thus the project succeeds if in at least one clause all agents succeed in their tasks. This is shown graphically as a read-once network in Figure 2(a). If mi agents on path i exert effort, then t(m1, . . . , m_{nc}) = 1 − Π_i (1 − γ^(nl−mi) (1 − γ)^mi). E.g., for four players, the technology function t(a^1_1 a^1_2, a^2_1 a^2_2) is given by t(00, 00) = 1 − (1 − γ²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = 1 − (1 − γ(1 − γ))(1 − γ²), and so on.

[Footnote 6: One may view this representation as directly corresponding to the project of delivering a message from the source to the sink in a real network of computers, with the edges being controlled by selfish agents.]

Figure 1: Graphical representations of (a) AND and (b) OR technologies.

Figure 2: Graphical representations of (a) OOA and (b) AOO technologies.

4. The And-of-Ors (AOO) technology: f(x) is the logical conjunction of disjunctions. In the simplest case of equal-length clauses (denote by nl the number of clauses and by nc their length), f(x) = ∧_{j=1}^{nl} (∨_{k=1}^{nc} x^j_k). Thus the project succeeds if at least one agent from each disjunctive clause succeeds in his task. This is shown graphically as a read-once network in Figure 2(b). If mi agents in clause i exert effort, then t(m1, . . . , m_{nc}) = Π_i (1 − γ^mi (1 − γ)^(nc−mi)). E.g.
for four players, the technology function t(a^1_1 a^1_2, a^2_1 a^2_2) is given by t(00, 00) = (1 − (1 − γ)²)², t(01, 00) = t(10, 00) = t(00, 01) = t(00, 10) = (1 − γ(1 − γ))(1 − (1 − γ)²), and so on.

5. The Majority technology: f(x) is 1 if a majority of the values xi are 1. Thus the project succeeds if most players succeed. The majority function, even on 3 inputs, cannot be represented by a read-once network, but is easily represented by a monotone Boolean formula, maj(x, y, z) = xy + yz + xz. In this case the technology function is given by t(000) = 3γ²(1 − γ) + γ³, t(001) = t(010) = t(100) = γ³ + 2(1 − γ)²γ + γ²(1 − γ), etc.

3. ANALYSIS OF SOME ANONYMOUS TECHNOLOGIES

A success function t is called anonymous if it is symmetric with respect to the players, i.e., t(a1, . . . , an) depends only on Σ_{i∈N} ai (the number of agents that exert effort). A technology (t, c) is anonymous if t is anonymous and the cost c is identical for all agents. Of the examples presented above, the AND, OR, and Majority technologies are anonymous (but not AOO and OOA). As for an anonymous t only the number of agents that exert effort is important, we can shorten the notation and denote tm = t(1^m, 0^(n−m)), Δm = t_{m+1} − tm, pm = c/Δ_{m−1}, and um = tm · (v − m · pm), for the case of identical cost c.

Figure 3: Number of agents in the optimal contract of the AND (left) and OR (right) technologies with 3 players, as a function of γ and v. AND technology: either 0 or 3 agents are contracted, and the transition value is monotonic in γ.
OR technology: for any γ we can see all transitions.

3.1 AND and OR Technologies
Let us start with a direct and full analysis of the AND and OR technologies for two players, for the case γ = 1/4 and c = 1.

Example 1. AND technology with two agents, c = 1, γ = 1/4: we have t0 = γ² = 1/16, t1 = γ(1 − γ) = 3/16, and t2 = (1 − γ)² = 9/16, thus Δ0 = 1/8 and Δ1 = 3/8. The principal has 3 possibilities: contracting with 0, 1, or 2 agents. Let us write down the expressions for his utility in these 3 cases:

• 0 agents: No agent is paid, and the principal's utility is u0 = t0 · v = v/16.
• 1 agent: This agent is paid p1 = c/Δ0 = 8 on success, and the principal's utility is u1 = t1(v − p1) = 3v/16 − 3/2.
• 2 agents: Each agent is paid p2 = c/Δ1 = 8/3 on success, and the principal's utility is u2 = t2(v − 2p2) = 9v/16 − 3.

Notice that the option of contracting with one agent is always inferior to contracting with either both or none, and will never be taken by the principal. The principal will contract with no agent when v < 6, with both agents when v > 6, and with either none or both for v = 6.

This should be contrasted with the non-strategic case, in which the principal completely controls the agents (and bears their costs) and thus simply optimizes globally. In this case the principal will make both agents exert effort whenever v ≥ 4. Thus, for example, for v = 6 the globally optimal decision (non-strategic case) would give a global utility of 6 · 9/16 − 2 = 11/8, while the principal's decision (in the agency case) would give a global utility of 3/8, giving a ratio of 11/3. It turns out that this is the worst price of unaccountability in this example, and it is obtained exactly at the transition point of the agency case, as we show below.

Example 2.
OR technology with two agents, c = 1, γ = 1/4: we have t0 = 1 − (1 − γ)² = 7/16, t1 = 1 − γ(1 − γ) = 13/16, and t2 = 1 − γ² = 15/16, thus Δ0 = 3/8 and Δ1 = 1/8. Let us write down the expressions for the principal's utility in these three cases:

• 0 agents: No agent is paid, and the principal's utility is u0 = t0 · v = 7v/16.
• 1 agent: This agent is paid p1 = c/Δ0 = 8/3 on success, and the principal's utility is u1 = t1(v − p1) = 13v/16 − 13/6.
• 2 agents: Each agent is paid p2 = c/Δ1 = 8 on success, and the principal's utility is u2 = t2(v − 2p2) = 15v/16 − 15.

Now contracting with one agent is better than contracting with none whenever v > 52/9 (and equivalent for v = 52/9), and contracting with both agents is better than contracting with one whenever v > 308/3 (and equivalent for v = 308/3); thus the principal will contract with no agent for 0 ≤ v ≤ 52/9, with one agent for 52/9 ≤ v ≤ 308/3, and with both agents for v ≥ 308/3.

In the non-strategic case, in comparison, the principal will make a single agent exert effort for v > 8/3, and the second one exert effort as well when v > 8.

It turns out that the price of unaccountability here is 19/13, and it is achieved at v = 52/9, which is exactly the transition point from 0 to 1 contracted agents in the agency case. It is no coincidence that in both the AND and OR technologies the POU is obtained at a v that is a transition point (see the full proof in [2]).

Lemma 1. For any given technology (t, c), the price of unaccountability POU(t, c) is obtained at some value v which is a transition point, of either the agency or the non-strategic case.

Proof sketch: We look at all transition points in both cases. For any value lower than the first transition point, 0 agents are contracted in both cases, and the social welfare ratio is 1.
Similarly, for any value higher than the last transition point, n agents are contracted in both cases, and the social welfare ratio is 1. Thus, we can focus on the interval between the first and last transition points. Between any pair of consecutive points, the social welfare ratio is a ratio of two linear functions of v (the optimal contracts are fixed on such a segment). We then show that for each segment, the supremum of the ratio is obtained at an end point of the segment (a transition point). As there are finitely many such points, the global supremum is obtained at the transition point with the maximal social welfare ratio. □

We already see a qualitative difference between the AND and OR technologies (even with 2 agents): in the first case either all agents are contracted or none, while in the second case, for some intermediate range of values v, exactly one agent is contracted. Figure 3 shows the same phenomena for AND and OR technologies with 3 players.

Theorem 1. For any anonymous AND technology [Footnote 7: an AND technology with any number of agents n, any γ, and any identical cost c]:

• there exists a value v∗ < ∞ [Footnote 8: v∗ is a function of n, γ, c] such that for any v < v∗ it is optimal to contract with no agent, for v > v∗ it is optimal to contract with all n agents, and for v = v∗ both contracts (0, n) are optimal.

• the price of unaccountability is obtained at the transition point of the agency case, and is

POU = (1/γ − 1)^(n−1) + (1 − γ/(1 − γ))

Proof sketch: For any fixed number of contracted agents, k, the principal's utility is a linear function in v, whose slope equals the success probability under k contracted agents. Thus, the optimal contract corresponds to the maximum over a set of linear functions. Let v∗ denote the point at which the principal is indifferent between contracting with 0 or n agents.
In [2] we show that at v∗, the principal's utility from contracting with 0 (or n) agents is higher than his utility when contracting with any number of agents k ∈ {1, . . . , n − 1}. As the number of contracted agents is monotonic non-decreasing in the value (due to Lemma 3), for any v < v∗ contracting with 0 agents is optimal, and for any v > v∗ contracting with n agents is optimal. This is true for both the agency and the non-strategic cases.

As in both cases there is a single transition point, the claim about the price of unaccountability for the AND technology is proved as a special case of Lemma 2 below. For the AND technology, t_{n−1}/t_0 = (1 − γ)^(n−1) γ / γ^n = (1/γ − 1)^(n−1) and t_{n−1}/t_n = (1 − γ)^(n−1) γ / (1 − γ)^n = γ/(1 − γ), and the expression for the POU follows. □

In [2] we present a general characterization of technologies with a single transition in the agency and the non-strategic cases, and provide a full proof of Theorem 1 as a special case. The property of a single transition occurs in both the agency and the non-strategic cases, where the transition occurs at a smaller value of v in the non-strategic case. Notice that the POU is not bounded across the AND family of technologies (for various n, γ), as POU → ∞ either if γ → 0 (for any given n ≥ 2) or n → ∞ (for any fixed γ ∈ (0, 1/2)).

Next we consider the OR technology and show that it exhibits all n transitions.

Theorem 2. For any anonymous OR technology, there exist finite positive values v1 < v2 < . . . < vn such that for any v s.t. vk < v < vk+1, contracting with exactly k agents is optimal (for v < v1, no agent is contracted, and for v > vn, all n agents are contracted).
For v = vk, the principal is indifferent between contracting with k − 1 or k agents.

Proof sketch: To prove the claim we define vk to be the value for which the principal is indifferent between contracting with k − 1 agents and contracting with k agents. We then show that for any k, vk < vk+1. As the number of contracted agents is monotonic non-decreasing in the value (due to Lemma 3), v1 < v2 < . . . < vn is a sufficient condition for the theorem to hold. □

The same behavior occurs in both the agency and the non-strategic cases. This characterization is a direct corollary of a more general characterization given in [2]. While in the AND technology we were able to fully determine the POU analytically, the OR technology is more difficult to analyze.

Open Question 1. What is the POU for OR with n > 2 agents? Is it bounded by a constant for every n?

We are only able to determine the POU of the OR technology for the case of two agents [2]. Even for the 2-agent case we already observe a qualitative difference between the POU in the AND and OR technologies.

Observation 2. While in the AND technology the POU for n = 2 is not bounded from above (for γ → 0), the highest POU in the OR technology with two agents is 2 (for γ → 0).

3.2 What Determines the Transitions?
Theorems 1 and 2 say that both the AND and OR technologies exhibit the same transition behavior (changes of the optimal contract) in the agency and the non-strategic cases. However, this is not true in general. In [2] we provide a full characterization of the sufficient and necessary conditions for general anonymous technologies to have a single transition and all n transitions.
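The contrasting transition behaviors of Theorems 1 and 2 are easy to reproduce numerically on small instances. The sketch below (our own code, not the paper's) builds the anonymous AND and OR success probabilities t_m from Section 2.3 and records which numbers of contracted agents are ever optimal as v varies; for n = 3 and γ = 1/4 the AND technology only ever jumps from 0 to 3 contracted agents, while the OR technology passes through every intermediate size.

```python
def tm_and(n, g):
    # anonymous AND: project succeeds iff all n tasks succeed
    return [g ** (n - m) * (1 - g) ** m for m in range(n + 1)]

def tm_or(n, g):
    # anonymous OR: project succeeds iff at least one task succeeds
    return [1 - g ** m * (1 - g) ** (n - m) for m in range(n + 1)]

def optimal_m(tm, c, v):
    # number of contracted agents maximizing the principal's utility
    # u_m = t_m * (v - m * c / (t_m - t_{m-1}))   (Observation 1)
    n = len(tm) - 1
    utils = [tm[0] * v] + [
        tm[m] * (v - m * c / (tm[m] - tm[m - 1])) for m in range(1, n + 1)
    ]
    return max(range(n + 1), key=lambda m: utils[m])

grid = [v / 2 for v in range(2, 6001)]   # v from 1 to 3000
and_orbit = {optimal_m(tm_and(3, 0.25), 1.0, v) for v in grid}
or_orbit = {optimal_m(tm_or(3, 0.25), 1.0, v) for v in grid}
```

The grid must extend far enough to the right to see the last OR transition (here, the transition to 3 contracted agents happens above v = 2373 for γ = 1/4, c = 1).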
We find that the conditions in the agency case are different from the ones in the non-strategic case.

We are able to determine the POU for any anonymous technology that exhibits a single transition in both the agency and the non-strategic cases (see the full proof in [2]).

Lemma 2. For any anonymous technology that has a single transition in both the agency and the non-strategic cases, the POU is given by

POU = 1 + t_{n−1}/t_0 − t_{n−1}/t_n

and it is obtained at the transition point of the agency case.

Proof sketch: Since the payments in the agency case are higher than in the non-strategic case, the transition point in the agency case occurs at a higher value than in the non-strategic case. Thus, there exists a region in which the optimal numbers of contracted agents in the agency and the non-strategic cases are 0 and n, respectively. By Lemma 1 the POU is obtained at a transition point. As the social welfare ratio is decreasing in v in this region, the POU is obtained at the higher value, that is, at the transition point of the agency case. The transition point in the agency case is the point at which the principal is indifferent between contracting with 0 and with n agents, v∗ = (c·n/(tn − t0)) · (tn/(tn − t_{n−1})). Substituting the transition point of the agency case into the POU expression yields the required expression:

POU = (v∗ · tn − c · n)/(v∗ · t0) = 1 + t_{n−1}/t_0 − t_{n−1}/t_n □

3.3 The MAJORITY Technology
The project under the MAJORITY function succeeds if the majority of the agents succeed in their tasks (see Section 2.3). We are unable to characterize the transition behavior of the MAJORITY technology analytically. Figure 4 presents the optimal number of contracted agents as a function of v and γ, for n = 5. The phenomena that we observe in this example (and others that we looked at) lead us to the following conjecture.

Conjecture 1.
For any Majority technology (any n, γ, and c), there exists l, 1 ≤ l ≤ ⌈n/2⌉, such that the first transition is from 0 to l agents, and then all the remaining n − l transitions exist. Moreover, for any fixed c and n: l = 1 when γ is close enough to 1/2; l is a non-increasing function of γ (with image {1, . . . , ⌈n/2⌉}); and l = ⌈n/2⌉ when γ is close enough to 0.

Figure 4: Simulation results showing the number of agents in the optimal contract of the MAJORITY technology with 5 players, as a function of γ and v. As γ decreases, the first transition is at a lower value and to a higher number of agents. For any sufficiently small γ, the first transition is to 3 = ⌈5/2⌉ agents, and for any sufficiently large γ, the first transition is to 1 agent. For any γ, the first transition is never to more than 3 agents, and after the first transition we see all following possible transitions.

4. NON-ANONYMOUS TECHNOLOGIES

In non-anonymous technologies (even with identical costs), we need to talk about the contracted set of agents and not only about the number of contracted agents. In this section we identify the sets of agents that can be obtained as the optimal contract for some v. These sets constitute the orbit of a technology.

Definition 3. For a technology t, a set of agents S is in the orbit of t if for some value v the optimal contract is exactly the set S of agents (where ties between different S's are broken according to a lexicographic order^9). The k-orbit of t is the collection of sets of size exactly k in the orbit.

Observe that in the non-strategic case the k-orbit of any technology with identical cost c is of size at most 1 (as all sets of size k have the same cost, only the one with the maximal probability can be on the orbit).
Thus, the orbit of any such technology in the non-strategic case is of size at most n + 1. We show that the picture in the agency case is very different.

A basic observation is that the orbit of a technology is actually an ordered list of sets of agents, where the order is determined by the following lemma.

[Footnote 9: This implies that there are no two sets with the same success probability in the orbit.]

Lemma 3. (Monotonicity lemma) For any technology (t, c), in both the agency and the non-strategic cases, the expected utility of the principal at the optimal contracts, the success probability of the optimal contracts, and the expected payment of the optimal contracts are all monotonically non-decreasing with the value.

Proof. Suppose the sets of agents S1 and S2 are optimal at v1 and v2 < v1, respectively. Let Q(S) denote the expected total payment to all agents in S in the case that the principal contracts with the set S and the project succeeds (for the agency case, Q(S) = t(S) · Σ_{i∈S} ci/(t(S) − t(S \ i)), while for the non-strategic case Q(S) = Σ_{i∈S} ci). The principal's utility is a linear function of the value, u(S, v) = t(S) · v − Q(S). As S1 is optimal at v1, u(S1, v1) ≥ u(S2, v1), and as t(S2) ≥ 0 and v1 > v2, u(S2, v1) ≥ u(S2, v2). We conclude that u(S1, v1) ≥ u(S2, v2); thus the utility is monotonic non-decreasing in the value.

Next we show that the success probability is monotonic non-decreasing in the value. S1 is optimal at v1, thus:

t(S1) · v1 − Q(S1) ≥ t(S2) · v1 − Q(S2)

S2 is optimal at v2, thus:

t(S2) · v2 − Q(S2) ≥ t(S1) · v2 − Q(S1)

Summing these two inequalities, we get that (t(S1) − t(S2)) · (v1 − v2) ≥ 0, which implies that if v1 > v2 then t(S1) ≥ t(S2).

Finally we show that the expected payment is monotonic non-decreasing in the value.
As S2 is optimal at v2 and t(S1) ≥ t(S2), we observe that:

t(S2) · v2 − Q(S2) ≥ t(S1) · v2 − Q(S1) ≥ t(S2) · v2 − Q(S1)

or equivalently, Q(S2) ≤ Q(S1), which is what we wanted to show.

4.1 AOO and OOA Technologies
We begin our discussion of non-anonymous technologies with two examples: the And-of-Ors (AOO) and Or-of-Ands (OOA) technologies. The AOO technology (see Figure 2) is composed of multiple OR-components that are ANDed together.

Theorem 3. Let h be an anonymous OR technology, and let f = ∧_{j=1}^{nc} h be the AOO technology that is obtained by a conjunction of nc of these OR-components on disjoint inputs. Then for any value v, an optimal contract contracts with the same number of agents in each OR-component. Thus, the orbit of f is of size at most nl + 1, where nl is the number of agents in h.

Part of the proof of the theorem (for the complete proof see [2]) is based on such an AOO technology being a special case of a more general family of technologies, in which disjoint anonymous technologies are ANDed together, as explained in the next section. We conjecture that a similar result holds for the OOA technology.

Conjecture 2. In an OOA technology which is a disjunction of the same anonymous paths (with the same number of agents, γ and c, but over disjoint inputs), for any value v the optimal contract is constructed from some number of fully contracted paths. Moreover, there exist v1 < . . . < v_{nl} such that for any v, vi ≤ v ≤ vi+1, exactly i paths are contracted.

We are unable to prove this in general, but can prove it for the case of an OOA technology with two paths of length two (see [2]).

4.2 Orbit Characterization
The AOO is an example of a technology whose orbit size is linear in its number of agents. If Conjecture 2 is true, the same holds for the OOA technology.
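For tiny technologies, the orbit of Definition 3 can be enumerated by brute force. The following sketch is our own helper (with ties broken by whichever optimal set is found first, rather than the paper's lexicographic rule): it scans a grid of values and collects the optimal contract at each, using the payments of Observation 1.

```python
from itertools import combinations

def orbit(t, costs, values):
    # Enumerate every set S that is the optimal contract for some v in
    # `values`.  `t` maps frozensets of exerting agents to success
    # probabilities; `costs` maps agent -> cost of effort.
    agents = sorted(costs)
    found = set()
    for v in values:
        best, best_u = None, float("-inf")
        for k in range(len(agents) + 1):
            for S in map(frozenset, combinations(agents, k)):
                # total payment on success: sum of c_i / Delta_i (Observation 1)
                pay = sum(costs[i] / (t[S] - t[S - {i}]) for i in S)
                u = t[S] * (v - pay)
                if u > best_u:
                    best, best_u = S, u
        found.add(best)
    return found
```

On the 2-agent OR technology of Example 2 (γ = 1/4, c = 1), a grid spanning its two transitions recovers an orbit with one contract of each size: the empty set, a single agent, and both agents.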
What can be said about the orbit size of a general non-anonymous technology? In the case of identical costs, it is impossible for all subsets of agents to be on the orbit. This follows from the observation that the 1-orbit (a single agent that exerts effort) is of size at most 1: only the agent that gives the highest success probability (when only he exerts effort) can be on the orbit (as he also needs to be paid the least). Nevertheless, we next show that the orbit can have exponential size.

A collection of sets of k elements (out of n) is admissible if every two sets in the collection differ by at least 2 elements (e.g., for k = 3, 123 and 234 cannot be together in the collection, but 123 and 345 can).

Theorem 4. Every admissible collection can be obtained as the k-orbit of some t.

Proof sketch: The proof is constructive. Let 𝒮 be some admissible collection of k-size sets. For each set S ∈ 𝒮 in the collection we pick ε_S, such that for any two admissible sets Si ≠ Sj, ε_{Si} ≠ ε_{Sj}. We then define the technology function t as follows: for any S ∈ 𝒮, t(S) = 1/2 − ε_S and ∀i ∈ S, t(S \ i) = 1/2 − 2ε_S. Thus, the marginal contribution of every i ∈ S is ε_S. Note that since 𝒮 is admissible, t is well defined, as for any two sets S, S′ ∈ 𝒮 and any two agents i, j, S \ i ≠ S′ \ j. For any other set Z, we define t(Z) in a way that ensures that the marginal contribution of each agent in Z is a very small ε (the technical details appear in the full version). This completes the definition of t.

We show that each admissible set S ∈ 𝒮 is optimal at the value v_S = ck/(2ε_S²). We first show that it is better than any other S′ ∈ 𝒮: at the value v_S = ck/(2ε_S²), the set S that corresponds to ε_S maximizes the utility of the principal. This result is obtained by taking the derivative of u(S, v). Therefore S yields a higher utility than any other S′ ∈ 𝒮.
We also\npick the range of the values \u03b5S to ensure that at vS, S is better than\nany other set S\u2032 \\ i s.t. S\u2032 \u2208 S. Now we are left to show\nthat at vS, the set S yields a higher utility than any other\nset Z \u2209 S. The construction of t(Z) ensures this, since the\nmarginal contribution of each agent in Z is such a small \u03b5\nthat the payment is too high for the set to be optimal. \u25a1\nIn [2] we present the full proof of the theorem, as well as\nthe full proofs of all other claims presented in this section\nwithout such a proof. We next show that there exist very\nlarge admissible collections.\nLemma 4. For any n \u2265 k, there exists an admissible\ncollection of k-size sets of size \u03a9(C(n, k)/n), where C(n, k)\ndenotes the binomial coefficient.\nProof sketch: The proof is based on an error correcting\ncode that corrects one bit. Such a code has distance \u2265\n3, and is thus admissible. It is known that there are such codes\nwith \u03a9(2^n/n) code words. To ensure that an appropriate\nfraction of these code words have weight k, we construct a\nnew code by XOR-ing each code word with a random word\nr. The properties of XOR ensure that the new code remains\nadmissible. Each code word is now uniformly mapped to\nthe whole cube, and thus its probability of having weight\nk is C(n, k)/2^n. Thus the expected number of weight-k words\nis \u03a9(C(n, k)/n), and for some r this expectation is achieved or\nexceeded. \u25a1\nFor k = n/2 we can construct an exponential size\nadmissible collection, which by Theorem 4 can be used to build a\ntechnology with exponential size orbit.\nCorollary 1. There exists a technology (t, c) with orbit\nof size \u03a9(2^n/(n\u221an)).\nThus, we are able to construct a technology with\nexponential orbit, but this technology is not a network technology\nor a structured technology.\nOpen Question 2. Is there a Read Once network with\nexponential orbit?
Is there a structured technology with\nexponential orbit?\nNevertheless, so far, we have not seen examples of\nseries-parallel networks whose orbit size is larger than n + 1.\nOpen Question 3. How big can the orbit size of a\nseries-parallel network be?\nWe make the first step towards a solution of this question\nby showing that the size of the orbit of a conjunction of two\ndisjoint networks (taking the two in series) is at most the\nsum of the two networks' orbit sizes.\nLet g and h be two Boolean functions on disjoint inputs\nand let f = g \u2227 h (i.e., take their networks in series). The\noptimal contract for f for some v, denoted by S, is composed\nof some agents from the h-part and some from the g-part,\ncall them T and R respectively.\nLemma 5. Let S be an optimal contract for f = g \u2227 h on\nv. Then, T is an optimal contract for h on v \u00b7 tg(R), and R\nis an optimal contract for g on v \u00b7 th(T).\nProof sketch: We express the principal's utility u(S, v) from\ncontracting with the set S when his value is v. We abuse\nnotation and use the function to denote the technology as\nwell. Let \u0394^f_i(S \\ i) denote the marginal contribution of\nagent i \u2208 S. Then, for any i \u2208 T, \u0394^f_i(S \\ i) = g(R) \u00b7 \u0394^h_i(T \\ i),\nand for any i \u2208 R, \u0394^f_i(S \\ i) = h(T) \u00b7 \u0394^g_i(R \\ i).\nBy substituting these expressions and f(S) = h(T) \u00b7 g(R),\nwe derive that\nu(S, v) = h(T) \u00b7 (g(R) \u00b7 v \u2212 \u03a3_{i\u2208T} ci/\u0394^h_i(T \\ i)) \u2212 g(R) \u00b7 \u03a3_{i\u2208R} ci/\u0394^g_i(R \\ i).\nThe first term is maximized at a set T\nthat is optimal for h on the value g(R) \u00b7 v, while the second\nterm is independent of T and h. Thus, S is optimal for f on\nv if and only if T is an optimal contract for h on v \u00b7 tg(R).\nSimilarly, we show that R is an optimal contract for g on\nv \u00b7 th(T). \u25a1\nLemma 6.
The real function v \u2192 th(T), where T is the\nh-part of an optimal contract for f on v, is monotone\nnon-decreasing (and similarly for the function v \u2192 tg(R)).\nProof. Let S1 = T1 \u222a R1 be the optimal contract for f\non v1, and let S2 = T2 \u222a R2 be the optimal contract for f on\nv2 < v1. By Lemma 3 f(S1) \u2265 f(S2), and since f = g \u00b7 h,\nf(S1) = h(T1) \u00b7 g(R1) \u2265 h(T2) \u00b7 g(R2) = f(S2). Assume in\ncontradiction that h(T1) < h(T2); then since h(T1) \u00b7 g(R1) \u2265\nh(T2) \u00b7 g(R2) this implies that g(R1) > g(R2). By Lemma 5,\nT1 is optimal for h on v1 \u00b7 g(R1), and T2 is optimal for h on\nv2 \u00b7 g(R2). As v1 > v2 and g(R1) > g(R2), T1 is optimal for h\non a larger value than T2, thus by Lemma 3, h(T1) \u2265 h(T2),\na contradiction. \u25a1\nBased on Lemma 5 and Lemma 6, we obtain the following\nlemma. For the full proof, see [2].\nLemma 7. Let g and h be two Boolean functions on\ndisjoint inputs and let f = g \u2227 h (i.e., take their networks in\nseries). Suppose x and y are the respective orbit sizes of g\nand h; then, the orbit size of f is less than or equal to x + y \u2212 1.\nBy induction we get the following corollary.\nCorollary 2. Assume that {(gj, cj)}^m_{j=1} is a set of\nanonymous technologies on disjoint inputs, each with identical agent\ncost (all agents of technology gj have the same cost cj). Then\nthe orbit of f = g1 \u2227 \u00b7 \u00b7 \u00b7 \u2227 gm is of size at most (n1 + . . . + nm) \u2212 1,\nwhere nj is the number of agents in technology gj (the orbit\nis linear in the number of agents).\nIn particular, this holds for an AOO technology where each\nOR-component is anonymous.\nIt would also be interesting to consider a disjunction of\ntwo Boolean functions.\nOpen Question 4.
Does Lemma 7 hold also for the Boolean\nfunction f = g \u2228 h (i.e., when the networks g, h are taken\nin parallel)?\nWe conjecture that this is indeed the case, and that\nanalogues of Lemmas 5 and 7 hold for the OR case as well.\nIf this is true, this will show that series-parallel networks\nhave polynomial size orbits.\n5. ALGORITHMIC ASPECTS\nOur analysis throughout the paper sheds some light on\nthe algorithmic aspects of computing the best contract. In\nthis section we state these implications (for the proofs see\n[2]). We first consider the general model where the\ntechnology function is given by an arbitrary monotone function\nt (with rational values), and we then consider the case of\nstructured technologies given by a network representation\nof the underlying Boolean function.\n5.1 Binary-Outcome Binary-Action\nTechnologies\nHere we assume that we are given a technology and value\nv as the input, and our output should be the optimal\ncontract, i.e. the set S\u2217 of agents to be contracted and the\ncontract pi for each i \u2208 S\u2217. In the general case, the success\nfunction t is of size exponential in n, the number of agents,\nand we will need to deal with that. In the special case of\nanonymous technologies, the description of t is only the n+1\nnumbers t0, . . . , tn, and in this case our analysis in section 3\ncompletely suffices for computing the optimal contract.\nProposition 1. Given as input the full description of a\ntechnology (the values t0, . . . , tn and the identical cost c for\nan anonymous technology, or the value t(S) for all the 2^n\npossible subsets S \u2286 N of the players, and a vector of costs\nc for non-anonymous technologies), the following can all be\ncomputed in polynomial time:\n\u2022 The orbit of the technology in both the agency and the\nnon-strategic cases.\n\u2022 An optimal contract for any given value v, for both the\nagency and the non-strategic cases.\n\u2022 The price of unaccountability POU(t, c).\nProof.
We prove the claims for the non-anonymous case;\nthe proof for the anonymous case is similar.\nWe first show how to construct the orbit of the technology\n(the same procedure applies in both cases). To construct the\norbit we find all transition points and the sets that are in\nthe orbit. The empty contract is always optimal for v = 0.\nAssume that we have calculated the optimal contracts and\nthe transition points up to some transition point v for which\nS is an optimal contract with the highest success probability.\nWe show how to calculate the next transition point and the\nnext optimal contract.\nBy Lemma 3 the next contract on the orbit (for higher\nvalues) has a higher success probability (there are no two sets\nwith the same success probability on the orbit). We\ncalculate the next optimal contract by the following procedure.\nWe go over all sets T such that t(T) > t(S), and\ncalculate the value for which the principal is indifferent between\ncontracting with T and contracting with S. The minimal\nindifference value is the next transition point, and the contract\nthat attains this minimal indifference value is the next optimal\ncontract. Linearity of the utility in the value and\nmonotonicity of the success probability of the optimal contracts\nensure that the above works. Clearly the above calculation\nis polynomial in the input size.\nOnce we have the orbit, it is clear that an optimal contract\nfor any given value v can be calculated. We find the largest\ntransition point that is not larger than the value v, and\nthe optimal contract at v is the set with the higher success\nprobability at this transition point.\nFinally, as we can calculate the orbit of the technology in\nboth the agency and the non-strategic cases in polynomial\ntime, we can find the price of unaccountability in polynomial\ntime.
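The orbit-construction procedure just described can be sketched directly. This is an illustrative implementation, not the paper's code; it assumes t is given as a full table over subsets (as in Proposition 1), that t is strictly increasing so every marginal contribution is positive, and that a contracted agent i is paid c_i divided by his marginal contribution, per success:

```python
from itertools import combinations

def expected_payment(t, c, S):
    # Total expected payment for contracting set S: agent i is paid
    # c[i] / (t(S) - t(S \ i)) on success, and success has probability t(S).
    if not S:
        return 0.0
    tS = t[S]
    return tS * sum(c[i] / (tS - t[S - {i}]) for i in S)

def orbit(t, c, agents):
    # Walk up the transition points: from the current optimal contract S,
    # scan every T with t(T) > t(S) for the smallest indifference value.
    subsets = [frozenset(T) for r in range(len(agents) + 1)
               for T in combinations(agents, r)]
    S = frozenset()          # the empty contract is optimal at v = 0
    result = [(0.0, S)]
    while True:
        best_v, best_T = None, None
        for T in subsets:
            if t[T] <= t[S]:
                continue
            # Value at which the principal is indifferent between S and T:
            # both utilities are linear in v with slopes t(S), t(T).
            v = (expected_payment(t, c, T) - expected_payment(t, c, S)) / (t[T] - t[S])
            if best_v is None or v < best_v or (v == best_v and t[T] > t[best_T]):
                best_v, best_T = v, T
        if best_T is None:
            return result    # no set with higher success probability remains
        S = best_T
        result.append((best_v, S))
```

The returned list pairs each transition point with the contract that becomes optimal there, so answering a query for any v reduces to finding the largest transition point not exceeding v.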
By Lemma 1 the price of unaccountability POU(t)\nis obtained at some transition point, so we only need to go\nover all transition points, and find the one with the maximal\nsocial welfare ratio.\nA more interesting question is whether, given the\nfunction t as a black box, we can compute the optimal contract\nin time that is polynomial in n. We can show that, in general,\nthis is not the case:\nTheorem 5. Given as input a black box for a success\nfunction t (when the costs are identical), and a value v, the\nnumber of queries that is needed, in the worst case, to find\nthe optimal contract is exponential in n.\nProof. Consider the following family of technologies. For\nsome small \u03b5 > 0 and k = n/2 we define the success\nprobability for a given set T as follows. If |T| < k, then\nt(T) = |T| \u00b7 \u03b5. If |T| > k, then t(T) = 1 \u2212 (n \u2212 |T|) \u00b7 \u03b5. For\neach set of agents \u02c6T of size k, the technology t\u02c6T is defined\nby t(\u02c6T) = 1 \u2212 (n \u2212 |\u02c6T|) \u00b7 \u03b5 and t(T) = |T| \u00b7 \u03b5 for any T \u2260 \u02c6T\nof size k.\nFor the value v = c \u00b7 (k + 1/2), the optimal contract for t\u02c6T\nis \u02c6T (for the contract \u02c6T the utility of the principal is about\nv \u2212 c \u00b7 k = 1/2 \u00b7 c > 0, while for any other contract the utility\nis negative).\nIf the algorithm queries about at most C(n, n/2) \u2212 2 sets\nof size k, then it cannot always determine the optimal\ncontract (as any of the sets that it has not queried about might\nbe the optimal one).
We conclude that C(n, n/2) \u2212 1 queries\nare needed to determine the optimal contract, and this is\nexponential in n.\n5.2 Structured Technologies\nIn this section we will consider the natural representation\nof read-once networks for the underlying Boolean function.\nThus the problem we address will be:\nThe Optimal Contract Problem for Read Once\nNetworks:\nInput: A read-once network G = (V, E), with two specific\nvertices s, t; rational values \u03b3e, \u03b4e for each player e \u2208 E (and\nce = 1), and a rational value v.\nOutput: A set S of agents who should be contracted in an\noptimal contract.\nLet t(E) denote the probability of success when each edge\nsucceeds with probability \u03b4e. We first notice that even\ncomputing the value t(E) is a hard problem: it is called the\nnetwork reliability problem and is known to be #P-hard\n[8]. Just a little effort will reveal that our problem is not\neasier:\nTheorem 6. The Optimal Contract Problem for Read Once\nNetworks is #P-hard (under Turing reductions).\nProof. We will show that an algorithm for this problem\ncan be used to solve the network reliability problem. Given\nan instance of a network reliability problem \u27e8G, {\u03b6e}e\u2208E\u27e9\n(where \u03b6e denotes e's probability of success), we define an\ninstance of the optimal contract problem as follows: first\ndefine a new graph G\u2032 which is obtained by AND-ing G with\na new player x, with \u03b3x very close to 1/2 and \u03b4x = 1 \u2212 \u03b3x.\nFor the other edges, we let \u03b4e = \u03b6e and \u03b3e = \u03b6e/2.
By choosing \u03b3x close enough to 1/2, we can make sure that player\nx will enter the optimal contract only for very large values\nof v, after all other agents are contracted (if we can find the\noptimal contract for any value, it is easy to find a value for\nwhich in the original network the optimal contract is E, by\nrepeatedly doubling the value and asking for the optimal contract.\nOnce we find such a value, we choose \u03b3x s.t. c/(1 \u2212 2\u03b3x) is larger\nthan that value). Let us denote \u03b2x = 1 \u2212 2\u03b3x.\nThe critical value of v where player x enters the optimal\ncontract of G\u2032 can be found using binary search over the\nalgorithm that supposedly finds the optimal contract for any\nnetwork and any value. Note that at this critical value v,\nthe principal is indifferent between the set E and E \u222a {x}.\nNow when we write the expression for this indifference, in\nterms of t(E) and \u0394^t_i(E), we observe the following:\nt(E) \u00b7 \u03b3x \u00b7 (v \u2212 \u03a3_{i\u2208E} c/(\u03b3x \u00b7 \u0394^t_i(E \\ i))) =\nt(E) \u00b7 (1 \u2212 \u03b3x) \u00b7 (v \u2212 \u03a3_{i\u2208E} c/((1 \u2212 \u03b3x) \u00b7 \u0394^t_i(E \\ i)) \u2212 c/(t(E) \u00b7 \u03b2x))\nif and only if\nt(E) = (1 \u2212 \u03b3x) \u00b7 c/(\u03b2x^2 \u00b7 v);\nthus, if we can always find the optimal contract we are\nalso able to compute the value of t(E).\nIn conclusion, computing the optimal contract in general\nis hard. These results suggest two natural research\ndirections. The first avenue is to study families of technologies\nwhose optimal contracts can be computed in polynomial\ntime. The second avenue is to explore approximation\nalgorithms for the optimal contract problem.\nA possible candidate for the first direction is the family\nof series-parallel networks, for which the network reliability\nproblem (computing the value of t) is polynomial.\nOpen Question 5.
Can the optimal contract problem for\nRead Once series-parallel networks be solved in polynomial\ntime?\nWe can only handle the non-trivial level of AOO networks:\nLemma 8. Given a Read Once AND-of-OR network such\nthat each OR-component is an anonymous technology, the\noptimal contract problem can be solved in polynomial time.\nAcknowledgments. This work is supported by the\nIsrael Science Foundation, the USA-Israel Binational Science\nFoundation, the Lady Davis Fellowship Trust, and by a\nNational Science Foundation grant number ANI-0331659.\n6. REFERENCES\n[1] M. Babaioff, M. Feldman, and N. Nisan. The Price of\nPurity and Free-Labor in Combinatorial Agency. In\nWorking Paper, 2005.\n[2] M. Babaioff, M. Feldman, and N. Nisan.\nCombinatorial agency, 2006.\nwww.sims.berkeley.edu/\u02dcmoshe/comb-agency.pdf.\n[3] M. Feldman, J. Chuang, I. Stoica, and S. Shenker.\nHidden-action in multi-hop routing. In EC\"05, pages\n117-126, 2005.\n[4] B. Holmstrom. Moral Hazard in Teams. Bell Journal\nof Economics, 13:324-340, 1982.\n[5] A. Mass-Colell, M. Whinston, and J. Green.\nMicroeconomic Theory. Oxford University Press, 1995.\n[6] N. Nisan and A. Ronen. Algorithmic mechanism\ndesign. Games and Economic Behaviour, 35:166 - 196,\n2001. A preliminary version appeared in STOC 1999.\n[7] C. Papadimitriou. Algorithms, Games, and the\nInternet. In Proceedings of 33rd STOC, pages 749-753,\n2001.\n[8] J. S. Provan and M. O. Ball. The complexity of\ncounting cuts and of computing the probability that a\ngraph is connected. SIAM J. Comput., 12(4):777-788,\n1983.\n[9] A. Ronen and L. Wahrmann. Prediction Games.\nWINE, pages 129-140, 2005.\n[10] R. Smorodinsky and M. Tennenholtz. Sequential\nInformation Elicitation in Multi-Agent Systems. 20th\nConference on Uncertainty in AI, 2004.\n[11] R. Smorodinsky and M. Tennenholtz. Overcoming\nFree-Riding in Multi-Party Computations - The\nAnonymous Case. Forthcoming, GEB, 2005.\n[12] E. Winter. Incentives and Discrimination. 
American\nEconomic Review, 94:764-773, 2004.", "keywords": "combinatorial agency;nash equilibrium;contractible action;k-orbit;price of unaccountability;quality of service;classical principal-agent;principal-agent model;agency theory;series-parallel network;service quality;anonymous technology;optimal set of contract;incentive;contract optimal set"}
{"name": "test_J-27", "title": "Learning From Revealed Preference", "abstract": "A sequence of prices and demands is rationalizable if there exists a concave, continuous and monotone utility function such that the demands are the maximizers of the utility function over the budget set corresponding to the price. Afriat [1] presented necessary and sufficient conditions for a finite sequence to be rationalizable. Varian [20] and later Blundell et al. [3, 4] continued this line of work studying nonparametric methods to forecast demand. Their results essentially characterize learnability of degenerate classes of demand functions and therefore fall short of giving a general degree of confidence in the forecast. The present paper complements this line of research by introducing a statistical model and a measure of complexity through which we are able to study the learnability of classes of demand functions and derive a degree of confidence in the forecasts. Our results show that the class of all demand functions has unbounded complexity and therefore is not learnable, but that there exist interesting and potentially useful classes that are learnable from finite samples. We also present a learning algorithm that is an adaptation of a new proof of Afriat's theorem due to Teo and Vohra [17].", "fulltext": "1. INTRODUCTION\nA market is an institution by which economic agents meet\nand make transactions. Classical economic theory explains\nthe incentives of the agents to engage in this behavior through\nthe agents' preference over the set of available bundles,\nindicating that agents attempt to replace their current bundle\nwith bundles that are both more preferred and attainable if\nsuch bundles exist.
The preference relation is therefore the\nkey factor in understanding consumer behavior.\nOne of the common assumptions in this theory is that the\npreference relation is represented by a utility function and\nthat agents strive to maximize their utility given a budget\nconstraint. This pattern of behavior is the essence of supply\nand demand, general equilibria and other aspects of\nconsumer theory. Furthermore, as we elaborate in section 2,\nbasic observations on market demand behavior suggest that\nutility functions are monotone and concave.\nThis brings us to the question, first raised by\nSamuelson [18]: to what degree is this theory refutable? Given\nobservations of price and demand, under what circumstances\ncan we conclude that the data is consistent with the\nbehavior of a utility maximizing agent equipped with a monotone\nconcave utility function and subject to a budget constraint?\nSamuelson gave a necessary but insufficient condition on the\nunderlying preference known as the weak axiom of revealed\npreference. Uzawa [16] and Mas-Colell [10, 11] introduced a\nnotion of income-Lipschitz and showed that demand\nfunctions with this property are rationalizable. These properties\ndo not require any parametric assumptions and are\ntechnically refutable, but they do assume knowledge of the entire\ndemand function and rely heavily on the differential\nproperties of demand functions. Hence, an infinite amount of\ninformation is needed to refute the theory.\nIt is often the case that apart from the demand\nobservations there is additional information on the system and\nit is sensible to make parametric assumptions, namely, to\nstipulate some functional form of utility. Consistency with\nutility maximization would then depend on fixing the\nparameters of the utility function to be consistent with the\nobservations and with a set of equations called the\nSlutsky equations.
If such parameters exist, we conclude that\nthe stipulated utility form is consistent with the\nobservations. This approach is useful when there is reason to make\nthese stipulations; it gives an explicit utility function which\ncan be used to make precise forecasts on demand for\nunobserved prices. The downside of this approach is that real life\ndata is often inconsistent with convenient functional forms.\nMoreover, if the observations are inconsistent it is unclear\nwhether this is a refutation of the stipulated functional form\nor of utility maximization.\nAddressing these issues, Houthakker [7] noted that an\nobserver can see only finite quantities of data. He asks when\nit can be determined that a finite set of observations is\nconsistent with utility maximization without making\nparametric assumptions. He shows that rationalizability of a finite\nset of observations is equivalent to the strong axiom of\nrevealed preference. Richter [15] shows that the strong axiom\nof revealed preference is equivalent to rationalizability by a\nstrictly concave monotone utility function. Afriat [1] gives\nanother set of rationalizability conditions the observations\nmust satisfy. Varian [20] introduces the generalized axiom of\nrevealed preference (GARP), an equivalent form of Afriat's\nconsistency condition that is easier to verify\ncomputationally. It is interesting to note that these necessary and\nsufficient conditions for rationalizability are essentially versions\nof the well known Farkas lemma [6] (see also [22]).\nAfriat [1] proved his theorem by an explicit construction\nof a utility function witnessing consistency. Varian [20] took\nthis one step further, progressing from consistency to\nforecasting. Varian's forecasting algorithm basically rules out\nbundles that are revealed inferior to observed bundles and\nfinds a bundle from the remaining set that together with the\nobservations is consistent with GARP.
Furthermore, he\nintroduces Samuelson's money metric as a canonical utility\nfunction and gives upper and lower envelope utility functions\nfor the money metric. Knoblauch [9] shows these envelopes\ncan be computed efficiently. Varian [21] provides an up-to-date\nsurvey on this line of research.\nA different approach is presented by Blundell et al. [3, 4].\nThese papers introduce a model where an agent observes\nprices and Engel curves for these prices. This gives an\nimprovement on Varian's original bounds, though the basic\nidea is still to rule out demands that are revealed inferior.\nThis model is in a sense a hybrid between Mas-Colell's and\nAfriat's approaches. The former requires full information for\nall prices, the latter for a finite number of prices. On the\nother hand the approach taken by Blundell et al. requires\nfull information only on a finite number of price\ntrajectories. The motivation for this crossover is to utilize income\nsegmentation in the population to restructure econometric\ninformation. Different segments of the population face the\nsame prices with different budgets and, as much as\naggregate data can testify to individual preferences, show how\ndemand varies with the budget. Applying nonparametric\nstatistical methods, they reconstruct a trajectory from the\nobserved demands of different segments and use it to obtain\ntighter bounds.\nBoth these methods would most likely give a good forecast\nfor a fixed demand function after sufficiently many\nobservations, assuming they were spread out in a reasonable manner.\nHowever, these methods do not consider the complexity of\nthe demand functions and do not use any probabilistic model\nof the observations.
Therefore, they are unable to provide\nany estimate of the number of observations that would be\nsufficient for a good forecast, or of the degree of confidence in\nsuch a forecast.\nIn this paper we examine the feasibility of demand\nforecasting with a high degree of confidence using Afriat's\nconditions. We formulate the question in terms of whether the\nclass of demand functions derived from monotone concave\nutilities is efficiently PAC-learnable. Our first result is\nnegative. We show, by computing the fat shattering dimension,\nthat without any prior assumptions, the set of all demand\nfunctions induced by monotone concave utility functions is\ntoo rich to be efficiently PAC-learnable. However, under\nsome prior assumptions on the set of demand functions we\nshow that the fat shattering dimension is finite and therefore\nthe corresponding sets are PAC-learnable. In these cases,\nassuming the probability distribution by which the observed\nprice-demand pairs are generated is fixed, we are in a\nposition to offer a forecast and a probabilistic estimate of its\naccuracy.\nIn section 2 we briefly discuss the basic assumptions of\ndemand theory and their implications. In section 3 we present\na new proof of Afriat's theorem incorporating an algorithm,\ndue to Teo and Vohra [17], for efficiently generating a\nforecasting function. We show that this algorithm is\ncomputationally efficient and can be used as a learning algorithm.\nIn section 4 we give a brief introduction to PAC learning,\nincluding several modifications for learning real vector\nvalued functions. We introduce the notion of fat shattering\ndimension and use it to derive a lower bound on the\nsample complexity. We also sketch results on upper bounds. In\nsection 5 we study the learnability of demand functions and\ndirectly compute the fat shattering dimension of the class\nof all demand functions and of a class of income-Lipschitzian\ndemand functions with a bounded global income-Lipschitz\nconstant.\n2.
UTILITY AND DEMAND\nA utility function u : R^n_+ \u2192 R is a function relating\nbundles of goods to a cardinal value in a manner reflecting the\npreferences over the bundles. A rational agent with a budget that\nw.l.o.g. equals 1, facing a price vector p \u2208 R^n_+, will choose from\nher budget set B(p) = {x \u2208 R^n_+ : p \u00b7 x \u2264 1} a bundle x \u2208 R^n_+\nthat maximizes her private utility.\nThe first assumption we make is that the function is\nmonotone increasing, namely, if x \u2265 y, in the sense that the\ninequality holds coordinatewise, then u(x) \u2265 u(y). This\nreflects the assumption that agents will always prefer more of\nany one good. This, of course, does not necessarily hold in\npractice, as in many cases excess supply may lead to\nstorage expenses or other externalities. However, in such cases\nthe demand will be an interior point of the budget set and\nthe less preferred bundles won't be observed. The second\nassumption we make on the utility is that all the marginals\n(partial derivatives) are monotone decreasing. This is the\nlaw of diminishing marginal utility, which assumes that the\nlarger the excess of one good over the other, the less we value\neach additional good of one kind over the other. These\nassumptions imply that the utility function is concave and\nmonotone on the observations.\nThe demand function of the agent is the correspondence\nfu : R^n_+ \u2192 R^n_+ satisfying\nf(p) = argmax{u(x) : p \u00b7 x \u2264 I}.\nIn general this correspondence is not necessarily single\nvalued, but it is implicit in the proof of Afriat's theorem that\nany set of observations can be rationalized by a demand\nfunction that is single valued for unobserved prices.
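As a concrete illustration (not from the paper): for a Cobb-Douglas utility u(x) = \u03a0 x_i^{a_i} with \u03a3 a_i = 1, the budget-constrained maximizer spends the fraction a_i of the income on good i, so the demand function has a closed form:

```python
def cobb_douglas_demand(p, a, income=1.0):
    # Demand for u(x) = prod_i x_i^{a_i} (a_i > 0, sum(a_i) = 1): the agent
    # spends the share a_i of her income on good i, so x_i = a_i * income / p_i.
    return [ai * income / pi for ai, pi in zip(a, p)]
```

For example, `cobb_douglas_demand([1.0, 2.0], [0.5, 0.5])` returns `[0.5, 0.25]`, which exhausts the unit budget: 1 \u00b7 0.5 + 2 \u00b7 0.25 = 1.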
Any budget set that is not a subset of\nthe support is maximized on any point outside the support,\nand it is therefore difficult to forecast for these prices. We\nare thus interested in forecasts for prices below the simplex\n\u2206d = conv{(0, . . . , 1, . . . , 0)}. For these prices we take the\nmetric\ndP(p, p\u2032) = max{ |1/pi \u2212 1/p\u2032i| : i = 1, . . . , d }\nfor p, p\u2032 \u2208 \u2206d. Note that with this metric \u2206d is compact. A\ndemand function is L-income-Lipschitz, for L \u2208 R+, if\n||f(p) \u2212 f(p\u2032)||\u221e / dP(p, p\u2032) \u2264 L\nfor any p, p\u2032 \u2208 \u2206d. This property reflects an assumption\nthat preferences and demands have some sort of stability. It\nrules out different demands for similar prices. We may\ntherefore assume from here on that demand functions are\nsingle valued.\n3. REVEALED PREFERENCE\nA sequence of prices and demands (p1, x1), . . . , (pn, xn) is\nrationalizable if there exists a utility function u such that\nxi = fu(pi) for i = 1, . . . , n. We begin with a trivial\nobservation: if pi \u00b7 xj \u2264 pi \u00b7 xi and xi = f(pi), then xi is preferred\nover xj, since the latter is in the budget set when the\nformer was chosen. It is therefore revealed that u(xj) \u2264 u(xi),\nimplying pj \u00b7 xj \u2264 pj \u00b7 xi.\nSuppose there is a sequence (pi1 , xi1 ), . . . , (pik , xik ) such\nthat pij \u00b7 (xij \u2212 xij+1 ) \u2264 0 for j = 1 . . . k \u2212 1 and pik \u00b7 (xik \u2212\nxi1 ) \u2264 0. Then the same reasoning shows that u(xi1 ) =\nu(xi2 ) = . . . = u(xik ), implying pi1 \u00b7 (xi1 \u2212 xi2 ) = pi2 \u00b7 (xi2 \u2212\nxi3 ) = . . . = pik\u22121 \u00b7 (xik\u22121 \u2212 xik ) = 0. We call the latter\ncondition the Afriat condition (AC).
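The Afriat condition can be checked mechanically: build the matrix aij = pi \u00b7 (xj \u2212 xi) and test whether the digraph that keeps only its non-positive edges contains a cycle of strictly negative total weight. A sketch of such a check (illustrative only, not the algorithm developed below) using Floyd-Warshall:

```python
def afriat_condition(prices, bundles):
    # a[i][j] = p_i . (x_j - x_i); AC holds iff the digraph that keeps only
    # the non-positive edges has no cycle of strictly negative total weight.
    n = len(prices)
    dot = lambda p, x: sum(pi * xi for pi, xi in zip(p, x))
    a = [[dot(prices[i], bundles[j]) - dot(prices[i], bundles[i])
          for j in range(n)] for i in range(n)]
    INF = float("inf")
    d = [[0.0 if i == j else (a[i][j] if a[i][j] <= 0 else INF)
          for j in range(n)] for i in range(n)]
    # Floyd-Warshall; a negative d[i][i] afterwards witnesses a negative cycle.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))
```

For instance, two observations in which each bundle was strictly cheaper at the other's price (a classic revealed-preference violation) produce a negative two-edge cycle, and the check rejects the data.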
This argument shows that\nAC is necessary for rationalizability; the surprising result in\nAfriat's theorem is that this condition is also sufficient.\nLet A be an n \u00d7 n matrix with entries aij = pi \u00b7 (xj \u2212 xi)\n(aij and aji are independent), aii = 0, and let D(A) be the\nweighted digraph associated with A. The matrix satisfies\nAC if every cycle with negative total weight includes at least\none edge with positive weight.\nTheorem 1. There exist y = (y1, . . . , yn) \u2208 R^n and s =\n(s1, . . . , sn) \u2208 R^n_+ satisfying the set of inequalities L(A),\nyj \u2264 yi + si \u00b7 aij, i \u2260 j, 1 \u2264 i, j \u2264 n,\niff D(A) satisfies AC.\nProof: If L(A) is feasible then it is easy to see that\nu(x) = min_i {yi + si \u00b7 pi \u00b7 (x \u2212 xi)}\nis a concave utility function that is consistent with the\nobservations, and from our previous remark it follows that D(A)\nsatisfies AC.\nIn the other direction it is shown by explicit\nconstruction that Afriat's condition for D(A) implies L(A) is\nfeasible. The construction provides a utility function that is\nconsistent with the observations. Teo and Vohra [17] give\na strongly polynomial time algorithm for this construction\nwhich will be the heart of our learning algorithm.\nThe construction is executed in two steps. First, the\nalgorithm finds s \u2208 R^n_+ such that the weighted digraph D(A, s)\ndefined by the matrix \u00e3ij = si \u00b7 aij has no cycle with\nnegative total weight if D(A) satisfies AC, and returns a negative\ncycle otherwise.\nThe dual of a shortest path problem is given by the\nconstraints:\nyj \u2212 yi \u2264 si \u00b7 aij, i \u2260 j.\nIt is a standard result (see [14] p. 109) that the system is\nfeasible iff D(A, s) has no negative cycles. Thus, in the second\nstep, if D(A) satisfies AC, the algorithm calls a\nSHORTEST PATH algorithm to find y \u2208 R^n satisfying the\nconstraints.\nNow we describe how to choose the si's.
Define S =\n{(i, j) : aij < 0}, E = {(i, j) : aij = 0} and T = {(i, j) : aij > 0},\nand let G = ([n], S \u222a E) be a digraph with weights\nwij = \u22121 if (i, j) \u2208 S and wij = 0 otherwise. D(A) has no\nnegative cycles, hence G is acyclic and breadth first search\ncan assign potentials \u03c6i such that \u03c6j \u2264 \u03c6i + wij for (i, j) \u2208\nS \u222a E. We relabel the vertices so that \u03c61 \u2265 \u03c62 \u2265 . . . \u2265 \u03c6n.\nLet\n\u03b4i = (n \u2212 1) \u00b7 max_{(i,j)\u2208S}(\u2212aij) / min_{(i,j)\u2208T} aij\nif \u03c6i < \u03c6i\u22121 and \u03b4i = 1 otherwise, and define\nsi = \u03b42 \u00b7 \u00b7 \u00b7 \u03b4i = \u03b4i \u00b7 si\u22121.\nWe show that for this choice of s, D(A, s) contains no\nnegative weight cycle. Suppose C = (i1, . . . , ik) is a cycle\nin D(A, s). If \u03c6 is constant on C then aij ij+1 = 0 for j =\n1, . . . , k and we are done. Otherwise let iv \u2208 C be the vertex\nwith smallest potential, satisfying w.l.o.g. \u03c6(iv) < \u03c6(iv+1).\nFor any cycle C in the digraph D(A, s), let (v, u) be an\nedge in C such that (i) v has the smallest potential among\nall vertices in C, and (ii) \u03c6u > \u03c6v. Such an edge exists;\notherwise \u03c6i is identical for all vertices i in C. In this case,\nall edges in C have non-negative edge weight in D(A, s).\nIf (iv, iv+1) \u2208 S \u222a E, then we have\n\u03c6(iv+1) \u2264 \u03c6(iv) + wiv,iv+1 \u2264 \u03c6(iv),\na contradiction. Hence (iv, iv+1) \u2208 T. Now, note that all\nvertices q in C with the same potential as iv must be incident\nto an edge (q, t) in C such that \u03c6(t) \u2265 \u03c6(q). Hence the edge\n(q, t) must have non-negative weight, i.e., aq,t \u2265 0.
Let p denote a vertex in C with the second smallest potential. Now, C has weight
sv avu + Σ_{(k,l)∈C\(v,u)} sk akl ≥ sv avu + sp (n − 1) max_{(i,j)∈S} {aij} ≥ 0,
i.e., C has non-negative weight ∎
Algorithm 1 returns in polynomial time a hypothesis that is a piecewise linear function and agrees with the labeling of the observations, i.e., it has sample error zero. To use this function to forecast demand for unobserved prices we need Algorithm 2, which maximizes the function on a given budget set. Since u(x) = min_i {yi + si pi(x − xi)}, this is a linear program and can be solved in time polynomial in d, n as well as the size of the largest number in the input.
Algorithm 1 Utility Algorithm
Input: (x1, p1), . . . , (xn, pn)
S ← {(i, j) : aij < 0}
E ← {(i, j) : aij = 0}
for all (i, j) ∈ S do
  wij ← −1
end for
for all (i, j) ∈ E do
  wij ← 0
end for
while there exist unvisited vertices do
  visit new vertex j
  assign potential φj
end while
reorder indices so that φ1 ≥ φ2 ≥ . . . ≥ φn
for all 1 ≤ i ≤ n do
  δi ← (n − 1) max_{(i,j)∈S}(−aij) / min_{(i,j)∈T} aij if φi < φi−1, and δi ← 1 otherwise
  si ← ∏_{j=2}^{i} δj
end for
SHORTEST PATH(yj − yi ≤ si aij)
Return y1, . . . , yn ∈ R and s1, . . . , sn ∈ R+
Algorithm 2 Evaluation
Input: y1, . . . , yn ∈ R and s1, . . . , sn ∈ R+
max z
s.t. z ≤ yi + si pi(x − xi) for i = 1, . . . , n
     px ≤ 1
Return the x for which z is maximized
4. SUPERVISED LEARNING
In a supervised learning problem, a learning algorithm is given a finite sample of labeled observations as input and is required to return a model of the functional relationship underlying the labeling. This model, referred to as a hypothesis, is usually a computable function that is used to forecast the labels of future observations.
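Once feasible (y, s) are in hand, the hypothesis is just the lower envelope of affine functions. A small sketch of evaluating u (Algorithm 2's linear program handles maximization over a budget set; here we only evaluate, and the function name is ours):

```python
def utility(y, s, obs, x):
    """u(x) = min_i { y_i + s_i * p_i . (x - x_i) }: concave, piecewise
    linear, and consistent with the observations, since feasibility of
    y_j <= y_i + s_i * p_i . (x_j - x_i) forces u(x_j) = y_j.
    obs is a list of (x_i, p_i) pairs of equal-length vectors."""
    return min(
        yi + si * sum(pk * (xk - xik) for pk, xk, xik in zip(pi, x, xi))
        for yi, si, (xi, pi) in zip(y, s, obs)
    )
```

With the two-observation example above (y = (0, 0), s = (1, 1)), u returns y_i at each observed bundle and is concave everywhere as a minimum of affine functions.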
The labels are\nusually binary values indicating the membership of the\nobserved points in the set that is being learned. However, we\nare not limited to binary values and, indeed, in the demand\nfunctions we are studying the labels are real vectors.\nThe learning problem has three major components:\nestimation, approximation and complexity. The estimation\nproblem is concerned with the tradeoff between the size of\nthe sample given to the algorithm and the degree of\nconfidence we have in the forecast it produces. The\napproximation problem is concerned with the ability of hypotheses\nfrom a certain class to approximate target functions from\na possibly different class. The complexity problem is\nconcerned with the computational complexity of finding a\nhypothesis that approximates the target function.\nA parametric paradigm assumes that the underlying\nfunctional relationship comes from a well defined family, such as\nthe Cobb-Douglas production functions; the system must\nlearn the parameters characterizing this family. Suppose\nthat a learning algorithm observes a finite set of production\ndata which it assumes comes from a Cobb-Douglas\nproduction function and returns a hypothesis that is a polynomial\nof bounded degree. The estimation problem in this case\nwould be to assess the sample size needed to obtain a good\nestimate of the coefficients. The approximation problem\nwould be to assess the error sustained from approximating a\nrational function by a polynomial. The complexity problem\nwould be the assessment of the time required to compute\nthe polynomial coefficients.\nIn the probably approximately correct (PAC) paradigm,\nthe learning of a target function is done by a class of\nhypothesis functions, that does or does not include the\ntarget function itself; it does not necessitate any parametric\nassumptions on this class. 
It is also assumed that the\nobservations are generated independently by some distribution\non the domain of the relation and that this distribution is\nfixed. If the class of target functions has finite\n\"dimensionality\" then a function in the class is characterized by its values\non a finite number of points. The basic idea is to observe\nthe labeling of a finite number of points and find a\nfunction from a class of hypotheses which tends to agree with\nthis labeling. The theory tells us that if the sample is large\nenough then any function that tends to agree with the\nlabeling will, with high probability, be a good approximation\nof the target function for future observations. The prime\nobjective of PAC theory is to develop the relevant notion\nof dimensionality and to formalize the tradeoff between\ndimensionality, sample size and the level of confidence in the\nforecast.\nIn the revealed preference setting, our objective is to use\na set of observations of prices and demand to forecast\ndemand for unobserved prices. Thus the target function is a\nmapping from prices to bundles, namely f : Rd\n+ \u2192 Rd\n+. The\ntheory of PAC learning for real valued functions is concerned\npredominantly with functions from Rd\nto R. In this section\nwe introduce modifications to the classical notions of PAC\nlearning to vector valued functions and use them to prove\na lower bound for sample complexity. An upper bound on\nthe sample complexity can also be proved for our definition\nof fat shattering, but we do not bring it here as the proof is\nmuch more tedious and analogous to the proof of theorem 4.\nBefore we can proceed with the formal definition, we must\nclarify what we mean by forecast and tend to agree. In the\ncase of discrete learning, we would like to obtain a\nfunction h that with high probability agrees with f. We would\nthen take the probability P\u03c3(f(x) = h(x)) as the measure\nof the quality of the estimation. 
Demand functions are real vector functions and we therefore do not expect f and h to agree with high probability. Rather we are content with having small mean square errors on all coordinates. Thus, our measure of estimation error is given by:
er_σ(f, h) = ∫ (||f − h||∞)² dσ.
For given observations S = {(p1, x1), . . . , (pn, xn)} we measure the agreement by the sample error
er_S(S, h) = Σ_j (||xj − h(pj)||∞)².
A sample error minimization (SEM) algorithm is an algorithm that finds a hypothesis minimizing er_S(S, h). In the case of revealed preference, there is a function that takes the sample error to zero. Nevertheless, the upper bound theorem we use does not require the sample error to be zero.
Definition 1. A set of demand functions C is probably approximately correct (PAC) learnable by a hypothesis set H if for any ε, δ > 0, f ∈ C and distribution σ on the prices there exists an algorithm L that, for a set of observations of length mL = mL(ε, δ) = Poly(1/δ, 1/ε), finds a function h from H such that er_σ(f, h) < ε with probability 1 − δ.
There may be several learning algorithms for C with different sample complexities. The minimal mL is called the sample complexity of C.
Note that in the definition there is no mention of the time complexity of finding h in H and evaluating h(p). A set C is efficiently PAC-learnable if there is a Poly(1/δ, 1/ε) time algorithm for choosing h and evaluating h(p).
For discrete function sets, sample complexity bounds may be derived from the VC-dimension of the set (see [19, 8]). An analog of this notion of dimension for real functions is the fat shattering dimension. We use an adaptation of this notion to real vector valued function sets. Let Γ ⊂ Rd+ and let C be a set of real functions from Γ to Rd+.
Definition 2. For γ > 0, a set of points p1, . . .
, pn ∈ Γ is γ-shattered by a class of real functions C if there exist x1, . . . , xn ∈ Rd+ and parallel affine hyperplanes H0, H1 ⊂ Rd such that 0 ∈ H0− ∩ H1+, dist(H0, H1) > γ, and for each b = (b1, . . . , bn) ∈ {0, 1}n there exists a function fb ∈ C such that fb(pi) ∈ xi + H0+ if bi = 0 and fb(pi) ∈ xi + H1− if bi = 1.
We define the γ-fat shattering dimension of C, denoted fatC(γ), as the maximal size of a γ-shattered set in Γ. If this size is unbounded then the dimension is infinite.
To demonstrate the usefulness of this notion we use it to derive a lower bound on the sample complexity.
Lemma 2. Suppose the functions {fb : b ∈ {0, 1}n} witness the shattering of {p1, . . . , pn}. Then, for any x ∈ Rd+ and labels b, b′ ∈ {0, 1}n such that bi ≠ b′i, either ||fb(pi) − x||∞ > γ/(2d) or ||fb′(pi) − x||∞ > γ/(2d).
Proof: Since the max exceeds the mean, it follows that if fb and fb′ correspond to labels such that bi ≠ b′i then
||fb(pi) − fb′(pi)||∞ ≥ (1/d) ||fb(pi) − fb′(pi)||2 > γ/d.
This implies that for any x ∈ Rd+ either ||fb(pi) − x||∞ > γ/(2d) or ||fb′(pi) − x||∞ > γ/(2d) ∎
Theorem 3. Suppose that C is a class of functions mapping from Γ to Rd+. Then any learning algorithm L for C has sample complexity satisfying
mL(ε, δ) ≥ (1/2) fatC(4dε).
An analog of this theorem for real valued functions with a tighter bound can be found in [2]; this version will suffice for our needs.
Proof: Suppose n = (1/2) fatC(4dε); then there exists a set ΓS = {p1, . . . , p2n} that is shattered by C. It suffices to show that at least one distribution requires a large sample. We construct such a distribution.
Let σ be the uniform distribution on ΓS and let CS = {fb : b ∈ {0, 1}2n} be the set of functions that witness the shattering of {p1, . . . , p2n}.
Let fb be a function chosen uniformly at random from CS. It follows from lemma 2 (with γ = 4dε) that for any fixed function h the probability that ||fb(p) − h(p)||∞ > 2ε for p ∈ ΓS is at least as high as getting heads on a fair coin toss. Therefore Eb(||fb(p) − h(p)||∞) > 2ε.
Suppose for a sequence of observations z = ((pi1, x1), . . . , (pin, xn)) a learning algorithm L finds a function h. The observation above and Fubini imply Eb(er_σ(h, fb)) > ε. Randomizing on the sample space we get Eb,z(er_σ(h, fb)) > ε. This shows Eh,z(er_σ(h, fb0)) > ε for some fb0. W.l.o.g. we may assume the error is bounded (since we are looking at what is essentially a finite set); therefore the probability that er_σ(h, fb0) > ε cannot be too small, hence fb0 is not PAC-learnable with a sample of size n ∎
The following theorem gives an upper bound on the sample complexity required for learning a set of functions with finite fat shattering dimension. The theorem is proved in [2] for real valued functions; the proof for the real vector case is analogous and so omitted.
Theorem 4. Let C be a set of real-valued functions from X to [0, 1] with fatC(γ) < ∞. Let A be an approximate-SEM algorithm for C and define L(z) = A(z, ε0/6) for z ∈ Zm and ε0 = 16/√m. Then L is a learning algorithm for C with sample complexity given by:
mL(ε, δ) = O((1/ε²)(ln²(1/ε) fatC(ε) + ln(1/δ)))
for any ε, δ > 0.
5. LEARNING FROM REVEALED PREFERENCE
Algorithm 1 is an efficient learning algorithm in the sense that it finds a hypothesis with sample error zero in time polynomial in the number of observations.
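The sample-error criterion er_S from the previous section is a sum of squared sup-norm residuals; a direct transcription (the helper name is ours, and `h` is any hypothesis mapping prices to bundles):

```python
def sample_error(S, h):
    """er_S(S, h) = sum_j (||x_j - h(p_j)||_inf)^2 over observations
    S = [(p_1, x_1), ..., (p_n, x_n)], with prices and bundles given as
    equal-length vectors."""
    return sum(
        max(abs(xk - hk) for xk, hk in zip(x, h(p))) ** 2
        for p, x in S
    )
```

"Sample error zero" for Algorithm 1 then means exactly that the demand predicted by maximizing the constructed utility on each observed budget reproduces the observed bundles.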
As we have seen in section 4, the number of observations required to PAC learn the demand depends on the fat shattering dimension of the class of demand functions, which in turn depends on the class of utility functions they are derived from. We compute the fat shattering dimension for two classes of demands. The first is the class of all demand functions; we show that this class has infinite shattering dimension (we give two proofs) and is therefore not PAC learnable. The second class we consider is the class of demand functions derived from utilities with bounded support and income-Lipschitz; we show that this class has a finite fat shattering dimension that depends on the support and the income-Lipschitz constant.
Theorem 5. Let C be a set of demand functions from Rd+ to Rd+. Then
fatC(γ) = ∞.
Proof 1: For ε > 0 let pi = 2−i p for i = 1, . . . , n be a set of price vectors inducing parallel budget sets Bi, and let x1, . . . , xn be the intersections of these hyperplanes with an orthogonal line passing through the center. Let H0 and H1 be hyperplanes that are not parallel to p, and let xi′ ∈ Bi ∩ (xi + H0+) and xi″ ∈ Bi ∩ (xi + H1−) for i = 1, . . . , n (see figure 1).
For any labeling b = (b1, . . . , bn) ∈ {0, 1}n let y = y(b) = (y1, . . . , yn) be a set of demands such that yi = xi′ if bi = 0 and yi = xi″ if bi = 1 (we omit the additional index b in y for notational convenience). To show that p1, . . . , pn is shattered it suffices to find for every b a demand function fb supported by a concave utility such that fb(pi) = yi. To show that such a function exists it suffices to show that Afriat's conditions are satisfied. Since the yi are in the budget sets, yi · 2−i p = 1, and therefore pi · (yj − yi) = 2j−i − 1. This shows that pi · (yj − yi) ≤ 0 iff j < i, hence there can be no negative cycles and the condition is met.
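The sign pattern claimed in Proof 1 can be checked numerically; a small sketch of the construction's Afriat matrix (n = 4 is arbitrary, 0-based indices):

```python
def afriat_matrix(n):
    """a[i][j] = p_i . (y_j - y_i) = 2**(j - i) - 1 for the
    parallel-budget construction of Proof 1 (0-based indices)."""
    return [[2.0 ** (j - i) - 1.0 for j in range(n)] for i in range(n)]

# Every directed cycle must contain some edge (i, j) with j > i, whose
# weight 2**(j - i) - 1 is strictly positive, so no cycle is made up of
# negative edges only and AC holds.
a = afriat_matrix(4)
assert all((a[i][j] <= 0) == (j <= i) for i in range(4) for j in range(4))
```

The assertion confirms that entries are non-positive exactly when j ≤ i, which is the "no negative cycles" condition used in the proof.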
∎
Proof 2: The utility functions satisfying Afriat's condition in the first proof could be trivial, assigning the same utility to xi′ as to xi″; in fact, we may pick a utility function whose level sets are parallel to the budget constraint. The shattering of the prices p1, . . . , pn would then be the result of indifference rather than genuine preference. To avoid this problem we reprove the theorem by constructing utility functions u such that u(xi′) ≠ u(xi″) for all i, and therefore a distinct utility function is associated with each labeling.
For i = 1, . . . , n let pi1, . . . , pid be price vectors satisfying the following conditions:
1. the budget sets Bis are supporting hyperplanes of a convex polytope Λi
2. yi is a vertex of Λi
3. ||yj||1 · ||pis − pi||∞ = o(1) for s = 1, . . . , d and j = 1, . . . , n
Finally let yi1, . . . , yid be points on the facets of Λi that intersect yi, such that ||pjr||1 · ||yi − yis||∞ = o(1) for all j, s and r. We call the set of points yi, yi1, . . . , yid the level i demand and pi, pi1, . . . , pid the level i prices. Applying Hölder's inequality we get
|pir · yjs − pi · yj| ≤ |(pir − pi) · yj| + |pir · (yjs − yj)| ≤ ||pir − pi||∞ · ||yj||1 + ||yjs − yj||∞ · ||pir||1 = o(1).
This shows that
pir · (yjs − yir) = pi · (yj − yi) + o(1) = 2j−i − 1 + o(1),
and therefore pir · (yjs − yir) ≤ 0 iff j < i or i = j. This implies that if there is a negative cycle then all the points in the cycle must belong to the same level. The points of any one level lie on the facets of a polytope Λi and the prices pis are supporting hyperplanes of the polytope. Thus, the polytope defines a utility function for which these demands are utility maximizing.
The other direction of Afriat's theorem therefore implies there can be no negative cycles within points on the same level.
It follows that there are no negative cycles for the union of observations from all levels, hence the sequence of observations (y1, p1), (y11, p11), (y12, p12), . . . , (ynd, pnd) is consistent with monotone concave utility function maximization, and again by Afriat's theorem there exists u supporting a demand function fb ∎
The proof above relies on the fact that an agent has high utility and marginal utility for very large bundles. In many cases it is reasonable to assume that the marginal utility for very large bundles is very small, or even that the utility or the marginal utility has compact support. Unfortunately, rescaling the previous example shows that even a compact set may contain a large shattered set. We notice, however, that in this case we obtain utility functions that yield demand functions that are very sensitive to small price changes. We show that the class of utility functions that have marginal utilities with compact support, and for which the relevant demand functions are income-Lipschitzian, has finite fat shattering dimension.
[Figure 1: Utility function shattering x1 and x2]
Theorem 6. Let C be a set of L-income-Lipschitz demand functions from Δd to Rd+ for some global constant L ∈ R. Then
fatC(γ) ≤ (L/γ)d.
Proof: Let p1, . . . , pn ∈ Δd be a shattered set with witnesses x1, . . . , xn ∈ Rd+. W.l.o.g.
(xi + H0+) ∩ (xj + H0−) = ∅, implying (xi + H1−) ∩ (xj + H1+) = ∅. For a labeling b = (b1, . . . , bn) ∈ {0, 1}n such that bi = 0 and bj = 1 we have ||fb(pi) − fb(pj)||∞ > γ, hence ||pi − pj||∞ > γ/L. A standard packing argument implies n ≤ (L/γ)d ∎
6. ACKNOWLEDGMENTS
The authors would like to thank Eli Shamir, Ehud Kalai, Julio González Díaz, Rosa Matzkin, Gad Allon and Adam Galambos for helpful discussions and suggestions.
7. REFERENCES
[1] Afriat S. N. (1967) The Construction of a Utility Function from Expenditure Data. International Economic Review 8, 67-77.
[2] Anthony M. and Bartlett P. L. (1999) Neural Network Learning: Theoretical Foundations. Cambridge University Press.
[3] Blundell R., Browning M. and Crawford I. (2003) Nonparametric Engel curves and revealed preference. Econometrica, 71(1):205-240.
[4] Blundell R. (2005) How revealing is revealed preference? European Economic Journal 3, 211-235.
[5] Diewert E. (1973) Afriat and Revealed Preference Theory. Review of Economic Studies 40, 419-426.
[6] Farkas J. (1902) Über die Theorie der Einfachen Ungleichungen. Journal für die Reine und Angewandte Mathematik 124, 1-27.
[7] Houthakker H. (1950) Revealed Preference and the Utility Function. Economica 17, 159-174.
[8] Kearns M. and Vazirani U. (1994) An Introduction to Computational Learning Theory. The MIT Press, Cambridge MA.
[9] Knoblauch V. (1992) A Tight Upper Bound on the Money Metric Utility Function. The American Economic Review, 82(3):660-663.
[10] Mas-Colell A. (1977) The Recoverability of Consumers' Preferences from Market Demand. Econometrica, 45(6):1409-1430.
[11] Mas-Colell A. (1978) On Revealed Preference Analysis. The Review of Economic Studies, 45(1):121-131.
[12] Mas-Colell A., Whinston M. and Green J. R. (1995) Microeconomic Theory. Oxford University Press.
[13] Matzkin R. and Richter M.
(1991) Testing Strictly\nConcave Rationality. Journal of Economic Theory,\n53:287-303.\n[14] Papadimitriou C. H. and Steiglitz K. (1982)\nCombinatorial Optimization Dover Publications inc.\n[15] Richter M. (1966) Revealed Preference Theory.\nEconometrica, 34(3):635-645.\n[16] Uzawa H. (1960 ) Preference and rational choice in\nthe theory of consumption. In K. J. Arrow, S. Karlin,\nand P. Suppes, editors, Mathematical Models in Social\nScience Stanford University Press, Stanford, CA.\n[17] Teo C. P. and Vohra R. V. (2003) Afriat\"s Theorem\nand Negative Cycles Working Paper\n[18] Samuelson P. A. (1948) Consumption Theory in Terms\nof Revealed Preference Economica 15, 243 - 253.\n[19] Vapnik V. N. (1998) Statistical Learning Theory John\nWiley & Sons Inc.\n[20] Varian H. R. (1982) The Non-Parametric Approach to\nDemand Analysis Econometrica 50, 945 - 974.\n[21] Varian H. R. (2005) Revealed Preference, In Michael\nSzenberg editor, Samuelson Economics and the 21st\nCentury.\n[22] Ziegler G. M. (1994) Lectures on Polytopes Springer.\n42", "keywords": "machine learn;finite set of observation;complexity problem;rationalizability;reveal preference;income-lipschitz;learning from revealed preference;fat shattering dimension;monotone concave utility function;fat shatter;probably approximately correct;observation finite set;forecast;demand function"}
-{"name": "test_J-28", "title": "Approximately-Strategyproof and Tractable Multi-Unit Auctions", "abstract": "We present an approximately-efficient and approximately-strategyproof auction mechanism for a single-good multi-unit allocation problem. The bidding language in our auctions allows marginal-decreasing piecewise constant curves. First, we develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + ε)-approximation in worst-case time T = O(n3/ε), given n bids each with a constant number of pieces. Second, we embed this approximation scheme within a Vickrey-Clarke-Groves (VCG) mechanism and compute payments to n agents for an asymptotic cost of O(T log n). The maximal possible gain from manipulation to a bidder in the combined scheme is bounded by (ε/(1 + ε))V, where V is the total surplus in the efficient outcome.", "fulltext": "1. INTRODUCTION
In this paper we present a fully polynomial-time approximation scheme for the single-good multi-unit auction problem. Our scheme is both approximately efficient and approximately strategyproof. The auction settings considered in our paper are motivated by recent trends in electronic commerce; for instance, corporations are increasingly using auctions for their strategic sourcing. We consider both a reverse auction variation and a forward auction variation, and propose a compact and expressive bidding language that allows marginal-decreasing piecewise constant curves.
In the reverse auction, we consider a single buyer with a demand for M units of a good and n suppliers, each with a marginal-decreasing piecewise-constant cost function. In addition, each supplier can also express an upper bound, or capacity constraint, on the number of units she can supply. The reverse variation models, for example, a procurement auction to obtain raw materials or other services (e.g.
circuit boards, power supplies, toner cartridges), with flexible-sized lots.
In the forward auction, we consider a single seller with M units of a good and n buyers, each with a marginal-decreasing piecewise-constant valuation function. A buyer can also express a lower bound, or minimum lot size, on the number of units she demands. The forward variation models, for example, an auction to sell excess inventory in flexible-sized lots.
We consider the computational complexity of implementing the Vickrey-Clarke-Groves [22, 5, 11] mechanism for the multi-unit auction problem. The Vickrey-Clarke-Groves (VCG) mechanism has a number of interesting economic properties in this setting, including strategyproofness, such that truthful bidding is a dominant strategy for buyers in the forward auction and sellers in the reverse auction, and allocative efficiency, such that the outcome maximizes the total surplus in the system. However, as we discuss in Section 2, the application of the VCG-based approach is limited in the reverse direction to instances in which the total payments to the sellers are less than the value of the outcome to the buyer. Otherwise, either the auction must run at a loss in these instances, or the buyer cannot be expected to voluntarily choose to participate. This is an example of the budget-deficit problem that often occurs in efficient mechanism design [17].
The computational problem is interesting, because even with marginal-decreasing bid curves, the underlying allocation problem turns out to be (weakly) intractable. For instance, the classic 0/1 knapsack is a special case of this problem.¹ We model the allocation problem as a novel and interesting generalization of the classic knapsack problem, and develop a fully polynomial-time approximation scheme, computing a (1 + ε)-approximation in worst-case time T = O(n3/ε), where each bid has a fixed number of piecewise constant pieces.
[Footnote 1: However, the problem can be solved easily by a greedy scheme if we remove all capacity constraints from the seller and all minimum-lot size constraints from the buyers.]
Given this scheme, a straightforward computation of the VCG payments to all n agents requires time O(nT). We compute approximate VCG payments in worst-case time O(αT log(αn/ε)), where α is a constant that quantifies a reasonable no-monopoly assumption. Specifically, in the reverse auction, suppose that C(I) is the minimal cost for procuring M units with all sellers I, and C(I \ i) is the minimal cost without seller i. Then, the constant α is defined as an upper bound for the ratio C(I \ i)/C(I), over all sellers i. This upper bound tends to 1 as the number of sellers increases.
The approximate VCG mechanism is (ε/(1 + ε))-strategyproof for an approximation to within (1 + ε) of the optimal allocation. This means that a bidder can gain at most (ε/(1 + ε))V from a non-truthful bid, where V is the total surplus from the efficient allocation. As such, this is an example of a computationally-tractable ε-dominance result.² In practice, we can have good confidence that bidders without good information about the bidding strategies of other participants will have little to gain from attempts at manipulation.
[Footnote 2: However, this may not be an example of what Feigenbaum & Shenker refer to as a tolerably-manipulable mechanism [8], because we have not tried to bound the effect of such a manipulation on the efficiency of the outcome. VCG mechanisms do have a natural self-correcting property, though, because a useful manipulation to an agent is a reported value that improves the total value of the allocation based on the reports of other agents and the agent's own value.]
Section 2 formally defines the forward and reverse auctions, and defines the VCG mechanisms. We also prove our claims about ε-strategyproofness. Section 3 provides the generalized knapsack formulation for the multi-unit allocation problems and introduces the fully polynomial time approximation scheme. Section 4 defines the approximation scheme for the payments in the VCG mechanism. Section 5 concludes.
1.1 Related Work
There has been considerable interest in recent years in characterizing polynomial-time or approximable special cases of the general combinatorial allocation problem, in which there are multiple different items. The combinatorial allocation problem (CAP) is both NP-complete and inapproximable (e.g. [6]). Although some polynomial-time cases have been identified for the CAP [6, 20], introducing an expressive exclusive-or bidding language quickly breaks these special cases. We identify a non-trivial but approximable allocation problem with an expressive exclusive-or bidding language: the bid taker in our setting is allowed to accept at most one point on the bid curve.
The idea of using approximations within mechanisms, while retaining either full strategyproofness or ε-dominance, has received some previous attention. For instance, Lehmann et al. [15] propose a greedy and strategyproof approximation to a single-minded combinatorial auction problem. Nisan & Ronen [18] discussed approximate VCG-based mechanisms, but either appealed to particular maximal-in-range approximations to retain full strategyproofness, or to resource-bounded agents with information or computational limitations on the ability to compute strategies. Feigenbaum & Shenker [8] have defined the concept of strategically faithful approximations, and proposed the study of approximations as an important direction for algorithmic mechanism design. Schummer [21] and Parkes et al. [19] have previously considered ε-dominance, in the context of economic impossibility results, for example in combinatorial exchanges.
Eso et al. [7] have studied a similar procurement problem, but for a different volume discount model. This earlier work formulates the problem as a general mixed integer linear program, and gives some empirical results on simulated data.
Kalagnanam et al. [12] address double auctions, where multiple buyers and sellers trade a divisible good. The focus of this paper is also different: it investigates the equilibrium prices using the demand and supply curves, whereas our focus is on efficient mechanism design. Ausubel [1] has proposed an ascending-price multi-unit auction for buyers with marginal-decreasing values [1], with an interpretation as a primal-dual algorithm [2].
See Figure 1 for an example. In addition, we slightly relax the marginal-decreasing requirement to allow: a bidder in the forward auction to state a minimal purchase amount, such that she has zero value for quantities smaller than that amount; a seller in the reverse auction to state a capacity constraint, such that she has an effectively infinite cost to supply quantities in excess of a particular amount.
[Figure 1: Marginal-decreasing, piecewise constant bids. In the forward auction bid, the bidder offers $10 per unit for quantity in the range [5, 10), $8 per unit in the range [10, 20), and $7 in the range [20, 25]. Her valuation is zero for quantities outside the range [10, 25]. In the reverse auction bid, the cost of the seller is ∞ outside the range [10, 25].]
In detail, in a forward auction, a bid from buyer i can be written as a list of (quantity-range, unit-price) tuples, ((u_i^1, p_i^1), (u_i^2, p_i^2), . . . , (u_i^{mi−1}, p_i^{mi−1})), with an upper bound u_i^{mi} on the quantity. The interpretation is that the bidder's valuation in the (semi-open) quantity range [u_i^j, u_i^{j+1}) is p_i^j for each unit. Additionally, it is assumed that the valuation is 0 for quantities less than u_i^1 as well as for quantities more than u_i^{mi}. This is implemented by adding two dummy bid tuples, with zero prices in the ranges [0, u_i^1) and (u_i^{mi}, ∞). We interpret the bid list as defining a price function, p_bid,i(q) = q·p_i^j, if u_i^j ≤ q < u_i^{j+1}, where j = 1, 2, . . . , mi − 1. In order to resolve the boundary condition, we assume that the bid price for the upper bound quantity u_i^{mi} is p_bid,i(u_i^{mi}) = u_i^{mi}·p_i^{mi−1}.
A seller's bid is similarly defined in the reverse auction.
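The price function p_bid,i implied by a piecewise-constant bid can be transcribed directly from breakpoints; a sketch (the helper name is ours; breakpoints are assumed sorted by quantity):

```python
def bid_price(tuples, upper, q):
    """Total price q * p_j for a piecewise-constant forward bid given as
    sorted breakpoints [(u_1, p_1), ..., (u_{m-1}, p_{m-1})] with upper
    bound u_m = upper.  Returns 0 outside [u_1, u_m]; the boundary
    q = u_m takes the last unit price, matching p_bid(u_m) = u_m * p_{m-1}."""
    if q < tuples[0][0] or q > upper:
        return 0.0
    price = tuples[0][1]
    for u, p in tuples:
        if q >= u:
            price = p  # last breakpoint at or below q wins
    return q * price
```

For the forward bid of Figure 1 ($10 on [5, 10), $8 on [10, 20), $7 on [20, 25]), 12 units price at 12 × 8 = 96 and the boundary quantity 25 prices at 25 × 7 = 175.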
The\ninterpretation is that the bidder\"s cost in the (semi-open)\nquantity range [uj\ni , uj+1\ni ) is pj\ni for each unit. Additionally, it is\nassumed that the cost is \u221e for quantities less than u1\ni as well as\nfor quantities more than um\ni . Equivalently, the unit prices in the\nranges [0, u1\ni ) and (um\ni , \u221e) are infinity. We interpret the bid list\nas defining a price function, pask,i(q) = qpj\ni , if uj\ni \u2264 q < uj+1\ni .\n2.2 VCG-Based Multi-Unit Auctions\nWe construct the tractable and approximately-strategyproof\nmultiunit auctions around a VCG mechanism. We assume that all\nagents have quasilinear utility functions; that is, ui(q, p) = vi(q)\u2212\np, for a buyer i with valuation vi(q) for q units at price p, and\nui(q, p) = p \u2212 ci(q) for a seller i with cost ci(q) at price p. This\nis a standard assumption in the auction literature, equivalent to\nassuming risk-neutral agents [13]. We will use the term payoff\ninterchangeably for utility.\nIn the forward auction, there is a seller with M units to sell.\nWe assume that this seller has no intrinsic value for the items.\nGiven a set of bids from I agents, let V (I) denote the maximal\nrevenue to the seller, given that at most one point on the bid curve\ncan be selected from each agent and no more than M units of the\nitem can be sold. Let x\u2217\n= (x\u2217\n1, . . . , x\u2217\nN ) denote the solution\nto this winner- determination problem, where x\u2217\ni is the number\nof units sold to agent i. Similarly, let V (I \\ i) denote the\nmaximal revenue to the seller without bids from agent i. The VCG\nmechanism is defined as follows:\n1. Receive piecewise-constant bid curves and capacity\nconstraints from all the buyers.\n2. Implement the outcome x\u2217\nthat solves the winner-determination\nproblem with all buyers.\n3. 
Collect payment p_vcg,i = p_bid,i(x*_i) − [V(I) − V(I \ i)] from each buyer, and pass the payments to the seller.

In this forward auction, the VCG mechanism is strategyproof for buyers, which means that truthful bidding is a dominant strategy, i.e. utility maximizing whatever the bids of other buyers. In addition, the VCG mechanism is allocatively-efficient, and the payments from each buyer are always positive.[3] Moreover, each buyer pays less than its value, and receives payoff V(I) − V(I \ i) in equilibrium; this is precisely the marginal value that buyer i contributes to the economic efficiency of the system.

[3] In fact, the VCG mechanism maximizes the expected payoff to the seller across all efficient mechanisms, even allowing for Bayesian-Nash implementations [14].

In the reverse auction, there is a buyer with M units to buy, and n suppliers. We assume that the buyer has value V > 0 to purchase all M units, but zero value otherwise. To simplify the mechanism design problem, we assume that the buyer will truthfully announce this value to the mechanism.[4]

[4] Without this assumption, the Myerson-Satterthwaite [17] impossibility result would already imply that we should not expect an efficient trading mechanism in this setting.

The winner-determination problem in the reverse auction is to determine the allocation, x*, that minimizes the cost to the buyer, or forfeits trade if the minimal cost is greater than the value, V.

Let C(I) denote the minimal cost given bids from all sellers, and let C(I \ i) denote the minimal cost without bids from seller i. We can assume, without loss of generality, that there is an efficient trade and V ≥ C(I).
Otherwise, the efficient outcome is no trade, and the outcome of the VCG mechanism is no trade and no payments.

The VCG mechanism implements the outcome x* that minimizes cost based on bids from all sellers, and then provides payment p_vcg,i = p_ask,i(x*_i) + [V − C(I) − max(0, V − C(I \ i))] to each seller. The total payment is collected from the buyer. Again, in equilibrium each seller's payoff is exactly the marginal value that the seller contributes to the economic efficiency of the system; in the simple case that V ≥ C(I \ i) for all sellers i, this is precisely C(I \ i) − C(I).

Although the VCG mechanism remains strategyproof for sellers in the reverse direction, its applicability is limited to cases in which the total payments to the sellers are less than the buyer's value. Otherwise, there will be instances in which the buyer will not choose to voluntarily participate in the mechanism, based on its own value and its beliefs about the costs of sellers. This leads to a loss in efficiency when the buyer chooses not to participate, because efficient trades are missed. This problem with the size of the payments does not occur in simple single-item reverse auctions, or even in multi-unit reverse auctions with a buyer that has a constant marginal valuation for each additional item that she procures.[5] Intuitively, the problem occurs in the reverse multi-unit setting because the buyer demands a fixed number of items, and has zero value without them. This leads to the possibility of the trade being contingent on the presence of particular, so-called pivotal sellers. Define a seller i as pivotal if C(I) ≤ V but C(I \ i) > V. In words, there would be no efficient trade without the seller.
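The payment rule and the pivotal-seller definition can be checked numerically from the cost function alone, assuming truthful sellers (so each winning seller's reported cost equals her true cost). The helper names and the cost values below are illustrative:

```python
# Equilibrium payoffs under the reverse-auction VCG payment rule above.
# With truthful sellers, seller i's payoff is V - C(I) - max(0, V - C(I\i)),
# and the buyer's payoff is V - C(I) minus the sellers' total payoff.

def reverse_vcg_payoffs(V, C_I, C_minus):
    """V: buyer value, C_I = C(I), C_minus[i] = C(I \\ i)."""
    sellers = {i: V - C_I - max(0, V - Ci) for i, Ci in C_minus.items()}
    buyer = (V - C_I) - sum(sellers.values())
    return buyer, sellers

def pivotal_sellers(V, C_I, C_minus):
    """Sellers without whom there is no efficient trade."""
    return [i for i, Ci in C_minus.items() if C_I <= V < Ci]

# Illustrative instance: V = 150, C(I) = 50, and C(I \ i) as listed.
buyer, sellers = reverse_vcg_payoffs(150, 50, {1: 70, 2: 100, 3: 70})
# buyer payoff 10, seller payoffs 20, 50, 20: the buyer's value covers
# the payments.  Raising C(I \ 1) and C(I \ 3) to 80 drives the buyer's
# payoff to -10, which is exactly the participation problem.
```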
Any time there is a pivotal seller, the VCG payments to that seller allow her to extract all of the surplus, and the payments are too large to sustain with the buyer's value unless this is the only winning seller.

Concretely, we avoid this participation problem in the reverse auction exactly when the total payoff from the efficient allocation covers the total payoff to the sellers in equilibrium:

V − C(I) ≥ Σ_i [V − C(I) − max(0, V − C(I \ i))]

As stated above, first notice that we require V > C(I \ i) for all sellers i. In other words, there must be no pivotal sellers. Given this, it is then necessary and sufficient that:

V − C(I) ≥ Σ_i (C(I \ i) − C(I))     (1)

[5] To make the reverse auction symmetric with the forward direction, we would need a buyer with a constant marginal value to buy the first M units, and zero value for additional units. The payments to the sellers would never exceed the buyer's value in this case. Conversely, to make the forward auction symmetric with the reverse auction, we would need a seller with a constant (and high) marginal cost to sell anything less than the first M units, and then a low (or zero) marginal cost. The total payments received by the seller can be less than the seller's cost for the outcome in this case.

In words, the surplus of the efficient allocation must be greater than the total marginal surplus provided by each seller.[6] Consider an example with 3 agents {1, 2, 3}, and V = 150 and C(123) = 50. Condition (1) holds when C(12) = C(23) = 70 and C(13) = 100, but not when C(12) = C(23) = 80 and C(13) = 100. In the first case, the agent payoffs π = (π_0, π_1, π_2, π_3), where 0 is the buyer, are (10, 20, 50, 20). In the second case, the payoffs are π = (−10, 30, 50, 30).

One thing we do know, because the VCG mechanism will maximize the payoff to the buyer across all efficient mechanisms [14], is that whenever Eq.
1 is not satisfied, there can be no efficient auction mechanism.[7]

2.3 ε-Strategyproofness
We now consider the same VCG mechanism, but with an approximation scheme for the underlying allocation problem. We derive an ε-strategyproofness result that bounds the maximal gain in payoff that an agent can expect to achieve through a unilateral deviation from following a simple truth-revealing strategy. We describe the result for the forward auction direction, but it is quite a general observation.

As before, let V(I) denote the value of the optimal solution to the allocation problem with truthful bids from all agents, and V(I \ i) denote the value of the optimal solution computed without bids from agent i. Let V̂(I) and V̂(I \ i) denote the values of the allocations computed with an approximation scheme, and assume that the approximation satisfies:

(1 + ε) V̂(I) ≥ V(I)

for some ε > 0. We provide such an approximation scheme for our setting later in the paper. Let x̂ denote the allocation implemented by the approximation scheme.

The payoff to agent i, for announcing valuation v̂_i, is:

v_i(x̂_i) + Σ_{j≠i} v̂_j(x̂_j) − V̂(I \ i)

The final term is independent of the agent's announced value, and can be ignored in an incentive analysis. However, agent i can try to improve its payoff through the effect of its announced value on the allocation x̂ implemented by the mechanism. In particular, agent i wants the mechanism to select x̂ to maximize the sum of its true value, v_i(x̂_i), and the reported value of the other agents, Σ_{j≠i} v̂_j(x̂_j). If the mechanism's allocation algorithm is optimal, then all the agent needs to do is truthfully state its value and the mechanism will do the rest.
However, faced with an approximate allocation algorithm, the agent can try to improve its payoff by announcing a value that corrects for the approximation, and causes the approximation algorithm to implement the allocation that exactly maximizes the total reported value of the other agents together with its own actual value [18].

[6] This condition is implied by the agents are substitutes requirement [3], which has received some attention in the combinatorial auction literature because it characterizes the case in which VCG payments can be supported in a competitive equilibrium. Useful characterizations of conditions that satisfy agents are substitutes, in terms of the underlying valuations of agents, have proved quite elusive.

[7] Moreover, although there is a small literature on maximally-efficient mechanisms subject to requirements of voluntary participation and budget balance (i.e. with the mechanism neither introducing nor removing money), analytic results are only known for simple problems (e.g. [16, 4]).

We can now analyze the best possible gain from manipulation to an agent in our setting. We first assume that the other agents are truthful, and then relax this. In both cases, the maximal benefit to agent i occurs when the initial approximation is worst-case. With truthful reports from other agents, this occurs when the value of the chosen allocation x̂ is V(I)/(1 + ε). Then, an agent could hope to receive an improved payoff of:

V(I) − V(I)/(1 + ε) = [ε/(1 + ε)] V(I)

This is possible if the agent is able to select a reported type that corrects the approximation algorithm, and makes the algorithm implement the allocation with value V(I).
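The worst-case gain just derived is easy to sanity-check numerically (the values below are illustrative):

```python
# Worst-case manipulation gain under a (1+eps)-approximate allocation:
# if the approximate choice has value V/(1+eps), correcting it to the
# optimum is worth V - V/(1+eps) = (eps/(1+eps)) * V to the deviator.

def max_gain(V, eps):
    return V - V / (1.0 + eps)

V, eps = 1000.0, 0.05
# for eps = 5% the bound is ~4.76% of V, strictly less than eps * V
assert abs(max_gain(V, eps) - (eps / (1.0 + eps)) * V) < 1e-9
assert max_gain(V, eps) < eps * V
```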
Thus, if other agents are truthful, then with a (1 + ε)-approximation scheme to the allocation problem, no agent can improve its payoff by more than a factor ε/(1 + ε) of the value of the optimal solution.

The analysis is very similar when the other agents are not truthful. In this case, an individual agent can improve its payoff by no more than a factor ε/(1 + ε) of the value of the optimal solution given the values reported by the other agents.

Let V in the following theorem denote the total value of the efficient allocation, given the reported values of agents j ≠ i and the true value of agent i.

THEOREM 1. A VCG-based mechanism with a (1 + ε)-approximation allocation algorithm is [ε/(1 + ε)]V-strategyproof for agent i, and agent i can gain at most this payoff through some non-truthful strategy.

Notice that we did not need to bound the error on the allocation problems without each agent, because the ε-strategyproofness result follows from the accuracy of the first term in the VCG payment and is independent of the accuracy of the second term. However, the accuracy of the solution to the problem without each agent is important to implement a good approximation to the revenue properties of the VCG mechanism.

3. THE GENERALIZED KNAPSACK PROBLEM
In this section, we design a fully polynomial approximation scheme for the generalized knapsack, which models the winner-determination problem for the VCG-based multi-unit auctions. We describe our results for the reverse auction variation, but the formulation is completely symmetric for the forward auction.

In describing our approximation scheme, we begin with a simple property (the Anchor property) of an optimal knapsack solution. We use this property to develop an O(n²) time 2-approximation for the generalized knapsack.
In turn, we use this basic approximation to develop our fully polynomial-time approximation scheme (FPTAS).

One of the major appeals of our piecewise bidding language is its compact representation of the bidders' valuation functions. We strive to preserve this, and present an approximation scheme that depends only on the number of bidders, and not on the maximum quantity, M, which can be very large in realistic procurement settings.

The FPTAS implements a (1 + ε) approximation to the optimal solution x*, in worst-case time T = O(n³/ε), where n is the number of bidders, and where we assume that the piecewise bid for each bidder has O(1) pieces. The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting nc for each occurrence of n.

3.1 Preliminaries
Before we begin, let us recall the classic 0/1 knapsack problem: we are given a set of n items, where item i has value v_i and size s_i, and a knapsack of capacity M; all sizes are integers. The goal is to determine a subset of items of maximum value with total size at most M. Since we want to focus on a reverse auction, the equivalent knapsack problem is to choose a set of items with minimum value (i.e. cost) whose size exceeds M. The generalized knapsack problem of interest to us can be defined as follows:

Generalized Knapsack:
Instance: A target M, and a set of n lists, where the ith list has the form
B_i = ((u_i^1, p_i^1), ..., (u_i^{m_i−1}, p_i^{m_i−1}), (u_i^{m_i}, ∞)),
where the u_i^j are increasing with j, the p_i^j are decreasing with j, and the u_i^j, p_i^j, M are positive integers.
Problem: Determine a set of integers x_i^j such that
1. (One per list) At most one x_i^j is non-zero for any i,
2. (Membership) x_i^j ≠ 0 implies x_i^j ∈ [u_i^j, u_i^{j+1}),
3. (Target) Σ_i Σ_j x_i^j ≥ M, and
4.
(Objective) Σ_i Σ_j p_i^j x_i^j is minimized.

This generalized knapsack formulation is a clear generalization of the classic 0/1 knapsack. In the latter, each list consists of a single point (s_i, v_i).[8]

[8] In fact, because of the one per list constraint, the generalized problem is closer in spirit to the multiple choice knapsack problem [9], where the underlying set of items is partitioned into disjoint subsets U_1, U_2, ..., U_k, and one can choose at most one item from each subset. A PTAS does exist for this problem [10], and indeed one can convert our problem into a huge instance of the multiple choice knapsack problem, by creating one group for each list and putting a (quantity, price) tuple (x, p) for each possible quantity for a bidder into his group (subset). However, this conversion explodes the problem size, making it infeasible for all but the most trivial instances.

The connection between the generalized knapsack and our auction problem is transparent. Each list encodes a bid, representing multiple mutually exclusive quantity intervals, and one can choose any quantity in an interval, but at most one interval can be selected. Choosing the interval [u_i^j, u_i^{j+1}) has cost p_i^j per unit. The goal is to procure at least M units of the good at minimum possible cost. The problem has some flavor of the continuous knapsack problem. However, there are two major differences that make our problem significantly more difficult: (1) intervals have boundaries, and so choosing the interval [u_i^j, u_i^{j+1}) requires that at least u_i^j and at most u_i^{j+1} units must be taken; (2) unlike the classic knapsack, we cannot sort the items (bids) by value/size, since different intervals in one list have different unit costs.

3.2 A 2-Approximation Scheme
We begin with a definition. Given an instance of the generalized knapsack, we call each tuple t_i^j = (u_i^j, p_i^j) an anchor. Recall that these tuples represent the breakpoints in the piecewise constant curve bids. We say that the size of an anchor t_i^j is u_i^j, the minimum number of units available at this anchor's price p_i^j. The cost of the anchor t_i^j is defined to be the minimum total price associated with this tuple, namely, cost(t_i^j) = p_i^j u_i^j if j < m_i, and cost(t_i^{m_i}) = p_i^{m_i−1} u_i^{m_i}.

In a feasible solution {x_1, x_2, ..., x_n} of the generalized knapsack, we say that an element x_i ≠ 0 is an anchor if x_i = u_i^j for some anchor u_i^j. Otherwise, we say that x_i is midrange. We observe that an optimal knapsack solution can always be constructed so that at most one solution element is midrange. If there are two midrange elements x and x', for bids from two different agents, then we can increment one and decrement the other, keeping the total quantity fixed, until one of them becomes an anchor. See Figure 2 for an example.

LEMMA 1. [Anchor Property] There exists an optimal solution of the generalized knapsack problem with at most one midrange element. All other elements are anchors.

Figure 2: (i) An optimal solution with more than one bid not anchored (2, 3); (ii) an optimal solution with only one bid (3) not anchored.

We use the anchor property to first obtain a polynomial-time 2-approximation scheme. We do this by solving several instances of a restricted generalized-knapsack problem, which we call iKnapsack, where one element is forced to be midrange for a particular interval.

Specifically, suppose element x_l for agent l is forced to lie in its jth range, [u_l^j, u_l^{j+1}), while all other elements, x_1, ..., x_{l−1}, x_{l+1}, ..., x_n, are required to be anchors, or zero. This corresponds
This corresponds\nto the restricted problem iKnapsack( , j), in which the goal is to\nobtain at least M \u2212 uj\nunits with minimum cost. Element x\nis assumed to have already contributed uj\nunits. The value of\na solution to iKnapsack( , j) represents the minimal additional\ncost to purchase the rest of the units.\nWe create n \u2212 1 groups of potential anchors, where ith group\ncontains all the anchors of the list i in the generalized knapsack.\nThe group for agent l contains a single element that represents\nthe interval [0, uj+1\n\u2212uj\n), and the associated unit-price pj\n. This\ninterval represents the excess number of units that can be taken\nfrom agent l in iKnapsack( , j), in addition to uj\n, which has\nalready been committed. In any other group, we can choose at\nmost one anchor.\nThe following pseudo-code describes our algorithm for this\nrestriction of the generalized knapsack problem. U is the union\nof all the tuples in n groups, including a tuple t for agent l. The\nsize of this special tuple is defined as uj+1\n\u2212 uj\n, and the cost is\ndefined as pj\nl (uj+1\n\u2212uj\n). R is the number of units that remain to\nbe acquired. S is the set of tuples accepted in the current tentative\n170\nsolution. Best is the best solution found so far. Variable Skip is\nonly used in the proof of correctness.\nAlgorithm Greedy( , j)\n1. Sort all tuples of U in the ascending order of unit price; in\ncase of ties, sort in ascending order of unit quantities.\n2. Set mark(i) = 0, for all lists i = 1, 2, . . . , n.\nInitialize R = M \u2212 uj\n, S = Best = Skip = \u2205.\n3. Scan the tuples in U in the sorted order. Suppose the next\ntuple is tk\ni , i.e. 
the kth anchor from agent i. If mark(i) = 1, ignore this tuple; otherwise do the following steps:
• if size(t_i^k) > R and i = l,
return min{cost(S) + R p_l^j, cost(Best)};
• if size(t_i^k) > R and cost(t_i^k) ≤ cost(S),
return min{cost(S) + cost(t_i^k), cost(Best)};
• if size(t_i^k) > R and cost(t_i^k) > cost(S),
add t_i^k to Skip; set Best to S ∪ {t_i^k} if the cost improves;
• if size(t_i^k) ≤ R, then
add t_i^k to S; set mark(i) = 1; subtract size(t_i^k) from R.

The approximation algorithm is very similar to the approximation algorithm for the classic knapsack. Since we wish to minimize the total cost, we consider the tuples in order of increasing unit cost. If the size of tuple t_i^k is smaller than R, then we add it to S, update R, and delete from U all the tuples that belong to the same group as t_i^k. If size(t_i^k) is greater than R, then S along with t_i^k forms a feasible solution. However, this solution can be far from optimal if the size of t_i^k is much larger than R. If the total cost of S and t_i^k is smaller than the current best solution, we update Best. One exception to this rule is the tuple t_l. Since this tuple can be taken fractionally, we update Best if the sum of S's cost and the fractional cost of t_l is an improvement.

The algorithm terminates in either of the first two cases, or when all tuples are scanned. In particular, it terminates whenever we find a t_i^k such that size(t_i^k) is greater than R but cost(t_i^k) is less than cost(S), or when we reach the tuple representing agent l and it gives a feasible solution.

LEMMA 2. Suppose A* is an optimal solution of the generalized knapsack, and suppose that element (l, j) is midrange in the optimal solution. Then the cost V(l, j), returned by Greedy(l, j), satisfies:

V(l, j) + cost(t_l^j) ≤ 2 cost(A*)

PROOF.
Let V(l, j) be the value returned by Greedy(l, j) and let V*(l, j) be the value of an optimal solution for iKnapsack(l, j). Consider the set Skip at the termination of Greedy(l, j). There are two cases to consider: either some tuple t ∈ Skip is also in the optimal solution, or no tuple in Skip is.

In the first case, let S_t be the tentative solution S at the time t was added to Skip. Because t ∈ Skip, we have size(t) > R, so S_t together with t forms a feasible solution, and therefore:

V(l, j) ≤ cost(Best) ≤ cost(S_t) + cost(t).

Again, because t ∈ Skip, we have cost(t) > cost(S_t), and so V(l, j) < 2 cost(t). On the other hand, since t is included in the optimal solution, we have V*(l, j) ≥ cost(t). These two inequalities imply the desired bound:

V*(l, j) ≤ V(l, j) < 2 V*(l, j).

In the second case, imagine a modified instance of iKnapsack(l, j) that excludes all the tuples of the set Skip. Since none of these tuples were included in the optimal solution, the optimal solution for the modified problem is the same as the one for the original. Suppose our approximation algorithm returns the value V'(l, j) for this modified instance. Let t' be the last tuple considered by the approximation algorithm before termination on the modified instance, and let S_t' be the corresponding tentative solution set in that step. Since we consider tuples in order of increasing unit price, and none of the tuples are going to be placed in the set Skip, we must have cost(S_t') < V*(l, j), because S_t' is the optimal way to obtain size(S_t') units.

We also have cost(t') ≤ cost(S_t'), and the following inequalities:

V(l, j) ≤ V'(l, j) ≤ cost(S_t') + cost(t') < 2 V*(l, j)

The inequality V(l, j) ≤ V'(l, j) follows from the fact that a tuple in the Skip list can only affect Best but not the tentative solutions.
Therefore, dropping the tuples in the set Skip can only make the solution worse.

The above argument has shown that the value returned by Greedy(l, j) is within a factor 2 of the optimal solution for iKnapsack(l, j). We now show that the value V(l, j) plus cost(t_l^j) is a 2-approximation of the original generalized knapsack problem.

Let A* be an optimal solution of the generalized knapsack, and suppose that element x_l^j is midrange. Let x_{−l} be the set of the remaining elements, either zero or anchors, in this solution. Furthermore, define x_l = x_l^j − u_l^j. Thus,

cost(A*) = cost(x_l) + cost(t_l^j) + cost(x_{−l})

It is easy to see that (x_{−l}, x_l) is an optimal solution for iKnapsack(l, j). Since V(l, j) is a 2-approximation for this optimal solution, we have the following inequalities:

V(l, j) + cost(t_l^j) ≤ cost(t_l^j) + 2(cost(x_l) + cost(x_{−l}))
≤ 2(cost(x_l) + cost(t_l^j) + cost(x_{−l}))
≤ 2 cost(A*)

This completes the proof of Lemma 2.

It is easy to see that, after an initial sorting of the tuples in U, the algorithm Greedy(l, j) takes O(n) time. We have our first polynomial approximation algorithm.

THEOREM 2. A 2-approximation of the generalized knapsack problem can be found in time O(n²), where n is the number of item lists (each of constant length).

PROOF. We run the algorithm Greedy(l, j) once for each tuple (l, j) as a candidate for midrange. There are O(n) tuples, and it suffices to sort them once, so the total cost of the algorithm is O(n²).
By Lemma 1, there is an optimal solution with at most one midrange element, so our algorithm will find a 2-approximation, as claimed.

The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time is O((nc)²).

3.3 An Approximation Scheme
We now use the 2-approximation algorithm presented in the preceding section to develop a fully polynomial approximation scheme (FPTAS) for the generalized knapsack problem. The high-level idea is fairly standard, but the details require technical care. We use a dynamic programming algorithm to solve iKnapsack(l, j) for each possible midrange element, with the 2-approximation algorithm providing an upper bound on the value of the solution and enabling the use of scaling on the cost dimension of the dynamic programming (DP) table.

Consider, for example, the case that the midrange element is x_l, which falls in the range [u_l^j, u_l^{j+1}). In our FPTAS, rather than using the greedy approximation algorithm to solve iKnapsack(l, j), we construct a dynamic programming table to compute the minimum cost at which at least M − u_l^{j+1} units can be obtained using the remaining n − 1 lists in the generalized knapsack.

Suppose G[i, r] denotes the maximum number of units that can be obtained at cost at most r using only the first i lists in the generalized knapsack. Then the following recurrence relation describes how to construct the dynamic programming table:

G[0, r] = 0
G[i, r] = max { G[i − 1, r], max_{j ∈ β(i,r)} { G[i − 1, r − cost(t_i^j)] + u_i^j } }

where β(i, r) = {j : 1 ≤ j ≤ m_i, cost(t_i^j) ≤ r} is the set of anchors for agent i that are affordable at cost r. As a convention, agent i indexes the rows, and cost r indexes the columns.

This dynamic programming algorithm is only pseudo-polynomial, since the number of columns in the dynamic programming table depends upon the total cost.
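The recurrence can be implemented directly. The sketch below uses unscaled costs and a small hypothetical instance (anchor costs are units × unit price, per the definition in Section 3.2):

```python
# Unscaled dynamic program for iKnapsack: G[i][r] = max units obtainable
# at total cost <= r, using at most one anchor from each of the first i
# lists.  The instance at the bottom is illustrative.

def build_G(anchors, max_cost):
    """anchors[i] = list of (units u, anchor cost) tuples for list i."""
    n = len(anchors)
    G = [[0] * (max_cost + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for r in range(max_cost + 1):
            best = G[i - 1][r]          # take nothing from list i
            for u, c in anchors[i - 1]:
                if c <= r:              # beta(i, r): anchors affordable at r
                    best = max(best, G[i - 1][r - c] + u)
            G[i][r] = best
    return G

# Two lists: list 1 offers 5 units @ $5/unit or 10 units @ $4/unit;
# list 2 offers 4 units @ $3/unit or 8 units @ $2/unit.
anchors = [[(5, 25), (10, 40)],
           [(4, 12), (8, 16)]]
G = build_G(anchors, max_cost=60)
# G[2][56] -> 18 (both large anchors), G[2][16] -> 8 (list 2 alone)
```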
However, we can convert it into an FPTAS by scaling the cost dimension.

Let A denote the 2-approximation to the generalized knapsack problem, with total cost cost(A). Let ε denote the desired approximation factor. We compute the scaled cost of a tuple t_i^j, denoted scost(t_i^j), as

scost(t_i^j) = ⌈ n cost(t_i^j) / (ε cost(A)) ⌉     (2)

This scaling improves the running time of the algorithm because the number of columns in the modified table is at most n/ε, independent of the total cost. However, the computed solution might not be an optimal solution for the original problem. We show that the error introduced is within a factor of ε of the optimal solution.

As a prelude to our approximation guarantee, we first show that if two different solutions to the iKnapsack problem have equal scaled cost, then their original (unscaled) costs cannot differ by more than ε cost(A).

LEMMA 3. Let x and y be two distinct feasible solutions of iKnapsack(l, j), excluding their midrange elements. If x and y have equal scaled costs, then their unscaled costs cannot differ by more than ε cost(A).

PROOF. Let I_x and I_y, respectively, denote the indicator functions associated with the anchor vectors x and y; there is a 1 in position I_x[i, k] if x_i^k > 0.
Since x and y have equal scaled cost,

Σ_{i≠l, k} scost(t_i^k) I_x[i, k] = Σ_{i≠l, k} scost(t_i^k) I_y[i, k]     (3)

However, by (2), the scaled costs satisfy the following inequalities:

(scost(t_i^k) − 1) ε cost(A)/n ≤ cost(t_i^k) ≤ scost(t_i^k) ε cost(A)/n     (4)

Substituting the upper bound from (4) for cost(x), the lower bound from (4) for cost(y), and using equality (3) to simplify, we have:

cost(x) − cost(y) ≤ (ε cost(A)/n) Σ_{i≠l, k} I_y[i, k] ≤ ε cost(A).

The last inequality uses the fact that at most n components of an indicator vector are non-zero; that is, any feasible solution contains at most n tuples.

Finally, given the dynamic programming table for iKnapsack(l, j), we consider all the entries in the last row of this table, G[n − 1, r]. These entries correspond to optimal solutions using all agents except l, for different levels of cost. In particular, we consider the entries that provide at least M − u_l^{j+1} units. Together with a contribution from agent l, we choose the entry in this set that minimizes the total cost, defined as follows:

cost(G[n − 1, r]) + max{u_l^j, M − G[n − 1, r]} p_l^j,

where cost(·) is the original, unscaled cost associated with entry G[n − 1, r]. It is worth noting that, unlike the 2-approximation scheme for iKnapsack(l, j), the value computed with this FPTAS includes the cost to acquire the first u_l^j units from agent l.

The following lemma shows that we achieve a (1 + 2ε)-approximation for each midrange choice.

LEMMA 4. Suppose A* is an optimal solution of the generalized knapsack problem, and suppose that element (l, j) is midrange in the optimal solution. Then the solution A(l, j) from running the scaled dynamic programming algorithm on iKnapsack(l, j) satisfies

cost(A(l, j)) ≤ (1 + 2ε) cost(A*)

PROOF. Let x_{−l} denote the vector of the elements in solution A* without element l.
Then, by definition, cost(A*) = cost(x_{−l}) + p_l^j x_l^j. Let r = scost(x_{−l}) be the scaled cost associated with the vector x_{−l}. Now consider the dynamic programming table constructed for iKnapsack(l, j), and consider its entry G[n − 1, r]. Let A denote the 2-approximation to the generalized knapsack problem, and A(l, j) denote the solution from the dynamic programming algorithm.

Suppose y_{−l} is the solution associated with this entry in our dynamic program; the components of the vector y_{−l} are the quantities from the different lists. Since both x_{−l} and y_{−l} have equal scaled costs, by Lemma 3 their unscaled costs are within ε cost(A) of each other; that is,

cost(y_{−l}) − cost(x_{−l}) ≤ ε cost(A).

Now define y_l^j = max{u_l^j, M − Σ_{i≠l} Σ_j y_i^j}; this is the contribution needed from agent l to make (y_{−l}, y_l^j) a feasible solution. Among all the equal-cost solutions, our dynamic programming table chooses the one with maximum units. Therefore,

Σ_{i≠l} Σ_j y_i^j ≥ Σ_{i≠l} Σ_j x_i^j

Therefore, it must be the case that y_l^j ≤ x_l^j. Because (y_l^j, y_{−l}) is also a feasible solution, if our algorithm returns a solution with cost cost(A(l, j)), then we must have

cost(A(l, j)) ≤ cost(y_{−l}) + p_l^j y_l^j
≤ cost(x_{−l}) + ε cost(A) + p_l^j x_l^j
≤ (1 + 2ε) cost(A*),

where we use the fact that cost(A) ≤ 2 cost(A*).

Putting this together, our approximation scheme for the generalized knapsack problem iterates the scheme described above for each choice of the midrange element (l, j), and chooses the best solution from among these O(n) solutions.

For a given midrange, the most expensive step in the algorithm is the construction of the dynamic programming table, which can be done in O(n²/ε) time, assuming a constant number of intervals per list. Thus, we have the following result.

THEOREM 3.
We can compute a (1 + ε) approximation to the solution of a generalized knapsack problem in worst-case time O(n³/ε).

The dependence on the number of pieces is also polynomial: if each bid has a maximum of c pieces, then the running time can be derived by substituting cn for each occurrence of n.

4. COMPUTING VCG PAYMENTS
We now consider the related problem of computing the VCG payments for all the agents. A naive approach requires solving the allocation problem n times, removing each agent in turn. In this section, we show that our approximation scheme for the generalized knapsack can be extended to determine all n payments in total time O(αT log(αn/ε)), where 1 ≤ C(I \ i)/C(I) ≤ α for a constant upper bound α, and T is the complexity of solving the allocation problem once. This α-bound can be justified as a no monopoly condition, because it bounds the marginal value that a single agent brings to the auction. In the reverse variation, we can compute the VCG payments to each seller in time O(αT log(αn/ε)), where α bounds the ratio C(I \ i)/C(I) for all i.

Our overall strategy will be to build two dynamic programming tables, forward and backward, for each midrange element (l, j) once. The forward table is built by considering the agents in the order of their indices, whereas the backward table is built by considering them in the reverse order. The optimal solution corresponding to C(I \ i) can be broken into two parts: one corresponding to the first (i − 1) agents and the other corresponding to the last (n − i) agents. As the (i − 1)th row of the forward table corresponds to the sellers with the first (i − 1) indices, an approximation to the first part will be contained in the (i − 1)th row of the forward table. Similarly, the (n − i)th row of the backward table will contain an approximation for the second part.
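The prefix/suffix decomposition can be illustrated on a deliberately simplified variant in which each seller offers a single all-or-nothing anchor of s_i units at total cost c_i. The table construction and the row-combination step mirror the description above, though the real scheme works over the scaled iKnapsack tables; all names and the instance are illustrative:

```python
# Simplified illustration of the forward/backward-table idea: each seller i
# offers one anchor of sizes[i] units at total cost costs[i].  T[i][r] = max
# units obtainable from the first i sellers of the given ordering at total
# cost <= r.

def build_table(sizes, costs, max_cost):
    n = len(sizes)
    T = [[0] * (max_cost + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        s, c = sizes[i - 1], costs[i - 1]
        for r in range(max_cost + 1):
            take = T[i - 1][r - c] + s if c <= r else 0
            T[i][r] = max(T[i - 1][r], take)
    return T

def cost_without(i, sizes, costs, M, max_cost):
    """Min cost to cover M units with seller i removed, by combining a
    forward-table row (first i-1 sellers) with a backward-table row
    (last n-i sellers) instead of re-solving from scratch."""
    n = len(sizes)
    F = build_table(sizes, costs, max_cost)              # forward ordering
    B = build_table(sizes[::-1], costs[::-1], max_cost)  # backward ordering
    best = None
    for r1 in range(max_cost + 1):
        for r2 in range(max_cost + 1 - r1):
            if F[i - 1][r1] + B[n - i][r2] >= M:
                if best is None or r1 + r2 < best:
                    best = r1 + r2
    return best
```

In the full scheme the two tables are built once per midrange element and shared across all removed agents i, which is what saves the factor of n over the naive approach.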
We first present a simple but inefficient way of computing the approximate value of C(I\i), which illustrates the main idea of our algorithm. Then we present an improved scheme, which uses the fact that the elements in the rows are sorted to compute the approximate value more efficiently.
In the following, we concentrate on computing an allocation with x_l^j being midrange, and some agent i ≠ l removed. This will be a component in computing an approximation to C(I\i), the value of the solution to the generalized knapsack without bids from agent i. We begin with the simple scheme.
4.1 A Simple Approximation Scheme
We implement the scaled dynamic programming algorithm for iKnapsack(l, j) with two alternate orderings over the other sellers k ≠ l: one with the sellers ordered 1, 2, . . . , n, and one with the sellers ordered n, n − 1, . . . , 1. We call the first table the forward table, denoted F_l, and the second the backward table, denoted B_l. The subscript reminds us that agent l is midrange.9
In building these tables, we use the same scaling factor as before; namely, the cost of a tuple t_i^j is scaled as follows:
scost(t_i^j) = n · cost(t_i^j) / (ε · cost(A)),
where cost(A) is the upper bound on C(I) given by our 2-approximation scheme. In this case, because C(I\i) can be α times C(I), the scaled value of C(I\i) can be at most nα/ε. Therefore, the cost dimension of our dynamic program's table will be nα/ε.
Figure 3: Computing VCG payments. The forward table F_l (with row F_l(i − 1) and entry g) and the backward table B_l (with row B_l(n − i) and entry h); the cost dimension is m = nα/ε.
Now, suppose we want to compute a (1 + ε)-approximation to the generalized knapsack problem restricted to element (l, j) midrange, and further restricted to remove bids from some seller i ≠ l.
Call this problem iKnapsack^{−i}(l, j).
Recall that the ith row of our DP table stores the best solution possible using only the first i agents excluding agent l, all of them either cleared at zero or on anchors. These first i agents are a different subset of agents in the forward and the backward tables. By carefully combining one row of F_l with one row of B_l we can compute an approximation to iKnapsack^{−i}(l, j). We consider the row of F_l that corresponds to solutions constructed from agents {1, 2, . . . , i − 1}, skipping agent l. We consider the row of B_l that corresponds to solutions constructed from agents {i + 1, i + 2, . . . , n}, again skipping agent l. The rows are labeled F_l(i − 1) and B_l(n − i), respectively.10 The scaled costs for acquiring these units are the column indices for these entries. To solve iKnapsack^{−i}(l, j) we choose one entry from row F_l(i − 1) and one from row B_l(n − i) such that their total quantity exceeds M − u_l^{j+1} and their combined cost is minimum over all such combinations. Formally, let g ∈ F_l(i − 1) and h ∈ B_l(n − i) denote entries in each row, with size(g) and size(h) denoting the number of units, and cost(g) and cost(h) denoting the unscaled cost associated with each entry. We compute the following, subject to the condition that g and h satisfy size(g) + size(h) > M − u_l^{j+1}:
min over g ∈ F_l(i − 1), h ∈ B_l(n − i) of [cost(g) + cost(h) + p_l^j · max{u_l^j, M − size(g) − size(h)}]   (5)
9 We could label the tables with both l and j, to indicate that the jth tuple is forced to be midrange, but omit j to avoid clutter.
10 To be precise, the indices of the rows are (i − 2) and (n − i) for F_l and B_l when l < i, and (i − 1) and (n − i − 1), respectively, when l > i.
LEMMA 5.
Suppose A^{−i} is an optimal solution of the generalized knapsack problem without bids from agent i, and suppose that element (l, j) is the midrange element in the optimal solution. Then the expression in Eq. 5, for the restricted problem iKnapsack^{−i}(l, j), computes a (1 + ε)-approximation to A^{−i}.
PROOF. From earlier, we define cost(A^{−i}) = C(I\i). We can split the optimal solution A^{−i} into three disjoint parts: x_l corresponds to the midrange seller, x_i corresponds to the first i − 1 sellers (skipping agent l if l < i), and x_{−i} corresponds to the last n − i sellers (skipping agent l if l > i). We have:
cost(A^{−i}) = cost(x_i) + cost(x_{−i}) + p_l^j x_l^j.
Let r_i = scost(x_i) and r_{−i} = scost(x_{−i}). Let y_i and y_{−i} be the solution vectors corresponding to scaled costs r_i and r_{−i} in F_l(i − 1) and B_l(n − i), respectively. From Lemma 3 we conclude that
cost(y_i) + cost(y_{−i}) − cost(x_i) − cost(x_{−i}) ≤ εcost(A),
where cost(A) is the upper bound on C(I) computed with the 2-approximation.
Among all equal scaled-cost solutions, our dynamic program chooses the one with the maximum number of units. Therefore we also have
size(y_i) ≥ size(x_i) and size(y_{−i}) ≥ size(x_{−i}),
where we use the shorthand size(x) to denote the total number of units over all tuples in x.
Now, define y_l^j = max(u_l^j, M − size(y_i) − size(y_{−i})). From the preceding inequalities, we have y_l^j ≤ x_l^j. Since (y_l^j, y_i, y_{−i}) is also a feasible solution to the generalized knapsack problem without agent i, the value returned by Eq.
5 is at most
cost(y_i) + cost(y_{−i}) + p_l^j y_l^j ≤ C(I\i) + εcost(A)
≤ C(I\i) + 2εcost(A*)
≤ C(I\i) + 2εC(I\i).
This completes the proof.
A naive implementation of this scheme will be inefficient because it might check (nα/ε)² pairs of elements for any particular choice of (l, j) and choice of dropped agent i. In the next section, we present an efficient way to compute Eq. 5, and eventually to compute the VCG payments.
4.2 Improved Approximation Scheme
Our improved approximation scheme for the winner-determination problem without agent i uses the fact that the elements in F_l(i − 1) and B_l(n − i) are sorted; specifically, both unscaled cost and quantity (i.e., size) increase from left to right. As before, let g and h denote generic entries in F_l(i − 1) and B_l(n − i), respectively. To compute Eq. 5, we consider all the tuple pairs, and first divide the tuples that satisfy the condition size(g) + size(h) > M − u_l^{j+1} into two disjoint sets. For each set we compute the best solution, and then take the better of the two.
[Case I: size(g) + size(h) ≥ M − u_l^j]
The problem reduces to
min over g ∈ F_l(i − 1), h ∈ B_l(n − i) of [cost(g) + cost(h) + p_l^j u_l^j]   (6)
We define a pair (g, h) to be feasible if size(g) + size(h) ≥ M − u_l^j. Now, to compute Eq. 6, we do a forward and a backward walk on F_l(i − 1) and B_l(n − i), respectively. We start from the smallest index of F_l(i − 1) and move right, and from the highest index of B_l(n − i) and move left. Let (g, h) be the current pair. If (g, h) is feasible, we decrement B's pointer (that is, move backward); otherwise we increment F's pointer. The feasible pairs found during the walk are used to compute Eq.
6. The complexity of this step is linear in the size of F_l(i − 1), which is O(nα/ε).
[Case II: M − u_l^{j+1} ≤ size(g) + size(h) ≤ M − u_l^j]
The problem reduces to
min over g ∈ F_l(i − 1), h ∈ B_l(n − i) of [cost(g) + cost(h) + p_l^j (M − size(g) − size(h))]
To compute the above equation, we transform the problem into another one using a modified cost, defined as:
mcost(g) = cost(g) − p_l^j · size(g)
mcost(h) = cost(h) − p_l^j · size(h)
The new problem is to compute
min over g ∈ F_l(i − 1), h ∈ B_l(n − i) of [mcost(g) + mcost(h) + p_l^j M]   (7)
The modified cost simplifies the problem, but unfortunately the elements in F_l(i − 1) and B_l(n − i) are no longer sorted with respect to mcost. However, the elements are still sorted in quantity, and we use this property to compute Eq. 7. Call a pair (g, h) feasible if M − u_l^{j+1} ≤ size(g) + size(h) ≤ M − u_l^j. Define the feasible set of g as the elements h ∈ B_l(n − i) that are feasible given g. As the elements are sorted by quantity, the feasible set of g is a contiguous subset of B_l(n − i) and shifts left as g increases.
Figure 4: The feasible set of g = 3, defined on B_l(n − i), is {2, 3, 4} when M − u_l^{j+1} = 50 and M − u_l^j = 60. Begin and End represent the start and end pointers to the feasible set.
Therefore, we can compute Eq. 7 by doing a forward and a backward walk on F_l(i − 1) and B_l(n − i), respectively. We walk on B_l(n − i), starting from the highest index, using two pointers, Begin and End, to indicate the start and end of the current feasible set. We maintain the feasible set as a min-heap, where the key is the modified cost.
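The sliding-window search can be sketched as follows. This is a minimal reconstruction with our own names, and a lazily-deleted `heapq` stands in for explicit Begin/End removal of entries that fall out of the window:

```python
import heapq

def min_pair(F, B, lo, hi):
    """Minimize mcost(g) + mcost(h) over pairs with lo <= size(g) + size(h) <= hi.
    F and B are lists of (size, mcost) sorted by increasing size, mirroring
    rows F_l(i-1) and B_l(n-i): sizes are sorted, modified costs are not."""
    best = float("inf")
    heap = []                    # (mcost, index into B); stale entries removed lazily
    begin = end = len(B)         # current feasible window is B[begin:end]
    for sg, mg in F:             # g moves forward; the window over B shifts left
        while end > 0 and sg + B[end - 1][0] > hi:
            end -= 1             # h too large: leaves the window (lazy-deleted below)
        if begin > end:
            begin = end
        while begin > 0 and sg + B[begin - 1][0] >= lo:
            begin -= 1           # h newly feasible from below: enters the window
            heapq.heappush(heap, (B[begin][1], begin))
        while heap and heap[0][1] >= end:
            heapq.heappop(heap)  # discard heap tops that have left the window
        if heap:
            best = min(best, mg + heap[0][0])
    return best
```

Each index enters and leaves the heap at most once, which is the source of the O((nα/ε) log(nα/ε)) bound in the text; in case I, where only a lower bound applies and costs are sorted, the heap is unnecessary and the plain two-pointer walk suffices.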
To update the feasible set when we increment F's pointer (move forward), we walk left on B, first using End to remove elements from the feasible set which are no longer feasible, and then using Begin to add new feasible elements. For a given g, the only element we need to consider in g's feasible set is the one with minimum modified cost, which can be computed in constant time with the min-heap. So the main complexity of the computation lies in the heap updates. Since any element is added or deleted at most once, there are O(nα/ε) heap updates, and the time complexity of this step is O((nα/ε) log(nα/ε)).
4.3 Collecting the Pieces
The algorithm works as follows. First, using the 2-approximation algorithm, we compute an upper bound on C(I). We use this bound to scale down the tuple costs. Using the scaled costs, we build the forward and backward tables corresponding to each tuple (l, j). The forward tables are used to compute C(I). To compute C(I\i), we iterate over all the possible midrange tuples and use the corresponding forward and backward tables to compute the locally optimal solution using the above scheme. Among all the locally optimal solutions we choose the one with the minimum total cost.
The most expensive step in the algorithm is the computation of C(I\i). The time complexity of this step is O((n²α/ε) log(nα/ε)), as we have to iterate over all O(n) choices of t_l^j, for all l ≠ i, each time using the above scheme to compute Eq. 5. In the worst case, we might need to compute C(I\i) for all n sellers, in which case the final complexity of the algorithm will be O((n³α/ε) log(nα/ε)).
THEOREM 4.
We can compute an ε/(1 + ε)-strategyproof approximation to the VCG mechanism in the forward and reverse multi-unit auctions in worst-case time O((n³α/ε) log(nα/ε)).
It is interesting to recall that T = O(n³/ε) is the time complexity of the FPTAS to the generalized knapsack problem with all agents. Our combined scheme computes an approximation to the complete VCG mechanism, including payments to O(n) agents, in time O(T log(n/ε)), taking the no-monopoly parameter α as a constant. Thus, our algorithm performs much better than the naive scheme, which computes the VCG payment for each agent by solving a new instance of the generalized knapsack problem. The speedup comes from the way we solve iKnapsack^{−i}(l, j). The time complexity of computing iKnapsack^{−i}(l, j) by creating a new dynamic programming table would be O(n²/ε), but by using the forward and backward tables, the complexity is reduced to O((n/ε) log(n/ε)). We can further improve the time complexity of our algorithm by computing Eq. 5 more efficiently. Currently, the algorithm uses a heap, which has logarithmic update time. In the worst case, we can have two heap update operations for each element, which makes the time complexity superlinear. If we could compute Eq. 5 in linear time, then the complexity of computing the VCG payments would be the same as the complexity of solving a single generalized knapsack problem.
5. CONCLUSIONS
We presented a fully polynomial-time approximation scheme for the single-good multi-unit auction problem, using the marginal-decreasing, piecewise-constant bidding language. Our scheme is both approximately efficient and approximately strategyproof within any specified factor ε > 0. As such it is an example of a computationally tractable ε-dominance result, as well as an example of a non-trivial but approximable allocation problem.
It\nis particularly interesting that we are able to compute the\npayments to n agents in a VCG-based mechanism in worst-case time\nO(T log n), where T is the time complexity to compute the\nsolution to a single allocation problem.\n6. REFERENCES\n[1] L M Ausubel and P R Milgrom. Ascending auctions with package\nbidding. Frontiers of Theoretical Economics, 1:1-42, 2002.\n[2] S Bikchandani, S de Vries, J Schummer, and R V Vohra. Linear\nprogramming and Vickrey auctions. Technical report, Anderson\nGraduate School of Management, U.C.L.A., 2001.\n[3] S Bikchandani and J M Ostroy. The package assignment model.\nJournal of Economic Theory, 2002. Forthcoming.\n[4] K Chatterjee and W Samuelson. Bargaining under incomplete\ninformation. Operations Research, 31:835-851, 1983.\n[5] E H Clarke. Multipart pricing of public goods. Public Choice,\n11:17-33, 1971.\n[6] S de Vries and R V Vohra. Combinatorial auctions: A survey.\nInforms Journal on Computing, 2002. Forthcoming.\n[7] M Eso, S Ghosh, J R Kalagnanam, and L Ladanyi. Bid evaluation\nin procurement auctions with piece-wise linear supply curves.\nTechnical report, IBM TJ Watson Research Center, 2001. in\npreparation.\n[8] J Feigenbaum and S Shenker. Distributed Algorithmic Mechanism\nDesign: Recent Results and Future Directions. In Proceedings of\nthe 6th International Workshop on Discrete Algorithms and\nMethods for Mobile Computing and Communications, pages 1-13,\n2002.\n[9] M R Garey and D S Johnson. Computers and Intractability: A\nGuide to the Theory of NP-Completeness. W.H.Freeman and\nCompany, New York, 1979.\n[10] G V Gens and E V Levner. Computational complexity of\napproximation algorithms for combinatorial problems. In\nMathematical Foundation of Computer Science, 292-300, 1979.\n[11] T Groves. Incentives in teams. Econometrica, 41:617-631, 1973.\n[12] J R Kalagnanam, A J Davenport, and H S Lee. 
Computational aspects of clearing continuous call double auctions with assignment constraints and indivisible demand. Electronic Commerce Journal, 1(3):221-238, 2001.
[13] V Krishna. Auction Theory. Academic Press, 2002.
[14] V Krishna and M Perry. Efficient mechanism design. Technical report, Pennsylvania State University, 1998. Available at: http://econ.la.psu.edu/~vkrishna/vcg18.ps.
[15] D Lehmann, L I O'Callaghan, and Y Shoham. Truth revelation in approximately efficient combinatorial auctions. JACM, 49(5):577-602, September 2002.
[16] R B Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[17] R B Myerson and M A Satterthwaite. Efficient mechanisms for bilateral trading. Journal of Economic Theory, 28:265-281, 1983.
[18] N Nisan and A Ronen. Computationally feasible VCG mechanisms. In ACM-EC, pages 242-252, 2000.
[19] D C Parkes, J R Kalagnanam, and M Eso. Achieving budget-balance with Vickrey-based payment schemes in exchanges. In IJCAI, 2001.
[20] M H Rothkopf, A Pekeč, and R M Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[21] J Schummer. Almost dominant strategy implementation. Technical report, MEDS Department, Kellogg Graduate School of Management, 2001.
[22] W Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.", "keywords": "reverse auction;bidding language;forward auction;strategyproof;approximation algorithm;single-good multi-unit allocation problem;multi-unit auction;equilibrium;approximately-efficient and approximately strategyproof auction mechanism;marginal-decreasing piecewise constant curve;dynamic programming;vickrey-clarke-grove;fully polynomial-time approximation scheme"}
-{"name": "test_J-3", "title": "Budget Optimization in Search-Based Advertising Auctions", "abstract": "Internet search companies sell advertisement slots based on users\" search queries via an auction. While there has been previous work on the auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1 \u2212 1/e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.", "fulltext": "1. INTRODUCTION\nOnline search is now ubiquitous and Internet search\ncompanies such as Google, Yahoo! and MSN let companies and\nindividuals advertise based on search queries posed by users.\nConventional media outlets, such as TV stations or\nnewspapers, price their ad slots individually, and the advertisers\nbuy the ones they can afford. In contrast, Internet search\ncompanies find it difficult to set a price explicitly for the\nadvertisements they place in response to user queries. This\ndifficulty arises because supply (and demand) varies widely\nand unpredictably across the user queries, and they must\nprice slots for billions of such queries in real time. Thus,\nthey rely on the market to determine suitable prices by\nusing auctions amongst the advertisers. 
It is a challenging problem to set up the auction in order to effect a stable market in which all the parties (the advertisers, the users, as well as the Internet search company) are adequately satisfied. Recently there has been systematic study of the issues involved in the game theory of the auctions [5, 1, 2], revenue maximization [10], etc.
The perspective in this paper is not of the Internet search company that displays the advertisements, but rather of the advertisers. The challenge from an advertiser's point of view is to understand and interact with the auction mechanism. The advertiser determines a set of keywords of their interest and then must create ads, set the bids for each keyword, and provide a total (often daily) budget. When a user poses a search query, the Internet search company determines the advertisers whose keywords match the query and who still have budget left over, runs an auction amongst them, and presents the set of ads corresponding to the advertisers who win the auction. The advertiser whose ad appears pays the Internet search company if the user clicks on the ad. The focus in this paper is on how the advertisers bid.
For the particular choice of keywords of their interest1, an advertiser wants to optimize the overall effect of the advertising campaign. While the effect of an ad campaign in any medium is a complicated phenomenon to quantify, one commonly accepted (and easily quantified) notion in search-based advertising on the Internet is to maximize the number of clicks. The Internet search companies are supportive towards advertisers and provide statistics about the history of click volumes and predictions about the future performance of various keywords. Still, this is a complex problem for the following reasons (among others):
• Individual keywords have significantly different characteristics from each other; e.g., while fishing is a broad keyword that matches many user queries and has many competing advertisers, humane fishing bait is a niche keyword that matches only a few queries, but might have less competition.
• There are complex interactions between keywords because a user query may match two or more keywords, since the advertiser is trying to cover all the possible keywords in some domain. In effect the advertiser ends up competing with herself.
As a result, the advertisers face a challenging optimization problem. The focus of this paper is to solve this optimization problem.
1 The choice of keywords is related to the domain knowledge of the advertiser, user behavior and strategic considerations. Internet search companies provide the advertisers with summaries of the query traffic which is useful for them to optimize their keyword choices interactively. We do not directly address the choice of keywords in this paper, which is addressed elsewhere [13].
1.1 The Budget Optimization Problem
We present a short discussion and formulation of the optimization problem faced by advertisers; a more detailed description is in Section 2.
A given advertiser sees the state of the auctions for search-based advertising as follows. There is a set K of keywords of interest; in practice, even small advertisers typically have a large set K. There is a set Q of queries posed by the users. For each query q ∈ Q, there are functions giving the clicks_q(b) and cost_q(b) that result from bidding a particular amount b in the auction for that query, which we model more formally in the next section. There is a bipartite graph G on the two vertex sets representing K and Q. For any query q ∈ Q, the neighbors of q in K are the keywords that are said to match the query q.2
The budget optimization problem is as follows.
Given the graph G together with the functions clicks_q(·) and cost_q(·) on the queries, as well as a budget U, determine the bids b_k for each keyword k ∈ K such that Σ_q clicks_q(b_q) is maximized subject to Σ_q cost_q(b_q) ≤ U, where the effective bid b_q on a query is some function of the keyword bids in the neighborhood of q.
While we can cast this problem as a traditional optimization problem, there are different challenges in practice depending on the advertiser's access to the query and graph information, and indeed the reliability of this information (e.g., it could be based on unstable historical data). Thus it is important to find solutions to this problem that not only get many clicks, but are also simple, robust and less reliant on the information. In this paper we define the notion of a uniform strategy, which is essentially a strategy that bids uniformly on all keywords. Since this type of strategy obviates the need to know anything about the particulars of the graph, and effectively aggregates the click and cost functions on the queries, it is quite robust, and thus desirable in practice. What is surprising is that the uniform strategy actually performs well, which we will prove.
2 The particulars of the matching rule are determined by the Internet search company; here we treat the function as arbitrary.
1.2 Our Main Results and Technical Overview
We present positive and negative results for the budget optimization problem. In particular, we show:
• Nearly all formulations of the problem are NP-hard. In cases slightly more general than the formulation above, where the clicks have weights, the problem is inapproximable better than a factor of 1 − 1/e, unless P=NP.
• We give a (1 − 1/e)-approximation algorithm for the budget optimization problem.
The strategy found by the algorithm is a two-bid uniform strategy, which means that it randomizes between bidding some value b1 on all keywords, and bidding some other value b2 on all keywords until the budget is exhausted3. We show that this approximation ratio is tight for uniform strategies. We also give a (1/2)-approximation algorithm that offers a single-bid uniform strategy, using only one value b1. (This is tight for single-bid uniform strategies.) These strategies can be computed in time nearly linear in |Q| + |K|, the input size.
Uniform strategies may appear naive at first consideration because the keywords vary significantly in their click and cost functions, and there may be complex interaction between them when multiple keywords are relevant to a query. After all, the optimum can configure arbitrary bids on each of the keywords. Even for the simple case when the graph is a matching, the optimal algorithm involves placing different bids on different keywords via a knapsack-like packing (Section 2). So it might be surprising that a simple two-bid uniform strategy is 63% or more effective compared to the optimum. In fact, our proof is stronger, showing that this strategy is 63% effective against a strictly more powerful adversary who can bid independently on the individual queries, i.e., who is not constrained by the interaction imposed by the graph G.
Our proof of the 1 − 1/e approximation ratio relies on an adversarial analysis. We define a factor-revealing LP (Section 4) where primal solutions correspond to possible instances, and dual solutions correspond to distributions over bidding strategies. By deriving the optimal solution to this LP, we obtain both the proof of the approximation ratio and a tight worst-case instance.
We have conducted simulations using real auction data from Google.
The results of these simulations, which are highlighted at the end of Section 4, suggest that uniform bidding strategies could be useful in practice. However, important questions remain about (among other things) alternate bidding goals, on-line or stochastic bidding models [11], and game-theoretic concerns [3], which we briefly discuss in Section 8.
2. MODELING A KEYWORD AUCTION
We describe an auction from an advertiser's point of view. An advertiser bids on a keyword, which we can think of as a word or set of words. Users of the search engine submit queries. If the query matches a keyword that has been bid on by an advertiser, then the advertiser is entered into an auction for the available ad slots on the results page. What constitutes a match varies depending on the search engine.
3 This type of strategy can also be interpreted as bidding one value (on all keywords) for part of the day, and a different value for the rest of the day.
This definition will be central\nto the discussion as we continue to more general cases.\n2.1.1 Positions, bids and click-through rate\nThe search results page for a query contains p possible\npositions in which our ad can appear. We denote the highest\n(most favorable) position by 1 and lowest by p.\nAssociated with each position i is a value \u03b1[i] that denotes\nthe click-through rate (ctr) of the ad in position i. The ctr is\na measure of how likely it is that our ad will receive a click\nif placed in position i. The ctr can be measured empirically\nusing past history. We assume throughout this work that\nthat \u03b1[i] \u2264 \u03b1[j] if j < i, that is, higher positions receive at\nleast as many clicks as lower positions.\nIn order to place an ad on this page, we must enter the\nauction that is carried out among all advertisers that have\nsubmitted a bid on a keyword that matches the user\"s query.\nWe will refer to such an auction as a query auction, to\nemphasize that there is an auction for each query rather than\nfor each keyword. We assume that the auction is a\ngeneralized second price (GSP) auction [5, 7]: the advertisers\nare ranked in decreasing order of bid, and each advertiser is\nassigned a price equal to the amount bid by the advertiser\nbelow them in the ranking.4\nIn sponsored search auctions,\nthis advertiser pays only if the user actually clicks on the ad.\nLet (b[1], . . . , b[p]) denote the bids of the top p advertisers in\nthis query auction. For notational convenience, we assume\nthat b[0] = \u221e and b[p] = \u03b1[p] = 0. Since the auction is\na generalized second price auction, higher bids win higher\npositions; i.e. b[i] \u2265 b[i + 1]. Suppose that we bid b on some\nkeyword that matches the user\"s query, then our position is\ndefined by the largest b[i] that is at most b, that is,\npos(b) = arg max\ni\n(b[i] : b[i] \u2264 b). 
(1)\nSince we only pay if the user clicks (and that happens with\nprobability \u03b1[i]), our expected cost for winning position i\n4\nGoogle, Yahoo! and MSN all use some variant of the GSP\nauction. In the Google auction, the advertisers\" bids are\nmultiplied by a quality score before they are ranked; our\nresults carry over to this case as well, which we omit from this\npaper for clarity. Also, other auctions besides GSP have\nbeen considered; e.g., the Vickrey Clark Groves (VCG)\nauction [14, 4, 7]. Each auction mechanism will result in a\ndifferent sort of optimization problem. In the conclusion we\npoint out that for the VCG auction, the bidding\noptimization problem becomes quite easy.\nwould be cost[i] = \u03b1[i] \u00b7 b[i], where i = pos(b). We use\ncostq(b) and clicksq(b) to denote the expected cost and clicks\nthat result from having a bid b that qualifies for a query\nauction q, and thus\ncostq(b) = \u03b1[i] \u00b7 b[i] where i = pos(b), (2)\nclicksq(b) = \u03b1[i] where i = pos(b). (3)\nThe following observations about cost and clicks follow\nimmediately from the definitions and equations (1), (2) and (3).\nWe use R+ to denote the nonnegative reals.\nObservation 1. For b \u2208 R+,\n1. (costq(b), clicksq(b)) can only take on one of a finite\nset of values Vq = {(cost[1], \u03b1[1]), . . . , (cost[p], \u03b1[p])}.\n2. Both costq(b) and clicksq(b) are non-decreasing\nfunctions of b. Also, cost-per-click (cpc) costq(b)/clicksq(b)\nis non-decreasing in b.\n3. costq(b)/clicksq(b) \u2264 b.\nFor bids (b[1], . . . , b[p]) that correspond to the bids of\nother advertisers, we have: costq(b[i])/clicksq(b[i]) = b[i],\ni \u2208 [p]. When the context is clear, we drop the subscript q.\n2.1.2 Query Landscapes\nWe can summarize the data contained in the functions\ncost(b) and clicks(b) as a collection of points in a plot of cost\nvs. clicks, which we refer to as a landscape. 
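Equations (1)-(3) can be sketched directly. This is a minimal illustration with our own function name; the competitors' bids and ctrs are passed in already sorted by position:

```python
def outcome(b, bids, ctr):
    """Expected (cost, clicks) for bidding b in one GSP query auction.
    bids: competitors' bids b[1] >= ... >= b[p] (index 0 = best slot);
    ctr:  click-through rates alpha[1] >= ... >= alpha[p]."""
    for i, price in enumerate(bids):
        if price <= b:                       # pos(b): largest b[i] with b[i] <= b
            return (ctr[i] * price, ctr[i])  # pay price per click, expect ctr clicks
    return (0.0, 0.0)                        # bid below every b[i]: no slot won
```

For instance, with competitor bids 2.60, 2.00, 1.60, 0.50 and ctrs .5, .45, .25, .2, a bid of $2.30 wins position 2 at cpc $2.00, for an expected cost of $0.90 and .45 clicks.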
For example, for a query with four slots, a landscape might look like Table 1.
bid range        cost per click   cost    clicks
[$2.60, ∞)       $2.60            $1.30   .5
[$2.00, $2.60)   $2.00            $0.90   .45
[$1.60, $2.00)   $1.60            $0.40   .25
[$0.50, $1.60)   $0.50            $0.10   .2
[$0, $0.50)      $0               $0      0
Table 1: A landscape for a query
It is convenient to represent this data graphically as in Figure 1 (ignore the dashed line for now). Here we graph clicks as a function of cost. Observe that in this graph, the cpc (cost(b)/clicks(b)) of each point is the reciprocal of the slope of the line from the origin to the point. Since cost(b), clicks(b) and cost(b)/clicks(b) are non-decreasing, the slope of the line from the origin to successive points on the plot decreases. This condition is slightly weaker than concavity.
Suppose we would like to solve the budget optimization problem for a single query landscape.5 As we increase our bid from zero, our cost increases and our expected number of clicks increases, and so we simply submit the highest bid such that we remain within our budget.
5 Of course it is a bit unrealistic to imagine that an advertiser would have to worry about a budget if only one user query was being considered; however, one could imagine multiple instances of the same query, and the problem scales.
One problem we see right away is that since there are only a finite set of points in this landscape, we may not be able to target arbitrary budgets efficiently. Suppose in the example from Table 1 and Figure 1 that we had a budget
Let B be a distribution on bids b ∈ R+. Now we define cost(B) = Eb∼B[cost(b)] and clicks(B) = Eb∼B[clicks(b)]. Graphically, the possible values of (cost(B), clicks(B)) lie in the convex hull of the landscape points. This is represented in Figure 1 by the dashed line.

To find a bid distribution B that maximizes clicks subject to a budget, we simply draw a vertical line on the plot where the cost is equal to the budget, and find the highest point on this line in the convex hull. This point will always be a convex combination of at most two original landscape points which themselves lie on the convex hull. Thus, given the point on the convex hull, it is easy to compute a distribution on two bids which leads to this point. Summarizing,

Lemma 1. If an advertiser is bidding on one keyword, subject to a budget U, then the optimal strategy is to pick a convex combination of (at most) two bids which are at the endpoints of the segment of the convex hull at the highest point for cost U.

There is one subtlety in this formulation. Given any bidding strategy, randomized or otherwise, the resulting cost is itself a random variable. Thus if our budget constraint is a hard budget, we have to deal with the difficulties that arise if our strategy would go over budget. Therefore, we think of our budget constraint as soft: we only require that our expected cost be less than the budget.
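The two-bid computation behind Lemma 1 can be sketched directly: take the upper convex hull of the landscape points (with the origin always available), then mix the two hull points whose segment spans the budget. Function and variable names below are our own illustration, not the paper's implementation:

```python
def best_two_bid_mix(points, budget):
    """Lemma 1 sketch: maximize expected clicks subject to E[cost] <= budget.
    points: list of (cost, clicks) landscape points.
    Returns ((point1, weight1), (point2, weight2))."""
    pts = sorted(set(points) | {(0.0, 0.0)})
    # Upper convex hull by increasing cost: slopes (clicks per cost) decrease.
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the new chord.
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    # Walk the hull to the segment spanning the budget.
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= budget <= x2:
            lam = (budget - x1) / (x2 - x1)  # weight on the costlier point
            return ((x1, y1), 1 - lam), ((x2, y2), lam)
    # Budget exceeds every point: play the best point with probability 1.
    return (hull[-1], 1.0), (hull[-1], 0.0)
```

On the Table 1 landscape with a $1.00 budget this mixes the $0.90 point (weight 0.75) with the $1.30 point (weight 0.25), for expected cost exactly $1.00 and expected clicks 0.4625 — above anything a deterministic bid achieves.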
In practice, the budget is often an average daily budget, and thus we don't worry if we exceed it one day, as long as we are meeting the budget in expectation. Further, either the advertiser or the search engine (possibly both) monitors the cost incurred over the day; hence, the advertiser's bid can be changed to zero for part of the day, so that the budget is not overspent.6 Thus in the remainder of this paper, we will formulate a budget constraint that only needs to be respected in expectation.

6 See https://adwords.google.com/support/bin/answer.py?answer=22183, for example.

2.1.4 Multiple Queries: a Knapsack Problem
As a warm-up, we next consider the case when we have a set of queries, each with its own landscape. We want to bid on each query independently subject to our budget: the resulting optimization problem is a small generalization of the fractional knapsack problem, and was solved in [9].

The first step of the algorithm is to take the convex hull of each landscape, as in Figure 1, and remove any landscape points not on the convex hull. Each piecewise linear section of the curve represents the incremental number of clicks and cost incurred by moving one's bid from one particular value to another. We regard these pieces as items in an instance of fractional knapsack with value equal to the incremental number of clicks and size equal to the incremental cost. More precisely, for each piece connecting two consecutive bids b′ and b′′ on the convex hull, we create a knapsack item with value [clicks(b′′) − clicks(b′)] and size [cost(b′′) − cost(b′)]. We then emulate the greedy algorithm for knapsack, sorting by value/size (cost-per-click) and choosing greedily until the budget is exhausted.

In this reduction to knapsack we have ignored the fact that some of the pieces come from the same landscape and cannot be treated independently.
However, since each curve is concave, the pieces that come from a particular query curve are in increasing order of cost-per-click; thus from each landscape we have chosen for our knapsack a set of pieces that form a prefix of the curve.

2.2 Keyword Interaction
In reality, search advertisers can bid on a large set of keywords, each of them qualifying for a different (possibly overlapping) set of queries, but most search engines do not allow an advertiser to appear twice in the same search results page.7 Thus, if an advertiser has bids on two different keywords that match the same query, this conflict must be resolved somehow. For example, if an advertiser has a bid out on the keywords shoes and high-heel, then if a user issues the query high-heel shoes, it will match on two different keywords. The search engine specifies, in advance, a rule for resolution based on the query, the keyword and the bid. A natural rule is to take the keyword with the highest bid, which we adopt here, but our results apply to other resolution rules.

We model the keyword interaction problem using an undirected bipartite graph G = (K ∪ Q, E), where K is a set of keywords and Q is a set of queries. Each q ∈ Q has an associated landscape, as defined by costq(b) and clicksq(b). An edge (k, q) ∈ E means that keyword k matches query q.

The advertiser controls an individual keyword bid vector a ∈ R+^|K|, specifying a bid ak for each keyword k ∈ K. (For now, we do not consider randomized bids, but we will introduce them shortly.) Given a particular bid vector a on the keywords, we use the resolution rule of taking the maximum to define the effective bid on query q as

bq(a) = max{ ak : (k, q) ∈ E }.

7 See https://adwords.google.com/support/bin/answer.py?answer=14179, for example.

By submitting a bid vector a, the advertiser receives some number of clicks and pays some cost on each keyword.
We use the term spend to denote the total cost; similarly, we use the term traffic to denote the total number of clicks:

spend(a) = Σq∈Q costq(bq(a));   traffic(a) = Σq∈Q clicksq(bq(a)).

We also allow randomized strategies, where an advertiser gives a distribution A over bid vectors a ∈ R+^|K|. The resulting spend and traffic are given by

spend(A) = Ea∼A[spend(a)];   traffic(A) = Ea∼A[traffic(a)].

We can now state the problem in its full generality:

Budget Optimization
Input: a budget U, a keyword-query graph G = (K ∪ Q, E), and landscapes (costq(·), clicksq(·)) for each q ∈ Q.
Find: a distribution A over bid vectors a ∈ R+^|K| such that spend(A) ≤ U and traffic(A) is maximized.

We conclude this section with a small example to illustrate some features of the budget optimization problem. Suppose we have two keywords K = {u, v}, two queries Q = {x, y}, and edges E = {(u, x), (u, y), (v, y)}. Suppose query x has one position with ctr αx[1] = 1.0, and there is one bid bx1 = $1. Query y has two positions with ctrs αy[1] = αy[2] = 1.0, and bids by1 = $ε and by2 = $1. To get any clicks from x, an advertiser must bid at least $1 on u. However, because of the structure of the graph, if the advertiser sets bu to $1, then his effective bid is $1 on both x and y. Thus he must trade off between getting the clicks from x and getting the bargain of a click for $ε that would be possible otherwise.

3. UNIFORM BIDDING STRATEGIES
As we will show in Section 5, solving the Budget Optimization problem in its full generality is difficult. In addition, it may be difficult to reason about strategies that involve arbitrary distributions over arbitrary bid vectors. Advertisers generally prefer strategies that are easy to understand, evaluate and use within their larger goals.
With this motivation, we look at restricted classes of strategies that we can easily compute, explain and analyze.

We define a uniform bidding strategy to be a distribution A over bid vectors a ∈ R+^|K| where each bid vector in the distribution is of the form (b, b, . . . , b) for some real-valued bid b. In other words, each vector in the distribution bids the same value on every keyword.

Uniform strategies have several advantages. First, they do not depend on the edges of the interaction graph, since all effective bids on queries are the same. Thus, they are effective in the face of limited or noisy information about the keyword interaction graph. Second, uniform strategies are also independent of the priority rule being used. Third, any algorithm that gives an approximation guarantee will then be valid for any interaction graph over those keywords and queries.

We now show that we can compute the best uniform strategy efficiently. Suppose we have a set of queries Q, where the landscape Vq for each query q is defined by the set of points Vq = {(costq[1], αq[1]), . . . , (costq[p], αq[p])}. We define the set of interesting bids Iq = {costq[1]/αq[1], . . . , costq[p]/αq[p]}, let I = ∪q∈Q Iq, and let N = |I|. We can index the points in I as b1, . . . , bN in increasing order. The ith point in our aggregate landscape V is found by summing, over the queries, the cost and clicks associated with bid bi, that is, V = ∪i=1..N (Σq∈Q costq(bi), Σq∈Q clicksq(bi)).

For any possible bid b, if we use the aggregate landscape just as we would a regular landscape, we exactly represent the cost and clicks associated with making that bid simultaneously on all queries associated with the aggregate landscape.
Therefore, all the definitions and results of Section 2 about landscapes can be extended to aggregate landscapes, and we can apply Lemma 1 to compute the best uniform strategy (using the convex hull of the points in this aggregate landscape). The running time is dominated by the time to compute the convex hull, which is O(N log N) [12].

The resulting strategy is the convex combination of two points on the aggregate landscape. Define a two-bid strategy to be a uniform strategy which puts non-zero weight on at most two bid vectors. We have shown:

Lemma 2. Given an instance of Budget Optimization in which there are a total of N points in all the landscapes, we can find the best uniform strategy in O(N log N) time. Furthermore, this strategy will always be a two-bid strategy.

Putting these ideas together, we get an O(N log N)-time algorithm for Budget Optimization, where N is the total number of landscape points (we later show that this is a (1 − 1/e)-approximation algorithm):
1. Aggregate all the points from the individual query landscapes into a single aggregate landscape.
2. Find the convex hull of the points in the aggregate landscape.
3. Compute the point on the convex hull for the given budget, which is the convex combination of two points α and β.
4. Output the strategy which is the appropriate convex combination of the uniform bid vectors corresponding to α and β.

We will also consider a special case of two-bid strategies. A single-bid strategy is a uniform strategy which puts non-zero weight on at most one non-zero vector, i.e., the advertiser randomizes between bidding a certain amount b∗ on all keywords and not bidding at all. A single-bid strategy is even easier to implement in practice than a two-bid strategy. For example, the search engines often allow advertisers to set a maximum daily budget.
In this case, the advertiser would simply bid b∗ until her budget runs out, and the ad serving system would remove her from all subsequent auctions until the end of the day. One could also use an ad scheduling tool offered by some search companies8 to implement this strategy. The best single-bid strategy can also be computed easily from the aggregate landscape. The optimal strategy for a budget U will either be the point x s.t. cost(x) is as large as possible without exceeding U, or a convex combination of zero and the point y, where cost(y) is as small as possible while larger than U.

8 See https://adwords.google.com/support/bin/answer.py?answer=33227, for example.

query  clicks  cost   cpc
A      2       $1     $0.50
B      5       $0.50  $0.10
C      3       $2     $0.67
D      4       $1     $0.25

[Figure 2: Four queries and their click-price curve. Sorted by cpc, the steps are $0.10, $0.25, $0.50, $0.67, with cumulative clicks 5, 9, 11, 14.]

4. APPROXIMATION ALGORITHMS
In the previous section we proposed using uniform strategies and gave an efficient algorithm to compute the best such strategy. In this section we prove that there is always a good uniform strategy:

Theorem 3. There always exists a uniform bidding strategy that is (1 − 1/e)-optimal. Furthermore, for any ε > 0, there exists an instance for which all uniform strategies are at most (1 − 1/e + ε)-optimal.

We introduce the notion of a click-price curve, which is central to our analysis. This definition makes it simple to show that there is always a single-bid strategy that is a 1/2-approximation (and this is tight); we then build on this to prove Theorem 3.

4.1 Click-price curves
Consider a set of queries Q, and for each query q ∈ Q, let (clicksq(·), costq(·)) be the corresponding bid landscape. Consider an adversarial bidder Ω with the power to bid independently on each query. Note that this bidder is more powerful than an optimal bidder, which has to bid on the keywords.
Suppose this strategy bids b∗q for each query q. Thus, Ω achieves traffic CΩ = Σq clicksq(b∗q), and incurs total spend UΩ = Σq costq(b∗q).

Without loss of generality we can assume that Ω bids so that for each query q, the cost per click is equal to b∗q, i.e. costq(b∗q)/clicksq(b∗q) = b∗q. We may assume this because for any query q with costq(b∗q)/clicksq(b∗q) < b∗q, we can always lower b∗q without changing the cost and clicks.

To aid our discussion, we introduce the notion of a click-price curve (an example of which is shown in Figure 2), which describes the cpc distribution obtained by Ω. Formally the curve is a non-decreasing function h : [0, CΩ] → R+ defined as h(r) = min{c | Σq: b∗q ≤ c clicksq(b∗q) ≥ r}. Another way to construct this curve is to sort the queries in increasing order by b∗q = costq(b∗q)/clicksq(b∗q), then make a step function where the qth step has height b∗q and width clicksq(b∗q) (see Figure 2). Note that the area of each step is costq(b∗q). The following claim follows immediately:

Claim 1. UΩ = ∫0^CΩ h(r) dr.

Suppose we wanted to buy some fraction r′/CΩ of the traffic that Ω is getting. The click-price curve says that if we bid h(r′) on every keyword (and therefore every query), we get at least r′ traffic, since this bid would ensure that for all q such that b∗q ≤ h(r′) we win as many clicks as Ω. Note that by bidding h(r′) on every keyword, we may actually get even more than r′ traffic, since for queries q where b∗q is much less than h(r′) we may win more clicks than Ω.
However, all of these extra clicks still cost at most h(r′) per click. Thus we see that for any r′ ∈ [0, CΩ], if we bid h(r′) on every keyword, we receive at least r′ traffic at a total spend of at most h(r′) per click. Note that by randomizing between bidding zero and bidding h(r′), we can receive exactly r′ traffic at a total spend of at most r′ · h(r′). We summarize this discussion in the following lemma:

Lemma 4. For any r ∈ [0, CΩ], there exists a single-bid strategy that randomizes between bidding h(r) and bidding zero, and this strategy receives exactly r traffic with total spend at most r · h(r).

Lemma 4 describes a landscape as a continuous function. For our lower bounds, we will need to show that given any continuous function, there exists a discrete landscape that approximates it arbitrarily well.

Lemma 5. For any C, U > 0 and non-decreasing function f : [0, C] → R+ such that ∫0^C f(r) dr = U, and any small ε > 0, there exists an instance of Budget Optimization with budget U + ε, where the optimal solution achieves C clicks at cost U + ε, and all uniform bidding strategies are convex combinations of single-bid strategies that achieve exactly r clicks at cost exactly r·f(r) by bidding f(r) on all keywords.

Proof. Construct an instance as follows. Let ε′ > 0 be a small number that we will later define in terms of ε. Define r0 = 0, r1, r2, . . . , rm = C such that ri−1 < ri ≤ ri−1 + ε′, f(ri−1) ≤ f(ri) ≤ f(ri−1) + ε′, and m ≤ (C + f(C))/ε′. (This is possible by choosing ri's spaced by min(ε′, f(ri) − f(ri−1)).) Now make a query qi for each i ∈ [m] with bidders bidding f(ri), f(ri+1), . . . , f(rm), and ctrs α[1] = α[2] = · · · = α[m − i + 1] = ri − ri−1. The graph is a matching with one keyword per query, and so we can imagine the optimal solution as bidding on queries.
The optimal solution will always bid exactly f(ri) on query qi, and if it did so on all queries, it would spend U′ := Σi=1^m (ri − ri−1) f(ri). Define ε′ small enough so that U′ = U + ε, which is always possible since

U′ ≤ ∫0^C f(r) dr + Σi=1^m (ri − ri−1)(f(ri) − f(ri−1)) ≤ U + ε′² m ≤ U + ε′ (C + f(C)).

Note that the only possible bids (i.e., all others have the same results as one of these) are f(r0), . . . , f(rm), and bidding uniformly with f(ri) results in Σj=1^i (rj − rj−1) = ri clicks at cost ri · f(ri).

4.2 A 1/2-approximation algorithm
Using Lemma 4 we can now show that there is a uniform single-bid strategy that is 1/2-optimal. In addition to being an interesting result in its own right, it also serves as a warm-up for our main result.

Theorem 6. There always exists a uniform single-bid strategy that is 1/2-optimal. Furthermore, for any ε > 0 there exists an instance for which all single-bid strategies are at most (1/2 + ε)-optimal.

Proof. Applying Lemma 4 with r = CΩ/2, we see that there is a strategy that achieves traffic CΩ/2 with spend (CΩ/2) · h(CΩ/2). Now, using the fact that h is a non-decreasing function combined with Claim 1, we have

(CΩ/2) h(CΩ/2) ≤ ∫CΩ/2^CΩ h(r) dr ≤ ∫0^CΩ h(r) dr = UΩ, (4)

which shows that we spend at most UΩ. We conclude that there is a 1/2-optimal single-bid strategy randomizing between bidding h(CΩ/2) and bidding zero.

For the second part of the theorem, we construct a tight example using two queries Q = {x, y}, two keywords K = {u, v}, and edges E = {(u, x), (v, y)}. Fix some α where 0 < α ≤ 1, and fix some very small ε > 0. Query x has two positions, with bids of bx1 = 1/α and bx2 = ε, and with identical click-through rates αx[1] = αx[2] = α.
Query y has one position, with a bid of by1 = 1/α and a click-through rate of αy[1] = α. The budget is U = 1 + εα.

The optimal solution is to bid ε on u (and therefore x) and bid 1/α on v (and therefore y), both with probability 1. This achieves a total of 2α clicks and spends the budget exactly. The only useful bids are 0, ε and 1/α, since for both queries all other bids are identical in terms of cost and clicks to one of those three. Any single-bid solution that uses ε as its non-zero bid gets at most α clicks. Bidding 1/α on both keywords results in 2α clicks and total cost 2. Thus, since the budget is U = 1 + εα < 2, a single-bid solution using 1/α can put weight at most (1 + εα)/2 on the 1/α bid. This results in at most α(1 + εα) clicks, which can be made arbitrarily close to α by lowering ε.

4.3 A (1 − 1/e)-approximation algorithm
The key to the proof of Theorem 3 is to show that there is a distribution over single-bid strategies from Lemma 4 that obtains at least (1 − 1/e)CΩ clicks. In order to figure out the best distribution, we wrote a linear program that models the behavior of a player who is trying to maximize clicks and an adversary who is trying to create an input that is hard for the player. Then, using linear programming duality, we were able to derive both an optimal strategy and a tight instance. After solving the LP numerically, we were also able to see that there is a uniform strategy for the player that always obtains (1 − 1/e)CΩ clicks; from the solution we were then easily able to guess the optimal distribution. This methodology is similar to that used in work on factor-revealing LPs [8, 10].

4.3.1 An LP for the worst-case click-price curve.
Consider the adversary's problem of finding a click-price curve for which no uniform bidding strategy can achieve αCΩ clicks.
Recall that by Lemma 1 we can assume that a uniform strategy randomizes between two bids. We also assume that the uniform strategy uses a convex combination of strategies from Lemma 4, which we may assume by Lemma 5. Thus, to achieve αCΩ clicks, a uniform strategy must randomize between bids h(u) and h(v) where u ≤ αCΩ and v ≥ αCΩ. Call the set of such strategies S. Given a (u, v) ∈ S, the probabilities necessary to achieve αCΩ clicks are easily determined, and we denote them by p1(u, v) and p2(u, v) respectively. Note further that the advertiser is trying to figure out which of these strategies to use, and ultimately wants to compute a distribution over uniform strategies. In the LP, she actually computes a distribution over pairs of strategies in S, which we will then interpret as a distribution over strategies.

Using this set of uniform strategies as constraints, we can characterize a set of worst-case click-price curves by the constraints

∫0^CΩ h(r) dr ≤ U
∀(u, v) ∈ S: p1(u, v)·u·h(u) + p2(u, v)·v·h(v) ≥ U.

A curve h that satisfies these constraints has the property that all uniform strategies that obtain αCΩ clicks spend more than U. Discretizing this set of inequalities, and pushing the first constraint into the objective function, we get the following LP over variables hr representing the curve:

min Σr∈{0, ε, 2ε, . . . , CΩ} ε·hr
s.t. ∀(u, v) ∈ S: p1(u, v)·u·hu + p2(u, v)·v·hv ≥ U.

In this LP, S is defined in the discrete domain as S = {(u, v) ∈ {0, ε, 2ε, . . . , CΩ}² : 0 ≤ u ≤ αCΩ ≤ v ≤ CΩ}. Solving this LP for a particular α, if we get an objective less than U, we know (up to some discretization) that an instance of Budget Optimization exists that cannot be approximated better than α.
(The instance is constructed as in the proof of Lemma 5.) A binary search yields the smallest such α where the objective is exactly U.

To obtain a strategy for the advertiser, we look at the dual, constraining the objective to be equal to U in order to get the polytope of optimum solutions:

Σ(u,v)∈S wu,v = 1
∀(u, v) ∈ S: Σv′:(u,v′)∈S p1(u, v′)·u·wu,v′ ≤ ε and Σu′:(u′,v)∈S p2(u′, v)·v·wu′,v ≤ ε.

It is straightforward to show that the second set of constraints is equivalent to the following:

∀h ∈ R^(CΩ/ε) with Σr hr = U:  Σ(u,v)∈S wu,v (p1(u, v)·u·hu + p2(u, v)·v·hv) ≤ U.

Here the variables wu,v can be interpreted as weights on strategies in S. A point in this polytope represents a convex combination over strategies in S, with the property that for any click-price curve h, the cost of the mixed strategy is at most U. Since all strategies in S get at least αCΩ clicks, we have a strategy that achieves an α-approximation. Interestingly, the equivalence between this polytope and the LP dual above shows that there is a mixture over values r ∈ [0, CΩ] that achieves an α-approximation for any curve h.

After a search for the appropriate α (which turned out to be 1 − 1/e), we solved these two LPs and came up with the plots in Figure 3, which reveal not only the right approximation ratio, but also a picture of the worst-case distribution and the approximation-achieving strategy.9 From the pictures, we were able to quickly guess the optimal strategy and worst-case example.

9 The parameters U and CΩ can be set arbitrarily using scaling arguments.

[Figure 3: The worst-case click-price curve and (1 − 1/e)-approximate uniform bidding strategy, as found by linear programming.]

4.3.2 Proof of Theorem 3
By Lemma 4, we know that for each r ≤ CΩ,
there is a strategy that can obtain traffic r at cost r · h(r). By mixing strategies for multiple values of r, we construct a uniform strategy that is guaranteed to achieve at least a 1 − 1/e ≈ 0.63 fraction of Ω's traffic and to remain within budget. Note that the final resulting bid distribution will have some weight on the zero bid, since the single-bid strategies from Lemma 4 put some weight on bidding zero.

Consider the following probability density function over such strategies (also depicted in Figure 3):

g(r) = 0 for r < CΩ/e,  and  g(r) = 1/r for r ≥ CΩ/e.

Note that ∫0^CΩ g(r) dr = ∫CΩ/e^CΩ (1/r) dr = 1, i.e. g is a probability density function. The traffic achieved by our strategy is equal to

traffic = ∫0^CΩ g(r)·r dr = ∫CΩ/e^CΩ dr = (1 − 1/e) CΩ.

The expected total spend of this strategy is at most

spend = ∫0^CΩ g(r)·r·h(r) dr = ∫CΩ/e^CΩ h(r) dr ≤ ∫0^CΩ h(r) dr = UΩ.

Thus we have shown that there exists a uniform bidding strategy that is (1 − 1/e)-optimal.

We now show that no uniform strategy can do better: for all ε > 0 there exists an instance for which all uniform strategies are at most (1 − 1/e + ε)-optimal. First we define the following click-price curve over the domain [0, 1]:

h(r) = 0 for r < 1/e,  and  h(r) = (e − 1/r)/(e − 2) for r ≥ 1/e.

Note that h is non-decreasing and non-negative. Since the curve is over the domain [0, 1], it corresponds to an instance where CΩ = 1. Note also that ∫0^1 h(r) dr = (1/(e − 2)) ∫1/e^1 (e − 1/r) dr = 1. Thus this curve corresponds to an instance where UΩ = 1.
Using Lemma 5, we construct an actual instance where the best uniform strategies are convex combinations of strategies that bid h(u) and achieve u clicks at u · h(u) cost.

Suppose for the sake of contradiction that there exists a uniform bidding strategy that achieves α > 1 − 1/e traffic on this instance. By Lemma 1 there is always a two-bid optimal uniform bidding strategy, and so we may assume that the strategy achieving α clicks randomizes over two bids. To achieve α clicks, the two bids must be on values h(u) and h(v) with probabilities pu and pv such that pu + pv = 1, 0 ≤ u ≤ α ≤ v and pu·u + pv·v = α.

To calculate the spend of this strategy, consider two cases. If u = 0, then we are bidding h(v) with probability pv = α/v. The spend in this case is

spend = pv · v · h(v) = α·h(v) = (αe − α/v)/(e − 2).

Using v ≥ α and then α > 1 − 1/e we get

spend ≥ (αe − 1)/(e − 2) > ((1 − 1/e)e − 1)/(e − 2) = 1,

contradicting the assumption.

We turn to the case u > 0. Here we have pu = (v − α)/(v − u) and pv = (α − u)/(v − u). Note that for r ∈ (0, 1] we have h(r) ≥ (1/(e − 2))(e − 1/r). Thus

spend = pu · u·h(u) + pv · v·h(v)
≥ [(v − α)(ue − 1) + (α − u)(ve − 1)] / [(v − u)(e − 2)]
= (αe − 1)/(e − 2) > 1.

The final inequality follows from α > 1 − 1/e. Thus in both cases the spend of our strategy is over the budget of 1.

4.4 Experimental Results
We ran simulations using data available at Google, which we briefly summarize here. We took a large advertising campaign and, using the set of keywords in the campaign, computed three different curves (see Figure 4) for three different bidding strategies.
The x-axis is the budget (units removed), and the y-axis is the number of clicks obtained (again without units) by the optimal bid(s) under each respective strategy. Query bidding represents our (unachievable) upper bound Ω, bidding on each query independently. The uniform bidding curves represent the results of applying our algorithm: deterministic uses a single bid level, while randomized uses a distribution. For reference, we include the lower bound of a (e − 1)/e fraction of the top curve.

The data clearly demonstrate that the best single uniform bid obtains almost all the possible clicks in practice. Of course, in a more realistic environment without full knowledge, it is not always possible to find the best such bid, so further investigation is required to make this approach useful. However, just knowing that such a bid is available should make the on-line versions of the problem simpler.

[Figure 4: An example with real data, comparing query bidding, randomized and deterministic uniform bidding, and the (e − 1)/e lower bound.]

5. HARDNESS RESULTS
By a reduction from vertex cover we can show the following (proof omitted):

Theorem 7. Budget Optimization is strongly NP-hard.

Now suppose we introduce weights on the queries that indicate the relative value of a click from the various search users. Formally, we have weights wq for all q ∈ Q, and our goal is to maximize the total weighted traffic given a budget. Call this the Weighted Keyword Bidding problem. With this additional generalization we can show hardness of approximation via a simple reduction from the Maximum Coverage problem, which is known to be (1 − 1/e)-hard [6] (proof omitted).

Theorem 8. The Weighted Keyword Bidding problem is hard to approximate to within (1 − 1/e).

6. EXACT ALGORITHMS FOR LAMINAR GRAPHS
If a graph has special structure, we can sometimes solve the budget optimization problem exactly.
Note that the knapsack algorithm in Section 2 solves the problem for the case when the graph is a simple matching. Here we generalize this to the case when the graph has a laminar structure, which will allow us to impose a (partial) ordering on the possible bid values, and thereby give a pseudopolynomial algorithm via dynamic programming.

We first show that to solve the Budget Optimization problem (for general graphs) optimally in pseudopolynomial time, it suffices to provide an algorithm that solves the deterministic case. The proof (omitted) uses ideas similar to Observation 1 and Lemma 1.

Lemma 9. Let I be an input to the Budget Optimization problem, and suppose that we can find the optimal deterministic solution for every possible budget U′ ≤ U. Then we can find the optimal solution in time O(U log U).

A collection S of n sets S1, . . . , Sn is laminar if, for any two sets Si and Sj with Si ∩ Sj ≠ ∅, either Si ⊆ Sj or Sj ⊆ Si. Given a keyword interaction graph G, we associate a set of neighboring queries Qk = {q : (k, q) ∈ E} with each keyword k. If this collection of sets is laminar, we say that the graph has the laminar property. Note that a laminar interaction graph would naturally fall out as a consequence of designing a hierarchical keyword set (e.g., shoes, high-heel shoes, athletic shoes).

We call a solution deterministic if it consists of one bid vector, rather than a general distribution over bid vectors. The following lemma gives structure to the optimal solution and enables dynamic programming.

Lemma 10. For keywords i, j ∈ K, if Qi ⊆ Qj then there exists an optimal deterministic solution to the Budget Optimization problem with ai ≥ aj.

We can view the laminar order as a tree, with keyword j the parent of keyword i if Qj is the minimal set containing Qi; in this case we say that i is a child of j. Given a keyword j with c children i1, . . .
, ic, we now need to enumerate over all\nways to allocate the budget among the children and also over\nall possible minimum bids for the children. A complication\nis that a node may have many children, and thus a term of\nU^c would not even be pseudopolynomial. We can solve this\nproblem by showing that given any laminar ordering, there\nis an equivalent one in which each keyword has at most 2\nchildren.\nLemma 11. Let G be a graph with the laminar property.\nThere exists another graph G\u2032 with the same optimal solution\nto the Budget Optimization problem, where each node has\nat most two children in the laminar ordering. Furthermore,\nG\u2032 has at most twice as many nodes as G.\nGiven a graph with at most two children per node, we\ndefine F[i, b, U] to be the maximum number of clicks achievable\nby bidding at least b on each of the keywords j s.t. Qj \u2286 Qi\n(and exactly b on keyword i) while spending at most U. For\nthis definition, we use Z(b, U) to denote the set of allowable bids\nand budgets over the two children:\nZ(b, U) = {(b\u2032, b\u2033, U\u2032, U\u2033) : b\u2032 \u2265 b, b\u2033 \u2265 b, U\u2032 \u2264 U, U\u2033 \u2264 U, U\u2032 + U\u2033 \u2264 U}.\nGiven a keyword i and a bid ai, compute the incremental\ntraffic and spend associated with bidding ai on keyword i,\nthat is\n\u02c6t(i, ai) = \u2211q\u2208Qi\\Qi\u22121 clicksq(ai), and\n\u02c6s(i, ai) = \u2211q\u2208Qi\\Qi\u22121 costq(ai).\nNow, for a keyword i with children j\u2032 and j\u2033, we define F[i, b, U] as\nmax over (b\u2032, b\u2033, U\u2032, U\u2033) \u2208 Z(b, U) of { F[j\u2032, b\u2032, U\u2032] + F[j\u2033, b\u2033, U\u2033] + \u02c6t(i, b) } (5)\nif (\u02c6s(i, b) \u2264 U \u2212 U\u2032 \u2212 U\u2033 and i > 0), and F[i, b, U] = 0\notherwise.\nLemma 12. If the graph G has the laminar property, then,\nafter applying Lemma 11, the dynamic programming\nrecurrence in (5) finds an optimal deterministic solution to the\nBudget Optimization problem exactly in O(B^3 U^3 n) time.\nIn addition, we can apply Lemma 9 to compute the\noptimal (randomized) solution.
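The laminar property that drives this dynamic program can be checked directly. A minimal sketch with hypothetical query sets (not the paper's data):

```python
# Check the laminar property: any two sets either nest or are disjoint.
def is_laminar(sets):
    sets = [frozenset(s) for s in sets]
    for i, a in enumerate(sets):
        for b in sets[i + 1:]:
            # Overlapping but non-nested sets violate laminarity.
            if a & b and not (a <= b or b <= a):
                return False
    return True

# Hypothetical hierarchical keyword sets ("shoes" > "athletic shoes" ...):
hierarchical = [{"q1", "q2", "q3", "q4"}, {"q1", "q2"}, {"q3"}]
crossing = [{"q1", "q2"}, {"q2", "q3"}]
print(is_laminar(hierarchical))  # True
print(is_laminar(crossing))      # False
```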
Observe that in the dynamic\nprogram, we have already solved the instance for every\nbudget U\u2032 \u2264 U, so we can find the randomized solution with no\nadditional asymptotic overhead.\nLemma 13. If the graph G has the laminar property, then,\nby applying Lemma 11, the dynamic programming recurrence\nin (5), and Lemma 9, we can find an optimal solution to the\nBudget Optimization problem in O(B^3 U^3 n) time.\nThe bounds in this section make pessimistic assumptions\nabout having to try every budget and every bid level. For many\nproblems, one need only choose from a discrete set of\nbid levels (e.g., multiples of one cent). Doing so yields the\nobvious improvement in the bounds.\n7. BID OPTIMIZATION UNDER VCG\nThe GSP auction is not the only possible auction one\ncould use for sponsored search. Indeed, the VCG auction\nand variants [14, 4, 7, 1] offer alternatives with compelling\ngame-theoretic properties. In this section we argue that\nthe budget optimization problem is easy under the VCG\nauction.\nFor a full definition of VCG and its application to\nsponsored search we refer the reader to [1, 2, 5]. For the sake\nof the budget optimization problem we can define VCG by\njust redefining costq(b) (replacing Equation (2)):\ncostq(b) = \u2211_{j=i}^{p\u22121} (\u03b1[j] \u2212 \u03b1[j + 1]) \u00b7 b[j], where i = pos(b).\nObservation 1 still holds, and we can construct a landscape\nas before, where each landscape point corresponds to a\nparticular bid b[i].\nWe claim that in the VCG auction, the landscapes are\nconvex.
To see this, consider two consecutive positions i, i + 1.\nThe slope of the line segment between the points corresponding\nto those two positions is\n(cost(b[i]) \u2212 cost(b[i + 1])) / (clicks(b[i]) \u2212 clicks(b[i + 1])) = ((\u03b1[i] \u2212 \u03b1[i + 1]) \u00b7 b[i]) / (\u03b1[i] \u2212 \u03b1[i + 1]) = b[i].\nSince b[i] \u2265 b[i + 1], the slopes of the pieces of the\nlandscape decrease, and we get that the curve is convex.\nNow consider running the algorithm described in\nSection 2.1.4 for finding the optimal bids for a set of queries.\nIn this algorithm we took all the pieces from the landscape\ncurves, sorted them by incremental cpc, then took a prefix\nof those pieces, giving us bids for each of the queries. But\nthe equation above shows that each piece has its\nincremental cpc equal to the bid that achieves it; thus, in the case of\nVCG, the pieces are also sorted by bid. Hence we can obtain\nany prefix of the pieces via a uniform bid on all the\nkeywords. We conclude that the best uniform bid is an optimal\nsolution to the budget optimization problem.\n8. CONCLUDING REMARKS\nOur algorithmic result suggests an intriguing heuristic in\npractice: bid a single value b on all keywords; at the end of\nthe day, if the budget is under-spent, adjust b to be higher;\nif the budget is overspent, adjust b to be lower; otherwise, maintain\nb. If the scenario does not change from day to day, this\nsimple strategy will have the same theoretical properties as\nour one-bid strategy, and in practice, is likely to be much\nbetter. Of course the scenario does change, however, and so\ncoming up with a stochastic bidding strategy remains an\nimportant open direction, explored somewhat by [11, 13].\nAnother interesting generalization is to consider weights\non the clicks, which is a way to model conversions. (A\nconversion corresponds to an action on the part of the user who\nclicked through to the advertiser site, e.g., a sale or an\naccount sign-up.)
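The convexity argument for VCG landscapes is easy to check numerically. In this sketch, the position click rates alpha and the competing bids are hypothetical values; the assertions verify that each landscape piece's incremental cost-per-click equals the bid that achieves it:

```python
# Numerical check (hypothetical numbers): under the VCG cost rule above,
# the incremental cost-per-click of each landscape piece equals the bid
# that achieves it, so pieces are sorted by bid and the curve is convex.
alpha = [0.30, 0.18, 0.10, 0.05]  # click rates by position (decreasing)
bids = [5.0, 3.0, 2.0, 1.0]       # competing bids by position (decreasing)

def vcg_cost(i):
    """Cost of occupying position i (alpha beyond the last slot is 0)."""
    ext = alpha + [0.0]
    return sum((ext[j] - ext[j + 1]) * bids[j] for j in range(i, len(bids)))

def slope(i):
    """Incremental cost per click when moving from position i+1 up to i."""
    return (vcg_cost(i) - vcg_cost(i + 1)) / (alpha[i] - alpha[i + 1])

for i in range(len(bids) - 1):
    assert abs(slope(i) - bids[i]) < 1e-9  # slope of each piece = its bid

print([round(slope(i), 6) for i in range(len(bids) - 1)])  # decreasing
```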
Finally, we have looked at this system as a\nblack box returning clicks as a function of bid, whereas in\nreality it is a complex repeated game involving multiple\nadvertisers. In [3], it was shown that when a set of advertisers\nuse a strategy similar to the one we suggest here, under a\nslightly modified first-price auction, the prices approach a\nwell-understood market equilibrium.\nAcknowledgments\nWe thank Rohit Rao, Zoya Svitkina and Adam Wildavsky\nfor helpful discussions.\n9. REFERENCES\n[1] G. Aggarwal, A. Goel and R. Motwani. Truthful\nauctions for pricing search keywords. ACM\nConference on Electronic Commerce, 1-7, 2006.\n[2] G. Aggarwal, J. Feldman and S. Muthukrishnan.\nBidding to the Top: VCG and Equilibria of\nPosition-Based Auctions. Proc. WAOA, 2006.\n[3] C. Borgs, J. Chayes, O. Etesami, N. Immorlica, K.\nJain, and M. Mahdian. Dynamics of bid optimization\nin online advertisement auctions. Proc. WWW 2007.\n[4] E. Clarke. Multipart pricing of public goods. Public\nChoice, 11(1):17-33, 1971.\n[5] B. Edelman, M. Ostrovsky and M. Schwarz. Internet\nAdvertising and the Generalized Second Price\nAuction: Selling Billions of Dollars Worth of\nKeywords. Second workshop on sponsored search\nauctions, 2006.\n[6] U. Feige. A threshold of ln n for approximating set\ncover. 28th ACM Symposium on Theory of\nComputing, 1996, pp. 314-318.\n[7] T. Groves. Incentives in teams. Econometrica, 41(4):\n617-631, 1973.\n[8] K. Jain, M. Mahdian, E. Markakis, A. Saberi and V.\nVazirani. Greedy facility location algorithms\nanalyzed using dual fitting with factor-revealing LP.\nJ. ACM, 50(6): 795-824, 2003.\n[9] W. Labio, M. Rose, S. Ramaswamy. Internal\nDocument, Google, Inc. May, 2004.\n[10] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani.\nAdwords and Generalized Online Matching. FOCS\n2005.\n[11] S. Muthukrishnan, M. P\u00e1l and Z. Svitkina.\nStochastic models for budget optimization in\nsearch-based advertising.
To appear in 3rd Workshop\non Sponsored Search Auctions, WWW 2007.\n[12] F. Preparata and M. Shamos. Computational\nGeometry: An Introduction. Springer-Verlag, New\nYork, NY, 1985.\n[13] P. Rusmevichientong and D. Williamson. An\nadaptive algorithm for selecting profitable keywords\nfor search-based advertising services. Proc. 7th ACM\nConference on Electronic Commerce, 260-269, 2006.\n[14] W. Vickrey. Counterspeculation, auctions and\ncompetitive sealed tenders. Journal of Finance,\n16(1):8-37, 1961.", "keywords": "bid;game theory;internet;keyword;search-based advertising auction;advertiser;lp;optimization;budget optimization;auction;generalized second price;vickrey clark grove;uniform bidding strategy;intriguing heuristic;sponsor search"}
-{"name": "test_J-30", "title": "Implementation with a Bounded Action Space", "abstract": "While traditional mechanism design typically assumes isomorphism between the agents' type- and action spaces, in many situations the agents face strict restrictions on their action space due to, e.g., technical, behavioral or regulatory reasons. We devise a general framework for the study of mechanism design in single-parameter environments with restricted action spaces. Our contribution is threefold. First, we characterize sufficient conditions under which the information-theoretically optimal social-choice rule can be implemented in dominant strategies, and prove that any multilinear social-choice rule is dominant-strategy implementable with no additional cost. Second, we identify necessary conditions for the optimality of action-bounded mechanisms, and fully characterize the optimal mechanisms and strategies in games with two players and two alternatives. Finally, we prove that for any multilinear social-choice rule, the optimal mechanism with k actions incurs an expected loss of O(1/k^2) compared to the optimal mechanism with unrestricted action spaces. Our results apply to various economic and computational settings, and we demonstrate their applicability to signaling games, public-good models and routing in networks.", "fulltext": "1. INTRODUCTION\nMechanism design is a sub-field of game theory that\nstudies how to design rules of games that result in desirable\noutcomes when the players are rational. In a standard\nsetting, players hold some private information - their types\n- and choose actions from their action spaces to\nmaximize their utilities. The social planner wishes to implement\na social-choice function, which maps each possible state of\nthe world (i.e., a profile of the players' types) to a single\nalternative.
For example, a government may wish to\nundertake a public-good project (e.g., building a bridge) only\nif the total benefit for the players exceeds its cost.\nMuch of the literature on mechanism design restricts\nattention to direct revelation mechanisms, in which a player's\naction space is identical to his type space. This focus is\nowing to the revelation principle, which asserts that if some\nmechanism achieves a certain result in an equilibrium, the\nsame result can be achieved in a truthful one - an\nequilibrium where each agent simply reports his private type [15].\nNonetheless, in many environments, direct-revelation\nmechanisms are not viable since the actions available to the\nplayers have a limited expressive power. Consider, for\nexample, the well-studied screening model, where an\ninsurance firm wishes to sell different types of policies to different\ndrivers based on their caution levels, which are their private\ninformation. In this model, drivers may have a continuum of\npossible caution levels, but insurance companies offer only\na few different policies, since it might be either infeasible or\nillegal to advertise and sell more than a few types of policies.\nThere are various reasons for such strict restrictions on\nthe action spaces. In some situations, firms are unwilling,\nor unable, to run a bidding process, and prefer fixing a price for\nsome product or service. The buyers in such environments\nface only two actions - to buy or not to buy - although they\nmay have an infinite number of possible values for the item.\nIn many similar settings, players might also be reluctant to\nreveal their accurate types, but willing to disclose partial\ninformation about them. For example, agents will typically be\nunwilling to reveal their types, even if it is beneficial for them\nin the short run, since it might harm them in future\ntransactions.
Agents may also not trust the mechanism to keep\ntheir valuations private [16], or may not even know their exact\ntype, while computing it may be expensive [12]. Limitations\non the action space can also be caused by technical\nconstraints, such as severe restrictions on the communication\nlines [5], or by the need to perform quick transactions\n(e.g., discrete bidding in English auctions [9]).\nConsider for example a public-good model: a social\nplanner needs to decide whether to build a bridge. The two\nplayers in the game have some privately known benefits\n\u03b81, \u03b82 \u2208 [0, 1] from using this bridge. The social planner\naims to build the bridge only if the sum of these benefits\nexceeds the construction cost of the bridge. The social planner\ncannot access the private data of the players, and can only\nlearn about it from the players' actions. When direct\nrevelation is allowed, the social planner can run the well-known\nVCG mechanism, where the players have incentives to\nreport their true data; hence, the planner can elicit the exact\nprivate information of the players and build the bridge only\nwhen it should be built. Assume now that the players\ncannot send their entire secret data, but can only choose an\naction out of two possible actions (e.g., 0 or 1). Now,\nthe social planner will clearly no longer be able to always\nbuild the bridge according to her objective function, due to\nthe limited expressiveness of the players' messages. In this\nwork we try to analyze what can be achieved in the presence\nof such restrictions.\nRestrictions on the action space, for specific models, were\nstudied in several earlier papers. The work of Blumrosen,\nNisan and Segal [4, 6, 5] is the closest in spirit to this paper.\nThey studied single-item auctions where bidders are allowed\nto send messages with severely bounded size.
They\ncharacterized the optimal mechanisms under this restriction, and\nshowed that nearly optimal results can be achieved even with\nvery strict limitations on the action space. Other work\nstudied similar models for the analysis of discrete-bid ascending\nauctions [9, 11, 8, 7], take-it-or-leave-it auctions [17], or for\nmeasuring the effect of discrete priority classes of buyers\non the performance of electricity markets [19, 14]. Our work\ngeneralizes the main results of Blumrosen et al. to a\ngeneral mechanism-design framework that can be applied to a\nmultitude of models. We show that some main properties\nproved by Blumrosen et al. are preserved in more general\nframeworks (for example, that a dominant-strategy\nequilibrium can be achieved with no additional cost, and that the\nloss diminishes with the number of possible actions at a\nsimilar rate), whereas some other properties do not always hold\n(for example, that asymmetric mechanisms are optimal and\nthat players must always use their entire action space).\nA standard mechanism-design setting is composed of\nagents with private information (their types), and a\nsocial planner, who wishes to implement a social-choice\nfunction, c - a function that maps any profile of the agents'\ntypes into a chosen alternative. A classic result in this\nsetting says that under some monotonicity assumption on the\nagents' preferences - the single-crossing assumption (see\ndefinition below) - a social-choice function is implementable\nin dominant strategies if and only if it is monotone in the\nplayers' types. However, in environments with restricted\naction spaces, the social planner typically cannot implement\nevery social-choice function due to inherent informational\nconstraints. That is, for some realizations of the players'\ntypes, the decision of the social planner will be incompatible\nwith the social-choice function c.
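One way to build intuition for the size of this incompatibility - the abstract bounds it by O(1/k^2) - is a quantization analogy. This is an illustration under our own assumptions, not the paper's construction: a type uniform on [0, 1] that must be summarized by one of k equal cells deviates from the cell midpoint by 1/(12k^2) in expected squared error, the same 1/k^2 decay:

```python
# Analogy only: expected squared error of midpoint quantization of a
# U[0,1] type into k equal cells is 1/(12*k^2) - the same 1/k^2 decay
# as the social-value loss bounded later in the paper.
import random

def quantization_mse(k, samples=200_000):
    random.seed(0)  # deterministic Monte Carlo estimate
    err = 0.0
    for _ in range(samples):
        theta = random.random()
        cell = min(int(theta * k), k - 1)   # which of the k cells theta hits
        midpoint = (cell + 0.5) / k         # act as if theta were the midpoint
        err += (theta - midpoint) ** 2
    return err / samples

for k in (2, 4, 8):
    print(k, round(quantization_mse(k), 5), round(1 / (12 * k * k), 5))
```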
In order to quantitatively\nmeasure how well bounded-action mechanisms can\napproximate the original social-choice functions, we follow a\nstandard assumption that the social-choice function is derived\nfrom a social-value function, g, which assigns a real value\nto every alternative A and realization of the players' types.\nThe social-choice function c will therefore choose an\nalternative that maximizes the social-value function, given the type\nvector \u03b8 = (\u03b81, ..., \u03b8n), i.e., c(\u03b8) = argmaxA{g(\u03b8, A)}.\nObserve that the social-value function is not necessarily the\nsocial-welfare function - the social-welfare function is a\nspecial case of g in which g is defined to be the sum of the\nplayers' valuations for the chosen alternative. Following are\nseveral simple examples of social-value functions:\n\u2022 Public goods. A government wishes to build a bridge\nonly if the sum of the benefits that agents gain from\nit exceeds its construction cost C. The social-value\nfunction in a 2-player game will therefore be:\ng(\u03b81, \u03b82, build) = \u03b81 + \u03b82 - C and\ng(\u03b81, \u03b82, do not build) = 0.\n\u2022 Routing in networks. Consider a network that is\ncomposed of two links in parallel. Each link has a secret\nprobability pi of transferring a message successfully.\nA sender wishes to send his message through the\nnetwork only if the probability of success is greater than,\nsay, 90 percent - the known probability in an alternate\nnetwork. That is,\ng(p1, p2, send in network) = 1 - (1 - p1) \u00b7 (1 - p2) and\ng(p1, p2, send in alternate network) = 0.9.\n\u2022 Single-item auctions. Consider a 2-player auction,\nwhere the auctioneer wishes to allocate the item to the\nplayer who values it the most.
The social-value\nfunction is given by g(\u03b81, \u03b82, player 1 wins) = \u03b81 for the first\nalternative and g(\u03b81, \u03b82, player 2 wins) = \u03b82 for the second.\n1.1 Our Contribution\nIn this paper, we present a general framework for the\nstudy of mechanism design in environments with a limited\nnumber of actions. We assume a Bayesian model where\nplayers have one-dimensional private types, independently\ndistributed on some real interval.\nThe main question we ask is: when agents are only allowed\nto use k different actions, which mechanisms achieve the\noptimal expected social value? Note that this question is\nactually composed of two separate questions. The first question\nis an information-theoretic one: what is the optimal\nresult achievable when the players can only reveal\ninformation using these k actions (recall that their type space\nmay be continuous)? The other question involves\ngame-theoretic considerations: what is the best result achievable\nwith k actions, where this result should be achieved in a\ndominant-strategy equilibrium? These questions raise the\nquestion of the price of implementation: can the\noptimal information-theoretic result always be implemented in\na dominant-strategy equilibrium? And if not, to what\nextent does the dominant-strategy requirement degrade the\noptimal result? What we call the price of implementation\nwas also explored in other contexts in game theory where\ncomputational restrictions apply: for example, is it always\ntrue that the optimal polynomial-time approximation ratio\n(for example, in combinatorial auctions) can be achieved in\nequilibrium? (The answer to this interesting problem is still\nunclear; see, e.g., [3, 2, 13].)\nOur first contribution is the characterization of sufficient\nconditions for implementing the optimal\ninformation-theoretic social-choice rule in dominant strategies.
We show\nthat for the family of multilinear social-value functions (that\nis, polynomials where each variable has a degree of at most\none in each monomial) the dominant-strategy\nimplementation incurs no additional cost.\nTheorem: Given any multilinear single-crossing\nsocial-value function, and for any number of alternatives and\nplayers, the social-choice rule that is information-theoretically\noptimal is implementable in dominant strategies.\nMultilinear social-value functions capture many\nimportant and well-studied models, and include, for instance, the\nrouting example given above, and any social-welfare\nfunction in which the players' valuations are linear in their types\n(such as public goods and auctions).\nThe implementability of the information-theoretically\noptimal mechanisms enables us to use a standard routine in\nmechanism design and first determine the optimal\nsocial-choice rule, and then calculate the appropriate payments\nthat ensure incentive compatibility. To show this result, we\nprove a useful lemma that gives another characterization of\nsocial-choice functions whose price of implementation is\nzero. We show that for any social-choice function, incentive\ncompatibility in action-bounded mechanisms is equivalent\nto the property that the optimal expected social value is\nachieved with non-decreasing strategies (or threshold\nstrategies).1\nIn other words, this lemma implies that one can\nalways implement, with dominant strategies, the best\nsocial-choice rule that is achievable with non-decreasing strategies.\nOur second contribution is in characterizing the optimal\naction-bounded mechanisms. We identify some necessary\nconditions for the optimality of mechanisms in general, and,\nusing these conditions, we fully characterize the optimal\nmechanisms in environments with two players and two\nalternatives.
The optimal mechanisms turn out to be diagonal\n- that is, in their matrix representation, one alternative will\nbe chosen in, and only in, entries below one of the main\ndiagonals (this term extends the concept of Priority Games\nused in [5] for bounded-communication auctions). We\ncomplete the characterization of the optimal mechanisms with\nthe depiction of the optimal strategies - strategies that are\nmutually maximizers. Since the payments in a\ndominant-strategy implementation are uniquely defined by a monotone\nallocation and a profile of strategies, this also defines the\npayments in the mechanism. We give an intuitive proof for\nthe optimality of such strategies, generalizing the concept\nof optimal mutually-centered strategies from [4].\nSurprisingly, as opposed to the optimal auctions in [4], for some\nnon-trivial social-value functions, the optimal diagonal\nmechanism may not utilize all the k available actions.\nTheorem: For any multilinear single-crossing social-value\nfunction over two alternatives, the informationally optimal\n2-player k-action mechanism is diagonal, and the optimal\ndominant strategies are mutually maximizers.\nAchieving a full characterization of the optimal\naction-bounded mechanism for multi-player or multi-alternative\nenvironments seems to be harder. To support this claim,\nwe observe that the number of mechanisms that satisfy the\nnecessary conditions above grows exponentially in the\nnumber of players.\n1\nThe restriction to non-decreasing strategies is very\ncommon in the literature. One remarkable result by Athey [1]\nshows that when a non-decreasing strategy is a best response\nto any profile of non-decreasing strategies of the other players, a pure\nBayesian-Nash equilibrium must exist.\nOur next result compares the expected social value in\nk-action mechanisms to the optimal expected social value\nwhen the action space is unrestricted.
For any number of\nplayers or alternatives, and for any profile of independent\ndistribution functions, we construct mechanisms that are\nnearly optimal - up to an additive difference of O(1/k^2). This\nresult is achieved in dominant strategies.\nTheorem: For any multilinear social-value function, the\noptimal k-action mechanism incurs an expected social loss\nof O(1/k^2).\nThis is the same asymptotic rate proved for specific\nenvironments in [19, 9, 5]. Note that there are social-choice\nfunctions that can be implemented with k actions with no\nloss at all (for example, the rule \"always choose alternative\nA\"). However, we know that in some settings (e.g.,\nauctions [5]) the optimal loss may be proportional to 1/k^2; thus\na better general upper bound is impossible.\nFinally, we present our results in the context of several\nnatural applications. First, we give an explicit solution for\na public-good game with k actions. We show that the\noptimum is achieved in symmetric mechanisms (in contrast to\naction-bounded auctions [5]), and that the optimal\nallocation scheme depends on the value of the construction cost\nC. Then, we study the celebrated signaling model, in which\npotential employees send signals about their skills to\npotential employers by means of the education level they acquire.\nThis is a natural application in our context since education\nlevels are often discrete (e.g., B.A., M.A. and Ph.D.). Lastly,\nwe present our results in the context of routing in networks,\nwhere it is reasonable to assume that links report whether\nthey have low or high loss rates, but less reasonable to\nrequire them to report their accurate loss rates. The latter\nexample illustrates how our results apply to settings where\nthe goal of the social planner is not welfare maximization\n(nor variants of it like affine maximizers).\nThe rest of the paper is organized as follows: our model\nand notations are described in Section 2.
We then describe\nour general results regarding implementation in multi-player\nand multi-alternative environments in Section 3, including\nthe asymptotic analysis of the social-value loss. In\nSection 4, we fully characterize the optimal mechanisms for\n2-player environments with two alternatives. In Section 5, we\nconclude by applying our general results to several\nwell-studied models. Due to lack of space, some of the proofs\nare omitted and can be found in the full version, available\non the authors' web pages.\n2. MODEL AND PRELIMINARIES\nWe first describe a standard mechanism-design model for\nplayers with one-dimensional types. Then, in Subsection\n2.2, we impose limitations on the action space. The\ngeneral model studies environments with n players and a set\nA = {A1, A2, ..., Am} of m alternatives. Each player has a\nprivately known type \u03b8i \u2208 [\u03b8i, \u03b8\u0304i] (where \u03b8i, \u03b8\u0304i \u2208 R, \u03b8i < \u03b8\u0304i),\nand a type-dependent valuation function vi(\u03b8i, A) for each\nalternative A \u2208 A. In other words, player i with type \u03b8i is\nwilling to pay an amount of vi(\u03b8i, A) for alternative A to be\nchosen. Each type \u03b8i is independently distributed according\nto a publicly known distribution Fi, with an always-positive\ndensity function fi. We denote the set of all possible type\nprofiles by \u0398 = \u00d7_{i=1}^{n} [\u03b8i, \u03b8\u0304i].
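Under this notation, the public-good and routing examples from the Introduction can be written as small functions. The cost C = 1 below is an illustrative choice (the 0.9 benchmark follows the text); this is a sketch of the definitions, not code from the paper:

```python
# Sketch of social-value functions g over (theta, alternative) pairs and
# the induced social-choice rule c(theta) = argmax_A g(theta, A).
C = 1.0  # construction cost of the bridge (illustrative value)

def g_public_good(theta, alternative):
    t1, t2 = theta
    return t1 + t2 - C if alternative == "build" else 0.0

def g_routing(theta, alternative):
    p1, p2 = theta
    if alternative == "send in network":
        return 1 - (1 - p1) * (1 - p2)  # prob. at least one link succeeds
    return 0.9  # known success probability of the alternate network

def choice_rule(g, theta, alternatives):
    """c(theta): the alternative maximizing the social value."""
    return max(alternatives, key=lambda a: g(theta, a))

print(choice_rule(g_public_good, (0.7, 0.6), ["build", "do not build"]))
print(choice_rule(g_routing, (0.5, 0.5),
                  ["send in network", "send in alternate network"]))
```

The bounded-action question is then: how well can this argmax be tracked when each player's report of theta_i is limited to k actions?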
The players reveal information\nabout their types by choosing an action, from an action set\nB.\nEach player uses a strategy for determining the action he\nplays for any possible type. A strategy for player i is\ntherefore a function si : [\u03b8i, \u03b8i] \u2212\u2192 B. We denote a profile of\nstrategies by s = s1, ..., sn and the set of the strategies of all\nplayers except i by s\u2212i. The utility of player i of type \u03b8i from\nalternative A under the payment pi is ui = vi(\u03b8i, A) \u2212 pi.\n2.1 Dominant-Strategy Implementation\nFollowing is a standard definition of a mechanism. The\naction space B is traditionally implicit, but we mention it\nexplicitly since we later examine limitations on B.\nDefinition 1. A mechanism with an action set B is a\npair (t, p) where:\n\u2022 t : Bn\n\u2192 A is the allocation rule.2\n\u2022 p : Bn\n\u2192 Rn\nis the payment scheme (i.e., pi(b) is the\npayment to the ith player given a vector of actions b).\nThe main goal of this paper is to optimize the expected\nsocial value (in action-bounded mechanisms) while preserving\na dominant-strategy equilibrium.\nWe say that a strategy si is dominant for player i in\nmechanism (t, p) if player i cannot increase his utility by reporting\na different action than si(\u03b8i), regardless of the actions of the\nother players b\u2212i.3\nDefinition 2. We say that a social-choice function h is\nimplementable with a set of actions B if there exists a\nmechanism (t, p) with a dominant-strategy equilibrium s1, ..., sn\n(where for each i, si : [\u03b8i, \u03b8i] \u2212\u2192 B) that always chooses an\nalternative according to h, i.e., t(s1(\u03b81), ..., sn(\u03b8n)) = h(\n\u2212\u2192\n\u03b8 ).\nA fundamental result in the mechanism-design literature\nstates that under the single-crossing condition, the\nmonotonicity of the social-choice function is a sufficient and\nnecessary condition for dominant-strategy implementability\n(in single-parameter environments). 
The single-crossing\ncondition (also known as the Spence-Mirrlees condition)\nappears, very often implicitly, in almost every paper on\nmechanism design in one-dimensional domains. Without this\nassumption, general sufficient conditions for implementability\nare unknown (for a survey on this topic see [10]).\nThroughout this paper, we assume that the valuation functions of\nthe players are single-crossing, as defined below. A player's\nvaluation function will be single-crossing if the effect of an\nincrement in the player's type on the player's valuation for\ntwo alternatives is always greater for one of these\nalternatives. The single-crossing condition on the players'\npreferences actually defines an order on the alternatives. For\nexample, if the value of player i for alternative A increases\nmore rapidly than his value for alternative B, we can denote\nit by A \u227bi B. Later on, we will use these orders for defining\nmonotonicity of social-choice functions.\n2\nWe will show that, w.l.o.g., we can focus on deterministic\nallocation schemes.\n3\nThat is, for every type \u03b8i and every action bi, we have\nthat vi(\u03b8i, t(si(\u03b8i), b\u2212i)) \u2212 pi(si(\u03b8i), b\u2212i) \u2265 vi(\u03b8i, t(bi, b\u2212i)) \u2212 pi(bi, b\u2212i).\nDefinition 3. A function h : \u0398 \u00d7 A \u2192 R is single\ncrossing with respect to i if there is a weak order \u2ab0i on the\nalternatives, such that for any two alternatives Aj \u227bi Al we\nhave that for every \u03b8 \u2208 \u0398,\n\u2202h(\u03b8, Aj)/\u2202\u03b8i > \u2202h(\u03b8, Al)/\u2202\u03b8i,\nand if Aj \u223ci Al (that is, Al \u2ab0i Aj and Aj \u2ab0i Al) then\nh(\u00b7, Aj) \u2261 h(\u00b7, Al) (i.e., the functions are identical).\nThe definition of monotone social-choice functions also\nrequires an order on the actions.
This order is implicit in\nmost of the standard settings where, for example, it is\ndefined by the order on the real numbers (e.g., in direct\nrevelation mechanisms where each type is drawn from a real\ninterval). When the action space is discrete, the order can\nbe determined by the names of the actions, for example,\n0, 1, ..., k-1 for k-action mechanisms. (We therefore\ndescribe this order with the standard relations on natural\nnumbers <, >.)\nDefinition 4. A deterministic mechanism is monotone\nif, when player i raises his reported action, fixing the\nactions of the other players, the mechanism never chooses\nan inferior alternative for i. That is, for any b\u2212i \u2208 {0, ..., k\u22121}^{n\u22121},\nif b\u2032i > bi then t(b\u2032i, b\u2212i) \u2ab0i t(bi, b\u2212i).\nFollowing is a classic result regarding the implementability\nof social-choice functions in single-parameter environments.\nNote, however, that this characterization does not hold when\nthe action space is bounded.\nProposition 1. Assume that the valuation functions\nvi(\u03b8i, A) are single crossing and that the action space is\nunrestricted. A social-choice function c is dominant-strategy\nimplementable if and only if c is monotone.\n2.2 Action-Bounded Mechanisms\nThe set of actions B is usually implicit in the literature,\nand it is assumed to be isomorphic to the type space. In this\npaper, we study environments where this assumption does\nnot hold. We define a k-action game to be a game in which\nthe number of possible actions for each player is k, i.e., |B| =\nk. In k-action games, the social planner typically cannot\nalways choose an alternative according to the social-choice\nfunction c due to the informational constraints. Instead,\nwe are interested in implementing a social-choice function\nthat, with k actions, maximizes the expected social value:\nE\u03b8[g(\u03b8, t(s1(\u03b81), ..., sn(\u03b8n)))].\nDefinition 5.
We say that a social-choice function h : Θ → A is informationally achievable with a set of actions B if there exists a profile of strategies s1, ..., sn (where for each i, si : [θ̲i, θ̄i] → B) and an allocation rule t : B^n → A, such that t chooses the same alternative as h for any type profile, i.e., t(s1(θ1), ..., sn(θn)) = h(θ⃗). If |B| = k, we say that h is k-action informationally achievable.

Note that this definition does not take into account strategic considerations. For example, consider an environment with two alternatives A = {A, B}, and the following social-choice function: c̃(θ1, θ2) = A iff {θ1 > 1/2 and θ2 > 1/2}. c̃ is informationally achievable with two actions: if both players bid 1 when their value is greater than 1/2 and 0 otherwise, then the allocation rule that chooses alternative A iff both players report 1 derives exactly the same allocation for every profile of types. In contrast, it is easy to see that the function ĉ(θ1, θ2) = A iff θ1 + θ2 > 1/2 is not informationally achievable with two actions.

We now define a social-choice rule that maximizes the social value under the information-theoretic constraints that are implied by the limitation on the number of actions.

Definition 6. A social-choice function is k-action informationally optimal with respect to the social-value function g if it achieves the maximal expected social value among all the k-action informationally achievable social-choice functions.⁴

Earlier in this section, we defined the single-crossing property for the players' valuations. We now define a single-crossing property on the social-value function g. This property clearly ensures the monotonicity of the corresponding social-choice rule, and we will later show that it is also useful for action-bounded environments.

Definition 7.
We say that the social-choice rule g(θ⃗, A) exhibits the single-crossing property if, for every player i, g exhibits the single-crossing property with respect to i.

Note that the definition above requires that g be single crossing with respect to every player i, given her individual order ⪰i on the alternatives. That is, the social-value function g is compatible in this sense with the single-crossing conditions on the players' preferences.

Finally, we call attention to a natural set of strategies: non-decreasing strategies, where each player reports a higher action as her type increases. Equivalently, such strategies are threshold strategies, i.e., strategies where each player divides her type support into intervals and simply reports the interval in which her type lies.

Definition 8. A real vector x = (x0, x1, ..., xk) is a vector of threshold values if x0 ≤ x1 ≤ ... ≤ xk.

Definition 9. A strategy si is a threshold strategy based on a vector of threshold values x = (x0, x1, ..., xk) if, for any action j, it holds that si(θi) = j iff θi ∈ [xj, xj+1]. A strategy si is called a threshold strategy if there exists a vector x of threshold values such that si is a threshold strategy based on x.

3. IMPLEMENTATION WITH A LIMITED NUMBER OF ACTIONS
In this section, we study the general model of action-bounded mechanism design. Our first result is a sufficient and necessary condition for the implementability of the optimal solution achievable with k actions: this condition says that the optimal social-choice rule is achieved when all the players use non-decreasing strategies.

Footnote 4: For simplicity, we assume that a maximum is attained and thus the optimal function is well defined.
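Definition 9 above translates directly into code. The sketch below is our own illustration (the name `threshold_strategy` is not from the paper), with ties at an interior threshold broken toward the higher action:

```python
from bisect import bisect_right

def threshold_strategy(x):
    """Return s_i for a threshold vector x = (x0, ..., xk): a type theta
    maps to the action j with theta in [x_j, x_{j+1}), giving the
    len(x) - 1 actions 0, ..., k-1."""
    def s_i(theta):
        # Count the interior thresholds x1..x_{k-1} lying at or below theta;
        # the lo/hi bounds clamp so the highest types map to action k-1.
        return bisect_right(x, theta, lo=1, hi=len(x) - 1) - 1
    return s_i

# Four actions with evenly spaced thresholds on [0, 1].
s = threshold_strategy([0.0, 0.25, 0.5, 0.75, 1.0])
```

For example, s(0.3) is action 1 and s(0.9) is action 3.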
The basic idea is that with non-decreasing strategies (i.e., threshold strategies), we can apply the single-crossing property to show that when a player raises his reported action, the expected value for his high-priority alternatives increases faster; therefore, monotonicity must hold. The result holds for any number of players and alternatives, and for any profile of distribution functions on the players' types, as long as they are statistically independent. (It is easy to illustrate that this result does not hold if the players' types are dependent.)

Lemma 1. Consider a single-crossing social-value function g. The informationally optimal k-action social-choice function c* (with respect to g) is implementable if and only if c* achieves its optimum when the players use non-decreasing strategies.

Next, we show that for a wide family of social-value functions, multilinear functions, the price of implementation is zero. That is, the information-theoretically optimal rule is dominant-strategy implementable. This family of functions captures many common settings from the literature. In particular, it generalizes the auction setting studied by Blumrosen et al. [4, 6].

Definition 10. A multilinear function is a polynomial in which the degree of every variable in each monomial is at most 1.⁵ We say that a social-choice rule g is multilinear if g(·, A) is multilinear for every alternative A ∈ A.

The basic idea behind the proof of the following theorem is as follows: for every player, we show that the expected social welfare when he chooses any action (fixing the strategies of the other players) is a linear function of his type. This is a result of the multilinearity of the social-value function and of the linearity of expectation.
The maximum over a set of linear functions is a piecewise-linear function; hence the optimal social value is achieved when the player uses threshold strategies (the thresholds are the switching points). Since the optimum is achieved with threshold strategies, we can apply Lemma 1 to show the monotonicity of this social-choice rule. Note that in this argument we characterize the players' strategies that maximize the social value, and not the players' utilities.

Theorem 1. If the social-value function is multilinear and single crossing, the informationally optimal k-action social-choice function is implementable.

Proof. We will show that for any k-action mechanism, the optimal expected social value is achieved when all players use threshold strategies. This will be shown by proving that for any player i and for any action bi of player i, the expected welfare when she chooses the action bi is a linear function in player i's type θi. Then, it will follow from Lemma 1 that the social-choice function is implementable.

For every action bi of player i, let qA denote the probability that alternative A is allocated, i.e.,

  qA = Pr_θ⃗ [ t(s(θ⃗)) = A | si(θi) = bi ].

Due to the linearity of expectation, the expected social value when player i with type θi reports bi is:

  Σ_{A∈A} qA · E_{θ−i} ( g(θi, θ−i, A) | t(bi, s−i(θ−i)) = A )   (1)
  = Σ_{A∈A} qA ∫_{θ−i} g(θi, θ−i, A) f^A_{−i}(θ−i) d(θ−i)   (2)

where f^A_{−i}(θ−i) equals (Π_{j≠i} fj(θj)) / qA for type profiles θ−i such that t(bi, s−i(θ−i)) = A, and 0 otherwise.

Since g is multilinear, every function g(θi, θ−i, A) is a linear function in θi, where the coefficients depend on the values of θ−i.

Footnote 5: For example, f(x, y, z) = xyz + 5xy + 7.
Denote this function by g(θi, θ−i, A) = λ_{θ−i} θi + β_{θ−i}. Thus, we can write Equation 2 as:

  Σ_{A∈A} qA ∫_{θ−i} ( λ_{θ−i} θi + β_{θ−i} ) f^A_{−i}(θ−i) d(θ−i)
  = Σ_{A∈A} qA ( θi ∫_{θ−i} λ_{θ−i} f^A_{−i}(θ−i) d(θ−i) + ∫_{θ−i} β_{θ−i} f^A_{−i}(θ−i) d(θ−i) )

In this expression, each integral is a constant independent of θi when the strategies of the other players are fixed. Therefore, each summand, and thus the whole function, is a linear function in θi. For achieving the optimal expected social value, the player must choose the action that maximizes the expected social value. A maximum of k linear functions is a piecewise-linear function with at most k−1 breaking points. These breaking points are the thresholds to be used by the player. For all types between subsequent thresholds, the optimum is clearly achieved by a single action; since linear functions are single-crossing, every action will be maximal in at most one interval.

The same argument applies to all the players, and therefore the optimal social value is obtained with threshold strategies.

Observe that the proof of Theorem 1 actually works in a more general setting. For proving that the information-theoretically optimal result is achieved with threshold strategies, it is sufficient to show that the social-choice function exhibits a single-crossing condition in expectation: given any allocation scheme, and fixing the behavior of the other players, the expected social value of any two actions (as a function of θi) is single crossing.
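The envelope argument above is easy to mirror numerically. A sketch (our own illustration, with hypothetical names): given the k lines (slope, intercept) of expected social value per action, sorted by increasing slope, the breaking points of their upper envelope are the candidate thresholds, assuming, as in the single-crossing case, that every action is maximal on some interval:

```python
def envelope_breakpoints(lines):
    """lines[j] = (slope_j, intercept_j), one line per action, sorted by
    strictly increasing slope.  Returns the at most k-1 points where the
    maximizing action switches (intersections of consecutive lines)."""
    points = []
    for (a1, c1), (a2, c2) in zip(lines, lines[1:]):
        # a1*x + c1 = a2*x + c2  =>  x = (c1 - c2) / (a2 - a1)
        points.append((c1 - c2) / (a2 - a1))
    return points

# Three actions: slopes 0, 1, 2 -> two thresholds on the type axis.
thresholds = envelope_breakpoints([(0.0, 0.5), (1.0, 0.0), (2.0, -0.6)])
```

Here action 0 is best below 0.5, action 1 between 0.5 and 0.6, and action 2 above 0.6.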
Theorem 1 shows that this requirement holds for multilinear functions, but we were not able to give an exact characterization of this general class of functions.

The implementability of the information-theoretically optimal solution makes the characterization of the optimal incentive-compatible mechanisms significantly easier: we can apply the standard mechanism-design technique and first calculate the optimal allocation scheme and then find the right payments.

Observe that if the valuation functions of the players are linear and single crossing, then the social-welfare function (i.e., the sum of the players' valuations) is multilinear and single-crossing. This holds since the single-crossing conditions on the valuations are defined with a similar order on the alternatives as in the social-value function. Therefore, an immediate conclusion from Theorem 1 is that the optimal social welfare achievable with k actions is implementable when the valuations are linear.

Corollary 1. If the valuation functions vi(·, A) are single crossing and linear in θi for every player i and for every alternative, then the informationally optimal k-action social-welfare function is implementable.

3.1 Asymptotic Analysis
In this section we show that the social-value loss of multilinear social-value rules diminishes quadratically with the number of possible actions, k. This is the same asymptotic ratio presented in the study of specific models in the same spirit [19, 5, 18, 9]. The main challenge here, compared to earlier results, is in dealing with the general mechanism-design framework, which allows a large family of social-value functions for any number of players and alternatives.
As opposed to the specific models, the social-value function may be asymmetric with respect to the players' types; for instance, the social-value loss may a priori occur in any entry (i.e., profile of actions).

The basic intuition for the proof is that even in this general framework, we can construct mechanisms where the probability of having an allocation that is incompatible with the original social-choice function is O(1/k). (This fact holds for all single-crossing social-choice functions, not only for multilinear functions.) Then, we can use the multilinearity to show that the social-value loss will always be O(1/k) in the mechanisms we construct. Taken together, the expected loss becomes O(1/k²). Our proof is constructive: we present an explicit construction of a mechanism that exhibits the desired loss in dominant strategies. The additive expected social-value loss depends on the length of the support of the type space. Hence, we assume that the type space is normalized to [0, 1], that is, for every player i, θ̲i = 0 and θ̄i = 1.

Theorem 2. Assume that the type spaces are normalized to [0, 1]. For any number of players and alternatives, and for any set of distribution functions of the players' types, if the social-value function g is single crossing and multilinear, then the informationally optimal k-action social-choice function (with respect to g) incurs an expected social-value loss of O(1/k²).

Moreover, as discussed in [4], this bound is asymptotically tight. That is, there exists a set of distribution functions for the players (the uniform distribution in particular) and there are social-value functions (e.g., auctions) for which any mechanism incurs a social-value loss of at least Ω(1/k²). In that sense, auctions are the hardest problems with respect to the incurred loss. Yet, note that this claim does not imply that the loss of any social-choice function will be proportional to 1/k².
For example, in the social-choice function that chooses the same alternative for any type profile, no loss will be incurred (even with a single action).

4. OPTIMAL MECHANISMS FOR TWO PLAYERS AND TWO ALTERNATIVES
In this section, we present a full characterization of the optimal mechanisms in action-bounded environments with two players and two alternatives, where the social-choice functions are multilinear and single crossing.

Note that in this section, as in most parts of this paper, we characterize monotone mechanisms by their allocation scheme and by a profile of strategies for the players. Doing this, we completely describe which alternative is chosen for every profile of types of the players. It is well known that in monotone mechanisms for one-dimensional environments, the allocation scheme uniquely defines the payments in the dominant-strategy implementation. We find this description, which does not explicitly mention the payments, easier for the presentation.

A key notion in our characterization of the optimal action-bounded mechanism is the notion of non-degenerate mechanisms. In a degenerate mechanism, there are two actions for one of the players that are identical in their allocation. Intuitively, a degenerate mechanism does not utilize all the action space it is allowed to use, and therefore it cannot be optimal. Using this property, we then define diagonal mechanisms, which turn out to exactly characterize the set of optimal mechanisms.

Definition 11.
A mechanism is degenerate with respect to player i if there exist two actions bi, b'i for player i such that for all profiles b−i of actions of the other players, the allocation scheme is identical whether player i reports bi or b'i (i.e., ∀b−i, t(bi, b−i) = t(b'i, b−i)).

For example, a 2-player mechanism is degenerate with respect to the rows player if there are two rows with identical allocations in the matrix representation of the game.

Definition 12. A 2-player, 2-alternative mechanism with k possible actions is called diagonal if it is monotone and non-degenerate with respect to at least one of the players.

The term diagonal originates from the matrix representation of these mechanisms, in which one of the diagonals determines the boundary between the choice of the two alternatives (see Figure 1). Simple combinatorial considerations show that diagonal mechanisms may come in very few forms. Interestingly, one of these forms is degenerate with respect to one of the players; that is, it can be described as a mechanism with k−1 actions for this player.

Proposition 2. Any diagonal 2-player mechanism has one of the following forms:
1. If both players favor the same alternative (w.l.o.g., B ≻i A for i = 1, 2) then either
(a) t(b1, b2) = B iff b1 + b2 ≥ k − 1, or
(b) t(b1, b2) = B iff b1 + b2 ≥ k.
2. If the two players have conflicting preferences (e.g., A ≻1 B and B ≻2 A) then either
(a) t(b1, b2) = B iff b1 ≥ b2, or
(b) t(b1, b2) = B iff b1 > b2.
In both cases, the optimal mechanism can also take the form of one of the possibilities described, except that one of the players is not allowed to choose the fixed-allocation action.

To complete the description of the optimal allocation scheme, we now move on to determine the optimal strategies in diagonal mechanisms. We define the notion of mutually-maximizer thresholds, and show that threshold strategies based on such thresholds are optimal.
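The four forms of Proposition 2 are simple enough to write down as allocation rules; this sketch (the function name and form labels are ours) returns t(b1, b2) for each form:

```python
def diagonal_allocation(form, k):
    """Allocation rules for the diagonal forms of Proposition 2,
    with alternatives 'A'/'B' and actions 0..k-1 for each player."""
    rules = {
        '1a': lambda b1, b2: 'B' if b1 + b2 >= k - 1 else 'A',  # same favorite
        '1b': lambda b1, b2: 'B' if b1 + b2 >= k else 'A',      # same favorite
        '2a': lambda b1, b2: 'B' if b1 >= b2 else 'A',          # conflicting
        '2b': lambda b1, b2: 'B' if b1 > b2 else 'A',           # conflicting
    }
    return rules[form]

# Form 1b with k = 4: B is chosen only when the reported actions sum to >= 4.
t = diagonal_allocation('1b', 4)
```

Each rule draws the A/B boundary along one diagonal of the k × k action matrix.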
The reason why mutually-maximizer strategies maximize the expected social value in monotone mechanisms is intuitive: consider some action i (a row in the matrix representation) for player 1. In a monotone mechanism, the allocation in such a row will be of the form [A, A, ..., B, B] (assuming that B ≻2 A). That is, alternative A will be chosen for low actions of player 2, and alternative B will be chosen for higher actions of player 2. By determining a threshold for player 2, the social planner actually determines the minimal type of player 2 from which alternative B will be chosen. For optimizing the expected social value, this type for player 2 should clearly be the type for which the expected social value from A equals the expected social value from B (given that player 1 plays i); for greater values of player 2, the single-crossing condition ensures that B will be preferred.

Definition 13. Consider a monotone 2-player mechanism for the social-value function g that is non-degenerate with respect to both players, where the players use threshold strategies based on the threshold vectors x, y. We say that the threshold xi of one player (w.l.o.g. player 1) is a maximizer if

  E_{θ2} ( g(xi, θ2, A) | θ2 ∈ [yj, yj+1] ) = E_{θ2} ( g(xi, θ2, B) | θ2 ∈ [yj, yj+1] ),

where j is the action of player 2 at which the mechanism swaps the chosen alternative exactly when player 1 moves to action i, i.e., t(i, j) ≠ t(i − 1, j) (we denote, w.l.o.g., t(i, j) = A and t(i − 1, j) = B).

The threshold vectors x, y are called mutually maximizers if all their thresholds are maximizers (except the first and the last).

It turns out that in 2-player, 2-alternative environments, where the social-choice rule is multilinear and single crossing, the optimal expected social value is achieved in diagonal mechanisms with mutually-maximizer strategies.
In the proof, we start with a k × k allocation matrix and show that the mechanism cannot be degenerate with respect to one of the players (we show how to choose this player). If the mechanism is degenerate with respect to, w.l.o.g., the columns player, then there are two columns with an identical allocation. These two columns can be unified into a single action, and the mechanism can therefore be described as a k × (k−1) matrix. We then show that we can insert a new, missing column, with an appropriately chosen threshold, and strictly increase the expected social value of the mechanism. Therefore, the original mechanism was not the optimal k-action mechanism.

Theorem 3. In environments with two alternatives and two players, if the social-value function is multilinear and single crossing, then the optimal k-action mechanism is diagonal, and the optimum is achieved with threshold strategies that are mutually maximizers.

A corollary from the proof of Theorem 3 is that the optimal 2-player k-action mechanism may be degenerate for one of the players (that is, equivalent to a game where one of the players has only k − 1 different actions).

      0 1 2 3      0 1 2 3      0 1 2 3      0 1 2 3
   0  A A A B   0  A A A A   0  B B B B   0  A A A B
   1  A A B B   1  A A A B   1  A B B B   1  A A B B
   2  A B B B   2  A A B B   2  A A B B   2  A B B B
   3  B B B B   3  A B B B   3  A A A B

Figure 1: The three left tables show all possible diagonal allocation schemes with 4 possible actions for each player. The rightmost table shows an example of a diagonal allocation scheme where one of the players has only k − 1 possible actions.

However, the proof identifies the following sufficient condition under which the optimal mechanism will be non-degenerate with respect to both players: if the players' preferences are correlated (e.g., A ≻1 B and A ≻2 B), then the optimal alternative must be the same under the profiles (θ̄1, θ̲2) and (θ̲1, θ̄2).
Similarly, if the players' preferences are conflicting (e.g., A ≻1 B and B ≻2 A), then the optimal alternative must be the same under the profiles (θ̄1, θ̄2) and (θ̲1, θ̲2). Examples in which this condition holds are the public-good model presented in Section 5 and auctions [5].

We do not know how to give an exact characterization of the optimal mechanisms in multi-player and multi-alternative environments. The hardness stems from the fact that the necessary conditions we specified before for the optimality of the mechanisms (i.e., non-degenerate and monotone allocations) are not restrictive enough for the general model. In other words, for n > 2 players, the number of monotone and non-degenerate mechanisms becomes exponential in n.

Proposition 3. The number of monotone non-degenerate k-action mechanisms in an n-player game is exponential in n, even if |A| = 2.

5. EXAMPLES
Our results apply to a variety of economic, computational and networked settings. In this section, we demonstrate the applicability of our results to public-good models, signaling games and routing applications.

5.1 Application 1: Public Goods
The public-good model deals with a social planner (e.g., a government) that needs to decide whether to supply a public good, such as building a bridge. Let Yes and No denote the respective alternatives of building and not building the bridge. v = (v1, . . . , vn) is the vector of the players' types: the values they gain from using the bridge. The decision that maximizes the social welfare is to build the bridge if and only if Σi vi is greater than its cost, denoted by C. If the bridge is built, the social welfare is Σi vi − C, and zero otherwise; thus, g(v, Yes) = Σi vi − C, and g(v, No) = 0. The utility of player i under payment pi is ui = vi − pi if the bridge is built, and 0 otherwise.
It is well known that under no restriction on the action space, it is possible to induce truthful revelation by VCG mechanisms, and therefore full efficiency can be achieved. Obviously, when the action set is limited to k actions, we cannot achieve full efficiency due to the informational constraints. Yet, since g(v, Yes) and g(v, No) are multilinear and single crossing, we can directly apply Theorem 1. Hence, the information-theoretically optimal k-action mechanism is implementable in dominant strategies.

Corollary 2. The k-action informationally optimal social welfare in the n-player public-good game is implementable in dominant strategies.

Moreover, as Theorem 3 suggests, in the k-action 2-player public-good game we can fully characterize the optimal mechanisms. In the proof of Theorem 3, we saw that when g(θ̄1, θ̲2, A) − g(θ̄1, θ̲2, B) and g(θ̲1, θ̄2, A) − g(θ̲1, θ̄2, B) have the same sign, the mechanism is non-degenerate with respect to both players.⁶ This condition clearly holds here (1 + 0 − C = 0 + 1 − C), therefore the optimal mechanisms will use all k actions.

Corollary 3. The optimal expected welfare in a 2-player k-action public-good game is achieved with one of the following mechanisms:⁷

1. Allocation: build the bridge iff b1 + b2 ≥ k.
   Strategies: threshold strategies based on the vectors x⃗, y⃗ where for every 1 ≤ i ≤ k−1,
   xi = C − E[v2 | v2 ∈ [y_{k−i}, y_{k−i+1}]]
   yi = C − E[v1 | v1 ∈ [x_{k−i}, x_{k−i+1}]]

2. Allocation: build the bridge iff b1 + b2 ≥ k − 1.
   Strategies: threshold strategies based on the vectors x⃗, y⃗ where for every 1 ≤ i ≤ k−1,
   xi = C − E[v2 | v2 ∈ [y_{k−i−1}, y_{k−i}]]
   yi = C − E[v1 | v1 ∈ [x_{k−i−1}, x_{k−i}]]

Recall that we define the optimal mechanisms by their allocation scheme and by the optimal strategies for the players.
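For types uniform on [0, 1], E[v | v ∈ [a, b]] = (a + b)/2, so the mutually-maximizer conditions of mechanism 1 in Corollary 3 become a fixed-point system that simple iteration can solve. The sketch below is our own illustration, not the paper's algorithm; it assumes a symmetric solution (x = y) and the endpoint convention x0 = 0, xk = 1:

```python
def uniform_thresholds_mech1(C, k, iters=200):
    """Symmetric mutually-maximizer thresholds for 'build iff b1+b2 >= k'
    with types uniform on [0, 1]:  x_i = C - (x_{k-i} + x_{k-i+1}) / 2,
    with x_0 = 0 and x_k = 1 held fixed."""
    x = [i / k for i in range(k + 1)]          # initial guess, endpoints fixed
    for _ in range(iters):
        new = x[:]
        for i in range(1, k):
            # E[v | v in [a, b]] = (a + b) / 2 under the uniform distribution;
            # clamp to [0, 1] to keep the thresholds inside the type support.
            new[i] = min(1.0, max(0.0, C - (x[k - i] + x[k - i + 1]) / 2))
        x = sorted(new)
    return x

# k = 2, C = 0.8: the interior threshold solves x = C - (x + 1)/2, i.e. x = (2C - 1)/3 = 0.2.
x = uniform_thresholds_mech1(0.8, 2)
```

The iteration is a contraction here (the update halves the error each round), so 200 rounds are far more than enough.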
It is well known that the allocation scheme in monotone mechanisms uniquely defines the payments that ensure incentive compatibility. In public-good games, these payments satisfy the rule that a player pays his lowest value for which the bridge is built, when the action of the other player is fixed. Therefore, the payments for players 1 and 2 reporting the actions b1 and b2 are as follows: in mechanism 1 from Corollary 3, p1 = x_{b2} and p2 = y_{b1}; in mechanism 2 from Corollary 3, p1 = x_{b2−1} and p2 = y_{b1−1}.

We now show a more specific example that assumes uniform distributions. The example shows how the optimal mechanism is determined by the cost C: for low costs, a mechanism of type 1 is optimal, and for high costs the optimal mechanism is of type 2. An additional interesting feature of the optimal mechanisms in the example is that they are symmetric with respect to the players. This is in contrast to the optimal mechanisms in the auction model [5], which are asymmetric (even when the players' values are drawn from identical distributions).

Footnote 6: More precisely, the condition for non-degeneracy when B ≻1 A and B ≻2 A is that sign(g(θ̄1, θ̲2, A) − g(θ̄1, θ̲2, B)) = sign(g(θ̲1, θ̄2, A) − g(θ̲1, θ̄2, B)) (when sign(0) is considered both negative and positive).
Footnote 7: We denote x0 = y0 = 0 and xk = yk = 1.

  C ≤ 1:         b2 = 0                    b2 = 1
  b1 = 0    No  (p1 = p2 = 0)         No  (p1 = p2 = 0)
  b1 = 1    No  (p1 = p2 = 0)         Yes (p1 = p2 = 2C/3 − 1/3)

  C ≥ 1:         b2 = 0                    b2 = 1
  b1 = 0    No  (p1 = p2 = 0)         Yes (p1 = 0, p2 = 2C/3)
  b1 = 1    Yes (p1 = 2C/3, p2 = 0)   Yes (p1 = p2 = 0)

Figure 2: Optimal mechanisms in a 2-player, 2-alternative, 2-action public-goods game, when the types are uniformly distributed in [0, 1]. The first mechanism is optimal when C ≤ 1 and the second is optimal when C ≥ 1.

Example 1. Suppose that the types of both players are uniformly distributed on [0, 1].
Figure 2 illustrates the optimal mechanisms for k = 2, and shows how both the allocation scheme and the payments depend on the construction cost C. The welfare-maximizing mechanisms are:

• If the cost of building is smaller than 1:
  Allocation: build iff b1 + b2 ≥ k.
  Strategies: the thresholds of both players are, for i ∈ {1, . . . , k − 1},
  xi = 2(k − i)C/(2k − 1) − (2k − 4i + 1)/(2k − 1).

• If the cost of building is at least 1:
  Allocation: build iff b1 + b2 ≥ k − 1.
  Strategies: the thresholds of both players are, for i ∈ {1, . . . , k − 1},
  xi = 2iC/(2k − 1).

(For k = 2 these thresholds match the payments in Figure 2: x1 = (2C − 1)/3 in the first mechanism and x1 = 2C/3 in the second.)

5.2 Application 2: Signaling
We now study a signaling model in labor markets. In this model, the type of each worker, θi ∈ [θ̲, θ̄], describes the worker's productivity level. The firm wants to make its hiring decisions according to a decision function f(θ⃗). For example, the firm may want to hire the most productive worker (as in the auction model), or hire a group of workers only if the sum of their productivities is greater than some threshold (similar to the public-good model). However, the worker's productivity is invisible to the firm; the firm only observes the worker's education level e, which should convey signals about her productivity level. Note that the assumption here is that acquiring education, at any level, does not affect the productivity of the worker, but only signals the worker's skills.

A main component of this model is the fact that the more productive a worker is, the easier it is for her to acquire high-level education. In addition, the cost of acquiring education increases with the education level. More formally, a continuous function C(e, θ) describes the cost to a worker of acquiring each education level as a function of her productivity.
The standard assumptions about the cost function are: ∂C/∂e > 0, ∂C/∂θ < 0, and ∂²C/∂e∂θ < 0, where the last requirement is exactly equivalent to the single-crossing property (when C is differentiable in both variables). The utility of a worker is determined by the education level she chooses and the wage w(e) attached to this education level, that is, ui(e, θi) = w(e) − C(e, θi).

An action for a worker in this game is the education level she chooses to acquire. In standard models, this action space is continuous, and then a fully separating equilibrium exists (under the single-crossing conditions on the cost function). That is, there exists an equilibrium in which every type is mapped to a different education level; thus, the firm can induce the exact productivity levels of the workers through this signaling mechanism. However, it is hard to imagine a world with a continuum of education levels. It is usually the case that there are only several discrete education levels (e.g., BSc, MSc, PhD).

With k education levels, the firm may not be able to exactly follow the decision function f. For achieving the best result with k actions, the firm may want the workers to play according to specific threshold strategies. It turns out that the standard condition, the single-crossing condition on the cost function, suffices to ensure that these threshold strategies will be dominant for the players. We can now apply Theorem 2, and show that if the decision function f of the firm is multilinear (i.e., the decisions are made to maximize a set of multilinear functions), then the firm can design the education system such that the expected loss will be O(1/k²), with a dominant-strategy equilibrium.
Note that while in the classic example of the job market it is not reasonable for each firm to select the education levels, in other reasonable applications the social planner may be able to determine the thresholds, e.g., by fixing the levels of qualifying exams or other means for the players to demonstrate their skills.

Corollary 4. Consider a multilinear decision function f, and a single-crossing cost function for the players. With k education levels, the firm can implement in dominant strategies a decision function that incurs a loss of O(1/k²) compared with the decision function f.

5.3 Application 3: Routing
In our last example, we show the applicability of our results to routing in lossy networks. In such systems, a sender needs to decide through which network to transmit his message. It is natural to assume that the agents (i.e., links) may not be able to report their accurate probabilities of success, but only, e.g., whether these are low, intermediate, or high. In this example, we focus on parallel-path networks. Let N1, N2 denote two networks, where each network is composed of multiple parallel paths with variable lengths from a given source to a given sink (an example of such a network appears in Figure 3). The edges in these networks are controlled by different selfish agents, and each edge appears in only one of the networks. Suppose that the sender, who wishes to send a message from the source to the sink, knows the topology of each network, but the probability of success on each link, pi, is the link's private information. The problem of the sender is to decide whether to send the message through the network N1 or through an alternative network N2. Obviously, the sender wishes to send the message through N1 only if the total probability of success in N1 is greater than the success probability in N2.
Let f^N(p⃗) denote the probability of success in network N with a success-probability vector p⃗. The social-choice function in this example is thus: c(p⃗) ∈ argmax_{N1,N2} { f^{N1}(p⃗), f^{N2}(p⃗) }.

Figure 3: An example of a parallel-path network from source s to sink t, where each link i has a probability pi of transmission success. We show that the overall probability of success in such networks is multilinear in the pi's, and thus the optimal k-action social-choice function is dominant-strategy implementable.

In this example, we assume that every agent has a single-crossing valuation function over the alternatives. That is, each player wishes that the message be sent through his network, and his benefit is positively correlated with his secret data (e.g., the valuation of player i may be exactly pi). We would like to emphasize that the social planner in this example (the sender) does not aim to maximize the social welfare. That is, the social value is neither the sum of the players' types nor any weighted sum of the types (affine maximizer).

The success probability of sending a message through a parallel-path network is multilinear, since it can be expressed by the following multilinear formula (where P denotes the set of all paths between the source and the sink):

  1 − Π_{P∈P} ( 1 − Π_{j∈P} pj )   (3)

For example, in the network presented in Figure 3, the probability of success is given by

  f(p⃗) = 1 − (1 − p1p2) · (1 − p3) · (1 − p4p5).

Thus, if all the candidate networks are parallel-path networks, the social-value function is multilinear, and we can apply Theorem 1 to obtain the following corollary. Note that for every link i, the partial derivative in pi of the success probability written in Equation 3 is positive. In all the other networks, which do not contain link i, the partial derivative is clearly zero.
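Equation 3 translates directly into code. A small sketch (the function name is ours) that also reproduces the Figure 3 example:

```python
def success_probability(paths, p):
    """Equation 3: the message gets through iff at least one path works,
    so Pr[success] = 1 - prod over paths P of (1 - prod of p_j, j in P).
    `paths` lists each source-sink path as a list of edge indices."""
    all_fail = 1.0
    for P in paths:
        path_ok = 1.0
        for j in P:
            path_ok *= p[j]        # a path works iff all its edges work
        all_fail *= 1.0 - path_ok  # paths fail independently (disjoint edges)
    return 1.0 - all_fail

# Figure 3's network: paths {1,2}, {3}, {4,5}, written 0-indexed as 0..4.
f = success_probability([[0, 1], [2], [3, 4]], [0.5] * 5)
```

With all pi = 0.5 this gives 1 − (1 − 0.25)(1 − 0.5)(1 − 0.25) = 0.71875, and since each pj appears at most once per product, the expression is multilinear in the pi's.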
Therefore, the social-value function is single-crossing and our general results can be applied.\nCorollary 5. For any social-choice function that maximizes the success probability over parallel-path networks, the informationally optimal k-action social-choice function is implementable (for any k).\nAcknowledgment. We thank Noam Nisan for helpful discussions and anonymous referees for helpful comments. This work is supported by grants from the Israeli Academy of Sciences and the USA-Israel Binational Science Foundation. The work of the second author is also supported by the Lady Davis Trust Fellowship.\n6. REFERENCES\n[1] S. Athey. Single crossing properties and the existence of pure strategy equilibria in games of incomplete information. Econometrica, 69(4):861-89, 2001.\n[2] M. Babaioff and L. Blumrosen. Computationally-feasible auctions for convex bundles. In APPROX'04, pages 27-38, 2004.\n[3] M. Babaioff, R. Lavi, and E. Pavlov. Mechanism design for single-value domains. In AAAI'05, pages 241-247, 2005.\n[4] L. Blumrosen and N. Nisan. Auctions with severely bounded communications. In 43rd Annual Symposium on Foundations of Computer Science (FOCS 2002), 2002.\n[5] L. Blumrosen, N. Nisan, and I. Segal. Auctions with severely bounded communications. Working paper, The Hebrew University. Preliminary versions appeared in FOCS 2002 and ESA'03, 2003.\n[6] L. Blumrosen, N. Nisan, and I. Segal. Multi-player and multi-round auctions with severely bounded communication. In ESA 2003, 2003.\n[7] M. Chwe. The discrete bid first price auction. Economics Letters, 31:303-306, 1989.\n[8] E. David, A. Rogers, J. Schiff, S. Kraus, and N. Jennings. Optimal design of English auctions with discrete bid levels. In EC'05.\n[9] R. M. Harstad and M. H. Rothkopf. On the role of discrete bid levels in oral auctions. European Journal of Operational Research, pages 572-581, 1994.\n[10] B. E. Hermalin. Lecture notes in economics, 2005.\n[11] A.
Kress and C. Boutilier. A study of limited precision, incremental elicitation in auctions. In AAMAS'04.\n[12] K. Larson and T. Sandholm. Costly valuation computation in auctions. In 8th Conference on Theoretical Aspects of Knowledge and Rationality, 2001.\n[13] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2003.\n[14] P. McAfee. Coarse matching. Econometrica, 70(5):2025-2034, 2002.\n[15] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58-73, 1981.\n[16] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In ACM Conference on Electronic Commerce, 1999.\n[17] T. Sandholm and A. Gilpin. Sequences of take-it-or-leave-it offers: Near-optimal auctions without full-valuation revelation. In AMEC-V, 2003.\n[18] M. A. Satterthwaite and S. R. Williams. The optimality of a simple market mechanism. Econometrica, 70(5):1841-1863, 2002.\n[19] R. Wilson. Efficient and competitive rationing. Econometrica, 57:1-40, 1989.", "keywords": "implementation;multilinear function;single-crossing condition;communication complexity;probability of success;action-bounded mechanism;social-choice function;bounded action space;decision function;dominant strategy;optimal mechanism;mechanism design;success probability"}
-{"name": "test_J-31", "title": "Computing the Optimal Strategy to Commit to\u2217", "abstract": "In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we study how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. We give both positive results (efficient algorithms) and negative results (NP-hardness results).", "fulltext": "1. INTRODUCTION\nIn multiagent systems with self-interested agents\n(including most economic settings), the optimal action for one agent\nto take depends on the actions that the other agents take.\nTo analyze how an agent should behave in such settings, the\ntools of game theory need to be applied. Typically, when a\nstrategic setting is modeled in the framework of game\ntheory, it is assumed that players choose their strategies\nsimultaneously. 
This is especially true when the setting is\nmodeled as a normal-form game, which only specifies each\nagent\"s utility as a function of the vector of strategies that\nthe agents choose, and does not provide any information on\nthe order in which agents make their decisions and what\nthe agents observe about earlier decisions by other agents.\nGiven that the game is modeled in normal form, it is\ntypically analyzed using the concept of Nash equilibrium. A\nNash equilibrium specifies a strategy for each player, such\nthat no player has an incentive to individually deviate from\nthis profile of strategies. (Typically, the strategies are\nallowed to be mixed, that is, probability distributions over the\noriginal (pure) strategies.) A (mixed-strategy) Nash\nequilibrium is guaranteed to exist in finite games [18], but one\nproblem is that there may be multiple Nash equilibria. This\nleads to the equilibrium selection problem of how an agent\ncan know which strategy to play if it does not know which\nequilibrium is to be played.\nWhen the setting is modeled as an extensive-form game,\nit is possible to specify that some players receive some\ninformation about actions taken by others earlier in the game\nbefore deciding on their action. Nevertheless, in general,\nthe players do not know everything that happened earlier\nin the game. Because of this, these games are typically still\nanalyzed using an equilibrium concept, where one specifies\na mixed strategy for each player, and requires that each\nplayer\"s strategy is a best response to the others\" strategies.\n(Typically an additional constraint on the strategies is now\nimposed to ensure that players do not play in a way that is\nirrational with respect to the information that they have\nreceived so far. This leads to refinements of Nash equilibrium\nsuch as subgame perfect and sequential equilibrium.)\nHowever, in many real-world settings, strategies are not\nselected in such a simultaneous manner. 
Oftentimes, one player (the leader) is able to commit to a strategy before another player (the follower). This can be due to a variety of reasons. For example, one of the players may arrive at the site at which the game is to be played before another agent (e.g., in economic settings, one player may enter a market earlier and commit to a way of doing business). Such commitment power has a profound impact on how the game should be played. For example, the leader may be best off playing a strategy that is dominated in the normal-form representation of the game. Perhaps the earliest and best-known example of the effect of commitment is that by von Stackelberg [25], who showed that, in Cournot's duopoly model [5], if one firm is able to commit to a production quantity first, that firm will do much better than in the simultaneous-move (Nash) solution. In general, if commitment to mixed strategies is possible, then (under minor assumptions) it never hurts, and often helps, to commit to a strategy [26]. Being forced to commit to a pure strategy sometimes helps, and sometimes hurts (for example, committing to a pure strategy in rock-paper-scissors before the other player's decision will naturally result in a loss). In this paper, we will assume commitment is always forced; if it is not, the player who has the choice of whether to commit can simply compare the commitment outcome to the non-commitment (simultaneous-move) outcome.\nModels of leadership are especially important in settings with multiple self-interested software agents. Once the code for an agent (or for a team of agents) is finalized and the agent is deployed, the agent is committed to playing the (possibly randomized) strategy that the code prescribes. Thus, as long as one can credibly show that one cannot change the code later, the code serves as a commitment device.
This\nholds true for recreational tournaments among agents (e.g.,\npoker tournaments, RoboSoccer), and for industrial\napplications such as sensor webs.\nFinally, there is also an implicit leadership situation in the\nfield of mechanism design, in which one player (the designer)\ngets to choose the rules of the game that the remaining\nplayers then play. Mechanism design is an extremely important\ntopic to the EC community: the papers published on\nmechanism design in recent EC conferences are too numerous\nto cite. Indeed, the mechanism designer may benefit from\ncommitting to a choice that, if the (remaining) agents\"\nactions were fixed, would be suboptimal. For example, in a\n(first-price) auction, the seller may wish to set a positive\n(artificial) reserve price for the item, below which the item\nwill not be sold-even if the seller values the item at 0. In\nhindsight (after the bids have come in), this (na\u00a8\u0131vely)\nappears suboptimal: if a bid exceeding the reserve price came\nin, the reserve price had no effect, and if no such bid came\nin, the seller would have been better off accepting a lower\nbid. Of course, the reason for setting the reserve price is\nthat it incentivizes the bidders to bid higher, and because\nof this, setting artificial reserve prices can actually increase\nexpected revenue to the seller.\nA significant amount of research has recently been\ndevoted to the computation of solutions according to various\nsolution concepts for settings in which the agents choose\ntheir strategies simultaneously, such as dominance [7, 11, 3]\nand (especially) Nash equilibrium [8, 21, 16, 15, 2, 22, 23,\n4]. However, the computation of the optimal strategy to\ncommit to in a leadership situation has gone ignored.\nTheoretically, leadership situations can simply be thought of as\nan extensive-form game in which one player chooses a\nstrategy (for the original game) first. 
The number of strategies\nin this extensive-form game, however, can be exceedingly\nlarge. For example, if the leader is able to commit to a\nmixed strategy in the original game, then every one of the\n(continuum of) mixed strategies constitutes a pure strategy\nin the extensive-form representation of the leadership\nsituation. (We note that a commitment to a distribution is not\nthe same as a distribution over commitments.) Moreover,\nif the original game is itself an extensive-form game, the\nnumber of strategies in the extensive-form representation of\nthe leadership situation (which is a different extensive-form\ngame) becomes even larger. Because of this, it is usually\nnot computationally feasible to simply transform the\noriginal game into the extensive-form representation of the\nleadership situation; instead, we have to analyze the game in its\noriginal representation.\nIn this paper, we study how to compute the optimal\nstrategy to commit to, both in normal-form games (Section 2)\nand in Bayesian games, which are a special case of\nextensiveform games (Section 3).\n2. NORMAL-FORM GAMES\nIn this section, we study how to compute the optimal\nstrategy to commit to for games represented in normal form.\n2.1 Definitions\nIn a normal-form game, every player i \u2208 {1, . . . , n} has a\nset of pure strategies (or actions) Si, and a utility function\nui : S1\u00d7S2\u00d7. . .\u00d7Sn \u2192 R that maps every outcome (a vector\nconsisting of a pure strategy for every player, also known\nas a profile of pure strategies) to a real number. To ease\nnotation, in the case of two players, we will refer to player\n1\"s pure strategy set as S, and player 2\"s pure strategy set\nas T. 
Such games can be represented in (bi-)matrix form, in which the rows correspond to player 1's pure strategies, the columns correspond to player 2's pure strategies, and the entries of the matrix give the row and column player's utilities (in that order) for the corresponding outcome of the game. In the case of three players, we will use R, S, and T for player 1, 2, and 3's pure strategies, respectively. A mixed strategy for a player is a probability distribution over that player's pure strategies. In the case of two-player games, we will refer to player 1 as the leader and player 2 as the follower.\nBefore defining optimal leadership strategies, consider the following game, which illustrates the effect of the leader's ability to commit.\n2, 1   4, 0\n1, 0   3, 1\nIn this normal-form representation, the bottom strategy for the row player is strictly dominated by the top strategy. Nevertheless, if the row player has the ability to commit to a pure strategy before the column player chooses his strategy, the row player should commit to the bottom strategy: doing so will make the column player prefer to play the right strategy, leading to a utility of 3 for the row player. By contrast, if the row player were to commit to the top strategy, the column player would prefer to play the left strategy, leading to a utility of only 2 for the row player. If the row player is able to commit to a mixed strategy, then she can get an even greater (expected) utility: if the row player commits to placing probability p > 1/2 on the bottom strategy, then the column player will still prefer to play the right strategy, and the row player's expected utility will be 3p + 4(1 − p) = 4 − p ≥ 3. If the row player plays each strategy with probability exactly 1/2, the column player is indifferent between the strategies.
In such cases, we will\nassume that the column player will choose the strategy that\nmaximizes the row player\"s utility (in this case, the right\nstrategy). Hence, the optimal mixed strategy to commit to\nfor the row player is p = 1/2. There are a few good\nreasons for this assumption. If we were to assume the opposite,\nthen there would not exist an optimal strategy for the row\nplayer in the example game: the row player would play the\nbottom strategy with probability p = 1/2 + with > 0,\nand the smaller , the better the utility for the row player.\nBy contrast, if we assume that the follower always breaks\nties in the leader\"s favor, then an optimal mixed strategy\nfor the leader always exists, and this corresponds to a\nsubgame perfect equilibrium of the extensive-form\nrepresentation of the leadership situation. In any case, this is a\nstandard assumption for such models (e.g. [20]), although some\nwork has investigated what can happen in the other\nsubgame perfect equilibria [26]. (For generic two-player games,\nthe leader\"s subgame-perfect equilibrium payoff is unique.)\nAlso, the same assumption is typically used in mechanism\ndesign, in that it is assumed that if an agent is indifferent\nbetween revealing his preferences truthfully and revealing them\nfalsely, he will report them truthfully. Given this\nassumption, we can safely refer to optimal leadership strategies\nrather than having to use some equilibrium notion.\nHence, for the purposes of this paper, an optimal strategy\nto commit to in a 2-player game is a strategy s \u2208 S that\nmaximizes maxt\u2208BR(s) ul(s, t), where BR(s) =\narg maxt\u2208T uf (s, t). (ul and uf are the leader and follower\"s\nutility functions, respectively.) We can have S = S for the\ncase of commitment to pure strategies, or S = \u2206(S), the\nset of probability distributions over S, for the case of\ncommitment to mixed strategies. (We note that replacing T\nby \u2206(T) makes no difference in this definition.) 
For games\nwith more than two players, in which the players commit\nto their strategies in sequence, we define optimal strategies\nto commit to recursively. After the leader commits to a\nstrategy, the game to be played by the remaining agents is\nitself a (smaller) leadership game. Thus, we define an\noptimal strategy to commit to as a strategy that maximizes\nthe leader\"s utility, assuming that the play of the remaining\nagents is itself optimal under this definition, and maximizes\nthe leader\"s utility among all optimal ways to play the\nremaining game. Again, commitment to mixed strategies may\nor may not be a possibility for every player (although for the\nlast player it does not matter if we allow for commitment to\nmixed strategies).\n2.2 Commitment to pure strategies\nWe first study how to compute the optimal pure strategy\nto commit to. This is relatively simple, because the number\nof strategies to commit to is not very large. (In the following,\n#outcomes is the number of complete strategy profiles.)\nTheorem 1. Under commitment to pure strategies, the\nset of all optimal strategy profiles in a normal-form game\ncan be found in O(#players \u00b7 #outcomes) time.\nProof. Each pure strategy that the first player may\ncommit to will induce a subgame for the remaining players. We\ncan solve each such subgame recursively to find all of its\noptimal strategy profiles; each of these will give the\noriginal leader some utility. Those that give the leader maximal\nutility correspond exactly to the optimal strategy profiles of\nthe original game.\nWe now present the algorithm formally. Let Su(G, s1) be\nthe subgame that results after the first (remaining) player\nin G plays s1 \u2208 SG\n1 . A game with 0 players is simply an\noutcome of the game. The function Append(s, O) appends\nthe strategy s to each of the vectors of strategies in the set\nO. Let e be the empty vector with no elements. 
In a slight abuse of notation, we will write u1^G(C) when all strategy profiles in the set C give player 1 the same utility in the game G. (Here, player 1 is the first remaining player in the subgame G, not necessarily player 1 in the original game.) We note that arg max is set-valued. Then, the following algorithm computes all optimal strategy profiles:\nAlgorithm Solve(G)\n  if G has 0 players\n    return {e}\n  C ← ∅\n  for all s1 ∈ S1^G {\n    O ← Solve(Su(G, s1))\n    O' ← arg max_{o ∈ O} u1^G(s1, o)\n    if C = ∅ or u1^G(s1, O') = u1^G(C)\n      C ← C ∪ Append(s1, O')\n    if u1^G(s1, O') > u1^G(C)\n      C ← Append(s1, O')\n  }\n  return C\nEvery outcome is (potentially) examined by every player, which leads to the given runtime bound.\nAs an example of how the algorithm works, consider the following 3-player game, in which the first player chooses the left or right matrix, the second player chooses a row, and the third player chooses a column.\n0,1,1  1,1,0  1,0,1     3,3,0  0,2,0  3,0,1\n2,1,1  3,0,1  1,1,1     4,4,2  0,0,2  0,0,0\n0,0,1  0,0,0  3,3,0     0,5,1  0,0,0  3,0,0\nFirst we eliminate the outcomes that do not correspond to best responses for the third player (a dash marks a removed entry):\n0,1,1  -      1,0,1     -      -      3,0,1\n2,1,1  3,0,1  1,1,1     4,4,2  0,0,2  -\n0,0,1  -      -         0,5,1  -      -\nNext, we remove the entries in which the third player does not break ties in favor of the second player, as well as entries that do not correspond to best responses for the second player.\n0,1,1  -      -         -      -      -\n2,1,1  -      1,1,1     -      -      -\n-      -      -         0,5,1  -      -\nFinally, we remove the entries in which the second and third players do not break ties in favor of the first player, as well as entries that do not correspond to best responses for the first player.\n2,1,1\nHence, in optimal play, the first player chooses the left matrix, the second player chooses the middle row, and the third player chooses the left column.
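The recursion above is short enough to run directly. The following is an executable sketch in our own encoding (not the paper's): a game is either an outcome tuple of utilities, or a list of subgames indexed by the current player's actions; ties are passed upward so that earlier movers break them, matching the algorithm.

```python
def solve(game, player=0):
    """All optimal strategy profiles under sequential commitment to pure
    strategies, with ties broken in favor of earlier movers (Theorem 1)."""
    if isinstance(game, tuple):                  # 0 players left: an outcome
        return [((), game)]
    best = []
    for action, sub in enumerate(game):
        options = solve(sub, player + 1)
        top = max(u[player] for _, u in options)
        options = [((action,) + prof, u)
                   for prof, u in options if u[player] == top]
        if not best or top > best[0][1][player]:
            best = options
        elif top == best[0][1][player]:
            best.extend(options)
    return best

# The 3-player example above: player 1 picks a matrix, player 2 a row,
# player 3 a column; entries are (u1, u2, u3).
left = [[(0, 1, 1), (1, 1, 0), (1, 0, 1)],
        [(2, 1, 1), (3, 0, 1), (1, 1, 1)],
        [(0, 0, 1), (0, 0, 0), (3, 3, 0)]]
right = [[(3, 3, 0), (0, 2, 0), (3, 0, 1)],
         [(4, 4, 2), (0, 0, 2), (0, 0, 0)],
         [(0, 5, 1), (0, 0, 0), (3, 0, 0)]]
print(solve([left, right]))
# [((0, 1, 0), (2, 1, 1))]: left matrix, middle row, left column.
```

Each outcome tuple is inspected once per level of the recursion, giving the O(#players · #outcomes) bound of Theorem 1.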
(We note that this\noutcome is Pareto-dominated by (Right, Middle, Left).)\nFor general normal-form games, each player\"s utility for\neach of the outcomes has to be explicitly represented in the\ninput, so that the input size is itself \u2126(#players \u00b7 #outcomes).\nTherefore, the algorithm is in fact a linear-time algorithm.\n2.3 Commitment to mixed strategies\nIn the special case of two-player zero-sum games,\ncomputing an optimal mixed strategy for the leader to commit to\nis equivalent to computing a minimax strategy, which\nminimizes the maximum expected utility that the opponent can\nobtain. Minimax strategies constitute the only natural\nsolution concept for two-player zero-sum games: von Neumann\"s\nMinimax Theorem [24] states that in two-player zero-sum\ngames, it does not matter (in terms of the players\" utilities)\nwhich player gets to commit to a mixed strategy first, and a\nprofile of mixed strategies is a Nash equilibrium if and only\nif both strategies are minimax strategies. It is well-known\nthat a minimax strategy can be found in polynomial time,\nusing linear programming [17]. Our first result in this\nsection generalizes this result, showing that an optimal mixed\nstrategy for the leader to commit to can be efficiently\ncomputed in general-sum two-player games, again using linear\nprogramming.\nTheorem 2. In 2-player normal-form games, an optimal\nmixed strategy to commit to can be found in polynomial time\nusing linear programming.\nProof. For every pure follower strategy t, we compute a\nmixed strategy for the leader such that 1) playing t is a best\nresponse for the follower, and 2) under this constraint, the\nmixed strategy maximizes the leader\"s utility. 
Such a mixed strategy can be computed using the following simple linear program:\nmaximize Σ_{s∈S} ps · ul(s, t)\nsubject to\nfor all t' ∈ T: Σ_{s∈S} ps · uf(s, t) ≥ Σ_{s∈S} ps · uf(s, t')\nΣ_{s∈S} ps = 1\nps ≥ 0 for all s ∈ S\nWe note that this program may be infeasible for some follower strategies t, for example, if t is a strictly dominated strategy. Nevertheless, the program must be feasible for at least some follower strategies; among these follower strategies, choose a strategy t* that maximizes the linear program's solution value. Then, if the leader chooses as her mixed strategy the optimal settings of the variables ps for the linear program for t*, and the follower plays t*, this constitutes an optimal strategy profile.\nIn the following result, we show that we cannot expect to solve the problem more efficiently than linear programming, because we can reduce any linear program with a probability constraint on its variables to a problem of computing the optimal mixed strategy to commit to in a 2-player normal-form game.\nTheorem 3. Any linear program whose variables xi (with xi ∈ R≥0) must satisfy Σ_i xi = 1 can be modeled as a problem of computing the optimal mixed strategy to commit to in a 2-player normal-form game.\nProof. Let the leader have a pure strategy i for every variable xi. Let the column player have one pure strategy j for every constraint in the linear program (other than Σ_i xi = 1), and a single additional pure strategy 0. Let the utility functions be as follows. Writing the objective of the linear program as maximize Σ_i ci xi, for any i, let ul(i, 0) = ci and uf(i, 0) = 0.
Writing the jth constraint of the linear program (not including Σ_i xi = 1) as Σ_i aij xi ≤ bj, for any i and j > 0, let ul(i, j) = (min_i ci) − 1 and uf(i, j) = aij − bj.\nFor example, consider the following linear program.\nmaximize 2x1 + x2\nsubject to\nx1 + x2 = 1\n5x1 + 2x2 ≤ 3\n7x1 − 2x2 ≤ 2\nThe optimal solution to this program is x1 = 1/3, x2 = 2/3. Our reduction transforms this program into the following leader-follower game (where the leader is the row player).\n2, 0   0, 2   0, 5\n1, 0   0, -1  0, -4\nIndeed, the optimal strategy for the leader is to play the top strategy with probability 1/3 and the bottom strategy with probability 2/3. We now show that the reduction works in general.\nClearly, the leader wants to incentivize the follower to play 0, because the utility that the leader gets when the follower plays 0 is always greater than when the follower does not play 0. In order for the follower not to prefer playing j > 0 rather than 0, it must be the case that Σ_i pl(i)(aij − bj) ≤ 0, or equivalently Σ_i pl(i) aij ≤ bj. Hence the leader will get a utility of at least min_i ci if and only if there is a feasible solution to the constraints. Given that the pl(i) incentivize the follower to play 0, the leader attempts to maximize Σ_i pl(i) ci.
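The example reduction above can be sanity-checked numerically. The sketch below (our own encoding of the reduced game, not from the paper) sweeps the leader's mixed strategies and lets the follower best-respond, breaking ties in the leader's favor; exact rationals via `fractions` keep the tie at x1 = 1/3 exact.

```python
from fractions import Fraction

# The reduced game: rows = leader strategies (one per LP variable),
# columns = follower strategies (0, then one per LP constraint).
u_lead = [[2, 0, 0],
          [1, 0, 0]]
u_fol = [[0, 2, 5],
         [0, -1, -4]]

def leader_value(x1):
    """Leader plays the top row with probability x1; the follower
    best-responds, breaking ties in the leader's favor."""
    x2 = 1 - x1
    fol = [x1 * u_fol[0][j] + x2 * u_fol[1][j] for j in range(3)]
    lead = [x1 * u_lead[0][j] + x2 * u_lead[1][j] for j in range(3)]
    best = max(fol)
    return max(lead[j] for j in range(3) if fol[j] == best)

grid = [Fraction(k, 300) for k in range(301)]       # exact arithmetic
best_x = max(grid, key=leader_value)
# best_x == Fraction(1, 3) with leader_value(best_x) == Fraction(4, 3),
# matching the LP optimum x1 = 1/3, x2 = 2/3 with objective 4/3.
```

Past x1 = 1/3 the first constraint column becomes the follower's strict best response and the leader's value collapses to 0, mirroring infeasibility in the original program.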
Thus the leader must solve the original linear program.\nAs an alternative proof of Theorem 3, one may observe that it is known that finding a minimax strategy in a zero-sum game is as hard as the linear programming problem [6], and as we pointed out at the beginning of this section, computing a minimax strategy in a zero-sum game is a special case of the problem of computing an optimal mixed strategy to commit to.\nThis polynomial-time solvability of the problem of computing an optimal mixed strategy to commit to in two-player normal-form games contrasts with the unknown complexity of computing a Nash equilibrium in such games [21], as well as with the NP-hardness of finding a Nash equilibrium with maximum utility for a given player in such games [8, 2].\nUnfortunately, this result does not generalize to more than two players: here, the problem becomes NP-hard. To show this, we reduce from the VERTEX-COVER problem.\nDefinition 1. In VERTEX-COVER, we are given a graph G = (V, E) and an integer K. We are asked whether there exists a subset of the vertices S ⊆ V, with |S| = K, such that every edge e ∈ E has at least one of its endpoints in S. BALANCED-VERTEX-COVER is the special case of VERTEX-COVER in which K = |V|/2.\nVERTEX-COVER is NP-complete [9]. The following lemma shows that the hardness remains if we require K = |V|/2. (Similar results have been shown for other NP-complete problems.)\nLemma 1. BALANCED-VERTEX-COVER is NP-complete.\nProof. Membership in NP follows from the fact that the problem is a special case of VERTEX-COVER, which is in NP. To show NP-hardness, we reduce an arbitrary VERTEX-COVER instance to a BALANCED-VERTEX-COVER instance, as follows.
If, for the VERTEX-COVER instance, K > |V|/2, then we simply add isolated vertices that are disjoint from the rest of the graph, until K = |V|/2. If K < |V|/2, we add isolated triangles (that is, the complete graph on three vertices) to the graph, increasing K by 2 every time, until K = |V|/2.\nTheorem 4. In 3-player normal-form games, finding an optimal mixed strategy to commit to is NP-hard.\nProof. We reduce an arbitrary BALANCED-VERTEX-COVER instance to the following 3-player normal-form game. For every vertex v, each of the three players has a pure strategy corresponding to that vertex (rv, sv, tv, respectively). In addition, for every edge e, the third player has a pure strategy te; and finally, the third player has one additional pure strategy t0. The utilities are as follows:\n• for all r ∈ R, s ∈ S, u1(r, s, t0) = u2(r, s, t0) = 1;\n• for all r ∈ R, s ∈ S, t ∈ T − {t0}, u1(r, s, t) = u2(r, s, t) = 0;\n• for all v ∈ V, s ∈ S, u3(rv, s, tv) = 0;\n• for all v ∈ V, r ∈ R, u3(r, sv, tv) = 0;\n• for all v ∈ V, for all r ∈ R − {rv}, s ∈ S − {sv}, u3(r, s, tv) = |V|/(|V|−2);\n• for all e ∈ E, s ∈ S, for both v ∈ e, u3(rv, s, te) = 0;\n• for all e ∈ E, s ∈ S, for all v ∉ e, u3(rv, s, te) = |V|/(|V|−2);\n• for all r ∈ R, s ∈ S, u3(r, s, t0) = 1.\nWe note that players 1 and 2 have the same utility function. We claim that there is an optimal strategy profile in which players 1 and 2 both obtain 1 (their maximum utility) if and only if there is a solution to the BALANCED-VERTEX-COVER problem. (Otherwise, these players will both obtain 0.)\nFirst, suppose there exists a solution to the BALANCED-VERTEX-COVER problem.
Then, let player 1 play every rv such that v is in the cover with probability 2/|V|, and let player 2 play every sv such that v is not in the cover with probability 2/|V|. Then, for player 3, the expected utility of playing tv (for any v) is (1 − 2/|V|) · |V|/(|V|−2) = 1, because there is a chance of 2/|V| that rv or sv is played. Additionally, the expected utility of playing te (for any e) is at most (1 − 2/|V|) · |V|/(|V|−2) = 1, because there is a chance of at least 2/|V| that some rv with v ∈ e is played (because player 1 is randomizing over the pure strategies corresponding to the cover). It follows that playing t0 is a best response for player 3, giving players 1 and 2 a utility of 1.\nNow, suppose that players 1 and 2 obtain 1 in optimal play. Then, it must be the case that player 3 plays t0. Hence, for every v ∈ V, there must be a probability of at least 2/|V| that either rv or sv is played, for otherwise player 3 would be better off playing tv. Because players 1 and 2 have only a total probability of 2 to distribute, it must be the case that for each v, either rv or sv is played with probability 2/|V|, and the other is played with probability 0. (It is not possible for both to have nonzero probability, because then there would be some probability that both are played simultaneously (correlation is not possible), hence the total probability of at least one being played could not be high enough for all vertices.) Thus, for exactly half the v ∈ V, player 1 places probability 2/|V| on rv. Moreover, for every e ∈ E, there must be a probability of at least 2/|V| that some rv with v ∈ e is played, for otherwise player 3 would be better off playing te. Thus, the v ∈ V such that player 1 places probability 2/|V| on rv constitute a balanced vertex cover.\n3. BAYESIAN GAMES\nSo far, we have restricted our attention to normal-form games.
In a normal-form game, it is assumed that every\nagent knows every other agent\"s preferences over the\noutcomes of the game. In general, however, agents may have\nsome private information about their preferences that is not\nknown to the other agents. Moreover, at the time of\ncommitment to a strategy, the agents may not even know their\nown (final) preferences over the outcomes of the game yet,\nbecause these preferences may be dependent on a context\nthat has yet to materialize. For example, when the code for\na trading agent is written, it may not yet be clear how that\nagent will value resources that it will negotiate over later,\nbecause this depends on information that is not yet\navailable at the time at which the code is written (such as orders\nthat will have been placed to the agent before the\nnegotiation). In this section, we will study commitment in Bayesian\ngames, which can model such uncertainty over preferences.\n3.1 Definitions\nIn a Bayesian game, every player i has a set of actions Si,\na set of types \u0398i with an associated probability distribution\n\u03c0i : \u0398i \u2192 [0, 1], and, for each type \u03b8i, a utility function\nu\u03b8i\ni : S1 \u00d7 S2 \u00d7 . . . \u00d7 Sn \u2192 R. A pure strategy in a Bayesian\ngame is a mapping from the player\"s types to actions, \u03c3i :\n\u0398i \u2192 Si. (Bayesian games can be rewritten in normal form\nby enumerating every pure strategy \u03c3i, but this will cause\nan exponential blowup in the size of the representation of\nthe game and therefore cannot lead to efficient algorithms.)\nThe strategy that the leader should commit to depends\non whether, at the time of commitment, the leader knows\nher own type. If the leader does know her own type, the\nother types that the leader might have had become\nirrelevant and the leader should simply commit to the strategy\nthat is optimal for the type. 
However, as argued above, the leader does not necessarily know her own type at the time of commitment (e.g., the time at which the code is submitted). In this case, the leader must commit to a strategy that is dependent upon the leader's eventual type. We will study this latter model, although we will pay specific attention to the case where the leader has only a single type, which is effectively the same as the former model.

3.2 Commitment to pure strategies

It turns out that computing an optimal pure strategy to commit to is hard in Bayesian games, even with two players.

Theorem 5. Finding an optimal pure strategy to commit to in 2-player Bayesian games is NP-hard, even when the follower has only a single type.

Proof. We reduce an arbitrary VERTEX-COVER instance to the following Bayesian game between the leader and the follower. The leader has K types θ_1, θ_2, . . . , θ_K, each occurring with probability 1/K, and for every vertex v ∈ V, the leader has an action s_v. The follower has only a single type; for each edge e ∈ E, the follower has an action t_e, and the follower has a single additional action t_0. The utility function for the leader is given by, for all θ_l ∈ Θ_l and all s ∈ S, u_l^{θ_l}(s, t_0) = 1, and for all e ∈ E, u_l^{θ_l}(s, t_e) = 0. The follower's utility is given by:

• For all v ∈ V, for all e ∈ E with v ∉ e, u_f(s_v, t_e) = 1;
• For all v ∈ V, for all e ∈ E with v ∈ e, u_f(s_v, t_e) = −K;
• For all v ∈ V, u_f(s_v, t_0) = 0.

We claim that the leader can get a utility of 1 if and only if there is a solution to the VERTEX-COVER instance. First, suppose that there is a solution to the VERTEX-COVER instance. Then, the leader can commit to a pure strategy such that for each vertex v in the cover, the leader plays s_v for some type.
Then, the follower's utility for playing t_e (for any e ∈ E) is at most (K − 1)/K + (1/K) · (−K) = −1/K, so that the follower will prefer to play t_0, which gives the leader a utility of 1, as required.

Now, suppose that there is a pure strategy for the leader that will give the leader a utility of 1. Then, the follower must play t_0. In order for the follower not to prefer playing t_e (for any e ∈ E) instead, for at least one v ∈ e the leader must play s_v for some type θ_l. Hence, the set of vertices v that the leader plays for some type must constitute a vertex cover; and this set can have size at most K, because the leader has only K types. So there is a solution to the VERTEX-COVER instance.

However, if the leader has only a single type, then the problem becomes easy again (#types is the number of types for the follower):

Theorem 6. In 2-player Bayesian games in which the leader has only a single type, an optimal pure strategy to commit to can be found in O(#outcomes · #types) time.

Proof. For every leader action s, we can compute, for every follower type θ_f ∈ Θ_f, which actions t maximize the follower's utility; call this set of actions BR_{θ_f}(s). Then, the utility that the leader receives for committing to action s can be computed as Σ_{θ_f ∈ Θ_f} π(θ_f) · max_{t ∈ BR_{θ_f}(s)} u_l(s, t), and the leader can choose the best action to commit to.

3.3 Commitment to mixed strategies

In two-player zero-sum imperfect-information games with perfect recall (no player ever forgets something that it once knew), a minimax strategy can be constructed in polynomial time [12, 13]. Unfortunately, this result does not extend to computing optimal mixed strategies to commit to in the general-sum case, not even in Bayesian games. We will exhibit NP-hardness by reducing from the INDEPENDENT-SET problem.

Definition 2.
In INDEPENDENT-SET, we are given a graph G = (V, E) and an integer K. We are asked whether there exists a subset of the vertices S ⊆ V, with |S| = K, such that no edge e ∈ E has both of its endpoints in S.

Again, this problem is NP-complete [9].

Theorem 7. Finding an optimal mixed strategy to commit to in 2-player Bayesian games is NP-hard, even when the leader has only a single type and the follower has only two actions.

Proof. We reduce an arbitrary INDEPENDENT-SET instance to the following Bayesian game between the leader and the follower. The leader has only a single type, and for every vertex v ∈ V, the leader has an action s_v. The follower has a type θ_v for every v ∈ V, occurring with probability 1/((|E| + 1) · |V|), and a type θ_e for every e ∈ E, occurring with probability 1/(|E| + 1). The follower has two actions: t_0 and t_1. The leader's utility is given by, for all s ∈ S, u_l(s, t_0) = 1 and u_l(s, t_1) = 0. The follower's utility is given by:

• For all v ∈ V, u_f^{θ_v}(s_v, t_1) = 0;
• For all v ∈ V and s ∈ S − {s_v}, u_f^{θ_v}(s, t_1) = K/(K − 1);
• For all v ∈ V and s ∈ S, u_f^{θ_v}(s, t_0) = 1;
• For all e ∈ E and s ∈ S, u_f^{θ_e}(s, t_0) = 1;
• For all e ∈ E, for both v ∈ e, u_f^{θ_e}(s_v, t_1) = 2K/3;
• For all e ∈ E, for all v ∉ e, u_f^{θ_e}(s_v, t_1) = 0.

We claim that an optimal strategy to commit to gives the leader an expected utility of at least |E|/(|E| + 1) + K/((|E| + 1) · |V|) if and only if there is a solution to the INDEPENDENT-SET instance.

First, suppose that there is a solution to the INDEPENDENT-SET instance. Then, the leader could commit to the following strategy: for every vertex v in the independent set, play the corresponding s_v with probability 1/K.
If the follower has type θ_e for some e ∈ E, the expected utility for the follower of playing t_1 is at most (1/K) · (2K/3) = 2/3, because there is at most one vertex v ∈ e such that s_v is played with nonzero probability. Hence, the follower will play t_0 and obtain a utility of 1. If the follower has type θ_v for some vertex v in the independent set, the expected utility for the follower of playing t_1 is ((K − 1)/K) · (K/(K − 1)) = 1, because the leader plays s_v with probability 1/K. It follows that the follower (who breaks ties to maximize the leader's utility) will play t_0, which also gives a utility of 1 and gives the leader a higher utility. Hence the leader's expected utility for this strategy is at least |E|/(|E| + 1) + K/((|E| + 1) · |V|), as required.

Now, suppose that there is a strategy that gives the leader an expected utility of at least |E|/(|E| + 1) + K/((|E| + 1) · |V|). Then, this strategy must induce the follower to play t_0 whenever it has a type of the form θ_e (because otherwise, the utility could be at most (|E| − 1)/(|E| + 1) + |V|/((|E| + 1) · |V|) = |E|/(|E| + 1) < |E|/(|E| + 1) + K/((|E| + 1) · |V|)). Thus, it cannot be the case that for some edge e = (v_1, v_2) ∈ E, the probability that the leader plays one of s_{v_1} and s_{v_2} is at least 2/K, because then the expected utility for the follower of playing t_1 when it has type θ_e would be at least (2/K) · (2K/3) = 4/3 > 1. Moreover, the strategy must induce the follower to play t_0 for at least K types of the form θ_v. Inducing the follower to play t_0 when it has type θ_v can be done only by playing s_v with probability at least 1/K, which will give the follower a utility of at most ((K − 1)/K) · (K/(K − 1)) = 1 for playing t_1.
But then, the set of vertices v such that s_v is played with probability at least 1/K must constitute an independent set of size K (because if there were an edge e between two such vertices, it would induce the follower to play t_1 for type θ_e by the above).

By contrast, if the follower has only a single type, then we can generalize the linear programming approach for normal-form games:

Theorem 8. In 2-player Bayesian games in which the follower has only a single type, an optimal mixed strategy to commit to can be found in polynomial time using linear programming.

Proof. We generalize the approach in Theorem 2 as follows. For every pure follower strategy t, we compute a mixed strategy for the leader for every one of the leader's types such that 1) playing t is a best response for the follower, and 2) under this constraint, the mixed strategy maximizes the leader's ex ante expected utility. To do so, we generalize the linear program as follows:

maximize Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_l^{θ_l}(s, t)
subject to
for all t′ ∈ T, Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_f(s, t) ≥ Σ_{θ_l ∈ Θ_l} π(θ_l) Σ_{s ∈ S} p_s^{θ_l} u_f(s, t′)
for all θ_l ∈ Θ_l, Σ_{s ∈ S} p_s^{θ_l} = 1

As in Theorem 2, the solution for the linear program that maximizes the solution value is an optimal strategy to commit to.

This shows an interesting contrast between commitment to pure strategies and commitment to mixed strategies in Bayesian games: for pure strategies, the problem becomes easy if the leader has only a single type (but not if the follower has only a single type), whereas for mixed strategies, the problem becomes easy if the follower has only a single type (but not if the leader has only a single type).

4.
CONCLUSIONS AND FUTURE RESEARCH

In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. This requires some equilibrium notion (Nash equilibrium and its refinements), and often leads to the equilibrium selection problem: it is unclear to each individual player according to which equilibrium she should play. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. For example, one agent may arrive at the (real or virtual) site of the game before the other, or, in the specific case of software agents, the code for one agent may be completed and committed before that of another agent. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. Specifically, if commitment to mixed strategies is possible, then (optimal) commitment never hurts the leader, and often helps.

The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we studied how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. For normal-form games, we showed that the optimal pure strategy to commit to can be found efficiently for any number of players. An optimal mixed strategy to commit to in a normal-form game can be found efficiently for two players using linear programming (and no more efficiently than that, in the sense that any linear program with a probability constraint can be encoded as such a problem).
(This is a generalization of the polynomial-time computability of minimax strategies in normal-form games.) The problem becomes NP-hard for three (or more) players. In Bayesian games, the problem of finding an optimal pure strategy to commit to is NP-hard even in two-player games in which the follower has only a single type, although two-player games in which the leader has only a single type can be solved efficiently. The problem of finding an optimal mixed strategy to commit to in a Bayesian game is NP-hard even in two-player games in which the leader has only a single type, although two-player games in which the follower has only a single type can be solved efficiently using a generalization of the linear programming approach for normal-form games. The following two tables summarize these results.

                             2 players                  >= 3 players
  normal-form                O(#outcomes)               O(#outcomes · #players)
  Bayesian, 1-type leader    O(#outcomes · #types)      NP-hard
  Bayesian, 1-type follower  NP-hard                    NP-hard
  Bayesian (general)         NP-hard                    NP-hard

Results for commitment to pure strategies. (With more than 2 players, the follower is the last player to commit, the leader is the first.)

                             2 players                  >= 3 players
  normal-form                one LP-solve per           NP-hard
                             follower action
  Bayesian, 1-type leader    NP-hard                    NP-hard
  Bayesian, 1-type follower  one LP-solve per           NP-hard
                             follower action
  Bayesian (general)         NP-hard                    NP-hard

Results for commitment to mixed strategies. (With more than 2 players, the follower is the last player to commit, the leader is the first.)

Future research can take a number of directions. First, we can empirically evaluate the techniques presented here on test suites such as GAMUT [19]. We can also study the computation of optimal strategies to commit to in other^1 concise representations of normal-form games, for example in graphical games [10] or local-effect/action graph games [14, 1].
For the cases where computing an optimal strategy to commit to is NP-hard, we can also study the computation of approximately optimal strategies to commit to. While the correct definition of an approximately optimal strategy in this setting may appear simple at first (it should be a strategy that, if the following players play optimally, performs almost as well as the optimal strategy in expectation), this definition becomes problematic when we consider that the other players may also be playing only approximately optimally. One may also study models in which multiple (but not all) players commit at the same time.

Another interesting direction to pursue is to see if computing optimal mixed strategies to commit to can help us in, or otherwise shed light on, computing Nash equilibria. Often, optimal mixed strategies to commit to are also Nash equilibrium strategies (for example, in two-player zero-sum games this is always true), although this is not always the case (for example, as we already pointed out, sometimes the optimal strategy to commit to is a strictly dominated strategy, which can never be a Nash equilibrium strategy).

5. REFERENCES

[1] N. A. R. Bhat and K. Leyton-Brown. Computing Nash equilibria of action-graph games. In Proceedings of the 20th Annual Conference on Uncertainty in Artificial Intelligence (UAI), Banff, Canada, 2004.
[2] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), pages 765-771, Acapulco, Mexico, 2003.
[3] V. Conitzer and T. Sandholm. Complexity of (iterated) dominance. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 88-97, Vancouver, Canada, 2005.
[4] V. Conitzer and T. Sandholm. A generalized strategy eliminability criterion and computational methods for applying it.
In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 483-488, Pittsburgh, PA, USA, 2005.
[5] A. A. Cournot. Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). Hachette, Paris, 1838.

^1 Bayesian games are one potentially concise representation of normal-form games.

[6] G. Dantzig. A proof of the equivalence of the programming problem and the game problem. In T. Koopmans, editor, Activity Analysis of Production and Allocation, pages 330-335. John Wiley & Sons, 1951.
[7] I. Gilboa, E. Kalai, and E. Zemel. The complexity of eliminating dominated strategies. Mathematics of Operations Research, 18:553-565, 1993.
[8] I. Gilboa and E. Zemel. Nash and correlated equilibria: Some complexity considerations. Games and Economic Behavior, 1:80-93, 1989.
[9] R. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, pages 85-103. Plenum Press, NY, 1972.
[10] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2001.
[11] D. E. Knuth, C. H. Papadimitriou, and J. N. Tsitsiklis. A note on strategy elimination in bimatrix games. Operations Research Letters, 7(3):103-107, 1988.
[12] D. Koller and N. Megiddo. The complexity of two-person zero-sum games in extensive form. Games and Economic Behavior, 4(4):528-552, Oct. 1992.
[13] D. Koller, N. Megiddo, and B. von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14(2):247-259, 1996.
[14] K. Leyton-Brown and M. Tennenholtz. Local-effect games. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, 2003.
[15] R. Lipton, E. Markakis, and A. Mehta.
Playing large games using simple strategies. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 36-41, San Diego, CA, 2003.
[16] M. Littman and P. Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 48-54, San Diego, CA, 2003.
[17] R. D. Luce and H. Raiffa. Games and Decisions. John Wiley and Sons, New York, 1957. Dover republication 1989.
[18] J. Nash. Equilibrium points in n-person games. Proc. of the National Academy of Sciences, 36:48-49, 1950.
[19] E. Nudelman, J. Wortman, K. Leyton-Brown, and Y. Shoham. Run the GAMUT: A comprehensive approach to evaluating game-theoretic algorithms. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), New York, NY, USA, 2004.
[20] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[21] C. Papadimitriou. Algorithms, games and the Internet. In Proceedings of the Annual Symposium on Theory of Computing (STOC), pages 749-753, 2001.
[22] R. Porter, E. Nudelman, and Y. Shoham. Simple search methods for finding a Nash equilibrium. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 664-669, San Jose, CA, USA, 2004.
[23] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 495-501, Pittsburgh, PA, USA, 2005.
[24] J. von Neumann. Zur Theorie der Gesellschaftsspiele. Mathematische Annalen, 100:295-320, 1927.
[25] H. von Stackelberg. Marktform und Gleichgewicht. Springer, Vienna, 1934.
[26] B. von Stengel and S. Zamir. Leadership with commitment to mixed strategies. CDAM Research Report LSE-CDAM-2004-01, London School of Economics, Feb. 2004.
Nash Equilibria in Graphical Games on Trees Revisited

ABSTRACT

Graphical games have been proposed as a game-theoretic model of large-scale distributed networks of non-cooperative agents. When the number of players is large, and the underlying graph has low degree, they provide a concise way to represent the players' payoffs. It has recently been shown that the problem of finding Nash equilibria in a general degree-3 graphical game with two actions per player is complete for the complexity class PPAD, indicating that it is unlikely that there is any polynomial-time algorithm for this problem. In this paper, we study the complexity of graphical games with two actions per player on bounded-degree trees. This setting was first considered by Kearns, Littman and Singh, who proposed a dynamic-programming-based algorithm that computes all Nash equilibria of such games. The running time of their algorithm is exponential, though approximate equilibria can be computed efficiently. Later, Littman, Kearns and Singh proposed a modification to this algorithm that can find a single Nash equilibrium in polynomial time. We show that this modified algorithm is incorrect: the output is not always a Nash equilibrium. We then propose a new algorithm that is based on the ideas of Kearns et al. and computes all Nash equilibria in quadratic time if the input graph is a path, and in polynomial time if it is an arbitrary graph of maximum degree 2. Moreover, our algorithm can be used to compute Nash equilibria of graphical games on arbitrary trees, but the running time can be exponential, even when the tree has bounded degree. We show that this is inevitable: any algorithm of this type will take exponential time, even on bounded-degree trees with pathwidth 2.
It is an open question whether our algorithm runs in polynomial time on graphs with pathwidth 1, but we show that finding a Nash equilibrium for a 2-action graphical game in which the underlying graph has maximum degree 3 and constant pathwidth is PPAD-complete (so is unlikely to be tractable).

1. INTRODUCTION

Graphical games were introduced in the papers of Kearns et al. [8] and Littman et al. [9] as a succinct representation of games with a large number of players. The classical normal form (or matrix form) representation has a size that is exponential in the number of players, making it unsuitable for large-scale distributed games. A graphical game associates each player with a vertex of an underlying graph G, and the payoff to that player is a function of the actions chosen by himself and his neighbours in G; if G has low degree, this is a concise way to represent a game with many players.

The papers [8, 9] give a dynamic-programming algorithm for finding Nash equilibria in graphical games where there are two actions per player and G is a tree. The first of these papers describes a generic algorithm for this problem that can be specialized in two ways: as an algorithm that computes approximations to all Nash equilibria in time polynomial in the input size and the approximation quality, or as an exponential-time algorithm that allows the exact computation of all Nash equilibria in G. In [9], the authors propose a modification to the latter algorithm that aims to find a single Nash equilibrium in polynomial time. This does not quite work, as we show in Section 3, though it introduces a useful idea.

1.1 Background

The generic algorithm of [8] consists of two phases, which we will refer to as the upstream pass and the downstream pass;^1 the former starts at the leaves of the tree and ends at the root, while the latter starts at the root and ends at the leaves.
It is assumed that each player has two pure strategies (actions), which are denoted by 0 and 1; it follows that any mixed strategy can be represented as a single number x ∈ [0, 1], where x is the probability that the player selects 1. During the upstream pass, each vertex V computes the set of its potential best responses to every mixed strategy w of its parent W; a strategy v is a potential best response to w if there is a Nash equilibrium in the graphical game downstream of V (inclusive) given that W plays w (for a more technical definition, the reader is referred to Section 2). The output of this stage can be viewed as a (continuous) table T(w, v), where T(w, v) = 1 if and only if v is a potential best response to w; we refer to this table as the best response policy for V. The generic algorithm does not address the problem of representing the best response policy; in fact, the most important difference between the two instantiations of the generic algorithm described in [8] is in their approach to this issue. The computation is performed inductively: the best response policy for V is computed based on the best response policies of V's children U_1, . . . , U_k. By the end of the upstream pass, all children of the root have computed their best response policies.

In the beginning of the downstream pass, the root selects its strategy and informs its children about its choice. It also selects a strategy for each child. A necessary and sufficient condition for the algorithm to proceed is that the strategy of the root is a best response to the strategies of its children and, for each child, the chosen strategy is one of the pre-computed potential best responses to the chosen strategy of the root. The equilibrium then propagates downstream, with each vertex selecting its children's actions.

^1 Note that the terminology upstream and downstream are reversed in [8, 9]: our trees are rooted at the top.
The action of the child is chosen to be any strategy from the pre-computed potential best responses to the chosen strategy of the parent.

To bound the running time of this algorithm, the paper [8] shows that any best response policy can be represented as a union of an exponential number of rectangles; the polynomial-time approximation algorithm is obtained by combining this representation with a polynomial-sized grid. The main idea of [9] is that it is not necessary to keep track of all rectangles in the best response policies; rather, at each step of the upstream pass, it is possible to select a polynomial-size subset of the corresponding policy (in [9], this subset is called a breakpoint policy), and still ensure that the downstream pass can proceed successfully (a sufficient condition for this is that the subset of the best response policy for V stored by the algorithm contains a continuous path from w = 0 to w = 1).

1.2 Our Results

One of the main contributions of our paper is to show that the algorithm proposed by [9] is incorrect. In Section 3 we describe a simple example for which the algorithm of [9] outputs a vector of strategies that does not constitute a Nash equilibrium of the underlying game.

In Sections 4, 5 and 6 we show how to fix the algorithm of [9] so that it always produces correct output.

Section 4 considers the case in which the underlying graph is a path of length n. For this case, we show that the number of rectangles in each of the best response policies is O(n^2). This gives us an O(n^3) algorithm for finding a Nash equilibrium, and for computing a representation of all Nash equilibria. (This algorithm is a special case of the generic algorithm of [8]; we show that it runs in polynomial time when the underlying graph is a path.)

We can improve the running time of the generic algorithm using the ideas of [9].
In particular, we give an O(n^2) algorithm for finding a Nash equilibrium of a graphical game on a path of length n. Instead of storing best response policies, this algorithm stores appropriately-defined subsets, which, following [9], we call breakpoint policies (modifying the definition as necessary). We obtain the following theorem.

THEOREM 1. There is an O(n^2) algorithm that finds a Nash equilibrium of a graphical game with two actions per player on an n-vertex path. There is an O(n^3) algorithm that computes a representation of all Nash equilibria of such a game.

In Section 5 we extend the results of Section 4 to general degree-2 graphs, obtaining the following theorem.

THEOREM 2. There is a polynomial-time algorithm that finds a Nash equilibrium of a graphical game with two actions per player on a graph with maximum degree 2.

In Section 6 we extend our algorithm so that it can be used to find a Nash equilibrium of a graphical game on an arbitrary tree. Even when the tree has bounded degree, the running time can be exponential. We show that this is inevitable by constructing a family of graphical games on bounded-degree trees for which best response policies of some of the vertices have exponential size, and any two-pass algorithm (i.e., an algorithm that is similar in spirit to that of [8]) has to store almost all points of the best response policies. In particular, we show the following.

THEOREM 3. There is an infinite family of graphical games on bounded-degree trees with pathwidth 2 such that any two-pass algorithm for finding Nash equilibria on these trees requires exponential time and space.

It is interesting to note that the trees used in the proof of Theorem 3 have pathwidth 2, that is, they are very close to being paths. It is an open question whether our algorithm runs in polynomial time
This question can be viewed as a\ngeneralization of a very natural computational geometry problem - we\ndescribe it in more detail in Section 8.\nIn Section 7, we give a complexity-theoretic intractability result\nfor the problem of finding a Nash equilibrium of a graphical game\non a graph with small pathwidth. We prove the following theorem.\nTHEOREM 4. Consider the problem of finding a Nash\nequilibrium for a graphical game in which the underlying graph has\nmaximum degree 3 and pathwidth k. There is a constant k such that\nthis problem is PPAD-complete.\nTheorem 4 limits the extent to which we can exploit path-like\nproperties of the underlying graph, in order to find Nash equilibria.\nTo prove Theorem 4, we use recent PPAD-completeness results for\ngames, in particular the papers [7, 4] which show that the problem\nof finding Nash equilibria in graphical games of degree d (for d \u2265\n3) is computationally equivalent to the problem of solving r-player\nnormal-form games (for r \u2265 4), both of which are PPAD-complete.\n2. PRELIMINARIES AND NOTATION\nWe consider graphical games in which the underlying graph G\nis an n-vertex tree. Each vertex has two actions, which are denoted\nby 0 and 1. A mixed strategy is given by a single number x \u2208 [0, 1],\nwhich denotes the probability that the player selects action 1.\nFur the purposes of the algorithm, the tree is rooted arbitrarily.\nFor convenience, we assume without loss of generality that the root\nhas a single child, and that its payoff is independent of the action\nchosen by the child. 
This can be achieved by first choosing an\narbitrary root of the tree, and then adding a dummy parent of this\nroot, giving the new parent a constant payoff function.\nGiven an edge (V, W ) of the tree G, and a mixed strategy w\nfor W , let G(V,W ),W =w be the instance obtained from G by (1)\ndeleting all nodes Z which are separated from V by W (i.e., all\nnodes Z such that the path from Z to V passes through W ), and\n(2) restricting the instance so that W is required to play mixed\nstrategy w.\nDefinition 1. Suppose that (V, W ) is an edge of the tree, that\nv is a mixed strategy for V and that w is a mixed strategy for W .\n101\nWe say that v is a potential best response to w (denoted by v \u2208\npbrV (w)) if there is an equilibrium in the instance G(V,W ),W =w in\nwhich V has mixed strategy v. We define the best response policy\nfor V , given W , as B(W, V ) = {(w, v) | v \u2208 pbrV (w), w \u2208\n[0, 1]}. Typically, W is the parent of V , and this is just referred to\nas the best response policy for V . The expression B(W, V )|V =v\nis used to denote the set B(W, V ) \u2229 [0, 1]\u00d7{v}.\nThe upstream pass of the generic algorithm of [8] computes the\nbest response policy for V for every node V other than the root.\nWith the above assumptions about the root, the downstream pass\nis straightforward: Let W denote the root and V denote its child.\nThe root selects any pair (w, v) from B(W, V ). It decides to play\nmixed strategy w and it instructs V to play mixed strategy v. The\nremainder of the downward pass is recursive. When a node V is\ninstructed by its parent to adopt mixed strategy v, it does the\nfollowing for each child U - It finds a pair (v, u) \u2208 B(V, U) (with\nthe same v value that it was given by its parent) and instructs U to\nplay u.\n3. 
ALGORITHM OF LITTMAN ET AL.

The algorithm of [9] is based on the following observation: to compute a single Nash equilibrium by a two-pass algorithm, it is not necessary to construct the entire best response policy for each vertex. As long as, at each step of the downstream pass, the vertex under consideration can select a vector of strategies for all its children so that each child's strategy is a potential best response to the parent's strategy, the algorithm succeeds in producing a Nash equilibrium. This can be achieved if, at the beginning of the downstream pass, we have a data structure in which each vertex V with parent W stores a set B̂(W, V) ⊆ B(W, V) (called a breakpoint policy) which covers every possible w ∈ [0, 1]. We will show later that a sufficient condition for the construction of such a data structure is the invariant that, at every level of the upstream pass, B̂(W, V) contains a continuous path from w = 0 to w = 1.

In [9], it is suggested that we can select the breakpoint policy in a particular way. Namely, the paper uses the following definition:

Definition 2. (cf. [9]) A breakpoint policy for a node V with parent W consists of an ordered set of W-breakpoints w_0 = 0 < w_1 < w_2 < ··· < w_{t−1} < w_t = 1 and an associated set of V-values v_1, ..., v_t. The interpretation is that for any w ∈ [0, 1], if w_{i−1} < w < w_i for some index i and W plays w, then V shall play v_i; and if w = w_i for some index i, then V shall play any value between v_i and v_{i+1}. We say such a breakpoint policy has t − 1 breakpoints.

The paper then claims that any vertex V can compute its breakpoint policy with respect to its parent W given the breakpoint policies of its children U_1, ..., U_k.
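A breakpoint policy in the sense of Definition 2 acts as a simple lookup table during the downstream pass. The sketch below is a minimal illustration under the assumption that the policy is stored as two sorted lists; this is our own representation for exposition, not the data structure of [9].

```python
import bisect

# Lookup in a breakpoint policy (Definition 2): breakpoints
# w_0 = 0 < w_1 < ... < w_t = 1 and associated values v_1, ..., v_t.
def play(breakpoints, values, w):
    """Return V's response when W plays w: a single value strictly
    between breakpoints, or the pair (v_i, v_{i+1}) of admissible
    endpoints when w hits an interior breakpoint w_i."""
    i = bisect.bisect_left(breakpoints, w)
    if i == 0:                                   # w == 0: play v_1
        return values[0]
    if breakpoints[i] == w and i < len(values):  # interior breakpoint
        return (values[i - 1], values[i])
    return values[i - 1]

# One interior breakpoint at w = 1/2 (illustrative numbers):
bp, vals = [0.0, 0.5, 1.0], [1.0, 0.0]
assert play(bp, vals, 0.25) == 1.0
assert play(bp, vals, 0.5) == (1.0, 0.0)   # any value in [0, 1] allowed
assert play(bp, vals, 0.75) == 0.0
```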
The proof proceeds by ordering the children's breakpoints (i.e., the respective values of v) from left to right (it can be assumed without loss of generality that all these breakpoints are distinct) and considering them in turn; each such point v_l ∈ {v_1, ..., v_L} corresponds to a fixed choice of strategies for k − 1 children and an interval of admissible strategies for one child. Assume for convenience that this child is U_1 and its interval of admissible strategies at v_l is [a, b]; assume also that for U_j, j = 2, ..., k, their respective breakpoint policies prescribe them to play u_j in response to v_l. Let P^i(u, w), i = 0, 1, be the expected payoff for V when V plays i, U_1 plays u, each U_j, j = 2, ..., k, plays u_j, and W plays w, and consider the set

W_l = {w ∈ [0, 1] | ∃u ∈ [a, b] s.t. P^0(u, w) = P^1(u, w)};

note that for any w ∈ W_l we have v_l ∈ pbr_V(w).

Figure 1: LKS: Trimming to find breakpoint policies.

The authors show that for any breakpoint v_l, the set W_l is either empty, a single interval, or a union of two non-floating intervals (an interval is non-floating if one of its endpoints is 0 or 1); moreover, the union of all sets W_l, l = 1, ..., L, covers the interval [0, 1]. It follows easily that one can cover [0, 1] with at most L + 2 intervals, each of which is a subset of some W_l. The authors then claim that any such cover can be transformed into a breakpoint policy for V. Namely, they say that for any two intervals W_{l1} and W_{l2} in the cover, "any overlap between W_{l1} and W_{l2} can be arbitrarily assigned coverage by W_{l1} and W_{l2} trimmed accordingly" (cf. [9], p. 5). They illustrate their approach in a figure, which is reproduced as Figure 1 here. In the figure, the dashed horizontal lines represent the breakpoints v_1, v_2, ..., v_7 and the solid intervals along these breakpoints are the sets W_1, W_2, ..., W_7.
The thick connected path is the corresponding breakpoint policy. It is chosen as follows: begin on the left, and always jump to the interval allowing greatest progress to the right.

To see why this approach does not work in general, consider a path of length 4 consisting of an indifferent root R, its child W, W's child V, and V's child U. Suppose that U receives a payoff of 1 if it plays differently to V and 0 otherwise. Thus, if v denotes the mixed strategy of V (i.e., V plays 1 with probability v), then the expected payoff that U derives from playing 0 is given by P^0(U) = v and the expected payoff that U derives from playing 1 is given by P^1(U) = 1 − v. Suppose that V derives no payoff from playing 1 (so P^1(V) = 0) and that its payoff matrix for playing 0 is

( 1  −9 )
( 9  −1 ),

so if u denotes the mixed strategy of U and w denotes the mixed strategy of W, the expected payoff that V derives from playing 0 is given by P^0(V) = (1 − u)(1 − w) + (1 − u)w(−9) + u(1 − w)9 + uw(−1).

Using the techniques of [8] (or, alternatively, those of Section 4), it is not hard to verify that the best response policies for U and V (as in Definition 1) are given by the graphs in Figure 2. The best response policy for U is a breakpoint policy for U (as in Definition 2) with V-breakpoints v_0 = 0, v_1 = 1/2 and v_2 = 1 with associated values u_1 = 1 and u_2 = 0. The best response policy for V is not a breakpoint policy (because of how the curve from w = 0 to w = 1 doubles back).

The LKS algorithm would trim to get a breakpoint policy such as the one in Figure 3.
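The numbers behind Figure 2 can be checked by direct arithmetic; a small numerical sketch of the example above:

```python
# V's expected payoff from playing 0 in the example above (its payoff
# from playing 1 is identically 0):
def p0_V(u, w):
    return (1 - u)*(1 - w) + (1 - u)*w*(-9) + u*(1 - w)*9 + u*w*(-1)

# For v < 1/2, U's unique best response is u = 1; V is then
# indifferent (p0_V = 0) exactly at w = 0.9:
assert abs(p0_V(1.0, 0.9)) < 1e-12
# For v > 1/2, U plays u = 0; V is indifferent at w = 0.1:
assert abs(p0_V(0.0, 0.1)) < 1e-12
```

These two indifference values are the vertical lines at w = .1 and w = .9 along which V's best response policy doubles back.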
Note that this breakpoint policy B̂(W, V) is invalid in the sense that it does not satisfy B̂(W, V) ⊆ B(W, V).

Figure 2: Best response policies for U and V.

Figure 3: A trimmed policy for V.

The point is that the payoff matrix of W can now be chosen to prevent the LKS algorithm from finding a Nash equilibrium. For example, suppose the payoffs are given so that P^0(W) = v and P^1(W) = 2(1 − v). The best response policy for W is a horizontal line at w = .1. (This is the value of w that allows v = 2/3 (see Figure 2), which makes P^0(W) = P^1(W).) In the downstream pass, the chosen values are w = .1, then, from the trimming, v = 0 and u = 1, which is not a Nash equilibrium since W prefers action 1.

The failure of the algorithm is not caused by the fact that the trimming policy goes as far to the right as possible. Any other trimming would be just as bad. For example, suppose the breakpoint policy for V has v = 0 until some point w* < .9 and then jumps to v = 1. The algorithm is then defeated by the payoff matrix with P^0(W) = 2v and P^1(W) = (1 − v), in which the best response policy for W is a horizontal line at w = .9. The algorithm then gives w = .9, v = 1, and u = 0, which is not a Nash equilibrium since W prefers action 0.

We conclude that the LKS algorithm does not always find a Nash equilibrium. In Sections 4 and 6 we show how to modify the algorithm so that it always finds a Nash equilibrium. For the modified algorithm, we have to extend the definition of breakpoint policy (see Definition 3) so that it includes breakpoint policies such as the best response policy for V in Figure 2. Unfortunately, such a breakpoint policy may be exponential in size (see Figure 7), so the corrected algorithm does not run in polynomial time on all trees. In the next section, we show that it runs in polynomial time on a path.

4.
FINDING EQUILIBRIA ON A PATH

In this section, we focus on the case when the underlying graph is a path, i.e., its vertex set is {V_1, ..., V_n}, and its edge set is {(V_j, V_{j+1}) | j = 1, ..., n − 1}. We show that in this case the best response policy for each vertex can be represented as a union of a polynomial number of rectangles, where a rectangle is defined by a pair of closed intervals (I_V, I_U) and consists of all points in I_V × I_U; it may be the case that one or both of the intervals I_V and I_U consist of a single point.

THEOREM 5. For any j = 1, ..., n, the set B(V_j, V_{j−1}) can be represented as a disjoint union of at most (j + 4)^2 rectangles. Moreover, given such a representation of B(V_j, V_{j−1}), one can compute a representation of B(V_{j+1}, V_j) in time O(j^2).

PROOF. For any set A ⊆ [0, 1]^2 that is represented as a union of a finite number of rectangles, we say that a point u ∈ [0, 1] on the U-axis is a U-event point of A if u = 0 or u = 1 or A contains a rectangle of the form I_V × I_U and u is an endpoint of I_U; V-event points are defined similarly.
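With rectangles stored as coordinate 4-tuples (a representation assumed here purely for illustration), collecting the event points of a rectangle union is straightforward:

```python
# Event points of a finite union of rectangles in [0,1]^2.
# Each rectangle is (v_lo, v_hi, u_lo, u_hi), i.e. I_V x I_U.
def event_points(rects):
    u_events, v_events = {0.0, 1.0}, {0.0, 1.0}
    for (v_lo, v_hi, u_lo, u_hi) in rects:
        u_events.update((u_lo, u_hi))   # endpoints of I_U
        v_events.update((v_lo, v_hi))   # endpoints of I_V
    return sorted(v_events), sorted(u_events)

# A single interior rectangle contributes its four endpoints:
assert event_points([(0.1, 0.4, 0.2, 0.3)]) == \
    ([0.0, 0.1, 0.4, 1.0], [0.0, 0.2, 0.3, 1.0])
```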
Observe that for any u ∈ [0, 1], the number of connected components of [0, 1]×{u} ∩ A is at most the number of V-event points of A.

We use induction on j to show that for each V_j the statement of the theorem holds and, additionally, each B(V_j, V_{j−1}) has at most 2j + 4 event points.

To simplify the base case, we modify the graphical game by appending a dummy vertex V_0 to the beginning of the path: the only neighbour of V_0 is V_1, the payoffs of V_0 are always equal to 0, and the payoffs of all other vertices (including V_1) are the same as in the original game.

For j = 0, we have B(V_1, V_0) = [0, 1]^2, so the statement of the theorem is trivially true.

Now, suppose that j > 0, set V = V_j and let U = V_{j−1} and W = V_{j+1} be the vertices that precede and follow V, respectively. The payoffs to V are described by a 2×2×2 matrix P: P_{xyz} is the payoff that V receives when U plays x, V plays y, and W plays z, where x, y, z ∈ {0, 1}. Suppose that U plays 1 with probability u and W plays 1 with probability w. Then V's expected payoff from playing 0 is

P^0 = (1−u)(1−w)P_{000} + (1−u)wP_{001} + u(1−w)P_{100} + uwP_{101},

while its expected payoff from playing 1 is

P^1 = (1−u)(1−w)P_{010} + (1−u)wP_{011} + u(1−w)P_{110} + uwP_{111}.

If P^0 > P^1, V strictly prefers to play 0; if P^0 < P^1, V strictly prefers to play 1; and if P^0 = P^1, V is indifferent, i.e., can play any (mixed) strategy. Since P^0 and P^1 are linear in w and u, there exist some constants A1, A0, B1, and B0 that depend on the matrix P, but not on u and w, such that

P^0 − P^1 = w(B1u + B0) − (A1u + A0).
(1)

Depending on the values of A1, A0, B1, and B0, we subdivide the rest of the proof into the following cases.

• B1 = 0, B0 = 0.

In this case, P^0 > P^1 if and only if A1u + A0 < 0. If also A1 = 0, A0 = 0, clearly, B(W, V) = [0, 1]^2, and the statement of the theorem is trivially true.

Otherwise, the vertex V is indifferent between 0 and 1 if and only if A1 ≠ 0 and u = −A0/A1. Let 𝒱 = {v | v ∈ (0, 1), −A0/A1 ∈ pbr_U(v)}. By the inductive hypothesis, 𝒱 consists of at most 2(j − 1) + 4 segments and isolated points.

For any v ∈ 𝒱, we have B(W, V)|_{V=v} = [0, 1]: no matter what W plays, as long as U is playing −A0/A1, V is content to play v. On the other hand, for any v ∈ (0, 1) \ 𝒱 we have B(W, V)|_{V=v} = ∅: when V plays v, U can only respond with u = −A0/A1, in which case V can benefit from switching to one of the pure strategies.

To complete the description of B(W, V), it remains to analyze the cases v = 0 and v = 1. The vertex V prefers to play 0 if A1 > 0 and u ≤ −A0/A1, or A1 < 0 and u ≥ −A0/A1, or A1 = 0 and A0 < 0. Assume for now that A1 > 0; the other two cases can be treated similarly. In this case 0 ∈ pbr_V(w) for some w ∈ [0, 1] if and only if there exists a u ∈ pbr_U(0) such that u ≤ −A0/A1: if no such u exists, whenever V plays 0, either U's response is not in pbr_U(0) or V can improve its payoff by playing 1. Therefore, either B(W, V)|_{V=0} = [0, 1] or B(W, V)|_{V=0} = ∅. Similarly, B(W, V)|_{V=1} is equal to either [0, 1] or ∅, depending on pbr_U(1).

Therefore, the set B(W, V) consists of at most 2j + 4 ≤ (j + 4)^2 rectangles: B(W, V) ∩ [0, 1]×(0, 1) = [0, 1]×𝒱 contributes at most 2j + 2 rectangles, and each of the sets B(W, V)|_{V=0} and B(W, V)|_{V=1} contributes at most one rectangle.
Similarly, its total number of event points is at most 2j + 4: the only W-event points are 0 and 1, each V-event point of B(W, V) is a V-event point of B(V, U), and there are at most 2j + 2 of them.

• B1u + B0 ≢ 0, A1 = αB1, A0 = αB0 for some α ∈ R.

In this case, V is indifferent between 0 and 1 if and only if w = α, or B1 ≠ 0 and u = −B0/B1 = −A0/A1. Similarly to the previous case, we can show that B(W, V) ∩ [0, 1]×(0, 1) consists of the rectangle {α}×[0, 1] and at most 2j + 2 rectangles of the form [0, 1]×I_V, where each I_V corresponds to a connected component of B(V, U)|_{U=−B0/B1}.

Furthermore, V prefers to play 0 if B1u + B0 > 0 and w ≥ α, or B1u + B0 < 0 and w ≤ α. Therefore, if B1u* + B0 > 0 for some u* ∈ pbr_U(0), then B(W, V)|_{V=0} contains [α, +∞) ∩ [0, 1], and if B1u** + B0 < 0 for some u** ∈ pbr_U(0), then B(W, V)|_{V=0} contains (−∞, α] ∩ [0, 1]; if both u* and u** exist, B(W, V)|_{V=0} = [0, 1]. The set B(W, V)|_{V=1} can be described in a similar manner.

By the inductive hypothesis, B(V, U) has at most 2j + 2 event points; as at least two of these are U-event points, it has at most 2j V-event points. Since each V-event point of B(W, V) is a V-event point of B(V, U) and B(W, V) has at most 3 W-event points (0, 1, and α), its total number of event points is at most 2j + 3 < 2j + 4. Also, similarly to the previous case, it follows that B(W, V) consists of at most 2j + 4 < (j + 4)^2 rectangles.

• B1u + B0 ≢ 0, and A1u + A0 ≢ α(B1u + B0) for any α ∈ R.

In this case, one can define the indifference function f(·) as f(u) = A(u)/B(u) = (A1u + A0)/(B1u + B0), where A(u) and B(u) never turn into zero simultaneously. Observe that whenever w = f(u) and u, w ∈ [0, 1], V is indifferent between playing 0 and 1.
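The constants A1, A0, B1, B0 of Eq. (1) can be read off the payoff matrix mechanically. The sketch below adopts the index convention P[x][y][z] for the plays of U, V, W (our assumption for illustration) and spot-checks the identity at random points:

```python
import random

# Reading the constants A1, A0, B1, B0 of Eq. (1) off the payoff
# matrix.  P[x][y][z] is V's payoff when U plays x, V plays y and
# W plays z (this index convention is assumed for illustration).
def coefficients(P):
    # D[x][z] = payoff advantage of V's action 0 over action 1
    D = [[P[x][0][z] - P[x][1][z] for z in (0, 1)] for x in (0, 1)]
    B1 = D[1][1] - D[1][0] - D[0][1] + D[0][0]
    B0 = D[0][1] - D[0][0]
    A1 = D[0][0] - D[1][0]
    A0 = -D[0][0]
    return A1, A0, B1, B0

def payoff_diff(P, u, w):
    """P^0 - P^1, computed directly from the Bernoulli expectation."""
    total = 0.0
    for x in (0, 1):
        for z in (0, 1):
            pr = (u if x else 1 - u) * (w if z else 1 - w)
            total += pr * (P[x][0][z] - P[x][1][z])
    return total

# Spot-check P^0 - P^1 = w(B1*u + B0) - (A1*u + A0) on a random matrix:
random.seed(0)
P = [[[random.uniform(-1, 1) for _ in (0, 1)] for _ in (0, 1)] for _ in (0, 1)]
A1, A0, B1, B0 = coefficients(P)
for _ in range(100):
    u, w = random.random(), random.random()
    assert abs(payoff_diff(P, u, w) - (w*(B1*u + B0) - (A1*u + A0))) < 1e-9
```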
For any A ⊆ [0, 1]^2, we define a function f̂_V by f̂_V(A) = {(f(u), v) | (v, u) ∈ A}; note that f̂_V maps subsets of [0, 1]^2 to subsets of R×[0, 1]. Sometimes we drop the subscript V when it is clear from the context.

LEMMA 1. For any (w, v) ∈ [0, 1]×(0, 1) we have (w, v) ∈ B(W, V) if and only if there exists a u ∈ [0, 1] such that (v, u) ∈ B(V, U) and w = f(u).

PROOF. Fix an arbitrary v ∈ (0, 1). Suppose that U plays some u ∈ pbr_U(v), w = f(u) satisfies w ∈ [0, 1], and W plays w. There exists a vector of strategies v_1, ..., v_{j−1} = u, v_j = v such that for each V_k, k < j, its strategy is a best response to its neighbours' strategies. Since w = f(u), V is indifferent between playing 0 and 1; in particular, it can play v. Therefore, if we define v_{j+1} = w, the vector of strategies (v_1, ..., v_{j+1}) will satisfy the conditions in the definition of potential best response, i.e., we have v ∈ pbr_V(w).

Conversely, suppose v ∈ pbr_V(w) for some w ∈ [0, 1], v ≠ 0, 1. Then there exists a vector of strategies v_1, ..., v_{j−1}, v_j = v, v_{j+1} = w such that for each V_k, k ≤ j, its strategy is a best response to its neighbours' strategies. As v ≠ 0, 1, V is, in fact, indifferent between playing 0 and 1, which is only possible if w = f(v_{j−1}). Choose u = v_{j−1}; by construction, u ∈ pbr_U(v).

Lemma 1 describes the situations when V is indifferent between playing 0 and playing 1. However, to fully characterize B(W, V), we also need to know when V prefers a pure strategy.

Define f̂(0) = ∪_{u∈pbr_U(0)} R_u, where R_u = [f(u), +∞)×{0} if B(u) > 0, and R_u = (−∞, f(u)]×{0} if B(u) < 0; and define f̂(1) = ∪_{u∈pbr_U(1)} R_u, where R_u = [f(u), +∞)×{1} if B(u) < 0, and R_u = (−∞, f(u)]×{1} if B(u) > 0.

LEMMA 2.
For any w ∈ [0, 1], we have (w, 0) ∈ f̂(0) if and only if 0 ∈ pbr_V(w), and (w, 1) ∈ f̂(1) if and only if 1 ∈ pbr_V(w).

PROOF. Consider an arbitrary u_0 ∈ pbr_U(0). If B(u_0) > 0, for u = u_0 the inequality P^0 ≥ P^1 is equivalent to w ≥ f(u_0). Therefore, when U plays u_0 and W plays w, w ≥ f(u_0), V prefers to play 0; as u_0 ∈ pbr_U(0), it follows that 0 ∈ pbr_V(w). The argument for the case B(u_0) < 0 is similar.

Conversely, if 0 ∈ pbr_V(w) for some w ∈ [0, 1], there exists a vector (v_1, ..., v_{j−1}, v_j = 0, v_{j+1} = w) such that for each V_k, k ≤ j, V_k plays v_k, and this strategy is a best response to the strategies of V_k's neighbours. Note that for any such vector we have v_{j−1} ∈ pbr_U(0). By way of contradiction, assume (w, 0) ∉ ∪_{u∈pbr_U(0)} R_u. Then it must be the case that for any u_0 ∈ pbr_U(0) either f(u_0) < w and R_{u_0} = (−∞, f(u_0)]×{0}, or f(u_0) > w and R_{u_0} = [f(u_0), +∞)×{0}. In both cases, when V plays 0, U plays u_0, and W plays w, the inequality between f(u_0) and w is equivalent to P^0 < P^1, i.e., V would benefit from switching to 1.

The argument for f̂(1) is similar.

Together, Lemma 1 and Lemma 2 completely describe the set B(W, V): we have

B(W, V) = (f̂(0) ∪ f̂(B(V, U)) ∪ f̂(1)) ∩ [0, 1]^2.

It remains to show that B(W, V) can be represented as a union of at most (j + 4)^2 rectangles, has at most 2j + 4 event points, and can be computed in O(j^2) time.

Set u* = −B0/B1.² Consider an arbitrary rectangle R = [v_1, v_2]×[u_1, u_2] ⊆ B(V, U).
If u* ∉ [u_1, u_2], the function f(·) is continuous on [u_1, u_2] and hence f̂(R) = [f_min, f_max]×[v_1, v_2], where

f_min = min{f(u_1), f(u_2)},  f_max = max{f(u_1), f(u_2)},

i.e., in this case f̂(R) ∩ [0, 1]^2 consists of a single rectangle.

Now, suppose that R is intersected by the line [0, 1]×{u*}; as was noted earlier, there are at most 2j + 2 such rectangles. Suppose that lim_{u→u*−} f(u) = +∞; as f(·) is a fractional linear function, this implies that lim_{u→u*+} f(u) = −∞ and also f(u_1) > f(u_2). Since f(·) is continuous on [u_1, u*) and (u*, u_2], it is easy to see that

f̂([v_1, v_2]×[u_1, u*)) = [f(u_1), +∞)×[v_1, v_2]

and

f̂([v_1, v_2]×(u*, u_2]) = (−∞, f(u_2)]×[v_1, v_2],

i.e., in this case f̂(R) ∩ [0, 1]^2 consists of at most two rectangles. The case lim_{u→u*−} f(u) = −∞ is similar.

² The case B1 = 0 causes no special problems. For completeness, set u* to be any value outside of [0, 1] in this case.

Figure 4: f is increasing on (−∞, u*) and (u*, +∞).

As f̂(B(V, U)) = ∪_{R⊂B(V,U)} f̂(R), it follows that f̂(B(V, U)) consists of at most (j + 3)^2 + 2j + 2 rectangles. Also, it is easy to see that both f̂(0) and f̂(1) consist of at most 2 line segments each.
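The rectangle-by-rectangle mapping just described can be sketched as follows. This is a simplified illustration only: intervals may be unbounded, the result is not clipped to [0, 1]^2, and the degenerate case of a pole at an endpoint of [u_1, u_2] is ignored.

```python
import math

# Image of a rectangle R = [v1, v2] x [u1, u2] under the map
# f(u) = (A1*u + A0)/(B1*u + B0), returned as a list of
# (w-interval, v-interval) pairs.
def rect_image(v1, v2, u1, u2, A1, A0, B1, B0):
    def f(u):
        return (A1*u + A0) / (B1*u + B0)
    pole = -B0 / B1 if B1 != 0 else None        # u* where f blows up
    if pole is None or not (u1 <= pole <= u2):
        # f is continuous on [u1, u2]: a single rectangle
        a, b = f(u1), f(u2)
        return [((min(a, b), max(a, b)), (v1, v2))]
    # pole strictly inside: two rectangles, one per branch of f; the
    # sign of f' (constant, = sign of A1*B0 - A0*B1) fixes orientation
    left, right = f(u1), f(u2)
    if A1*B0 - A0*B1 > 0:                        # f increasing
        return [((left, math.inf), (v1, v2)),
                ((-math.inf, right), (v1, v2))]
    return [((-math.inf, left), (v1, v2)),       # f decreasing
            ((right, math.inf), (v1, v2))]

# Continuous case, f(u) = u:
assert rect_image(0.0, 1.0, 0.2, 0.6, 1, 0, 0, 1) == \
    [((0.2, 0.6), (0.0, 1.0))]
# Pole inside, f(u) = 1/(u - 0.5): the image splits into two pieces.
assert rect_image(0.0, 1.0, 0.0, 1.0, 0, 1, 1, -0.5) == \
    [((-math.inf, -2.0), (0.0, 1.0)), ((2.0, math.inf), (0.0, 1.0))]
```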
We conclude that B(W, V) can be represented as a union of at most (j + 3)^2 + 2j + 6 < (j + 4)^2 rectangles.

Moreover, if v is a V-event point of B(W, V), then v is a V-event point of B(V, U) (this includes the cases v = 0 and v = 1, as 0 and 1 are V-event points of B(V, U)), and if w is a W-event point of B(W, V), then either w = 0 or w = 1 or there exists some u ∈ [0, 1] such that w = f(u) and u is a U-event point of B(V, U). Hence, B(W, V) has at most 2j + 4 event points.

The O(j^2) bound on the running time in Theorem 5 follows from our description of the algorithm. The O(n^3) bound on the overall running time for finding a Nash equilibrium (and a representation of all Nash equilibria) follows.

4.1 Finding a Single Nash Equilibrium in O(n^2) Time

The upper bound on the running time of our algorithm is tight, at least assuming the straightforward implementation, in which each B(V_{j+1}, V_j) is stored as a union of rectangles: it is not hard to construct an example in which the size of B(V_{j+1}, V_j) is Ω(j^2). However, in some cases it is not necessary to represent all Nash equilibria; rather, the goal is to find an arbitrary equilibrium of the game. In this section, we show that this problem can be solved in quadratic time, thus obtaining a proof of Theorem 1. Our solution is based on the idea of [9], i.e., working with subsets of the best response policies rather than the best response policies themselves; following [9], we will refer to such subsets as breakpoint policies. While it is not always possible to construct a breakpoint policy as defined in [9], we show how to modify this definition so as to ensure that a breakpoint policy always exists; moreover, we prove that for a path graph, the breakpoint policy of any vertex can be stored in a data structure whose size is linear in the number of descendants this vertex has.

Definition 3.
A breakpoint policy B̂(V, U) for a vertex U whose parent is V is a non-self-intersecting curve of the form

X_1 ∪ Y_1 ∪ ··· ∪ Y_{m−1} ∪ X_m,

where X_i = [v_{i−1}, v_i]×{u_i}, Y_i = {v_i}×[u_i, u_{i+1}], and u_i, v_i ∈ [0, 1] for i = 0, ..., m. We say that a breakpoint policy is valid if v_0 = 0, v_m = 1, and B̂(V, U) ⊆ B(V, U).

We will sometimes abuse notation by referring to B̂(V, U) as a collection of segments X_i, Y_i rather than their union. Note that we do not require that v_i ≤ v_{i+1} or u_i ≤ u_{i+1}; consequently, in any argument involving breakpoint policies, all segments are to be treated as directed segments. Observe that any valid breakpoint policy B̂(V, U) can be viewed as a continuous 1-1 mapping γ(t) = (γ_v(t), γ_u(t)), γ : [0, 1] → [0, 1]^2, where γ(0) = (0, u_1), γ(1) = (1, u_m), and there exist some t_0 = 0, t_1, ..., t_{2m−2} = 1 such that {γ(t) | t_{2k} ≤ t ≤ t_{2k+1}} = X_{k+1} and {γ(t) | t_{2k+1} ≤ t ≤ t_{2k+2}} = Y_{k+1}.

As explained in Section 3, we can use a valid breakpoint policy instead of the best response policy during the downstream pass, and still guarantee that in the end, we will output a Nash equilibrium. Theorem 6 shows that one can inductively compute valid breakpoint policies for all vertices on the path; the proof of this theorem can be found in the full version of this paper [6].

THEOREM 6. For any V = V_j, one can find in polynomial time a valid breakpoint policy B̂(W, V) that consists of at most 2j + 1 segments.

5. NASH EQUILIBRIA ON GRAPHS WITH MAXIMUM DEGREE 2

In this section we show how the algorithm for paths can be applied to solve a game on any graph whose vertices have degree at most 2. A graph having maximum degree 2 is, of course, a union of paths and cycles.
Since each connected component can be handled independently, to obtain a proof of Theorem 2, we only need to show how to deal with cycles.

Given a cycle with vertices V_1, ..., V_k (in cyclic order), we make two separate searches for a Nash equilibrium: first we search for a Nash equilibrium where some vertex plays a pure strategy, then we search for a fully mixed Nash equilibrium, where all vertices play mixed strategies. For i ≤ k let v_i denote the probability that V_i plays 1.

The first search can be done as follows. For each i ∈ {1, ..., k} and each b ∈ {0, 1}, do the following.

1. Let P be the path (V_{i+1}, V_{i+2}, ..., V_k, V_1, ..., V_{i−1}, V_i).
2. Let the payoff to V_{i+1} be based on putting v_i = b (so it depends only on v_{i+1} and v_{i+2}).
3. Apply the upstream pass to P.
4. Put v_i = b and apply the downstream pass; for each vertex V_j, keep track of all possible mixed strategies v_j.
5. Check whether V_{i+1} has any responses that are consistent with v_i = b; if so, we have a Nash equilibrium. (Otherwise, there is no Nash equilibrium of the desired form.)

For the second search, note that if V_i plays a mixed strategy, then v_{i+1} and v_{i−1} satisfy an equation of the form v_{i+1} = (A0 + A1v_{i−1})/(B0 + B1v_{i−1}). Since all vertices in the cycle play mixed strategies, we also have an equation of the same form expressing v_{i+3} in terms of v_{i+1}. Composing the two fractional linear transforms, we obtain v_{i+3} = (A0′ + A1′v_{i−1})/(B0′ + B1′v_{i−1}) for some new constants A0′, A1′, B0′, B1′.

Choose any vertex V_i. We can express v_i in terms of v_{i+2}, then v_{i+4}, v_{i+6}, etc., and ultimately v_i itself, to obtain a quadratic equation (for v_i) that is simple to derive from the payoffs in the game. If the equation is non-trivial, it has at most 2 solutions in (0, 1). For an odd-length cycle all other v_j's are derivable from those solutions, and if a fully mixed Nash equilibrium exists, all the v_j should turn out to be real numbers in the range (0, 1).
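The composition step used above can be carried out mechanically by identifying v ↦ (A0 + A1v)/(B0 + B1v) with the coefficient matrix ((A1, A0), (B1, B0)), under which composition becomes 2×2 matrix multiplication. A sketch with invented coefficients:

```python
# A fractional linear map v -> (a1*v + a0)/(b1*v + b0), stored as
# the coefficient matrix ((a1, a0), (b1, b0)).
def apply_flt(M, v):
    (a1, a0), (b1, b0) = M
    return (a1*v + a0) / (b1*v + b0)

def compose(M, N):
    """compose(M, N) represents v -> apply_flt(M, apply_flt(N, v));
    it is exactly the 2x2 matrix product M * N."""
    (a1, a0), (b1, b0) = M
    (c1, c0), (d1, d0) = N
    return ((a1*c1 + a0*d1, a1*c0 + a0*d0),
            (b1*c1 + b0*d1, b1*c0 + b0*d0))

M = ((1.0, 1.0), (0.0, 1.0))   # v -> v + 1
N = ((2.0, 0.0), (0.0, 1.0))   # v -> 2v
assert apply_flt(compose(M, N), 3.0) == 7.0   # M(N(3)) = 2*3 + 1
```

Iterating `compose` around the cycle yields the single fractional linear relation whose fixed-point condition gives the quadratic equation for v_i mentioned above.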
For an even-length cycle, we obtain two quadratic equations, one for v_i and another for v_{i+1}, and we can in the same way test whether any solutions to these yield values for the other v_j, all of which lie in (0, 1).

If the quadratic equation is trivial, there is potentially a continuum of fully-mixed equilibria. The values for v_i that may occur in a Nash equilibrium are those for which all dependent v_j values lie in (0, 1); the latter condition is easy to check by computing the image of the interval (0, 1) under the respective fractional linear transforms.

6. FINDING EQUILIBRIA ON AN (ARBITRARY) TREE

For arbitrary trees, the general structure of the algorithm remains the same, i.e., one can construct a best response policy (or, alternatively, a breakpoint policy) for any vertex based on the best response policies of its children. We assume that the degree of each vertex is bounded by a constant K, i.e., the payoff matrix for each vertex is of size O(2^K).

Consider a vertex V whose children are U_1, ..., U_k and whose parent is W; the best response policy of each U_j is B(V, U_j). Similarly to the previous section, we can compute V's expected payoffs P^0 and P^1 from playing 0 or 1, respectively. Namely, when each of the U_j plays u_j and W plays w, we have P^0 = L^0(u_1, ..., u_k, w), P^1 = L^1(u_1, ..., u_k, w), where the functions L^0(·, ..., ·), L^1(·, ..., ·) are linear in all of their arguments. Hence, the inequality P^0 > P^1 can be rewritten as

wB(u_1, ..., u_k) > A(u_1, ..., u_k),

where both A(·, ..., ·) and B(·, ..., ·) are linear in all of their arguments. Set u = (u_1, ..., u_k) and define the indifference function f : [0, 1]^k → [0, 1] as f(u) = A(u)/B(u); clearly, if each U_j plays u_j, W plays w, and w = f(u), V is indifferent between playing 0 and 1.
For any X = X_1 × ··· × X_k, where X_i ⊆ [0, 1]^2, define

f̂(X) = {(f(u), v) | (v, u_i) ∈ X_i, i = 1, ..., k}.

Also, set

f̂(0) = {(w, 0) | ∃u s.t. u_i ∈ pbr_{U_i}(0) and wB(u) ≥ A(u)}

and

f̂(1) = {(w, 1) | ∃u s.t. u_i ∈ pbr_{U_i}(1) and wB(u) ≤ A(u)}.

As in the previous section, we can show that B(W, V) is equal to

(f̂(0) ∪ f̂(B(V, U_1) × ··· × B(V, U_k)) ∪ f̂(1)) ∩ [0, 1]^2;

also, any path from w = 0 to w = 1 that is a subset of B(W, V) constitutes a valid breakpoint policy.

6.1 Exponential Size Breakpoint Policy

While the algorithm of Section 4 can be generalized to bounded-degree trees, its running time is no longer polynomial. Indeed, we can construct a family of trees and payoff matrices for all players so that the best response policies for some of the players consist of an exponential number of segments. Moreover, in our example the breakpoint policies coincide with the best response policies, which means that even finding a single Nash equilibrium using the approach of [8, 9] is going to take exponential time.
In fact, a stronger statement is true: for any polynomial-time two-pass algorithm (defined later) that works with subsets of best response policies for this graph, we can choose the payoffs of the vertices so that the downstream pass of this algorithm will fail.

Figure 5: The tree T_n that corresponds to an exponential-size breakpoint policy.

In the rest of this subsection, we describe this construction. Consider the tree T_n given by Figure 5; let V_n be the root of this tree. For every k = 1, ..., n, let the payoffs of S_k and T_k be the same as those for the U and V described in Section 3; recall that the breakpoint policies for U and V are shown in Figure 2. It is not hard to see that the indifference function for T_k is given by f(s) = .8s + .1. The payoff of V_0 is 1 if V_1 selects the same action as V_0 and 0 otherwise; V_0's best response policy is given by Figure 6.

LEMMA 3. Fix k < n, and let u, t, v, and w denote the strategies of V_{k−1}, T_k, V_k, and V_{k+1}, respectively. Suppose that V_k prefers playing 0 to playing 1 if and only if .5t + .1u + .2 > w. Then B(V_{k+1}, V_k) consists of at least 3^k segments. Moreover,

{(v, w) | (v, w) ∈ B(V_{k+1}, V_k), 0 ≤ w ≤ .2} = [0, .2]×{0}

and

{(v, w) | (v, w) ∈ B(V_{k+1}, V_k), .8 ≤ w ≤ 1} = [.8, 1]×{1}.

PROOF. The proof proceeds by induction on k. For k = 0, the statement is obvious.
Now, suppose it is true for B(V_k, V_{k−1}). One can view B(V_{k+1}, V_k) as a union of seven components: f̂(0) ∩ [0, 1]×{0}, f̂(1) ∩ [0, 1]×{1}, and five components that correspond to the segments of B(V_k, T_k). Let us examine them in turn.

To describe f̂(0) ∩ [0, 1]×{0}, note that f(u, t) = .5t + .1u + .2 is monotone in t and u and satisfies f(0, 0) = .2. Also, we have pbr_{V_{k−1}}(0) = {0} and pbr_{T_k}(0) = {0}. For any w ∈ [0, 1] we have f(0, 0) ≥ w if and only if w ∈ [0, .2]. We conclude that f̂(0) ∩ [0, 1]×{0} = [0, .2]×{0}. Similarly, it follows that f̂(1) ∩ [0, 1]×{1} = [.8, 1]×{1}.

Define

S_1 = {(f(u, 0), v) | (v, u) ∈ B(V_k, V_{k−1}) ∩ [0, .9]×[0, 1]},
S_2 = {(f(u, .5), v) | (v, u) ∈ B(V_k, V_{k−1}) ∩ [.1, .9]×[0, 1]},
S_3 = {(f(u, 1), v) | (v, u) ∈ B(V_k, V_{k−1}) ∩ [.1, 1]×[0, 1]};

these sets correspond to the horizontal segments of B(V_k, T_k).

It is easy to see that S_1, S_2, S_3 ⊂ B(V_{k+1}, V_k). Since f is a continuous function, the number of segments in each S_i is at least the number of segments in B(V_k, V_{k−1}) ∩ [.1, .9]×[0, 1], which is at least 3^{k−1} by the induction hypothesis. Moreover, as f is monotone in u and f(1, 0) < f(0, .5) < f(1, .5) < f(0, 1), the sets S_i, i = 1, 2, 3, are disjoint.

Finally, the set B(V_{k+1}, V_k) contains two segments that correspond to the vertical segments of B(V_k, T_k), i.e.,

S_4 = {(f(0, t), .1) | t ∈ [.5, 1]} = [.45, .7]×{.1} and
S_5 = {(f(1, t), .9) | t ∈ [0, .5]} = [.3, .55]×{.9}.

Clearly, S_4 connects S_2 and S_3, S_5 connects S_1 and S_2, and S_4 and S_5 do not intersect each other.
We conclude that B(V_{k+1}, V_k) is a continuous line that consists of at least 3^k segments and satisfies the condition of the lemma.

Figure 6: Breakpoint policies for V_0 and V_1.

To complete the construction, we need to show that we can design the payoff matrix for V_k so that it prefers playing 0 to playing 1 if and only if .5t + .1u + .2 > w. To this end, we prove a more general statement, namely, that the indifference function of a vertex can be an arbitrary fractional multilinear function of its descendants' strategies.

We say that a function of k variables is multilinear if it can be represented as a sum of monomials and each of these monomials is linear in all of its variables. Note that this definition is different from a more standard one in that we do not require that all of the monomials have the same degree. Recall that the payoffs of a vertex with k + 1 neighbours are described by matrices P^0 and P^1, where P^j_{i_0 i_1 ... i_k} is the payoff that V gets when it plays j and its neighbours play i_0, ..., i_k, where j, i_0, ..., i_k ∈ {0, 1}. Let P[j] = P[j](w, u_1, ..., u_k) be the expected payoff obtained by this vertex when it plays j and the (mixed) strategies of its neighbours are given by a vector (w, u_1, ..., u_k), i.e., P[j] = E[P^j_{i_0 i_1 ... i_k}], where i_0, ..., i_k are independent Bernoulli random variables, each of which is 1 with the respective probabilities w, u_1, ..., u_k.

LEMMA 4. Given a tree vertex V whose parent is W and whose children are U_1, ..., U_k, for any function f = f(u_1, ..., u_k) that can be represented as a ratio of two multilinear functions f_1, f_2, i.e., f = f_1(u_1, ..., u_k)/f_2(u_1, ..., u_k), there exist payoff matrices P^0 and P^1 for V such that

P[0] − P[1] = wf_2(u_1, ..., u_k) − f_1(u_1, ...
, uk).\nThe proof of this lemma is based on the fact that every monomial\nof the form as(u0)s0\n. . . (uk)sk , s1, . . . , sk \u2208 {0, 1}, can be\nrepresented as\nt=t0...tk\u2208\u03a3k+1\nCt(u0)t0\n(1 \u2212 u0)1\u2212t0\n. . . (uk)tk\n(1 \u2212 uk)1\u2212tk\nfor some Ct, t \u2208 {0, 1}k+1\n. The details can be found in the full\nversion of this paper [6].\n6.2 Irreducibility of the Best Response Policy\nfor Tn\nWhile the best response policy constructed in the previous\nsubsection has exponential size, it is not clear `a priori that it is\nnecessary to keep track of all of its line segments rather than to focus\non a small subset of these segments. However, it turns out that for\ntwo-pass algorithms such as the algorithm of [8], the best response\npolicy cannot be simplified. More precisely, we say that an\nalgorithm A is a two-pass algorithm if\n0\n0\n0\n00\n0\n0\n0\n0\n00\n0\n0\n0\n0\n00\n0\n0\n0\n0\n1\n1\n1\n11\n1\n1\n1\n1\n11\n1\n1\n1\n1\n11\n1\n1\n1\n1\n0000000000000000000000000000011111111111111111111111111111\n0.2 0.8\n0.9\n1\n0.1\n1\nV\nV\n2\n3\nS 1 S S\nS\nS\n1\nT 0\nT\n2 3\n4\n5\nFigure 7: Breakpoint policy for V2.\n\u2022 A consists of an upstream pass and a downstream pass.\n\u2022 During the upstream pass, for each vertex V with parent W ,\nA constructs a set BB(W, V ) \u2286 B(W, V ). 
This set is produced from the sets {BB(V, U) | U is a child of V} by applying the procedure from the beginning of Section 6 (substituting BB(V, U_j) for B(V, U_j) for all children U_j of V), and then possibly omitting some of the points of the resulting set (which is then stored explicitly).
• The downstream pass is identical to the downstream pass of [8] as described in Section 2, except that it operates on the sets BB(W, V) rather than on the sets B(W, V).
Theorem 7 demonstrates that any two-pass algorithm will fail during the downstream pass on T_n if there is an index j such that the set BB(V_{j+1}, V_j) omits any interior point of any of the (at least 3^j) segments of B(V_{j+1}, V_j). This implies Theorem 3.
THEOREM 7. For any two-pass algorithm A for which there exists an index j, j ∈ [1, n/4], a segment S of B(V_j, V_{j-1}), and an interior point (x, y) of S such that BB(V_j, V_{j-1}) does not contain (x, y), we can choose the payoff matrices of the vertices V_j, ..., V_n so that the downstream pass of A will fail, and, additionally, the payoffs to V_{4j}, ..., V_n are identically 0.
We sketch the proof of Theorem 7; the details can be found in the full version of this paper [6]. We proceed by induction. For j = 1, the argument is similar to that in Section 3. For the inductive step, the main idea is that we can zoom in on any part of a best response policy (including the part that was omitted!) by using an appropriate indifference function; this allows us to reduce the case j = j_0 to j = j_0 - 1.
7. PPAD-COMPLETENESS OF BOUNDED PATHWIDTH GRAPHICAL GAMES
In the previous section, we showed that for graphical games on trees that are almost but not quite paths, two-pass algorithms fail to find Nash equilibria in polynomial time.
We next show that a milder path-like graph property allows us to construct graphical games for which it is unlikely that any polynomial-time algorithm will find Nash equilibria.
7.1 Pathwidth
A path decomposition of a graph G = (V, E) is a sequence of subsets S_i(V) ⊆ V such that for each edge (v, v′) ∈ E we have v, v′ ∈ S_i(V) for some i, and furthermore, for each v ∈ V, if v ∈ S_i(V) and v ∈ S_j(V) for j > i, then v ∈ S_k(V) for all i ≤ k ≤ j. The path decomposition has width k if all sets S_i(V) have cardinality at most k + 1. The pathwidth of G is the minimum width of any path decomposition of G.
Pathwidth is a restriction of treewidth (in which one would seek a tree whose vertices are the sets S_i(V), and the sets containing any given vertex would have to form a subtree). For any constant k, it can be decided in polynomial time whether a graph has pathwidth (or treewidth) k. Furthermore, many graph-theoretic problems seem easier to solve in polynomial time when restricted to graphs of fixed treewidth or pathwidth; see [1] for an overview. Note that a path has pathwidth 1 and a cycle has pathwidth 2.
7.2 PPAD-completeness
We review some basic definitions from the computational complexity theory of search problems. A search problem associates any input (here, a graphical game) with a set of solutions (here, the Nash equilibria of the input game), where the description length of any solution should be polynomially bounded as a function of the description length of its input. In a total search problem, there is a guarantee that at least one solution exists for any input. Nash's theorem assures us that the problem of finding Nash equilibria is total.
A reduction from search problem S to problem S′ is a mechanism that shows that any polynomial-time algorithm for S′ implies a polynomial-time algorithm for S.
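The two conditions in the path-decomposition definition from Section 7.1 can be checked mechanically; a minimal sketch (Python; the helper names and the path/triangle examples are ours, used only to illustrate that a path admits width 1 and a cycle width 2):

```python
def is_path_decomposition(n, edges, bags):
    """Check the path-decomposition conditions for a graph on vertices 0..n-1:
    every edge lies in some bag, and the bags containing any given vertex
    form a contiguous interval of the sequence."""
    # condition 1: every edge is contained in some bag
    if not all(any(u in b and v in b for b in bags) for (u, v) in edges):
        return False
    # condition 2: occurrences of each vertex are contiguous
    for v in range(n):
        idx = [i for i, b in enumerate(bags) if v in b]
        if idx and idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

def width(bags):
    # width = (largest bag size) - 1
    return max(len(b) for b in bags) - 1

# a path 0-1-2-3 has a width-1 decomposition ...
path_bags = [{0, 1}, {1, 2}, {2, 3}]
assert is_path_decomposition(4, [(0, 1), (1, 2), (2, 3)], path_bags)
assert width(path_bags) == 1

# ... while a triangle (a cycle) admits a width-2 decomposition
cycle_bags = [{0, 1, 2}]
assert is_path_decomposition(3, [(0, 1), (1, 2), (2, 0)], cycle_bags)
assert width(cycle_bags) == 2
```

Finding a minimum-width decomposition is of course the hard part; the sketch only verifies a given one.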
It consists of functions f and g, computable in polynomial time, where f maps inputs of S to inputs of S′, and g maps solutions of S′ to solutions of S, in such a way that if I_S is an input to S, and S_{S′} is a solution to f(I_S), then g(S_{S′}) is a solution to I_S.
Observe that total search problems do not allow the above reductions from problems such as CIRCUIT SAT (where the input is a boolean circuit, and solutions are input vectors that make the output true), due to the fact that CIRCUIT SAT and other NP-complete problems have inputs with empty solution sets. Instead, recent work on the computational complexity of finding a Nash equilibrium [7, 4, 5, 2, 3] has related it to the following problem.
Definition 4. END OF THE LINE. Input: boolean circuits S and P, each having n input and n output bits, where P(0^n) = 0^n and S(0^n) ≠ 0^n. Solution: x ∈ {0,1}^n such that S(x) = x, or alternatively x ∈ {0,1}^n such that P(S(x)) ≠ x.
S and P can be thought of as standing for successor and predecessor. Observe that by computing S^i(0^n) (for i = 0, 1, 2, ...) and comparing with P(S^{i+1}(0^n)), we must eventually find a solution to END OF THE LINE. END OF THE LINE characterizes the complexity class PPAD (standing for parity argument on a graph, directed version), introduced in Papadimitriou [11], and any search problem S is PPAD-complete if END OF THE LINE reduces to S. Other PPAD-complete problems include the search for a ham sandwich hyperplane, and finding market equilibria in an exchange economy (see [11] for more detailed descriptions of these problems).
3-GRAPHICAL NASH is the problem of finding a Nash equilibrium for a graphical game whose graph has degree 3. Daskalakis et al. [4] show PPAD-completeness of 3-GRAPHICAL NASH by a reduction from 3-DIMENSIONAL BROUWER, introduced in [4] and defined as follows.
Definition 5. 3-DIMENSIONAL BROUWER. Input: a circuit C having 3n input bits and 2 output bits.
The input bits define a cubelet of the unit cube, consisting of the 3 coordinates of its points, given to n bits of precision. The output represents one of four colours assigned by C to the cubelet. C is restricted so as to assign colour 1 to cubelets adjacent to the (y, z)-plane, colour 2 to remaining cubelets adjacent to the (x, z)-plane, colour 3 to remaining cubelets adjacent to the (x, y)-plane, and colour 0 to all other cubelets on the surface of the unit cube.
A solution is a panchromatic vertex, a vertex adjacent to cubelets that have 4 distinct colours.
The reason why a solution is guaranteed to exist is that an associated Brouwer function φ can be constructed, i.e., a continuous function from the unit cube to itself, such that panchromatic vertices correspond to fixpoints of φ. Brouwer's Fixpoint Theorem promises the existence of a fixpoint.
The proof of Theorem 4 uses a modification of the reduction of [4] from 3-DIMENSIONAL BROUWER to 3-GRAPHICAL NASH. To prove the theorem, we begin with some preliminary results, as follows. Each player has 2 actions, denoted 0 and 1. For a player at vertex V, let p[V] denote the probability that the player plays 1.
LEMMA 5. [7] There exists a graphical game G_shift of fixed size having vertices V, V′ where p[V′] is the fractional part of 2p[V].
COROLLARY 1. There exists a graphical game G_{n-shift} of size Θ(n) and constant pathwidth, having vertices V, V_n where p[V_n] is the fractional part of 2^n · p[V].
PROOF. Make a chain of n copies of G_shift from Lemma 5. Each subset of vertices in the path decomposition is the set of vertices in one copy of G_shift.
Let I_n(x) denote the n-th bit of the binary expansion of x, where we interpret 1 as true and 0 as false. The following uses gadgets from [7, 4].
COROLLARY 2.
There exists k such that for all n, and for all n_1, n_2, n_3 ≤ n, there exists a graphical game of size O(n) with pathwidth k, having vertices V_1, V_2, V_3 where p[V_3] = p[V_1] + 2^{-n_3}·(I_{n_1}(p[V_1]) ∧ I_{n_2}(p[V_2])).
PROOF OF THEOREM 4. Let C be the boolean circuit describing an instance of 3-DIMENSIONAL BROUWER. Let g_1, ..., g_{p(n)} be the gates of C, indexed in such a way that the input(s) to any gate are the output(s) of lower-indexed gates; g_1, ..., g_{3n} will be the 3n inputs to C.
All players in the graphical game G constructed in [4] have 2 actions, denoted 0 and 1. The probability that V plays 1 is denoted p[V]. G has 3 players V_x, V_y and V_z for which p[V_x], p[V_y] and p[V_z] represent the coordinates of a point in the unit cube. G is designed to incentivize V_x, V_y and V_z to adjust their probabilities in directions given by a Brouwer function which is itself specified by the circuit C. In a Nash equilibrium, p[V_x], p[V_y] and p[V_z] represent the coordinates of a fixpoint of a function that belongs to the class of functions represented by 3-DIMENSIONAL BROUWER.
For 1 ≤ i ≤ p(n) we introduce a vertex V_C^{(i)} such that for 1 ≤ j ≤ i, I_j(p[V_C^{(i)}]) is the output of gate g_j, and for i < j ≤ p(n), I_j(p[V_C^{(i)}]) is 0.
We construct V_C^{(i)} from V_C^{(i-1)} using Corollary 2; let G^{(i)} be the graphical game that does this. Let S_1(G^{(i)}), ..., S_n(G^{(i)}) be a length-n path decomposition of G^{(i)}, where V_C^{(i-1)} ∈ S_1(G^{(i)}) and V_C^{(i)} ∈ S_n(G^{(i)}). Then a path decomposition of ∪_{1≤i≤p(n)} G^{(i)} is obtained by taking the union of the separate path decompositions, together with S_n(G^{(i-1)}) ∪ S_1(G^{(i)}) for 2 ≤ i ≤ p(n).
Let G_C be the above graphical game that simulates C. G_C has 3n inputs, consisting of the first n bits of the binary expansions of p[V_x], p[V_y] and p[V_z]. Similarly to [4], the output of G_C affects V_x, V_y and V_z as follows.
Colour 0 incentivizes V_x, V_y and V_z to adjust their probabilities p[V_x], p[V_y] and p[V_z] in the direction (-1, -1, -1); colour 1 incentivizes them to move in direction (1, 0, 0); colour 2, direction (0, 1, 0); and colour 3, direction (0, 0, 1).
We need to ensure that at points on the boundaries of adjacent cubelets, the change of direction will be approximately the average of the directions of surrounding points. That way, all four colours/directions must be nearby so that they can cancel each other out (and we are at a panchromatic vertex). This is achieved using the same trick as [4], in which we make a constant number M of copies of G_C, which differ in that each copy adds a tiny displacement vector to its copies of p[V_x], p[V_y] and p[V_z] (which are derived from the originals using the addition gadget of [7]). Using the addition and multiplication gadgets of [7], we average the directions and add a small multiple of this average to (p[V_x], p[V_y], p[V_z]).
At a Nash equilibrium the outputs of each copy will cancel each other out. The pathwidth of the whole game is at most M times the pathwidth of G_C.
8. OPEN PROBLEMS
The most important problem left open by this paper is whether it is possible to find a Nash equilibrium of a graphical game on a bounded-degree tree in polynomial time. Our construction shows that any two-pass algorithm that explicitly stores breakpoint policies needs exponential time and space. However, it does not preclude the existence of an algorithm that is based on a similar idea but, instead of computing the entire breakpoint policy for each vertex, uses a small number of additional passes through the graph to decide which (polynomial-sized) parts of each breakpoint policy should be computed.
In particular, such an algorithm may be based on the approximation algorithm of [8], where the value of the accuracy parameter ε is chosen adaptively.
Another intriguing question is related to the fact that the graph for which we constructed an exponential-sized breakpoint policy has pathwidth 2, while our positive results are for a path, i.e., a graph of pathwidth 1. It is not clear if for any bounded-degree graph of pathwidth 1 the running time of (the breakpoint-policy-based version of) our algorithm will be polynomial. In particular, it is instructive to consider a caterpillar graph, i.e., the graph that can be obtained from T_n by deleting the vertices S_1, ..., S_n. For this graph, the best response policy of a vertex V_k in the spine of the caterpillar is obtained by combining the best response policy of its predecessor on the spine, V_{k-1}, and that of its other child, T_k; since the latter is a leaf, its best response policy is either trivial (i.e., [0,1]^2, [0,1]×{0}, or [0,1]×{1}) or consists of two horizontal segments and one vertical segment of the form {α}×[0,1] that connects them. Assuming for convenience that
B(V_k, T_k) = [0, α]×{0} ∪ {α}×[0, 1] ∪ [α, 1]×{1},
and that f is the indifference function for V_k, we observe that the best response policy for V_k consists of 5 components: f̂(0), f̂(1), and three components that correspond to [0, α]×{0}, {α}×[0, 1], and [α, 1]×{1}.
Hence, one can think of constructing B(V_{k+1}, V_k) as the following process: turn B(V_k, V_{k-1}) by π/2, cut it along the (now horizontal) line v_k = α, apply a fractional linear transform to the horizontal coordinate of both parts, and reconnect them using the image of the segment {α}×[0, 1] under f.
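The cut-and-reconnect step just described can be prototyped on axis-parallel segments. A minimal sketch (Python; we simplify by shifting one side by δ instead of applying a fractional linear transform, and we arbitrarily treat segments on the cut line as part of the shifted side — this only demonstrates the bookkeeping, not the exponential lower bound):

```python
def cut_and_shift(segments, x_cut, delta):
    """Cut the plane along the vertical line x = x_cut, shift the part with
    x >= x_cut up by delta, and reconnect each split horizontal segment with
    a new vertical segment.
    Horizontal segments: ('h', x1, x2, y); vertical segments: ('v', x, y1, y2)."""
    out = []
    for s in segments:
        if s[0] == 'h':
            _, x1, x2, y = s
            if x1 < x_cut < x2:
                # split into two parts and reconnect them
                out += [('h', x1, x_cut, y),
                        ('v', x_cut, y, y + delta),
                        ('h', x_cut, x2, y + delta)]
            elif x1 >= x_cut:
                out.append(('h', x1, x2, y + delta))   # entirely on shifted side
            else:
                out.append(s)                          # entirely on fixed side
        else:
            _, x, y1, y2 = s
            out.append(('v', x, y1 + delta, y2 + delta) if x >= x_cut else s)
    return out

segs = [('h', 0.0, 1.0, 0.0)]          # start from the segment [0, 1] x {0}
for i in range(1, 4):                  # three cuts, at x = .25, .5, .75
    segs = cut_and_shift(segs, i / 4, 0.1)
print(len(segs))                       # each cut splits one segment here, adding 2
```

Faster growth requires cuts that cross many horizontal segments at once; engineering such cut sequences is exactly what the construction behind the Θ(c^n) answer below does.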
This implies that the problem of bounding the size of the best response policy (or, alternatively, the breakpoint policy) can be viewed as a generalization of the following computational geometry problem, which we believe may be of independent interest:
PROBLEM 1. Given a collection of axis-parallel segments in R^2, consider the following operation: pick an axis-parallel line ℓ_i (either vertical or horizontal), cut the plane along this line, and shift one of the resulting two parts by an arbitrary amount δ_i; as a result, some segments will be split into two parts. Reconnect these parts, i.e., for each segment of the form [a, b]×{c} that was transformed into [a, t]×{c + δ_i} and [t, b]×{c}, introduce a segment {t}×[c, c + δ_i]. Is it possible to start with the segment [0, 1] and after n operations obtain a set that cannot be represented as a union of poly(n) line segments? If yes, can it be the case that in this set there is no path with a polynomial number of turns that connects the endpoints of the original segment?
It turns out that in general the answer to the first question is positive, i.e., after n steps it is possible to obtain a set that consists of Θ(c^n) segments for some c > 1. This implies that even for a caterpillar, the best response policy can be exponentially large. However, in our example (which is omitted from this version of the paper due to space constraints), there exists a polynomial-size path through the best response policy, i.e., it does not prove that the breakpoint policy is necessarily exponential in size. If one can prove that this is always the case, it may be possible to adapt this proof to show that there can be an exponential gap between the sizes of best response policies and breakpoint policies.
9. REFERENCES
[1] H. Bodlaender and T. Kloks. Efficient and constructive algorithms for the pathwidth and treewidth of graphs.
Journal of Algorithms, 21:358-402, 1996.
[2] X. Chen and X. Deng. 3-NASH is PPAD-complete. Technical Report TR-05-134, Electronic Colloquium on Computational Complexity, 2005.
[3] X. Chen and X. Deng. Settling the complexity of 2-player Nash equilibrium. Technical Report TR-05-140, Electronic Colloquium on Computational Complexity, 2005.
[4] C. Daskalakis, P. Goldberg, and C. Papadimitriou. The complexity of computing a Nash equilibrium. In Proceedings of the 38th ACM Symposium on Theory of Computing, 2006.
[5] C. Daskalakis and C. Papadimitriou. Three-player games are hard. Technical Report TR-05-139, Electronic Colloquium on Computational Complexity, 2005.
[6] E. Elkind, L. Goldberg, and P. Goldberg. Nash equilibria in graphical games on trees revisited. Technical Report TR-06-005, Electronic Colloquium on Computational Complexity, 2006.
[7] P. Goldberg and C. Papadimitriou. Reducibility among equilibrium problems. In Proceedings of the 38th ACM Symposium on Theory of Computing, 2006.
[8] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, 2001.
[9] M. Littman, M. Kearns, and S. Singh. An efficient exact algorithm for singly connected graphical games. In Proceedings of the 15th Annual Conference on Neural Information Processing Systems, 2001.
[10] L. Ortiz and M. Kearns. Nash propagation for loopy graphical games. In Proceedings of the 17th Annual Conference on Neural Information Processing Systems, 2003.
[11] C. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3):498-532, 1994.", "keywords": "nash equilibrium;bounded-degree tree;dynamic programming-based algorithm;degree;response policy;large-scale distributed network;graphical game;downstream pass;breakpoint policy;ppad-completeness;generic algorithm"}
-{"name": "test_J-4", "title": "Revenue Analysis of a Family of Ranking Rules for Keyword Auctions", "abstract": "Keyword auctions lie at the core of the business models of today's leading search engines. Advertisers bid for placement alongside search results, and are charged for clicks on their ads. Advertisers are typically ranked according to a score that takes into account their bids and potential click-through rates. We consider a family of ranking rules that contains those typically used to model Yahoo! and Google's auction designs as special cases. We find that in general neither of these is necessarily revenue-optimal in equilibrium, and that the choice of ranking rule can be guided by considering the correlation between bidders' values and click-through rates. We propose a simple approach to determine a revenue-optimal ranking rule within our family, taking into account effects on advertiser satisfaction and user experience. We illustrate the approach using Monte-Carlo simulations based on distributions fitted to Yahoo! bid and click-through rate data for a high-volume keyword.", "fulltext": "1. INTRODUCTION
Major search engines like Google, Yahoo!, and MSN sell advertisements by auctioning off space on keyword search results pages. For example, when a user searches the web for iPod, the highest-paying advertisers (for example, Apple or Best Buy) for that keyword may appear in a separate sponsored section of the page above or to the right of the algorithmic results. The sponsored results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a web page. Generally, advertisements that appear in a higher position on the page garner more attention and more clicks from users.
Thus, all else being equal, advertisers prefer higher positions to lower positions.
Advertisers bid for placement on the page in an auction-style format where the larger their bid, the more likely their listing will appear above other ads on the page. By convention, sponsored search advertisers generally bid and pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. Overture Services, formerly GoTo.com and now owned by Yahoo! Inc., is credited with pioneering sponsored search advertising. Overture's success prompted a number of companies to adopt similar business models, most prominently Google, the leading web search engine today. Microsoft's MSN, previously an affiliate of Overture, now operates its own keyword auction marketplace. Sponsored search is one of the fastest growing, most effective, and most profitable forms of advertising, generating roughly $7 billion in revenue in 2005 after nearly doubling every year for the previous five years.
The search engine evaluates the advertisers' bids and allocates the positions on the page accordingly. Notice that, although bids are expressed as payments per click, the search engine cannot directly allocate clicks, but rather allocates impressions, or placements on the screen; clicks relate only stochastically to impressions. Until recently, Yahoo! ranked bidders in decreasing order of advertisers' stated values per click, while Google ranks in decreasing order of advertisers' stated values per impression. In Google's case, value per impression is computed by multiplying the advertiser's (per-click) bid by the advertisement's expected click-through rate, where this expectation may consider a number of unspecified factors including historical click-through rate, position on the page, advertiser identity, user identity, and the context of other items on the page.
We refer to these rules as rank-by-bid and rank-by-revenue, respectively.¹
Footnote 1: These are industry terms. We will see, however, that rank-by-revenue is not necessarily revenue-optimal.
We analyze a family of ranking rules that contains the Yahoo! and Google models as special cases. We consider ranking rules in which bidders are ranked in decreasing order of score e^q·b, where e denotes an advertiser's click-through rate (normalized for position) and b his bid. Notice that q = 0 corresponds to Yahoo!'s rank-by-bid rule and q = 1 corresponds to Google's rank-by-revenue rule. Our premise is that bidders are playing a symmetric equilibrium, as defined by Edelman, Ostrovsky, and Schwarz [3] and Varian [11].
We show through simulation that although q = 1 yields the efficient allocation, settings of q considerably less than 1 can yield superior revenue in equilibrium under certain conditions. The key parameter is the correlation between advertiser value and click-through rate: if this correlation is strongly positive, then smaller q are revenue-optimal. Our simulations are based on distributions fitted to data from Yahoo! keyword auctions. We propose that search engines set thresholds of acceptable loss in advertiser satisfaction and user experience, and then choose the revenue-optimal q consistent with these constraints. We also compare the potential gains from tuning q with the gains from setting reserve prices, and find that the former may be much more significant.
In Section 2 we give a formal model of keyword auctions, and establish its equilibrium properties in Section 3. In Section 4 we note that giving agents bidding credits can have the same effect as tuning the ranking rule explicitly. In Section 5 we give a general formulation of the optimal keyword auction design problem as an optimization problem, in a manner analogous to the single-item auction setting.
We then provide some theoretical insight into how tuning q can improve revenue, and why the correlation between bidders' values and click-through rates is relevant. In Section 6 we consider the effect of q on advertiser satisfaction and user experience. In Section 7 we describe our simulations and interpret their results.
Related work. As mentioned, the papers of Edelman et al. [3] and Varian [11] lay the groundwork for our study. Both papers independently define an appealing refinement of Nash equilibrium for keyword auctions and analyze its equilibrium properties; they called this refinement locally envy-free equilibrium and symmetric equilibrium, respectively. Varian also provides some empirical analysis.
The general model of keyword auctions used here, where bidders are ranked according to a weight times their bid, was introduced by Aggarwal, Goel, and Motwani [1]. That paper also makes a connection between the revenue of keyword auctions in incomplete-information settings and the revenue in symmetric equilibrium. Iyengar and Kumar [5] study the optimal keyword auction design problem in a setting of incomplete information, and also make the connection to symmetric equilibrium. We make use of this connection when formulating the optimal auction design problem in our setting.
The work most closely related to ours is that of Feng, Bhargava, and Pennock [4]. They were the first to realize that the correlation between bidder values and click-through rates should be a key parameter affecting the revenue performance of various ranking mechanisms. For simplicity, they assume bidders bid their true values, so their model is very different from ours and consequently so are their findings.
According to their simulations, rank-by-revenue always (weakly) dominates rank-by-bid in terms of revenue, whereas our results suggest that rank-by-bid may do much better for negative correlations.
Lahaie [8] gives an example that suggests rank-by-bid should yield more revenue when values and click-through rates are positively correlated, whereas rank-by-revenue should do better when the correlation is negative. In this work we make a deeper study of this conjecture.
2. MODEL
There are K positions to be allocated among N bidders, where N > K. We assume that the (expected) click-through rate of bidder s in position t is of the form e_s·x_t, i.e., separable into an advertiser effect e_s ∈ [0,1] and a position effect x_t ∈ [0,1]. We assume that x_1 > x_2 > ... > x_K > 0 and let x_t = 0 for t > K. We also refer to e_s as the relevance of bidder s. It is useful to interpret x_t as the probability that an ad in position t will be noticed, and e_s as the probability that it will be clicked on if noticed.
Bidder s has value v_s for each click. Bidders have quasilinear utility, so that the utility to bidder s of obtaining position t at a price of p per click is
e_s x_t (v_s - p).
A weight w_s is associated with agent s, and agents bid for position. If agent s bids b_s, his corresponding score is w_s b_s. Agents are ranked by score, so that the agent with the highest score is ranked first, and so on. We assume throughout that agents are numbered such that agent s obtains position s. An agent pays per click the lowest bid necessary to retain his position, so that the agent in slot s pays (w_{s+1}/w_s)·b_{s+1}. The auctioneer may introduce a reserve score r, so that an agent's ad appears only if his score is at least r; for agent s, this translates into a reserve price (minimum bid) of r/w_s.
3. EQUILIBRIUM
We consider the pure-strategy Nash equilibria of the auction game. This is a full-information concept.
The motivation for this choice is that in a keyword auction, bidders are allowed to continuously adjust their bids over time, and hence obtain estimates of their profits in various positions. As a result it is reasonable to assume that if bids stabilize, bidders should be playing best responses to each other's bids [2, 3, 11]. Formally, in a Nash equilibrium of this game the following inequalities hold:
e_s x_s (v_s - (w_{s+1}/w_s) b_{s+1}) ≥ e_s x_t (v_s - (w_{t+1}/w_s) b_{t+1})  for all t > s,   (1)
e_s x_s (v_s - (w_{s+1}/w_s) b_{s+1}) ≥ e_s x_t (v_s - (w_t/w_s) b_t)  for all t < s.   (2)
Inequalities (1) and (2) state that bidder s does not prefer a lower or a higher position to his own, respectively. It can be hard to derive any theoretical insight into the properties of these Nash equilibria: multiple allocations of positions to bidders can potentially arise in equilibrium [2]. Edelman, Ostrovsky, and Schwarz [3] introduced a refinement of Nash equilibrium called locally envy-free equilibrium that is more tractable to analyze; Varian [11] independently proposed this solution concept and called it symmetric equilibrium. In a symmetric equilibrium, inequality (1) holds for all s, t rather than just for t > s. So for all s and all t ≠ s, we have
e_s x_s (v_s - (w_{s+1}/w_s) b_{s+1}) ≥ e_s x_t (v_s - (w_{t+1}/w_s) b_{t+1}),
or equivalently
x_s (w_s v_s - w_{s+1} b_{s+1}) ≥ x_t (w_s v_s - w_{t+1} b_{t+1}).   (3)
Edelman et al. [3] note that this equilibrium arises if agents raise their bids to increase the payments of those above them, a practice which is believed to be common in actual keyword auctions.
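Condition (3) is easy to verify numerically for a candidate bid profile. A minimal sketch (Python; the function name, the trailing virtual bidder, and the toy numbers are ours, used only to illustrate the check):

```python
def is_symmetric_equilibrium(v, b, w, x, eps=1e-9):
    """Check inequality (3) for all s and all t != s:
    x_s (w_s v_s - w_{s+1} b_{s+1}) >= x_t (w_s v_s - w_{t+1} b_{t+1}).
    v, b, w: values, bids, weights of the N bidders, indexed by rank;
    x: position effects x_1 > ... > x_K, zero-padded below."""
    n = len(v)
    b = b + [0.0]                      # virtual (N+1)-th bidder bidding 0
    w = w + [1.0]
    x = x + [0.0] * (n + 1 - len(x))   # x_t = 0 for positions beyond K
    for s in range(n):
        lhs = x[s] * (w[s] * v[s] - w[s + 1] * b[s + 1])
        for t in range(n):
            rhs = x[t] * (w[s] * v[s] - w[t + 1] * b[t + 1])
            if t != s and lhs < rhs - eps:
                return False
    return True

# one slot, two bidders, unit weights: the loser bidding his value
# and pricing the winner at it satisfies (3) ...
assert is_symmetric_equilibrium([2.0, 1.0], [2.0, 1.0], [1.0, 1.0], [1.0])
# ... but a profile where the slot-2 bidder envies slot 1 does not
assert not is_symmetric_equilibrium([1.0, 3.0], [1.0, 2.0], [1.0, 1.0], [1.0])
```

The same loop with the restriction t > s checks the plain Nash condition (1).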
Varian [11] provides some empirical evidence that Google bid data agrees well with the hypothesis that bidders are playing a symmetric equilibrium.
Varian does a thorough analysis of the properties of symmetric equilibrium, assuming w_s = e_s = 1 for all bidders. It is straightforward to adapt his analysis to the case where bidders are assigned arbitrary weights and have separable click-through rates.² As a result we find that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s. To be clear, although the auctioneer only has access to the bids b_s and not the values v_s, in symmetric equilibrium the bids are such that ranking according to w_s b_s is equivalent to ranking according to w_s v_s.
The smallest possible bid profile that can arise in symmetric equilibrium is given by the recursion
x_s w_{s+1} b_{s+1} = (x_s - x_{s+1}) w_{s+1} v_{s+1} + x_{s+1} w_{s+2} b_{s+2}.
In this work we assume that bidders are playing the smallest symmetric equilibrium. This is an appropriate selection for our purposes: by optimizing revenue in this equilibrium, we are optimizing a lower bound on the revenue in any symmetric equilibrium. Unraveling the recursion yields
x_s w_{s+1} b_{s+1} = Σ_{t=s}^{K} (x_t - x_{t+1}) w_{t+1} v_{t+1}.   (4)
Agent s's total expected payment is e_s/w_s times the quantity on the left-hand side of (4). The base case of the recursion occurs for s = K, where we find that the first excluded bidder bids his true value, as in the original analysis.
Multiplying each of the equations (4) by the corresponding e_s/w_s to obtain total payments, and summing over all positions, we obtain a total equilibrium revenue of
Σ_{s=1}^{K} Σ_{t=s}^{K} (w_{t+1}/w_s) e_s (x_t - x_{t+1}) v_{t+1}.
(5)
To summarize, the minimum possible revenue in symmetric equilibrium can be computed as follows, given the agents' relevance-value pairs (e_s, v_s): first rank the agents in decreasing order of w_s v_s, and then evaluate (5).
With a reserve score of r, it follows from inequality (3) that no bidder with w_s v_s < r would want to participate in the auction. Let K(r) be the number of bidders with w_s v_s ≥ r, and assume it is at most K. We can impose a reserve score of r by introducing a bidder with value r and weight 1, and making him the first excluded bidder (who in symmetric equilibrium bids truthfully). In this case the recursion yields
x_s w_{s+1} b_{s+1} = Σ_{t=s}^{K(r)-1} (x_t - x_{t+1}) w_{t+1} v_{t+1} + x_{K(r)} r,
and the revenue formula is adapted similarly.
Footnote 2: If we redefine w_s v_s to be v_s and w_s b_s to be b_s, we recover Varian's setup and his original analysis goes through unchanged.
4. BIDDING CREDITS
An indirect way to influence the allocation is to introduce bidding credits.³ Suppose bidder s is only required to pay a fraction c_s ∈ [0,1] of the price he faces, or equivalently, a (1 - c_s) fraction of his clicks are received for free. Then in a symmetric equilibrium, we have
e_s x_s (v_s - (w_{s+1}/w_s) c_s b_{s+1}) ≥ e_s x_t (v_s - (w_{t+1}/w_s) c_s b_{t+1}),
or equivalently
x_s ((w_s/c_s) v_s - w_{s+1} b_{s+1}) ≥ x_t ((w_s/c_s) v_s - w_{t+1} b_{t+1}).
If we define w′_s = w_s/c_s and b′_s = c_s b_s, we recover inequality (3). Hence the equilibrium revenue will be as if we had used weights w′ rather than w. The bids will be scaled versions of the bids that arise with weights w′ (and no credits), where each bid is scaled by the corresponding factor 1/c_s. This technique allows one to use credits instead of explicit changes in the weights to affect revenue. For instance, rank-by-revenue will yield the same revenue as rank-by-bid if we set credits to c_s = e_s.
5.
REVENUE

We are interested in setting the weights w to achieve optimal expected revenue. The setup is as follows. The auctioneer chooses a function g so that the weighting scheme is w_s = g(e_s). We do not consider weights that also depend on the agents' bids because this would invalidate the equilibrium analysis of the previous section.^4 A pool of N bidders is then obtained by i.i.d. draws of value-relevance pairs from a common probability density f(e_s, v_s). We assume the density is continuous and has full support on [0, 1] x [0, infinity). The revenue to the auctioneer is then the revenue generated in symmetric equilibrium under weighting scheme w. This assumes the auctioneer is patient enough not to care about revenue until bids have stabilized.

The problem of finding an optimal weighting scheme can be formulated as an optimization problem very similar to the one derived by Myerson [9] for the single-item auction case (with incomplete information). Let Q_{sk}(e, v; w) = 1 if agent s obtains slot k in equilibrium under weighting scheme w, where e = (e_1, ..., e_N) and v = (v_1, ..., v_N), and let it be 0 otherwise.

Note that the total payment of agent s in equilibrium is

e_s x_s (w_{s+1}/w_s) b_{s+1} = sum_{t=s}^{K} e_s (x_t - x_{t+1}) (w_{t+1}/w_s) v_{t+1}
                             = e_s x_s v_s - integral_0^{v_s} sum_{k=1}^{K} e_s x_k Q_{sk}(e_s, e_{-s}, y, v_{-s}; w) dy.

The derivation then continues just as in the case of a single-item auction [7, 9]. We take the expectation of this payment,

^3 Hal Varian suggested to us that bidding credits could be used to affect revenue in keyword auctions, which prompted us to look into this connection.

^4 The analysis does not generalize to weights that depend on bids.
It is unclear whether an equilibrium would exist at all with such weights.

and sum over all agents to obtain the objective

integral_0^infinity integral_0^infinity [ sum_{s=1}^{N} sum_{k=1}^{K} e_s x_k psi(e_s, v_s) Q_{sk}(e, v; w) ] f(e, v) dv de,

where psi is the virtual valuation

psi(e_s, v_s) = v_s - (1 - F(v_s | e_s)) / f(v_s | e_s).

According to this analysis, we should rank bidders by virtual score e_s psi(e_s, v_s) to optimize revenue (and exclude any bidders with negative virtual score). However, unlike in the incomplete information setting, here we are constrained to ranking rules that correspond to a certain weighting scheme w_s = g(e_s). We remark that the virtual score cannot be reproduced exactly via a weighting scheme.

Lemma 1. There is no weighting scheme g such that the virtual score equals the score, for any density f.

Proof. Assume there is a g such that e psi(e, v) = g(e) v. (The subscript s is suppressed for clarity.) This is equivalent to

d/dv log(1 - F(v|e)) = h(e)/v,    (6)

where h(e) = (g(e)/e - 1)^{-1}. Let v-bar be such that F(v-bar|e) < 1; under the assumption of full support, there is always such a v-bar. Integrating (6) with respect to v from 0 to v-bar, we find that the left-hand side converges whereas the right-hand side diverges, a contradiction.

Of course, to rank bidders by virtual score, we only need g(e_s) v_s = h(e_s psi(e_s, v_s)) for some monotonically increasing transformation h. (A necessary condition for this is that psi(e_s, v_s) be increasing in v_s for all e_s.) Absent this regularity condition, the optimization problem seems quite difficult because it is so general: we need to maximize expected revenue over the space of all functions g.

To simplify matters, we now restrict our attention to the family of weights w_s = e_s^q for q in (-infinity, +infinity). It should be much simpler to find the optimum within this family, since it is just one-dimensional.
Note that it covers rank-by-bid (q = 0) and rank-by-revenue (q = 1) as special cases.

To see how tuning q can improve matters, consider again the equilibrium revenue:

R(q) = sum_{s=1}^{K} sum_{t=s}^{K} (e_{t+1}/e_s)^q e_s (x_t - x_{t+1}) v_{t+1}.    (7)

If the bidders are ranked in decreasing order of relevance, then e_t/e_s <= 1 for t > s, and decreasing q slightly without affecting the allocation will increase revenue. Similarly, if bidders are ranked in increasing order of relevance, increasing q slightly will yield an improvement. Now suppose there is perfect positive correlation between value and relevance. In this case, rank-by-bid will always lead to the same allocation as rank-by-revenue, and bidders will always be ranked in decreasing order of relevance. It then follows from (7) that q = 0 will yield more revenue in equilibrium than q = 1.^5

If a good estimate of f is available, Monte-Carlo simulations can be used to estimate the revenue curve as a function of q, and the optimum can be located. Simulations can also be used to quantify the effect of correlation on the location of the optimum. We do this in Section 7.

^5 It may appear that this contradicts the revenue-equivalence theorem [7, 9], because mechanisms that always lead to the same allocation in equilibrium should yield the same revenue. Note though that with perfect correlation, there are alpha, beta such that v_s = alpha e_s + beta, so the assumption of full support is violated, which is necessary for revenue equivalence. (Recall that a density has full support over a given domain if every point in the domain has positive density.)

6. EFFICIENCY AND RELEVANCE

In principle the revenue-optimal parameter q may lie anywhere in (-infinity, infinity). However, tuning the ranking rule also has consequences for advertiser satisfaction and user experience, and taking these into account reduces the range of allowable q.

The total relevance of the equilibrium allocation is

L(q) = sum_{s=1}^{K} e_s x_s,

i.e. the aggregate click-through rate.
Presumably users find the ad display more interesting and less of a nuisance if they are more inclined to click on the ads, so we adopt total relevance as a measure of user experience.

Let p_s = (w_{s+1}/w_s) b_{s+1} be the price per click faced by bidder s. The total value (efficiency) generated by the auction in equilibrium is

V(q) = sum_{s=1}^{K} e_s x_s v_s = sum_{s=1}^{K} e_s x_s (v_s - p_s) + sum_{s=1}^{K} e_s x_s p_s.

As we see, total value can be reinterpreted as total profits to the bidders and auctioneer combined. Since we only consider deviations from maximum efficiency that increase the auctioneer's profits, any decrease in efficiency in our setting corresponds to a decrease in bidder profits. We therefore adopt efficiency as a measure of advertiser satisfaction.

We would expect total relevance to increase with q, since more weight is placed on each bidder's individual relevance. We would expect efficiency to be maximized at q = 1, since in this case a bidder's weight is exactly his relevance.

Proposition 1. Total relevance is non-decreasing in q.

Proof. Recall that in symmetric equilibrium, bidders are ranked in order of decreasing w_s v_s. Let epsilon > 0. Perform an exchange sort to obtain the ranking that arises with q + epsilon starting from the ranking that arises with q (for a description of exchange sort and its properties, see Knuth [6] pp. 106-110). Assume that epsilon is large enough to make the rankings distinct. Agents s and t, where s is initially ranked lower than t, are swapped in the process if and only if the following conditions hold:

e_s^q v_s <= e_t^q v_t
e_s^{q+epsilon} v_s > e_t^{q+epsilon} v_t,

which together imply that e_s^epsilon > e_t^epsilon, and hence e_s > e_t as epsilon > 0. At some point in the sort, agent s occupies some slot k while agent t occupies slot k - 1. After the swap, total relevance will have changed by the amount

e_s x_{k-1} + e_t x_k - e_t x_{k-1} - e_s x_k = (e_s - e_t)(x_{k-1} - x_k) > 0.

As relevance strictly increases with each swap in the sort, total relevance is strictly greater when using q + epsilon rather than q.

Proposition 2. Total value is non-decreasing in q for q <= 1 and non-increasing in q for q >= 1.

Proof. Let q >= 1 and let epsilon > 0. Perform an exchange sort to obtain the second ranking from the first as in the previous proof. If agents s and t are swapped, where s was initially ranked lower than t, then e_s > e_t. This follows by the same reasoning as in the previous proof. Now e_s^{1-q} <= e_t^{1-q} as 1 - q <= 0. This together with e_s^q v_s <= e_t^q v_t implies that e_s v_s <= e_t v_t. Hence after swapping agents s and t, total value has not increased. The case for q <= 1 is similar.

Since the trends described in Propositions 1 and 2 hold pointwise (i.e. for any set of bidders), they also hold in expectation. Proposition 2 confirms that efficiency is indeed maximized at q = 1.

These results motivate the following approach. Although tuning q can optimize current revenue, this may come at the price of future revenue: advertisers and users may be lost as their satisfaction decreases. To guarantee that future revenue will not be hurt too much, the auctioneer can impose bounds on the percent efficiency and relevance loss he is willing to tolerate, with q = 1 being a natural baseline. By Proposition 2, a lower bound on efficiency will yield upper and lower bounds on the search space for q. By Proposition 1, a lower bound on relevance will yield another lower bound on q.
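The quantities R(q), V(q), and L(q) are straightforward to evaluate for a concrete instance. The sketch below is not the authors' code; the position effects and bidder pools are hypothetical and chosen only so that the claims above (q = 0 beats q = 1 in revenue under perfect positive correlation, relevance is non-decreasing in q, efficiency peaks at q = 1) can be spot-checked.

```python
# Sketch: equilibrium revenue (7), efficiency V(q), and relevance L(q)
# for a hypothetical bidder pool. `bidders` is a list of (e_s, v_s) pairs
# with at least one more bidder than slots; `x` holds decreasing position
# effects x_1 > x_2 > ... > x_K.

def outcomes(bidders, x, q):
    """Return (revenue, value, relevance) in the minimum symmetric
    equilibrium under the weighting scheme w_s = e_s ** q."""
    # In symmetric equilibrium, bidders are ranked by decreasing w_s * v_s.
    ranked = sorted(bidders, key=lambda ev: ev[0] ** q * ev[1], reverse=True)
    K = len(x)
    xs = list(x) + [0.0]          # pad so that x_{K+1} = 0
    revenue = 0.0
    for s in range(K):            # 0-based slot index; occupant is ranked[s]
        e_s = ranked[s][0]
        for t in range(s, K):
            # ranked[t + 1] supplies e_{t+1} and v_{t+1} in formula (7)
            e_next, v_next = ranked[t + 1]
            revenue += (e_next ** q / e_s ** q) * e_s * (xs[t] - xs[t + 1]) * v_next
    value = sum(e * xk * v for (e, v), xk in zip(ranked, x))
    relevance = sum(e * xk for (e, v), xk in zip(ranked, x))
    return revenue, value, relevance
```

For a perfectly correlated pool (v_s proportional to e_s), the allocation is the same for every q, and the revenue at q = 0 strictly exceeds the revenue at q = 1, as argued above.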
The revenue curve can then be plotted within the allowable range of q to find the revenue-optimal setting.

7. SIMULATIONS

To add a measure of reality to our simulations, we fit distributions for value and relevance to Yahoo! bid and click-through rate data for a certain keyword that draws over a million searches per month. (We do not reveal the identity of the keyword to respect the privacy of the advertisers.) We obtained click and impression data for the advertisers bidding on the keyword. From this we estimated advertiser and position effects using a maximum-likelihood criterion. We found that, indeed, position effects are monotonically decreasing with lower rank. We then fit a beta distribution to the advertiser effects, resulting in parameters a = 2.71 and b = 25.43.

We obtained bids of advertisers for the keyword. Using Varian's [11] technique, we derived bounds on the bidders' actual values given these bids. By this technique, upper and lower bounds are obtained on bidder values given the bids according to inequality (3). If the interval for a given value is empty, i.e. its upper bound lies below its lower bound, then we compute the smallest perturbation to the bids necessary to make the interval non-empty, which involves solving a quadratic program. We found that the mean absolute deviation required to fit bids to symmetric equilibrium was always at most 0.08, and usually significantly less, over different days in a period of two weeks.^6 We fit a lognormal distribution to the lower bounds on the bidders' values, resulting in parameters mu = 0.35 and sigma = 0.71.

[Figure 1: Empirical marginal distributions of value and relevance.]

The empirical distributions of value and relevance together with the fitted lognormal and beta curves are given in Figure 1.
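The lognormal fit mentioned above is elementary to reproduce: the maximum-likelihood estimates of (mu, sigma) are just the sample mean and standard deviation of the log-values. A minimal sketch (not the paper's code; the data here is synthetic):

```python
import math
import random

def fit_lognormal(samples):
    """Maximum-likelihood fit of a lognormal distribution: the MLE of
    (mu, sigma) is the mean and (biased) std deviation of the logs."""
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((t - mu) ** 2 for t in logs) / len(logs)
    return mu, math.sqrt(var)

# Synthetic check against the parameters reported in the text.
random.seed(0)
mu_hat, sigma_hat = fit_lognormal(
    [random.lognormvariate(0.35, 0.71) for _ in range(20000)])
```

On 20,000 synthetic draws the estimates land within a few hundredths of mu = 0.35 and sigma = 0.71.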
It appears that mixtures of beta and lognormal distributions might be better fits, but since these distributions are used mainly for illustration purposes, we err on the side of simplicity.

We used a Gaussian copula to create dependence between value and relevance.^7 Given the marginal distributions for value and relevance together with this copula, we simulated the revenue effect of varying q for different levels of Spearman correlation, with 12 slots and 13 bidders. The results are shown in Figure 2.^8

It is apparent from the figure that the optimal choice of q moves to the right as correlation decreases; this agrees with our intuition from Section 5. The choice is very sensitive to the level of correlation. If choosing only between rank-by-bid and rank-by-revenue, rank-by-bid is best for positive correlation whereas rank-by-revenue is best for negative correlation. At zero correlation, they give about the same expected revenue in this instance. Figure 2 also shows that in principle, the optimal q may be negative. It may also occur beyond 1 for different distributions, but we do not know if these would be realistic. The trends in efficiency and relevance are as described in the results from Section 6. (Any small deviation from these trends is due to the randomness inherent in the simulations.) The curves level off as q -> +infinity because eventually agents are ranked purely according to relevance, and similarly as q -> -infinity.

A typical Spearman correlation between value and relevance for the keyword was about 0.4; for different days in a week the correlation lay within [0.36, 0.55]. Simulation results with this correlation are in Figure 3. In this instance rank-by-bid is in fact optimal, yielding 25% more revenue than rank-by-revenue.
However, at q = 0 efficiency and relevance are 9% and 17% lower than at q = 1, respectively. Imposing a bound of, say, 5% on efficiency and relevance loss from the baseline at q = 1, the optimal setting is q = 0.6, yielding 11% more revenue than the baseline.

^6 See Varian [11] for a definition of mean absolute deviation.

^7 A copula is a function that takes marginal distributions and gives a joint distribution with these marginals. It can be designed so that the variables are correlated. See for example Nelsen [10].

^8 The y-axes in Figures 2-4 have been normalized because the simulations are based on proprietary data. Only relative values are meaningful.

[Figure 2: Revenue, efficiency, and relevance for different parameters q under varying Spearman correlation (key at right). Estimated standard errors are less than 1% of the values shown.]

[Figure 3: Revenue, efficiency, and relevance for different parameters q with Spearman correlation of 0.4. Estimated standard errors are less than 1% of the values shown.]

We also looked into the effect of introducing a reserve score. Results are shown in Figure 4. Naturally, both efficiency and relevance suffer with an increasing reserve score. The optimal setting is r = 0.2, which gives only an 8% increase in revenue from r = 0. However, it results in a 13% efficiency loss and a 26% relevance loss.
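The copula-based simulation can be sketched in a few lines. The snippet below is illustrative only: it uses the lognormal value marginal reported above, but substitutes a closed-form power-law relevance marginal on (0, 0.25] for the fitted Beta(2.71, 25.43) (whose inverse CDF requires special functions), and small pools instead of 12 slots and 13 bidders.

```python
import math
import random
from statistics import NormalDist

ND = NormalDist()

def sample_pool(n, rho, rng):
    """Draw n (relevance, value) pairs coupled by a Gaussian copula with
    correlation rho; the marginals are stand-ins (see note above)."""
    pool = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        v = math.exp(0.35 + 0.71 * z1)          # lognormal(0.35, 0.71) value
        e = 0.25 * ND.cdf(z2) ** (1 / 2.71)     # power-law relevance in (0, 0.25]
        pool.append((e, v))
    return pool

def mean_curves(rho, qs, pools=300, n=6, slots=3, seed=1):
    """Monte-Carlo estimates of expected revenue (7) and relevance L(q)."""
    rng = random.Random(seed)
    x = [0.5 ** k for k in range(slots)]        # hypothetical position effects
    rev = {q: 0.0 for q in qs}
    rel = {q: 0.0 for q in qs}
    for _ in range(pools):
        pool = sample_pool(n, rho, rng)
        for q in qs:
            ranked = sorted(pool, key=lambda ev: ev[0] ** q * ev[1], reverse=True)
            xs = x + [0.0]
            for s in range(slots):
                for t in range(s, slots):
                    e1, v1 = ranked[t + 1]
                    rev[q] += (e1 / ranked[s][0]) ** q * ranked[s][0] * (xs[t] - xs[t + 1]) * v1
            rel[q] += sum(ev[0] * xk for ev, xk in zip(ranked, x))
    return ({q: r / pools for q, r in rev.items()},
            {q: r / pools for q, r in rel.items()})
```

Sweeping q over a grid for several values of rho gives a rough analogue of Figure 2; note that by Proposition 1 the relevance estimate is non-decreasing in q regardless of which marginals are plugged in.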
Tuning weights\nseems to be a much more desirable approach than\nintroducing a reserve score in this instance.\nThe reason why efficiency and relevance suffer more with\na reserve score is that this approach will often exclude\nbidders entirely, whereas this never occurs when tuning weights.\nThe two approaches are not mutually exclusive, however,\nand some combination of the two might prove better than\neither alone, although we did not investigate this possibility.\n8. CONCLUSIONS\nIn this work we looked into the revenue properties of a\nfamily of ranking rules that contains the Yahoo! and Google\nmodels as special cases. In practice, it should be very\nsimple to move between rules within the family: this simply\ninvolves changing the exponent q applied to advertiser effects.\nWe also showed that, in principle, the same effect could be\nobtained by using bidding credits. Despite the simplicity\nof the rule change, simulations revealed that properly\ntuning q can significantly improve revenue. In the simulations,\nthe revenue improvements were greater than what could be\nobtained using reserve prices.\nOn the other hand, we showed that advertiser\nsatisfaction and user experience could suffer if q is made too small.\nWe proposed that the auctioneer set bounds on the decrease\nin advertiser and user satisfaction he is willing to tolerate,\nwhich would imply bounds on the range of allowable q. With\nappropriate estimates for the distributions of value and\nrelevance, and knowledge of their correlation, the revenue curve\ncan then be plotted within this range to locate the optimum.\nThere are several ways to push this research further. It\nwould be interesting to do this analysis for a variety of\nkeywords, to see if the optimal setting of q is always so sensitive\nto the level of correlation. 
If it is, then simply using rank-by-bid where there is positive correlation, and rank-by-revenue where there is negative correlation, could be fine to a first approximation and already improve revenue. It would also be interesting to compare the effects of tuning q versus reserve pricing for keywords that have few bidders. In this instance reserve pricing should be more competitive, but this is still an open question.

In principle the minimum revenue in Nash equilibrium can be found by linear programming. However, many allocations can arise in Nash equilibrium, and a linear program needs to be solved for each of these. There is as yet no efficient way to enumerate all possible Nash allocations, so finding the minimum revenue is currently infeasible. If this problem could be solved, we could run simulations for Nash equilibrium instead of symmetric equilibrium, to see if our insights are robust to the choice of solution concept.

Larger classes of ranking rules could be relevant. For instance, it is possible to introduce discounts d_s and rank according to w_s b_s - d_s; the equilibrium analysis generalizes to this case as well. With this larger class the virtual score can equal the score, e.g. in the case of a uniform marginal distribution over values. It is unclear, though, whether such extensions help with more realistic distributions.

[Figure 4: Revenue, efficiency, and relevance for different reserve scores r, with Spearman correlation of 0.4 and q = 1. Estimates are averaged over 1000 samples.]

Acknowledgements

We thank Pavel Berkhin, Chad Carson, Yiling Chen, Ashvin Kannan, Darshan Kantak, Chris LuVogt, Jan Pedersen, Michael Schwarz, Tong Zhang, and other members of Yahoo! Research and Yahoo!
Search Marketing.

9. REFERENCES

[1] G. Aggarwal, A. Goel, and R. Motwani. Truthful auctions for pricing search keywords. In Proceedings of the 7th ACM Conference on Electronic Commerce, Ann Arbor, MI, 2006.
[2] T. Börgers, I. Cox, M. Pesendorfer, and V. Petricek. Equilibrium bids in auctions of sponsored links: Theory and evidence. Working paper, November 2006.
[3] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the Generalized Second Price auction: Selling billions of dollars worth of keywords. American Economic Review, forthcoming.
[4] J. Feng, H. K. Bhargava, and D. M. Pennock. Implementing sponsored search in Web search engines: Computational evaluation of alternative mechanisms. INFORMS Journal on Computing, forthcoming.
[5] G. Iyengar and A. Kumar. Characterizing optimal keyword auctions. In Proceedings of the 2nd Workshop on Sponsored Search Auctions, Ann Arbor, MI, 2006.
[6] D. Knuth. The Art of Computer Programming, volume 3. Addison-Wesley, 1997.
[7] V. Krishna. Auction Theory. Academic Press, 2002.
[8] S. Lahaie. An analysis of alternative slot auction designs for sponsored search. In Proceedings of the 7th ACM Conference on Electronic Commerce, Ann Arbor, MI, 2006.
[9] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1), February 1981.
[10] R. B. Nelsen. An Introduction to Copulas. Springer, 2006.
[11] H. R. Varian. Position auctions. International Journal of Industrial Organization, forthcoming.
", "keywords": "profit;revenue-optimal ranking;rank-by-revenue;ranking rule;optimal auction design problem;rank-by-bid;revenue;advertising revenue;advertisement;pricing search keyword;keyword auction;search engine;sponsor search;sponsored search"}
-{"name": "test_J-7", "title": "The Role of Compatibility in the Diffusion of Technologies Through Social Networks", "abstract": "In many settings, competing technologies (for example, operating systems, instant messenger systems, or document formats) can be seen adopting a limited amount of compatibility with one another; in other words, the difficulty in using multiple technologies is balanced somewhere between the two extremes of impossibility and effortless interoperability. There are a range of reasons why this phenomenon occurs, many of which, based on legal, social, or business considerations, seem to defy concise mathematical models. Despite this, we show that the advantages of limited compatibility can arise in a very simple model of diffusion in social networks, thus offering a basic explanation for this phenomenon in purely strategic terms. Our approach builds on work on the diffusion of innovations in the economics literature, which seeks to model how a new technology A might spread through a social network of individuals who are currently users of technology B. We consider several ways of capturing the compatibility of A and B, focusing primarily on a model in which users can choose to adopt A, adopt B, or, at an extra cost, adopt both A and B. We characterize how the ability of A to spread depends on both its quality relative to B, and also this additional cost of adopting both, and find some surprising non-monotonicity properties in the dependence on these parameters: in some cases, for one technology to survive the introduction of another, the cost of adopting both technologies must be balanced within a narrow, intermediate range. We also extend the framework to the case of multiple technologies, where we find that a simple model captures the phenomenon of two firms adopting a limited strategic alliance to defend against a new, third technology. (This work has been supported in part by NSF grants CCF-0325453, IIS-0329064, CNS-0403340, and BCS-0537606, a Google Research Grant, a Yahoo! Research Alliance Grant, the Institute for the Social Sciences at Cornell, and the John D. and Catherine T. MacArthur Foundation.)", "fulltext": "1. INTRODUCTION

Diffusion and Networked Coordination Games. A fundamental question in the social sciences is to understand the ways in which new ideas, behaviors, and practices diffuse through populations. Such issues arise, for example, in the adoption of new technologies, the emergence of new social norms or organizational conventions, or the spread of human languages [2, 14, 15, 16, 17]. An active line of research in economics and mathematical sociology is concerned with modeling these types of diffusion processes as a coordination game played on a social network [1, 5, 7, 13, 19].

We begin by discussing one of the most basic game-theoretic diffusion models, proposed in an influential paper of Morris [13], which will form the starting point for our work here. We describe it in terms of the following technology adoption scenario, though there are many other examples that would serve the same purpose. Suppose there are two instant messenger (IM) systems A and B, which are not interoperable: users must be on the same system in order to communicate.
There is a social network G on the users, indicating who wants to talk to whom, and the endpoints of each edge (v, w) play a coordination game with possible strategies A or B: if v and w each choose IM system B, then they each receive a payoff of q (since they can talk to each other using system B); if they each choose IM system A, then they each receive a payoff of 1 - q; and if they choose opposite systems, then they each receive a payoff of 0 (reflecting the lack of interoperability). Note that A is the better technology if q < 1/2, in the sense that A-A payoffs would then exceed B-B payoffs, while A is the worse technology if q > 1/2.

A number of qualitative insights can be derived from a diffusion model even at this level of simplicity. Specifically, consider a network G, and let all nodes initially play B. Now suppose a small number of nodes begin adopting strategy A instead. If we apply best-response updates to nodes in the network, then nodes in effect will be repeatedly applying the following simple rule: switch to A if enough of your network neighbors have already adopted A. (E.g. you begin using a particular IM system, or social-networking site, or electronic document format, if enough of your friends are users of it.) As this unfolds, there can be a cascading sequence of nodes switching to A, such that a network-wide equilibrium is reached in the limit: this equilibrium may involve uniformity, with all nodes adopting A; or it may involve coexistence, with the nodes partitioned into a set adopting A and a set adopting B, and edges yielding zero payoff connecting the two sets. Morris [13] provides a set of elegant graph-theoretic characterizations for when these qualitatively different types of equilibria arise, in terms of the underlying network topology and the quality of A relative to B (i.e. the relative sizes of 1 - q and q).

Compatibility, Interoperability, and Bilinguality.
In most of the\nsettings that form the motivation for diffusion models, coexistence\n(however unbalanced) is the typical outcome: for example, human\nlanguages and social conventions coexist along geographic\nboundaries; it is a stable outcome for the financial industry to use\nWindows while the entertainment industry uses Mac OS. An important\npiece that is arguably missing from the basic game-theoretic\nmodels of diffusion, however, is a more detailed picture of what is\nhappening at the coexistence boundary, where the basic form of the\nmodel posits nodes that adopt A linked to nodes that adopt B.\nIn these motivating settings for the models, of course, one very\noften sees interface regions in which individuals essentially become\nbilingual. In the case of human language diffusion, this\nbilinguality is meant literally: geographic regions where there is substantial\ninteraction with speakers of two different languages tend to have\ninhabitants who speak both. But bilinguality is also an essential\nfeature of technological interaction: in the end, many people have\naccounts on multiple IM systems, for example, and more\ngenerally many maintain the ability to work within multiple computer\nsystems so as to collaborate with people embedded in each.\nTaking this view, it is natural to ask how diffusion models\nbehave when extended so that certain nodes can be bilingual in this\nvery general sense, adopting both strategies at some cost to\nthemselves. What might we learn from such an extension? To begin\nwith, it has the potential to provide a valuable perspective on the\nquestion of compatibility and incompatibility that underpins\ncompetition among technology companies. There is a large literature\non how compatibility among technologies affects competition\nbetween firms, and in particular how incompatibility may be a\nbeneficial strategic decision for certain participants in a market [3, 4, 8, 9,\n12]. 
Whinston [18] provides an interesting taxonomy of different kinds of strategic incompatibility; and specific industry case studies (including theoretical perspectives) have recently been carried out for commercial banks [10], copying and imaging technology [11] and instant messenger systems [6]. While these existing models of compatibility capture network effects in the sense that the users in the market prefer to use technology that is more widespread, they do not capture the more fine-grained network phenomenon represented by diffusion: that each user is including its local view in the decision, based on what its own social network neighbors are doing. A diffusion model that incorporated such extensions could provide insight into the structure of boundaries in the network between technologies; it could potentially offer a graph-theoretic basis for how incompatibility may benefit an existing technology, by strengthening these boundaries and preventing the incursion of a new, better technology.

The present work: Diffusion with bilingual behavior. In this paper, we develop a set of diffusion models that incorporate notions of compatibility and bilinguality, and we find that some unexpected phenomena emerge even from very simple versions of the models. We begin with perhaps the simplest way of extending Morris's model discussed above to incorporate bilingual behavior. Consider again the example of IM systems A and B, with the payoff structure as before, but now suppose that each node can adopt a third strategy, denoted AB, in which it decides to use both A and B. An adopter of AB gets to use, on an edge-by-edge basis, whichever of A or B yields higher payoffs in each interaction, and the payoff structure is defined according to this principle: if an adopter of AB interacts with an adopter of B, both receive q; with an adopter of A, both receive 1 - q; and with another adopter of AB, both receive max(q, 1 - q).
Finally, an adopter of AB pays a fixed-cost penalty of c (i.e. -c is added to its total payoff) to represent the cost of having to maintain both technologies.

Thus, in this model, there are two parameters that can be varied: the relative qualities of the two technologies (encoded by q), and the cost of being bilingual, which reflects a type of incompatibility (encoded by c).

Following [13] we assume the underlying graph G is infinite; we further assume that for some natural number Delta, each node has degree Delta.^1 We are interested in the question posed at the outset, of whether a new technology A can spread through a network where almost everyone is initially using B. Formally, we say that strategy A can become epidemic if the following holds: starting from a state in which all nodes in a finite set S adopt A, and all other nodes adopt B, a sequence of best-response updates (potentially with tie-breaking) in G - S causes every node to eventually adopt A. We also introduce one additional bit of notation that will be useful in the subsequent sections: we define r = c/Delta, the fixed penalty for adopting AB, scaled so that it is a per-edge cost.

In the Morris model, where the only strategic options are A and B, a key parameter is the contagion threshold of G, denoted q*(G): this is the supremum of q for which A can become epidemic in G with parameter q in the payoff structure. A central result of [13] is that 1/2 is the maximum possible contagion threshold for any graph: sup_G q*(G) = 1/2.
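In the two-strategy model, best response reduces to a threshold rule: a node switches to A once at least a q fraction of its neighbors play A. A minimal sketch (ours, for illustration; not from the paper) on a cycle, where the contagion threshold is 1/2:

```python
def morris_epidemic(n, seeds, q):
    """Progressive best-response dynamics for the two-strategy coordination
    game of Morris [13] on a cycle of n nodes (degree 2).  Nodes in `seeds`
    play A throughout; all others start at B and switch to A once at least
    a q fraction of their neighbors plays A (ties broken in favor of A).
    For q <= 1/2 adopters never gain by switching back, so this progressive
    version agrees with unrestricted best response here.  Returns True if
    A eventually takes over the whole cycle."""
    state = ['A' if i in seeds else 'B' for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if state[i] == 'B':
                nbrs = [state[(i - 1) % n], state[(i + 1) % n]]
                if nbrs.count('A') / 2 >= q:
                    state[i] = 'A'
                    changed = True
    return all(s == 'A' for s in state)
```

With two adjacent seeds, morris_epidemic(20, {0, 1}, 0.49) is True while morris_epidemic(20, {0, 1}, 0.51) is False, matching a contagion threshold of 1/2 on the line.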
Indeed, there exist graphs in which the contagion threshold is as large as 1/2 (including the infinite line, the unique infinite connected 2-regular graph); on the other hand, one can show there is no graph with a contagion threshold greater than 1/2.

In our model where the bilingual strategy AB is possible, we have a two-dimensional parameter space, so instead of a contagion threshold q*(G) we have an epidemic region Omega(G), which is the subset of the (q, r) plane for which A can become epidemic in G. And in place of the maximum possible contagion threshold sup_G q*(G), we must consider the general epidemic region Omega = union_G Omega(G), where the union is taken over all infinite Delta-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Delta-regular network.

^1 We can obtain strictly analogous results by taking a sequence of finite graphs and expressing results asymptotically, but the use of an infinite bounded-degree graph G makes it conceptually much cleaner to express the results (as it does in Morris's paper [13]): less intricate quantification is needed to express the diffusion properties, and the qualitative phenomena remain the same.

[Figure 1: The region of the (q, r) plane for which technology A can become epidemic on the infinite line.]

Our Results. We find, first of all, that the epidemic region Omega(G) can be unexpectedly complex, even for very simple graphs G. Figure 1 shows the epidemic region for the infinite line; one observes that neither the region Omega(G) nor its complement is convex in the positive quadrant, due to the triangular cut-out shape. (We find analogous shapes that become even more complex for other simple infinite graph structures; see for example Figures 3 and 4.)
In particular, this means that for values of q close to but less than 1/2, strategy A can become epidemic on the infinite line if r is sufficiently small or sufficiently large, but not if r takes values in some intermediate interval. In other words, strategy B (which represents the worse technology, since q < 1/2) will survive if and only if the cost of being bilingual is calibrated to lie in this middle interval. This is a reflection of limited compatibility: it may be in the interest of an incumbent technology to make it difficult, but not too difficult, to use a new technology, and we find it surprising that it should emerge from a basic model on such a simple network structure. It is natural to ask whether there is a qualitative interpretation of how this arises from the model, and in fact it is not hard to give such an interpretation, as follows.
• When r is very small, it is cheap for nodes to adopt AB as a strategy, and so AB spreads through the whole network. Once AB is everywhere, the best-response updates cause all nodes to switch to A, since they get the same interaction benefits without paying the penalty of r.
• When r is very large, nodes at the interface, with one A neighbor and one B neighbor, will find it too expensive to choose AB, so they will choose A (the better technology), and hence A will spread step-by-step through the network.
• When r takes an intermediate value, a node v at the interface, with one A neighbor and one B neighbor, will find it most beneficial to adopt AB as a strategy. Once this happens, the neighbor of v who is playing B will not have sufficient incentive to switch, and the best-response updates make no further progress.
Hence, this intermediate value of r allows a boundary of AB to form between the adopters of A and the adopters of B.
In short, the situation facing B is this: if it is too permissive, it gets invaded by AB followed by A; if it is too inflexible, forcing nodes to choose just one of A or B, it gets destroyed by a cascade of direct conversions to A. But if it has the right balance in the value of r, then the adoptions of A come to a stop at a bilingual boundary where nodes adopt AB.
Moving beyond specific graphs G, we find that this non-convexity holds in a much more general sense as well, by considering the general epidemic region Ω = ∪_G Ω(G). For any given value of Δ, the region Ω is a complicated union of bounded and unbounded polygons, and we do not have a simple closed-form description for it. However, we can show via a potential function argument that no point (q, r) with q > 1/2 belongs to Ω. Moreover, we can show the existence of a point (q, r) ∉ Ω for which q < 1/2. On the other hand, consideration of the epidemic region for the infinite line shows that (1/2, r) ∈ Ω for r = 0 and for r sufficiently large. Hence, neither Ω nor its complement is convex in the positive quadrant.
Finally, we also extend a characterization that Morris gave for the contagion threshold [13], producing a somewhat more intricate characterization of the region Ω(G). In Morris's setting, without an AB strategy, he showed that A cannot become epidemic with parameter q if and only if every cofinite set of nodes contains a subset S that functions as a well-connected community: every node in S has at least a (1 − q) fraction of its neighbors in S. In other words, tightly-knit communities are the natural obstacles to diffusion in his setting.
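The three regimes on the infinite line described above can be reproduced by direct simulation. The sketch below is illustrative and not from the paper; it uses a finite cycle (so every node has degree Δ = 2) with a single fixed seed as a stand-in for the infinite line, and q = 0.45, which is close to but below 1/2.

```python
# Illustrative simulation (a sketch, not from the paper): best-response
# dynamics with the bilingual strategy AB on a cycle approximating the
# infinite line (Delta = 2). Ties are broken in favor of A, then AB.
def run_line(q, r, n=24, sweeps=100):
    delta = 2
    state = {v: "B" for v in range(n)}
    state[0] = "A"                                  # seed node, held fixed

    def payoff(v, s):
        pay = 0.0
        for w in ((v - 1) % n, (v + 1) % n):
            t = state[w]
            if s == "A":
                pay += (1 - q) if t in ("A", "AB") else 0.0
            elif s == "B":
                pay += q if t in ("B", "AB") else 0.0
            else:                                   # s == "AB"
                pay += (1 - q) if t == "A" else q if t == "B" else max(q, 1 - q)
        return pay - (r * delta if s == "AB" else 0.0)

    for _ in range(sweeps):
        changed = False
        for v in range(1, n):                       # the seed does not update
            best = max(("B", "AB", "A"),
                       key=lambda s: (payoff(v, s), s != "B", s == "A"))
            if best != state[v]:
                state[v], changed = best, True
        if not changed:
            break
    return all(s == "A" for s in state.values())

print(run_line(0.45, 0.02))  # small r: AB wave sweeps through, then collapses to A
print(run_line(0.45, 0.10))  # intermediate r: a stable AB boundary blocks A
print(run_line(0.45, 0.30))  # large r: direct conversions to A
```

For r = 0.02 and r = 0.30 the run ends with every node playing A; for r = 0.10 the two neighbors of the seed settle on AB and everyone else keeps B, exactly the bilingual boundary described above.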
With the AB strategy as a further option, a more\ncomplex structure becomes the obstacle: we show that A cannot\nbecome epidemic with parameters (q, r) if and only if every cofinite\nset contains a structure consisting of a tightly-knit community with\na particular kind of interface of neighboring nodes. We show that\nsuch a structure allows nodes to adopt AB at the interface and B\ninside the community itself, preventing the further spread of A; and\nconversely, this is the only way for the spread of A to be blocked.\nThe analysis underlying the characterization theorem yields a\nnumber of other consequences; a basic one is, roughly speaking,\nthat the outcome of best-response updates is independent of the\norder in which the updates are sequenced (provided only that each\nnode attempts to update itself infinitely often).\nFurther Extensions. Another way to model compatibility and\ninteroperability in diffusion models is through the off-diagonal terms\nrepresenting the payoff for interactions between a node adopting A\nand a node adopting B. Rather than setting these to 0, we can\nconsider setting them to a value x \u2264 min(q, 1 \u2212 q). We find that\nfor the case of two technologies, the model does not become more\ngeneral, in that any such instance is equivalent, by a re-scaling of\nq and r, to one where x = 0. 
Moreover, using our characterization of the region Ω(G) in terms of communities and interfaces, we show a monotonicity result: if A can become epidemic on a graph G with parameters (q, r, x), and x is then increased, then A can still become epidemic with the new parameters.
We also consider the effect of these off-diagonal terms in an extension to k > 2 competing technologies; for technologies X and Y, let qX denote the payoff from an X-X interaction on an edge and qXY denote the payoff from an X-Y interaction on an edge. We consider a setting in which two technologies B and C, which initially coexist with qBC = 0, face the introduction of a third, better technology A at a finite set of nodes. We show an example in which B and C both survive in equilibrium if they set qBC in a particular range of values, but not if they set qBC too low or too high to lie in this range. Thus, even in a basic diffusion model with three technologies, one finds cases in which two firms have an incentive to adopt a limited strategic alliance, partially increasing their interoperability to defend against a new entrant in the market.
2. MODEL
We now develop some further notation and definitions that will be useful for expressing the model. Recall that we have an infinite Δ-regular graph G, and strategies A, B, and AB that are used in a coordination game on each edge. For edge (v, w), the payoff to each endpoint is 0 if one of the two nodes chooses strategy A and the other chooses strategy B; 1 − q if one chooses strategy A and the other chooses either A or AB; q if one chooses strategy B and the other chooses either B or AB; and max(q, 1 − q) if both choose strategy AB. The overall payoff of an agent v is the sum of the above values over all neighbors w of v, minus a cost which is 0 if v chooses A or B and c = rΔ if she chooses AB.
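The payoff structure just defined can be transcribed directly. The sketch below is illustrative (the function names are ours, not the paper's); it computes the per-edge payoff and an agent's overall payoff, including the bilingual cost c = rΔ.

```python
# A direct transcription (as a sketch) of the edge payoffs defined above.
def edge_payoff(s_v, s_w, q):
    """Payoff to an endpoint playing s_v on an edge whose other end plays s_w."""
    if s_v == "A":
        return 1 - q if s_w in ("A", "AB") else 0.0
    if s_v == "B":
        return q if s_w in ("B", "AB") else 0.0
    # s_v == "AB": coordinates with either pure strategy; max(q, 1-q) against AB
    return {"A": 1 - q, "B": q, "AB": max(q, 1 - q)}[s_w]

def total_payoff(strategy, neighbor_strategies, q, r, delta):
    """Sum of edge payoffs, minus the bilingual cost c = r * delta for AB."""
    pay = sum(edge_payoff(strategy, t, q) for t in neighbor_strategies)
    return pay - r * delta if strategy == "AB" else pay
```

For instance, an agent on the line (Δ = 2) with one A neighbor and one B neighbor, at q = 0.4 and r = 0.1, earns 0.6 from A, 0.4 from B, and 0.6 + 0.4 − 0.2 = 0.8 from AB, so AB is her best response.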
We refer to the overall game, played by all nodes in G, as a contagion game, and denote it using the tuple (G, q, r).
This game can have many Nash equilibria. In particular, the two states where everybody uses technology A or everybody uses technology B are both equilibria of this game. As discussed in the previous section, we are interested in the dynamics of reaching an equilibrium in this game; in particular, we would like to know whether it is possible to move from an all-B equilibrium to an all-A equilibrium by changing the strategy of a finite number of agents and following a sequence of best-response moves.
We provide a formal description of this question via the following two definitions.
DEFINITION 2.1. Consider a contagion game (G, q, r). A state in this game is a strategy profile s : V(G) → {A, B, AB}. For two states s and s′ and a vertex v ∈ V(G), if starting from state s and letting v play her best-response move (breaking ties in favor of A and then AB) we get to the state s′, we write s →v s′. Similarly, for two states s and s′ and a finite sequence S = v1, v2, . . . , vk of vertices of G (where the vi's are not necessarily distinct), we say s →S s′ if there is a sequence of states s1, . . . , sk−1 such that s →v1 s1 →v2 s2 →v3 · · · sk−1 →vk s′. For an infinite sequence S = v1, v2, . . . of vertices of G, we denote the subsequence v1, v2, . . . , vk by Sk. We say s →S s′ for two states s and s′ if for every vertex v ∈ V(G) there exists a k0(v) such that for every k > k0(v), s →Sk sk for a state sk with sk(v) = s′(v).
DEFINITION 2.2. For T ⊆ V(G), we denote by sT the strategy profile that assigns A to every agent in T and B to every agent in V(G) \ T.
We say that technology A can become an epidemic in the game (G, q, r) if there is a finite set T of nodes in G (called the seed set) and a sequence S of vertices in V(G) \ T (where each vertex can appear more than once) such that sT →S sV(G), i.e., endowing agents in T with technology A and letting other agents play their best response according to schedule S would lead every agent to eventually adopt strategy A.[2]
The above definition requires that the all-A equilibrium be reachable from the initial state by at least one schedule S of best-response moves. In fact, we will show in Section 4 that if A can become an epidemic in a game, then for every schedule of best-response moves of the nodes in V(G) \ T in which each node is scheduled an infinite number of times, eventually all nodes adopt strategy A.[3]
[2] Note that in our definition we assume that agents in T are endowed with the strategy A at the beginning. Alternatively, one can define the notion of epidemic by allowing agents in T to be endowed with any combination of AB and A, or with just AB. However, the difference between these definitions is rather minor and our results carry over with little or no change to these alternative models.
[3] Note that we assume agents in the seed set T cannot change their strategy.
3. EXAMPLES
We begin by considering some basic examples that yield epidemic regions with the kinds of non-convexity properties discussed in Section 1. We first discuss a natural Δ-regular generalization of the infinite line graph, and for this one we work out the complete analysis that describes the region Ω(G), the set of all pairs (q, r) for which the technology A can become an epidemic. We then describe, without the accompanying detailed analysis, the epidemic regions for the infinite Δ-regular tree and for the two-dimensional grid.
Figure 2: The thick line graph
The infinite line and the thick line graph.
For a given even integer Δ, we define the thick line graph LΔ as follows: the vertex set of this graph is Z × {1, 2, . . . , Δ/2}, where Z is the set of all integers. There is an edge between vertices (x, i) and (x′, i′) if and only if |x − x′| = 1. For each x ∈ Z, we call the set of vertices {(x, i) : i ∈ {1, . . . , Δ/2}} the x'th group of vertices. Figure 2 shows a picture of L6.
Now, assume that starting from a position where every node uses the strategy B, we endow all agents in a group (say, group 0) with the strategy A. Consider the decision faced by the agents in group 1, who have their right-hand neighbors using B and their left-hand neighbors using A. For these agents, the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ/2, and Δ/2 − rΔ, respectively. Therefore, if
q ≤ 1/2 and q ≤ 2r,
the best response of such an agent is A. Hence, if the above inequality holds and we let agents in groups 1, −1, 2, −2, . . . play their best response in this order, then A will become an epidemic.
Also, if we have q > 2r and q ≤ 1 − 2r, the best response of an agent with her neighbors on one side playing A and neighbors on the other side playing B is the strategy AB. Therefore, if we let agents in groups 1 and −1 change to their best response, they would switch their strategy to AB. After this, agents in group 2 will see AB on their left and B on their right. For these agents (and similarly for the agents in group −2), the payoffs of strategies A, B, and AB are (1 − q)Δ/2, qΔ, and (q + max(q, 1 − q))Δ/2 − rΔ, respectively. Therefore, if max(1, 2q) − 2r ≥ 1 − q and max(1, 2q) − 2r ≥ 2q, or equivalently, if
2r ≤ q and q + r ≤ 1/2,
the best response of such an agent is AB. Hence, if the above inequality holds and we let agents in groups 2, −2, 3, −3, . . .
play their best response in this order, then every agent (except for agents in group 0) switches to AB. Next, if we let agents in groups 1, −1, 2, −2, . . . change their strategy again, for q ≤ 1/2, every agent will switch to strategy A, and hence A becomes an epidemic.[4]
[4] Strictly speaking, since we defined a schedule of moves as a single infinite sequence of vertices in V(G) \ T, the order 1, −1, 2, −2, . . . , 1, −1, 2, −2, . . . is not a valid schedule. However, since vertices of G have finite degree, it is not hard to see that any ordering of a multiset containing any (possibly infinite) number of copies of each vertex of V(G) \ T can be turned into an equivalent schedule of moves. For example, the sequence 1, −1, 2, −2, 1, −1, 3, −3, 2, −2, . . . gives the same outcome as 1, −1, 2, −2, . . . , 1, −1, 2, −2, . . . in the thick line example.
Figure 3: Epidemic regions for the infinite grid
Figure 4: Epidemic regions for the infinite Δ-regular tree
The above argument shows that for any combination of (q, r) parameters in the marked region in Figure 1, technology A can become an epidemic. It is not hard to see that for points outside this region, A cannot become epidemic.
Further examples: trees and grids. Figures 3 and 4 show the epidemic regions for the infinite grid and the infinite Δ-regular tree. Note that they also exhibit non-convexities.
4. CHARACTERIZATION
In this section, we characterize equilibrium properties of contagion games. To this end, we must first argue that contagion games in fact have well-defined and stable equilibria. We then discuss some respects in which the equilibrium reached from an initial state is essentially independent of the order in which best-response updates are performed.
We begin with the following lemma, which proves that agents eventually converge to a fixed strategy, and so the final state of a game is well-defined by its initial state and an infinite sequence of moves. Specifically, we prove that once an agent decides to adopt technology A, she never discards it, and once she decides to discard technology B, she never re-adopts it. Thus, after an infinite number of best-response moves, each agent converges to a single strategy.
LEMMA 4.1. Consider a contagion game (G, q, r) and a (possibly infinite) subset T ⊆ V(G) of agents. Let sT be the strategy profile assigning A to every agent in T and B to every agent in V(G) \ T. Let S = v1, v2, . . . be a (possibly infinite) sequence of agents in V(G) \ T, and consider the sequence of states s1, s2, . . . obtained by allowing agents to play their best response in the order defined by S (i.e., s →v1 s1 →v2 s2 →v3 · · ·). Then for every i, one of the following holds:
• si(vi+1) = B and si+1(vi+1) = A,
• si(vi+1) = B and si+1(vi+1) = AB,
• si(vi+1) = AB and si+1(vi+1) = A,
• si(vi+1) = si+1(vi+1).
PROOF. Let X ≥k,v Y indicate that agent v weakly prefers strategy X to strategy Y in state sk. For any k, let z_A^k, z_B^k, and z_AB^k be the number of neighbors of v with strategies A, B, and AB in state sk, respectively. Thus, for agent v in state sk:
1. A ≥k,v B if (1 − q)(z_A^k + z_AB^k) ≥ q(z_B^k + z_AB^k),
2. A ≥k,v AB if (1 − q)(z_A^k + z_AB^k) ≥ (1 − q)z_A^k + q z_B^k + max(q, 1 − q)z_AB^k − Δr,
3. AB ≥k,v B if (1 − q)z_A^k + q z_B^k + max(q, 1 − q)z_AB^k − Δr ≥ q(z_B^k + z_AB^k).
Suppose the lemma is false and consider the smallest i such that the lemma is violated.
Let v = vi+1 be the agent who played her best response at time i. Thus, either (1) si(v) = A and si+1(v) = B, or (2) si(v) = A and si+1(v) = AB, or (3) si(v) = AB and si+1(v) = B. We show that in the third case, agent v could not have been playing a best response. The other cases are similar.
In the third case, we have si(v) = AB and si+1(v) = B. As si(v) = AB, there must be a time j < i where sj →v sj+1 and sj+1(v) = AB. Since this was a best-response move for v, inequality 3 implies that (1 − q)z_A^j + max(0, 1 − 2q)z_AB^j ≥ Δr. Furthermore, as i is the earliest time at which the lemma is violated, z_A^i ≥ z_A^j and z_AB^j − z_AB^i ≤ z_A^i − z_A^j. Thus, the change Q in payoff between AB and B (plus Δr) satisfies
Q ≡ (1 − q)z_A^i + max(0, 1 − 2q)z_AB^i
≥ (1 − q)(z_A^i − z_A^j + z_A^j) + max(0, 1 − 2q)(z_AB^j − z_A^i + z_A^j)
= (1 − q)z_A^j + max(0, 1 − 2q)z_AB^j + min(q, 1 − q)(z_A^i − z_A^j)
≥ (1 − q)z_A^j + max(0, 1 − 2q)z_AB^j
≥ Δr,
and so, by inequality 3, B cannot be a better response than AB for v in state si.
COROLLARY 4.2. For every infinite sequence S of vertices in V(G) \ T, there is a unique state s such that s0 →S s, where s0 denotes the initial state in which every vertex in T plays A and every vertex in V(G) \ T plays B.
Such a state s is called the outcome of the game (G, q, r) starting from T and using the schedule S.
Equivalence of best-response schedules. Lemma 4.1 shows that the outcome of a game is well-defined and unique.
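The one-way progression B → AB → A asserted by Lemma 4.1 can also be observed empirically. The sketch below is illustrative and not from the paper: it runs random asynchronous best-response updates on a small 4-regular circulant graph (our choice of graph, parameters, and random seed) and reports whether any node ever moved backward.

```python
import random

# Empirical check (a sketch, not from the paper) of Lemma 4.1's monotonicity:
# under best-response updates from an all-B-except-seed start, a node only
# ever moves forward along B -> AB -> A.
RANK = {"B": 0, "AB": 1, "A": 2}

def check_monotone(q=0.45, r=0.02, n=20, steps=2000, seed_nodes=(0, 1)):
    rng = random.Random(7)
    nbrs = {v: [(v - 2) % n, (v - 1) % n, (v + 1) % n, (v + 2) % n]
            for v in range(n)}
    state = {v: ("A" if v in seed_nodes else "B") for v in range(n)}
    delta = 4

    def payoff(v, s):
        pay = 0.0
        for w in nbrs[v]:
            t = state[w]
            if s == "A":
                pay += (1 - q) if t in ("A", "AB") else 0.0
            elif s == "B":
                pay += q if t in ("B", "AB") else 0.0
            else:
                pay += (1 - q) if t == "A" else q if t == "B" else max(q, 1 - q)
        return pay - (r * delta if s == "AB" else 0.0)

    for _ in range(steps):
        v = rng.randrange(n)
        if v in seed_nodes:
            continue                      # seed agents never change strategy
        best = max(("B", "AB", "A"), key=lambda s: (payoff(v, s), RANK[s]))
        if RANK[best] < RANK[state[v]]:
            return False                  # a backward move would contradict the lemma
        state[v] = best
    return True

print(check_monotone())
```

The tie-break key (payoff, then RANK) implements "ties in favor of A, then AB" from Definition 2.1; by the lemma, the check should succeed for any update order.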
The following theorems show that the outcome is also invariant to the dynamics, or sequence of best-response moves, under certain mild conditions. The first theorem states that if the all-A equilibrium is the outcome of a game for some (unconstrained) schedule, then it is the outcome for any schedule in which each vertex is allowed to move infinitely many times. The second theorem states that the outcome of a game is the same for any schedule of moves in which every vertex moves infinitely many times.
THEOREM 4.3. Consider a contagion game (G, q, r), a subset T ⊆ V(G), and a schedule S of vertices in V(G) \ T such that the outcome of the game is the all-A equilibrium. Then for any schedule S′ of vertices in V(G) \ T in which every vertex in this set occurs infinitely many times, the outcome of the game using the schedule S′ is also the all-A equilibrium.
PROOF. Note that S is a subsequence of S′. Let π : S → S′ be the injection mapping S to its subsequence in S′. We show that for any vi ∈ S, if vi switches to AB, then π(vi) switches to AB or A, and if vi switches to A, then π(vi) switches to A (here "v switches to X" means that after the best-response move, the strategy of v is X). Suppose not, and let i be the smallest integer such that the statement doesn't hold. Let zA, zB, and zAB be the number of neighbors of vi with strategies A, B, and AB in the current state defined by S. Define z′A, z′B, and z′AB similarly for S′. Then, by Lemma 4.1 and the choice of i, z′A ≥ zA, z′B ≤ zB, z′AB − zAB ≤ zB − z′B, and zAB − z′AB ≤ z′A − zA. Now suppose vi switches to AB. Then the same sequence of inequalities as in Lemma 4.1 shows that AB is a better response than B for π(vi) (although A might be the best response), and so π(vi) switches to either AB or A. The other case (vi switches to A) is similar.
THEOREM 4.4. Consider a contagion game (G, q, r) and a subset T ⊆ V(G).
Then for every two schedules S and S′ of vertices in V(G) \ T such that every vertex in this set occurs infinitely many times in each of these schedules, the outcomes of the game using these schedules are the same.
PROOF. The proof of this theorem is similar to that of Theorem 4.3 and is deferred to the full version of the paper.
Blocking structures. Finally, we prove the characterization mentioned in the introduction: A cannot become epidemic if and only if (G, q, r) possesses a certain kind of blocking structure. This result generalizes Morris's theorem on the contagion threshold for his model; in his case, without AB as a possible strategy, a simpler kind of community structure was the obstacle to A becoming epidemic.
We begin by defining the blocking structures.
DEFINITION 4.5. Consider a contagion game (G, q, r). A pair (SAB, SB) of disjoint subsets of V(G) is called a blocking structure for this game if for every vertex v ∈ SAB,
deg_SB(v) > (r/q)Δ,
and for every vertex v ∈ SB,
(1 − q) deg_SB(v) + min(q, 1 − q) deg_SAB(v) > (1 − q − r)Δ
and
deg_SB(v) + q deg_SAB(v) > (1 − q)Δ,
where deg_S(v) denotes the number of neighbors of v in the set S.
THEOREM 4.6. For every contagion game (G, q, r), technology A cannot become epidemic in this game if and only if every co-finite set of vertices of G contains a blocking structure.
PROOF. We first show that if every co-finite set of vertices of G contains a blocking structure, then technology A cannot become epidemic. Let T be any finite set of vertices endowed with technology A, and let (SAB, SB) be the blocking structure contained in V(G) \ T. We claim that in the outcome of the game for any sequence S of moves, the vertices in SAB have strategy B or AB and the vertices in SB have strategy B.
Suppose not, and let v be the first vertex in sequence S to violate this (i.e., v ∈ SAB switches to A, or v ∈ SB switches to A or AB). Suppose v ∈ SAB (the other cases are similar). Let zA, zB, and zAB denote the number of neighbors of v with strategies A, B, and AB, respectively. As v is the first vertex violating the claim, zA ≤ Δ − deg_SB(v) − deg_SAB(v) and zB ≥ deg_SB(v). We show AB is a better strategy than A for v. To show this, we must prove that (1 − q)zA + qzB + max(q, 1 − q)zAB − Δr > (1 − q)(zA + zAB) or, equivalently, that the quantity Q ≡ qzB + max(2q − 1, 0)zAB − Δr > 0:
Q = (max(2q − 1, 0) − r)Δ − max(2q − 1, 0)zA + (q − max(2q − 1, 0))zB
≥ (max(2q − 1, 0) − r)Δ + min(q, 1 − q) deg_SB(v) − max(2q − 1, 0)(Δ − deg_SB(v) − deg_SAB(v))
≥ [min(q, 1 − q) + max(2q − 1, 0)] deg_SB(v) − rΔ
= q deg_SB(v) − rΔ
> 0,
where the last inequality holds by the definition of the blocking structure.
We next show the converse: if A cannot become epidemic, then every co-finite set of vertices contains a blocking structure. To construct a blocking structure for the complement of a finite set T of vertices, endow T with strategy A and consider the outcome of the game for any sequence S that schedules each vertex an infinite number of times. Let SAB be the set of vertices with strategy AB and SB the set of vertices with strategy B in this outcome. Note that for any v ∈ SAB, AB is a best response and so is strictly better than strategy A (since ties are broken in favor of A), i.e., q deg_SB(v) + max(q, 1 − q) deg_SAB(v) − Δr > (1 − q) deg_SAB(v), from which it follows that deg_SB(v) > (rΔ)/q.
The inequalities for the vertices v ∈ SB can be derived in a similar manner.
A corollary of the above theorem is that for every infinite graph G, the epidemic region in the q-r plane for this graph is a finite union of bounded and unbounded polygons. This is because the inequalities defining blocking structures are linear inequalities in q and r, and the coefficients of these inequalities can take only finitely many values.
5. NON-EPIDEMIC REGIONS IN GENERAL GRAPHS
The characterization theorem in the previous section provides one way of thinking about the region Ω(G), the set of all (q, r) pairs for which A can become epidemic in the game (G, q, r). We now consider the region Ω = ∪_G Ω(G), where the union is taken over all infinite Δ-regular graphs; this is the set of all (q, r) values for which A can become epidemic in some Δ-regular network. The analysis here uses Lemma 4.1 and an argument based on an appropriately defined potential function.
The first theorem shows that no point (q, r) with q > 1/2 belongs to Ω. Since q > 1/2 implies that the incumbent technology B is superior, this means that in any network, a superior incumbent will survive for any level of compatibility.
THEOREM 5.1. For every Δ-regular graph G and parameters q and r, the technology A cannot become an epidemic in the game (G, q, r) if q > 1/2.
PROOF. Assume, for contradiction, that there is a Δ-regular graph G and values q > 1/2 and r, a set T of vertices of G that are initially endowed with the strategy A, and a schedule S of moves for vertices in V(G) \ T such that this sequence leads to an all-A equilibrium. We derive a contradiction by defining a non-negative potential function that starts with a finite value and showing that after each best response by some vertex the value of this function decreases by some positive amount bounded away from zero.
At any state in the game, let XA,B denote the number of edges in G that have one endpoint using strategy A and the other endpoint using strategy B. Furthermore, let nAB denote the number of agents using the strategy AB. The potential function is the following:
qXA,B + cnAB
(recall that c = Δr is the cost of adopting two technologies). Since G has bounded degree and the initial set T is finite, the initial value of this potential function is finite. We now show that every best-response move decreases the value of this function by some positive amount bounded away from zero. By Lemma 4.1, we only need to analyze the effect on the potential function for moves of the sort described by the lemma. Therefore we have three cases: a node u switches from strategy B to AB, a node u switches from strategy AB to A, or a node u switches from strategy B to A. We consider the first case here; the proofs for the other cases are similar.
Suppose a node u with strategy B switches to strategy AB. Let zAB, zA, and zB denote the number of neighbors of u in partition pieces AB, A, and B, respectively. Thus, recalling that q > 1/2, we see u's payoff with strategy B is q(zAB + zB) whereas his payoff with strategy AB is q(zAB + zB) + (1 − q)zA − c. In order for this strategic change to improve u's payoff, it must be the case that
(1 − q)zA ≥ c. (1)
Now, notice that such a strategic change on the part of u induces a change in the potential function of −qzA + c, as zA edges are removed from the XA,B edges between A and B and the size of partition piece AB is increased by one. This change will be negative so long as zA > c/q, which holds by inequality 1 as q > (1 − q) for q > 1/2. Furthermore, as zA can take only finitely many values (zA ∈ {0, 1, . . . , Δ}), this change is bounded away from zero.
The next theorem shows that for any Δ, there is a point (q, r) ∉ Ω for which q < 1/2.
This means that there is a setting of the parameters q and r for which the new technology A is superior, but for which the incumbent technology is guaranteed to survive regardless of the underlying network.
THEOREM 5.2. There exist q < 1/2 and r such that for every contagion game (G, q, r), A cannot become epidemic.
PROOF. The proof is based on the potential function from Theorem 5.1:
qXA,B + cnAB.
We first show that if q is close enough to 1/2 and r is chosen appropriately, this potential function is non-increasing. Specifically, let
q = 1/2 − 1/(64Δ) and c = rΔ = α,
where α is any irrational number strictly between 3/64 and q. Again, there are three cases corresponding to the three possible strategy changes for a node u. Let zAB, zA, and zB denote the number of neighbors of node u in partition pieces AB, A, and B, respectively.
Case 1: B → AB. Recalling that q < 1/2, we see u's payoff with strategy B is q(zAB + zB) whereas his payoff with strategy AB is (1 − q)(zAB + zA) + qzB − c. In order for this strategic change to improve u's payoff, it must be the case that
(1 − 2q)zAB + (1 − q)zA ≥ c. (2)
Now, notice that such a strategic change on the part of u induces a change in the potential function of −qzA + c, as zA edges are removed from the XA,B edges between A and B and the size of partition piece AB is increased by one. This change will be non-positive so long as zA ≥ c/q. By inequality 2 and the fact that zA is an integer,
zA ≥ ⌈c/(1 − q) − (1 − 2q)zAB/(1 − q)⌉.
Substituting our choice of parameters (and noting that q ∈ [1/4, 1/2] and zAB ≤ Δ), we see that the term inside the ceiling is less than 1 and, since c > 3/64, strictly greater than (3/64)/(3/4) − (1/32)/(1/2) = 0. Thus, the ceiling is one, which is larger than c/q.
Case 2: AB → A.
Recalling that q < 1/2, we see u's payoff with strategy AB is (1 − q)(zAB + zA) + qzB − c whereas her payoff with strategy A is (1 − q)(zAB + zA). In order for this strategic change to improve u's payoff, it must be the case that
qzB ≤ c. (3)
Such a strategic change on the part of u induces a change in the potential function of qzB − c, as zB edges are added to the XA,B edges between A and B and the size of partition piece AB is decreased by one. This change will be non-positive so long as zB ≤ c/q, which holds by inequality 3.
Case 3: B → A. Note that u's payoff with strategy B is q(zAB + zB) whereas his payoff with strategy A is (1 − q)(zAB + zA). In order for this strategic change to improve u's payoff, it must be the case that
(1 − 2q)zAB ≥ qzB − (1 − q)zA. (4)
Such a strategic change on the part of u induces a change in the potential function of q(zB − zA), as zA edges are removed and zB edges are added to the XA,B edges between A and B. This change will be non-positive so long as zB ≤ zA. By inequality 4 and the fact that zA is an integer,
zA ≥ ⌈qzB/(1 − q) − (1 − 2q)zAB/(1 − q)⌉.
Substituting our choice of parameters, it is easy to see that the term inside the ceiling is greater than zB − 1, and so the ceiling is at least zB as zB is an integer. We have shown the potential function is non-increasing for our choice of q and c. This implies the potential function is eventually constant. As c is irrational and the remaining terms are always rational, both nAB and XA,B must remain constant for the potential function as a whole to remain constant. Suppose A is epidemic for these parameters on some graph G. As nAB is constant and A is epidemic, it must be that nAB = 0. Thus, the only moves involve a node u switching from strategy B to strategy A.
In order for XA,B to be constant for such moves, it must be that zA (the number of neighbors of u in A) equals zB (the number of neighbors of u in B) and, as nAB = 0, we have that zA = zB = Δ/2. Thus, the payoff of u for strategy A is (1 − q)zA = (1 − q)Δ/2, whereas her payoff for strategy AB is (1 − q)zA + qzB − c = Δ/2 − c, which is larger since c < q ≤ qΔ/2. This contradicts the assumption that u is playing her best response by switching to A.
6. LIMITED COMPATIBILITY
We now consider some further ways of modeling compatibility and interoperability. We first consider two technologies, as in the previous sections, and introduce off-diagonal payoffs to capture a positive benefit in direct A-B interactions. We find that this is in fact no more general than the model with zero payoffs for A-B interactions.
We then consider extensions to three technologies, identifying situations in which two coexisting incumbent technologies may or may not want to increase their mutual compatibility in the face of a new, third technology.
Two technologies. A natural relaxation of the two-technology model is to introduce (small) positive payoffs for A-B interactions; that is, cross-technology communication yields some lesser value to both agents. We can model this using a variable xAB representing the payoff gathered by an agent with technology A when her neighbor has technology B, and similarly a variable xBA representing the payoff gathered by an agent with B when her neighbor has A. Here we consider the special case in which these off-diagonal entries are symmetric, i.e., xAB = xBA = x. We also assume that x < q ≤ 1 − q.
We first show that the game with off-diagonal entries is equivalent to a game without these entries, under a simple re-scaling of q and r.
Note that if we re-scale all payoffs by either an additive or a multiplicative constant, the behavior of the game is unaffected. Given a game with off-diagonal entries parameterized by q, r and x, consider subtracting x from all payoffs, and scaling up by a factor of 1/(1 \u2212 2x). As can be seen by examining Table 1, the resulting payoffs are exactly those of a game without off-diagonal entries, parameterized by q\u2032 = (q \u2212 x)/(1 \u2212 2x) and r\u2032 = r/(1 \u2212 2x). Thus the addition of symmetric off-diagonal entries does not expand the class of games being considered.\nTable 1 represents the payoffs in the coordination game in terms of these parameters.\nNevertheless, we can still ask how the addition of an off-diagonal entry might affect the outcome of any particular game. As the following example shows, increasing compatibility between two technologies can allow one technology that was not initially epidemic to become so.\nEXAMPLE 6.1. Consider the contagion game played on a thick line graph (see Section 3) with r = 5/32 and q = 3/8. In this case, A is not epidemic, as can be seen by examining Figure 1, since 2r < q and q + r > 1/2. However, if we insert symmetric off-diagonal payoffs x = 1/4, we have a new game, equivalent to a game parameterized by r\u2032 = 5/16 and q\u2032 = 1/4. Since q\u2032 < 1/2 and q\u2032 < 2r\u2032, A is epidemic in this game, and thus also in the game with limited compatibility.\nWe now show that generally, if A is the superior technology (i.e., q < 1/2), adding a compatibility term x can only help A spread.\nTHEOREM 6.2. Let G be a game without compatibility, parameterized by r and q on a particular network. Let G\u2032 be that same game, but with an added symmetric compatibility term x. If A is epidemic for G, then A is epidemic for G\u2032.\nPROOF. We will show that any blocking structure in G\u2032 is also a blocking structure in G. By our characterization theorem, Theorem 4.6, this implies the desired result.
We have that G\u2032 is equivalent to a game without compatibility parameterized by q\u2032 = (q \u2212 x)/(1 \u2212 2x) and r\u2032 = r/(1 \u2212 2x). Consider a blocking structure (SB, SAB) for G\u2032. We know that for any v \u2208 SAB, q\u2032 dSB (v) > r\u2032\u0394. Thus\nqdSB (v) > (q \u2212 x)dSB (v) = q\u2032(1 \u2212 2x)dSB (v) > r\u2032(1 \u2212 2x)\u0394 = r\u0394,\nas required for a blocking structure in G. Similarly, the two blocking structure constraints for v \u2208 SB are only strengthened when we move from G\u2032 to G.\nMore than two technologies. Given the complex structure inherent in contagion games with two technologies, the understanding of contagion games with three or more technologies is largely open. Here we indicate some of the technical issues that come up with multiple technologies, through a series of initial results. The basic set-up we study is one in which two incumbent technologies B and C are initially coexisting, and a third technology A, superior to both, is introduced initially at a finite set of nodes.\nWe first present a theorem stating that for any even \u0394, there is a contagion game on a \u0394-regular graph in which the two incumbent technologies B and C may find it beneficial to increase their compatibility so as to prevent getting wiped out by the new superior technology A. In particular, we consider a situation in which initially, two technologies B and C with zero compatibility are at a stable state. By a stable state, we mean that no finite perturbation of the current states can lead to an epidemic for either B or C. We also have a technology A that is superior to both B and C, and can become epidemic by forcing a single node to choose A. However, by increasing their compatibility, B and C can maintain their stability and resist an epidemic from A.\nLet qA denote the payoffs to two adjacent nodes that both choose technology A, and define qB and qC analogously. We will assume qA > qB > qC.
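As an aside, the two-technology claims above, the re-scaling behind Example 6.1 and the blocking-structure chain in the proof of Theorem 6.2, are easy to verify mechanically. The sketch below is our own harness (function names are ours); it uses exact rationals for the example and a small grid of spot checks for the chain, under the stated assumptions 0 \u2264 x < q < 1/2.

```python
from fractions import Fraction

def rescale(q, r, x):
    """q' = (q - x)/(1 - 2x), r' = r/(1 - 2x): the equivalent game
    without off-diagonal entries."""
    return (q - x) / (1 - 2 * x), r / (1 - 2 * x)

# Example 6.1: q = 3/8, r = 5/32 blocks A (2r < q and q + r > 1/2),
# but with x = 1/4 the equivalent game has q' = 1/4, r' = 5/16,
# and A is epidemic there (q' < 1/2 and q' < 2r').
q, r, x = Fraction(3, 8), Fraction(5, 32), Fraction(1, 4)
qp, rp = rescale(q, r, x)

def chain_holds(q_, r_, x_, d, delta):
    """If a vertex satisfies the blocking constraint q'*d > r'*Delta
    in G', it satisfies q*d > r*Delta in G."""
    qp_, rp_ = rescale(q_, r_, x_)
    return (not qp_ * d > rp_ * delta) or (q_ * d > r_ * delta)

# Spot checks over an arbitrary grid with 0 <= x < q < 1/2.
ok = all(chain_holds(q_, r_, x_, d, delta)
         for q_ in (0.2, 0.35, 0.49)
         for x_ in (0.0, 0.05, 0.15)
         for r_ in (0.05, 0.2, 0.6)
         for d in range(6)
         for delta in range(1, 6)
         if x_ < q_)
```

The chain check is just the displayed inequality chain: q\u00b7d \u2265 (q \u2212 x)\u00b7d = q\u2032(1 \u2212 2x)\u00b7d > r\u2032(1 \u2212 2x)\u0394 = r\u0394.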
We also assume that r, the cost of selecting additional technologies, is sufficiently large so as to ensure that nodes never adopt more than one technology. Finally, we consider a compatibility parameter qBC that represents the payoffs to two adjacent nodes when one selects B and the other selects C. Thus our contagion game is now described by five parameters (G, qA, qB, qC, qBC).\nTHEOREM 6.3. For any even \u0394 \u2265 12, there is a \u0394-regular graph G, an initial state s, and values qA, qB, qC, and qBC, such that\n\u2022 s is an equilibrium in both (G, qA, qB, qC, 0) and (G, qA, qB, qC, qBC),\n\u2022 neither B nor C can become epidemic in either (G, qA, qB, qC, 0) or (G, qA, qB, qC, qBC) starting from state s,\n\u2022 A can become epidemic in (G, qA, qB, qC, 0) starting from state s, and\n\u2022 A cannot become epidemic in (G, qA, qB, qC, qBC) starting from state s.\nPROOF. (Sketch.) Given \u0394, define G by starting with an infinite grid and connecting each node to its nearest \u0394 \u2212 2 neighbors that are in the same row. The initial state s assigns strategy B to even rows and strategy C to odd rows. Let qA = 4k\u00b2 + 4k + 1/2, qB = 2k + 2, qC = 2k + 1, and qBC = 2k + 3/4. The first, third, and fourth claims in the theorem can be verified by checking the corresponding inequalities. The second claim follows from the first and the observation that the alternating rows prevent any plausible epidemic from growing vertically.\nThe above theorem shows that two technologies may both be able to survive the introduction of a new technology by increasing their level of compatibility with each other. As one might expect,\nA B AB\nA (1 \u2212 q; 1 \u2212 q) (x; x) (1 \u2212 q; 1 \u2212 q \u2212 r)\nB (x; x) (q; q) (q; q \u2212 r)\nAB (1 \u2212 q \u2212 r; 1 \u2212 q) (q \u2212 r; q) (max(q, 1 \u2212 q) \u2212 r; max(q, 1 \u2212 q) \u2212 r)\nTable 1: The payoffs in the coordination game.
Entry (x, y) in row i, column j indicates that the row player gets a payoff of x and the column player gets a payoff of y when the row player plays strategy i and the column player plays strategy j.\nthere are cases when increased compatibility between two technologies helps one technology at the expense of the other. Surprisingly, however, there are also instances in which compatibility is in fact harmful to both parties; the next example considers a fixed initial configuration with technologies A, B and C that is at equilibrium when qBC = 0. However, if this compatibility term is increased sufficiently, equilibrium is lost, and A becomes epidemic.\nEXAMPLE 6.4. Consider the union of an infinite two-dimensional grid graph with nodes u(x, y) and an infinite line graph with nodes v(y). Add an edge between u(1, y) and v(y) for all y. For this network, we consider the initial configuration in which all v(y) nodes select A, and node u(x, y) selects B if x < 0 and selects C otherwise.\nWe now define the parameters of this game as follows. Let qA = 3.95, qB = 1.25, qC = 1, and qBC = 0. It is easily verified that for these values, the initial configuration given above is an equilibrium. However, now suppose we increase the coordination term, setting qBC = 0.9. This is not an equilibrium, since each node of the form u(0, y) now has an incentive to switch from C (generating a payoff of 3.9) to B (thereby generating a payoff of 3.95). However, once these nodes have adopted B, the best-response for each node of the form u(1, y) is A (A generates a payoff of 4 whereas B only generates a payoff of 3.95). From here, it is not hard to show that A spreads directly throughout the entire network.\n7. REFERENCES\n[1] L. Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5:387-424, 1993.\n[2] R. L. Cooper (editor). Language spread: Studies in diffusion and social change. Indiana U. Press, 1982.\n[3] N. Economides.
Desirability of Compatibility in the Absence of Network Externalities. American Economic Review, 79(1989), pp. 1165-1181.\n[4] N. Economides. Raising Rivals' Costs in Complementary Goods Markets: LECs Entering into Long Distance and Microsoft Bundling Internet Explorer. NYU Center for Law and Business Working Paper 98-004, 1998.\n[5] G. Ellison. Learning, local interaction, and coordination. Econometrica, 61:1047-1071, 1993.\n[6] G. Faulhaber. Network Effects and Merger Analysis: Instant Messaging and the AOL-Time Warner Case. Telecommunications Policy, Jun/Jul 2002, 26, 311-333.\n[7] M. Jackson and L. Yariv. Diffusion on social networks. Economie Publique, 16:69-82, 2005.\n[8] M. Katz and C. Shapiro. Network Externalities, Competition and Compatibility. American Economic Review, 75(1985), 424-40.\n[9] M. Kearns, L. Ortiz. Algorithms for Interdependent Security Games. NIPS 2003.\n[10] C. R. Knittel and V. Stango. Strategic Incompatibility in ATM Markets. NBER Working Paper No. 12604, October 2006.\n[11] J. Mackie-Mason and J. Metzler. Links Between Markets and Aftermarkets: Kodak (1997). In Kwoka and White eds., The Antitrust Revolution, Oxford, 2004.\n[12] C. Matutes and P. Regibeau. Mix and Match: Product Compatibility without Network Externalities. RAND Journal of Economics, 19(1988), pp. 221-234.\n[13] S. Morris. Contagion. Review of Economic Studies, 67:57-78, 2000.\n[14] E. Rogers. Diffusion of innovations. Free Press, fourth edition, 1995.\n[15] T. Schelling. Micromotives and Macrobehavior. Norton, 1978.\n[16] D. Strang and S. Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265-290, 1998.\n[17] T. Valente. Network Models of the Diffusion of Innovations. Hampton Press, 1995.\n[18] M. Whinston. Tying, Foreclosure, and Exclusion. American Economic Review, 80(1990), 837-59.\n[19] H. Peyton Young.
Individual Strategy and Social Structure: An Evolutionary Theory of Institutions. Princeton University Press, 1998.", "keywords": "potential function;interoperability;diffusion process;diffusion of innovation;contagion on network;game-theoretic diffusion model;limited compatibility;algorithmic game theory;non-convexity property;morris's theorem;bilinguality;contagion threshold;contagion game;strategic incompatibility;innovation diffusion;characterization"}
-{"name": "test_J-8", "title": "Strong Equilibrium in Cost Sharing Connection Games", "abstract": "In this work we study cost sharing connection games, where each player has a source and sink he would like to connect, and the cost of the edges is either shared equally (fair connection games) or in an arbitrary way (general connection games). We study the graph topologies that guarantee the existence of a strong equilibrium (where no coalition can improve the cost of each of its members) regardless of the specific costs on the edges. Our main existence results are the following: (1) For a single source and sink we show that there is always a strong equilibrium (both for fair and general connection games). (2) For a single source multiple sinks we show that for a series parallel graph a strong equilibrium always exists (both for fair and general connection games). (3) For multi source and sink we show that an extension parallel graph always admits a strong equilibrium in fair connection games. As for the quality of the strong equilibrium we show that in any fair connection games the cost of a strong equilibrium is \u0398(log n) from the optimal solution, where n is the number of players. (This should be contrasted with the \u2126(n) price of anarchy for the same setting.) For single source general connection games and single source single sink fair connection games, we show that a strong equilibrium is always an optimal solution.", "fulltext": "1. INTRODUCTION\nComputational game theory has introduced the issue of\nincentives to many of the classical combinatorial\noptimization problems. 
The view that the demand side is often not under the control of a central authority that optimizes the global performance, but rather under the control of individuals with different incentives, has already led to many important insights.\nConsider classical routing and transportation problems such as multicast or multi-commodity problems, which are often viewed as follows. We are given a graph with edge costs and connectivity demands between nodes, and our goal is to find a minimal cost solution. The classical centralized approach assumes that all the individual demands can both be completely coordinated and have no individual incentives. The game theory point of view would assume that each individual demand is controlled by a player that optimizes its own utility, and the resulting outcome could be far from the optimal solution.\nWhen considering individual incentives one needs to discuss the appropriate solution concept. Much of the research in computational game theory has focused on the classical Nash equilibrium as the primary solution concept. Indeed Nash equilibrium has many benefits, and most importantly it always exists (in mixed strategies). However, the solution concept of Nash equilibrium is resilient only to unilateral deviations, while in reality, players may be able to coordinate their actions.\nA strong equilibrium [4] is a state from which no coalition (of any size) can deviate and improve the utility of every member of the coalition (while possibly lowering the utility of players outside the coalition). This resilience to deviations by coalitions of the players is highly attractive, and one can hope that once a strong equilibrium is reached it is highly likely to sustain. From a computational game theory point of view, an additional benefit of a strong equilibrium is that it has a potential to reduce the distance between the optimal solution and the solution obtained as an outcome of selfish behavior.
The strong price of anarchy (SPoA), introduced in [1], is the ratio between the cost of the worst strong equilibrium and the cost of an optimal solution. Obviously, SPoA is meaningful only in those cases where a strong equilibrium exists. A major downside of strong equilibrium is that most games do not admit any strong equilibrium. Even simple classical games like the prisoner's dilemma do not possess any strong equilibrium (which is also an example of a congestion game that does not possess a strong equilibrium\u00b9). This unfortunate fact has reduced the attention given to strong equilibrium, despite its highly attractive properties. Yet, [1] have identified two broad families of games, namely job scheduling and network formation, where a strong equilibrium always exists and the SPoA is significantly lower than the price of anarchy (which is the ratio between the worst Nash equilibrium and the optimal solution [15, 18, 5, 6]).\nIn this work we concentrate on cost sharing connection games, introduced by [3, 2]. In such a game, there is an underlying directed graph with edge costs, and individual users have connectivity demands (between a source and a sink). We consider two models. The fair cost connection model [2] allows each player to select a path from the source to the sink\u00b2. In this game the cost of an edge is shared equally between all the players that selected the edge, and the cost of the player is the sum of its costs on the edges it selected. The general connection game [3] allows each player to offer prices for edges. In this game an edge is bought if the sum of the offers at least covers its cost, and the cost of the player is the sum of its offers on the bought edges (in both games we assume that the player has to guarantee the connectivity between its source and sink).\nIn this work we focus on two important issues.
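The prisoner's dilemma claim above can be verified exhaustively; this is our own sketch with illustrative cost numbers (not taken from the paper), phrased as a cost-minimization game to match the paper's conventions.

```python
# Prisoner's dilemma as a cost-minimization game (illustrative numbers):
# mutual cooperation costs 1 each, mutual defection 2 each, and a lone
# defector pays 0 while the exploited cooperator pays 3.
COST = {('C', 'C'): (1, 1), ('C', 'D'): (3, 0),
        ('D', 'C'): (0, 3), ('D', 'D'): (2, 2)}

def is_nash(profile):
    a, b = profile
    return (COST[profile][0] <= min(COST[(s, b)][0] for s in 'CD') and
            COST[profile][1] <= min(COST[(a, s)][1] for s in 'CD'))

def is_strong(profile):
    # A strong equilibrium must also resist deviations by the grand
    # coalition: no joint move may strictly lower BOTH players' costs.
    if not is_nash(profile):
        return False
    return not any(COST[dev][0] < COST[profile][0] and
                   COST[dev][1] < COST[profile][1]
                   for dev in COST)
```

Here (D, D) is the unique pure Nash equilibrium, yet the coalition of both players can deviate to (C, C) and strictly improve, so no profile is a strong equilibrium.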
The first one is identifying under what conditions the existence of a strong equilibrium is guaranteed, and the second one is the quality of the strong equilibria. For the existence part, we identify families of graph topologies that possess some strong equilibrium for any assignment of edge costs. One can view this separation between the graph topology and the edge costs as a separation between the underlying infrastructure and the costs the players observe when purchasing edges. While one expects the infrastructure to be stable over long periods of time, the costs the players observe can be easily modified over short time periods. Such a topological characterization of the underlying infrastructure provides a network designer with topological conditions that will ensure stability in his network.\nOur results are as follows. For the single commodity case (all the players have the same source and sink), there is a strong equilibrium in any graph (both for fair and general connection games). Moreover, the strong equilibrium is also the optimal solution (namely, the players share a shortest path from the common source to the common sink).\n\u00b9 While any congestion game is known to admit at least one Nash equilibrium in pure strategies [16].\n\u00b2 The fair cost sharing scheme is also attractive from a mechanism design point of view, as it is a strategyproof cost-sharing mechanism [14].
For the case of a single source and multiple sinks (for example, in a multicast tree), we show that in a fair connection game there is a strong equilibrium if the underlying graph is a series parallel graph, and we show an example of a non-series-parallel graph that does not have a strong equilibrium. For the case of multi-commodity (multiple sources and sinks), we show that in a fair connection game if the graph is an extension parallel graph then there is always a strong equilibrium, and we show an example of a series parallel graph that does not have a strong equilibrium. As far as we know, we are the first to provide a topological characterization for equilibrium existence in multi-commodity and single-source network games.\nFor any fair connection game we show that if there exists a strong equilibrium it is at most a factor of \u0398(log n) from the optimal solution, where n is the number of players. This should be contrasted with the \u0398(n) bound that exists for the price of anarchy [2].\nFor single source general connection games, we show that any series parallel graph possesses a strong equilibrium, and we show an example of a graph that does not have a strong equilibrium. In this case we also show that any strong equilibrium is optimal.\nRelated work\nTopological characterizations for single-commodity network games have been recently provided for various equilibrium properties, including equilibrium existence [12, 7, 8], equilibrium uniqueness [10] and equilibrium efficiency [17, 11]. The existence of pure Nash equilibrium in single-commodity network congestion games with player-specific costs or weights was studied in [12].
The existence of strong equilibrium was studied in both utility-decreasing (e.g., routing) and utility-increasing (e.g., fair cost-sharing) congestion games. [7, 8] have provided a full topological characterization for SE existence in single-commodity utility-decreasing congestion games, and showed that a SE always exists if and only if the underlying graph is extension-parallel. [19] have shown that in single-commodity utility-increasing congestion games, the topological characterization is essentially equivalent to parallel links. In addition, they have shown that these results hold for correlated strong equilibria as well (in contrast to the decreasing setting, where correlated strong equilibria might not exist at all). While the fair cost sharing games we study are utility-increasing network congestion games, we derive a different characterization than [19] due to the different assumptions regarding the players' actions.\u00b3\n2. MODEL\n2.1 Game Theory definitions\nA game \u039b = \u27e8N, (\u03a3i), (ci)\u27e9 has a finite set N = {1, . . . , n} of players. Player i \u2208 N has a set \u03a3i of actions, the joint action set is \u03a3 = \u03a31 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 \u03a3n, and a joint action S \u2208 \u03a3 is also called a profile. The cost function of player i is ci : \u03a3 \u2192 R+, which maps the joint action S \u2208 \u03a3 to a non-negative real number.\n\u00b3 In [19] they allow restricting some players from using certain links, even though the links exist in the graph, while we do not allow this, and assume that the available strategies for players are fully represented by the underlying graph.\nLet S = (S1, . . . , Sn) denote the profile of actions taken by the players, and let S\u2212i = (S1, . . . , Si\u22121, Si+1, . . . , Sn) denote the profile of actions taken by all players other than player i.
Note that S = (Si, S\u2212i).\nThe social cost of a game \u039b is the sum of the costs of the players, and we denote by OPT(\u039b) the minimal social cost of a game \u039b, i.e., OPT(\u039b) = minS\u2208\u03a3 cost\u039b(S), where cost\u039b(S) = \u2211i\u2208N ci(S).\nA joint action S \u2208 \u03a3 is a pure Nash equilibrium if no player i \u2208 N can benefit from unilaterally deviating from his action to another action, i.e., \u2200i \u2208 N \u2200S\u2032i \u2208 \u03a3i : ci(S\u2212i, S\u2032i) \u2265 ci(S). We denote by NE(\u039b) the set of pure Nash equilibria in the game \u039b.\nResilience to coalitions: A pure deviation of a set of players \u0393 \u2282 N (also called a coalition) specifies an action for each player in the coalition, i.e., \u03b3 \u2208 \u00d7i\u2208\u0393\u03a3i. A joint action S \u2208 \u03a3 is not resilient to a pure deviation of a coalition \u0393 if there is a pure joint action \u03b3 of \u0393 such that ci(S\u2212\u0393, \u03b3) < ci(S) for every i \u2208 \u0393 (i.e., the players in the coalition can deviate in such a way that each player in the coalition reduces its cost). A pure Nash equilibrium S \u2208 \u03a3 is a k-strong equilibrium if there is no coalition \u0393 of size at most k such that S is not resilient to a pure deviation by \u0393. We denote by k-SE(\u039b) the set of k-strong equilibria in the game \u039b. We denote by SE(\u039b) the set of n-strong equilibria, and call S \u2208 SE(\u039b) a strong equilibrium (SE).\nNext we define the Price of Anarchy [9], Price of Stability [2], and their extensions, the Strong Price of Anarchy and the Strong Price of Stability. The Price of Anarchy (PoA) is the ratio between the maximal cost of a pure Nash equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208NE(\u039b) cost\u039b(S)/OPT(\u039b).
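The pure Nash equilibrium definition above can be turned into a brute-force enumerator for finite games. This is our own illustrative sketch (the function and the toy fair-sharing instance are ours, not the paper's): for each profile, check that no unilateral deviation strictly lowers the deviator's cost.

```python
from itertools import product

def pure_nash(strategies, cost):
    """Brute-force the pure Nash equilibria of a finite cost game.
    strategies[i] is player i's action set; cost(i, profile) is c_i(S)."""
    n = len(strategies)
    found = []
    for profile in product(*strategies):
        if all(cost(i, profile) <= cost(i, profile[:i] + (s,) + profile[i + 1:])
               for i in range(n) for s in strategies[i]):
            found.append(profile)
    return found

# Toy fair-sharing instance: two players, two parallel s-t edges of cost
# 2 and 3; each player picks one edge and splits its cost with whoever
# shares it.
edge_cost = {'cheap': 2.0, 'dear': 3.0}

def share_cost(i, profile):
    return edge_cost[profile[i]] / sum(1 for e in profile if e == profile[i])
```

In this instance both "all on the cheap edge" and "all on the dear edge" are pure Nash equilibria, a small illustration of why the PoA and PoS ratios defined next can differ.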
Similarly, the Price of Stability (PoS) is the ratio between the minimal cost of a pure Nash equilibrium and the social optimum, i.e., minS\u2208NE(\u039b) cost\u039b(S)/OPT(\u039b). The k-Strong Price of Anarchy (k-SPoA) is the ratio between the maximal cost of a k-strong equilibrium (assuming one exists) and the social optimum, i.e., maxS\u2208k-SE(\u039b) cost\u039b(S)/OPT(\u039b). The SPoA is the n-SPoA. Similarly, the Strong Price of Stability (SPoS) is the ratio between the minimal cost of a pure strong equilibrium and the social optimum, i.e., minS\u2208SE(\u039b) cost\u039b(S)/OPT(\u039b). Note that both k-SPoA and SPoS are defined only if some strong equilibrium exists.\n2.2 Cost Sharing Connection Games\nA cost sharing connection game has an underlying directed graph G = (V, E) where each edge e \u2208 E has an associated cost ce \u2265 0\u2074. In a connection game each player i \u2208 N has an associated source si and sink ti.\nIn a fair connection game the actions \u03a3i of player i include all the paths from si to ti. The cost of each edge is shared equally by the set of all players whose paths contain it. Given a joint action, the cost of a player is the sum of his costs on the edges he selected. More formally, the cost function of each player on an edge e, in a joint action S, is fe(ne(S)) = ce/ne(S), where ne(S) is the number of players that selected a path containing edge e in S. The cost of player i, when selecting path Qi \u2208 \u03a3i, is ci(S) = \u2211e\u2208Qi fe(ne(S)).\n\u2074 In some of the existence proofs, we assume that ce > 0 for simplicity. The full version contains the complete proofs for the case ce \u2265 0.\nIn a general connection game the actions \u03a3i of player i are payment vectors pi, where pi(e) is how much player i is offering to contribute to the cost of edge e.\u2075 Given a profile p, any edge e such that \u2211i pi(e) \u2265 ce is considered bought, and Ep denotes the set of bought edges.
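The fair-sharing cost function f_e(n_e(S)) = c_e / n_e(S) just defined is straightforward to compute from a profile of chosen paths; a minimal sketch (helper name and example instance are ours):

```python
from collections import Counter

def fair_costs(paths, edge_cost):
    """Player costs under fair sharing: each edge's cost c_e is split
    equally among the n_e(S) players whose chosen path uses it."""
    n_e = Counter(e for path in paths for e in set(path))
    return [sum(edge_cost[e] / n_e[e] for e in set(path)) for path in paths]

# Example: players 1 and 2 both route over edge a (cost 6), so each pays
# 6/2 = 3; player 3 uses edge b (cost 2) alone and pays its full cost.
costs = fair_costs([['a'], ['a'], ['b']], {'a': 6.0, 'b': 2.0})
```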
Let Gp = (V, Ep) denote the graph bought by the players for profile p = (p1, . . . , pn). Clearly, each player tries to minimize his total payment, which is ci(p) = \u2211e\u2208Ep pi(e) if si is connected to ti in Gp, and infinity otherwise.\u2076 We denote by c(p) = \u2211i ci(p) the total cost under the profile p. For a subgraph H of G we denote the total cost of the edges in H by c(H).\nA symmetric connection game implies that the source and sink of all the players are identical. (We also call a symmetric connection game a single source single sink connection game, or a single commodity connection game.) A single source connection game implies that the sources of all the players are identical. Finally, a multi-commodity connection game implies that each player has its own source and sink.\n2.3 Extension Parallel and Series Parallel Directed Graphs\nOur directed graphs are acyclic, and have a source node (from which all nodes are reachable) and a sink node (which every node can reach). We first define the following operations for the composition of directed graphs.\n\u2022 Identification: The identification operation allows us to collapse two nodes into one. More formally, given a graph G = (V, E) we define the identification of nodes v1 \u2208 V and v2 \u2208 V forming a new node v as creating a new graph G\u2032 = (V\u2032, E\u2032), where V\u2032 = V \u2212 {v1, v2} \u222a {v} and E\u2032 includes the edges of E, where the edges of v1 and v2 are now connected to v.\n\u2022 Parallel composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1||G2 as follows. Let G\u2032 = (V1 \u222a V2, E1 \u222a E2) be the union graph.
To create G = G1||G2 we identify the sources s1 and s2, forming a new source node s, and identify the sinks t1 and t2, forming a new sink t.\n\u2022 Series composition: Given two directed graphs, G1 = (V1, E1) and G2 = (V2, E2), with sources s1 \u2208 V1 and s2 \u2208 V2 and sinks t1 \u2208 V1 and t2 \u2208 V2, respectively, we define a new graph G = G1 \u2192 G2 as follows. Let G\u2032 = (V1 \u222a V2, E1 \u222a E2) be the union graph. To create G = G1 \u2192 G2 we identify the vertices t1 and s2, forming a new vertex u. The graph G has a source s = s1 and a sink t = t2.\n\u2022 Extension composition: A series composition when one of the graphs, G1 or G2, is composed of a single directed edge is an extension composition, and we denote it by G = G1 \u2192e G2.\nAn extension parallel graph (EPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2, or (3) a graph G = G1 \u2192e G2, where G1 and G2 are extension parallel graphs (and in the extension composition either G1 or G2 is a single edge). A series parallel graph (SPG) is a graph G consisting of either: (1) a single directed edge (s, t), (2) a graph G = G1||G2, or (3) a graph G = G1 \u2192 G2, where G1 and G2 are series parallel graphs.\n\u2075 We limit the players to select a path connecting si to ti and to pay only on those edges.\n\u2076 This implies that in equilibrium every player has its sink and source connected by a path in Gp.\nGiven a path Q and two vertices u, v on Q, we denote the subpath of Q from u to v by Qu,v. The following lemma, whose proof appears in the full version, will be the main topological tool in the single source case.\nLemma 2.1. Let G be an SPG with source s and sink t. Given a path Q from s to t, and a vertex t\u2032, there exists a vertex y \u2208 Q such that for any path Q\u2032 from s to t\u2032, the path Q\u2032 contains y and the paths Qy,t and Q\u2032 are edge disjoint. (We call the vertex y the intersecting vertex of Q and t\u2032.)\n3.
FAIR CONNECTION GAMES\nThis section derives our results for fair connection games.\n3.1 Existence of Strong Equilibrium\nWhile it is known that every fair connection game possesses a Nash equilibrium in pure strategies [2], this is not necessarily the case for a strong equilibrium. In this section, we study the existence of strong equilibrium in fair connection games. We begin with a simple case, showing that every symmetric fair connection game possesses a strong equilibrium.\nTheorem 3.1. In every symmetric fair connection game there exists a strong equilibrium.\nProof. Let s be the source and t be the sink of all the players. We show that a profile S in which all the players choose the same shortest path Q (from the source s to the sink t) is a strong equilibrium. Suppose by contradiction that S is not a SE. Then there is a coalition \u0393 that can deviate to a new profile S\u2032 such that the cost of every player j \u2208 \u0393 decreases. Let Q\u2032j be the new path used by player j \u2208 \u0393. Since Q is a shortest path, it holds that c(Q\u2032j \\ (Q \u2229 Q\u2032j)) \u2265 c(Q \\ (Q \u2229 Q\u2032j)), for any path Q\u2032j. Therefore for every player j \u2208 \u0393 we have that cj(S\u2032) \u2265 cj(S). However, this contradicts the fact that all players in \u0393 reduce their cost. (In fact, no player in \u0393 has reduced its cost.)\nWhile every symmetric fair connection game admits a SE, this does not hold for every fair connection game. In what follows, we study the network topologies that admit a strong equilibrium for any assignment of edge costs, and give examples of topologies for which a strong equilibrium does not exist. The following lemma, whose proof appears in the full version, plays a major role in our proofs of the existence of SE.\nLemma 3.2. Let \u039b be a fair connection game on a series parallel graph G with source s and sink t. Assume that player i has si = s and ti = t and that \u039b has some SE.
Let S be a SE that minimizes the cost of player i (out of all SE), i.e., ci(S) = minT\u2208SE(\u039b) ci(T), and let S\u2217 be the profile that minimizes the cost of player i (out of all possible profiles), i.e., ci(S\u2217) = minT\u2208\u03a3 ci(T). Then, ci(S) = ci(S\u2217).\nThe next lemma considers parallel composition.\nLemma 3.3. Let \u039b be a fair connection game on a graph G = G1||G2, where G1 and G2 are series parallel graphs. If every fair connection game on the graphs G1 and G2 possesses a strong equilibrium, then the game \u039b possesses a strong equilibrium.\nProof. Let G1 = (V1, E1) and G2 = (V2, E2) have sources s1 and s2 and sinks t1 and t2, respectively. Let Ti be the set of players with an endpoint in Vi \\ {s, t}, for i \u2208 {1, 2}. (An endpoint is either a source or a sink of a player.) Let T3 be the set of players j such that sj = s and tj = t. Let \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively.\nLet S\u2032 and S\u2032\u2032 be the SE in \u039b1 and \u039b2 that minimize the cost of players in T3, respectively. Assume w.l.o.g. that ci(S\u2032) \u2264 ci(S\u2032\u2032) where player i \u2208 T3. In addition, let \u039b\u20322 be the game on the graph G2 with players T2 and let \u00afS be a SE in \u039b\u20322.\nWe will show that the profile S = S\u2032 \u222a \u00afS is a SE in \u039b. Suppose by contradiction that S is not a SE. Then, there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases. By Lemma 3.2 and the assumption that ci(S\u2032) \u2264 ci(S\u2032\u2032), a player j \u2208 T3 cannot improve his cost. Therefore, \u0393 \u2286 T1 \u222a T2. But this is a contradiction to S\u2032 being a SE in \u039b1 or \u00afS being a SE in \u039b\u20322.\nThe following theorem considers the case of single source fair connection games.\nTheorem 3.4. Every single source fair connection game on a series-parallel graph possesses a strong equilibrium.\nProof.
We prove the theorem by induction on the network size |V|. The claim obviously holds if |V| = 2. We show the claim for a series composition, i.e., G = G1 \u2192 G2, and for a parallel composition, i.e., G = G1||G2, where G1 = (V1, E1) and G2 = (V2, E2) are SPGs with sources s1, s2, and sinks t1, t2, respectively.\nSeries composition. Let G = G1 \u2192 G2. Let T1 be the set of players j such that tj \u2208 V1, and T2 be the set of players j such that tj \u2208 V2 \\ {s2}.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T2 and T2, respectively. For every player i \u2208 T2 with action Si in the game \u039b, let Si \u2229 E1 be his induced action in the game \u039b1, and let Si \u2229 E2 be his induced action in the game \u039b2.\nLet S' be a SE in \u039b1 that minimizes the cost of players in T2 (such a SE exists by the induction hypothesis and Lemma 3.2). Let S'' be any SE in \u039b2. We will show that the profile S = S' \u222a S'' is a SE in the game \u039b, i.e., for player j \u2208 T2 we use the profile Sj = S'j \u222a S''j.\nSuppose by contradiction that S is not a SE. Then, there is a coalition \u0393 that can deviate such that the cost of every player j \u2208 \u0393 decreases. Now, there are two cases:\nCase 1: \u0393 \u2286 T1. This is a contradiction to S' being a SE.\nCase 2: There exists a player j \u2208 \u0393 \u2229 T2. By Lemma 3.2, player j cannot improve his cost in \u039b1, so the improvement is due to \u039b2. Consider the coalition \u0393 \u2229 T2; it would still improve its cost. However, this contradicts the fact that S'' is a SE in \u039b2.\nParallel composition. Follows from Lemma 3.3.\nWhile multi-commodity fair connection games on series-parallel graphs do not necessarily possess a SE (see Theorem 3.6), fair connection games on extension parallel graphs always possess a strong equilibrium.\nTheorem 3.5.
Every fair connection game on an extension parallel graph possesses a strong equilibrium.\nFigure 1: Graph topologies.\nProof. We prove the theorem by induction on the network size |V|. Let \u039b be a fair connection game on an EPG G = (V, E). The claim obviously holds if |V| = 2. If the graph G is a parallel composition of two EPG graphs G1 and G2, then the claim follows from Lemma 3.3. It remains to prove the claim for extension composition. Suppose the graph G is an extension composition of the graph G1, consisting of a single edge e = (s1, t1), and an EPG G2 = (V2, E2) with terminals s2, t2, such that s = s1 and t = t2. (The case that G2 is a single edge is similar.)\nLet T1 be the set of players with source s1 and sink t1 (i.e., their path is in G1). Let T2 be the set of players with source and sink in G2. Let T3 be the set of players with source s1 and sink in V2 \\ {t1}.\nLet \u039b1 and \u039b2 be the original game on the respective graphs G1 and G2 with players T1 \u222a T3 and T2 \u222a T3, respectively. Let S', S'' be SE in \u039b1 and \u039b2, respectively. We will show that the profile S = S' \u222a S'' is a SE in the game \u039b. Suppose by contradiction that S is not a SE. Then, there is a coalition \u0393 of minimal size that can deviate such that the cost of any player j \u2208 \u0393 decreases. Clearly, T1 \u2229 \u0393 = \u2205, since players in T1 have a single strategy. Hence, \u0393 \u2286 T2 \u222a T3. A player j \u2208 T2 \u222a T3 cannot improve his cost in \u039b1. Therefore, every player j \u2208 \u0393 must improve his cost in \u039b2. However, this contradicts the fact that S'' is a SE in \u039b2.\nIn the following theorem we provide a few examples of topologies in which a strong equilibrium does not exist, showing that our characterization is almost tight.\nTheorem 3.6.
The following connection games exist: (1) There exists a multi-commodity fair connection game on a series-parallel graph that does not possess a strong equilibrium. (2) There exists a single source fair connection game that does not possess a strong equilibrium.\nProof. For claim (1) consider the graph depicted in Figure 1(a). This game has a unique NE where S1 = {e, c}, S2 = {b, f}, and each player has a cost of 5.7 However, consider the following coordinated deviation S': S'1 = {a, b, c} and S'2 = {b, c, d}. In this profile, each player pays a cost of 4, and thus improves his cost.\n7 In any NE of the game, player 1 will buy the edge e and player 2 will buy the edge f. This is since the alternate path, in the respective part, will cost the player 2.5. Thus, player 1 (player 2) will buy the edge c (edge b) alone, and each player will have a cost of 5.\nFigure 2: Example of a single source connection game that does not admit SE.\nFor claim (2) consider a single source fair connection game on the graph G depicted in Figure 2. There are two players. Player i = 1, 2 wishes to connect the source s to its sink ti, and the unique NE is S1 = {a, b}, S2 = {a, c}, and each player has a cost of 2.8 Then, both players can deviate to S'1 = {h, f, d} and S'2 = {h, f, e}, and decrease their costs to 2 \u2212 \u03b5/2.\nUnfortunately, our characterization is not completely tight. The graph in Figure 1(b) is an example of a non-extension parallel graph which always admits a strong equilibrium.\n3.2 Strong Price of Anarchy\nWhile the price of anarchy in fair connection games can be as bad as n, the following theorem shows that the strong price of anarchy is bounded by H(n) = \u03a3_{i=1}^{n} 1/i = \u0398(log n).\nTheorem 3.7. The strong price of anarchy of a fair connection game with n players is at most H(n).\nProof.
Let \u039b be a fair connection game on the graph G. We denote by \u039b(\u0393) the game played on the graph G by a set of players \u0393, where the action of player i \u2208 \u0393 remains \u03a3i (the same as in \u039b). Let S = (S1, . . . , Sn) be a profile in the game \u039b. We denote by S(\u0393) = S\u0393 the induced profile of players in \u0393 in the game \u039b(\u0393). Let ne(S(\u0393)) denote the load of edge e under the profile S(\u0393) in the game \u039b(\u0393), i.e., ne(S(\u0393)) = |{j | j \u2208 \u0393, e \u2208 Sj}|. Similar to congestion games [16, 13], we denote by \u03a6(S(\u0393)) the potential function of the profile S(\u0393) in the game \u039b(\u0393), where \u03a6(S(\u0393)) = \u03a3_{e\u2208E} \u03a3_{j=1}^{ne(S(\u0393))} fe(j), and define \u03a6(S(\u2205)) = 0. In our case, it holds that\n\u03a6(S) = \u03a3_{e\u2208E} ce \u00b7 H(ne(S)). (1)\nLet S be a SE, and let S\u2217 be the profile of the optimal solution. We define an order on the players as follows. Let \u0393n = {1, ..., n} be the set of all the players. For each k = n, . . . , 1, since S is a SE, there exists a player in \u0393k, w.l.o.g. call it player k, such that\nck(S) \u2264 ck(S_{\u2212\u0393k}, S\u2217_{\u0393k}). (2)\nIn this way, \u0393k is defined recursively, such that for every k = n, . . . , 2 it holds that \u0393k\u22121 = \u0393k \\ {k}. (I.e., after the renaming, \u0393k = {1, . . . , k}.)\n8 We can show that this is the unique NE by a simple case analysis: (i) If S1 = {h, f, d} and S2 = {h, f, e}, then player 1 can deviate to S'1 = {h, g} and decrease his cost. (ii) If S1 = {h, g} and S2 = {h, f, e}, then player 2 can deviate to S'2 = {a, c} and decrease his cost. (iii) If S1 = {h, g} and S2 = {a, c}, then player 1 can deviate to S'1 = {a, b} and decrease his cost.\nLet ck(S(\u0393k)) denote the cost of player k in the game \u039b(\u0393k) under the induced profile S(\u0393k).
It is easy to see that ck(S(\u0393k)) = \u03a6(S(\u0393k)) \u2212 \u03a6(S(\u0393k\u22121)).9 Therefore,\nck(S) \u2264 ck(S_{\u2212\u0393k}, S\u2217_{\u0393k}) \u2264 ck(S\u2217(\u0393k)) = \u03a6(S\u2217(\u0393k)) \u2212 \u03a6(S\u2217(\u0393k\u22121)). (3)\nSumming over all players, we obtain:\n\u03a3_{i\u2208N} ci(S) \u2264 \u03a6(S\u2217(\u0393n)) \u2212 \u03a6(S\u2217(\u2205)) = \u03a6(S\u2217(\u0393n)) = \u03a3_{e\u2208S\u2217} ce \u00b7 H(ne(S\u2217)) \u2264 \u03a3_{e\u2208S\u2217} ce \u00b7 H(n) = H(n) \u00b7 OPT(\u039b),\nwhere the first inequality follows since the sum of the right hand side of equation (3) telescopes, and the second equality follows from equation (1).\nNext we bound the SPoA when coalitions of size at most k are allowed.\nTheorem 3.8. The k-SPoA of a fair connection game with n players is at most (n/k) \u00b7 H(k).\nProof. Let S be a SE of \u039b, and S\u2217 be the profile of the optimal solution of \u039b. To simplify the proof, we assume that n/k is an integer. We partition the players into n/k groups T1, . . . , T_{n/k}, each of size k. Let \u039bj be the game on the graph G played by the set of players Tj. Let S(Tj) denote the profile of the k players in Tj in the game \u039bj induced by the profile S of the game \u039b. By Theorem 3.7, it holds that for each game \u039bj, j = 1, . . . , n/k,\ncost_{\u039bj}(S(Tj)) = \u03a3_{i\u2208Tj} ci(S(Tj)) \u2264 H(k) \u00b7 OPT(\u039bj) \u2264 H(k) \u00b7 OPT(\u039b).\nSumming over all games \u039bj, j = 1, . . . , n/k,\ncost_\u039b(S) \u2264 \u03a3_{j=1}^{n/k} cost_{\u039bj}(S(Tj)) \u2264 (n/k) \u00b7 H(k) \u00b7 OPT(\u039b),\nwhere the first inequality follows since for each group Tj and player i \u2208 Tj, it holds that ci(S) \u2264 ci(S(Tj)).\nNext we show an almost matching lower bound. (The lower bound is at most H(n) = O(log n) from the upper bound, and both for k = O(1) and k = \u2126(n) the difference is only a constant.)\nTheorem 3.9.
For fair connection games with n players, k-SPoA \u2265 max{n/k, H(n)}.\n9 This follows since for any strategy profile S, if a single player k deviates to strategy S'k, then the change in the potential value, \u03a6(S) \u2212 \u03a6(S'k, S_{\u2212k}), is exactly the change in the cost of player k.\nFigure 3: Example of a network topology in which SPoS > PoS.\nProof. For the lower bound of H(n) we observe that in the example presented in [2], the unique Nash equilibrium is also a strong equilibrium, and therefore k-SPoA = H(n) for any 1 \u2264 k \u2264 n. For the lower bound of n/k, consider a graph composed of two parallel links of costs 1 and n/k. Consider the profile S in which all n players use the link of cost n/k. The cost of each player is 1/k, while if any coalition of size at most k deviates to the link of cost 1, the cost of each player is at least 1/k. Therefore, the profile S is a k-SE, and k-SPoA = n/k.\nThe results of Theorems 3.7 and 3.8 can be extended to concave cost functions. Consider the extended fair connection game, where each edge has a cost ce(ne) which depends on the number of players using that edge. We assume that the cost function ce(ne) is a nondecreasing, concave function. Note that the cost of an edge ce(ne) might increase with the number of players using it, but the cost per player fe(ne) = ce(ne)/ne decreases when ce(ne) is concave.\nTheorem 3.10. The strong price of anarchy of a fair connection game with nondecreasing concave edge cost functions and n players is at most H(n).\nProof. The proof is analogous to the proof of Theorem 3.7. For the proof we show that cost(S) \u2264 \u03a6(S\u2217) \u2264 H(n) \u00b7 cost(S\u2217). We first show the first inequality. Since the function ce(x) is concave, the cost per player ce(x)/x is a nonincreasing function. Therefore inequality (3) in the proof of Theorem 3.7 holds.
Summing inequality (3) over all players, we obtain cost(S) = \u03a3_i ci(S) \u2264 \u03a6(S\u2217(\u0393n)) \u2212 \u03a6(S\u2217(\u2205)) = \u03a6(S\u2217). The second inequality follows since ce(x) is nondecreasing and therefore \u03a3_{x=1}^{ne} (ce(x)/x) \u2264 H(ne) \u00b7 ce(ne).\nUsing the arguments in the proof of Theorem 3.10 and the proof of Theorem 3.8 we derive:\nTheorem 3.11. The k-SPoA of a fair connection game with nondecreasing concave edge cost functions and n players is at most (n/k) \u00b7 H(k).\nSince the set of strong equilibria is contained in the set of Nash equilibria, it must hold that SPoA \u2264 PoA, meaning that the SPoA can only be improved compared to the PoA. However, with respect to the price of stability the opposite direction holds, that is, SPoS \u2265 PoS. We next show that there exists a fair connection game in which the inequality is strict.\nFigure 4: Example of a single source general connection game that does not admit a strong equilibrium. The edges that are not labeled with costs have a cost of zero.\nTheorem 3.12. There exists a fair connection game in which SPoS > PoS.\nProof. Consider a single source fair connection game on the graph G depicted in Figure 3.10 Player i = 1, . . . , n wishes to connect the source s to his sink ti. Assume that each player i = 1, . . . , n \u2212 2 has his own path of cost 1/i from s to ti and players i = n \u2212 1, n have a joint path of cost 2/n from s to ti. Additionally, all players can share a common path of cost 1 + \u03b5 for some small \u03b5 > 0. The optimal solution connects all players through the common path of cost 1 + \u03b5, and this is also a Nash equilibrium with total cost 1 + \u03b5. It is easy to verify that the solution where each player i = 1, . . .
, n \u2212 2 uses his own path and players i = n \u2212 1, n use their joint path is the unique strong equilibrium of this game, with total cost \u03a3_{i=1}^{n\u22122} 1/i + 2/n = \u0398(log n).\nWhile the example above shows that the SPoS may be greater than the PoS, the upper bound of H(n) = \u0398(log n), proven for the PoS [2], serves as an upper bound for the SPoS as well. This is a direct corollary from Theorem 3.7, as SPoS \u2264 SPoA by definition.\nCorollary 3.13. The strong price of stability of a fair connection game with n players is at most H(n) = O(log n).\n4. GENERAL CONNECTION GAMES\nIn this section, we derive our results for general connection games.\n4.1 Existence of Strong Equilibrium\nWe begin with a characterization of the existence of a strong equilibrium in symmetric general connection games. Similar to Theorem 3.1 (using a similar proof) we establish:\nTheorem 4.1. In every symmetric general connection game there exists a strong equilibrium.\nWhile every single source general connection game possesses a pure Nash equilibrium [3], it does not necessarily admit some strong equilibrium.11\n10 This is a variation on the example given in [2].\n11 We thank Elliot Anshelevich, whose similar topology for the fair-connection game inspired this example.\nTheorem 4.2. There exists a single source general connection game that does not admit any strong equilibrium.\nProof. Consider a single source general connection game with 3 players on the graph depicted in Figure 4. Player i wishes to connect the source s with its sink ti. We need to consider only the NE profiles: (i) if all three players use the link of cost 3, then there must be two agents whose total sum of payments exceeds 2, thus they can both reduce their costs by deviating to an edge of cost 2 \u2212 \u03b5.
(ii) if two of the players use an edge of cost 2 \u2212 \u03b5 jointly, and the third player uses a different edge of cost 2 \u2212 \u03b5, then the players with non-zero payments can deviate to the path with the edge of cost 3 and reduce their costs (since before the deviation the total payments of the players is 4 \u2212 2\u03b5). We showed that none of the NE are SE, and thus the game does not possess any SE.\nNext we show that for the class of series-parallel graphs, there is always a strong equilibrium in the case of a single source.\nTheorem 4.3. In every single source general connection game on a series-parallel graph, there exists a strong equilibrium.\nProof. Let \u039b be a single source general connection game on a SPG G = (V, E) with source s and sink t. We present an algorithm that constructs a specific SE. We first consider the following partial order between the players. For players i and j, we have that i \u2192 j if there is a directed path from ti to tj. We complete the partial order to a full order (in an arbitrary way), and w.l.o.g. we assume that 1 \u2192 2 \u2192 \u00b7 \u00b7 \u00b7 \u2192 n.\nThe algorithm COMPUTE-SE considers the players in an increasing order, starting with player 1. Each player i will fully buy a subset of the edges, and any player j > i will consider the cost of those (bought) edges as zero. When COMPUTE-SE considers player j, the cost of the edges that players 1 to j \u2212 1 have bought is set to zero, and player j fully buys a shortest path Qj from s to tj. Namely, for every edge e \u2208 Qj \\ \u222a_{i<j} Qi ... i pays for any edge on any path from s to ti. Consider a player k > i and let \u00afQk = Qk \u222a Q'k, where Q'k is a path connecting tk to t. Let yk be the intersecting vertex of \u00afQk and ti. Since there exists a path from s to yk that was fully paid for by players j < k before the deviation, in particular the path Qi_{s,yk}, player k will not pay for any edge on any path connecting s and yk.
Therefore player i fully pays for all edges on the path \u00afQi_{y,ti}, i.e., \u00afpi(e) = ce for all edges e \u2208 \u00afQi_{y,ti}. Now consider the algorithm COMPUTE-SE at the step when player i selects a shortest path from the source s to its sink ti and determines his payment pi. At this point, player i could buy the path \u00afQi_{y,ti}, since a path from s to y was already paid for by players j < i. Hence, ci(\u00afp) \u2265 ci(p). This contradicts the fact that player i improved his cost, and therefore not all the players in \u0393 reduce their cost. This implies that p is a strong equilibrium.\n4.2 Strong Price of Anarchy\nWhile for every single source general connection game it holds that PoS = 1 [3], the price of anarchy can be as large as n, even for two parallel edges. Here, we show that any strong equilibrium in single source general connection games yields the optimal cost.\nTheorem 4.4. In a single source general connection game, if there exists a strong equilibrium, then the strong price of anarchy is 1.\nProof. Let p = (p1, . . . , pn) be a strong equilibrium, and let T\u2217 be the minimum cost Steiner tree on all players, rooted at the (single) source s. Let T\u2217_e be the subtree of T\u2217 disconnected from s when edge e is removed. Let \u0393(Te) be the set of players which have sinks in Te. For a set of edges E, let c(E) = \u03a3_{e\u2208E} ce. Let P(Te) = \u03a3_{i\u2208\u0393(Te)} ci(p).\nAssume by way of contradiction that c(p) > c(T\u2217). We will show that there exists a subtree T' of T\u2217 that connects a subset of players \u0393 \u2286 N, and a new set of payments \u00afp, such that for each i \u2208 \u0393, ci(\u00afp) < ci(p). This will contradict the assumption that p is a strong equilibrium.\nFirst we show how to find a subtree T' of T\u2217 such that for any edge e, the payments of players with sinks in T\u2217_e are more than the cost of T\u2217_e \u222a {e}.
To build T', define an edge e to be bad if the cost of T\u2217_e \u222a {e} is at least the payments of the players with sinks in T\u2217_e, i.e., c(T\u2217_e \u222a {e}) \u2265 P(T\u2217_e). Let B be the set of bad edges. We define T' to be T\u2217 \u2212 \u222a_{e\u2208B}(T\u2217_e \u222a {e}). Note that we can find a subset B' of B such that \u222a_{e\u2208B}(T\u2217_e \u222a {e}) is equal to \u222a_{e\u2208B'}(T\u2217_e \u222a {e}) and for any e1, e2 \u2208 B' we have T\u2217_{e1} \u2229 T\u2217_{e2} = \u2205. (The set B' will include any edge e \u2208 B for which there is no other edge e' \u2208 B on the path from e to the source s.) Considering the edges e \u2208 B', we can see that any subtree T\u2217_e we delete from T' cannot decrease the difference between the payments and the cost of the remaining tree. Therefore, in T', for every edge e, we have that c(Te \u222a {e}) < P(Te).\nNow we have a tree T' and our coalition will be \u0393(T'). What remains is to find payments \u00afp for the players in \u0393(T') such that they will buy the tree T' and every player in \u0393(T') will lower his cost, i.e., ci(p) > ci(\u00afp) for i \u2208 \u0393(T'). (Recall that the payments have the restriction that player i can only pay for edges on the path from s to ti.)\nWe will now define the coalition payments \u00afp. Let ci(\u00afp, Te) = \u03a3_{e\u2208Te} \u00afpi(e) be the payments of player i for the subtree Te. We will show that for every subtree Te, ci(\u00afp, Te \u222a {e}) < ci(p), and hence ci(\u00afp) < ci(p). Consider the following bottom-up process that defines \u00afp. We assign the payments of edge e in T' after we assign payments to all the edges in Te. This implies that when we assign payments for e, the sum of the payments in Te is equal to c(Te) = \u03a3_{i\u2208\u0393(Te)} ci(\u00afp, Te). Since e was not a bad edge, we know that c(Te \u222a {e}) = c(Te) + ce < P(Te).
Therefore, we can update the payments \u00afp of players i \u2208 \u0393(Te) by setting \u00afpi(e) = ce \u00b7 \u2206i / (\u03a3_{j\u2208\u0393(Te)} \u2206j), where \u2206j = cj(p) \u2212 cj(\u00afp, Te). After the update we have, for player i \u2208 \u0393(Te),\nci(\u00afp, Te \u222a {e}) = ci(\u00afp, Te) + \u00afpi(e) = ci(\u00afp, Te) + \u2206i \u00b7 ce / (\u03a3_{j\u2208\u0393(Te)} \u2206j) = ci(p) \u2212 \u2206i \u00b7 (1 \u2212 ce / (P(\u0393(Te)) \u2212 c(Te))),\nwhere we used the fact that \u03a3_{j\u2208\u0393(Te)} \u2206j = P(\u0393(Te)) \u2212 c(Te). Since ce < P(\u0393(Te)) \u2212 c(Te), it follows that ci(\u00afp, Te \u222a {e}) < ci(p).\n5. REFERENCES\n[1] N. Andelman, M. Feldman, and Y. Mansour. Strong Price of Anarchy. In SODA '07, 2007.\n[2] E. Anshelevich, A. Dasgupta, J. M. Kleinberg, \u00c9. Tardos, T. Wexler, and T. Roughgarden. The price of stability for network design with fair cost allocation. In FOCS, pages 295-304, 2004.\n[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler. Near-Optimal Network Design with Selfish Agents. In STOC '03, 2003.\n[4] R. Aumann. Acceptable Points in General Cooperative n-Person Games. In Contributions to the Theory of Games, volume 4, 1959.\n[5] A. Czumaj and B. V\u00f6cking. Tight bounds for worst-case equilibria. In SODA, pages 413-420, 2002.\n[6] A. Fabrikant, A. Luthra, E. Maneva, C. Papadimitriou, and S. Shenker. On a network creation game. In ACM Symposium on Principles of Distributed Computing (PODC), 2003.\n[7] R. Holzman and N. Law-Yone. Strong equilibrium in congestion games. Games and Economic Behavior, 21:85-101, 1997.\n[8] R. Holzman and N. L.-Y. (Lev-tov). Network structure and strong equilibrium in route selection games. Mathematical Social Sciences, 46:193-205, 2003.\n[9] E. Koutsoupias and C. H. Papadimitriou. Worst-case equilibria. In STACS, pages 404-413, 1999.\n[10] I. Milchtaich. Topological conditions for uniqueness of equilibrium in networks. Mathematics of Operations Research, 30:225-244, 2005.\n[11] I.
Milchtaich. Network topology and the efficiency of equilibrium. Games and Economic Behavior, 57:321-346, 2006.\n[12] I. Milchtaich. The equilibrium existence problem in finite network congestion games. Forthcoming in Lecture Notes in Computer Science, 2007.\n[13] D. Monderer and L. S. Shapley. Potential Games. Games and Economic Behavior, 14:124-143, 1996.\n[14] H. Moulin and S. Shenker. Strategyproof sharing of submodular costs: Budget balance versus efficiency. Economic Theory, 18(3):511-533, 2001.\n[15] C. Papadimitriou. Algorithms, Games, and the Internet. In Proceedings of 33rd STOC, pages 749-753, 2001.\n[16] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65-67, 1973.\n[17] T. Roughgarden. The Price of Anarchy is Independent of the Network Topology. In STOC '02, pages 428-437, 2002.\n[18] T. Roughgarden and E. Tardos. How bad is selfish routing? Journal of the ACM, 49(2):236-259, 2002.\n[19] O. Rozenfeld and M. Tennenholtz. Strong and correlated strong equilibria in monotone congestion games. In Workshop on Internet and Network Economics, 2006.", "keywords": "specific cost;extension parallel graph;single source multiple sink;cost sharing connection game;player number;graph topology;strong equilibrium;multi source and sink;general connection game;fair connection game;cost share game;cost of the edge;game theory;nash equilibrium;the edge cost;anarchy price;network design;coalition;strong price of anarchy;optimal solution;number of player;single source and sink;price of anarchy"}
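The harmonic-number bounds in the fair connection game record above are easy to sanity-check numerically. The following Python sketch is not part of the paper; the function names and the tiny profile encoding (a dict mapping each player to the set of edges on her path) are ours. It computes H(n), the potential \u03a6(S) = \u03a3_e ce \u00b7 H(ne(S)) of equation (1), the n/k gap of Theorem 3.9 on two parallel links of costs 1 and n/k, and the standard sandwich cost(S) \u2264 \u03a6(S) \u2264 H(n) \u00b7 cost(S) used in the proof of Theorem 3.7.

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number: H(n) = sum_{i=1}^{n} 1/i
    return sum(Fraction(1, i) for i in range(1, n + 1))

def cost(profile, edge_cost):
    # total cost of a profile: every edge used by at least one player is paid once
    used = set().union(*profile.values())
    return sum(edge_cost[e] for e in used)

def potential(profile, edge_cost):
    # Phi(S) = sum_e c_e * H(n_e(S)), with n_e(S) the number of players using e
    load = {}
    for path in profile.values():
        for e in path:
            load[e] = load.get(e, 0) + 1
    return sum(edge_cost[e] * H(k) for e, k in load.items())

# Theorem 3.9's n/k lower bound: two parallel links of costs 1 and n/k.
n, k = 8, 2
edge_cost = {"cheap": Fraction(1), "costly": Fraction(n, k)}
S = {i: {"costly"} for i in range(n)}    # all players on the expensive link (a k-SE)
OPT = {i: {"cheap"} for i in range(n)}   # the optimal profile
assert cost(S, edge_cost) / cost(OPT, edge_cost) == Fraction(n, k)

# Sandwich behind the H(n) upper bound: cost(S) <= Phi(S) <= H(n) * cost(S).
assert cost(S, edge_cost) <= potential(S, edge_cost) <= H(n) * cost(S, edge_cost)
```

Using exact `Fraction` arithmetic avoids floating-point noise when comparing harmonic sums.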
-{"name": "test_J-9", "title": "Computation in a Distributed Information Market\u2217", "abstract": "According to economic theory-supported by empirical and laboratory evidence-the equilibrium price of a financial security reflects all of the information regarding the security\"s value. We investigate the computational process on the path toward equilibrium, where information distributed among traders is revealed step-by-step over time and incorporated into the market price. We develop a simplified model of an information market, along with trading strategies, in order to formalize the computational properties of the process. We show that securities whose payoffs cannot be expressed as weighted threshold functions of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory. On the other hand, securities whose payoffs are threshold functions are guaranteed to converge, for all prior probability distributions. Moreover, these threshold securities converge in at most n rounds, where n is the number of bits of distributed information. We also prove a lower bound, showing a type of threshold security that requires at least n/2 rounds to converge in the worst case.", "fulltext": "1. INTRODUCTION\nThe strong form of the efficient markets hypothesis states\nthat market prices nearly instantly incorporate all\ninformation available to all traders. As a result, market prices\nencode the best forecasts of future outcomes given all\ninformation, even if that information is distributed across many\nsources. Supporting evidence can be found in empirical\nstudies of options markets [14], political stock markets [7, 8,\n22], sports betting markets [3, 9, 27], horse-racing markets\n[30], market games [23, 24], and laboratory investigations of\nexperimental markets [6, 25, 26].\nThe process of information incorporation is, at its essence,\na distributed computation. Each trader begins with his or\nher own information. 
As trades are made, summary\ninformation is revealed through market prices. Traders learn or\ninfer what information others are likely to have by observing\nprices, then update their own beliefs based on their\nobservations. Over time, if the process works as advertised, all\ninformation is revealed, and all traders converge to the same\ninformation state. At this point, the market is in what is\ncalled a rational expectations equilibrium [11, 16, 19]. All\ninformation available to all traders is now reflected in the\ngoing prices, and no further trades are desirable until some\nnew information becomes available.\nWhile most markets are not designed with information\naggregation as a primary motivation-for example, derivatives\n156\nmarkets are intended mainly for risk management and sports\nbetting markets for entertainment-recently, some markets\nhave been created solely for the purpose of aggregating\ninformation on a topic of interest. The Iowa Electronic\nMarket1\nis a prime example, operated by the University of Iowa\nTippie College of Business for the purpose of investigating\nhow information about political elections distributed among\ntraders gets reflected in securities prices whose payoffs are\ntied to actual election outcomes [7, 8].\nIn this paper, we investigate the nature of the\ncomputational process whereby distributed information is revealed\nand combined over time into the prices in information\nmarkets. To do so, in Section 3, we propose a model of an\ninformation market that is tractable for theoretical analysis and,\nwe believe, captures much of the important essence of real\ninformation markets. In Section 4, we present our main\ntheoretical results concerning this model. We prove that only\nBoolean securities whose payoffs can be expressed as\nthreshold functions of the distributed input bits of information are\nguaranteed to converge as predicted by rational expectations\ntheory. 
Boolean securities with more complex payoffs may not converge under some prior distributions. We also provide upper and lower bounds on the convergence time for these threshold securities. We show that, for all prior distributions, the price of a threshold security converges to its rational expectations equilibrium price in at most n rounds, where n is the number of bits of distributed information. We show that this worst-case bound is tight within a factor of two by illustrating a situation in which a threshold security requires n/2 rounds to converge.\n2. RELATIONSHIP TO RELATED WORK\nAs mentioned, there is a great deal of documented evidence supporting the notion that markets are able to aggregate information in a number of scenarios using a variety of market mechanisms. The theoretically ideal mechanism requires what is called a complete market. A complete market contains enough linearly independent securities to span the entire state space of interest [1, 31]. That is, the dimensionality of the available securities equals the dimensionality of the event space over which information is to be aggregated.2 In this ideal case, all private information becomes common knowledge in equilibrium, and thus any function of the private information can be directly evaluated by any agent or observer. However, this theoretical ideal is almost never achievable in practice, because it generally requires a number of securities exponential in the number of random variables of interest.\nWhen available securities form an incomplete market [17] in relation to the desired information space, as is usually the case, aggregation may be partial. Not all private information is revealed in equilibrium, and prices may not convey enough information to recover the complete joint probability distribution over all events. Still, it is generally assumed that aggregation does occur along the dimensions represented in the market; that is, prices do reflect a consistent projection of the entire joint distribution onto the smaller-dimensional space spanned by securities.\n1 http://www.biz.uiowa.edu/iem/\n2 When we refer to independence or dimensionality of securities, we mean the independence or dimensionality of the random variables on which the security payoffs are based.\nIn this paper, we investigate cases in which even this partial aggregation fails. For example, even though there is enough private information to determine completely the price of a security in the market, the equilibrium price may in fact reveal no information at all! So characterizations of when a rational expectations equilibrium is fully revealing do not immediately apply to our problem. We are not asking whether all possible functions of private information can be evaluated, but whether a particular target function can be evaluated. We show that properties of the function itself play a major role, not just the relative dimensionalities of the information and security spaces.\nOur second main contribution is examining the dynamics of information aggregation before equilibrium, in particular proving upper and lower bounds on the time to convergence in those cases in which aggregation succeeds.\nShoham and Tennenholtz [29] define a rationally computable function as a function of agents' valuations (types) that can be computed by a market, assuming agents follow rational equilibrium strategies. The authors mainly consider auctions of goods as their basic mechanistic unit and examine the communication complexity involved in computing various functions of agents' valuations of goods.
For example, they give auction mechanisms that can compute the maximum, minimum, and kth-highest of the agents' valuations of a single good using 1, 1, and n − k + 1 bits of communication, respectively. They also examine the potential tradeoff between communication complexity and revenue.

3. MODEL OF AN INFORMATION MARKET

To investigate the properties and limitations of the process whereby an information market converges toward its rational-expectations equilibrium, we formulate a representative model of the market. In designing the model, our goals were two-fold: (1) to make the model rich enough to be realistic, and (2) to make the model simple enough to admit meaningful analysis. Any modeling decision must trade off these two generally conflicting goals, and the decision process is as much an art as a science. Nonetheless, we believe that our model captures enough of the essence of real information markets to lend credence to the results that follow. In this section, we present our modeling assumptions and justifications in detail. Section 3.1 describes the initial information state of the system, Section 3.2 covers the market mechanism, and Section 3.3 presents the agents' strategies.

3.1 Initial information state

There are n agents (traders) in the system, each of whom is privy to one bit of information, denoted xi. The vector of all n bits is denoted x = (x1, x2, . . . , xn). In the initial state, each agent is aware only of her own bit of information. All agents have a common prior regarding the joint distribution of bits among agents, but none has any specific information about the actual value of bits held by others. Note that this common-prior assumption (typical in the economics literature) does not imply that all agents agree. To the contrary, because each agent has different information, the initial state of the system is in general a state of disagreement.
Nearly any disagreement that could be modeled by assuming different priors can instead be modeled by assuming a common prior with different information, and so the common-prior assumption is not as severe as it may seem.

3.2 Market mechanism

The security being traded by the agents is a financial instrument whose payoff is a function f(x) of the agents' bits. The form of f (the description of the security) is common knowledge[3] among agents. We sometimes refer to the xi as the input bits. At some time in the future after trading is completed, the true value of f(x) is revealed,[4] and every owner of the security is paid an amount f(x) in cash per unit owned. If an agent ends up with a negative quantity of the security (by selling short), then the agent must pay the amount f(x) in cash per unit. Note that if someone were to have complete knowledge of all input bits x, then that person would know the true value f(x) of the security with certainty, and so would be willing to buy it at any price lower than f(x) and (short) sell it at any price higher than f(x).[5]

Following Dubey, Geanakoplos, and Shubik [4], and Jackson and Peck [13], we model the market-price formation process as a multiperiod Shapley-Shubik market game [28]. The Shapley-Shubik process operates as follows. The market proceeds in synchronous rounds. In each round, each agent i submits a bid bi and a quantity qi. The semantics are that agent i is supplying a quantity qi of the security and an amount bi of money to be traded in the market. For simplicity, we assume that there are no restrictions on credit or short sales, so an agent's trade is not constrained by her possessions. The market clears in each round by settling at a single price that balances the trade in that round: the clearing price is p = (Σi bi)/(Σi qi). At the end of the round, agent i holds a quantity q′i proportional to the money she bid: q′i = bi/p.
In addition, she is left with an amount of money b′i that reflects her net trade at price p: b′i = bi − p(q′i − qi) = p·qi. Note that agent i's net trade in the security is a purchase if p < bi/qi and a sale if p > bi/qi. After each round, the clearing price p is publicly revealed. Agents then revise their beliefs according to any information garnered from the new price. The next round proceeds as the previous one. The process continues until an equilibrium is reached, meaning that prices and bids do not change from one round to the next.

In this paper, we make a further simplifying restriction on the trading in each round: we assume that qi = 1 for each agent i. This modeling assumption serves two analytical purposes. First, it ensures that there is forced trade in every round. Classic results in economics show that perfectly rational and risk-neutral agents will never trade with each other for purely speculative reasons (even if they have differing information) [20]. There are many factors that can induce rational agents to trade, such as differing degrees of risk aversion, the presence of other traders who are trading for liquidity reasons rather than speculative gain, or a market maker who is pumping money into the market through a subsidy. We sidestep this issue by simply assuming that the informed agents will trade (for unspecified reasons). Second, forcing qi = 1 for all i means that the total volume of trade and the impact of any one trader on the clearing price are common knowledge; the clearing price p is a simple function of the agents' bids, p = (Σi bi)/n.

[3] Common knowledge is information that all agents know, that all agents know that all agents know, and so on ad infinitum [5].
[4] The values of the input bits themselves may or may not be publicly revealed.
[5] Throughout this paper we ignore the time value of money.
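As a concrete illustration, the clearing rule just described can be sketched in a few lines of Python. This is only a sketch under the paper's qi = 1 simplification; the function name and the example bids are ours.

```python
def clearing_round(bids):
    """One round of the simplified Shapley-Shubik game with q_i = 1:
    agent i puts one unit of the security plus b_i in cash into the
    market, which clears at p = (sum of bids) / n."""
    n = len(bids)
    p = sum(bids) / n
    holdings = [b / p for b in bids]  # q'_i = b_i / p units after clearing
    cash = [p] * n                    # b'_i = b_i - p(q'_i - 1) = p
    return p, holdings, cash

p, holdings, cash = clearing_round([0.5, 0.5, 1.0, 0.0])
# p = 0.5; the agent who bid 1.0 ends up buying (2 units), and the
# agent who bid 0.0 ends up selling (0 units).
```

Note that an agent who bids exactly p neither buys nor sells on net, matching the purchase/sale condition above.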
We will discuss the implications of alternative market models in Section 5.

3.3 Agent strategies

In order to draw formal conclusions about the price evolution process, we need to make some assumptions about how agents behave. Essentially, we assume that agents are risk-neutral, myopic,[6] and bid truthfully: each agent in each round bids his or her current valuation of the security, which is that agent's estimation of the expected payoff of the security. Expectations are computed according to each agent's probability distribution, which is updated via Bayes' rule when new information (revealed via the clearing prices) becomes available. We also assume that it is common knowledge that all the agents behave in the specified manner.

Would rational agents actually behave according to this strategy? It's hard to say. Certainly, we do not claim that this is an equilibrium strategy in the game-theoretic sense. Furthermore, it is clear that we are ignoring some legitimate tactics, e.g., bidding falsely in one round in order to affect other agents' judgments in the following rounds (non-myopic reasoning). However, we believe that the strategy outlined is a reasonable starting point for analysis. Solving for a true game-theoretic equilibrium strategy in this setting seems extremely difficult. Our assumptions seem reasonable when there are enough agents in the system that extremely complex meta-reasoning is not likely to improve upon simply bidding one's true expected value. In this case, according to the Shapley-Shubik mechanism, if the clearing price is below an agent's expected value, that agent will end up buying (increasing expected profit); if the clearing price is above the agent's expected value, the agent will end up selling (also increasing expected profit).

4.
COMPUTATIONAL PROPERTIES

In this section, we study the computational power of information markets for a very simple class of aggregation functions: Boolean functions of n variables. We characterize the set of Boolean functions that can be computed in our market model for all prior distributions and then prove upper and lower bounds on the worst-case convergence time for these markets.

The information structure we assume is as follows. There are n agents, and each agent i has a single bit of private information xi. We use x to denote the vector (x1, . . . , xn) of inputs. All the agents also have a common prior probability distribution P : {0, 1}^n → [0, 1] over the values of x. We define a Boolean aggregate function f : {0, 1}^n → {0, 1} that we would like the market to compute. Note that x, and hence f(x), is completely determined by the combination of all the agents' information, but it is not known to any one agent. The agents trade in a Boolean security F, which pays off $1 if f(x) = 1 and $0 if f(x) = 0. So an omniscient agent with access to all the agents' bits would know the true value of security F: either exactly $1 or exactly $0. In reality, risk-neutral agents with limited information will value F according to their expectation of its payoff, Ei[f(x)], where Ei is the expectation operator applied according to agent i's probability distribution.

[6] Risk neutrality implies that each agent's utility for the security is linearly related to his or her subjective estimation of the expected payoff of the security. Myopic behavior means that agents treat each round as if it were the final round: they do not reason about how their bids may affect the bids of other agents in future rounds.

For any function f, trading in F may happen to converge to the true value of f(x) by coincidence if the prior probability distribution is sufficiently degenerate.
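The valuation rule Ei[f(x)] can be made concrete with a small sketch of a truthful first-round bid as a conditional expectation under the common prior. The function names, the OR example, and the uniform prior here are our own illustrations, not part of the formal model.

```python
from itertools import product

def truthful_bid(f, prior, i, xi, n):
    """Agent i's valuation E_i[f(x)]: the expected payoff of F
    conditioned on her private bit x_i = xi, under the common prior."""
    num = den = 0.0
    for x in product((0, 1), repeat=n):
        if x[i] == xi:
            p = prior(x)
            den += p
            num += p * f(x)
    return num / den

uniform = lambda x: 0.25              # uniform prior on two bits
f_or = lambda x: x[0] | x[1]          # OR is a weighted threshold function
truthful_bid(f_or, uniform, 0, 1, 2)  # E[f | x_0 = 1] = 1.0
truthful_bid(f_or, uniform, 0, 0, 2)  # E[f | x_0 = 0] = 0.5
```

Here an agent who observes x0 = 1 already knows f(x) = 1 and bids accordingly, while an agent who observes x0 = 0 bids her residual uncertainty about the other bit.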
More interestingly, we would like to know for which functions f the price of the security F always converges to f(x), for all prior probability distributions P.[7] In Section 4.2, we prove a necessary and sufficient condition that guarantees convergence. In Section 4.3, we address the natural follow-up question by deriving upper and lower bounds on the worst-case number of rounds of trading required for the value of f(x) to be revealed.

4.1 Equilibrium price characterization

Our analysis builds on a characterization of the equilibrium price of F that follows from a powerful result on common knowledge of aggregates due to McKelvey and Page [19], later extended by Nielsen et al. [21].

Information markets aim to aggregate the knowledge of all the agents. Procedurally, this occurs because the agents learn from the market: the price of the security conveys information to each agent about the knowledge of other agents. We can model the flow of information through prices as follows.

Let Ω = {0, 1}^n be the set of possible values of x; we say that Ω denotes the set of possible states of the world. The prior P defines everyone's initial belief about the likelihood of each state. As trading proceeds, some possible states can be logically ruled out, but the relative likelihoods among the remaining states are fully determined by the prior P. So the common knowledge after any stage is completely described by the set of states that an external observer, with no information beyond the sequence of prices observed, considers possible (along with the prior). Similarly, the knowledge of agent i at any point is also completely described by the set of states she considers possible.
We use the notation S^r to denote the common-knowledge possibility set after round r, and S^r_i to denote the set of states that agent i considers possible after round r.

Initially, the only common knowledge is that the input vector x is in Ω; in other words, the set of states considered possible by an external observer before trading has occurred is S^0 = Ω. However, each agent i also knows the value of her bit xi; thus, her knowledge set S^0_i is the set {y ∈ Ω | yi = xi}. Agent i's first-round bid is her conditional expectation of the event f(x) = 1 given that x ∈ S^0_i. All the agents' bids are processed, and the clearing price p^1 is announced. An external observer could predict agent i's bid if he knew the value of xi. Thus, if he knew the value of x, he could predict the value of p^1. In other words, the external observer knows the function price^1(x) that relates the first-round price to the true state x. Of course, he does not know the value of x; however, he can rule out any vector x that would have resulted in a different clearing price from the observed price p^1.

[7] We assume that the common prior is consistent with x in the sense that it assigns a non-zero probability to the actual value of x.

Thus, the common knowledge after round 1 is the set S^1 = {y ∈ S^0 | price^1(y) = p^1}. Agent i knows the common knowledge and, in addition, knows the value of bit xi. Hence, after every round r, the knowledge of agent i is given by S^r_i = {y ∈ S^r | yi = xi}. Note that, because knowledge can only improve over time, we must always have S^r_i ⊆ S^{r−1}_i and S^r ⊆ S^{r−1}. Thus, only a finite number of changes in each agent's knowledge are possible, and so eventually we must converge to an equilibrium after which no player learns any further information.
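The round-by-round refinement of the possibility sets can be sketched directly as a toy simulation of our simplified model. The function names and the AND/XOR examples are our own illustrations.

```python
from itertools import product

def run_market(f, prior, x, n, tol=1e-9):
    """Iterate the possibility-set dynamics: each round, every agent i
    bids E[f(y) | y in S, y_i = x_i]; the clearing price (the mean bid)
    is announced; and any state that would have produced a different
    price is ruled out of the common-knowledge set S."""
    S = [y for y in product((0, 1), repeat=n) if prior(y) > 0]

    def bid(i, xi):
        states = [y for y in S if y[i] == xi]
        w = sum(prior(y) for y in states)
        return sum(prior(y) * f(y) for y in states) / w

    prices = []
    while True:
        # price^r as a function of the true state y, as seen by an observer
        price_of = {y: sum(bid(i, y[i]) for i in range(n)) / n for y in S}
        prices.append(price_of[x])
        S_new = [y for y in S if abs(price_of[y] - price_of[x]) < tol]
        if S_new == S:        # equilibrium: no further information revealed
            return prices
        S = S_new

flat = lambda y: 1.0                    # un-normalized uniform prior
f_and = lambda y: y[0] & y[1] & y[2]    # AND is a weighted threshold function
run_market(f_and, flat, (1, 1, 1), 3)   # [0.25, 1.0]: converges to f(x)
f_xor = lambda y: y[0] ^ y[1]
run_market(f_xor, flat, (0, 1), 2)      # [0.5]: stuck at an uninformative price
```

The two sample runs preview the dichotomy studied below: a weighted threshold function like AND converges to its true value, while a function like XOR can stall at a price that reveals nothing.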
We use S^∞ to denote the common knowledge at this point, and S^∞_i to denote agent i's knowledge at this point. Let p^∞ denote the clearing price at equilibrium.

Informally, McKelvey and Page [19] show that, if n people with common priors but different information about the likelihood of some event A agree about a suitable aggregate of their individual conditional probabilities, then their individual conditional probabilities of event A's occurring must be identical. (The precise definition of suitable is described below.) There is a strong connection to rational expectations equilibria in markets, which was noted in the original McKelvey-Page paper: the market price of a security is common knowledge at the point of equilibrium. Thus, if the price is a suitable aggregate of the conditional expectations of all the agents, then in equilibrium they must have identical conditional expectations of the event that the security will pay off. (Note that their information may still be different.)

Definition 1. A function g : ℝ^n → ℝ is called stochastically monotone if it can be written in the form g(x) = Σi gi(xi), where each function gi : ℝ → ℝ is strictly increasing.

Bergin and Brandenburger [2] proved that this simple definition of stochastically monotone functions is equivalent to the original definition in McKelvey-Page [19].

Definition 2. A function g : ℝ^n → ℝ is called stochastically regular if it can be written in the form g = h ∘ g′, where g′ is stochastically monotone and h is invertible on the range of g′.

We can now state the McKelvey-Page result, as generalized by Nielsen et al. [21]. In our context, the following simple theorem statement suffices; more general versions of this theorem can be found in [19, 21].

Theorem 1. (Nielsen et al. [21]) Suppose that, at equilibrium, the n agents have a common prior, but possibly different information, about the value of a random variable F, as described above.
For all i, let p^∞_i = E(F | x ∈ S^∞_i). If g is a stochastically regular function and g(p^∞_1, p^∞_2, . . . , p^∞_n) is common knowledge, then it must be the case that

p^∞_1 = p^∞_2 = · · · = p^∞_n = E(F | x ∈ S^∞) = p^∞.

In one round of our simplified Shapley-Shubik trading model, the announced price is the mean of the conditional expectations of the n agents. The mean is a stochastically regular function; hence, Theorem 1 shows that, at equilibrium, all agents have identical conditional expectations of the payoff of the security. It follows that the equilibrium price p^∞ must be exactly the conditional expectation of all agents at equilibrium.

Theorem 1 does not in itself say how the equilibrium is reached. McKelvey and Page, extending an argument due to Geanakoplos and Polemarchakis [10], show that repeated announcement of the aggregate will eventually result in common knowledge of the aggregate. In our context, this is achieved by announcing the current price at the end of each round; this will ultimately converge to a state in which all agents bid the same price p^∞.

However, reaching an equilibrium price is not sufficient for the purposes of information aggregation. We also want the price to reveal the actual value of f(x). It is possible that the equilibrium price p^∞ of the security F will be neither 0 nor 1, in which case we cannot infer the value of f(x) from it.

Example 1: Consider two agents 1 and 2 with private input bits x1 and x2, respectively. Suppose the prior probability distribution is uniform, i.e., x = (x1, x2) takes the values (0, 0), (0, 1), (1, 0), and (1, 1) each with probability 1/4. Now, suppose the aggregate function we want to compute is the XOR function, f(x) = x1 ⊕ x2.
To this end, we design a market to trade in a Boolean security F, which will eventually pay off $1 iff x1 ⊕ x2 = 1.

If agent 1 observes x1 = 1, she estimates the expected value of F to be the probability that x2 = 0 (given x1 = 1), which is 1/2. If she observes x1 = 0, her expectation of the value of F is the conditional probability that x2 = 1, which is also 1/2. Thus, in either case, agent 1 will bid 0.5 for F in the first round. Similarly, agent 2 will also always bid 0.5 in the first round. Hence, the first round of trading ends with a clearing price of 0.5. From this, agent 2 can infer that agent 1 bid 0.5, but this gives her no information about the value of x1: it is still equally likely to be 0 or 1. Agent 1 likewise gains no information from the first round of trading, and hence neither agent changes her bid in the following rounds. Thus, the market reaches equilibrium at this point. As predicted by Theorem 1, both agents have the same conditional expectation (0.5) at equilibrium. However, the equilibrium price of the security F does not reveal the value of f(x1, x2), even though the combination of the agents' information is enough to determine it precisely.

4.2 Characterizing computable aggregates

We now give a necessary and sufficient characterization of the class of functions f such that, for any prior distribution on x, the equilibrium price of F reveals the true value of f. We show that this is exactly the class of weighted threshold functions.

Definition 3. A function f : {0, 1}^n → {0, 1} is a weighted threshold function iff there are real constants w1, w2, . . . , wn such that

f(x) = 1  iff  Σi wi·xi ≥ 1.

Theorem 2. If f is a weighted threshold function, then, for any prior probability distribution P, the equilibrium price of F is equal to f(x).

Proof: Let S^∞_i denote the possibility set of agent i at equilibrium. As before, we use p^∞ to denote the final trading price at this point.
Note that, by Theorem 1, p^∞ is exactly agent i's conditional expectation of the value of f(x), given her final possibility set S^∞_i.

First, observe that if p^∞ is 0 or 1, then we must have f(x) = p^∞, regardless of the form of f. For instance, if p^∞ = 1, this means that E(f(y) | y ∈ S^∞) = 1. As f(·) can only take the values 0 or 1, it follows that P(f(y) = 1 | y ∈ S^∞) = 1. The actual value x is always in the final possibility set S^∞, and, furthermore, it must have non-zero prior probability, because it actually occurred. Hence, it follows that f(x) = 1 in this case. An identical argument shows that if p^∞ = 0, then f(x) = 0.

Hence, it is enough to show that, if f is a weighted threshold function, then p^∞ is either 0 or 1. We prove this by contradiction. Let f(·) be a weighted threshold function corresponding to weights {wi}, and assume that 0 < p^∞ < 1. By Theorem 1, we must have:

P(f(y) = 1 | y ∈ S^∞) = p^∞    (1)
∀i  P(f(y) = 1 | y ∈ S^∞_i) = p^∞    (2)

Recall that S^∞_i = {y ∈ S^∞ | yi = xi}. Thus, Equation (2) can be written as

∀i  P(f(y) = 1 | y ∈ S^∞, yi = xi) = p^∞    (3)

Now define

J+_i = P(yi = 1 | y ∈ S^∞, f(y) = 1)
J−_i = P(yi = 1 | y ∈ S^∞, f(y) = 0)
J+ = Σi wi·J+_i
J− = Σi wi·J−_i

Because by assumption p^∞ ≠ 0, 1, both J+_i and J−_i are well-defined (for all i): neither is conditioned on a zero-probability event.

Claim: Eqs. (1) and (3) imply that J+_i = J−_i for all i.

Proof of claim: We consider the two cases xi = 1 and xi = 0 separately.

Case (i): xi = 1. We can assume that J−_i and J+_i are not both 0 (or else the claim is trivially true).
In this case, by Bayes' law, we have

P(f(y) = 1 | yi = 1, y ∈ S^∞) = [P(f(y) = 1 | y ∈ S^∞)·J+_i] / [P(f(y) = 1 | y ∈ S^∞)·J+_i + P(f(y) = 0 | y ∈ S^∞)·J−_i].

By Eqs. (1) and (3), this gives

p^∞ = (p^∞·J+_i) / (p^∞·J+_i + (1 − p^∞)·J−_i),

so J+_i = p^∞·J+_i + (1 − p^∞)·J−_i, and hence J+_i = J−_i (as p^∞ ≠ 1).

Case (ii): xi = 0. When xi = 0, observe that the argument of Case (i) can be used to prove that (1 − J+_i) = (1 − J−_i). It immediately follows that J+_i = J−_i as well. □

Hence, we must also have J+ = J−. But using linearity of expectation, we can also write J+ as

J+ = E( Σi wi·yi | y ∈ S^∞, f(y) = 1 ),

and, because f(y) = 1 only when Σi wi·yi ≥ 1, this gives us J+ ≥ 1. Similarly,

J− = E( Σi wi·yi | y ∈ S^∞, f(y) = 0 ),

and thus J− < 1. This implies J− ≠ J+, which leads to a contradiction. □

Perhaps surprisingly, the converse of Theorem 2 also holds:

Theorem 3. Suppose f : {0, 1}^n → {0, 1} cannot be expressed as a weighted threshold function. Then there exists a prior distribution P for which the price of the security F does not converge to the value of f(x).

Proof: We start from a geometric characterization of weighted threshold functions. Consider the Boolean hypercube {0, 1}^n as a set of points in ℝ^n. It is well known that f is expressible as a weighted threshold function iff there is a hyperplane in ℝ^n that separates all the points at which f has value 0 from all the points at which f has value 1.

Now, consider the sets

H+ = Conv(f^(−1)(1))  and  H− = Conv(f^(−1)(0)),

where Conv(S) denotes the convex hull of S in ℝ^n. H+ and H− are convex sets in ℝ^n, and so, if they do not intersect, we can find a separating hyperplane between them.
This means that, if f is not expressible as a weighted threshold function, H+ and H− must intersect. In this case, we show how to construct a prior P for which f(x) is not computed by the market.

Let x* ∈ ℝ^n be a point in H+ ∩ H−. Because x* is in H+, there exist points z^1, z^2, . . . , z^m and constants λ1, λ2, . . . , λm such that the following constraints are satisfied:

∀k  z^k ∈ {0, 1}^n and f(z^k) = 1
∀k  0 < λk ≤ 1
Σ_{k=1}^{m} λk = 1
Σ_{k=1}^{m} λk·z^k = x*

Similarly, because x* ∈ H−, there are points y^1, y^2, . . . , y^l and constants μ1, μ2, . . . , μl such that

∀j  y^j ∈ {0, 1}^n and f(y^j) = 0
∀j  0 < μj ≤ 1
Σ_{j=1}^{l} μj = 1
Σ_{j=1}^{l} μj·y^j = x*

We now define our prior distribution P as follows:

P(z^k) = λk/2  for k = 1, 2, . . . , m
P(y^j) = μj/2  for j = 1, 2, . . . , l,

and all other points are assigned probability 0. It is easy to see that this is a valid probability distribution. Under this distribution P, first observe that P(f(x) = 1) = 1/2. Further, for any i such that 0 < x*_i < 1, we have

P(f(x) = 1 | xi = 1) = P(f(x) = 1 ∧ xi = 1) / P(xi = 1) = (x*_i/2) / x*_i = 1/2

and

P(f(x) = 1 | xi = 0) = P(f(x) = 1 ∧ xi = 0) / P(xi = 0) = ((1 − x*_i)/2) / (1 − x*_i) = 1/2.

For indices i such that x*_i is exactly 0 or 1, agent i's private information reveals no additional information under prior P, and so here too we have P(f(x) = 1 | xi = 0) = P(f(x) = 1 | xi = 1) = 1/2.

Hence, regardless of her private bit xi, each agent i will bid 0.5 for the security F in the first round. The clearing price of 0.5 also reveals no additional information, and so this is an equilibrium with price p^∞ = 0.5 that does not reveal the value of f(x).
□

The XOR function is one example of a function that cannot be expressed as a weighted threshold function; Example 1 illustrates Theorem 3 for this function.

4.3 Convergence time bounds

We have shown that the class of Boolean functions computable in our model is the class of weighted threshold functions. The next natural question to ask is: how many rounds of trading are necessary before the equilibrium is reached? We analyze this problem using the same simplified Shapley-Shubik model of market clearing in each round. We first prove that, in the worst case, at most n rounds are required.

The idea of the proof is to consider the sequence of common-knowledge sets Ω = S^0, S^1, . . ., and show that, until the market reaches equilibrium, each set has a strictly lower dimension than the previous set.

Definition 4. For a set S ⊆ {0, 1}^n, the dimension of S is the dimension of the smallest linear subspace of ℝ^n that contains all the points in S; we use the notation dim(S) to denote it.

Lemma 1. If S^r ≠ S^{r−1}, then dim(S^r) < dim(S^{r−1}).

Proof: Let k = dim(S^{r−1}). Consider the bids in round r. In our model, agent i bids her current expectation of the value of F,

b^r_i = E(f(y) | y ∈ S^{r−1}, yi = xi).

Thus, depending on the value of xi, b^r_i will take on one of two values, h(0)_i or h(1)_i. Note that h(0)_i and h(1)_i depend only on the set S^{r−1}, which is common knowledge before round r. Setting di = h(1)_i − h(0)_i, we can write b^r_i = h(0)_i + di·xi. It follows that the clearing price in round r is given by

p^r = (1/n) Σ_{i=1}^{n} (h(0)_i + di·xi)    (4)

All the agents already know all the h(0)_i and di values, and they observe the price p^r at the end of the rth round. Thus, they effectively have a linear equation in x1, x2, . . .
, xn that they use to improve their knowledge by ruling out any possibility that would not have resulted in price p^r. In other words, after r rounds, the common-knowledge set S^r is the intersection of S^{r−1} with the hyperplane defined by Equation (4).

It follows that S^r is contained in the intersection of this hyperplane with the k-dimensional linear space containing S^{r−1}. If S^r is not equal to S^{r−1}, this intersection defines a linear subspace of dimension (k − 1) that contains S^r, and hence S^r has dimension at most (k − 1). □

Theorem 4. Let f be a weighted threshold function, and let P be an arbitrary prior probability distribution. Then, after at most n rounds of trading, the price reaches its equilibrium value p^∞ = f(x).

Proof: Consider the sequence of common-knowledge sets S^0, S^1, . . ., and let r be the minimum index such that S^r = S^{r−1}. Then, the rth round of trading does not improve any agent's knowledge, and thus we must have S^∞ = S^{r−1} and p^∞ = p^{r−1}. Observing that dim(S^0) = n, and applying Lemma 1 to the first r − 1 rounds, we must have r − 1 ≤ n. Thus, the price reaches its equilibrium value within n rounds. □

Theorem 4 provides an upper bound of O(n) on the number of rounds required for convergence. We now show that this bound is tight to within a factor of 2 by constructing a threshold function with 2n inputs and a prior distribution for which it takes n rounds to determine the value of f(x) in the worst case.

The functions we use are the carry-bit functions. The function Cn takes 2n inputs; for convenience, we write the inputs as x1, x2, . . . , xn, y1, y2, . . . , yn, or as a pair (x, y). The function value is the value of the high-order carry bit when the binary numbers xnxn−1 · · · x1 and ynyn−1 · · · y1 are added together.
In weighted threshold form, this can be written as

Cn(x, y) = 1  iff  Σ_{i=1}^{n} (xi + yi)/2^{n+1−i} ≥ 1.

For this proof, let us call the agents A1, A2, . . . , An, B1, B2, . . . , Bn, where Ai holds input bit xi and Bi holds input bit yi.

We first illustrate our technique by proving that computing C2 requires 2 rounds in the worst case. To do this, we construct a common prior P2 as follows:

• The pair (x1, y1) takes on the values (0, 0), (0, 1), (1, 0), (1, 1) uniformly (i.e., with probability 1/4 each).

• We extend this to a distribution on (x1, x2, y1, y2) by specifying the conditional distribution of (x2, y2) given (x1, y1): if (x1, y1) = (1, 1), then (x2, y2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/2, 1/6, 1/6, 1/6, respectively. Otherwise, (x2, y2) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/6, 1/6, 1/6, 1/2, respectively.

Now, suppose x1 turns out to be 1, and consider agent A1's bid in the first round. It is given by

b^1_{A1} = P(C2(x1, x2, y1, y2) = 1 | x1 = 1)
  = P(y1 = 1 | x1 = 1) · P((x2, y2) ≠ (0, 0) | x1 = 1, y1 = 1)
    + P(y1 = 0 | x1 = 1) · P((x2, y2) = (1, 1) | x1 = 1, y1 = 0)
  = (1/2)·(1/2) + (1/2)·(1/2) = 1/2.

On the other hand, if x1 turns out to be 0, agent A1's bid would be given by

b^1_{A1} = P(C2(x1, x2, y1, y2) = 1 | x1 = 0) = P((x2, y2) = (1, 1) | x1 = 0) = 1/2.

Thus, irrespective of her bit, A1 will bid 0.5 in the first round. Note that the function and distribution are symmetric between x and y, and so the same argument shows that B1 will also bid 0.5 in the first round. Thus, the price p^1 announced at the end of the first round reveals no information about x1 or y1.
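These bid calculations can be checked numerically. The sketch below encodes C2 and the prior P2 as defined above; the function names are our own.

```python
from itertools import product

def C2(x1, x2, y1, y2):
    """High-order carry bit of adding the 2-bit binary numbers x2x1
    and y2y1, in threshold form (x1+y1)/4 + (x2+y2)/2 >= 1."""
    return int((x1 + y1) + 2 * (x2 + y2) >= 4)

def P2(x1, x2, y1, y2):
    """The prior P2: (x1, y1) uniform; (x2, y2) drawn so that C2 is
    statistically independent of the first carry bit (x1 AND y1)."""
    if (x1, y1) == (1, 1):
        cond = {(0, 0): 1/2, (0, 1): 1/6, (1, 0): 1/6, (1, 1): 1/6}
    else:
        cond = {(0, 0): 1/6, (0, 1): 1/6, (1, 0): 1/6, (1, 1): 1/2}
    return 0.25 * cond[(x2, y2)]

def bid_A1(x1):
    """Agent A1's first-round bid P(C2 = 1 | x1) under P2."""
    num = den = 0.0
    for x2, y1, y2 in product((0, 1), repeat=3):
        p = P2(x1, x2, y1, y2)
        den += p
        num += p * C2(x1, x2, y1, y2)
    return num / den

# bid_A1(0) and bid_A1(1) both equal 1/2: the first-round price
# reveals nothing about x1 (and, by symmetry, nothing about y1).
```

Enumerating all eight states conditioned on either value of x1 reproduces the hand calculation above.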
The reason this occurs is that, under this distribution, the second carry bit C2 is statistically independent of the first carry bit (x1 ∧ y1); we will use this trick again in the general construction.

Now, suppose that (x2, y2) is either (0, 1) or (1, 0). Then, even if x2 and y2 are completely revealed by the first-round price, the value of C2(x1, x2, y1, y2) is not revealed: it will be 1 if x1 = y1 = 1 and 0 otherwise. Thus, we have shown that at least 2 rounds of trading are required to reveal the function value in this case.

We now extend this construction to show by induction that the function Cn takes n rounds to reach equilibrium in the worst case.

Theorem 5. There is a function Cn with 2n inputs and a prior distribution Pn such that, in the worst case, the market takes n rounds to reveal the value of Cn(·).

Proof: We prove the theorem by induction on n. The base case, n = 2, has already been shown to be true. Starting from the distribution P2 described above, we construct the distributions P3, P4, . . . , Pn by inductively applying the following rule:

• Let x^{−n} denote the vector (x1, x2, . . . , xn−1), and define y^{−n} similarly. We extend the distribution Pn−1 on (x^{−n}, y^{−n}) to a distribution Pn on (x, y) by specifying the conditional distribution of (xn, yn) given (x^{−n}, y^{−n}): if Cn−1(x^{−n}, y^{−n}) = 1, then (xn, yn) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/2, 1/6, 1/6, 1/6, respectively. Otherwise, (xn, yn) takes the values (0, 0), (0, 1), (1, 0), (1, 1) with probabilities 1/6, 1/6, 1/6, 1/2, respectively.

Claim: Under distribution Pn, for all i < n,

P(Cn(x, y) = 1 | xi = 1) = P(Cn(x, y) = 1 | xi = 0).

Proof of claim: A calculation similar to that used for C2 above shows that the value of Cn(x, y) under this distribution is statistically independent of Cn−1(x^{−n}, y^{−n}).
For i < n, xi can affect the value of Cn only through C_{n−1}. Also, by construction of Pn, given the value of C_{n−1}, the distribution of Cn is independent of xi. It follows that Cn(x, y) is statistically independent of xi as well. Of course, a similar result holds for yi by symmetry.

Thus, in the first round, for all i = 1, 2, . . . , n − 1, the bids of agents Ai and Bi do not reveal anything about their private information, and so the first-round price does not reveal any information about the value of (x^{−n}, y^{−n}). On the other hand, agents An and Bn do have different expectations of Cn(x, y) depending on whether their input bit is a 0 or a 1; thus, the first-round price does reveal whether neither, one, or both of xn and yn are 1.

Now, consider a situation in which (xn, yn) takes on the value (1, 0) or (0, 1). We show that, in this case, after one round we are left with the residual problem of computing the value of C_{n−1}(x^{−n}, y^{−n}) under the prior P_{n−1}. Clearly, when xn + yn = 1, Cn(x, y) = C_{n−1}(x^{−n}, y^{−n}). Further, according to the construction of Pn, the event (xn + yn = 1) has the same probability (1/3) for all values of (x^{−n}, y^{−n}). Thus, conditioning on this event does not alter the probability distribution over (x^{−n}, y^{−n}); it must still be P_{n−1}.

Finally, the inductive assumption tells us that solving this residual problem will take at least n − 1 more rounds in the worst case, and hence that finding the value of Cn(x, y) takes at least n rounds in the worst case. □

5. DISCUSSION

Our results have been derived in a simplified model of an information market. In this section, we discuss the applicability of these results to more general trading models.

Assuming that agents bid truthfully, Theorem 2 holds in any model in which the price is a known stochastically monotone aggregate of agents' bids.
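Before moving on, the inductive construction in the proof of Theorem 5 can be checked numerically for small n. The sketch below is our own illustrative code (names invented, not from the paper): it builds Pn recursively by the rule above and confirms that, under P3, the first-round bids of A1 and A2 are uninformative while A3's is not, and that conditioning on x3 + y3 = 1 leaves the prior P2 intact.

```python
from fractions import Fraction
from itertools import product

def carry(x, y):
    """C_n(x, y) = 1 iff sum_{i=1}^{n} (x_i + y_i) / 2^(n+1-i) >= 1."""
    n = len(x)
    total = sum(Fraction(x[i] + y[i], 2 ** (n - i)) for i in range(n))
    return 1 if total >= 1 else 0

def build_prior(n):
    """P_n over pairs of n-bit vectors, built by the inductive rule."""
    if n == 1:
        return {((x1,), (y1,)): Fraction(1, 4)
                for x1, y1 in product((0, 1), repeat=2)}
    dist = {}
    for (x, y), p in build_prior(n - 1).items():
        # The pair getting probability 1/2 depends on the previous carry.
        special = (0, 0) if carry(x, y) == 1 else (1, 1)
        for xn, yn in product((0, 1), repeat=2):
            q = Fraction(1, 2) if (xn, yn) == special else Fraction(1, 6)
            dist[(x + (xn,), y + (yn,))] = p * q
    return dist

def bid(dist, i, bit):
    """P(C_n = 1 | x_i = bit) under dist (0-based index i)."""
    num = sum(p for (x, y), p in dist.items()
              if x[i] == bit and carry(x, y) == 1)
    den = sum(p for (x, y), p in dist.items() if x[i] == bit)
    return num / den

P3 = build_prior(3)
# A1 and A2 reveal nothing in round one; A3 does.
print([bid(P3, i, 0) == bid(P3, i, 1) for i in range(3)])  # [True, True, False]
```

The same enumeration also verifies the residual-problem step: restricting P3 to the event x3 + y3 = 1 (which has total probability 1/3) and marginalizing out the last bits recovers exactly P2.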
While it seems reasonable that the market price satisfies monotonicity properties, the exact form of the aggregate function may not be known if the volume of each user's trades is not observable; this depends on the details of the market process. Theorems 3 and 5 hold more generally; they only require that an agent's strategy depend only on her conditional expectation of the security's value. Perhaps the most fragile result is Theorem 4, which relies on the linear form of the Shapley-Shubik clearing price (in addition to the conditions for Theorem 2); however, it seems plausible that a similar dimension-based bound will hold for other families of nonlinear clearing prices.

Up to this point, we have described the model with the same number of agents as bits of information. However, all the results hold even if there is competition in the form of a known number of agents who know each bit of information. Indeed, modeling such competition may help alleviate the strategic problems in our current model.

Another interesting approach to addressing the strategic issue is to consider alternative markets that are at least myopically incentive compatible. One example is a market mechanism called a market scoring rule, suggested by Hanson [12]. These markets have the property that a risk-neutral agent's best myopic strategy is to truthfully bid her current expected value of the security. Additionally, the number of securities involved in each trade is fixed and publicly known. If the market structure is such that, for example, the current scoring rule is posted publicly after each agent's trade, then in equilibrium there is common knowledge of all agents' expectations, and hence Theorem 2 holds. Theorem 3 also applies in this case, and hence we have the same characterization for the set of computable Boolean functions.
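Hanson's market scoring rule [12] is commonly instantiated as the logarithmic market scoring rule (LMSR). The sketch below is our own illustration of that standard instantiation, not an implementation specified in this paper: a market maker maintains outstanding share quantities q, charges a trader the change in the convex cost function C(q) = b · ln(Σ_j e^{q_j / b}), and quotes instantaneous prices that behave as probabilities (they are positive and sum to one).

```python
import math

def cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_j exp(q_j / b))."""
    return b * math.log(sum(math.exp(qj / b) for qj in q))

def price(q, i, b=100.0):
    """Instantaneous price of outcome i (a probability estimate)."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def buy(q, i, shares, b=100.0):
    """Return (new quantities, amount the trader pays) for buying shares of i."""
    new_q = list(q)
    new_q[i] += shares
    return new_q, cost(new_q, b) - cost(q, b)

q = [0.0, 0.0]             # binary security: outcome 0 pays off if the event occurs
assert abs(price(q, 0) - 0.5) < 1e-12   # uninformed market starts at 1/2
q, paid = buy(q, 0, 50.0)  # a trader who believes the event is likely buys outcome 0
print(round(price(q, 0), 3), round(paid, 2))
```

The key incentive property mentioned in the text follows from the scoring-rule structure: a risk-neutral trader maximizes her myopic expected profit by moving the quoted price to her own conditional probability, so each posted price publicly reveals that expectation.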
This suggests that the problem of eliciting truthful responses may be orthogonal to the problem of computing the desired aggregate, reminiscent of the revelation principle [18].

In this paper, we have restricted our attention to the simplest possible aggregation problem: computing Boolean functions of Boolean inputs. The proofs of Theorems 3 and 5 also hold if we consider Boolean functions of real inputs, where each agent's private information is a real number. Further, Theorem 2 also holds provided the market reaches equilibrium. With real inputs and arbitrary prior distributions, however, it is not clear that the market will reach an equilibrium in a finite number of steps.

6. CONCLUSION

6.1 Summary

We have framed the process of information aggregation in markets as a computation on distributed information. We have developed a simplified model of an information market that we believe captures many of the important aspects of real agent interaction in an information market. Within this model, we prove several results characterizing precisely what the market can compute and how quickly. Specifically, we show that the market is guaranteed to converge to the true rational expectations equilibrium if and only if the security payoff function is a weighted threshold function. We prove that the process whereby agents reveal their information over time and learn from the resulting announced prices takes at most n rounds to converge to the correct full-information price in the worst case. We show that this bound is tight within a factor of two.

6.2 Future work

We view this paper as a first step towards understanding the computational power of information markets. Some interesting and important next steps include gaining a better understanding of the following:

• The effect of price accuracy and precision: We have assumed that the clearing price is known with unlimited precision; in practice, this will not be true.
Further, we have neglected influences on the market price other than from rational traders; the market price may also be influenced by other factors such as misinformed or irrational traders. It is interesting to ask what aggregates can be computed even in the presence of noisy prices.

• Incremental updates: If the agents have computed the value of the function and a small number of input bits are switched, can the new value of the function be computed incrementally and quickly?

• Distributed computation: In our model, distributed information is aggregated through a centralized market computation. In a sense, some of the computation itself is distributed among the participating agents, but can the market computation also be distributed? For example, can we find a good distributed-computational model of a decentralized market?

• Agents' computation: We have not accounted for the complexity of the computations that agents must do to accurately update their beliefs after each round.

• Strategic market models: For reasons of simplicity and tractability, we have directly assumed that agents bid truthfully. A more satisfying approach would be to assume only rationality and solve for the resulting game-theoretic solution strategy, either in our current computational model or another model of an information market.

• The common-prior assumption: Can we say anything about the market behavior when agents' priors are only approximately the same or when they differ greatly?

• Average-case analysis: Our negative results (Theorems 3 and 5) examine worst-case scenarios, and thus involve very specific prior probability distributions. It is interesting to ask whether we would get very different results for generic prior distributions.

• Information market design: Non-threshold functions can be implemented by layering two or more threshold functions together.
What is the minimum number of threshold securities required to implement a given function? This is exactly the problem of minimizing the size of a neural network, a well-studied problem known to be NP-hard [15]. What configuration of securities can best approximate a given function? Are there ways to define and configure securities to speed up convergence to equilibrium? What is the relationship between machine learning (e.g., neural-network learning) and information-market design?

Acknowledgments

We thank Joe Kilian for many helpful discussions. We thank Robin Hanson and the anonymous reviewers for useful insights and pointers.

7. REFERENCES

[1] K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. Review of Economic Studies, 31(2):91-96, 1964.
[2] J. Bergin and A. Brandenburger. A simple characterization of stochastically monotone functions. Econometrica, 58(5):1241-1243, Sept. 1990.
[3] S. Debnath, D. M. Pennock, C. L. Giles, and S. Lawrence. Information incorporation in online in-game sports betting markets. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce (EC'03), June 2003.
[4] P. Dubey, J. Geanakoplos, and M. Shubik. The revelation of information in strategic market games: A critique of rational expectations equilibrium. Journal of Mathematical Economics, 16:105-137, 1987.
[5] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. MIT Press, Cambridge, MA, 1996.
[6] R. Forsythe and R. Lundholm. Information aggregation in an experimental market. Econometrica, 58(2):309-347, 1990.
[7] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[8] R. Forsythe, T. A. Rietz, and T. W. Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[9] J. M.
Gandar, W. H. Dare, C. R. Brown, and R. A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.
[10] J. Geanakoplos and H. Polemarchakis. We can't disagree forever. Journal of Economic Theory, 28(1):192-200, 1982.
[11] S. J. Grossman. An introduction to the theory of rational expectations under asymmetric information. Review of Economic Studies, 48(4):541-559, 1981.
[12] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1), 2002.
[13] M. Jackson and J. Peck. Asymmetric information in a strategic market game: Reexamining the implications of rational expectations. Economic Theory, 13:603-628, 1999.
[14] J. C. Jackwerth and M. Rubinstein. Recovering probability distributions from options prices. Journal of Finance, 51(5):1611-1631, Dec. 1996.
[15] J.-H. Lin and J. S. Vitter. Complexity results on learning by neural nets. Machine Learning, 6:211-230, 1991.
[16] R. E. Lucas. Expectations and the neutrality of money. Journal of Economic Theory, 4(2):103-124, 1972.
[17] M. Magill and M. Quinzii. Theory of Incomplete Markets, Vol. 1. MIT Press, 1996.
[18] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[19] R. D. McKelvey and T. Page. Common knowledge, consensus, and aggregate information. Econometrica, 54(1):109-127, 1986.
[20] P. Milgrom and N. Stokey. Information, trade, and common knowledge. Journal of Economic Theory, 26:17-27, 1982.
[21] L. T. Nielsen, A. Brandenburger, J. Geanakoplos, R. McKelvey, and T. Page. Common knowledge of an aggregate of expectations. Econometrica, 58(5):1235-1238, 1990.
[22] D. M. Pennock, S. Debnath, E. J. Glover, and C. L. Giles. Modeling information incorporation in markets, with application to detecting and explaining events.
In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 2002.
[23] D. M. Pennock, S. Lawrence, C. L. Giles, and F. Å. Nielsen. The real power of artificial markets. Science, 291:987-988, February 2001.
[24] D. M. Pennock, S. Lawrence, F. Å. Nielsen, and C. L. Giles. Extracting collective probabilistic forecasts from web games. In Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.
[25] C. R. Plott and S. Sunder. Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56(5):1085-1118, 1988.
[26] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Technical Report Social Science Working Paper 986, California Institute of Technology, Apr. 1997.
[27] C. Schmidt and A. Werwatz. How accurate do markets predict the outcome of an event? The Euro 2000 soccer championships experiment. Technical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.
[28] L. Shapley and M. Shubik. Trade using one commodity as a means of payment. Journal of Political Economy, 85:937-968, 1977.
[29] Y. Shoham and M. Tennenholtz. Rational computation and the communication complexity of auctions. Games and Economic Behavior, 35(1-2):197-211, 2001.
[30] R. H. Thaler and W. T. Ziemba. Anomalies: Parimutuel betting markets: Racetracks and lotteries. Journal of Economic Perspectives, 2(2):161-174, 1988.
[31] H. R. Varian. The arbitrage principle in financial economics.
Journal of Economic Perspectives, 1(2):55-72, 1987.